Accelerated Computing: CUDA Programming
Sam Mokhtari
In previous modules, we covered the foundations of parallel computing and explored the capabilities of NVIDIA GPUs. This blog aims to build on that knowledge by introducing you to CUDA programming. We will explore the broader CUDA ecosystem, including key concepts, essential libraries, and techniques to improve the performance of your programs. By the end of this post, you will clearly understand how to leverage the power of CUDA for high-performance computing.
Subscribe and watch a detailed video on YouTube. Also, check out the CUDA Programming Tutorials repository, which is designed to help you learn NVIDIA CUDA programming through well-structured, practical examples.
Data Parallelism vs. Task Parallelism: The GPU-CPU Collaboration
Data parallelism involves applying the same operation to multiple data elements simultaneously. Imagine you have a large array of numbers, and you want to square each element. Each GPU thread could be responsible for squaring a single element, resulting in all the elements being processed simultaneously. This is what we mean by data parallelism — all threads perform the same task but on different pieces of data.
On the other hand, task parallelism is about executing different tasks independently — whether they’re working on the same dataset or separate ones. For instance, if you have several unrelated tasks, like image processing, network requests, and file I/O, you could assign each task to a separate CPU core to execute simultaneously. Task parallelism is more about dividing responsibilities than dividing data.
CPUs and multi-core systems are better suited for task parallelism because they can execute different instructions concurrently, giving them the flexibility to handle different types of tasks simultaneously.
GPUs are specifically designed for data parallelism. They use a model called SIMD (Single Instruction, Multiple Data), where all threads within a group, called a “warp,” execute the same instruction at the same time, but on different data. This approach aligns perfectly with the concept of data parallelism. In other words, GPUs are like an army of workers, each equipped to perform the same task on their individual piece of the workload, making them incredibly efficient for repetitive calculations.
CUDA leverages a heterogeneous programming model, meaning that the CPU (host) and GPU (device) work together to maximize efficiency. Each is assigned the tasks that match its strengths: the CPU handles sequential control flow, I/O, and orchestration, while the GPU executes the massively parallel, compute-heavy kernels.
This division of labour ensures that the GPU can focus on the heavy lifting while the CPU manages and orchestrates the overall flow — kind of like a project manager (CPU) delegating tasks to an efficient team (GPU).
What is CUDA?
CUDA provides a comprehensive set of tools to take advantage of GPU power for high-performance computing. Here’s a look at some key components:
CUDA Toolkits
Writing CUDA Kernels: Where the Magic Happens
In CUDA, you write kernels, which are functions that run on the GPU. Kernels are like small, parallel tasks that can execute thousands of times concurrently across the GPU threads.
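For illustration, here is a minimal sketch (not taken from the article) of such a kernel: it squares each element of an array, matching the data-parallelism example from earlier. The names squareKernel and d_data are placeholders.

__global__ void squareKernel(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  // global thread index
    if (i < n) {
        data[i] = data[i] * data[i];
    }
}

// Launched from host code with one thread per element, e.g.:
// int threads = 256;
// int blocks  = (n + threads - 1) / threads;
// squareKernel<<<blocks, threads>>>(d_data, n);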
CUDA Execution Model
The CUDA execution model uses a hierarchical structure to organize and execute parallel tasks efficiently on the GPU: individual threads are the smallest unit of execution, threads are grouped into blocks that can share memory and synchronize with each other, and blocks are organized into a grid that covers the whole problem.
Understanding Memory in CUDA Programming
Understanding and choosing the appropriate memory space is critical for achieving optimal performance in CUDA programming. The main memory spaces are registers and local memory (private to each thread), shared memory (fast, on-chip, shared by the threads of a block), global memory (large but high-latency, visible to all threads), and the read-only constant and texture memories (cached for specific access patterns).
Efficient memory usage directly impacts the performance of CUDA programs. By understanding the strengths and limitations of each memory type, you can maximize throughput, reduce latency, and fully leverage the GPU’s capabilities.
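As a rough illustration of why the choice matters, the following sketch (mine, not the article's) stages data from slow global memory into fast on-chip shared memory before reducing it; it assumes the block size equals TILE and is a power of two.

#define TILE 256

__global__ void sumTileKernel(const float* in, float* blockSums, int n) {
    __shared__ float tile[TILE];              // fast on-chip shared memory
    int i = blockIdx.x * blockDim.x + threadIdx.x;

    // Stage one element per thread from global memory into shared memory
    tile[threadIdx.x] = (i < n) ? in[i] : 0.0f;
    __syncthreads();                          // wait until the whole tile is loaded

    // Simple tree reduction within the block, reading only shared memory
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride) {
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        }
        __syncthreads();
    }
    if (threadIdx.x == 0) {
        blockSums[blockIdx.x] = tile[0];      // one partial sum per block
    }
}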
The CUDA Compiler (nvcc)
The CUDA compiler, nvcc, is a powerful tool that simplifies the process of preparing CUDA code for execution on a GPU. Let’s walk through its key features and functionality:
1. Compilation Workflow: nvcc compiles the GPU code into PTX (Parallel Thread Execution), an intermediate representation, or directly into machine-specific binary code. It then links the host and device code into a single executable that you can run.
2. Common Command-Line Options: Developers can use various options with nvcc to customize the compilation process:
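For instance (the file names below are placeholders, and the options shown are a few common ones rather than an exhaustive list):

# Basic compile and link into an executable
nvcc -o app app.cu

# Optimize and target a specific GPU architecture (e.g. compute capability 8.0)
nvcc -O3 -arch=sm_80 -o app app.cu

# Keep device debug info for cuda-gdb, or line info for profilers
nvcc -G -o app_debug app.cu
nvcc -lineinfo -O3 -o app app.cu

# Generate PTX only (useful for inspecting the intermediate representation)
nvcc -ptx app.cu -o app.ptx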
Offline Compilation: This method allows you to compile your code and link the pieces together later. It’s great for modular development where different parts of a program are developed independently.
Just-in-Time (JIT) Compilation: In some cases, PTX code is compiled into machine code at runtime. This provides flexibility, especially when targeting a variety of GPU architectures.
CUDA Runtime API vs. CUDA Driver API
CUDA offers two main APIs for interacting with the GPU: the CUDA Runtime API and the CUDA Driver API. Both serve important roles, but they cater to different levels of programming complexity and control.
The CUDA Runtime API is a high-level API that abstracts away many of the complexities involved in managing GPU resources. It is the most common choice for developers due to its simplicity and ease of use. The Runtime API handles many details like context management, memory allocation, and device setup automatically, which allows developers to focus more on writing efficient kernels rather than low-level GPU management.
Key features of the CUDA Runtime API include automatic context and device management, simple memory-management calls such as cudaMalloc and cudaMemcpy, and the convenient <<<...>>> kernel launch syntax.
The CUDA Runtime API is ideal for developers who want to get started quickly with GPU programming without diving into intricate details about hardware resources.
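A minimal sketch of a typical Runtime API program (the kernel name addOne is a placeholder; error checking is omitted for brevity):

#include <cuda_runtime.h>
#include <stdio.h>
#include <stdlib.h>

__global__ void addOne(float* x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] += 1.0f;
}

int main() {
    const int n = 1 << 20;
    float* h = (float*)malloc(n * sizeof(float));
    for (int i = 0; i < n; ++i) h[i] = (float)i;

    float* d = nullptr;
    cudaMalloc((void**)&d, n * sizeof(float));               // device allocation
    cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);

    addOne<<<(n + 255) / 256, 256>>>(d, n);                  // implicit context, <<<>>> launch
    cudaDeviceSynchronize();

    cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("h[0] = %f\n", h[0]);

    cudaFree(d);
    free(h);
    return 0;
}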
The CUDA Driver API provides a lower-level interface to the GPU, offering finer control over the hardware. Unlike the Runtime API, it requires explicit context management and provides more flexibility when managing GPU resources. The Driver API is often used in scenarios where maximum control and performance tuning are essential.
Key features of the CUDA Driver API include explicit context and module management, the ability to load PTX or cubin modules at runtime, and fine-grained control over how kernels are configured and launched.
The Driver API is ideal for developers who need more flexibility and control, such as when integrating CUDA into larger systems or when interacting closely with hardware resources.
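For comparison, here is a hedged sketch of the same kind of work through the Driver API. The PTX file name vector_add.ptx and kernel name vecAdd are assumptions made for the example, and error checking is omitted.

#include <cuda.h>
#include <stdio.h>

int main() {
    cuInit(0);                                   // explicit initialization

    CUdevice dev;
    CUcontext ctx;
    cuDeviceGet(&dev, 0);
    cuCtxCreate(&ctx, 0, dev);                   // explicit context management

    CUmodule mod;
    CUfunction fn;
    cuModuleLoad(&mod, "vector_add.ptx");        // load PTX/cubin at runtime
    cuModuleGetFunction(&fn, mod, "vecAdd");

    int n = 1024;
    CUdeviceptr dX;
    cuMemAlloc(&dX, n * sizeof(float));

    void* args[] = { &dX, &n };
    cuLaunchKernel(fn,
                   (n + 255) / 256, 1, 1,        // grid dimensions
                   256, 1, 1,                    // block dimensions
                   0, 0, args, 0);               // shared mem, stream, args, extra
    cuCtxSynchronize();

    cuMemFree(dX);
    cuModuleUnload(mod);
    cuCtxDestroy(ctx);
    return 0;
}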
CUDA Graphs
CUDA Graphs allow developers to define a sequence of operations — such as kernel launches, memory transfers, or other tasks — and treat them as a single, unified workload. Once a CUDA graph is created, it can be executed repeatedly without reissuing individual commands, which saves both time and reduces code complexity.
Benefits of Using CUDA Graphs
How to Create CUDA Graphs
There are two main ways to define a CUDA Graph: stream capture, which records an existing sequence of stream operations into a graph, and the explicit graph API, which builds the graph node by node.
By using CUDA Graphs, developers can create more optimized, less CPU-intensive workflows, maximizing the performance potential of both the CPU and GPU.
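As a hedged sketch of the stream-capture approach (kernelA, kernelB, the device buffers, and the launch dimensions are placeholders):

cudaStream_t stream;
cudaGraph_t graph;
cudaGraphExec_t graphExec;
cudaStreamCreate(&stream);

// Record an existing sequence of stream operations into a graph
cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
kernelA<<<grid, block, 0, stream>>>(dA);
cudaMemcpyAsync(dB, dA, bytes, cudaMemcpyDeviceToDevice, stream);
kernelB<<<grid, block, 0, stream>>>(dB);
cudaStreamEndCapture(stream, &graph);

// Instantiate once, then launch the whole workload many times cheaply
cudaGraphInstantiate(&graphExec, graph, nullptr, nullptr, 0);
for (int step = 0; step < numSteps; ++step) {
    cudaGraphLaunch(graphExec, stream);
}
cudaStreamSynchronize(stream);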
Dynamic Parallelism
Dynamic Parallelism is a powerful feature in CUDA that takes GPU programming to the next level. Let me explain how it works and why it’s so useful: Normally, the CPU (host) is responsible for launching kernels on the GPU (device). With Dynamic Parallelism, however, a kernel running on the GPU can launch and synchronize other kernels directly. This means the GPU can make decisions and create new tasks during execution without involving the CPU.
Why Is Dynamic Parallelism Useful?
• Supports Complex Program Structures: In applications with unpredictable data or workload requirements, Dynamic Parallelism allows kernels to create new tasks as needed. For example, in simulations, some regions of data might require more processing than others. Dynamic Parallelism lets the GPU adjust and handle these variations efficiently.
• Reduces CPU Overhead: Without Dynamic Parallelism, the CPU would need to manage and launch every kernel. This back-and-forth communication between the CPU and GPU can be slow. By letting the GPU launch kernels, we eliminate this bottleneck, improving performance.
• Adapts Dynamically: Kernels can respond to the current state of the computation and split the workload further, leading to better resource utilization and scalability.
Dynamic Parallelism unlocks new possibilities for designing efficient, scalable, and adaptive GPU programs, especially for applications like scientific simulations, hierarchical computations, and recursive algorithms.
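A minimal sketch of the idea, assuming a device that supports dynamic parallelism and compilation with relocatable device code (e.g. nvcc -rdc=true -lcudadevrt); the kernel names and threshold are placeholders:

__global__ void childKernel(float* data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

__global__ void parentKernel(float* data, int n, int threshold) {
    // One thread inspects the data and decides, on the GPU, whether
    // more parallel work is needed -- no round trip to the CPU.
    if (threadIdx.x == 0 && blockIdx.x == 0) {
        if (n > threshold) {
            childKernel<<<(n + 255) / 256, 256>>>(data, n);
        }
    }
}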
Key CUDA Libraries for Accelerated Linear Algebra and Specialized Computations
CUDA libraries like cuBLAS, cuDNN, CUTLASS, cuFFT, and cuSPARSE are critical tools for high-performance computing on GPUs. Here’s an overview of their roles and benefits:
1. cuBLAS: Accelerating Linear Algebra
2. cuDNN: Driving Deep Learning
3. CUTLASS: Customizable Linear Algebra
4. cuFFT: Fast Fourier Transforms
5. cuSPARSE: Sparse Matrix Operations
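As a small, hedged example of how these libraries are called in practice, the sketch below uses cuBLAS SGEMM to compute C = alpha*A*B + beta*C for column-major matrices already resident on the GPU; the wrapper function name is made up, and error checking is omitted.

#include <cublas_v2.h>
#include <cuda_runtime.h>

// dA is m x k, dB is k x n, dC is m x n, all column-major device pointers.
void gemmExample(const float* dA, const float* dB, float* dC,
                 int m, int n, int k) {
    cublasHandle_t handle;
    cublasCreate(&handle);

    const float alpha = 1.0f, beta = 0.0f;
    cublasSgemm(handle,
                CUBLAS_OP_N, CUBLAS_OP_N,   // no transposition
                m, n, k,
                &alpha,
                dA, m,                      // leading dimensions (column-major)
                dB, k,
                &beta,
                dC, m);

    cublasDestroy(handle);
}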
Optimizing CUDA Programs for Maximum GPU Utilization
Profiling and debugging are critical steps in optimizing CUDA programs, allowing developers to pinpoint bottlenecks, improve performance, and ensure reliability. Here’s an in-depth look at the tools and techniques available:
1. Profiling CUDA Programs
Profiling helps identify inefficiencies in CUDA applications, enabling targeted optimizations for better GPU utilization and faster execution.
NVIDIA Nsight Systems
Command-line Profiler (nvprof)
A lightweight tool for quick profiling, capturing kernel execution times and memory transfer metrics.
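Typical invocations look roughly like the following (the executable name is a placeholder; note that nvprof has been superseded by Nsight Systems and Nsight Compute on recent GPU architectures):

# Timeline of CPU/GPU activity with Nsight Systems
nsys profile -o report ./app

# Legacy command-line profiler (older GPUs / CUDA versions)
nvprof ./app

# Detailed per-kernel metrics with Nsight Compute
ncu ./app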
2. Debugging CUDA Programs
Debugging in CUDA is complex due to the parallel nature of GPU programming. Tools like cuda-gdb simplify this process by providing powerful debugging features.
cuda-gdb is a CUDA-specific debugging tool that lets developers set breakpoints inside kernels, inspect the state of individual threads and device memory, and step through device code much as gdb does for host code.
Conclusion
CUDA offers a powerful framework for leveraging GPUs to tackle computationally intensive tasks. By understanding its core concepts, utilizing its rich ecosystem of libraries, and applying effective optimization techniques, developers can achieve exceptional performance for a wide range of applications.
CUDA programming: Custom MNIST MLP engine
Harsh Patel
In this blog, I will guide you through how to code the CUDA kernels for MNIST MLP inference.
Introduction:
In the ever-evolving landscape of artificial intelligence and deep learning, neural networks have emerged as a dominant force, propelling groundbreaking advancements across various domains. Among these neural network architectures, the Multilayer Perceptron (MLP) stands out as one of the most fundamental and widely-used models for its simplicity and effectiveness in solving various tasks. One such task is digit recognition, where the classic MNIST dataset has become the de facto benchmark for evaluating the performance of machine learning algorithms.
In this blog post, we embark on an exhilarating journey into the realm of high-performance computing and CUDA programming to unleash the full potential of custom MNIST MLP inference. NVIDIA’s CUDA (Compute Unified Device Architecture) has revolutionized the way we harness the power of Graphics Processing Units (GPUs) to accelerate computationally intensive tasks. By leveraging the parallel processing capabilities of GPUs, we can achieve significant speedups in training and inference times, making them indispensable tools for modern deep learning applications.
Our objective throughout this article is to demystify the complexities of CUDA programming and empower you to build a custom MLP inference engine tailored to the MNIST dataset. Whether you are a seasoned deep learning practitioner or a curious enthusiast eager to dive into the world of GPU acceleration, this guide will equip you with the essential knowledge to wield CUDA effectively.
The knowledge gained from this blog will not only empower you to optimize MNIST digit recognition but also serve as a stepping stone to explore more complex deep learning tasks that demand high computational power.
So, buckle up and get ready to harness the parallel processing capabilities of CUDA as we embark on an exhilarating journey to supercharge our custom MNIST MLP inference engine!
Training the Model and Saving Weights in NumPy:
In this section of the blog, we cover the initial steps of implementing custom MNIST MLP inference with CUDA. We begin by setting up the MNIST dataset, then design the architecture of our Multilayer Perceptron (MLP) model and train it using PyTorch.
To preserve the trained model’s learned parameters for future use and compatibility with other environments, we saved the optimized weights and biases in NumPy format.
Data reading and Model training:
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import datasets, transforms
import numpy as np
import matplotlib.pyplot as plt
import os

if torch.cuda.is_available():
    device = torch.device("cuda")
else:
    device = torch.device("cpu")
print("Using PyTorch version:", torch.__version__, " Device:", device)

batch_size = 32
train_dataset = datasets.MNIST(
    "./data", train=True, download=True, transform=transforms.ToTensor()
)
validation_dataset = datasets.MNIST(
    "./data", train=False, transform=transforms.ToTensor()
)
train_loader = torch.utils.data.DataLoader(
    dataset=train_dataset, batch_size=batch_size, shuffle=True
)
validation_loader = torch.utils.data.DataLoader(
    dataset=validation_dataset, batch_size=batch_size, shuffle=False
)

for X_train, y_train in train_loader:
    print("X_train:", X_train.size(), "type:", X_train.type())
    print("y_train:", y_train.size(), "type:", y_train.type())
    break

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.fc1 = nn.Linear(28 * 28, 50)
        self.fc1_drop = nn.Dropout(0.2)
        self.fc2 = nn.Linear(50, 50)
        self.fc2_drop = nn.Dropout(0.2)
        self.fc3 = nn.Linear(50, 10)

    def forward(self, x):
        x = x.view(-1, 28 * 28)
        x = F.relu(self.fc1(x))
        x = self.fc1_drop(x)
        x = F.relu(self.fc2(x))
        x = self.fc2_drop(x)
        return F.log_softmax(self.fc3(x), dim=1)

model = Net().to(device)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.5)
criterion = nn.CrossEntropyLoss()
print(model)

def train(epoch, log_interval=200):
    # Set model to training mode
    model.train()
    # Loop over each batch from the training set
    for batch_idx, (data, target) in enumerate(train_loader):
        data = data.to(device)
        target = target.to(device)
        optimizer.zero_grad()
        output = model(data)
        loss = criterion(output, target)
        loss.backward()
        optimizer.step()
        if batch_idx % log_interval == 0:
            print(
                "Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}".format(
                    epoch,
                    batch_idx * len(data),
                    len(train_loader.dataset),
                    100.0 * batch_idx / len(train_loader),
                    loss.data.item(),
                )
            )

def validate(loss_vector, accuracy_vector):
    model.eval()
    val_loss, correct = 0, 0
    for data, target in validation_loader:
        data = data.to(device)
        target = target.to(device)
        output = model(data)
        val_loss += criterion(output, target).data.item()
        pred = output.data.max(1)[1]  # get the index of the max log-probability
        correct += pred.eq(target.data).cpu().sum()
    val_loss /= len(validation_loader)
    loss_vector.append(val_loss)
    accuracy = 100.0 * correct.to(torch.float32) / len(validation_loader.dataset)
    accuracy_vector.append(accuracy)
    print(
        "\nValidation set: Average loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n".format(
            val_loss, correct, len(validation_loader.dataset), accuracy
        )
    )

epochs = 5
lossv, accv = [], []
for epoch in range(1, epochs + 1):
    train(epoch)
    validate(lossv, accv)
Weights preserving:
# Save each parameter tensor to its own .npy file.
# Biases get an extra leading dimension; weight matrices are transposed so that
# the CUDA code can multiply them as row-major (input x output) matrices.
weights = [
    (name, param.detach().cpu().numpy()) for name, param in model.named_parameters()
]
for name, values in weights:
    with open("./model/" + name + ".npy", "wb") as f:
        if "bias" in name:
            arr = np.expand_dims(values, axis=0)
        else:
            arr = values.T
        print(name, "shape:", arr.shape)
        arr = np.ascontiguousarray(arr)
        np.save(f, arr)
Loading NumPy Weights in CUDA in C++:
In this section we will read the NumPy weights in C++ using the embedded Python interpreter. Reading through the Python interpreter gives us a PyArrayObject, which we convert into a float array and then move to the GPU with CUDA.
#include <stdio.h>
#include <stdlib.h>
#include <Python.h>
#include <cuda_runtime.h>

#define NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION   // must be defined before the NumPy header
#include <numpy/arrayobject.h>

struct Weights {
    float* matrix;
    int ndims;
    int* shape;
    long int size;
};

PyObject* read_numpy_file(const char* file_path) {
    // Import the Python module containing the function
    PyObject* numpy_module = PyImport_ImportModule("numpy");
    if (numpy_module == nullptr) {
        PyErr_Print();
        return nullptr;
    }
    // Get the reference to the function
    PyObject* numpy_function = PyObject_GetAttrString(numpy_module, "load");
    if (numpy_function == nullptr) {
        PyErr_Print();
        Py_DECREF(numpy_module);
        return nullptr;
    }
    // Create the arguments tuple
    PyObject* args = PyTuple_New(1);
    PyTuple_SetItem(args, 0, PyUnicode_FromString(file_path));
    // Call the Python function with the arguments
    PyObject* result = PyObject_CallObject(numpy_function, args);
    if (result == nullptr) {
        PyErr_Print();
        Py_DECREF(numpy_module);
        Py_DECREF(numpy_function);
        Py_DECREF(args);
        return nullptr;
    }
    // Print the shape of the NumPy array
    PyObject* shape = PyObject_GetAttrString(result, "shape");
    if (shape != nullptr) {
        PyObject* repr = PyObject_Repr(shape);
        const char* str = PyUnicode_AsUTF8(repr);
        printf("Shape: %s\n", str);
        Py_DECREF(repr);
        Py_DECREF(shape);
    } else {
        printf("Failed to get shape.\n");
    }
    // Clean up references
    Py_DECREF(numpy_module);
    Py_DECREF(numpy_function);
    Py_DECREF(args);
    return result;
}

PyArrayObject* read_weights_from_numpy(const char* file_path, int print) {
    // Call the Python function and get the PyObject reference to the NumPy array
    PyObject* numpy_array = read_numpy_file(file_path);
    if (numpy_array == nullptr) {
        return nullptr;   // handle error in the caller
    }
    // Optionally print the array for inspection
    if (print == 1) {
        PyObject* repr = PyObject_Repr(numpy_array);
        const char* str = PyUnicode_AsUTF8(repr);
        printf("%s\n", str);
    }
    // Reinterpret the result as a NumPy array object
    PyArrayObject* array = reinterpret_cast<PyArrayObject*>(numpy_array);
    return array;
}

int get_numpy_ndims(PyArrayObject* array) {
    // Get number of dimensions
    int ndim = PyArray_NDIM(array);
    printf("%d \n", ndim);
    return ndim;
}

long int get_numpy_size(PyArrayObject* array) {
    // Get the total size of the array
    npy_intp total_size_intp = PyArray_SIZE(array);
    printf("%" NPY_INTP_FMT "\n", total_size_intp);
    long int total_size = static_cast<long int>(total_size_intp);
    return total_size;
}

float* convert_PyArrayObject_to_float(PyArrayObject* array, int print, int* shape, int ndim) {
    printf("values %d %d \n", PyArray_TYPE(array), NPY_DOUBLE);
    // Check the data type of the numpy array
    if (PyArray_TYPE(array) != NPY_FLOAT32) {
        printf("Input numpy array is not of type float.\n");
    }
    // Get a float pointer to the raw array data
    float* matrix = static_cast<float*>(PyArray_DATA(array));
    // Print a few values to sanity-check the conversion
    if (print == 1) {
        if (ndim == 2) {
            for (int j = 0; j < shape[1]; ++j) {
                if (j < 3 || j > shape[1] - 3) {
                    printf("%d index %.5f ", j, matrix[j]);
                }
            }
            printf("\n");
        } else if (ndim == 1) {
            for (int i = 0; i < shape[0]; ++i) {
                if (i < 5 || i > shape[0] - 5) {
                    printf("%d index %.5f ", i, matrix[i]);
                }
            }
            printf("\n");
        }
    }
    return matrix;
}

float* move_weight_to_cuda(float* weights, long int total_size) {
    // Allocate CUDA device memory
    float* d_data;
    printf("total size %ld \n", total_size);
    cudaError_t cudaStatus;
    cudaStatus = cudaMalloc((void**)&d_data, total_size * sizeof(float));
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaMalloc failed: %s\n", cudaGetErrorString(cudaStatus));
        // Handle the error or return an error code
    }
    // Copy the array data from host (CPU) to device (CUDA)
    cudaStatus = cudaMemcpy(d_data, weights, total_size * sizeof(float), cudaMemcpyHostToDevice);
    if (cudaStatus != cudaSuccess) {
        fprintf(stderr, "cudaMemcpy failed: %s\n", cudaGetErrorString(cudaStatus));
        // Handle the error or return an error code
    }
    return d_data;
}

Weights read_weights(const char* file_path, int print) {
    Weights weight;
    // Read the NumPy weights
    PyArrayObject* array = read_weights_from_numpy(file_path, print);
    if (array == nullptr) {
        weight.matrix = nullptr;
        return weight;
    }
    int ndims = get_numpy_ndims(array);
    get_numpy_shape(array, weight, ndims);  // fills weight.shape (helper not shown here)
    long int size = get_numpy_size(array);
    float* matrix = convert_PyArrayObject_to_float(array, print, weight.shape, ndims);
    // Move the weights to the GPU and release the PyObject reference
    float* cuda_weights = move_weight_to_cuda(matrix, size);
    Py_DECREF(array);
    weight.ndims = ndims;
    weight.size = size;
    weight.matrix = cuda_weights;
    return weight;
}
Defining CUDA kernels for MLP in C++:
We have to define three kernels to make the MLP work: matrixMulKernel, matrixAddKernel, and softmaxKernel. For more details, please check the links in the references.
__global__ void matrixMulKernel(float* matrixA, float* matrixB, float* matrixC,
                                int rowsA, int colsA, int colsB) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < rowsA && col < colsB) {
        float sum = 0.0f;
        for (int k = 0; k < colsA; k++) {
            sum += matrixA[row * colsA + k] * matrixB[k * colsB + col];
        }
        matrixC[row * colsB + col] = sum;
    }
}

__global__ void matrixAddKernel(float* matrixA, float* matrixB, float* matrixC,
                                int rows, int cols) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < rows && col < cols) {
        int index = row * cols + col;
        matrixC[index] = matrixA[index] + matrixB[index];
    }
}

__global__ void softmaxKernel(float* input, float* output, int rows, int cols) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < rows && col < cols) {
        int index = row * cols + col;
        // Compute the exponential of each element
        float expVal = expf(input[index]);
        // Compute the sum of exponentials for the row
        float sumExp = 0.0f;
        for (int i = 0; i < cols; ++i) {
            sumExp += expf(input[row * cols + i]);
        }
        // Compute the softmax value for the element
        output[index] = expVal / sumExp;
    }
}

float* matrixMul(float* matrixA, float* matrixB, int rowsA, int colsA, int rowsB, int colsB) {
    float* matrixC;
    printf(" ROW A %d COLS A %d \n", rowsA, colsA);
    printf(" ROW B %d COLS B %d \n", rowsB, colsB);
    cudaMalloc((void**)&matrixC, rowsA * colsB * sizeof(float));
    dim3 blockSize(16, 16);
    dim3 gridSize((colsB + blockSize.x - 1) / blockSize.x,
                  (rowsA + blockSize.y - 1) / blockSize.y);
    matrixMulKernel<<<gridSize, blockSize>>>(matrixA, matrixB, matrixC, rowsA, colsA, colsB);
    return matrixC;
}

float* matrixAdd(float* matrixA, float* matrixB, int rows, int cols) {
    float* matrixC;
    printf(" ROW %d COLS %d \n", rows, cols);
    cudaMalloc((void**)&matrixC, rows * cols * sizeof(float));
    dim3 blockSize(16, 16);
    dim3 gridSize((cols + blockSize.x - 1) / blockSize.x,
                  (rows + blockSize.y - 1) / blockSize.y);
    matrixAddKernel<<<gridSize, blockSize>>>(matrixA, matrixB, matrixC, rows, cols);
    return matrixC;
}

float* softmax(float* input, int rows, int cols) {
    float* matrixC;
    printf(" ROW %d COLS %d \n", rows, cols);
    cudaMalloc((void**)&matrixC, rows * cols * sizeof(float));
    dim3 blockSize(16, 16);
    dim3 gridSize((cols + blockSize.x - 1) / blockSize.x,
                  (rows + blockSize.y - 1) / blockSize.y);
    softmaxKernel<<<gridSize, blockSize>>>(input, matrixC, rows, cols);
    return matrixC;
}
Defining main function for inference in C++:
In this section, we combine all the weight reading and kernel definitions into the main function for MLP inference on CUDA.
int main() {
    // Initialize the Python interpreter
    Py_Initialize();
    // Ensure that NumPy is available
    import_array();

    Weights fc1_w = read_weights("./mlp/mnist_mlp/model/fc1.weight.npy", 0);
    Weights fc2_w = read_weights("./mlp/mnist_mlp/model/fc2.weight.npy", 0);
    Weights fc3_w = read_weights("./mlp/mnist_mlp/model/fc3.weight.npy", 0);
    printf("#############################\n");
    Weights fc1_b = read_weights("./mlp/mnist_mlp/model/fc1.bias.npy", 0);
    Weights fc2_b = read_weights("./mlp/mnist_mlp/model/fc2.bias.npy", 0);
    Weights fc3_b = read_weights("./mlp/mnist_mlp/model/fc3.bias.npy", 0);

    // Read image
    Inputs image = read_image("./mlp/mnist_mlp/images/1.npy", 0);
    // Inputs image = read_image("./mlp/mnist_mlp/2.npy", 0);

    // Finalize the Python interpreter
    Py_Finalize();

    int output_row = 1;
    int output_col = 10;
    float* matrixC = nullptr;

    // Layer 1: x * W1 + b1
    matrixC = matrixMul(image.matrix, fc1_w.matrix, image.shape[0], image.shape[1],
                        fc1_w.shape[0], fc1_w.shape[1]);
    matrixC = matrixAdd(matrixC, fc1_b.matrix, fc1_b.shape[0], fc1_b.shape[1]);
    // Layer 2
    matrixC = matrixMul(matrixC, fc2_w.matrix, fc1_b.shape[0], fc1_b.shape[1],
                        fc2_w.shape[0], fc2_w.shape[1]);
    matrixC = matrixAdd(matrixC, fc2_b.matrix, fc2_b.shape[0], fc2_b.shape[1]);
    // Layer 3 + softmax
    matrixC = matrixMul(matrixC, fc3_w.matrix, fc2_b.shape[0], fc2_b.shape[1],
                        fc3_w.shape[0], fc3_w.shape[1]);
    matrixC = matrixAdd(matrixC, fc3_b.matrix, fc3_b.shape[0], fc3_b.shape[1]);
    matrixC = softmax(matrixC, fc3_b.shape[0], fc3_b.shape[1]);
    printf("output shape 1: %d, %d", fc3_b.shape[0], fc3_b.shape[1]);

    float* C = (float*)malloc(output_row * output_col * sizeof(float));
    cudaMemcpy(C, matrixC, output_row * output_col * sizeof(float), cudaMemcpyDeviceToHost);
    cudaDeviceSynchronize();

    printf("rowsA %d\n", output_row);
    printf("colsB %d\n", output_col);
    for (int i = 0; i < output_row; i++) {
        for (int j = 0; j < output_col; j++) {
            printf("%f ", C[i * output_col + j]);   // index by the column count
        }
        printf("\n");
    }
    sleep(5);

    // Clean up CUDA device memory
    cudaFree(fc1_w.matrix);
    cudaFree(fc2_w.matrix);
    cudaFree(fc3_w.matrix);
    cudaFree(fc1_b.matrix);
    cudaFree(fc2_b.matrix);
    cudaFree(fc3_b.matrix);
    return 0;
}
Compile and Running:
To compile the program, we need to use the “nvcc” compiler provided by the CUDA Toolkit. We can compile the program with the following command:
nvcc -o ml.o ml.cu -I/usr/include/python3.8/ -I/usr/lib/python3/dist-packages/numpy/ -lpython3.8
To run the program, we simply execute the binary file generated by the compiler:
./ml.o
Conclusion:
I hope this blog has given you a good introduction to CUDA programming with C, and that you’re excited to explore more advanced topics in CUDA programming. Happy coding!
References:
The Power of GPUs for General-Purpose Computing Using NVIDIA's CUDA Architecture
Vigneshwara
CUDA (Compute Unified Device Architecture) is a parallel computing platform and programming model developed by NVIDIA. It allows developers to harness the immense power of NVIDIA GPUs (graphics processing units) for tasks beyond their traditional role in graphics rendering.
Why CUDA? The Need for Parallelism
Traditional CPUs (central processing units) excel at sequential tasks, processing instructions one after another. However, many real-world problems, such as image and video processing, scientific simulations, and training machine-learning models, involve massive amounts of data and parallel operations. This is where GPUs shine! They have thousands of cores designed to handle many computations concurrently. CUDA bridges the gap, enabling programmers to use languages like C, C++, and Fortran to write code that runs on GPUs.
Key Concepts in CUDA
GPU as a Co-Processor:
CUDA views the GPU as a co-processor working alongside the CPU. The CPU manages the overall program flow, while the GPU handles the heavy parallel computations.
Kernels:
A kernel is a function written in CUDA C/C++ that executes on the GPU. It’s designed to be highly parallel, with each thread of the kernel performing the same operation on different parts of the data.
Example Kernel Code (Vector Addition): This is a classic example that demonstrates parallel computation. Each thread in the kernel adds corresponding elements from two input arrays and stores the result in an output array.
#include <cuda.h>
#include <stdio.h>

// Kernel definition for vector addition
__global__ void vectorAdd(float *a, float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        c[i] = a[i] + b[i];
    }
}

int main() {
    // Allocate memory on the host (CPU)
    int n = 1024;
    float *a, *b, *c;
    a = (float *)malloc(n * sizeof(float));
    b = (float *)malloc(n * sizeof(float));
    c = (float *)malloc(n * sizeof(float));

    // Initialize vectors on the host
    for (int i = 0; i < n; ++i) {
        a[i] = i;
        b[i] = 2 * i;
    }

    // Allocate memory on the device (GPU)
    float *d_a, *d_b, *d_c;
    cudaMalloc((void **)&d_a, n * sizeof(float));
    cudaMalloc((void **)&d_b, n * sizeof(float));
    cudaMalloc((void **)&d_c, n * sizeof(float));

    // Copy data from host to device
    cudaMemcpy(d_a, a, n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(d_b, b, n * sizeof(float), cudaMemcpyHostToDevice);

    // Define grid and block dimensions
    int threadsPerBlock = 256;
    int blocksPerGrid = (n + threadsPerBlock - 1) / threadsPerBlock;

    // Launch the kernel
    vectorAdd<<<blocksPerGrid, threadsPerBlock>>>(d_a, d_b, d_c, n);

    // Copy result from device to host
    cudaMemcpy(c, d_c, n * sizeof(float), cudaMemcpyDeviceToHost);

    // Print the result (for verification)
    for (int i = 0; i < 10; ++i) {
        printf("c[%d] = %f\n", i, c[i]);
    }

    // Free memory on the device
    cudaFree(d_a);
    cudaFree(d_b);
    cudaFree(d_c);

    // Free memory on the host
    free(a);
    free(b);
    free(c);

    return 0;
}
Threads, Blocks, and Grids:
CUDA organizes parallel execution using a hierarchy:
- Thread: The smallest unit of execution on the GPU.
- Block: A group of threads that execute together.
- Grid: A collection of blocks that defines the overall parallel execution structure.
Memory Hierarchy:
CUDA provides different types of memory:
- Global Memory: Large, but relatively slow, accessible by all threads.
- Shared Memory: Smaller, faster memory shared by threads within a block, used for efficient data sharing.
- Constant Memory: Read-only memory for data that doesn't change, cached for fast access.
- Texture Memory: Optimized for spatial locality, used for tasks like image processing.
Data Transfer:
Moving data between the CPU’s main memory and the GPU’s memory is a key aspect of CUDA programming. It’s important to minimize data transfers to optimize performance.
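A hedged sketch of this pattern, using pinned host memory and asynchronous copies on a stream (processKernel, blocks, and threads are placeholders):

float* h_data;                    // pinned (page-locked) host buffer
float* d_data;
cudaStream_t stream;

cudaMallocHost((void**)&h_data, n * sizeof(float));   // pinned allocation enables async copies
cudaMalloc((void**)&d_data, n * sizeof(float));
cudaStreamCreate(&stream);

// Copy, compute and copy back on one stream; transfers can then overlap
// with independent work issued on other streams.
cudaMemcpyAsync(d_data, h_data, n * sizeof(float), cudaMemcpyHostToDevice, stream);
processKernel<<<blocks, threads, 0, stream>>>(d_data, n);
cudaMemcpyAsync(h_data, d_data, n * sizeof(float), cudaMemcpyDeviceToHost, stream);
cudaStreamSynchronize(stream);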
CUDA Programming Model
Advantages of CUDA
Challenges of CUDA
Beyond the Basics — Advanced CUDA Concepts
Conclusion: The Power and Potential of CUDA
CUDA empowers developers to take advantage of the immense parallel processing capabilities of GPUs, revolutionizing fields like scientific computing, AI, and high-performance data processing. While it requires learning new concepts and tools, the potential for dramatic performance gains makes CUDA a compelling choice for modern computing.
To learn more about CUDA:
Note: Let me know if you would like an in-depth explanation of any of these topics.
Block Sparse Matrix-Vector Multiplication with CUDA
Georgii Evtushenko
In the previous post, we discussed sparse matrix-vector multiplication. It was shown that it's possible to take advantage of knowledge about the positions of zeroes by storing matrices in special data structures. Although we improved performance and memory space requirements, we haven't used all the information about the zeroes' positions. In some applications, non-zeroes are gathered in blocks. The knowledge about these blocks could give us more room for optimization. In this post, I'm going to discuss the efficiency of block sparse matrix-vector multiplication on the GPU. To show some real-life application results, I develop a matrix structural analysis application, which is used to simulate the Golden Gate Bridge structure.
Block Compressed Sparse Row (BCSR)
BCSR is one of the most popular block sparse matrix formats. In BCSR, all blocks have the same size. To understand this format imagine a sparse matrix with the block size equal to one. In this case, CSR and BCSR matrix representations are equivalent. Block size increasing doesn’t affect the column and row pointer arrays. Instead, it just extends the values array. That is, columns and row pointer arrays contain values for blocks. Blocks are stored in the value array contiguously. The modification of CSR dramatically reduces memory space requirements.
To access a block row's data, we need to get the number of blocks before the block row (row_ptr[block_row]) and multiply this value by the block size squared. It's possible to store blocks in row-major or column-major order. As I'll show further, the element order within blocks might affect performance. I'll start with row-major order.
In listing 1 I assign one thread per non-block row. To test the performance of the BCSR implementations I've generated N-diagonal block sparse matrices. These matrices suit us for now because we want to exclude load-balancing issues.
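Listing 1 itself is not reproduced here, so the following is only a rough sketch, in the same spirit, of a thread-per-row BCSR SpMV kernel, assuming square bs x bs blocks stored contiguously in row-major order; the variable names are mine.

// y = A * x for a BCSR matrix with square bs x bs blocks, row-major inside each block.
// One thread handles one non-block row (block row br, local row r inside the block).
__global__ void bcsr_spmv_thread_per_row(
    const int* row_ptr, const int* col_ids, const float* values,
    const float* x, float* y, int n_block_rows, int bs) {
  int row = blockIdx.x * blockDim.x + threadIdx.x;
  if (row >= n_block_rows * bs) return;

  int br = row / bs;          // block row
  int r  = row % bs;          // row inside the block

  float sum = 0.0f;
  for (int block = row_ptr[br]; block < row_ptr[br + 1]; ++block) {
    const float* blk = values + (size_t)block * bs * bs;  // blocks stored contiguously
    int first_col = col_ids[block] * bs;
    for (int c = 0; c < bs; ++c) {
      sum += blk[r * bs + c] * x[first_col + c];
    }
  }
  y[row] = sum;
}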
The best speedup of this BCSR SpMV is about 65.8 (for a block size equal to 16). Although the kernel achieves coalesced access to the matrix data for small block sizes, performance drops for a block size equal to 32. I used float matrices in this post, so with 32-element block rows each thread in the warp reads an entire 128-byte cache line on its own.
In contrast, cuSPARSE implementation of SpMV for block sparse matrices doesn’t seem to have such a dramatic performance drop. It means that we have some room for optimizations. The obvious solution is to assign warp per non-block row. That would give us fully-coalesced memory access.
This optimization improves the performance in case of matrices with big block sizes. Although it is cheap to perform inner warp reduction, the new version drops performance for small blocks.
The less obvious optimization is to use column-major order within each block.
This change allows coalesced access to a block: threads access the block's data consecutively.
Now it might be noticed, that threads read vector x consecutively. We could try to cache it.
But it doesn’t affect performance at all. Is it the limit? I don’t think so.
We could allocate a warp per block row in another way. The new algorithm is claimed to be more efficient at handling matrices with block sizes that are not powers of two. In this algorithm, a warp iterates through its block row's columns, covering 32/bs columns at a time. The algorithm limits the maximum block size to the warp size. Although this optimization limits the matrix block size, it might be quite interesting to try.
The algorithm must compute a block index and a horizontal position within the block. In the paper, where I found this algorithm, it’s noted that the algorithm comes at the cost of increased latency from integer operations. Is it possible that the column-by-column algorithm performance (fig. 6) is limited by integer division operations?
The NVIDIA best practices guide states “Integer division and modulo operations are particularly costly and should be avoided or replaced with bitwise operations whenever possible”. To improve performance I’ve tried int_fastdiv library. Unfortunately, it doesn’t improve performance a lot (fig. 7).
Is it possible that shift operations solve our performance issue?
Although it does outperform int_fastdiv (fig. 8), it’s still not enough.
Is there a way for further optimization of integer divisions? Well, it’s better to not perform them at all. Let’s perform division at compile-time.
The templated version is quite close to our best version. Its performance is achieved not only by integer division optimization. The compiler also unrolled inner loops.
I think it’s time to stop using ideal cases of N-diagonal matrices. To test the proposed kernels on real-life matrix structures I’ve selected matrix structural analysis area.
BCSR with non-uniform pattern and matrix structural analysis
Structural analysis is the process of predicting the displacements within a given structure under a prescribed loading condition. I’m not going to describe a matrix structural analysis here. Instead, I’m going to describe the way a mesh affects a matrix structure. To get the idea of global stiffness matrix formation we need to understand the structure of a member stiffness matrix k for each element in mesh:
where L denotes length, E denotes Young's modulus of elasticity, and A denotes the cross-sectional area of an element. Also, we need a transformation matrix T.
To get a member stiffness matrix for a particular element in global coordinates we need to compute K = T^T k T.
That gives us a 4x4 matrix. To assemble it into a global stiffness matrix we need to consider the positions of the K elements. The first two rows of the K matrix are assigned to the first node of the element. The first two columns of the first two rows are added into the first node's diagonal block, and the second two columns of the first two rows are added into the first node's off-diagonal block. The same logic is applied to the second two rows and the second node of the element. That gives us the global stiffness block matrix, whose block size is equal to 2 and whose row and column counts are equal to the node count of the mesh. It's possible to increase the block size of the stiffness matrix by considering a plane frame structure instead of plane trusses. To compute a plane frame structure, the k and T matrices are changed as follows:
where I denotes a moment of inertia. To vary the stiffness matrix size I've written a parametrized Golden Gate Bridge structure generator.
There is a repeating structure in this mesh, so it's easy to vary the segment count or the length of the bridge to control the global stiffness matrix size.
The Golden Gate Bridge produces a matrix structure that is shown in figure 11. It has a more irregular pattern than a simple N-diagonal matrix.
The performance of the different BSpMV kernels is presented in figure 12. In my experiments, the column-major block format outperformed the other options when I assigned a thread per non-block row.
It's important to note that load-imbalance questions weren't considered in the scope of this post. I hope to address these questions in one of the following posts. As always, you can find the source code for this post on GitHub.
Bellman-Ford Single Source Shortest Path Algorithm on GPU using CUDA
Parallel algorithm to compute the shortest distance from a source vertex to all other vertices in a connected graph
Raj Sengo
Overview
Traversing large graphs to compute different kinds of information has various real-world use cases, such as social media networks, communication networks, search engine indexing and PageRank, VLSI design, and biological network analysis. Bellman-Ford, Dijkstra's, and Delta Stepping are widely used single source shortest path (SSSP) algorithms. Dijkstra's algorithm provides a work-efficient implementation, whereas Bellman-Ford lends itself to easy parallel implementation; the Delta Stepping algorithm introduces a trade-off between the two. This article presents three parallel implementation techniques to accelerate the Bellman-Ford SSSP algorithm using Compute Unified Device Architecture (CUDA) for general-purpose computing on graphics processing units (GPGPU). We also compare the performance of all three variations on large graphs. We observed that the run time of a sequential implementation can be reduced by 99.4% for a sparse graph with 400k vertices and 1 million edges with the optimized CUDA implementation.
What is Single Source Shortest Path (SSSP)?
A graph G = (V, E) consists of a set of vertices V and a set of edges E. An edge (u, v, w) is a path from node u to v with weight w. If we think of the nodes as cities, then the edges are routes between the cities and the weights are the distances between them.
In the example graph shown below, V = {A, B, C, D, E} and E = {(A,B,9), (A,C,4), (B,C,10), (B,D,2), (B,E,3), (C,D,2), (C,E,11), (D,B,2), (E,D,2)}. The single source shortest path is the shortest distance from a source vertex, e.g. "A", to all other vertices.
In the above example graph, SSSP from source vertex “A” to all other vertices is given by the blue arrows. i.e A → C → D → B → E
Data set
We used two different types of large graphs. USA road network graph data sets from DIMACS Shortest Paths Implementation challenge (The Center for Discrete Mathematics and Theoretical Computer Science. 9th DIMACS Implementation Challenge) and random generated graphs using SPRAND tool by Cherkassky, Goldberg and Radzik (Reference [1])
Graph Representation
A graph G(V, E) is generally represented via an adjacency matrix or adjacency list. For a sparse graph such as road networks, adjacency list is the preferred representation, since it takes less space. Compressed Sparse Row (CSR) representation is an alternate form of an adjacency list, in which the lists of vertices are packed into one single large array. We found that this representation is suitable for CUDA implementation (Reference [2], [3]). As shown in the below figure, four arrays are used to represent the graph; a vertex array V that stores all the vertices, an index array I that stores the starting position of the adjacency list of edges for each V[i], an edge array E, and a weight array W that stores the weights of each edge. I[i+1]−I[i] provides the number of edges of V[i]
GPU and CUDA
An increase in the use of general-purpose computing on graphics processing units (GPGPU) gives us massive-scale parallel computing capabilities. GPGPU is based on the single instruction, multiple thread (SIMT) execution model, in which each thread executes the same code. CUDA (Compute Unified Device Architecture) is a parallel computing platform and set of APIs created by NVIDIA. It exposes GPU parallelism for general-purpose computing while retaining performance, and it is developed based on industry-standard C++. CUDA consists of a small set of extensions to enable heterogeneous programming.
The GPGPU consists of several streaming multiprocessors (SMs). Each SM consists of several streaming processors(SPs). Each SM has its memory, known as shared memory, which is shared across all the SPs in the SM. From a developer’s standpoint GPUs can be viewed as grids, SMs can be viewed as blocks and SPs can be viewed as threads. The kernel is the piece of code executed by threads. Each thread has its ID that plays a vital role in determining which part of the input data to be accessed by the thread.
Bellman-Ford Sequential Algorithm
Bellman-Ford is a simple algorithm which uses the edge relaxation technique. It relaxes all the edges |V| − 1 times, where |V| is the number of vertices in the graph. It can also work on graphs with negative edge weights, provided there is no negative edge cycle. For this study, only graphs with positive edge weights were considered.
What is edge relaxation?
It's a technique to correct approximate distances with better ones. In the graph below, d[u] is the distance from source "s" to "u" and d[v] is the distance from source "s" to "v". There is an edge between "u" and "v" with weight "w". If the distance of v is more than the distance of u plus the weight of (u, v), then the distance of v is updated to the distance of u plus the weight of (u, v).
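In code, relaxation is just a conditional update; a generic sketch (not the exact kernel used later) looks like this:

// Relax the edge (u, v) with weight w; d[] holds the current distance estimates.
if (d[u] + w < d[v]) {
    d[v] = d[u] + w;
}

In a parallel setting, several threads may try to update d[v] at the same time, which is why GPU implementations typically perform this update with an atomic minimum on integer distances.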
Bellman-Ford Parallel CUDA Implementations
Version 1 — Monolithic kernel
In our first approach, we introduced a monolithic CUDA kernel in which each vertex of the graph is assigned to a separate thread. Each thread relaxes all the outgoing edges of the vertex identified by the thread ID. In this approach, the number of GPU blocks is calculated during run time based on the number of vertices in the input graph. This type of kernel assumes we have adequate threads to cover all vertices of the input graph.
Version 2 — Kernel with Grid-Stride loop
In our second technique to accelerate the parallel implementation, we used a grid stride in the CUDA kernel. The stride is calculated as blockDim.x * gridDim.x, which is equal to the total number of threads in the grid. As an example, if there are 1024 threads in the grid, thread 0 will process the vertices at indices 0, 1024, 2048, etc. This grid-stride loop approach provides scalability and thread reuse. It also ensures that no thread is idle and every thread does a roughly equal amount of work.
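A hedged sketch of one relaxation pass with a grid-stride loop over the CSR arrays I, E, and W described earlier; integer distances and atomicMin are my assumptions for illustration, not necessarily the exact code from our implementation.

#include <limits.h>

// One iteration of Bellman-Ford relaxation with a grid-stride loop.
// I, E, W are the CSR index, edge and weight arrays; d holds integer
// distance estimates (atomicMin requires integer operands).
__global__ void relaxEdges(const int* I, const int* E, const int* W,
                           int* d, int numVertices) {
    int stride = blockDim.x * gridDim.x;
    for (int u = blockIdx.x * blockDim.x + threadIdx.x;
         u < numVertices;
         u += stride) {                       // each thread walks several vertices
        int du = d[u];
        if (du == INT_MAX) continue;          // vertex not reached yet
        for (int e = I[u]; e < I[u + 1]; ++e) {
            int v = E[e];
            atomicMin(&d[v], du + W[e]);      // race-free distance update
        }
    }
}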
Version 3 — Kernel with Grid-Stride loop and relax edges only when needed
To further optimize the performance, we introduced a boolean array F of size |V|. This array is initialized to false at the beginning for all vertices of the graph, and the source is set to true. When a vertex is updated with a shorter distance from the source as part of the relaxation, its corresponding flag in the F array is set to true. This indicates that all the outgoing edges from that vertex need further relaxation in the next iteration. The relax kernel uses this information and relaxes only if the flag is set to true for the corresponding vertex. This approach, combined with the grid-stride loop, eliminates unnecessary work in each thread and hence speeds up the overall execution further.
Test Environment
We used the Texas Advanced Computing Center (TACC) Maverick2 supercomputer system to analyze the performance of all our implementations on large graphs. Maverick2 has an NVIDIA GeForce GTX 1080 Ti GPU device with 28 SMs, 128 SPs per SM, and 11 GB of global memory. Each SM has around 49 KB of shared memory.
Performance Analysis
Our analysis reveals that version 3 of the implementation performed the best. The performance was even better on randomly generated dense graphs in comparison with sparse real-world road network graphs. First, we recorded that the basic sequential implementation took 3.25 hours to perform SSSP calculation on a sparse New York road network graph with 400k+ vertices, and over 1 million edges. In comparison, CUDA version-1 took 39 seconds, version-2 took 42 seconds, and version-3 took 16 seconds to compute the same. We also observed that version 2 i.e grid-stride loop kernel performed better for larger graphs with more than 1 million vertices and 4 million edges. The performance was even better for the randomly generated dense graphs. This indicates that the overhead of launching a new thread block reduces with the introduction of the grid-stride loop, which ensures the block size does not increase as the input increases, thus allowing each thread to handle more work. The table below shows the elapsed run time of our Bellman-Ford implementations for various large graphs.
Conclusion
We presented three variations of parallel implementations of Bellman-Ford single source shortest path algorithm on GPU using CUDA. The kernel with grid-stride loop and the logic to relax edges only when needed, performed better for larger graphs from 1–10 million edges. We used CSR representation in our implementations. This approach takes more space for dense graphs. Further study can be done to use an alternate format (like adjacency matrix) for dense graphs and use of shared memory inside the CUDA blocks to study the performance.
This blog post is based on a term paper that I and my partner Stephan Garland submitted for the Parallel Algorithms class at The University of Texas, Austin, Summer 2020. Many thanks to Dr. Vijay Garg for the well structured and wonderfully taught Parallel Algorithms class.
Github link: https://github.com/sengorajkumar/gpu_graph_algorithms
References
[1] Boris Cherkassky, Andrew V. Goldberg and Tomasz Radzik. Shortest Paths Algorithms: Theory and Experimental Evaluation. Mathematical Programming, pages 129–174, 1993.
[2] Pankhari Agarwal and Maitreyee Dutta. New Approach of Bellman-Ford Algorithm on GPU using Compute Unified Design Architecture (CUDA). International Journal of Computer Applications (0975–8887), Vol. 110, No. 13, 2015.
[3] Pawan Harish and P. J. Narayanan. Accelerating Large Graph Algorithms on the GPU Using CUDA. Center for Visual Information Technology, International Institute of Information Technology, Hyderabad, India.
[4] Mark Harris. CUDA Pro Tip: Write Flexible Kernels with Grid-Stride Loops. https://developer.nvidia.com/blog/cuda-pro-tip-write-flexible-kernels-grid-stride-loops/, April 22, 2013.
[5] Mark Harris. How to Access Global Memory Efficiently in CUDA C/C++ Kernels. https://developer.nvidia.com/blog/how-access-global-memory-efficiently-cuda-c-kernels/, January 7, 2013.
[6] Cyril Zeller, NVIDIA Corporation. CUDA C/C++ Basics, Supercomputing 2011 Tutorial.
Flash Attention with CUDA
Damien J
Introduction
Attention mechanisms have revolutionized the field of natural language processing and deep learning. They allow models to focus on relevant parts of input data while performing tasks like machine translation, language generation, and more. Attention mechanisms enable models to weigh different parts of input data differently, focusing on the most relevant information while performing a task. This mimics the human ability to selectively pay attention to certain aspects of our surroundings while filtering out distractions. Attention mechanisms have been instrumental in improving the performance of various AI models, particularly in sequence-to-sequence tasks.
Flash Attention, as the name suggests, brings a fast and memory-efficient solution to attention mechanisms. It addresses some of the inefficiencies present in traditional attention mechanisms, making them more suitable for large-scale tasks and complex models. It is particularly crucial for training large-scale deep learning models, such as BERT-large and GPT-2, where the attention step often becomes a bottleneck. By introducing memory-efficient and exact computations, Flash Attention accelerates the training process while maintaining high accuracy.
Note: This article focuses on Flash Attention 1. Subsequent work has introduced newer variants — such as Flash Attention 2 and even Flash Attention 3 — that build upon these techniques to further improve speed, memory efficiency, and numerical stability.
1. Flash Attention Intuition & Algorithm:
Simplified Intuition - Imagine you have a huge list of words or tokens (like all the words in a paragraph) and your goal is to figure out how each word relates to all the other words. Normally, you’d compare every word with every other word in a big grid (a big table of numbers). This works, but it can use a lot of memory — like trying to keep the entire grid in your computer’s brain at once. Flash Attention is a trick to handle that comparison in smaller pieces. Instead of storing the entire giant grid, it looks at only a small part of the information at a time, does the math, and then moves on to the next piece. By processing the data in “bite-sized chunks,” Flash Attention avoids filling up memory with a huge table. The result is that you still get the same overall understanding of how each word relates to every other word, but you use much less memory, and you often run faster too. Essentially, Flash Attention is like taking the same big job and chopping it into smaller tasks that are more manageable and efficient.
In traditional mechanisms, the quadratic memory complexity (O(N²), where N is the sequence length) significantly limits the scalability of models. Flash Attention reduces this to linear complexity (O(N)) through innovative techniques such as memory tiling, thread coarsening, and recomputation, making it a more feasible option for large-scale tasks.
Core Components of Flash Attention
Flash Attention’s effectiveness lies in its understanding of the hardware it runs on. It exploits the fact that different types of memory in GPUs have varying capacities and speeds. For instance, SRAM is faster but smaller, while HBM (high bandwidth memory, the global memory) is larger but slower. By minimising the communication between these memory types, Flash Attention significantly speeds up computations
Flash Attention Algorithm: Tiling and Recomputation
Flash Attention’s algorithm can be summarised in two main ideas: tiling and recomputation.
Tiling: During both forward and backward passes, Flash Attention divides the attention matrices into smaller blocks, optimizing memory usage and improving computation efficiency.
Recomputation: In the backward pass, Flash Attention recomputes attention matrices using stored outputs and softmax normalization statistics, eliminating the need for excessive memory storage.
Complexity and Real-World Challenges
Flash Attention’s space complexity scales linearly with the sequence length and attention head dimension. This makes it suitable for handling large-scale models and tasks.
However, implementing Flash Attention comes with challenges, particularly in writing optimized CUDA kernels. The need for lower-level language coding can hinder adoption, but projects like Triton offer potential solutions to this issue.
Impact
Modern AI models, particularly those in natural language processing (NLP) and computer vision, rely heavily on attention mechanisms to handle dependencies across long sequences or image patches. Flash Attention addresses the pressing need for scalable, efficient solutions in these domains. It enables researchers to train larger models faster and with fewer resources, directly impacting the pace of advancements in AI.
Benefits of Using GPUs and CUDA
These aspects make Flash Attention a pivotal step in advancing AI workloads, unlocking new possibilities for large-scale models in research and production.
2. Parallelization: Concept, Semantics, and Implementation
1. Phasing-and-Tiling
Concept/Semantics:
Implementation:
2. Thread Coarsening
Concept/Semantics:
Implementation:
3. Shared Memory
Concept/Semantics:
Implementation:
4. Memory Coalescing
Concept/Semantics:
Implementation:
3. Parallel Patterns
1. Map
2. Reduce
3. Tiling
4. Experimental Results
Here are some output samples that have smaller sizes:
We mimic real embeddings by generating the query, key, and value matrices randomly. The CPU implementation serves as the benchmark and reference output, and two GPU kernels were designed: a naive global-memory version and an optimized shared-memory (tiled) version.
5. Source Code
All the source code can be found in the https://github.com/damienjose/cuda-flashattention Github link.
1. CPU Implementation (Benchmark)

// CPU Implementation of Attention
void computeAttentionCPU(float* query, float* key, float* value,
                         float* attentionScores, float* output) {
    float* transposedKey = (float*)malloc(FEATURE_DIMENSION * NUM_SAMPLES * sizeof(float));
    transposeMatrix(key, transposedKey, NUM_SAMPLES, FEATURE_DIMENSION);
    float scalingFactor = 1.0f / sqrtf((float)FEATURE_DIMENSION);

    // Compute attention scores
    for (int i = 0; i < NUM_SAMPLES; i++) {
        for (int j = 0; j < NUM_SAMPLES; j++) {
            for (int k = 0; k < FEATURE_DIMENSION; k++) {
                attentionScores[i * NUM_SAMPLES + j] +=
                    query[i * FEATURE_DIMENSION + k] * transposedKey[k * NUM_SAMPLES + j];
            }
            attentionScores[i * NUM_SAMPLES + j] *= scalingFactor;
        }
    }

    // Softmax row-wise
    for (int row = 0; row < NUM_SAMPLES; row++) {
        float maxScore = attentionScores[row * NUM_SAMPLES];
        for (int col = 1; col < NUM_SAMPLES; col++) {
            if (attentionScores[row * NUM_SAMPLES + col] > maxScore) {
                maxScore = attentionScores[row * NUM_SAMPLES + col];
            }
        }
        float sumExp = 0.0f;
        for (int col = 0; col < NUM_SAMPLES; col++) {
            attentionScores[row * NUM_SAMPLES + col] =
                exp(attentionScores[row * NUM_SAMPLES + col] - maxScore);
            sumExp += attentionScores[row * NUM_SAMPLES + col];
        }
        for (int col = 0; col < NUM_SAMPLES; col++) {
            attentionScores[row * NUM_SAMPLES + col] /= sumExp;
        }
    }

    // Multiply by Value matrix
    for (int i = 0; i < NUM_SAMPLES; i++) {
        for (int j = 0; j < FEATURE_DIMENSION; j++) {
            for (int k = 0; k < NUM_SAMPLES; k++) {
                output[i * FEATURE_DIMENSION + j] +=
                    attentionScores[i * NUM_SAMPLES + k] * value[k * FEATURE_DIMENSION + j];
            }
        }
    }

    free(transposedKey);
}
2. Naive GPU Implementation (Global Memory)
// Kernel: QK^T
__global__ void computeScoresKernel(float* queryMatrix, float* keyMatrix, float* scoreMatrix) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < NUM_SAMPLES && col < NUM_SAMPLES) {
        float score = 0.0f;
        for (int d = 0; d < FEATURE_DIMENSION; ++d) {
            score += queryMatrix[row * FEATURE_DIMENSION + d] * keyMatrix[col * FEATURE_DIMENSION + d];
        }
        scoreMatrix[row * NUM_SAMPLES + col] = score / sqrtf(static_cast<float>(FEATURE_DIMENSION));
    }
}

// Kernel: Softmax
__global__ void applySoftmaxKernel(float* scoreMatrix, float* softmaxMatrix) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    if (row < NUM_SAMPLES) {
        float maxScore = -1e30f;
        for (int col = 0; col < NUM_SAMPLES; ++col) {
            maxScore = fmaxf(maxScore, scoreMatrix[row * NUM_SAMPLES + col]);
        }
        float sumExp = 0.0f;
        for (int col = 0; col < NUM_SAMPLES; ++col) {
            softmaxMatrix[row * NUM_SAMPLES + col] = expf(scoreMatrix[row * NUM_SAMPLES + col] - maxScore);
            sumExp += softmaxMatrix[row * NUM_SAMPLES + col];
        }
        for (int col = 0; col < NUM_SAMPLES; ++col) {
            softmaxMatrix[row * NUM_SAMPLES + col] /= sumExp;
        }
    }
}

// Kernel: Output = Softmax(QK^T) * V
__global__ void computeOutputKernel(float* softmaxMatrix, float* valueMatrix, float* outputMatrix) {
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < NUM_SAMPLES && col < FEATURE_DIMENSION) {
        float result = 0.0f;
        for (int k = 0; k < NUM_SAMPLES; ++k) {
            result += softmaxMatrix[row * NUM_SAMPLES + k] * valueMatrix[k * FEATURE_DIMENSION + col];
        }
        outputMatrix[row * FEATURE_DIMENSION + col] = result;
    }
}

// Complete naive GPU routine
void computeAttentionGPUGlobal(float* queryMatrix, float* keyMatrix, float* valueMatrix, float* attentionMatrix, float* outputMatrix) {
    // Device pointers + memory allocations + memcpys omitted for brevity...
    // Launch kernels
    computeScoresKernel<<<gridDim, blockDim>>>(d_queryMatrix, d_keyMatrix, d_scoreMatrix);
    cudaDeviceSynchronize();
    applySoftmaxKernel<<<softmaxGridDim, softmaxBlockDim>>>(d_scoreMatrix, d_softmaxMatrix);
    cudaDeviceSynchronize();
    computeOutputKernel<<<outputGrid, outputBlock>>>(d_softmaxMatrix, d_valueMatrix, d_outputMatrix);
    cudaDeviceSynchronize();
    // Copy results back + free memory ...
}
3. Optimized GPU Implementation (Shared Memory)
// Kernel: QK^T using Shared Memory
__global__ void shared_compute_scores(float* queryMatrix, float* keyTransposeMatrix, float* attentionScores) {
    __shared__ float sharedQuery[MEM_WIDTH][MEM_WIDTH];
    __shared__ float sharedKeyTranspose[MEM_WIDTH][MEM_WIDTH];

    int threadX = threadIdx.x;
    int threadY = threadIdx.y;
    int blockX = blockIdx.x;
    int blockY = blockIdx.y;

    int scoreColumnIndex = blockX * TILE_WIDTH + threadX;
    int scoreRowIndex = blockY * TILE_WIDTH + threadY;

    float scoreValue = 0.0f;
    int numPhases = (FEATURE_DIMENSION + TILE_WIDTH - 1) / TILE_WIDTH;

    for (int phase = 0; phase < numPhases; phase++) {
        // Load query tile
        if (phase * TILE_WIDTH + threadX < FEATURE_DIMENSION && blockY * TILE_WIDTH + threadY < NUM_SAMPLES) {
            sharedQuery[threadY][threadX] = queryMatrix[(blockY * TILE_WIDTH + threadY) * FEATURE_DIMENSION + phase * TILE_WIDTH + threadX];
        } else {
            sharedQuery[threadY][threadX] = 0.0f;
        }
        // Load keyTranspose tile
        if (phase * TILE_WIDTH + threadY < FEATURE_DIMENSION && blockX * TILE_WIDTH + threadX < NUM_SAMPLES) {
            sharedKeyTranspose[threadY][threadX] = keyTransposeMatrix[(phase * TILE_WIDTH + threadY) * NUM_SAMPLES + (blockX * TILE_WIDTH + threadX)];
        } else {
            sharedKeyTranspose[threadY][threadX] = 0.0f;
        }
        __syncthreads();

        // Dot product
        if (scoreColumnIndex < NUM_SAMPLES && scoreRowIndex < NUM_SAMPLES) {
            for (int i = 0; i < TILE_WIDTH; i++) {
                scoreValue += sharedQuery[threadY][i] * sharedKeyTranspose[i][threadX];
            }
        }
        __syncthreads();
    }

    if (scoreColumnIndex < NUM_SAMPLES && scoreRowIndex < NUM_SAMPLES) {
        attentionScores[scoreRowIndex * NUM_SAMPLES + scoreColumnIndex] = scoreValue / sqrtf(static_cast<float>(FEATURE_DIMENSION));
    }
}

// Kernel: Softmax
__global__ void shared_softmax(float* attentionScores, float* softmaxScores) {
    int rowIndex = blockIdx.y * blockDim.y + threadIdx.y;
    if (rowIndex < NUM_SAMPLES) {
        float maxScore = -1e30f;
        // Find max for numerical stability
        for (int colIndex = 0; colIndex < NUM_SAMPLES; ++colIndex) {
            maxScore = fmaxf(maxScore, attentionScores[rowIndex * NUM_SAMPLES + colIndex]);
        }
        float sumExp = 0.0f;
        for (int colIndex = 0; colIndex < NUM_SAMPLES; ++colIndex) {
            softmaxScores[rowIndex * NUM_SAMPLES + colIndex] = expf(attentionScores[rowIndex * NUM_SAMPLES + colIndex] - maxScore);
            sumExp += softmaxScores[rowIndex * NUM_SAMPLES + colIndex];
        }
        for (int colIndex = 0; colIndex < NUM_SAMPLES; ++colIndex) {
            softmaxScores[rowIndex * NUM_SAMPLES + colIndex] /= sumExp;
        }
    }
}

// Kernel: Output = SoftmaxScores * V
__global__ void shared_compute_output(float* softmaxScores, float* valueMatrix, float* outputMatrix) {
    __shared__ float sharedSoftmaxScores[TILE_WIDTH][TILE_WIDTH];
    __shared__ float sharedValueMatrix[TILE_WIDTH][TILE_WIDTH];

    int threadX = threadIdx.x;
    int threadY = threadIdx.y;
    int blockX = blockIdx.x;
    int blockY = blockIdx.y;

    int outputColumnIndex = blockX * TILE_WIDTH + threadX;
    int outputRowIndex = blockY * TILE_WIDTH + threadY;

    float outputValue = 0.0f;
    int numPhases = (NUM_SAMPLES + TILE_WIDTH - 1) / TILE_WIDTH;

    for (int phase = 0; phase < numPhases; phase++) {
        if (phase * TILE_WIDTH + threadX < NUM_SAMPLES && blockY * TILE_WIDTH + threadY < NUM_SAMPLES) {
            sharedSoftmaxScores[threadY][threadX] = softmaxScores[(blockY * TILE_WIDTH + threadY) * NUM_SAMPLES + (phase * TILE_WIDTH + threadX)];
        } else {
            sharedSoftmaxScores[threadY][threadX] = 0.0f;
        }
        if (phase * TILE_WIDTH + threadY < NUM_SAMPLES && blockX * TILE_WIDTH + threadX < FEATURE_DIMENSION) {
            sharedValueMatrix[threadY][threadX] = valueMatrix[(phase * TILE_WIDTH + threadY) * FEATURE_DIMENSION + (blockX * TILE_WIDTH + threadX)];
        } else {
            sharedValueMatrix[threadY][threadX] = 0.0f;
        }
        __syncthreads();

        // Dot product
        if (outputColumnIndex < FEATURE_DIMENSION && outputRowIndex < NUM_SAMPLES) {
            for (int i = 0; i < TILE_WIDTH; i++) {
                outputValue += sharedSoftmaxScores[threadY][i] * sharedValueMatrix[i][threadX];
            }
        }
        __syncthreads();
    }

    if (outputColumnIndex < FEATURE_DIMENSION && outputRowIndex < NUM_SAMPLES) {
        outputMatrix[outputRowIndex * FEATURE_DIMENSION + outputColumnIndex] = outputValue;
    }
}

// Complete shared memory GPU routine
void computeAttentionGPUShared(float* queryMatrix, float* keyTransposeMatrix, float* valueMatrix, float* attentionScores, float* outputMatrix) {
    // Device pointers + memory allocations + memcpys omitted for brevity...
    // Launch kernels
    shared_compute_scores<<<gridDimension, blockDimension>>>(deviceQuery, deviceKeyTranspose, deviceAttentionScores);
    cudaDeviceSynchronize();
    shared_softmax<<<softmaxGridDimension, softmaxBlockDimension>>>(deviceAttentionScores, deviceSoftmaxScores);
    cudaDeviceSynchronize();
    shared_compute_output<<<outputGrid, outputBlock>>>(deviceSoftmaxScores, deviceValue, deviceOutput);
    cudaDeviceSynchronize();
    // Copy results back + free memory ...
}
Key Differences Between Implementations
CPU:
Global Memory CUDA:
Shared Memory CUDA:
6. Comparative Analysis
Below are the CPU, GPU global memory, and GPU shared memory latency measurements for different TILE_WIDTH, BLOCK_SIZE, NUM_SAMPLES, and FEATURE_DIMENSION values, as measured on an NVIDIA GeForce RTX 4050 Laptop GPU.
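A common way to obtain latency numbers like these is to bracket each implementation with CUDA events; the snippet below is my own sketch of that measurement approach (the repository may instead use std::chrono, time only the kernels, or average over several runs):

cudaEvent_t startEvent, stopEvent;
cudaEventCreate(&startEvent);
cudaEventCreate(&stopEvent);

cudaEventRecord(startEvent);
computeAttentionGPUShared(query, keyTranspose, value, attentionScores, output);  // routine from Section 5
cudaEventRecord(stopEvent);
cudaEventSynchronize(stopEvent);

float elapsedMs = 0.0f;
cudaEventElapsedTime(&elapsedMs, startEvent, stopEvent);  // elapsed GPU time in milliseconds
printf("Shared-memory attention latency: %.2f ms\n", elapsedMs);

cudaEventDestroy(startEvent);
cudaEventDestroy(stopEvent);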
a. Performance Breakdown:
CPU Implementation:
Naive GPU Implementation (Global Memory):
Optimized GPU Implementation (Shared Memory):
b. Computational Cost/Complexity
CPU Implementation:
Naive GPU Implementation (Global Memory):
Optimized GPU Implementation (Shared Memory):
c. Work Efficiency
CPU Implementation:
Naive GPU Implementation (Global Memory):
Optimized GPU Implementation (Shared Memory):
7. Conclusion
Key Optimizations:
Performance Insights:
Scalability:
Limitations:
Experimental Outcomes:
Latency Measurements (1024 x 1024 matrices):
CPU: 9,427 ms
Naive GPU (Global Memory): 38.78 ms
Optimized GPU (Shared Memory): 14.12 ms
For large matrix sizes (e.g., 1024x1024), the optimized shared memory implementation is ~3x faster than the naive GPU implementation and ~600x faster than the CPU implementation, and it achieves close-to-ideal theoretical occupancy with minimal control divergence.
8. References
9. Appendix
Change the SM and compute settings (the CUDA code generation target, e.g. compute_xx,sm_xx) in the Visual Studio project properties before you run the code, for both Debug and Release builds.
Profiling and Performance Analysis
Two main GPU implementations were profiled — Naive (Global Memory) vs. Optimized (Shared Memory) — on an NVIDIA GeForce RTX 4050 Laptop GPU with N = D = 1024 and BLOCK_SIZE = TILE_WIDTH = MEM_WIDTH = 32.
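Metrics of this kind (occupancy, memory throughput, control divergence) are typically collected with NVIDIA Nsight Compute, for example ncu -o report ./flash_attention.exe, and end-to-end timelines with Nsight Systems via nsys profile ./flash_attention.exe; these command lines and the executable name are illustrative on my part, not necessarily the exact invocations used for this project.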
Naive (Global Memory) Highlights
Compute Scores Kernel:
Softmax Kernel:
Output Kernel:
Optimized (Shared Memory) Highlights
Shared Compute Scores:
Shared Softmax:
Shared Compute Output:
Written by Damien J
Introduction to Parallel Programming with CUDA and C++
Avin
Parallel programming on GPUs is one of the best ways to speed up processing of compute-intensive workloads. Programming for CUDA-enabled GPUs can be as complex or as simple as you want it to be. Some developers elect to take the simpler route of using libraries and frameworks such as Thrust and OpenCL. While these shorten serial-to-parallel conversion times, they do not expose the GPU architecture details that enable speedups in the order of 10x to 100x over CPUs. In this article, I will explore some of the key concepts behind GPU acceleration and walk through parallelizing a simple algorithm.
CUDA Architecture
A modern CPU consists of several cores, each capable of executing a single thread at a time, and each thread can run its own unique sequence of instructions.
CUDA Architecture utilizes a different approach where a collection of “streaming multiprocessors” (SM) execute the same set of instructions, including branch conditions on multiple threads on different regions of data. Nvidia has coined the term SIMT, single instruction, multiple threads, to describe this architecture.
In the accompanying diagram, each corresponding pair of arrows entering and leaving the streaming multiprocessor represents a single thread. Twenty-one threads are working in parallel in this theoretical GPU, so it can map 21 elements in a single step; a CPU computing serially would require 21 steps, one after another, to map the same array.
Our programs do not always map one to one to the hardware implementation of the above architecture, for example, we may have more than 21 elements in the array or this program may need to run on a lesser GPU with only 10 parallel threads. The Nvidia CUDA runtime API provides a layer of abstraction between thread execution in hardware and thread representation in software.
In the simplest of terms, developers can group threads into blocks, and a collection of blocks forms a grid. The maximum number of threads per block is 1,024, and the number of blocks per grid can go up to 2³¹−1, i.e. 2,147,483,647, the maximum value of a signed int.
Let’s apply this in practice to an array of one million elements that are to be processed in parallel. If we launch 1,024 threads per block, we need at least 977 blocks to process the entire array in parallel (1,000,000 / 1024 = 976.6). Each of these threads needs to know which element of the array to process. Since we have one thread for each element of the array, we can use an id based on the thread count. The CUDA runtime exposes threadIdx.x to reveal the thread id within a block and blockIdx.x to reveal the block id within the grid. It also exposes blockDim.x to reveal the dimensions of the block. Putting it all together, we can calculate a global id for each thread within the entire grid as follows:
int globalIdx = blockIdx.x * blockDim.x + threadIdx.x
Knowing the global thread id, we also need to ensure that threads do not attempt illegal memory access, as there are more threads than there are elements in an array due to rounding up. Combining these concepts, a single unit of computation, i.e. a CUDA kernel, looks as follows in pseudo C++ code.
kernel function parallel_scalar_multiply
    int globalIdx = blockIdx.x * blockDim.x + threadIdx.x
    if globalIdx < N then          // where N is number of elements in array
        array[globalIdx] = array[globalIdx] * 10
    end if
end kernel
As you might have noticed, some threads in this case will not do any work, as they are out of bounds of the array. This is natural in massively parallel architectures such as CUDA. To launch this kernel to process all elements in an array, the kernel invocation code has to specify the launch configuration that was discussed above. In pseudo code:
launch parallel_scalar_multiply with 1024 threads in 977 blocks
Before we launch
Programming on CUDA requires the CUDA Toolkit and a CUDA-capable GPU. While the CUDA Toolkit extends the C language, C++ provides a richer syntax on top of C and will be the language of choice. The CUDA compiler and profiler are installed with the toolkit; the debugger and visual profiler may require separate installation.
Compiler — nvcc automatically adds the CUDA source headers and links the CUDA runtime libraries.
Profiler — nvprof command line profiler that profiles kernel execution times and runtime API calls
Debugger — cuda-gdb compiler with nvcc -g to enable verbose debugging output
Visual Profiler — nvvp the swiss army knife of CUDA application optimization. Includes detailed analysis of core utilization, register usage, global memory access and timing profiles.
Our first CUDA kernel launch
As with any program, execution has to start at the CPU. We will create test data, process it on the CPU (host), time it and transfer the data to the GPU (device), process it again and copy the results back to the host machine. The code below is compiled with nvcc -std=c++11 cuda_multiply.cu and profiled with nvprof ./a.out
We first generate 60 million integers as the test data:
int main() {
    int count = 60000000; // 60 million elements
    int* test_data = new int[count];
    for(int i = 0; i < count; i++)
        test_data[i] = i;
}
The CPU multiplication, timed with C++ chrono, #include <chrono>
auto t1 = std::chrono::high_resolution_clock::now();
for(int i = 0; i < count; i++)
    test_data[i] = test_data[i] * 5;
auto t2 = std::chrono::high_resolution_clock::now();
In order to perform the calculation on the GPU, the data has to be copied first, the CUDA run-time exposes cudaMalloc to allocate memory on the GPU and cudaMemcpy to transfer the data.
int* d_test_data;
cudaMalloc(&d_test_data, count * sizeof(int));
cudaMemcpy(d_test_data, test_data, count * sizeof(int), cudaMemcpyHostToDevice);
The kernel launch configuration is specified using the following syntax
<<<num of blocks, num of threads per block>>>
It is similar to an ordinary method call except for the above snippet follows immediately after the method name before specifying the parameters:
int block_count = ceil((double)count / 1024);
_cuda_parallel_multiplication<<<block_count, 1024>>>(count, d_test_data, 5);
The kernel code is as follows
__global__ void _cuda_parallel_multiplication(int count, int* test_data, int magnitude) {
    int globalIdx = blockIdx.x * blockDim.x + threadIdx.x;
    if (globalIdx < count)
        test_data[globalIdx] = test_data[globalIdx] * magnitude;
}
The __global__ specifier informs the compiler that the function is a CUDA kernel and has to be compiled as such. The if statement checks whether the current global thread id is within bounds of the array before accessing it and performing the multiplication. This set of instructions is executed identically across all 60 million threads launched.
The following code copies the processed data back to the host machine from the CUDA device.
cudaDeviceSynchronize();
cudaMemcpy(test_data, d_test_data, count * sizeof(int), cudaMemcpyDeviceToHost);
CUDA kernel launches are asynchronous in relation to the host computer; cudaDeviceSynchronize() blocks the CPU code till all GPU functions complete. cudaMemcpy synchronizes by default as well, but I have been explicit here to demonstrate the need to synchronize before any calculations can be done with data from the GPU. The processed data is copied back to the initial test data set.
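One small addition worth making at this point (my own suggestion, not part of the original listing) is to check for errors, since a failed kernel launch is otherwise silent:

// Launch as before, then query the CUDA runtime for errors (requires <iostream>).
_cuda_parallel_multiplication<<<block_count, 1024>>>(count, d_test_data, 5);
cudaError_t err = cudaGetLastError();      // catches bad launch configurations
if (err != cudaSuccess)
    std::cerr << "Kernel launch failed: " << cudaGetErrorString(err) << std::endl;
err = cudaDeviceSynchronize();             // catches errors that occur during execution
if (err != cudaSuccess)
    std::cerr << "Kernel execution failed: " << cudaGetErrorString(err) << std::endl;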
The program finally samples a portion of the results, prints the CPU execution time and exits.
for(int i = 0; i < 10; i++)
    std::cout << i << ": " << test_data[i] << std::endl;
std::cout << "CPU time: " << std::chrono::duration_cast<std::chrono::milliseconds>(t2 - t1).count() << "ms" << std::endl;
At this point the data has gone through two multiplication rounds (by 5 on the CPU and by 5 on the GPU), and the sample output reflects this as i x 5 x 5:
0: 0
1: 25
2: 50
3: 75
4: 100
5: 125
6: 150
7: 175
8: 200
9: 225
CPU time: 97ms
Running the application with nvprof ./a.out results in:
Time(%)      Time     Name
46.85%    26.438ms   [CUDA memcpy DtoH]
42.90%    24.207ms   [CUDA memcpy HtoD]
10.25%    5.7829ms   _cuda_parallel_multiplication(int, int*, int)
The 97ms computation on the CPU takes a mere 5.8ms on the GPU. It is overshadowed by the memcpy times here, but only because this kernel is trivial and performs no meaningful work. Memory latency can be hidden by using streams to overlap memcpy and kernel execution, which I will explore in later posts.
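As a rough preview of that technique (my own sketch, not code from this post): the array can be split into chunks, each copied and processed in its own stream, so that the copy of one chunk overlaps with the kernel working on another. Note that truly asynchronous copies require pinned host memory allocated with cudaMallocHost rather than new[].

const int num_streams = 4;
int chunk = count / num_streams;                      // assumes count divides evenly, for brevity
cudaStream_t streams[num_streams];
for (int s = 0; s < num_streams; s++) cudaStreamCreate(&streams[s]);

for (int s = 0; s < num_streams; s++) {
    int offset = s * chunk;
    cudaMemcpyAsync(d_test_data + offset, test_data + offset, chunk * sizeof(int),
                    cudaMemcpyHostToDevice, streams[s]);
    int blocks = (chunk + 1023) / 1024;
    _cuda_parallel_multiplication<<<blocks, 1024, 0, streams[s]>>>(chunk, d_test_data + offset, 5);
    cudaMemcpyAsync(test_data + offset, d_test_data + offset, chunk * sizeof(int),
                    cudaMemcpyDeviceToHost, streams[s]);
}
cudaDeviceSynchronize();
for (int s = 0; s < num_streams; s++) cudaStreamDestroy(streams[s]);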
A simple optimization
You must have wondered why we launched 60 million threads, one for each element in the array, when physical GPUs do not possess the capacity to process such monstrous amounts of threads in parallel. It was merely a technique to easily map the sequential workload to a parallel one. However, it introduces the overhead of switching through 60 million threads. While the optimal thread count varies by workload, generally the block count should be a multiple of the number of SMs on the GPU, while the threads per block should be the maximum that does not overload register usage per block. With a trivial multiplication such as the above, a thread count of 1024 does not overflow the registers, leaving us to pick a block count. On my GTX 1050, a single streaming multiprocessor (SM) processes two blocks simultaneously, with a total of 5 SMs. (A Tesla P100 has a whopping 56 SMs!) Therefore, to achieve max occupancy, a total of 10 (5 x 2) blocks, each with 1024 threads, should be launched. But how do 10,240 threads process 60 million elements? Loop.
__global__ void _cuda_parallel_multiplication(int count, int* test_data, int magnitude) {
    int globalIdx = blockIdx.x * blockDim.x + threadIdx.x;
    while (globalIdx < count) {
        test_data[globalIdx] = test_data[globalIdx] * magnitude;
        globalIdx += blockDim.x * gridDim.x;
    }
}
The if statement from the previous kernel has been turned into a loop. Each iteration increases the id by the block dimension multiplied by the grid dimension. i.e. the entire grid of threads is “striding” to the next group of elements in the array. Switching to a grid stride resulted in a 0.6ms decrease in execution time on my GTX1050.
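Rather than hard-coding the SM count, it can also be queried at runtime (my own addition to the example; the factor of two blocks per SM is only a starting point, since achievable occupancy also depends on register and shared-memory usage):

cudaDeviceProp prop;
cudaGetDeviceProperties(&prop, 0);             // properties of device 0
int blocks = prop.multiProcessorCount * 2;     // e.g. 5 SMs x 2 = 10 blocks on a GTX 1050
_cuda_parallel_multiplication<<<blocks, 1024>>>(count, d_test_data, 5);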
Another optimization
__global__ void _cuda_parallel_multiplication(int count, int* test_data, int magnitude) {
    int globalIdx = blockIdx.x * blockDim.x + threadIdx.x;
    while (globalIdx < count) {
        test_data[globalIdx] = test_data[globalIdx] * magnitude;
        globalIdx += blockDim.x * gridDim.x;
        __syncthreads();
    }
}
__syncthreads() is a block-level function that synchronizes execution of all threads across the block. Using it here ensures that all threads reach this point of execution before the next iteration begins. In the context of this kernel, it ensures that global memory loads happen in unison for the entire block. Together with the grid striding, this ensures maximal use of caching: entire cache lines are loaded into memory, shared across all threads in the block, and read and written before the next segment of global memory is fetched. On my GTX 1050 GPU, a difference in execution time of 0.2ms was observed with and without __syncthreads().
Results
CPU: Core i3–7100 @ 3.9GHZ
Average Time: 95ms
GPU: Nvidia GTX 1050, 5 SMs
Unoptimized Average Time: 5.8ms Speedup: x16
Optimized Average Time: 5.0ms Speedup: x19
GPU: Nvidia Tesla P100, 56 SMs
Optimized Average Time: 920us=0.92ms Speedup: x103
Conclusion
The results above speak to the efficacy of CUDA as a platform for accelerated computing. A simple multiplication was chosen for parallelization to allow the reader to focus on the CUDA architecture as opposed to understanding a complex calculation. The speed up seen in this trivial calculation is not an exaggeration of speed up seen in more complex useful calculations. However, overall speedup in a real application might not be as high due to memory limitations and copy latency.
Credits
Mark Harris — https://devblogs.nvidia.com/author/mharris/
Wayne Kelly — http://staff.qut.edu.au/staff/kellyw/
Written by Avin
Computer Science. Philosophy. Object Oriented.
Optimizing any TensorFlow model using TensorFlow Transform Tools and using TensorRT
Model optimization and reducing precision from FP32 to FP16 for speedup and a smaller graph size.
Alex Punnen
Better Software
What is this article about? Whom is this meant for?
More than an article, this is basically a how-to on optimizing a TensorFlow model using TF graph transformation tools and NVIDIA TensorRT. It is a bit of a heavy read and is meant more for Data Science Engineers than Data Scientists. Beyond the how-to, it also illustrates the bugs and unexpected results I encountered along the way.
These results may be unexpected to me in my role as a DS Engineer but may have a rational explanation if I understood the abstractions of the model, or of the TF framework that implements it, more deeply. I don't have that skill yet and leave it to the reader to point out any mistakes I have made. I have raised two bugs, one with TF Graph Transform and one with the NVIDIA TensorRT transform tool; if they get answered, I will update this post.
https://github.com/tensorflow/tensorflow/issues/28276
https://devtalk.nvidia.com/default/topic/1048485/tensorrt/no-speed-up-with-tensorrt-fp16-or-int8-on-nvidia-v100/post/5320989/#5320989
What was my expectation? More CUDA cores and Tensor Cores == (I thought) faster inference.
From the NVIDIA Tensor Core product description, I got the idea (and hope) that if I converted the FP32-based object detection models we use in production to FP16 or INT8 models and weights, I would be able to run inference two to four times faster, as advertised.
We had one of the best GPUs, an NVIDIA V100 32 GB, for our experiments, which supported all these modes via Tensor Cores, not to mention a set of other server-grade and gaming-grade GPUs: P40, GTX 1080, 1070, etc.
Before we started with these expensive GPUs, we used to run the model on GTX 1080 HP desktops. The first expectation was that the higher number of CUDA cores (~5k) in the V100 would make our models run faster. I had read blogs like the one below and knew that the 1080 is pretty good and that NVIDIA prices and markets the server-side GPUs higher. (On the 1080 FP16 throughput is throttled, which I guess was removed in the 2080.)
Deep Learning GPU Benchmarks - Tesla V100 vs RTX 2080 Ti vs GTX 1080 Ti vs Titan V
At Lambda, we're often asked "what's the best GPU for deep learning?" In this post and accompanying white paper, we…
lambdalabs.com
Here is an excerpt from the above
“2080 Ti vs V100 — is the 2080 Ti really that fast?If you absolutely need 32 GB of memory because your model size won’t fit into 11 GB of memory with a batch size of 1. If you are creating your own model architecture and it simply can’t fit even when you bring the batch size lower, the V100 could make sense. However, this is a pretty rare edge case. Fewer than 5% of our customers are using custom models. Most use something like ResNet, VGG, Inception, SSD, or Yolo.
So. You’re still wondering. Why would anybody buy the V100? It comes down to marketing.”
Also, I was not expecting a drastic speedup from more CUDA cores anyway, because even with HD image frames we found that GPU utilisation could not touch 100 per cent, meaning that processing alone was not the bottleneck. Again, things are grey here; there is a lot more scope for engineering optimisation.
We chose the V100 not because of 'marketing'; it was the only GPU with that much memory (32 GB), which would enable us to batch more image frames in parallel and basically do real-time analytics of more HD video cameras on a single edge device.
Reality regarding more CUDA Cores
The truth was that, other than the advantage of processing more frames in parallel due to the higher memory, there was no speedup over the 1080 GPU (~2.5k CUDA cores). We also tested this on a Jetson TX2, which has far fewer (~256) CUDA cores, and on one older gaming GPU, where it was very slow. So more CUDA cores help a model run faster, but beyond some threshold there is not much difference. Maybe this fact is already known, and that is why newer models from NVIDIA like the T4 have around 2.5k CUDA cores and are available in GCP and other cloud providers for inference at much cheaper rates. The V100 seems to be used mainly for training models. It is not practical, at least for now, to train models in lower precision; the maths of gradient descent and backpropagation is easier with higher precision.
You can gain more insights regarding GPUs from the post by Tim Dettmers, https://timdettmers.com/2019/04/03/which-gpu-for-deep-learning/, but do be wary of things like the number of frames you need to process in parallel, rather than just raw inference speeds.
Reality regarding TensorCores, half precision/lower precision FP16, INT8
The field of Data Science Engineering is still nascent, in that there is no clear distinction where Data Science ends and Engineering takes over. Frameworks like TensorFlow Serving and related tools help the Dev or DS Operations team work with the model, develop generic clients, and build useful applications on top of it. But they treat the model without knowing it in much depth. So when they take a model, run an optimisation, and get an error like the one below, they don't know what to do, other than develop a real inferiority complex and make a mental note to understand things better.
details = "input_max_range must be larger than input_min_range. [[{{node Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/ClipToWindow_87/Area/mul_eightbit/Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/ClipToWindow_87/Area/sub_1/quantize}}]] [[{{node Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/zeros_like_83}}]]"
With that prelude, and based on the optimisations that worked (partially) on an object detection model, a Single Shot Detector from the TF model zoo, here are the results I got running on the V100. Basically, there is not much speedup, as many of the layers did not get converted**. I had initially done the experiment on a Keras-converted model and got similar results, but then I thought that a model written natively in TF might fare better, hence the experiments on the SSD model.
**tensorflow/contrib/tensorrt/segment/segment.cc:443] There are 3962 ops of 51 different types in the graph that are not converted to TensorRT: TopKV2, NonMaxSuppressionV2, TensorArrayWriteV3, Const, Squeeze, ResizeBilinear, Maximum, Where, Add, Placeholder, Switch, TensorArrayGatherV3, NextIteration, Greater, TensorArraySizeV3, NoOp, TensorArrayV3, LoopCond, Less, StridedSlice, TensorArrayScatterV3, ExpandDims, Exit, Cast, Identity, Shape, RealDiv, TensorArrayReadV3, Reshape, Merge, Enter, Range, Conv2D, Mul, Equal, Sub, Minimum, Tile, Pack, Split, ZerosLike, ConcatV2, Size, Unpack, Assert, DataFormatVecPermute, Transpose, Gather, Exp, Slice, Fill, (For more information see https://docs.nvidia.com/deeplearning/dgx/integrate-tf-trt/index.html#support-ops).
The rest of the article gives more detail on how I did this, which you can also follow step by step.
The posts that really helped me were these from the Google team:
[1] https://medium.com/google-cloud/optimizing-tensorflow-models-for-serving-959080e9ddbf
[2] https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms#optimizing-for-deployment
I am writing this post as a more detailed explanation to [1], as some parts were not clear when I started following the steps.
My Colab/Jupyter notebook for Optimization is given here; you can skip the article and follow the Notebook also as I have documented it in the notebook. The TF Serving and the Client parts are however in this article.
https://colab.research.google.com/drive/1wQpWoc40kf__WSjfTqDaReMx6fFjUn48
If you have a frozen TF graph, you can use the following methods to optimize it before using it for inference.
There are two types of optimization: one makes the graph faster or smaller for running inference, and the other changes the weights from higher precision to lower precision, usually from FP32 to FP16 or INT8. For the latter, the GPU should have the ability to run mixed-precision operations (Tensor Cores). NVIDIA's desktop- or laptop-class GPUs such as the GTX 1080 are usually restricted from running lower-precision operations, while NVIDIA's server-class GPUs, especially the newer V100 and T4, support it. Not all server GPUs do.
The GPU I use is an NVIDIA V100 32 GB, which supports mixed-precision operations. Also, you need to run the optimization on the GPU that you are optimizing for, especially if you are using TensorRT.
Step 0. The model, and the Docker Containers
The first thing to do is convert the TensorFlow graph to a frozen graph. If the model is Keras based, it is in HD5 format and has to be converted to a TF model and then to a frozen graph. A frozen graph has the values of the variables embedded in the graph itself. It is a GraphDef/protocol buffer (pb) format like a SavedModel, only it cannot be retrained.
What is difference frozen_inference_graph.pb and saved_model.pb?
frozen_inference_graph.pb, is a frozen graph that cannot be trained anymore, it defines the graphdef and is actually a…
stackoverflow.com
The model that we are using is the SSD model ssd_resnet_50_fpn_coco from the TF model zoo: https://github.com/tensorflow/models/blob/master/research/object_detection/g3doc/detection_model_zoo.md
Docker container used for the optimization is tensorflow/tensorflow:1.13.0rc1-gpu-jupyter
docker run --entrypoint=/bin/bash --runtime=nvidia -it --rm -p 8900:8500 -p 8901:8501 -v /usr/alex/:/coding --net=host tensorflow/tensorflow:1.13.0rc1-gpu-jupyter

# once inside:
cd /coding
jupyter notebook --allow-root &
Note- I changed the entry point to something more convenient to me than default tf-notebook I believe.
After optimizing, to run inference I use the same Docker image after installing the TF Serving APIs on it, as well as the headless opencv-python package. This is because we will be converting the optimized model to a TF Serving compatible model for inference.
docker run --entrypoint=/bin/bash --env http_proxy=<my proxy> --env https_proxy=<my proxy> --runtime=nvidia -it --rm -p 8900:8500 -p 8901:8501 -v /usr/alex/:/coding --net=host tensorflow/tensorflow:1.13.0rc1-gpu-jupyter
pip install tensorflow-serving-api
pip install opencv-python==3.3.0.9
cd coding
python ssd_client_1.py -num_tests=1 -server=127.0.0.1:8500 -batch_size=1 -img_path='../examples/google1.jpg/'
Step 1. Get the output node names in the Tensorflow Graph
Why is this important? We need to find the output node names of the frozen graph, as they are needed to optimize the graph. Note that the TensorFlow version used is TF 1.13.
# To Freeze the Saved Model
# We need to freeze the model to do further optimisation on it
from tensorflow.python.saved_model import tag_constants
from tensorflow.python.tools import freeze_graph
from tensorflow.python import ops
from tensorflow.tools.graph_transforms import TransformGraph

def freeze_model(saved_model_dir, output_node_names, output_filename):
    output_graph_filename = os.path.join(saved_model_dir, output_filename)
    initializer_nodes = ''
    freeze_graph.freeze_graph(
        input_saved_model_dir=saved_model_dir,
        output_graph=output_graph_filename,
        saved_model_tags=tag_constants.SERVING,
        output_node_names=output_node_names,
        initializer_nodes=initializer_nodes,
        input_graph=None,
        input_saver=False,
        input_binary=False,
        input_checkpoint=None,
        restore_op_name=None,
        filename_tensor_name=None,
        clear_devices=True,
        input_meta_graph=False,
    )
For this, we can plot the model in TF Board and see the output nodes, or print the nodes and grep on some keywords.
# Source https://medium.com/google-cloud/optimizing-tensorflow-models-for-serving-959080e9ddbf
def get_graph_def_from_file(graph_filepath):
    tf.reset_default_graph()
    with ops.Graph().as_default():
        with tf.gfile.GFile(graph_filepath, 'rb') as f:
            graph_def = tf.GraphDef()
            graph_def.ParseFromString(f.read())
            return graph_def
Let us use the above helper to print the input and output nodes; first the input nodes via the for loop:
graph_def = get_graph_def_from_file('/coding/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb')
for node in graph_def.node:
    if node.op == 'Placeholder':
        print node  # this will be the input node
and output nodes by plotting it in a format readable by Tensor Board.
with tf.Session(graph=tf.Graph()) as session:
    mygraph = tf.import_graph_def(graph_def, name='')
    writer = tf.summary.FileWriter(logdir='/coding/log_tb/1', graph=session.graph)
    writer.flush()
Let us invoke Tensor board.
# ssh -L 6006:127.0.0.1:6006 root@<remoteip>
# for TensorBoard - on your local machine browse to 127.0.0.1
tensorboard --logdir '/coding/log_tb/1'
From this, I could work out the output nodes. Note that if you are building the graph yourself you don't need to go through this circus. Since I am using a model that is open-sourced and sparsely documented, I am doing it this way. Sometimes, for auto-converted/TF-imported graphs, the names can be pretty long. You can then print the nodes in a for loop, as I did for the Placeholder, and from the output shapes work out which nodes hold the detection classes, scores, and rectangle coordinates.
# These are the output names. Add an index, usually 0, for graph nodes.
# You can print the node details by node names
output_node_names = ['detection_boxes','detection_scores','detection_classes','num_detections']
outputs = ['detection_boxes:0','detection_scores:0','detection_classes:0','num_detections:0']
Step 3 Optimise using TF Graph Transform Tools
The snippet below illustrates how you can optimize a graph after reading it from disk.
# Source https://medium.com/google-cloud/optimizing-tensorflow-models-for-serving-959080e9ddbf
# https://gist.github.com/lukmanr
# Optimizing the graph via TensorFlow library
from tensorflow.tools.graph_transforms import TransformGraph

def optimize_graph(model_dir, graph_filename, transforms, output_names, outname='optimized_model.pb'):
    input_names = ['input_image',]  # change this as per how you have saved the model
    graph_def = get_graph_def_from_file(os.path.join(model_dir, graph_filename))
    optimized_graph_def = TransformGraph(
        graph_def,
        input_names,
        output_names,
        transforms)
    tf.train.write_graph(optimized_graph_def, logdir=model_dir, as_text=False, name=outname)
    print('Graph optimized!')
Let us use the above helper to optimize the graph, first with quantize_weights:
# Optimization without quantization - reduces the size of the model
# speed may actually be slower
# see https://medium.com/google-cloud/optimizing-tensorflow-models-for-serving-959080e9ddbf
transforms = ['remove_nodes(op=Identity)',
              'merge_duplicate_nodes',
              'strip_unused_nodes',
              'fold_constants(ignore_errors=true)',
              'fold_batch_norms',
              'quantize_weights']  # this reduces the size, but there is no speed up, it actually slows down, see below

optimize_graph('/coding/ssd_inception_v2_coco_2018_01_28', 'frozen_inference_graph.pb',
               transforms, output_node_names, outname='optimized_model_small.pb')
Let’s then convert the optimized model to TF serving compatible format.
# lets convert this to a TF Serving compatible model
convert_graph_def_to_saved_model('/coding/ssd_inception_v2_coco_2018_01_28/2',
                                 '/coding/ssd_inception_v2_coco_2018_01_28/optimized_model_small.pb',
                                 outputs)
The helper that does this is given below
# Source https://medium.com/google-cloud/optimizing-tensorflow-models-for-serving-959080e9ddbf
# https://gist.github.com/lukmanr
def convert_graph_def_to_saved_model(export_dir, graph_filepath, outputs):
    graph_def = get_graph_def_from_file(graph_filepath)
    with tf.Session(graph=tf.Graph()) as session:
        tf.import_graph_def(graph_def, name='')
        tf.saved_model.simple_save(
            session,
            export_dir,
            # change input_image to node.name if you know the name
            inputs={'input_image': session.graph.get_tensor_by_name('{}:0'.format(node.name))
                    for node in graph_def.node if node.op=='Placeholder'},
            outputs={t: session.graph.get_tensor_by_name(t) for t in outputs}
        )
        print('Optimized graph converted to SavedModel!')
And then ‘quantize_weights’ and ‘quantize_nodes’.
This should also convert the calculations to lower precision, but it does not work as of now.
"This process converts all the operations in the graph that have eight-bit quantized equivalents and leaves the rest in floating point. Only a subset of ops are supported and on many platforms, the quantized code may actually be slower than the float equivalents, but this is a way of increasing performance substantially when all the circumstances are right.”https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/graph_transforms#optimizing-for-deployment
transforms = ['add_default_attributes',
              'strip_unused_nodes',
              'remove_nodes(op=Identity, op=CheckNumerics)',
              'fold_constants(ignore_errors=true)',
              'fold_batch_norms',
              'fold_old_batch_norms',
              'quantize_weights',
              'quantize_nodes',
              'strip_unused_nodes',
              'sort_by_execution_order']

optimize_graph('/coding/ssd_inception_v2_coco_2018_01_28', 'frozen_inference_graph.pb',
               transforms, output_node_names, outname='optimized_model_weight_quant.pb')
However, this does not work, in the sense that inference using this optimized model gives the error below. I had tried with a Keras model earlier and got a different error message. This seems to be a bug, as this is now a pure TensorFlow model and I have not changed anything here.
(‘Got an error’, <_Rendezvous of RPC that terminated with: status = StatusCode.INVALID_ARGUMENT details = “input_max_range must be larger than input_min_range. [[{{node Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/ClipToWindow_87/Area/mul_eightbit/Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/ClipToWindow_87/Area/sub_1/quantize}}]] [[{{node Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/zeros_like_83}}]]” debug_error_string = “{“created”:”@1555723203.356344655",”description”:”Error received from peer”,”file”:”src/core/lib/surface/call.cc”,”file_line”:1036,”grpc_message”:”input_max_range must be larger than input_min_range.\n\t [[{{node Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/ClipToWindow_87/Area/mul_eightbit/Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/ClipToWindow_87/Area/sub_1/quantize}}]]\n\t [[{{node Postprocessor/BatchMultiClassNonMaxSuppression/map/while/MultiClassNonMaxSuppression/zeros_like_83}}]]”,”grpc_status”:3}”>)Response Received Exiting
Step 4 Optimise using NVIDIA TensorRT
Base reference for this is these two posts
https://docs.nvidia.com/deeplearning/dgx/integrate-tf-trt/index.html
https://developers.googleblog.com/2018/03/tensorrt-integration-with-tensorflow.html
Inference with TF-TRT `SavedModel` workflow: we are using the TF Serving model.
import tensorflow.contrib.tensorrt as trt

tf.reset_default_graph()
graph = tf.Graph()
sess = tf.Session()

# Create a TensorRT inference graph from a SavedModel:
with graph.as_default():
    with tf.Session() as sess:
        trt_graph = trt.create_inference_graph(
            input_graph_def=None,
            outputs=outputs,
            input_saved_model_dir='/coding/ssd_inception_v2_coco_2018_01_28/01',
            input_saved_model_tags=['serve'],
            max_batch_size=1,
            max_workspace_size_bytes=7000000000,
            precision_mode='FP16')
            # precision_mode='FP32')
            # precision_mode='INT8')
        output_node = tf.import_graph_def(trt_graph, return_elements=outputs)
        # sess.run(output_node)
        tf.saved_model.simple_save(sess,
            "/coding/ssd_inception_v2_coco_2018_01_28/4",
            inputs={'input_image': graph.get_tensor_by_name('{}:0'.format(node.name))
                    for node in graph.as_graph_def().node if node.op=='Placeholder'},
            outputs={t: graph.get_tensor_by_name('import/' + t) for t in outputs}
        )
Inference with TF-TRT `Frozen` graph workflow:
Reference https://medium.com/tensorflow/speed-up-tensorflow-inference-on-gpus-with-tensorrt-13b49f3db3fa
# Lets load a frozen model, reset the graph, and use it
gdef = get_graph_def_from_file('/coding/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb')

tf.reset_default_graph()
graph = tf.Graph()
sess = tf.Session()

# Create a TensorRT inference graph from the frozen graph:
with graph.as_default():
    with tf.Session() as sess:
        trt_graph = trt.create_inference_graph(
            input_graph_def=gdef,
            outputs=outputs,
            max_batch_size=8,
            max_workspace_size_bytes=7000000000,
            is_dynamic_op=True,
            # precision_mode='FP16')
            # precision_mode='FP32')
            precision_mode='INT8')
        output_node = tf.import_graph_def(trt_graph, return_elements=outputs)
        # sess.run(output_node)
        tf.saved_model.simple_save(sess,
            "/coding/ssd_inception_v2_coco_2018_01_28/5",
            inputs={'input_image': graph.get_tensor_by_name('{}:0'.format(node.name))
                    for node in graph.as_graph_def().node if node.op=='Placeholder'},
            outputs={t: graph.get_tensor_by_name('import/' + t) for t in outputs}
        )
Step 5: Pause and Check the models
The outputs of the various models are given below. You can see that the model size reduces after optimizations.
Original model
('/coding/ssd_inception_v2_coco_2018_01_28/frozen_inference_graph.pb', '')
Model size: 99591.409 KB
Variables size: 0.0 KB
Total Size: 99591.409 KB
---------
Tensorflow Transform Optimised model Weights Quantised
('/coding/ssd_inception_v2_coco_2018_01_28/2/saved_model.pb', '')
Model size: 26193.27 KB
Variables size: 0.0 KB
Total Size: 26193.27 KB
---------
Tensorflow Transform Optimised model Weights and Nodes Quantised
('/coding/ssd_inception_v2_coco_2018_01_28/3/saved_model.pb', '')
Model size: 29265.284 KB
Variables size: 0.0 KB
Total Size: 29265.284 KB
---------
NVIDIA RT Optimised model FP16
('/coding/ssd_inception_v2_coco_2018_01_28/4/saved_model.pb', '')
Model size: 178564.229 KB
Variables size: 0.0 KB
Total Size: 178564.229 KB
---------
NVIDIA RT Optimised model INT8
('/coding/ssd_inception_v2_coco_2018_01_28/5/saved_model.pb', '')
Model size: 178152.834 KB
Variables size: 0.0 KB
Total Size: 178152.834 KB
Step 6: Ready the TF Serving container to serve these models
Note the container we are using here — Client
docker run --entrypoint=/bin/bash --env http_proxy=<my proxy> --env https_proxy=<my proxy> --runtime=nvidia -it --rm -p 8900:8500 -p 8901:8501 -v /usr/alex/:/coding --net=host tensorflow/tensorflow:1.13.0rc1-gpu-jupyter
pip install tensorflow-serving-api
pip install opencv-python==3.3.0.9
cd coding
python ssd_client_1.py -num_tests=1 -server=127.0.0.1:8500 -batch_size=1 -img_path='../examples/google1.jpg/'
Server -This is pasted from Step 0. This is run in the V100 32 GB Linux/machine.
docker run --net=host --runtime=nvidia -it --rm -p 8900:8500 -p 8901:8501 -v /usr/alex/:/models tensorflow/serving:1.13.0-gpu --rest_api_port=0 --enable_batching=true --model_config_file=/models/ssd_inception_v3_coco.json
where the config json is like below. Since I have placed the different models in folders under “/models/ssd_inception_v2_coco_2018_01_28/” as 01 — original model, 2-TF Graph Transform Weight Quantized, 3- TF Graph Transform Weight and Node Quantized,4-TensorRT FP16,5-TensorRT INT8; I just change the versions in the file to load different servables for each test.
model_config_list {
  config {
    name: "ssd_inception_v2_coco",
    base_path: "/models/ssd_inception_v2_coco_2018_01_28/",
    model_version_policy: {
      specific: {
        versions: [01]
      }
    },
    model_platform: "tensorflow",
  }
}
Step 7: Write a TF Serving Client for tests
I have written about this in detail in a previous post.
Writing a Generic Tensorflow Serving Client for Tensorflow Serving model
For CNN based object detection models
towardsdatascience.com
The saved model signature of the SSD is shown below; you can use the saved_model_cli to view it.
saved_model_cli show --dir '/coding/ssd_inception_v2_coco_2018_01_28/3' --all

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['input_image'] tensor_info:
        dtype: DT_UINT8
        shape: (-1, -1, -1, 3)
        name: image_tensor:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['detection_boxes:0'] tensor_info:
        dtype: DT_FLOAT
        shape: unknown_rank
        name: detection_boxes:0
    outputs['detection_classes:0'] tensor_info:
        dtype: DT_FLOAT
        shape: unknown_rank
        name: detection_classes:0
    outputs['detection_scores:0'] tensor_info:
        dtype: DT_FLOAT
        shape: unknown_rank
        name: detection_scores:0
    outputs['num_detections:0'] tensor_info:
        dtype: DT_FLOAT
        shape: unknown_rank
        name: num_detections:0
  Method name is: tensorflow/serving/predict
Note that in this, the input and output node names are slightly different from the original model- whose input is ‘inputs’ and output is ‘detection_boxes’,’detection_classes’,’detection_scores’ (without the :0 part- which is a deficiency in the conversion scripts that I have used- but can be rectified easily)
Original model
root@ndn-oe:/coding/tfclient# saved_model_cli show --dir /coding/ssd_inception_v2_coco_2018_01_28/01/ --all

MetaGraphDef with tag-set: 'serve' contains the following SignatureDefs:

signature_def['serving_default']:
  The given SavedModel SignatureDef contains the following input(s):
    inputs['inputs'] tensor_info:
        dtype: DT_UINT8
        shape: (-1, -1, -1, 3)
        name: image_tensor:0
  The given SavedModel SignatureDef contains the following output(s):
    outputs['detection_boxes'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 100, 4)
        name: detection_boxes:0
    outputs['detection_classes'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 100)
        name: detection_classes:0
    outputs['detection_scores'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1, 100)
        name: detection_scores:0
    outputs['num_detections'] tensor_info:
        dtype: DT_FLOAT
        shape: (-1)
        name: num_detections:0
  Method name is: tensorflow/serving/predict
The TF Serving client is given here -https://gist.github.com/alexcpn/d7c28230af437dafb0d2cc7f50140eed
The rest of the imports are here, the client is slightly different, the names of inputs and outputs, that’s why it is on gist https://github.com/alexcpn/tf_serving_clients
The image file used for the test is https://github.com/fizyr/keras-retinanet/blob/master/examples/000000008021.jpg
Step 8: The Output from various models
Basically, there is hardly any difference between the optimized and non-optimized model. Batch size is one here.
More details below
Original Model
Invocation:
coding/tfclient# python ssd_client_1.py -num_tests=1 -server=127.0.0.1:8500 -batch_size=1 -img_path=’../examples/000000008021.jpg’
('Image path', '../examples/000000008021.jpg')
('original image shape=', (480, 640, 3))
('Input-s shape', (1, 800, 1066, 3))  <-- this is the size of the input tensor
Output
('Label', u'person', ' at ', array([412, 171, 740, 624]), ' Score ', 0.9980476)
('Label', u'person', ' at ', array([  6, 423, 518, 788]), ' Score ', 0.94931936)
('Label', u'person', ' at ', array([ 732, 473, 1065, 793]), ' Score ', 0.88419175)
('Label', u'tie', ' at ', array([529, 337, 565, 494]), ' Score ', 0.40442815)
('Time for ', 1, ' is ', 0.5993821620941162)
Tensorflow Transform Optimised model Weights Quantized
('Label', u'person', ' at ', array([409, 174, 741, 626]), ' Score ', 0.99797523)
('Label', u'person', ' at ', array([  4, 424, 524, 790]), ' Score ', 0.9549346)
('Label', u'person', ' at ', array([ 725, 472, 1064, 793]), ' Score ', 0.8900732)
('Label', u'tie', ' at ', array([527, 338, 566, 494]), ' Score ', 0.3943166)
('Time for ', 1, ' is ', 0.6182711124420)  <-- this is higher: the model size is reduced, but during inference the weights have to be converted back to higher precision
You should see that the size of the output graph is about a quarter of the original. The downside to this approach compared to round_weights is that extra decompression ops are inserted to convert the eight-bit values back into floating point, but optimizations in TensorFlow’s runtime should ensure these results are cached and so you shouldn’t see the graph run any more slowly.- https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/graph_transforms/README.md
TensorRT FP16 Converted model
('Label', u'person', ' at ', array([412, 171, 740, 624]), ' Score ', 0.9980476)
('Label', u'person', ' at ', array([  6, 423, 518, 788]), ' Score ', 0.9493193)
('Label', u'person', ' at ', array([ 732, 473, 1065, 793]), ' Score ', 0.8841917)
('Label', u'tie', ' at ', array([529, 337, 565, 494]), ' Score ', 0.40442812)
('Time for ', 1, ' is ', 0.5885560512542725)
I was hoping this would be half the original value, that is, twice as fast. But during optimization TensorRT reported that it could convert only a few of the supported* operations ("There are 3962 ops of 51 different types in the graph that are not converted to TensorRT"), even though the Conv2D operation is shown as supported here: https://docs.nvidia.com/deeplearning/sdk/tensorrt-support-matrix/index.html. Bug raised for this by me here.
2019-04-14 08:32:31.357592: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-04-14 08:32:31.357620: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-04-14 08:32:31.357645: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-04-14 08:32:31.358154: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30480 MB memory) -> physical GPU (device: 0, name: Tesla V100-PCIE-32GB, pci bus id: 0000:b3:00.0, compute capability: 7.0)
2019-04-14 08:32:34.582872: I tensorflow/core/grappler/devices.cc:51] Number of eligible GPUs (core count >= 8): 1
2019-04-14 08:32:34.583019: I tensorflow/core/grappler/clusters/single_machine.cc:359] Starting new session
2019-04-14 08:32:34.583578: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1512] Adding visible gpu devices: 0
2019-04-14 08:32:34.583610: I tensorflow/core/common_runtime/gpu/gpu_device.cc:984] Device interconnect StreamExecutor with strength 1 edge matrix:
2019-04-14 08:32:34.583636: I tensorflow/core/common_runtime/gpu/gpu_device.cc:990] 0
2019-04-14 08:32:34.583657: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1003] 0: N
2019-04-14 08:32:34.583986: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1115] Created TensorFlow device (/job:localhost/replica:0/task:0/device:GPU:0 with 30480 MB memory) -> physical GPU (device: 0, name: Tesla V100-PCIE-32GB, pci bus id: 0000:b3:00.0, compute capability: 7.0)
2019-04-14 08:32:36.713848: I tensorflow/contrib/tensorrt/segment/segment.cc:443] There are 3962 ops of 51 different types in the graph that are not converted to TensorRT: TopKV2, NonMaxSuppressionV2, TensorArrayWriteV3, Const, Squeeze, ResizeBilinear, Maximum, Where, Add, Placeholder, Switch, TensorArrayGatherV3, NextIteration, Greater, TensorArraySizeV3, NoOp, TensorArrayV3, LoopCond, Less, StridedSlice, TensorArrayScatterV3, ExpandDims, Exit, Cast, Identity, Shape, RealDiv, TensorArrayReadV3, Reshape, Merge, Enter, Range, Conv2D, Mul, Equal, Sub, Minimum, Tile, Pack, Split, ZerosLike, ConcatV2, Size, Unpack, Assert, DataFormatVecPermute, Transpose, Gather, Exp, Slice, Fill, (For more information see https://docs.nvidia.com/deeplearning/dgx/integrate-tf-trt/index.html#support-ops).
2019-04-14 08:32:36.848171: I tensorflow/contrib/tensorrt/convert/convert_graph.cc:913] Number of TensorRT candidate segments: 4
2019-04-14 08:32:37.129266: W tensorflow/contrib/tensorrt/convert/convert_nodes.cc:3710] Validation failed for TensorRTInputPH_0 and input slot 0: Input tensor with shape [?,?,?,3] has an unknown non-batch dimension at dim 1
2019-04-14 08:32:37.129330: W tensorflow/contrib/tensorrt/convert/convert_graph.cc:1021] TensorRT node TRTEngineOp_0 added for segment 0 consisting of 707 nodes failed: Invalid argument: Validation failed for TensorRTInputPH_0 and input slot 0: Input tensor with shape [?,?,?,3] has an unknown non-batch dimension at dim 1. Fallback to TF...
2019-04-14 08:32:37.129838: W tensorflow/contrib/tensorrt/convert/convert_nodes.cc:3710] Validation failed for TensorRTInputPH_0 and input slot 0: Input tensor with shape [?,546,?,?] has an unknown non-batch dimension at dim 2
2019-04-14 08:32:37.129859: W tensorflow/contrib/tensorrt/convert/convert_graph.cc:1021] TensorRT node TRTEngineOp_1 added for segment 1 consisting of 3 nodes failed: Invalid argument: Validation failed for TensorRTInputPH_0 and input slot 0: Input tensor with shape [?,546,?,?] has an unknown non-batch dimension at dim 2. Fallback to TF...
2019-04-14 08:32:38.309554: I tensorflow/contrib/tensorrt/convert/convert_graph.cc:1015] TensorRT node TRTEngineOp_2 added for segment 2 consisting of 3 nodes succeeded.
2019-04-14 08:32:38.420585: I tensorflow/contrib/tensorrt/convert/convert_graph.cc:1015] TensorRT node TRTEngineOp_3 added for segment 3 consisting of 4 nodes succeeded.
2019-04-14 08:32:38.644767: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:581] Optimization results for grappler item: tf_graph
2019-04-14 08:32:38.644837: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583] constant folding: Graph size after: 6411 nodes (-1212), 10503 edges (-1352), time = 848.996ms.
2019-04-14 08:32:38.644858: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583] layout: Graph size after: 6442 nodes (31), 10535 edges (32), time = 225.361ms.
2019-04-14 08:32:38.644874: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583] constant folding: Graph size after: 6432 nodes (-10), 10535 edges (0), time = 559.352ms.
2019-04-14 08:32:38.644920: I tensorflow/core/grappler/optimizers/meta_optimizer.cc:583] TensorRTOptimizer: Graph size after: 6427 nodes (-5), 10530 edges (-5), time = 2087.5769ms.
TensorRT INT 8 Converted model
One can see from the V100 server logs some Tensor Core magic happening
2019-04-20 01:30:39.563827: I external/org_tensorflow/tensorflow/contrib/tensorrt/kernels/trt_engine_op.cc:574] Starting calibration thread on device 0, Calibration Resource @ 0x7f4c341ac570
2019-04-20 01:30:39.563982: I external/org_tensorflow/tensorflow/contrib/tensorrt/kernels/trt_engine_op.cc:574] Starting calibration thread on device 0, Calibration Resource @ 0x7f4ce8008e60
('Label', u'person', ' at ', array([412, 171, 740, 624]), ' Score ', 0.9980476)
('Label', u'person', ' at ', array([  6, 423, 518, 788]), ' Score ', 0.9493195)
('Label', u'person', ' at ', array([ 732, 473, 1065, 793]), ' Score ', 0.8841919)
('Label', u'tie', ' at ', array([529, 337, 565, 494]), ' Score ', 0.40442798)
('Time for ', 1, ' is ', 0.5967140197753906)
With batch size 2 there is an error/ out of memory for TensorCores
python ssd_client_1.py -num_tests=1 -server=127.0.0.1:8500 -batch_size=2 -img_path='../examples/000000008021.jpg'

2019-04-20 01:34:25.042337: F external/org_tensorflow/tensorflow/contrib/tensorrt/kernels/trt_engine_op.cc:227] Check failed: t.TotalBytes() == device_tensor->TotalBytes() (788424 vs. 394212)
2019-04-20 01:34:25.042373: F external/org_tensorflow/tensorflow/contrib/tensorrt/kernels/trt_engine_op.cc:227] Check failed: t.TotalBytes() == device_tensor->TotalBytes() (34656 vs. 17328)
/usr/bin/tf_serving_entrypoint.sh: line 3: 6 Aborted (core dumped)
Results from other models (and Comparison with different GPU’s)
Here are some results from other tests and models
Details here — https://docs.google.com/spreadsheets/d/1Sl7K6sa96wub1OXcneMk1txthQfh63b0H5mwygyVQlE/edit?usp=sharing
Model — Resnet_50 FP 32 and FP16
FP32 = http://download.tensorflow.org/models/official/20181001_resnet/savedmodels/resnet_v2_fp32_savedmodel_NCHW.tar.gz
FP16 = http://download.tensorflow.org/models/official/20181001_resnet/savedmodels/resnet_v2_fp16_savedmodel_NCHW.tar.gz
You can see that there is only a slight difference: the V100 32 GB takes slightly less time than the consumer-grade GTX 1070 8 GB. When the batch size increases, the larger memory of the V100 stands out, but not its higher number of CUDA cores. It seems, as noted in other blogs, that simply having more CUDA cores does not automatically mean that inference will run faster; it also depends on memory and on the model characteristics.
Model Retinanet
One can see here that there is not much difference. This was actually my first experiment, but it used a Keras model that was converted to a TF frozen model and then optimised, so I thought a model written natively in TF, such as SSD, might give better results. It did not make much difference.
Summary
One can see that there are no drastic improvements in inference time between the models. Also, TF GraphTransform for model quantization has not worked for me on this model or on another model I tried; I will raise a bug for that. TensorRT is better, but it is only able to convert a few layers to lower precision (I have raised a bug/clarification for this). If that works, hopefully we will see the models run close to twice as fast on Tensor Cores, as advertised.
Main References
https://github.com/tensorflow/tensorflow/blob/master/tensorflow/tools/graph_transforms/README.md
https://medium.com/google-cloud/optimizing-tensorflow-models-for-serving-959080e9ddbf
https://colab.research.google.com/drive/1wQpWoc40kf__WSjfTqDaReMx6fFjUn48
Automating GPU Kernel Generation with DeepSeek-R1 and Inference Time Scaling
As AI models extend their capabilities to solve more sophisticated challenges, a new scaling law known as test-time scaling or inference-time scaling is emerging. Also known as AI reasoning or long-thinking, this technique improves model performance by allocating additional computational resources during inference to evaluate multiple possible outcomes and then selecting the best one. This enables AI to strategize and systematically solve complex problems in a similar fashion to how humans dissect complex problems and solve them individually to arrive at a final solution.
In this post, we talk about an experiment done by NVIDIA engineers who used one of the newest open-source models, the DeepSeek-R1 model, together with additional computing power during inference to solve a complex problem. The experiment was to automatically generate GPU attention kernels that were numerically correct and optimized for different flavors of attention without any explicit programming.
The results turned out to be better than the optimized kernels developed by skilled engineers in some cases.
The need for optimized attention kernels and associated challenges
Attention is a key concept that revolutionized the development of the large language model (LLM). It’s a powerful mechanism that enables AI models to focus selectively on the most relevant parts of input when performing tasks. By focusing on important information, the attention operation helps the models make better predictions and find hidden patterns in the data.
The computational complexity of the attention operation grows quadratically in relation to the input sequence length. This motivates the need for developing an optimized lower-level implementation (that is, a GPU kernel) to prevent runtime errors arising from simple implementations (for example, out-of-memory errors) and for computational efficiency purposes.
There are multiple variants of attention (causal, relative positional embeddings, alibi, and so on) and often engineers must use a combination of these variants for a given task.
Multi-modal models (for example, vision transformers) introduce an additional layer of challenges as they require specialized attention mechanisms (Spatial Neighborhood Attention) for maintaining spatio-temporal information often encountered in computer vision, video generation models, and so on.
Creating an optimized GPU kernel for attention takes a lot of skill and time, even for experienced software engineers.
Recent LLMs like DeepSeek-R1 have shown a lot of promise in code generation tasks, but they still face challenges creating optimized code on the first try. This makes it necessary to use other strategies at inference time to generate optimized code.
The following prompt is sample user input for a relative positional embeddings attention kernel.
Please write a GPU attention kernel to support relative position encodings. Implement the relative positional encoding on the fly within the kernel. The complete code should be returned, including the necessary modifications.
Use the following function to compute the relative positional encoding:
def relative_positional(score, b, h, q_idx, kv_idx):
return score + (q_idx - kv_idx)
When implementing the kernel, keep in mind that a constant scaling factor 1.44269504 should be applied to the relative positional encoding due to qk_scale = sm_scale * 1.44269504. The PyTorch reference does not need to scale the relative positional encoding, but in the GPU kernel, use:
qk = qk * qk_scale + rel_pos * 1.44269504
Please provide the complete updated kernel code that incorporates these changes, ensuring that the relative positional encoding is applied efficiently within the kernel operations.
LLMs can occasionally produce hallucinated code or mix syntax from different languages or frameworks, causing immediate code errors or inefficiencies. Computing the optimal GPU thread mapping is also non-trivial and a challenging task, often requiring iterative refinement to achieve a correct and efficient kernel.
Inference-time scaling for generating optimized GPU Kernels
To get the best results with optimized attention kernels, NVIDIA engineers created a new workflow that includes a special verifier along with the DeepSeek-R1 model during inference in a closed-loop fashion for a predetermined duration.
The workflow is first initialized by a manual prompt and the DeepSeek-R1 model generates the GPU code (that is, the kernel) in the first pass. The verifier runs on an NVIDIA H100 GPU. It analyzes the generated kernel and creates new prompts that are provided as input to the DeepSeek-R1 model.
This closed-loop approach makes the code generation process better by guiding it in a different way each time. The team found that letting this process continue for 15 minutes resulted in an improved attention kernel.
This workflow produced numerically correct kernels for 100% of Level-1 problems and 96% of Level-2 problems, as tested by Stanford’s KernelBench benchmark.
The Level-1 solving rate in KernelBench refers to the numerically correct metric used to evaluate the ability of LLMs to generate efficient GPU kernels for specific computational tasks. This test is part of a series of challenges to test the latest LLMs' abilities in GPU programming.
Figure 4 shows how the inference-time budget affects the agent's solving rate. Allocating more than 10 minutes per problem in the Level-1 category enables the workflow to produce numerically correct code for most of the 100 problems.
Optimized GPU kernels on DeepSeek-R1
These results show how you can use the latest DeepSeek-R1 model to generate better GPU kernels by applying more computing power at inference time. This is still a new research area with early results on a promising approach that automatically generates effective attention kernels.
While we are off to a good start, more work is needed to generate better results consistently for a wider variety of problems. We’re excited about the recent developments in DeepSeek-R1 and its potential.
For more information or to get started, see the DeepSeek-R1 NIM microservice, now available on build.nvidia.com.
Analysis-Driven Optimization: Preparing for Analysis with NVIDIA Nsight Compute, Part 1
In this three-part series, you discover how to use NVIDIA Nsight Compute for iterative, analysis-driven optimization. Part 1 covers the background and setup needed, part 2 covers beginning the iterative optimization process, and part 3 covers finishing the analysis and optimization process and determining whether you have reached a reasonable stopping point.
Nsight Compute is the primary NVIDIA CUDA kernel-level performance analysis tool. It is part of the NVIDIA Nsight family of tools for GPU computing. For thorough introductions to the NVIDIA Nsight family profiler tools, see the following posts:
These posts point out that the GPU code performance analysis process usually begins with Nsight Systems. Eventually, the analysis may select a specific kernel to focus on, for further analysis using Nsight Compute. In this post, I discuss how Nsight Compute facilitates analysis-driven optimization (ADO) of GPU kernels.
ADO is predicated on the idea that making the most efficient use of your time involves focusing on the most important limiters to code performance, in Pareto order. In a cyclical process, you want the tool to identify the current most important limiter to performance and, to the fullest extent possible, give you some clues about how to address it. The most important limiter to performance is the code characteristic that, if modified, would yield the largest improvement in performance.
Start by focusing on the fixes that yield the largest performance improvements. In a cyclical fashion, you use the tool to identify these areas, make code changes, and then use the tool again to assess the impact of these changes and identify the next area to look at. The process completes when you either run out of time or have identified, perhaps through some calculations, that further optimization is unlikely to yield significant performance improvement.
To follow along with this post, I recommend using CUDA 11.1 and Nsight Compute 2020.2 or newer. Several kinds of profiler output reviewed in this post may not be present if you use a previous version of the tool.
Code for analysis
You're going to analyze code that is more complicated than the code covered in my previous post on Nsight Compute. This code has multiple steps and phases to it. Using ADO, you go through a set of steps that successively improves the performance of the code by uncovering, in each step, a performance limiter. For more information about an additional example, see the Summary section.
Use the performance limiter to guide your attempt to improve performance. The code that you analyze has two major phases: a vector-averaging phase, in which a set of M vectors of length L is averaged into a single vector, and a matrix-vector multiply phase, in which that average vector is multiplied by a fixed L x L matrix.
This code repeats these steps on different sets of incoming vectors but uses the same matrix to produce a set of result vectors. The starting point for this code was a CPU algorithm using OpenMP for parallelism that has been ported to a GPU version to achieve higher performance.
// compile with: nvcc -Xcompiler -fopenmp -o t5 t5.cu -O3 -lineinfo
#include <stdio.h>
#include <iostream>
#define cudaCheckErrors(msg) \
do { \
cudaError_t __err = cudaGetLastError(); \
if (__err != cudaSuccess) { \
fprintf(stderr, "Fatal error: %s (%s at %s:%d)\n", \
msg, cudaGetErrorString(__err), \
__FILE__, __LINE__); \
fprintf(stderr, "*** FAILED - ABORTING\n"); \
exit(1); \
} \
} while (0)
#include <vector>
#include <stdlib.h>
#include <sys/time.h>
#define USECPSEC 1000000ULL
unsigned long long dtime_usec(unsigned long long start){
timeval tv;
gettimeofday(&tv, 0);
return ((tv.tv_sec*USECPSEC)+tv.tv_usec)-start;
}
// perform vector averaging over M vectors of length L, followed by matrix-vector multiply
// repeat the above N times
// input vectors are stored as a set of N column-major matrices
// for each k in N: output[k] = matrix*input[k]
template <typename T>
void cpu_version1(T *input, T *output, T *matrix, int L, int M, int N){
#pragma omp parallel for
for (int k = 0; k < N; k++){ // repeat the following, N times
std::vector<T> v1(L); // vector length of L
for (int i = 0; i < M; i++) // compute average vector over M input vectors
for (int j = 0; j < L; j++)
v1[j] += input[k*M*L+j*M+i];
for (int j = 0; j < L; j++)
v1[j] /= M;
for (int i = 0; i < L; i++) // matrix-vector multiply
for (int j = 0; j < L; j++)
output[i*N+k] += matrix[i*L+j]*v1[j];
}
}
const int my_L = 1024;
const int my_M = 1024;
const int my_N = 1024;
template <typename T>
__global__ void gpu_version1(const T * __restrict__ input, T * __restrict__ output, const T * __restrict__ matrix, const int L, const int M, const int N){
__shared__ T smem[my_L];
size_t idx = ((size_t)blockIdx.x)*blockDim.x + threadIdx.x;
for (int k = 0; k < N; k++){ // iterate over N data sets
T v1 = 0;
for (int i = 0; i < M; i++) // perform vector averaging
v1 += input[k*M*L+idx*M+i];
v1 /= M;
for (int i = 0; i < L; i++){ // perform matrix-vector multiply
__syncthreads();
smem[threadIdx.x] = v1 * matrix[i*L+idx];
for (int s = blockDim.x>>1; s > 0; s>>=1){
__syncthreads();
if (threadIdx.x < s) smem[threadIdx.x] += smem[threadIdx.x+s];}
if (!threadIdx.x) output[k+i*N] = smem[0];}
}
}
typedef float ft;
int main(){
ft *d_input, *h_input, *d_output, *h_outputc, *h_outputg, *d_matrix, *h_matrix;
int L = my_L; int M = my_M; int N = my_N;
// host allocations
h_input = new ft[N*L*M];
h_matrix = new ft[L*L];
h_outputg = new ft[N*L];
h_outputc = new ft[N*L];
// data initialization
for (int i = 0; i < N*L*M; i++) h_input[i] = (rand()&1)+1; // 1 or 2
for (int i = 0; i < L*L; i++) h_matrix[i] = (rand()&1)+1; // 1 or 2
// create result to test for correctness
unsigned long long dt = dtime_usec(0);
cpu_version1(h_input, h_outputc, h_matrix, L, M, N);
dt = dtime_usec(dt);
std::cout << "CPU execution time: " << dt/(float)USECPSEC << "s" << std::endl;
// device allocations
cudaMalloc(&d_input, N*L*M*sizeof(ft));
cudaMalloc(&d_output, N*L*sizeof(ft));
cudaMalloc(&d_matrix, L*L*sizeof(ft));
cudaCheckErrors("cudaMalloc failure");
// copy input data from host to device
cudaMemcpy(d_input, h_input, N*L*M*sizeof(ft), cudaMemcpyHostToDevice);
cudaMemcpy(d_matrix, h_matrix, L*L*sizeof(ft), cudaMemcpyHostToDevice);
cudaMemset(d_output, 0, N*L*sizeof(ft));
cudaCheckErrors("cudaMemcpy/Memset failure");
// run on device and measure execution time
dt = dtime_usec(0);
gpu_version1<<<1, L>>>(d_input, d_output, d_matrix, L, M, N);
cudaCheckErrors("kernel launch failure");
cudaDeviceSynchronize();
cudaCheckErrors("kernel execution failure");
dt = dtime_usec(dt);
cudaMemcpy(h_outputg, d_output, N*L*sizeof(ft), cudaMemcpyDeviceToHost);
cudaCheckErrors("cudaMemcpy failure");
for (int i = 0; i < N*L; i++) if (h_outputg[i] != h_outputc[i]) {std::cout << "Mismatch at " << i << " was: " << h_outputg[i] << " should be: " << h_outputc[i] << std::endl; return 0;}
std::cout << "Kernel execution time: " << dt/(float)USECPSEC << "s" << std::endl;
return 0;
}
Some highlights:
The code isn’t reflective of any specific scientific algorithm. However, to give a real-world application, the vector-averaging phase could reflect a naive form of a calculation performed by a DNN parameter server, and the matrix-vector multiply is used throughout scientific and AI codes and could be the basis of a naive non-batched DNN update.
The code provides for built-in timing measurement of the CPU and GPU operations, so that you can quickly assess the measure of speedup or benefit from the latest optimization. You can also use Nsight Compute directly to measure kernel duration. In a more complete treatment of using NVIDIA Nsight tools for performance analysis or optimization, you might include Nsight Systems in the iterative analysis loop. In addition, results checking is performed between CPU and GPU versions so that you can be sure that your GPU optimized versions are producing the correct results.
This is a simplistic comparison, testing for exact equality between results. This is normally not the recommended way to compare floating-point results. Because the scope of the problem and test data is limited, this method is acceptable. For general floating-point comparison, equality should be tested against some measure of a difference threshold, not exact equality.
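As a minimal sketch of what such a tolerance-based comparison could look like (the threshold values here are arbitrary illustrations, not recommendations for this particular code):

#include <cmath>
#include <algorithm>

// Compare two floats using a mixed absolute/relative tolerance rather than
// exact equality. The thresholds are illustrative only.
inline bool nearly_equal(float a, float b,
                         float abs_tol = 1e-5f, float rel_tol = 1e-4f){
  float diff = std::fabs(a - b);
  if (diff <= abs_tol) return true;                                  // absolute test for values near zero
  return diff <= rel_tol * std::max(std::fabs(a), std::fabs(b));     // relative test otherwise
}

In the results-checking loop above, the test h_outputg[i] != h_outputc[i] would then become !nearly_equal(h_outputg[i], h_outputc[i]).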
Initial performance baseline
If you compile and run this code on a V100 GPU, you see the following results:
$ nvcc -Xcompiler -fopenmp -o t5 t5.cu -O3 -lineinfo
$ OMP_NUM_THREADS=1 ./t5
CPU execution time: 5.65601s
Kernel execution time: 2.922s
$ ./t5
CPU execution time: 0.52372s
Kernel execution time: 2.9219s
$
If you run with only a single CPU thread, then the initial port of CPU code to GPU code seems to give about a 2x speedup. If you don’t restrict the number of CPU threads, however, the OpenMP parallelization seems to give about a 10x speedup to the CPU code, meaning that your initial CUDA kernel realization is about 5x slower than the CPU code. See if you can improve this.
Getting started with Nsight Compute
The Nsight Compute profiler can collect a large range of data on your kernel execution. In addition, you make use of rules embedded in the analysis output from Nsight Compute. A rule in Nsight Compute is a set of instructions to the profiler that indicate what metrics are to be gathered and how they are to be displayed or interpreted.
The rule system in Nsight Compute is a powerful feature that allows extending the functionality provided. It is possible to create your own rules, but this analysis uses rules that are already available. You use the bottleneck rule to guide your steps. For most of the work that you are doing in this post, you use the user interface version of Nsight Compute. The Nsight Compute user interface can be used directly, in-situ, for installations that support it. Alternatively, you can collect Nsight Compute report data from the Nsight Compute CLI and import that data into a session running elsewhere.
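For reference, a command line along the following lines collects a full-section report with the Nsight Compute CLI that can later be opened in the user interface (flag spellings may vary slightly between versions, so check ncu --help on your installation):

$ ncu --set full -o t5_report ./t5
$ ncu-ui t5_report.ncu-rep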
To capture the information needed during this investigation, Nsight Compute must have access to profiling features that require permission at the GPU driver level. Depending on your operating system, there are various methods to enable this.
You can start the Nsight Compute user interface on the target using the method given in a previous post. For example, using Nsight Compute 2020.2 with the appropriate path setup, type ncu-ui. When the initial dialog box opens, choose Quick Launch, Continue.
In the next dialog box, under Target Platform, choose the appropriate option. I am working on a Linux x86_64 platform so for me, that choice is already selected. For Application Executable, enter the full path/name to the executable that you just compiled and ran. You can use the browse (…) button to navigate and find the app. In the Activity section, make sure that Interactive Profile is selected. You don’t have to make any other changes. Choose Launch.
Nsight Compute then launches the app and allows it to proceed up to the first CUDA call. This step may take about 10 seconds depending on your CPU, because the code is setting up and initializing 4 GB of memory. You haven’t profiled anything yet. Because your application has only one kernel in it, called only one time, you can quickly get to profiling by choosing Auto Profile, selecting the Full section set, and then choosing Run to Next Kernel.
What you discover at this point is that the application is taking a long time to profile, compared to the kernel duration. The Nsight Compute tool may require substantial data collection to gather all the requested metrics to generate a full section set in your report. For more information about why the tool may have longer profiling times, see Kernel replay.
You can reduce the profiling scope, while still getting useful results, by reducing the number of data sets N to process in the code. Make the following code change:
const int my_N = 32;
Recompile the code. In the profiler, choose Terminate, and repeat the earlier steps. Now, profiling after you choose Run to Next Kernel should only take about 10 seconds.
Profiling results
You now have your initial profiler results (Figure 2).
The profiler results are organized into sections, and the sections are arranged from top to bottom in roughly attention-priority order. Give your attention to the top section first, GPU Speed of Light (Figure 2). The section provides a high-level overview of the utilization for compute and memory resources of the GPU. For each unit, the Speed of Light (SOL) reports the achieved percentage of utilization with respect to the theoretical maximum. As previously mentioned, you want the tool to guide your analysis, and you have the bottleneck rule to help with that. The bottleneck rule is presented in the SOL section:
[Warning] This kernel grid is too small to fill the available resources on this device. Look at Launch Statistics for more details.
Launch Statistics is reporting a launch grid of one block (Figure 3).
You see a similar recommendation in this section. As a well-trained CUDA programmer, you know that such a small grid cannot hope to fill any GPU, and such a grid runs well below the expected GPU performance. The solution is to increase the grid size, by increasing the number of blocks launched. For more information about grids, blocks, and SMs, and how scheduling more blocks can help to achieve better performance, see Hardware model.
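The actual refactoring is developed in part 2 of this series. Purely as a sketch of the direction the bottleneck rule points in, one way to expose more blocks is to assign one block per data set k, so that the launch consists of N blocks instead of one. This hypothetical variant reuses the my_L constant from the listing above:

template <typename T>
__global__ void gpu_version1_multiblock(const T * __restrict__ input, T * __restrict__ output, const T * __restrict__ matrix, const int L, const int M, const int N){
  __shared__ T smem[my_L];
  int idx = threadIdx.x;
  int k = blockIdx.x;                   // each block handles one of the N data sets
  T v1 = 0;
  for (int i = 0; i < M; i++)           // vector averaging, as before
    v1 += input[k*M*L+idx*M+i];
  v1 /= M;
  for (int i = 0; i < L; i++){          // matrix-vector multiply, as before
    __syncthreads();
    smem[idx] = v1 * matrix[i*L+idx];
    for (int s = blockDim.x>>1; s > 0; s>>=1){
      __syncthreads();
      if (idx < s) smem[idx] += smem[idx+s];}
    if (!idx) output[k+i*N] = smem[0];}
}
// launched as: gpu_version1_multiblock<<<N, L>>>(d_input, d_output, d_matrix, L, M, N);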
Summary
In this post, I introduced the code for profiling, covered the basic ideas of ADO, and got you started with the Nsight Compute profiler. In part 2, you continue the ADO process, by applying what you learned in this post to refactor the code, and profile again to discover next steps. In part 3, you finish your analysis and optimization. You also perform some measurements to give you confidence that you have reached a reasonable stopping point.
The analysis work in this post was performed on a machine with the following characteristics: Ubuntu 18.04.3, CUDA 11.1, GPU Driver version 455.23.05, GCC version 7.5.0, V100-SXM2-32 GB GPU, Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz. The code examples presented in this post are for instructional purposes only. They are not guaranteed to be defect-free or suitable for any particular purpose.
Acknowledgements
The author would like to thank the following individuals for their contributions: Sagar Agrawal, Rajan Arora, Ronny Brendel, Max Katz, Felix Schmitt, Greg Smith, and Magnus Strengert.
How to Optimize Data Transfers in CUDA C/C++
In the previous three posts of this CUDA C & C++ series we laid the groundwork for the major thrust of the series: how to optimize CUDA C/C++ code. In this and the following post we begin our discussion of code optimization with how to efficiently transfer data between the host and device. The peak bandwidth between the device memory and the GPU is much higher (144 GB/s on the NVIDIA Tesla C2050, for example) than the peak bandwidth between host memory and device memory (8 GB/s on PCIe x16 Gen2). This disparity means that your implementation of data transfers between the host and GPU devices can make or break your overall application performance. Let's start with a few general guidelines for host-device data transfers: minimize the amount of data transferred between host and device wherever possible; use pinned (page-locked) host memory for higher transfer bandwidth; batch many small transfers into one larger transfer; and overlap data transfers with computation where you can.
We investigate the first three guidelines above in this post, and we dedicate the next post to overlapping data transfers. First I want to talk about how to measure time spent in data transfers without modifying the source code.
Measuring Data Transfer Times with nvprof
To measure the time spent in each data transfer, we could record a CUDA event before and after each transfer and use cudaEventElapsedTime(), as we described in a previous post. However, we can get the elapsed transfer time without instrumenting the source code with CUDA events by using nvprof, a command-line CUDA profiler included with the CUDA Toolkit (starting with CUDA 5). Let’s try it out with the following code example, which you can find in the Github repository for this post.
#include <stdlib.h>
#include <string.h>

int main()
{
const unsigned int N = 1048576;
const unsigned int bytes = N * sizeof(int);
int *h_a = (int*)malloc(bytes);
int *d_a;
cudaMalloc((int**)&d_a, bytes);
memset(h_a, 0, bytes);
cudaMemcpy(d_a, h_a, bytes, cudaMemcpyHostToDevice);
cudaMemcpy(h_a, d_a, bytes, cudaMemcpyDeviceToHost);
return 0;
}
To profile this code, we just compile it using nvcc, and then run nvprof with the program filename as an argument.
$ nvcc profile.cu -o profile_test
$ nvprof ./profile_test
When I run on my desktop PC which has a GeForce GTX 680 (GK104 GPU, similar to a Tesla K10), I get the following output.
$ nvprof ./a.out
======== NVPROF is profiling a.out...
======== Command: a.out
======== Profiling result:
Time(%) Time Calls Avg Min Max Name
50.08 718.11us 1 718.11us 718.11us 718.11us [CUDA memcpy DtoH]
49.92 715.94us 1 715.94us 715.94us 715.94us [CUDA memcpy HtoD]
As you can see, nvprof measures the time taken by each of the CUDA memcpy calls. It reports the average, minimum, and maximum time for each call (since we only run each copy once, all times are the same). nvprof is quite flexible, so make sure you check out the documentation.
nvprof is new in CUDA 5. If you are using an earlier version of CUDA, you can use the older “command-line profiler”, as Greg Ruetsch explained in his post How to Optimize Data Transfers in CUDA Fortran.
Minimizing Data Transfers
We should not use only the GPU execution time of a kernel relative to the execution time of its CPU implementation to decide whether to run the GPU or CPU version. We also need to consider the cost of moving data across the PCI-e bus, especially when we are initially porting code to CUDA. Because CUDA’s heterogeneous programming model uses both the CPU and GPU, code can be ported to CUDA one kernel at a time. In the initial stages of porting, data transfers may dominate the overall execution time. It’s worthwhile to keep tabs on time spent on data transfers separately from time spent in kernel execution. It’s easy to use the command-line profiler for this, as we already demonstrated. As we port more of our code, we’ll remove intermediate transfers and decrease the overall execution time correspondingly.
Pinned Host Memory
Host (CPU) data allocations are pageable by default. The GPU cannot access data directly from pageable host memory, so when a data transfer from pageable host memory to device memory is invoked, the CUDA driver must first allocate a temporary page-locked, or "pinned", host array, copy the host data to the pinned array, and then transfer the data from the pinned array to device memory, as illustrated below. As the figure shows, pinned memory is used as a staging area for transfers between the host and the device. We can avoid the cost of the transfer between pageable and pinned host arrays by directly allocating our host arrays in pinned memory. Allocate pinned host memory in CUDA C/C++ using cudaMallocHost() or cudaHostAlloc(), and deallocate it with cudaFreeHost(). It is possible for pinned memory allocation to fail, so you should always check for errors. The following code excerpt demonstrates allocation of pinned memory with error checking.
cudaError_t status = cudaMallocHost((void**)&h_aPinned, bytes);
if (status != cudaSuccess)
printf("Error allocating pinned host memory\n");
Data transfers using host pinned memory use the same cudaMemcpy() syntax as transfers with pageable memory. We can use the following “bandwidthtest” program (also available on Github) to compare pageable and pinned transfer rates.
#include <stdio.h>
#include <assert.h>
#include <stdlib.h>
#include <string.h>
// Convenience function for checking CUDA runtime API results
// can be wrapped around any runtime API call. No-op in release builds.
inline
cudaError_t checkCuda(cudaError_t result)
{
#if defined(DEBUG) || defined(_DEBUG)
if (result != cudaSuccess) {
fprintf(stderr, "CUDA Runtime Error: %s\n",
cudaGetErrorString(result));
assert(result == cudaSuccess);
}
#endif
return result;
}
void profileCopies(float *h_a,
float *h_b,
float *d,
unsigned int n,
char *desc)
{
printf("\n%s transfers\n", desc);
unsigned int bytes = n * sizeof(float);
// events for timing
cudaEvent_t startEvent, stopEvent;
checkCuda( cudaEventCreate(&startEvent) );
checkCuda( cudaEventCreate(&stopEvent) );
checkCuda( cudaEventRecord(startEvent, 0) );
checkCuda( cudaMemcpy(d, h_a, bytes, cudaMemcpyHostToDevice) );
checkCuda( cudaEventRecord(stopEvent, 0) );
checkCuda( cudaEventSynchronize(stopEvent) );
float time;
checkCuda( cudaEventElapsedTime(&time, startEvent, stopEvent) );
printf(" Host to Device bandwidth (GB/s): %f\n", bytes * 1e-6 / time);
checkCuda( cudaEventRecord(startEvent, 0) );
checkCuda( cudaMemcpy(h_b, d, bytes, cudaMemcpyDeviceToHost) );
checkCuda( cudaEventRecord(stopEvent, 0) );
checkCuda( cudaEventSynchronize(stopEvent) );
checkCuda( cudaEventElapsedTime(&time, startEvent, stopEvent) );
printf(" Device to Host bandwidth (GB/s): %f\n", bytes * 1e-6 / time);
for (int i = 0; i < n; ++i) {
if (h_a[i] != h_b[i]) {
printf("*** %s transfers failed ***\n", desc);
break;
}
}
// clean up events
checkCuda( cudaEventDestroy(startEvent) );
checkCuda( cudaEventDestroy(stopEvent) );
}
int main()
{
unsigned int nElements = 4*1024*1024;
const unsigned int bytes = nElements * sizeof(float);
// host arrays
float *h_aPageable, *h_bPageable;
float *h_aPinned, *h_bPinned;
// device array
float *d_a;
// allocate and initialize
h_aPageable = (float*)malloc(bytes); // host pageable
h_bPageable = (float*)malloc(bytes); // host pageable
checkCuda( cudaMallocHost((void**)&h_aPinned, bytes) ); // host pinned
checkCuda( cudaMallocHost((void**)&h_bPinned, bytes) ); // host pinned
checkCuda( cudaMalloc((void**)&d_a, bytes) ); // device
for (int i = 0; i < nElements; ++i) h_aPageable[i] = i;
memcpy(h_aPinned, h_aPageable, bytes);
memset(h_bPageable, 0, bytes);
memset(h_bPinned, 0, bytes);
// output device info and transfer size
cudaDeviceProp prop;
checkCuda( cudaGetDeviceProperties(&prop, 0) );
printf("\nDevice: %s\n", prop.name);
printf("Transfer size (MB): %d\n", bytes / (1024 * 1024));
// perform copies and report bandwidth
profileCopies(h_aPageable, h_bPageable, d_a, nElements, "Pageable");
profileCopies(h_aPinned, h_bPinned, d_a, nElements, "Pinned");
printf("n");
// cleanup
cudaFree(d_a);
cudaFreeHost(h_aPinned);
cudaFreeHost(h_bPinned);
free(h_aPageable);
free(h_bPageable);
return 0;
}
The data transfer rate can depend on the type of host system (motherboard, CPU, and chipset) as well as the GPU. On my laptop which has an Intel Core i7-2620M CPU (2.7GHz, 2 Sandy Bridge cores, 4MB L3 Cache) and an NVIDIA NVS 4200M GPU (1 Fermi SM, Compute Capability 2.1, PCI-e Gen2 x16), running BandwidthTest produces the following results. As you can see, pinned transfers are more than twice as fast as pageable transfers.
Device: NVS 4200M
Transfer size (MB): 16
Pageable transfers
Host to Device bandwidth (GB/s): 2.308439
Device to Host bandwidth (GB/s): 2.316220
Pinned transfers
Host to Device bandwidth (GB/s): 5.774224
Device to Host bandwidth (GB/s): 5.958834
On my desktop PC with a much faster Intel Core i7-3930K CPU (3.2 GHz, 6 Sandy Bridge cores, 12MB L3 Cache) and an NVIDIA GeForce GTX 680 GPU (8 Kepler SMs, Compute Capability 3.0) we see much faster pageable transfers, as the following output shows. This is presumably because the faster CPU (and chipset) reduces the host-side memory copy cost.
Device: GeForce GTX 680
Transfer size (MB): 16
Pageable transfers
Host to Device bandwidth (GB/s): 5.368503
Device to Host bandwidth (GB/s): 5.627219
Pinned transfers
Host to Device bandwidth (GB/s): 6.186581
Device to Host bandwidth (GB/s): 6.670246
You should not over-allocate pinned memory. Doing so can reduce overall system performance because it reduces the amount of physical memory available to the operating system and other programs. How much is too much is difficult to tell in advance, so as with all optimizations, test your applications and the systems they run on for optimal performance parameters.
Batching Small Transfers
Due to the overhead associated with each transfer, it is preferable to batch many small transfers together into a single transfer. This is easy to do by using a temporary array, preferably pinned, and packing it with the data to be transferred.
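As a minimal, self-contained sketch (the sizes and names are invented for illustration), many small host arrays can be packed into one pinned staging buffer and moved with a single call:

// Sketch only: pack many small host arrays into one pinned staging buffer so
// that they move in a single cudaMemcpy instead of many small transfers.
#include <cstring>
#include <cuda_runtime.h>

int main()
{
  const int numChunks = 64, chunkElems = 1024;
  const size_t chunkBytes = chunkElems * sizeof(float);

  // stand-ins for the scattered small arrays an application might already have
  float *h_small[numChunks];
  for (int c = 0; c < numChunks; ++c) h_small[c] = new float[chunkElems]();

  float *h_staging, *d_big;
  cudaMallocHost((void**)&h_staging, numChunks * chunkBytes);   // pinned staging area
  cudaMalloc((void**)&d_big, numChunks * chunkBytes);

  for (int c = 0; c < numChunks; ++c)                           // pack on the host
    memcpy(h_staging + c * chunkElems, h_small[c], chunkBytes);

  cudaMemcpy(d_big, h_staging, numChunks * chunkBytes, cudaMemcpyHostToDevice); // one transfer

  cudaFree(d_big); cudaFreeHost(h_staging);
  for (int c = 0; c < numChunks; ++c) delete [] h_small[c];
  return 0;
}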
For two-dimensional array transfers, you can use cudaMemcpy2D().
cudaMemcpy2D(dest, dest_pitch, src, src_pitch, w, h, cudaMemcpyHostToDevice)
The arguments here are a pointer to the first destination element and the pitch of the destination array, a pointer to the first source element and pitch of the source array, the width and height of the submatrix to transfer, and the memcpy kind. There is also a cudaMemcpy3D() function for transfers of rank three array sections.
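As a short fragment illustrating the argument order (the dimensions are made up, and the usual includes and error checking are omitted):

// Sketch: copy a w x h submatrix out of a larger row-major host array.
const int srcWidth = 1024, w = 256, h = 128;
float *h_src = new float[srcWidth * srcWidth]();
float *d_dst; cudaMalloc(&d_dst, w * h * sizeof(float));

size_t srcPitch  = srcWidth * sizeof(float);  // bytes per row of the source array
size_t destPitch = w * sizeof(float);         // bytes per row of the destination
cudaMemcpy2D(d_dst, destPitch, h_src, srcPitch,
             w * sizeof(float), h,            // width is specified in bytes
             cudaMemcpyHostToDevice);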
Summary
Transfers between the host and device are the slowest link of data movement involved in GPU computing, so you should take care to minimize transfers. Following the guidelines in this post can help you make sure necessary transfers are efficient. When you are porting or writing new CUDA C/C++ code, I recommend that you start with pageable transfers from existing host pointers. As I mentioned earlier, as you write more device code you will eliminate some of the intermediate transfers, so any effort you spend optimizing transfers early in porting may be wasted. Also, rather than instrument code with CUDA events or other timers to measure time spent for each transfer, I recommend that you use nvprof, the command-line CUDA profiler, or one of the visual profiling tools such as the NVIDIA Visual Profiler (also included with the CUDA Toolkit).
This post focused on making data transfers efficient. In the next post, we discuss how you can overlap data transfers with computation and with other data transfers.
The Full Stack Optimization Powering NVIDIA MLPerf Training v2.0 Performance
MLPerf benchmarks are developed by a consortium of AI leaders across industry, academia, and research labs, with the aim of providing standardized, fair, and useful measures of deep learning performance. MLPerf Training focuses on measuring the time to train a range of commonly used neural networks, spanning image classification, object detection and instance segmentation, natural language processing, recommendation, speech recognition, medical image segmentation, and reinforcement learning.
Lower training times are important to speed time to deployment, minimizing the total cost of ownership and maximizing return on investment.
However, just as important as a platform’s performance is its versatility. The ability to train every model, as well as provide infrastructure fungibility to run all AI workloads from training to inference, is critical to allowing organizations to maximize return on their infrastructure investments.
The NVIDIA platform, with full-stack innovation and a rich developer and application ecosystem, continues to be the only one to submit results on all eight MLPerf Training tests, as well as to submit on all MLPerf Inference and MLPerf high-performance computing (HPC) tests.
In this post, you will learn about the methods that NVIDIA deployed across the entire stack to deliver more performance in MLPerf v2.0.
Full stack improvements
NVIDIA MLPerf v2.0 submissions are based on the proven A100 Tensor Core GPU, the NVIDIA DGX A100 system, as well as the NVIDIA DGX SuperPOD reference architecture. Many partners also submitted results using the A100 Tensor Core GPU.
Through continued innovation across the entire stack, including system software, libraries, and algorithms, NVIDIA has yet again delivered performance improvements compared to prior submissions using the same A100 Tensor Core GPU.
Compared to NVIDIA MLPerf v0.7 submissions, which marked the first A100 Tensor Core GPU submissions, results showed gains of up to 2.1x on a per-chip basis, and 5.7x for max-scale training (Table 1).
MLPerf v1.1 submission details: Per-Accelerator: BERT: 1.1-2066 | DLRM: 1.1-2064 | Mask R-CNN: 1.1-2066 | Resnet50 v1.5: 1.1-2065 | RNN-T: 1.1-2066 | 3D U-Net: 1.1-2065 | MiniGo: 1.1-2067 Max-Scale: BERT: 1.1-2083 | DLRM: 1.1-2073 | Mask R-CNN: 1.1-2076 | Resnet50 v1.5: 1.1-2082 | SSD: 1.1-2070 | RNN-T: 1.1-2080 | 3D U-Net: 1.1-2077 | MiniGo: 1.1-2081 (*)
MLPerf v2.0 submission details: Per-Accelerator: BERT: 2.0-2070 | DLRM: 2.0-2068 | Mask R-CNN: 2.0-2070 | Resnet50 v1.5: 2.0-2069 | RetinaNet: 2.0-2091 | RNN-T: 2.0-2066 | 3D U-Net: 2.0-2060 | MiniGo: 2.0-2059 Max-Scale: BERT: 2.0-2106 | DLRM: 2.0-2098 | Mask R-CNN: 2.0-2099 | Resnet50 v1.5: 2.0-2107 | RetinaNet: 2.0-2103 | RNN-T: 2.0-2104 | 3D U-Net: 2.0-2100 | MiniGo: 2.0-2105
Per-Accelerator performance for A100 computed using 8xA100 server time-to-train and multiplying it by 8. 3D U-Net and RNN-T were not part of MLPerf v0.7. (**) RetinaNet was not part of either MLPerf v0.7 or v1.1. MLPerf name and logo are trademarks. For more information, see www.mlperf.org.
The following sections highlight some of the work done to achieve these improvements.
BERT
The latest NVIDIA BERT submission took advantage of the following optimizations:
Sequence packing
In previous rounds, the overhead related to padding required to fill up the batch was already optimized by introducing an unpadding optimization. Unpadding, however, results in dynamically sized buffers, as the total number of tokens is not fixed anymore.
This is not an issue when we do not have to use CUDA graphs, such as when large batch sizes are used. However, for small batch sizes, where CUDA graphs are used to reduce CPU overhead, dynamically sized buffers would require many separate graphs, one for each possible size. To take advantage of CUDA graphs efficiently while minimizing the overhead of padding, NVIDIA used the concept of sequence packing this round.
In MLPerf Bidirectional Encoder Representations from Transformers (BERT), a training sample has been restricted to 512 tokens, but it often has fewer tokens than 512. As the training sequences have different lengths, it is possible to fit more than one sequence within a 512-token sample.
Sequence packing requires that the length distribution of the training set sequences be known in advance. The sequences can be merged into packed samples such that none of the merged samples exceed a length of 512 tokens.
NVIDIA used a similar packing algorithm that another submitter used in MLPerf v1.1. Adopting an algorithm from a different submitter was made possible by the high degree of general-purpose programmability of GPUs.
To strike a good balance between implementation complexity and performance, up to three sequences were packed into each sample. This means each training sample contains a varying number of sequences; a batch of three samples, for example, can contain anywhere from three to nine sequences in total.
CUDA graphs require buffer sizes to be fixed across time for each graph. The varying number of total sequences was handled by creating a separate graph for each possible number of sequences within a batch.
For large-scale training, we used a batch size of two per chip. This translates into five to seven separate graphs, which is much less than would be required with the unpadding optimization mentioned at the beginning.
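As a rough illustration of the mechanics (not the actual submission code; the dummy kernel and the 3-to-9 range simply mirror the description above, and the cudaGraphInstantiate signature shown is the CUDA 11-era one):

// Sketch: capture one CUDA graph per possible "sequences per batch" count,
// then replay whichever pre-built graph matches the current batch.
#include <cuda_runtime.h>

__global__ void dummy_step(float *buf, int nseq) { if (threadIdx.x == 0) buf[0] = nseq; }

int main()
{
  const int minSeq = 3, maxSeq = 9;
  cudaStream_t stream; cudaStreamCreate(&stream);
  float *d_buf; cudaMalloc(&d_buf, sizeof(float));
  cudaGraphExec_t execs[maxSeq + 1] = {};

  for (int s = minSeq; s <= maxSeq; ++s) {
    cudaGraph_t graph;
    cudaStreamBeginCapture(stream, cudaStreamCaptureModeGlobal);
    dummy_step<<<1, 32, 0, stream>>>(d_buf, s);                 // fixed shapes for this bucket
    cudaStreamEndCapture(stream, &graph);
    cudaGraphInstantiate(&execs[s], graph, nullptr, nullptr, 0);
    cudaGraphDestroy(graph);
  }

  int s = 5;                          // sequence count observed in the current batch
  cudaGraphLaunch(execs[s], stream);  // replay the matching graph
  cudaStreamSynchronize(stream);
  return 0;
}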
Overall, this technique improved the results for large-scale runs by 10% and 33% for 4096-GPU and 1024-GPU scenarios, respectively.
Fusion of fully connected and GELU layers
BERT uses a Gaussian error linear unit (GELU) activation function that follows a fully connected layer. In prior submissions, the GELU activation function was implemented as a single kernel. This approach required additional memory transactions for both input reads and output writes.
In this round, NVIDIA implemented the fusion of a fully connected layer (a matrix multiply operation) with the GELU activation function. This eliminated the need for a large number of memory read and write operations, yielding a 2-4% increase in overall throughput, with larger gains observed for larger per-chip batch sizes.
In general, it is more efficient to fuse activation math into the end of matrix multiply operations, which means fusing the GELU activation function into different fully connected layers (Figure 1).
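However the fusion is realized inside the submission (for instance, through optimized library GEMM epilogues), the idea can be seen in a deliberately naive sketch: apply bias and GELU to the accumulator before the single store, so the activation adds no extra pass over global memory.

// Naive illustration only; production fused kernels come from optimized GEMM libraries.
__device__ float gelu(float x) {
  const float k = 0.7978845608f;               // sqrt(2/pi), tanh approximation of GELU
  return 0.5f * x * (1.0f + tanhf(k * (x + 0.044715f * x * x * x)));
}

__global__ void fc_gelu_naive(const float *A, const float *B, const float *bias,
                              float *C, int M, int N, int K) {
  int row = blockIdx.y * blockDim.y + threadIdx.y;
  int col = blockIdx.x * blockDim.x + threadIdx.x;
  if (row >= M || col >= N) return;
  float acc = 0.0f;
  for (int k = 0; k < K; ++k) acc += A[row * K + k] * B[k * N + col];  // row-major GEMM
  C[row * N + col] = gelu(acc + bias[col]);    // fused epilogue: one write, no extra kernel
}
// launched, for example, with dim3 block(16,16), grid((N+15)/16, (M+15)/16)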
Deep learning recommendation models
The latest NVIDIA deep learning recommendation models (DLRM) submission again leverages NVIDIA Merlin HugeCTR, an optimized open-source deep neural network training framework for recommender systems.
Kernel fusions
Multilayer perceptrons (MLP) represent a key building block for DLRM. To reduce the number of trips to global memory, fusions of elementwise kernels and general matrix multiply (GEMM) kernels have been widely employed.
The NVIDIA cuBLAS library has recently introduced a new fusion type: GEMM with DReLU (fusing ReLU gradient computation with matrix multiply operations in the backward pass). HugeCTR takes advantage of this new fusion type to enhance the performance of MLPs.
Improved overlap of computation and communication
Increasing GPU utilization is important to provide the highest performance.
In this latest submission, NVIDIA notably improved the overlap of computation and communication in the evaluation of hybrid embeddings to improve GPU utilization. Specifically, the execution of the dense network in iteration i is overlapped with the execution of the embedding in iteration i+1 through pipelining, increasing the utilization of the GPUs.
This overlapping is possible because there are no inter-iteration dependences in the evaluation phase.
Additionally, several of the key kernels in the forward/backward hierarchical all-to-all operations for hybrid embedding were also optimized.
ResNet-50
For ResNet-50, we employed the following optimizations to boost performance:
Better max scale training configuration
When a model is trained at large scale, if the number of training images in the dataset is not an integer multiple of the global batch size, the last iteration of the epoch is padded with extra data to keep the batch size consistent across iterations. This wasted computation can be reduced if the dataset size is close to an integer multiple of the global batch size. This is especially important for larger-scale training, where the global batch size is relatively large.
In this round of MLPerf, we concluded that using 527 nodes with a global batch size of 67,456 significantly reduced wasted computation, resulting in a performance boost of 3.5% compared to NVIDIA’s ResNet-50 submission in MLPerf v1.1.
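As a rough check of the arithmetic, assuming the standard ImageNet-1k training set of 1,281,167 images and 8 GPUs per node: 527 nodes x 8 GPUs with a global batch size of 67,456 works out to a per-GPU batch of 16, an epoch takes ceil(1,281,167 / 67,456) = 19 iterations, and the final iteration needs only 67,456 x 19 - 1,281,167 = 497 padding images, a negligible amount of wasted work.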
Faster CuDNN kernels
For the ResNet50 submission, NVIDIA significantly improved the kernels picked up by cuDNN. This includes both better kernels being picked up for the layer sizes and optimized kernel implementation for different tile sizes.
With these optimized kernels, we observed over 4% throughput improvement in the large-scale configurations compared to MLPerf v1.1.
RetinaNet
The NVIDIA RetinaNet submission takes advantage of several software optimizations, including:
Channel last memory format and automatic mixed precision
To avoid memory reorganization and to effectively increase peak performance, NVIDIA used the PyTorch channel last memory format (NHWC instead of NCHW) and the PyTorch automatic mixed precision (AMP).
Using fusion for speed-up
For the RetinaNet submission, NVIDIA took advantage of several fusion opportunities. The cuDNN runtime fusion through the Apex library was used to fuse CONV-bias-ReLU and CONV-bias patterns, and PyTorch NVFuser was used to fuse element-wise operations, such as scale-bias-ReLU and scale-bias-add-ReLU.
The cuDNN runtime fusion Python interface can be found in the Apex repository (import ConvBiasReLU or ConvBias from apex.contrib.conv_bias_relu).
Optimizing loss block
The RetinaNet loss-related calculations are separated into two stages: ground truth data preprocessing and the actual loss calculation.
As the ground truth data preprocessing is not dependent on the model output, part of the ground truth data processing was offloaded to DALI with custom functions, enabling it to be performed asynchronously, improving system resource utilization. The remainder of the preprocessing was reimplemented and then merged into the model graph to avoid jitter.
For the loss calculation, an optimized focal loss implementation was used, which can be found in the Apex library.
Asynchronous scoring
The RetinaNet submission guidelines require that evaluation (inference and scoring) be performed after each training epoch. The scoring time overhead is significant due to the large number of images and bounding boxes in the OpenImages validation dataset, as well as the sequential implementation of the scoring code.
To mitigate the scoring overhead, particularly in large-scale execution, asynchronous scoring was implemented so that the next training epoch masks the scoring of the previous epoch.
CUDA Graphs
CUDA Graphs were used extensively in the NVIDIA RetinaNet submission. The model's forward and backward passes were graph-captured, as were portions of the ground truth preprocessing; the latter required code adaptation to fit CUDA graph constraints.
For more information, see Accelerating PyTorch with CUDA Graphs.
Mask R-CNN
The NVIDIA Mask R-CNN submissions utilized several techniques to improve performance:
Bottleneck block optimizations
The ResNet backbone is built as a stack of bottleneck blocks, each composed of three sequential convolutions. Each convolution is followed by a batch-norm and a ReLU. The batch-norm modules have four parameters, and some math is required to compute a couple of intermediate terms in the forward method.
As the batch-norms are frozen, the parameters never change, meaning that the intermediate terms do not change either. To save time, these intermediate terms were computed just one time.
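Concretely, a frozen batch-norm collapses into a per-channel scale and shift that can be computed once up front, along the lines of the following sketch (the parameter names are illustrative):

// Fold frozen batch-norm parameters (gamma, beta, running mean/var) into a
// per-channel scale and shift, computed a single time instead of every forward pass:
//   y = gamma * (x - mean) / sqrt(var + eps) + beta  ==>  y = x * scale + shift
#include <math.h>

void fold_frozen_bn(const float *gamma, const float *beta,
                    const float *mean, const float *var,
                    float eps, int channels, float *scale, float *shift)
{
  for (int c = 0; c < channels; ++c) {
    scale[c] = gamma[c] / sqrtf(var[c] + eps);
    shift[c] = beta[c] - mean[c] * scale[c];
  }
}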
Backpropagation of ReLU involves creating and applying a mask. In earlier versions of the code, this mask was stored in half (FP16) precision. In this round, the DReLU mask is represented as a Boolean rather than FP16, to reduce memory transactions.
During back propagation, data gradients and weight gradients were computed for each of the three convolution layers. NVIDIA found empirically that while the data gradient GPU kernels were launched with a sufficient number of CTAs to fully use the GPUs, the weight gradient kernels were launched with far fewer CTAs.
One optimization that was implemented was to launch the data gradient kernels first, and then launch all three weight gradient kernels on separate streams so that they ran concurrently. This reduced total computation time for weight gradients.
These optimizations are available for PyTorch users in the bottleneck block module in Apex.
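The stream pattern being described can be sketched with trivial stand-in kernels (the real kernels come from cuDNN, so only the launch structure is the point here; the dependency handling is simplified to a single event):

#include <cuda_runtime.h>

__global__ void dgrad_stub(float *g) { g[blockIdx.x * blockDim.x + threadIdx.x] += 1.0f; }
__global__ void wgrad_stub(float *w) { w[threadIdx.x] += 1.0f; }

int main()
{
  float *g, *w0, *w1, *w2;
  cudaMalloc(&g, 1 << 20); cudaMalloc(&w0, 1024); cudaMalloc(&w1, 1024); cudaMalloc(&w2, 1024);

  cudaStream_t main_s, s[3];
  cudaStreamCreate(&main_s);
  for (int i = 0; i < 3; ++i) cudaStreamCreate(&s[i]);
  cudaEvent_t gradReady; cudaEventCreate(&gradReady);

  dgrad_stub<<<256, 256, 0, main_s>>>(g);      // data gradients: enough CTAs to fill the GPU
  cudaEventRecord(gradReady, main_s);

  for (int i = 0; i < 3; ++i) cudaStreamWaitEvent(s[i], gradReady, 0);
  wgrad_stub<<<1, 256, 0, s[0]>>>(w0);         // weight gradients: few CTAs each, so issue
  wgrad_stub<<<1, 256, 0, s[1]>>>(w1);         // them on separate streams once their input
  wgrad_stub<<<1, 256, 0, s[2]>>>(w2);         // is ready and let them run concurrently
  cudaDeviceSynchronize();
  return 0;
}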
RPN head fusion
A new Apex module, which fuses convolution, bias, and ReLu, was implemented, as discussed in the RetinaNet section. This module was in MaskR-CNN as well to fuse forward propagation of some of the layers in RPN head block.
Evaluation
Evaluation, on average, takes almost as much time as training. Evaluation is done asynchronously on dedicated nodes, but the results are shared with the training nodes through a blocking broadcast.
The training nodes wait for a certain number of steps before they start waiting for the evaluation broadcast to minimize any evaluation result wait time. The learning rate curve has two inflection points in it, and it is extremely unlikely that the model will have converged before passing the last inflection point. That’s why you should wait for as long as possible to check for evaluation results, until training has passed the last learning rate curve inflection point.
Top-K
In earlier versions of PyTorch, the number of CTAs launched by the top-k kernel was proportional to the per-GPU batch size. This yielded poor performance when the batch size equaled 1, the batch size that is always used for NVIDIA max-scale runs.
This issue was addressed in previous rounds with a two-stage top-k method, which was implemented in Python, but this solution did not generalize well. Work on a more general solution was already underway.
In this round, NVIDIA worked with the PyTorch team to ensure that the new top-k implementation that yielded far better performance for a batch size of 1 made it into PyTorch. When this was complete, the prior two-stage top-k implementation was replaced with the new PyTorch module.
3D U-Net
3D U-Net has multiple large layers with an input channel count of 32. For wgrad kernels, using the default 64x256x64 tile size meant significant tile-size quantization loss.
Thanks to the introduction of new 32x256x32 wgrad kernels in cuDNN, this tile-size quantization loss was avoided. This resulted in a speedup of over 5% on a single node in MLPerf v2.0 over MLPerf v1.1.
RNN-T
The preprocessing step of Recurrent Neural Network Transducer (RNN-T) is relatively intensive. Thanks to DALI, most of the preprocessing overhead can be pipelined and hidden under the main training loop.
However, because the size of the input data might vary, there was a need for relocating the internal memory buffers after the initial iteration, increasing the length of the warm-up phase.
DALI has recently switched to a memory-pool based allocator, where the pool is managed using the cuMem API. This significantly reduces the overhead of allocating new buffers, yielding a much faster warm-up process in training.
Conclusion
Thanks to optimizations across the stack, the NVIDIA platform was yet again able to boost performance in MLPerf Training v2.0 using the proven NVIDIA A100 Tensor Core GPU and NVIDIA DGX A100 platforms.
NVIDIA continues to be the only platform to submit results in the MLPerf benchmarking suite, including MLPerf Training, MLPerf Inference, and MLPerf HPC. This showcases the performance and versatility of the entire platform, which is crucial as modern AI becomes pervasive across every computing domain.
In addition to providing the software used for NVIDIA MLPerf submissions in the MLPerf repository, dozens of additional models optimized for NVIDIA GPUs are also available on the NGC hub.
The NVIDIA platform is also ubiquitous, providing customers with the choice of where to run models. NVIDIA A100 is available from all major server makers and cloud service providers, allowing you to deploy on-premises, in the cloud, in a hybrid environment, or at the edge.
Analysis-Driven Optimization: Finishing the Analysis with NVIDIA Nsight Compute, Part 3
In part 1, I introduced the code for profiling, covered the basic ideas of analysis-driven optimization (ADO), and got you started with the NVIDIA Nsight Compute profiler. In part 2, you began the iterative optimization process. In this post, you finish the analysis and optimization process, determine whether you have reached a reasonable stopping point, and I draw some final conclusions.
Converting the reduction to warp-shuffle
The result of the analysis from part 2 is that your focus has been placed on the following line of code, with the idea of reducing shared memory pressure:
if (id < s) smem[id] += smem[id+s];}
What can you do? In the code refactoring of the previous step, you converted to a warp-stride loop, to permit coalesced access. That resulted in the averaging sum operation spreading across all 32 members of the warp. Thus, you had to combine these, before computing the average. You used a warp-shuffle reduction there, for convenience and simplicity.
The line of code you are focused on now is also part of a reduction, but it is using a classical shared-memory sweep parallel reduction methodology. You can reduce the pressure on shared memory here, by converting the reduction to use a similar warp-shuffle based reduction methodology. Because this involves multiple warps in this second phase of your kernel activity, the code is a two-stage warp-shuffle reduction. For more information about warp-shuffle, see Faster Parallel Reductions on Kepler.
The refactored kernel looks like the following code example:
template <typename T>
__global__ void gpu_version4(const T * __restrict__ input, T * __restrict__ output, const T * __restrict__ matrix, const int L, const int M, const int N){
// parallelize threadIdx.x over vector length, and blockIdx.x across k (N)
// do initial vector reduction via warp-stride loop
__shared__ T smem[my_L];
int idx = threadIdx.x;
int idy = threadIdx.y;
int id = idy*warpSize+idx;
int k = blockIdx.x;
T v1;
for (int y = threadIdx.y; y < L; y+=blockDim.y){ // vertical block-stride loop
v1 = 0;
for (int x = threadIdx.x; x < M; x+=warpSize) // horizontal warp-stride loop
v1 += input[k*M*L+y*M+x];
for (int offset = warpSize>>1; offset > 0; offset >>= 1) // warp-shuffle reduction
v1 += __shfl_down_sync(0xFFFFFFFF, v1, offset);
if (!threadIdx.x) smem[y] = v1/M;}
__syncthreads();
v1 = smem[id];
for (int i = 0; i < L; i++){ // matrix-vector multiply
T v2 = v1 * matrix[i*L+id];
// 1st warp-shuffle reduction
for (int offset = warpSize>>1; offset > 0; offset >>= 1)
v2 += __shfl_down_sync(0xFFFFFFFF, v2, offset);
if (idx == 0) smem[idy] = v2;
__syncthreads(); // put warp results in shared mem
// hereafter, just warp 0
if (idy == 0){
// reload v2 from shared mem if warp existed
v2 = (idx < ((blockDim.x*blockDim.y)>>5))?smem[idx]:0;
// final warp-shuffle reduction
for (int offset = warpSize>>1; offset > 0; offset >>= 1)
v2 += __shfl_down_sync(0xFFFFFFFF, v2, offset);}
if (!id) output[k+i*N] = v2;}
}
You have replaced your shared-memory sweep reduction with a two-stage warp-shuffle reduction. No changes are needed at the kernel launch point, other than to change the name of the kernel to your new gpu_version4. If you compile and run this code, you see an additional speedup:
CPU execution time: 0.5206s
Kernel execution time: 0.012659s
                 baseline   Step 1    Step 2    Step 3
Kernel duration  2.92s      0.0789s   0.0216s   0.0127s
Return to the profiler. Repeat the disconnect, connect, launch, run sequence and then reset the baseline. Figure 1 shows the results.
The bottleneck rule has pointed you back to latency again, with the same message as in the previous post, because this latest change relieved pressure on most of the various GPU subsystems. The latency that the profiler is now pointing out is just the memory latency inherent in your loading of ~4 GB of data for this processing. You can get a sense of this by looking at the Warp State Statistics section, where you now see Stall Long Scoreboard as your most significant stall reason (Figure 2).
Hover over Stall Long Scoreboard for the description: “Average number of warps resident per issue cycle, waiting on a scoreboard dependency on L1TEX (local, global, surface, tex) operation”. For more information, see Warp Scheduler States. Likewise, the rule states:
[Warning] On average each warp of this kernel spends 15.4 cycles being stalled waiting for a scoreboard dependency on a L1TEX (local, global, surface, texture) operation. This represents about 46.1% of the total average of 33.3 cycles between issuing two instructions. To reduce the number of cycles waiting on L1TEX data accesses verify the memory access patterns are optimal for the target architecture, attempt to increase cache hit rates by increasing data locality or by changing the cache configuration, and consider moving frequently used data to shared memory.
You already know you don’t have local, surface, or (explicit) tex operations, so global memory is again the focus. You can again get an additional sense of this by looking at the source view (Figure 3).
The following line of code is dominating the sampling data, as well as being the biggest contributor to your warp stall reasons:
v1 += input[k*M*L+y*M+x];
Hover the mouse over the brown bars to get the warp stall reasons. It is easy to see in Figure 3, but what if it were less obvious or you had more code to sort through? The profiler can help you here. On the Details page, the Warp State Statistics section listed the highest stall reason as Stall Long Scoreboard. How can you find the line with the highest contributor to that? First, for Navigation, select stall_long_sb. Then, choose the button to the right with an up-arrow and a line. This asks the profiler to show the line with the highest reported value for that metric (Figure 4). The profiler highlighted the expected line for you.
You have optimized this step as much as possible. How can you be sure of that? At this point, to achieve your next (and final) round of optimization and to answer this important question, you must revisit your code and consider more major refactoring.
At a high level, your code is producing a set of intermediate vectors that are the results of the averaging phase and then multiplying each of those vectors by an unchanging matrix to get a set of result vectors. This second phase of operations, the matrix-vector multiply step, could be refactored to be a matrix-matrix multiply because the input matrix is constant across each matrix-vector multiply step.
You could rewrite or decompose your kernel into two separate kernels. The first kernel performs the vector averaging, writing out the set of average vectors as a matrix. The second kernel performs the matrix-matrix multiply. Rather than writing your own kernel for this refactored second phase, use cuBLAS, a highly optimized library. This refactoring also means that you must store the intermediate vector results in global memory, to pass them to the cuBLAS gemm (matrix-matrix multiply) operation along with the input matrix. This store to global memory was not necessary in your previous implementations, because you could simply carry the vector forward in local thread storage for use in the matrix-vector multiply.
This refactoring also isolates the vector averaging step, which allows you to get an independent measurement of whether this step is truly optimal. The first phase vector averaging, now isolated in its own kernel, is dominated by the global load operations of 4 GB of data, focused on the line of code the profiler has already indicated in this step.
Refactoring redux
As indicated in the previous section, your task now is to refactor your code by breaking the kernel into two pieces. The first piece is your existing phase 1 kernel code, modified to write the intermediate vectors out as a matrix of results in global memory. The second piece is a properly crafted cuBLAS SGEMM call to perform the matrix-matrix multiply operation. Getting this right involves accounting for the transpositions needed on the input data and when comparing the results for accuracy. The final refactored code looks like the following code example:
// compile with: nvcc -Xcompiler -fopenmp -o t5 t5.cu -O3 -lineinfo -lcublas
#include <stdio.h>
#include <iostream>
#include <cublas_v2.h>
#define cudaCheckErrors(msg) \
do { \
cudaError_t __err = cudaGetLastError(); \
if (__err != cudaSuccess) { \
fprintf(stderr, "Fatal error: %s (%s at %s:%d)\n", \
msg, cudaGetErrorString(__err), \
__FILE__, __LINE__); \
fprintf(stderr, "*** FAILED - ABORTING\n"); \
exit(1); \
} \
} while (0)
// cuBLAS API errors
static const char *_cudaGetErrorEnum(cublasStatus_t error)
{
switch (error)
{
case CUBLAS_STATUS_SUCCESS:
return "CUBLAS_STATUS_SUCCESS";
case CUBLAS_STATUS_NOT_INITIALIZED:
return "CUBLAS_STATUS_NOT_INITIALIZED";
case CUBLAS_STATUS_ALLOC_FAILED:
return "CUBLAS_STATUS_ALLOC_FAILED";
case CUBLAS_STATUS_INVALID_VALUE:
return "CUBLAS_STATUS_INVALID_VALUE";
case CUBLAS_STATUS_ARCH_MISMATCH:
return "CUBLAS_STATUS_ARCH_MISMATCH";
case CUBLAS_STATUS_MAPPING_ERROR:
return "CUBLAS_STATUS_MAPPING_ERROR";
case CUBLAS_STATUS_EXECUTION_FAILED:
return "CUBLAS_STATUS_EXECUTION_FAILED";
case CUBLAS_STATUS_INTERNAL_ERROR:
return "CUBLAS_STATUS_INTERNAL_ERROR";
}
return "";
}
#include <vector>
#include <sys/time.h>
#define USECPSEC 1000000ULL
unsigned long long dtime_usec(unsigned long long start){
timeval tv;
gettimeofday(&tv, 0);
return ((tv.tv_sec*USECPSEC)+tv.tv_usec)-start;
}
// perform vector averaging over M vectors of length L, followed by matrix-vector multiply
// repeat the above N times
// input vectors are stored as a set of N column-major matrices
// for each k in N: output[k] = matrix*input[k]
template <typename T>
void cpu_version1(T *input, T *output, T *matrix, int L, int M, int N){
#pragma omp parallel for
for (int k = 0; k < N; k++){ // repeat the following, N times
std::vector<T> v1(L); // vector length of L
for (int i = 0; i < M; i++) // compute average vector over M input vectors
for (int j = 0; j < L; j++)
v1[j] += input[k*M*L+j*M+i];
for (int j = 0; j < L; j++)
v1[j] /= M;
for (int i = 0; i < L; i++) // matrix-vector multiply
for (int j = 0; j < L; j++)
output[i*N+k] += matrix[i*L+j]*v1[j];
}
}
const int my_L = 1024; // maximum 1024
const int my_M = 1024;
const int my_N = 1024;
template <typename T>
__global__ void gpu_version5(const T * __restrict__ input, T * __restrict__ output, const int L, const int M, const int N){
// parallelize threadIdx.x over vector length, and blockIdx.x across k (N)
// do initial vector reduction via warp-stride loop
int k = blockIdx.x;
T v1;
for (int y = threadIdx.y; y < L; y+=blockDim.y){ // vertical block-stride loop
v1 = 0;
for (int x = threadIdx.x; x < M; x+=warpSize) // horizontal warp-stride loop
v1 += input[k*M*L+y*M+x];
for (int offset = warpSize>>1; offset > 0; offset >>= 1) // warp-shuffle reduction
v1 += __shfl_down_sync(0xFFFFFFFF, v1, offset);
if (!threadIdx.x) output[k+y*N] = v1/M;}
}
typedef float ft;
int main(){
ft *d_input, *h_input, *d_output, *h_outputc, *h_outputg, *d_matrix, *h_matrix, *d_result;
int L = my_L; int M = my_M; int N = my_N;
// host allocations
h_input = new ft[N*L*M];
h_matrix = new ft[L*L];
h_outputg = new ft[N*L];
h_outputc = new ft[N*L];
// data initialization
for (int i = 0; i < N*L*M; i++) h_input[i] = (rand()&1)+1; // 1 or 2
for (int i = 0; i < L*L; i++) h_matrix[i] = (rand()&1)+1; // 1 or 2
// create result to test for correctness
unsigned long long dt = dtime_usec(0);
cpu_version1(h_input, h_outputc, h_matrix, L, M, N);
dt = dtime_usec(dt);
std::cout << "CPU execution time: " << dt/(float)USECPSEC << "s" << std::endl;
// device allocations
cudaMalloc(&d_input, N*L*M*sizeof(ft));
cudaMalloc(&d_output, N*L*sizeof(ft));
cudaMalloc(&d_matrix, L*L*sizeof(ft));
cudaMalloc(&d_result, N*L*sizeof(ft));
cudaCheckErrors("cudaMalloc failure");
// copy input data from host to device
cudaMemcpy(d_input, h_input, N*L*M*sizeof(ft), cudaMemcpyHostToDevice);
cudaMemcpy(d_matrix, h_matrix, L*L*sizeof(ft), cudaMemcpyHostToDevice);
cudaMemset(d_output, 0, N*L*sizeof(ft));
cudaCheckErrors("cudaMemcpy/Memset failure");
// cublas setup
cublasHandle_t h;
ft alpha = 1.0;
ft beta = 0.0;
cublasStatus_t c_res = cublasCreate(&h);
if (c_res != CUBLAS_STATUS_SUCCESS) {std::cout << "CUBLAS create error: " << _cudaGetErrorEnum(c_res) << std::endl; return 0;}
// run on device and measure execution time
dim3 block(32,32);
dt = dtime_usec(0);
gpu_version5<<<N, block>>>(d_input, d_output, L, M, N);
cudaCheckErrors("kernel launch failure");
c_res = cublasSgemm(h, CUBLAS_OP_T, CUBLAS_OP_T,
N, N, L, &alpha,
d_matrix, L,
d_output, N, &beta,
d_result, N);
if (c_res != CUBLAS_STATUS_SUCCESS) {std::cout << "CUBLAS gemm error: " << _cudaGetErrorEnum(c_res) << std::endl; return 0;}
cudaDeviceSynchronize();
cudaCheckErrors("execution failure");
dt = dtime_usec(dt);
cudaMemcpy(h_outputg, d_result, N*L*sizeof(ft), cudaMemcpyDeviceToHost);
cudaCheckErrors("cudaMemcpy failure");
for (int i = 0; i < N; i++)
for (int j = 0; j < L; j++) if (h_outputg[i+N*j] != h_outputc[j+N*i]) {std::cout << "Mismatch at " << i << " was: " << h_outputg[i] << " should be: " << h_outputc[i] << std::endl; return 0;}
std::cout << "Kernel execution time: " << dt/(float)USECPSEC << "s" << std::endl;
return 0;
}
If you compile and run this code, you get results like the following example:
$ nvcc -o t5 t5.cu -Xcompiler -fopenmp -O3 -lineinfo -lcublas
$ ./t5
CPU execution time: 0.521357s
Kernel execution time: 0.005525s
$
                  baseline    Step 1     Step 2     Step 3     Step 4
Kernel duration   2.92s       0.0789s    0.0216s    0.0127s    0.00553s
You have again improved the performance of your code and your GPU implementation is now almost 100x faster than your CPU OpenMP version. To be fair, this final optimization to convert the sequence of matrix-vector multiply operations into a single matrix-matrix multiply could equivalently be done on the CPU version. Using a high-quality CPU BLAS library would also probably give a better result there.
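As a hedged illustration of that point, here is a sketch of what the CPU-side change could look like with the CBLAS interface (as provided by OpenBLAS or MKL, for example). It assumes the averaged vectors have already been collected into a hypothetical row-major N x L array named h_avg; the matrix-vector loop in cpu_version1 then collapses into a single SGEMM call:
#include <cblas.h>
// h_avg:    N x L, row-major; h_avg[k*L + j] is element j of averaged vector k (hypothetical name)
// h_matrix: L x L, row-major, as in cpu_version1
// h_out:    L x N, row-major; h_out[i*N + k] matches the layout produced by cpu_version1
void cpu_matmat(const float *h_matrix, const float *h_avg, float *h_out, int L, int N)
{
    // h_out = 1.0 * h_matrix (L x L) * h_avg^T (L x N) + 0.0 * h_out
    cblas_sgemm(CblasRowMajor, CblasNoTrans, CblasTrans,
                L, N, L,
                1.0f, h_matrix, L,
                      h_avg,   L,
                0.0f, h_out,   N);
}
As with the GPU version, a tuned BLAS exploits blocking and vectorization that the hand-written triple loop does not.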
What about the question asked earlier, “Is your global load operation optimal?” Because the first kernel is now dominated by the global loading of 4 GB of data, you can estimate the achieved bandwidth and compare it to a proxy measurement of the achievable memory bandwidth on your GPU. If the two numbers are close to each other, you can conclude that the global loading operation is nearly optimal and could not get any better. For the proxy measurement of achievable memory bandwidth on your GPU, use the CUDA sample code bandwidthTest. When run on this V100 GPU, the output looks like the following code example:
$ /usr/local/cuda/samples/bin/x86_64/linux/release/bandwidthTest
[CUDA Bandwidth Test] - Starting...
Running on...
Device 0: Tesla V100-SXM2-32GB
Quick Mode
Host to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(GB/s)
32000000 12.4
Device to Host Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(GB/s)
32000000 13.2
Device to Device Bandwidth, 1 Device(s)
PINNED Memory Transfers
Transfer Size (Bytes) Bandwidth(GB/s)
32000000 739.2
The last number is the one you are interested in. A V100 has about 740 GB/s of available memory bandwidth, according to this measurement. The GB used here is 1 billion bytes, not 2^30 bytes. To get a comparison for your kernel, you must get a timing duration for just the kernel, not the kernel plus cuBLAS call.
Of course, you could modify the timing in your code to print this out, but instead look at the profiler one last time. There are now two kernels in your code: one that you wrote, and one that is launched by the cuBLAS call. Not all cuBLAS calls result in a single kernel call, but this usage does. Now when you disconnect, connect, launch, and choose Run to next kernel, you profile just your version 5 kernel. The profiler reports the execution duration as 5.22 milliseconds (Figure 5).
This is most of the overall execution time that you measured of ~5.5 milliseconds! The 4 GB of data that you have is 4x1024x1024x1024 bytes. If you divide that by 5.22 milliseconds, you get an achieved bandwidth of approximately 823 GB/s, using the GB that is used by bandwidthTest. So, your averaging kernel is performing even better than bandwidthTest and is approximately optimal. The profiler output also indicates greater than 90% memory utilization, agreeing with your measurement. Choose Run to next kernel one more time, because you still have the cuBLAS SGEMM kernel waiting in the wings. Figure 6 shows the results in the GPU Speed of Light section after the baseline is cleared.
As you suspected, this kernel (volta_sgemm_32x128_tt) is short, around 220 microseconds, making up most of the difference between your 5.2 millisecond global load kernel time and the overall measured duration of ~5.5 milliseconds. The profiler also reports this highly optimized library kernel is running the GPU at a high level of both compute utilization and memory utilization. You have some solid data now that says your code is roughly optimal, and now you should spend your precious time elsewhere.
Suggestions
The Nsight Compute profiler provides a large amount of information. In this post, I’ve only scratched the surface of the data presented and the tool’s capabilities. No post like this could hope to give a complete treatment. The amount of information here may require some effort to process. The following are some observations and suggestions to help you:
Summary
This post focused on an ADO process using Nsight Compute and a single kernel. If you have a complex application with multiple kernels, the ADO process usually starts with Nsight Systems, and you may iterate back and forth between Nsight Systems and Nsight Compute, as you optimize kernels and other kernels move to the top of the priority list.
The analysis work in this post was performed on a machine with the following characteristics: Ubuntu 18.04.3, CUDA 11.1, GPU Driver version 455.23.05, GCC version 7.5.0, V100-SXM2-32 GB GPU, Intel® Xeon® Gold 6130 CPU @ 2.10GHz. The code examples presented in this post are for instructional purposes only. They are not guaranteed to be defect-free or suitable for any particular purpose.
For more information, see the following resources:
Acknowledgements
The author would like to thank the following individuals for their contributions: Sagar Agrawal, Rajan Arora, Ronny Brendel, Max Katz, Felix Schmitt, Greg Smith, and Magnus Strengert.
Nsight Systems Exposes New GPU Optimization Opportunities
As GPU performance steadily ramps up, your application may be overdue for a tune-up to keep pace. Developers have used independent CPU profilers and GPU profilers in search of bottlenecks and optimization opportunities across their disjointed datasets for years. Using these independent tools can result in picking small optimizations based on false positive indicators or missing large opportunities that fall into the gaps between the tools if you’re not careful. NVIDIA now offers its new Nsight Systems to address these problems, helping you see the bigger picture.
Introducing NVIDIA Nsight Systems
NVIDIA Nsight Systems provides developers with a more complete and unified view of how their applications utilize a computer’s CPUs and GPUs. These new performance analysis tools assist you in visualizing an application’s algorithms in order to identify the largest opportunities for optimizing and tuning algorithms. Using Nsight Systems can help you to scale your application more efficiently across varying levels of hardware, from small laptops to powerful multi-GPU servers such as DGX-2.
Nsight Systems allows you to identify issues such as GPU starvation, unnecessary GPU synchronization, insufficient CPU parallelization or pipelining, and unexpectedly expensive CPU or GPU algorithms. Nsight Systems utilizes low overhead tracing and sampling techniques to collect process and thread activity and correlates that data across CPU cores and GPU streams. This correlation enables developers to investigate bottlenecks from the “scene of the crime” back to the circumstances that created them.
GPU Starvation Investigations
NVIDIA designed Nsight Systems to make it easy to spot GPU starvation and work backward to understand the cause.
Figure 1 shows how you can track kernel coverage. The CUDA device row contains blue height graphs representing CUDA kernel coverage for a given segment of time, relative to the zoom level. The first red box shows the GPU having no work to execute, while the second red box shows very sparse coverage; only 5% of the GPU is occupied with work during the time those pixels represent. In these cases, the CPU algorithms feeding the GPU should be investigated for optimization opportunities.
Nsight also allows the developer to investigate back in time by using our GPU correlation feature in order to spot the underlying CPU algorithm. The developer will find the correlated CPU-side CUDA API launch event by selecting the CUDA kernel on the GPU after the gap. Figure 2 shows the CPU thread (413) launching the kernel from GPU stream 70. Later examples will show how to learn more about investigating the CPU side of your algorithms.
Unnecessary GPU Synchronization Calls
Similar to GPU starvation, low or empty GPU utilization areas can also reveal unnecessary GPU synchronization calls. You can see in Figure 3 how the CPU asks the GPU to synchronize even though the CUDA stream already enforces ordering of execution. The user pays a second time penalty by invoking cudaStreamSynchronize immediately after a cudaMemcpyAsync, forcing the CPU to wait where the first sync could simply be skipped. Identifying these situations and removing unnecessary cuda*Synchronize calls, or switching to CUDA events, frequently results in more tightly packed GPU work.
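As a hedged sketch of this pattern (the buffer, kernel, and sizes below are hypothetical), the first synchronization adds nothing but a CPU stall, because operations issued into a single stream already execute in issue order:
#include <cuda_runtime.h>
// Hypothetical kernel standing in for "work on the copied data"
__global__ void process(float *data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1.0f;
}
void submit_work(float *d_buf, const float *h_buf, size_t bytes, int n, cudaStream_t stream)
{
    cudaMemcpyAsync(d_buf, h_buf, bytes, cudaMemcpyHostToDevice, stream);
    cudaStreamSynchronize(stream);   // unnecessary: the kernel below is already ordered after the copy
    process<<<(n + 255) / 256, 256, 0, stream>>>(d_buf, n);
    cudaStreamSynchronize(stream);   // one sync where the CPU actually needs the GPU results
}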
API Trace
Understanding the CPU algorithms leading up to a particular GPU event can be done with a combination of automatic instrumentation and optional manual annotations. Several features in Nsight Systems offer assistance unavailable in previous GPU profilers. API trace has been shown in all of the images in this article. Libraries such as CUDA, cuDNN, cuBLAS, and OpenGL can all be traced to identify GPU API related issues.
Figure 4 below reveals OS runtime libraries trace and thread call-stack backtrace. These features can identify the context of resource management and thread synchronization issues which could prevent a thread from launching work in sufficient time to keep the GPU busy.
Sampling
Thread call-stack information can also be collected via sampling and is presented relative to samples that fall into a selected range of time and filter properties. Figure 5 shows how thread call-stack information is viewed.
Annotated source code
The NVIDIA Tools Extension (NVTX) is a source code annotation library that developers can use to highlight their code so that it can appear in the timeline. The latest version of NVTX offers incredibly low overhead and only logs data when tools are profiling the application. The following picture is an example from VMD, a high-performance molecular visualization tool, marking its application phases and algorithms with NVTX.
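As a minimal sketch of what such annotations look like in C/C++ (the phase names here are hypothetical), ranges pushed and popped around application phases appear as labeled, nestable bars on the timeline:
#include <nvtx3/nvToolsExt.h>   // NVTX v3 is header-only; older toolkits use nvToolsExt.h and -lnvToolsExt
void simulation_step()
{
    nvtxRangePushA("simulation_step");       // outer range for the whole phase
    nvtxRangePushA("build_neighbor_list");   // nested ranges show up as nested bars
    // ... CPU or GPU work ...
    nvtxRangePop();
    nvtxRangePushA("integrate");
    // ... more work ...
    nvtxRangePop();
    nvtxRangePop();
}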
3x Performance Increase in VMD
VMD developer John Stone presented how he achieved a greater than 3x performance increase in VMD at the 2018 GTC in San Jose, California. This presentation is full of other great examples, including optimizations also made to Lattice Microbes for spatial stochastic simulation.
Getting Started
NVIDIA Nsight Systems offers a robust set of profiling and analysis tools for developers using NVIDIA GPUs. Nsight Systems enables analysis of CPU and GPU code, lets you investigate CPU-GPU interactions, and collects thread call-stack statistics. A straightforward GUI enables you to quickly visualize key elements. You can learn more about NVIDIA Nsight Systems, as well as download the tool, by visiting the product page.
New Optimizations To Accelerate Deep Learning Training on NVIDIA GPUs
The pace of AI adoption across diverse industries depends on maximizing data scientists’ productivity. NVIDIA releases optimized NGC containers every month with improved performance for deep learning frameworks and libraries, helping scientists maximize their potential. NVIDIA continuously invests in the full data science stack, including GPU architecture, systems, and software stacks. This holistic approach provides the best performance for deep learning model training as proven by NVIDIA winning all six benchmarks submitted to MLPerf, the first industry-wide AI benchmark. NVIDIA accomplished this feat by introducing several new generations of GPU architectures in recent years, culminating in the Tensor Core architecture on the Volta and Turing GPUs, which include native support for mixed-precision calculations. NVIDIA accomplished these records on MXNet and PyTorch frameworks, showcasing the versatility of our platform. Automatic mixed precision in popular deep learning frameworks provides 3x faster training performance on Tensor Cores by adding one or two line(s) of code to your application. Learn more on the automatic mixed precision page.
The NeurIPS 2018 conference proved to be an opportune time for deep learning scientists to learn about some of the significant recent performance improvements in NVIDIA’s optimized containers that accelerate a variety of deep learning models. Let’s look at improvements to the latest 18.11 release of NVIDIA GPU Cloud (NGC) deep learning framework containers and key libraries. The new release builds on earlier enhancements, which you can read about in the Volta Tensor Core GPU Achieves New AI Performance Milestones post.
Optimized Frameworks
MXNet
This latest release improves the performance of training deep learning models at large scale, where it is crucial that GPU training performance is optimized across a large range of batch sizes. As studies have shown, limits exist to how large the total training batch size can be across all processors before the final training accuracy achieved starts to degrade. Thus, when scaling to a large number of GPUs, adding more GPUs decreases the batch size processed per GPU once the total batch size limit is reached. So, we introduced several improvements to the MXNet framework in the 18.11 NGC container to optimize performance across a variety of training batch sizes, and especially smaller ones, not only large batch sizes:
These optimizations enabled a throughput of 1060 images/sec when training ResNet-50 with a batch size of 32 using Tensor Core mixed-precision on a single Tesla V100 GPU using the 18.11 MXNet container as compared to 660 images/sec with the 18.09 MXNet container.
You can find the most up to date performance results here.
We worked closely with Amazon and the MXNet development community to integrate the popular Horovod communication library to improve performance when running on a large number of GPUs. The Horovod library uses the NVIDIA Collective Communications Library (NCCL), which incorporates the allreduce approach to handling distributed parameters. This eliminates performance bottlenecks with the native MXNet distributed kvstore approach.
We are currently merging our improvements to the upstream MXNet and Horovod repositories so that their user communities can benefit from these improvements.
TensorFlow
The 18.11 TensorFlow NGC container includes the latest version of TensorFlow, 1.12. This offers major improvements for GPU performance enabled by the experimental XLA compiler. Google outlines XLA in their recent blog, including instructions on how to enable it. XLA delivers significant speedups by fusing multiple operations into a single GPU kernel, eliminating the need for multiple memory transfers and dramatically improving performance. The XLA compiler is experimental at this time, with caveats outlined in the Google blog post. However, promising performance improvements of up to 3X on Google’s internal models with GPUs have been recorded.
Furthermore, the 18.11 NGC Tensorflow container integrates the latest TensorRT 5.0.2, enabling data scientists to easily deploy their trained model with optimized inference performance. TensorRT addresses the specific challenges for inference performance. It efficiently executes with small batch sizes with low latencies, down to a batch size of 1. TensorRT 5.0.2 supports low-precision data types, such as 16-bit floating point or 8-bit integers.
On a related note, NVIDIA provides profilers with powerful insights into CUDA application performance. However, while these profiles provide voluminous data about the low-level performance of the application, it is often hard to interpret for a TensorFlow user. That’s because the profile doesn’t correlate its output back to the original graph the TensorFlow user built. We enhanced TensorFlow’s graph executor (using the NVIDIA profiler NVTX extensions) to emit markers into profiles collected with CUDA profilers such as nvprof, simplifying performance analysis.
These markers show the time range spent in each graph operator and can be used by power users to easily identify compute kernels with their associated TensorFlow layers. Previously, the profile would only show the kernel launches and host/device memory operations (the Runtime API row). Now, TensorFlow adds markers into the profile with meaningful names in relation to the TensorFlow graph, as shown in figure 1. This allows users to map GPU execution profile events to specific nodes in their model graph.
PyTorch
NVIDIA works closely with the PyTorch development community to continually improve performance of training deep learning models on Volta Tensor Core GPUs. Apex is a set of lightweight extensions to PyTorch that are maintained by NVIDIA to accelerate training. These extensions are currently being evaluated for merging directly into the main PyTorch repository. However, the PyTorch NGC container comes pre-built with Apex utilities, so data scientists and researchers can easily start using them. Learn more about Apex capabilities in this blog. We recently added some performance-oriented utilities in addition to the automatic mixed precision utilities and distributed training wrapper originally included with Apex.
First, we added a new fused implementation of the Adam optimizer. The existing default PyTorch implementation requires several redundant passes to and from GPU device memory. These redundant passes create significant overhead, especially when scaling training across many GPUs in a data parallel fashion. The fused Adam optimizer in Apex eliminates these redundant passes, improving performance. For example, an NVIDIA-optimized version of the Transformer network using the fused Apex implementation delivered end-to-end training speedups between 5% and 7% over the existing implementation in PyTorch. The observed end-to-end speedups ranged from 6% to as high as 45% (for small batch sizes) for an optimized version of Google Neural Machine Translation (GNMT).
Next, we added an optimized implementation of layer normalization. For that same Transformer network, Apex’s layer normalization delivered a 4% end-to-end speedup in training performance.
Finally, we augmented the distributed data parallel wrapper, for use in multi-GPU and multi-node training. This included significant under-the-hood performance tuning as well as new user-facing options to improve performance and accuracy. One example is the “delay_allreduce” option. This option buffers the gradients from all the layers, then accumulates them across the GPUs once the backward pass is completed.
While this option omits the opportunity to overlap communications of already calculated gradients with computation of gradients of other model layers, it can improve performance in cases where persistent kernel implementations are used, including batch normalization and certain cuDNN RNNs. The details of the “delay_allreduce” option, as well as other user-facing options, can be found in the Apex documentation.
These PyTorch optimizations enabled NVIDIA to capture multiple speed records on MLPerf, which you can read about here.
Performance Libraries
cuDNN
The latest version of cuDNN 7.4.1 contains significant performance improvements for NHWC data layouts, persistent RNN data gradient calculation, strided convolution activation gradient calculation, and improved heuristics in the cudnnGetConvolution<*>() set of APIs.
A key to improved performance from Volta Tensor Cores is reducing the number of tensor transpositions needed when training a model, as discussed in this previous blog post. The natural tensor data layout for convolutions with Tensor Cores is the NHWC layout. Over the last few releases of cuDNN, we also added highly optimized kernels that operate on the NHWC data layout for a series of memory bound operations such as add tensor, op tensor, activation, average pooling, and batch normalization. These are all available in the latest cuDNN 7.4.1 release.
These new implementations enable more efficient memory access and can reach close to peak memory bandwidth in many typical use-cases. In addition, the new extended batch normalization API also supports optional fused element-wise add activation, saving several round-trips to and from global memory and noticeably improving performance. These fused operations will speed up training of networks with batch normalization and skip connections, which includes most modern image networks for classification, detection, segmentation, and other tasks.
As an example, performance increased more than 20% using cuDNN’s new NHWC and fused batch normalization support when training the SSD network (with a ResNet-34 backbone) on a DGX-1V, with 8 Tesla V100 GPUs, as compared to running with the NCHW data layout and without the fused batch normalization.
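As a minimal sketch (the helper name below is hypothetical), opting into the NHWC layout largely comes down to how the cuDNN tensor descriptors are created; FP16 data described as NHWC lets cuDNN select its NHWC-optimized Tensor Core kernels:
#include <cudnn.h>
// Describe an activation tensor in the channels-last (NHWC) layout.
cudnnTensorDescriptor_t make_nhwc_fp16_desc(int n, int c, int h, int w)
{
    cudnnTensorDescriptor_t desc;
    cudnnCreateTensorDescriptor(&desc);
    // CUDNN_TENSOR_NHWC requests channels-last; CUDNN_DATA_HALF feeds Tensor Cores.
    cudnnSetTensor4dDescriptor(desc, CUDNN_TENSOR_NHWC, CUDNN_DATA_HALF, n, c, h, w);
    return desc;
}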
As discussed earlier in this blog, training deep neural networks at scale requires processing smaller batch sizes than the maximum that can fit on each GPU. This provides a new opportunity for optimization, especially models with RNNs (recurrent neural networks). When the batch size is small, the cuDNN library can use RNN implementations which use persistent algorithms in certain cases.
(This post explains the performance benefits of persistent algorithms for RNN implementations.) While cuDNN has supported persistent RNNs for several releases, we recently optimized them heavily for Tensor Cores. The graph in figure 2 shows one example of the performance improvements we’ve made to the persistent RNNs used for the GNMT language translation model running with a batch size of 32 on a Tesla V100. As the chart shows, the performance of many of the RNN calls have significantly improved in performance.
The latest cuDNN 7.4.1 significantly enhanced the performance of calculating the activation gradients. Previously, unit-stride cases were handled by highly specialized and fast kernels, whereas the non-unit stride cases fell back to more generalized but slower kernel implementations. The latest cuDNN addresses this gap and has much improved performance for non-unit stride cases. With this enhancement, the relevant activation gradient computation operations in networks such as Deep Speech 2, and Inception v3, are improved by up to 25x.
DALI
Training and inference of models for vision tasks (such as classification, object detection, and segmentation) require a significant and involved data input and augmentation pipeline. When running at scale with optimized code, this pipeline can quickly become a bottleneck in overall performance when multiple GPUs have to wait for the CPU to prepare the data. Even when utilizing multiple CPU cores for this processing, the CPU can struggle to provide data quickly enough for the GPUs. This results in periods of idle GPU time spent waiting on the CPU to complete its tasks. It becomes advantageous to move these data pipelines from the CPU to the GPU. DALI, an open source, framework-agnostic library for GPU-accelerated data input and augmentation pipelines, has been developed to address this issue, migrating work from the CPU to the GPU.
Let’s take the example of the popular Single Shot Detector (SSD) model. The data input pipeline has multiple stages, as shown in figure 3.
All of these pipeline stages look fairly standard across computer vision tasks, except SSD Random Crop (based on IoU, Intersection over Union), which is SSD-specific. Newly added operators in DALI provide a fast GPU-based pipeline for the entire workflow by providing access to the COCO dataset (COCOReader), IoU-based cropping (SSDRandomCrop), and bounding-box flipping (BbFlip).
Conclusion
Researchers can take advantage of the latest performance enhancements discussed in this blog to accelerate their deep learning training with minimal effort. Jumpstart your AI research by visiting NVIDIA GPU Cloud (NGC) to download the fully optimized deep learning framework containers, pre-trained AI models, and model scripts, giving you access to the world’s highest-performing deep learning solutions. In addition, the individual cuDNN and DALI libraries are also available with these enhancements.
CUDA Refresher: The CUDA Programming Model
This is the fourth post in the CUDA Refresher series, which has the goal of refreshing key concepts in CUDA, tools, and optimization for beginning or intermediate developers.
The CUDA programming model provides an abstraction of GPU architecture that acts as a bridge between an application and its possible implementation on GPU hardware. This post outlines the main concepts of the CUDA programming model by outlining how they are exposed in general-purpose programming languages like C/C++.
Let me introduce two keywords widely used in the CUDA programming model: host and device.
The host is the CPU available in the system. The system memory associated with the CPU is called host memory. The GPU is called a device and its memory is likewise called device memory.
To execute any CUDA program, there are three main steps:
1. Copy the input data from host memory to device memory, also known as host-to-device transfer.
2. Load the GPU program and execute, caching data on-chip for performance.
3. Copy the results from device memory back to host memory, also called device-to-host transfer.
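The sketch below ties these three steps together for a trivial element-wise kernel (the kernel syntax and thread indexing used here are covered in the next section):
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float *x, float a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // unique global thread ID
    if (i < n) x[i] *= a;
}

int main()
{
    const int n = 1 << 20;
    float *h_x = new float[n];
    for (int i = 0; i < n; i++) h_x[i] = 1.0f;
    float *d_x;
    cudaMalloc(&d_x, n * sizeof(float));
    // Step 1: host-to-device transfer
    cudaMemcpy(d_x, h_x, n * sizeof(float), cudaMemcpyHostToDevice);
    // Step 2: load and execute the GPU program (the kernel)
    scale<<<(n + 255) / 256, 256>>>(d_x, 2.0f, n);
    // Step 3: device-to-host transfer
    cudaMemcpy(h_x, d_x, n * sizeof(float), cudaMemcpyDeviceToHost);
    printf("h_x[0] = %f\n", h_x[0]);   // expect 2.0
    cudaFree(d_x);
    delete[] h_x;
    return 0;
}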
CUDA kernel and thread hierarchy
Figure 1 shows that the CUDA kernel is a function that gets executed on GPU. The parallel portion of your applications is executed K times in parallel by K different CUDA threads, as opposed to only one time like regular C/C++ functions.
Every CUDA kernel starts with a __global__ declaration specifier. Programmers provide a unique global ID to each thread by using built-in variables.
A group of threads is called a CUDA block. CUDA blocks are grouped into a grid. A kernel is executed as a grid of blocks of threads (Figure 2).
Each CUDA block is executed by one streaming multiprocessor (SM) and cannot be migrated to other SMs in the GPU (except during preemption, debugging, or CUDA dynamic parallelism). One SM can run several concurrent CUDA blocks, depending on the resources the blocks need. Each kernel is executed on one device, and CUDA supports running multiple kernels on a device at one time. Figure 3 shows the kernel execution and mapping onto the hardware resources available in the GPU.
CUDA defines built-in 3D variables for threads and blocks. Threads are indexed using the built-in 3D variable threadIdx. Three-dimensional indexing provides a natural way to index elements in vectors, matrices, and volumes, and makes CUDA programming easier. Similarly, blocks are indexed using the built-in 3D variable blockIdx.
Here are a few noticeable points:
The CUDA program below, which adds two matrices, shows multi-dimensional blockIdx and threadIdx along with other variables like blockDim. In this example, a 2D block is chosen for ease of indexing, and each block has 256 threads: 16 in each of the x and y directions. The total number of blocks is computed by dividing the data size by the size of each block.
// Kernel - Adding two matrices MatA and MatB
__global__ void MatAdd(float MatA[N][N], float MatB[N][N],
float MatC[N][N])
{
int i = blockIdx.x * blockDim.x + threadIdx.x;
int j = blockIdx.y * blockDim.y + threadIdx.y;
if (i < N && j < N)
MatC[i][j] = MatA[i][j] + MatB[i][j];
}
int main()
{
...
// Matrix addition kernel launch from host code
dim3 threadsPerBlock(16, 16);
dim3 numBlocks((N + threadsPerBlock.x -1) / threadsPerBlock.x, (N+threadsPerBlock.y -1) / threadsPerBlock.y);
MatAdd<<<numBlocks, threadsPerBlock>>>(MatA, MatB, MatC);
...
}
Memory hierarchy
CUDA-capable GPUs have a memory hierarchy as depicted in Figure 4.
The GPU architecture exposes the following memories: registers, shared memory and L1 cache, L2 cache, and global memory.
The NVIDIA CUDA compiler does a good job in optimizing memory resources but an expert CUDA developer can choose to use this memory hierarchy efficiently to optimize the CUDA programs as needed.
Compute capability
The compute capability of a GPU determines its general specifications and available features supported by the GPU hardware. This version number can be used by applications at runtime to determine which hardware features or instructions are available on the present GPU.
Every GPU comes with a version number denoted as X.Y, where X is the major revision number and Y the minor revision number. The minor revision number corresponds to an incremental improvement to the architecture, possibly including new features.
For more information about the compute capability of any CUDA-enabled device, see the CUDA sample code deviceQuery. This sample enumerates the properties of the CUDA devices present in the system.
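You can also query the compute capability directly at runtime through the CUDA runtime API, as in this minimal sketch:
#include <cuda_runtime.h>
#include <cstdio>

int main()
{
    int count = 0;
    cudaGetDeviceCount(&count);
    for (int dev = 0; dev < count; dev++) {
        cudaDeviceProp prop;
        cudaGetDeviceProperties(&prop, dev);
        // prop.major and prop.minor form the X.Y compute capability
        printf("Device %d: %s, compute capability %d.%d\n",
               dev, prop.name, prop.major, prop.minor);
    }
    return 0;
}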
Summary
The CUDA programming model provides a heterogeneous environment where the host code is running the C/C++ program on the CPU and the kernel runs on a physically separate GPU device. The CUDA programming model also assumes that both the host and the device maintain their own separate memory spaces, referred to as host memory and device memory, respectively. CUDA code also provides for data transfer between host and device memory, over the PCIe bus.
CUDA also exposes many built-in variables and provides the flexibility of multi-dimensional indexing to ease programming. CUDA also manages different memories including registers, shared memory and L1 cache, L2 cache, and global memory. Advanced developers can use some of these memories efficiently to optimize the CUDA program.
Maximizing Unified Memory Performance in CUDA
Many of today’s applications process large volumes of data. While GPU architectures have very fast HBM or GDDR memory, they have limited capacity. Making the most of GPU performance requires the data to be as close to the GPU as possible. This is especially important for applications that iterate over the same data multiple times or have a high flops/byte ratio. Many real-world codes have to selectively use data on the GPU due to its limited memory capacity, and it is the programmer’s responsibility to move only necessary parts of the working set to GPU memory.
Traditionally, developers have used explicit memory copies to transfer data. While this usually gives the best performance, it requires very careful management of GPU resources and predictable access patterns. Zero-copy access provides fine-grained direct access to the entire system memory, but the speed is limited by the interconnect (PCIe or NVLink) and it’s not possible to take advantage of data locality.
Unified Memory combines the advantages of explicit copies and zero-copy access: the GPU can access any page of the entire system memory and at the same time migrate the data on-demand to its own memory for high bandwidth access. To get the best Unified Memory performance it’s important to understand how on-demand page migration works. In this post I’ll break it down step by step and show you what you can do to optimize your code to get the most out of Unified Memory.
A Streaming Example
I will focus on a streaming example that reads or writes a contiguous range of data originally resident in the system memory. Although this type of access pattern is quite basic, it is fundamental for many applications. If Unified Memory performance is good on this common access pattern, we can remove all manual data transfers and just directly access the pointers relying on automatic migration. The following simple CUDA kernel reads or writes a chunk of memory in a contiguous fashion.
template <typename data_type, op_type op>
__global__ void stream_thread(data_type *ptr, const size_t size,
data_type *output, const data_type val)
{
size_t tid = threadIdx.x + blockIdx.x * blockDim.x;
size_t n = size / sizeof(data_type);
data_type accum = 0;
for(; tid < n; tid += blockDim.x * gridDim.x)
if (op == READ) accum += ptr[tid];
else ptr[tid] = val;
if (op == READ)
output[threadIdx.x + blockIdx.x * blockDim.x] = accum;
}
This benchmark migrates data from CPU to GPU memory and accesses all data once on the GPU. The input data (ptr) is allocated with cudaMallocManaged or cudaMallocHost and initially populated on the CPU. I tested three different approaches to migrating the data: on-demand migration driven by GPU page faults, prefetching with cudaMemPrefetchAsync, and explicit copies from the cudaMallocHost buffer.
In all three cases I measure any explicit data transfer time and the kernel time.
Figure 1 shows initial performance results for the GPU inbound (read) transfers when using different allocators for PCIe and NVLink systems. All systems are using the CUDA 9 toolkit and driver. There are two PCIe systems, one with Tesla P100 and another with Tesla V100. For both PCIe systems the peak bandwidth between the CPU and the GPU is 16GB/s. The NVLink system is an IBM Minsky server with 2 links of NVLink connecting the CPU and the GPU with a peak interconnect bandwidth of 40GB/s.
Considering that Unified Memory introduces a complex page fault handling mechanism, the on-demand streaming Unified Memory performance is quite reasonable. Still it’s almost 2x slower (5.4GB/s) than prefetching (10.9GB/s) or explicit memory copy (11.4GB/s) for PCIe. The difference is more profound for NVLink. The upside is that if you have a lot of compute in your kernel then the migrations can be amortized or overlapped with other computation, and in some scenarios Unified Memory performance may even be better than a non-overlapping cudaMemcpy and kernel approach. In my simple example there is a minimal amount of compute (only local per-thread accumulation) and the explicit prefetching and copy approaches set an upper bound for the achievable bandwidth. Let’s see if we can improve the pure streaming Unified Memory performance and understand how close we can get to the achieved bandwidth of explicit data transfers.
Page Migration Mechanism
Before diving into optimizations I want to explain what happens when a cudaMallocManaged allocation is accessed on the GPU. You can check out my GTC 2017 talk for more details. The sequence of operations (assuming no cudaMemAdvise hints are set and there is no thrashing) is:
Much like CPUs, GPUs have multiple levels of TLBs (Translation Lookaside Buffer: a memory cache that stores recent virtual to physical memory address translations) to perform address translations. When Pascal and Volta GPUs access a page that is not resident in the local GPU memory, the translation for this page generates a fault message and locks the TLBs for the corresponding SM (on Tesla P100 it locks a pair of SMs that share a single TLB). This means any outstanding translations can proceed, but any new translations will be stalled until all faults are resolved. This is necessary to make sure the SM's view of memory is consistent, since during page fault processing the driver may modify the page table and add or revoke access to pages. The GPU can generate many faults concurrently and it's possible to get multiple fault messages for the same page. The Unified Memory driver processes these faults, removes duplicates, updates mappings, and transfers the data. This fault handling adds significant overhead to the streaming performance of Unified Memory on current generation GPU architectures.
Understanding Profiler Output
Since each fault increases the driver’s processing time it is important to minimize page faults during CUDA kernel execution. At the same time you want to provide enough information about your program’s access pattern to the driver so it can prefetch efficiently. Here’s the nvprof profiler output from running my initial streaming code on a small 64MB dataset.
==95657== Unified Memory profiling result:
Device "Tesla P100-SXM2-16GB (0)"
Count Avg Size Min Size Max Size Total Size Total Time Name
349 187.78KB 64.000KB 896.00KB 64.00000MB 2.568640ms Host To Device
88 - - - - 5.975872ms Gpu page fault groups
The total migration size is 64MB, which matches the setup. There are also the minimum and the maximum migration sizes. The minimum size usually equals the OS page size which is 64KB on the test system (IBM Power CPU). In practice, the transfer size is not fixed to the OS page size and can vary significantly. As you can see from the profiler output the driver has transferred chunks of up to 896 KB. The mechanism for this is called density prefetching, which works by testing how much of the predefined region has been or is being transferred; if it meets a certain threshold the driver prefetches the rest of the pages. In addition, the driver may also merge nearby smaller pages into larger pages on the GPU to improve TLB coverage. All this happens automatically during page fault processing (and outside of user control). Note that this is the current driver behavior and the performance heuristics might change in future. (Note that the Linux Unified Memory driver is open source, so keen developers can review what happens under the hood).
The number 88 above on the second line is not the total number of faults, but rather the number of page fault groups. The faults are written to a special buffer in system memory and multiple faults forming a group are processed simultaneously by the Unified Memory driver. You can get the total number of faults for each group by specifying --print-gpu-trace, as the following nvprof excerpt shows.
==32593== Profiling result:
...,"Unified Memory","Virtual Address","Name"
...,"114","0x3dffe6c00000","[Unified Memory GPU page faults]"
...
...,"81","0x3dffe6c00000","[Unified Memory GPU page faults]"
...
...,"12","0x3dffe6c40000","[Unified Memory GPU page faults]"
...
The profiler shows that there are 114 faults reported just for a single page, and then more faults for the same address later. The driver must filter duplicate faults and transfer each page just once. Moreover, for this simple implementation very few different pages are accessed at the same time. Therefore, during fault processing the driver doesn’t have enough information about what data can be migrated to the GPU. Using vectorized load/store instructions up to 128 bits wide may reduce the overall number of faults and spread out the access pattern a bit, but it won’t change the big picture. So the question is how to increase the number of uniquely accessed pages to take advantage of the driver prefetching mechanism?
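As a hedged sketch of that aside, the read path of the earlier stream_thread kernel could use 128-bit float4 loads, assuming float data and a size that is a multiple of 16 bytes; each thread then issues one wide load where it previously issued four scalar loads:
__global__ void stream_thread_vec4(const float *ptr, const size_t size, float *output)
{
    size_t tid = threadIdx.x + blockIdx.x * (size_t)blockDim.x;
    const size_t n4 = size / sizeof(float4);                 // number of 16-byte elements
    const float4 *ptr4 = reinterpret_cast<const float4 *>(ptr);
    float accum = 0;
    for (size_t i = tid; i < n4; i += (size_t)blockDim.x * gridDim.x) {
        float4 v = ptr4[i];                                  // one 128-bit load instead of four 32-bit loads
        accum += v.x + v.y + v.z + v.w;
    }
    output[tid] = accum;
}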
Warp-Per-Page Approach
Instead of having multiple hardware warps accessing the same page, we can divide pages between warps to have a one-to-one mapping and have each warp perform multiple iterations over the 64K region. Here is an updated kernel implementing this idea.
#define STRIDE_64K 65536
template <typename data_type, op_type op>
__global__ void stream_warp(data_type *ptr, const size_t size, data_type *output, const data_type val)
{
int lane_id = threadIdx.x & 31;
size_t warp_id = (threadIdx.x + blockIdx.x * blockDim.x) >> 5;
int warps_per_grid = (blockDim.x * gridDim.x) >> 5;
size_t warp_total = (size + STRIDE_64K-1) / STRIDE_64K;
size_t n = size / sizeof(data_type);
data_type accum = 0;
for(; warp_id < warp_total; warp_id += warps_per_grid) {
#pragma unroll
for(int rep = 0; rep < STRIDE_64K/sizeof(data_type)/32; rep++) {
size_t ind = warp_id * STRIDE_64K/sizeof(data_type) + rep * 32 + lane_id;
if (ind < n) {
if (op == READ) accum += ptr[ind];
else ptr[ind] = val;
}
}
}
if (op == READ)
output[threadIdx.x + blockIdx.x * blockDim.x] = accum;
}
The profiler output shows that now there is just one fault per page in most cases and overall the number of page fault groups is also reduced.
...,"Unified Memory","Virtual Address","Name"
...,"1","0x3dffe6e00000","[Unified Memory GPU page faults]"
...,"1","0x3dffe6e10000","[Unified Memory GPU page faults]"
...
Figure 2 shows updated results for the streaming benchmark.
There is a solid speedup up to 2x compared to the original code and now on-demand migration is just 30% short of the maximum achieved bandwidth for both PCIe and NVLink. Note that this minor change in access pattern is not intrusive so you can easily wrap it into a lightweight macro or a C++ class to reuse in your applications. For many other access patterns it may be possible to apply similar techniques. As GPUs are getting wider with more SMs the number of concurrent page faults is increasing so it is even more important to process them efficiently.
Overlapping Kernels and Prefetches
On-demand migration is powerful in the way it enables fine-grain overlap between data transfers and kernel execution. However, as I explained previously, this overlap is severely limited due to the SM stalls caused by page fault handling. Even with very sophisticated driver prefetching heuristics, on-demand access with migration will never beat explicit bulk data copies or prefetches in terms of performance for large contiguous memory regions. This is the price for simplicity and ease of use. If the application's access pattern is well defined and structured, you should prefetch using cudaMemPrefetchAsync. You can completely avoid stalls by manually tiling your data into contiguous memory regions and sending them to the GPU with cudaMemPrefetchAsync, similar to cudaMemcpyAsync. This allows for more explicit control of what's happening and at the same time provides a uniform view of memory by using a single address space, but there are some caveats.
Looking at Figure 2 it’s clear that cudaMemPrefetchAsync is on par with cudaMemcpyAsync for achieved bandwidth. However, prefetches and copies have different sequences of operations. While cudaMemcpyAsync only needs to submit copies over the interconnect, cudaMemPrefetchAsync also needs to traverse a list of pages and update corresponding mappings in the CPU and GPU page tables. Some of the operations have to be done in order, which limits concurrency and latency hiding opportunities. On the other hand, cudaMemcpyAsync requires the application to maintain host and device memory allocations separately.
There are specific rules on how prefetching interacts with CUDA streams. For busy CUDA streams, the call to prefetch is deferred to a separate background thread by the driver because the prefetch operation has to execute in stream order. The background thread performs the prefetch operation when all prior operations in the stream complete. For idle streams, the driver has a choice to either defer the operation or not, but the driver typically does not defer because of the associated overhead. The exact scenarios under which the driver may decide to defer can vary from driver to driver.
For host-to-device prefetches that are not deferred by the driver, the call returns after the pages have been unmapped from the CPU and the work to migrate those pages to the GPU and update the GPU’s page tables has been enqueued on the GPU. In other words, the call returns before the entire prefetch operation has completed. For device-to-host prefetches that are not deferred by the driver, the call doesn’t return until the entire prefetch operation has completed. This is because the CPU’s page tables cannot be updated asynchronously. So to unblock the CPU for device-to-host prefetches, the stream should not be idle when calling cudaMemPrefetchAsync. The tradeoff is that the deferred path has some additional overhead but it helps to enqueue more work without stalling the CPU, which may lead to better overlapping opportunities.
Achieving good one-way prefetch-kernel overlap is relatively easy as long as the kernel is submitted first. This may be counterintuitive, but it works because CUDA kernel launches are non-blocking and return almost immediately. Two-way prefetch overlap is more complicated because if you use the same CPU path (either deferred or non-deferred) for device-to-host and host-to-device prefetches they are likely to be serialized. Let’s look at a simple example.
for (int i = 0; i < num_tiles; i++) {
  // offload previous tile to the cpu
  if (i > 0)
cudaMemPrefetchAsync(a + tile_size * (i-1), tile_size * sizeof(size_t), cudaCpuDeviceId, s1);
// run multiple kernels on current tile
for (int j = 0; j < num_kernels; j++)
kernel<<<1024, 1024, 0, s2>>>(tile_size, a + tile_size * i);
// prefetch next tile to the gpu
if (i < num_tiles-1)
cudaMemPrefetchAsync(a + tile_size * (i+1), tile_size * sizeof(size_t), 0, s3);
// sync all streams
cudaDeviceSynchronize();
}
This is a common tiling approach that partitions the working set into equal chunks and transfers the data for the previous and the next tiles in parallel with the processing of the current tile. For example, such a scheme is used in the NVIDIA cuBLAS XT library for out-of-core matrix multiplication. In the simple example here I have used a dummy kernel running multiple times to emulate real work happening on the GPU. All operations are submitted to three different streams so you would expect to get all three of them running concurrently. This would be the case for cudaMemcpyAsync but not for cudaMemPrefetchAsync. If you run it through the profiler you’ll see a timeline like the one in Figure 3, effectively showing no overlap between the transfers due to the device-to-host prefetch blocking the CPU.
Therefore, it’s important to make sure that we have the device-to-host prefetch issued in a busy stream while the host-to-device prefetch is issued in an idle stream. Here is a modified version that achieves the new overlapping strategy.
// prefetch first tile
cudaMemPrefetchAsync(a, tile_size * sizeof(size_t), 0, s2);
cudaEventRecord(e1, s2);
for (int i = 0; i < num_tiles; i++) {
// make sure previous kernel and current tile copy both completed
cudaEventSynchronize(e1);
cudaEventSynchronize(e2);
// run multiple kernels on current tile
for (int j = 0; j < num_kernels; j++)
kernel<<<1024, 1024, 0, s1>>>(tile_size, a + tile_size * i);
cudaEventRecord(e1, s1);
// prefetch next tile to the gpu in a separate stream
if (i < num_tiles-1) {
// make sure the stream is idle to force non-deferred HtoD prefetches first
cudaStreamSynchronize(s2);
cudaMemPrefetchAsync(a + tile_size * (i+1), tile_size * sizeof(size_t), 0, s2);
cudaEventRecord(e2, s2);
}
// offload current tile to the cpu after the kernel is completed using the deferred path
cudaMemPrefetchAsync(a + tile_size * i, tile_size * sizeof(size_t), cudaCpuDeviceId, s1);
// rotate streams and swap events
st = s1; s1 = s2; s2 = st;
st = s2; s2 = s3; s3 = st;
et = e1; e1 = e2; e2 = et;
}
Figure 4 shows the profiler timeline for this new code with almost perfect three-way overlap (compute, DtoH and HtoD).
The overall speedup from better overlapping will depend on your compute to copy ratio. I ran the benchmark by using 16 tiles of 256MB and varying the compute workload weight to see the performance impact. Figure 5 shows timings in ms for the naive and optimized methods and two additional lines: no overlap using a single stream (sum of kernel and prefetch times), and ideal overlap (maximum of kernel and prefetch times). The optimized approach is 1.3x-1.5x faster than the original multi-stream code. For compute intensive workloads (high compute to data transfer ratio) the optimized version is only 10% slower than the ideal scenario.
Future Unified Memory Performance Improvements
When using Unified Memory on Pascal or Volta in CUDA 9 all pages that are accessed by the GPU get migrated to that GPU by default. Although it is possible to modify this behavior by using explicit hints (cudaMemAdvise) for the Unified Memory driver, sometimes you just don’t know if your data is accessed often enough to ensure there will be benefit from moving it to the GPU.
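For reference, the explicit hints mentioned above are issued with cudaMemAdvise. A minimal sketch follows; the pointer, size, and device names are hypothetical:
// Pages in [data, data + bytes) are mostly read: the driver may create read-only copies.
cudaMemAdvise(data, bytes, cudaMemAdviseSetReadMostly, device_id);
// Prefer to keep these pages resident on the given GPU.
cudaMemAdvise(data, bytes, cudaMemAdviseSetPreferredLocation, device_id);
// The CPU will also access the pages; establish a mapping to avoid faults.
cudaMemAdvise(data, bytes, cudaMemAdviseSetAccessedBy, cudaCpuDeviceId);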
Volta introduces new hardware access counters that can track remote accesses to pages. These counters can be used internally to notify the driver when a certain page is accessed too often remotely so the driver can decide to move it to local memory. This helps to resolve thrashing situations more elegantly by accurately capturing and moving only the hot pages to the processor’s local memory. For applications with a mixed access pattern you can imagine the pages that are accessed sparsely will not be migrated and it can help to save bandwidth. Stay tuned for future CUDA updates with more details on access counters and updated Unified Memory performance data.
Get Started with Unified Memory in CUDA
In this post I’ve aimed to provide experienced CUDA developers the knowledge needed to optimize applications to get the best Unified Memory performance. If you are new to CUDA and would like to get started with Unified Memory, please check out the posts An Even Easier Introduction to CUDA and Unified Memory for CUDA Beginners. To learn how Unified Memory makes it possible to build applications that process data sets much larger than GPU memory, read my previous post, Beyond GPU Memory Limits with Unified Memory on Pascal.
An Efficient Matrix Transpose in CUDA C/C++
My last CUDA C++ post covered the mechanics of using shared memory, including static and dynamic allocation. In this post I will show some of the performance gains achievable using shared memory. Specifically, I will optimize a matrix transpose to show how to use shared memory to reorder strided global memory accesses into coalesced accesses.
Matrix Transpose
The code we wish to optimize is a transpose of a matrix of single-precision values that operates out-of-place, i.e. the input and output are separate arrays in memory. For simplicity of presentation, we'll consider only square matrices whose dimensions are integral multiples of 32 on a side. The entire code is available on GitHub. It consists of several kernels as well as host code that performs typical tasks such as allocation, data transfers between host and device, kernel launches and timing, validation of results, and deallocation of host and device memory. In this post I'll only include the kernel code; you can view the rest or try it out on GitHub.
In addition to performing several different matrix transposes, we run simple matrix copy kernels because copy performance indicates the performance that we would like the matrix transpose to achieve. For both matrix copy and transpose, the relevant performance metric is effective bandwidth, calculated in GB/s by dividing twice the size in GB of the matrix (once for loading the matrix and once for storing) by time in seconds of execution.
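As a sketch, for an N×N matrix of floats copied or transposed in ms milliseconds, the effective bandwidth used throughout this post can be computed as follows (the function name is hypothetical):
// 2 * bytes because the matrix is loaded once and stored once.
double effective_bw_GBps(int N, double ms)
{
  double gigabytes = 2.0 * double(N) * N * sizeof(float) * 1e-9;
  return gigabytes / (ms * 1e-3);
}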
All kernels in this study launch blocks of 32×8 threads (TILE_DIM=32, BLOCK_ROWS=8 in the code), and each thread block transposes (or copies) a tile of size 32×32. Using a thread block with fewer threads than elements in a tile is advantageous for the matrix transpose because each thread transposes four matrix elements, so much of the index calculation cost is amortized over these elements.
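A sketch of the corresponding launch configuration for an N×N matrix (N assumed to be a multiple of TILE_DIM; the device array names are hypothetical):
const int TILE_DIM = 32;
const int BLOCK_ROWS = 8;
dim3 dimGrid(N / TILE_DIM, N / TILE_DIM, 1);   // one block per 32x32 tile
dim3 dimBlock(TILE_DIM, BLOCK_ROWS, 1);        // 32x8 = 256 threads per block
copy<<<dimGrid, dimBlock>>>(d_odata, d_idata);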
The kernels in this example map threads to matrix elements using a Cartesian (x,y) mapping rather than a row/column mapping to simplify the meaning of the components of the automatic variables in CUDA C: threadIdx.x is horizontal and threadIdx.y is vertical. This mapping is up to the programmer; the important thing to remember is that to ensure memory coalescing we want to map the quickest varying component to contiguous elements in memory. In Fortran contiguous addresses correspond to the first index of a multidimensional array, and threadIdx.x and blockIdx.x vary quickest within blocks and grids, respectively.
Simple Matrix Copy
Let’s start by looking at the matrix copy kernel.
__global__ void copy(float *odata, const float *idata)
{
int x = blockIdx.x * TILE_DIM + threadIdx.x;
int y = blockIdx.y * TILE_DIM + threadIdx.y;
int width = gridDim.x * TILE_DIM;
for (int j = 0; j < TILE_DIM; j+= BLOCK_ROWS)
odata[(y+j)*width + x] = idata[(y+j)*width + x];
}
Each thread copies four elements of the matrix in a loop at the end of this routine because the number of threads in a block is smaller by a factor of four (TILE_DIM/BLOCK_ROWS) than the number of elements in a tile. Note also that TILE_DIM must be used in the calculation of the matrix index y rather than BLOCK_ROWS or blockDim.y. The loop iterates over the second dimension and not the first so that contiguous threads load and store contiguous data, and all reads from idata and writes to odata are coalesced.
Naive Matrix Transpose
Our first transpose kernel looks very similar to the copy kernel. The only difference is that the indices for odata are swapped.
__global__ void transposeNaive(float *odata, const float *idata)
{
int x = blockIdx.x * TILE_DIM + threadIdx.x;
int y = blockIdx.y * TILE_DIM + threadIdx.y;
int width = gridDim.x * TILE_DIM;
for (int j = 0; j < TILE_DIM; j+= BLOCK_ROWS)
odata[x*width + (y+j)] = idata[(y+j)*width + x];
}
In transposeNaive the reads from idata are coalesced as in the copy kernel, but for our 1024×1024 test matrix the writes to odata have a stride of 1024 elements or 4096 bytes between contiguous threads. This puts us well into the asymptote of the strided memory access plot from our global memory coalescing post, and we expect the performance of this kernel to suffer accordingly. The results of the copy and transposeNaive kernels bear this out.
The transposeNaive kernel achieves only a fraction of the effective bandwidth of the copy kernel. Because this kernel does very little other than copying, we would like to get closer to copy throughput. Let’s look at how we can do that.
Coalesced Transpose Via Shared Memory
The remedy for the poor transpose performance is to use shared memory to avoid the large strides through global memory. The following figure depicts how shared memory is used in the transpose.
The following kernel performs this “tiled” transpose.
__global__ void transposeCoalesced(float *odata, const float *idata)
{
__shared__ float tile[TILE_DIM][TILE_DIM];
int x = blockIdx.x * TILE_DIM + threadIdx.x;
int y = blockIdx.y * TILE_DIM + threadIdx.y;
int width = gridDim.x * TILE_DIM;
for (int j = 0; j < TILE_DIM; j += BLOCK_ROWS)
tile[threadIdx.y+j][threadIdx.x] = idata[(y+j)*width + x];
__syncthreads();
x = blockIdx.y * TILE_DIM + threadIdx.x; // transpose block offset
y = blockIdx.x * TILE_DIM + threadIdx.y;
for (int j = 0; j < TILE_DIM; j += BLOCK_ROWS)
odata[(y+j)*width + x] = tile[threadIdx.x][threadIdx.y + j];
}
In the first loop, a warp of threads reads contiguous data from idata into rows of the shared memory tile. After recalculating the array indices, a column of the shared memory tile is written to contiguous addresses in odata. Because threads write different data to odata than they read from idata, we must use a block-wise barrier synchronization __syncthreads(). This approach gives us a nice speedup, as shown in this updated effective bandwidth table.
The transposeCoalesced results are an improvement over the transposeNaive case, but they are still far from the performance of the copy kernel. We might guess that the cause of the performance gap is the overhead associated with using shared memory and the required synchronization barrier __syncthreads(). We can easily test this using the following copy kernel that uses shared memory.
__global__ void copySharedMem(float *odata, const float *idata)
{
__shared__ float tile[TILE_DIM * TILE_DIM];
int x = blockIdx.x * TILE_DIM + threadIdx.x;
int y = blockIdx.y * TILE_DIM + threadIdx.y;
int width = gridDim.x * TILE_DIM;
for (int j = 0; j < TILE_DIM; j += BLOCK_ROWS)
tile[(threadIdx.y+j)*TILE_DIM + threadIdx.x] = idata[(y+j)*width + x];
__syncthreads();
for (int j = 0; j < TILE_DIM; j += BLOCK_ROWS)
odata[(y+j)*width + x] = tile[(threadIdx.y+j)*TILE_DIM + threadIdx.x];
}
Note that the __syncthreads() call is technically not needed in this case, because the operations for an element are performed by the same thread, but we include it here to mimic the transpose behavior. The second line of the table below shows that the problem is not the use of shared memory or the barrier synchronization.
Shared Memory Bank Conflicts
For a shared memory tile of 32 × 32 elements, all elements in a column of data map to the same shared memory bank, resulting in a worst-case scenario for memory bank conflicts: reading a column of data results in a 32-way bank conflict. Luckily, the solution for this is simply to pad the width in the declaration of the shared memory tile, making the tile 33 elements wide rather than 32.
__shared__ float tile[TILE_DIM][TILE_DIM+1];
Removing the bank conflicts in this way brings us to about 95% of our fastest copy throughput.
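For completeness, here is a sketch of the padded-tile kernel; it is identical to transposeCoalesced except for the tile declaration:
__global__ void transposeNoBankConflicts(float *odata, const float *idata)
{
  __shared__ float tile[TILE_DIM][TILE_DIM+1];   // +1 column of padding removes bank conflicts
  int x = blockIdx.x * TILE_DIM + threadIdx.x;
  int y = blockIdx.y * TILE_DIM + threadIdx.y;
  int width = gridDim.x * TILE_DIM;
  for (int j = 0; j < TILE_DIM; j += BLOCK_ROWS)
    tile[threadIdx.y+j][threadIdx.x] = idata[(y+j)*width + x];
  __syncthreads();
  x = blockIdx.y * TILE_DIM + threadIdx.x;  // transpose block offset
  y = blockIdx.x * TILE_DIM + threadIdx.y;
  for (int j = 0; j < TILE_DIM; j += BLOCK_ROWS)
    odata[(y+j)*width + x] = tile[threadIdx.x][threadIdx.y + j];
}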
Summary
In this post we presented three kernels that represent various optimizations for a matrix transpose. The kernels show how to use shared memory to coalesce global memory access and how to pad arrays to avoid shared memory bank conflicts. Looking at the relative gains of our kernels, coalescing global memory accesses is by far the most critical aspect of achieving good performance, which is true of many applications. Because global memory coalescing is so important, we revisit it in the next post when we look at a finite difference computation on a 3D mesh.
Profiling and Optimizing Deep Neural Networks with DLProf and PyProf
Software profiling is key for achieving the best performance on a system and that’s true for the data science and machine learning applications as well. In the era of GPU-accelerated deep learning, when profiling deep neural networks, it is important to understand CPU, GPU, and even memory bottlenecks, which could cause slowdowns in training or inference.
In this post, we explore many ways of profiling, from the basics to more advanced techniques. We also provide some tips and tricks to optimize deep learning models based upon the profiling results.
Before diving deep into profiling, here are some logistics. We used a ResNet50-based image classification model on different frameworks, such as TensorFlow and PyTorch. When we profiled the ResNet50 model using TensorFlow and PyTorch, we used the most recent and performant NVIDIA A100 GPU on an NVIDIA DGX A100 system. This GPU has 40 GB of memory and supports multiple data types, including the new data type TensorFloat-32 (TF32). We employed a variety of tools for profiling to show you the alternatives.
nvidia-smi
The first go-to tool for working with GPUs is the nvidia-smi Linux command. This command brings up useful statistics about the GPU, such as memory usage, power consumption, and processes running on GPU. The goal is to see if the GPU is well-utilized or underutilized when running your model.
First, check how much GPU memory you are utilizing. You usually want to see your models using most of the available GPU memory—especially while training a deep learning model—as it is an indicator of a well-utilized GPU. Power consumption is another strong indicator of GPU utilization. Typically, the more CUDA or Tensor Cores that are fired up, the more GPU power is consumed.
Figure 1 shows an underutilized GPU. This conclusion was formed from two metrics: memory usage and power consumption are both well below their limits. The GPU-utilization (GPU-Util) column confirms this conclusion with a rate of 62%. One remedy is to increase the batch size. More cores are fired up to process a larger batch size. As a result, you get more out of the GPU.
Increase the batch size and make the same Python program call. Figure 2 shows a GPU utilization of 98%. You can confirm this finding when you check the power consumption and memory usage. They are close to their limits.
You did your first optimization. Using a larger batch size (that is, a batch size that occupies almost all the GPU memory) is the most common optimization technique in the deep learning world to improve GPU utilization.
Power consumption and memory usage are not the only metrics that nvidia-smi reveals. You can also try nvidia-smi dmon, which prints out more statistics about the GPU in a scrolling manner (Figure 3).
Each GPU has several streaming multiprocessors (SMs), which run the CUDA kernels. Using many SMs is a signal of a well-utilized GPU. Figure 3 shows that SM utilization starts around 0% when the call starts and then climbs up to the upper 90s when the actual training starts. In addition to the SM utilization, nvidia-smi dmon prints the following statistics:
So far, you have focused on using only one GPU. In the case of multiple GPUs, nvidia-smi and nvidia-smi dmon show metrics separately for each GPU. Another tool that you can leverage when you have multiple GPUs is nvidia-smi topo -m. This call displays the topology of the GPU devices and how they are connected to each other.
Figure 4 shows the topology configuration on a DGX A100 system with 8x A100 GPUs connected with NVLink. When choosing specific GPUs to run your workload, you may want to pick the NVLink-connected GPUs as they would have higher bandwidth, especially on DGX-1 systems.
So far, we have shown you how to analyze GPU utilization using the nvidia-smi tool. These metrics are indicators of an underutilized or a well-utilized GPU. In your modeling, you should always aim at fully harnessing GPUs to better leverage the accelerated computing.
TensorFlow and DLProf
GPU utilization is a great starting point for profiling and optimization. You can do more analysis of modeling in detail by employing tools like DLProf and PyProf. You can also take advantage of user interfaces to visually inspect your code. Deep Learning Profiler (DLProf) provides support for TensorBoard so that you can visually inspect your models.
The following code example trains a ResNet50 model with TensorFlow 1.15. It also hooks up DLProf parameters to get the profiling running while training your model.
dlprof --nsys_opts="--sample=cpu --trace 'nvtx,cuda,osrt,cudnn'" \
--profile_name=/ecan/tf_a100_profiling --nsys_base_name=resnet50_tf_fp32_b408 \
--output_path=/ecan/tf_a100_profiling --tb_dir=resnet50_tf_fp32_b408 \
--force=true --iter_start=20 --iter_stop=40 \
python main.py \
--arch resnet50 \
--mode train \
--data_dir /ecan/tfr \
--export_dir /ecan/results \
--batch_size 256 \
--num_iter 100 \
--iter_unit batch \
--results_dir /ecan/results \
--display_every 100 \
--lr_init 0.01 \
--seed 12345
The code starting from python main.py starts the training for the ResNet50 model (borrowed from the NVIDIA DeepLearningExamples GitHub repo). The dlprof command at the beginning sets the DLProf parameters for profiling. The profile_name, nsys_base_name, output_path, and tb_dir parameters set the output file and folder names.
The force parameter is set to true so that existing output files are overridden. The iter_start and iter_stop parameters specify the range of iterations to which the profiling tool pays attention. For large models, limit the amount of profiling as the resulting file gets large quickly.
DLProf uses the NVIDIA Nsight Systems profiler under the hood and the nsys_opts parameter is used to pass NVIDIA Nsight parameters. The sample parameter specifies whether CPU samples are collected. The trace parameter selects the calls to be traced.
In this setting, we chose to collect nvtx API, CUDA API, operating system runtime, and CUDNN API calls. DLProf can be used with its default parameters, such as dlprof python main.py, and the default parameters give good coverage. We used more options here to show you how to customize NVIDIA Nsight parameters through DLProf and get a more detailed profiling output.
The DLProf call results in two files (sqlite and qdrep) and the events_folder. These files contain all operations traced by the profiler. Qdrep files can be fed into Nsight Systems where you can visually inspect the profiling outputs. The Nsight Systems profiler can be used from the command line as well as through an application with a user interface for visualization. Start TensorBoard with the following command:
tensorboard --logdir events_folder
Figure 5 shows a sample TensorBoard with the DLProf plugin.
TensorBoard with the DLProf plugin has plenty of information about your model, ranging from the average time spent on iterations to the top 10 time-consuming kernels. For more information about the DLProf user interface, see the DLProf Plugin for TensorBoard User Guide.
Figure 6 summarizes runtime metrics about the training. The total time spent for 20 iterations (the command defines starting at the 20th iteration and stopping at the 40th iteration) is 12.3 seconds, 588 ms for each iteration on average.
When you spend 588 ms on average for an iteration, you are not taking advantage of the new precision type, TF32, supported in A100. TF32 uses fewer bits in the matrix multiplications while providing the same model accuracy and therefore yields faster iterations. In addition to fewer bits to deal with, TF32 also makes use of Tensor Cores, which are specialized hardware for deep learning that help accelerate matrix multiply and accumulate operations. The Volta (V100), Turing (T4), and Ampere (A100) generations of GPUs have Tensor Cores.
TF32 is enabled by default in the NVIDIA NGC TensorFlow and PyTorch containers and is controlled with the NVIDIA_TF32_OVERRIDE environment variable (setting it to 0 disables TF32).
After enabling TF32, make the same call without changing any parameters. Figure 7 shows the top 10 GPU operations and if they are using Tensor Cores (TC).
You can see that some operations are already using Tensor Cores, which is great. Look at the average time spent on an iteration to see if you have any speed up.
The average iteration time is reduced to 399 ms from 588 ms when you switch to the TF32 precision. This is a great speedup from switching one environment variable. The million-dollar question is whether you can do better than 399 ms. You know that you can, because DLProf makes recommendations for further improvement.
DLProf not only provides plenty of information about your model, it also makes suggestions on how you can improve it. In this case, it suggests that you enable XLA and AMP (automatic mixed precision). XLA is a compiler that accelerates linear algebra operations. Numerical precision describes the number of digits used to express a value; mixed precision combines different numerical precisions in a computational method. Deep learning networks can be trained with lower precision for higher throughput, halving storage requirements and memory traffic on certain tensors. Mixed precision also accelerates large matrix-to-matrix multiply-add operations.
To enable XLA and AMP, set the following environment variables in a NVIDIA container:
export TF_XLA_FLAGS="--tf_xla_auto_jit=2"
export TF_ENABLE_AUTO_MIXED_PRECISION=1
That said, most recent repositories already have built-in support for XLA and AMP. What you usually must do is pass the related parameters; in this case, they were use_xla and use_tf_amp. With XLA and AMP enabled, your model can use Tensor Cores efficiently, require less memory, and take advantage of faster linear algebra operations.
Figure 10 shows that almost all the Tensor Core–eligible operations are already using Tensor Cores (no pink slice in the pie chart). This is what you want to see. More importantly, it helps reduce the training time.
Average iteration time is reduced to 341 ms from 399 ms (and 588 ms). Using half precision yields lower memory usage. To keep the comparison fair, we did not change the batch size with mixed precision. That said, by enabling AMP you could double the batch size for your model compared to full floating-point precision, which would further reduce the training time.
To summarize, you first employed TF32 precision and reduced the training time. Then, you enabled AMP and XLA and further reduced the training time while using DLProf to help profile.
PyTorch and PyProf
In this section, we show you how to do profiling when creating models with PyTorch. We have already experienced several optimization techniques so far. Use TF32 and AMP for optimizing the model in PyTorch.
Here, you follow a more advanced path, where you inject some extra code to the code base. Further, you use PyProf and the Nsight Systems profiler directly, with no DLProf call. You can still use DLProf and TensorBoard for profiling PyTorch models, as DLProf supports PyTorch as well. However, we wanted to show you alternate ways of profiling.
You can cherry-pick what to profile, such as the 17th iteration only. In the data iteration loop, check to see if you are in the 17th iteration. If so, surround the lines that do the forward pass, loss calculation, gradient computation (backward), and update parameters (step) with the profiler start and stop markers.
Borrow the ResNet50 training code from the same repo. Make the profiling changes in the training code and add the pyprof parameter to enable profiling for only the forward pass. You could also include back propagation or set the range any way you want; the changes are pushed to this branch for reference. With the changes, make the call to run the PyTorch ResNet50 training along with profiling:
nsys profile --trace 'nvtx,cuda,osrt,cudnn' -c cudaProfilerApi --stop-on-range-end true \
--show-output true --sample=cpu --export=sqlite \
-o /ecan/pytorch_a100_profiling/resnet50_pytorch_fp32_b256 \
python main.py /ecan/imagenet_small \
--raport-file raport.json -j16 -p 100 --lr 2.048 \
--optimizer-batch-size 256 --warmup 8 --arch resnet50 \
-c fanin --label-smoothing 0.1 \
--lr-schedule cosine --training-only --mom 0.875 --wd 3.0517578125e-05 -b 256\
--epochs 1 --workspace /ecan/results \
--pyprof
This time, you call the Nsight Systems profiler directly. You already know the trace, sample, and output (-o) parameters. The -c cudaProfilerApi --stop-on-range-end true parameters tell the profiler that you have inserted start and stop markers, so that it profiles only what happens in between. When the --show-output parameter is set to true, the target process stdout and stderr streams are printed to the console.
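The start and stop markers that -c cudaProfilerApi reacts to are the CUDA profiler API calls; the PyTorch script reaches them through torch.cuda.profiler. In plain CUDA C++ the same range selection would look roughly like this sketch (the profiled function is hypothetical):
#include <cuda_profiler_api.h>

cudaProfilerStart();   // nsys starts collecting here
run_forward_pass();    // hypothetical work: the region you want profiled
cudaProfilerStop();    // with --stop-on-range-end true, nsys stops here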
This call resulted in two files: qdrep and sqlite. In TensorFlow, you used the events_folder for TensorBoard but didn't touch the qdrep file. This time, you use the qdrep file to inspect the profiling results visually in the Nsight Systems application.
The following code example uses PyProf calls to analyze kernels:
python -m pyprof.parse (resulting_sqlite_file_from_our_call) > a_file
python -m pyprof.prof a_file -w 100 -c idx,trace,sil,tc,flops,bytes,kernel \
| (read -r; printf "%s\n" "$REPLY"; sort -k5 -n -r)
The w parameter sets the column widths and the c parameter specifies the options to be printed out. There are several options available, and we chose these. We also sorted by the number of floating-point operations for better analysis; otherwise, the output is sorted in order of execution.
We provide a few kernels from the top of the resulting list. The first few ones are batch normalization kernels. You can also identify the line number for a file in which it is called, for example, resnet50.py:201. This is useful for better understanding these kernel statistics, as there might be multiple batch normalizations in your model. The last row is a matrix multiplication using half precision. It is also using Tensor Cores, which is great.
Change the final line of the earlier PyProf call to obtain the total amount of nanoseconds that you spent on a forward pass of an iteration:
python -m pyprof.prof a_file -w 100 -c idx,trace,sil,tc,flops,bytes,kernel \
| awk '{total+=$3}END{print total}'
The result of the call is 188,388,811 ns (188.4 ms). Thus far, your profiling has been done using the FP32 precision type. You already know that switching to the TF32 precision type enables you to optimize your code. By toggling the NVIDIA_TF32_OVERRIDE environment variable, you can take advantage of the TF32 precision type.
When you have the same training and profiling calls but with the TF32 precision type enabled this time, you get a total time of 110,250,534 ns (110.25 ms). By switching to TF32, you almost halved the execution time.
You got used to doing optimization on TensorFlow, and now you are optimizing code on PyTorch. You have one more step to go: Enable mixed precision and see if you can further optimize your code.
nsys profile --trace 'nvtx,cuda,osrt,cudnn' -c cudaProfilerApi --stop-on-range-end true \
--show-output true --sample=cpu --export=sqlite \
-o /ecan/pytorch_a100_profiling/resnet50_pytorch_amp_b256 \
python main.py /ecan/imagenet_small \
--raport-file raport.json -j16 -p 100 --lr 2.048 \
--optimizer-batch-size 256 --warmup 8 --arch resnet50 \
-c fanin --label-smoothing 0.1 \
--lr-schedule cosine --training-only --mom 0.875 --wd 3.0517578125e-05 -b 256 \
--amp --static-loss-scale 128 \
--epochs 1 --workspace /ecan/results \
--pyprof
Most of the parameters are the same as the previous call except the amp and static-loss-scale parameters. The amp parameter enables AMP as the code base supports it. The static-loss-scale parameter scales the loss. For more information about ResNet50 training parameters, see the Command-line options section in the ResNet50 v1.5 For PyTorch guide.
When you run the code example for the call with AMP mode on, you get 72,860,695 ns (72.86 ms). This is wonderful news, as you further optimized your code using mixed precision. You get similar improvements on TensorFlow. Even though TensorFlow has an extra optimization (XLA), you get further improvements on PyTorch with only AMP.
Nsight Systems for profiling
So far, you have used the statistics harvested from training with the profiler call. You have also made use of PyProf to take a quick peek at the kernels used in the model. You produced great visualizations using TensorBoard and the DLProf plugin. In the beginning of the post, you used nvidia-smi to check the GPU utilization. If you think these are not enough and you want to dive deeper, no worries. We have more for you.
The qdrep files are obtained when training with the profiler call is completed. Now it is time to use them to analyze your model more deeply using the user interface of the NVIDIA Nsight Systems profiler. For more information, see the Nsight Systems User Guide.
Even if you limit profiling to forward propagation of an iteration only, with the Nsight Systems profiler you can inspect lots of information visually. We sometimes zoom in to a particular region from this view to further analyze. To get a closer look, zoom into the beginning of the training, focused on a few milliseconds.
You first see some memory operations in green, which is followed by the convolution operation. Then, batch normalization kicks in. Not surprisingly, an activation function is the next step. In this case, it is ReLU. Finally, you see that max pooling is executed. This is the order that you see in the code base and in most of the ResNet models. You can also see stack trace for more information for selected operations, which become turquoise green when selected.
Before we end this post, we would like to show yet another optimization method. In PyTorch, you can also change the memory format. Data is usually stored in the following format:
[ number of elements in the batch, number of channels (depth or number of filters), height, width ]
That said, PyTorch operates on the [n, c, h, w] format by default. Batch-norm-like layers are processed faster in the [n, h, w, c] format, and the most time-consuming operations here were batch normalizations (as in Figure 14). Furthermore, Tensor Cores natively use the [n, h, w, c] format. Basically, by changing the memory format, you can save some time while processing batch-norm-like layers as well as avoid some format conversion time inside the cuDNN kernels.
Adding --memory-format nhwc to the earlier call does the trick and enables you to use the [n, h, w, c] memory format. With the [n, h, w, c] memory format, your training no longer needs memory format conversion operations, for example, nhwcToNchwKernel and nchwToNhwcKernel (see Figure 18). This means that you saved some more time. In other words, you did yet another optimization by changing the memory format. Just to confirm this, calculate the total amount of time spent on kernels. For us, it turned out to be 45,631,828 ns (45.6 ms). It was around 70 ms with the [n, c, h, w] memory format. You further cut your execution time with the memory format optimization technique.
Summary
This post covered the details of profiling deep learning models using a variety of tools: nvidia-smi, DLProf and PyProf, and the NVIDIA Nsight Systems profiler. Each tool is useful to point out performance improvement opportunities at different levels. The profiling runs used two common deep learning frameworks: PyTorch and TensorFlow. The code examples are provided in the DeepLearningExamples GitHub repo, which also has the code changes for the PyProf and PyTorch calls. We encourage you to replicate these steps to get more familiar with the profiling tools.
For more information, see the following resources:
Acknowledgements
Thanks to David Zier, Timothy Gerdes, Elias Bermudez, Matthew Kotila, and Ujval Kapasi for their continued support throughout the progress of this post.
CUDA Pro Tip: Optimize for Pointer Aliasing
Often cited as the main reason that naïve C/C++ code cannot match FORTRAN performance, pointer aliasing is an important topic to understand when considering optimizations for your C/C++ code. In this tip I will describe what pointer aliasing is and a simple way to alter your code so that it does not harm your application performance.
What is pointer aliasing?
Two pointers alias if the memory to which they point overlaps. When a compiler can’t determine whether pointers alias, it has to assume that they do. The following simple function shows why this is potentially harmful to performance:
void example1(float *a, float *b, float *c, int i) {
a[i] = a[i] + c[i];
b[i] = b[i] + c[i];
}
At first glance it might seem that this function needs to perform three load operations from memory: one for a[i], one for b[i] and one for c[i]. This is incorrect because it assumes that c[i] can be reused once it is loaded. Consider the case where a and c point to the same address. In this case the first line modifies the value c[i] when writing to a[i]. Therefore the compiler must generate code to reload c[i] on the second line, in case it has been modified.
Because the compiler must conservatively assume the pointers alias, it will compile the above code inefficiently, even if the programmer knows that the pointers never alias.
What can I do about aliasing?
Fortunately almost all C/C++ compilers offer a way for the programmer to give the compiler information about pointer aliasing. The C99 standard includes the keyword restrict for use in C. In C++ there is no standard keyword, but most compilers allow the keywords __restrict__ or __restrict to be used for the same purpose as restrict in C.
By giving a pointer the restrict property, the programmer is promising the compiler that any data written to through that pointer is not read by any other pointer with the restrict property. In other words, the compiler doesn’t have to worry that a write to a restrict pointer will cause a value read from another restrict pointer to change. This greatly helps the compiler optimize code.
To show the performance benefits of restrict-decorated pointers, consider the following function:
void example2a(float *a, float *b, float *c) {
for (int i = 0; i < 1024; i++) {
a[i] = 0.0f;
b[i] = 0.0f;
for (int j = 0; j < 1024; j++) {
a[i] = a[i] + c[i*1024 + j];
b[i] = b[i] + c[i*1024 + j] * c[i*1024 + j];
}
}
}
This function is similar to our original example and, as before, the compiler generates sub-optimal code to ensure that it works with aliased pointers. Because the compiler must assume that the pointers may alias, it must write a[i] and b[i] to memory, and reload them along with c[i*1024 + j], on every iteration of the inner loop.
If we know at compile time that our three pointers are not used to access overlapping regions, we can add __restrict__ to our pointers. Now the compiler knows that a[i] and b[i] cannot overlap, so it can optimize the inner loop by storing the running sum in a local variable and only writing it once at the end.
void example2b(float * __restrict__ a, float * __restrict__ b, float * __restrict__ c) {
for (int i = 0; i < 1024; i++) {
a[i] = 0.0f;
b[i] = 0.0f;
for (int j = 0; j < 1024; j++) {
a[i] = a[i] + c[i*1024 + j];
b[i] = b[i] + c[i*1024 + j] * c[i*1024 + j];
}
}
}
Timing these two functions shows that just adding __restrict__ in this case produces 3x faster code! I could have achieved the same result by introducing local summation variables myself, but in real-world situations allowing the compiler to do this optimization is often easier.
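For illustration, the hand-written equivalent of what the compiler can now do looks roughly like this sketch:
void example2c(float *a, float *b, float *c) {
  for (int i = 0; i < 1024; i++) {
    float suma = 0.0f, sumb = 0.0f;         // running sums live in registers
    for (int j = 0; j < 1024; j++) {
      float v = c[i*1024 + j];              // c is loaded once per inner iteration
      suma += v;
      sumb += v * v;
    }
    a[i] = suma;                            // a[i] and b[i] are written only once
    b[i] = sumb;
  }
}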
Wait, where’s the CUDA?
I haven’t talked about GPUs or CUDA at all so far. This is because pointer aliasing is something developers of high-performance code need to be aware of on both the GPU and the CPU and, as demonstrated above, proper use can significantly improve performance.
There is, however, one potential GPU-specific benefit to __restrict__. Compute Capability 3.5 NVIDIA GPUs (e.g. Kepler) have a cache designed for read-only data which can, for some codes, improve data access performance. This cache can only be used for data that is read-only for the lifetime of the kernel. To use the read-only data cache, the compiler must determine that data is never written. Due to potential aliasing, the compiler can’t be sure a pointer references read-only data unless the pointer is marked with both const and __restrict__. Also, as the Kepler Tuning Guide points out, “adding these qualifiers where applicable can improve code generation quality via other mechanisms on earlier GPUs as well.”
In the following code I copy elements of array a into array b. These elements are chosen by reading an index in array c, which is initialized with random integers between 0 and the array length.
__global__ void example3a(float* a, float* b, int* c) {
int index = blockIdx.x * blockDim.x + threadIdx.x;
b[index] = a[c[index]];
}
Note that in this case there are no redundant memory accesses due to potential pointer aliasing. Each thread reads one element of c and a and writes one element of b. However, because both a and c are read-only, and I know that the data does not overlap, I can add const and __restrict__ to the above code.
__global__ void example3b(const float* __restrict__ a, float* __restrict__ b, const int* __restrict__ c) {
int index = blockIdx.x * blockDim.x + threadIdx.x;
b[index] = a[c[index]];
}
This extra information allows the CUDA compiler to use the read-only data cache and improves performance by more than 2x.
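An alternative, if you prefer not to decorate the function prototype, is the __ldg() intrinsic (compute capability 3.5 and later), which reads through the read-only data cache explicitly. A sketch; it still requires that the data is truly read-only for the lifetime of the kernel:
__global__ void example3c(const float* a, float* b, const int* c) {
  int index = blockIdx.x * blockDim.x + threadIdx.x;
  b[index] = __ldg(&a[__ldg(&c[index])]);   // route both reads through the read-only cache
}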
Conclusion
It’s important to understand pointer aliasing when writing code where every clock cycle counts. While you can sometimes explicitly write around performance problems caused by potential aliasing, using the __restrict__ keyword allows the compiler to do much of the work for you. It also allows the use of the GPU read-only data cache, potentially accelerating data movement to your kernel.
As with most code-level optimizations, your mileage may vary. Always profile your code and try to determine the bottlenecks and how far it is from hardware performance limits before spending too much time trying to optimize.
Analysis-Driven Optimization: Analyzing and Improving Performance with NVIDIA Nsight Compute, Part 2
In part 1, I introduced the code for profiling, covered the basic ideas of analysis-driven optimization (ADO), and got you started with the Nsight Compute profiler. In part 2, you apply what you learned to improve the performance of the code and then continue the analysis and optimization process.
Refactoring
To refactor the code based on the previous analysis in part 1, you observe that your kernel design and the original CPU code has an outer loop that iterates over the N data sets. This means you are using one block to compute the results for all N data sets. Because the data sets are all independent, you can easily distribute this work across N blocks, and the code refactoring at this point is simple. Get rid of the outer loop, compute or replace the loop variable k using the blockIdx.x built-in variable, and then launch the kernel with N blocks instead of one block. Create a new version of your kernel code:
template <typename T>
__global__ void gpu_version2(const T * __restrict__ input, T * __restrict__ output, const T * __restrict__ matrix, const int L, const int M, const int N){
// parallelize threadIdx.x over vector length, and blockIdx.x across k (N)
__shared__ T smem[my_L];
int idx = threadIdx.x;
int k = blockIdx.x;
T v1 = 0;
for (int i = 0; i < M; i++) // perform vector averaging
v1 += input[k*M*L+idx*M+i];
v1 /= M;
for (int i = 0; i < L; i++){ // perform matrix-vector multiply
__syncthreads();
smem[threadIdx.x] = v1 * matrix[i*L+idx];
for (int s = blockDim.x>>1; s > 0; s>>=1){
__syncthreads();
if (threadIdx.x < s) smem[threadIdx.x] += smem[threadIdx.x+s];}
if (!threadIdx.x) output[k+i*N] = smem[0];}
}
You also need to update the kernel launch:
gpu_version2<<<N, L>>>(d_input, d_output, d_matrix, L, M, N);
From this point forward, you should not have trouble with profiling due to extended profiling overhead, so you can restore the N value to its original value:
const int my_N = 1024;
If you recompile and run the code, you should now see something like the following results:
CPU execution time: 0.554662s
Kernel execution time: 0.07894s
Kernel duration: baseline 2.92s, step 1 0.0789s.
That change has made a tremendous improvement in GPU performance. To continue the ADO process, you must start again with the profiler. Disconnect from the previous session, connect to a new session, and then launch again. Alternatively, because you are using Auto Profile, and there is only one kernel, you could also choose Resume instead of Run to Next Kernel, which would profile the rest of the application until it terminates.
After launching, you should not have to make any setting changes at this point. Choose Run to Next Kernel again. After about 10 seconds, a new results tab opens, labelled Untitled Y. Your previous results are still there, in the Untitled X tab, where X and Y are numbers like 1 and 2.
A nice feature of the profiler is the baseline capability. Because you are profiling almost the same kernel, it is interesting to do a comparison in the profiler from your previous results to your current result. To do this, select the previous results tab, choose Add Baseline, and select the latest tab (Figure 1).
Not all comparisons are meaningful, because the two profiling runs had significantly different N values. Therefore, kernel duration, for example, won’t be directly comparable. However, GPU utilization has improved substantially. The intent here is to let the profiler guide your optimization efforts. Start by inspecting the rules outputs. The bottleneck rule in the SOL section states the following:
[Warning] This kernel exhibits low compute throughput and memory bandwidth utilization relative to the peak performance of this device. Achieved compute throughput and/or memory bandwidth below 60.0% of peak typically indicate latency issues. Look at Scheduler Statistics and Warp State Statistics for potential reasons.
Look at the Scheduler Statistics section (Figure 2).
There is another rule in this section, called Issue Slot Utilization, which is reporting as follows:
[Warning] Every scheduler is capable of issuing one instruction per cycle, but for this kernel each scheduler only issues an instruction every 6.8 cycles. This might leave hardware resources underutilized and may lead to less optimal performance. Out of the maximum of 16 warps per scheduler, this kernel allocates an average of 16.00 active warps per scheduler, but only an average of 0.69 warps were eligible per cycle. Eligible warps are the subset of active warps that are ready to issue their next instruction. Every cycle with no eligible warp results in no instruction being issued and the issue slot remains unused. To increase the number of eligible warps either increase the number of active warps or reduce the time the active warps are stalled.
You have the maximum of 16 warps per scheduler but most of the time, the warps are stalled. About 30% of the time, the warps are stalled so extensively that none can be issued by that scheduler in that cycle.
The Warp State Statistics section also has a rule called CPI Stall LG Throttle, as follows:
[Warning] On average each warp of this kernel spends 82.8 cycles being stalled waiting for the local/global instruction queue to be not full. This represents about 75.9% of the total average of 109.1 cycles between issuing two instructions. Typically, this stall occurs only when executing local or global memory instructions extremely frequently. If applicable, consider combining multiple lower-width memory operations into fewer wider memory operations and try interleaving memory operations and math instructions.
You can make a few other observations. The Stall LG Throttle stall reason is listed high, at over 80 cycles. Hover over any bar to get the numerical value. In the Stall LG Throttle legend (on the left side), hover to get a definition of that item, both in terms of the metric calculation used and a text description, “average # of warps resident per issue cycle, waiting for a free entry in the LSU instruction queue”. LSU is Load/Store Unit, one of the functional units in a GPU SM. For more information, see Warp Scheduler States and Metrics Decoder.
Because the directives so far have blended local and global activity together, you can also get a clue about the likely candidate by looking at the Memory Workload Analysis section (Figure 4).
The Memory (hierarchy) Chart shows on the top left arrow that the kernel is issuing instructions and transactions targeting the global memory space, but none are targeting the local memory space. Global is where you want to focus.
Assembling the observations so far, you could conjecture that because the LSU queue is backed up with instructions targeting the global space, but the overall memory utilization is only in the range of ~50% (from the SOL section), perhaps the issue is one of transaction efficiency. For global transactions, transaction efficiency is high when the number of transactions per request is minimized.
At this point, you could profile your kernel again, asking for a global transaction efficiency measurement as you did in the previous post. However, that may only confirm what you already suspect. You would also like the profiler to direct you to the instructions in the code that are giving rise to this global memory or LSU pressure.
At the top of the current profiler results tab, switch the Page: indicator from Details to Source. For more information, see Source Page. In the new page presented, switch View: from Source and SASS to Source. If you don’t see any source code, it may be that you did not include -lineinfo when compiling your code. Fortunately, your kernel is short, so scroll the window vertically to the gpu_version2 kernel source lines. You may want to adjust column widths by dragging the column dividers at the top of this pane.
Can this source view help you to confirm your suspicions about efficiency, and identify lines of code to focus your efforts? Yes. In this view, the profiler is attributing some statistics, metrics, and measurements to specific lines of code. Scroll the window horizontally until you can see both the Memory Ideal L2 Transactions Global and Memory L2 Transactions Global columns. The first column is just a scaled measurement of the number of requests, the ideal number of transactions that is the minimum. The second column is the actual number of transactions that had to be issued to satisfy those requests. The ratio of these, if greater than 1, gives an indicator of inefficiency in global memory access patterns.
By inspecting the output, you observe that for the following line of code (line 81 in figure 5), the number of ideal transactions would be 134,217,728, whereas the actual number of transactions were much larger, at 1,073,741,824, a ratio of 8:1.
v1 += input[k*M*L+idx*M+i];
This is a good indication from the profiler that this line of code is suffering from a poor or uncoalesced access pattern. On the Details page, confirm your suspicion of uncoalesced access on line 81 by looking at the Source Counters section and its associated rule output at the bottom of the section.
You see that the rule here is also reporting the issue on line 81. By comparison, the following line of code is also doing global accesses, but the corresponding ratio is exactly 1:1:
smem[threadIdx.x] = v1 * matrix[i*L+idx];
Therefore, the access pattern must be fully coalesced. Now, focus your efforts on the first line of code mentioned earlier (line 81), as your next optimization target.
Restructuring
The previous analysis showed uncoalesced access at a particular line of the kernel code. This is because the initial parallelization strategy of one thread per vector element resulted in a thread access pattern for the first phase of the problem that was columnar in memory. Adjacent threads are reading adjacent vector elements, but those adjacent vector elements are not stored in adjacent locations in memory. Your focus now is on the first phase of the algorithm (the vector averaging operation), because the profiler has indicated that the global activity during the second phase (the matrix-vector multiply) is already coalesced. To fix this, your next refactoring effort is more involved.
You must restructure the threads in the first phase, so that adjacent threads are reading adjacent elements in memory. Algorithmically, this is possible because the averaging operation moves horizontally through memory. You therefore want to assign a set of threads that work collectively, instead of just one thread, to process each vector element.
The next logical step up from one thread to a set of threads that can process adjacent elements in memory efficiently would be the warp. Refactor the averaging operation at the threadblock level to be a warp-stride loop moving horizontally, one warp per vector element moving horizontally across vectors, and a block-stride loop moving vertically, so that the threadblock as a whole strides vertically, to cover all the elements in the vector, during the averaging phase. For more information about grid-stride loops, see CUDA Pro Tip: Write Flexible Kernels with Grid-Stride Loops.
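As a reminder, a plain grid-stride loop looks like the following sketch (hypothetical kernel); the warp-stride and block-stride loops below apply the same striding idea at the warp and threadblock level:
__global__ void scale(float *data, int n, float s)
{
  // each thread starts at its global index and strides by the total grid size
  for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n; i += gridDim.x * blockDim.x)
    data[i] *= s;
}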
Block-stride and warp-stride loops are a variation, operating at the block level or warp level, respectively. Your refactored kernel looks like the following code example:
template <typename T>
__global__ void gpu_version3(const T * __restrict__ input, T * __restrict__ output, const T * __restrict__ matrix, const int L, const int M, const int N){
// parallelize threadIdx.x over vector length, and blockIdx.x across k (N)
// do initial vector reduction via warp-stride loop
__shared__ T smem[my_L];
int idx = threadIdx.x;
int idy = threadIdx.y;
int id = idy*warpSize+idx;
int k = blockIdx.x;
T v1;
for (int y = threadIdx.y; y < L; y+=blockDim.y){ // vertical block-stride loop
v1 = 0;
for (int x = threadIdx.x; x < M; x+=warpSize) // horizontal warp-stride loop
v1 += input[k*M*L+y*M+x];
for (int offset = warpSize>>1; offset > 0; offset >>= 1) // warp-shuffle reduction
v1 += __shfl_down_sync(0xFFFFFFFF, v1, offset);
if (!threadIdx.x) smem[y] = v1/M;}
__syncthreads();
v1 = smem[id];
for (int i = 0; i < L; i++){ // matrix-vector multiply
__syncthreads();
smem[id] = v1 * matrix[i*L+id];
for (int s = (blockDim.x*blockDim.y)>>1; s > 0; s>>=1){
__syncthreads();
if (id < s) smem[id] += smem[id+s];}
if (!id) output[k+i*N] = smem[0];}
}
You also need to update your kernel launch:
dim3 block(32,32);
gpu_version3<<<N, block>>>(d_input, d_output, d_matrix, L, M, N);
If you recompile and run this code, you observe another performance improvement in the kernel:
CPU execution time: 0.539879s
Kernel execution time: 0.021637s
Kernel duration: baseline 2.92s, step 1 0.0789s, step 2 0.0216s.
To continue the ADO process, return to the profiler. Choose Disconnect, Connect, Launch, and then Run to Next Kernel. After about 10 seconds, you are presented with a new report in a new tab. At this point, you may wish to switch back to the first tab, choose Clear Baselines, move to the previous tab, and choose Add Baseline. Move to the current tab and report. Figure 7 shows the results.
The bottleneck rule states:
[Warning] Compute is more heavily utilized than Memory: Look at the Compute Workload Analysis report section to see what the compute pipelines are spending their time doing. Also, consider whether any computation is redundant and could be reduced or moved to look-up tables.
You’re being directed to the Compute Workload Analysis report section (Figure 8) to determine which pipe is the most heavily utilized.
This section doesn’t have any rule warnings, but you see that the LSU pipe is the most heavily utilized. This again suggests pressure associated with memory transactions. Can you determine which memory transactions might be the biggest contributor to this? Look at the next section, Memory Workload Analysis (Figure 9).
I’ve truncated the bottom portion of the Memory Workload Analysis screen. However, you can study the Memory Chart to see that the shared-memory usage is the highest reported usage, at 49% of peak. If you are going to focus on LSU cycle pressure, the attention shifts to shared-memory activity. As a result of the sweep-style reduction construction considered across the threadblock, there are many iterations of the sweep reduction loop that have one or more entire warps that do not participate. They are predicated off, completely across the warp. These non-participating warps still contribute to shared-memory pressure, and this is reflected in the Other category in the screenshot.
Figure 10 shows the source display, focusing on the kernel code.
The profiler has indicated, based on warp-state sampling, that the following line accounts for many shared-memory-related warp stalls (line 114 in figure 10):
if (id < s) smem[id] += smem[id+s];}
This line of code is doing two loads and one store from/to shared memory, as part of your sweep reduction. Coupled with the previous analysis, this becomes the focus for your next optimization.
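One common way to relieve this kind of shared-memory pressure (a sketch only, not necessarily the exact refactoring taken in part 3) is to finish each warp's partial sum with warp shuffles, as the averaging phase already does, so that shared memory is touched only once per warp:
// lane, warp_id, val, and smem are hypothetical per-thread variables
for (int offset = warpSize >> 1; offset > 0; offset >>= 1)
  val += __shfl_down_sync(0xFFFFFFFF, val, offset);  // intra-warp reduction, no shared memory
if (lane == 0) smem[warp_id] = val;                  // one shared-memory store per warp
__syncthreads();                                     // then reduce the few per-warp results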
Summary
In this post, you continued the ADO process started in part 1, by applying what you learned in this post to refactor the code, and profile again to discover next steps. In part 3, you finish your analysis and optimization. You also perform some measurements to give you confidence that you have reached a reasonable stopping point.
The analysis work in this post was performed on a machine with the following characteristics: Ubuntu 18.04.3, CUDA 11.1, GPU Driver version 455.23.05, GCC version 7.5.0, V100-SXM2-32 GB GPU, Intel® Xeon® Gold 6130 CPU @ 2.10GHz. The code examples presented in this post are for instructional purposes only. They are not guaranteed to be defect-free or suitable for any particular purpose.
For more information, see the following resources:
Acknowledgements
The author would like to thank the following individuals for their contributions: Sagar Agrawal, Rajan Arora, Ronny Brendel, Max Katz, Felix Schmitt, Greg Smith, and Magnus Strengert.
Boosting Application Performance with GPU Memory Access Tuning
NVIDIA GPUs have enormous compute power and typically must be fed data at high speed to deploy that power. That is possible, in principle, as GPUs also have high memory bandwidth, but sometimes they need the programmer’s help to saturate that bandwidth.
In this post, we examine one method to accomplish that and apply it to an example taken from financial computing. We explain the circumstances under which this method can be expected to work well, and how to find out whether these circumstances apply to your workload.
Context
NVIDIA GPUs derive their power from massive parallelism. Many warps of 32 threads can be placed on a streaming multiprocessor (SM), awaiting their turn to execute. When one warp is stalled for whatever reason, the warp scheduler switches to another with zero overhead, making sure that the SM always has work to do.
On the high-performance NVIDIA A100 GPU (based on the NVIDIA Ampere architecture), up to 64 active warps can share an SM, each with its own resources. On top of that, the A100 has many SMs—108—that can all execute warp instructions simultaneously.
Most instructions must operate on data, and that data almost always originates in the device memory (DRAM) attached to the GPU. One of the main reasons even an abundance of warps on an SM can run out of work is that they are all waiting for data to arrive from memory. If this happens and the bandwidth to memory is not fully utilized, it may be possible to reorganize the program to improve memory access and reduce warp stalls, which in turn makes the program complete faster.
First step: Wide loads
In a previous post, Boosting Application Performance with GPU Memory Prefetching, we examined a workload that did not fully utilize the available compute and memory bandwidth resources of the GPU. We determined that prefetching data from memory before it is needed substantially reduced memory stalls and improved performance.
When prefetching is not applicable, the quest is to determine what other factors may be limiting performance of the memory subsystem. One possibility is that the rate at which requests are made of that subsystem is too high. Intuitively, you may reduce the request rate by fetching multiple words per load instruction. It is best shown with an example.
In all code examples in this post, uppercase variables are compile-time constants. BLOCKDIMX assumes the value of the predefined variable blockDim.x. For some purposes, it must be a constant known at compile time, whereas for other purposes, it is useful for avoiding computations at run time.
The original code looked like the following example, where index is a helper function to compute array indices. It implicitly assumes that just a single, one-dimensional thread block is being used, which is not the case for the motivating application from which it was derived. However, it reduces code clutter and does not change the argument.
for (pt = threadIdx.x; pt < ptmax; pt += BLOCKDIMX) {
  double best = 0.0;
#pragma unroll
  for (int k = 0; k < kmax; ++k) {
    double c = big_array[index(pt, k)];
    c += small_array[k];
    best = max(c, best);
  }
  final[pt] = best;
}
Observe that each thread loads kmax consecutive values from the suggestively named small_array. This array is sufficiently small that it fits entirely in the L1 cache, but asking it to return data at a very high rate may become problematic.
The following change recognizes that each thread can issue requests for two double-precision words in the same instruction if you restructure the code slightly and introduce the double2 data type, which is supported natively on NVIDIA GPUs. It stores two double-precision words in adjacent memory locations, which can be accessed with field selectors x and y. The reason this works is that each thread accesses successive elements of small_array. This technique is called wide loads. The inner loop over index k is now incremented by two instead of one.
for (pt = threadIdx.x; pt < ptmax; pt += BLOCKDIMX) {
  double best = 0.0;
#pragma unroll
  for (int k = 0; k < kmax; k += 2) {
    double c = big_array[index(pt, k)];
    double2 val = *(double2 *)&small_array[k];
    c += val.x;
    best = max(c, best);
    c = big_array[index(pt, k+1)];
    c += val.y;
    best = max(c, best);
  }
  final[pt] = best;
}
A few caveats are in order. First, the example did not check whether kmax is even. If not, the modified loop over k would execute an extra iteration, and you’d have to write some special code to prevent that.
Second, the example did not confirm that small_array is properly aligned on a 16-byte boundary. If not, the wide loads would fail. If it was allocated using cudaMalloc, it would automatically be aligned on a 256-byte boundary. But if it was passed to the kernel using pointer arithmetic, you’d have to carry out some checks.
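As a hedged sketch of those two checks (not taken from the original application; the helper name canUseWideLoads is hypothetical), a host-side guard before choosing the wide-load kernel might look like this:

#include <cstdint>

bool canUseWideLoads(const double *small_array, int kmax){
  bool evenTripCount = (kmax % 2) == 0;   // avoids the extra loop iteration
  bool aligned16 =
      (reinterpret_cast<std::uintptr_t>(small_array) % 16) == 0;  // double2 requires 16-byte alignment
  return evenTripCount && aligned16;
}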
Next, inspect the helper function index and discover that it is linear in pt with coefficient 1. You can apply a similar wide-load approach to values fetched from big_array by requesting two double-precision values in one instruction. The difference between accesses to big_array and to small_array is that now successive threads within a warp access adjacent array elements. The following restructured code example doubles the increments of the loop over elements of array big_array, and now each thread processes two array elements in each iteration.
for (pt = 2*threadIdx.x; pt < ptmax; pt += 2*BLOCKDIMX) {
  double best1 = 0.0, best2 = 0.0;
#pragma unroll
  for (int k = 0; k < kmax; k += 2) {
    double2 c1 = *(double2 *)&big_array[index(pt, k)];
    double2 c2 = *(double2 *)&big_array[index(pt, k+1)];
    double2 val = *(double2 *)&small_array[k];
    c1.x += val.x;
    best1 = max(c1.x, best1);
    c2.x += val.y;
    best1 = max(c2.x, best1);
    c1.y += val.x;
    best2 = max(c1.y, best2);
    c2.y += val.y;
    best2 = max(c2.y, best2);
  }
  final[pt] = best1;
  final[pt+1] = best2;
}
The same caveats as before apply, and they should now be extended to parity of ptmax and alignment of big_array. Fortunately, the application from which this example was derived satisfies all the requirements. Figure 1 shows the duration (in nanoseconds) of a set of kernels that gets repeated identically multiple times in the application. The average speedup of the kernel was 1.63x for the combination of wide loads.
Second step: Register use
You might be tempted to stop here and declare success, but a deeper analysis of the execution of the program, using NVIDIA Nsight Compute, shows that you haven’t fundamentally changed the rate of requests to the memory subsystem, even though you halved the number of load instructions.
The reason is that a warp load instruction—32 threads simultaneously issuing load instructions—results in one or more sector requests, which is the actual unit of memory access processed by the hardware. Each sector is 32 bytes, so one warp load instruction of one 8-byte double-precision word per thread results in 8 sector requests (accesses are with unit stride), and one warp load instruction of double2 words results in 16 sector requests. The total number of sector requests is the same for plain and wide loads. So, what caused the performance improvement?
To understand the code behavior, consider a resource not yet discussed: registers. These are used to store the data loaded from memory and serve as input for arithmetic instructions. Registers are a finite resource. If an SM hosts the maximum number of warps possible on the A100 GPU, 32 4-byte registers are available to each thread, which together can hold 16 double-precision words.
The compiler that translates the code into machine language is aware of this and limits the number of registers per thread. How do you determine the register use of your code and the role it plays in performance? Use the source view in Nsight Compute to see the assembly code (SASS) and C source code side by side.
The innermost loop of the code is the one that is executed most. If you select Instructions executed and ask to be taken to the line in the SASS code that has the highest number of those, you automatically land in the inner loop. If you are uncertain, you can compare the SASS and highlighted corresponding source code to confirm.
Next, identify in the SASS code of the inner loop all the instructions that load data from memory (LDG). Figure 2 shows an example of the SASS where we found the start of the inner loop. It is on line 166 where the number of times an instruction is executed suddenly jumps to its maximum value.
LDG.E.64 is the instruction you are after. It loads from global memory (DRAM) a 64-bit word with an extended address. The load of a wide word corresponds to LDG.E.128. The first parameter after the name of the load instruction (R34 in Figure 2) is the register that receives the value. As a double-precision value occupies two adjacent registers, R35 is implied in the load instruction.
Next, compare the way registers are used in the inner loop for the three versions of your code.
Recall that the compiler tries to stay within limits and sometimes has to play musical chairs with the registers. That is, if not enough registers are available to receive each unique value from memory, it reuses a register previously used in the inner loop.
The effect of that is that the previous value must be used by an arithmetic instruction so that it can be overwritten by the new value. At this time, the load from memory must wait until that instruction completes: a memory latency is exposed.
On all modern computer architectures, this latency constitutes a significant delay. On the GPU, some of it can be hidden by switching to another warp, but often not all of it. Consequently, the number of times a register is reused in the inner loop can be an indication of the slowdown of the code.
With this insight, analyze the three versions of your code and find that they experience 8, 6, and 3 memory latencies per inner loop, respectively. This explains the differences in performance shown in Figure 1.
The main reason behind the different register reuse patterns is that when two plain loads are fused into a single wide load, typically fewer address calculations are needed, and the result of an address calculation also goes into a register. With more registers holding addresses, fewer addresses are left over to act as landing zones for values fetched from memory, and you lose seats in the musical chairs game; the register pressure grows.
Third step: Launch bounds
You are not yet done. Now that you know the critical role that registers play in the performance of your program, you can review total register use by the three versions of the code. The easiest method is to inspect Nsight Compute reports again. You find that the numbers of registers used are 40, 36, and 44, respectively.
The way the compiler determines these numbers is by using sophisticated heuristics that take a large number of factors into account, including how many active warps may be present on an SM, the number of unique values to be loaded in busy loops, and the number of registers required for each operation.
If the compiler has no knowledge of the number of warps that may be present on an SM, it tries to limit the number of registers per thread to 32, because that is the number that would be available if the absolute maximum simultaneous number of warps allowed by the hardware (64) were present. In this case, you did not tell the compiler what to expect, so it did its best, but evidently determined that the code generated using just 32 registers would be too inefficient.
However, the actual size of the thread block specified in the launch statement of the kernel is 1024 threads, so 32 warps. This means that if only a single thread block is present on the SM, each thread can use up to 64 registers. At 40, 36, and 44 registers per thread of actual use, not enough registers would be available to support two or more thread blocks per SM, so exactly one is launched, and you leave 24, 28, and 20 registers per thread unused, respectively.
You can do a lot better by informing the compiler of your intent through the use of launch bounds. By telling the compiler the maximum number of threads in a thread block (1024) and also the minimum number of blocks to support simultaneously (1), it relaxes and is happy to use 63, 56, and 64 registers per thread, respectively.
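A minimal sketch of that hint follows; the kernel name and parameter list are placeholders, and only the __launch_bounds__ qualifier (a maximum of 1024 threads per block and a minimum of 1 block per SM) reflects the configuration described above.

__global__ void __launch_bounds__(1024, 1)
compute_best(const double *big_array, const double *small_array,
             double *final, int ptmax, int kmax){
  // ... loop body as in the earlier code examples ...
}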
Interestingly, the fastest version of the code is now the baseline version without any wide loads. While the combined wide loads without launch bounds gave a speedup of 1.64x, with launch bounds the speedup with wide loads becomes 1.76x, whereas the baseline code speeds up by 1.77x. This means we did not have to go to the trouble of modifying the kernel definition. Merely supplying launch bounds was enough to obtain optimal performance for this particular thread block size in this case.
Experimenting a little more with thread block sizes and the minimum number of thread blocks to be expected on the SM, we reached a speedup of 1.79x at two thread blocks of 512 threads each per SM, also for the baseline version without wide loads.
Conclusion
Efficient use of registers is critical to obtaining good performance of GPU kernels. Sometimes a technique called wide loads can give significant benefits. It reduces the number of memory addresses that are computed and must be stored in registers, leaving a larger number of registers to receive data from memory. However, giving the compiler hints about the way you launch kernels in your application may give the same benefit without having to change the kernel itself.
For more information about performance tuning and debugging, see NVIDIA Nsight Compute or Nsight Systems. The product pages list relevant posts, videos, and more.
Acknowledgements
Thanks to Mark Gebhart and Jerry Zheng of NVIDIA for providing the expertise to analyze register use in the example discussed in this post.
Boosting CUDA Efficiency with Essential Techniques for New Developers
To fully harness the capabilities of NVIDIA GPUs, optimizing NVIDIA CUDA performance is essential, particularly for developers new to GPU programming. This talk is specifically designed for those stepping into the world of CUDA, providing a solid foundation in GPU architecture principles and optimization techniques.
Athena Elafrou, a developer technology engineer at NVIDIA, leads a foundational session that dives into the basics of writing high-performance CUDA kernels tailored for NVIDIA GPUs. You’ll gain insights into critical aspects of GPU architecture, focusing on the NVIDIA H200 Tensor Core GPU, and learn how to use its features to enhance performance.
Follow along with a PDF of the session, which emphasizes fundamental memory access optimization techniques: you’ll discover how to boost memory throughput by aligning and coalescing memory accesses. The session also explores strategies to increase parallelism in your applications by improving instruction-level parallelism (ILP) and thread-level parallelism (TLP), key techniques for hiding latency and maximizing the overall throughput of your CUDA programs.
Additionally, you’ll learn how to manage atomic operations efficiently through practical examples and tested optimization techniques.
The session walks through real-world examples and performance analyses to provide actionable knowledge that you can apply directly to your CUDA development work. Whether you’re just starting with CUDA or looking to refine your skills, it will equip you with the tools needed to unlock the power of NVIDIA GPUs.
Watch the talk Introduction to CUDA Programming and Performance Optimization, explore more videos on NVIDIA On-Demand, and gain valuable skills and insights from industry experts by joining the NVIDIA Developer Program.
This content was partially crafted with the assistance of generative AI and LLMs. It underwent careful review and was edited by the NVIDIA Technical Blog team to ensure precision, accuracy, and quality.
Understanding the Visualization of Overhead and Latency in NVIDIA Nsight Systems
Recently, a user came to us in the forums. They sent a screenshot of a profiling result using NVIDIA Nsight Systems on a PyTorch program. A single launch of an element-wise operation gave way to questions about the overheads and latencies in CUDA code, and how they are visualized with the Nsight Systems GUI. This seemed like a question for which a lot of people could use an answer.
Some definitions
First, here are some definitions of latency and overhead regarding CUDA. It’s a complicated topic.
Latency
There are two common definitions of latency. Launch latency, sometimes called induction time, is the time between requesting an asynchronous task and beginning to execute it. For example, CUDA kernel launch latency could be defined as the time range from the beginning of the launch API call to the beginning of the kernel execution. There are about 20 µs of launch latency in Figure 1 between the beginning of the launch call (in the CUDA API row) and the beginning of the kernel execution (in the CUDA Tesla V100-SXM row). This definition includes the time of the launch API call.
Task latency, or total time, is the time between adding a task to the queue and the task finishing. In this post, we mostly talk about launch latency.
Latency is not always bad in asynchronous systems. Imagine that you make 10 short API calls to launch a dependent sequence of 10 large kernels. You’d expect minimal latency from the first API call to the first kernel execution. But the 10th kernel in the sequence might have a large launch latency by this definition, because the CPU enqueued the 10 launch commands and returned quickly, while the GPU has to complete the first nine kernels before it can start the 10th. The long gap between the 10th API call and the 10th launch would be a high latency, but an intentional one.
The point of using asynchronous launches from the CPU is to allow the CPU to send commands to the GPU and then perform other tasks while the GPU executes the commands.
Overhead
We define overhead as the time it takes to perform some operation that you’d ideally want to take zero time, and this ends up limiting the rate at which you can do that operation. It is time spent (latency) with no useful kernel work done. For example, consider the overhead of the CPU code for launching a CUDA kernel. If the launch API call takes 10 µs on your system, you can only launch at most 100,000 kernels per second.
Here are definitions for three kinds of overhead: CPU wrapper, memory, and GPU launch overhead.
CPU wrapper overhead
This is the overhead of the wrappers around a CUDA kernel on the host CPU side. In the Nsight Systems GUI, you would see this as the full duration of the kernel launch API call (the blue ranges on the CUDA API row in Figure 1). This includes any mutex-lock contention that occurs in the driver if doing multi-threaded launching. You can see if you are hitting mutex contention within the driver by collecting OS Runtime data, which shows any pthread_mutex_lock calls lasting above a user-settable threshold.
Nsight Systems adds a bit of overhead to capture trace data, so events may appear longer in the timeline than they would take when the app runs without the tool. If the events being traced are relatively short in duration (a few microseconds or less) and occur frequently in the workload, the overhead of Nsight Systems appears proportionally higher because of the fixed cost of tracing each event.
Memory overhead
This is the overhead of moving data back and forth between the CPU and the GPU, or from one GPU to another. For example, this would be the time it takes to read the input tensors from and write the output to DRAM. It is shown as the time range from the API call that enqueues the commands to copy the input data to the GPU’s memory until the copy is finished.
The memory overhead can be hidden with kernel launches, because the GPU can simultaneously be executing a kernel while uploading the input data for the next kernel and downloading the output from the previous kernel.
GPU launch overhead
This is the time it takes for the GPU to retrieve the command and begin executing it. Examples include:
Understanding overhead and latency in the timeline
Now that we have defined the terminology, here’s a deeper look at some Nsight Systems timelines. We discuss how to interpret what is shown.
Lifecycle of a kernel
Using the Nsight Systems GUI, you can trace the events that happen during the lifetime of a kernel.
CPU gaps
You can find an example of a gap on the CPU timeline when profiling the vectorAdd CUDA Toolkit example.
Just above the CUDA API timeline, the thread’s state indicates that it is busy, so the CPU is executing some other operations. In Nsight Systems, CPU sampling, OS Runtime, API tracing, or adding NVTX instrumentation and tracing NVTX can help you figure out what the CPU is doing between CUDA API calls. To learn more about NVIDIA Tools Extension (NVTX) API, see CUDA Pro Tip: Generate Custom Application Profile Timelines with NVTX.
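As a hedged example of that NVTX instrumentation, the sketch below wraps a CPU-side verification loop in a named range so that the gap between CUDA API calls shows up as a labeled span on the NVTX row. The function name and the verification logic are placeholders modeled on vectorAdd; the header name follows the NVTX C API shipped with the CUDA toolkit, and linking against the NVTX library is required.

#include <nvToolsExt.h>
#include <cmath>

void verifyResults(const float *h_A, const float *h_B, const float *h_C, int n){
  nvtxRangePushA("verify results");   // appears as a named range on the NVTX row
  for (int i = 0; i < n; ++i){
    if (fabsf(h_C[i] - (h_A[i] + h_B[i])) > 1e-5f){
      // ... report the mismatch ...
    }
  }
  nvtxRangePop();                     // closes the range
}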
On the GPU timeline, the GPU might be executing another context (it might have context-switched), or it might be idle while it waits for more work to be scheduled.
Sampling data was also collected, as you can see by the orange/yellow marks below the thread state timeline. Each mark represents the point when a CPU IP/backtrace sample was collected. When this screenshot was captured, the mouse (not shown) was hovering on the sampling mark just above the left side of the tooltip. The tooltip shows the CPU IP/backtrace for that thread at that moment. Looking at the vectorAdd source code, you can easily see the application was checking the results of the GPU’s calculation.
GPU context switching
The following figures are screenshots of the Nsight Systems timeline showing the important part of the vectorAdd CUDA Toolkit sample. The dGPU (Quadro GP100) row shows GPU context switching. During this time, the consistently green Run range indicates that the GPU did not switch away from vectorAdd’s context.
Host to device memory overhead
Figure 5 shows HtoD memory overhead in vectorAdd. Even though the cudaMemcpy API was used, this was an asynchronous HtoD copy from pageable memory because the CPU function returns before the GPU work completes. Using asynchronous copy operations and overlapping kernel launches with copies allows you to hide the latency of the copies. Latency hiding is an important technique in GPU programming. For more information about CUDA memcpy behavior, see API synchronization behavior.
Nsight Systems overhead
Figure 6 shows the gap between the kernel launch API and the execution of the kernel on the GPU to highlight Nsight Systems overhead, typically less than a microsecond, as seen in this screenshot. It also shows that, in this case, the GPU takes about a microsecond to begin executing the kernel after the launch API call has finished.
CPU overhead
Figure 7 shows the CPU overhead, for the full duration of the launch API call.
Stream synchronization
Figure 8 shows the matrixMul CUDA toolkit sample, where the CPU enqueues a memcpy (in red) and kernel (in blue) into a stream, and then calls cudaStreamSynchronize (in green) to block until the enqueued work in that stream completes. The kernel launch latency here is due to the GPU having to execute the stream tasks in order, so the kernel execution doesn’t begin until the preceding memcpy finishes. The long call to cudaStreamSynchronize shows the CPU waiting for the GPU work to complete. The CPU could have done other work before calling cudaStreamSynchronize, enabling the CPU and GPU to execute in parallel.
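The pattern in Figure 8 roughly corresponds to the following sketch. The kernel name, the helper function, and the use of pinned host memory are assumptions for illustration, not code taken from the matrixMul sample.

#include <cuda_runtime.h>

__global__ void myKernel(float *data){ /* placeholder GPU work */ }

void launchAndWait(float *d_A, const float *h_A, size_t bytes, dim3 grid, dim3 block){
  cudaStream_t stream;
  cudaStreamCreate(&stream);
  // Enqueue work; both calls return to the CPU almost immediately.
  cudaMemcpyAsync(d_A, h_A, bytes, cudaMemcpyHostToDevice, stream);  // red range (h_A should be pinned)
  myKernel<<<grid, block, 0, stream>>>(d_A);                         // blue range, runs after the copy
  // The CPU is free to do other useful work here before synchronizing.
  cudaStreamSynchronize(stream);   // green range: CPU blocks until the stream drains
  cudaStreamDestroy(stream);
}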
Launch latency
Regarding the kernel launch latency, matrixMul also shows the CPU launching a sequence of kernels asynchronously without waiting for them, and the GPU executing them in order. Figures 9-11 show how the kernel launch latency increases for each launch, because the CPU API calls are short, and the GPU kernel executions are longer.
This increasing kernel launch latency does not indicate inefficiency. It shows the CPU getting ahead of the GPU, so that the CPU is free to do other tasks while the GPU executes work in the queue. These figures show that matrixMul is doing a good job of keeping work queued up for the GPU, as there are no gaps in GPU work for that part of the timeline.
The first challenge of GPU programming is avoiding CPU bottlenecks and keeping the GPU busy—you want the GPU to be the bottleneck. When you have optimized the CPU part of your program to where Nsight Systems shows that the GPU is the bottleneck, it’s time to use NVIDIA Nsight Compute to profile individual kernels and investigate how to make them more efficient. Or, you could use faster or more GPUs to make the GPU part of the program execute faster.
Want to know what proportion of the kernel’s total duration was due to which work? Nsight Systems is the tool for seeing when events start and end on a timeline; all the work done by the kernel—that is, arithmetic and memory access instructions—occurs within the blue bar. To understand what is happening inside the kernel, use NVIDIA Nsight Compute. That tool isolates individual kernels and does deep-dive analysis on them. It takes longer to run, but you get more detailed information.
Call to action
Get NVIDIA Nsight Systems and NVIDIA Nsight Compute from the NVIDIA CUDA Toolkit public download. You can also obtain the most recent Nsight tools, with enhancements and fixes beyond the version shipping in the NVIDIA CUDA Toolkit, from the Nsight Systems, Nsight Compute, or Nsight Graphics pages.
Want more information?
This post is part of a series that describes the new Nsight family of tools, shows the functionality, and explains how to move your development to the new tools. Check the NVIDIA Technical Blog for additional posts covering related topics.
Previous entries:
To see the tools in action, check out the following links featuring videos from recent GTCs:
NVIDIA Nsight Systems
NVIDIA Nsight Compute
NVIDIA Nsight Graphics
We’ve also covered NVIDIA Nsight tools in older posts, including Nsight Systems Exposes New GPU Optimization Opportunities and https://devtalk.nvidia.com/.
Have a question?
Post it to the NVIDIA forums using either NVIDIA Nsight Systems or NVIDIA Nsight Compute. Drop a message at [email protected] or [email protected]. Or just choose Feedback in the application to let us know what you are seeing and what you think.
Advanced Kernel Profiling with the Latest Nsight Compute
NVIDIA Nsight Compute is an interactive kernel profiler for CUDA applications. It provides detailed performance metrics and API debugging through a user interface and a command-line tool. Nsight Compute 2022.1 brings updates that improve data collection modes, enabling new use cases and options for performance profiling.
What’s New
Range Replay
This release of Nsight Compute extends the existing replay modes with the highly requested feature of Range Replay. Range Replay captures and replays complete ranges of CUDA API calls and kernel launches within the profiled application. Metrics are associated with the entire range as opposed to individual kernels. This allows the tool to execute kernels without serialization and to support profiling kernels that need to run concurrently for correctness or performance reasons. A range consists of a start and an end marker and includes all CUDA API calls and kernels launched between these markers from any CPU thread.
Range markers can be defined using either of two mechanisms (see the example below).
For complete details, see the “Replay” section in Nsight Compute’s Kernel Profiling Guide.
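As one illustration—an assumption based on common CUDA practice rather than a quote from the guide—a range can be bracketed with the CUDA profiler start/stop API; NVTX ranges are another way to mark regions, and the Kernel Profiling Guide has the authoritative details. The kernels below are placeholders.

#include <cuda_profiler_api.h>

__global__ void kernelA(){ }
__global__ void kernelB(){ }

void profiledRange(){
  cudaProfilerStart();     // range start marker
  kernelA<<<1, 32>>>();
  kernelB<<<1, 32>>>();    // both launches, and any API calls in between, belong to the range
  cudaProfilerStop();      // range end marker
}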
Memory Analysis
When profiling on A100, a new L2 Cache Eviction Policies table in the Memory Analysis section helps you understand the number of accesses and achieved hit rates by the various cache eviction policies. In the same section, the L2 Cache table now has a new ECC row to show traffic created from enabling hardware Error Correction Code on the GPU.
Guided Analysis
Nsight Compute now makes it easier to select initial analysis targets in multiresult collection by dynamically selecting between the Summary and Details pages when opening a report. Rules were extended to detect non-fused floating-point instructions as an optimization opportunity. Last, but not least, when the Uncoalesced Memory Access rules are triggered, they show a table of the five most valuable instances, making it easier to inspect and resolve them on the Source page.
Additional improvements
Further improvements include an Occupancy Calculator auto-update. There is also a new ‘Thread Instructions Executed’ metric and register name tooltips for the Register Dependency columns in the Source page, as well as NVLink updates.
At GTC in November of 2021, we released insightful assets showcasing Nsight tools capabilities:
Resources
Using Nsight Compute to Inspect your Kernels
By now, hopefully you have read the first two blogs in this series, “Migrating to NVIDIA Nsight Tools from NVVP and Nvprof” and “Transitioning to Nsight Systems from NVIDIA Visual Profiler / nvprof,” and discovered that NVIDIA has added two new tools, Nsight Compute and Nsight Systems, to the repertoire of CUDA tools available for developers. These tools become more and more important on newer GPU architectures. For the example project in this blog, the new tools are necessary to get the results we are after on Turing architecture GPUs and beyond.
As covered previously, Nsight Compute and Nsight Systems differ in their purpose and functionality, so profiling activities will be accomplished in one or the other of these new tools. One of the main purposes of Nsight Compute is to provide access to kernel-level analysis using GPU performance metrics. If you’ve used either the NVIDIA Visual Profiler, or nvprof (the command-line profiler), you may have inspected specific metrics for your CUDA kernels. This blog focuses on how to do that using Nsight Compute. Many of the other profiler activities you may be interested in (e.g. inspecting timelines, measuring activity durations, etc.) can be performed using Nsight Systems.
Getting Started
We’re going to analyze a code that is a variant of the vector add code that was used in the previous blog. In this case, we’ll be looking at a CUDA code that does a matrix-matrix element-wise add operation, effectively a vector add, but using a 2D CUDA grid configuration, along with 2D (i.e. doubly-subscripted) array access. The code is still quite simple:
#include <iostream>

const size_t size_w = 1024;
const size_t size_h = 1024;
typedef unsigned mytype;
typedef mytype arr_t[size_w];
const mytype A_val = 1;
const mytype B_val = 2;

__global__ void matrix_add_2D(const arr_t * __restrict__ A, const arr_t * __restrict__ B, arr_t * __restrict__ C, const size_t sw, const size_t sh){
  size_t idx = threadIdx.x+blockDim.x*(size_t)blockIdx.x;
  size_t idy = threadIdx.y+blockDim.y*(size_t)blockIdx.y;
  if ((idx < sh) && (idy < sw)) C[idx][idy] = A[idx][idy] + B[idx][idy];
}

int main(){
  arr_t *A,*B,*C;
  size_t ds = size_w*size_h*sizeof(mytype);
  cudaError_t err = cudaMallocManaged(&A, ds);
  if (err != cudaSuccess) {std::cout << "CUDA error: " << cudaGetErrorString(err) << std::endl; return 0;}
  cudaMallocManaged(&B, ds);
  cudaMallocManaged(&C, ds);
  for (int x = 0; x < size_h; x++)
    for (int y = 0; y < size_w; y++){
      A[x][y] = A_val;
      B[x][y] = B_val;
      C[x][y] = 0;}
  int attr = 0;
  cudaDeviceGetAttribute(&attr, cudaDevAttrConcurrentManagedAccess,0);
  if (attr){
    cudaMemPrefetchAsync(A, ds, 0);
    cudaMemPrefetchAsync(B, ds, 0);
    cudaMemPrefetchAsync(C, ds, 0);}
  dim3 threads(32,32);
  dim3 blocks((size_w+threads.x-1)/threads.x, (size_h+threads.y-1)/threads.y);
  matrix_add_2D<<<blocks,threads>>>(A,B,C, size_w, size_h);
  cudaDeviceSynchronize();
  err = cudaGetLastError();
  if (err != cudaSuccess) {std::cout << "CUDA error: " << cudaGetErrorString(err) << std::endl; return 0;}
  for (int x = 0; x < size_h; x++)
    for (int y = 0; y < size_w; y++)
      if (C[x][y] != A_val+B_val) {std::cout << "mismatch at: " << x << "," << y << " was: " << C[x][y] << " should be: " << A_val+B_val << std::endl; return 0;}
  std::cout << "Success!" << std::endl;
  return 0;
}
Some highlights:
The above code hopefully seems pretty straightforward. As a CUDA developer, you probably know that two of the most important optimization priorities for any CUDA code are to expose enough parallel work to the GPU and to make efficient use of the memory subsystem(s). We’ll focus on that second objective. Since our code only makes use of global memory, we’re interested in efficient use of global memory. An important efficiency objective is to strive for coalesced access to global memory, for both load and store operations.
If you’ve profiled CUDA codes already, you may have attempted to verify, using the profiler, that global memory accesses are coalesced. With the Visual Profiler (nvvp) or nvprof, the command line profiler, this is fairly quick and easy to determine using metrics such as gld_efficiency (global load efficiency) and gst_efficiency (global store efficiency).
Which Metrics to Use?
This brings us to our first point of departure. Generally speaking, the metrics available using Nsight Compute are not the same as those that were available with previous tools. For example, there is no exact corresponding metric (at this time) that provides the same information as the gld_efficiency and gst_efficiency metrics we might previously have used to ascertain whether our kernel does a good job of coalesced loads and stores. So there are two key points here: in general, we need to use a different set of metrics, and we may also have to come up with alternate techniques to get the desired information.
First of all, what are the new metrics? There are two ways to review them:
nv-nsight-cu-cli --devices 0 --query-metrics >my_metrics.txt
(you may need to specify the full path, see below). There are also command line switches to instead query metrics for any specific architecture, regardless of the GPUs you actually have.
Considering a global load or store request, the definition of high-efficiency is when the number of memory (or cache) transactions that are needed to service the request are minimized. For a global load request of a 32-bit quantity per thread, such as what our example code is doing for the load from A and B, we need a total of 128 bytes to satisfy each request warp-wide. Therefore, inspecting transactions per request gives us similar information to the gld_efficiency and gst_efficiency metrics, if we have some idea of how many transactions should be needed per request in the best case. For Maxwell GPUs and newer, generally the minimum number here would be four transactions to cover a 128-byte warp-wide request (each transaction is 32 bytes). If we observe more than that, it indicates less than optimal efficiency.
Unfortunately, we also don’t have corresponding “new tools” metrics for the gld_transactions_per_request or the gst_transactions_per_request metric we might previously used. However, these metrics are essentially a fraction where the numerator is the total number of transactions, and the denominator is total number of requests. At least for compute capability 7.0 and newer architectures (currently Volta and Turing) we can find metrics (using the comparison table in the above mentioned Transition Guide) to represent the numerator and denominator. For global load transactions, we will use l1tex__t_sectors_pipe_lsu_mem_global_op_ld.sum and for global load requests we will use l1tex__t_requests_pipe_lsu_mem_global_op_ld.sum. At this point you might be wondering about the length of these metric names and naming convention. There is a method to the naming, and you can review it in the documentation. The naming convention is intended to make it easier to understand what a metric represents from its name. Briefly, the metric name preceding the period identifies where in the architecture the data is being collected, and the token after the period identifies mathematically how the number is gathered. For most base metric names on Volta and newer, suffixes (like .sum, .avg, …) exist that together with the base name make up the actual metric name that can be collected. Once you understand this concept for one metric, you can easily apply it to almost any other available metric on this architecture.
Why the change in metrics? Nsight Compute design philosophy has been to expose each GPU architecture and memory system in greater detail. Many more performance metrics are provided, mapping the specific architectural traits in greater detail. The customizable analysis section and rules were also designed to provide a flexible mechanism to build more advanced analyzers combining a greater number of performance counters.
Since we are discussing memory metrics, the following chart shows a GPU memory model with various metrics identified:
l1tex__t_sectors_pipe_lsu_mem_global_op_ld.sum, .per_second, l1tex__t_requests_pipe_lsu_mem_global_op_ld.sum
l1tex__t_bytes_pipe_lsu_mem_global_op_st.sum, .per_second
l1tex__t_sectors_pipe_lsu_mem_local_op_ld.sum, .per_second
l1tex__t_sectors_pipe_lsu_mem_local_op_st.sum, .per_second
smsp__inst_executed_op_shared_ld.sum, .per_second
smsp__inst_executed_op_shared_st.sum, .per_second
lts__t_sectors_srcunit_tex_op_read.sum, .per_second
lts__t_sectors_srcunit_tex_op_write.sum, .per_second
lts__t_sectors_aperture_sysmem_op_read.sum, .per_second
lts__t_sectors_aperture_sysmem_op_write.sum, .per_second
dram__bytes_read.sum, .per_second
dram__bytes_write.sum, .per_second
In the above table, each line corresponds to a numbered path in the diagram. The first entry in each line indicates a cumulative metric (for that path). By appending .per_second to that metric, it can be converted into a throughput metric. For example, dram__bytes_write.sum is a cumulative metric, and dram__bytes_write.sum.per_second is a throughput metric. This table is not an exhaustive list of metrics applicable to each path, but gives some representative examples.
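For example, using the same CLI invocation style shown later in this post, you could collect both the cumulative and throughput variants of the DRAM read metric in one run (metric availability depends on your GPU architecture):

$ nv-nsight-cu-cli --metrics dram__bytes_read.sum,dram__bytes_read.sum.per_second ./example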
Getting familiar with the Nsight Compute CLI
If you’re familiar with using nvprof, using the Nsight Compute CLI (command line interface) may be the most comfortable. As we’ll see, we can get similar data using either the CLI or the GUI (graphical user interface), but the CLI might be easier if you know specifically what data you are looking for (e.g. running before/after experiments, as we will do here, capturing the same metrics), and/or if you want to use command line style automation (e.g. scripts to compile data). So let’s start there. For this discussion, we will use the Linux tool, although Windows command line usage should be quite similar (installation paths and path-related characteristics will differ).
One of the first things to know is that the path to the tool may not be set up by default, nor is it part of the /usr/local/cuda/bin path that you may have set up if you followed the CUDA toolkit install instructions carefully. (In later CUDA toolkits, the path should be set up by default during installation.) The Nsight Compute tool is installed with CUDA toolkit versions 10.0 and later (I strongly recommend using the latest version, at least from CUDA 10.1 Update 1 or later). If you want to or need to, you can install the Nsight Compute tool directly using a standalone installer from https://developer.nvidia.com/nsight-compute. This is also a way to get the latest version.
So you’ll either want to add the path to the Nsight Compute binaries to your PATH environment variable, or else specify the full path when executing it. On CUDA 10.1, the full path is /usr/local/cuda/NsightCompute-2019.3/, so to fully specify the CLI executable, use /usr/local/cuda/NsightCompute-2019.3/nv-nsight-cu-cli. At this point you may want to try running the query-metrics command from above. For the commands presented in this blog, we will assume that you have added the path to your PATH variable.
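For example, on bash you might add the CUDA 10.1 location quoted above to your PATH (adjust the directory for your toolkit version and installation path):

$ export PATH=/usr/local/cuda/NsightCompute-2019.3:$PATH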
While it is not the focus of this blog, there are quite a few capabilities that Nsight Compute offers. We can start by running it in “details page mode” on our executable. Using the code above, compile with nvcc -arch=sm_70 example.cu -o example, modifying the -arch specification to match your GPU. The examples here will use a Volta device (sm_70), but should run equally well on a Turing device. You will not be able to follow this example exactly on an earlier GPU (e.g. Kepler, Maxwell, Pascal) architecture because the available metrics vary between GPUs of compute capability 7.0 and higher, compared to GPUs of compute capability 6.x. Furthermore, use of Nsight Compute is not supported on devices of compute capability 6.0 and lower. To show the details page, try the following:
$ /usr/local/cuda/NsightCompute-2019.3/nv-nsight-cu-cli ./example
==PROF== Connected to process 30244
==PROF== Profiling "matrix_add_2D" - 1: 0%....50%....100% - 48 passes
Success!
==PROF== Disconnected from process 30244
[30244] [email protected]
matrix_add_2D, 2019-Jun-06 23:12:59, Context 1, Stream 7
Section: GPU Speed Of Light
----------------------------------------- --------------- ------------------------------
Memory Frequency cycle/usecond 866.22
SOL FB % 21.46
Elapsed Cycles cycle 73,170
SM Frequency cycle/nsecond 1.21
Memory [%] % 56.20
Duration usecond 60.16
SOL L2 % 53.58
SOL TEX % 60.21
SM Active Cycles cycle 68,202.96
SM [%] % 8.97
----------------------------------------- --------------- ------------------------------
Section: Compute Workload Analysis
----------------------------------------- --------------- ------------------------------
Executed Ipc Active inst/cycle 0.18
Executed Ipc Elapsed inst/cycle 0.17
Issue Slots Max % 5.00
Issued Ipc Active inst/cycle 0.18
Issue Slots Busy % 4.57
SM Busy % 9.61
----------------------------------------- --------------- ------------------------------
Section: Memory Workload Analysis
----------------------------------------- --------------- ------------------------------
Memory Throughput Gbyte/second 251.25
Mem Busy % 56.20
Max Bandwidth % 53.58
L2 Hit Rate % 89.99
Mem Pipes Busy % 3.36
L1 Hit Rate % 90.62
----------------------------------------- --------------- ------------------------------
Section: Scheduler Statistics
----------------------------------------- --------------- ------------------------------
Active Warps Per Scheduler warp 11.87
Eligible Warps Per Scheduler warp 0.15
No Eligible % 95.39
Instructions Per Active Issue Slot inst/cycle 1
Issued Warp Per Scheduler 0.05
One or More Eligible % 4.61
----------------------------------------- --------------- ------------------------------
Section: Warp State Statistics
----------------------------------------- --------------- ------------------------------
Avg. Not Predicated Off Threads Per Warp 29.87
Avg. Active Threads Per Warp 32
Warp Cycles Per Executed Instruction cycle 261.28
Warp Cycles Per Issued Instruction 257.51
Warp Cycles Per Issue Active 257.51
----------------------------------------- --------------- ------------------------------
Section: Instruction Statistics
----------------------------------------- --------------- ------------------------------
Avg. Executed Instructions Per Scheduler inst 3,072
Executed Instructions inst 983,040
Avg. Issued Instructions Per Scheduler inst 3,116.96
Issued Instructions inst 997,428
----------------------------------------- --------------- ------------------------------
Section: Launch Statistics
----------------------------------------- --------------- ------------------------------
Block Size 1,024
Grid Size 1,024
Registers Per Thread register/thread 16
Shared Memory Configuration Size byte 0
Dynamic Shared Memory Per Block byte/block 0
Static Shared Memory Per Block byte/block 0
Threads thread 1,048,576
Waves Per SM 6.40
----------------------------------------- --------------- ------------------------------
Section: Occupancy
----------------------------------------- --------------- ------------------------------
Block Limit SM block 32
Block Limit Registers block 4
Block Limit Shared Mem block inf
Block Limit Warps block 2
Achieved Active Warps Per SM warp 48.50
Achieved Occupancy % 75.78
Theoretical Active Warps per SM warp/cycle 64
Theoretical Occupancy % 100
----------------------------------------- --------------- ------------------------------
That’s a lot of output. (If your code has multiple kernel invocations, details page data will be gathered and displayed for each.) We won’t try and go through it all in detail, but notice there are major sections for SOL (speed of light – comparison against best possible behavior), compute analysis, memory analysis, scheduler, warp state, instruction and launch statistics, and occupancy analysis. You can optionally select which of these sections are collected and displayed with command-line parameters. Command-line parameter help is available in the usual way (--help), and also in the documentation. Note the choice of sections and metrics will affect profiling time in general, as well as the size of the output.
We could possibly make some inferences about our objective (global load/store efficiency) using the above data, but let’s focus on the metrics of interest. We gather these in a fashion very similar to how you would do it with nvprof:
$ nv-nsight-cu-cli --metrics l1tex__t_sectors_pipe_lsu_mem_global_op_ld.sum,l1tex__t_requests_pipe_lsu_mem_global_op_ld.sum ./example
==PROF== Connected to process 30749
==PROF== Profiling "matrix_add_2D" - 1: 0%....50%....100% - 4 passes
Success!
==PROF== Disconnected from process 30749
[30749] [email protected]
matrix_add_2D, 2019-Jun-06 23:25:45, Context 1, Stream 7
Section: Command line profiler metrics
------------------------------------------------ ------------ ------------------------------
l1tex__t_requests_pipe_lsu_mem_global_op_ld.sum request 65,536
l1tex__t_sectors_pipe_lsu_mem_global_op_ld.sum sector 2,097,152
------------------------------------------------ ------------ ------------------------------
This first metric above represents the denominator (requests) of the desired measurement (transactions per request) and the second metric represents the numerator (transactions). If we divide these, we get 32 transactions per request. Therefore, each thread in the warp is generating a separate transaction. This is a good indication that our access pattern (reading, in this case) is not coalesced.
Using the Nsight Compute GUI
What if we wanted to gather these metrics using the GUI? One requirement (for linux), similar to using the NVIDIA Visual Profiler (nvvp) on linux, is that we will need an X session to run the GUI app version in. To get started, from an X-capable session if you were using the Visual Profiler, you would type nvvp at the command prompt. To use Nsight Compute GUI, type:
/usr/local/cuda/NsightCompute-2019.3/nv-nsight-cu
Or just nv-nsight-cu if you already added the path to your PATH variable. Next you should see a window open that looks something like below:
For the easiest start, we can click on Continue under Quick Launch as circled above. (Alternatively, you can create a project by selecting the Create New Project button under New Project.) Next, a profiling configuration window should open (“Connect to process”); you can click on Additional Options at the bottom of the window, then click on the Other tab. We will then enter input on the Application Executable:, Output File:, and Metrics: lines:
Here we entered the path and name of the executable to be profiled (example), the file name where we will store the metric results, and the comma-separated metric names:
l1tex__t_sectors_pipe_lsu_mem_global_op_ld.sum,l1tex__t_requests_pipe_lsu_mem_global_op_ld.sum
After that, you can minimize the Additional Options section and click the blue Launch button. The profiler will then run and capture the requested data, displaying it like this:
In the picture above, the requested metric data (shown underlined in red above) as well as one other collected section are reported (in this case, Memory Workload Analysis). Note the file saved to disk in this case is not human-readable, but is in a report format designed to be viewed (opened) from the Nsight Compute GUI. For a human-readable file copy, most pages in the report have export buttons available, usually in the upper-right corner.
If you want to explore GUI features in more detail, the documentation contains a quick-start section introducing the GUI.
Fixing the Code
The reason for the low-efficiency (high number of transactions per request) in this code is due to our method of 2D indexing:
... C[idx][idy] = A[idx][idy] + B[idx][idy];
The index built with threadIdx.x (i.e. idx) should appear in the last subscript for coalesced access across a warp; instead, it appears in the first subscript. While either method can give correct results, they are not the same from a performance perspective. This arrangement results in each thread in a warp accessing data in a “column” in memory, rather than a “row” (i.e. adjacent). We can fix this by modifying our kernel code as follows:
__global__ void matrix_add_2D(const arr_t * __restrict__ A, const arr_t * __restrict__ B, arr_t * __restrict__ C, const size_t sw, const size_t sh){
  size_t idx = threadIdx.x+blockDim.x*(size_t)blockIdx.x;
  size_t idy = threadIdx.y+blockDim.y*(size_t)blockIdx.y;
  if ((idy < sh) && (idx < sw)) C[idy][idx] = A[idy][idx] + B[idy][idx];
}
The only change is to the last line of code, where we reversed the usage of idx and idy. When we recompile and run the same profiling experiment on this modified code, we see:
$ nv-nsight-cu-cli --metrics l1tex__t_sectors_pipe_lsu_mem_global_op_ld.sum,l1tex__t_requests_pipe_lsu_mem_global_op_ld.sum ./example
==PROF== Connected to process 5779
==PROF== Profiling "matrix_add_2D" - 1: 0%....50%....100% - 4 passes
Success!
==PROF== Disconnected from process 5779
[5779] [email protected]
matrix_add_2D, 2019-Jun-11 12:01:26, Context 1, Stream 7
Section: Command line profiler metrics
----------------------------------------------- --------------- ------------
l1tex__t_requests_pipe_lsu_mem_global_op_ld.sum request 65,536
l1tex__t_sectors_pipe_lsu_mem_global_op_ld.sum sector 262,144
----------------------------------------------- --------------- ------------
Now the ratio of the metrics is 4:1 (transactions per request), indicating the desired transaction size of 32 bytes is achieved, and the efficiency of loads (and stores) is substantially improved over the previous case.
Since this work involves a comparison of a new result to an older (comparable) result, we can demonstrate an additional feature of the GUI. We can use the GUI to collect profiling results for both cases, and show the comparison. We collect the first set of results as described above. Leave the GUI open. Then select the Connect button in the upper-left corner of the GUI, and simply change the output file to a new name. If needed, you should also change the file name to be profiled to the modified version. After doing this, the blue Launch button is available again. Press the Launch button to create a New Results tab with the data from the new, modified code run. Finally, select the Original Results tab, then press the Add Baseline button at the top. Then select the New Results tab, and any differences in metrics are reported:
In the above case, we see the improved metric is shown as an 87.5% reduction compared to the baseline (an 8:1 reduction in transactions).
So does this help? The reason we are interested in making this change is that improving memory usage efficiency should improve the performance of this memory-bound code, meaning things should run faster. To verify that, we can use the Nsight Systems profiler covered in the previous blog to check the kernel duration before and after the change. To do this, we could use the Nsight Systems CLI with a command similar to the first CLI command presented in the previous blog (requires Nsight Systems version 2019.3.6 or newer):
$ nsys profile -o example.nsysprofout --stats=true ./example
However, since the focus of this blog is on Nsight Compute, we could make a similar measurement using the Elapsed Cycles data from the GPU SOL report section. We can also use the comparison method outlined in the last section. In the GUI, we can start by selecting the Connect button in the upper left hand corner, to open the profiling configuration settings. Select the Additional Options drop-down again, and you can clear out the metrics from the Other tab. Now select the Sections tab, and select the GPU Speed of Light section (and you can deselect all other sections, to simplify the output and reduce profiling time). You may also need to change the output file name for this new profiling session. The blue Launch button should then appear.
Click the Launch button to collect the new profiling data. As in the previous activity, we will repeat these steps for the original version of the application and also for the improved version. After that, we can then set the original version as a baseline, and see the improvement in the elapsed cycles SOL output:
Based on the above data, we see the change resulted in about a 68% reduction in kernel execution duration (elapsed cycles).
Careful study of the other data contained in the Memory Analysis sections of the Nsight Compute output (whether in the GUI or CLI output) will also show the beneficial effect of this change on other analysis data.
What Else is New?
There are many new features in Nsight Compute compared to the NVIDIA Visual Profiler and nvprof, and we’ve only touched on a few in this blog.
New features in Nsight Compute GUI compared to Visual Profiler:
New features in Nsight Compute GUI and CLI compared to Visual Profiler/nvprof:
Conclusion
The new tools are intended to provide the same (and better) capability compared to nvprof and the Visual Profiler, but they require some new setup and new methods to get similar results. With respect to metrics profiling, which is the primary focus of this blog, it’s important to become familiar with the new metrics and, if need be, to synthesize the data you are looking for from combinations of these metrics. For users transitioning from nvprof, the Transition Guide in the documentation for Nsight Compute will be especially helpful. Looking for more help or have additional questions? Visit the NVIDIA Developer forums and browse or ask a question in the Nsight Compute forum.
How to Access Global Memory Efficiently in CUDA C/C++ Kernels
In the previous two posts we looked at how to move data efficiently between the host and device. In this sixth post of our CUDA C/C++ series we discuss how to efficiently access device memory, in particular global memory, from within kernels.
There are several kinds of memory on a CUDA device, each with different scope, lifetime, and caching behavior. So far in this series we have used global memory, which resides in device DRAM, for transfers between the host and device as well as for the data input to and output from kernels. The name global here refers to scope, as it can be accessed and modified from both the host and the device. Global memory can be declared in global (variable) scope using the __device__ declaration specifier, as in the first line of the following code snippet, or dynamically allocated using cudaMalloc() and assigned to a regular C pointer variable, as in the cudaMalloc() call in the snippet. Global memory allocations can persist for the lifetime of the application. Depending on the compute capability of the device, global memory may or may not be cached on the chip.
__device__ int globalArray[256];
void foo()
{
...
int *myDeviceMemory = 0;
cudaError_t result = cudaMalloc(&myDeviceMemory, 256 * sizeof(int));
...
}
Before we go into global memory access performance, we need to refine our understanding of the CUDA execution model. We have discussed how threads are grouped into thread blocks, which are assigned to multiprocessors on the device. During execution there is a finer grouping of threads into warps. Multiprocessors on the GPU execute instructions for each warp in SIMD (Single Instruction Multiple Data) fashion. The warp size (effectively the SIMD width) of all current CUDA-capable GPUs is 32 threads.
Global Memory Coalescing
Grouping of threads into warps is not only relevant to computation, but also to global memory accesses. The device coalesces global memory loads and stores issued by threads of a warp into as few transactions as possible to minimize DRAM bandwidth (on older hardware of compute capability less than 2.0, transactions are coalesced within half warps of 16 threads rather than whole warps). To make clear the conditions under which coalescing occurs across CUDA device architectures we run some simple experiments on three Tesla cards: a Tesla C870 (compute capability 1.0), a Tesla C1060 (compute capability 1.3), and a Tesla C2050 (compute capability 2.0).
We run two experiments that use variants of an increment kernel shown in the following code (also available on GitHub), one with an array offset that can cause misaligned accesses to the input array, and the other with strided accesses to the input array.
#include <stdio.h>
#include <assert.h>
#include <string.h>
#include <stdlib.h>
// Convenience function for checking CUDA runtime API results
// can be wrapped around any runtime API call. No-op in release builds.
inline
cudaError_t checkCuda(cudaError_t result)
{
#if defined(DEBUG) || defined(_DEBUG)
  if (result != cudaSuccess) {
    fprintf(stderr, "CUDA Runtime Error: %s\n", cudaGetErrorString(result));
    assert(result == cudaSuccess);
  }
#endif
  return result;
}
template <typename T>
__global__ void offset(T* a, int s)
{
  int i = blockDim.x * blockIdx.x + threadIdx.x + s;
  a[i] = a[i] + 1;
}
template <typename T>
__global__ void stride(T* a, int s)
{
  int i = (blockDim.x * blockIdx.x + threadIdx.x) * s;
  a[i] = a[i] + 1;
}
template <typename T>
void runTest(int deviceId, int nMB)
{
  int blockSize = 256;
  float ms;
  T *d_a;
  cudaEvent_t startEvent, stopEvent;
  int n = nMB*1024*1024/sizeof(T);
  // NB: d_a(33*nMB) for stride case
  checkCuda( cudaMalloc(&d_a, n * 33 * sizeof(T)) );
  checkCuda( cudaEventCreate(&startEvent) );
  checkCuda( cudaEventCreate(&stopEvent) );
  printf("Offset, Bandwidth (GB/s):\n");
  offset<<<n/blockSize, blockSize>>>(d_a, 0); // warm up
  for (int i = 0; i <= 32; i++) {
    checkCuda( cudaMemset(d_a, 0, n * sizeof(T)) );
    checkCuda( cudaEventRecord(startEvent,0) );
    offset<<<n/blockSize, blockSize>>>(d_a, i);
    checkCuda( cudaEventRecord(stopEvent,0) );
    checkCuda( cudaEventSynchronize(stopEvent) );
    checkCuda( cudaEventElapsedTime(&ms, startEvent, stopEvent) );
    printf("%d, %f\n", i, 2*nMB/ms);
  }
  printf("\n");
  printf("Stride, Bandwidth (GB/s):\n");
  stride<<<n/blockSize, blockSize>>>(d_a, 1); // warm up
  for (int i = 1; i <= 32; i++) {
    checkCuda( cudaMemset(d_a, 0, n * sizeof(T)) );
    checkCuda( cudaEventRecord(startEvent,0) );
    stride<<<n/blockSize, blockSize>>>(d_a, i);
    checkCuda( cudaEventRecord(stopEvent,0) );
    checkCuda( cudaEventSynchronize(stopEvent) );
    checkCuda( cudaEventElapsedTime(&ms, startEvent, stopEvent) );
    printf("%d, %f\n", i, 2*nMB/ms);
  }
  checkCuda( cudaEventDestroy(startEvent) );
  checkCuda( cudaEventDestroy(stopEvent) );
  cudaFree(d_a);
}
int main(int argc, char **argv)
{
  int nMB = 4;
  int deviceId = 0;
  bool bFp64 = false;
  for (int i = 1; i < argc; i++) {
    if (!strncmp(argv[i], "dev=", 4))
      deviceId = atoi((char*)(&argv[i][4]));
    else if (!strcmp(argv[i], "fp64"))
      bFp64 = true;
  }
  cudaDeviceProp prop;
  checkCuda( cudaSetDevice(deviceId) );
  checkCuda( cudaGetDeviceProperties(&prop, deviceId) );
  printf("Device: %s\n", prop.name);
  printf("Transfer size (MB): %d\n", nMB);
  printf("%s Precision\n", bFp64 ? "Double" : "Single");
  if (bFp64) runTest<double>(deviceId, nMB);
  else       runTest<float>(deviceId, nMB);
}
This code can run both offset and stride kernels in either single (default) or double precision by passing the “fp64” command line option. Each kernel takes two arguments, an input array and an integer representing the offset or stride used to access the elements of the array. The kernels are called in loops over a range of offsets and strides.
Misaligned Data Accesses
The results for the offset kernel on the Tesla C870, C1060, and C2050 appear in the following figure.
Arrays allocated in device memory are aligned to 256-byte memory segments by the CUDA driver. The device can access global memory via 32-, 64-, or 128-byte transactions that are aligned to their size. For the C870 or any other device with a compute capability of 1.0, any misaligned access by a half warp of threads (or aligned access where the threads of the half warp do not access memory in sequence) results in 16 separate 32-byte transactions. Since only 4 bytes are requested per 32-byte transaction, one would expect the effective bandwidth to be reduced by a factor of eight, which is roughly what we see in the figure above (brown line) for offsets that are not a multiple of 16 elements, corresponding to one half warp of threads.
For the Tesla C1060 or other devices with compute capability of 1.2 or 1.3, misaligned accesses are less problematic. Basically, the misaligned accesses of contiguous data by a half warp of threads are serviced in a few transactions that “cover” the requested data. There is still a performance penalty relative to the aligned case due both to unrequested data being transferred and to some overlap of data requested by different half-warps, but the penalty is far less than for the C870.
Devices of compute capability 2.0, such as the Tesla C2050, have an L1 cache in each multiprocessor with a 128-byte line size. The device coalesces accesses by threads in a warp into as few cache lines as possible, resulting in negligible effect of alignment on throughput for sequential memory accesses across threads.
Strided Memory Access
The results of the stride kernel appear in the following figure.
For strided global memory access we have a different picture. For large strides, the effective bandwidth is poor regardless of architecture version. This should not be surprising: when concurrent threads simultaneously access memory addresses that are very far apart in physical memory, then there is no chance for the hardware to combine the accesses. You can see in the figure above that on the Tesla C870 any stride other than 1 results in drastically reduced effective bandwidth. This is because compute capability 1.0 and 1.1 hardware requires linear, aligned accesses across threads for coalescing, so we see the familiar 1/8 bandwidth that we also saw in the offset kernel. Compute capability 1.2 and higher hardware can coalesce accesses that fall into aligned segments (32, 64, or 128 byte segments on CC 1.2/1.3, and 128-byte cache lines on CC 2.0 and higher), so this hardware results in a smooth bandwidth curve.
When accessing multidimensional arrays it is often necessary for threads to index the higher dimensions of the array, so strided access is simply unavoidable. We can handle these cases by using a type of CUDA memory called shared memory. Shared memory is an on-chip memory shared by all threads in a thread block. One use of shared memory is to extract a 2D tile of a multidimensional array from global memory in a coalesced fashion into shared memory, and then have contiguous threads stride through the shared memory tile. Unlike global memory, there is no penalty for strided access of shared memory. We will cover shared memory in detail in the next post.
Summary
In this post we discussed some aspects of how to efficiently access global memory from within CUDA kernel code. Global memory access on the device shares performance characteristics with data access on the host; namely, that data locality is very important. In early CUDA hardware, memory access alignment was as important as locality across threads, but on recent hardware alignment is not much of a concern. On the other hand, strided memory access can hurt performance, which can be alleviated using on-chip shared memory. In the next post we will explore shared memory in detail, and in the post after that we will show how to use shared memory to avoid strided global memory accesses during a matrix transpose.
Using Shared Memory in CUDA C/C++
In the previous post, I looked at how global memory accesses by a group of threads can be coalesced into a single transaction, and how alignment and stride affect coalescing for various generations of CUDA hardware. For recent versions of CUDA hardware, misaligned data accesses are not a big issue. However, striding through global memory is problematic regardless of the generation of the CUDA hardware, and would seem to be unavoidable in many cases, such as when accessing elements in a multidimensional array along the second and higher dimensions. However, it is possible to coalesce memory access in such cases if we use shared memory. Before I show you how to avoid striding through global memory in the next post, first I need to describe shared memory in some detail.
Shared Memory
Because it is on-chip, shared memory is much faster than local and global memory. In fact, shared memory latency is roughly 100x lower than uncached global memory latency (provided that there are no bank conflicts between the threads, which we will examine later in this post). Shared memory is allocated per thread block, so all threads in the block have access to the same shared memory. Threads can access data in shared memory loaded from global memory by other threads within the same thread block. This capability (combined with thread synchronization) has a number of uses, such as user-managed data caches, high-performance cooperative parallel algorithms (parallel reductions, for example), and to facilitate global memory coalescing in cases where it would otherwise not be possible.
Thread Synchronization
When sharing data between threads, we need to be careful to avoid race conditions, because while threads in a block run logically in parallel, not all threads can execute physically at the same time. Let’s say that two threads A and B each load a data element from global memory and store it to shared memory. Then, thread A wants to read B’s element from shared memory, and vice versa. Let’s assume that A and B are threads in two different warps. If B has not finished writing its element before A tries to read it, we have a race condition, which can lead to undefined behavior and incorrect results.
To ensure correct results when parallel threads cooperate, we must synchronize the threads. CUDA provides a simple barrier synchronization primitive, __syncthreads(). A thread’s execution can only proceed past a __syncthreads() after all threads in its block have executed the __syncthreads(). Thus, we can avoid the race condition described above by calling __syncthreads() after the store to shared memory and before any threads load from shared memory. It’s important to be aware that calling __syncthreads() in divergent code is undefined and can lead to deadlock—all threads within a thread block must call __syncthreads() at the same point.
Shared Memory Example
Declare shared memory in CUDA C/C++ device code using the __shared__ variable declaration specifier. There are multiple ways to declare shared memory inside a kernel, depending on whether the amount of memory is known at compile time or at run time. The following complete code (available on GitHub) illustrates various methods of using shared memory.
#include <stdio.h>
__global__ void staticReverse(int *d, int n)
{
__shared__ int s[64];
int t = threadIdx.x;
int tr = n-t-1;
s[t] = d[t];
__syncthreads();
d[t] = s[tr];
}
__global__ void dynamicReverse(int *d, int n)
{
extern __shared__ int s[];
int t = threadIdx.x;
int tr = n-t-1;
s[t] = d[t];
__syncthreads();
d[t] = s[tr];
}
int main(void)
{
const int n = 64;
int a[n], r[n], d[n];
for (int i = 0; i < n; i++) {
a[i] = i;
r[i] = n-i-1;
d[i] = 0;
}
int *d_d;
cudaMalloc(&d_d, n * sizeof(int));
// run version with static shared memory
cudaMemcpy(d_d, a, n*sizeof(int), cudaMemcpyHostToDevice);
staticReverse<<<1,n>>>(d_d, n);
cudaMemcpy(d, d_d, n*sizeof(int), cudaMemcpyDeviceToHost);
for (int i = 0; i < n; i++)
if (d[i] != r[i]) printf("Error: d[%d]!=r[%d] (%d, %d)\n", i, i, d[i], r[i]);
// run dynamic shared memory version
cudaMemcpy(d_d, a, n*sizeof(int), cudaMemcpyHostToDevice);
dynamicReverse<<<1,n,n*sizeof(int)>>>(d_d, n);
cudaMemcpy(d, d_d, n * sizeof(int), cudaMemcpyDeviceToHost);
for (int i = 0; i < n; i++)
if (d[i] != r[i]) printf("Error: d[%d]!=r[%d] (%d, %d)\n", i, i, d[i], r[i]);
}
This code reverses the data in a 64-element array using shared memory. The two kernels are very similar, differing only in how the shared memory arrays are declared and how the kernels are invoked.
Static Shared Memory
If the shared memory array size is known at compile time, as in the staticReverse kernel, then we can explicitly declare an array of that size, as we do with the array s.
__global__ void staticReverse(int *d, int n)
{
__shared__ int s[64];
int t = threadIdx.x;
int tr = n-t-1;
s[t] = d[t];
__syncthreads();
d[t] = s[tr];
}
In this kernel, t and tr are the two indices representing the original and reverse order, respectively. Threads copy the data from global memory to shared memory with the statement s[t] = d[t], and the reversal is done two lines later with the statement d[t] = s[tr]. But before executing this final line in which each thread accesses data in shared memory that was written by another thread, remember that we need to make sure all threads have completed the loads to shared memory, by calling __syncthreads().
The reason shared memory is used in this example is to facilitate global memory coalescing on older CUDA devices (Compute Capability 1.1 or earlier). Optimal global memory coalescing is achieved for both reads and writes because global memory is always accessed through the linear, aligned index t. The reversed index tr is only used to access shared memory, which does not have the sequential access restrictions of global memory for optimal performance. The only performance issue with shared memory is bank conflicts, which we will discuss later. (Note that on devices of Compute Capability 1.2 or later, the memory system can fully coalesce even the reversed index stores to global memory. But this technique is still useful for other access patterns, as I’ll show in the next post.)
Dynamic Shared Memory
The dynamicReverse kernel in this example uses dynamically allocated shared memory, which can be used when the amount of shared memory is not known at compile time. In this case the shared memory allocation size per thread block must be specified (in bytes) using an optional third execution configuration parameter, as in the following excerpt.
dynamicReverse<<<1, n, n*sizeof(int)>>>(d_d, n);
The dynamic shared memory kernel, dynamicReverse(), declares the shared memory array using an unsized extern array syntax, extern __shared__ int s[] (note the empty brackets and use of the extern specifier). The size is implicitly determined from the third execution configuration parameter when the kernel is launched. The remainder of the kernel code is identical to the staticReverse() kernel.
What if you need multiple dynamically sized arrays in a single kernel? You must declare a single extern unsized array as before, and use pointers into it to divide it into multiple arrays, as in the following excerpt.
extern __shared__ int s[];
int *integerData = s; // nI ints
float *floatData = (float*)&integerData[nI]; // nF floats
char *charData = (char*)&floatData[nF]; // nC chars
In the kernel launch, specify the total shared memory needed, as in the following.
myKernel<<<gridSize, blockSize, nI*sizeof(int)+nF*sizeof(float)+nC*sizeof(char)>>>(...);
Shared memory bank conflicts
To achieve high memory bandwidth for concurrent accesses, shared memory is divided into equally sized memory modules (banks) that can be accessed simultaneously. Therefore, any memory load or store of n addresses that spans b distinct memory banks can be serviced simultaneously, yielding an effective bandwidth that is b times as high as the bandwidth of a single bank.
However, if multiple threads’ requested addresses map to the same memory bank, the accesses are serialized. The hardware splits a conflicting memory request into as many separate conflict-free requests as necessary, decreasing the effective bandwidth by a factor equal to the number of colliding memory requests. An exception is the case where all threads in a warp address the same shared memory address, resulting in a broadcast. Devices of compute capability 2.0 and higher have the additional ability to multicast shared memory accesses, meaning that multiple accesses to the same location by any number of threads within a warp are served simultaneously.
To minimize bank conflicts, it is important to understand how memory addresses map to memory banks. Shared memory banks are organized such that successive 32-bit words are assigned to successive banks and the bandwidth is 32 bits per bank per clock cycle. For devices of compute capability 1.x, the warp size is 32 threads and the number of banks is 16. A shared memory request for a warp is split into one request for the first half of the warp and one request for the second half of the warp. Note that no bank conflict occurs if only one memory location per bank is accessed by a half warp of threads.
For devices of compute capability 2.0, the warp size is 32 threads and the number of banks is also 32. A shared memory request for a warp is not split as with devices of compute capability 1.x, meaning that bank conflicts can occur between threads in the first half of a warp and threads in the second half of the same warp.
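To make the mapping concrete, here is a minimal sketch (not from the original post) contrasting a conflict-free access pattern with a two-way conflicted one, assuming 32 banks of four-byte words and one warp per block:
__global__ void bankAccessDemo(float *out)
{
  __shared__ float s[64];
  int t = threadIdx.x;   // assume blockDim.x == 32 (one warp)
  s[t] = t;              // conflict-free: thread t maps to bank t
  s[2*t] = t;            // stride-2: threads t and t+16 hit the same bank (two-way conflict)
  __syncthreads();
  out[t] = s[t];
}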
Devices of compute capability 3.x have configurable bank size, which can be set using cudaDeviceSetSharedMemConfig() to either four bytes (cudaSharedMemBankSizeFourByte, the default) or eight bytes (cudaSharedMemBankSizeEightByte). Setting the bank size to eight bytes can help avoid shared memory bank conflicts when accessing double precision data.
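For example, on a compute capability 3.x device the bank size can be switched from host code before launching kernels that operate on double-precision data (a minimal sketch; error checking omitted):
// Host code sketch: request eight-byte shared memory banks for double-precision data,
// then restore the four-byte default afterwards.
cudaDeviceSetSharedMemConfig(cudaSharedMemBankSizeEightByte);
// ... launch kernels that access double-precision data in shared memory ...
cudaDeviceSetSharedMemConfig(cudaSharedMemBankSizeFourByte);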
Configuring the amount of shared memory
On devices of compute capability 2.x and 3.x, each multiprocessor has 64KB of on-chip memory that can be partitioned between L1 cache and shared memory. For devices of compute capability 2.x, there are two settings: 48KB shared memory / 16KB L1 cache, and 16KB shared memory / 48KB L1 cache. By default, the 48KB shared memory setting is used. This can be configured at run time from the host, either for all kernels using cudaDeviceSetCacheConfig() or on a per-kernel basis using cudaFuncSetCacheConfig(). These accept one of three options: cudaFuncCachePreferNone, cudaFuncCachePreferShared, and cudaFuncCachePreferL1. The driver will honor the specified preference except when a kernel requires more shared memory per thread block than is available in the specified configuration. Devices of compute capability 3.x allow a third setting of 32KB shared memory / 32KB L1 cache, which can be obtained using the option cudaFuncCachePreferEqual.
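As a minimal host-side sketch (not from the original post), the device-wide and per-kernel preferences might be set like this, using the staticReverse kernel from the earlier example:
// Host code sketch: prefer more shared memory for all kernels by default,
// but prefer a larger L1 cache for one specific kernel.
cudaDeviceSetCacheConfig(cudaFuncCachePreferShared);           // device-wide preference
cudaFuncSetCacheConfig(staticReverse, cudaFuncCachePreferL1);  // per-kernel preference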
Summary
Shared memory is a powerful feature for writing well optimized CUDA code. Access to shared memory is much faster than global memory access because it is located on chip. Because shared memory is shared by threads in a thread block, it provides a mechanism for threads to cooperate. One way to use shared memory that leverages such thread cooperation is to enable global memory coalescing, as demonstrated by the array reversal in this post. By reversing the array using shared memory we are able to have all global memory reads and writes performed with unit stride, achieving full coalescing on any CUDA GPU. In the next post I will continue our discussion of shared memory by using it to optimize a matrix transpose.
Accelerating AI Training with NVIDIA TF32 Tensor Cores
The NVIDIA Ampere GPU architecture introduced the third generation of Tensor Cores, with the new TensorFloat32 (TF32) mode for accelerating FP32 convolutions and matrix multiplications. TF32 mode is the default option for AI training with 32-bit variables on the Ampere GPU architecture. It brings Tensor Core acceleration to single-precision DL workloads without requiring any changes to model scripts. Mixed-precision training with a native 16-bit format (FP16/BF16) is still the fastest option, requiring just a few lines of code in model scripts. Table 1 shows the math throughput of A100 Tensor Cores compared to FP32 CUDA cores. It is also worth pointing out that for single-precision training, the A100 delivers 10x higher math throughput than the previous-generation training GPU, the V100.
Internals
TF32 is a new compute mode added to Tensor Cores in the Ampere generation of GPU architecture. Dot product computation, which forms the building block for both matrix multiplies and convolutions, rounds FP32 inputs to TF32, computes the products without loss of precision, then accumulates those products into an FP32 output (Figure 1).
TF32 is only exposed as a Tensor Core operation mode, not a type. All storage in memory and other operations remain completely in FP32, only convolutions and matrix-multiplications convert their inputs to TF32 right before multiplication. In contrast, 16-bit types provide storage, various math operators, and so on.
Numerics
Figure 2 shows the various precision options. TF32 mode in the Ampere generation of GPUs uses 8 exponent bits, 10 mantissa bits, and one sign bit. As a result, it covers the same range of values as FP32, while maintaining more precision than BF16 and the same precision as FP16. Reduced mantissa precision is the only difference from FP32, and extensive studies have shown that it leaves more than sufficient margin for AI workloads.
We validated single-precision training in TF32 mode on a wide breadth of AI networks across a variety of applications, from computer vision to natural language processing to recommender systems. All of the dozens of DL workloads considered match FP32 accuracy, loss values, and training behavior, with no changes to hyperparameters or training scripts. Figure 3 shows a sampling of the networks trained. All workloads use identical hyperparameters for training in FP32 and TF32 modes; all differences in accuracy are within the respective bounds of run-to-run variation (different random seeds, and so on) for each network. Figure 4 shows training curves for a few select networks.
Training speedups
As shown earlier, TF32 math mode, the default for single-precision DL training on the Ampere generation of GPUs, achieves the same accuracy as FP32 training, requires no changes to hyperparameters or training scripts, and provides out-of-the-box "tensor math" (convolutions and matrix multiplies) that is 10x faster than single-precision math on Volta GPUs. However, the speedups observed for networks in practice vary, since all memory accesses remain FP32 and TF32 mode does not affect layers that are not convolutions or matrix multiplies.
Figure 5 shows that speedups of 2-6x are observed in practice for single-precision training of various workloads when moving from V100 to A100. Furthermore, switching to mixed precision with FP16 gives a further speedup of up to ~2x, as 16-bit Tensor Cores are 2x faster than TF32 mode and memory traffic is reduced by accessing half the bytes. Thus, TF32 is a great starting point for models trained in FP32 on Volta or other processors, while mixed-precision training is the option to maximize training speed on A100.
For researchers
In this section, we summarize everything that you must know to accelerate deep learning workloads with TF32 Tensor Cores.
DL frameworks
TF32 is the default mode for AI on A100 when using the NVIDIA optimized deep learning framework containers for TensorFlow, PyTorch, and MXNet, starting with the 20.06 versions available at NGC. TF32 is also enabled by default for A100 in framework repositories starting with PyTorch 1.7, TensorFlow 2.4, as well as nightly builds for MXNet 1.8. Deep learning researchers can use the framework repositories and containers listed earlier to train single-precision models with benefits from TF32 Tensor Cores.
Operations
TF32 mode accelerates single-precision convolution and matrix-multiply layers, including linear and fully connected layers, recurrent cells, and attention blocks. TF32 does not accelerate layers that operate on non-FP32 tensors, such as 16-bits, FP64, or integer precisions. TF32 also does not apply to layers that are not convolution or matrix-multiply operations (for example, batch normalization), as well as optimizer or solver operations. Tensor storage is not changed when training with TF32. Everything remains in FP32, or whichever format is specified in the script.
For developers
Across the NVIDIA libraries, you see Tensor Core acceleration for the full range of precisions available on A100, including FP16, BF16, and TF32. This includes convolutions in cuDNN, matrix multiplies in cuBLAS, factorizations and dense linear solvers in cuSOLVER, and tensor contractions in cuTENSOR. In this post, we discuss the various considerations for enabling Tensor Cores in NVIDIA libraries.
cuDNN
cuDNN is the deep neural network library primarily used for convolution operations. Convolutional layers in cuDNN have descriptors that describe the operation to be performed, such as the math type. With version 8.0 and greater, convolution operations are performed with TF32 Tensor Cores when you use the default math mode CUDNN_DEFAULT_MATH or specify the math type as CUDNN_TENSOR_OP_MATH. The library internally selects TF32 convolution kernels if they exist when operating on 32-bit data. For Volta and previous versions of cuDNN, the default math option continues to be FP32.
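As a hedged illustration (not code from this post), creating a convolution descriptor that permits Tensor Core kernels in cuDNN 8 might look like the following; error checking is omitted for brevity:
#include <cudnn.h>
// Sketch: build a convolution descriptor that allows Tensor Core (including TF32) kernels.
cudnnConvolutionDescriptor_t makeTensorOpConvDesc()
{
  cudnnConvolutionDescriptor_t convDesc;
  cudnnCreateConvolutionDescriptor(&convDesc);
  cudnnSetConvolutionMathType(convDesc, CUDNN_TENSOR_OP_MATH); // CUDNN_DEFAULT_MATH also allows TF32 in cuDNN 8
  return convDesc;
}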
cuBLAS
cuBLAS is used to perform basic dense linear algebra operations such as matrix multiplications that occur in deep neural networks. cuBLAS continues to default to FP32 operations for CUBLAS_DEFAULT_MATH because of the traditional use of cuBLAS in HPC applications, which require more precision.
With version 11.0 and greater, cuBLAS supports TF32 Tensor Core operations with the cublasSetMathMode function, by setting the math mode to CUBLAS_TF32_TENSOR_OP_MATH for legacy BLAS APIs and by setting the compute type to CUBLAS_COMPUTE_32F_FAST_TF32 for the cublasGemmEx and cublasLtMatmul APIs. When these options are selected, the library internally selects TF32 kernels, if available, when operating on 32-bit data.
To get the benefits of TF32, NVIDIA optimized deep learning frameworks set the global math mode state on the cuBLAS handle to CUBLAS_TF32_TENSOR_OP_MATH using cublasSetMathMode. However, there are still some linear algebra operations in deep learning that cuBLAS needs full FP32 precision to preserve the numerics for training or inference. The frameworks have guards around such operations (for example, that are performing solver operations) and set the math mode back to CUBLAS_DEFAULT_MATH, which uses FP32 kernels.
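As a minimal sketch (not code from this post) of the two mechanisms described above, a single-precision GEMM can opt into TF32 either through the handle's math mode or through the compute type. Here d_A (m x k), d_B (k x n), and d_C (m x n) are assumed to be FP32 device arrays in column-major layout, and error checking is omitted:
#include <cublas_v2.h>
// Sketch: two ways to request TF32 Tensor Core math in cuBLAS 11 and later.
void tf32Gemm(cublasHandle_t handle, int m, int n, int k,
              const float *d_A, const float *d_B, float *d_C)
{
  const float alpha = 1.0f, beta = 0.0f;
  // Option 1: set the math mode on the handle (affects the legacy BLAS API).
  cublasSetMathMode(handle, CUBLAS_TF32_TENSOR_OP_MATH);
  // Option 2: request the TF32 compute type explicitly for this GEMM call.
  cublasGemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N, m, n, k,
               &alpha, d_A, CUDA_R_32F, m,
                       d_B, CUDA_R_32F, k,
               &beta,  d_C, CUDA_R_32F, m,
               CUBLAS_COMPUTE_32F_FAST_TF32, CUBLAS_GEMM_DEFAULT);
}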
cuSOLVER
cuSOLVER is primarily used for solver operations such as factorizations and dense linear solvers. Some of the deep learning frameworks use cuSOLVER from the CUDA toolkit. There is no need to change the default math operation, as it always uses the precision defined by the API call.
cuTENSOR
cuTENSOR is primarily used for tensor primitives such as contractions, reductions, and element-wise operations. The precision is always defined by the API call. With version 1.1.0 and greater, cuTENSOR supports TF32 Tensor Core operations through the compute type CUTENSOR_COMPUTE_TF32.
Rounding options
BF16 is introduced as a Tensor Core math mode in cuBLAS 11.0 and as a numerical type in CUDA 11.0. Deep learning frameworks and AMP will support BF16 soon. Conversions between 16-bit and FP32 formats are typical when devising custom layers for mixed-precision training. We recommend using type casts or intrinsic functions, as shown in the following example. The appropriate header files cuda_fp16.h and cuda_bf16.h must be included.
#include <cuda_fp16.h>
half a = (half)(1.5f);
half b = (half)(1.0f);
half c = a + b;
#include <cuda_bf16.h>
nv_bfloat16 a = (nv_bfloat16)(1.5f);
nv_bfloat16 b = (nv_bfloat16)(1.5f);
nv_bfloat16 c = a + b;
Example: Sample CUDA code for converting two FP32 values to 16-bits (FP16 or BF16), adding them with 16-bit operations, and storing the result in a 16-bit register.
Global platform control
A100 introduces the global platform control to allow changes to the default math behavior for AI training. A global environment variable NVIDIA_TF32_OVERRIDE can be used to toggle TF32 mode at the system level, overriding programmatic settings in the libraries or frameworks (Table 3).
The global variable is designed as a debugging tool when training goes wrong. It provides a quick way to rule out any concern regarding TF32 libraries and allows you to focus on other issues in the training script.
NVIDIA_TF32_OVERRIDE must be set before the application is launched, as the effect of any change after the application launch is unspecified. The variable affects only the mode of FP32 operations. Operations using FP64 or one of the 16-bit formats are not affected and continue to use those corresponding types.
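For example, TF32 could be disabled for a quick A/B check by setting the variable when launching the training script (train.py is a placeholder name):
$ NVIDIA_TF32_OVERRIDE=0 python train.py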
Conclusion
This post briefly introduces the variety of precisions and Tensor Core capabilities that the NVIDIA Ampere GPU architecture offers for AI training. TensorFloat32 brings the performance of Tensor Cores to single-precision workloads, while mixed precision with a native 16-bit format (FP16/BF16) remains the fastest options for training deep neural networks. All options are available in the latest deep learning frameworks optimized for A100 GPUs. For more information about the various possibilities to train neural networks with Tensor Cores, see the following online talks:
Tips for Optimizing GPU Performance Using Tensor Cores
Our most popular question is “What can I do to get great GPU performance for deep learning?” We’ve recently published a detailed Deep Learning Performance Guide to help answer this question. The guide explains how GPUs process data and gives tips on how to design networks for better performance. We also take a close look at Tensor Core optimization to help improve performance.
This post takes a closer look at some of the most important recommendations from the guide. We’ll give a general guideline and explanation for each tip, apply the guideline to an example layer, and compare performance before and after.
This post can be read standalone. However, we suggest you refer to the Deep Learning Performance Guide for a better understanding of why deep learning tasks perform the way they do on GPUs and how to improve that performance.
Tip 1: Activating Tensor Cores
Tensor Cores, available on Volta and subsequent GPU architectures, accelerate common deep learning operations—specifically computationally-intensive tasks such as fully-connected and convolutional layers.
Workloads must use mixed precision to take advantage of Tensor Cores. Check out our post on Automatic Mixed Precision for quick setup and our Training With Mixed Precision Guide for more details. Additionally, Tensor Cores are activated when certain parameters of a layer are divisible by 8 (for FP16 data) or 16 (for INT8 data). A fully-connected layer with a batch size and number of inputs and outputs that follow this rule will use Tensor Cores, as will a convolutional layer with a number of input and output channels that do the same.
This is due to how GPUs store and access data. Layers that don’t meet this requirement are still accelerated on the GPU. However, these layers use 32-bit CUDA cores instead of Tensor Cores as a fallback option.
Note: There are cases where we relax the requirements. However, following these guidelines is the easiest way to ensure enabling Tensor Cores. For details, see sections on Tensor Core Requirements for matrix multiplies and Channels In and Out of convolutions from the Deep Learning Performance Guide.
Let’s look at two examples from the popular Transformer neural network to illustrate the kind of speedup you can expect from activating Tensor Cores. Transformers, described in Attention Is All You Need [Vaswani 2017], are currently state-of-the-art networks for language translation and other sequence tasks. Much of a Transformer network consists of fully-connected layers. We’ll discuss ways to optimize a few for Tensor Cores.
Padding Vocabulary Size – Projection Layer Example
Figure 1 shows a simplified representation of a Transformer network. The network outputs a vector containing a probability for each token in the vocabulary. This vector of probabilities is produced using the softmax function over the outputs from a fully-connected layer, which we’ll call the projection layer. The number of outputs of this layer is equal to the vocabulary size, often in excess of 30,000. Given the heavyweight computation involved, it’s important to ensure effective Tensor Core use.
Figure 2 shows the performance of one such projection layer, with 1024 inputs and a batch size of 5120, training on FP16 data on a Volta Tesla V100. Suppose we are using the combined English-German training datasets for the WMT14 task, which have a vocabulary size of 33708. Simply padding the vocabulary size to the next multiple of 8 activates Tensor Cores and improves throughput significantly.
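A minimal sketch of this padding (our own illustration, not code from the post): round the vocabulary size up to the next multiple of 8 and fill the extra entries with unused tokens.
// Sketch: pad a layer dimension up to the next multiple of 8 for FP16 Tensor Cores.
int padToMultiple(int x, int m) { return ((x + m - 1) / m) * m; }
int paddedVocab = padToMultiple(33708, 8); // 33712; the 4 extra outputs are dummy tokens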
Choosing Batch Size for Tensor Cores – Feed-Forward Layer Example
The Transformer architecture also contains fully-connected layers as part of self-attention and feed-forward blocks. Let’s consider the first layer in a feed-forward block, a fully-connected layer with 1024 inputs and 4096 outputs. This layer’s batch size depends on batch assembly, which splits inputs to the network into batches, up to some maximum batch size. When assembly doesn’t consider Tensor Cores, irregularly-sized batches may be created.
Performance of this layer’s training steps with several batch sizes is shown in figure 3. This is an example where Tensor Core requirements are relaxed. Both forward and activation gradient passes perform the same with and without padding. The weight gradient pass, on the other hand, shows the same dramatic performance difference we saw in figure 2. CUDA cores are used as a fallback for weight gradient computation with batch sizes of 4084 or 4095 tokens; using 4088 or 4096 tokens per batch instead enables Tensor Core acceleration.
At least one of the forward, activation gradient, and weight gradient passes will not be accelerated by Tensor Cores when any relevant parameter is not optimally sized. We recommend ensuring all such parameters are multiples of 8 when training with FP16 and multiples of 16 when training with INT8. These include batch size and number of inputs and outputs, for a fully-connected layer and channels in and out, for a convolutional layer. This is the easiest way to guarantee Tensor Cores will accelerate your task!
Checking for Tensor Core Usage
You can use NVIDIA’s profiling tools to check if Tensor Cores have been activated. More information about these tools is available in the CUDA documentation.
Note: although we focus on Tensor Cores in this post, deep learning operations not accelerated by Tensor Cores also contribute to overall network performance. You can read about these operations in the Memory-Limited Layers section of the Deep Learning Performance Guide, and about further optimizations and decreasing non-Tensor-Core work in the Training With Mixed Precision documentation.
Tip 2: Considering Quantization Effects
We’ve focused so far on how to ensure Tensor Cores are accelerating your task. Now let’s discuss efficiency on the GPU and a few parameter tweaks that can help you get the most out of Tensor Cores.
GPUs perform many computations concurrently; we refer to these parallel computations as threads. Conceptually, threads are grouped into thread blocks, each of which is responsible for a subset of the calculations being done. When the GPU executes a task, it is split into equally-sized thread blocks.
Now consider a fully-connected layer. During training, forward propagation, activation gradient calculation, and weight gradient calculation are each represented as a matrix multiply. The GPU divides the output matrix into uniformly-sized, rectangular tiles. Each tile is computed by a thread block; figure 4 illustrates the process for one such tile. You can find cases where multiple thread blocks contribute to one tile, but for simplicity, we’ll assume one thread block per tile in this post. More detail can be found in the Deep Learning Performance Guide, in the sections discussing GPU efficiency and tiling.
However, not all output matrices divide evenly into an available tile size. Further, the thread blocks created may not divide evenly among the multiprocessors on the GPU. These effects, called tile quantization and wave quantization respectively, can lead to wasted cycles and inefficiency.
Tile quantization occurs when one dimension of the output matrix is not evenly divisible by the corresponding tile dimension. The thread blocks for the final row or column of tiles, created to cover the remainder, perform the same amount of math as any other thread block but produce a smaller amount of useful output data. While the cuBLAS library tries to choose the best tile size available, most tile sizes are powers of 2. To avoid tile quantization, choose parameters that are divisible by powers of 2 (at least 64 and ideally 256, to account for the most common tile sizes).
For wave quantization, we also consider the number of thread blocks that can run concurrently on the GPU. Take the example of a Tesla V100 GPU, which has 80 multiprocessors, and assume a tile size of 256×128, for which the V100 can execute one thread block per multiprocessor at a time. In this case, a wave of 80 thread blocks fully occupies the GPU. Suppose a task creates 96 thread blocks. The first 80 will be computed efficiently as a ‘full wave’, while the 16 leftover thread blocks make up an inefficient ‘tail wave’ during which the GPU is underutilized. Figure 5 illustrates a simple version of this situation.
Absent information about what tile size will be used, choose parameters so that the total number of tiles/thread blocks is divisible by the number of multiprocessors to avoid wave quantization effects.
Now let’s look at how this maps back to parameters of a fully-connected layer. Figure 6 shows the dimensions of equivalent matrix multiplies for forward, activation gradient, and weight gradient passes.
Batch size directly controls the width of the output matrix during both forward and activation gradient passes. Consider again our previous example of the first layer in a Transformer feed-forward block (a fully-connected layer with 1024 inputs and 4096 outputs). During forward propagation, the output matrix is of shape 4096 x batch size. Assuming a tile size of 256×128, this matrix divides into 4096/256 = 16 rows and (batch size) / 128 columns of tiles.
Avoiding tile quantization is straightforward: batch size should be divisible by 128. Wave quantization is more complex. For some integer n, we want n*80 total tiles and already know that there will be 16 rows of tiles. Therefore, our task should create n*5 columns of tiles. Given a tile width of 128, this corresponds to an output matrix width (and batch size) of n*5*128 = n*640. Thus, choosing batch size to be divisible by 640 avoids wave quantization effects.
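The arithmetic above can be summarized in a small helper (our own sketch, assuming a 256×128 tile, one thread block per tile, and 80 multiprocessors as on V100):
#include <stdio.h>
// Sketch: count thread-block tiles and waves for the forward pass of a fully-connected layer.
void countWaves(int outputs, int batchSize)
{
  int tileRows = (outputs + 255) / 256;    // e.g. 4096 outputs -> 16 rows of tiles
  int tileCols = (batchSize + 127) / 128;  // columns of tiles
  int blocks   = tileRows * tileCols;      // one thread block per tile (simplification)
  printf("%d blocks = %d full waves + %d tail blocks\n", blocks, blocks / 80, blocks % 80);
}
// countWaves(4096, 2560) -> 320 blocks = 4 full waves + 0 tail blocks (quantization-free)
// countWaves(4096, 4096) -> 512 blocks = 6 full waves + 32 tail blocks (tail wave)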
The Deep Learning Performance Guide goes into more detail about both types of quantization effects, as well as how this applies to convolutions, with examples.
Choosing Batch Size for Quantization – Feed-Forward Layer Example
Figure 7 shows the performance of our example feed-forward layer for several different batch sizes. Choosing a quantization-free batch size (2560 instead of 2048, 5120 instead of 4096) considerably improves performance. Notice that a batch size of 2560 (resulting in 4 waves of 80 thread blocks) achieves higher throughput than the larger batch size of 4096 (a total of 512 tiles, resulting in 6 waves of 80 thread blocks and a tail wave remainder of 32 thread blocks). The weight gradient pass doesn’t show this drastic change. Batch size maps to the ‘K’ dimension of the matrix multiply during this pass and thus does not directly control the size of the output matrix or the number of tiles and thread blocks created.
Learning More
Learn more about how to ensure your network is taking advantage of Tensor Cores from the Deep Learning Performance Guide. To get started, read our summary of performance guidelines, which offers a quick rundown of the most important information about Tensor Core performance and includes tips that you can apply to your network in a few minutes! Each part of the summary links to other sections in the guide where you can find more detail about the topic.
Also, check out the recording of GTC Silicon Valley 2019 session S9926: Tensor Core Performance: The Ultimate Guide and S9143: Mixed Precision Training of Deep Neural Networks. Additional information about how to train using mixed precision can be found in the Mixed Precision Training paper and Training With Mixed Precision documentation.
References
[Vaswani 2017] Ashish Vaswani et al., “Attention Is All You Need,” arXiv:1706.03762, 2017.
Accelerating GPU Applications with NVIDIA Math Libraries
There are three main ways to accelerate GPU applications: compiler directives, programming languages, and preprogrammed libraries. Compiler directives such as OpenACC allow you to smoothly port your code to the GPU for acceleration with a directive-based programming model. While this approach is simple to use, it may not provide optimal performance in certain scenarios.
Programming languages such as CUDA C and C++ give you greater flexibility when accelerating your applications, but it is also the user’s responsibility to write code that takes advantage of new hardware features to achieve optimal performance on the latest hardware. This is where preprogrammed libraries fill in the gap.
In addition to enhancing code reusability, the NVIDIA Math Libraries are optimized to make best use of GPU hardware for the greatest performance gain. If you’re looking for a straightforward way to speed up your application, continue reading to learn about using libraries to improve your application’s performance.
The NVIDIA math libraries, available as part of the CUDA Toolkit and the high-performance computing (HPC) software development kit (SDK), offer high-quality implementations of functions encountered in a wide range of compute-intensive applications. These applications include the domains of machine learning, deep learning, molecular dynamics, computational fluid dynamics (CFD), computational chemistry, medical imaging, and seismic exploration.
These libraries are designed to replace the common CPU libraries such as OpenBLAS, LAPACK, and Intel MKL, as well as accelerate applications on NVIDIA GPUs with minimal code changes. To show the process, we created an example of the double precision general matrix multiplication (DGEMM) functionality to compare the performance of cuBLAS with OpenBLAS.
The code example below demonstrates the use of the OpenBLAS DGEMM call.
// Init Data
…
// Execute GEMM
cblas_dgemm(CblasColMajor, CblasNoTrans, CblasTrans, m, n, k, alpha, A.data(), lda, B.data(), ldb, beta, C.data(), ldc);
Code example 2 below shows the cuBLAS dgemm call.
// Init Data
…
// Data movement to GPU
…
// Execute GEMM
cublasDgemm(cublasH, CUBLAS_OP_N, CUBLAS_OP_T, m, n, k, &alpha, d_A, lda, d_B, ldb, &beta, d_C, ldc);
As shown in the example above, you can simply add the data movement steps and replace the OpenBLAS CPU call with the cuBLAS API functions. See the full code for both the cuBLAS and OpenBLAS examples. This cuBLAS example was run on an NVIDIA V100 Tensor Core GPU, achieving a nearly 20x speedup. The graph below displays the speedup and specs when running these examples.
Fun fact: These libraries are invoked in the higher-level Python APIs such as cuPy, cuDNN and RAPIDS, so if you have experience with those, then you have already been using these NVIDIA Math Libraries.
The remainder of this post covers all of the math libraries available. For the latest updates and information, watch Recent Developments in NVIDIA Math Libraries.
Delivering better performance compared to CPU-only alternatives
There are many NVIDIA Math Libraries to take advantage of, from GPU-accelerated implementations of BLAS to random number generation. Take a look below at an overview of the NVIDIA Math Libraries and learn how to get started to easily boost your application’s performance.
Speed up Basic Linear Algebra Subprograms with cuBLAS
General Matrix Multiplication (GEMM) is one of the most popular Basic Linear Algebra Subprograms (BLAS) deployed in AI and scientific computing. GEMMs also form the foundational blocks for deep learning frameworks. To learn more about the use of GEMMs in deep learning frameworks, see Why GEMM Is at the Heart of Deep Learning.
The cuBLAS Library is an implementation of BLAS which leverages GPU capabilities to achieve great speedups. It comprises routines for performing vector and matrix operations such as dot products and vector addition (Level 1), matrix-vector multiplication (Level 2), and matrix-matrix multiplication (Level 3).
Additionally, if you would like to parallelize your matrix-matrix multiplies, cuBLAS supports the versatile batched GEMMs which finds use in tensor computations, machine learning, and LAPACK. For more details about improving efficiency in machine learning and tensor contractions, see Tensor Contractions with Extended BLAS Kernels on CPU and GPU.
cuBLASXt
If the problem size is too big to fit on the GPU, or your application needs single-node, multi-GPU support, cuBLASXt is a great option. cuBLASXt allows for hybrid CPU-GPU computation and supports BLAS Level 3 operations that perform matrix-to-matrix operations, such as herk, which performs a Hermitian rank-k update.
cuBLASLt
cuBLASLt is a lightweight library that covers GEMM. cuBLASLt uses fused kernels to speed up applications by combining two or more kernels into a single kernel, which allows for reuse of data and reduced data movement. cuBLASLt also allows users to set post-processing options for the epilogue (for example, apply a bias and then a ReLU transform, or apply a bias gradient to an input matrix).
cuBLASMg: CUDA Math Library Early Access Program
For large-scale problems, check out cuBLASMg for state-of-the-art multi-GPU, multi-node matrix-matrix multiplication support. It is currently a part of the CUDA Math Library Early Access Program. Apply for access.
Process sparse matrices with cuSPARSE
Sparse-matrix, dense-matrix multiplication (SpMM) is fundamental to many complex algorithms in machine learning, deep learning, CFD, and seismic exploration, as well as economic, graph, and data analytics. Efficiently processing sparse matrices is critical to many scientific simulations.
The growing size of neural networks and the associated increase in cost and resources incurred has led to the need for sparsification. Sparsity has gained popularity in the context of both deep learning training and inference to optimize the use of resources. For more insight into this school of thought and the need for a library such as cuSPARSE, see The Future of Sparsity in Deep Neural Networks.
cuSPARSE provides a set of basic linear algebra subprograms used for handling sparse matrices which can be used to build GPU-accelerated solvers. There are four categories of the library routines:
cuSPARSELt
For a lightweight version of the cuSPARSE library with compute capabilities to perform sparse matrix-dense matrix multiplication along with helper functions for pruning and compression of matrices, try cuSPARSELt. For a better understanding of the cuSPARSELt library, see Exploiting NVIDIA Ampere Structured Sparsity with cuSPARSELt.
Accelerate tensor applications with cuTENSOR
The cuTENSOR library is a tensor linear algebra library implementation. Tensors are core to machine learning applications and are an essential mathematical tool used to derive the governing equations for applied problems. cuTENSOR provides routines for direct tensor contractions, tensor reductions, and element-wise tensor operations. cuTENSOR is used to improve performance in deep learning training and inference, computer vision, quantum chemistry, and computational physics applications.
cuTENSORMg
If you still want cuTENSOR features, but with support for large tensors that can be distributed across multi-GPUs in a single node such as with the DGX A100, cuTENSORMg is the library of choice. It provides broad mixed-precision support, and its main computational routines include direct tensor contractions, tensor reductions, and element-wise tensor operations.
GPU-accelerated LAPACK features with cuSOLVER
The cuSOLVER library is a high-level package useful for linear algebra functions based on the cuBLAS and cuSPARSE libraries. cuSOLVER provides LAPACK-like features, such as matrix factorization, triangular solve routines for dense matrices, a sparse least-squares solver, and an eigenvalue solver.
There are three separate components of cuSOLVER:
cuSOLVERMg
For GPU-accelerated ScaLAPACK features, a symmetric eigensolver, 1-D column block cyclic layout support, and single-node, multi-GPU support for cuSOLVER features, consider cuSOLVERMg.
cuSOLVERMp
Multi-node, multi-GPU support is needed for solving large systems of linear equations. Known for its lower-upper factorization and Cholesky factorization features, cuSOLVERMp is a great solution.
Large-scale generation of random numbers with cuRAND
The cuRAND library focuses on the generation of random numbers through pseudo-random or quasi-random number generators, exposed through either a host (CPU) API or a device (GPU) API. With the host API, random numbers can be generated purely on the host and stored in host memory, or the library calls can be made from the host while the numbers are generated on the device and stored in global memory.
The device API defines functions for setting up random number generator states and generating sequences of random numbers which can be immediately used by user kernels without having to read and write to global memory. Several physics-based problems have shown the need for large-scale random number generation.
Monte Carlo simulation is one such use case for random number generators on the GPU. The Development of GPU-Based Parallel PRNG for Monte Carlo Applications in CUDA Fortran highlights the application of cuRAND in large-scale generation of random numbers.
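As a minimal sketch of the host API (our own illustration; error checking omitted), filling a device buffer with uniformly distributed floats looks like this:
#include <curand.h>
// Sketch: generate n uniform floats directly into device memory with the cuRAND host API.
void fillUniform(float *d_data, size_t n)
{
  curandGenerator_t gen;
  curandCreateGenerator(&gen, CURAND_RNG_PSEUDO_DEFAULT);
  curandSetPseudoRandomGeneratorSeed(gen, 1234ULL);
  curandGenerateUniform(gen, d_data, n); // generation runs on the GPU
  curandDestroyGenerator(gen);
}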
Calculate fast Fourier transforms with cuFFT
cuFFT, the CUDA Fast Fourier Transform (FFT) library provides a simple interface for computing FFTs on an NVIDIA GPU. The FFT is a divide-and-conquer algorithm for efficiently computing discrete Fourier transforms of complex or real-valued data sets. It is one of the most widely used numerical algorithms in computational physics and general signal processing.
cuFFT can be used for a wide range of applications, including medical imaging and fluid dynamics. Parallel Computing for Quantitative Blood Flow Imaging in Photoacoustic Microscopy illustrates the use of cuFFT in physics-based applications. Users with existing FFTW applications should use cuFFTW to easily port code to NVIDIA GPUs with minimal effort. The cuFFTW library provides the FFTW3 API to facilitate porting of existing FFTW applications.
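As a minimal sketch (our own illustration; error checking omitted), an in-place 1D complex-to-complex forward transform on device data looks like this:
#include <cufft.h>
// Sketch: plan and execute a single 1D forward FFT of length nx, in place on the device.
void forwardFFT(cufftComplex *d_signal, int nx)
{
  cufftHandle plan;
  cufftPlan1d(&plan, nx, CUFFT_C2C, 1);
  cufftExecC2C(plan, d_signal, d_signal, CUFFT_FORWARD);
  cufftDestroy(plan);
}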
cuFFTXt
To distribute FFT calculations across GPUs in a single node, check out cuFFTXt. This library includes functions to help users manipulate data on multiple GPUs and keep track of data ordering, which allows data to be processed in the most efficient way possible.
cuFFTMp
In addition to multi-GPU support within a single system, cuFFTMp provides support for multiple GPUs across multiple nodes. This library can be used with any MPI application, since it is independent of the quality of the MPI implementation. It uses NVSHMEM, a communication library based on the OpenSHMEM standard that was designed for NVIDIA GPUs.
cuFFTDx
To improve performance by avoiding unnecessary trips to global memory and allowing fusion of FFT kernels with other operations, check out cuFFT device extensions (cuFFTDx). Part of the Math Libraries Device Extensions, it allows applications to compute FFTs inside user kernels.
Optimize standard mathematical functions with CUDA Math API
The CUDA Math API is a collection of standard mathematical functions optimized for every NVIDIA GPU architecture; all of the CUDA libraries rely on it. The API supports all C99 standard float and double math functions with all rounding modes, along with additional functions such as the trigonometric and exponential variants cospi and sincos and the inverse error functions erfinv and erfcinv.
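Because these are ordinary device functions, they can be called directly inside any kernel; the toy kernel below only demonstrates the calling style, and its arithmetic has no particular meaning.
// Toy sketch: Math API functions such as sincosf, cospif, and erfinvf are
// called like regular functions from device code.
__global__ void math_api_demo(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float s, c;
        sincosf(in[i], &s, &c);              // sine and cosine in one call
        out[i] = cospif(in[i])               // cos(pi * x) without spelling out pi
               + erfinvf(0.5f * s) + c;      // inverse error function
    }
}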
Customize code using C++ templates with CUTLASS
Matrix multiplications are the foundation of many scientific computations. These multiplications are particularly important in efficient implementation of deep learning algorithms. Similar to cuBLAS, CUDA Templates for Linear Algebra Subroutines (CUTLASS) comprises a set of linear algebra routines to carry out efficient computation and scaling.
It incorporates strategies for hierarchical decomposition and data movement similar to those used to implement cuBLAS and cuDNN. However, unlike cuBLAS, CUTLASS is increasingly modularized and reconfigurable. It decomposes the moving parts of GEMM into fundamental components or blocks available as C++ template classes, thereby giving you flexibility to customize your algorithms.
The software is pipelined to hide latency and maximize data reuse. Shared memory is accessed without bank conflicts to maximize data throughput and minimize the memory footprint, and the template structure lets you design your application exactly the way you want. To learn more about using CUTLASS to improve the performance of your application, see CUTLASS: Fast Linear Algebra in CUDA C++.
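As a rough idea of what this looks like in practice, the sketch below instantiates CUTLASS's basic device-level GEMM for single-precision, column-major operands. Template defaults differ between CUTLASS versions, so treat this as an outline under those assumptions rather than a drop-in implementation.
#include <cutlass/gemm/device/gemm.h>

// Sketch: single-precision GEMM D = alpha * A * B + beta * C with column-major
// layouts; remaining template parameters are left at their CUTLASS defaults.
using Gemm = cutlass::gemm::device::Gemm<
    float, cutlass::layout::ColumnMajor,    // A
    float, cutlass::layout::ColumnMajor,    // B
    float, cutlass::layout::ColumnMajor>;   // C and D

cutlass::Status run_sgemm(int M, int N, int K, float alpha,
                          const float *A, int lda, const float *B, int ldb,
                          float beta, float *C, int ldc)
{
    Gemm gemm_op;
    // Arguments: problem size, tensor refs for A, B, C (source) and D (destination),
    // and the epilogue scalars {alpha, beta}; here C is updated in place.
    return gemm_op({{M, N, K}, {A, lda}, {B, ldb}, {C, ldc}, {C, ldc}, {alpha, beta}});
}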
Compute differential equations with AmgX
AmgX provides a GPU-accelerated AMG (algebraic multi-grid) library and is supported on a single GPU or multi-GPUs on distributed nodes. It allows users to create complex nested solvers, smoothers, and preconditioners. This library implements classical and aggregation-based algebraic multigrid methods with different smoothers such as block-Jacobi, Gauss-Seidel, and dense LU.
This library also contains preconditioned Krylov subspace iterative methods such as PCG and BICGStab. AmgX provides up to 10x acceleration to the computationally intense linear solver portion of simulations and is well-suited for implicit unstructured methods.
AmgX was specifically developed for CFD applications and can be used in domains such as energy, physics, and nuclear safety. A real-life example of the AmgX library is in solving the Poisson Equation for small-scale to large-scale computing problems.
The flying snake simulation example shows the reduction in time and cost incurred when using the AmgX wrapper on GPUs to accelerate CFD codes. There is a 21x speed-up with 3 million mesh points on one K20 GPU when compared to one 12-core CPU node.
Get started with NVIDIA Math Libraries
We continue working to improve the NVIDIA Math Libraries. If you have questions or a new feature request, contact Product Manager Matthew Nicely.
Acknowledgements
We would like to thank Matthew Nicely for his guidance and active feedback. A special thank you to Anita Weemaes for all her feedback and her continued support throughout.
CUDA Pro Tip: Optimized Filtering with Warp-Aggregated Atomics
Note: This post has been updated (November 2017) for CUDA 9 and the latest GPUs. The NVCC compiler now performs warp aggregation for atomics automatically in many cases, so you can get higher performance with no extra effort. In fact, the code generated by the compiler is actually faster than the manually-written warp aggregation code. This post is mainly intended for those who want to learn how it works, and apply a similar technique to other problems.
In this post, I’ll introduce warp-aggregated atomics, a useful technique to improve performance when many threads atomically add to a single counter. In warp aggregation, the threads of a warp first compute a total increment among themselves, and then elect a single thread to atomically add the increment to a global counter. This aggregation reduces the number of atomics performed by up to the number of threads in a warp (up to 32x on current GPUs), and can dramatically improve performance. Moreover, in many typical cases, you can implement warp aggregation as a drop-in replacement for standard atomic operations, so it is useful as a simple way to improve performance of complex applications.
Problem: Filtering by a Predicate
Consider the following filtering problem: I have a source array, src, containing n elements, and a predicate, and I need to copy all elements of src satisfying the predicate into the destination array, dst. For the sake of simplicity, assume that dst has length of at least n and that the order of elements in the dst array does not matter. For this example, I assume that the array elements are integers, and the predicate is true if and only if the element is positive. Here is a sample CPU implementation of filtering.
int filter(int *dst, const int *src, int n) {
int nres = 0;
for (int i = 0; i < n; i++)
if (src[i] > 0)
dst[nres++] = src[i];
// return the number of elements copied
return nres;
}
Filtering, also known as stream compaction, is a common operation, and it is a part of the standard libraries of many programming languages, where it goes under a variety of names, including grep, copy_if, select, FindAll and so on. It is also very often implemented simply as a loop, as it may be very tightly integrated with the surrounding code.
Solutions with Global and Shared Memory
Now, what if I want to implement filtering on a GPU, and process the elements of the array src in parallel? A straightforward approach is to use a single global counter and atomically increment it for each new element written into the dst array. A GPU implementation of this may look as follows.
__global__
void filter_k(int *dst, int *nres, const int *src, int n) {
int i = threadIdx.x + blockIdx.x * blockDim.x;
if(i < n && src[i] > 0)
dst[atomicAdd(nres, 1)] = src[i];
}
The main problem with this implementation is that all threads in the grid that read positive elements from src increment a single counter, nres. Depending on the number of positive elements, this may be a very large number of threads. Therefore, the degree of collisions for atomicAdd() is high, which limits performance. You can see this in Figure 1, which plots the kernel bandwidth (counting both reads and writes, but not atomics) achieved on a Kepler K80 GPU when processing 100 million (100 * 2^20) elements.
The bandwidth is inversely proportional to the number of atomics executed, or the fraction of positive elements in the array. While performance is acceptable (about 55 GiB/s) for a 5% fraction, it drops drastically when more elements pass the filter, to just around 8 GiB/s for a 50% fraction. Atomic operations are clearly a bottleneck, and need to be removed or reduced to increase application performance.
One way to improve filtering performance is to use shared memory atomics. This increases the speed of each operation, and reduces the degree of collisions, as the counter is only shared between threads in a single block. With this approach, we only need one global atomicAdd() per thread block. Here is a kernel implemented with this approach.
__global__
void filter_shared_k(int *dst, int *nres, const int* src, int n) {
__shared__ int l_n;
int i = blockIdx.x * (NPER_THREAD * BS) + threadIdx.x;
for (int iter = 0; iter < NPER_THREAD; iter++) {
// zero the counter
if (threadIdx.x == 0)
l_n = 0;
__syncthreads();
// get the value, evaluate the predicate, and
// increment the counter if needed
int d, pos;
if(i < n) {
d = src[i];
if(d > 0)
pos = atomicAdd(&l_n, 1);
}
__syncthreads();
// leader increments the global counter
if(threadIdx.x == 0)
l_n = atomicAdd(nres, l_n);
__syncthreads();
// threads with true predicates write their elements
if(i < n && d > 0) {
pos += l_n; // increment local pos by global counter
dst[pos] = d;
}
__syncthreads();
i += BS;
}
}
Another approach is to first use a parallel prefix sum to compute the output index of each element. Thrust’s copy_if() function uses an optimized version of this approach. Performance of both approaches for Kepler K80 is presented in Figure 2. Though shared memory atomics improve filtering performance, it still stays within 1.5x of the original approach. Atomics are still a bottleneck, as the number of operations hasn’t changed. Thrust is better than both approaches for high filtering fractions, but incurs large upfront costs which are not amortized for small filtering fractions.
It is important to note that the comparison to Thrust is not apples-to-apples, because Thrust implements a stable filter: it preserves the relative order of the input elements in the output. This is a result of using prefix sum to implement it, but it is more expensive as a result. If we don’t need a stable filter, then a purely atomic approach is simpler and performs less work.
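For comparison, the Thrust version of the filter is essentially a one-liner around copy_if. This sketch assumes d_src and d_dst are device pointers and uses a hypothetical is_positive predicate matching the example above.
#include <thrust/copy.h>
#include <thrust/execution_policy.h>

// Predicate matching the filtering example: keep strictly positive elements.
struct is_positive {
    __host__ __device__ bool operator()(int x) const { return x > 0; }
};

// Stable stream compaction with Thrust; returns the number of elements kept.
int filter_thrust(int *d_dst, const int *d_src, int n)
{
    int *d_end = thrust::copy_if(thrust::device, d_src, d_src + n,
                                 d_dst, is_positive());
    return static_cast<int>(d_end - d_dst);
}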
Warp-Aggregated Atomics
Warp aggregation is the process of combining atomic operations from multiple threads in a warp into a single atomic. This approach is orthogonal to using shared memory: the type of the atomics remains the same, but we use fewer of them. With warp aggregation, we replace atomic operations with the following steps: the threads of the warp elect a leader thread; the threads compute the total increment for the warp; the leader performs the atomic add; the leader broadcasts the result to the other threads; and each thread computes its own output position from the broadcast value and its rank among the active lanes.
Starting from CUDA 9.0, there are two APIs available to implement this: Cooperative Groups, an extension to the CUDA programming model for managing groups of cooperating threads, and warp-synchronous primitive functions.
After performing a warp-aggregated atomic, each thread proceeds as in the original code, and writes its value to its position in the dst array. Let’s now consider each of the steps in detail.
Step 1: Leader Election
In filtering, it’s possible to reorganize the code so that all threads are active. However, in other cases, atomics can occur within nested conditionals where some threads may be inactive. Generally, the approach should assume that only some threads are active, so I need a group made up of all active threads.
To use Cooperative Groups, include the header file and use the cooperative_groups namespace.
#include <cooperative_groups.h>
using namespace cooperative_groups;
Create a group of all currently coalesced threads.
auto g = coalesced_threads();
Getting the thread rank is easy with Cooperative Groups: call g.thread_rank(). The thread with rank 0 will be the leader.
If you prefer to use primitive functions, start with __activemask().
unsigned int active = __activemask();
(An older approach is to use __ballot(1). This works with CUDA 8, but is deprecated starting with CUDA 9.)
Then elect a leader. Threads within a warp are called lanes; the simplest way to elect a leader is to use the active lane with the lowest number. The __ffs() primitive returns the 1-based index of the lowest set bit, so subtract 1 to get a 0-based index.
int leader = __ffs(active) - 1;
Step 2: Computing the Total Increment
For the filtering example, each thread with a true predicate increments the counter by 1. The total increment for the warp is equal to the number of active lanes (I don’t consider here the case of increments that vary across lanes). This is trivial with Cooperative Groups: g.size() returns the number of threads in the group.
If you prefer to use primitive functions, you can compute the total increment as the number of bits set in the mask returned by __activemask(). For this, use the __popc(int v) intrinsic, which returns the number of bits set in the binary representation of integer v. The following code computes the total increment.
int change = __popc(active);
Step 3: Performing the Atomic Add
Only the leader thread (lane 0) performs the atomic operation. With Cooperative Groups, just check if thread_rank() returns 0, like this.
int warp_res;
if(g.thread_rank() == 0)
warp_res = atomicAdd(ctr, g.size());
If you prefer to use primitive functions, you must compute the rank of each lane using __lanemask_lt(), which returns the mask of all lanes (including inactive ones) with ID less than the current lane. You can then compute the rank by ANDing this mask with the active lane mask, and counting the number of bits set.
unsigned int rank = __popc(active & __lanemask_lt());
int warp_old;
if(rank == 0)
warp_old = atomicAdd(ctr, change); // ctr is the pointer to the counter
Step 4: Broadcasting the Result
In this step, the leader thread broadcasts the result of the atomicAdd() to other lanes in the warp. We can do this by using the shuffle operation across the active lanes.
With Cooperative Groups, you can broadcast the result using g.shfl(warp_res, 0). The 0 is the index of the leader thread, which works since only active threads are part of the group (because it was created using coalesced_threads()).
If you prefer to use primitive functions, call __shfl_sync(), which has the following signature, where T is a 32- or 64-bit integer or floating-point type.
T __shfl_sync(unsigned int mask, T var, int srcLane, int width=warpSize);
__shfl_sync() returns the value var held by the thread whose ID is given by srcLane. mask is the mask of threads participating in the call. All non-exited threads for which the mask bit is 1 must execute the same intrinsic with the same mask, or the result is undefined. width must be a power of two less than or equal to the warp size. The warp is broken into groups of that size, and srcLane refers to the lane number within the group. If srcLane is outside the range [0, width-1] (both ends included), then srcLane modulo width gives the lane number.
The following code uses __shfl_sync() to broadcast the result.
warp_res = __shfl_sync(active, warp_res, leader);
CUDA 8 and earlier implementations used __shfl(), which is deprecated starting with CUDA 9.
Step 5: Computing the Result for Each Lane
The last step computes the output position for each lane, by adding the broadcast counter value for the warp to the lane’s rank among the active lanes.
In Cooperative Groups:
return g.shfl(warp_res, 0) + g.thread_rank();
With primitive functions:
return warp_res + rank;
We can now join the pieces of the code for steps 1-5 to obtain the full warp-aggregated version of the increment function.
With Cooperative Groups, the code is concise and clear.
__device__ int atomicAggInc(int *ctr) {
auto g = coalesced_threads();
int warp_res;
if(g.thread_rank() == 0)
warp_res = atomicAdd(ctr, g.size());
return g.shfl(warp_res, 0) + g.thread_rank();
}
With primitive functions, the code is more complex.
__device__ int atomicAggInc(int *ctr) {
unsigned int active = __activemask();
int leader = __ffs(active) - 1;
int change = __popc(active);
unsigned int rank = __popc(active & __lanemask_lt());
int warp_res;
if(rank == 0)
warp_res = atomicAdd(ctr, change);
warp_res = __shfl_sync(active, warp_res, leader);
return warp_res + rank;
}
Performance Comparison
The warp-aggregated atomic increment function is a drop-in replacement for atomicAdd(ctr, 1) where ctr is the same across all threads of a warp. Therefore, we can rewrite GPU filtering using atomicAggInc() as follows.
__global__ void filter_k(int *dst, int *nres, const int *src, int n) {
int i = threadIdx.x + blockIdx.x * blockDim.x;
if(i >= n)
return;
if(src[i] > 0)
dst[atomicAggInc(nres)] = src[i];
}
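For completeness, here is a hedged sketch of how this kernel might be driven from the host; the launch configuration is arbitrary and error checking is omitted.
// Sketch: allocate and zero the global counter, launch the kernel, and read
// back the number of elements that passed the filter.
int run_filter(int *d_dst, const int *d_src, int n)
{
    int *d_nres;
    cudaMalloc(&d_nres, sizeof(int));
    cudaMemset(d_nres, 0, sizeof(int));

    int block = 256;
    int grid  = (n + block - 1) / block;
    filter_k<<<grid, block>>>(d_dst, d_nres, d_src, n);

    int nres = 0;
    cudaMemcpy(&nres, d_nres, sizeof(int), cudaMemcpyDeviceToHost);
    cudaFree(d_nres);
    return nres;
}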
Note that though we defined warp aggregation with global atomics in mind, nothing precludes doing the same for shared memory atomics. In fact, the atomicAggInc(int *ctr) function defined above works if ctr is a pointer to shared memory. Warp aggregation can thus also be used to accelerate filtering with shared memory. Figure 3 shows a performance comparison of different variants of filtering with and without warp aggregation for a Kepler GPU.
For Kepler GPUs, the version with warp-aggregated global atomics is the clear winner. It always provides more than 80 GiB/s bandwidth, and the bandwidth actually increases with the fraction of elements that successfully pass through the filter. This also indicates that atomics are no longer a significant bottleneck. Compared to global atomics, performance improves up to 21x. Performance of a simple copy operation on the same GPU is around 190 GiB/s. We can thus say that the performance of filtering with warp-aggregated atomics is comparable to that of a simple copy operation. This also means that filtering can now be used in performance-critical portions of the code. Also note that shared memory atomics (with warp aggregation) are actually slower than warp-aggregated atomics. This indicates that warp aggregation already does a very good job, and using shared memory on Kepler brings no benefit and only introduces additional overhead.
Since warp-aggregated atomics can be used as a drop-in replacement for normal atomics in certain cases, it is no surprise that the compiler now performs this optimization automatically in many cases. In fact, the compiler does the optimization for post-Kepler GPUs starting with CUDA 7.5, and in CUDA 9 it also does it for Kepler GPUs. This is why the earlier comparisons were performed with CUDA 8 on Kepler, where warp-aggregated atomics were not yet inserted automatically.
Figures 4, 5 and 6 show the comparison for Kepler, Pascal and Volta with CUDA 9. The performance of simple atomicAdd() is similar to that of warp-aggregated atomics.
Conclusion
Warp aggregation of atomics is a useful technique to improve performance of applications that perform many operations on a small number of counters. In this post we applied warp aggregation to filtering, and obtained more than an order-of-magnitude performance improvement for Kepler with CUDA 8. In fact, the technique turns out to be so useful that it is now implemented in the NVCC compiler, and you get warp aggregation in many cases by default with no additional effort required.
Warp-aggregated atomics are by no means limited to filtering; you can use them in many other applications that make use of atomic operations.
CUDA Pro Tip: Write Flexible Kernels with Grid-Stride Loops
One of the most common tasks in CUDA programming is to parallelize a loop using a kernel. As an example, let’s use our old friend SAXPY. Here’s the basic sequential implementation, which uses a for loop. To efficiently parallelize this, we need to launch enough threads to fully utilize the GPU.
void saxpy(int n, float a, float *x, float *y)
{
for (int i = 0; i < n; ++i)
y[i] = a * x[i] + y[i];
}
Common CUDA guidance is to launch one thread per data element, which means to parallelize the above SAXPY loop we write a kernel that assumes we have enough threads to more than cover the array size.
__global__
void saxpy(int n, float a, float *x, float *y)
{
int i = blockIdx.x * blockDim.x + threadIdx.x;
if (i < n)
y[i] = a * x[i] + y[i];
}
I’ll refer to this style of kernel as a monolithic kernel, because it assumes a single large grid of threads to process the entire array in one pass. You might use the following code to launch the saxpy kernel to process one million elements.
// Perform SAXPY on 1M elements
saxpy<<<4096,256>>>(1<<20, 2.0, x, y);
Instead of completely eliminating the loop when parallelizing the computation, I recommend using a grid-stride loop, as in the following kernel.
__global__
void saxpy(int n, float a, float *x, float *y)
{
for (int i = blockIdx.x * blockDim.x + threadIdx.x;
i < n;
i += blockDim.x * gridDim.x)
{
y[i] = a * x[i] + y[i];
}
}
Rather than assume that the thread grid is large enough to cover the entire data array, this kernel loops over the data array one grid-size at a time.
Notice that the stride of the loop is blockDim.x * gridDim.x which is the total number of threads in the grid. So if there are 1280 threads in the grid, thread 0 will compute elements 0, 1280, 2560, etc. This is why I call this a grid-stride loop. By using a loop with stride equal to the grid size, we ensure that all addressing within warps is unit-stride, so we get maximum memory coalescing, just as in the monolithic version.
When launched with a grid large enough to cover all iterations of the loop, the grid-stride loop should have essentially the same instruction cost as the if statement in the monolithic kernel, because the loop increment will only be evaluated when the loop condition evaluates to true.
There are several benefits to using a grid-stride loop. First, it provides scalability and thread reuse: the loop supports any problem size, even one larger than the grid you launch, and it lets you tune the grid to the hardware, for example by launching a multiple of the number of multiprocessors on the device, as in the following launch code.
int numSMs;
cudaDeviceGetAttribute(&numSMs, cudaDevAttrMultiProcessorCount, devId);
// Perform SAXPY on 1M elements
saxpy<<<32*numSMs, 256>>>(1 << 20, 2.0, x, y);
When you limit the number of blocks in your grid, threads are reused for multiple computations. Thread reuse amortizes thread creation and destruction cost along with any other processing the kernel might do before or after the loop (such as thread-private or shared data initialization). A second benefit is easier debugging: you can switch to serial processing simply by launching one block with one thread.
saxpy<<<1,1>>>(1<<20, 2.0, x, y);
This makes it easier to emulate a serial host implementation to validate results, and it can make printf debugging easier by serializing the print order. Serializing the computation also allows you to eliminate numerical variations caused by changes in the order of operations from run to run, helping you to verify that your numerics are correct before tuning the parallel version. A third benefit is portability and readability: the body of a grid-stride loop looks much like the original sequential loop, and the Hemi library makes this explicit with its grid_stride_range() helper, so the same kernel source can target either the GPU or the CPU, as in the following example.
HEMI_LAUNCHABLE
void saxpy(int n, float a, float *x, float *y)
{
for (auto i : hemi::grid_stride_range(0, n)) {
y[i] = a * x[i] + y[i];
}
}
We can launch the kernel using this code, which generates a kernel launch when compiled for CUDA, or a function call when compiled for the CPU.
hemi::cudaLaunch(saxpy, 1<<20, 2.0, x, y);
Grid-stride loops are a great way to make your CUDA kernels flexible, scalable, debuggable, and even portable. While the examples in this post have all used CUDA C/C++, the same concepts apply in other CUDA languages such as CUDA Fortran.
I’d like to thank Justin Luitjens from the NVIDIA Developer Technology group for the idea and many of the details in this CUDA Pro Tip.
Optimizing the Deep Learning Recommendation Model on NVIDIA GPUs
Recommender systems help people find what they’re looking for among an exponentially growing number of options. They are a critical component for driving user engagement on many online platforms.
With the rapid growth in scale of industry datasets, deep learning (DL) recommender models, which capitalize on large amounts of training data, have started to show advantages over traditional methods. Current DL–based models for recommender systems include the Wide and Deep model, Deep Learning Recommendation Model (DLRM), neural collaborative filtering (NCF), Variational Autoencoder (VAE) for Collaborative Filtering, and BERT4Rec among others.
There are multiple challenges when it comes to performance of large-scale recommender systems solutions: huge datasets, complex data preprocessing and feature engineering pipelines, as well as extensive repeated experimentation. To meet the computational demands for large-scale DL recommender systems training and inference, recommender-on-GPU solutions aim to provide fast feature engineering and high training throughput (to enable both fast experimentation and production retraining), as well as low latency, high-throughput inference.
In this post, we discuss our reference implementation of DLRM, which is part of the NVIDIA GPU-accelerated DL model portfolio. It covers a wide range of network architectures and applications in many different domains, including image, text and speech analysis, and recommender systems. With DLRM, we systematically tackle the challenges mentioned.
For data preprocessing tasks on massive datasets, we introduce new Spark-on-GPU tools. With automatic mixed precision training on NVIDIA Tensor Core GPUs, an optimized data loader and a custom embedding CUDA kernel, on a single Tesla V100 GPU, you can train a DLRM model on the Criteo Terabyte dataset in just 44 minutes, compared to 36.5 hours on 96-CPU threads.
We also demonstrate how to deploy trained DLRM models into production with the NVIDIA Triton Inference Server.
DLRM overview
DLRM is a DL-based model for recommendations introduced by Facebook research. Like other DL-based approaches, DLRM is designed to make use of both categorical and numerical inputs which are usually present in recommender system training data. Figure 1 shows the model architecture. To handle categorical data, embedding layers map each category to a dense representation before being fed into multilayer perceptrons (MLP). Numerical features can be fed directly into an MLP.
At the next level, second-order interactions of different features are computed explicitly by taking the dot product between all pairs of embedding vectors and processed dense features. Those pairwise interactions are fed into a top-level MLP to compute the likelihood of interaction between a user and item pair.
Compared to other DL-based approaches to recommendation, DLRM differs in two ways. First, it computes the feature interaction explicitly while limiting the order of interaction to pairwise interactions.
Second, DLRM treats each embedded feature vector (corresponding to categorical features) as a single unit, whereas other methods (such as Deep and Cross) treat each element in the feature vector as a new unit that should yield different cross terms. These design choices help reduce computational/memory cost while maintaining competitive accuracy.
Criteo dataset
The Criteo Terabyte click logs public dataset, one of the largest public datasets for recommendation tasks, offers a rare glimpse into the scale of real enterprise data. It contains ~1.3 TB of uncompressed click logs containing over four billion samples spanning 24 days, and can be used to train recommender system models that predict the ad clickthrough rate.
This is a large dataset in the collection of public DL datasets. Yet, real datasets can be potentially one or two orders of magnitude larger. Enterprises try to leverage as much historical data as feasible, for this generally translates into better accuracy.
For this post, we used the Criteo Terabyte dataset to demonstrate the efficiency of the GPU-optimized DLRM training pipeline. Each record in this dataset contains 40 values: a label indicating a click (value 1) or no click (value 0), 13 values for numerical features, and 26 values for categorical features. Features are anonymized and categorical values are hashed to ensure privacy.
End-to-end training pipeline
We provide an end-to-end training pipeline on the Criteo Terabyte data that helps you get started with just a few simple steps.
git clone https://github.com/NVIDIA/DeepLearningExamples
cd DeepLearningExamples/PyTorch/Recommendation/DLRM
docker build . -t nvidia_dlrm_pyt
mkdir -p data
docker run --runtime=nvidia -it --rm --ipc=host -v ${PWD}/data:/data nvidia_dlrm_pyt bash
Before downloading data, you must check out and agree to the terms and conditions of the Criteo Terabyte dataset. The dataset contains 24 zipped files and requires about 1 TB of disk storage for the data and another 2 TB for intermediate results.
If you don’t want to experiment on the full set of 24 files, you can download a subset of files and modify the data preprocessing scripts to work on these files only.
cd preproc && ./prepare_dataset.sh && cd -
python -m dlrm.scripts.main --mode train --dataset /data --save_checkpoint_path model.pt
Next, we discuss several details of this training pipeline.
Data preprocessing and transformation with Spark
The original Facebook DLRM code base comes with a data preprocessing utility to preprocess the data.
This data utility, based on NumPy, runs on a single CPU thread and takes ~5.5 days to transform the whole Criteo Terabyte dataset.
We improved the data preprocessing process with Spark to make use of all available CPU threads. In the DLRM Docker image, we used Spark 2.4.5, which starts a standalone Spark cluster. This results in significant improvement in data preprocessing speed, scaling well with the number of available CPU cores. Spark outputs the transformed data in Parquet format. Finally, we converted the Parquet data files into a binary format designed especially for the Criteo dataset.
On an AWS r5d.24xl instance with 96 cores and 768 GB RAM, the whole process takes 9.45 hours (without frequency capping) and 2.87 hours (with frequency capping to map all rare categories that occur fewer than 15 times to a special category).
Spark can be improved even further. We introduced a Spark-GPU plugin for DLRM. Figure 2 shows the data preprocessing time improvement for Spark on GPU. With 8 V100 32-GB GPUs, you can further speed up the processing time by a factor of up to 43X compared to an equivalent Spark-CPU pipeline. The Spark-GPU plugin is currently in early access for select developers. We invite you to register your interest in the Spark-GPU plugin.
Our preprocessing scripts are designed for the Criteo Terabyte dataset but should work with any other dataset with the same format. The data should be split into text files, with each line containing a single training example. An example consists of multiple tab-separated fields: the click label, followed by the numerical features and then the categorical features.
You must modify data parameters, such as the number of unique values for each categorical feature and the number of numerical features in preproc/spark_data_utils.py, and Spark configuration parameters in preproc/run_spark.sh.
Data loading
We employ a binary data format, which is essentially a serialization of NumPy arrays that load particularly fast. This, combined with overlapping data loading and host2device transfer with neural net computations, allows us to achieve high GPU utilization.
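To illustrate the general idea of overlapping host-to-device transfers with computation (this is not the actual DLRM data loader), a double-buffered pattern with CUDA streams might look like the sketch below; process_chunk and the chunking scheme are hypothetical, and h_data is assumed to be pinned host memory.
// Hypothetical sketch: ping-pong between two streams so that the copy of one
// chunk overlaps with the processing of the previous one.
__global__ void process_chunk(float *d_chunk, int chunk_elems)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < chunk_elems)
        d_chunk[i] *= 2.0f;   // stand-in for the real per-chunk work
}

void pipeline(const float *h_data, float *d_buf[2], int num_chunks, int chunk_elems)
{
    cudaStream_t stream[2];
    cudaStreamCreate(&stream[0]);
    cudaStreamCreate(&stream[1]);

    for (int c = 0; c < num_chunks; c++) {
        int s = c % 2;   // alternate buffers and streams
        cudaMemcpyAsync(d_buf[s], h_data + (size_t)c * chunk_elems,
                        chunk_elems * sizeof(float),
                        cudaMemcpyHostToDevice, stream[s]);
        process_chunk<<<(chunk_elems + 255) / 256, 256, 0, stream[s]>>>(
            d_buf[s], chunk_elems);
    }
    cudaStreamSynchronize(stream[0]);
    cudaStreamSynchronize(stream[1]);
    cudaStreamDestroy(stream[0]);
    cudaStreamDestroy(stream[1]);
}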
Embedding tables and custom embedding kernel
DL-based recommendation models are often too large to fit onto a single device memory. This is mainly due to the sheer size of the embedding tables, which is proportional to the cardinality of categorical features and the dimensionality of the latent space (the number of rows and columns in the embedding tables).
We adopted a common practice to map all rare categorical values to a special ‘missing category’ value (here, any category that occurs fewer than 15 times in the dataset is treated as a missing category). This reduces embedding table size and avoids embedding entries that would not be sufficiently updated during training from their random initializations.
Unlike other compute-intensive layers, embedding layers are memory bandwidth–constrained. GPUs have very high bandwidth memory compared to current state-of-the-art commodity CPUs. To efficiently use the available memory bandwidth, we combine all categorical embedding tables into one single table and use a custom kernel to perform embedding lookups. The kernel uses vectorized load-store instructions for optimal performance.
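The sketch below conveys the flavor of a vectorized lookup; it is not the actual DLRM embedding kernel. Each block gathers one embedding row, and each thread moves a float4 at a time, assuming the embedding dimension is a multiple of four.
// Hypothetical sketch: gather one embedding row per block using float4 loads
// and stores; vec_per_row = embedding_dim / 4.
__global__ void gather_rows(float4 *out, const float4 *table,
                            const int *indices, int vec_per_row, int num_lookups)
{
    int row = blockIdx.x;      // one lookup per block
    int v   = threadIdx.x;     // one float4 chunk per thread
    if (row < num_lookups && v < vec_per_row)
        out[row * vec_per_row + v] = table[(size_t)indices[row] * vec_per_row + v];
}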
Training with automatic mixed precision
Mixed precision is the use of multiple numerical precisions, such as FP32 and FP16, in a computing procedure.
Starting with the Volta architecture, NVIDIA GPUs are equipped with Tensor Cores, specialized compute units that perform matrix multiplication, a building block for linear (also known as fully connected) and convolution layers. The automatic mixed precision (AMP) feature available in the NVIDIA NGC PyTorch container enables mixed precision training with minimal changes to the code base. Under the hood, AMP is provided by the NVIDIA APEX library, which enables mixed precision training by changing only three lines of your script.
In our experiments on a wide range of models and architectures in the NVIDIA DL model library, AMP usually offers speedup in the range of 1.3x up to 3x or more. For DLRM, AMP offers a 2.37x speed up compared to FP32 training. With a V100 32GB GPU, DLRM can be trained on the Criteo Terabyte dataset for one epoch in just 44 minutes, converging to an AUC value of 0.8036.
End-to-end inference pipeline
Recommender system inference involves determining an ordered list of items with which the query user most likely interacts.
For large commercial databases with millions to hundreds of millions of items to choose from (like advertisements or apps), an item retrieval procedure is usually carried out to reduce the number of items to a more manageable quantity, for example, a few hundreds to a few thousands. The methods include computationally efficient algorithms such as approximate neighborhood search or filtering based on user preferences and business rules. From there, a DL recommender model is invoked to re-rank the items. Those with the highest scores are presented to the user. This process is demonstrated in Figure 3.
As you can see, for each query user, the number of user-item pairs to score can be as large as a few thousand. This places a heavy load on the recommender system inference server. The server must handle high throughput to serve many users concurrently, yet operate at low latency to satisfy the stringent latency thresholds of online commerce engines.
NVIDIA Triton Inference Server provides a cloud inferencing solution optimized for NVIDIA GPUs. The server provides an inference service using an HTTP or GRPC endpoint, allowing remote clients to request inferencing for any model being managed by the server. Triton Server automatically manages and makes use of all the available GPUs.
The next section covers how to prepare the DLRM model for inference with Triton Server and see how Triton Server performs.
Prepare the model for inference
Triton Server can serve TorchScript and ONNX models, as well as others. We provide an export tool to prepare trained DLRM models for production inference.
Using TorchScript
Exporting pretrained PyTorch DLRM models to TorchScript models can be done using either torch.jit.script or torch.jit.trace with the following command:
python triton/deployer.py --ts-script --triton-max-batch-size 65536 --model_checkpoint dlrm.pt --save-dir /repository [other optional parameters]
This produces a production-ready model for Triton Server from a checkpoint named dlrm.pt, using the torch.jit.script and a maximum servable batch size of 65536.
Using ONNX
Similarly, an ONNX production-ready model can be created with the following command:
python triton/deployer.py --onnx --triton-max-batch-size 65536 --model_checkpoint dlrm.pt --save-dir /repository [other optional parameters]
The outcome of the export tool is a packaged directory /repository, which Triton Server can readily make use of.
Set up Triton Inference Server
With the model ready to go, Triton Server can be set up with the following steps.
docker pull nvcr.io/nvidia/tensorrtserver:<tag>
docker run --network=host -v /repository:/models nvcr.io/nvidia/tensorrtserver:<tag> trtserver --model-store=/models
Use the Triton Server perf_client tool to measure inference performance
The Triton Server comes with a handy performance client tool, perf_client. This tool stress tests the inference server with either synthetic or real data, using multiple parallel threads. It can be invoked with the following command:
/workspace/install/bin/perf_client --max-threads 10 -m dlrm-onnx-16 -x 1 -p 5000 -v -i gRPC -u localhost:8001 -b 4096 -l 5000 --concurrency-range 1 --input-data /location/for/perfdata -f result.csv
Using the perf client, we collected the latency and throughput data to populate the figures shown later in this post.
Triton Server batching strategies
By default, the exported model is deployed with the Triton Server static batching strategy: each request is immediately fulfilled. On the other hand, dynamic batching is a feature of the inference server that allows inference requests to be combined by the server, so that a batch is created dynamically. This results in the same increased throughput seen for batched inference requests.
The inferencing for a batch of inputs is performed at the same time, which is especially important for GPUs, as it can greatly increase throughput. In many use cases, however, individual inference requests are not batched and would therefore miss out on the throughput benefits of batching.
For online applications with a strict latency threshold, Triton Server is configurable so that queue time with dynamic batching is limited to an upper limit while forming the largest batch possible to maximize the throughput. In the model directory, there is a config file named config.pbtxt that can be configured with an extra batching option as follows:
dynamic_batching {
preferred_batch_size: [ 65536 ]
max_queue_delay_microseconds: 7000
}
Static batch throughput
Figure 4 shows Triton Server throughput with the TorchScript DLRM model at various batch sizes. For recommender systems, large batch sizes are of the most interest. For each query user, several thousands of items are sent along in a single request for item re-ranking. Compared to an 80-thread CPU inference, a Tesla V100 32-GB GPU offers up to 20x improvement in throughput. You can see that the GPU throughput starts to saturate at around a batch size of 8K.
Figure 5 shows the Triton TorchScript inference latency on GPU compared to CPU. At a batch size of 8192, a V100 32-GB GPU reduces the latency by 19x compared to an 80-thread CPU inference.
Dynamic batch throughput
With dynamic batching, you can improve the throughput further over static batching. In this experiment, we set the individual per-user request batch size to 1024, and the Triton maximum and preferred batch size to 65536. Figure 6 shows the latency and throughput at various request concurrency levels. Latency is broken down into client send/receive time, server queue and compute time, networking, and server send/receive time.
Concurrency level is a parameter of perf_client that allows you to control the latency-throughput trade-off. By default, perf_client measures your model’s latency and throughput using the lowest possible load on the model at a request concurrency of 1. To do this, perf_client sends one inference request to the server and waits for the response. When that response is received, perf_client immediately sends another request, and then repeats this process.
At higher concurrency levels of N, perf_client immediately fires up requests one after another without waiting for the previous request to be fulfilled, while maintaining at any time at most N outstanding requests.
Figure 6 shows that if you have a 10-ms upper bound on latency, you can achieve a throughput of 1,318,710 samples/sec. This means ~1288 users can be served per second, each within the 10-ms latency limit, on a single V100 GPU, assuming that you want to score 1024 items for each user and that the user requests come at a uniform rate of maximum 12 requests within any 10-ms window.
Conclusion
In this post, we walked through a complete DLRM pipeline, from data preparation to training to production inference. The GPU-optimized DLRM is available from the NVIDIA deep learning model zoo, under /PyTorch/Recommendation/DLRM. We provide ready-to-go Docker images for training and inference, data downloading and preprocessing tools, and Jupyter demo notebooks to get you up and running quickly. Trained models can then be prepared for production inference in one simple step with our exporter tool. We also invite you to register your interest for early access to the Spark-GPU component.
DLRM forms part of NVIDIA Merlin, a framework for building high-performance, DL–based recommender systems. To learn more about Merlin and the larger ecosystem, see the recent post, Announcing NVIDIA Merlin: An Application Framework for Deep Recommender Systems.
We cordially invite you to try out and benefit from our newly developed tools for your recommender system applications. Your issues and feature requests help guide future development. We are excited to see what you can do with this model on your own data.
Improving GPU Application Performance with NVIDIA CUDA 11.2 Device Link Time Optimization
CUDA 11.2 features the powerful link time optimization (LTO) feature for device code in GPU-accelerated applications. Device LTO brings the performance advantages of device code optimization that were only possible in the nvcc whole program compilation mode to the nvcc separate compilation mode, which was introduced in CUDA 5.0.
Separate compilation mode allows CUDA device kernel code to span across multiple source files, whereas in whole program compilation mode all the CUDA device kernel code in the program is required to be in a single source file. Separate compilation mode introduced source code modularity to device kernel code and so was an important step for improving developer productivity. Separate compilation mode enabled developers to better design and organize device kernel code and to GPU-accelerate many more existing applications without significant code refactoring effort to move all the device kernel code to a single source file. It also improved developer productivity for large parallel application development by only requiring re-compilations of device source files with incremental changes.
The scope of CUDA compiler optimizations is generally limited to each source file being compiled. In separate compilation mode, the scope of compile-time optimization is therefore reduced: the compiler has no visibility into device code referenced outside of the source file, so it cannot take advantage of optimization opportunities that cross file boundaries.
In comparison, in the whole program compilation mode, all the device kernel code that is present in the program is in the same source file, eliminating any external dependencies and allowing the compiler to perform optimizations that were not possible in separate compilation mode. Consequently, programs compiled in whole program compilation mode are usually more performant compared to those compiled in separate compilation mode.
With Device Link Time Optimization (LTO), which was previewed in CUDA 11.0, you can get the source code modularity of separate compilation along with the runtime performance of whole program compilation for device code. While the compiler may not be able to make globally optimal code transformations when optimizing separately compiled CUDA source files, the linker is in a better position to do so.
Compared to the compiler, the linker has a whole program view of the executable being built including source code and symbols from multiple source files and libraries. A whole program view of the executable enables the linker to choose the most performant optimization suitable for the separately compiled program. This Device Link Time Optimization is performed by linker and is a feature of the nvlink utility in CUDA 11.2. Applications with multiple source files and libraries can now be GPU-accelerated without compromising performance in separate compilation mode.
As Figure 1 shows, in nvcc whole program compilation mode the device program to be compiled is in a single source file, X.cu, without any unresolved external references to device functions or variables, and can be fully optimized by the compiler at compile time. In separate compilation mode, however, the compiler can only optimize the device code within the individual source file being compiled, leaving the final executable less optimized than it could be, because optimizations that span source files are out of the compiler's reach. Device link time optimization bridges this gap by deferring optimization to the link step instead.
In device LTO mode, we store a high-level intermediate form of the code for each translation unit, and then at link time we merge all those intermediates to create a high-level representation of all the device code. This enables the linker to perform high-level optimizations like inlining across file boundaries, which not only eliminates the overhead of the calling conventions, but also further enables other optimizations on the inlined block of code itself. The linker can also take advantage of offsets that have been finalized. For instance, shared memory allocations are finalized, and the data offsets are known only at link time, so Device Link Time Optimization can now make low-level optimizations such as constant propagation or folding possible for device code. Even if a function is not inlined, the linker can still see both sides of a call for optimizing the calling convention. Hence, the quality of the code generated for separately compiled programs can be improved with device link time optimization and be as performant as if the program were compiled in whole program mode.
To understand the limitations of separate compilation and possible performance gains with device LTO, let’s look at an example from a MonteCarlo benchmark. There is a call to a device function get_domain() that is defined in another file:
In the sample code below, MC_Location::get_domain is not inlined in standard separate compilation mode because it is defined in another file, but it will be inlined using device link time optimization in CUDA 11.2.
__device__ void MCT_Reflect_Particle(MonteCarlo *monteCarlo,
MC_Particle &particle){
MC_Location location = particle.Get_Location();
const MC_Domain &domain = location.get_domain(monteCarlo);
...
...
/* uses domain */
}
The function get_domain is part of another class, so it makes sense that it is defined in another file. But in separate compilation mode, the compiler does not know what get_domain() does, or even where it exists, when it is called; it therefore cannot inline the function and has to emit the call along with the parameter and return handling, while also saving space for things like the return address after the call. This in turn prevents it from optimizing the subsequent statements that use the domain value. In device LTO mode, get_domain() can be fully inlined and the compiler can perform more optimizations, eliminating the code for the calling convention and enabling optimizations based on the domain value. In short, device LTO brings all the performance optimizations to separate compilation mode that were previously only available in whole program compilation mode.
Using device LTO
To use device LTO, add the option -dlto to both the compilation and link commands, as shown below. Skipping the -dlto option in either of these two steps affects your results. Compilation of CUDA source files with the -dlto option:
nvcc -dc -dlto *.cu
Linking of cuda object files with -dlto option:
nvcc -dlto *.o
Using -dlto option at compile time instructs the compiler to store a high-level intermediate representation (NVVM-IR) of the device code being compiled into the fatbinary. The -dlto option at link time will instruct the linker to retrieve the NVVM IR from all the link objects and merge them together into a single IR and perform optimization on the resulting IR for code generation. Device LTO works with any supported SM arch target.
Using device LTO with existing libraries
Device LTO can only take effect when both the compile and link steps use -dlto. If -dlto is used at compile time but not at link time then at link time each object is individually compiled to SASS and then linked as normal without any opportunity for optimization. If -dlto is used at link time but not at compile time, then the linker does not find the intermediate representations to perform LTO on and skips the optimization step linking the objects directly.
Device LTO works best if all the objects that contain device code are built with -dlto. However, it can still be used even if only some of the objects use -dlto, as in Figure 2.
In that case, at link time, the objects built with -dlto are linked together to form a relocatable object, and then linked with the other non-LTO objects. This does not provide optimal performance but may still improve performance by optimizing within the LTO objects. This feature enables the usage of -dlto even with outside libraries that are not built with -dlto; it just means that the library code does not benefit from device LTO.
Fine-grained per architecture device link optimization support
The global -dlto option is suitable when compiling for a single target architecture. When you compile for multiple architectures with -gencode, specify exactly which intermediates to store in the fat binary. For example, to store Volta SASS and Ampere PTX in an executable, you would currently compile with the following options:
nvcc -gencode arch=compute_70,code=sm_70 -gencode arch=compute_80,code=compute_80
With a new code target, lto_70, you can get fine-grained control to indicate which target architecture should store the LTO intermediary instead of SASS or PTX. For example, to store Volta LTO and Ampere PTX, you would compile with the following code example:
nvcc -gencode arch=compute_70,code=lto_70
-gencode arch=compute_80,code=compute_80
Performance results
What kind of performance impact can you expect with device LTO? GPUs are sensitive to memory traffic and register pressure. As a result, device optimizations generally have more impact than the corresponding host optimizations. As expected, we observed many applications benefiting from device LTO. In general, the speedup through device LTO depends on the CUDA application characteristics.
Figures 3 and 4 compare the runtime performance and build time of an internal benchmark application and another real-world customer application, both Monte Carlo applications, in three compilation modes: whole program compilation, separate compilation, and separate compilation with device LTO.
The customer application that we tested had a single main computational kernel that accounted for 80%+ of the runtime, which called into hundreds of separate device functions spread across different translation units or source files. Manual inlining of the functions is effective but is cumbersome if you’d prefer to use separate compilation to maintain your traditional development workflow and library boundaries. In these situations, using device LTO to realize potential performance benefits without additional development effort is particularly attractive.
The runtime performance, as shown in Figure 3, of both the benchmark and the customer application with device LTO was close to whole program compilation mode overcoming the limitations posed by separate compilation mode. Remember that the performance gains are largely dependent on how the application itself is crafted. As we observed, in some cases, the gains were marginal. With another CUDA application suite, device LTO resulted in an average runtime performance speed-up of around 25%.
Later in this post, we cover more about the scenarios where device LTO is not particularly beneficial.
There is another aspect to device LTO in addition to GPU performance, and that is build time. The total build time using device LTO depends largely on the application size and other system factors. In Figure 4, the relative difference in build time of the internal benchmark is compared against the customer application for the same three compilation modes as earlier. The internal benchmark comprises roughly 12 thousand lines of code, whereas the customer application has tens of thousands of lines of code.
There are situations where whole program mode compilation may be faster, due to fewer passes required to compile and optimize those programs. In addition, smaller programs in whole program mode can sometimes compile faster because there are fewer compile commands and therefore fewer invocations of the host compiler. But large programs in whole program mode can incur higher optimization cost and memory usage; in such cases, compiling in separate compilation mode can be faster. This can be observed in Figure 4, where whole program mode compilation was 17% faster for the internal benchmark but 25% slower for the customer application.
The limited range of optimizations and smaller translation units make compilation faster in separate compilation mode. Separate compilation mode also reduces overall incremental build times when incremental changes are isolated to a few source files. When device link time optimization is enabled, the compiler's optimization phase is eliminated, reducing compile time significantly and speeding up separate-compilation builds even further. But at the same time, because device code optimization is deferred to the linker, and the linker can perform more optimizations in separate compilation mode, the link time of separately compiled programs may be higher with device link time optimization. In Figure 4, the device LTO build time was only 7% slower for the benchmark, but almost 50% slower for the customer application.
In 11.2, we have also introduced the new nvcc -threads option, which enables parallel compilation when targeting multiple architectures. That can help to reduce build times. In general, the total (compile and link) build time may vary for these compilation modes depending on a diverse set of factors. Nevertheless, because the compile time is significantly reduced using device LTO, we expect that the overall build of separate compilation mode with device link time optimization enabled should be comparable in most typical scenarios.
Limitations of device LTO
Device LTO is particularly powerful when it inlines device functions across file objects. However, in the case of some applications, the device code may all reside within a source file, in which case device LTO does not make much difference.
Indirect calls from function pointers such as callbacks do not benefit much from LTO, as those indirect calls cannot be inlined.
Be aware that device LTO performs aggressive code optimization and therefore it is not compatible with the usage of the -G NVCC command-line option for enabling symbolic debug support of device code.
For CUDA 11.2, device LTO only works with offline compilation. JIT LTO is not yet supported for device LTO intermediate forms.
File-scope commands like -maxrregcount or -use_fast_math are not compatible with device LTO as LTO optimizations cross file boundaries. If all files are compiled with the same option then everything is fine, but if they differ, then device LTO complains at link time. You can override these compilation attributes for device LTO by specifying -maxrregcount or -use_fast_math at link time, and then that value is used for all the LTO objects.
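For example (a hypothetical command line, following the behavior described above), a register limit could be applied uniformly to all LTO objects at link time:
nvcc -dlto -maxrregcount=64 a.o b.o -o app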
Even though device LTO moves much of the optimization work from compile time to link time, the overall build time is usually comparable between an LTO build and a non-LTO build, because the compile time is significantly reduced. However, device LTO increases the amount of memory needed during link time. We believe that the benefits of device LTO should offset these limitations in the most common cases.
Try out device LTO
If you are looking to build GPU-accelerated applications in separate compilation mode without compromising performance or device source code modularity, device LTO is for you! With device LTO, programs compiled in separate compilation mode can leverage the performance benefits of code optimizations that cross file boundaries, helping close the performance gap relative to whole program compilation mode.
To assess and exploit the benefits of device LTO for your CUDA application, download the CUDA 11.2 Toolkit today and try it out. Also, please let us know what you think. We are always looking for ways to improve the CUDA application development and runtime performance tuning experience.
Register Cache: Caching for Warp-Centric CUDA Programs
In this post we introduce the “register cache”, an optimization technique that develops a virtual caching layer for threads in a single warp. It is a software abstraction implemented on top of the NVIDIA GPU shuffle primitive. This abstraction helps optimize kernels that use shared memory to cache thread inputs. When the kernel is transformed by applying this optimization, the data ends up being distributed across registers in the threads of each warp, and shared memory accesses are replaced with accesses to registers in other threads by using shuffle, thereby enabling significant performance benefits.
We develop the register cache abstraction and show how to use it in the context of a simple kernel that computes a 1D-stencil. We then provide a general recipe for transforming a kernel to use the register cache, evaluate its performance, and discuss its limitations. A more elaborate analysis and evaluation are presented in our paper: “Fast Multiplication in Binary Fields on GPUs via Register Cache.”
Where is the Warp-Level Cache?
GPU kernels may store data in the following three memory layers: global memory, shared memory and registers (see Figure 1). These layers effectively form a hierarchy in terms of size, performance and scope of sharing: the largest and slowest—global memory—is shared across all the kernel threads; smaller but much faster shared memory is shared across the threads of a single thread block; and the smallest and fastest—registers—are private to each thread.
You can view each layer in the memory hierarchy as a cache for the respective layer in the execution hierarchy. Specifically, shared memory serves as a cache for the threads in a thread block, while registers allow “caching” of data in a single thread.
Interestingly, as Figure 1 shows, a single warp does not have its own explicit caching layer. Thus, kernels that follow a warp-centric design do not have a specialized memory layer in hardware in which to cache the warp’s input.
Caching in Registers Using Shuffle
In the Kepler GPU architecture (2012), NVIDIA introduced the SHFL (shuffle) instruction, which enables intra-warp communication. The primitive function shfl_sync(m, r, t) enables an issuing thread to share the value stored in its register r while reading the value shared by thread t in the same warp (m is a 32-bit mask of the participating threads within the warp). Figure 2 presents the semantics of shfl_sync(). A more detailed description can be found in a previous Parallel Forall blog post.
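As a minimal illustration of the primitive (our own example, not taken from the paper), the following kernel rotates values across the lanes of a single warp: each lane publishes its value and reads the value published by the next lane.
#include <cstdio>
#define FULL_MASK 0xffffffffu
__global__ void rotate_warp(int *data)
{
    int lane = threadIdx.x % 32;
    int v = data[threadIdx.x];
    // Every lane shares v and reads the value shared by lane (lane + 1) % 32.
    int from_next = __shfl_sync(FULL_MASK, v, (lane + 1) % 32);
    data[threadIdx.x] = from_next;
}
int main()
{
    int h[32], *d;
    for (int i = 0; i < 32; ++i) h[i] = i;
    cudaMalloc(&d, sizeof(h));
    cudaMemcpy(d, h, sizeof(h), cudaMemcpyHostToDevice);
    rotate_warp<<<1, 32>>>(d);
    cudaMemcpy(h, d, sizeof(h), cudaMemcpyDeviceToHost);
    printf("%d %d ... %d\n", h[0], h[1], h[31]); // prints 1 2 ... 0
    cudaFree(d);
    return 0;
}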
There are a few good reasons to use registers for data sharing among threads in a warp.
Unfortunately, the use of shuffle is fairly complex. In particular, if a kernel has already been written to use shared memory, modifying it to use shuffle may require significant algorithmic changes.
The technique presented here aims to help optimize kernels by replacing shared memory accesses with shuffles. It targets a specific yet quite common case: when shared memory is used to cache the kernel input.
To guide the transformation we redesign the code to use a “virtual” warp-level cache we call a register cache. Each thread holds and manages a local partition of the cache in an array rc stored in its registers. The cache management is logically decoupled from the rest of the program, which simplifies the development.
There is no implementation of a real cache, of course (no replacement policy, etc.). However, we found that building a mental picture of such a cache while optimizing the code greatly simplifies the process.
We now explain the idea by developing a simple example 1D stencil kernel.
Register Cache by Example: 1D Stencil
Definition. 1D k-stencil: Given an array A of size n, the k-stencil of A is an array B of size n-2k where B[i] = (A[i]+…+A[i+2k])/(2k+1). For example, if A = [0, 1, 2, 3, 4, 5, 6, 7], the 1-stencil B of A is computed as follows.
B[0] = (A[0] + A[1] + A[2])/3 = (0 + 1 + 2)/3 = 1
B[1] = (A[1] + A[2] + A[3])/3 = (1 + 2 + 3)/3 = 2
…
B[5] = (A[5] + A[6] + A[7])/3 = (5 + 6 + 7)/3 = 6
To compute a 1D k-stencil each input element (except for margins) is read 2k+1 times. Thus, any implementation must cache the input in order to exploit data reuse. For simplicity we drop 1D from the notation and use k=1.
We start with a simple implementation of the kernel which uses shared memory, and then show how we transform it to use shuffle with the help of the register cache abstraction.
Step 1: Shared Memory Implementation
We start with the following implementation (see Listing 1)
Figure 3 illustrates the computation steps.
__global__ void one_stencil (int *A, int *B, int sizeOfA)
{
extern __shared__ int s[];
// Id of thread in the block.
int localId = threadIdx.x;
// The first index of output element computed by this block.
int startOfBlock = blockIdx.x * blockDim.x;
// The Id of the thread in the scope of the grid.
int globalId = localId + startOfBlock;
if (globalId >= sizeOfA)
return;
// Fetching into shared memory.
s[localId] = A[globalId];
if (localId < 2 && blockDim.x + globalId < sizeOfA) {
s[blockDim.x + localId] = A[blockDim.x + globalId];
}
// We must sync before reading from shared memory.
__syncthreads();
// Each thread computes a single output.
if (globalId < sizeOfA - 2)
B[globalId] = (s[localId] + s[localId + 1] + s[localId + 2]) / 3;
}
Step 2: Identify Warp Inputs
Given that i is the index of the output element computed by thread 0 in a warp, the warp calculates the output elements i, i+1, …, i+31, and depends on the 34 input elements i, …, i+33, denoted as the input array.
Step 3: Determine Input Distribution Among the Threads
We distribute the input among the registers of the threads in a warp. In our example here we use a round-robin distribution of the input array elements among the threads. In this scheme, input[i] is assigned to thread j such that j = i % 32. Thread 0 and thread 1 store two elements each, while all the other threads store only one array element. We denote the first cached element in each thread’s local partition as rc[0] and the second as rc[1]. Observe that this distribution scheme mimics the data distribution across the banks of shared memory.
Table 1 illustrates the distribution of inputs among the threads assuming 4 threads in a warp.
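For reference, the round-robin mapping can be written down directly. The small helper below is our own illustration (not part of the original code); for a full 32-thread warp it computes which lane owns a given input element and in which register-cache slot that element sits.
// Illustrative helper only: round-robin placement of input[i] across 32 lanes.
__device__ __forceinline__ void input_location(int i, int &owner_lane, int &rc_slot)
{
    owner_lane = i % 32;  // lane that caches input[i]
    rc_slot    = i / 32;  // rc[0] for the first 32 inputs, rc[1] for the next two
}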
Step 4: Communication and Computation
We split the kernel into communication and computation phase(s). In the communication phase threads effectively access the register cache. In the computation phase each thread, locally, performs some arithmetic or logical operation using the values it read from the cache.
Here comes the main technical component of the register cache recipe. To make it easier to design the communication phase, we introduce two communication primitives used by the threads in the warp:
Read(src_tid, i): read the value stored in rc[i] of thread src_tid.
Publish(val): publish the local value val so that other threads in the warp can read it.
Each communication phase is composed of one or more of these primitives. Note that for one thread to Read, another thread has to Publish the requested data stored in its local registers.
We identify three communication phases in 1-stencil: one for each input element read by each thread. Table 2 lists all Read (R) and Publish (P) operations performed by each thread, assuming 4 threads in a warp, and Figure 4 illustrates the communication phases.
In the table, R(Tj, rc[i]) denotes a Read that obtains element rc[i] of thread Tj, and P(rc[i]) denotes a Publish of the thread’s own element rc[i]. The first communication phase is local and is shown only for clarity.
Phase 0 (local): each thread Ti performs P(rc[0]) and R(Ti, rc[0]), so Ti holds ac = input[i].
Phase 1: T0: P(rc[1]), R(T1, rc[0]); T1: P(rc[0]), R(T2, rc[0]); T2: P(rc[0]), R(T3, rc[0]); T3: P(rc[0]), R(T0, rc[1]). After this phase, T0 holds ac = input[0]+input[1], T1 holds input[1]+input[2], T2 holds input[2]+input[3], and T3 holds input[3]+input[4]. For example, in this phase T0 reads input[1] from rc[0] of T1.
Phase 2: T0: P(rc[1]), R(T2, rc[0]); T1: P(rc[1]), R(T3, rc[0]); T2: P(rc[0]), R(T0, rc[1]); T3: P(rc[0]), R(T1, rc[1]). After this phase, T0 holds ac = input[0]+input[1]+input[2], T1 holds input[1]+input[2]+input[3], T2 holds input[2]+input[3]+input[4], and T3 holds input[3]+input[4]+input[5].
After the computations described in Table 2 are finished, each thread holds a value ac that stores the output it next writes to global memory.
Step 5: Replace Publish and Read Operations with shfl_sync()
CUDA doesn’t provide the Read and Publish primitives, but we can merge each Publish-Read pair into a single shuffle to implement the code on a real GPU. Say thread t_i calls Read(t_j, rc[0]) while publishing its own register r_i via Publish(r_i). Both calls can be implemented with a single shfl_sync(mask, r_i, t_j). All that remains is to efficiently compute the thread and register indexes in the shfl_sync() calls while avoiding divergence.
This step concludes the transformation, and now the implementation does not use shared memory.
There is a problem in the case where two threads call Read(t_i, v) and Read(t_i, u), where v and u are two different values stored by t_i. One of the threads will not receive the value since t_i can publish only a single value in each cache access. Two or more accesses are therefore needed to satisfy these requests. We call this case a register cache conflict.
With this translation from Publish and Read into shfl_sync(), we can implement the full 1-stencil without using shared memory (Listing 2). The full implementation is available in this repository.
__global__ void one_stencil_with_rc (int *A, int *B, int sizeOfA)
{
// Declaring local register cache.
int rc[2];
// Id of thread in the warp.
int localId = threadIdx.x % WARP_SIZE;
// The first index of output element computed by this warp.
int startOfWarp = blockIdx.x * blockDim.x + WARP_SIZE*(threadIdx.x / WARP_SIZE);
// The Id of the thread in the scope of the grid.
int globalId = localId + startOfWarp;
if (globalId >= sizeOfA)
return;
// Fetching into the register cache.
rc[0] = A[globalId];
if (localId < 2 && WARP_SIZE + globalId < sizeOfA)
{
rc[1] = A[WARP_SIZE + globalId];
}
// Each thread computes a single output.
int ac = 0;
int toShare = rc[0];
for (int i = 0 ; i < 3 ; ++i)
{
// Threads decide what value will be published in the following access.
if (localId < i)
toShare = rc[1];
// Accessing register cache.
unsigned mask = __activemask();
ac += __shfl_sync(mask, toShare, (localId + i) % WARP_SIZE);
}
if (globalId < sizeOfA - 2)
B[globalId] = ac/3;
}
Evaluation
Figure 5 shows the speedup of the register cache over shared memory implementation of a k-stencil for increasing values of k. We used a GTX-1080 GPU and CUDA 9 to run the experiments.
For small values of k the data reuse is small or negligible, hence the speedup is small, but as k grows, the reuse achieved by register cache increases and the speedup as well.
Thread coarsening
The speedup reaches a plateau starting at k=12, since the register cache also performs more global memory accesses due to the overlapping edges of the input, which are read twice by two consecutive warps.
One common technique to reduce these global memory accesses is thread coarsening. This technique increases the number of outputs produced by each thread, and thus enables some of the input data to be reused across iterations by storing it in registers.
In the case of the register cache, thread coarsening becomes critical to achieve the desired performance improvements. The reason lies in the small number of threads sharing the cache. Since the register cache is limited to the threads of a single warp, only the inputs necessary for the warp threads are prefetched and cached. However, the input reuse might occur across consecutive warps. For the 1-stencil kernel we develop here, the input to the first thread in warp i is the same as the input to the last thread in warp i-1. Hence, this value is read twice from the global memory. For 32 warps per thread block, a register cache implementation performs 34 * 32 = 1088 global memory accesses, which is 6% more than the number of global memory accesses in a standard implementation using shared memory. Note that as k grows, the number of redundant global memory accesses becomes high. For example, almost half of the accesses are redundant for k = 16.
Thread coarsening helps reduce the effect of redundant global memory accesses. Figure 6 shows the speedup over a shared memory implementation achieved by computing a varying number of outputs per thread (1 to 8), for different values of k. Thanks to thread coarsening, the register cache version achieves a speedup of up to 1.8x. For larger values of k the number of registers required for the cache is too large to fit in the physical registers, and the compiler spills them to local memory (which resides in device memory but is cached in L1/L2), so the performance drops.
When Should You Use Register Caching?
There are cases where the register cache is not applicable. First, the access pattern should be known at compile time. Second, the efficiency of the register cache is predicated on the availability of spare registers. Otherwise, registers start spilling to global memory, leading to a dramatic performance drop, as is the case for k=25 in Figure 6.
CUDA 9 and Cooperative Groups
CUDA 9, introduced by NVIDIA at GTC 2017, includes Cooperative Groups, a new programming model for organizing groups of communicating and cooperating parallel threads. With CUDA 9, programmers should no longer rely on implicit synchronization of threads within a warp; Cooperative Groups makes it easier to explicitly synchronize groups of threads, especially at the warp level. Additionally, existing primitives such as __shfl(), which assumed implicit full-warp synchronization, have been deprecated. The new synchronizing versions allow the programmer to explicitly synchronize a subset of the threads in a warp.
In more detail, starting with CUDA 9, the __shfl() function (along with related variants, such as __shfl_down()) is deprecated, and you should use the __shfl_sync() function instead, as we have done in all code in this post.
For example in our code, we replaced the call
__shfl(v, i)
with
unsigned mask = __activemask();
__shfl_sync(mask, v, i);
Here __activemask() returns a mask of all currently active (in other words, not blocked by execution flow) threads. In cases where the required mask is known (the common case is all threads in the warp), that can be specified explicitly instead of using __activemask().
You can also use Cooperative Groups to create a statically sized tiled partition of the thread block group. Statically sized groups support the shfl() method (which calls shfl_sync() internally, passing a mask that includes all threads in the group). Here’s how to create a warp-sized group that supports shfl().
#include <cooperative_groups.h>
namespace cg = cooperative_groups;
...
auto tile = cg::tiled_partition<32>(cg::this_thread_block());
tile.shfl(v, i);
In pre-Volta GPUs each warp maintained a single program counter (PC), pointing to the next instruction executed by the warp as well as a mask of all the currently active threads in the warp. Independent thread scheduling in Volta GPUs maintains a PC for every thread, enabling separate and independent execution flows of threads in a single warp, which gives more freedom to the GPU scheduler.
Changing all __shfl() calls in our code to __shfl_sync() did not affect the execution of our code on Pascal GPUs (NVIDIA GeForce GTX 1080), and this change will make the code safe to execute on Volta GPUs and beyond.
Additional Details
For additional details regarding the implementation and internals of the register cache, and its use for the computation of finite field multiplication, we refer readers to the paper by Hamilis, Ben-Sasson, Tromer, and Silberstein. The source code for the examples in this post is available on GitHub.
CUDA Pro Tip: Increase Performance with Vectorized Memory Access
Many CUDA kernels are bandwidth bound, and the increasing ratio of flops to bandwidth in new hardware results in more bandwidth bound kernels. This makes it very important to take steps to mitigate bandwidth bottlenecks in your code. In this post, I will show you how to use vector loads and stores in CUDA C/C++ to help increase bandwidth utilization while decreasing the number of executed instructions.
Let’s begin by looking at the following simple memory copy kernel.
__global__ void device_copy_scalar_kernel(int* d_in, int* d_out, int N) {
int idx = blockIdx.x * blockDim.x + threadIdx.x;
for (int i = idx; i < N; i += blockDim.x * gridDim.x) {
d_out[i] = d_in[i];
}
}
void device_copy_scalar(int* d_in, int* d_out, int N)
{
int threads = 128;
int blocks = min((N + threads-1) / threads, MAX_BLOCKS);
device_copy_scalar_kernel<<<blocks, threads>>>(d_in, d_out, N);
}
In this code, I am using grid-stride loops, described in an earlier CUDA Pro Tip post. Figure 1 shows the throughput of the kernel in GB/s as a function of copy size.
We can inspect the assembly for this kernel using the cuobjdump tool included with the CUDA Toolkit.
%> cuobjdump -sass executable
The SASS for the body of the scalar copy kernel is the following:
/*0058*/ IMAD R6.CC, R0, R9, c[0x0][0x140]
/*0060*/ IMAD.HI.X R7, R0, R9, c[0x0][0x144]
/*0068*/ IMAD R4.CC, R0, R9, c[0x0][0x148]
/*0070*/ LD.E R2, [R6]
/*0078*/ IMAD.HI.X R5, R0, R9, c[0x0][0x14c]
/*0090*/ ST.E [R4], R2
Here we can see a total of six instructions associated with the copy operation. The four IMAD instructions compute the load and store addresses and the LD.E and ST.E load and store 32 bits from those addresses.
We can improve performance of this operation by using the vectorized load and store instructions LD.E.{64,128} and ST.E.{64,128}. These operations also load and store data but do so in 64- or 128-bit widths. Using vectorized loads reduces the total number of instructions, reduces latency, and improves bandwidth utilization.
The easiest way to use vectorized loads is to use the vector data types defined in the CUDA C/C++ standard headers, such as int2, int4, or float2. You can easily use these types via type casting in C/C++. For example, in C++ you can recast the int pointer d_in to an int2 pointer using reinterpret_cast<int2*>(d_in). In C99 you can do the same thing using the cast operator: ((int2*)d_in).
Dereferencing those pointers will cause the compiler to generate the vectorized instructions. However, there is one important caveat: these instructions require aligned data. Device-allocated memory is automatically aligned to a multiple of the size of the data type, but if you offset the pointer the offset must also be aligned. For example reinterpret_cast<int2*>(d_in+1) is invalid because d_in+1 is not aligned to a multiple of sizeof(int2).
You can safely offset arrays if you use an “aligned” offset, as in reinterpret_cast<int2*>(d_in+2). You can also generate vectorized loads using structures as long as the structure is a power of two bytes in size.
struct Foo {int a; int b; double c;}; // 16 bytes in size
Foo *x, *y;
…
x[i]=y[i];
Now that we have seen how to generate vectorized instructions let’s modify the memory copy kernel to use vector loads.
__global__ void device_copy_vector2_kernel(int* d_in, int* d_out, int N) {
int idx = blockIdx.x * blockDim.x + threadIdx.x;
for (int i = idx; i < N/2; i += blockDim.x * gridDim.x) {
reinterpret_cast<int2*>(d_out)[i] = reinterpret_cast<int2*>(d_in)[i];
}
// in only one thread, process final element (if there is one)
if (idx==N/2 && N%2==1)
d_out[N-1] = d_in[N-1];
}
void device_copy_vector2(int* d_in, int* d_out, int N) {
int threads = 128;
int blocks = min((N/2 + threads-1) / threads, MAX_BLOCKS);
device_copy_vector2_kernel<<<blocks, threads>>>(d_in, d_out, N);
}
This kernel has only a few changes. First, the loop now executes only N/2 times because each iteration processes two elements. Second, we use the casting technique described above in the copy. Third, we handle any remaining elements which may arise if N is not divisible by 2. Finally, we launch half as many threads as we did in the scalar kernel.
Inspecting the SASS we see the following.
/*0088*/ IMAD R10.CC, R3, R5, c[0x0][0x140]
/*0090*/ IMAD.HI.X R11, R3, R5, c[0x0][0x144]
/*0098*/ IMAD R8.CC, R3, R5, c[0x0][0x148]
/*00a0*/ LD.E.64 R6, [R10]
/*00a8*/ IMAD.HI.X R9, R3, R5, c[0x0][0x14c]
/*00c8*/ ST.E.64 [R8], R6
Notice that now the compiler generates LD.E.64 and ST.E.64. All the other instructions are the same. However, it is important to note that there will be half as many instructions executed because the loop only executes N/2 times. This 2x improvement in instruction count is very important in instruction-bound or latency-bound kernels.
We can also write a vector4 version of the copy kernel.
__global__ void device_copy_vector4_kernel(int* d_in, int* d_out, int N) {
int idx = blockIdx.x * blockDim.x + threadIdx.x;
for(int i = idx; i < N/4; i += blockDim.x * gridDim.x) {
reinterpret_cast<int4*>(d_out)[i] = reinterpret_cast<int4*>(d_in)[i];
}
// in only one thread, process final elements (if there are any)
int remainder = N%4;
if (idx==N/4 && remainder!=0) {
while(remainder) {
int idx = N - remainder--;
d_out[idx] = d_in[idx];
}
}
}
void device_copy_vector4(int* d_in, int* d_out, int N) {
int threads = 128;
int blocks = min((N/4 + threads-1) / threads, MAX_BLOCKS);
device_copy_vector4_kernel<<<blocks, threads>>>(d_in, d_out, N);
}
The corresponding SASS is the following:
/*0090*/ IMAD R10.CC, R3, R13, c[0x0][0x140]
/*0098*/ IMAD.HI.X R11, R3, R13, c[0x0][0x144]
/*00a0*/ IMAD R8.CC, R3, R13, c[0x0][0x148]
/*00a8*/ LD.E.128 R4, [R10]
/*00b0*/ IMAD.HI.X R9, R3, R13, c[0x0][0x14c]
/*00d0*/ ST.E.128 [R8], R4
Here we can see the generated LD.E.128 and ST.E.128. This version of the code has reduced the instruction count by a factor of 4. You can see the overall performance for all 3 kernels in Figure 2.
In almost all cases vectorized loads are preferable to scalar loads. Note however that using vectorized loads increases register pressure and reduces overall parallelism. So if you have a kernel that is already register limited or has very low parallelism, you may want to stick to scalar loads. Also, as discussed earlier, if your pointer is not aligned or your data type size in bytes is not a power of two you cannot use vectorized loads.
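If the alignment of a pointer is not known statically, you can check it at run time and fall back to the scalar path. The helper below is our own sketch (not part of the original code) showing one way to do that.
#include <cstdint>
// Returns true when ptr can be reinterpreted as int4* (16-byte alignment).
inline bool aligned_for_int4(const int* ptr)
{
    return reinterpret_cast<std::uintptr_t>(ptr) % sizeof(int4) == 0;
}
// Usage sketch: pick the widest copy kernel the alignment allows.
// if (aligned_for_int4(d_in) && aligned_for_int4(d_out))
//     device_copy_vector4(d_in, d_out, N);
// else
//     device_copy_scalar(d_in, d_out, N);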
Vectorized loads are a fundamental CUDA optimization that you should use when possible, because they increase bandwidth, reduce instruction count, and reduce latency. In this post, I’ve shown how you can easily incorporate vectorized loads into existing kernels with relatively few changes.
How to Optimize Data Transfers in CUDA Fortran
In the previous three posts of this CUDA Fortran series we laid the groundwork for the major thrust of the series: how to optimize CUDA Fortran code. In this and the following post we begin our discussion of code optimization with how to efficiently transfer data between the host and device. The peak bandwidth between the device memory and the GPU is much higher (144 GB/s on the NVIDIA Tesla C2050, for example) than the peak bandwidth between host memory and device memory (8 GB/s on PCIe x16 Gen2). This disparity means that your implementation of data transfers between the host and GPU devices can make or break your overall application performance. Let’s start with a few general guidelines for host-device data transfers: minimize the amount of data transferred between host and device when possible; use pinned (page-locked) host memory for higher transfer bandwidth; batch many small transfers into one larger transfer to avoid per-transfer overhead; and overlap data transfers with computation when you can.
We investigate the first three guidelines in this post, and we dedicate the next post to overlapping data transfers. First I want to talk about how to measure time spent in data transfers without modifying the source code.
Measuring Data Transfer Times with the Command-line Profiler
To measure the time spent in each data transfer, we could record a CUDA event before and after each transfer and use cudaEventElapsedTime(), as we described in a previous post. However, we can get the elapsed transfer time without instrumenting the source code with CUDA events by using the command-line CUDA profiler.
Enable the command-line profiler by setting the environment variable COMPUTE_PROFILE to 1 (here we set it using the Unix Bash shell).
% export COMPUTE_PROFILE=1
With the profiler enabled, when we execute any CUDA code (CUDA Fortran, CUDA C, or any other code that runs on the CUDA platform), the CUDA runtime records profiler output to a file (the default file is cuda_profile_0.log in the local directory, but you can configure this). Let’s look at the following code example.
program profile
use cudafor
implicit none
integer, parameter :: N=1024
real :: a(N,N)
real, device :: a_d(N,N)
a = 0
a_d = a
a = a_d
end program
When we execute this code the file cuda_profile_0.log is created in the current directory containing the following text.
# CUDA_PROFILE_LOG_VERSION 2.0
# CUDA_DEVICE 0 GeForce 8600M GT
# CUDA_CONTEXT 1
method,gputime,cputime,occupancy
method=[ memcpyHtoD ] gputime=[ 3720.288 ] cputime=[ 5576.532 ]
method=[ memcpyDtoH ] gputime=[ 2919.072 ] cputime=[ 3686.712 ]
The first three lines in the output are the header information. The fourth line lists the values that appear in the lines below it for each executed method. By default these are the method name being measured, the execution time in microseconds as recorded on the GPU, the time in microseconds as recorded by the CPU, and the occupancy, which is only reported for kernel execution (we will cover this in a later post).
Take care when interpreting the value reported by cputime. For non-blocking methods, such as kernels, the value reported by cputime is only the CPU overhead to launch the method, in which case the wall clock time is cputime + gputime. For blocking methods, such as these data transfers, cputime includes gputime and CPU overhead, so it is equivalent to wall clock time. In addition to launch overhead, the timing of the first called method also includes overhead associated with device initialization.
An alternative to the command-line profiler is the nvprof command-line application contained in the CUDA 5 Toolkit distribution. The command-line profiler and nvprof are mutually exclusive, so COMPUTE_PROFILE must be set to 0 when using nvprof. Aside from that caveat, using nvprof is as simple as running it with your CUDA app command as an argument, as shown in the output below. nvprof is quite flexible, so make sure you check out the documentation.
$ nvprof ./a.out
======== NVPROF is profiling a.out...
======== Command: a.out
======== Profiling result:
Time(%) Time Calls Avg Min Max Name
52.78 3.46ms 1 3.46ms 3.46ms 3.46ms [CUDA memcpy HtoD]
47.22 3.09ms 1 3.09ms 3.09ms 3.09ms [CUDA memcpy DtoH]
Minimizing Data Transfers
We should not use only the GPU execution time of a kernel relative to the execution time of its CPU implementation to decide whether to run the GPU or CPU version. We also need to consider the cost of moving data across the PCI-e bus, especially when we are initially porting code to CUDA. Because CUDA’s heterogeneous programming model uses both the CPU and GPU, code can be ported to CUDA one subroutine at a time. In the initial stages of porting, data transfers may dominate the overall execution time. It’s worthwhile to keep tabs on time spent on data transfers separately from time spent in kernel execution. It’s easy to use the command-line profiler for this, as we already demonstrated. As we port more of our code, we’ll remove intermediate transfers and decrease the overall execution time correspondingly.
Pinned Host Memory
Host (CPU) data allocations are pageable by default. The GPU cannot access data directly from pageable host memory, so when a data transfer from pageable host memory to device memory is invoked, the CUDA driver must first allocate a temporary page-locked, or “pinned”, host array, copy the host data to the pinned array, and then transfer the data from the pinned array to device memory, as illustrated below.
As you can see in the figure, pinned memory is used as a staging area for transfers from the device to the host. We can avoid the cost of the transfer between pageable and pinned host arrays by directly allocating our host arrays in pinned memory. In CUDA Fortran, denote pinned memory using the pinned variable attribute. Pinned memory declarations must also be allocatable. It is possible for the allocate statement to fail to allocate pinned memory, in which case it will attempt a pageable memory allocation. The following code excerpt demonstrates the declaration and allocation of pinned memory with error checking.
real, allocatable, pinned :: array(:)
logical :: pinnedFlag
integer :: istat
allocate(array(N), STAT=istat, PINNED=pinnedFlag)
if (istat /= 0) then
write(*,*) 'Allocation of array failed'
call handleAllocationFailure(istat)
else
if (.not. pinnedFlag) write(*,*) &
'Pinned allocation of array failed - using pageable memory'
end if
This example performs pinned memory allocation with the optional keyword arguments for STAT and PINNED, and then checks to see if the allocation succeeded, and if so whether the resulting allocation is pinned. Data transfers using host pinned memory use the same syntax as transfers with pageable memory. We can use the following code to compare pageable and pinned transfer rates.
program BandwidthTest
use cudafor
implicit none
integer, parameter :: nElements = 4*1024*1024
! host arrays
real :: a_pageable(nElements), b_pageable(nElements)
real, allocatable, pinned :: a_pinned(:), b_pinned(:)
! device arrays
real, device :: a_d(nElements)
! events for timing
type (cudaEvent) :: startEvent, stopEvent
! misc
type (cudaDeviceProp) :: prop
real :: time
integer :: istat, i
logical :: pinnedFlag
! allocate and initialize
do i = 1, nElements
a_pageable(i) = i
end do
b_pageable = 0.0
allocate(a_pinned(nElements), b_pinned(nElements), &
STAT=istat, PINNED=pinnedFlag)
if (istat /= 0) then
write(*,*) 'Allocation of a_pinned/b_pinned failed'
pinnedFlag = .false.
else
if (.not. pinnedFlag) write(*,*) 'Pinned allocation failed'
end if
if (pinnedFlag) then
a_pinned = a_pageable
b_pinned = 0.0
endif
istat = cudaEventCreate(startEvent)
istat = cudaEventCreate(stopEvent)
! output device info and transfer size
istat = cudaGetDeviceProperties(prop, 0)
write(*,*)
write(*,*) 'Device: ', trim(prop%name)
write(*,*) 'Transfer size (MB): ', 4*nElements/1024./1024.
! pageable data transfers
write(*,*)
write(*,*) 'Pageable transfers'
istat = cudaEventRecord(startEvent, 0)
a_d = a_pageable
istat = cudaEventRecord(stopEvent, 0)
istat = cudaEventSynchronize(stopEvent)
istat = cudaEventElapsedTime(time, startEvent, stopEvent)
write(*,*) ' Host to Device bandwidth (GB/s): ', &
nElements*4*1e-6/time
istat = cudaEventRecord(startEvent, 0)
b_pageable = a_d
istat = cudaEventRecord(stopEvent, 0)
istat = cudaEventSynchronize(stopEvent)
istat = cudaEventElapsedTime(time, startEvent, stopEvent)
write(*,*) ' Device to Host bandwidth (GB/s): ', &
nElements*4*1e-6/time
if (any(a_pageable /= b_pageable)) &
write(*,*) '*** Pageable transfers failed ***'
! pinned data transfers
if (pinnedFlag) then
write(*,*)
write(*,*) 'Pinned transfers'
istat = cudaEventRecord(startEvent, 0)
a_d = a_pinned
istat = cudaEventRecord(stopEvent, 0)
istat = cudaEventSynchronize(stopEvent)
istat = cudaEventElapsedTime(time, startEvent, stopEvent)
write(*,*) ' Host to Device bandwidth (GB/s): ', &
nElements*4*1e-6/time
istat = cudaEventRecord(startEvent, 0)
b_pinned = a_d
istat = cudaEventRecord(stopEvent, 0)
istat = cudaEventSynchronize(stopEvent)
istat = cudaEventElapsedTime(time, startEvent, stopEvent)
write(*,*) ' Device to Host bandwidth (GB/s): ', &
nElements*4*1e-6/time
if (any(a_pinned /= b_pinned)) &
write(*,*) '*** Pinned transfers failed ***'
end if
write(*,*)
! cleanup
if (allocated(a_pinned)) deallocate(a_pinned)
if (allocated(b_pinned)) deallocate(b_pinned)
istat = cudaEventDestroy(startEvent)
istat = cudaEventDestroy(stopEvent)
end program BandwidthTest
The data transfer rate can depend on the type of host system (motherboard, CPU, and chipset) as well as the GPU. On a Harpertown CPU system with an NVIDIA Tesla C2050 GPU, running BandwidthTest produces the following results. As you can see, pinned transfers are much faster.
Device: Tesla C2050
Transfer size (MB): 16.00000
Pageable transfers
Host to Device bandwidth (GB/s): 1.585274
Device to Host bandwidth (GB/s): 1.661195
Pinned transfers
Host to Device bandwidth (GB/s): 5.693893
Device to Host bandwidth (GB/s): 6.370604
On a Nehalem CPU system with a Tesla M2050 GPU (equivalent to a C2050), we get better pageable transfer performance, as the following output shows. This is presumably because the faster Nehalem CPU reduces the host-side memory copy cost.
Device: Tesla M2050
Transfer size (MB): 16.00000
Pageable transfers
Host to Device bandwidth (GB/s): 3.428861
Device to Host bandwidth (GB/s): 3.723064
Pinned transfers
Host to Device bandwidth (GB/s): 5.965163
Device to Host bandwidth (GB/s): 6.314567
You should not over-allocate pinned memory. Doing so can reduce overall system performance because it reduces the amount of physical memory available to the operating system and other programs. How much is too much is difficult to tell in advance, so as with all optimizations, test your applications and the systems they run on for optimal performance parameters.
Batching Small Transfers
Due to the overhead associated with each transfer, it is preferable to batch many small transfers together into a single transfer. This is easy to do by using a temporary array, preferably pinned, and packing it with the data to be transferred.
When transferring data via assignment statements, multiple actual transfers may result from a single assignment statement. The chance of this happening has been greatly reduced with recent compiler versions, but it may still occur. (You can see the number of actual transfers that result from an assignment statement using the command-line profiler.) To make sure only a single transfer is performed, use the cudaMemcpy() function, which has the following syntax.
istat = cudaMemcpy(destination, source, nElements)
The arguments of cudaMemcpy() are the destination array, source array, and the number of elements to transfer. Because CUDA Fortran is strongly typed, there is no need to specify transfer direction as in CUDA C/C++. The compiler is able to detect where the data in the first two arguments reside based on whether they were declared with the device attribute, and generates the appropriate data transfer calls.
You can also use assignment notation for sub-array transfers.
a_d(2:5, 3:8) = a(2:5, 3:8)
Alternatively, you can use cudaMemcpy2D(). The following code shows how to perform the same copy as in the assignment notation above, assuming the arrays are of dimension (n,n).
istat = cudaMemcpy2D(a_d(2,3), n, a(2,3), n, 5-2+1, 8-3+1)
The arguments here are the first destination element and the pitch of the destination array, the first source element and pitch of the source array, and the width and height of the submatrix to transfer. There is also a cudaMemcpy3D() function for transfers of rank three array sections.
Summary
Transfers between the host and device are the slowest link of data movement involved in GPU computing, so you should take care to minimize transfers. Following the guidelines in this post can help you make sure necessary transfers are efficient. When you are porting or writing new CUDA Fortran code, I recommend that you start by using transfers via assignment statements on pageable memory. As I mentioned earlier, as you write more device code you will eliminate some of the intermediate transfers, so any effort you spend optimizing transfers early in porting may be wasted. Also, rather than instrument code with CUDA events or other timers to measure time spent for each transfer, I recommend that you use the command-line profiler or nvprof.
This post focused on making data transfers efficient. In the next post, we discuss how you can overlap data transfers with computation and with other data transfers.
Improving GPU Memory Oversubscription Performance
Since its introduction more than 7 years ago, the CUDA Unified Memory programming model has kept gaining popularity among developers. Unified Memory provides a simple interface for prototyping GPU applications without manually migrating memory between host and device.
Starting from the NVIDIA Pascal GPU architecture, Unified Memory enabled applications to use all available CPU and GPU memory in the system, enabling easier scaling to larger problem sizes. For more information about getting started with GPU computing using Unified Memory, see An Even Easier Introduction to CUDA.
Do you want to run your application seamlessly with large datasets and also keep memory management simple? Unified Memory can be used to make virtual memory allocations larger than the available GPU memory. In the event of oversubscription, the GPU automatically starts to evict memory pages to system memory to make room for actively used virtual memory addresses.
However, application performance greatly depends on the memory access pattern, data residency, and the system you’re running on. Over the past few years, we’ve published a few posts on using Unified Memory for GPU memory oversubscription. We’ve helped you achieve higher performance for your applications through various programming techniques, such as prefetching and memory usage hints.
In this post, we dive into the performance characteristics of a micro-benchmark that stresses different memory access patterns for the oversubscription scenario. It helps you break down and understand all the performance aspects of Unified Memory: When it’s a good fit, when it’s not, and what you can do about it. As you will see from our results, the performance can vary up to 100x depending on the platform, oversubscription factor, and memory hints. We hope that this post makes it clearer when and how to use Unified Memory in your applications!
Benchmark setup and access patterns
To evaluate Unified Memory oversubscription performance, you use a simple program that allocates and reads memory. A large chunk of contiguous memory is allocated using cudaMallocManaged, which is then accessed on GPU and effective kernel memory bandwidth is measured. Different Unified Memory performance hints such as cudaMemPrefetchAsync and cudaMemAdvise modify allocated Unified Memory. We discuss their impact on performance later in this post.
We define a parameter called “oversubscription factor,” which controls the fraction of the available GPU memory allocated for the test.
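As a concrete sketch of how such a test might size its allocation (the variable names here are ours, not the benchmark's), the allocation can be derived from the GPU memory reported by cudaMemGetInfo:
size_t free_bytes, total_bytes;
cudaMemGetInfo(&free_bytes, &total_bytes);
float oversubscription_factor = 1.5f;   // 1.0 roughly fills available GPU memory
size_t allocation_size = (size_t)(oversubscription_factor * free_bytes);
float *uvm_alloc_ptr;
cudaMallocManaged(&uvm_alloc_ptr, allocation_size);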
We tested three memory access kernels in our micro-benchmarks: grid-stride, block-stride, and random-per-warp. Grid-stride and block-stride are the most common sequential access patterns in many CUDA applications. However, unstructured or random access is also widely popular in emerging CUDA workloads like graph applications, hash tables, and embeddings in recommendation systems. We decided to test all three.
Grid stride
In each loop iteration, each thread block accesses elements in a neighboring memory region and then advances by a grid stride (blockDim.x * gridDim.x).
template<typename data_type>
__global__ void read_thread(data_type *ptr, const size_t size)
{
size_t n = size / sizeof(data_type);
data_type accum = 0;
for(size_t tid = threadIdx.x + blockIdx.x * blockDim.x; tid < n; tid += blockDim.x * gridDim.x)
accum += ptr[tid];
if (threadIdx.x == 0)
ptr[0] = accum;
}
Block stride
Each thread block accesses a large chunk of contiguous memory, which is determined based on total allocated memory size. At any given time, resident blocks on an SM can be accessing different pages of memory due to the large memory domains assigned to each of the blocks.
template<typename data_type>
__global__ void read_thread_blockCont(data_type *ptr, const size_t size)
{
size_t n = size / sizeof(data_type);
data_type accum = 0;
size_t elements_per_block = ((n + (gridDim.x - 1)) / gridDim.x) + 1;
size_t startIdx = elements_per_block * blockIdx.x;
for (size_t rid = threadIdx.x; rid < elements_per_block; rid += blockDim.x) {
if ((rid + startIdx) < n)
accum += ptr[rid + startIdx];
}
if (threadIdx.x == 0)
ptr[0] = accum;
}
Random warp
In this access pattern, for each loop iteration of the warp, a random page is selected and then a contiguous 128B (32 elements of 4B) region is accessed. This results in a random page being accessed by each warp of the thread block, across all thread blocks. The loop count of the warp is determined by total number of warps and total memory allocated.
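The exact benchmark kernel is in the repository linked at the end of this post; the version below is only a simplified sketch of the access pattern, where the assumed page granularity and the hash used to pick a page are our own choices.
#define PAGE_BYTES (64 * 1024)   // assumed page granularity for this sketch
template<typename data_type>
__global__ void read_thread_randomWarp(data_type *ptr, const size_t size)
{
    size_t n = size / sizeof(data_type);
    size_t elems_per_page = PAGE_BYTES / sizeof(data_type);
    size_t num_pages = n / elems_per_page;
    int lane = threadIdx.x % 32;
    size_t warp_id = (threadIdx.x + blockIdx.x * blockDim.x) / 32;
    size_t total_warps = (blockDim.x * gridDim.x) / 32;
    size_t iters = num_pages / total_warps;
    data_type accum = 0;
    for (size_t i = 0; i < iters; i++) {
        // Pick a pseudo-random page per warp iteration (simple multiplicative hash).
        size_t page = (warp_id * 2654435761u + i * 40503u) % num_pages;
        // Each warp reads a contiguous 128B region: 32 consecutive 4B elements.
        accum += ptr[page * elems_per_page + lane];
    }
    if (threadIdx.x == 0)
        ptr[0] = accum;   // prevent the reads from being optimized away
}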
The kernel is launched with thread block and grid parameters that achieve 100% occupancy. All the blocks of the kernel are always resident on the GPU.
Hardware setup
We used a single GPU of the following three different hardware setups for the benchmarks in this post.
We’ve investigated different memory residency techniques to improve oversubscription performance for these access patterns. Fundamentally, we have tried to remove Unified Memory page faults and find the optimal data-partition strategy to get best read bandwidth for the benchmark. In this post, we discuss the following memory modes:
In the following sections, we dive into performance analysis and an explanation of all the optimizations. We also discuss what workloads work well with Unified Memory for oversubscription.
Baseline implementation: On-demand migration
In this test case, the memory allocation is performed using cudaMallocManaged and then pages are populated on system (CPU) memory in the following way:
cudaMallocManaged(&uvm_alloc_ptr, allocation_size);
// all the pages are initialized on CPU
for (int i = 0; i < num_elements; i++)
uvm_alloc_ptr[i] = 0.0f;
Then, a GPU kernel is executed and the performance of the kernel is measured:
read_thread<float><<<grid, block, 0, task_stream>>>((float*)uvm_alloc_ptr, allocation_size);
We used one of the three access patterns described in the previous section. This is the easiest way to use Unified Memory for oversubscription, because no hints are required by the programmer.
Upon kernel invocation, the GPU tries to access the virtual memory addresses that are resident on the host. This triggers a page-fault event that results in memory page migration to GPU memory over the CPU-GPU interconnect. The kernel performance is affected by the pattern of generated page faults and the speed of the CPU-GPU interconnect.
The page fault pattern is dynamic, as it depends on the scheduling of blocks and warps on streaming multiprocessors. This is followed by the memory load instruction issue from the GPU threads.
Figure 5 shows how a page fault is serviced on an empty GPU and on an oversubscribed GPU. At oversubscription, a memory page is first evicted from GPU memory to system memory, followed by the transfer of the requested memory from CPU to GPU.
Figure 6 shows the memory bandwidth achieved by the different access patterns on V100, A100, and V100 with Power9 CPU.
Sequential access analysis
The difference in page fault driven memory read bandwidth between access pattern and different platforms can be explained by following factors:
Tip: During the experiments for this post, we discovered that the streaming grid and block stride kernel access patterns are not sensitive to thread block size and intra-block synchronization. However, to achieve better performance using the other optimization methods discussed, we used 128 threads in a block with intra-block synchronization at each loop unroll. This ensured that all the warps of the block used the SM’s address translation units efficiently. To look at kernel design for intra-block synchronization, see the source code released with this post. Try out the variant with and without synchronization with different block sizes.
Random access analysis
The random warp access pattern yields only a few hundred KB/s of read bandwidth in the oversubscription domain on the x86 platform, due to the many page faults and the resulting memory migration from CPU to GPU. Because accesses are random, only a small fraction of the migrated memory is used, and the migrated memory may end up evicted back to the CPU to make space for other memory fragments.
However, access counters are enabled on Power9 systems, which leads to CPU-mapped memory access from the GPU, and not all accessed memory fragments are immediately migrated to the GPU. This results in consistent memory read bandwidth with less memory thrashing than on x86 systems.
Optimization 1: Direct access to system memory (zero-copy)
As an alternative to moving memory pages from system memory to GPU memory over the interconnect, you can also directly access the pinned system memory from the GPU. This memory allocation methodology is also known as zero-copy memory.
The pinned system memory can be allocated using CUDA API call cudaMallocHost or from the Unified Memory interface by setting the preferred location of a virtual address range to the CPU.
cudaMemAdvise(uvm_alloc_ptr, allocation_size, cudaMemAdviseSetPreferredLocation, cudaCpuDeviceId);
cudaMemAdvise(uvm_alloc_ptr, allocation_size, cudaMemAdviseSetAccessedBy, current_gpu_device);
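The same zero-copy behavior can also be obtained with an explicit pinned allocation; a minimal equivalent (our own sketch, assuming a 64-bit system with unified virtual addressing and reusing the grid, block, and stream variables from the baseline code) looks like this:
// Alternative: allocate pinned (page-locked) system memory directly.
// The GPU dereferences this pointer over the CPU-GPU interconnect.
float *zero_copy_ptr;
cudaMallocHost(&zero_copy_ptr, allocation_size);
read_thread<float><<<grid, block, 0, task_stream>>>(zero_copy_ptr, allocation_size);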
Figure 9 shows the memory bandwidth achieved by the read kernels. On the x86 platform, an A100 GPU can achieve higher bandwidth compared to a V100 because of the faster PCIe Gen4 interconnect between CPU and GPU on DGX A100. Similarly, the Power9 system achieves peak bandwidth close to interconnect bandwidth with the grid stride access pattern. The grid stride bandwidth pattern on an A100 GPU degrades with oversubscription due to the GPU MMU address translation misses that add to latency for load instructions.
Random warp access yields a constant bandwidth of 3-4 GB/s across the oversubscription domain for all the systems tested. This is much better than the fault-driven scenario covered earlier.
Tip: The performance of the block stride pattern can be improved to the same level as grid stride by making the per-warp memory access 128-byte aligned. 128-byte aligned access ensures that the CPU-GPU link and system DRAM are used efficiently. The grid stride access pattern has this characteristic implicitly and performs optimal memory operations.
Takeaway
It is clear from the data that the zero-copy approach achieves higher bandwidth than the baseline. Pinned system memory is advantageous when you want to avoid the overhead of memory unmap and map from CPU and GPU. If an application is going to use the allocated data just one time, then directly accessing using zero-copy memory is better. However, if there is reuse of data in the application, then faulting and migrating data to GPU can yield a higher aggregate bandwidth, depending on the access pattern and reuse.
Optimization 2: Direct memory access with data partitioning between CPU-GPU
For the fault-driven migration explained earlier, there is an additional overhead of the GPU MMU system stalling until the required memory range is available on GPU. To overcome this overhead, you can distribute memory between CPU and GPU, with memory mappings from GPU to CPU to facilitate fault-free memory access.
There are a couple of methods to distribute memory between CPU and GPU:
We’ve found that both methods perform similarly for many access-pattern and architecture combinations, with a few exceptions. In this section, we primarily discuss the manual page distribution. You can look up the code for both in the unified-memory-oversubscription GitHub repo.
In hybrid memory distribution, a few memory pages can be pinned to the CPU and memory mapped explicitly using the cudaMemAdvise API call with the setAccessedBy hint set to the GPU device. In our test case, we map the excess memory pages to the CPU in a round-robin manner, where the fraction of pages mapped to the CPU is determined by how much the GPU is oversubscribed. For example, at an oversubscription factor of 1.5, every third page is mapped to the CPU; at an oversubscription factor of 2.0, every other page is mapped to the CPU.
In our experiments, a memory page is set to be 2 MB, which is the largest page size at which GPU MMU can operate.
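A sketch of this manual round-robin distribution (our own illustration of the every-Nth-page mapping described above, reusing the pointer, device, and stream variables from the earlier snippets) might look as follows:
const size_t page_size = 2 * 1024 * 1024;  // 2 MB pages
size_t num_pages = (allocation_size + page_size - 1) / page_size;
int every_nth = 3;  // e.g., oversubscription factor 1.5: every third page on CPU
for (size_t p = 0; p < num_pages; p++) {
    char *page_ptr = (char *)uvm_alloc_ptr + p * page_size;
    size_t bytes = (p == num_pages - 1) ? (allocation_size - p * page_size) : page_size;
    if (p % every_nth == 0) {
        // Pin this page to CPU memory and map it into the GPU's address space.
        cudaMemAdvise(page_ptr, bytes, cudaMemAdviseSetPreferredLocation, cudaCpuDeviceId);
        cudaMemAdvise(page_ptr, bytes, cudaMemAdviseSetAccessedBy, current_gpu_device);
    } else {
        // Keep this page resident in GPU memory.
        cudaMemPrefetchAsync(page_ptr, bytes, current_gpu_device, task_stream);
    }
}
cudaStreamSynchronize(task_stream);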
For oversubscription values less than 1.0, all the memory pages are resident on GPU. You see higher bandwidth there compared to cases with a greater than 1.0 oversubscription factor. For oversubscription values greater than 1.0, factors like base HBM memory bandwidth and CPU-GPU interconnect speed steer the final memory read bandwidth.
Tip: When testing on a Power9 system, we came across an interesting behavior of explicit bulk memory prefetch (option a). Because access counters are enabled on P9 systems, the evicted memory doesn’t always stay pinned to CPU and Unified Memory driver can initiate data migration from CPU to GPU. This results in evictions from GPU and the cycle continues throughout the lifetime of a kernel. This process negatively affects the streaming block and grid stride kernels, and they get lower bandwidth than the manual page distribution.
Tip: As described in the tip for optimization 1 earlier, having 128-byte warp-aligned access for transaction to CPU memory results in better performance for all block stride access test cases.
Solution: Single GPU oversubscription
Of the three different memory allocation strategies for GPU oversubscription using Unified Memory, the optimal choice for a given application depends on the memory access pattern and the reuse of on-GPU memory.
When you are choosing between fault-driven migration and pinned system memory allocation, the latter performs consistently better across all platforms and GPUs. If the overall application speed benefits from GPU residency of a memory subregion, then memory page distribution between GPU and CPU is a better allocation strategy.
Try Unified Memory optimizations
In this post, we reviewed a benchmark with some common access patterns and analyzed performance on various platforms from x86 to P9, and V100 and A100 GPUs. You can use this data as a reference to make projections and consider whether using Unified Memory in your code would be beneficial. We also covered multiple data distribution patterns and Unified Memory modes, which can sometimes yield significant performance benefits. For more information, see the unified-memory-oversubscription microbenchmark source code on GitHub.
In a previous post, we demonstrated that Unified Memory–based oversubscription is especially effective for large data analytics and large deep learning models. Try Unified Memory for oversubscription in your code and let us know how it helps you improve application performance.
CUDA Pro Tip: Minimize the Tail Effect
When I work on the optimization of CUDA kernels, I sometimes see a discrepancy between Achieved and Theoretical Occupancies. The Theoretical Occupancy is the ratio between the number of threads which may run on each multiprocessor (SM) and the maximum number of executable threads per SM (2048 on the Kepler architecture). This value is estimated from the size of the blocks and the amount of resources (registers and shared memory) used by those blocks for a particular GPU and is computed without running the kernel on the GPU. The Achieved Occupancy, on the other hand, is measured during the execution of the kernel: the average number of active warps per active cycle, divided by the maximum number of warps executable per SM.
Recently, while working on a kernel for a finance benchmark, I could see an Achieved Occupancy of 41.52% whereas the Theoretical Occupancy was 50%. In NVIDIA Nsight Visual Studio Edition, the Instruction per Clock (IPC) showed a lot of load imbalance between the different SMs with respect to the number of executed instructions by the kernel (see the left graph in the figure below).
The reason for that imbalance is a Tail Effect. When the GPU launches a grid of threads for a kernel, that grid is divided into waves of thread blocks. The size of a wave depends on the number of SMs on the GPU and the Theoretical Occupancy of the kernel. On an NVIDIA Tesla K20 there are 13 SMs and the Theoretical Occupancy of my kernel was 4 blocks of 256 threads per SM (50%). Each full wave was composed of 13×4 = 52 blocks. Since the kernel was only launching a total of 128 blocks (a very low number), the code was executed in 2 full waves and a much smaller wave of 24 blocks. The last wave under-utilized the GPU but represented a significant fraction of the run time.
To improve the performance of the kernel, I used the __launch_bounds__ attribute to constrain the number of registers since it was the main occupancy limiter. As a result, the kernel is now able to run 5 blocks of 256 threads per SM instead of 4 blocks as before. The same computation is achieved in 1 full wave of 13×5 = 65 blocks and an almost-full wave of 63 blocks. The impact of the tail effect is much reduced (see the right graph in the figure above). The Theoretical and Achieved Occupancies have increased to 62.50% and 61.31%, respectively, and the performance has improved by 1.19x (from 4.535ms to 3.825ms).
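For reference, the register constraint can be expressed directly on the kernel with the __launch_bounds__ qualifier; the kernel name and body below are placeholders, not the actual finance kernel.
// Ask the compiler to limit register usage so that at least 5 blocks of
// 256 threads can be resident per SM.
__global__ void __launch_bounds__(256, 5)
finance_kernel(const float *in, float *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] * 2.0f;   // placeholder body
}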
Tail effect may play an important role when the number of blocks executed for a kernel is small. This is one of the reasons we recommend launching a large number of blocks per grid, when possible. With 100s or 1,000s of waves, the impact of a partial wave at the end is much reduced. When it is not possible to extract as much parallelism, keep the tail effect in mind. There are ways to work around it.
Enhancing Memory Allocation with New NVIDIA CUDA 11.2 Features
CUDA is the software development platform for building GPU-accelerated applications, providing all the components needed to develop applications targeting every NVIDIA GPU platform for general purpose compute acceleration. The latest CUDA release, CUDA 11.2, is focused on improving the user experience and application performance for CUDA developers.
CUDA 11.2 has several important features, including programming model updates, new compiler features, and enhanced compatibility across CUDA releases. This post offers an overview of the key CUDA 11.2 software features: the stream-ordered memory allocator, cooperative groups and CUDA graphs enhancements, compiler updates, Nsight developer tools, and enhanced CUDA compatibility.
CUDA 11.2 is available to download now.
CUDA programming model enhancements
With every CUDA release, we continue to enhance the CUDA programming model to enable you to get the most out of NVIDIA GPUs, while maintaining the programming flexibility of the higher-level APIs. In this release, we added an exciting new feature for stream-ordered memory allocation and extended some of the APIs for improving the functionality of cooperative groups and CUDA graphs.
Stream-ordered memory allocator
One of the highlights of CUDA 11.2 is the new stream-ordered CUDA memory allocator. This feature enables applications to order memory allocation and deallocation with other work launched into a CUDA stream such as kernel launches and asynchronous copies. This improves application performance by taking advantage of stream-ordering semantics to reuse memory allocations, using and managing memory pools to avoid expensive calls into the OS. The new asynchronous memory allocation and free API actions allow you to manage memory use as part of your application’s CUDA workflow. For many applications, this reduces the need for custom memory management abstractions, and makes it easier to create high-performance custom memory management for applications that need it. Moreover, this feature makes it easier to share memory pools across entities within an application.
cudaMallocAsync(&ptr, size, stream); // Allocates physical memory
kernel<<<...,stream>>>(ptr);
cudaFreeAsync(ptr, stream); // releases memory back into a pool
cudaMallocAsync(&ptr, size, stream); // Reuses previously freed pointer
kernel<<<...,stream>>>(ptr);
cudaFreeAsync(ptr, stream); // releases memory back into a pool
.... // Executes other work in the stream
As shown in this example, CUDA 11.2 introduces new stream-ordered versions of cudaMalloc and cudaFree—called cudaMallocAsync and cudaFreeAsync—which take a stream as an additional argument. The first call to cudaMallocAsync in the example allocates memory from the OS, but the subsequent call to cudaFreeAsync does not free it back to the OS. Instead, the memory is stored in a pool maintained by the CUDA driver, which allows the second call to cudaMallocAsync to reuse the memory previously freed, if it is of sufficient size.
For more information, see cudaMallocAsync in the C++ API Routines topic in the CUDA Toolkit documentation. For more information about how stream-ordered allocation works, the performance benefits, and how to port your application to use the new APIs, see Using the NVIDIA CUDA Stream-Ordered Memory Allocator posts.
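As a quick illustration of the pool behavior, the following sketch raises the release threshold of the device's default memory pool so that freed memory is cached for reuse instead of being returned to the OS; the 64 MB threshold is an arbitrary example value.

cudaMemPool_t pool;
cudaDeviceGetDefaultMemPool(&pool, 0);   // default pool used by cudaMallocAsync on device 0

// Keep up to 64 MB of freed memory cached in the pool across synchronizations.
uint64_t threshold = 64ull * 1024 * 1024;
cudaMemPoolSetAttribute(pool, cudaMemPoolAttrReleaseThreshold, &threshold);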
Cooperative groups
Cooperative groups, introduced in CUDA 9, provides device code API actions to define groups of communicating threads and to express the granularity at which threads synchronize for more efficient parallel decompositions. For more information, see Cooperative Groups: Flexible CUDA Thread Programming.
When you are using cooperative groups to launch kernels into separate streams with cuLaunchCooperativeKernel, these kernels can now execute concurrently on a GPU. Prior to CUDA 11.2, cooperative kernels were always serialized as if launched into the same stream. Kernels A and B launched into separate streams would execute sequentially on the GPU, with B waiting for A to finish before it could start. With CUDA 11.2, cooperative kernels now run concurrently if they can fit together within the GPU resources.
You can take advantage of this functionality with the existing cuLaunchCooperativeKernel API action. If you were already using multiple streams in your application, you may not even need to modify your application code to benefit from this feature.
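A minimal sketch of the pattern, using the runtime equivalent cudaLaunchCooperativeKernel; the kernels, arguments, streams, and launch dimensions below are placeholders.

// Two cooperative kernels in different streams; with CUDA 11.2 they can run
// concurrently if both fit within the GPU's resources at the same time.
int *dataA = ..., *dataB = ...;
void* argsA[] = { &dataA };
void* argsB[] = { &dataB };

cudaLaunchCooperativeKernel((void*)kernelA, dim3(104), dim3(256), argsA, 0, streamA);
cudaLaunchCooperativeKernel((void*)kernelB, dim3(104), dim3(256), argsB, 0, streamB);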
CUDA graphs
CUDA graphs were introduced in CUDA 10.0 and have seen a steady progression of new features with every CUDA release. For more information about the performance enhancement, see Getting Started with CUDA Graphs.
CUDA 11.2 introduces a new mechanism for synchronization between graph workloads and non-graph workloads. CUDA graphs now support two pairs of graph node types for external synchronization: signal and wait for CUDA events (available since CUDA 11.1), and external semaphore signal and wait (new in CUDA 11.2). These enhance existing graph functionality allowing internal graph operations to depend upon external work. Allowing graphs to inter-operate with the existing external semaphore infrastructure in CUDA enables new types of synchronization between graph workloads and non-CUDA workloads.
cudaGraphCreate(&graph, 0); // Create the graph
cudaGraphAddKernelNode(&a, graph, NULL, 0, &nodeParams); // create the nodes
cudaGraphAddKernelNode(&b, graph, NULL, 0, &nodeParams);
..
cudaGraphAddExternalSemaphoresSignalNode( &ext_sem, graph, NULL, 0, &nodeParams); // New node for external semaphore signal
..
// Now set up dependencies on each node
cudaGraphAddDependencies(graph, &a, &b, 1); // A->B
..
CUDA 11.2 now also allows graph update to change the kernel function launched by a kernel node, using either explicit node update with cudaGraphExecKernelNodeSetParams or whole graph update with cudaGraphExecUpdate. This is an enhancement when compared to prior releases, where the kernel function could not be modified and had to match the original value.
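A rough sketch of the explicit node-update path is shown below; it assumes graphExec was instantiated earlier, kernelNode is the node to change, and newKernel and kernelArgs are placeholders.

// Point an already-instantiated kernel node at a different kernel function.
cudaKernelNodeParams params = {};
params.func           = (void*)newKernel;   // before CUDA 11.2 this had to match the original function
params.gridDim        = dim3(128);
params.blockDim       = dim3(256);
params.sharedMemBytes = 0;
params.kernelParams   = kernelArgs;         // void** array of kernel arguments
params.extra          = nullptr;

cudaGraphExecKernelNodeSetParams(graphExec, kernelNode, &params);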
CUDA compiler
In CUDA 11.2, the compiler tool chain gets multiple feature and performance upgrades that are aimed at accelerating the GPU performance of applications and enhancing your overall productivity.
The compiler toolchain has an LLVM upgrade to 7.0, which enables new features and can help improve compiler code generation for NVIDIA GPUs. The CUDA C++ compiler, libNVVM, and NVRTC shared library have all been upgraded to the LLVM 7.0 code base. The libNVVM library provides GPU extensions to LLVM in support of the wider community of developers of compilers, DSL translators, and parallel applications targeting computational workloads on NVIDIA GPUs. The NVRTC shared library helps compile dynamically generated CUDA source code at runtime.
Link-time optimization for device kernel code (Device LTO), introduced as a preview feature in the CUDA 11.0 toolkit release, is now available as a full-featured optimization capability in CUDA 11.2. Device LTO enables you to enjoy the productivity benefits of separate compilation of device code without incurring an undue runtime performance overhead relative to whole-program device compilation.
The 11.2 CUDA C++ compiler can optionally generate a diagnostic report on inline functions which can provide insights into the compiler’s function inlining decisions. These diagnostic reports can aid in advanced application performance analysis and tuning efforts.
The CUDA C++ compiler aggressively inlines device functions into call sites by default. This can make assembly-level debugging of optimized device code a difficult task. For source code compiled using the 11.2 CUDA C++ compiler toolchain, the cuda-gdb and NVIDIA Nsight Compute debugger can display names of inlined device functions in call-stack backtraces, thereby improving the debugging experience. You can single step through inline functions just like any other device function.
Nsight Developer Tools
NVIDIA Developer Tools are a collection of applications, spanning desktop and mobile targets, which enable you to build, debug, profile, and develop CUDA applications that use the latest visual computing hardware. The NVIDIA Nsight tools have introduced some new functionality as well in CUDA 11.2.
Nsight Systems is a system-wide performance analysis tool, designed to help developers tune and scale software across CPUs and GPUs. The new 2020.5 update enhances Vulkan ray tracing, and profile tracing for NVIDIA Collectives Communication Library (NCCL) and CUDA memory allocation. It also delivers performance and UX improvements.
NVIDIA Nsight Systems 2020.5 is now available for download.
The 2020.3 release of NVIDIA Nsight Compute included in the 11.2 CUDA Toolkit introduces several new features that simplify the process of CUDA kernel profiling and optimization. The update for Nsight Compute introduces a new Profile Series feature enabling you to configure ranges for multiple kernel parameters, and a source file import functionality.
NVIDIA Nsight Compute 2020.3 is now available for download.
CUDA enhanced compatibility
Here’s a review of the enhanced CUDA compatibility support that was enabled in CUDA 11.1 and what it means for CUDA developers. By leveraging semantic versioning across components in the CUDA Toolkit, these components remain binary-compatible across all minor versions of a toolkit release. This means that CUDA has relaxed the minimum driver version check for the CUDA Toolkit and no longer requires a driver upgrade with minor releases. This is especially important for users who don’t have root privileges on their system.
For enterprise users, upgrading to the newer version of the CUDA driver was particularly cumbersome as it required quite a bit of planning and execution to ensure that all components in the production stack dependent on the driver were accounted for and validated. With enhanced compatibility, you can upgrade to a newer version of the CUDA Toolkit while still using an older version of the CUDA driver.
Enhanced CUDA compatibility also gives you the flexibility to move to newer toolkits and their features, except for those that introduce new APIs or depend on the kernel-mode driver. You get compatibility of the CUDA Toolkit with the CUDA driver across all minor versions. An application can be built for one CUDA minor release (for example, 11.1) and work across all future minor releases within the major family (for example, 11.x), as shown in Figure 2.
For more information about the enhanced compatibility feature and the overall CUDA compatibility model, see the CUDA Compatibility guide in the toolkit documentation.
Summary
To learn more about the CUDA 11 generation toolkit capabilities, see CUDA 11 Features Revealed and follow future CUDA posts.
CUDA 7.5: Pinpoint Performance Problems with Instruction-Level Profiling
[Note: Thejaswi Rao also contributed to the code optimizations shown in this post.]
Today NVIDIA released CUDA 7.5, the latest release of the powerful CUDA Toolkit. One of the most exciting new features in CUDA 7.5 is new Instruction-Level Profiling support in the NVIDIA Visual Profiler. This powerful new feature, available on Maxwell (GM200) and later GPUs, helps pinpoint performance bottlenecks, letting you quickly identify the specific lines of source code (and assembly instructions) limiting the performance of GPU code, along with the underlying reason for execution stalls.
In this post, I demonstrate Instruction-Level Profiling by showing how it helped understand and improve the performance limitations of a CUDA kernel that implements the Iterative Closest Point algorithm (the original source code, by Thomas Whelan, is available on Github). I’ll show how instruction-level profiling makes it easier to apply advanced optimizations, helping speed up the example kernel by 2.7X on an NVIDIA Quadro M6000 GPU.
Profiling the kernel using the Guided Analysis feature of the Visual Profiler showed that the kernel performance was bound by instruction and memory latency. Latency issues indicate that the hardware resources are not used efficiently since most warps are stalled by a dependency on a data value from a previous math or memory instruction. Figure 1 shows that the compute units are only 40% utilized and memory units are around 25% utilized, so there is definitely room for improvement.
Stall Analysis in Previous Profiler Versions
Before CUDA 7.5, the Visual Profiler was only capable of pointing out performance issues at the application or CUDA kernel level. For stall latency analysis, the CUDA 7.0 Visual Profiler produces the pie chart in Figure 2 by collecting various stall reason events for the entire kernel.
This pie chart shows that the two primary stall reasons in this kernel are synchronization and memory dependencies. But if I look into the kernel code, there are lots of memory accesses and __syncthreads() calls, so this high-level analysis doesn’t provide any specific insight into which instructions are potential bottlenecks. In general it can be very difficult to find exact bottleneck causes in complex kernels using kernel-level profiling analysis. This is where CUDA 7.5 can help, as you’ll see.
Instruction-Level Profiling
Instruction-level profiling in CUDA 7.5 uses program counter (PC) sampling with a per-streaming-multiprocessor (SM) fixed-frequency sampler. At each sample period, the collector picks an active warp from the SM and captures the warp’s PC and warp state. Warp selection is performed round-robin across all active warps on the SM.
Assuming there are 8 active warps on each SM warp scheduler and the sampling period is 256 cycles, the profiler collects samples as Figure 3 shows.
In the Visual Profiler, the sampling period is fixed from run to run on a given GPU, but it can vary across different GPUs.
Visual Profiler PC Sampling Results View
The Visual Profiler shows the Instruction-level profiling view when you select “Kernel Profile – PC sampling”. This view shows the distribution of samples across CUDA functions including kernels, non-inlined device functions and child kernels (launched via Dynamic Parallelism). It also lists the files containing the sampled GPU code. Figure 4 shows the output for the Iterative Closest Point example, where a single kernel is profiled (the kernel calls only inlined functions).
The view also contains a pie chart of warp state distribution as Figure 5 shows. The pie chart in this view statistically matches the stall reasons pie chart in Figure 2 that was produced using event collection.
Clicking on a function or file in the results opens the source-assembly view, as Figure 6 shows. The source-assembly view presents the warp state samples as a stacked bar graph next to the source line and instruction associated with the Program Counter in the sample. Hovering the mouse pointer over the stacked bar shows a tooltip containing the sample count and stall reason count for the source line or instruction. The stall reasons are sorted in decreasing order.
Analyzing the PC Sampling Output
Clicking on “CombinedKernel…” in the “Cuda Functions” table under “Analysis Results” opens the source assembly view and jumps to the first hotspot in the kernel at source line 197. The two primary stall reasons are synchronization (51.1%) and memory dependency (34.1%), as Figure 7 highlights.
Reducing Memory Dependency Stalls
Examining the assembly instructions corresponding to the hot spot source line shows that memory stalls occur at instructions that use the result of earlier LDL (“load local”) instructions, and synchronization stalls occur at the instruction after a BAR.SYNC barrier instruction (__syncthreads()).
Source line 197 indexes the local (stack-allocated) array row, and the profiler shows that this line correlates to an LDL instruction. NVIDIA GPUs do not have indexed register files, so if a stack array is accessed with dynamic indices, the compiler must allocate the array in local memory. In the Maxwell architecture, local memory stores are not cached in L1 and hence the latency of local memory loads after stores is significant. To work around this, I decided to use individual local variables instead of indexed variables.
I replaced the local array with individual local variables and unrolled the ‘for’ loops to use the new local variables.
Original code:
float row[7];
//Initialize array row
int shift = 0;
__shared__ float smem[CTA_SIZE];
for (int i = 0; i < 6; ++i) // rows
{
#pragma unroll
for (int j = i; j < 7; ++j) // cols + b
{
__syncthreads ();
smem[tid] = row[i] * row[j];
__syncthreads ();
reduce(smem);
if (tid == 0)
gbuf.ptr (shift++)[blockIdx.x + gridDim.x * blockIdx.y]
= smem[0];
}
}
New code:
float row0, row1, row2, row3, row4, row5, row6;
//Initialize all elements
#define UNROLL_REDUCE(val, buf) \
do { \
smem[tid] = val; \
__syncthreads(); \
reduce(smem); \
if (tid == 0) \
buf.ptr (shift++)[blockIdx.x + gridDim.x * blockIdx.y] \
= smem[0]; \
} while(0)
UNROLL_REDUCE(row0*row0, gbuf);
UNROLL_REDUCE(row0*row1, gbuf);
UNROLL_REDUCE(row0*row2, gbuf);
UNROLL_REDUCE(row0*row3, gbuf);
UNROLL_REDUCE(row0*row4, gbuf);
This change removed all the memory dependency stalls due to local memory accesses, which reduced the kernel run time from 3.9ms to 2.3ms, giving a 1.6X improvement in performance.
Figure 9 shows that memory dependency stalls were reduced from 21.68% (Figure 5) to 8.21%, along with an overall sample count reduction from 30908 (Figure 4) to 20023 (Figure 8).
Reducing Synchronization Stalls
After the first optimization, Figure 9 shows that the kernel is bottlenecked by synchronization stalls, so I investigated how to reduce these next. Synchronization stalls occur when warps arrive at a barrier instruction and then wait for all other warps in the thread block to arrive. This kernel uses shared memory reductions, which introduce many barrier instructions, as Figure 10 illustrates.
The Parallel Forall post “Faster Parallel Reductions with Kepler” shows how reduction operations can be optimized using the shuffle instruction (available on the Kepler GPU architecture and later). I decided to use the same strategy on this kernel.
I modified the kernel to use shuffle instructions to implement intra-warp reductions first as Figure 11 shows. This reduced inter-warp synchronization and removed the need for many shared memory accesses, and therefore also eliminated multiple calls to __syncthreads().
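The modified kernel itself isn't reproduced in this excerpt, but an intra-warp shuffle reduction of the kind described above typically looks like the sketch below (written with the modern __shfl_down_sync intrinsic; the CUDA 7.5-era code would have used the non-_sync __shfl_down variant).

// Sum a per-thread value across the 32 lanes of a warp without shared memory
// or __syncthreads(); lane 0 ends up holding the full warp sum.
__inline__ __device__ float warpReduceSum(float val)
{
    for (int offset = warpSize / 2; offset > 0; offset /= 2)
        val += __shfl_down_sync(0xffffffff, val, offset);
    return val;
}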
Figure 12 shows the PC sampling distribution for the modified code, with a decrease in synchronization stalls, but an increase in execution dependency stalls.
This optimization didn’t result in a significant performance increase; as you can see the total samples only went down from 20023 (Figure 8) to 19842 (Figure 13). Clicking on the kernel function again showed that line 93 has the maximum synchronization stalls as Figure 14 shows.
There is a shared store before the __syncthreads(), so it is evident that the stalls are due to shared memory store latency: all warps in the block wait for the shared store to complete. This wait happens for all 27 elements processed by each thread. Software pipelining is a common technique used to hide this sort of latency. The compiler cannot automatically perform software pipelining because it can’t reorder the code across __syncthreads(). Also, there is a global store after each reduction and the compiler can’t reorder the code across global loads/stores due to pointer aliasing issues. However, I know at the algorithm level that the elements each thread reduces do not depend on each other, so I modified the code to operate on multiple elements before syncing (manual software pipelining). Figure 15 shows the resulting code and sampling.
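Since Figure 15 isn't reproduced here, the idea can be sketched as follows: write several independent partial results to shared memory before a single barrier, so that the shared-store latency of one element overlaps with the computation of the next. All names below are illustrative.

// Before: one shared store, one __syncthreads(), and one reduction per element.
// After (sketch): batch two independent elements per barrier.
__shared__ float smem0[CTA_SIZE];
__shared__ float smem1[CTA_SIZE];

smem0[tid] = partial0;    // independent per-thread partial results
smem1[tid] = partial1;
__syncthreads();          // one barrier now covers both shared stores
reduce(smem0);            // intra-block reductions as before
reduce(smem1);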
This indeed hides significant shared store latency, reducing synchronization samples from 4987 (Figure 14) to 2548 (Figure 15). It also reduces the execution dependency stalls seen at line 76, the second highest hotspot in Figure 14, because the compiler now interleaves __shfl_down instructions for different elements while performing the intra-warp reduction. Checking the PC sampling output of this final code, I now see a significant reduction in the total number of samples from 19842 (Figure 13) to 13280 (Figure 16). The kernel time decreased from 2.33ms to 1.41ms.
Summary
The following table summarizes the samples collected, the stall reasons and the performance improvement achieved.
Thanks to the deep insight provided by Instruction-level profiling, I was able to decrease the kernel run time by 2.7X. Note that the Visual Profiler still shows latency as the limiter in the new code, but the compute and memory utilization have increased from 40% and 25% to 60% and 35%, respectively, as Figure 17 shows.
The NVIDIA Visual Profiler Instruction-level analysis features in CUDA 7.5 are very helpful for locating and fixing performance issues in CUDA code. You can read more about these features in the Profiler Users Guide.
Download CUDA 7.5 Today
CUDA Toolkit 7.5 is available now, so download it today! CUDA 7.5 is chock full of new features, including instruction-level profiling, mixed-precision (FP16) data storage, new cuSPARSE routines for accelerating natural language processing, and experimental support for GPU lambdas in C++. You can read all about it in the Parallel Forall post “New Features in CUDA 7.5“.
To learn more about the features in CUDA 7.5, register for the webinar “CUDA Toolkit 7.5 Features Overview” and put it on your calendar for September 22.
Profit and Loss Modeling on GPUs with ISO C++ Language Parallelism
The previous post How to Accelerate Quantitative Finance with ISO C++ Standard Parallelism demonstrated how to write a Black-Scholes simulation using ISO C++ standard parallelism with the code found in the /NVIDIA/accelerated-quant-finance GitHub repo. This approach enables you to productively write code that is both concise and portable.
Using solely standard C++, it’s possible to write an application that can be run in parallel on a modern, multicore CPU or on a GPU without modification. This post builds a more complex model, starting from the previously developed parallel Black-Scholes code, and optimizes it to use the benefits of the GPU, while remaining in standard C++.
Profit and loss modeling explained
A popular strategy to trade realized volatility involves delta-hedging an option position. Under Black-Scholes assumptions, if an investor succeeds in hedging away the underlying risk, then the main contributor of the profit and loss (P&L) from this strategy is proportional to the difference between the squares of the realized volatility and the volatility used to price and hedge the option.
The P&L is dependent on the path of the underlying. Estimating a full P&L distribution to a given horizon for a large portfolio of options can be rather compute-intensive and warrants an extension of the parallel Black-Scholes code.
Consider a grid of long European call options of various strikes and maturities on the same underlying $S$. Assume that the options are held to a given time horizon ($N$ timesteps) and that they are delta-hedged at each timestep.
As time passes, the underlying moves, the moneyness of each option changes accordingly, and the expiry draws nearer.
For a given option contract, the premium $V$ is a function of several parameters, including the underlying $S$ and the option's remaining time to expiry $\tau$: $V = V(S, \tau, \ldots)$. Theoretically, the shorter $\tau$, the less opportunity for movement in $S$.
Assuming all parameters remain unchanged as time passes, the option loses value with each tick of the clock. This negative change in the option’s value as time goes by is known as theta, or time decay.
As the underlying moves over time, the value of the option also changes.
To the first order, the resulting change in the option value is given by the delta of the option. For example, if delta is 0.55 and $S$ moves up by 1, then the option price also moves up by approximately 0.55.
To the second order, as $S$ moves, so does the delta of the option, by an amount proportional to the second-order Greek gamma. Because a long call is a convex function of the underlying, gamma is positive and the gain in price due to gamma is also positive regardless of the direction of the underlying move.
In the case of a delta-hedged option, the aggregate delta P&L is zero, and the gamma gain has the potential to counteract, surpass, or succumb to losses due to theta (Figure 1).
In this example, the goal is to characterize the distribution of the gamma-theta P&L at a given horizon for each option in the grid by simulating paths of the underlying and accumulating the P&L along those paths.
In this simple Black-Scholes world, the underlying $S$ follows lognormal dynamics under the risk-neutral measure with realized volatility $\sigma_r$:

$$dS_t = r\,S_t\,dt + \sigma_r\,S_t\,dW_t$$
The daily (or one-time-step) P&L can be obtained as:

$$\mathrm{P\&L}_t = \tfrac{1}{2}\,\Gamma_t\,(\Delta S_t)^2 + \Theta_t\,\Delta t$$

In this equation, $\Gamma_t$ and $\Theta_t$ are the gamma and theta Greeks at time step $t$, computed using a hedging volatility $\sigma_h$ (in practice, the implied volatility is often used as the hedging volatility).
The P&L over a single path of the underlying consisting of $N$ time steps is the cumulation of the daily P&Ls:

$$\mathrm{P\&L} = \sum_{t=1}^{N} \mathrm{P\&L}_t$$
Parallel P&L simulations
Figure 2 shows the options grid and four simulated paths. Each grid cell represents an option contract with its corresponding moneyness and time to maturity marked on the horizontal and vertical axes, respectively. The color in the heatmap is proportional to the average P&L across these paths. The average is just one statistic that can be computed from the simulated P&L for which the full distribution is available through simulation.
The parallel code from the previous post serves as the baseline. Each path is looped over and then walked, parallelizing the P&L calculation over options, as was done in the previous example.
This is a reasonable approach because there is the potential to have a significant number of options to parallelize. The code itself is straightforward and the only major difference is a transform added at the end to convert the sums to means.
However, there are still opportunities to further optimize the code and improve performance.
void calculate_pnl_paths_sequential(stdex::mdspan<const double, stdex::dextents<size_t,2>> paths,
std::span<const double>Strikes,
std::span<const double>Maturities,
std::span<const double>Volatilities,
const double RiskFreeRate,
std::span<double>pnl,
const double dt)
{
int num_paths = paths.extent(0);
int horizon = paths.extent(1);
auto steps = std::views::iota(1,horizon);
// Iterate from 0 to num_paths - 1
auto path_itr = std::views::iota(0,num_paths);
// Note - In this version path remains in CPU memory
// Note - Also that when built for the GPU this will result in
// num_paths * (horizon - 1) kernel launches
std::for_each(path_itr.begin(), path_itr.end(),
[=](int path) // Called for each path from 0 to num_paths - 1
{
// Iterate from 1 to horizon - 1
std::for_each(steps.begin(), steps.end(),
[=](int step) // Called for each step along the chosen path
{
// Query the number of options from the pnl array
int optN = pnl.size();
// Enumerate from 0 to (optN - 1)
auto opts = std::views::iota(0,optN);
double s = paths(path,step);
double s_prev = paths(path,step-1);
double ds2 = s - s_prev;
ds2 *= ds2;
// Calculate pnl for each option
std::transform(std::execution::par_unseq, opts.begin(), opts.end(),
pnl.begin(), [=](int opt)
{
double gamma = 0.0, theta = 0.0;
BlackScholesBody(gamma,
s_prev,
Strikes[opt],
Maturities[opt] - std::max(dt*(step-1),0.0),
RiskFreeRate,
Volatilities[opt],
CALL,
GAMMA);
BlackScholesBody(theta,
s_prev,
Strikes[opt],
Maturities[opt] - std::max(dt*(step-1),0.0),
RiskFreeRate,
Volatilities[opt],
CALL,
THETA);
// P&L = 0.5 * Gamma * (dS)^2 + Theta * dt
return pnl[opt] + 0.5 * gamma * ds2 + (theta*dt);
});
});
});
}
Increasing parallelism for increased performance
Whenever a parallel algorithm is offloaded to a GPU, two overheads are introduced: the cost of launching each kernel on the GPU and the cost of synchronizing with the GPU to obtain its results.
Neither of these overheads is particularly large, a small fraction of a second each time, but they add up when done repeatedly. Even worse, the NVIDIA Nsight Systems profiler reveals that each kernel requires a device synchronization step that is longer than the kernel itself.
The paths are independent random walks that have no relation aside from the same initial value of the underlying . Therefore, you can parallelize across paths too, as long as no two paths attempt to update the same place in memory at the same time, which would be a race condition.
To address this potential race condition, use C++ atomic_ref to make sure that if two paths try to update the same location in the P&L array at the same time, they do so in a safe manner.
By moving the iteration over paths into the function, it is now possible to parallelize over both paths and options along each path. Though this example is more complicated, it’s essentially the same refactoring done for the initial example.
void calculate_pnl_paths_parallel(stdex::mdspan<const double,
stdex::dextents<size_t,2>> paths,
std::span<const double>Strikes,
std::span<const double>Maturities,
std::span<const double>Volatilities,
const double RiskFreeRate,
std::span<double>pnl,
const double dt)
{
int num_paths = paths.extent(0);
int horizon = paths.extent(1);
int optN = pnl.size();
// Create an iota to enumerate the flatted index space of
// options and paths
auto opts = std::views::iota(0,optN*num_paths);
std::for_each(std::execution::par_unseq, opts.begin(), opts.end(),
[=](int idx)
{
// Extract path and option number from flat index
// C++23 cartesian_product would remove the need for below
int path = idx/optN;
int opt = idx%optN;
// atomic_ref prevents race condition on elements of pnl array.
std::atomic_ref<double> elem(pnl[opt]);
// Walk the path from 1 to (horizon - 1) in steps of 1
auto path_itr = std::views::iota(1,horizon);
// Transform_Reduce will apply the lambda to every option and perform
// a plus reduction to sum the PNL value for each option.
double pnl_temp = std::transform_reduce(path_itr.begin(), path_itr.end(),
0.0, std::plus{},
[=](int step) {
double gamma = 0.0, theta = 0.0;
double s = paths(path,step);
double s_prev = paths(path,step-1);
double ds2 = s - s_prev;
ds2 *= ds2;
// Options in the grid age as the simulation progresses
// along the path
double time_to_maturity = Maturities[opt] -
std::max(dt*(step-1),0.0);
BlackScholesBody(gamma,
s_prev,
Strikes[opt],
time_to_maturity,
RiskFreeRate,
Volatilities[opt],
CALL,
GAMMA);
BlackScholesBody(theta,
s_prev,
Strikes[opt],
time_to_maturity,
RiskFreeRate,
Volatilities[opt],
CALL,
THETA);
// P&L = 0.5 * Gamma * (dS)^2 + Theta * dt
return 0.5 * gamma * ds2 + (theta*dt);
});
// accumulate on atomic_ref to pnl array
elem.fetch_add(pnl_temp, std::memory_order_relaxed);
});
}
A std::for_each algorithm is used to iterate across paths and options. Within each iteration, a std::transform_reduce algorithm is used to traverse each path for each option, adding up the profits and losses and returning that result. Each of these intermediate results is then automatically added to the P&L array.
The main benefit of this approach is that rather than repeatedly bouncing back and forth between the GPU and CPU, a single operation is launched on the GPU for the complete data set and the program only waits for the result one time (Figure 3).
This approach results in a significant performance improvement over the original, which itself was already accelerated on the GPU (Figure 4).
The lesson learned from this second example is to expose as much parallelism for the hardware as possible. Both the CPU and GPU versions improved with the first approach, but the GPU version really shines after reducing the launch and synchronization overheads by exposing more parallelism.
Explore the code
The acceleration realized in this quantitative finance example using the code in the /NVIDIA/accelerated-quant-finance GitHub repo can be easily applied to your C++ applications. Any C++ code written with serial loops can be easily modified using standard language parallelism to achieve significant GPU acceleration.
To easily produce your own portable and parallel-first code, download the NVIDIA HPC SDK, which contains all of the tools to make use of ISO C++ standard parallelism and profile the results.
Accelerating Matrix Multiplication with Block Sparse Format and NVIDIA Tensor Cores
Sparse-matrix dense-matrix multiplication (SpMM) is a fundamental linear algebra operation and a building block for more complex algorithms such as finding the solutions of linear systems, computing eigenvalues through the preconditioned conjugate gradient, and multiple right-hand sides Krylov subspace iterative solvers. SpMM is also an important kernel used in many domains such as fluid dynamics, deep learning, graph analytics, and economic modeling. In the specific context of deep learning, sparsity has emerged as one of the leading approaches for increasing training and inference performance as well as reducing the model sizes while keeping the accuracy.
Even though sparse linear algebra allows representing huge matrices very efficiently, it typically does not provide competitive performance compared to dense counterparts in cases when sparsity is below 95%. This is due to irregular computation and scattered memory accesses. In fact, many of the linear algebra applications that benefit from sparsity have over 99% sparsity in their matrices.
To overcome this limitation, the NVIDIA Ampere architecture introduces the concept of fine-grained structured sparsity, which doubles throughput of dense-matrix multiplies by skipping the computation of zero values in a 2:4 pattern. Recently, NVIDIA introduced the cuSPARSELt library to fully exploit third-generation Sparse Tensor Core capabilities.
The primary alternative to fine-grained sparsity is through the organization of matrix entries/network weights in groups, such as vectors or blocks. This coarse-grained sparsity allows regular access pattern and locality, making the computation amenable for GPUs. In deep learning, block sparse matrix multiplication is successfully adopted to reduce the complexity of the standard self-attention mechanism, such as in Sparse Transformer models or in its extensions like Longformer.
Starting with cuSPARSE 11.4.0, the CUDA Toolkit provides a new high-performance block sparse matrix multiplication routine that allows exploiting NVIDIA GPU dense Tensor Cores for nonzero sub-matrices and significantly outperforms dense computations on Volta and newer architecture GPUs.
cuSPARSE Block-SpMM: Efficient, block-wise SpMM
Figure 1 shows the general matrix multiplication (GEMM) operation by using the block sparse format. On the left are the full matrix organized in blocks and its internal memory representation: compressed values and block indices. As in the usual dense GEMM, the computation partitions the output matrix into tiles. The kernel computes an output tile by stepping through the active tiles from left to right and accumulates the results into the C matrix. Differently from classical GEMM, not all values of the dense matrix B are accessed for computing the output. This approach allows skipping the unnecessary computation associated with zero blocks and dramatically improves performance.
Blocked-Ellpack format
Figure 2 shows that the Blocked-Ellpack (Blocked-ELL) storage format contains two 2-D arrays. One array stores the nonzero values in consecutive blocks, while the other contains the column indices of the corresponding nonzero blocks. All rows in the arrays must have the same number of blocks. Non-structural zero blocks are also accepted as padding. These arrays store components in row-major order, like the compressed sparse row (CSR) format.
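As a purely illustrative sketch (not taken from the post), a small matrix with 2x2 blocks and one nonzero block per block-row could be laid out in Blocked-ELL as follows:

// Hypothetical 4x4 matrix A with 2x2 blocks, one nonzero block per block-row
// (ell_blocksize = 2, ell_cols = 2):
//
//   A = | 1 2 0 0 |   block-row 0: nonzero block at block-column 0
//       | 3 4 0 0 |
//       | 0 0 5 6 |   block-row 1: nonzero block at block-column 1
//       | 0 0 7 8 |
//
// Blocked-ELL arrays, both stored in row-major order:
int   ell_colind[] = { 0,        // block-row 0 -> block-column 0
                       1 };      // block-row 1 -> block-column 1
float ell_values[] = { 1, 2,     // rows of the 4x2 values array:
                       3, 4,     //   block-row 0's 2x2 block
                       5, 6,     //   block-row 1's 2x2 block
                       7, 8 };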
cuSPARSE SpMM
The cuSPARSE library provides the cusparseSpMM routine for SpMM operations, which computes the following multiplication:

$$C = \alpha \cdot \mathrm{op}(A) \cdot \mathrm{op}(B) + \beta \cdot C$$
In this operation, A is a sparse matrix of size MxK, while B and C are dense matrices of size KxN and MxN, respectively. The layout of the matrix B is denoted with N for row-major order, where op(B) is non-transposed, and T for column-major order, where op(B) is transposed.
cusparseSpMM selects suitable kernels depending on the storage format, the number of nonzero components, and matrix layouts. This routine supports CSR, Coordinate (COO), as well as the new Blocked-ELL storage formats. Table 1 shows the supported data types, layouts, and compute types.
Block-SpMM performance
Here’s a snapshot of the relative performance of dense and sparse-matrix multiplications exploiting NVIDIA GPU Tensor Cores. Figures 3 and 4 show the performance of Block-SpMM on NVIDIA V100 and A100 GPUs with the following settings:
The speedup ratio compared to cuBLAS is nearly linear to the sparsity on both NVIDIA V100 and A100 GPUs. When the block size is 32, the kernel is faster than cuBLAS if the density is less than 40% on NVIDIA Volta and 50% on NVIDIA Ampere architecture.
For better performance, it is important to satisfy the following conditions:
Block-SpMM code example
For this new storage format, perform similar steps as with CSR and COO cusparseSpMM. For more information, see the cuSPARSE/spmm_blockedell repo.
First, include the cuSPARSE header, set up some device pointers, and initialize the cuSPARSE handle:
#include <cusparse.h>
cusparseHandle_t handle = nullptr;
cusparseCreate(&handle);
float alpha = 1.0f;
float beta = 0.0f;
int* d_ell_colidx = ...
__half* d_ell_values = ...
__half* dB = ...
__half* dC = ...
int ell_blocksize = 32;
Next, create the block sparse input matrix A, dense input matrix B, and dense output matrix C descriptors:
cusparseSpMatDescr_t matA;
cusparseDnMatDescr_t matB, matC;
cusparseCreateBlockedEll(&matA, A_num_rows, A_num_cols,
ell_blocksize, ell_cols,
d_ell_colidx, d_ell_values,
CUSPARSE_INDEX_32I, CUSPARSE_INDEX_BASE_ZERO,
AB_type);
cusparseCreateDnMat(&matB, B_num_rows, B_num_cols, B_ld,
dB, AB_type, CUSPARSE_ORDER_ROW);
cusparseCreateDnMat(&matC, C_num_rows, C_num_cols, C_ld,
dC, C_type, CUSPARSE_ORDER_ROW);
Then, allocate an external buffer for the multiplication:
void* d_buffer;
size_t bufferSize;
cusparseSpMM_bufferSize(handle,
CUSPARSE_OPERATION_NON_TRANSPOSE,
CUSPARSE_OPERATION_NON_TRANSPOSE,
&alpha, matA, matB, &beta, matC, CUDA_R_32F,
CUSPARSE_SPMM_ALG_DEFAULT, &bufferSize);
cudaMalloc(&d_buffer, bufferSize);
Now you can execute SpMM:
cusparseSpMM(handle,
CUSPARSE_OPERATION_NON_TRANSPOSE,
CUSPARSE_OPERATION_NON_TRANSPOSE,
&alpha, matA, matB, &beta, matC, CUDA_R_32F,
CUSPARSE_SPMM_ALG_DEFAULT, d_buffer);
Finally, destroy the cuSPARSE descriptors and handle and clean up the used memory:
cusparseDestroySpMat(matA);
cusparseDestroyDnMat(matB);
cusparseDestroyDnMat(matC);
cusparseDestroy(handle);
cudaFree(d_buffer);
Get started with cuSPARSE Block-SpMM
The cuSPARSE library now provides fast kernels for block SpMM exploiting NVIDIA Tensor Cores. With the Blocked-ELL format, you can compute faster than dense-matrix multiplication depending on the sparsity of the matrix. The latest version of cuSPARSE can be found in the CUDA Toolkit.
For more information, see the cuSPARSE library documentation and the cuSPARSE/spmm_blockedell example repository.
CUDA 11 Features Revealed
The new NVIDIA A100 GPU based on the NVIDIA Ampere GPU architecture delivers the greatest generational leap in accelerated computing. The A100 GPU has revolutionary hardware capabilities and we’re excited to announce CUDA 11 in conjunction with A100.
CUDA 11 enables you to leverage the new hardware capabilities to accelerate HPC, genomics, 5G, rendering, deep learning, data analytics, data science, robotics, and many more diverse workloads.
CUDA 11 is packed full of features, from platform system software to everything that you need to get started and develop GPU-accelerated applications. This post offers an overview of the major software features in this release: support for the NVIDIA Ampere GPU architecture, Multi-Instance GPU (MIG), system software and platform updates, third-generation Tensor Cores, new programming model features, and CUDA C++ language and compiler improvements.
A single post cannot do justice to every feature available in CUDA 11. At the end of this post, there are links to GTC Digital sessions that offer deeper dives into the new CUDA features.
CUDA and NVIDIA Ampere microarchitecture GPUs
Fabricated on the TSMC 7nm N7 manufacturing process, the NVIDIA Ampere GPU microarchitecture includes more streaming multiprocessors (SMs), larger and faster memory, and interconnect bandwidth with third-generation NVLink to deliver massive computational throughput.
The A100's 40 GB (five-stack) high-speed HBM2 memory has a bandwidth of 1.6 TB/sec, which is over 1.7x faster than V100. The 40 MB L2 cache on A100 is almost 7x larger than that of Tesla V100 and provides over 2x the L2 cache-read bandwidth. CUDA 11 provides new specialized L2 cache management and residency control APIs on the A100. The SMs in A100 include a larger and faster combined L1 cache and shared memory unit (at 192 KB per SM) to provide 1.5x the aggregate capacity of the Volta V100 GPU.
The A100 comes equipped with specialized hardware units including third-generation Tensor Cores, more video decoder (NVDEC) units, JPEG decoder and optical flow accelerators. All of these are used by various CUDA libraries to accelerate HPC and AI applications.
The next few sections discuss the major innovations introduced in NVIDIA A100 and how CUDA 11 enables you to make the most of these capabilities. CUDA 11 offers something for everyone, whether you’re a platform DevOps engineer managing clusters or a software developer writing GPU-accelerated applications. For more information about the NVIDIA Ampere GPU microarchitecture, see the NVIDIA Ampere Architecture In Depth post.
Multi-Instance GPU
The MIG feature can physically divide a single A100 GPU into multiple GPUs. It enables multiple clients such as VMs, containers, or processes to run simultaneously while providing error isolation and advanced quality of service (QoS) between these programs.
A100 is the first GPU that can either scale up to a full GPU with NVLink or scale out with MIG for many users by lowering the per-GPU instance cost. MIG enables several use cases to improve GPU utilization. This could be for CSPs to rent separate GPU instances, running multiple inference workloads on the GPU, hosting multiple Jupyter notebook sessions for model exploration, or resource sharing of the GPU among multiple internal users in an organization (single-tenant, multi-user).
MIG is transparent to CUDA and existing CUDA programs can run under MIG unchanged to minimize programming effort. CUDA 11 enables configuration and management of MIG instances on Linux operating systems using the NVIDIA Management Library (NVML) or its command-line interface nvidia-smi (nvidia-smi mig subcommands).
Using the NVIDIA Container Toolkit and A100 with MIG enabled, you can also run GPU containers with Docker (using the --gpus option starting with Docker 19.03) or scale out with the Kubernetes container platform using the NVIDIA device plugin.
The following command shows MIG management using nvidia-smi:
# List gpu instance profiles:
# nvidia-smi mig -i 0 -lgip
+-------------------------------------------------------------------------+
| GPU instance profiles: |
| GPU Name ID Instances Memory P2P SM DEC ENC |
| Free/Total GiB CE JPEG OFA |
|=========================================================================|
| 0 MIG 1g.5gb 19 0/7 4.95 No 14 0 0 |
| 1 0 0 |
+-------------------------------------------------------------------------+
| 0 MIG 2g.10gb 14 0/3 9.90 No 28 1 0 |
| 2 0 0 |
+-------------------------------------------------------------------------+
| 0 MIG 3g.20gb 9 0/2 19.81 No 42 2 0 |
| 3 0 0 |
+-------------------------------------------------------------------------+
| 0 MIG 4g.20gb 5 0/1 19.81 No 56 2 0 |
| 4 0 0 |
+-------------------------------------------------------------------------+
| 0 MIG 7g.40gb 0 0/1 39.61 No 98 5 0 |
| 7 1 1 |
+-------------------------------------------------------------------------+
System software platform support
For use in the enterprise datacenter, the NVIDIA A100 introduces new memory error recovery features that improve resilience and avoid impacting running CUDA applications. Uncorrectable ECC errors on prior architectures would impact all running workloads on the GPU, requiring a reset of the GPU.
On the A100, the impact is limited to the application that encountered the error and which is terminated, while other running CUDA workloads are unaffected. The GPU no longer requires a reset to recover. The NVIDIA driver performs dynamic page blacklisting to mark the page unusable so that current and new applications do not access the affected memory region.
When the GPU is reset, as part of a regular GPU/VM service window, the A100 is equipped with a new hardware mechanism called row-remapping that replaces degraded cells in memory with spare cells and avoids creating any holes in the physical memory address space.
The NVIDIA driver with CUDA 11 now reports various metrics related to row-remapping both in-band (using NVML/nvidia-smi) and out-of-band (using the system BMC). A100 includes new out-of-band capabilities, in terms of more available GPU and NVSwitch telemetry, control and improved bus transfer data rates between the GPU and the BMC.
For improved resiliency and high availability on multi-GPU systems such as DGX A100 and HGX A100, the system software supports the ability to disable a failing GPU or NVSwitch node rather than the entire baseboard as in previous generations of systems.
CUDA 11 is the first release to add production support for Arm servers. By combining Arm's energy-efficient CPU architecture with CUDA, the Arm ecosystem will benefit from GPU-accelerated computing for a variety of use cases: from edge, cloud, and gaming to powering supercomputers. CUDA 11 supports Marvell's high-performance ThunderX2-based servers, and NVIDIA is working closely with Arm and other hardware and software partners in the ecosystem to quickly enable support for GPUs.
Third-generation, multi-precision Tensor Cores
The four large Tensor Cores per SM (for a total of 432 Tensor Cores) in the NVIDIA A100 provide faster matrix-multiply-accumulate (MMA) operations for all datatypes: Binary, INT4, INT8, FP16, Bfloat16, TF32, and FP64.
You access Tensor Cores through either different deep learning frameworks, CUDA C++ template abstractions provided by CUTLASS, or CUDA libraries such as cuBLAS, cuSOLVER, cuTENSOR, or TensorRT.
CUDA C++ makes Tensor Cores available using the warp-level matrix (WMMA) API. This portable API abstraction exposes specialized matrix load, matrix multiply and accumulate, and matrix store operations to efficiently use Tensor Cores from a CUDA C++ program. All functions and data types for WMMA are available in the nvcuda::wmma namespace. You can also directly access the Tensor Cores for A100 (that is, devices with compute capability compute_80 and higher) using the mma_sync PTX instruction.
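As a rough illustration of the WMMA API (not code from this post), a single warp can multiply 16x16 half-precision tiles and accumulate in FP32 along these lines. Launched with a single warp (for example, <<<1, 32>>>), this covers one 16x16 output tile.

#include <mma.h>
using namespace nvcuda;

// One warp computes C(16x16) = A(16x16) * B(16x16) with FP32 accumulation.
__global__ void wmma_tile_example(const half* A, const half* B, float* C)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);
    wmma::load_matrix_sync(a_frag, A, 16);   // leading dimension = 16
    wmma::load_matrix_sync(b_frag, B, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);
    wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
}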
CUDA 11 adds support for the new input data type formats: Bfloat16, TF32, and FP64. Bfloat16 is an alternate FP16 format but with reduced precision that matches the FP32 numerical range. Its usage results in lower bandwidth and storage requirements and therefore higher throughput. Bfloat16 is exposed as a new CUDA C++ __nv_bfloat16 data type in cuda_bf16.h, through WMMA and supported by the various CUDA math libraries.
TF32 is a special floating-point format meant to be used with Tensor Cores. TF32 includes an 8-bit exponent (same as FP32), 10-bit mantissa (same precision as FP16), and one sign-bit. It is the default math mode to allow you to get speedups over FP32 for DL training, without any changes to models. Finally, A100 brings double precision (FP64) support to MMA operations, which is also supported by the WMMA interfaces.
Programming NVIDIA Ampere architecture GPUs
With the goal of improving GPU programmability and leveraging the hardware compute capabilities of the NVIDIA A100 GPU, CUDA 11 includes new API operations for memory management, task graph acceleration, new instructions, and constructs for thread communication. Here’s a look at some of these new operations and how they can enable you to take advantage of A100 and the NVIDIA Ampere microarchitecture.
Memory management
One of the optimization strategies to maximize the performance of a GPU kernel is to minimize data transfer. If the memory is resident in global memory, the latency of reading data into the L2 cache or into shared memory might take several hundred processor cycles.
For example, on the GV100, shared memory provides roughly 17x the bandwidth of global memory and 3x the bandwidth of L2. Thus, some algorithms with a producer-consumer paradigm may observe performance benefits from persisting data in L2 between kernels, and therefore achieve higher bandwidth and performance.
On A100, CUDA 11 offers API operations to set aside a portion of the 40-MB L2 cache to persist data accesses to global memory. Persisting accesses have prioritized use of this set-aside portion of L2 cache, whereas normal or streaming accesses to global memory can only use this portion of L2 when it is unused by persisting accesses.
L2 persistence can be set for use in a CUDA stream or in a CUDA graph kernel node. Some considerations need to be made when setting aside the L2 cache area. For example, multiple CUDA kernels executing concurrently in different streams, while having a different access policy window, share the L2 set-aside cache. The following code example shows setting aside the L2 cache ratio for persistence.
cudaGetDeviceProperties( &prop, device_id);
// Set aside 50% of L2 cache for persisting accesses
size_t size = min( int(prop.l2CacheSize * 0.50) , prop.persistingL2CacheMaxSize );
cudaDeviceSetLimit( cudaLimitPersistingL2CacheSize, size);
// Stream level attributes data structure
cudaStreamAttrValue attr ;
attr.accessPolicyWindow.base_ptr = /* beginning of range in global memory */ ;
attr.accessPolicyWindow.num_bytes = /* number of bytes in range */ ;
// hitRatio causes the hardware to select the memory window to designate as persistent in the area set-aside in L2
attr.accessPolicyWindow.hitRatio = /* Hint for cache hit ratio */;
// Type of access property on cache hit
attr.accessPolicyWindow.hitProp = cudaAccessPropertyPersisting;
// Type of access property on cache miss
attr.accessPolicyWindow.missProp = cudaAccessPropertyStreaming;
cudaStreamSetAttribute(stream,cudaStreamAttributeAccessPolicyWindow,&attr);
The virtual memory management API operations have been extended to support compression on pinned GPU memory to reduce L2 to DRAM bandwidth. This can be important for deep learning training and inference use cases. When you create a shareable memory handle using cuMemCreate, you provide an allocation hint to the API operation.
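A minimal sketch of that allocation hint on the driver API is shown below; it assumes device is a valid device ordinal and size is already aligned to the allocation granularity.

// Request a compressible pinned device allocation via the CUDA driver API.
CUmemAllocationProp prop = {};
prop.type          = CU_MEM_ALLOCATION_TYPE_PINNED;
prop.location.type = CU_MEM_LOCATION_TYPE_DEVICE;
prop.location.id   = device;                                        // device ordinal
prop.allocFlags.compressionType = CU_MEM_ALLOCATION_COMP_GENERIC;   // the allocation hint

CUmemGenericAllocationHandle handle;
// size must be a multiple of the granularity reported by
// cuMemGetAllocationGranularity() for this property set.
cuMemCreate(&handle, size, &prop, 0);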
Efficient implementations of algorithms such as 3D stencils or convolutions involve a memory copy and computation control flow pattern where data is transferred from global memory into shared memory of thread blocks, followed by computations that use this shared memory. The global to shared memory copy is expanded into a read from global memory into a register, followed by a write to shared memory.
CUDA 11 lets you take advantage of a new asynchronous copy (async-copy) paradigm. It essentially overlaps copying data from global to shared memory with computation and avoids the use of intermediate registers or the L1 cache. Async-copy has benefits: control flow no longer traverses the memory pipeline twice and not using intermediate registers can reduce register pressure, increasing kernel occupancy. On A100, async-copy operations are hardware-accelerated.
The following code example shows a simple example of using async-copy. The resulting code, while more performant, can be further optimized by pipelining multiple batches of async-copy operations. This additional pipelining can result in the elimination of one of the synchronization points in the code.
Async-copy is offered as an experimental feature in CUDA 11 and is exposed using cooperative group collectives. The CUDA C++ Programming Guide includes more advanced examples of using async-copy with multi-stage pipelining and hardware-accelerated barrier operations in A100.
//Without async-copy
using namespace nvcuda::experimental;
__shared__ extern int smem[];
// algorithm loop iteration
while ( ... ) {
__syncthreads();
// load element into shared mem
for ( i = ... ) {
// uses intermediate register
// {int tmp=g[i]; smem[i]=tmp;}
smem[i] = gldata[i];
}
// wait for all threads' shared stores to complete
__syncthreads();
/* compute on smem[] */
}
//With async-copy
using namespace nvcuda::experimental;
__shared__ extern int smem[];
pipeline pipe;
// algorithm loop iteration
while ( ... ) {
__syncthreads();
// load element into shared mem
for ( i = ... ) {
// initiate async memory copy
memcpy_async(smem[i],
gldata[i],
pipe);
}
// wait for async-copy to complete
pipe.commit_and_wait();
__syncthreads();
/* compute on smem[] */
}
Task graph acceleration
CUDA Graphs, introduced in CUDA 10, represented a new model for submitting work using CUDA. A graph consists of a series of operations, such as memory copies and kernel launches, connected by dependencies and defined separately from its execution.
Graphs enable a define-once-run-repeatedly execution flow. They can reduce cumulative launch overheads and improve overall performance of the application. This is particularly true for deep learning applications that may launch several kernels with decreasing task size and runtimes, or which may have complex dependencies between tasks.
Starting with A100, the GPU provides task graph hardware acceleration to prefetch grid launch descriptors, instructions, and constants. This improves the kernel launch latency using CUDA graphs on A100 compared to prior GPUs such as V100.
The CUDA Graph API operations now have a lightweight mechanism to support in-place updates to instantiated graphs without requiring a graph rebuild. During repeated instantiations of a graph, it is common for node parameters, such as kernel parameters, to change while the graph topology remains constant. Graph API operations provide a mechanism for updates to the whole graph, where you provide a topologically identical cudaGraph_t object with updated node parameters, or explicit updates to individual nodes.
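As a hedged sketch of the two update paths (graphExec, updatedGraph, and kernelNode are placeholders for handles created elsewhere):
// Whole-graph update: supply a topologically identical graph with new node parameters
cudaGraphExecUpdateResult updateResult;
cudaGraphNode_t errorNode;
cudaGraphExecUpdate(graphExec, updatedGraph, &errorNode, &updateResult);
// Per-node update: change only the parameters of a single kernel node in place
cudaKernelNodeParams newParams = {};   // fill in func, gridDim, blockDim, kernelParams, ...
cudaGraphExecKernelNodeSetParams(graphExec, kernelNode, &newParams);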
Additionally, CUDA graphs now support cooperative kernel launch (cuLaunchCooperativeKernel), including stream capture for parity with CUDA streams.
Thread collectives
Here are some of the enhancements that CUDA 11 adds to Cooperative Groups, which were introduced in CUDA 9. Cooperative Groups is a collective programming model that aims to let you explicitly express the granularities at which threads can communicate. This enables new patterns of cooperative parallelism within CUDA.
In CUDA 11, cooperative group collectives expose new A100 hardware features and add several API enhancements. For more information about the complete list of changes, see the CUDA C++ Programming Guide.
A100 introduces a new reduce instruction that operates on the data provided by each thread. This is exposed as a new collective using cooperative groups, which provides a portable abstraction that can be used on older architectures as well. The reduce operation supports arithmetic (for example, add), and logical (for example, AND) operations. The following code example shows the reduce collective.
// Simple Reduction Sum
#include <cooperative_groups/reduce.h>
namespace cg = cooperative_groups;
...
// cta is the current thread block; tile is a 32-thread partition of it
cg::thread_block cta = cg::this_thread_block();
cg::thread_block_tile<32> tile = cg::tiled_partition<32>(cta);
const int threadId = cta.thread_rank();
int val = A[threadId];
// reduce across the tiled partition
reduceArr[threadId] = cg::reduce(tile, val, cg::plus<int>());
// synchronize the block
cg::sync(cta);
// accumulate sum using a leader and return sum
Cooperative groups provide collective operations (labeled_partition) that partition the parent group into one-dimensional subgroups within which the threads are coalesced. This is particularly helpful for control flow that attempts to keep track of active threads through basic blocks of conditional statements.
For example, multiple partitions can be formed out of a warp-level group (that is not constrained to powers of 2) using labeled_partition and used in an atomic add operation. The labeled_partition API operation evaluates a condition label and assigns threads that have the same value for the label into the same group.
The following code example shows custom thread partitions:
// Get current active threads (that is, coalesced_threads())
cg::coalesced_group active = cg::coalesced_threads();
// Match threads with the same label using match_any()
int bucket = active.match_any(value);
cg::coalesced_group subgroup = cg::labeled_partition(active, bucket);
// Choose a leader for each partition (for example, thread_rank = 0)
//
if (subgroup.thread_rank() == 0) {
threadId = atomicAdd(&addr[bucket], subgroup.size());
}
// Now use shfl to transfer the result back to all threads in partition
return (subgroup.shfl(threadId, 0));
CUDA C++ language and compiler improvements
CUDA 11 is also the first release to officially include CUB as part of the CUDA Toolkit. CUB is now one of the supported CUDA C++ core libraries.
One of the major features in nvcc for CUDA 11 is the support for link time optimization (LTO) for improving the performance of separate compilation. LTO, using the --dlink-time-opt or -dlto options, stores intermediate code during compilation and then performs higher-level optimizations at link time, such as inlining code across files.
nvcc in CUDA 11 adds support for ISO C++17 and for new host compilers across PGI, gcc, clang, Arm, and Microsoft Visual Studio. If you want to experiment with host compilers that are not yet supported, nvcc offers a new --allow-unsupported-compiler flag during the compile-build workflow. nvcc adds a number of other new features in this release as well.
CUDA libraries
The libraries in CUDA 11 continue to push the boundaries of performance and developer productivity by using the latest and greatest A100 hardware features behind familiar drop-in APIs in linear algebra, signal processing, basic mathematical operations, and image processing.
Across the linear algebra libraries, you will see Tensor Core acceleration for the full range of precisions available on A100, including FP16, Bfloat16, TF32, and FP64. This includes BLAS3 operations in cuBLAS, factorizations and dense linear solvers in cuSOLVER, and tensor contractions in cuTENSOR.
In addition to the enhanced range of precisions, restrictions on matrix dimensions and alignment for Tensor Core acceleration have been removed. For appropriate precisions, the acceleration is now automatic, requiring no user opt-in. The heuristics for cuBLAS automatically adapt to resources when running on the GPU instances with MIG on A100.
CUTLASS, the CUDA C++ template abstractions for high-performance GEMM, supports all the various precision modes offered by A100. With CUDA 11, CUTLASS now achieves more than 95% performance parity with cuBLAS. This allows you to write your own custom CUDA kernels for programming the Tensor Cores in NVIDIA GPUs.
cuFFT takes advantage of the larger shared memory size in A100, resulting in better performance for single-precision FFTs at larger batch sizes. Finally, on multi-GPU A100 systems, cuFFT scales and delivers 2X performance per GPU compared to V100.
nvJPEG is a GPU-accelerated library for JPEG decoding. Together with NVIDIA DALI, a data augmentation and image loading library, it can accelerate deep learning training for computer vision workloads such as image classification. The libraries accelerate the image decode and data augmentation phases of the deep learning workflow.
The A100 includes a 5-core hardware JPEG decode engine and nvJPEG takes advantage of the hardware backend for batched processing of JPEG images. JPEG acceleration by a dedicated hardware block alleviates bottlenecks on the CPU and allows better GPU utilization.
The hardware decoder is selected automatically by nvjpegDecode() for a given image, or explicitly by choosing the hardware backend with the nvjpegCreateEx() init function. nvJPEG provides acceleration of baseline JPEG decode and various color conversion formats, for example, YUV 420, 422, and 444.
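A minimal sketch of opting into the hardware backend explicitly (the surrounding decode setup is omitted and assumed):
#include <nvjpeg.h>
nvjpegHandle_t handle;
// Request the dedicated hardware JPEG decode engine; NULL allocators select the library defaults
nvjpegStatus_t status = nvjpegCreateEx(NVJPEG_BACKEND_HARDWARE, NULL, NULL, NVJPEG_FLAGS_DEFAULT, &handle);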
Figure 8 shows that this results in up to 18x faster image decode compared to CPU-only processing. If you use DALI, you can directly benefit from this hardware acceleration because nvJPEG is abstracted behind DALI's image loading interface.
There are many more features in the CUDA math libraries than can be covered in a single post.
Developer tools
CUDA 11 continues to add rich features to the existing portfolio of developer tools. This includes familiar plugins for Visual Studio, with the NVIDIA Nsight Integration for Visual Studio, and Eclipse, with Nsight Eclipse Plugins Edition. It also includes standalone tools, such as Nsight Compute for kernel profiling, and Nsight Systems for system-wide performance analysis. Nsight Compute and Nsight Systems are now supported on all three CPU architectures supported by CUDA: x86, POWER, and Arm64.
One of the key features of Nsight Compute for CUDA 11 is the ability to generate the Roofline model of the application. A Roofline model is a visually intuitive method for you to understand kernel characteristics by combining floating-point performance, arithmetic intensity, and memory bandwidth into a two-dimensional plot.
By looking at the Roofline model, you can quickly determine whether the kernel is compute-bound or memory-bound. You can also understand potential directions for further optimizations, for example, kernels that are near the roofline make optimal use of computational resources.
For more information, see Roofline Performance Model.
CUDA 11 includes the Compute Sanitizer, a next-generation, functional correctness checking tool that provides runtime checking for out-of-bounds memory accesses and race conditions. Compute Sanitizer is intended to be a replacement for the cuda-memcheck tool.
The following code example shows an example of Compute Sanitizer checking memory accesses.
//Out-of-bounds Array Access
__global__ void oobAccess(int* in, int* out)
{
int bid = blockIdx.x;
int tid = threadIdx.x;
if (bid == 4)
{
out[tid] = in[dMem[tid]];
}
}
int main()
{
...
// Array of 8 elements, where element 4 causes the OOB
std::array<int, Size> hMem = {0, 1, 2, 10, 4, 5, 6, 7};
cudaMemcpy(d_mem, hMem.data(), size, cudaMemcpyHostToDevice);
oobAccess<<<10, Size>>>(d_in, d_out);
cudaDeviceSynchronize();
...
$ /usr/local/cuda-11.0/Sanitizer/compute-sanitizer --destroy-on-device-error kernel --show-backtrace no basic
========= COMPUTE-SANITIZER
Device: Tesla T4
========= Invalid __global__ read of size 4 bytes
========= at 0x480 in /tmp/CUDA11.0/ComputeSanitizer/Tests/Memcheck/basic/basic.cu:40:oobAccess(int*,int*)
========= by thread (3,0,0) in block (4,0,0)
========= Address 0x7f551f200028 is out of bounds
The following code example shows a Compute Sanitizer example for race condition checks.
//Contrived Race Condition Example
__global__ void Basic()
{
__shared__ volatile int i;
i = threadIdx.x;
}
int main()
{
Basic<<<1,2>>>();
cudaDeviceSynchronize();
...
$ /usr/local/cuda-11.0/Sanitizer/compute-sanitizer --destroy-on-device-error kernel --show-backtrace no --tool racecheck --racecheck-report hazard raceBasic
========= COMPUTE-SANITIZER
========= ERROR: Potential WAW hazard detected at __shared__ 0x0 in block (0,0,0) :
========= Write Thread (0,0,0) at 0x100 in /tmp/CUDA11.0/ComputeSanitizer/Tests/Racecheck/raceBasic/raceBasic.cu:11:Basic(void)
========= Write Thread (1,0,0) at 0x100 in /tmp/CUDA11.0/ComputeSanitizer/Tests/Racecheck/raceBasic/raceBasic.cu:11:Basic(void)
========= Current Value : 0, Incoming Value : 1
=========
========= RACECHECK SUMMARY: 1 hazard displayed (1 error, 0 warnings)
Finally, even though CUDA 11 no longer supports running applications on macOS, we are making developer tools available for users on macOS hosts.
Summary
CUDA 11 provides a foundational development environment for building applications for the NVIDIA Ampere GPU architecture and powerful server platforms built on the NVIDIA A100 for AI, data analytics, and HPC workloads, both for on-premises (DGX A100) and cloud (HGX A100) deployments.
CUDA 11 is now available. As always, you can get CUDA 11 in several ways: download local installer packages, install using package managers, or grab containers from various registries. For enterprise deployments, CUDA 11 also includes driver packaging improvements for RHEL 8 using modularity streams to improve stability and reduce installation time.
To learn more about CUDA 11 and get answers to your questions, register for the upcoming live webinars.
Also, watch for the related GTC talks that take deep dives on the A100 features covered in this post; these recorded talks will be posted during the month of May.
Finally, register for the NVIDIA Developer Program to receive updates on CUDA 11 and future releases of CUDA.
How to Overlap Data Transfers in CUDA C/C++
In our last CUDA C/C++ post we discussed how to transfer data efficiently between the host and device. In this post, we discuss how to overlap data transfers with computation on the host, computation on the device, and in some cases other data transfers between the host and device. Achieving overlap between data transfers and other operations requires the use of CUDA streams, so first let’s learn about streams.
CUDA Streams
A stream in CUDA is a sequence of operations that execute on the device in the order in which they are issued by the host code. While operations within a stream are guaranteed to execute in the prescribed order, operations in different streams can be interleaved and, when possible, they can even run concurrently.
The default stream
All device operations (kernels and data transfers) in CUDA run in a stream. When no stream is specified, the default stream (also called the “null stream”) is used. The default stream is different from other streams because it is a synchronizing stream with respect to operations on the device: no operation in the default stream will begin until all previously issued operations in any stream on the device have completed, and an operation in the default stream must complete before any other operation (in any stream on the device) will begin.
Please note that CUDA 7, released in 2015, introduced a new option to use a separate default stream per host thread, and to treat per-thread default streams as regular streams (i.e. they don’t synchronize with operations in other streams). Read more about this new behavior in the post GPU Pro Tip: CUDA 7 Streams Simplify Concurrency.
Let’s look at some simple code examples that use the default stream, and discuss how operations progress from the perspective of the host as well as the device.
cudaMemcpy(d_a, a, numBytes, cudaMemcpyHostToDevice);
increment<<<1,N>>>(d_a);
cudaMemcpy(a, d_a, numBytes, cudaMemcpyDeviceToHost);
In the code above, from the perspective of the device, all three operations are issued to the same (default) stream and will execute in the order that they were issued.
From the perspective of the host, the implicit data transfers are blocking or synchronous transfers, while the kernel launch is asynchronous. Since the host-to-device data transfer on the first line is synchronous, the CPU thread will not reach the kernel call on the second line until the host-to-device transfer is complete. Once the kernel is issued, the CPU thread moves to the third line, but the transfer on that line cannot begin due to the device-side order of execution.
The asynchronous behavior of kernel launches from the host’s perspective makes overlapping device and host computation very simple. We can modify the code to add some independent CPU computation as follows.
cudaMemcpy(d_a, a, numBytes, cudaMemcpyHostToDevice);
increment<<<1,N>>>(d_a);
myCpuFunction(b);
cudaMemcpy(a, d_a, numBytes, cudaMemcpyDeviceToHost);
In the above code, as soon as the increment() kernel is launched on the device the CPU thread executes myCpuFunction(), overlapping its execution on the CPU with the kernel execution on the GPU. Whether the host function or device kernel completes first doesn’t affect the subsequent device-to-host transfer, which will begin only after the kernel completes. From the perspective of the device, nothing has changed from the previous example; the device is completely unaware of myCpuFunction().
Non-default streams
Non-default streams in CUDA C/C++ are declared, created, and destroyed in host code as follows.
cudaStream_t stream1;
cudaError_t result;
result = cudaStreamCreate(&stream1);
result = cudaStreamDestroy(stream1);
To issue a data transfer to a non-default stream we use the cudaMemcpyAsync() function, which is similar to the cudaMemcpy() function discussed in the previous post, but takes a stream identifier as a fifth argument.
result = cudaMemcpyAsync(d_a, a, N, cudaMemcpyHostToDevice, stream1);
cudaMemcpyAsync() is non-blocking on the host, so control returns to the host thread immediately after the transfer is issued. There are cudaMemcpy2DAsync() and cudaMemcpy3DAsync() variants of this routine which can transfer 2D and 3D array sections asynchronously in the specified streams.
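For example, a 2D section can be issued to a stream like this (a sketch; the pitches and extents are placeholder values):
// Copy a widthInBytes x height sub-array without blocking the host thread
cudaMemcpy2DAsync(d_a, d_pitch,        // destination pointer and row pitch in bytes
                  a, h_pitch,          // source pointer and row pitch in bytes
                  widthInBytes, height,
                  cudaMemcpyHostToDevice, stream1);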
To issue a kernel to a non-default stream we specify the stream identifier as a fourth execution configuration parameter (the third execution configuration parameter allocates shared device memory, which we’ll talk about later; use 0 for now).
increment<<<1,N,0,stream1>>>(d_a);
Synchronization with streams
Since all operations in non-default streams are non-blocking with respect to the host code, you will run across situations where you need to synchronize the host code with operations in a stream. There are several ways to do this. The “heavy hammer” way is to use cudaDeviceSynchronize(), which blocks the host code until all previously issued operations on the device have completed. In most cases this is overkill, and can really hurt performance due to stalling the entire device and host thread.
The CUDA stream API has multiple less severe methods of synchronizing the host with a stream. The function cudaStreamSynchronize(stream) can be used to block the host thread until all previously issued operations in the specified stream have completed. The function cudaStreamQuery(stream) tests whether all operations issued to the specified stream have completed, without blocking host execution. The functions cudaEventSynchronize(event) and cudaEventQuery(event) act similar to their stream counterparts, except that their result is based on whether a specified event has been recorded rather than whether a specified stream is idle. You can also synchronize operations within a single stream on a specific event using cudaStreamWaitEvent(event) (even if the event is recorded in a different stream, or on a different device!).
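As a small sketch of the event-based variants (reusing the names from the earlier examples, and assuming a second stream, stream2, created the same way as stream1):
cudaEvent_t event;
cudaEventCreate(&event);
// Record an event in stream1 once the copy has been issued
cudaMemcpyAsync(d_a, a, numBytes, cudaMemcpyHostToDevice, stream1);
cudaEventRecord(event, stream1);
// Make stream2 wait on that event before running the kernel, without blocking the host
cudaStreamWaitEvent(stream2, event, 0);
increment<<<1, N, 0, stream2>>>(d_a);
// Block the host only until the recorded event has completed
cudaEventSynchronize(event);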
Overlapping Kernel Execution and Data Transfers
Earlier we demonstrated how to overlap kernel execution in the default stream with execution of code on the host. But our main goal in this post is to show you how to overlap kernel execution with data transfers. There are several requirements for this to happen: the device must be capable of concurrent copy and execution (which you can check via the asyncEngineCount field of the device properties), the kernel execution and the data transfer to be overlapped must both occur in different, non-default streams, and the host memory involved in the transfer must be pinned memory.
So let’s modify our simple host code from above to use multiple streams and see if we can achieve any overlap. The full code for this example is available on Github. In the modified code, we break up the array of size N into chunks of streamSize elements. Since the kernel operates independently on all elements, each of the chunks can be processed independently. The number of (non-default) streams used is nStreams=N/streamSize. There are multiple ways to implement the domain decomposition of the data and processing; one is to loop over all the operations for each chunk of the array as in this example code.
for (int i = 0; i < nStreams; ++i) {
int offset = i * streamSize;
cudaMemcpyAsync(&d_a[offset], &a[offset], streamBytes, cudaMemcpyHostToDevice, stream[i]);
kernel<<<streamSize/blockSize, blockSize, 0, stream[i]>>>(d_a, offset);
cudaMemcpyAsync(&a[offset], &d_a[offset], streamBytes, cudaMemcpyDeviceToHost, stream[i]);
}
Another approach is to batch similar operations together, issuing all the host-to-device transfers first, followed by all kernel launches, and then all device-to-host transfers, as in the following code.
for (int i = 0; i < nStreams; ++i) {
int offset = i * streamSize;
cudaMemcpyAsync(&d_a[offset], &a[offset],
streamBytes, cudaMemcpyHostToDevice, stream[i]);
}
for (int i = 0; i < nStreams; ++i) {
int offset = i * streamSize;
kernel<<<streamSize/blockSize, blockSize, 0, stream[i]>>>(d_a, offset);
}
for (int i = 0; i < nStreams; ++i) {
int offset = i * streamSize;
cudaMemcpyAsync(&a[offset], &d_a[offset],
streamBytes, cudaMemcpyDeviceToHost, stream[i]);
}
Both asynchronous methods shown above yield correct results, and in both cases dependent operations are issued to the same stream in the order in which they need to be executed. But the two approaches perform very differently depending on the specific generation of GPU used. On a Tesla C1060 (compute capability 1.3) running the test code (from Github) gives the following results.
Device : Tesla C1060
Time for sequential transfer and execute (ms ): 12.92381
max error : 2.3841858E -07
Time for asynchronous V1 transfer and execute (ms ): 13.63690
max error : 2.3841858E -07
Time for asynchronous V2 transfer and execute (ms ): 8.84588
max error : 2.3841858E -07
On a Tesla C2050 (compute capability 2.0) we get the following results.
Device : Tesla C2050
Time for sequential transfer and execute (ms ): 9.984512
max error : 1.1920929e -07
Time for asynchronous V1 transfer and execute (ms ): 5.735584
max error : 1.1920929e -07
Time for asynchronous V2 transfer and execute (ms ): 7.597984
max error : 1.1920929e -07
Here the first time reported is the sequential transfer and kernel execution using blocking transfers, which we use as a baseline for asynchronous speedup comparison. Why do the two asynchronous strategies perform differently on different architectures? To decipher these results we need to understand a bit more about how CUDA devices schedule and execute tasks. CUDA devices contain engines for various tasks, which queue up operations as they are issued. Dependencies between tasks in different engines are maintained, but within any engine all external dependencies are lost; tasks in each engine’s queue are executed in the order they are issued. The C1060 has a single copy engine and a single kernel engine. A time line for the execution of our example code on a C1060 is shown in the following diagram.
In the schematic we assume that the time required for the host-to-device transfer, kernel execution, and device-to-host transfer are approximately the same (the kernel code was chosen in order to achieve this). As expected for the sequential kernel, there is no overlap in any of the operations. For the first asynchronous version of our code the order of execution in the copy engine is: H2D stream(1), D2H stream(1), H2D stream(2), D2H stream(2), and so forth. This is why we do not see any speed-up when using the first asynchronous version on the C1060: tasks were issued to the copy engine in an order that precludes any overlap of kernel execution and data transfer. For version two, however, where all the host-to-device transfers are issued before any of the device-to-host transfers, overlap is possible as indicated by the lower execution time. From our schematic, we expect the execution of asynchronous version 2 to be 8/12 of the sequential version, or 8.7 ms which is confirmed in the timing results given previously.
On the C2050, two features interact to cause a behavior difference from the C1060. The C2050 has two copy engines, one for host-to-device transfers and another for device-to-host transfers, as well as a single kernel engine. The following diagram illustrates execution of our example on the C2050.
Having two copy engines explains why asynchronous version 1 achieves good speed-up on the C2050: the device-to-host transfer of data in stream[i] does not block the host-to-device transfer of data in stream[i+1] as it did on the C1060 because there is a separate engine for each copy direction on the C2050. The schematic predicts the execution time to be cut in half relative to the sequential version, and this is roughly what our timing results showed.
But what about the performance degradation observed in asynchronous version 2 on the C2050? This is related to the C2050’s ability to concurrently run multiple kernels. When multiple kernels are issued back-to-back in different (non-default) streams, the scheduler tries to enable concurrent execution of these kernels and as a result delays a signal that normally occurs after each kernel completion (which is responsible for kicking off the device-to-host transfer) until all kernels complete. So, while there is overlap between host-to-device transfers and kernel execution in the second version of our asynchronous code, there is no overlap between kernel execution and device-to-host transfers. The schematic predicts an overall time for the asynchronous version 2 to be 9/12 of the time for the sequential version, or 7.5 ms, and this is confirmed by our timing results.
A more detailed description of the example used in this post is available in CUDA Fortran Asynchronous Data Transfers. The good news is that for devices with compute capability 3.5 (the K20 series), the Hyper-Q feature eliminates the need to tailor the launch order, so either approach above will work. We will discuss using Kepler features in a future post, but for now, here are the results of running the sample code on a Tesla K20c GPU. As you can see, both asynchronous methods achieve the same speedup over the synchronous code.
Device : Tesla K20c
Time for sequential transfer and execute (ms): 7.101760
max error : 1.1920929e -07
Time for asynchronous V1 transfer and execute (ms): 3.974144
max error : 1.1920929e -07
Time for asynchronous V2 transfer and execute (ms): 3.967616
max error : 1.1920929e -07
Summary
This post and the previous one discussed how to optimize data transfers between the host and device. The previous post focused on how to minimize the time for executing such transfers, and this post introduced streams and how to use them to mask data transfer time by concurrently executing copies and kernels.
In a post dealing with streams I should mention that while using the default stream is convenient for developing code—synchronous code is simpler—eventually your code should use non-default streams or the CUDA 7 support for per-thread default streams (read GPU Pro Tip: CUDA 7 Streams Simplify Concurrency). This is especially important when writing libraries. If code in a library uses the default stream, there is no chance for the end user to overlap data transfers with library kernel execution.
Now you know how to move data efficiently between the host and device, so we’ll look at how to access data efficiently from within kernels in the next post.
Boosting Application Performance with GPU Memory Prefetching
NVIDIA GPUs have enormous compute power and typically must be fed data at high speed to deploy that power. That is possible, in principle, because GPUs also have high memory bandwidth, but sometimes they need your help to saturate that bandwidth.
In this post, we examine one specific method to accomplish that: prefetching. We explain the circumstances under which prefetching can be expected to work well, and how to find out whether these circumstances apply to your workload.
Context
NVIDIA GPUs derive their power from massive parallelism. Many warps of 32 threads can be placed on a streaming multiprocessor (SM), awaiting their turn to execute. When one warp is stalled for whatever reason, the warp scheduler switches to another with zero overhead, making sure the SM always has work to do.
On the high-performance NVIDIA Ampere Architecture A100 GPU, up to 64 active warps can share an SM, each with its own resources. On top of that, A100 has 108 SMs that can all execute warp instructions simultaneously.
Most instructions must operate on data, and that data almost always originates in the device memory (DRAM) attached to the GPU. One of the main reasons why even the abundance of warps on an SM can run out of work is because they are waiting for data to arrive from memory.
If this happens, and the bandwidth to memory is not fully utilized, it may be possible to reorganize the program to improve memory access and reduce warp stalls, which in turn makes the program complete faster. This is called latency hiding.
Prefetching
A technology commonly supported in hardware on CPUs is called prefetching. The CPU sees a stream of requests from memory arriving, figures out the pattern, and starts fetching data before it is actually needed. While that data travels to the execution units of the CPU, other instructions can be executed, effectively hiding the travel costs (memory latency).
Prefetching is a useful technique but expensive in terms of silicon area on the chip. These costs would be even higher, relatively speaking, on a GPU, which has many more execution units than the CPU. Instead, the GPU uses excess warps to hide memory latency. When that is not enough, you may employ prefetching in software. It follows the same principle as hardware-supported prefetching but requires explicit instructions to fetch the data.
To determine whether this technique can help your program run faster, use a GPU profiling tool such as NVIDIA Nsight Compute to confirm that the kernel's warps spend much of their time stalled on loads from memory issued inside a loop over array data, while the bandwidth to memory is not yet saturated.
Unrolling
Consider the simplest possible optimization of such a loop, called unrolling. If the loop is short enough, you can tell the compiler to unroll it completely and the iterations are expanded explicitly. Because the iterations are independent, the compiler can issue all requests for data (“loads”) upfront, provided that it assigns distinct registers to each load.
These requests can be overlapped with each other, so that the whole set of loads experiences only a single memory latency, not the sum of all individual latencies. Even better, part of the single latency is hidden by the succession of load instructions itself. This is a near-optimal situation, but it may require a lot of registers to receive the results of the loads.
If the loop is too long, it could be unrolled partially. In that case, batches of iterations are expanded, and then you follow the same general strategy as before. Work on your part is minimal (but you may not be that lucky).
If the loop contains many other instructions whose operands need to be stored in registers, even just partial unrolling may not be an option. In that case, and after you have confirmed that the earlier conditions are satisfied, you must make some decisions based on further information.
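As a sketch of partial unrolling, using the loop that appears in full in the next example (the unroll factor of 4 is an arbitrary choice):
// Ask the compiler to expand batches of 4 iterations so that their loads
// can be issued back to back and overlap their memory latencies.
#pragma unroll 4
for (i = threadIdx.x; i < imax; i += BLOCKDIMX) {
    double locvar = arr[i];
    // <lots of instructions using locvar, for example, transcendentals>
}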
Prefetching means bringing data closer to the SMs’ execution units. Registers are closest of all. If enough are available, which you can find out using the Nsight Compute occupancy view, you can prefetch directly into registers.
Consider the following loop, where array arr is stored in global memory (DRAM). It implicitly assumes that just a single, one-dimensional thread block is being used, which is not the case for the motivating application from which it was derived. However, it reduces code clutter and does not change the argument.
In all code examples in this post, uppercase variables are compile-time constants. BLOCKDIMX assumes the value of the predefined variable blockDim.x. For some purposes, it must be a constant known at compile time whereas for other purposes, it is useful for avoiding computations at run time.
for (i=threadIdx.x; i<imax; i+= BLOCKDIMX) {
double locvar = arr[i];
<lots of instructions using locvar, for example, transcendentals>
}
Imagine that you have eight registers to spare for prefetching. This is a tuning parameter. The following code fetches four double-precision values occupying eight 4-byte registers at the start of each fourth iteration and uses them one by one, until the batch is depleted, at which time you fetch a new batch.
To keep track of the batches, introduce a counter (ctr) that increments with each successive iteration executed by a thread. For convenience, assume that the number of iterations per thread is divisible by 4.
double v0, v1, v2, v3;
for (i=threadIdx.x, ctr=0; i<imax; i+= BLOCKDIMX, ctr++) {
ctr_mod = ctr%4;
if (ctr_mod==0) { // only fill the buffer each 4th iteration
v0=arr[i+0* BLOCKDIMX];
v1=arr[i+1* BLOCKDIMX];
v2=arr[i+2* BLOCKDIMX];
v3=arr[i+3* BLOCKDIMX];
}
switch (ctr_mod) { // pull one value out of the prefetched batch
case 0: locvar = v0; break;
case 1: locvar = v1; break;
case 2: locvar = v2; break;
case 3: locvar = v3; break;
}
<lots of instructions using locvar, for example, transcendentals>
}
Typically, the more values can be prefetched, the more effective the method is. While the preceding example is not complex, it is a little cumbersome. If the number of prefetched values (PDIST, or prefetch distance) changes, you have to add or delete lines of code.
It is easier to store the prefetched values in shared memory, because you can use array notation and vary the prefetch distance without any effort. However, shared memory is not as close to the execution units as registers. It requires an extra instruction to move the data from there into a register when it is ready for use. For convenience, we introduce macro vsmem to simplify indexing the array in shared memory:
#define vsmem(index) v[index+PDIST*threadIdx.x]
__shared__ double v[PDIST* BLOCKDIMX];
for (i=threadIdx.x, ctr=0; i<imax; i+= BLOCKDIMX, ctr++) {
ctr_mod = ctr%PDIST;
if (ctr_mod==0) {
for (k=0; k<PDIST; ++k) vsmem(k) = arr[i+k* BLOCKDIMX];
}
locvar = vsmem(ctr_mod);
<more instructions using locvar, for example, transcendentals>
}
Instead of prefetching in batches, you can also do a “rolling” prefetch. In that case, you fill the prefetch buffer before entering the main loop and subsequently prefetch exactly one value from memory during each loop iteration, to be used PDIST iterations later. The next example implements rolling prefetching, using array notation and shared memory.
__shared__ double v[PDIST* BLOCKDIMX];
for (k=0; k<PDIST; ++k) vsmem(k) = arr[threadIdx.x+k* BLOCKDIMX];
for (i=threadIdx.x, ctr=0; i<imax; i+= BLOCKDIMX, ctr++) {
ctr_mod= ctr%PDIST;
locvar = vsmem(ctr_mod);
if ( i<imax-PDIST* BLOCKDIMX) vsmem(ctr_mod) = arr[i+PDIST* BLOCKDIMX];
<more instructions using locvar, for example, transcendentals>
}
Contrary to the batched method, the rolling prefetch does not suffer any more memory latencies during the execution of the main loop, provided the prefetch distance is sufficiently large. It also uses the same amount of shared memory or register resources, so it would appear to be preferred. However, a subtle issue may limit its effectiveness.
A synchronization within the loop, for example __syncthreads(), constitutes a memory fence and forces the loading of arr to complete at that point within the same iteration, not PDIST iterations later. The fix is to use asynchronous loads into shared memory, the simplest version of which is explained in the Pipeline interface section of the CUDA C++ Programming Guide. These asynchronous loads do not need to complete at a synchronization point, but only when they are explicitly waited on.
Here’s the corresponding code:
#include <cuda_pipeline_primitives.h>
__shared__ double v[PDIST* BLOCKDIMX];
for (k=0; k<PDIST; ++k) { // fill the prefetch buffer asynchronously
__pipeline_memcpy_async(&vsmem(k), &arr[threadIdx.x+k* BLOCKDIMX], 8);
__pipeline_commit();
}
for (i=threadIdx.x, ctr=0; i<imax; i+= BLOCKDIMX, ctr++) {
__pipeline_wait_prior(PDIST-1); //wait on needed prefetch value
ctr_mod= ctr%PDIST;
locvar = vsmem(ctr_mod);
if ( i<imax-PDIST* BLOCKDIMX) { // prefetch one new value
__pipeline_memcpy_async(&vsmem(ctr_mod), &arr[i+PDIST* BLOCKDIMX], 8);
__pipeline_commit();
}
<more instructions using locvar, for example, transcendentals>
}
As each __pipeline_wait_prior instruction must be matched by a __pipeline_commit instruction, we put the latter inside the loop that prefills the prefetch buffer, before entering the main computational loop, to keep bookkeeping of matching instruction pairs simple.
Performance results
Figure 1 shows, for various prefetch distances, the performance improvement of a kernel taken from a financial application under the five algorithmic variations described earlier.
Clearly, the rolling prefetching into shared memory with asynchronous memory copies gives good benefit, but it is uneven as the prefetch buffer size grows.
A closer inspection of the results, using Nsight Compute, shows that bank conflicts occur in shared memory, which cause a warp worth of asynchronous loads to be split into more successive memory requests than strictly necessary. The classical optimization approach of padding the array size in shared memory to avoid bad strides works in this case. The value of PADDING is chosen such that the sum of PDIST and PADDING equals a power of two plus 1. Apply it to all variations that use shared memory:
#define vsmem(index) v[index+(PDIST+PADDING)*threadIdx.x]
This leads to the improved shared memory results shown in Figure 2. A prefetch distance of just 6, combined with asynchronous memory copies in a rolling fashion, is sufficient to obtain optimal performance at almost 60% speedup over the original version of the code. We could actually have arrived at this performance improvement without resorting to padding by changing the indexing scheme of the array in shared memory, which is left as an exercise for the reader.
A variation of prefetching not yet discussed moves data from global memory to the L2 cache, which may be useful if space in shared memory is too small to hold all data eligible for prefetching. This type of prefetching is not directly accessible in CUDA and requires programming at the lower PTX level.
Summary
In this post, we showed you examples of localized changes to source code that may speed up memory accesses. These do not change the amount of data being moved from memory to the SMs, only their timing. You may be able to optimize more by rearranging memory accesses such that data is reused many times after it arrives on the SM.
Six Ways to SAXPY
For even more ways to SAXPY using the latest NVIDIA HPC SDK with standard language parallelism, see N Ways to SAXPY: Demonstrating the Breadth of GPU Programming Options.
This post is a GPU program chrestomathy. What’s a Chrestomathy, you ask?
In computer programming, a program chrestomathy is a collection of similar programs written in various programming languages, for the purpose of demonstrating differences in syntax, semantics and idioms for each language. [Wikipedia]
There are several good examples of program chrestomathies on the web, including Rosetta Code and NBabel, which demonstrates gravitational N-body simulation in multiple programming languages. In this post I demonstrate six ways to implement a simple SAXPY computation on the CUDA platform. Why is this interesting? Because it demonstrates the breadth of options you have today for programming NVIDIA GPUs, and it covers the three main approaches to GPU computing: GPU-accelerated libraries, GPU compiler directives, and GPU programming languages.
SAXPY stands for “Single-Precision A·X Plus Y”. It is a function in the standard Basic Linear Algebra Subprograms (BLAS) library. SAXPY is a combination of scalar multiplication and vector addition, and it’s very simple: it takes as input two vectors of 32-bit floats X and Y with N elements each, and a scalar value A. It multiplies each element X[i] by A and adds the result to Y[i]. A simple C implementation looks like this.
void saxpy(int n, float a, float * restrict x, float * restrict y)
{
for (int i = 0; i < n; ++i)
y[i] = a*x[i] + y[i];
}
// Perform SAXPY on 1M elements
saxpy(1<<20, 2.0, x, y);
Given this basic example code, I can now show you six ways to SAXPY on GPUs. Note that I chose SAXPY because it is a really short and simple code, but it shows enough of the syntax of each programming approach to compare them. Because it’s so simple, and does very little computation, SAXPY is not really a great computation to use for comparing the performance of the different approaches, but that’s not my intent. My goal is to demonstrate multiple ways to program on the CUDA platform today, not to suggest that any one is better than any other.
1. CUBLAS SAXPY
int N = 1<<20;
cublasInit();
cublasSetVector(N, sizeof(x[0]), x, 1, d_x, 1);
cublasSetVector(N, sizeof(y[0]), y, 1, d_y, 1);
// Perform SAXPY on 1M elements
cublasSaxpy(N, 2.0, d_x, 1, d_y, 1);
cublasGetVector(N, sizeof(y[0]), d_y, 1, y, 1);
cublasShutdown();
As I mentioned, SAXPY is part of the BLAS library, and therefore a GPU SAXPY is available as part of the CUBLAS library. To use it, you just need to initialize CUBLAS (device) vectors for x and y and call cublasSaxpy(). Like all CUBLAS functions, cublasSaxpy conforms to the BLAS standard, and so it is a drop-in replacement for BLAS libraries for CPUs.
Libraries are one of the most efficient ways to program GPUs, because they encapsulate the complexity of writing optimized code for common algorithms into high-level, standard interfaces. There is a wide variety of high-performance libraries available for NVIDIA GPUs.
2. OpenACC SAXPY
If you have been following Parallel Forall, you are already familiar with OpenACC. OpenACC is an open standard that defines compiler directives for parallel computing on GPUs (see my previous posts on the subject). We can add a single line to the above example to produce an OpenACC SAXPY in C.
void saxpy(int n, float a, float * restrict x, float * restrict y)
{
#pragma acc kernels
for (int i = 0; i < n; ++i)
y[i] = a*x[i] + y[i];
}
...
// Perform SAXPY on 1M elements
saxpy(1<<20, 2.0, x, y);
A Fortran OpenACC SAXPY is very similar.
subroutine saxpy(n, a, x, y)
real :: x(:), y(:), a
integer :: n, i
!$acc kernels
do i=1,n
y(i) = a*x(i)+y(i)
enddo
!$acc end kernels
end subroutine saxpy
...
! Perform SAXPY on 1M elements
call saxpy(2**20, 2.0, x_d, y_d)
3. CUDA C++ SAXPY
__global__
void saxpy(int n, float a, float * restrict x, float * restrict y)
{
int i = blockIdx.x*blockDim.x + threadIdx.x;
if (i < n) y[i] = a*x[i] + y[i];
}
...
int N = 1<<20;
cudaMemcpy(d_x, x, N*sizeof(float), cudaMemcpyHostToDevice);
cudaMemcpy(d_y, y, N*sizeof(float), cudaMemcpyHostToDevice);
// Perform SAXPY on 1M elements
saxpy<<<4096,256>>>(N, 2.0, d_x, d_y);
cudaMemcpy(y, d_y, N*sizeof(float), cudaMemcpyDeviceToHost);
CUDA C++ was the first general-purpose programming language on the CUDA platform. It provides a few simple extensions to the C language to express parallel computations. GPU functions, called kernels, are declared with the __global__ specifier to indicate that they are callable from the host and run on the GPU. Kernels are run by many threads in parallel. Threads can compute their global index within an array of thread blocks by accessing the built-in variables blockIdx, blockDim, and threadIdx, which are assigned by the hardware for each thread and block. To launch a kernel, we specify the number of thread blocks and the number of threads in each thread block as arguments to the kernel function call inside <<<>>>, which we call the execution configuration. We copy data to and from GPU device memory using cudaMemcpy().
That is CUDA C in a nutshell. As you can see, the SAXPY kernel contains the same computation as the sequential C version, but instead of looping over the N elements, we launch a single thread for each of the N elements, and each thread computes its array index using blockIdx.x*blockDim.x + threadIdx.x.
4. Thrust SAXPY
using namespace thrust::placeholders;
int N = 1<<20;
thrust::host_vector<float> x(N), y(N);
...
thrust::device_vector<float> d_x = x; // alloc and copy host to device
thrust::device_vector<float> d_y = y;
// Perform SAXPY on 1M elements
thrust::transform(d_x.begin(), d_x.end(), d_y.begin(), d_y.begin(), 2.0f * _1 + _2);
y = d_y; // copy results to the host vector
I wrote about Thrust in some detail in a recent post, so I will just point out a few interesting features of Thrust demonstrated here. This code shows that copying a host vector to a device vector is as simple as assignment. It performs SAXPY in a single line using thrust::transform(), which acts like a parallel foreach (∥∀!), applying a multiply-add (MAD) operation to each element of the input vectors x and y. The MAD operation uses Thrust’s “placeholder” syntax (inspired by Boost placeholders), which uses the placeholder templates _1 and _2 to access the elements of the x and y input vectors, respectively.
5. CUDA Fortran SAXPY
module mymodule
contains
attributes(global) subroutine saxpy(n, a, x, y)
real :: x(:), y(:), a
integer :: n, i
attributes(value) :: a, n
i = threadIdx%x+(blockIdx%x-1)*blockDim%x
if (i<=n) y(i) = a*x(i)+y(i)
end subroutine saxpy
end module mymodule
program main
use cudafor; use mymodule
real, device :: x_d(2**20), y_d(2**20)
x_d = 1.0; y_d = 2.0
! Perform SAXPY on 1M elements
call saxpy<<<4096, 256>>>(2**20, 2.0, x_d, y_d)
end program main
PGI CUDA Fortran provides parallel extensions to Fortran that are very similar to the parallel extensions to C provided by CUDA C. Here you can see how the saxpy subroutine computes an index i for each thread using the built-in threadIdx, blockIdx, and blockDim variables, and is called using an execution configuration just like in the C version.
6. Python (Copperhead) SAXPY
from copperhead import *
import numpy as np
@cu
def saxpy(a, x, y):
return [a * xi + yi for xi, yi in zip(x, y)]
x = np.arange(2**20, dtype=np.float32)
y = np.arange(2**20, dtype=np.float32)
with places.gpu0:
gpu_result = saxpy(2.0, x, y)
with places.openmp:
cpu_result = saxpy(2.0, x, y)
Python is an extremely popular high-productivity programming language. Combined with the popular modules NumPy and SciPy, Python is widely used in the scientific computing community. There have been a couple of projects to add GPU support to Python. The SAXPY example above is right off the home page of Copperhead, which is an open-source project from NVIDIA Research. Copperhead defines a small functional, data parallel subset of Python which it dynamically compiles and executes on parallel platforms, such as NVIDIA GPUs and multicore CPUs through OpenMP and Threading Building Blocks (TBB).
Enabling Endless Ways to SAXPY
So now, not only can you use “chrestomathy” in a sentence, but you know six different ways you can program GPUs on the CUDA platform. People often ask me “what’s the best way to program GPUs?”, and I hope that this SAXPY program chrestomathy makes it clear that there is no correct answer to that question. It’s just like asking “what’s the best way to program a computer?” There are hundreds of CPU programming languages, and there’s no reason that can’t and won’t be true for GPUs.
NVIDIA’s CUDA Compiler (NVCC) is based on the widely used LLVM open source compiler infrastructure (and recently, NVIDIA announced that it has contributed components to the popular open-source LLVM compiler infrastructure project). LLVM is used in a wide variety of compilers for general-purpose languages, as well as domain-specific languages (DSLs), which are languages targeted at solving problems in a specific domain.
Many GPU users want to target the CUDA platform with a general-purpose language they are already using. For this purpose, NVIDIA has released the CUDA Compiler SDK, which can be used to create or extend programming languages with support for GPU acceleration. The CUDA Compiler SDK is now included with the CUDA Toolkit. We hope that providing developers with access to an open compiler tool chain will help enable endless ways to program GPUs, so try out the CUDA Compiler SDK today!
Unified Memory for CUDA Beginners
My previous introductory post, “An Even Easier Introduction to CUDA C++“, introduced the basics of CUDA programming by showing how to write a simple program that allocated two arrays of numbers in memory accessible to the GPU and then added them together on the GPU. To do this, I introduced you to Unified Memory, which makes it very easy to allocate and access data that can be used by code running on any processor in the system, CPU or GPU.
I finished that post with a few simple “exercises”, one of which encouraged you to run on a recent Pascal-based GPU to see what happens. (I was hoping that readers would try it and comment on the results, and some of you did!). I suggested this for two reasons. First, because Pascal GPUs such as the NVIDIA Titan X and the NVIDIA Tesla P100 are the first GPUs to include the Page Migration Engine, which is hardware support for Unified Memory page faulting and migration. The second reason is that it provides a great opportunity to learn more about Unified Memory.
Fast GPU, Fast Memory… Right?
Right! But let’s see. First, I’ll reprint the results of running on two NVIDIA Kepler GPUs (one in my laptop and one in a server).
Now let’s try running on a really fast Tesla P100 accelerator, based on the Pascal GP100 GPU.
> nvprof ./add_grid
...
Time(%) Time Calls Avg Min Max Name
100.00% 2.1192ms 1 2.1192ms 2.1192ms 2.1192ms add(int, float*, float*)
Hmmmm, that’s under 6 GB/s: slower than running on my laptop’s Kepler-based GeForce GPU. Don’t be discouraged, though; we can fix this. To understand how, I’ll have to tell you a bit more about Unified Memory.
For reference in what follows, here’s the complete code to add_grid.cu from last time.
#include <iostream>
#include <math.h>
// CUDA kernel to add elements of two arrays
__global__
void add(int n, float *x, float *y)
{
int index = blockIdx.x * blockDim.x + threadIdx.x;
int stride = blockDim.x * gridDim.x;
for (int i = index; i < n; i += stride)
y[i] = x[i] + y[i];
}
int main(void)
{
int N = 1<<20;
float *x, *y;
// Allocate Unified Memory -- accessible from CPU or GPU
cudaMallocManaged(&x, N*sizeof(float));
cudaMallocManaged(&y, N*sizeof(float));
// initialize x and y arrays on the host
for (int i = 0; i < N; i++) {
x[i] = 1.0f;
y[i] = 2.0f;
}
// Launch kernel on 1M elements on the GPU
int blockSize = 256;
int numBlocks = (N + blockSize - 1) / blockSize;
add<<<numBlocks, blockSize>>>(N, x, y);
// Wait for GPU to finish before accessing on host
cudaDeviceSynchronize();
// Check for errors (all values should be 3.0f)
float maxError = 0.0f;
for (int i = 0; i < N; i++)
maxError = fmax(maxError, fabs(y[i]-3.0f));
std::cout << "Max error: " << maxError << std::endl;
// Free memory
cudaFree(x);
cudaFree(y);
return 0;
}
In the reprinted listing, the memory is allocated by the two cudaMallocManaged() calls and initialized by the loop that follows them in main().
What is Unified Memory?
Unified Memory is a single memory address space accessible from any processor in a system (see Figure 1). This hardware/software technology allows applications to allocate data that can be read or written from code running on either CPUs or GPUs. Allocating Unified Memory is as simple as replacing calls to malloc() or new with calls to cudaMallocManaged(), an allocation function that returns a pointer accessible from any processor (ptr in the following).
cudaError_t cudaMallocManaged(void** ptr, size_t size);
When code running on a CPU or GPU accesses data allocated this way (often called CUDA managed data), the CUDA system software and/or the hardware takes care of migrating memory pages to the memory of the accessing processor. The important point here is that the Pascal GPU architecture is the first with hardware support for virtual memory page faulting and migration, via its Page Migration Engine. Older GPUs based on the Kepler and Maxwell architectures also support a more limited form of Unified Memory.
What Happens on Kepler When I call cudaMallocManaged()?
On systems with pre-Pascal GPUs like the Tesla K80, calling cudaMallocManaged() allocates size bytes of managed memory on the GPU device that is active when the call is made. Internally, the driver also sets up page table entries for all pages covered by the allocation, so that the system knows that the pages are resident on that GPU.
So, in our example, running on a Tesla K80 GPU (Kepler architecture), x and y are both initially fully resident in GPU memory. Then, in the initialization loop in main(), the CPU steps through both arrays, initializing their elements to 1.0f and 2.0f, respectively. Since the pages are initially resident in device memory, a page fault occurs on the CPU for each array page to which it writes, and the GPU driver migrates the page from device memory to CPU memory. After the loop, all pages of the two arrays are resident in CPU memory.
After initializing the data on the CPU, the program launches the add() kernel to add the elements of x to the elements of y.
add<<<1, 256>>>(N, x, y);
On pre-Pascal GPUs, upon launching a kernel, the CUDA runtime must migrate all pages previously migrated to host memory or to another GPU back to the device memory of the device running the kernel. Since these older GPUs can’t page fault, all data must be resident on the GPU just in case the kernel accesses it (even if it won’t). This means there is potentially migration overhead on each kernel launch.
That’s what happens in my program when I run it on K80 or my Macbook Pro. Note, however, that the profiler shows the kernel run time separate from the migration time, since the migrations happen before the kernel runs.
==15638== Profiling application: ./add_grid
==15638== Profiling result:
Time(%) Time Calls Avg Min Max Name
100.00% 93.471us 1 93.471us 93.471us 93.471us add(int, float*, float*)
==15638== Unified Memory profiling result:
Device "Tesla K80 (0)"
Count Avg Size Min Size Max Size Total Size Total Time Name
6 1.3333MB 896.00KB 2.0000MB 8.000000MB 1.154720ms Host To Device
102 120.47KB 4.0000KB 0.9961MB 12.00000MB 1.895040ms Device To Host
Total CPU Page faults: 51
What Happens on Pascal When I call cudaMallocManaged()?
On Pascal and later GPUs, managed memory may not be physically allocated when cudaMallocManaged() returns; it may only be populated on access (or prefetching). In other words, pages and page table entries may not be created until they are accessed by the GPU or the CPU. The pages can migrate to any processor’s memory at any time, and the driver employs heuristics to maintain data locality and prevent excessive page faults. (Note: Applications can guide the driver using cudaMemAdvise(), and explicitly migrate memory using cudaMemPrefetchAsync(), as this blog post describes).
Unlike the pre-Pascal GPUs, the Tesla P100 supports hardware page faulting and migration. So in this case the runtime doesn’t automatically copy all the pages back to the GPU before running the kernel. The kernel launches without any migration overhead, and when it accesses any absent pages, the GPU stalls execution of the accessing threads, and the Page Migration Engine migrates the pages to the device before resuming the threads.
This means that the cost of the migrations is included in the kernel run time when I run my program on the Tesla P100 (2.1192 ms). In this kernel, every page in the arrays is written by the CPU, and then accessed by the CUDA kernel on the GPU, causing the kernel to wait on a lot of page migrations. That’s why the kernel time measured by the profiler is longer on a Pascal GPU like Tesla P100. Let’s look at the full nvprof output for the program on P100.
==19278== Profiling application: ./add_grid
==19278== Profiling result:
Time(%) Time Calls Avg Min Max Name
100.00% 2.1192ms 1 2.1192ms 2.1192ms 2.1192ms add(int, float*, float*)
==19278== Unified Memory profiling result:
Device "Tesla P100-PCIE-16GB (0)"
Count Avg Size Min Size Max Size Total Size Total Time Name
146 56.109KB 4.0000KB 988.00KB 8.000000MB 860.5760us Host To Device
24 170.67KB 4.0000KB 0.9961MB 4.000000MB 339.5520us Device To Host
12 - - - - 1.067526ms GPU Page fault groups
Total CPU Page faults: 36
As you can see, there are many host-to-device page faults, reducing the throughput achieved by the CUDA kernel.
What Should I Do About This?
In a real application, the GPU is likely to perform a lot more computation on data (perhaps many times) without the CPU touching it. The migration overhead in this simple code is caused by the fact that the CPU initializes the data and the GPU only uses it once. There are three ways I can eliminate or reduce the migration overhead to get a more accurate measurement of the vector add kernel performance: initialize the data in a kernel on the GPU, run the kernel many times and look at the average time, or prefetch the data to GPU memory before running the kernel.
Let's look at each of these three approaches.
Initialize the Data in a Kernel
If we move initialization from the CPU to the GPU, the add kernel won’t page fault. Here’s a simple CUDA C++ kernel to initialize the data. We can just replace the host code that initializes x and y with a launch of this kernel.
__global__ void init(int n, float *x, float *y) {
int index = threadIdx.x + blockIdx.x * blockDim.x;
int stride = blockDim.x * gridDim.x;
for (int i = index; i < n; i += stride) {
x[i] = 1.0f;
y[i] = 2.0f;
}
}
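For completeness, here is a sketch of the replacement launch, assuming the same N, x, and y and mirroring the grid configuration used for add():
int blockSize = 256;
int numBlocks = (N + blockSize - 1) / blockSize;
init<<<numBlocks, blockSize>>>(N, x, y);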
When I do this, I see both kernels in the profile on the Tesla P100 GPU:
==44292== Profiling application: ./add_grid_init
==44292== Profiling result:
Time(%) Time Calls Avg Min Max Name
98.06% 1.3018ms 1 1.3018ms 1.3018ms 1.3018ms init(int, float*, float*)
1.94% 25.792us 1 25.792us 25.792us 25.792us add(int, float*, float*)
==44292== Unified Memory profiling result:
Device "Tesla P100-PCIE-16GB (0)"
Count Avg Size Min Size Max Size Total Size Total Time Name
24 170.67KB 4.0000KB 0.9961MB 4.000000MB 344.2880us Device To Host
16 - - - - 551.9940us GPU Page fault groups
Total CPU Page faults: 12
The add kernel now runs much faster: 25.8us, which equates to nearly 500 GB/s. Here’s how to calculate that bandwidth.
Bandwidth = Bytes / Seconds = (3 * 4,194,304 bytes * 1e-9 GB/byte) / 25.8e-6 s = 488 GB/s
(To learn about calculating theoretical and achieved bandwidth, see this post.) There are still device-to-host page faults, but this is due to the loop at the end of the program that checks the results on the CPU.
Run It Many Times
Another approach is to just run the kernel many times and look at the average time in the profiler. To do this I need to modify my error checking code so that the results are reported correctly. Here are the results of running the kernel 100 times on a Tesla P100:
==48760== Profiling application: ./add_grid_many
==48760== Profiling result:
Time(%) Time Calls Avg Min Max Name
100.00% 4.5526ms 100 45.526us 24.479us 2.0616ms add(int, float*, float*)
==48760== Unified Memory profiling result:
Device "Tesla P100-PCIE-16GB (0)"
Count Avg Size Min Size Max Size Total Size Total Time Name
174 47.080KB 4.0000KB 0.9844MB 8.000000MB 829.2480us Host To Device
24 170.67KB 4.0000KB 0.9961MB 4.000000MB 339.7760us Device To Host
14 - - - - 1.008684ms GPU Page fault groups
Total CPU Page faults: 36
The minimum kernel run time was just 24.5 microseconds, which means it is achieving over 500GB/s of memory bandwidth. I also included the Unified Memory profiling output from nvprof, which shows a total of 8MB of page faults from host to device, corresponding to the two 4MB arrays (x and y) copied to the device via page faults the first time add runs.
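For reference, the timing loop itself can be as simple as the following sketch (assuming the same grid configuration as before; because add() accumulates into y, the host-side check must now expect 100 * 1.0f + 2.0f):
for (int i = 0; i < 100; i++)
  add<<<numBlocks, blockSize>>>(N, x, y);
cudaDeviceSynchronize();  // wait for all runs to finish before checking results on the CPU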
Prefetching
The third approach is to use Unified Memory prefetching to move the data to the GPU after initializing it. CUDA provides cudaMemPrefetchAsync() for this purpose. I can add the following code just before the kernel launch.
// Prefetch the data to the GPU
int device = -1;
cudaGetDevice(&device);
cudaMemPrefetchAsync(x, N*sizeof(float), device, NULL);
cudaMemPrefetchAsync(y, N*sizeof(float), device, NULL);
// Run kernel on 1M elements on the GPU
int blockSize = 256;
int numBlocks = (N + blockSize - 1) / blockSize;
add<<<numBlocks, blockSize>>>(N, x, y);
Now when I profile on the Tesla P100, I get the following output.
==50360== Profiling application: ./add_grid_prefetch
==50360== Profiling result:
Time(%) Time Calls Avg Min Max Name
100.00% 26.112us 1 26.112us 26.112us 26.112us add(int, float*, float*)
==50360== Unified Memory profiling result:
Device "Tesla P100-PCIE-16GB (0)"
Count Avg Size Min Size Max Size Total Size Total Time Name
4 2.0000MB 2.0000MB 2.0000MB 8.000000MB 689.0560us Host To Device
24 170.67KB 4.0000KB 0.9961MB 4.000000MB 346.5600us Device To Host
Total CPU Page faults: 36
Here you can see that the kernel ran just once, taking 26.1us—similar to the fastest of 100 runs shown before. You can also see that there are no longer any GPU page faults reported, and the Host to Device transfers are shown as just four 2MB transfers, thanks to prefetching.
Now that we have it running fast on P100, let’s add it to the results table from last time.
A Note on Concurrency
Keep in mind that your system has multiple processors running parts of your CUDA application concurrently: one or more CPUs and one or more GPUs. Even in our simple example, there is a CPU thread and one GPU execution context. Therefore, we have to be careful when accessing the managed allocations on either processor, to ensure there are no race conditions.
Simultaneous access to managed memory from the CPU and GPUs of compute capability lower than 6.0 is not possible. This is because pre-Pascal GPUs lack hardware page faulting, so coherence can’t be guaranteed. On these GPUs, an access from the CPU while a kernel is running will cause a segmentation fault.
On Pascal and later GPUs, the CPU and the GPU can simultaneously access managed memory, since they can both handle page faults; however, it is up to the application developer to ensure there are no race conditions caused by simultaneous accesses.
In our simple example, we have a call to cudaDeviceSynchronize() after the kernel launch. This ensures that the kernel runs to completion before the CPU tries to read the results from the managed memory pointer. Otherwise, the CPU may read invalid data (on Pascal and later), or get a segmentation fault (on pre-Pascal GPUs).
The Benefits of Unified Memory on Pascal and Later GPUs
Starting with the Pascal GPU architecture, Unified Memory functionality is significantly improved with 49-bit virtual addressing and on-demand page migration. 49-bit virtual addresses are sufficient to enable GPUs to access the entire system memory plus the memory of all GPUs in the system. The Page Migration engine allows GPU threads to fault on non-resident memory accesses so the system can migrate pages on demand from anywhere in the system to the GPU’s memory for efficient processing.
In other words, Unified Memory transparently enables oversubscribing GPU memory, enabling out-of-core computations for any code that is using Unified Memory for allocations (e.g. cudaMallocManaged()). It “just works” without any modifications to the application, whether running on one GPU or multiple GPUs.
Also, Pascal and Volta GPUs support system-wide atomic memory operations. That means you can atomically operate on values anywhere in the system from multiple GPUs. This is useful in writing efficient multi-GPU cooperative algorithms.
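For example, a kernel can update a counter in managed memory that the CPU and other GPUs also see. A minimal sketch (requires compute capability 6.0 or higher):
// System-wide atomic add: the update is visible to the CPU and to other GPUs in the system.
__global__ void countEvents(int n, const int *flags, int *counter) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n && flags[i])
    atomicAdd_system(counter, 1);
}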
Demand paging can be particularly beneficial to applications that access data with a sparse pattern. In some applications, it’s not known ahead of time which specific memory addresses a particular processor will access. Without hardware page faulting, applications can only pre-load whole arrays, or suffer the cost of high-latency off-device accesses (also known as “Zero Copy”). But page faulting means that only the pages the kernel accesses need to be migrated.
Where To From Here?
I hope that this post has helped you continue learning CUDA programming and that you are interested in learning more and applying CUDA C++ in your own computations. If you have questions or comments, don’t hesitate to reach out using the comments section below.
For more on Unified Memory prefetching and also usage hints (cudaMemAdvise()), see the post Beyond GPU Memory Limits with Unified Memory on Pascal. If you'd like to learn about explicit memory management in CUDA using cudaMalloc and cudaMemcpy, see the old post An Easy Introduction to CUDA C/C++.
We plan to follow up this post with more CUDA programming material, but to keep you busy for now, there is a whole series of older introductory posts that you can continue with.
There is also a series of CUDA Fortran posts mirroring the above, starting with An Easy Introduction to CUDA Fortran.
You might also be interested in the DLI course on CUDA C/C++ programming or the prior Udacity course, Intro to Parallel Programming (CS344) (now available as a playlist on YouTube).
There is a wealth of other content on CUDA C++ and other GPU computing topics here on the NVIDIA Developer Blog, so look around!
1 Technically, this is a simplification. On multi-GPU systems with pre-Pascal GPUs, if some of the GPUs have peer-to-peer access disabled, the memory will be allocated so it is initially resident on the CPU.
2 Strictly speaking, you can restrict visibility of an allocation to a specific CUDA stream by using cudaStreamAttachMemAsync(). This allows the driver to migrate only pages attached to the stream the kernel is launched on. By default, managed allocations are attached to all streams so any kernel launch will trigger migrations. Read more in the CUDA programming guide.
3 The device attribute concurrentManagedAccess tells whether the GPU supports hardware page migration and the concurrent access functionality it enables. A value of 1 indicates support. At this time it is only supported on Pascal and newer GPUs running on 64-bit Linux.
How to Implement Performance Metrics in CUDA C/C++
In the first post of this series we looked at the basic elements of CUDA C/C++ by examining a CUDA C/C++ implementation of SAXPY. In this second post we discuss how to analyze the performance of this and other CUDA C/C++ codes. We will rely on these performance measurement techniques in future posts where performance optimization will be increasingly important.
CUDA performance measurement is most commonly done from host code, and can be implemented using either CPU timers or CUDA-specific timers. Before we jump into these performance measurement techniques, we need to discuss how to synchronize execution between the host and device.
Host-Device Synchronization
Let’s take a look at the data transfers and kernel launch of the SAXPY host code from the previous post:
cudaMemcpy(d_x, x, N*sizeof(float), cudaMemcpyHostToDevice);
cudaMemcpy(d_y, y, N*sizeof(float), cudaMemcpyHostToDevice);
saxpy<<<(N+255)/256, 256>>>(N, 2.0, d_x, d_y);
cudaMemcpy(y, d_y, N*sizeof(float), cudaMemcpyDeviceToHost);
The data transfers between the host and device using cudaMemcpy() are synchronous (or blocking) transfers. Synchronous data transfers do not begin until all previously issued CUDA calls have completed, and subsequent CUDA calls cannot begin until the synchronous transfer has completed. Therefore the saxpy kernel launch on the third line will not issue until the transfer from y to d_y on the second line has completed. Kernel launches, on the other hand, are asynchronous. Once the kernel is launched on the third line, control returns immediately to the CPU and does not wait for the kernel to complete. While this might seem to set up a race condition for the device-to-host data transfer in the last line, the blocking nature of the data transfer ensures that the kernel completes before the transfer begins.
Timing Kernel Execution with CPU Timers
Now let’s take a look at how to time the kernel execution using a CPU timer.
cudaMemcpy(d_x, x, N*sizeof(float), cudaMemcpyHostToDevice);
cudaMemcpy(d_y, y, N*sizeof(float), cudaMemcpyHostToDevice);
t1 = myCPUTimer();
saxpy<<<(N+255)/256, 256>>>(N, 2.0, d_x, d_y);
cudaDeviceSynchronize();
t2 = myCPUTimer();
cudaMemcpy(y, d_y, N*sizeof(float), cudaMemcpyDeviceToHost);
In addition to the two calls to the generic host time-stamp function myCPUTimer(), we use the explicit synchronization barrier cudaDeviceSynchronize() to block CPU execution until all previously issued commands on the device have completed. Without this barrier, this code would measure the kernel launch time and not the kernel execution time.
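myCPUTimer() stands in for any host time-stamp function. One possible implementation (an assumption, not part of the original code) uses std::chrono and returns seconds as a double:
#include <chrono>
double myCPUTimer() {
  using clock = std::chrono::steady_clock;
  static const auto t0 = clock::now();  // first call defines time zero
  return std::chrono::duration<double>(clock::now() - t0).count();
}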
Timing using CUDA Events
A problem with using host-device synchronization points, such as cudaDeviceSynchronize(), is that they stall the GPU pipeline. For this reason, CUDA offers a relatively light-weight alternative to CPU timers via the CUDA event API. The CUDA event API includes calls to create and destroy events, record events, and compute the elapsed time in milliseconds between two recorded events.
CUDA events make use of the concept of CUDA streams. A CUDA stream is simply a sequence of operations that are performed in order on the device. Operations in different streams can be interleaved and in some cases overlapped—a property that can be used to hide data transfers between the host and the device (we will discuss this in detail later). Up to now, all operations on the GPU have occurred in the default stream, or stream 0 (also called the “Null Stream”).
In the following listing we apply CUDA events to our SAXPY code.
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);
cudaMemcpy(d_x, x, N*sizeof(float), cudaMemcpyHostToDevice);
cudaMemcpy(d_y, y, N*sizeof(float), cudaMemcpyHostToDevice);
cudaEventRecord(start);
saxpy<<<(N+255)/256, 256>>>(N, 2.0f, d_x, d_y);
cudaEventRecord(stop);
cudaMemcpy(y, d_y, N*sizeof(float), cudaMemcpyDeviceToHost);
cudaEventSynchronize(stop);
float milliseconds = 0;
cudaEventElapsedTime(&milliseconds, start, stop);
CUDA events are of type cudaEvent_t and are created and destroyed with cudaEventCreate() and cudaEventDestroy(). In the above code cudaEventRecord() places the start and stop events into the default stream, stream 0. The device will record a time stamp for the event when it reaches that event in the stream. The function cudaEventSynchronize() blocks CPU execution until the specified event is recorded. The cudaEventElapsedTime() function returns in the first argument the number of milliseconds elapsed between the recording of start and stop. This value has a resolution of approximately one half microsecond.
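The listing above omits cleanup; once timing is finished, the events should be released:
cudaEventDestroy(start);
cudaEventDestroy(stop);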
Memory Bandwidth
Now that we have a means of accurately timing kernel execution, we will use it to calculate bandwidth. When evaluating bandwidth efficiency, we use both the theoretical peak bandwidth and the observed or effective memory bandwidth.
Theoretical Bandwidth
Theoretical bandwidth can be calculated using hardware specifications available in the product literature. For example, the NVIDIA Tesla M2050 GPU uses DDR (double data rate) RAM with a memory clock rate of 1,546 MHz and a 384-bit wide memory interface. Using these data items, the peak theoretical memory bandwidth of the NVIDIA Tesla M2050 is 148 GB/sec, as computed in the following.
BWTheoretical = 1546 * 10^6 * (384/8) * 2 / 10^9 = 148 GB/s
In this calculation, we convert the memory clock rate to Hz, multiply it by the interface width (divided by 8, to convert bits to bytes), and multiply by 2 due to the double data rate. Finally, we divide by 10^9 to convert the result to GB/s.
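The same inputs can also be queried at run time from the device properties rather than the product literature. A small sketch (memoryClockRate is reported in kHz and memoryBusWidth in bits):
#include <cstdio>
#include <cuda_runtime.h>

int main() {
  cudaDeviceProp prop;
  cudaGetDeviceProperties(&prop, 0);
  // Factor of 2 for double data rate; bus width divided by 8 to convert bits to bytes.
  double bwTheoretical = 2.0 * prop.memoryClockRate * 1e3 *
                         (prop.memoryBusWidth / 8.0) / 1e9;
  printf("Theoretical peak bandwidth (GB/s): %.1f\n", bwTheoretical);
  return 0;
}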
Effective Bandwidth
We calculate effective bandwidth by timing specific program activities and by knowing how our program accesses data. We use the following equation.
BWEffective = (RB + WB) / (t * 10^9)
Here, BWEffective is the effective bandwidth in units of GB/s, RB is the number of bytes read per kernel, WB is the number of bytes written per kernel, and t is the elapsed time given in seconds. We can modify our SAXPY example to calculate the effective bandwidth. The complete code follows.
#include <stdio.h>
__global__
void saxpy(int n, float a, float *x, float *y)
{
int i = blockIdx.x*blockDim.x + threadIdx.x;
if (i < n) y[i] = a*x[i] + y[i];
}
int main(void)
{
int N = 20 * (1 << 20);
float *x, *y, *d_x, *d_y;
x = (float*)malloc(N*sizeof(float));
y = (float*)malloc(N*sizeof(float));
cudaMalloc(&d_x, N*sizeof(float));
cudaMalloc(&d_y, N*sizeof(float));
for (int i = 0; i < N; i++) {
x[i] = 1.0f;
y[i] = 2.0f;
}
cudaEvent_t start, stop;
cudaEventCreate(&start);
cudaEventCreate(&stop);
cudaMemcpy(d_x, x, N*sizeof(float), cudaMemcpyHostToDevice);
cudaMemcpy(d_y, y, N*sizeof(float), cudaMemcpyHostToDevice);
cudaEventRecord(start);
// Perform SAXPY on 20M elements
saxpy<<<(N+511)/512, 512>>>(N, 2.0f, d_x, d_y);
cudaEventRecord(stop);
cudaMemcpy(y, d_y, N*sizeof(float), cudaMemcpyDeviceToHost);
cudaEventSynchronize(stop);
float milliseconds = 0;
cudaEventElapsedTime(&milliseconds, start, stop);
float maxError = 0.0f;
for (int i = 0; i < N; i++) {
maxError = max(maxError, abs(y[i]-4.0f));
}
printf("Max error: %fn", maxError);
printf("Effective Bandwidth (GB/s): %fn", N*4*3/milliseconds/1e6);
}
In the bandwidth calculation, N*4 is the number of bytes transferred per array read or write, and the factor of three represents the reading of x and the reading and writing of y. The elapsed time is stored in the variable milliseconds to make units clear. Note that in addition to adding the functionality needed for the bandwidth calculation, we have also changed the array size and the thread-block size. Compiling and running this code on a Tesla M2050 we have:
$ ./saxpy
Max error: 0.000000
Effective Bandwidth (GB/s): 110.374872
Measuring Computational Throughput
We just demonstrated how to measure bandwidth, which is a measure of data throughput. Another metric very important to performance is computational throughput. A common measure of computational throughput is GFLOP/s, which stands for "Giga-FLoating-point OPerations per second", where Giga is the prefix for 10^9. For our SAXPY computation, measuring effective throughput is simple: each SAXPY element does a multiply-add operation, which is typically measured as two FLOPs, so we have
GFLOP/s Effective = 2N / (t * 10^9)
N is the number of elements in our SAXPY operation, and t is the elapsed time in seconds. Like theoretical peak bandwidth, theoretical peak GFLOP/s can be gleaned from the product literature (but calculating it can be a bit tricky because it is very architecture-dependent). For example, the Tesla M2050 GPU has a theoretical peak single-precision floating point throughput of 1030 GFLOP/s, and a theoretical peak double-precision throughput of 515 GFLOP/s.
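Adding this measurement to the SAXPY example above is a one-line change (assuming the milliseconds variable from the earlier listing; dividing by 1e6 combines the milliseconds-to-seconds and Giga conversions):
printf("Effective Throughput (GFLOP/s): %f\n", 2.0 * N / milliseconds / 1e6);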
SAXPY reads 12 bytes per element computed, but performs only a single multiply-add instruction (2 FLOPs), so it's pretty clear that it will be bandwidth bound; in this case (in fact, in many cases), bandwidth is the most important metric to measure and optimize. In more sophisticated computations, measuring performance at the level of FLOPs can be very difficult. Therefore it's more common to use profiling tools to get an idea of whether computational throughput is a bottleneck. Applications often provide throughput metrics that are problem-specific (rather than architecture-specific) and therefore more useful to the user, for example "Billion Interactions per Second" for astronomical n-body problems, or "nanoseconds per day" for molecular dynamics simulations.
Summary
This post described how to time kernel execution using the CUDA event API. CUDA events use the GPU timer and therefore avoid the problems associated with host-device synchronization. We presented the effective bandwidth and computational throughput performance metrics, and we implemented effective bandwidth in the SAXPY kernel. A large percentage of kernels are memory bandwidth bound, so calculation of the effective bandwidth is a good first step in performance optimization. In a future post we will discuss how to determine which factor—bandwidth, instructions, or latency—is the limiting factor in performance.
CUDA events can also be used to determine the data transfer rate between host and device, by recording events on either side of the cudaMemcpy() calls.
If you run the code from this post on a smaller GPU, you may get an error message regarding insufficient device memory unless you reduce the array sizes. In fact, our example code so far has not bothered to check for run-time errors. In the next post, we will learn how to perform error handling in CUDA C/C++ and how to query the present devices to determine their available resources, so that we can write much more robust code.
Using CUDA Warp-Level Primitives
NVIDIA GPUs execute groups of threads known as warps in SIMT (Single Instruction, Multiple Thread) fashion. Many CUDA programs achieve high performance by taking advantage of warp execution. In this blog we show how to use primitives introduced in CUDA 9 to make your warp-level programing safe and effective.
Warp-level Primitives
NVIDIA GPUs and the CUDA programming model employ an execution model called SIMT (Single Instruction, Multiple Thread). SIMT extends Flynn’s Taxonomy of computer architectures, which describes four classes of architectures in terms of their numbers of instruction and data streams. One of Flynn’s four classes, SIMD (Single Instruction, Multiple Data) is commonly used to describe architectures like GPUs. But there is a subtle but important difference between SIMD and SIMT. In a SIMD architecture, each instruction applies the same operation in parallel across many data elements. SIMD is typically implemented using processors with vector registers and execution units; a scalar thread issues vector instructions that execute in SIMD fashion. In a SIMT architecture, rather than a single thread issuing vector instructions applied to data vectors, multiple threads issue common instructions to arbitrary data.
The benefits of SIMT for programmability led NVIDIA’s GPU architects to coin a new name for this architecture, rather than describing it as SIMD. NVIDIA GPUs execute warps of 32 parallel threads using SIMT, which enables each thread to access its own registers, to load and store from divergent addresses, and to follow divergent control flow paths. The CUDA compiler and the GPU work together to ensure the threads of a warp execute the same instruction sequences together as frequently as possible to maximize performance.
While the high performance obtained by warp execution happens behind the scene, many CUDA programs can achieve even higher performance by using explicit warp-level programming. Parallel programs often use collective communication operations, such as parallel reductions and scans. CUDA C++ supports such collective operations by providing warp-level primitives and Cooperative Groups collectives. The Cooperative Groups collectives (described in this previous post) are implemented on top of the warp primitives, on which this article focuses.
Listing 1 shows an example of using warp-level primitives. It uses __shfl_down_sync() to perform a tree-reduction to compute the sum of the val variable held by each thread in a warp. At the end of the loop, val of the first thread in the warp contains the sum.
#define FULL_MASK 0xffffffff
for (int offset = 16; offset > 0; offset /= 2)
val += __shfl_down_sync(FULL_MASK, val, offset);
A warp comprises 32 lanes, with each thread occupying one lane. For a thread at lane X in the warp, __shfl_down_sync(FULL_MASK, val, offset) gets the value of the val variable from the thread at lane X+offset of the same warp. The data exchange is performed between registers, and more efficient than going through shared memory, which requires a load, a store and an extra register to hold the address.
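Wrapped in a small device function, this pattern gives a reusable warp-wide sum (a sketch; the function name is ours, not part of the CUDA API):
__inline__ __device__ int warpReduceSum(int val) {
  for (int offset = 16; offset > 0; offset /= 2)
    val += __shfl_down_sync(0xffffffff, val, offset);
  return val;  // after the loop, lane 0 holds the sum for the whole warp
}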
CUDA 9 introduced three categories of new or updated warp-level primitives.
Please see the CUDA Programming Guide for detailed descriptions of these primitives.
Synchronized Data Exchange
Each of the “synchronized data exchange” primitives perform a collective operation among a set of threads in a warp. For example, Listing 2 shows three of these. Each thread that calls __shfl_sync() or __shfl_down_sync() receives data from a thread in the same warp, and each thread that calls __ballot_sync() receives a bit mask representing all the threads in the warp that pass a true value for the predicate argument.
int __shfl_sync(unsigned mask, int val, int srcLane, int width=warpSize);
int __shfl_down_sync(unsigned mask, int var, unsigned delta,
int width=warpSize);
int __ballot_sync(unsigned mask, int predicate);
The set of threads that participates in invoking each primitive is specified using a 32-bit mask, which is the first argument of these primitives. All the participating threads must be synchronized for the collective operation to work correctly. Therefore, these primitives first synchronize the threads if they are not already synchronized.
A frequently asked question is “what should I use for the mask argument?”. You can consider the mask to mean the set of threads in the warp that should participate in the collective operation. This set of threads is determined by the program logic, and can usually be computed by some branch condition earlier in the program flow. Take the reduction code in Listing 1 as an example. Assume we want to compute the sum of all the elements of an array input[], whose size NUM_ELEMENTS is less than the number of threads in the thread block. We can use the method in Listing 3.
unsigned mask = __ballot_sync(FULL_MASK, threadIdx.x < NUM_ELEMENTS);
if (threadIdx.x < NUM_ELEMENTS) {
val = input[threadIdx.x];
for (int offset = 16; offset > 0; offset /= 2)
val += __shfl_down_sync(mask, val, offset);
…
}
The code uses the condition threadIdx.x < NUM_ELEMENTS to determine whether or not a thread will participate in the reduction. __ballot_sync() is used to compute the membership mask for the __shfl_down_sync() operation. __ballot_sync() itself uses FULL_MASK (0xffffffff for 32 threads) because we assume all threads will execute it.
On Volta and later GPU architectures, the data exchange primitives can be used in thread-divergent branches: branches where some threads in the warp take a different path than the others. Listing 4 shows an example where all the threads in a warp get the value of val from the thread at lane 0. The even- and odd-numbered threads take different branches of an if statement.
if (threadIdx.x % 2) {
val += __shfl_sync(FULL_MASK, val, 0);
…
}
else {
val += __shfl_sync(FULL_MASK, val, 0);
…
}
On the latest Volta (and future) GPUs, you can run library functions that use warp synchronous primitives without worrying whether the function is called in a thread-divergent branch.
Active Mask Query
__activemask() returns a 32-bit unsigned int mask of all currently active threads in the calling warp. In other words, it shows the calling thread which threads in its warp are also executing the same __activemask(). This is useful for the "opportunistic warp-level programming" technique we explain later, as well as for debugging and understanding program behavior.
However, it's important to use __activemask() correctly. Listing 5 illustrates an incorrect use. The code tries to perform the same sum reduction shown in Listing 3, but instead of using __ballot_sync() to compute the mask before the branch, it uses __activemask() inside the branch. This is incorrect, as it would result in partial sums instead of a total sum. The CUDA execution model does not guarantee that all threads taking the branch together will execute the __activemask() together. Implicit lock-step execution is not guaranteed, as we will explain.
//
// Incorrect use of __activemask()
//
if (threadIdx.x < NUM_ELEMENTS) {
unsigned mask = __activemask();
val = input[threadIdx.x];
for (int offset = 16; offset > 0; offset /= 2)
val += __shfl_down_sync(mask, val, offset);
…
}
Warp Synchronization
When threads in a warp need to perform more complicated communications or collective operations than what the data exchange primitives provide, you can use the __syncwarp() primitive to synchronize threads in a warp. It is similar to the __syncthreads() primitive (which synchronizes all threads in the thread block) but at finer granularity.
void __syncwarp(unsigned mask=FULL_MASK);
The __syncwarp() primitive causes the executing thread to wait until all threads specified in mask have executed a __syncwarp() (with the same mask) before resuming execution. It also provides a memory fence to allow threads to communicate via memory before and after calling the primitive.
Listing 6 shows an example of shuffling the ownership of matrix elements among threads in a warp.
float val = get_value(…);
__shared__ float smem[4][8];
// 0 1 2 3 4 5 6 7
// 8 9 10 11 12 13 14 15
// 16 17 18 19 20 21 22 23
// 24 25 26 27 28 29 30 31
int x1 = threadIdx.x % 8;
int y1 = threadIdx.x / 8;
// 0 4 8 12 16 20 24 28
// 1 5 9 13 17 21 25 29
// 2 6 10 14 18 22 26 30
// 3 7 11 15 19 23 27 31
int x2 = threadIdx.x / 4;
int y2 = threadIdx.x % 4;
smem[y1][x1] = val;
__syncwarp();
val = smem[y2][x2];
use(val);
Assume a 1-D thread block is used (i.e. threadIdx.y is always 0). At the beginning of the code, each thread in a warp owns one element of a 4×8 matrix with row-major indexing. In other words, lane 0 owns [0][0] and lane 1 owns [0][1]. Each thread stores its value into the corresponding position of a 4×8 array in shared memory. Then __syncwarp() is used to ensure all threads have done the store, before each thread reads from a transposed position in the array. In the end, each thread in the warp owns one element of the matrix with column-major indexing: lane 0 owns [0][0] and lane 1 owns [1][0].
Make sure that __syncwarp() separates shared memory reads and writes to avoid race conditions. Listing 7 illustrates an incorrect use in a tree sum reduction in shared memory. There is a shared memory read followed by a shared memory write between every two __syncwarp() calls. The CUDA programming model does not guarantee that all the reads will be performed before all the writes, so there is a race condition.
unsigned tid = threadIdx.x;
// Incorrect use of __syncwarp()
shmem[tid] += shmem[tid+16]; __syncwarp();
shmem[tid] += shmem[tid+8]; __syncwarp();
shmem[tid] += shmem[tid+4]; __syncwarp();
shmem[tid] += shmem[tid+2]; __syncwarp();
shmem[tid] += shmem[tid+1]; __syncwarp();
Listing 8 fixes the race condition by inserting extra __syncwarp() calls. The CUDA compiler may elide some of these synchronization instructions in the final generated code depending on the target architecture (e.g. on pre-Volta architectures).
unsigned tid = threadIdx.x;
int v = 0;
v += shmem[tid+16]; __syncwarp();
shmem[tid] = v; __syncwarp();
v += shmem[tid+8]; __syncwarp();
shmem[tid] = v; __syncwarp();
v += shmem[tid+4]; __syncwarp();
shmem[tid] = v; __syncwarp();
v += shmem[tid+2]; __syncwarp();
shmem[tid] = v; __syncwarp();
v += shmem[tid+1]; __syncwarp();
shmem[tid] = v;
On the latest Volta (and future) GPUs, you can also use __syncwarp() in thread-divergent branches to synchronize threads from both branches. But once they return from the primitive, the threads will become divergent again. See Listing 13 for such an example.
Opportunistic Warp-level Programming
As we showed in the Synchronized Data Exchange section, the membership mask used in the synchronized data exchange primitives is often computed before a branch condition in the program flow. In many cases, the program needs to pass the mask along the program flow; for example, as a function argument when warp-level primitives are used inside a function. This may be difficult if you want to use warp-level programming inside a library function but you cannot change the function interface.
Some computations can use whatever threads happen to be executing together. We can use a technique called opportunistic warp-level programming, as the following example illustrates. (See this post on warp-aggregated atomics for more information on the algorithm, and this post for discussion of how Cooperative Groups makes the implementation much simpler.)
// increment the value at ptr by 1 and return the old value
__device__ int atomicAggInc(int *ptr) {
int mask = __match_any_sync(__activemask(), (unsigned long long)ptr);
int leader = __ffs(mask) - 1; // select a leader
int res;
if(lane_id() == leader) // leader does the update
res = atomicAdd(ptr, __popc(mask));
res = __shfl_sync(mask, res, leader); // get leader’s old value
return res + __popc(mask & ((1 << lane_id()) - 1)); // compute old value
}
atomicAggInc() atomically increments the value pointed to by ptr by 1 and returns the old value. It uses the atomicAdd() function, which may incur contention. To reduce contention, atomicAggInc replaces the per-thread atomicAdd() operation with a per-warp atomicAdd(). The __activemask() in line 4 finds the set of threads in the warp that are about to perform the atomic operation. __match_any_sync() returns the bit mask of the threads that have the same value of ptr, partitioning the incoming threads into groups whose members have the same ptr value. Each group elects a leader thread (line 5), which performs the atomicAdd() (line 8) for the whole group. Every thread gets the old value from the leader (line 9) returned by the atomicAdd(). Line 10 computes and returns the old value the current thread would have received from a per-thread atomicAdd() had it called that directly instead of atomicAggInc().
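As a usage sketch (our example, not from the original post), atomicAggInc() drops straight into a stream-compaction kernel. It assumes a lane_id() helper such as the following, declared before atomicAggInc():
__device__ unsigned lane_id() { return threadIdx.x % 32; }  // lane within the warp, assuming 1-D blocks

// Threads that pass the filter reserve output slots with one atomicAdd per warp group.
__global__ void filterPositive(int n, const float *in, float *out, int *count) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n && in[i] > 0.0f) {
    int idx = atomicAggInc(count);  // warp-aggregated increment from the listing above
    out[idx] = in[i];
  }
}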
Implicit Warp-Synchronous Programming is Unsafe
CUDA toolkits prior to version 9.0 provided a (now legacy) version of warp-level primitives. Compared with the CUDA 9 primitives, the legacy primitives do not accept a mask argument. For example, int __any(int predicate) is the legacy version of int __any_sync(unsigned mask, int predicate).
The mask argument, as explained previously, specifies the set of threads in a warp that must participate in the primitives. The new primitives perform intra-warp thread-level synchronization if the threads specified by the mask are not already synchronized during execution.
The legacy warp-level primitives do not allow programmers to specify the required threads and do not perform synchronization. Therefore, the threads that must participate in the warp-level operation are not explicitly expressed by the CUDA program. The correctness of such a program depends on implicit warp-synchronous behavior, which may change from one hardware architecture to another, from one CUDA toolkit release to another (due to changes in compiler optimizations, for example), or even from one run-time execution to another. Such implicit warp-synchronous programming is unsafe and may not work correctly.
For example, in the following code, let’s assume all 32 threads in a warp execute line 2 together. The if statement at line 4 causes the threads to diverge, with the odd threads calling foo() at line 5 and the even threads calling bar() at line 8.
// Assuming all 32 threads in a warp execute line 1 together.
assert(__ballot(1) == FULL_MASK);
int result;
if (thread_id % 2) {
result = foo();
}
else {
result = bar();
}
unsigned ballot_result = __ballot(result);
The CUDA compiler and the hardware will try to re-converge the threads at line 10 for better performance. But this re-convergence is not guaranteed. Therefore, the ballot_result may not contain the ballot result from all 32 threads.
Calling the new __syncwarp() primitive at line 10 before __ballot(), as illustrated in Listing 11, does not fix the problem either. This is again implicit warp-synchronous programming. It assumes that threads in the same warp that are once synchronized will stay synchronized until the next thread-divergent branch. Although it is often true, it is not guaranteed in the CUDA programming model.
__syncwarp();
unsigned ballot_result = __ballot(result);
The correct fix is to use __ballot_sync() as in Listing 12.
unsigned ballot_result = __ballot_sync(FULL_MASK, result);
A common mistake is to assume that calling __syncwarp() before and/or after a legacy warp-level primitive is functionally equivalent to calling the sync version of the primitive. For example, is __syncwarp(); v = __shfl(0); __syncwarp(); the same as __shfl_sync(FULL_MASK, 0)? The answer is no, for two reasons. First, if the sequence is used in a thread-divergent branch, then __shfl(0) won’t be executed by all threads together. Listing 13 shows an example. The __syncwarp() at line 3 and line 7 would ensure foo() is called by all threads in the warp before line 4 or line 8 is executed. Once threads leave the __syncwarp(), the odd threads and the even threads become divergent again. Therefore, the __shfl(0) at line 4 will get an undefined value because lane 0 is inactive when line 4 is executed. __shfl_sync(FULL_MASK, 0) can be used in thread-divergent branches without this problem.
v = foo();
if (threadIdx.x % 2) {
__syncwarp();
v = __shfl(0); // L3 will get undefined result because lane 0
__syncwarp(); // is not active when L3 is executed. L3 and L6
} else { // will execute divergently.
__syncwarp();
v = __shfl(0);
__syncwarp();
}
Second, even when the sequence is called by all the threads together, the CUDA execution model does not guarantee threads will stay convergent after leaving __syncwarp(), as Listing 14 shows. Implicit lock-step execution is not guaranteed. Remember, thread convergence is guaranteed only within explicitly synchronous warp-level primitives.
assert(__activemask() == FULL_MASK); // assume this is true
__syncwarp();
assert(__activemask() == FULL_MASK); // this may fail
Because using them can lead to unsafe programs, the legacy warp-level primitives are deprecated starting in CUDA 9.0.
Update Legacy Warp-Level Programming
If your program uses legacy warp-level primitives or any form of implicit warp-synchronous programming (such as communicating between threads of a warp without synchronization), you should update the code to use the sync version of the primitives. You may also want to restructure your code to use Cooperative Groups, which provides a higher level of abstraction as well as new features such as multi-block synchronization.
The trickiest part of using the warp-level primitives is figuring out the membership mask to be used. We hope the above sections give you a good idea where to start and what to look out for. Here is a list of suggestions:
One last trick. If your existing CUDA program gives a different result on Volta architecture GPUs, and you suspect the difference is caused by Volta’s new independent thread scheduling which can change warp synchronous behavior, you may want to recompile your program with nvcc options -arch=compute_60 -code=sm_70. Such compiled programs opt-in to Pascal’s thread scheduling. When used selectively, it can help pin down the culprit module more quickly, allowing you to update the code to avoid implicit warp-synchronous programming.
How to Access Global Memory Efficiently in CUDA Fortran Kernels
In the previous two posts we looked at how to move data efficiently between the host and device. In this sixth post of our CUDA Fortran series we discuss how to efficiently access device memory, in particular global memory, from within kernels.
There are several kinds of memory on a CUDA device, each with different scope, lifetime, and caching behavior. So far in this series we have used global memory, which resides in device DRAM, for transfers between the host and device as well as for the data input to and output from kernels. The name global here refers to scope, as it can be accessed and modified from both the host and the device. Global memory is declared in host code via the device variable attribute and can persist for the lifetime of the application. Depending on the compute capability of the device, global memory may or may not be cached on the chip.
Before we go into how global memory is accessed, we need to refine our understanding of the CUDA execution model. We have discussed how threads are grouped into thread blocks, which are assigned to multiprocessors on the device. During execution there is a finer grouping of threads into groups of threads called warps. Multiprocessors on the GPU execute instructions for each warp in SIMD (Single Instruction Multiple Data) fashion. The warp size (effectively the SIMD width) of all current CUDA-capable GPUs is 32 threads.
Global Memory Coalescing
Grouping of threads into warps is not only relevant to computation, but also to global memory accesses. The device coalesces global memory loads and stores issued by threads of a warp into as few transactions as possible in order to minimize DRAM bandwidth (on older hardware of compute capability less than 2.0, transactions are coalesced within half warps of 16 threads rather than whole warps). To elucidate the conditions under which coalescing occurs across CUDA device architectures we run some simple experiments on three Tesla cards: a Tesla C870 (compute capability 1.0), a Tesla C1060 (compute capability 1.3), and a Tesla C2050 (compute capability 2.0).
We run two experiments that use variants of an increment kernel shown in the following code, one with an array offset that can cause misaligned accesses to the input array, and the other with strided accesses to the input array.
module kernels_m
integer, parameter :: singlePrecision = kind(0.0)
integer, parameter :: doublePrecision = kind(0.0d0)
integer, parameter :: fp_kind = singlePrecision
contains
attributes(global) subroutine offset(a, s)
real (fp_kind) :: a(*)
integer, value :: s
integer :: i
i = blockDim%x*(blockIdx%x-1)+threadIdx%x + s
a(i) = a(i)+1
end subroutine offset
attributes(global) subroutine stride(a, s)
real (fp_kind) :: a(*)
integer, value :: s
integer :: i
i = 1 + (blockDim%x*(blockIdx%x-1)+threadIdx%x-1) * s
a(i) = a(i)+1
end subroutine stride
end module kernels_m
program offsetAndStride
use cudafor
use kernels_m
implicit none
integer, parameter :: nMB = 4 ! NB: a_d(33*nMB) for stride case
integer, parameter :: blockSize = 256
integer :: n
real (fp_kind), device, allocatable :: a_d(:)
type(cudaEvent) :: startEvent, stopEvent
type(cudaDeviceProp) :: prop
integer :: i, istat
real(4) :: time
istat = cudaGetDeviceProperties(prop, 0)
write(*,'(/,"Device: ",a)') trim(prop%name)
write(*,'("Transfer size (MB): ",i0)') nMB
if (kind(a_d) == singlePrecision) then
write(*,'(a,/)') 'Single Precision'
else
write(*,'(a,/)') 'Double Precision'
endif
n = nMB*1024*1024/fp_kind
allocate(a_d(n*33))
istat = cudaEventCreate(startEvent)
istat = cudaEventCreate(stopEvent)
write(*,*) 'Offset, Bandwidth (GB/s):'
call offset<<<n/blockSize, blockSize>>>(a_d, 0)
do i = 0, 32
a_d = 0.0
istat = cudaEventRecord(startEvent,0)
call offset<<<n/blockSize, blockSize>>>(a_d, i)
istat = cudaEventRecord(stopEvent,0)
istat = cudaEventSynchronize(stopEvent)
istat = cudaEventElapsedTime(time, startEvent, stopEvent)
write(*,*) i, 2*nMB/time*(1.e+3/1024)
enddo
write(*,*)
write(*,*) 'Stride, Bandwidth (GB/s):'
call stride<<<n/blockSize, blockSize>>>(a_d, 1)
do i = 1, 32
a_d = 0.0
istat = cudaEventRecord(startEvent,0)
call stride<<<n/blockSize, blockSize>>>(a_d, i)
istat = cudaEventRecord(stopEvent,0)
istat = cudaEventSynchronize(stopEvent)
istat = cudaEventElapsedTime(time, startEvent, stopEvent)
write(*,*) i, 2*nMB/time*(1.e+3/1024)
enddo
istat = cudaEventDestroy(startEvent)
istat = cudaEventDestroy(stopEvent)
deallocate(a_d)
end program offsetAndStride
This code can run both offset and stride kernels in either single or double precision by changing the fp_kind parameter at the top of the code. Each kernel takes two arguments, an input array and an integer representing the offset or stride used to access the elements of the array. The kernels are called in loops over a range of offset and strides.
Misaligned Data Accesses
The results for the offset kernel on the Tesla C870, C1060, and C2050 are shown in the following figure.
Arrays allocated (either explicitly or implicitly) in device memory, are aligned to 256-byte memory segments by the CUDA driver. The device can access global memory via 32-, 64-, or 128-byte transactions that are aligned to their size. For the C870 or any other device with a compute capability of 1.0, any misaligned access by a half warp of threads (or aligned access where the threads of the half warp do not access memory in sequence) results in 16 separate 32-byte transactions. Since only 4 bytes are requested per 32-byte transaction, one would expect the effective bandwidth to be reduced by a factor of eight, which is roughly what we see in the figure above (brown line) for offsets that are not a multiple of 16 elements, corresponding to one half warp of threads.
For the Tesla C1060 or other devices with compute capability of 1.2 or 1.3, misaligned accesses are less problematic. Basically, the misaligned accesses of contiguous data by a half warp of threads are serviced in a few transactions that “cover” the requested data. There is still a performance penalty relative to the aligned case due to both unrequested data being transferred and some overlap of data requested by different half-warps, but the penalty is far less than for the C870.
Devices of compute capability 2.0, such as the Tesla C2050, have an L1 cache in each multiprocessor with a 128-byte line size. Accesses by threads in a warp are coalesced into as few cache lines as possible, resulting in negligible effect of alignment on throughput for sequential memory accesses across threads.
Strided Memory Access
The results of the stride kernel are shown below:
For strided global memory access we have a different picture. For large strides, the effective bandwidth is poor regardless of the version of the architecture. This should not be surprising: when concurrent threads simultaneously access memory addresses that are very far apart in physical memory, then there is no chance for the hardware to combine the accesses. You can see in the figure above that on the C870 any stride other than 1 results in drastically reduced effective bandwidth. This is because compute capability 1.0 and 1.1 hardware requires linear, aligned accesses across threads for coalescing, so we see the familiar 1/8 bandwidth that we also saw in the offset kernel. Compute capability 1.2 and higher hardware can coalesce accesses that fall into aligned segments (32-, 64-, or 128-byte segments on CC 1.2/1.3, and 128-byte cache lines on CC 2.0 and higher), so this hardware results in a smooth bandwidth curve.
When accessing multidimensional arrays it is often necessary for threads to index the higher dimensions of the array, so strided access is simply unavoidable. We can handle these cases by using a type of CUDA memory called shared memory. Shared memory is an on-chip memory which is shared by all threads in a thread block. One use of shared memory is to extract a 2D tile of a multidimensional array from global memory in a coalesced fashion into shared memory, and then have contiguous threads stride through the shared memory tile. Unlike global memory, there is no penalty for strided access of shared memory. We will cover shared memory in detail in the next post.
Summary
In this post we discussed some aspects of how to efficiently access global memory from within CUDA kernel code. Global memory access on the device shares performance characteristics with data access on the host; namely, that data locality is very important. In early CUDA hardware, memory access alignment was as important as locality across threads, but on recent hardware alignment is not much of a concern. On the other hand, strided memory access can hurt performance, which can be alleviated using on-chip shared memory. In the next post we will explore shared memory in detail, and in the post after that we will show how shared memory can be used to avoid strided global memory accesses during a matrix transpose.
Reducing Application Build Times Using CUDA C++ Compilation Aids
The CUDA 11.5 C++ compiler addresses a growing customer request. Specifically, how to reduce CUDA application build times. Along with eliminating unused kernels, NVRTC and PTX concurrent compilation help address this key CUDA C++ application development concern.
The CUDA 11.5 NVCC compiler now adds support for Clang 12.0 as a host compiler. We have also included a limited preview release of 128-bit integer support, which is becoming essential in high-fidelity computations.
This technical walkthrough on the CUDA C++ compiler toolchain complements the CUDA C++ Programming Guide and provides a broad overview of new features being introduced in the CUDA 11.5 toolkit release.
NVRTC concurrent compilation
NVRTC compilation proceeds through three main stages:
Parser -> NVVM optimizer -> PTX Compiler
Some of these stages are not thread-safe, so NVRTC would previously serialize concurrent compilation requests from multiple user-threads using a global lock.
In CUDA 11.5, the NVRTC implementation was enhanced to provide partially concurrent compilation support. This is done by removing the global lock and using per-stage locks, allowing different threads to concurrently execute different stages of the compilation pipeline.
Figure 1 shows how NVRTC, before CUDA 11.5, serializes simultaneous compilation requests from four threads.
With 11.5, NVRTC does not serialize compilation requests. Instead, the compilation requests from different threads are pipelined, enabling different stages of the compilation pipeline to proceed concurrently.
The graph in Figure 3 shows the total compilation time for compiling a set of 100 identical sample NVRTC programs, split over the available number of threads.
As expected, with CUDA 11.4 NVRTC the total compilation time does not change as the number of threads increases, because compilation is serialized with a global NVRTC lock. With CUDA 11.5 NVRTC, the total compilation time is reduced as the number of threads increases. We will continue working to make the individual stages thread-safe, which should enable nearly linear speedup for this example.
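A minimal sketch of driving NVRTC from several host threads (the kernel source, architecture option, and thread count here are illustrative assumptions):
#include <nvrtc.h>
#include <thread>
#include <vector>

const char *kSrc = "__global__ void k(float *x) { x[threadIdx.x] *= 2.0f; }";

void compileOne() {
  nvrtcProgram prog;
  nvrtcCreateProgram(&prog, kSrc, "k.cu", 0, nullptr, nullptr);
  const char *opts[] = {"--gpu-architecture=compute_70"};
  nvrtcCompileProgram(prog, 1, opts);  // with CUDA 11.5, stages can overlap across threads
  nvrtcDestroyProgram(&prog);
}

int main() {
  std::vector<std::thread> workers;
  for (int i = 0; i < 4; i++) workers.emplace_back(compileOne);
  for (auto &t : workers) t.join();
  return 0;
}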
PTX concurrent compilation
PTX compilation, both along the JIT compilation path and through the PTX static library, proceeds through multiple internal phases. The previous implementation of these phases did not guarantee concurrent compilation from multiple threads. Instead, the PTX compiler used a global lock to serialize concurrent compilations.
In CUDA 11.5 and the R495 driver, the PTX compiler implementation now uses finer-grained local locks, rather than a global lock. This enables concurrent execution of multiple compilation requests, and significantly improves compilation time.
The following graph shows the total compilation time for compiling 104 identical sample programs split over a given number of threads through cuLinkAddData with CU_JIT_INPUT_PTX as CUjitInputType.
As expected, with the R470 CUDA driver the total compilation time does not change as the number of threads increases, because compilation is serialized with a global lock. With the R495 CUDA driver, the total compilation time reduces as the number of threads increases.
Eliminating unused kernels
Separate compilation mode enables CUDA kernel functions and device functions to be shipped as CUDA device code libraries and be linked against any user application using nvlink, the device linker. The generated device program is then loaded and executed on the GPU at run time.
Before CUDA 11.5, nvlink could not determine whether it was safe to remove unused kernels from the linked device program, as these kernel functions could be referenced from host code.
Consider a library that defines four kernel functions:
//library.cu
__global__ void AAA() { /* code */ }
__global__ void BBB() { /* code */ }
__global__ void CCC() { /* code */ }
__global__ void DDD() { /* code */ }
The library is built and shipped:
$nvcc -rdc=true library.cu -lib -o testlib.a
The user code refers to a single kernel from the library:
//user.cu
extern __global__ void AAA();
int main() { AAA<<<1,1>>>(); }
The code is linked:
$nvcc -rdc=true user.cu testlib.a -o user
With CUDA 11.4 for instance, the linked device program would contain all four kernel bodies, even though only a single kernel (‘AAA’) is used in the linked device program. This can be burdensome for applications linking against larger libraries.
Increased binary sizes and application load times are not the only problems with redundant device code. When using device link time optimization, unused kernels not removed before optimization can lead to longer build times, and potentially impede code optimizations.
With CUDA 11.5, the CUDA compiler will track references to kernels from host code, and propagate this information to the device linker (nvlink). nvlink then removes the unused kernels from the linked device program. For the previous example, the unused kernels BBB, CCC, and DDD will get eliminated from the linked device program.
In CUDA 11.5, this optimization is disabled by default, but can be enabled by adding the -Xnvlink -use-host-info option to the NVCC command line:
$nvcc -rdc=true user.cu testlib.a -o user -Xnvlink -use-host-info
In subsequent CUDA toolkit releases, the optimization will be enabled by default, and an opt-out flag will be provided.
Here are some caveats. In CUDA 11.5, the compiler analysis for kernel references will be conservative for the following scenarios. The compiler may consider some kernels that are not actually referenced from host code as referenced:
template<typename T>
__global__ void foo() { }
__device__ void doit() { foo<void><<<1,1>>>(); }
int main() {
// compiler will mark all instances of foo template as referenced
// from host code, including "foo<void>", which is only actually
// referenced from device code
foo<int><<<1,1>>>();
}
__global__ void foo() { }
__device__ auto *ptr = foo; // foo is considered as referenced
// from host code.
__global__ void foo(int) { }
namespace N1 {
template <typename T>
__global__ void foo(T) { }
}
template<typename T>
void doit() {
// the reference to 'foo' is template dependent, so
// both ::foo and all instances of ::N1::foo are
// considered as referenced from host code.
foo<<<1,1>>>(T{});
}
Another caveat is that when the device link step is deferred to host application startup (JIT linking), instead of being performed at build time, unused kernels will not be removed.
// With nonvirtual architecture (sm_80), NVLink is invoked
// at build time, and kernel pruning will occur.
$nvcc -Xnvlink -use-host-info -rdc=true foo.cu bar.cu -o foo -arch sm_80
// With a virtual architecture (compute_80), nvlink is not invoked
// at build time, but only during host application startup.
// Kernel pruning will not occur.
$nvcc -Xnvlink -use-host-info -rdc=true foo.cu bar.cu -o foo -arch compute_80
In CUDA 11.5, nvlink does not yet use the information about unused kernels during device link time optimization. Our goal is to enable nvlink to use this information to delete unused kernels, reduce optimizer time, and improve generated code quality by reducing code bloat.
Limited 128-bit integer support
The CUDA 11.5 C++ compiler supports 128-bit integer data types on platforms where the host compiler supports 128-bit integers. Basic arithmetic, logical, and bitwise operations work on 128-bit integers. Support for 128-bit integer variants of the CUDA math intrinsics and CUDA math functions is planned for future releases.
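Here is a minimal sketch (a hypothetical kernel, not from the post) of what basic 128-bit arithmetic looks like in device code, assuming a host compiler such as a recent GCC or Clang that provides __int128:
// Minimal sketch: arithmetic, bitwise, and shift operations on 128-bit
// integers in a CUDA kernel (requires host-compiler __int128 support).
__global__ void int128_demo(const __int128 *a, const __int128 *b, __int128 *out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        __int128 sum  = a[i] + b[i];     // arithmetic
        __int128 prod = a[i] * b[i];
        out[i] = (sum ^ prod) >> 3;      // bitwise and shift
    }
}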
Similarly, debug support for 128-bit integers and integration with developer tools will be in a subsequent release. For now, we are seeking your early feedback on this preview feature on the Developer Forum.
NVRTC static library
CUDA 11.5 provides a static version of the NVRTC library. Some applications may prefer to link against the static NVRTC library to guarantee stable performance and functionality during deployment. Users of the static library will also want to statically link the static versions of the NVRTC built-ins library and the PTX compiler library. For more information about linking against the static NVRTC library, see the NVRTC User Guide.
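As a rough illustration only (a hypothetical link line: the library names, paths, and additional system libraries below are assumptions that depend on your platform and toolkit layout, so follow the NVRTC User Guide for the authoritative instructions):
$g++ my_nvrtc_app.cpp -o my_nvrtc_app \
  -I/usr/local/cuda/include -L/usr/local/cuda/lib64 \
  -lnvrtc_static -lnvrtc-builtins_static -lnvptxcompiler_static \
  -lcuda -lpthread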
__builtin_assume
CUDA 11.5 improves code generation for loads and stores when __builtin_assume is applied to the results of address space predicate functions such as __isShared(pointer). For other supported functions, see Address Space Predicate Functions.
Without an address space specifier, the compiler generates generic load and store instructions, which require a few extra instructions to compute the specific memory segment before performing the actual memory operation. Using __builtin_assume(expr) hints the compiler about the address space of a generic pointer, potentially improving the performance of the generated code.
Correct Usage:
bool b = __isShared(ptr);
__builtin_assume(b); // OK: Proof that ptr is a pointer to shared memory
Incorrect Usage:
These hints are ignored unless the boolean expression is stored in a separate variable:
__builtin_assume(__isShared(ptr)); // IGNORED
As with other uses of __builtin_assume, if the expression is not true, the behavior is undefined. If you are interested in learning more about __builtin_assume, see the CUDA 11.2 Compiler post.
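To put the correct usage in context, here is a minimal sketch (hypothetical code, not from the post) where a device function receives a generic pointer that the caller guarantees refers to shared memory:
// The caller promises that ptr points into shared memory, so the compiler
// may emit shared-memory loads and stores instead of generic ones.
__device__ void scale_buffer(float *ptr, float factor, int n)
{
    bool in_shared = __isShared(ptr);
    __builtin_assume(in_shared);               // hint the address space
    for (int i = threadIdx.x; i < n; i += blockDim.x)
        ptr[i] *= factor;
}
__global__ void scale_kernel(float factor)
{
    __shared__ float buffer[256];
    buffer[threadIdx.x] = (float)threadIdx.x;  // launch with 256 threads
    __syncthreads();
    scale_buffer(buffer, factor, 256);         // generic pointer, known shared
}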
Pragma diagnostic control
In CUDA 11.5, the NVCC CUDA compiler frontend has added support for numerous pragmas that offer more control over diagnostic messages.
You can use the following pragmas to control the compiler diagnostics for specific error numbers:
#pragma nv_diag_suppress // suppress the specified diagnostic
// message
#pragma nv_diag_warning // make the specified diagnostic a warning
#pragma nv_diag_error // make the specified diagnostic an error
#pragma nv_diag_default // restore the specified diagnostic level
// to default
#pragma nv_diag_once // only report the specified diagnostic once
Uses of these pragmas have the following form:
#pragma nv_diag_xxx error_number, error_number …
For detailed usage and caveats, see the CUDA Programming Guide. The following example suppresses the “declared but never referenced” warning on the unused local variable in foo:
#pragma nv_diag_suppress 177
void foo()
{
int xxx=0;
}
The pragmas nv_diagnostic push and nv_diagnostic pop may be used to save and restore the current diagnostic pragma state:
#pragma nv_diagnostic push
#pragma nv_diag_suppress 177
void foo()
{
int xxx=0;
}
#pragma nv_diagnostic pop
void bar()
{
int xxx=0;
}
None of these pragmas have any effect on the host compiler.
Deprecation note: Diagnostic pragmas without the nv_ prefix have been deprecated. For example, #pragma diag_suppress support will be removed from all future releases. Using these diagnostic pragmas will elicit warning messages like this:
pragma "diag_suppress" is deprecated, use "nv_diag_suppress" instead
The macro __NVCC_DIAG_PRAGMA_SUPPORT__ can facilitate the transition to the new pragmas:
#ifdef __NVCC_DIAG_PRAGMA_SUPPORT__
#pragma nv_diag_suppress 177
#else
#pragma diag_suppress 177
#endif
New option -arch=all|all-major
Before the CUDA 11.5 release, if you wanted to generate code for all supported architectures, you had to list all the targets in --generate-code options. If a newer architecture was added, or an old one retired, the --generate-code options had to be changed accordingly. The new -arch=all|all-major option provides a simpler and more efficient way to do the same.
If -arch=all is specified, NVCC embeds a compiled code image for all supported architectures (sm_*), and a PTX program for the highest major virtual architecture.
If -arch=all-major is specified, NVCC embeds a compiled code image for all supported major versions (sm_*0), starting from the earliest supported sm_x architecture (sm_35 for this release), and a PTX program for the highest major virtual architecture.
For example, a simple -arch=all option is equivalent to the following long list of options for this release:
-gencode arch=compute_35,\"code=sm_35\"
-gencode arch=compute_37,\"code=sm_37\"
-gencode arch=compute_50,\"code=sm_50\"
-gencode arch=compute_52,\"code=sm_52\"
-gencode arch=compute_53,\"code=sm_53\"
-gencode arch=compute_60,\"code=sm_60\"
-gencode arch=compute_61,\"code=sm_61\"
-gencode arch=compute_62,\"code=sm_62\"
-gencode arch=compute_70,\"code=sm_70\"
-gencode arch=compute_72,\"code=sm_72\"
-gencode arch=compute_75,\"code=sm_75\"
-gencode arch=compute_80,\"code=sm_80\"
-gencode arch=compute_86,\"code=sm_86\"
-gencode arch=compute_87,\"code=sm_87\"
-gencode arch=compute_80,\"code=compute_80\"
A simple -arch=all-major option is equivalent to the following long list of options for this release:
-gencode arch=compute_35,\"code=sm_35\"
-gencode arch=compute_50,\"code=sm_50\"
-gencode arch=compute_60,\"code=sm_60\"
-gencode arch=compute_70,\"code=sm_70\"
-gencode arch=compute_80,\"code=sm_80\"
-gencode arch=compute_80,\"code=compute_80\"
For all supported virtual architectures, see the Virtual Architecture Feature List. For all supported real architectures, see the GPU Feature List.
Deterministic code generation
In previous CUDA toolkits, the mangled name of an internal linkage variable or function in device code changed on every nvcc invocation, even when there was no change to the source code. Certain software management and build systems check whether the generated program bits have changed. The prior nvcc compiler behavior caused such systems to trigger and incorrectly assume that there was a semantic change in the source program, potentially triggering redundant dependent builds.
The NVCC compiler behavior has been changed to be deterministic in CUDA 11.5. For example, consider this test case:
//--
static __device__ void foo() { }
auto __device__ fptr = foo;
int main() { }
//--
With CUDA 11.4, compiling the same program twice generates slightly different names in the PTX:
//--
$cuda-11.4/bin/nvcc -std=c++14 -rdc=true -ptx test.cu -o test1.ptx
$cuda-11.4/bin/nvcc -std=c++14 -rdc=true -ptx test.cu -o test2.ptx
$diff -w test1.ptx test2.ptx
13c13
< .func _ZN57_INTERNAL_39_tmpxft_00000a46_00000000_7_test_cpp1_ii_main3fooEv
---
> .func _ZN57_INTERNAL_39_tmpxft_00000a4e_00000000_7_test_cpp1_ii_main3fooEv
16c16
< .visible .global .align 8 .u64 fptr = _ZN57_INTERNAL_39_tmpxft_00000a46_00000000_7_test_cpp1_ii_main3fooEv;
---
> .visible .global .align 8 .u64 fptr = _ZN57_INTERNAL_39_tmpxft_00000a4e_00000000_7_test_cpp1_ii_main3fooEv;
18c18
< .func _ZN57_INTERNAL_39_tmpxft_00000a46_00000000_7_test_cpp1_ii_main3fooEv()
---
> .func _ZN57_INTERNAL_39_tmpxft_00000a4e_00000000_7_test_cpp1_ii_main3fooEv()
$
//--
With CUDA 11.5, compiling the same program twice generates identical PTX:
//--
$nvcc -std=c++14 -rdc=true -ptx test.cu -o test1.ptx
$nvcc -std=c++14 -rdc=true -ptx test.cu -o test2.ptx
$diff -w test1.ptx test2.ptx
$
//--
Conclusion
Learn more about the CUDA 11.5 Toolkit by reading the Revealing New Features in the CUDA 11.5 Toolkit post. To exploit the new compiler toolchain features covered in this post, download and use the CUDA 11.5 Toolkit.
Provide us your feedback on the Developer Forum, specifically which of these features were the most important to you and why. Let us know if you are able to leverage the concurrent compilation support in NVRTC and PTX for your existing code base. Contact us to share other improvements that you would like to see in future CUDA toolkit releases.
GPU Pro Tip: Fast Histograms Using Shared Atomics on Maxwell
Histograms are an important data representation with many applications in computer vision, data analytics and medical imaging. A histogram is a graphical representation of the data distribution across predefined bins. The input data set and the number of bins can vary greatly depending on the domain, so let’s focus on one of the most common use cases: an image histogram using 256 bins for each color channel. Even though we’ll use a specific problem setup, the same algorithms can benefit other computational domains as well.
A basic serial image histogram computation is relatively simple. For each pixel of the image and for each RGB color channel we find a corresponding integer bin from 0 to 255 and increment its value. Atomic operations are a natural way of implementing histograms on parallel architectures. Depending on the input distribution, some bins will be used much more than others, so it is necessary to support efficient accumulation of the values across the full memory hierarchy. This is similar to reduction and scan operations, but the main challenge with histograms is that the output location for each element is not known prior to reading its value. Therefore, it is impossible to create a generic parallel accumulation scheme that completely avoids collisions. Histograms are now much easier to handle on GPU architectures thanks to the improved atomics performance in Kepler and native support of shared memory atomics in Maxwell.
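For reference, a serial version might look like the following sketch (it assumes interleaved 8-bit RGB input; the GPU kernels below instead scale floating-point channel values to bin indices):
#include <cstring>
// Serial reference: one 256-bin histogram per RGB channel.
void histogram_cpu(const unsigned char *rgb, int num_pixels, unsigned int hist[3][256])
{
    std::memset(hist, 0, 3 * 256 * sizeof(unsigned int));
    for (int i = 0; i < num_pixels; ++i) {
        ++hist[0][rgb[3 * i + 0]];   // red
        ++hist[1][rgb[3 * i + 1]];   // green
        ++hist[2][rgb[3 * i + 2]];   // blue
    }
}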
Our histogram implementation has two phases and two corresponding CUDA C++ kernels, as Figure 1 shows. In the first phase each CUDA thread block processes a region of the image and accumulates a corresponding local histogram, storing the local histogram in global memory at the end of the phase. The second kernel accumulates all per-block histograms into the final histogram stored in global memory. The work separation between blocks in the first phase reduces contention when accumulating values into the same bin.
For the first kernel we explored two implementations: one that stores per-block local histograms in global memory, and one that stores them in shared memory. Using the shared memory significantly reduces the expensive global memory traffic but requires efficient hardware for shared memory atomics. We compare the two approaches to investigate performance differences on the Kepler and Maxwell architectures.
Here is a sample kernel code for per-block accumulation using global atomics.
__global__ void histogram_gmem_atomics(const IN_TYPE *in, int width, int height, unsigned int *out)
{
// pixel coordinates
int x = blockIdx.x * blockDim.x + threadIdx.x;
int y = blockIdx.y * blockDim.y + threadIdx.y;
// grid dimensions
int nx = blockDim.x * gridDim.x;
int ny = blockDim.y * gridDim.y;
// linear thread index within 2D block
int t = threadIdx.x + threadIdx.y * blockDim.x;
// total threads in 2D block
int nt = blockDim.x * blockDim.y;
// linear block index within 2D grid
int g = blockIdx.x + blockIdx.y * gridDim.x;
// initialize temporary accumulation array in global memory
unsigned int *gmem = out + g * NUM_PARTS;
for (int i = t; i < 3 * NUM_BINS; i += nt) gmem[i] = 0;
// process pixels
// updates our block's partial histogram in global memory
for (int col = x; col < width; col += nx)
for (int row = y; row < height; row += ny) {
unsigned int r = (unsigned int)(256 * in[row * width + col].x);
unsigned int g = (unsigned int)(256 * in[row * width + col].y);
unsigned int b = (unsigned int)(256 * in[row * width + col].z);
atomicAdd(&gmem[NUM_BINS * 0 + r], 1);
atomicAdd(&gmem[NUM_BINS * 1 + g], 1);
atomicAdd(&gmem[NUM_BINS * 2 + b], 1);
}
}
The shared atomics version is similar. The main difference is the temporary array storage and the copy of the local histogram into global memory.
__global__ void histogram_smem_atomics(const IN_TYPE *in, int width, int height, unsigned int *out)
{
// pixel coordinates
int x = blockIdx.x * blockDim.x + threadIdx.x;
int y = blockIdx.y * blockDim.y + threadIdx.y;
// grid dimensions
int nx = blockDim.x * gridDim.x;
int ny = blockDim.y * gridDim.y;
// linear thread index within 2D block
int t = threadIdx.x + threadIdx.y * blockDim.x;
// total threads in 2D block
int nt = blockDim.x * blockDim.y;
// linear block index within 2D grid
int g = blockIdx.x + blockIdx.y * gridDim.x;
// initialize temporary accumulation array in shared memory
__shared__ unsigned int smem[3 * NUM_BINS + 3];
for (int i = t; i < 3 * NUM_BINS + 3; i += nt) smem[i] = 0;
__syncthreads();
// process pixels
// updates our block's partial histogram in shared memory
for (int col = x; col < width; col += nx)
for (int row = y; row < height; row += ny) {
unsigned int r = (unsigned int)(256 * in[row * width + col].x);
unsigned int g = (unsigned int)(256 * in[row * width + col].y);
unsigned int b = (unsigned int)(256 * in[row * width + col].z);
atomicAdd(&smem[NUM_BINS * 0 + r + 0], 1);
atomicAdd(&smem[NUM_BINS * 1 + g + 1], 1);
atomicAdd(&smem[NUM_BINS * 2 + b + 2], 1);
}
__syncthreads();
// write partial histogram into the global memory
out += g * NUM_PARTS;
for (int i = t; i < NUM_BINS; i += nt) {
out[i + NUM_BINS * 0] = smem[i + NUM_BINS * 0];
out[i + NUM_BINS * 1] = smem[i + NUM_BINS * 1 + 1];
out[i + NUM_BINS * 2] = smem[i + NUM_BINS * 2 + 2];
}
}
In the second kernel we accumulate all partial histograms into the global one. This kernel is the same for both global and shared atomics implementations.
__global__ void histogram_final_accum(const unsigned int *in, int n, unsigned int *out)
{
int i = blockIdx.x * blockDim.x + threadIdx.x;
if (i < 3 * NUM_BINS) {
unsigned int total = 0;
for (int j = 0; j < n; j++)
total += in[i + NUM_PARTS * j];
out[i] = total;
}
}
Now let’s pick a few interesting test cases to exercise our GPU implementation. To cover a spectrum of input data, we split our test images into two categories: synthetic and real-world images.
The first data set represents random color distributions with varying entropy. In these tests we generate random distributions with progressively more histogram bin collisions. For example, 100% entropy means all bins have equal probability, and this is arguably the best scenario for the atomics implementation. 0% entropy represents the all-to-one collision case, where all pixels have the same value. This is the worst-case scenario for atomics, since all threads try to write to the same address.
Aside from synthetic tests, it makes sense to include real images in our benchmarks as well, because they are more likely to represent memory access patterns seen in real-world histogram applications. Real photos usually contain regions with similar color patterns, and these regions can generate many collisions for the atomic operations, challenging our approach.
Figures 2 and 3 show the benchmarking results for the two image sets on the Kepler (GeForce GTX TITAN, GK110) and Maxwell (GeForce GTX TITAN X, GM200) architectures, respectively. Each image has the same 1080p resolution. The plots show timings for our two implementations compared across the various input sets.
As we can see from the Kepler performance plots, the global atomics perform better than shared in most cases, except the images with high entropy. The Kepler architecture improved (vs. the previous Fermi architecture) global memory atomics performance by resolving conflicts in L2. On the other hand, Kepler emulates shared memory atomics in software, and high collision tests and real images suffer from serialization issues. In the 100% entropy case (white noise) we have perfect bin distribution and atomic conflicts do not play a significant role in performance; in this special case the shared memory version helps save bandwidth and outperforms the global atomics.
On Maxwell, the global atomics implementation performs similarly to Kepler. However, the Maxwell architecture features hardware support for shared memory atomics, and we can clearly see that in all cases the shared atomics version performs best. The shared atomics histogram implementation is almost 2x faster than the global atomics version on Maxwell. Moreover, the performance is very stable across different workloads, including both synthetic and real images. This stable performance is very important in real-time histogram applications in computer vision.
To summarize, histograms are easy to implement with shared memory atomics. This approach can attain high performance on Maxwell due to its native support for shared atomics. It is also possible to improve histogram performance further by compressing the bin identifiers into {bin_id, count} pairs for each thread using run-length encoding (RLE). This technique reduces the size of the aggregation problem and provides a speed-up over the standard shared/global atomic implementations. All histogram experiments are available in the experimental master branch of the CUB library.
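As a hypothetical sketch of the per-thread RLE idea (not the CUB implementation), a thread striding over pixels could hold a {bin_id, count} pair for its current run and only issue an atomic when the bin changes; the caller flushes the final run after its pixel loop:
// Accumulate one new bin observation into a per-thread run, flushing the
// previous run to the shared-memory histogram when the bin changes.
__device__ void accumulate_rle(unsigned int *smem_hist, unsigned int bin,
                               unsigned int &run_bin, unsigned int &run_count)
{
    if (run_count > 0 && bin == run_bin) {
        ++run_count;                                    // extend the current run
    } else {
        if (run_count > 0)
            atomicAdd(&smem_hist[run_bin], run_count);  // flush the previous run
        run_bin = bin;                                  // start a new run
        run_count = 1;
    }
}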
CUTLASS: Fast Linear Algebra in CUDA C++
Update May 21, 2018: CUTLASS 1.0 is now available as Open Source software at the CUTLASS repository. CUTLASS 1.0 has changed substantially from our preview release described in the blog post below. We have decomposed the structure of the GEMM computation into deeper, structured primitives for loading data, computing predicate masks, streaming data at each level of the GEMM hierarchy, and updating the output matrix. CUTLASS 1.0 is described in the Doxygen documentation and our talk at the GPU Technology Conference 2018.
Matrix multiplication is a key computation within many scientific applications, particularly those in deep learning. Many operations in modern deep neural networks are either defined as matrix multiplications or can be cast as such.
As an example, the NVIDIA cuDNN library implements convolutions for neural networks using various flavors of matrix multiplication, such as the classical formulation of direct convolution as a matrix product between image-to-column and filter datasets [1]. Matrix multiplication is also the core routine when computing convolutions based on Fast Fourier Transforms (FFT) [2] or the Winograd approach [3].
When constructing cuDNN, we began with our high-performance implementations of general matrix multiplication (GEMM) in the cuBLAS library, supplementing and tailoring them to efficiently compute convolution. Today, our ability to adapt these GEMM strategies and algorithms is critical to delivering the best performance for many different problems and applications within deep learning.
With CUTLASS, we would like to give everyone the techniques and structures they need to develop new algorithms in CUDA C++ using high-performance GEMM constructs as building blocks. The flexible and efficient application of dense linear algebra is crucial within deep learning and the broader GPU computing ecosystem.
Introducing CUTLASS
Today, we are introducing a preview of CUTLASS (CUDA Templates for Linear Algebra Subroutines), a collection of CUDA C++ templates and abstractions for implementing high-performance GEMM computations at all levels and scales within CUDA kernels. Unlike other templated GPU libraries for dense linear algebra (e.g., the MAGMA library [4]), the purpose of CUTLASS is to decompose the “moving parts” of GEMM into fundamental components abstracted by C++ template classes, allowing programmers to easily customize and specialize them within their own CUDA kernels. We are releasing our CUTLASS source code on GitHub as an initial exposition of CUDA GEMM techniques that will evolve into a template library API.
Our CUTLASS primitives include extensive support for mixed-precision computations, providing specialized data-movement and multiply-accumulate abstractions for handling 8-bit integer, half-precision floating point (FP16), single-precision floating point (FP32), and double-precision floating point (FP64) types. One of the most exciting features of CUTLASS is an implementation of matrix multiplication that runs on the new Tensor Cores in the Volta architecture using the WMMA API. Tesla V100’s Tensor Cores are programmable matrix-multiply-and-accumulate units that can deliver up to 125 Tensor TFLOP/s with high efficiency.
Efficient Matrix Multiplication on GPUs
GEMM computes C = alpha A * B + beta C, where A, B, and C are matrices. A is an M-by-K matrix, B is a K-by-N matrix, and C is an M-by-N matrix. For simplicity, let us assume scalars alpha=beta=1 in the following examples. Later, we will show how to implement custom element-wise operations with CUTLASS supporting arbitrary scaling functions.
The simplest implementation consists of three nested loops:
for (int i = 0; i < M; ++i)
    for (int j = 0; j < N; ++j)
        for (int k = 0; k < K; ++k)
            C[i][j] += A[i][k] * B[k][j];
The element of C at position (i, j) is the K-element dot product of the i-th row of A and the j-th column of B. Ideally, performance should be limited by the arithmetic throughput of the processor. Indeed, for large square matrices where M=N=K, the number of math operations in a product of matrices is O(N^3) while the amount of data needed is O(N^2), yielding a compute intensity on the order of N. However, taking advantage of the theoretical compute intensity requires reusing every element O(N) times. Unfortunately, the above “inner product” algorithm depends on holding a large working set in fast on-chip caches, which results in thrashing as M, N, and K grow.
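As a rough numerical check of this claim (values chosen here for illustration, not taken from the post), consider square FP32 matrices with M = N = K = 4096: the product performs about 2 x 4096^3 ≈ 1.4 x 10^11 math operations on about 3 x 4096^2 x 4 bytes ≈ 0.2 GB of matrix data, that is, several hundred operations per byte moved, which is only reachable if each element is reused on the order of N times.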
A better formulation permutes the loop nest by structuring the loop over the K dimension as the outermost loop. This form of the computation loads a column of A and a row of B once, computes its outer product, and accumulates the result of this outer product in the matrix C. Afterward, this column of A and row of B are never used again.
for (int k = 0; k < K; ++k)         // K dimension now outer-most loop
    for (int i = 0; i < M; ++i)
        for (int j = 0; j < N; ++j)
            C[i][j] += A[i][k] * B[k][j];
One concern with this approach is that it requires all M-by-N elements of C to be live to store the results of each multiply-accumulate instruction, ideally in memory that can be written as fast as the multiply-accumulate instruction can be computed. We can reduce the working set size of C by partitioning it into tiles of size Mtile-by-Ntile that are guaranteed to fit into on-chip memory. Then we apply the “outer product” formulation to each tile. This leads to the following loop nest.
for (int m = 0; m < M; m += Mtile)              // iterate over M dimension
    for (int n = 0; n < N; n += Ntile)          // iterate over N dimension
        for (int k = 0; k < K; ++k)
            for (int i = 0; i < Mtile; ++i)     // compute one tile
                for (int j = 0; j < Ntile; ++j) {
                    int row = m + i;
                    int col = n + j;
                    C[row][col] += A[row][k] * B[k][col];
                }
For each tile of C, tiles of A and B are fetched exactly once, which achieves O(N) compute intensity. The size of each tile of C may be chosen to match the capacity of the L1 cache or registers of the target processor, and the outer loops of the nest may be trivially parallelized. This is a great improvement!
Further restructuring offers additional opportunities to exploit both locality and parallelism. Rather than exclusively accumulate vector outer products, we can accumulate the products of matrices by stepping through the K dimension in blocks. We refer to this concept generally as accumulating matrix products.
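A sketch of this restructuring follows (Ktile is a placeholder name for the K-dimension block size; M, N, and K are assumed to be multiples of the tile sizes):
for (int m = 0; m < M; m += Mtile)                        // iterate over M dimension
    for (int n = 0; n < N; n += Ntile)                    // iterate over N dimension
        for (int kblock = 0; kblock < K; kblock += Ktile) // step through K in blocks
            // accumulate the product of an Mtile-by-Ktile block of A and a
            // Ktile-by-Ntile block of B into the Mtile-by-Ntile tile of C
            for (int k = kblock; k < kblock + Ktile; ++k)
                for (int i = 0; i < Mtile; ++i)
                    for (int j = 0; j < Ntile; ++j)
                        C[m + i][n + j] += A[m + i][k] * B[k][n + j];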
Hierarchical GEMM Structure
CUTLASS applies the tiling structure to implement GEMM efficiently for GPUs by decomposing the computation into a hierarchy of thread block tiles, warp tiles, and thread tiles and applying the strategy of accumulating matrix products. This hierarchy closely mirrors the NVIDIA CUDA programming model, as Figure 1 shows. Here, you can see data movement from global memory to shared memory (matrix to thread block tile), from shared memory to the register file (thread block tile to warp tile), and from the register file to the CUDA cores for computation (warp tile to thread tile).
Thread Block Tile
Each thread block computes its part of the output GEMM by iteratively loading blocks of matrix data from the input matrices and computing an accumulated matrix product (C += A * B). Figure 2 shows the computation performed by a single thread block and highlights the blocks of data used in one iteration of its main loop.
The CUDA thread block tile structure is further partitioned into warps (groups of threads that execute together in SIMT fashion). Warps provide a helpful organization for the GEMM computation and are an explicit part of the WMMA API, as we shall discuss shortly.
Figure 3 shows a detailed view of the structure of one block-level matrix product. Tiles of A and B are loaded from global memory and stored into shared memory accessible by all warps. The thread block’s output tile is spatially partitioned across warps as Figure 3 shows. We refer to storage for this output tile as accumulators because it stores the result of accumulated matrix products. Each accumulator is updated once per math operation, so it needs to reside in the fastest memory in the SM: the register file.
The parameters BlockItems{X,Y,K} are compile-time constants that the programmer specifies to tune the GEMM computation for the target processor and the aspect ratio of the specific GEMM configuration (e.g. M, N, K, data type, etc.). In the figure, we illustrate an eight-warp, 256-thread thread block which is typical for the large SGEMM (FP32 GEMM) tile size implemented in CUTLASS.
Warp Tile
Once data is stored in shared memory, each warp computes a sequence of accumulated matrix products by iterating over the K dimension of the thread block tile, loading submatrices (or fragments) from shared memory, and computing an accumulated outer product. Figure 4 shows a detailed view. The sizes of the fragments are typically very small in the K dimension to maximize the compute intensity relative to the amount of data loaded from shared memory, thereby avoiding shared memory bandwidth as a bottleneck.
Figure 4 also depicts data sharing from shared memory among several warps. Warps in the same row of the thread block load the same fragments of A, and warps in the same column load the same fragments of B.
We note that the warp-centric organization of the GEMM structure is effective in implementing an efficient GEMM kernel but does not rely on implicit warp-synchronous execution for synchronization. CUTLASS GEMM kernels are well-synchronized with calls to __syncthreads() as appropriate.
Thread Tile
The CUDA Programming Model is defined in terms of thread blocks and individual threads. Consequently, the warp structure is mapped onto operations performed by individual threads. Threads cannot access each other’s registers, so we must choose an organization that enables values held in registers to be reused for multiple math instructions executed by the same thread. This leads to a 2D tiled structure within a thread as the detailed view in Figure 5 shows. Each thread issues a sequence of independent math instructions to the CUDA cores and computes an accumulated outer product.
In Figure 5, the upper left quadrant of the warp is shaded in grey. The 32 cells correspond to the 32 threads within a warp. This arrangement leads to multiple threads within the same row or the same column fetching the same elements of the A and B fragments, respectively. To maximize compute intensity, this basic structure can be replicated to form the full warp-level accumulator tile, yielding an 8-by-8 overall thread tile computed from an outer product of 8-by-1 and 1-by-8 fragments. This is illustrated by the four accumulator tiles shown in green.
WMMA GEMM
The warp tile structure may be implemented with the CUDA Warp Matrix Multiply-Accumulate API (WMMA) introduced in CUDA 9 to target the Volta V100 GPU’s Tensor Cores. For more detail on the WMMA API, see the post Programming Tensor Cores in CUDA 9.
Each Tensor Core provides a 4x4x4 matrix processing array which performs the operation D = A * B + C, where A, B, C and D are 4×4 matrices as Figure 6 shows. The matrix multiply inputs A and B are FP16 matrices, while the accumulation matrices C and D may be FP16 or FP32 matrices.
In effect, the WMMA API is an alternative to the thread tile structure described in the previous section for warp-wide matrix multiply-accumulate operations. Rather than decomposing the warp tile structure into scalar and vector elements owned by individual threads, the WMMA API provides an abstraction to the programmer for warp-cooperative matrix fragment load / store and multiply-accumulate math operations.
Figure 7 shows the warp tile structure that targets the CUDA WMMA API. Calls to wmma::load_matrix_sync load fragments of A and B into instances of the nvcuda::wmma::fragment<> template, and the accumulator elements for the warp tile are structured as an array of nvcuda::wmma::fragment<accumulator> objects. These fragments store a 2D matrix distributed among the threads of the warp. Finally, calls to nvcuda::wmma::mma_sync() for each accumulator fragment (and corresponding fragments from A and B) compute the warp-wide matrix multiply-accumulate operation using Tensor Cores.
CUTLASS implements a GEMM based on the WMMA API in the file block_task_wmma.h. The warp tile must have dimensions that are multiples of matrix multiply-accumulate shapes defined by the nvcuda::wmma templates for the target CUDA Compute Capability. In CUDA 9.0, the fundamental WMMA size is 16-by-16-by-16.
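To make the API concrete, here is a minimal, hypothetical warp-level kernel (not taken from CUTLASS) that computes a single 16-by-16-by-16 product with one warp; it assumes compilation for sm_70 or later and a launch with 32 threads:
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;
// One warp computes C (16x16, FP32) = A (16x16, FP16) * B (16x16, FP16)
// with a single Tensor Core matrix multiply-accumulate.
__global__ void wmma_16x16x16(const half *A, const half *B, float *C)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;
    wmma::fill_fragment(c_frag, 0.0f);                   // zero the accumulators
    wmma::load_matrix_sync(a_frag, A, 16);               // warp-cooperative loads,
    wmma::load_matrix_sync(b_frag, B, 16);               // leading dimension 16
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);      // Tensor Core MMA
    wmma::store_matrix_sync(C, c_frag, 16, wmma::mem_row_major);
}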
Complete GEMM
The complete GEMM structure can be expressed as nested loops executed by the threads of a thread block, as the following listing shows. All loops except the outermost “main” loop have constant iteration counts and can be fully unrolled by the compiler. For brevity, address and index calculations are omitted here but are explained in the CUTLASS source code.
// Device function to compute a thread block’s accumulated matrix product
__device__ void block_matrix_product(int K_dim) {
// Fragments used to store data fetched from SMEM
value_t frag_a[ThreadItemsY];
value_t frag_b[ThreadItemsX];
// Accumulator storage
accum_t accumulator[ThreadItemsX][ThreadItemsY];
// GEMM Mainloop - iterates over the entire K dimension - not unrolled
for (int kblock = 0; kblock < K_dim; kblock += BlockItemsK) {
// Load A and B tiles from global memory and store to SMEM
//
// (not shown for brevity - see the CUTLASS source for more detail)
...
__syncthreads();
// Warp tile structure - iterates over the Thread Block tile
#pragma unroll
for (int warp_k = 0; warp_k < BlockItemsK; warp_k += WarpItemsK) {
// Fetch frag_a and frag_b from SMEM corresponding to k-index
//
// (not shown for brevity - see CUTLASS source for more detail)
...
// Thread tile structure - accumulate an outer product
#pragma unroll
for (int thread_x = 0; thread_x < ThreadItemsX; ++thread_x) {
#pragma unroll
for (int thread_y=0; thread_y < ThreadItemsY; ++thread_y) {
accumulator[thread_x][thread_y] += frag_a[thread_y] * frag_b[thread_x];
}
}
}
__syncthreads();
}
}
WarpItemsK refers to the target math operation’s dot product size. For SGEMM (FP32 GEMM), DGEMM (FP64), and HGEMM (FP16), the dot product length is 1 for scalar multiply-accumulate instructions. For IGEMM (8-bit integer GEMM), CUTLASS targets the four-element integer dot product instruction (IDP4A) with WarpItemsK=4. For WMMA-based GEMM, we choose the K dimension of the wmma::fragment<> template. Currently, this is defined as WarpItemsK=16.
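For reference, the __dp4a intrinsic computes a four-way dot product of the 8-bit lanes packed into two 32-bit operands and adds it to a 32-bit accumulator (the wrapper below is a hypothetical sketch; the intrinsic requires compute capability 6.1 or later):
// result = acc + a0*b0 + a1*b1 + a2*b2 + a3*b3, where a and b each pack
// four signed 8-bit values into one 32-bit integer.
__device__ int accumulate_dp4a(int packed_a, int packed_b, int acc)
{
    return __dp4a(packed_a, packed_b, acc);
}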
Software Pipelining
Tiled matrix product makes extensive use of the register file to hold fragments and accumulator tiles as well as large shared memory allocations. Relatively high demand of on-chip storage limits occupancy, the maximum number of thread blocks that may run concurrently on one SM. Consequently, GEMM implementations can fit far fewer warps and thread blocks in each SM than is typical for GPU computing workloads. We use software pipelining to hide data movement latency by executing all stages of the GEMM hierarchy concurrently within a loop and feeding the output of each stage to its dependent stage during the next iteration as Figure 8 shows.
The GEMM CUDA kernel issues three concurrent streams of operations within the pipeline which correspond to the stages of dataflow within the GEMM hierarchy (Figure 1). The relative size of each stage in the illustration indicates whether the operation’s latency is long or short, and orange arrows highlight data dependencies between the stages of each stream. A call to __syncthreads() after data is stored to shared memory synchronizes all warps so they may read the shared memory without race conditions. The final math stage of the pipeline is overlapped with a load from shared memory which feeds data to the first math stage of the next main loop iteration.
Practically, CUDA programmers implement instruction-level concurrency among the pipe stages by interleaving CUDA statements for each stage in the program text and relying on the CUDA compiler to issue the proper instruction schedule in the compiled code. Extensive use of #pragma unroll and compile-time constants enables the CUDA compiler to unroll loops and map array elements to registers, both of which are critical to a tunable, efficient implementation. See block_task::consume_tile() for an example.
We use double buffering at each level of the GEMM hierarchy to enable the upstream pipeline stage to write data to shared memory or registers while the dependent pipeline stage loads from its storage elements. Notably, this eliminates the second __syncthreads() since one shared memory buffer is written while the other is read. The cost for double buffering is twice the shared memory capacity and twice the number of registers used to hold shared memory fetches.
The actual amount of latency hiding available depends on the sizes of the thread block, warp, and thread tiles as well as the throughput of the active math functional unit within the SM. While larger tiles yield more opportunity for data reuse and may offer more latency hiding, the physical capacities of the SM register file and shared memory limit the maximum tile size. Fortunately, NVIDIA GPUs have sufficient storage resources to execute GEMM tiles large enough to be math limited!
CUTLASS
CUTLASS is an implementation of the hierarchical GEMM structure as CUDA C++ template classes. We intend for these templates to be included in existing device-side CUDA kernels and functions, but we also provide a sample kernel and launch interface to get up and running quickly. Like CUB, extensive use of template arguments and compile-time constants enables CUTLASS to be tunable and flexible.
CUTLASS implements abstractions for the operations needed for efficient GEMM implementations. Specialized “tile loaders” move data efficiently from global memory into shared memory, accommodating the layouts of the source data while also enabling efficient, conflict-free loading into registers. For some layouts, IGEMM requires some restructuring of data to target CUDA’s 4-element integer dot product instruction, and this is done as the data is stored to SMEM.
CUTLASS GEMM Device Functions
The following example from dispatch.h defines a block_task type and instantiates a GEMM for floating-point data assuming column-major input matrices. The block_task_policy_t defines GEMM tile sizes and is discussed at length in the next section.
/// CUTLASS SGEMM example
__global__ void gemm_kernel(
float *C,
float const *A,
float const *B,
int M,
int N,
int K) {
// Define the GEMM tile sizes - discussed in next section
typedef block_task_policy <
128, // BlockItemsY: Height in rows of a tile
32, // BlockItemsX - Width in columns of a tile
8, // ThreadItemsY - Height in rows of a thread-tile
4, // ThreadItemsX - Width in columns of a thread-tile
8, // BlockItemsK - Depth of a tile
true, // UseDoubleScratchTiles - whether to double-buffer SMEM
block_raster_enum::Default // Block rasterization strategy
> block_task_policy_t;
// Define the epilogue functor
typedef gemm::blas_scaled_epilogue<float, float, float> epilogue_op_t ;
// Define the block_task type.
typedef block_task <
block_task_policy_t,
float,
float,
matrix_transform_t::NonTranspose,
4,
matrix_transform_t::NonTranspose,
4,
epilogue_op_t,
4,
true
> block_task_t;
// Declare statically-allocated shared storage
__shared__ block_task_t::scratch_storage_t smem;
// Construct and run the task
block_task_t(
reinterpret_cast(&smem),
&smem,
A,
B,
C,
epilogue_op_t(1, 0),
M,
N,
K).run();
}
The shared memory allocation smem is used by the block_task_t instance to store the thread block-level tiles in the matrix product computation.
epilogue_op_t is a template argument that specifies a functor which is used to update the output matrix after the matrix multiply operation is complete. This lets you easily compose matrix multiply with custom element-wise operations, as we describe in more detail later. CUTLASS provides the gemm::blas_scaled_epilogue functor implementation to compute the familiar GEMM operation C = alpha * AB + beta * C (defined in epilogue_function.h).
CUTLASS GEMM Policies
CUTLASS organizes compile-time constants specifying tile sizes at each level of the GEMM hierarchy as a specialization of the gemm::block_task_policy template which has the following declaration.
template <
int BlockItemsY, /// Height in rows of a tile in matrix C
int BlockItemsX, /// Width in columns of a tile in matrix C
int ThreadItemsY, /// Height in rows of a thread-tile in C
int ThreadItemsX, /// Width in columns of a thread-tile in C
int BlockItemsK, /// Number of K-split subgroups in a block
bool UseDoubleScratchTiles, /// Whether to double buffer shared memory
grid_raster_strategy::kind_t RasterStrategy /// Grid rasterization strategy
> struct block_task_policy;
Policies for several valid GEMM blocking structures are defined in dispatch_policies.h, and we show one such policy below. This policy decomposes a matrix multiply operation into CUDA blocks, each spanning a 128-by-32 tile of the output matrix. The thread block tiles storing A and B have size 128-by-8 and 8-by-32, respectively. This policy is optimized for GEMM computations in which the C matrix is relatively narrow in its N dimension.
/// GEMM task policy specialization for tall SGEMM
template <>
struct gemm_policy<float, float, problem_size_t::Tall> :
block_task_policy<
128, // BlockItemsY - Height in rows of a tile
32, // BlockItemsX - Width in columns of a tile
8, // ThreadItemsY - Height in rows of a thread-tile
4, // ThreadItemsX - Width in columns of a thread-tile
8, // BlockItemsK - Depth of a tile
true, // UseDoubleScratchTiles - whether to double-buffer SMEM
grid_raster_strategy::Default> // Grid rasterization strategy
{};
The sizes of the thread tile fragments are ThreadItemsY-by-1 and ThreadItemsX-by-1, respectively. In the case of the example above, these are given as 8-by-1 vectors from A and 4-by-1 vectors from B.
With the policy type defined, we can define the type for gemm::block_task, a CUTLASS GEMM. This template has the following argument list.
template <
/// Parameterization of block_task_policy
typename block_task_policy_t,
/// Multiplicand value type (matrices A and B)
typename value_t,
/// Accumulator value type (matrix C and scalars)
typename accum_t,
/// Layout enumerant for matrix A
matrix_transform_t::kind_t TransformA,
/// Alignment (in bytes) for A operand
int LdgAlignA,
/// Layout enumerant for matrix B
matrix_transform_t::kind_t TransformB,
/// Alignment (in bytes) for B operand
int LdgAlignB,
/// Epilogue functor applied to matrix product
typename epilogue_op_t,
/// Alignment (in bytes) for C operand
int LdgAlignC,
/// Whether GEMM supports matrix sizes other than mult of BlockItems{XY}
bool Ragged
> struct block_task;
value_t and accum_t specify the types of source operands and accumulator matrices, respectively. TransformA and TransformB specify the layout of operands A and B, respectively. Though we have not discussed matrix layout in detail, CUTLASS supports all combinations of row-major and column-major input matrices.
LdgAlignA and LdgAlignB specify guaranteed alignment, which enables the CUTLASS device code to use vector memory operations. An alignment of 8 bytes, for example, permits CUTLASS to load elements of type float in two-element vectors. This reduces code size and improves performance by reducing the number of memory operations in flight within the GPU. Finally, the Ragged argument indicates whether matrix dimensions may be arbitrary sizes (while still satisfying the alignment guarantees). If this template argument is false, matrices A, B, and C are all expected to have dimensions that are multiples of the tile parameters in block_task_policy.
Fusing Element-wise Operations with SGEMM
Deep Learning computations typically perform simple element-wise operations after GEMM computations, such as computing an activation function. These bandwidth-limited layers can be fused into the end of the GEMM operation to eliminate an extra kernel launch and avoid a round trip through global memory.
The following example demonstrates a simple application of the GEMM template that adds a bias term to the scaled matrix product and then applies the ReLU function to clamp the result to non-negative values. By separating the epilogue into a functor, passing arguments such as pointers to additional matrix and tensor arguments or additional scale factors is straightforward and does not encumber the GEMM implementation.
First, we define a class that implements the gemm::epilogue_op concept. Some data members (such as the alpha and beta scale factors) and methods aren’t shown here, but the element-wise bias and ReLU operations appear in the implementation of the function call operator.
template <typename accum_t, typename scalar_t, typename output_t>
struct fused_bias_relu_epilogue {
// Data members pass additional arguments to epilogue
scalar_t const *Bias;
accum_t threshold;
/// Constructor callable on host and device initializes data members
inline __device__ __host__
fused_bias_relu_epilogue(
scalar_t const *Bias,
accum_t threshold
): Bias(Bias), threshold(threshold) { }
/// Applies bias + ReLu operation
inline __device__ __host__
output_t operator()(
accum_t accumulator, /// element of matrix product result
output_t c, /// element of source accumulator matrix C
size_t idx /// index of c element; may be used to load
/// elements from other identically-
/// structured matrices
) const {
// Compute the result by scaling the matrix product, adding bias,
// and adding the scaled accumulator element.
accum_t result = output_t(
alpha * scalar_t(accumulator) +
Bias[idx] + // load and add the bias
beta * scalar_t(c)
);
// apply clamping function
return max(threshold, result);
}
};
Then we apply that operator as the epilogue operation.
// New: define type for custom epilogue functor
typedef fused_bias_relu_epilogue<float, float, float>
bias_relu_epilogue_t;
/// Computes GEMM fused with Bias and ReLu operation
__global__ void gemm_bias_relu(
..., /// GEMM parameters not shown
bias_relu_epilogue_t bias_relu_op) { /// bias_relu_op constructed
/// by caller
// Define the block_task type.
typedef block_task<
block_task_policy_t, // same policy as previous example
float,
float,
matrix_transform_t::NonTranspose,
4,
matrix_transform_t::NonTranspose,
4,
bias_relu_epilogue_t, // New: custom epilogue functor type
4,
true
> block_task_t ;
// Declare statically-allocated shared storage
__shared__ block_task_t::scratch_storage_t smem;
// Construct and run the task
block_task_t(
reinterpret_cast(&smem),
&smem,
A,
B,
C,
bias_relu_op, // New: custom epilogue object
M,
N,
K).run();
}
This simple example demonstrates the value of combining generic programming techniques with efficient GEMM implementations.
Tesla V100 (Volta) Performance
CUTLASS is very efficient, with performance comparable to cuBLAS for scalar GEMM computations. Figure 9 shows CUTLASS performance relative to cuBLAS compiled with CUDA 9.0, running on an NVIDIA Tesla V100 GPU for large matrix dimensions (M=10240, N=K=4096). The comparison covers each compute data type CUTLASS supports and all permutations of row-major and column-major layouts for the input operands.
In most cases, CUTLASS C++ achieves within a few percent of the performance of the hand-tuned assembly kernels in cuBLAS. For WMMA GEMM (WGEMM in Figure 9), CUTLASS does not yet achieve the same performance as cuBLAS, but we are working closely with the CUDA compiler and GPU architecture teams to develop techniques to reach a similar level of performance in CUDA code.
Try CUTLASS Today!
There are many interesting details we haven’t addressed in this blog post, so we recommend you check out the CUTLASS repository and try out CUTLASS yourself. The cutlass_test sample program demonstrates calling CUTLASS GEMM kernels, verifying their result, and measuring their performance. Look forward to future updates from us, and feel free to send us feedback or reach out using the comments below!
Acknowledgements
Special thanks to Joel McCormack for his technical insight and explication, particularly with respect to NVIDIA microarchitecture and the techniques employed by cuBLAS and cuDNN.
References
[1] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, Evan Shelhamer. cuDNN: Efficient Primitives for Deep Learning, arXiv:1410.0759, 2014.
[2] Michael Mathieu, Mikael Henaff, Yann LeCun. Fast training of Convolutional Networks through FFTs. arXiv:1312.5851. 2013.
[3] Andrew Lavin, Scott Gray. Fast Algorithms for Convolutional Neural Networks. arXiv:1509.09308. 2015.
[4] MAGMA. http://icl.cs.utk.edu/magma/index.html
Cooperative Groups: Flexible CUDA Thread Programming
In efficient parallel algorithms, threads cooperate and share data to perform collective computations. To share data, the threads must synchronize. The granularity of sharing varies from algorithm to algorithm, so thread synchronization should be flexible. Making synchronization an explicit part of the program ensures safety, maintainability, and modularity. CUDA 9 introduces Cooperative Groups, which aims to satisfy these needs by extending the CUDA programming model to allow kernels to dynamically organize groups of threads.
Historically, the CUDA programming model has provided a single, simple construct for synchronizing cooperating threads: a barrier across all threads of a thread block, as implemented with the __syncthreads() function. However, CUDA programmers often need to define and synchronize groups of threads smaller than thread blocks in order to enable greater performance, design flexibility, and software reuse in the form of “collective” group-wide function interfaces.
The Cooperative Groups programming model describes synchronization patterns both within and across CUDA thread blocks. It provides CUDA device code APIs for defining, partitioning, and synchronizing groups of threads. It also provides host-side APIs to launch grids whose threads are all guaranteed to be executing concurrently to enable synchronization across thread blocks. These primitives enable new patterns of cooperative parallelism within CUDA, including producer-consumer parallelism and global synchronization across the entire thread grid or even multiple GPUs.
The expression of groups as first-class program objects improves software composition: collective functions can take an explicit argument representing the group of participating threads. Consider a library function that imposes requirements on its caller. Explicit groups make these requirements explicit, reducing the chances of misusing the library function. Explicit groups and synchronization help make code less brittle, reduce restrictions on compiler optimization, and improve forward compatibility.
The Cooperative Groups programming model consists of the following elements:
Cooperative Groups Fundamentals
At its simplest, Cooperative Groups is an API for defining and synchronizing groups of threads in a CUDA program. Much of Cooperative Groups (in fact, everything in this post) works on any CUDA-capable GPU compatible with CUDA 9. Specifically, that means Kepler and later GPUs (Compute Capability 3.0+).
To use Cooperative Groups, include its header file.
#include <cooperative_groups.h>
Cooperative Groups types and interfaces are defined in the cooperative_groups C++ namespace, so you can either prefix all names and functions with cooperative_groups::, or load the namespace or its types with using directives.
using namespace cooperative_groups; // or...
using cooperative_groups::thread_group; // etc.
It’s not uncommon to alias it to something shorter. Assume the following namespace alias exists in the examples in this post.
namespace cg = cooperative_groups;
Code containing any intra-block Cooperative Groups functionality can be compiled in the normal way using nvcc (note that many of the examples in this post use C++11 features so you need to add the --std=c++11 option to the compilation command line).
Thread Groups
The fundamental type in Cooperative Groups is thread_group, which is a handle to a group of threads. The handle is only accessible to members of the group it represents. Thread groups expose a simple interface. You can get the size (total number of threads) of a group with the size() method:
unsigned size();
To find the index of the calling thread (between 0 and size()-1) within the group, use the thread_rank() method:
unsigned thread_rank();
Finally, you can check the validity of a group using the is_valid() method.
bool is_valid();
Thread Group Collective Operations
Thread groups provide the ability to perform collective operations among all threads in a group. Collective operations, or simply collectives, are operations that need to synchronize or otherwise communicate amongst a specified set of threads. Because of the need for synchronization, every thread that is identified as participating in a collective must make a matching call to that collective operation. The simplest collective is a barrier, which transfers no data and merely synchronizes the threads in the group. Synchronization is supported by all thread groups. As you’ll learn later in this post, some group types support other collectives.
You can synchronize a group by calling its collective sync() method, or by calling the cooperative_groups::synchronize() function. These perform barrier synchronization among all threads in the group (Figure 2).
g.sync(); // synchronize group g
cg::synchronize(g); // an equivalent way to synchronize g
Here’s a simple example of a parallel reduction device function written using Cooperative Groups. When the threads of a group call it, they cooperatively compute the sum of the values passed by each thread in the group (through the val argument).
using namespace cooperative_groups;
__device__ int reduce_sum(thread_group g, int *temp, int val)
{
int lane = g.thread_rank();
// Each iteration halves the number of active threads
// Each thread adds its partial sum[i] to sum[lane+i]
for (int i = g.size() / 2; i > 0; i /= 2)
{
temp[lane] = val;
g.sync(); // wait for all threads to store
if(lane<i) val += temp[lane + i];
g.sync(); // wait for all threads to load
}
return val; // note: only thread 0 will return full sum
}
Now let’s look at how to create thread groups.
Thread Blocks
If you have programmed with CUDA before, you are familiar with thread blocks, the fundamental unit of parallelism in a CUDA program. Cooperative Groups introduces a new datatype, thread_block, to explicitly represent this concept within the kernel. An instance of thread_block is a handle to the group of threads in a CUDA thread block that you initialize as follows.
thread_block block = this_thread_block();
As with any CUDA program, every thread that executes that line has its own instance of the variable block. Threads with the same value of the CUDA built-in variable blockIdx are part of the same thread block group.
Synchronizing a thread_block group is much like calling __syncthreads(). The following lines of code all do the same thing (assuming all threads of the thread block reach them).
__syncthreads();
block.sync();
cg::synchronize(block);
this_thread_block().sync();
cg::synchronize(this_thread_block());
The thread_block data type extends the thread_group interface with the following block-specific methods.
dim3 group_index(); // 3-dimensional block index within the grid
dim3 thread_index(); // 3-dimensional thread index within the block
These are equivalent to CUDA’s blockIdx and threadIdx, respectively.
Here’s a simple kernel that uses the reduce_sum() device function to compute the sum of all values in an input array. It starts by computing many partial sums in parallel in thread_sum(), where each thread strides through the array computing a partial sum (and uses vector loads for higher memory access efficiency). The kernel then uses thread_block groups for cooperative summation, and atomicAdd() to combine the block sums.
__device__ int thread_sum(int *input, int n)
{
int sum = 0;
for(int i = blockIdx.x * blockDim.x + threadIdx.x;
i < n / 4;
i += blockDim.x * gridDim.x)
{
int4 in = ((int4*)input)[i];
sum += in.x + in.y + in.z + in.w;
}
return sum;
}
__global__ void sum_kernel_block(int *sum, int *input, int n)
{
int my_sum = thread_sum(input, n);
extern __shared__ int temp[];
auto g = this_thread_block();
int block_sum = reduce_sum(g, temp, my_sum);
if (g.thread_rank() == 0) atomicAdd(sum, block_sum);
}
We can launch this function to compute the sum of a 16M-element array like this.
int n = 1<<24;
int blockSize = 256;
int nBlocks = (n + blockSize - 1) / blockSize;
int sharedBytes = blockSize * sizeof(int);
int *sum, *data;
cudaMallocManaged(&sum, sizeof(int));
cudaMallocManaged(&data, n * sizeof(int));
std::fill_n(data, n, 1); // initialize data
cudaMemset(sum, 0, sizeof(int));
sum_kernel_block<<<nBlocks, blockSize, sharedBytes>>>(sum, data, n);
Partitioning Groups
Cooperative Groups provides you the flexibility to create new groups by partitioning existing groups. This enables cooperation and synchronization at a finer granularity. The cg::tiled_partition() function partitions a thread block into multiple “tiles”. Here’s an example that partitions each whole thread block into tiles of 32 threads.
thread_group tile32 = cg::tiled_partition(this_thread_block(), 32);
Each thread that executes the partition will get a handle (in tile32) to one 32-thread group. 32 is a common choice, because it corresponds to a warp: the unit of threads that are scheduled concurrently on a GPU streaming multiprocessor (SM).
Here’s another example where we partition into groups of four threads.
thread_group tile4 = tiled_partition(tile32, 4);
The thread_group objects returned by tiled_partition() are just like any thread group. So, for example, we can do things like this:
if (tile4.thread_rank()==0)
printf("Hello from tile4 rank 0: %d\n",
this_thread_block().thread_rank());
Every fourth thread will print, as in the following.
Hello from tile4 rank 0: 0
Hello from tile4 rank 0: 4
Hello from tile4 rank 0: 8
Hello from tile4 rank 0: 12
...
Modularity
The real power of Cooperative Groups lies in the modularity that arises when you can pass a group as an explicit parameter to a function and depend on a consistent interface across a variety of thread group sizes. This makes it harder to inadvertently cause race conditions and deadlocks by making invalid assumptions about which threads will call a function concurrently. Let me show you an example.
__device__ int sum(int *x, int n)
{
...
__syncthreads();
...
return total;
}
__global__ void parallel_kernel(float *x, int n)
{
if (threadIdx.x < blockDim.x / 2)
sum(x, n); // error: half of threads in block skip
// __syncthreads() => deadlock
}
In the preceding code example, only a portion of the threads of each block call sum(), which calls __syncthreads(). Because __syncthreads() is a barrier that waits until every thread of the block reaches it, and half of the threads never do, the block deadlocks. Without knowing the details of the implementation of a library function like sum(), this is an easy mistake to make.
The following code uses Cooperative Groups to require that a thread block group be passed into the call. This makes that mistake much harder to make.
// Now much clearer that a whole thread block is expected to call
__device__ int sum(thread_block block, int *x, int n)
{
...
block.sync();
...
return total;
}
__global__ void parallel_kernel(float *x, int n)
{
sum(this_thread_block(), x, n); // no divergence around call
}
In the first (incorrect) example, the caller wanted to use fewer threads to compute sum(). The modularity enabled by Cooperative Groups means that we can apply the same reduction function to a variety of group sizes. Here’s another version of our sum kernel that uses tiles of 32 threads instead of whole thread blocks. Each tile does a parallel reduction—using the same reduce_sum() function as before—and then atomically adds its result to the total.
__global__ void sum_kernel_32(int *sum, int *input, int n)
{
int my_sum = thread_sum(input, n);
extern __shared__ int temp[];
auto g = this_thread_block();
auto tileIdx = g.thread_rank() / 32;
int* t = &temp[32 * tileIdx];
auto tile32 = tiled_partition(g, 32);
int tile_sum = reduce_sum(tile32, t, my_sum);
if (tile32.thread_rank() == 0) atomicAdd(sum, tile_sum);
}
Optimizing for the GPU Warp Size
Cooperative Groups provides an alternative version of cg::tiled_partition() that takes the tile size as a template parameter, returning a statically sized group called a thread_block_tile. Knowing the tile size at compile time provides the opportunity for better optimization. Here are two static tiled partitions that match the two examples given previously.
thread_block_tile<32> tile32 = tiled_partition<32>(this_thread_block());
thread_block_tile<4> tile4 = tiled_partition<4> (this_thread_block());
We can use this to slightly optimize our tiled reduction so that when passed a statically sized thread_block_tile the inner loop will be unrolled.
template <typename group_t>
__device__ int reduce_sum(group_t g, int *temp, int val)
{
int lane = g.thread_rank();
// Each iteration halves the number of active threads
// Each thread adds its partial sum[i] to sum[lane+i]
#pragma unroll
for (int i = g.size() / 2; i > 0; i /= 2)
{
temp[lane] = val;
g.sync(); // wait for all threads to store
if (lane < i) val += temp[lane + i];
g.sync(); // wait for all threads to load
}
return val; // note: only thread 0 will return full sum
}
Also, when the tile size matches the hardware warp size, the compiler can elide the synchronization while still ensuring correct memory instruction ordering to avoid race conditions. Intentionally removing synchronizations by hand is an unsafe technique (known as implicit warp-synchronous programming) that expert CUDA programmers have sometimes used to gain higher performance for warp-level cooperative operations. Always synchronize your thread groups explicitly; code that relies on implicit warp synchronization is prone to race conditions.
For parallel reduction, which is bandwidth bound, this code is not significantly faster on recent architectures than the non-static tiled_partition version. But it demonstrates the mechanics of statically sized tiles which can be beneficial in more computationally intensive uses.
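As an illustration (a hypothetical variant, not shown in the original post), the statically sized tile drops straight into the earlier block-sum kernel, because the templated reduce_sum() accepts any group type:
__global__ void sum_kernel_block_static(int *sum, int *input, int n)
{
    int my_sum = thread_sum(input, n);
    extern __shared__ int temp[];
    auto g = this_thread_block();
    auto tileIdx = g.thread_rank() / 32;
    int *t = &temp[32 * tileIdx];
    // Statically sized tile: the compiler knows the size and can unroll the reduction loop
    thread_block_tile<32> tile32 = tiled_partition<32>(g);
    int tile_sum = reduce_sum(tile32, t, my_sum);
    if (tile32.thread_rank() == 0) atomicAdd(sum, tile_sum);
}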
Warp-Level Collectives
Thread Block Tiles also provide an API for the following warp-level collective functions:
shfl(), shfl_down(), shfl_up(), shfl_xor(), any(), all(), ballot(), match_any(), and match_all()
These operations are all primitive operations provided by NVIDIA GPUs starting with the Kepler architecture (Compute Capability 3.x), except for match_any() and match_all(), which are new in the Volta architecture (Compute Capability 7.x).
Using thread_block_tile::shfl_down() to simplify our warp-level reduction does benefit our code: it simplifies it and eliminates the need for shared memory.
template <int tile_sz>
__device__ int reduce_sum_tile_shfl(thread_block_tile<tile_sz> g, int val)
{
// Each iteration halves the number of active threads
// Each thread adds the value shuffled down from the thread i lanes above it
for (int i = g.size() / 2; i > 0; i /= 2) {
val += g.shfl_down(val, i);
}
return val; // note: only thread 0 will return full sum
}
template<int tile_sz>
__global__ void sum_kernel_tile_shfl(int *sum, int *input, int n)
{
int my_sum = thread_sum(input, n);
auto tile = tiled_partition<tile_sz>(this_thread_block());
int tile_sum = reduce_sum_tile_shfl<tile_sz>(tile, my_sum);
if (tile.thread_rank() == 0) atomicAdd(sum, tile_sum);
}
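Because this version keeps the reduction entirely in registers, the launch no longer needs a dynamic shared memory allocation. A minimal launch sketch, reusing the host setup from the earlier example:
sum_kernel_tile_shfl<32><<<nBlocks, blockSize>>>(sum, data, n); // no shared memory argument needed
cudaDeviceSynchronize();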
Discovering Thread Concurrency
In the GPU’s SIMT (Single Instruction Multiple Thread) architecture, the GPU streaming multiprocessors (SM) execute thread instructions in groups of 32 called warps. The threads in a SIMT warp are all of the same type and begin at the same program address, but they are free to branch and execute independently. At each instruction issue time, the instruction unit selects a warp that is ready to execute and issues its next instruction to the warp’s active threads. The instruction unit applies an active mask to the warp to ensure that only threads that are active issue the instruction. Individual threads in a warp may be inactive due to independent branching in the program.
Thus, when data-dependent conditional branches in the code cause threads within a warp to diverge, the SM disables threads that don’t take the branch. The threads that remain active on the path are referred to as coalesced.
Cooperative Groups provides the function coalesced_threads() to create a group comprising all coalesced threads:
coalesced_group active = coalesced_threads();
As an example, consider the following thread that creates a coalesced_group inside a divergent branch taken only by odd-numbered threads. Odd-numbered threads within a warp will be part of the same coalesced_group, and thus they can be synchronized by calling active.sync().
auto block = this_thread_block();
if (block.thread_rank() % 2) {
coalesced_group active = coalesced_threads();
...
active.sync();
}
Keep in mind that since threads from different warps are never coalesced, the largest group that coalesced_threads() can return is a full warp.
It’s common to need to work with the current active set of threads, without making assumptions about which threads are present. This is necessary to ensure modularity of utility functions that may be called in different situations but which still want to coordinate the activities of whatever threads happen to be active.
A good example is “warp-aggregated atomics”. In warp aggregation, the threads of a warp first compute a total increment among themselves, and then elect a single thread to atomically add the increment to a global counter. This aggregation reduces the number of atomics performed by up to the number of threads in a warp (up to 32x on current GPUs), and can dramatically improve performance. Moreover, it can be used as a drop-in replacement for atomicAdd(). You can see the full details of warp-aggregated atomics in this NVIDIA Developer Blog post.
The key to correct operation of warp aggregation is in electing a thread from the warp to perform the atomic add. To enable use inside divergent branches, warp aggregation can’t just pick thread zero of the warp because it might be inactive in the current branch. Instead, as the blog post explains, warp intrinsics can be used to elect the first active thread in the warp.
The Cooperative Groups coalesced_group type makes this trivial, since its thread_rank() method ranks only threads that are part of the group. This enables a simple implementation of warp-aggregated atomics that is robust and safe to use on any GPU architecture. The coalesced_group type also supports warp intrinsics like shfl(), used in the following code to broadcast a value to all threads in the group.
__device__
int atomicAggInc(int *ptr)
{
cg::coalesced_group g = cg::coalesced_threads();
int prev;
// elect the first active thread to perform atomic add
if (g.thread_rank() == 0) {
prev = atomicAdd(ptr, g.size());
}
// broadcast previous value within the warp
// and add each active thread’s rank to it
prev = g.thread_rank() + g.shfl(prev, 0);
return prev;
}
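As a usage sketch (a hypothetical stream-compaction kernel, not taken from this post), atomicAggInc() can serve as a drop-in replacement for atomicAdd(ptr, 1) when appending selected elements to an output buffer:
__global__ void filter_positive(int *dst, int *nres, const int *src, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    // Only threads whose element passes the test enter the branch,
    // so the coalesced group inside atomicAggInc() contains exactly those threads
    if (i < n && src[i] > 0)
        dst[atomicAggInc(nres)] = src[i];
}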
Get Started with Cooperative Groups
We hope that after reading this introduction you are as excited as we are about the possibilities of flexible and explicit groups of cooperating threads for sophisticated GPU algorithms. To get started with Cooperative Groups today, download the CUDA Toolkit version 9 or higher from https://developer.nvidia.com/cuda-toolkit. The toolkit includes various examples that use Cooperative Groups.
But we haven’t covered everything yet! New features in Pascal and Volta GPUs help Cooperative Groups go farther, by enabling creation and synchronization of thread groups that span an entire kernel launch running on one or even multiple GPUs. In a follow-up post we plan to cover the grid_group and multi_grid_group types and the cudaLaunchCooperative* APIs that enable them. Stay tuned.
Separate Compilation and Linking of CUDA C++ Device Code
Managing complexity in large programs requires breaking them down into components that are responsible for small, well-defined portions of the overall program. Separate compilation is an integral part of the C and C++ programming languages which allows portions of a program to be compiled into separate objects and then linked together to form an executable or library. Developing large and complex GPU programs is no different, and starting with CUDA 5.0, separate compilation and linking are now important tools in the repertoire of CUDA C/C++ programmers.
In this post, we explore separate compilation and linking of device code and highlight situations where it is helpful. In the process, we’ll walk through a simple example to show how device code linking can let you move existing code to the GPU with minimal changes to your class hierarchy and build infrastructure.
One of the key limitations that device code linking lifts is the need to have all the code for a GPU kernel present when compiling the kernel, including all the device functions that the kernel calls. As C++ programmers, we are used to calling externally defined functions simply by declaring the functions’ prototypes (or including a header that declares them).
Managing Complexity with Separate Compilation
The common approach to organizing and building C++ applications is to define all member functions of a single class in one or more .cpp source files, and compile each .cpp source file into a separate .o object file. Other classes and functions may call these member functions from anywhere in the program by including the class header file; the function implementation is not needed to compile another function that calls it. After compiling all code, the linker connects calls to functions implemented in other files as part of the process of generating the executable.
Let’s imagine a very simple example application which has two classes: a particle class and a three-dimensional vector class, v3, that it uses. Our main task is moving the particle objects along randomized trajectories. Particle filters and Monte Carlo simulations frequently involve operations of this sort. We’ll use a CUDA C++ kernel in which each thread calls particle::advance() on a particle.
Using the conventional C/C++ code structure, each class in our example has a .h header file with a class declaration, and a .cpp file that contains class member function definitions. We compile each .cpp file separately into its own .o file, which the linker combines into an executable. Figure 1 shows the structure of our example application.
This time-honored project structure is highly desirable for the purposes of maintaining abstraction barriers, class reuse, and separate units in development. It also enables partial rebuilding, which can greatly reduce compilation time, especially in large applications where the programmer modifies only a few classes at a time.
The following two listings show the header and implementation for our 3D vector class, v3.
class v3
{
public:
float x;
float y;
float z;
v3();
v3(float xIn, float yIn, float zIn);
void randomize();
__host__ __device__ void normalize();
__host__ __device__ void scramble();
};
#include <v3.h>
#include <math.h>
v3::v3() { randomize(); }
v3::v3(float xIn, float yIn, float zIn) : x(xIn), y(yIn), z(zIn) {}
void v3::randomize()
{
x = (float)rand() / (float)RAND_MAX;
y = (float)rand() / (float)RAND_MAX;
z = (float)rand() / (float)RAND_MAX;
}
__host__ __device__ void v3::normalize()
{
float t = sqrt(x*x + y*y + z*z);
x /= t;
y /= t;
z /= t;
}
__host__ __device__ void v3::scramble()
{
float tx = 0.317f*(x + 1.0) + y + z * x * x + y + z;
float ty = 0.619f*(y + 1.0) + y * y + x * y * z + y + x;
float tz = 0.124f*(z + 1.0) + z * y + x * y * z + y + x;
x = tx;
y = ty;
z = tz;
}
In our example, particle::advance() relies on two helper routines from the vector class: v3::normalize() and v3::scramble(). The following two listings show the particle class header and source. We’ll see that device object linking enables us to keep our code organized in a familiar way while satisfying the inter-class dependency.
#include <v3.h>
class particle
{
private:
v3 position;
v3 velocity;
v3 totalDistance;
public:
particle();
__host__ __device__ void advance(float dist);
const v3& getTotalDistance() const;
};
#include <particle.h>
particle::particle() : position(), velocity(), totalDistance(0,0,0) {}
__device__ __host__
void particle::advance(float d)
{
velocity.normalize();
float dx = d * velocity.x;
position.x += dx;
totalDistance.x += dx;
float dy = d * velocity.y;
position.y += dy;
totalDistance.y += dy;
float dz = d * velocity.z;
position.z += dz;
totalDistance.z += dz;
velocity.scramble();
}
const v3& particle::getTotalDistance() const
{
return totalDistance;
}
Before CUDA 5.0, if a programmer wanted to call particle::advance() from a CUDA kernel launched in main.cpp, the compiler required the main.cpp compilation unit to include the implementation of particle::advance() as well any subroutines it calls (v3::normalize() and v3::scramble() in this case). In complex C++ applications, the call chain may go deeper than the two-levels that our example illustrates. Without device object linking, the developer may need to deviate from the conventional application structure to accommodate this compiler requirement. Such changes are difficult for existing applications in which changing the structure is invasive and/or undesirable.
Using object linking of device code, the compiler can generate device code for all functions in a .cpp file, store it in a .o file, and then link device code from multiple .o files together in the same way that we are used to linking CPU code. As a result, the build structure does not change much, if at all, and changes to utility classes like v3 are minimal.
Utility Code for Host and Device
The source changes necessary to call v3 and particle member functions from a GPU kernel are minimal. The only required change in v3.h, v3.cpp, particle.h, and particle.cpp is to add __host__ and __device__ decorators to member functions that device code calls. The implementations are otherwise completely unchanged from their CPU-only version.
The __host__ __device__ decorations indicate to nvcc to compile these routines into both CPU code and device-callable GPU code. You can use __host__ or __device__ in isolation as well. Using __host__ alone tells the compiler to generate only a CPU version of this routine. This usage is unnecessary, as this is the default behavior. Using __device__ alone tells the compiler to generate only GPU code for a function. This is useful if you know this routine will never be needed by the host, or if you want to implement your function using operations specific to the GPU, such as fast math or texture unit operations. If you call a __host__ function from the device or a __device__ function from the host, the compiler will report an error.
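As a small hypothetical example of the __device__-only case, a helper that relies on a GPU fast-math intrinsic has no meaningful host version:
// Device-only helper: __expf() is a GPU fast-math intrinsic with no host counterpart
__device__ float fast_sigmoid(float x)
{
    return 1.0f / (1.0f + __expf(-x));
}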
The example code in main.cpp, shown below, generates particles on the host, copies them to the GPU, and then executes the advance operations in a CUDA kernel. The program then copies the particles back, computes, and prints a summary of the total distance traveled by all particles. For each of 100 steps, the program generates a random step distance on the CPU and passes it as an argument to the kernel.
You can get the complete example on Github.
#include <particle.h>
#include <stdlib.h>
#include <stdio.h>
__global__
void advanceParticles(float dt, particle * pArray, int nParticles)
{
int idx = threadIdx.x + blockIdx.x*blockDim.x;
if(idx < nParticles) { pArray[idx].advance(dt); }
}
int main(int argc, char ** argv)
{
int n = 1000000;
if(argc > 1) { n = atoi(argv[1]);} // Number of particles
if(argc > 2) { srand(atoi(argv[2])); } // Random seed
particle * pArray = new particle[n];
particle * devPArray = NULL;
cudaMalloc(&devPArray, n*sizeof(particle));
cudaMemcpy(devPArray, pArray, n*sizeof(particle), cudaMemcpyHostToDevice);
for(int i=0; i<100; i++)
{ // Random distance each step
float dt = (float)rand()/(float) RAND_MAX;
advanceParticles<<< 1 + n/256, 256>>>(dt, devPArray, n);
cudaDeviceSynchronize();
}
cudaMemcpy(pArray, devPArray, n*sizeof(particle), cudaMemcpyDeviceToHost);
v3 totalDistance(0,0,0);
v3 temp;
for(int i=0; i<n; i++)
{
temp = pArray[i].getTotalDistance();
totalDistance.x += temp.x;
totalDistance.y += temp.y;
totalDistance.z += temp.z;
}
float avgX = totalDistance.x /(float)n;
float avgY = totalDistance.y /(float)n;
float avgZ = totalDistance.z /(float)n;
float avgNorm = sqrt(avgX*avgX + avgY*avgY + avgZ*avgZ);
printf("Moved %d particles 100 steps. Average distance traveled is |(%f, %f, %f)| = %f\n",
n, avgX, avgY, avgZ, avgNorm);
return 0;
}
Building and running
Using make will work on this project so long as you have the CUDA 5.0 or later compiler in your path and a CUDA capable device with SM version 2.0 or later in your system. The following listing shows the contents of the Makefile.
objects = main.o particle.o v3.o
all: $(objects)
nvcc -arch=sm_20 $(objects) -o app
%.o: %.cpp
nvcc -x cu -arch=sm_20 -I. -dc $< -o $@
clean:
rm -f *.o app
When you run app you can optionally specify two command line arguments. The first is the number of particles to create and run (default is 1 million particles).
./app <numParticles>
The second number is a random seed, to generate different sequences of particles and distance steps.
./app <numParticles> <randomSeed>
In the absence of arguments, the program uses the default random seed.
Using Device Code Linking
Beyond the __host__ and __device__ decorations and the CUDA kernel, the only difference from a CPU-only version of this code is the use of nvcc as the compiler and the -dc compiler option. The -dc option tells nvcc to generate device code for later linking. It is worth noting that we have specified -arch=sm_20 before the -dc option, because not all SM code variants support device linking and nvcc needs to know that it is targeting a compatible SM architecture. Device code linking requires Compute Capability 2.0 (sm_20) or later. We omit -dc in the link command to tell nvcc to link the objects. When nvcc is passed the object files with both CPU and GPU object code, it will link both automatically.
Finally, you may not recognize the option -x cu. This option tells nvcc to treat the input files as .cu files containing both CPU and GPU code. By default, nvcc treats .cpp files as CPU-only code. This option is required to have nvcc generate device code here, but it is also a handy way to avoid renaming source files in larger projects. (A side note: if you #include <cuda_runtime.h> in a .cpp file and compile it with a compiler other than nvcc, __device__ and __host__ will be defined to nothing to enable portability of this code to other compilers!)
Advanced Usage: Using a Different Linker
When you use nvcc to link, there is nothing special to do: replace your normal compiler command with nvcc and it will take care of all the necessary steps. However, you may choose to use a compiler driver other than nvcc (such as g++) for the final link step. Since your CPU compiler will not know how to link CUDA device code, you'll have to add a step in your build to have nvcc link the CUDA device code, using the nvcc option -dlink. In our example, we could do the following.
> nvcc -arch=sm_20 -dlink v3.o particle.o main.o -o gpuCode.o
This links all the device object code and places it into gpuCode.o. Note that this does not link the CPU object code. In fact, the CPU object code in v3.o, particle.o, and main.o is discarded in this step. To complete the link to an executable, we can use ld or g++.
> g++ gpuCode.o main.o particle.o v3.o -lcudart -o app
We give g++ all of the objects again because it needs the CPU object code, which is not in gpuCode.o. The device code stored in the original objects (particle.o, v3.o, main.o) does not conflict with the code in gpuCode.o. g++ ignores device code because it does not know how to link it, and the device code in gpuCode.o is already linked and ready to go. This intentional ignorance is extremely useful in large builds where intermediate objects may have both CPU and GPU code. In this case, we just let the GPU and CPU linkers each do its own job, noting that the CPU linker is always the last one we run. The CUDA Runtime API library is automatically linked when we use nvcc for linking, but we must explicitly link it (-lcudart) when using another linker.
Caveats
There are some limitations with device code linking. As mentioned previously, not all SM versions support device object linking; it requires sm_20 or higher, and CUDA 5.0 or newer.
Performance of linked device code may also be a bit lower than the performance of device code built with full code path visibility at compile time. When both the function and the call site code are known at compile time, the compiler can optimize the function call, but when the call site and the function are in different compilation units, the compiler must fully adhere to an ABI (Application Binary Interface), which prevents this type of optimization. Performance effects are variable, but CUDA 5.5 and 6.0 both contain notable improvements in the performance of applications using device code linking.
Conclusion: Device Code Linking is a Powerful Tool
The primary advantage of device code linking is the availability of more traditional code structures, especially in C++, for your application. Device code linking dramatically eases the process of compiling complicated C++ composition chains for the GPU and enabling their use in GPU kernels. For example, we have used this feature to enable GPU acceleration in a large C++ code with dozens of classes organized in the typical fashion shown here. In that case, the call chain routinely went through tens of classes and we compiled over 200 member functions for the device, all used in a single kernel. Using device code linking, we maintained the existing C++ structure while the computational load was parallelized on the GPU. Thanks to device object linking, this project took only a matter of days to port.
Using device code linking can allow you to have the best of both worlds: maintain the existing structure of your application, have control over each build and link step, and make use of the most powerful processors on the planet in your application.
GPU Pro Tip: Fast Dynamic Indexing of Private Arrays in CUDA
Sometimes you need to use small per-thread arrays in your GPU kernels. The performance of accessing elements in these arrays can vary depending on a number of factors. In this post I’ll cover several common scenarios ranging from fast static indexing to more complex and challenging use cases.
Static indexing
Before discussing dynamic indexing let’s briefly look at static indexing. For small arrays where all indices are known constants at compile time, as in the following sample code, the compiler places all accessed elements of the array into registers.
__global__ void kernel1(float * buf)
{
float a[2];
...
float sum = a[0] + a[1];
...
}
This way array elements are accessed in the fastest way possible: math instructions use the data directly without loads and stores.
A slightly more complex (and probably more useful) case is an unrolled loop over the indices of the array. In the following code the compiler is also capable of assigning the accessed array elements to registers.
__global__ void kernel2(float * buf)
{
float a[5];
...
float sum = 0.0f;
#pragma unroll
for(int i = 0; i < 5; ++i)
sum += a[i];
...
}
Here we tell the compiler to unroll the loop with the directive #pragma unroll, effectively replacing the loop with all the iterations listed explicitly, as in the following snippet.
sum += a[0];
sum += a[1];
sum += a[2];
sum += a[3];
sum += a[4];
All the indices are now constants, so the compiler puts the whole array into registers. I ran a Kernel Profile experiment in the NVIDIA Visual Profiler. Building the CUDA source files with the -lineinfo nvcc option embeds source line information in the binary, which lets the Visual Profiler show the correspondence between the CUDA C++ source code lines and the generated assembler instructions. For the unrolled loop above, the compiler is able to generate just 4 floating point add instructions, without any stores or loads. Why 4 instructions when we have 5 additions? The compiler is smart enough to figure out that adding 0.0f to a[0] is just a[0], so it eliminates that instruction. See the screenshot of the Kernel Profile experiment in Figure 1.
In some cases the compiler can unroll the loop automatically without #pragma unroll. Note that the array size must be an immediate numeric constant; however you can define it via a #define or a template parameter to the kernel.
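For example (a hypothetical variant of the kernel above), the array size can come from a template parameter and still behave as a compile-time constant:
template <int N>
__global__ void kernel2t(float *buf)
{
    float a[N];
    int base = (blockIdx.x * blockDim.x + threadIdx.x) * N;
    for (int i = 0; i < N; ++i)
        a[i] = buf[base + i];
    float sum = 0.0f;
    #pragma unroll
    for (int i = 0; i < N; ++i)
        sum += a[i];
    buf[base] = sum; // indices are compile-time constants, so a[] stays in registers
}
// Instantiated as, for example: kernel2t<5><<<grid, block>>>(buf);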
Dynamic indexing with Uniform Access
When the compiler can’t resolve array indices to constants it must put private arrays into GPU local memory. “Local” here doesn’t mean this memory is close to compute units, necessarily; it means that it is local to each thread and not visible to other threads. Logical local memory actually resides in global GPU memory. Each thread has its own copy of any local array, and the compiler generates load and store instructions for array reads and writes, respectively.
Using local memory is slower than keeping array elements directly in registers, but if you have sufficient math instructions in your kernel and enough threads to hide the latency, the local load/store instructions may be a minor cost. Empirically, a 4:1 to 8:1 ratio of math to memory operations should be enough; the exact number depends on your particular kernel and GPU architecture. Your kernel should also have occupancy high enough to hide local memory access latencies.
Here is an example of a kernel where the compiler can’t resolve indices to constants even if it unrolls the loop.
__global__ void kernel3(float * buf, int start_index)
{
float a[6];
...
float sum = 0.0f;
#pragma unroll
for(int i = 0; i < 5; ++i)
sum += a[start_index + i];
...
}
A Kernel Profile experiment confirms that each access to array a now results in a local load or store, as Figure 2 shows.
Note that this example demonstrates uniform access: all threads of each warp access elements of their own private array using the same index (even if this index is calculated dynamically at runtime). This enables the GPU load/store units to execute the instructions in the most efficient way.
Local memory is cached in the GPU’s L2 & L1 caches. As the size of your private array grows it will exceed the size of the L1 cache and then the L2 cache until eventually accesses will pay the full price of accessing global memory. To partly mitigate this problem you can use cudaFuncSetCacheConfig with cudaFuncCachePreferL1 to tell the CUDA runtime to configure a larger L1 cache and smaller shared memory. Please note that shared memory and L1 cache are physically separate in the Maxwell architecture, so this function has no effect for these chips.
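As a brief host-side sketch (the grid and block dimensions are placeholders), the preference is set once per kernel before launching it:
// Request a larger L1 cache for kernel3; has no effect on architectures where
// L1 and shared memory are physically separate (for example, Maxwell)
cudaFuncSetCacheConfig(kernel3, cudaFuncCachePreferL1);
kernel3<<<grid, block>>>(buf, start_index);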
Dynamic Indexing with Non-Uniform Access
Things get more difficult when threads of a warp start accessing elements of their private arrays using different indices. This is called non-uniform indexing. When this happens, the SM must “replay” load/store instructions for each unique index used by the threads in the warp. This is more common with data-dependent algorithms like the following example, where each thread reads its index from the global memory array indexbuf.
#define ARRAY_SIZE 32
__global__ void kernel4(float * buf, int * indexbuf)
{
float a[ARRAY_SIZE];
...
int index = indexbuf[threadIdx.x + blockIdx.x * blockDim.x];
float val = a[index];
...
}
The number of load instruction replays can vary widely depending on the data in indexbuf: there are no extra replays when all threads of a warp happen to read the same index, and up to 31 extra replays when all 32 threads of a warp read different indices. In the latter situation, a kernel might have so many local memory load and store replays that its performance drops significantly.
Fortunately there is a trick that can help you solve this problem. Let’s store the private array explicitly in shared memory!
Shared memory has 32 banks that are organized such that successive 32-bit words map to successive banks (see the CUDA C Programming Guide for details). For our example we’ll allocate a __shared__ array large enough to hold the private arrays of all threads of a threadblock.
We’ll logically assign elements of this new __shared__ array to the threads of the thread block so that all elements of our new virtual private array for each thread are stored in its own shared memory bank.
I will use THREADBLOCK_SIZE to define the size of the thread block (this value should be evenly divisible by the warp size, 32). Here the first THREADBLOCK_SIZE elements of our shared memory array contain all 0-index elements of the private arrays for all threads of the thread block. The next THREADBLOCK_SIZE elements of the shared memory array contain all 1-index elements of the private arrays, and so on. This approach is illustrated in Figure 3.
In this way we ensure that the whole virtual private array of thread 0 falls into shared memory bank 0, the array of thread 1 falls into bank 1, and so on. Thread 32—which is the first thread in the next warp—will occupy bank 0 again but there will be no shared memory bank conflicts with thread 0 (or any other bank 0 thread) since they belong to different warps and therefore will never read at the same instant.
The following code implements this idea.
// Should be multiple of 32
#define THREADBLOCK_SIZE 64
// Could be any number, but the whole array should fit into shared memory
#define ARRAY_SIZE 32
__device__ __forceinline__ int no_bank_conflict_index(int thread_id,
int logical_index)
{
return logical_index * THREADBLOCK_SIZE + thread_id;
}
__global__ void kernel5(float * buf, int * index_buf)
{
// Declare shared memory array A which will hold virtual
// private arrays of size ARRAY_SIZE elements for all
// THREADBLOCK_SIZE threads of a threadblock
__shared__ float A[ARRAY_SIZE * THREADBLOCK_SIZE];
...
int index = index_buf[threadIdx.x + blockIdx.x * blockDim.x];
// Here we assume thread block is 1D so threadIdx.x
// enumerates all threads in the thread block
float val = A[no_bank_conflict_index(threadIdx.x, index)];
...
}
As long as array A is declared __shared__, each access to an element of A is a load from (LDS) or store to (STS) shared memory, as Figure 4 shows.
This technique guarantees that all accesses to this array (now located in shared memory) will execute without any replays. It’s possible to modify the code to eliminate the limitation to one-dimensional thread blocks with THREADBLOCK_SIZE divisible by 32.
This method has intrinsic limitations, though, associated with its use of shared memory. As the size of your array (located in shared memory) grows, occupancy drops. If occupancy drops too low, the kernel no longer has enough threads in flight to hide latency and performance suffers. At some point the array will not fit into shared memory at all and you will not be able to run the kernel; 48 KB is the maximum shared memory size a thread block can use. In short, this method works for relatively small private arrays: up to about 30-50 32-bit elements per thread.
Performance
For performance comparison I used a kernel with dynamic indexing of a private array of 32 32-bit elements. The kernel performance is limited by how fast it can access those elements.
I compared performance for 3 cases on an NVIDIA Tesla K20 accelerator: private arrays in local memory with uniform indexing, private arrays in local memory with non-uniform indexing, and private arrays in shared memory with non-uniform indexing.
With private arrays stored in local memory, non-uniform indexing is, as we expect, much worse: 24x slower for my particular kernel. The interesting part is that a shared-memory-based kernel with non-uniform indexing is 1.34x faster than the local-memory-based kernel with uniform indexing, as Figure 5 shows.
There is a good reason for this effect: the local memory version spills from the L1 cache to the L2 cache and even to global memory, while the shared memory version is able to fit all of the private arrays without any spilling.
Even for this kernel the relative performance depends on the architecture of the GPU. For example, on Maxwell the difference between the two local memory based cases is smaller—about 14x—but the shared memory version runs 4x faster than the local memory version with uniform indexing. Your particular kernel will more than likely behave differently, due to different instruction mix. Try it.
Summary
The performance of CUDA kernels that access private arrays can depend a lot on access patterns.
Controlling Data Movement to Boost Performance on the NVIDIA Ampere Architecture
The NVIDIA Ampere architecture provides new mechanisms to control data movement within the GPU and CUDA 11.1 puts those controls into your hands. These mechanisms include asynchronously copying data into shared memory and influencing the residency of data in the L2 cache.
This post walks through how to use the asynchronous copy feature, and how to set up your algorithms to overlap asynchronous copies with computations.
Applications stage data through shared memory
Applications with large data and computational intensity on that data are accelerated by copying data from global to shared memory, performing computations on data in shared memory, and copying results back to global memory.
Asynchronous data movement
You’ve long had the ability to asynchronously copy data between CPU memory and GPU global memory using cudaMemcpyAsync. For more information, see How to Overlap Data Transfers in CUDA C/C++.
CudaDMA was an early effort to give developers asynchronous data movement between global and shared memory. CudaDMA uses extra data movement warps dedicated to copy operations while primary warps perform computations. With the NVIDIA Ampere architecture, you can now asynchronously copy data between GPU global memory and shared memory by using cuda::memcpy_async and not tie up threads to shepherd data movement.
These asynchronous data movement features enable you to overlap computations with data movement and reduce total execution time. With cudaMemcpyAsync, data movement between CPU memory and GPU global memory can be overlapped with kernel execution. With cuda::memcpy_async, data movement from GPU global memory to shared memory can be overlapped with thread execution.
A better journey through the memory hierarchy
Prior to cuda::memcpy_async, copying data from global to shared memory was a two-step process. First, a thread block copied data from global memory into registers and then the thread block copied that data from registers into shared memory. This resulted in the data taking a long journey through the memory hierarchy.
With cuda::memcpy_async, the thread block no longer stages data through registers, freeing the thread block from the task of moving data and freeing registers to be used by computations.
Stages of asynchronous copy operations
Prior to cuda::memcpy_async, a thread block copied a batch of data from global to shared memory, computed on that batch, and then iterated to the next batch. A batch can be a contiguous region of memory or be scattered into a data structure such as the boundary of a finite difference grid. Each thread within a thread block copied one or more elements of the batch and then all threads synchronized (__syncthreads() or cooperative_groups::sync()) to wait for all element-copy operations to complete.
The pattern for asynchronously copying data is similar. Each thread calls cuda::memcpy_async one or more times to submit an asynchronous copy operation for elements within a batch and then all threads wait for the submitted copy operations to complete. Asynchronous data movement enables multiple batches to be “in flight” at the same time.
A thread block can use asynchronous data movement to pipeline (for example, double buffer) its iteration through a large data structure by submitting N stages of asynchronous data movement, waiting for the oldest stage to complete, computing with that batch of shared memory, and submitting a new stage before waiting for the next oldest batch to complete.
Copy and compute: From synchronous to pipelined
You work through code changes to a simple example kernel that copies data from global to shared memory and then computes on that data. With the final code change, the kernel is overlapping the asynchronous copy of data with computations.
Staging data through shared memory
Most applications using shared memory to perform computation on a subset of a larger data set can be represented by the following algorithmic pattern. For each subset of the dataset: copy the subset from global to shared memory, synchronize the thread block, compute on the data in shared memory, and synchronize again before moving on to the next subset.
The following code example is a direct implementation of such an algorithm.
#include <cooperative_groups.h>
template <typename T>
__global__ void example_kernel(T * global1, T * global2, size_t subset_count)
{
extern __shared__ T shared[];
auto group = cooperative_groups::this_thread_block();
for (size_t subset = 0; subset < subset_count; ++subset) {
shared[group.thread_rank() ] = global1[subset * group.size() + group.thread_rank()];
shared[group.size() + group.thread_rank()] = global2[subset * group.size() + group.thread_rank()];
group.sync(); // Wait for all copies to complete
compute(shared);
group.sync();
}
}
Introducing asynchronous copies
For trivially copyable types, this algorithm can be straightforwardly improved using the new Ampere-accelerated CUDA 11.1 facilities.
#include <cooperative_groups.h>
#include <cooperative_groups/memcpy_async.h>
template <typename T>
__global__ void example_kernel(T * global1, T * global2, size_t subset_count)
{
extern __shared__ T shared[];
auto group = cooperative_groups::this_thread_block();
for (size_t subset = 0; subset < subset_count; ++subset) {
cooperative_groups::memcpy_async(group, shared,
&global1[subset * group.size()], sizeof(T) * group.size());
cooperative_groups::memcpy_async(group, shared + group.size(),
&global2[subset * group.size()], sizeof(T) * group.size());
cooperative_groups::wait(group); // Wait for all copies to complete
compute(shared);
group.sync();
}
}
Here, you use cooperative_groups::memcpy_async paired with cooperative_groups::wait as a drop-in replacement for memcpy and cooperative_groups::group::sync.
This new version has several advantages: the data no longer has to be staged through registers on its way to shared memory, the code is simpler, and the same pattern extends naturally to overlapping copies with computation, as shown later in this post.
Arrive-wait barrier interoperability
The CUDA 11.1 memcpy_async APIs also offer the possibility of synchronizing asynchronous data transfers using asynchronous barriers.
For more information about asynchronous barriers, see cuda/std/barrier, cuda/barrier in the libcu++ documentation.
#include <cooperative_groups.h>
#include <cuda/barrier>
template <typename T>
__global__ void example_kernel(T * global1, T * global2, size_t subset_count)
{
extern __shared__ T shared[];
auto group = cooperative_groups::this_thread_block();
// Create a synchronization object (C++20 barrier)
__shared__ cuda::barrier<cuda::thread_scope::thread_scope_block> barrier;
if (group.thread_rank() == 0) {
init(&barrier, group.size());
}
group.sync();
for (size_t subset = 0; subset < subset_count; ++subset) {
cuda::memcpy_async(group, shared,
&global1[subset * group.size()], sizeof(T) * group.size(), barrier);
cuda::memcpy_async(group, shared + group.size(),
&global2[subset * group.size()], sizeof(T) * group.size(), barrier);
barrier.arrive_and_wait(); // Wait for all copies to complete
compute(shared);
barrier.arrive_and_wait();
}
}
Overlapping global-to-shared copies with compute
Finally, with some limited restructuring, you can asynchronously prefetch data for subset N+1 while computing subset N, using a multi-stage cuda::pipeline (two stages give classic double buffering).
template<int block_dim, int num_stages>
__global__ void in_between_pipe_thread(int* dest, int const* src, size_t size) {
// Read blockDim.x integers per pipeline stage
__shared__ int smem[num_stages][block_dim];
// Grid stride loop:
int offset = blockIdx.x * blockDim.x;
size_t stride = gridDim.x * blockDim.x;
// No pipeline::shared_state needed
cuda::pipeline<cuda::thread_scope_thread> pipe = cuda::make_pipeline();
// Load all pipeline stages.
for (int stage = 0; stage < num_stages; ++stage) {
pipe.producer_acquire();
size_t idx = offset + stage * stride + threadIdx.x;
if (idx < size) {
cuda::memcpy_async(&smem[stage][threadIdx.x], &src[idx], sizeof(int), pipe);
}
pipe.producer_commit();
}
// At this point, `num_stages` stages have been committed into the pipeline. This is a loop
// invariant that is upheld throughout the loop.
int stage = 0;
for (size_t block_idx = offset; block_idx < size; block_idx += stride) {
// Wait for the first stage to have completed loading, or equivalently: wait until
// at most `num_stages - 1` stages are still loading.
cuda::pipeline_consumer_wait_prior<num_stages - 1>(pipe);
// __syncthreads is necessary if other threads want to read this thread's loaded data.
__syncthreads();
// Compute on smem[stage][..]. In this example, each thread determines
// if its value is between the current stage's start and end.
bool in_between = smem[stage][0] < smem[stage][threadIdx.x] && smem[stage][threadIdx.x] < smem[stage][block_dim - 1];
dest[block_idx + threadIdx.x] = (int) in_between;
// __syncthreads is necessary if other threads are reading data that this thread
// is about to overwrite below.
__syncthreads();
// Release the consumed stage.
pipe.consumer_release();
// Pre-load data for `num_stages` into the future.
pipe.producer_acquire();
// To ensure that the number of committed stages in the pipeline remains constant,
// producer_acquire and producer_commit are called even if the load is out-of-bounds.
size_t idx = block_idx + num_stages * stride + threadIdx.x;
if (idx < size) {
cuda::memcpy_async(&smem[stage][threadIdx.x], &src[idx], sizeof(int), pipe);
}
pipe.producer_commit();
stage = (stage + 1) % num_stages;
}
}
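A minimal sketch of how such a kernel might be launched follows; the block size, stage count, and grid size are illustrative choices, not part of the original example, and dest, src, and size are assumed to be device-accessible pointers and the element count set up by the host.
constexpr int block_dim  = 128;
constexpr int num_stages = 2;   // two stages = double buffering
int num_blocks = 80;            // for example, roughly one block per SM
in_between_pipe_thread<block_dim, num_stages>
    <<<num_blocks, block_dim>>>(dest, src, size);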
This pipelining scheme improves on the previous version by keeping several batches in flight at once: while the kernel computes on the current stage, the asynchronous copies for future stages are already underway, overlapping data movement with computation.
Summary
CUDA 11.1 provides a foundational development environment for building applications with the NVIDIA Ampere GPU architecture. You can access all the new features of CUDA 11.1 on either powerful server platforms built on the NVIDIA A100 or consumer GPUs with the GeForce RTX-30 series or Quadro RTX series. Use CUDA 11.1 to take control of your data movement.
Get started today by downloading CUDA 11.1:
Simplifying GPU Application Development with Heterogeneous Memory Management
Heterogeneous Memory Management (HMM) is a CUDA memory management feature that extends the simplicity and productivity of the CUDA Unified Memory programming model to include system allocated memory on systems with PCIe-connected NVIDIA GPUs. System allocated memory refers to memory that is ultimately allocated by the operating system; for example, through malloc, mmap, the C++ new operator (which of course uses the preceding mechanisms), or related system routines that set up CPU-accessible memory for the application.
Previously, on PCIe-based machines, system allocated memory was not directly accessible by the GPU. The GPU could only access memory that came from special allocators such as cudaMalloc or cudaMallocManaged.
With HMM enabled, all application threads (GPU or CPU) can directly access all of the application’s system allocated memory. As with Unified Memory (which can be thought of as a subset of, or precursor to HMM), there is no need to manually copy system allocated memory between processors. This is because it is automatically placed on the CPU or GPU, based on processor usage.
Within the CUDA driver stack, CPU and GPU page faults are typically used to discover where the memory should be placed. Again, this automatic placement already happens with Unified Memory—HMM simply extends the behavior to cover system allocated memory as well as cudaMallocManaged memory.
This new ability to directly read or write to the full application memory address space will significantly improve programmer productivity for all programming models built on top of CUDA: CUDA C++, Fortran, standard parallelism in Python, ISO C++, ISO Fortran, OpenACC, OpenMP, and many others.
In fact, as the upcoming examples demonstrate, HMM simplifies GPU programming to the point that GPU programming is nearly as accessible as CPU programming. Some highlights are illustrated in the examples below.
As an aside, new hardware platforms such as NVIDIA Grace Hopper natively support the Unified Memory programming model through hardware-based memory coherence among all CPUs and GPUs. For such systems, HMM is not required, and in fact, HMM is automatically disabled there. One way to think about this is to observe that HMM is effectively a software-based way of providing the same programming model as an NVIDIA Grace Hopper Superchip.
To learn more about CUDA Unified Memory, see the resources section at the end of this post.
Unified Memory before HMM
The original CUDA Unified Memory feature introduced in 2013 enables you to accelerate a CPU program with only a few changes, as shown below:
Before: CPU only
void sortfile(FILE* fp, int N) {
char* data;
data = (char*)malloc(N);
fread(data, 1, N, fp);
qsort(data, N, 1, cmp);
use_data(data);
free(data);
}
After: CUDA Unified Memory (2013)
void sortfile(FILE* fp, int N) {
char* data;
cudaMallocManaged(&data, N);
fread(data, 1, N, fp);
qsort<<<...>>>(data, N, 1, cmp);
cudaDeviceSynchronize();
use_data(data);
cudaFree(data);
}
This programming model is simple, clear, and powerful. Over the past 10 years, this approach has enabled countless applications to easily benefit from GPU acceleration. And yet, there is still room for improvement: note the need for a special allocator: cudaMallocManaged, and the corresponding cudaFree.
What if we could go even further, and get rid of those? That’s exactly what HMM does.
Unified Memory after HMM
On systems with HMM (detailed below), continue using malloc and free:
Before HMM: CPU only
void sortfile(FILE* fp, int N) {
char* data;
data = (char*)malloc(N);
fread(data, 1, N, fp);
qsort(data, N, 1, cmp);
use_data(data);
free(data);
}
After HMM: CUDA Unified Memory + HMM (2023)
void sortfile(FILE* fp, int N) {
char* data;
data = (char*)malloc(N);
fread(data, 1, N, fp);
qsort<<<...>>>(data, N, 1, cmp);
cudaDeviceSynchronize();
use_data(data);
free(data);
}
With HMM, the memory management is now identical between the two.
System allocated memory and CUDA allocators
GPU applications using CUDA memory allocators work “as is” on systems with HMM. The main difference in these systems is that system allocation APIs like malloc, C++ new, or mmap now create allocations that may be accessed from GPU threads, without having to call any CUDA APIs to tell CUDA about the existence of these allocations. Table 1 captures the differences between the most common CUDA memory allocators on systems with HMM.
In general, selecting the allocator that better expresses the application intent may enable CUDA to deliver better performance. With HMM, these choices become performance optimizations that do not need to be done upfront, before accessing the memory from the GPU for the first time. HMM enables developers to focus on parallelizing algorithms first, and performing memory allocator-related optimizations later, when their overhead improves performance.
Seamless GPU acceleration for C++, Fortran, and Python
HMM makes it significantly easier to program NVIDIA GPUs with standardized and portable programming languages like Python that do not distinguish between CPU and GPU memory and assume all threads may access all memory, as well as programming languages described by international standards like ISO Fortran and ISO C++.
These languages provide concurrency and parallelism facilities that enable implementations to automatically dispatch computations to GPUs and other devices. For example, since C++ 2017, the standard library algorithms from the <algorithm> header accept execution policies that enable implementations to run them in parallel.
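As a brief illustration (not from the original post), an ordinary standard-library call with a parallel execution policy is all that is needed; an implementation such as NVC++ may then offload it to the GPU:
#include <algorithm>
#include <execution>
#include <vector>

void scale(std::vector<float>& v, float s)
{
  // With nvc++ -stdpar=gpu this loop may run on the GPU; the vector's system-allocated
  // storage is directly accessible to GPU threads on HMM-enabled systems
  std::for_each(std::execution::par, v.begin(), v.end(),
                [=](float& x) { x *= s; });
}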
Sorting a file in place from the GPU
For example, before HMM, sorting a file larger than CPU memory in-place was complicated, requiring sorting smaller parts of the file first, and merging them into a fully-sorted file afterwards. With HMM, the application may map the file on disk into memory using mmap, and read and write to it directly from the GPU. For more details, see the HMM sample code file_before.cpp and file_after.cpp on GitHub.
Before HMM: Dynamic Allocation
void sortfile(FILE* fp, int N) {
std::vector<char> buffer;
buffer.resize(N);
fread(buffer.data(), 1, N, fp);
// std::sort runs on the GPU:
std::sort(std::execution::par,
buffer.begin(), buffer.end(),
std::greater{});
use_data(std::span{buffer});
}
After HMM: CUDA Unified Memory + HMM (2023)
void sortfile(int fd, int N) {
auto buffer = (char*)mmap(NULL, N,
PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
// std::sort runs on the GPU:
std::sort(std::execution::par,
buffer, buffer + N,
std::greater{});
use_data(std::span{buffer});
}
The NVIDIA C++ Compiler (NVC++) implementation of the parallel std::sort algorithm sorts the file on the GPU when using the -stdpar=gpu option. There are many restrictions on the use of this option, as detailed in the HPC SDK documentation.
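For reference, a program like the one above might be built along these lines (the exact flags and file name are illustrative):
nvc++ -std=c++20 -stdpar=gpu -o sortfile sortfile.cpp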
Atomic memory operations and synchronization primitives
HMM supports all memory operations, which includes atomic memory operations. That is, programmers may use atomic memory operations to synchronize GPU and CPU threads with flags. While certain parts of the C++ std::atomic APIs use system calls that are not available on the GPU yet, such as std::atomic::wait and std::atomic::notify_all/_one APIs, most of the C++ concurrency primitive APIs are available and readily useful to perform message passing between GPU and CPU threads.
For more information, see the HPC SDK documentation on C++ Parallel Algorithms (Interoperability with the C++ Standard Library) and the atomic_flag.cpp HMM sample code on GitHub. You can extend this set using CUDA C++. See the ticket_lock.cpp HMM sample code on GitHub for more details.
Before HMM: CPU←→GPU message passing
void main() {
// Variables allocated with cudaMallocManaged
std::atomic<int>* flag;
int* msg;
cudaMallocManaged(&flag, sizeof(std::atomic<int>));
cudaMallocManaged(&msg, sizeof(int));
new (flag) std::atomic<int>(0);
*msg = 0;
// Start a different CPU thread…
auto t = std::jthread([&] {
// … that launches and waits
// on a GPU kernel completing
std::for_each_n(
std::execution::par,
&msg, 1, [&](int* msg) {
// GPU thread writes message…
*msg = 42; // all accesses via ptrs
// …and signals completion…
flag->store(1); // all accesses via ptrs
});
});
// CPU thread waits on GPU thread
while (flag->load() == 0); // all accesses via ptrs
// …and reads the message:
std::cout << *msg << std::endl;
// …the GPU kernel and thread
// may still be running here…
}
After HMM: CPU←→GPU message passing
int main() {
  // Variables on CPU thread stack:
  std::atomic<int> flag = 0; // Atomic
  int msg = 0;               // Message

  // Start a different CPU thread…
  auto t = std::jthread([&] {
    // … that launches and waits
    // on a GPU kernel completing
    std::for_each_n(
      std::execution::par,
      &msg, 1, [&](int& msg) {
        // GPU thread writes message…
        msg = 42;
        // …and signals completion…
        flag.store(1);
      });
  });

  // CPU thread waits on GPU thread
  while (flag.load() == 0);

  // …and reads the message:
  std::cout << msg << std::endl;

  // …the GPU kernel and thread
  // may still be running here…
}
Before HMM: CPU←→GPU locks
int main() {
  // Variables allocated with cudaMallocManaged
  ticket_lock* lock; // Lock
  int* msg;          // Message
  cudaMallocManaged(&lock, sizeof(ticket_lock));
  cudaMallocManaged(&msg, sizeof(int));
  new (lock) ticket_lock();
  *msg = 0;

  // Start a different CPU thread…
  auto t = std::jthread([&] {
    // … that launches and waits
    // on a GPU kernel completing
    std::for_each_n(
      std::execution::par,
      &msg, 1, [&](int* msg) {
        // GPU thread takes lock…
        auto g = lock->guard();
        // … and sets message (no atomics)
        *msg += 1;
      }); // GPU thread releases lock here
  });

  { // Concurrently with GPU thread
    // … CPU thread takes lock…
    auto g = lock->guard();
    // … and sets message (no atomics)
    *msg += 1;
  } // CPU thread releases lock here

  t.join(); // Wait on GPU kernel completion
  std::cout << *msg << std::endl;
}
After HMM: CPU←→GPU locks
int main() {
  // Variables on CPU thread stack:
  ticket_lock lock; // Lock
  int msg = 0;      // Message

  // Start a different CPU thread…
  auto t = std::jthread([&] {
    // … that launches and waits
    // on a GPU kernel completing
    std::for_each_n(
      std::execution::par,
      &msg, 1, [&](int& msg) {
        // GPU thread takes lock…
        auto g = lock.guard();
        // … and sets message (no atomics)
        msg += 1;
      }); // GPU thread releases lock here
  });

  { // Concurrently with GPU thread
    // … CPU thread takes lock…
    auto g = lock.guard();
    // … and sets message (no atomics)
    msg += 1;
  } // CPU thread releases lock here

  t.join(); // Wait on GPU kernel completion
  std::cout << msg << std::endl;
}
Accelerate complex HPC workloads with HMM
Research groups working on large, long-lived HPC applications have wanted more productive and portable programming models for heterogeneous platforms for years. m-AIA is a multi-physics solver spanning almost 300,000 lines of code developed at the Institute of Aerodynamics at RWTH Aachen, Germany. See Accelerating a C++ CFD Code with OpenACC for more information. While the initial prototype used OpenACC, m-AIA is now partially accelerated on GPUs using the ISO C++ programming model described above, which was not available when the prototype work was done.
HMM enabled our team to accelerate new m-AIA workloads that interface with GPU-agnostic third-party libraries such as FFTW and pnetcdf, which are used for initial conditions and I/O and are oblivious to the GPU directly accessing the same memory.
Leverage memory-mapped I/O for fast development
One of the interesting features that HMM provides is memory-mapped file I/O directly from the GPU. It enables developers to read files directly from supported storage or disk without staging them in system memory and without copying the data to the high bandwidth GPU memory. This also enables application developers to easily process input data larger than the available physical system memory, without constructing an iterative data ingestion and computation workflow.
To demonstrate this functionality, our team wrote a sample application that builds a histogram of hourly total precipitation for every day of the year from the ERA5 reanalysis dataset. For more details, see The ERA5 global reanalysis.
The ERA5 dataset consists of hourly estimates of several atmospheric variables. In the dataset, total precipitation data for each month is stored in a separate file. We used 40 years of total precipitation data from 1981–2020, which sum to 480 input files aggregating to ~1.3 TB total input data size. See Figure 1 for example results.
Using the Unix mmap API, input files can be mapped to a contiguous virtual address space. With HMM, this virtual address can be passed as input to a CUDA kernel which can then directly access the values to build a histogram of total precipitation for each hour for all the days in a year.
The resulting histogram will reside in GPU memory and can be used to easily compute interesting statistics such as average monthly precipitation over the northern hemisphere. As an example, we also computed average hourly precipitation for the months of February and August. To see the code for this application, visit HMM_sample_code on GitHub.
Before HMM: Batch and pipeline memory transfers
size_t chunk_sz = 70_gb;
std::vector<char> buffer(chunk_sz);
for (auto& fp : files)
  for (size_t off = 0; off < N; off += chunk_sz) {
    fread(buffer.data(), 1, chunk_sz, fp);
    cudaMemcpy(dev, buffer.data(), chunk_sz, H2D);

    histogram<<<...>>>(dev, N, out);
    cudaDeviceSynchronize();
  }
After HMM: Memory map and transfer on demand
void* buffer = mmap(NULL, alloc_size,
                    PROT_NONE, MAP_PRIVATE | MAP_ANONYMOUS,
                    -1, 0);
for (auto& fd : files)
  mmap((char*)buffer + file_offset, fileByteSize,
       PROT_READ, MAP_PRIVATE | MAP_FIXED, fd, 0);

histogram<<<...>>>(buffer, total_N, out);
cudaDeviceSynchronize();
Enabling and detecting HMM
The CUDA Toolkit and driver automatically enable HMM whenever they detect that your system can handle it. The requirements are documented in detail in the CUDA 12.2 Release Notes: General CUDA.
Query the Addressing Mode property to verify that HMM is enabled:
$ nvidia-smi -q | grep Addressing
Addressing Mode : HMM
To detect systems in which GPUs may access system allocated memory, query the cudaDevAttrPageableMemoryAccess device attribute.
In addition, systems such as the NVIDIA Grace Hopper Superchip support ATS, which has similar behavior to HMM. In fact, the programming model for HMM and ATS systems is the same, so merely checking for cudaDevAttrPageableMemoryAccess suffices for most programs.
However, for performance tuning and other advanced programming, it is possible to discern between HMM and ATS by also querying for cudaDevAttrPageableMemoryAccessUsesHostPageTables. Table 2 shows how to interpret the results.
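As a sketch of that query (error handling omitted; device 0 assumed), a program can read both attributes with cudaDeviceGetAttribute and combine them as Table 2 describes:
#include <cuda_runtime.h>
#include <cstdio>

int main() {
  int pageable = 0, host_page_tables = 0;
  cudaDeviceGetAttribute(&pageable, cudaDevAttrPageableMemoryAccess, 0);
  cudaDeviceGetAttribute(&host_page_tables,
                         cudaDevAttrPageableMemoryAccessUsesHostPageTables, 0);
  if (!pageable) {
    std::printf("GPU 0 cannot access system allocated (pageable) memory\n");
  } else if (host_page_tables) {
    std::printf("Pageable memory access through host page tables (ATS-style system)\n");
  } else {
    std::printf("Pageable memory access without host page tables (HMM)\n");
  }
}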
For portable applications that are only interested in querying whether the programming model exposed by HMM or ATS is available, querying the ‘pageable memory access’ property usually suffices.
Unified Memory performance hints
There are no changes to the semantics of pre-existing Unified Memory performance hints. For applications that are already using CUDA Unified Memory on hardware-coherent systems like NVIDIA Grace Hopper, the main change is that HMM enables them to run “as is” on more systems within the limitations mentioned above.
The pre-existing Unified Memory hints also work with system allocated memory on HMM systems, as the sketch below illustrates.
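For example, a minimal sketch (placeholder sizes and device ID) that applies the familiar hint APIs to a plain malloc allocation on an HMM system:
#include <cuda_runtime.h>
#include <cstdlib>

int main() {
  size_t bytes = 1 << 20;
  char* data = static_cast<char*>(std::malloc(bytes)); // system allocated memory

  cudaStream_t stream;
  cudaStreamCreate(&stream);

  // Pre-existing Unified Memory performance hints, applied to malloc'd memory:
  cudaMemAdvise(data, bytes, cudaMemAdviseSetPreferredLocation, 0);
  cudaMemPrefetchAsync(data, bytes, 0, stream); // prefetch to GPU 0

  cudaStreamSynchronize(stream);
  cudaStreamDestroy(stream);
  std::free(data);
}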
A little more advanced: there is a new CUDA 12.2 API, cudaMemAdvise_v2, that enables applications to choose which NUMA node a given memory range should prefer. This comes into play when HMM places the memory contents on the CPU side.
As always, memory management hints may either improve or degrade performance. Behavior is application and workload dependent, but none of the hints impacts the correctness of the application.
Limitations of HMM in CUDA 12.2
The initial HMM implementation in CUDA 12.2 delivers new features without regressing the performance of any pre-existing applications. The limitations of HMM in CUDA 12.2 are documented in detail in the CUDA 12.2 Release Notes: General CUDA.
Stay tuned for future CUDA driver updates that will address HMM limitations and improve performance.
Summary
HMM simplifies the programming model by removing the need for explicit memory management for GPU programs that run on common PCIe-based (x86, typically) computers. Programmers can simply use malloc, C++ new, and mmap calls directly, just as they already do for CPU programming.
HMM further boosts programmer productivity by enabling a wide variety of standard programming language features to be safely used within CUDA programs. There is no need to worry about accidentally exposing system allocated memory to a CUDA kernel.
HMM enables a seamless transition to and from the new NVIDIA Grace Hopper Superchip, and similar machines. On PCIe-based machines, HMM provides the same simplified programming model as that used on the NVIDIA Grace Hopper Superchip.
Unified Memory resources
To learn more about CUDA Unified Memory, see the earlier Unified Memory posts on the NVIDIA Developer Blog. You can also join the conversation at the NVIDIA Developer Forum for CUDA.
How to Overlap Data Transfers in CUDA Fortran
In my previous CUDA Fortran post I discussed how to transfer data efficiently between the host and device. In this post, I discuss how to overlap data transfers with computation on the host, computation on the device, and in some cases other data transfers between the host and device. Achieving overlap between data transfers and other operations requires the use of CUDA streams, so first let’s learn about streams.
CUDA Streams
A stream in CUDA is a sequence of operations that execute on the device in the order in which they are issued by the host code. While operations within a stream are guaranteed to execute in the prescribed order, operations in different streams can be interleaved and, when possible, they can even run concurrently.
The default stream
All device operations (kernels and data transfers) in CUDA run in a stream. When no stream is specified, the default stream (also called the “null stream”) is used. The default stream is different from other streams because it is a synchronizing stream with respect to operations on the device: no operation in the default stream will begin until all previously issued operations in any stream on the device have completed, and an operation in the default stream must complete before any other operation (in any stream on the device) will begin.
Let’s look at some simple code examples that use the default stream, and discuss how operations progress from the perspective of the host as well as the device.
a_d = a
call increment<<<1,N>>>(a_d)
a = a_d
In the code above, from the perspective of the device, all three operations are issued to the same (default) stream and will execute in the order that they were issued.
From the perspective of the host, the implicit data transfers are blocking or synchronous transfers, while the kernel launch is asynchronous. Since the host-to-device data transfer on the first line is synchronous, the CPU thread will not reach the kernel call on the second line until the host-to-device transfer is complete. Once the kernel is issued, the CPU thread moves to the third line, but the transfer on that line cannot begin due to the device-side order of execution.
The asynchronous behavior of kernel launches from the host’s perspective makes overlapping device and host computation very simple. We can modify the code to add some independent CPU computation as follows.
a_d = a
call increment<<<1,N>>>(a_d)
call myCPUroutine(b)
a = a_d
In the above code, as soon as the increment() kernel is launched on the device the CPU thread executes myCPUroutine(), overlapping its execution on the CPU with the kernel execution on the GPU. Whether the host or device routine completes first doesn’t affect the subsequent device-to-host transfer, which will begin only after the kernel completes. From the perspective of the device, nothing has changed from the previous example; the device is completely unaware of myCPUroutine().
Non-default streams
Non-default streams in CUDA Fortran are declared, created, and destroyed in host code as follows.
integer(kind=cuda_stream_kind) :: stream1
istat = cudaStreamCreate(stream1)
istat = cudaStreamDestroy(stream1)
To issue a data transfer to a non-default stream we use the cudaMemcpyAsync() function, which is similar to the cudaMemcpy() function discussed in the previous post, but takes a stream identifier as a fourth argument.
istat = cudaMemcpyAsync(a_d, a, N, stream1)
cudaMemcpyAsync() is non-blocking on the host, so control returns to the host thread immediately after the transfer is issued. There are cudaMemcpy2DAsync() and cudaMemcpy3DAsync() variants of this routine which can transfer 2D and 3D array sections asynchronously in the specified streams.
To issue a kernel to a non-default stream we specify the stream identifier as a fourth execution configuration parameter (the third execution configuration parameter allocates shared device memory, which we’ll talk about later; use 0 for now).
call increment<<<1,N,0,stream1>>>(a_d)
Synchronization with streams
Since all operations in non-default streams are non-blocking with respect to the host code, you will run across situations where you need to synchronize the host code with operations in a stream. There are several ways to do this. The “heavy hammer” way is to use cudaDeviceSynchronize(), which blocks the host code until all previously issued operations on the device have completed. In most cases this is overkill, and can really hurt performance due to stalling the entire device and host thread.
The CUDA stream API has multiple less severe methods of synchronizing the host with a stream. The function cudaStreamSynchronize(stream) can be used to block the host thread until all previously issued operations in the specified stream have completed. The function cudaStreamQuery(stream) tests whether all operations issued to the specified stream have completed, without blocking host execution. The functions cudaEventSynchronize(event) and cudaEventQuery(event) act similar to their stream counterparts, except that their result is based on whether a specified event has been recorded rather than whether a specified stream is idle.
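The same function names exist in the CUDA runtime API, so here is a brief CUDA C/C++ sketch (hypothetical stream and event handles; error checking omitted) of how these synchronization calls fit together; CUDA Fortran exposes the same routines through the cudafor module:
#include <cuda_runtime.h>

void sync_examples() {
  cudaStream_t stream;
  cudaEvent_t event;
  cudaStreamCreate(&stream);
  cudaEventCreate(&event);

  // ... issue kernels and cudaMemcpyAsync calls into 'stream' here ...
  cudaEventRecord(event, stream);   // mark a point in the stream's work queue

  cudaStreamSynchronize(stream);    // block the host until the stream drains
  bool stream_idle = (cudaStreamQuery(stream) == cudaSuccess); // non-blocking test

  cudaEventSynchronize(event);      // block the host until the event is reached
  bool event_done = (cudaEventQuery(event) == cudaSuccess);    // non-blocking test

  (void)stream_idle; (void)event_done;
  cudaEventDestroy(event);
  cudaStreamDestroy(stream);
}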
Overlapping Kernel Execution and Data Transfers
Earlier we demonstrated how to overlap kernel execution in the default stream with execution of code on the host. But our main goal in this post is to show you how to overlap kernel execution with data transfers. There are several requirements for this to happen: the device must be capable of concurrent copy and execution, the kernel execution and the data transfer to be overlapped must be issued to different, non-default streams, and the host memory involved in the transfer must be pinned memory.
So let’s modify our simple host code from above to use multiple streams and see if we can achieve any overlap. The full code for this example is available on Github. In the modified code, we break up the array of size N into chunks of streamSize elements. Since the kernel operates independently on all elements, each of the chunks can be processed independently. The number of (non-default) streams used is nStreams=N/streamSize. There are multiple ways to implement the domain decomposition of the data and processing; one is to loop over all the operations for each chunk of the array as in this example code.
do i = 1, nStreams
   offset = (i - 1) * streamSize
   istat = cudaMemcpyAsync(a_d(offset+1), a(offset+1), streamSize, stream(i))
   call kernel<<<streamSize/blockSize, blockSize, 0, stream(i)>>>(a_d, offset)
   istat = cudaMemcpyAsync(a(offset+1), a_d(offset+1), streamSize, stream(i))
enddo
Another approach is to batch similar operations together, issuing all the host-to-device transfers first, followed by all kernel launches, and then all device-to-host transfers, as in the following code.
do i = 1, nStreams
   offset = (i - 1) * streamSize
   istat = cudaMemcpyAsync(a_d(offset+1), a(offset+1), streamSize, stream(i))
enddo

do i = 1, nStreams
   offset = (i - 1) * streamSize
   call kernel<<<streamSize/blockSize, blockSize, 0, stream(i)>>>(a_d, offset)
enddo

do i = 1, nStreams
   offset = (i - 1) * streamSize
   istat = cudaMemcpyAsync(a(offset+1), a_d(offset+1), streamSize, stream(i))
enddo
Both asynchronous methods shown above yield correct results, and in both cases dependent operations are issued to the same stream in the order in which they need to be executed. But the two approaches perform very differently depending on the specific generation of GPU used. Running the test code (available on Github) on a Tesla C1060 (compute capability 1.3) gives the following results.
Device : Tesla C1060
Time for sequential transfer and execute (ms): 12.92381
  max error: 2.3841858E-07
Time for asynchronous V1 transfer and execute (ms): 13.63690
  max error: 2.3841858E-07
Time for asynchronous V2 transfer and execute (ms): 8.845888
  max error: 2.3841858E-07
On a Tesla C2050 (compute capability 2.0) we get the following results.
Device : Tesla C2050
Time for sequential transfer and execute (ms): 9.984512
  max error: 1.1920929E-07
Time for asynchronous V1 transfer and execute (ms): 5.735584
  max error: 1.1920929E-07
Time for asynchronous V2 transfer and execute (ms): 7.597984
  max error: 1.1920929E-07
Here the first time reported is the sequential transfer and kernel execution using blocking transfers, which we use as a baseline for asynchronous speedup comparison. Why do the two asynchronous strategies perform differently on different architectures? To decipher these results we need to understand a bit more about how CUDA devices schedule and execute tasks. CUDA devices contain engines for various tasks, which queue up operations as they are issued. Dependencies between tasks in different engines are maintained, but within any engine all external dependencies are lost; tasks in each engine’s queue are executed in the order they are issued. The C1060 has a single copy engine and a single kernel engine. A time line for the execution of our example code on a C1060 is shown in the following diagram.
In the schematic we assume that the time required for the host-to-device transfer, kernel execution, and device-to-host transfer are approximately the same (the kernel code was chosen in order to achieve this). As expected for the sequential kernel, there is no overlap in any of the operations. For the first asynchronous version of our code the order of execution in the copy engine is: H2D stream(1), D2H stream(1), H2D stream(2), D2H stream(2), and so forth. This is why we do not see any speed-up when using the first asynchronous version on the C1060: tasks were issued to the copy engine in an order that precludes any overlap of kernel execution and data transfer. For version two, however, where all the host-to-device transfers are issued before any of the device-to-host transfers, overlap is possible as indicated by the lower execution time. From our schematic, we expect the execution of asynchronous version 2 to be 8/12 of the sequential version, or 8.7 ms which is confirmed in the timing results given previously.
On the C2050, two features interact to cause a behavior difference from the C1060. The C2050 has two copy engines, one for host-to-device transfers and another for device-to-host transfers, as well as a single kernel engine. The following diagram illustrates execution of our example on the C2050.
Having two copy engines explains why asynchronous version 1 achieves good speed-up on the C2050: the device-to-host transfer of data in stream(i) does not block the host-to-device transfer of data in stream(i+1) as it did on the C1060 because there is a separate engine for each copy direction on the C2050. The schematic predicts the execution time to be cut in half relative to the sequential version, and this is roughly what our timing results showed.
But what about the performance degradation observed in asynchronous version 2 on the C2050? This is related to the C2050’s ability to concurrently run multiple kernels. When multiple kernels are issued back-to-back in different (non-default) streams, the scheduler tries to enable concurrent execution of these kernels and as a result delays a signal that normally occurs after each kernel completion (which is responsible for kicking off the device-to-host transfer) until all kernels complete. So, while there is overlap between host-to-device transfers and kernel execution in the second version of our asynchronous code, there is no overlap between kernel execution and device-to-host transfers. The schematic predicts an overall time for the asynchronous version 2 to be 9/12 of the time for the sequential version, or 7.5 ms, and this is confirmed by our timing results.
A more detailed description of the example used in this post is available in CUDA Fortran Asynchronous Data Transfers. The good news is that for devices with compute capability 3.5 (the K20 series), the Hyper-Q feature eliminates the need to tailor the launch order, so either approach above will work. We will discuss using Kepler features in a future post, but for now, here are the results of running the sample code on a Tesla K20c GPU. As you can see, both asynchronous methods achieve the same speedup over the synchronous code.
Device : Tesla K20c
Time for sequential transfer and execute (ms): 7.101760
  max error: 1.1920929E-07
Time for asynchronous V1 transfer and execute (ms): 3.974144
  max error: 1.1920929E-07
Time for asynchronous V2 transfer and execute (ms): 3.967616
  max error: 1.1920929E-07
Summary
This post and the previous one discussed how to optimize data transfers between the host and device. The previous post focused on how to minimize the time for executing such transfers, and this post introduced streams and how to use them to mask data transfer time by concurrently executing copies and kernels.
In a post dealing with streams I should mention that while using the default stream is convenient for developing code—synchronous code is simpler—eventually your code should use non-default streams. This is especially important when writing libraries. If code in a library uses the default stream, there is no chance for the end user to overlap data transfers with library kernel execution.
Now you know how to move data efficiently between the host and device, so we’ll look at how to access data efficiently from within kernels in the next post.
An Efficient Matrix Transpose in CUDA Fortran
My previous CUDA Fortran post covered the mechanics of using shared memory, including static and dynamic allocation. In this post I will show some of the performance gains achievable using shared memory. Specifically, I will optimize a matrix transpose to show how to use shared memory to reorder strided global memory accesses into coalesced accesses.
Matrix Transpose
The code we wish to optimize is a transpose of a matrix of single precision values that operates out-of-place, i.e. the input and output are separate arrays in memory. For simplicity of presentation, we’ll consider only square matrices whose dimensions are integral multiples of 32 on a side. The entire code is available on Github. It consists of several kernels as well as host code that performs typical tasks such as allocation and data transfers between host and device, launching and timing the kernels, validating their results, and deallocating host and device memory. In this post I’ll only include the kernel code; you can view the rest or try it out on Github.
In addition to performing several different matrix transposes, we run simple matrix copy kernels because copy performance indicates the performance that we would like the matrix transpose to achieve. For both matrix copy and transpose, the relevant performance metric is effective bandwidth, calculated in GB/s by dividing twice the size in GB of the matrix (once for loading the matrix and once for storing) by time in seconds of execution.
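For the 1024×1024 single precision matrices timed later in this post, for example, the calculation works out to (with t the kernel execution time in seconds):

effective bandwidth (GB/s) = 2 × 1024 × 1024 × 4 bytes / (10⁹ × t)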
All kernels in this study launch blocks of 32×8 threads (TILE_DIM=32, BLOCK_ROWS=8 in the code), and each thread block transposes (or copies) a tile of size 32×32. Using a thread block with fewer threads than elements in a tile is advantageous for the matrix transpose because each thread transposes four matrix elements, so much of the index calculation cost is amortized over these elements.
The kernels in this example map threads to matrix elements using a Cartesian (x,y) mapping rather than a row/column mapping to simplify the meaning of the components of the automatic variables in CUDA Fortran: threadIdx%x is horizontal and threadIdx%y is vertical. This mapping is up to the programmer; the important thing to remember is that to ensure memory coalescing we want to map the quickest varying component to contiguous elements in memory. In Fortran contiguous addresses correspond to the first index of a multidimensional array, and threadIdx%x and blockIdx%x vary quickest within blocks and grids, respectively.
Simple Matrix Copy
Let’s start by looking at the matrix copy kernel.
attributes(global) subroutine copy(odata, idata)
implicit none
real, intent(out) :: odata(nx,ny)
real, intent(in) :: idata(nx,ny)
integer :: x, y, j
x = (blockIdx%x-1) * TILE_DIM + threadIdx%x
y = (blockIdx%y-1) * TILE_DIM + threadIdx%y
do j = 0, TILE_DIM-1, BLOCK_ROWS
odata(x,y+j) = idata(x,y+j)
end do
end subroutine copy
Each thread copies four elements of the matrix in a loop at the end of this routine because the number of threads in a block is smaller by a factor of four (TILE_DIM/BLOCK_ROWS) than the number of elements in a tile. Note also that TILE_DIM must be used in the calculation of the matrix index y rather than BLOCK_ROWS or blockDim%y. The loop iterates over the second dimension and not the first so that contiguous threads load and store contiguous data, and all reads from idata and writes to odata are coalesced.
Naive Matrix Transpose
Our first transpose kernel looks very similar to the copy kernel. The only difference is that the indices for odata are swapped.
attributes(global) subroutine transposeNaive(odata, idata)
implicit none
real, intent(out) :: odata(ny,nx)
real, intent(in) :: idata(nx,ny)
integer :: x, y, j
x = (blockIdx%x-1) * TILE_DIM + threadIdx%x
y = (blockIdx%y-1) * TILE_DIM + threadIdx%y
do j = 0, TILE_DIM-1, BLOCK_ROWS
odata(y+j,x) = idata(x,y+j)
end do
end subroutine transposeNaive
In transposeNaive the reads from idata are coalesced as in the copy kernel, but for our 1024×1024 test matrix the writes to odata have a stride of 1024 elements or 4096 bytes between contiguous threads. This puts us well into the asymptote of the strided memory access plot from our global memory coalescing post, and we expect the performance of this kernel to suffer accordingly. The results of the copy and transposeNaive kernels bear this out.
The transposeNaive kernel achieves only a fraction of the effective bandwidth of the copy kernel. Because this kernel does very little other than copying, we would like to get closer to copy throughput. Let’s look at how we can do that.
Coalesced Transpose Via Shared Memory
The remedy for the poor transpose performance is to use shared memory to avoid the large strides through global memory. The following figure depicts how shared memory is used in the transpose.
The following kernel performs this “tiled” transpose.
attributes(global) subroutine transposeCoalesced(odata, idata)
implicit none
real, intent(out) :: odata(ny,nx)
real, intent(in) :: idata(nx,ny)
real, shared :: tile(TILE_DIM, TILE_DIM)
integer :: x, y, j
x = (blockIdx%x-1) * TILE_DIM + threadIdx%x
y = (blockIdx%y-1) * TILE_DIM + threadIdx%y
do j = 0, TILE_DIM-1, BLOCK_ROWS
tile(threadIdx%x, threadIdx%y+j) = idata(x,y+j)
end do
call syncthreads()
x = (blockIdx%y-1) * TILE_DIM + threadIdx%x
y = (blockIdx%x-1) * TILE_DIM + threadIdx%y
do j = 0, TILE_DIM-1, BLOCK_ROWS
odata(x,y+j) = tile(threadIdx%y+j, threadIdx%x)
end do
end subroutine transposeCoalesced
In the first do loop, a warp of threads reads contiguous data from idata into rows of the shared memory tile. After recalculating the array indices, a column of the shared memory tile is written to contiguous addresses in odata. Because threads write different data to odata than they read from idata, we must use a block-wise barrier synchronization syncthreads(). This approach gives us a nice speed up, as shown in this updated effective bandwidth table.
The transposeCoalesced results are an improvement over the transposeNaive case, but they are still far from the performance of the copy kernel. One possibility for the performance gap is the overhead associated with using shared memory and the required synchronization barrier syncthreads(). We can easily test this using the following copy kernel that uses shared memory.
attributes(global) subroutine copySharedMem(odata, idata)
implicit none
real, intent(out) :: odata(nx,ny)
real, intent(in) :: idata(nx,ny)
real, shared :: tile(TILE_DIM, TILE_DIM)
integer :: x, y, j
x = (blockIdx%x-1) * TILE_DIM + threadIdx%x
y = (blockIdx%y-1) * TILE_DIM + threadIdx%y
do j = 0, TILE_DIM-1, BLOCK_ROWS
tile(threadIdx%x, threadIdx%y+j) = idata(x,y+j)
end do
call syncthreads()
do j = 0, TILE_DIM-1, BLOCK_ROWS
odata(x,y+j) = tile(threadIdx%x, threadIdx%y+j)
end do
end subroutine copySharedMem
Note that the syncthreads() call is technically not needed in this case, because the operations for an element are performed by the same thread, but we include it here to mimic the transpose behavior. The second line of the table below shows that the problem is not the use of shared memory or the barrier synchronization.
Shared Memory Bank Conflicts
For a shared memory tile of 32 × 32 elements, all elements in a column of data map to the same shared memory bank, resulting in a worst-case scenario for memory bank conflicts: reading a column of data results in a 32-way bank conflict. Luckily, the solution for this is simply to pad the first index in the declaration of the shared memory tile.
real, shared :: tile(TILE_DIM+1, TILE_DIM)
Removing the bank conflicts in this way brings us within 93% of our fastest copy throughput.
Summary
In this post we presented three kernels that represent various optimizations for a matrix transpose. The kernels show how to use shared memory to coalesce global memory access and how to pad arrays to avoid shared memory bank conflicts. Looking at the relative gains of our kernels, coalescing global memory accesses is by far the most critical aspect of achieving good performance, which is true of many applications. Because global memory coalescing is so important, we revisit it again in the next post when we look at a finite difference computation on a 3D mesh.
Mastering String Transformations in RAPIDS libcudf
Efficient processing of string data is vital for many data science applications. To extract valuable information from string data, RAPIDS libcudf provides powerful tools for accelerating string data transformations. libcudf is a C++ GPU DataFrame library used for loading, joining, aggregating, and filtering data.
In data science, string data represents speech, text, genetic sequences, logging, and many other types of information. When working with string data for machine learning and feature engineering, the data must frequently be normalized and transformed before it can be applied to specific use cases. libcudf provides both general purpose APIs as well as device-side utilities to enable a wide range of custom string operations.
This post demonstrates how to skillfully transform strings columns with the libcudf general purpose API. You’ll gain new knowledge on how to unlock peak performance using custom kernels and libcudf device-side utilities. This post also walks you through examples of how to best manage GPU memory and efficiently construct libcudf columns to speed up your string transformations.
Introducing Arrow format for strings columns
libcudf stores string data in device memory using Arrow format, which represents strings columns as two child columns: chars and offsets (Figure 1).
The chars column holds the string data as UTF-8 encoded character bytes that are stored contiguously in memory.
The offsets column contains an increasing sequence of integers which are byte positions identifying the start of each individual string within the chars data array. The final offset element is the total number of bytes in the chars column. This means the size of an individual string at row i is defined as (offsets[i+1]-offsets[i]).
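As a small illustration with made-up values, a strings column holding the four strings "you", "are", "the", "best" would be stored as:

chars   : y o u a r e t h e b e s t
offsets : 0 3 6 9 13

so, for example, the size of the string at row 3 is offsets[4] - offsets[3] = 13 - 9 = 4 bytes ("best").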
Example of string redaction function
To illustrate an example string transformation, consider a function that receives two input strings columns and produces one redacted output strings column.
The input data has the following form: a “names” column containing first and last names separated by a space and a “visibilities” column containing the status of “public” or “private.”
We propose the “redact” function that operates on the input data to produce output data consisting of the first initial of the last name followed by a space and the entire first name. However, if the corresponding visibility column is “private” then the output string should be fully redacted as “X X.”
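For instance, with hypothetical rows, the transformation maps inputs to outputs as follows:

names          visibilities    redacted output
"John Smith"   "public"        "S John"
"Jane Quinn"   "private"       "X X"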
Transforming strings with the libcudf API
First, string transformation can be accomplished using the libcudf strings API. The general purpose API is an excellent starting point and a good baseline for comparing performance.
The API functions operate on an entire strings column, launching at least one kernel per function and assigning one thread per string. Each thread handles a single row of data in parallel across the GPU and outputs a single row as part of a new output column.
To complete the redact example function using the general purpose API, follow these steps:
// convert the visibility label into a boolean
auto const visible = cudf::string_scalar(std::string("public"));
auto const allowed = cudf::strings::contains(visibilities, visible);
// redact names
auto const redaction = cudf::string_scalar(std::string("X X"));
auto const redacted = cudf::copy_if_else(names, redaction, allowed->view());
// split the first name and last initial into two columns
auto const sv = cudf::strings_column_view(redacted->view());
auto const first_last = cudf::strings::split(sv);
auto const first = first_last->view().column(0);
auto const last = first_last->view().column(1);
auto const last_initial = cudf::strings::slice_strings(last, 0, 1);
// assemble a result column
auto const tv = cudf::table_view({last_initial->view(), first});
auto result = cudf::strings::concatenate(tv, std::string(" "));
This approach takes about 3.5 ms on an A6000 with 600K rows of data. This example uses contains, copy_if_else, split, slice_strings and concatenate to accomplish a custom string transformation. A profiling analysis with Nsight Systems shows that the split function takes the longest amount of time, followed by slice_strings and concatenate.
Figure 2 shows profiling data from Nsight Systems of the redact example, showing end-to-end string processing at up to ~600 million elements per second. The regions correspond to NVTX ranges associated with each function. Light blue ranges correspond to periods where CUDA kernels are running.
Transforming strings with a custom kernel
The libcudf strings API is a fast and efficient toolkit for transforming strings, but sometimes performance-critical functions need to run even faster. A key source of extra work in the libcudf strings API is the creation of at least one new strings column in global device memory for each API call, opening up the opportunity to combine multiple API calls into a custom kernel.
Performance limitations in kernel malloc calls
First, we’ll build a custom kernel to implement the redact example transformation. When designing this kernel, we must keep in mind that libcudf strings columns are immutable.
Strings columns cannot be changed in place because the character bytes are stored contiguously, and any changes to the length of a string would invalidate the offsets data. Therefore the redact_kernel custom kernel generates a new strings column by using a libcudf column factory to build both offsets and chars child columns.
In this first approach, the output string for each row is created in dynamic device memory using a malloc call inside the kernel. The custom kernel output is a vector of device pointers to each row output, and this vector serves as input to a strings column factory.
The custom kernel accepts a cudf::column_device_view to access the strings column data and uses the element method to return a cudf::string_view representing the string data at the specified row index. The kernel output is a vector of type cudf::string_view that holds pointers to the device memory containing the output string and the size of that string in bytes.
The cudf::string_view class is similar to the std::string_view class but is implemented specifically for libcudf and wraps a fixed length of character data in device memory encoded as UTF-8. It has many of the same features (find and substr functions, for example) and limitations (no null terminator) as the std counterpart. A cudf::string_view represents a character sequence stored in device memory and so we can use it here to record the malloc’d memory for an output vector.
Malloc kernel
// note the column_device_view inputs to the kernel
__global__ void redact_kernel(cudf::column_device_view const d_names,
cudf::column_device_view const d_visibilities,
cudf::string_view redaction,
cudf::string_view* d_output)
{
// get index for this thread
auto index = threadIdx.x + blockIdx.x * blockDim.x;
if (index >= d_names.size()) return;
auto const visible = cudf::string_view("public", 6);
auto const name = d_names.element<cudf::string_view>(index);
auto const vis = d_visibilities.element<cudf::string_view>(index);
if (vis == visible) {
auto const space_idx = name.find(' ');
auto const first = name.substr(0, space_idx);
auto const last_initial = name.substr(space_idx + 1, 1);
auto const output_size = first.size_bytes() + last_initial.size_bytes() + 1;
char* output_ptr = static_cast<char*>(malloc(output_size));
// build output string
d_output[index] = cudf::string_view{output_ptr, output_size};
memcpy(output_ptr, last_initial.data(), last_initial.size_bytes());
output_ptr += last_initial.size_bytes();
*output_ptr++ = ' ';
memcpy(output_ptr, first.data(), first.size_bytes());
} else {
d_output[index] = cudf::string_view{redaction.data(), redaction.size_bytes()};
}
}
__global__ void free_kernel(cudf::string_view redaction, cudf::string_view* d_output, int count)
{
auto index = threadIdx.x + blockIdx.x * blockDim.x;
if (index >= count) return;
auto ptr = const_cast<char*>(d_output[index].data());
if (ptr != redaction.data()) free(ptr); // free everything that does not match the redaction string
}
This might seem like a reasonable approach, until the kernel performance is measured. This approach takes about 108 ms on an A6000 with 600K rows of data—more than 30x slower than the solution provided above using the libcudf strings API.
redact_kernel 60.3ms
free_kernel 45.5ms
make_strings_column 0.5ms
The main bottleneck is the malloc/free calls inside the two kernels here. The CUDA dynamic device memory requires malloc/free calls in a kernel to be synchronized, causing parallel execution to degenerate into sequential execution.
Pre-allocating working memory to eliminate bottlenecks
Eliminate the malloc/free bottleneck by replacing the malloc/free calls in the kernel with pre-allocated working memory before launching the kernel.
For the redact example, the output size of each string in this example should be no larger than the input string itself, since the logic only removes characters. Therefore, a single device memory buffer can be used with the same size as the input buffer. Use the input offsets to locate each row position.
Accessing the strings column’s offsets involves wrapping the cudf::column_view with a cudf::strings_column_view and calling its offsets_begin method. The size of the chars child column can also be accessed using the chars_size method. Then a rmm::device_uvector is pre-allocated before calling the kernel to store the character output data.
auto const scv = cudf::strings_column_view(names);
auto const offsets = scv.offsets_begin();
auto working_memory = rmm::device_uvector<char>(scv.chars_size(), stream);
Pre-allocated kernel
__global__ void redact_kernel(cudf::column_device_view const d_names,
cudf::column_device_view const d_visibilities,
cudf::string_view redaction,
char* working_memory,
cudf::offset_type const* d_offsets,
cudf::string_view* d_output)
{
auto index = threadIdx.x + blockIdx.x * blockDim.x;
if (index >= d_names.size()) return;
auto const visible = cudf::string_view("public", 6);
auto const name = d_names.element<cudf::string_view>(index);
auto const vis = d_visibilities.element<cudf::string_view>(index);
if (vis == visible) {
auto const space_idx = name.find(' ');
auto const first = name.substr(0, space_idx);
auto const last_initial = name.substr(space_idx + 1, 1);
auto const output_size = first.size_bytes() + last_initial.size_bytes() + 1;
// resolve output string location
char* output_ptr = working_memory + d_offsets[index];
d_output[index] = cudf::string_view{output_ptr, output_size};
// build output string into output_ptr
memcpy(output_ptr, last_initial.data(), last_initial.size_bytes());
output_ptr += last_initial.size_bytes();
*output_ptr++ = ' ';
memcpy(output_ptr, first.data(), first.size_bytes());
} else {
d_output[index] = cudf::string_view{redaction.data(), redaction.size_bytes()};
}
}
The kernel outputs a vector of cudf::string_view objects which is passed to the cudf::make_strings_column factory function. The second parameter to this function is used for identifying null entries in the output column. The examples in this post do not have null entries, so a nullptr placeholder cudf::string_view{nullptr,0} is used.
auto str_ptrs = rmm::device_uvector<cudf::string_view>(names.size(), stream);
redact_kernel<<<blocks, block_size, 0, stream.value()>>>(*d_names,
*d_visibilities,
d_redaction.value(),
working_memory.data(),
offsets,
str_ptrs.data());
auto result = cudf::make_strings_column(str_ptrs, cudf::string_view{nullptr,0}, stream);
This approach takes about 1.1 ms on an A6000 with 600K rows of data and therefore beats the baseline by more than 2x. The approximate breakdown is shown below:
redact_kernel 66us
make_strings_column 400us
The remaining time is spent in cudaMalloc, cudaFree, cudaMemcpy, which is typical of the overhead for managing temporary instances of rmm::device_uvector. This method works well if all of the output strings are guaranteed to be the same size or smaller as the input strings.
Overall, switching to a bulk working memory allocation with RAPIDS RMM is a significant improvement and a good solution for a custom strings function.
Optimizing column creation for faster compute times
Is there a way to improve this even further? The bottleneck is now the cudf::make_strings_column factory function which builds the two strings column components, offsets and chars, from the vector of cudf::string_view objects.
In libcudf, many factory functions are included for building strings columns. The factory function used in the previous examples takes a cudf::device_span of cudf::string_view objects and then constructs the column by performing a gather on the underlying character data to build the offsets and character child columns. A rmm::device_uvector is automatically convertible to a cudf::device_span without copying any data.
However, if the vector of characters and the vector of offsets are built directly, then a different factory function can be used, which simply creates the strings column without requiring a gather to copy the data.
The sizes_kernel makes a first pass over the input data to compute the exact output size of each output row:
Optimized kernel: Part 1
__global__ void sizes_kernel(cudf::column_device_view const d_names,
cudf::column_device_view const d_visibilities,
cudf::size_type* d_sizes)
{
auto index = threadIdx.x + blockIdx.x * blockDim.x;
if (index >= d_names.size()) return;
auto const visible = cudf::string_view("public", 6);
auto const redaction = cudf::string_view("X X", 3);
auto const name = d_names.element<cudf::string_view>(index);
auto const vis = d_visibilities.element<cudf::string_view>(index);
cudf::size_type result = redaction.size_bytes(); // init to redaction size
if (vis == visible) {
auto const space_idx = name.find(' ');
auto const first = name.substr(0, space_idx);
auto const last_initial = name.substr(space_idx + 1, 1);
result = first.size_bytes() + last_initial.size_bytes() + 1;
}
d_sizes[index] = result;
}
The output sizes are then converted to offsets by performing an in-place exclusive_scan. Note that the offsets vector was created with names.size()+1 elements. The last entry will be the total number of bytes (all the sizes added together) while the first entry will be 0. These are both handled by the exclusive_scan call. The size of the chars column is retrieved from the last entry of the offsets column to build the chars vector.
// create offsets vector
auto offsets = rmm::device_uvector<cudf::size_type>(names.size() + 1, stream);
// compute output sizes
sizes_kernel<<<blocks, block_size, 0, stream.value()>>>(
*d_names, *d_visibilities, offsets.data());
thrust::exclusive_scan(rmm::exec_policy(stream), offsets.begin(), offsets.end(), offsets.begin());
The redact_kernel logic is still very much the same except that it accepts the output d_offsets vector to resolve each row’s output location:
Optimized kernel: Part 2
__global__ void redact_kernel(cudf::column_device_view const d_names,
cudf::column_device_view const d_visibilities,
cudf::size_type const* d_offsets,
char* d_chars)
{
auto index = threadIdx.x + blockIdx.x * blockDim.x;
if (index >= d_names.size()) return;
auto const visible = cudf::string_view("public", 6);
auto const redaction = cudf::string_view("X X", 3);
// resolve output_ptr using the offsets vector
char* output_ptr = d_chars + d_offsets[index];
auto const name = d_names.element<cudf::string_view>(index);
auto const vis = d_visibilities.element<cudf::string_view>(index);
if (vis == visible) {
auto const space_idx = name.find(' ');
auto const first = name.substr(0, space_idx);
auto const last_initial = name.substr(space_idx + 1, 1);
auto const output_size = first.size_bytes() + last_initial.size_bytes() + 1;
// build output string
memcpy(output_ptr, last_initial.data(), last_initial.size_bytes());
output_ptr += last_initial.size_bytes();
*output_ptr++ = ' ';
memcpy(output_ptr, first.data(), first.size_bytes());
} else {
memcpy(output_ptr, redaction.data(), redaction.size_bytes());
}
}
The size of the output d_chars column is retrieved from the last entry of the d_offsets column to allocate the chars vector. The kernel launches with the pre-computed offsets vector and returns the populated chars vector. Finally, the libcudf strings column factory creates the output strings columns.
This cudf::make_strings_column factory function builds the strings column without making a copy of the data. The offsets data and chars data are already in the correct, expected format and this factory simply moves the data from each vector and creates the column structure around it. Once completed, the rmm::device_uvectors for offsets and chars are empty, their data having been moved into the output column.
cudf::size_type output_size = offsets.back_element(stream);
auto chars = rmm::device_uvector<char>(output_size, stream);
redact_kernel<<<blocks, block_size, 0, stream.value()>>>(
*d_names, *d_visibilities, offsets.data(), chars.data());
// from pre-assembled offsets and character buffers
auto result = cudf::make_strings_column(names.size(), std::move(offsets), std::move(chars));
This approach takes about 300 us (0.3 ms) on an A6000 with 600K rows of data and improves over the previous approach by more than 2x. You might notice that sizes_kernel and redact_kernel share much of the same logic: once to measure the size of the output and then again to populate the output.
From a code quality perspective, it is beneficial to refactor the transformation as a device function called by both the sizes and redact kernels. From a performance perspective, you might be surprised to see the computational cost of the transformation being paid twice.
The benefits for memory management and more efficient column creation often outweigh the computation cost of performing the transformation twice.
Table 2 shows the compute time, kernel count, and bytes processed for the four solutions discussed in this post. “Total kernel launches” reflects the total number of kernels launched, including both compute and helper kernels. “Total bytes processed” is the cumulative DRAM read plus write throughput and “minimum bytes processed” is an average of 37.9 bytes per row for our test inputs and outputs. The ideal “memory bandwidth limited” case assumes 768 GB/s bandwidth, the theoretical peak throughput of the A6000.
“Optimized Kernel” provides the highest throughput due to the reduced number of kernel launches and the fewer total bytes processed. With efficient custom kernels, the total kernel launches drop from 31 to 4 and the total bytes processed from 12.6x to 1.75x of the input plus output size.
As a result, the custom kernel achieves >10x higher throughput than the general purpose strings API for the redact transformation.
Peak performance analysis
The pool memory resource in RAPIDS Memory Manager (RMM) is another tool you can use to increase performance. The examples above use the default “CUDA memory resource” for allocating and freeing global device memory. However, the time needed to allocate working memory adds significant latency in between steps of the string transformations. The “pool memory resource” in RMM reduces latency by allocating a large pool of memory up front, and assigning suballocations as needed during processing.
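A minimal sketch of opting into the pool resource (the 4 GiB initial pool size is an arbitrary placeholder, and exact constructor arguments vary slightly between RMM versions):
#include <rmm/mr/device/cuda_memory_resource.hpp>
#include <rmm/mr/device/pool_memory_resource.hpp>
#include <rmm/mr/device/per_device_resource.hpp>

int main() {
  rmm::mr::cuda_memory_resource cuda_mr; // upstream: plain cudaMalloc/cudaFree
  // Carve out a large pool up front; later allocations are suballocated from it.
  rmm::mr::pool_memory_resource<rmm::mr::cuda_memory_resource> pool_mr{
      &cuda_mr, 4ull * 1024 * 1024 * 1024};
  auto* previous = rmm::mr::set_current_device_resource(&pool_mr);

  // ... libcudf calls and rmm::device_uvector allocations made here
  //     are served from the pool ...

  rmm::mr::set_current_device_resource(previous); // restore the previous resource
  return 0;
}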
With the CUDA memory resource, “Optimized Kernel” shows a 10x-15x speedup that begins to drop off at higher row counts due to the increasing allocation size (Figure 3). Using the pool memory resource mitigates this effect and maintains 15x-25x speedups over the libcudf strings API approach.
With the pool memory resource, an end-to-end memory throughput approaching the theoretical limit for a two-pass algorithm is demonstrated. “Optimized Kernel” reaches 320-340 GB/s throughput, measured using the size of inputs plus the size of outputs and the compute time (Figure 4).
The two-pass approach first measures the sizes of the output elements, allocates memory, and then sets the memory with the outputs. Given a two-pass processing algorithm, the implementation in “Optimized Kernel” performs close to the memory bandwidth limit. “End-to-end memory throughput” is defined as the input plus output size in GB divided by the compute time. *RTX A6000 memory bandwidth (768 GB/s).
Key takeaways
This post demonstrates two approaches for writing efficient string data transformations in libcudf. The libcudf general purpose API is fast and straightforward for developers, and delivers good performance. libcudf also provides device-side utilities designed for use with custom kernels, in this example unlocking >10x faster performance.
Apply your knowledge
To get started with RAPIDS cuDF, visit the rapidsai/cudf GitHub repo. If you have not yet tried cuDF and libcudf for your string processing workloads, we encourage you to test the latest release. Docker containers are provided for releases as well as nightly builds. Conda packages are also available to make testing and deployment easier. If you’re already using cuDF, we encourage you to run the new strings transformation example by visiting rapidsai/cudf/tree/HEAD/cpp/examples/strings on GitHub.
Finite Difference Methods in CUDA C/C++, Part 1
In the previous CUDA C/C++ post we investigated how we can use shared memory to optimize a matrix transpose, achieving roughly an order of magnitude improvement in effective bandwidth by using shared memory to coalesce global memory access. The topic of today’s post is to show how to use shared memory to enhance data reuse in a finite difference code. In addition to shared memory, we will also discuss constant memory, which is a cached read-only memory optimized for uniform access across threads in a block (or warp).
Problem Statement: 3D Finite Difference
Our example uses a three-dimensional grid of size 64³. For simplicity we assume periodic boundary conditions and only consider first-order derivatives, although extending the code to calculate higher-order derivatives with other types of boundary conditions is straightforward.
The finite difference method essentially uses a weighted summation of function values at neighboring points to approximate the derivative at a particular point. For a (2N+1)-point stencil with uniform spacing ∆x in the x direction, the following equation gives a central finite difference scheme for the derivative in x. Equations for the other directions are similar.
The coefficients Ci are typically generated from Taylor series expansions and can be chosen to obtain a scheme with desired characteristics such as accuracy, and in the context of partial differential equations, dispersion and dissipation. For explicit finite difference schemes such as the type above, larger stencils typically have a higher order of accuracy. For this post we use a nine-point stencil which has eighth-order accuracy. We also choose a symmetric stencil, shown in the following equation.
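Reconstructed here from the coefficients defined next and the x-derivative kernel shown later in this post, the symmetric stencil for the x direction is:

∂f/∂x |(i,j,k) ≈ ax (f(i+1,j,k) − f(i−1,j,k)) + bx (f(i+2,j,k) − f(i−2,j,k)) + cx (f(i+3,j,k) − f(i−3,j,k)) + dx (f(i+4,j,k) − f(i−4,j,k))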
Here we specify values of the function on the computational grid using the grid indices i, j, k, rather than the physical coordinates x, y, z. Here the coefficients are ax = 4/(5 ∆x) , bx = −1/(5 ∆x) , cx = 4/(105 ∆x), and dx = − 1/(280 ∆x), which is a typical eighth-order scheme. For derivatives in the y and z directions, the index offsets in the above equation are simply applied to the j and k indices and the coefficients are the same except that ∆y and ∆z are used in place of ∆x.
Because we calculate an approximation to the derivative at each point on the 64³ periodic grid, the value of f at each point is used eight times: once for each right-hand-side term in the above expression. In designing a derivative kernel, our goal is to exploit this data reuse by storing blocks of data in shared memory to ensure we fetch the values of f from global memory as few times as possible.
Data Reuse and Shared Memory
We employ a tiling approach in which each thread block loads a tile of data from the multidimensional grid into shared memory, so that each thread in the block can access all elements of the shared memory tile as needed. How do we choose the best tile shape and size? Some experimentation is required, but characteristics of the finite difference stencil and grid size provide direction.
When choosing a tile shape for stencil calculations, the tiles typically overlap by half of the stencil size, as depicted on the left in the figure below.
Here, in order to calculate the derivative in a 16 × 16 tile (in yellow), the values of f not only from this tile but also from two additional 4 × 16 sections (in orange) must be loaded by each thread block. Overall, the f values in the orange sections get loaded twice, once by the thread block that calculates the derivative at that location, and once by each neighboring thread block. As a result, 8 × 16 values out of 16 × 16, or half of the values, get loaded from global memory twice. In addition, coalescing on devices with a compute capability of 2.0 and higher will be suboptimal with 16 × 16 tiles because perfect coalescing on such devices requires access to data within 32 contiguous elements in global memory per load.
A better choice of tile (and thread block) size for our problem and data size is a 64 × 4 tile, as depicted on the right in the figure above. This tile avoids overlap altogether when calculating the x-derivative for our one-dimensional stencil on a grid of 64^3 since the tile contains all points in the direction of the derivative. A minimal tile would have just one pencil, or one-dimensional array of all points in a direction. This would correspond to thread blocks of only 64 threads, however, so from an occupancy standpoint it is beneficial to use multiple pencils per tile. In our finite difference code, available for download on Github, we parameterize the number of pencils to allow experimentation. In addition to loading each value of f only once, every warp of threads loads contiguous data from global memory using this tile and therefore achieves perfectly coalesced accesses to global memory.
X-Derivative Implementation
The implementation for the x-derivative kernel follows.
__global__ void derivative_x(float *f, float *df)
{
  __shared__ float s_f[sPencils][mx+8]; // 4-wide halo

  int i  = threadIdx.x;
  int j  = blockIdx.x*blockDim.y + threadIdx.y;
  int k  = blockIdx.y;
  int si = i + 4;       // local i for shared memory access + halo offset
  int sj = threadIdx.y; // local j for shared memory access

  int globalIdx = k * mx * my + j * mx + i;

  s_f[sj][si] = f[globalIdx];

  __syncthreads();

  // fill in periodic images in shared memory array
  if (i < 4) {
    s_f[sj][si-4]  = s_f[sj][si+mx-5];
    s_f[sj][si+mx] = s_f[sj][si+1];
  }

  __syncthreads();

  df[globalIdx] =
    ( c_ax * ( s_f[sj][si+1] - s_f[sj][si-1] )
    + c_bx * ( s_f[sj][si+2] - s_f[sj][si-2] )
    + c_cx * ( s_f[sj][si+3] - s_f[sj][si-3] )
    + c_dx * ( s_f[sj][si+4] - s_f[sj][si-4] ) );
}
Here mx, my, and mz are the array dimensions which are set to 64, and sPencils is 4, which is the number of pencils used to make the shared memory tile. The indices i, j, and k correspond to the coordinates in the 64^3 mesh. The indices si and sj are the local (row, column) coordinates for the shared memory tile. This kernel is launched with a block of 64 × sPencils threads which calculate the derivatives on an x × y tile of 64 × sPencils elements, so each thread calculates the derivative at one point.
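For reference, a launch configuration consistent with this description might look like the following sketch (d_f and d_df are assumed to be the device arrays holding f and its derivative):
dim3 block(mx, sPencils, 1);       // 64 x 4 threads: one thread per tile element
dim3 grid(my / sPencils, mz, 1);   // one block per 64 x 4 tile in each k-plane
derivative_x<<<grid, block>>>(d_f, d_df);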
The shared memory tile is declared with a padding of 4 elements at each end of the x dimension to accommodate the periodic images needed to calculate the derivative at the endpoints.
__shared__ float s_f[sPencils][mx+8]; // 4-wide halo
We compute an index into the linear input array and then read from global memory into the shared memory tile.
int globalIdx = k * mx * my + j * mx + i;
s_f[sj][si] = f[globalIdx];
These reads from global memory are perfectly coalesced. Data are copied within shared memory to fill out the periodic images in the x-direction by the following code.
// fill in periodic images in shared memory array
if (i < 4) {
s_f[sj][si-4] = s_f[sj][si+mx-5];
s_f[sj][si+mx] = s_f[sj][si+1];
}
Note that in this operation we are reading from shared memory, not from global memory. Each element is read from global memory only once by the previous statement. Since a thread reads data from shared memory that another thread wrote, we need a __syncthreads() call before the periodic images are written. Likewise, we need a __syncthreads() call after the periodic images are written since they are accessed by different threads. After the second __syncthreads() call, our finite difference approximation is calculated using the following code.
df[globalIdx] =
( c_ax * ( s_f[sj][si+1] - s_f[sj][si-1] )
+ c_bx * ( s_f[sj][si+2] - s_f[sj][si-2] )
+ c_cx * ( s_f[sj][si+3] - s_f[sj][si-3] )
+ c_dx * ( s_f[sj][si+4] - s_f[sj][si-4] ) );
The coefficients c_ax, c_bx, c_cx, and c_dx are declared external to this kernel in constant memory.
Constant memory
Constant memory is a special-purpose memory space that is read-only from device code, but can be read and written by the host. Constant memory resides in device DRAM, and is cached on chip. The constant memory cache has only one read port, but can broadcast data from this port across a warp. This means that constant memory access is effective when all threads in a warp read the same address, but when threads in a warp read different addresses the reads are serialized. Constant memory is perfect for coefficients and other data that are used uniformly across threads, as is the case with our coefficients c_ax, c_bx, etc.
In CUDA C/C++, constant data must be declared with global scope, and can be read (only) from device code, and read or written by host code. Constant memory is used in device code the same way any CUDA C variable or array/pointer is used, but it must be initialized from host code using cudaMemcpyToSymbol or one of its variants. In our finite difference code we have the following constant declarations, which use the __constant__ declaration specifier.
// stencil coefficients
__constant__ float c_ax, c_bx, c_cx, c_dx;
__constant__ float c_ay, c_by, c_cy, c_dy;
__constant__ float c_az, c_bz, c_cz, c_dz;
We initialize the x coefficients using the following code, and the y and z coefficients are initialized similarly.
// stencil weights (for unit length problem)
float dsinv = mx-1.f;
float ax = 4.f / 5.f * dsinv;
float bx = -1.f / 5.f * dsinv;
float cx = 4.f / 105.f * dsinv;
float dx = -1.f / 280.f * dsinv;
cudaMemcpyToSymbol(c_ax, &ax, sizeof(float), 0, cudaMemcpyHostToDevice);
cudaMemcpyToSymbol(c_bx, &bx, sizeof(float), 0, cudaMemcpyHostToDevice);
cudaMemcpyToSymbol(c_cx, &cx, sizeof(float), 0, cudaMemcpyHostToDevice);
cudaMemcpyToSymbol(c_dx, &dx, sizeof(float), 0, cudaMemcpyHostToDevice);
X Derivative Performance
The derivative_x kernel achieves the following performance on a Tesla K20c GPU.
Using shared memory tile of 64 x 4
RMS error: 7.277675e-06
MAX error: 2.861023e-05
Average time (ms): 0.025912
Average Bandwidth (GB/s): 80.933625
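The reported bandwidth follows from counting one global read of f and one global write of df per grid point; a sketch of the arithmetic (avgTimeMs is an assumed variable holding the measured average time above):
double bytes = 2.0 * 64 * 64 * 64 * sizeof(float); // one read of f + one write of df per point
double bandwidthGBs = bytes / (avgTimeMs * 1.0e6); // ~2.1 MB / 0.0259 ms = ~81 GB/s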
We will use this performance as a basis for comparison for derivatives in the other directions, which we will cover in the next CUDA C/C++ post.
Related resources
Tags
About the Authors
Comments
Related posts
Combine OpenACC and Unified Memory for Productivity and Performance
Finite Difference Methods in CUDA C++, Part 2
Finite Difference Methods in CUDA Fortran, Part 2
Finite Difference Methods in CUDA Fortran, Part 1
An Efficient Matrix Transpose in CUDA Fortran
Related posts
Just Released: cuDSS 0.3.0
Boosting Application Performance with GPU Memory Prefetching
Controlling Data Movement to Boost Performance on the NVIDIA Ampere Architecture
GPU Pro Tip: Fast Histograms Using Shared Atomics on Maxwell
A CUDA Dynamic Parallelism Case Study: PANDA
|
Using Tensor Cores for Mixed-Precision Scientific Computing
Double-precision floating point (FP64) has been the de facto standard for doing scientific simulation for several decades. Most numerical methods used in engineering and scientific applications require the extra precision to compute correct answers or even reach an answer. However, FP64 also requires more computing resources and runtime to deliver the increased precision levels.
Problem complexity and the sheer magnitude of data coming from various instruments and sensors motivate researchers to mix and match various approaches to optimize compute resources, including different levels of floating-point precision. Researchers have experimented with single-precision (FP32) in fields such as the life sciences and seismic processing for several years. In recent years, the big bang for machine learning and deep learning has focused significant attention on half-precision (FP16). Using reduced precision levels can accelerate data transfer rates, increase application performance, and reduce power consumption, especially on GPUs with Tensor Core support for mixed-precision. Figure 1 describes the IEEE 754 standard floating point formats for FP64, FP32, and FP16 precision levels.
Using Tensor Cores for Mixed-Precision
NVIDIA Tesla V100 includes both CUDA Cores and Tensor Cores, allowing computational scientists to dramatically accelerate their applications by using mixed-precision. Using FP16 with Tensor Cores in V100 is just part of the picture. Accumulation to FP32 sets the Tesla V100 and Turing chip architectures apart from all the other architectures that simply support lower precision levels. The Volta V100 and Turing architectures enable fast FP16 matrix math with FP32 accumulation, as figure 2 shows. Tensor Cores provide up to 125 TFlops FP16 performance in the Tesla V100. The 16x multiple versus FP64 within the same power budget has prompted researchers to explore techniques to leverage Tensor Cores in their scientific applications. Let’s look at a few examples discussed at SC18 on how researchers used Tensor Cores and mixed-precision for scientific computing.
Using Mixed-Precision for Earthquake Simulation
One of the Gordon Bell finalists simulates an earthquake using AI and transprecision computing (transprecision is synonymous with mixed-precision). Current seismic simulations can compute the properties of hard soil shaking deep underground. Scientists also want to include the shaking of soft soil near the surface as well as the building structures below and above the ground level, shown in figure 3. Researchers from the University of Tokyo, Oak Ridge National Laboratory (ORNL), and the Swiss National Supercomputing Centre collaborated on this new solver, called MOTHRA (iMplicit sOlver wiTH artificial intelligence and tRAnsprecision computing). Running on the Summit supercomputer with a combination of AI and mixed-precision, MOTHRA achieved a 25x speed-up compared to the standard solver.
The simulation starts with 3D data of the various buildings in the city of Tokyo. The scientists simulate a seismic wave spreading through the city. The simulation includes hard soil, soft soil, underground malls, and subway systems in addition to the complicated buildings above ground. Nonlinear dynamic equations simulate the movement or displacement of the various buildings as a result of the seismic wave.
The size of the domain and the physics involved requires a system of equations with 302 billion unknowns that need to be solved iteratively until the solution converges. Fast solution of such a large system of equations requires a good preconditioner to reduce the computational cost. Researchers have been cautious about using lower precision in the past because the solutions take longer to converge. The researchers turned to AI to determine how to improve the effectiveness of their preconditioner by training a neural network on smaller models to help identify regions of slow convergence in their solver. This efficiently identified where to apply lower or higher precision to reduce the overall time to solution, a unique use of AI.
The researchers then use a combination of FP64, FP32, FP21, and FP16 to further reduce the computational and communication costs. Using FP21 and FP16 when communicating across kernels and nodes reduces the amount of data that needs to be communicated between compute nodes and hence shortens the time it takes to obtain the solution.
Eventually, results from all lower precision calculations accelerated the FP64 calculations, still yielding the desired high precision and level of accuracy while taking less time. Using these techniques enabled MOTHRA to run 25x faster than a standard solver and 4x faster than GAMERA, the state-of-the-art SC14 Gordon Bell Finalist solver. This example demonstrates that using mixed-precision all the way down to FP16 may be a viable option and can be applied to other types of scientific simulations.
Using Tensor Core FP16 in Linear Algebra
While the use of lower precision is very common in AI models, some of the researchers from ICL/UTK explored the possibility of using tensor cores to accelerate one of the most common dense linear algebra routines without loss of precision. They achieved a 4x performance increase and 5x better energy efficiency versus the standard full FP64 implementation.
The following block diagram in figure 4 shows a simple flow for solving linear systems via LU factorization.
LU factorization steps for solving a linear system
The approach is very simple: use lower precision to compute the expensive flops and then iteratively refine the solution in order to achieve the FP64 solution.
Iterative refinement for dense systems, Ax = b, can work in a manner similar to the pseudocode snippet below.
LU = lu(A)                      // lower precision  O(n^3)
x  = U\(L\b)                    // lower precision  O(n^2)
r  = b - Ax                     // FP64 precision   O(n^2)
WHILE || r || not small enough
    // Find a correction "z" to adjust x that satisfies Az = r
    // Solving Az = r could be done by either:
    z = U\(L\r)                 // classical iterative refinement, lower precision  O(n^2)
    // or GMRES preconditioned by the LU to solve Az = r, lower precision  O(n^2)
    x = x + z                   // FP64 precision   O(n)
    r = b - Ax                  // FP64 precision   O(n^2)
END
ICL/UTK announced support for FP16 in the MAGMA (Matrix Algebra on GPU and Multicore Architectures) library at SC18. Since the performance of Tensor Cores is so much faster than FP64, mixing FP64 with FP16/FP32 enables the solver library to achieve up to 4x better performance.
A group of researchers from the University of Tennessee Innovative Computing Laboratory, ORNL, the University of Manchester School of Mathematics, and CUDA library engineers from NVIDIA presented their results at SC18, shown in figure 5. They demonstrated a 4x performance improvement in the paper “Harnessing GPU Tensor Cores for Fast FP16 Arithmetic to Speed up Mixed-Precision Iterative Refinement Solvers”. An updated version of the MAGMA library with support for Tensor Cores is available from the ICL at UTK, allowing the broader scientific community to experiment with it and incorporate it into their applications.
David Green, a computational physicist in the Theory and Modeling group of the Fusion and Materials for Nuclear Systems Division at ORNL, used the MAGMA library solver with Tensor Core support to accelerate his application by 3.5x. The plasma fusion application simulates the instabilities that occur in the plasma inside the International Thermonuclear Experimental Reactor (ITER).
Orienting the application in a way that makes use of FP16 and Tensor Cores made it possible to simulate the instability between plasma beams 3.5x faster than previous results. David Green presented his work at the American Physical Society – Division of Plasma Physics meeting in Portland recently. Figure 6 shows output from the simulation.
Conclusion
Mixed-precision computing allows us to reduce the resources required by using lower-precision arithmetic in the portions of an application where FP64 is not required. However, mixed-precision computing with Tensor Core GPUs is more than just replacing an FP64 calculation with an FP16 calculation. We are providing an FP64 solution accelerated with Tensor Cores that happen to use FP16 and FP32. This allows researchers to achieve levels of accuracy comparable to double precision while dramatically decreasing the required memory, application runtime, and system power consumption. There’s a growing number of examples where researchers are leveraging GPU Tensor Cores and mixed-precision computing to accelerate traditional FP64-based scientific computing applications by up to 25 times.
The latest NVIDIA Volta and Turing GPUs come with Tensor Cores that simplify and accelerate mixed-precision AI, with support for automatic mixed precision in TensorFlow, PyTorch, and MXNet. Interested in learning more or trying it out for yourself? Get Tensor Core optimized model scripts for popular AI frameworks from NGC. NGC also offers pre-trained models for popular use cases and containers for DL frameworks, ML algorithms, and HPC and Visualization applications.
You may also find the SC18 presentation Harnessing Tensor Cores FP16 Arithmetic to Accelerate Linear Solvers and HPC Scientific Applications useful and interesting. Learn more by reading the Programming Tensor Cores using NVIDIA’s CUDA accelerated-computing API NVIDIA developer blog post. You can also read about the implementation of Tensor Cores in the Volta architecture if you want to dive a little deeper.
Related resources
Tags
About the Authors
Comments
Related posts
HPL-AI Now Runs 2x Faster on NVIDIA DGX A100
Getting Immediate Speedups with NVIDIA A100 TF32
Programming Tensor Cores in CUDA 9
Mixed-Precision Programming with CUDA 8
Related posts
CUDA 12.1 Supports Large Kernel Parameters
Case Study: ResNet50 with DALI
Machine Learning Acceleration in Vulkan with Cooperative Matrices
Tensor Core Programming Using CUDA Fortran
Speeding Up Semantic Segmentation Using MATLAB Container from NVIDIA NGC
|
cuTENSOR 2.0: A Comprehensive Guide for Accelerating Tensor Computations
NVIDIA cuTENSOR is a CUDA math library that provides optimized implementations of tensor operations, where tensors are dense, multi-dimensional arrays or array slices. The release of cuTENSOR 2.0 represents a major update—in both functionality and performance—over its predecessor. This version reimagines the APIs to be more expressive and adds advanced just-in-time compilation capabilities, all while delivering the best performance achievable on the NVIDIA Ampere and NVIDIA Hopper GPU architectures.
While tensor operations might seem exotic at first glance, they describe many naturally occurring algorithms. In particular, such operations are ubiquitous in machine learning and quantum chemistry.
If you already work with NVIDIA cuBLAS, or BLAS in general, the three classes of routines that cuTENSOR provides—elementwise operations, reductions, and tensor contractions—may already seem familiar to you.
The key difference is that cuTENSOR extends those operations to multiple dimensions. cuTENSOR enables you to avoid worrying about performance optimization for these operations and instead rely on ready-made accelerated routines.
The cuTENSOR benefits and advances are available not only from your CUDA code, but also from any of the many tools that already include support for it today.
The selection of programs that are accelerated with cuTENSOR is constantly expanding. We also provide example code that gets you started in C++ and Python with TensorFlow and PyTorch.
In this post, we discuss the various operations that cuTENSOR supports and how to take advantage of them as a CUDA programmer. We share performance considerations and other useful tips and tricks. Finally, the sample code that we use is also available as part of the /NVIDIA/CUDALibrarySamples GitHub repository.
cuTENSOR 2.0
cuTENSOR 2.0 represents a major advancement over its 1.x predecessor in performance, feature support, and ease of use. We overhauled the API of elementwise operations, reductions and tensor contractions to make them coherent such that all operations follow the same multi-stage API design (Figure 1).
For the first time, cuTENSOR introduces just-in-time compilation support for tensor contractions that enable you to compile a dedicated kernel tailored to a specific tensor contraction. This feature is especially valuable for high-dimensional tensor contractions, as they are frequently encountered in quantum circuit simulations.
Tensor permutations also support padding now, enabling the output tensor to be padded along any dimension to meet any alignment requirements or to avoid predication of a subsequent kernel.
Starting with cuTENSOR 2.0, the plan cache is enabled by default. That is, instead of opt-in, its default was changed to opt-out. This helps to reduce the planning overhead in a user-friendly manner.
Finally, we made the API design between elementwise operations, reductions and tensor contractions coherent in the sense that all operations follow the same multi-stage API design as contractions, which, among other things, enables you to reuse the plan for elementwise and reduction operations.
This post solely focuses on the latest 2.0 API. For more information about how to transition from 1.x to 2.0, see Transition to cuTENSOR 2.x.
API introduction
This section introduces the key concepts behind cuTENSOR APIs and how to invoke them in your code. For more information and comprehensive examples, see the /NVIDIA/CUDALibrarySamples GitHub repo and Getting Started in the cuTENSOR documentation.
The first step is always to initialize the cuTENSOR library handle (one per thread). This enables the library to prepare for execution and do expensive setup work only one time.
cutensorStatus_t status;
cutensorHandle_t handle;
status = cutensorCreate(&handle);
// [...] check status
After the handle is created, it can be reused for any of the subsequent API calls. Starting with cuTENSOR 2.0, all operations follow the same workflow:
Figure 1 shows the steps involved in any operation and highlights the common steps.
Tensor descriptors are integral to the cuTENSOR API. They encode the data type of the tensor elements, the number of modes (dimensions), the extent of each mode, the stride of each mode, and the alignment requirement of the data pointer.
In this post, we use the terms dimension and mode interchangeably.
This might sound abstract, so consider Figure 2. It is a three-dimensional tensor with elements numbered based on how they are arranged in memory. This tensor has an extent of three elements in each dimension. Its strides are one in the first dimension, three in the second dimension, and nine in the last dimension. This corresponds to the difference in position between two elements in that dimension.
Strides enable you to represent sub-tensors (tensors that are slices of a larger tensor) by setting strides that are larger than the extents. In cuTENSOR, strides are always given in units of elements, just like extents.
For example, to express the tensor from the figure, you would invoke the following code:
int64_t extents[] = {3, 3, 3};
int64_t strides[] = {1, 3, 9};
uint32_t alignment = 256; // bytes (default of cudaMalloc)
cutensorTensorDescriptor_t tensor_desc;
status = cutensorCreateTensorDescriptor(handle, &tensor_desc,
3 /*num_modes*/,
extents, strides,
CUTENSOR_R_32F, alignment);
You could also pass a NULL pointer instead of strides. In that case, cuTENSOR would automatically deduce the strides from the extents for a dense tensor, assuming a generalized column-major memory layout. That is, strides increase from left to right, with the left-most mode having a stride of one. Any other layout, including generalized row-major, can be achieved by providing the appropriate strides.
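For instance (a small sketch reusing the extents, alignment, and descriptor variables from the previous snippet), the same dense layout could be requested with deduced strides:
status = cutensorCreateTensorDescriptor(handle, &tensor_desc,
                                        3 /*num_modes*/,
                                        extents, NULL /* deduce dense column-major strides {1, 3, 9} */,
                                        CUTENSOR_R_32F, alignment);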
Strides can also be used to access sub-tensors. For instance, the following code example encodes a two-dimensional, non-contiguous horizontal plane of the previous tensor:
int64_t extents_slice[] = {3, 3};
int64_t strides_slice[] = {3, 9};
status = cutensorCreateTensorDescriptor(handle, &tensor_desc,
                                        2 /*num_modes*/,
                                        extents_slice, strides_slice,
                                        CUTENSOR_R_32F, alignment);
Einsum notation
A common design decision between the cuTENSOR elementwise, reduction and contraction APIs is that they all adhere to the einsum notation. Each dimension, also referred to as a mode, is given a unique label. Tensor operations, such as permutations and contractions, can be expressed in an user-friendly manner by simply permuting or omitting modes. Modes that don’t appear in the output are contracted. For more information, see torch.einsum in the PyTorch documentation.
For example, permuting a four-dimensional tensor that is stored in the NHWC layout to the NCHW format can be expressed as nhwc -> nchw.
The letters before the arrow (->) denote the modes of the input tensor and the letters after it denote the modes of the output tensor.
Similarly, a matrix-matrix multiplication can be expressed as mk,kn -> mn, where the k dimension is contracted. We use this notation throughout the upcoming sections.
Contraction
Tensor contractions can be thought of as the higher-dimensional equivalent of matrix-matrix multiplications. The only difference is that the operands are multi-dimensional instead of just two-dimensional matrices. For more information about the exact equation, see cuTENSOR Functions.
In this post, we show the usage of the cuTENSOR API using the example of a tensor contraction operation. However, the other APIs behave similarly. From start to finish, the following steps are required to implement this:
Create an operation descriptor
As outlined earlier, you start by creating tensor descriptors. When that is done, proceed to encoding the actual contraction:
cutensorComputeDescriptor_t descCompute = CUTENSOR_COMPUTE_DESC_32F;
int32_t modeA[] = {'a', 'b', 'k'};
int32_t modeB[] = {'m', 'k', 'n'};
int32_t modeC[] = {'m', 'a', 'n', 'b'};
cutensorOperationDescriptor_t desc;
cutensorCreateContraction(handle, &desc,
    descA, modeA, /* unary op A */ CUTENSOR_OP_IDENTITY,
    descB, modeB, /* unary op B */ CUTENSOR_OP_IDENTITY,
    descC, modeC, /* unary op C */ CUTENSOR_OP_IDENTITY,
    descC, modeC,
    descCompute);
The code example encodes a tensor contraction of two three-dimensional inputs to create a four-dimensional output. The API resembles the corresponding einsum notation: abk,mkn -> manb.
Performance guidelines
This section assumes a generalized column-major data layout where the left-most mode has the smallest stride.
While cuTENSOR can work with modes that are provided in any order, the order can have an impact on performance. We generally recommend the following performance guidelines:
Create a plan preference
The next step is to specify the space of applicable kernels for the problem by creating a cutensorPlanPreference_t object. For instance, you can use the plan preference to fix the cutensorAlgo_t object that should be used, specify the concrete kernel if you want to implement autotuning, or enable just-in-time compilation.
cutensorAlgo_t algo = CUTENSOR_ALGO_DEFAULT;
cutensorJitMode_t jitMode = CUTENSOR_JIT_MODE_NONE;
cutensorPlanPreference_t planPref;
cutensorCreatePlanPreference(handle,
&planPref,
algo, jitMode);
Any plan created with this plan preference relies on the cuTENSOR performance model to pick the most suitable pre-compiled kernel: CUTENSOR_ALGO_DEFAULT. In this case, you’ve disabled JIT compilation.
As we briefly mentioned earlier, JIT compilation enables generating a dedicated kernel for the specific operation at runtime leading to significant performance improvement. To take advantage of the cuTENSOR JIT capabilities, set jitMode = CUTENSOR_JIT_MODE_DEFAULT. For more information, see the JIT compilation and performance section later in this post.
Query the workspace size
Now that you’ve initialized the contraction descriptor and created a plan preference, you can proceed to estimate the workspace size requirement using cutensorEstimateWorkspaceSize.
The API gives you a choice over the desired workspace estimate through cutensorWorksizePreference_t. CUTENSOR_WORKSPACE_DEFAULT is a good default value as it aims to attain high performance while also reducing the workspace requirement. If memory footprint is not a concern, then CUTENSOR_WORKSPACE_MAX might be a better choice.
uint64_t workspaceSizeEstimate = 0;
cutensorWorksizePreference_t workspacePref = CUTENSOR_WORKSPACE_DEFAULT;
cutensorEstimateWorkspaceSize(handle,
desc,
planPref,
workspacePref,
&workspaceSizeEstimate);
Create a plan
The next step is to create the actual plan, which encodes the execution of the operation and selects the kernel. This step involves querying the cuTENSOR performance model and is typically the most time-consuming step of the setup phase. This is why, starting with cuTENSOR 2.0.0, it is automatically cached in a user-controlled cache. For more information, see the Plan cache and incremental autotuning section later in this post.
Creating the plan is also the step that, if enabled, causes a kernel to be just-in-time compiled.
cutensorPlan_t plan;
cutensorCreatePlan(handle, &plan, desc, planPref, workspaceSizeEstimate);
cutensorCreatePlan accepts a workspace size limit as input (in this case, it is workspaceSizeEstimate) and guarantees that the created plan does not exceed this limit.
(Optional) Query the exact workspace size usage
Starting with cuTENSOR 2.0.0, it is possible to query the created plan for the exact amount of workspace that it actually uses. While this step is optional, we recommend it to reduce the required workspace size.
uint64_t actualWorkspaceSize = 0;
cutensorPlanGetAttribute(handle,
plan,
CUTENSOR_PLAN_REQUIRED_WORKSPACE,
&actualWorkspaceSize,
sizeof(actualWorkspaceSize));
Execute the contraction
All that remains is to execute the contraction and provide the appropriate data pointers that must be accessible by the GPU. For more information about when data remains on the host, see Extending Block-Cyclic Tensors for Multi-GPU with NVIDIA cuTENSORMg.
cutensorContract(handle,
plan,
(void*) &alpha, A_d, B_d,
(void*) &beta, C_d, C_d,
work, actualWorkspaceSize, stream);
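The workspace pointer work passed to cutensorContract above is assumed to have been allocated beforehand, and every object created through the API has a matching destroy call. A minimal housekeeping sketch (the names mirror the create calls above; the cleanup order is illustrative):
void *work = nullptr;
cudaMalloc(&work, actualWorkspaceSize);   // before calling cutensorContract
// ... once the results have been consumed:
cudaFree(work);
cutensorDestroyPlan(plan);
cutensorDestroyOperationDescriptor(desc);
cutensorDestroyPlanPreference(planPref);
cutensorDestroyTensorDescriptor(descA);   // likewise for descB and descC
cutensorDestroy(handle);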
Elementwise operations
Elementwise operations are structurally the simplest operations in cuTENSOR. Elementwise means operations in which the size of the participating tensors is not reduced in any way; that is, you perform an operation in an element-by-element manner. Common elementwise operations include copying a tensor, permuting a tensor, adding tensors, or elementwise multiplying them (also known as the Hadamard product).
Depending on the number of input tensors, cuTENSOR offers three elementwise APIs:
As an example, consider a permutation. Here, tensors A (source) and B (destination) are rank-4 tensors and you permute their modes from NHWC to NCHW. The creation of the operation descriptor can be implemented as follows. Compare with steps 1 and 6 from the previous section.
float alpha = 1.0f;
cutensorOperator_t op = CUTENSOR_OP_IDENTITY;
cutensorComputeDescriptor_t descCompute = CUTENSOR_COMPUTE_DESC_32F;
int32_t modeA[] = {'N', 'H', 'W', 'C'};
int32_t modeB[] = {'N', 'C', 'H', 'W'};
cutensorOperationDescriptor_t permuteDesc;
status = cutensorCreatePermutation(handle, &permuteDesc,
    &alpha, descA, modeA, op,
    descB, modeB,
    descCompute, stream);
// next stages (such as plan creation) omitted …
cutensorPermute(handle, plan, &alpha, A_d, C_d, nullptr /* stream */);
This code example only highlights the differences with regard to the previous contraction example. All stages but the creation of the operation descriptor and the actual execution are identical to that of a contraction. The only exception is that elementwise operations do not require any workspace.
cuTENSOR 2.0 also offers support for padding the output tensor of a tensor permutation. This can be especially useful if alignment requirements must be met, such as enabling vectorized loads.
The following code example outlines how an output tensor can be padded with zeros. To be precise, one element of padding is added each to the left and to the right of the fourth mode and all other modes remain non-padded.
int32_t paddingRight[] = {0, 0, 0, 1};
int32_t paddingLeft[]  = {0, 0, 0, 1};
cutensorOperationDescriptorSetAttribute(handle, permuteDesc,
    CUTENSOR_OPERATION_DESCRIPTOR_PADDING_RIGHT,
    paddingRight,
    sizeof(int32_t) * 4);
cutensorOperationDescriptorSetAttribute(handle, permuteDesc,
    CUTENSOR_OPERATION_DESCRIPTOR_PADDING_LEFT,
    paddingLeft,
    sizeof(int32_t) * 4);
float paddingValue = 0.f;
cutensorOperationDescriptorSetAttribute(handle, permuteDesc,
    CUTENSOR_OPERATION_DESCRIPTOR_PADDING_VALUE,
    &paddingValue, sizeof(paddingValue));
For more information and a fully functional padding example, see the /NVIDIA/CUDALibrarySamples GitHub repo.
Reduction
The cuTENSOR tensor reduction operation takes a single tensor as input and reduces the number of dimensions in that tensor using a reduction operation: summation, multiplication, maximum, or minimum. For more information, see cutensorOperator-t.
Similar to contractions and elementwise operations, tensor reductions also use the same multi-stage API. This API example is limited to the parts that differ.
cutensorOperationDescriptor_t desc;
cutensorOperator_t opReduce = CUTENSOR_OP_ADD;
int32_t modeA[] = {'a', 'b', 'c'};
int32_t modeC[] = {'b', 'a'};
cutensorCreateReduction(handle, &desc,
    descA, modeA, CUTENSOR_OP_IDENTITY,
    descC, modeC, CUTENSOR_OP_IDENTITY,
    descC, modeC,
    opReduce, descCompute);
// next stages (such as plan creation) omitted …
cutensorReduce(handle, plan,
(const void*)&alpha, A_d,
(const void*)&beta, C_d,
C_d,
work, actualWorkspaceSize, stream);
The input tensor doesn’t have to be fully reduced to a scalar; some modes can still remain. Also, there’s no constraint on the exact order of modes, which effectively fuses permutations and reductions into a single kernel. For instance, the tensor reduction abc -> ba not only contracts the c mode but also changes the order of the remaining modes.
Just-in-time compilation
As we mentioned in the Contraction section earlier in this post, cuTENSOR 2.0 introduced just-in-time compilation support for tensor contractions to produce dedicated kernels tailored to a given tensor contraction at runtime. This is especially valuable to provide the best possible performance for challenging tensor contractions, such as high-dimensional tensor contractions, where pre-built kernels may not be as performant.
Just-in-time compilation can be enabled by passing CUTENSOR_JIT_MODE_DEFAULT to cutensorCreatePlanPreference for the relevant tensor contraction operation.
cutensorAlgo_t algo = CUTENSOR_ALGO_DEFAULT;
cutensorJitMode_t jitMode = CUTENSOR_JIT_MODE_DEFAULT;
cutensorPlanPreference_t planPref;
cutensorCreatePlanPreference(handle,
&planPref,
algo, jitMode);
The actual compilation of the kernel is then performed using NVIDIA nvrtc and occurs upon the first call to cutensorCreatePlan at runtime. The successfully compiled kernel is then automatically added to an internal kernel cache such that any subsequent calls for the same operation descriptor and plan preference would just result in cache look-up rather than a recompilation.
To further amortize the overhead of JIT compilation, cuTENSOR provides the cutensorReadKernelCacheFromFile and cutensorWriteKernelCacheToFile APIs that enable you to read and write the internal kernel cache to a file such that it can be reused across multiple program executions.
cutensorReadKernelCacheFromFile(handle, "kernelCache.bin");
// execution (possibly with JIT-compilation enabled) omitted…
cutensorWriteKernelCacheToFile(handle, "kernelCache.bin");
For more information, see Just-In-Time Compilation.
Plan cache and incremental autotuning
Planning is the most time-consuming setup stage as it invokes the cuTENSOR performance model. It is a good practice to store a plan and reuse it multiple times with potentially different data pointers.
However, given that such reuse may not always be possible or may be time consuming to implement on the user side, cuTENSOR 2.0 employs a software-managed plan cache per library handle that is activated by default. You can still opt-out by using CUTENSOR_CACHE_MODE_NONE on an operation-by-operation basis.
The plan cache reduces the time that cutensorCreatePlan takes by ~10x.
The plan cache employs a least recently used (LRU) eviction policy and has a default capacity of 64 entries. Ideally, you’d want the cache to have the same or higher capacity as the number of unique tensor contractions. cuTENSOR provides the option to change the cache capacity as follows:
int32_t numEntries = 128;
cutensorHandleResizePlanCachelines(&handle, numEntries);
Similar to the kernel cache, the plan cache can also be serialized from and to disc so it can be reused across program executions:
uint32_t numCachelines = 0;
cutensorHandleReadPlanCacheFromFile(handle, "./planCache.bin", &numCachelines);
// execution (possibly with JIT-compilation enabled) omitted…
cutensorHandleWritePlanCacheToFile(handle, "./planCache.bin");
Incremental autotuning is an opt-in feature of the plan cache. It enables successive invocations of the same operation, with potentially different data pointers, to be performed by different candidates or kernels. After a user-defined number of candidates has been explored, the fastest one is stored in the plan cache and used thereafter for the specific operation.
Compared to other methods of autotuning, incremental autotuning using the plan cache has the following advantages:
Incremental autotuning can be enabled during plan preference creation, as follows:
const cutensorAutotuneMode_t autotuneMode = CUTENSOR_AUTOTUNE_MODE_INCREMENTAL;
cutensorPlanPreferenceSetAttribute(
    handle,
    planPref,
    CUTENSOR_PLAN_PREFERENCE_AUTOTUNE_MODE_MODE,
    &autotuneMode,
    sizeof(cutensorAutotuneMode_t));
// Optionally, also set the maximum number of candidates to explore
const uint32_t incCount = 4;
cutensorPlanPreferenceSetAttribute(
    handle,
    planPref,
    CUTENSOR_PLAN_PREFERENCE_INCREMENTAL_COUNT,
    &incCount,
    sizeof(uint32_t));
For more information about the cuTENSOR plan cache and incremental autotuning features, see Plan Cache.
Multi-GPU support
For more information, see Extending Block-Cyclic Tensors for Multi-GPU with NVIDIA cuTENSORMg.
Summary
When working with dense tensors, cuTENSOR provides a comprehensive collection of routines that make your life as a CUDA developer easier and enable you to achieve high performance without having to worry about low-level performance optimizations. Many of the algorithms that you would want to apply to tensors can be expressed using the existing cuTENSOR routines.
As a CUDA library user, you can also benefit from automatic performance-portable code for any future NVIDIA architecture and other performance improvements, as we continuously optimize the cuTENSOR library.
For more information, see cuTENSOR 2.0: Applications and Performance.
Get Started with cuTENSOR 2.0
Get started with cuTENSOR 2.0.
Dive deeper and ask questions about cuTENSOR 2.0 in the Developer Forums.
Related resources
Tags
About the Authors
Comments
Related posts
cuTENSOR 2.0: Applications and Performance
Extending Block-Cyclic Tensors for Multi-GPU with NVIDIA cuTENSORMg
Programming Distributed Multi-GPU Tensor Operations with cuTENSOR v1.4
cuTENSOR v1.3.0 Now Available: Up to 2x Performance
Bringing Tensor Cores to Standard Fortran
Related posts
Advancing Quantum Algorithm Design with GPTs
Constant Time Launch for Straight-Line CUDA Graphs and Other Performance Enhancements
Checkpointing CUDA Applications with CRIU
Dynamic Control Flow in CUDA Graphs with Conditional Nodes
An Introduction to Quantum Accelerated Supercomputing
|
Mixed-Precision Programming with CUDA 8
Update, March 25, 2019: The latest Volta and Turing GPUs now incorporate Tensor Cores, which accelerate certain types of FP16 matrix math. This enables faster and easier mixed-precision computation within popular AI frameworks. Making use of Tensor Cores requires using CUDA 9 or later. NVIDIA has also added automatic mixed precision capabilities to TensorFlow, PyTorch, and MXNet. Interested in learning more or trying it out for yourself? Get tensor core optimized examples for popular AI frameworks here.
In the practice of software development, programmers learn early and often the importance of using the right tool for the job. This is especially important when it comes to numerical computing, where tradeoffs between precision, accuracy, and performance make it essential to choose the best representations for data. With the introduction of the Pascal GPU architecture and CUDA 8, NVIDIA is expanding the set of tools available for mixed-precision computing with new 16-bit floating point and 8/16-bit integer computing capabilities.
“As the relative costs and ease of computing at different precisions evolve, due to changing architectures and software, as well as the disruptive influence of accelerators such as GPUs, we will see an increasing development and use of mixed precision algorithms.” — Nick Higham, Richardson Professor of Applied Mathematics, University of Manchester
Many technical and HPC applications require high precision computation with 32-bit (single float, or FP32) or 64-bit (double float, or FP64) floating point, and there are even GPU-accelerated applications that rely on even higher precision (128- or 256-bit floating point!). But there are many applications for which much lower precision arithmetic suffices. For example, researchers in the rapidly growing field of deep learning have found that deep neural network architectures have a natural resilience to errors due to the backpropagation algorithm used in training them, and some have argued that 16-bit floating point (half precision, or FP16) is sufficient for training neural networks.
Storing FP16 (half precision) data compared to higher precision FP32 or FP64 reduces memory usage of the neural network, allowing training and deployment of larger networks, and FP16 data transfers take less time than FP32 or FP64 transfers. Moreover, for many networks deep learning inference can be performed using 8-bit integer computations without significant impact on accuracy.
In addition to deep learning, applications that use data from cameras or other real-world sensors often don’t require high-precision floating point computation, because the sensors generate low-precision or low dynamic range data. The data processed from radio telescopes is a good example. As you’ll see later in this post, the cross correlation algorithm used for processing data from radio telescopes can be greatly accelerated by using 8-bit integer computation.
The combined use of different numerical precisions in a computational method is known as mixed precision. The NVIDIA Pascal architecture provides features aimed at providing even higher performance for applications that can utilize lower precision computation, by adding vector instructions that pack multiple operations into a 32-bit datapath. Specifically, these instructions operate on 16-bit floating point data (“half” or FP16) and 8- and 16-bit integer data (INT8 and INT16).
The new NVIDIA Tesla P100, powered by the GP100 GPU, can perform FP16 arithmetic at twice the throughput of FP32. The GP102 (Tesla P40 and NVIDIA Titan X), GP104 (Tesla P4), and GP106 GPUs all support instructions that can perform integer dot products on 2- and 4-element 8-bit vectors, with accumulation into a 32-bit integer. These instructions are valuable for implementing high-efficiency deep learning inference, as well as other applications such as radio astronomy.
In this post I will provide some details about half-precision floating point, and provide details about the performance achievable on Pascal GPUs using FP16 and INT8 vector computation. I will also discuss the mixed-precision computation capabilities provided by various CUDA platform libraries and APIs.
A Bit (or 16) about Floating Point Precision
As every computer scientist should know, floating point numbers provide a representation that allows real numbers to be approximated on a computer with a tradeoff between range and precision. Floating point numbers approximate the real value to a set number of significant digits, known as the mantissa or significand, and then scaled by an exponent in a fixed base (base 2 for IEEE standard floating point numbers used on most computers today).
Common floating point formats include 32-bit, known as “single precision” (`float` in C-derived programming languages), and 64-bit, known as “double precision” (`double`). As defined by the IEEE 754 standard, a 32-bit floating point value comprises a sign bit, 8 exponent bits, and 23 mantissa bits. A 64-bit double comprises a sign bit, 11 exponent bits, and 52 mantissa bits. In this post, we’re interested in the (newer) IEEE 754 standard 16-bit floating point half type, which comprises a sign bit, 5 exponent bits, and 10 mantissa bits, as Figure 1 shows.
To get an idea of what a difference in precision 16 bits can make, FP16 can represent 1024 values for each power of 2 between 2^-14 and 2^15 (its exponent range). That’s 30,720 values. Contrast this to FP32, which can represent about 8 million values for each power of 2 between 2^-126 and 2^127. That’s about 2 billion values—a big difference. So why use a small floating-point format like FP16? In a word, performance.
The NVIDIA Tesla P100 (based on the GP100 GPU) supports a 2-way vector half-precision fused multiply-add (FMA) instruction (opcode HFMA2), which it can issue at the same rate as 32-bit FMA instructions. This means that half-precision arithmetic has twice the throughput of single-precision arithmetic on P100, and four times the throughput of double precision. Specifically, the NVLink-enabled P100 (SXM2 module) is capable of 21.2 Teraflop/s of half-precision. With such a big performance benefit, it’s worth looking at how you can use it.
One thing to keep in mind when using reduced precision is that because the normalized range of FP16 is smaller, the probability of generating subnormal numbers (also known as denormals) increases. Therefore it’s important that NVIDIA GPUs implement FMA operations on subnormal numbers with full performance. Some processors do not, and performance can suffer. (Note: you may still see benefits from enabling “flush to zero”. See the post “CUDA Pro Tip: Flush Denormals with Confidence”.)
High Performance with Low-Precision Integers
Floating point numbers combine high dynamic range with high precision, but there are also cases where dynamic range is not necessary, so that integers may do the job. There are even applications where the data being processed has low precision so very low-precision storage (such as C short or char/byte types) can be used.
For such applications, the latest Pascal GPUs (GP102, GP104, and GP106) introduce new 8-bit integer 4-element vector dot product (DP4A) and 16-bit 2-element vector dot product (DP2A) instructions. DP4A performs the vector dot product between two 4-element vectors A and B (each comprising 4 single-byte values stored in a 32-bit word), storing the result in a 32-bit integer, and adding it to a third argument C, also a 32-bit integer. See Figure 2 for a diagram. DP2A is a similar instruction in which A is a 2-element vector of 16-bit values and B is a 4-element vector of 8-bit values, and different flavors of DP2A select either the high or low pair of bytes for the 2-way dot product. These flexible instructions are useful for linear algebraic computations such as matrix multiplies and convolutions. They are particularly powerful for implementing 8-bit integer convolutions for deep learning inference, common in the deployment of deep neural networks used for image classification and object detection. Figure 3 shows the improved power efficiency achieved on a Tesla P4 GPU using INT8 convolution on AlexNet.
DP4A computes the equivalent of a total of eight integer operations, and DP2A computes four. This gives the Tesla P40 (based on GP102) a peak integer throughput of 47 TOP/s (Tera operations per second).
An example application of DP4A is the cross-correlation algorithm commonly used in radio telescope data processing pipelines. As with optical telescopes, larger radio telescopes can resolve fainter and more distant objects in the cosmos; but building larger and larger monolithic single-antenna radio telescopes is not practical. Instead, radio astronomers build arrays of many antennae spread over a large area. To use these telescopes, the signals from all the antennae must be cross-correlated—a highly parallel computation with cost that scales quadratically with the number of antennas. Since radio telescope elements typically capture very low precision data, floating-point computation is not needed for cross correlation of the signals. GPUs have already been used in production radio astronomy cross correlation, but they have typically used FP32 computation. The introduction of DP4A promises much higher power efficiency for this computation. Figure 4 shows the results of modifying a cross-correlation code to use DP4A, resulting in a 4.5x efficiency improvement on a Tesla P40 GPU with default clocks (compared to FP32 computation on P40), and a 6.4x improvement with GPU clocks capped to reduce temperature (and therefore reduce leakage current). Overall, the new code is nearly 12x more efficient than FP32 cross-correlation on the previous-generation Tesla M40 GPU (credit: Kate Clark).
Mixed Precision Performance on Pascal GPUs
The half precision (FP16) format is not new to GPUs. In fact, FP16 has been supported as a storage format for many years on NVIDIA GPUs, mostly used for reduced precision floating point texture storage and filtering and other special-purpose operations. The Pascal GPU architecture implements general-purpose, IEEE 754 FP16 arithmetic. High performance FP16 is supported at full speed on Tesla P100 (GP100), and at lower throughput (similar to double precision) on other Pascal GPUs (GP102, GP104, and GP106), as the following table shows.
The 8-bit and 16-bit DP4A and DP2A dot product instructions are supported on GP102-GP106, but not on GP100. Table 1 shows the arithmetic throughput of the different numerical instructions on Pascal-based Tesla GPUs.
Mixed-Precision Programming with NVIDIA Libraries
The easiest way to benefit from mixed precision in your application is to take advantage of the support for FP16 and INT8 computation in NVIDIA GPU libraries. Key libraries from the NVIDIA SDK now support a variety of precisions for both computation and storage.
Table 2 shows the current support for FP16 and INT8 in key CUDA libraries as well as in PTX assembly and CUDA C/C++ intrinsics.
cuDNN
cuDNN is a library of primitive routines used in training and deploying deep neural networks. cuDNN 5.0 includes FP16 support for forward convolution, and 5.1 added support for FP16 backward convolution. All other routines in the library are memory bound, so FP16 computation is not beneficial to performance. Therefore these routines use FP32 computation but support FP16 data input and output. cuDNN 6 will add support for INT8 inference convolutions.
TensorRT
TensorRT is a high-performance deep learning inference engine for production deployment of deep learning applications that automatically optimizes trained neural networks for run-time performance. TensorRT v1 has support for FP16 for inference convolutions, and v2 will support INT8 for inference convolutions.
cuBLAS
cuBLAS is a GPU library for dense linear algebra—an implementation of BLAS, the Basic Linear Algebra Subroutines. cuBLAS has support for mixed precision in several matrix-matrix multiplication routines. cublasHgemm is an FP16 dense matrix-matrix multiply routine that uses FP16 for compute as well as for input and output. cublasSgemmEx() computes in FP32, but the input data can be FP32, FP16, or INT8, and the output can be FP32 or FP16. cublasGemmEx() is a new routine in CUDA 8 that allows specification of the computation precision, including INT8 computation (which uses DP4A).
Support for more BLAS level 3 routines with FP16 computation and/or storage will be added based on demand, so please contact us if you need them. Level 1 and level 2 BLAS routines are memory bound, so reduced precision computation is not beneficial.
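As a rough sketch of what such a mixed-precision call can look like (assuming a valid cuBLAS handle, matrix dimensions m, n, k with leading dimensions lda, ldb, ldc, and device arrays d_A, d_B, d_C holding FP16 data):
// FP32 computation with FP16 input and output matrices (illustrative values).
float alpha = 1.0f, beta = 0.0f;
cublasSgemmEx(handle, CUBLAS_OP_N, CUBLAS_OP_N,
              m, n, k,
              &alpha, d_A, CUDA_R_16F, lda,
                      d_B, CUDA_R_16F, ldb,
              &beta,  d_C, CUDA_R_16F, ldc);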
cuFFT
cuFFT is a popular Fast Fourier Transform library implemented in CUDA. Starting in CUDA 7.5, cuFFT supports FP16 compute and storage for single-GPU FFTs. FP16 FFTs are up to 2x faster than FP32. FP16 computation requires a GPU with Compute Capability 5.3 or later (Maxwell architecture). Sizes are restricted to powers of 2 currently, and strides on the real part of R2C or C2R transforms are not supported.
cuSPARSE
cuSPARSE is a library of GPU-accelerated linear algebra routines for sparse matrices. cuSPARSE supports FP16 storage for several routines (`cusparseXtcsrmv()`, `cusparseCsrsv_analysisEx()`, `cusparseCsrsv_solveEx()`, `cusparseScsr2cscEx()`, and `cusparseCsrilu0Ex()`). FP16 computation for cuSPARSE is being investigated. Please contact us via the comment form if you have specific needs.
Using Mixed Precision in your own CUDA Code
For developers of custom CUDA C++ kernels and users of the Thrust parallel algorithms library, CUDA provides the type definitions and APIs you need to get the most out of FP16 and INT8 computation, storage, and I/O.
FP16 types and intrinsics
For FP16, CUDA defines the `half` and `half2` types in the header `cuda_fp16.h` included in the CUDA include path. This header also defines a complete set of intrinsic functions for operating on `half` data. As an example, the following shows the declarations of the scalar FP16 addition function, `__hadd()`, and the 2-way vector FP16 addition function, `__hadd2()`.
__device__ __half __hadd ( const __half a, const __half b );
__device__ __half2 __hadd2 ( const __half2 a, const __half2 b );
`cuda_fp16.h` defines a full suite of half-precision intrinsics for arithmetic, comparison, conversion and data movement, and other mathematical functions. All are described in the CUDA Math API documentation.
Use `half2` vector types and intrinsics where possible to achieve the highest throughput. The GPU hardware arithmetic instructions operate on 2 FP16 values at a time, packed together in 32-bit registers. The peak throughput numbers in Table 1 assume `half2` vector computation. If you use scalar `half` instructions, you can achieve 50% of the peak throughput. Likewise, achieving maximum bandwidth when loading from and storing to FP16 arrays requires vector access of `half2` data. Ideally, you can vectorize loads further to achieve even higher bandwidth by loading and storing `float2` or `float4` types and casting to/from `half2`. See past Parallel Forall Pro Tip blog posts for related examples.
The following example code demonstrates the use of CUDA’s __hfma() (half-precision fused multiply-add) and other intrinsics to compute a half-precision AXPY (A * X + Y). The complete code for the example is available on Github, and it shows how to initialize the half-precision arrays on the host. Importantly, when you start using half types you are likely to need to convert between half and float values in your host-side code. This blog post from Fabian Giesen includes some fast CPU type conversion routines (see the associated Gist for full source). I used some of Giesen’s code for this example.
__global__
void haxpy(int n, half a, const half *x, half *y)
{
  int start  = threadIdx.x + blockDim.x * blockIdx.x;
  int stride = blockDim.x * gridDim.x;
#if __CUDA_ARCH__ >= 530
  int n2 = n/2;
  half2 *x2 = (half2*)x, *y2 = (half2*)y;
  for (int i = start; i < n2; i += stride)
    y2[i] = __hfma2(__halves2half2(a, a), x2[i], y2[i]);
  // first thread handles singleton for odd arrays
  if (start == 0 && (n%2))
    y[n-1] = __hfma(a, x[n-1], y[n-1]);
#else
  for (int i = start; i < n; i += stride) {
    y[i] = __float2half(__half2float(a) * __half2float(x[i])
                        + __half2float(y[i]));
  }
#endif
}
Integer Dot Product Intrinsics
CUDA defines intrinsics for 8-bit and 16-bit dot products (the DP4A and DP2A instructions described previously) in the header `sm_61_intrinsics.h` (sm_61 is the SM architecture corresponding to GP102, GP104, and GP106. Also known as Compute Capability 6.1). For convenience, there are both `int` and `char4` versions of the DP4A intrinsics, in both signed and unsigned flavors:
__device__ int __dp4a(int srcA, int srcB, int c);
__device__ int __dp4a(char4 srcA, char4 srcB, int c);
__device__ unsigned int __dp4a(unsigned int srcA, unsigned int srcB, unsigned int c);
__device__ unsigned int __dp4a(uchar4 srcA, uchar4 srcB, unsigned int c);
Both versions assume that the four vector elements of A and B are packed into the four corresponding bytes of a 32-bit word. The `char4` / `uchar4` versions use CUDA’s struct type with explicit fields, while the packing is implicit in the `int` versions.
As mentioned previously, DP2A has a “high” and a “low” version for selecting either the high or low two bytes of input B, respectively.
// Generic [_lo]
__device__ int __dp2a_lo(int srcA, int srcB, int c);
__device__ unsigned int __dp2a_lo(unsigned int srcA, unsigned int srcB, unsigned int c);
// Vector-style [_lo]
__device__ int __dp2a_lo(short2 srcA, char4 srcB, int c);
__device__ unsigned int __dp2a_lo(ushort2 srcA, uchar4 srcB, unsigned int c);
// Generic [_hi]
__device__ int __dp2a_hi(int srcA, int srcB, int c);
__device__ unsigned int __dp2a_hi(unsigned int srcA, unsigned int srcB, unsigned int c);
// Vector-style [_hi]
__device__ int __dp2a_hi(short2 srcA, char4 srcB, int c);
__device__ unsigned int __dp2a_hi(ushort2 srcA, uchar4 srcB, unsigned int c);
Keep in mind that DP2A and DP4A are available on Tesla, GeForce, and Quadro accelerators based on GP102, GP104, and GP106 GPUs, but not on the Tesla P100 (based on the GP100 GPU).
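To make the DP4A intrinsic concrete, here is a minimal sketch of a kernel that accumulates a dot product over arrays of packed 8-bit values (the names and the use of atomicAdd for the final reduction are illustrative, not from the original post):
// Each int in a[] and b[] packs four signed 8-bit values; n4 is the number of packed ints.
__global__ void dot8(const int *a, const int *b, int n4, int *result)
{
    int sum = 0;
    for (int i = blockIdx.x * blockDim.x + threadIdx.x; i < n4;
         i += blockDim.x * gridDim.x)
        sum = __dp4a(a[i], b[i], sum);   // 4 multiplies + 4 adds per call
    atomicAdd(result, sum);              // combine per-thread partial sums
}
Compiling for Compute Capability 6.1 or later (for example, nvcc -arch=sm_61) maps the intrinsic to the DP4A instruction.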
Download CUDA 8 Today
To get the most out of mixed-precision computing on GPUs, download the free NVIDIA CUDA Toolkit version 8. To learn about all the powerful features of CUDA 8, check out the post CUDA 8 Features Revealed.
To learn more about all the performance improvements in CUDA 8 and the latest GPU-accelerated libraries, join us for the free overview session about CUDA 8 Toolkit Performance to be presented on Thursday, November 3.
Exploiting NVIDIA Ampere Structured Sparsity with cuSPARSELt
Deep neural networks achieve outstanding performance in a variety of fields, such as computer vision, speech recognition, and natural language processing. The computational power needed to process these neural networks is rapidly increasing, so efficient models and computation are crucial. Neural network pruning, removing unnecessary model parameters to yield a sparse network, is a useful way to reduce model complexity while maintaining accuracy.
To exploit fine-grained network pruning, the NVIDIA Ampere GPU architecture introduces the concept of fine-grained structured sparsity. On the NVIDIA A100 GPU, the structure manifests as a 2:4 pattern: out of every four elements, at least two must be zero. This reduces the data footprint and bandwidth of one matrix multiply (also known as GEMM) operand by 2x and doubles throughput by skipping the computation of the zero values using new NVIDIA Sparse Tensor Cores.
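As a small illustration of the constraint (not part of cuSPARSELt), a host-side check of the 2:4 pattern on a row-major matrix stored as floats could look like this:
// Illustrative sketch: every group of four consecutive elements in a row
// must contain at least two zeros to satisfy the 2:4 structured-sparsity pattern.
bool is24Sparse(const float* m, int rows, int cols)
{
    for (int r = 0; r < rows; r++)
        for (int c = 0; c + 4 <= cols; c += 4)
        {
            int zeros = 0;
            for (int k = 0; k < 4; k++)
                if (m[r * cols + c + k] == 0.0f)
                    zeros++;
            if (zeros < 2)
                return false;  // this group of four violates the pattern
        }
    return true;
}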
cuSPARSELt: A high-performance CUDA library for sparse matrix-dense matrix multiplication
Figure 2 shows how NVIDIA Sparse Tensor Cores operate on only half of one input to double the math efficiency. On the left is a weight matrix pruned to meet the expected 2:4 sparse pattern. As you can see, in each group of four weights (outlined in orange), only two weights are nonzero (shades of green). This matrix is compressed to be half the size of the original matrix with a small amount of metadata to keep track of where the nonzeros were in the original, uncompressed matrix. This metadata is used to select only the corresponding activations from the second input matrix, letting the NVIDIA Sparse Tensor Core skip computing multiplications by zero to achieve twice the throughput of a regular Tensor Core.
To make it easy to use NVIDIA Ampere architecture sparse capabilities, NVIDIA introduces cuSPARSELt, a high-performance CUDA library dedicated to general matrix-matrix operations in which at least one operand is a sparse matrix. The cuSPARSELt library lets you use NVIDIA third-generation Tensor Cores Sparse Matrix Multiply-Accumulate (SpMMA) operation without the complexity of low-level programming. The library also provides helper functions for pruning and compressing matrices.
The key features of cuSPARSELt include the following:
cuSPARSELt workflow
The cuSPARSELt library follows an equivalent approach and adopts similar concepts to cuBLASLt and cuTENSOR. The library programming model requires organizing the computation in such a way that the same setup can be repeatedly used for different inputs.
In particular, the model relies on the following high-level stages:
The common workflow consists of the following steps (Figure 3):
Sparse GEMM Performance
As with dense matrix multiplication, the performance of sparse matrix multiplications varies with GEMM dimensions, layouts, and data types. Here’s a snapshot of the relative performance of dense and sparse GEMMs with today’s software.
The following charts show the performance of the cuSPARSELt and cuBLAS for the following operation:
D=alpha*op(A)*op(B)+beta*C
In this operation, A, B, and D=C are dense matrices of sizes MxK, KxN, and MxN, respectively. We denote the layouts of the matrices A and B with N for column-major order (op is non-transposed) and T for row-major order (op is transposed).
To showcase the performance achievable with cuSPARSELt for a real workload, the following table shows some common GEMM sizes used by a pruned BERT-Large model (seqlen=128, BS=128) with column-major TN FP16 kernels. In general, the larger the workload is, the more that sparsity can help.
Structured sparse matrix-matrix multiplication code example
Now that you’ve seen the available performance, here’s an example of performing a matrix multiplication with structured sparsity in the cuSPARSELt library using Sparse Tensor Cores in the NVIDIA A100 or GA100 GPU. For more information, see the NVIDIA/CUDALibrarySamples/tree/master/cuSPARSELt/spmma GitHub repo.
First, include the cuSPARSELt header, set up some device pointers and data structures, and initialize the cuSPARSELt handle.
#include <cusparseLt.h> // cusparseLt header
// Device pointers and coefficient definitions
float alpha = 1.0f;
float beta = 0.0f;
__half* dA = ...
__half* dB = ...
__half* dC = ...
// cusparseLt data structures and handle initialization
cusparseLtHandle_t handle;
cusparseLtMatDescriptor_t matA, matB, matC;
cusparseLtMatmulDescriptor_t matmul;
cusparseLtMatmulAlgSelection_t alg_sel;
cusparseLtMatmulPlan_t plan;
cudaStream_t stream = nullptr;
cusparseLtInit(&handle);
Next, initialize the structured sparse input matrix (matrix A), dense input matrix (matrix B), and dense output matrix (matrix C) descriptors.
cusparseLtStructuredDescriptorInit(&handle, &matA, num_A_rows, num_A_cols,
lda, alignment, type, order,
CUSPARSELT_SPARSITY_50_PERCENT);
cusparseLtDenseDescriptorInit(&handle, &matB, num_B_rows, num_B_cols, ldb,
alignment, type, order);
cusparseLtDenseDescriptorInit(&handle, &matC, num_C_rows, num_C_cols, ldc,
alignment, type, order);
With the descriptors ready, you can prepare the matrix multiplication operation’s descriptor, select an algorithm to use to perform the matmul operation, and initialize the matmul plan.
cusparseLtMatmulDescriptorInit(&handle, &matmul, opA, opB, &matA, &matB,
&matC, &matC, compute_type);
cusparseLtMatmulAlgSelectionInit(&handle, &alg_sel, &matmul,
CUSPARSELT_MATMUL_ALG_DEFAULT);
int alg = 0; // set algorithm ID
cusparseLtMatmulAlgSetAttribute(&handle, &alg_sel,
CUSPARSELT_MATMUL_ALG_CONFIG_ID,
&alg, sizeof(alg));
size_t workspace_size, compressed_size;
cusparseLtMatmulGetWorkspace(&handle, &alg_sel, &workspace_size);
cusparseLtMatmulPlanInit(&handle, &plan, &matmul, &alg_sel, workspace_size);
If the sparse matrix hasn’t been pruned by another process, you can do it at this point. Don’t forget to check the validity of the sparsity pattern to make sure it can be accelerated with Sparse Tensor Cores.
cusparseLtSpMMAPrune(&handle, &matmul, dA, dA, CUSPARSELT_PRUNE_SPMMA_TILE,
stream);
// checking the correctness
int is_valid = 0;
cusparseLtSpMMAPruneCheck(&handle, &matmul, dA, &is_valid, stream);
if (is_valid != 0) {
std::printf("!!!! The matrix does not conform to the SpMMA sparsity pattern. "
"cusparseLtMatmul does not provide correct results\n");
return EXIT_FAILURE;
}
Now that matrix A has been pruned with 2:4 sparsity, you can compress it to roughly half of its original size. The execution time for this step is negligible compared to the actual matrix multiplication (less than 5%).
cusparseLtSpMMACompressedSize(&handle, &plan, &compressed_size);
cudaMalloc((void**) &dA_compressed, compressed_size);
cusparseLtSpMMACompress(&handle, &plan, dA, dA_compressed, stream);
With the setup complete, perform the matmul operation. The call to cusparseLtMatmul can be repeated many times with different B matrices. You only have to set up the sparse matrix one time. For use cases where the A matrix values change, the cusparseLtSpMMACompress routine must be called again to set up the data structures for the sparse matrix.
void* d_workspace = nullptr;
int num_streams = 0;
cudaStream_t* streams = nullptr;
cusparseLtMatmul(&handle, &plan, &alpha, dA_compressed, dB, &beta, dC, dD,
                 d_workspace, streams, num_streams);
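For example, a batched use could reuse the same plan and compressed A across inputs. This is a sketch with hypothetical names (dB_batch, dD_batch, and batchCount are illustrative, not part of the library):
// Sketch: the plan, compressed A, and workspace are set up once and reused.
for (int i = 0; i < batchCount; i++)
{
    cusparseLtMatmul(&handle, &plan, &alpha, dA_compressed, dB_batch[i],
                     &beta, dC, dD_batch[i], d_workspace, streams, num_streams);
}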
Finally, clean up the used memory by destroying the matmul plan and cuSPARSELt handle.
cusparseLtMatmulPlanDestroy(&plan);
cusparseLtDestroy(&handle);
Get started with cuSPARSELt
The cuSPARSELt library makes it easy to exploit NVIDIA Sparse Tensor Core operations, significantly improving the performance of matrix-matrix multiplication for deep learning applications without reducing the network's accuracy. The library also provides utilities for matrix compression, pruning, and performance auto-tuning. In short, cuSPARSELt reduces computation, power consumption, execution time, and memory storage compared to the common dense math approach.
The latest version of cuSPARSELt with NVIDIA Ampere architecture support can be found in NVIDIA GPU Accelerated Libraries. For more information about APIs, installation notes, new features, and examples, see cuSPARSELt: A High-Performance CUDA Library for Sparse Matrix-Matrix Multiplication.
For more information, see the following resources:
Optimizing the GPU kernel
1. Reduction to a single value
In this part of the workshop, we are going to investigate how an individual kernel can take advantage of the features provided by the GPU.
As an example, we are going to use the reduction problem.
Consider a vector of N elements that contains floating point numbers from 0 to 1.
The task is to compute the sum of all the elements in the vector.
The basic approach in a serial code is to designate a single floating point variable and add elements to it one by one in a loop.
Thus, on each iteration, one floating point number is added to the accumulated value, as indicated in the figure below.
A reduction of an array to a single value.
Each iteration (unfolded in the figure) takes one element of the vector and adds it to the accumulated value.
On a GPU, many threads are running simultaneously.
This means that the simple approach of adding values to a single number will not work out of the box: we simply don't know which of the threads will be adding its value at any given time.
If two threads try to add to the value simultaneously, we can encounter a problem called a race condition.
In particular, to perform an addition, thread number one first has to read the accumulated value.
If, at the same time, thread number two does the same thing, it will also read the value.
If this happens before the first thread has written its result back, the second thread's write will overwrite it and the addition from thread one will be lost.
To avoid race conditions, the summation has to be done in a thread-safe fashion.
Luckily, CUDA provides atomic functions that ensure the operations are performed in a thread-safe manner.
Atomic functions have the following signature:
atomicAdd(..)
__device__ float atomicAdd(float* address, float val)
These can only be called from the device, where the application runs in many threads.
The function takes the address to which the data will be atomically added and the value to add.
Note that atomic operations can be performed at system scope, device scope, or thread block scope, and these scopes result in different performance.
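As a minimal sketch, assuming a GPU of compute capability 6.0 or newer where CUDA exposes the scoped variants, the three scopes look like this:
__global__ void scopedAdd(float* deviceSum, float* blockSum, float* systemSum, float value)
{
    atomicAdd(deviceSum, value);        // device scope: safe across all blocks on this GPU
    atomicAdd_block(blockSum, value);   // block scope: safe only within this thread block
    atomicAdd_system(systemSum, value); // system scope: safe also with respect to the CPU and other GPUs
}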
Let us first adopt a simple CPU code that accumulates values to a single variable to run on a GPU.
Note that this approach is not desirable both from correctness and performance standpoints, as we will see later.
The figure below illustrates how the CPU algorithm can be adopted to run on a GPU with atomic operations.
Since the number of threads launched on a GPU is a multiple of the block size, some threads may try to access elements that are out of bounds.
So either an extra condition should be added, or the array should be padded with zeroes, as shown in the figure.
A reduction of an array to a single value on a GPU.
Data is copied to the GPU memory, where each thread adds one element to the accumulated value.
Note that the thread-safe atomic operations have to be used in order to ensure that there are no race conditions.
Many threads will run simultaneously on a GPU, so there is no need for a loop over the indices.
Basic reduction code
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
#include <time.h>
int main(int argc, char* argv[])
{
int numElements = (argc > 1) ? atoi(argv[1]) : 100000000;
printf("Reducing over %d values.\n", numElements);
float* data = (float*)calloc(numElements, sizeof(float));
srand(1214134);
for (int i = 0; i < numElements; i++)
{
data[i] = float(rand())/float(RAND_MAX + 1.0);
}
float result = 0.0;
// Timing
clock_t start = clock();
// Main loop
for (int i = 0; i < numElements; i++)
{
result = result + data[i];
}
// Timing
clock_t finish = clock();
printf("The result is: %f\n", result);
printf("It took %f seconds\n", (double)(finish - start) / CLOCKS_PER_SEC);
// Release the memory
free(data);
return 0;
}
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
#include <time.h>
#define BLOCK_SIZE 256
__global__ void reduce_kernel(const float* data, float* result, int numElements)
{
int i = threadIdx.x + blockIdx.x*blockDim.x;
if (i < numElements)
{
atomicAdd(result, data[i]);
}
}
int main(int argc, char* argv[])
{
int numElements = (argc > 1) ? atoi(argv[1]) : 100000000;
printf("Reducing over %d values.\n", numElements);
float* h_data = (float*)calloc(numElements, sizeof(float));
srand(1214134);
for (int i = 0; i < numElements; i++)
{
h_data[i] = float(rand())/float(RAND_MAX + 1.0);
}
float h_result = 0.0;
float* d_data;
cudaMalloc((void**)&d_data, numElements*sizeof(float));
cudaMemcpy(d_data, h_data, numElements*sizeof(float), cudaMemcpyHostToDevice);
int threadsPerBlock = BLOCK_SIZE;
int numBlocks = numElements/BLOCK_SIZE + 1;
float* d_result;
cudaMalloc((void**)&d_result, 1*sizeof(float));
cudaMemset(d_result, 0, sizeof(float));
// Timing
clock_t start = clock();
// Call the reduction kernel
reduce_kernel<<<numBlocks, threadsPerBlock>>>(d_data, d_result, numElements);
cudaMemcpy(&h_result, d_result, 1*sizeof(float), cudaMemcpyDeviceToHost);
// Timing
clock_t finish = clock();
printf("The result is: %f\n", h_result);
printf("It took %f seconds\n", (double)(finish - start) / CLOCKS_PER_SEC);
// Release the memory
free(h_data);
cudaFree(d_data);
cudaFree(d_result);
return 0;
}
Convert the C++ code to CUDA using atomic operations.
To compile the CPU code, type:
gcc reduction_cpu_1.cpp -o reduction_cpu_1
Change the extension to .cu so that nvcc will understand that the code should be compiled for a GPU.
Allocate the device buffer for input data and a single-float buffer for the result.
Copy data to the GPU.
Make sure that the value you will be accumulating into is zero: the cudaMalloc(..) function does not set values to zero.
This can be done by copying zero from the CPU memory with cudaMemcpy(..) or with the cudaMemset(..) function, which sets the desired value at the provided address (both options are sketched after the signature below):
cudaMemset(..)
__host__ cudaError_t cudaMemset(void* devPtr, int value, size_t count)
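For example, either of the following zeroes the single-float accumulator d_result before the kernel launch:
// Option 1: set all bytes of the float to zero
cudaMemset(d_result, 0, sizeof(float));
// Option 2: copy a zero from the host
float zero = 0.0f;
cudaMemcpy(d_result, &zero, sizeof(float), cudaMemcpyHostToDevice);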
Create the CUDA kernel that will use atomicAdd(..) to accumulate the data.
Call the kernel with an appropriate number of blocks.
Remember that the total number of elements in the array can be arbitrary and not divisible by the size of a single block.
Make sure that the array index does not go out of bounds within the kernel.
Copy the result back to the CPU.
To compile the GPU code, use:
nvcc reduction_gpu_1.cu -o reduction_gpu_1
Before we start optimizing the GPU code, we need to fix one big problem with our approach: on both CPU and GPU, the sum becomes invalid for arrays of large size.
Indeed, we are summing random values between 0 and 1.
If the number of these values is large enough, the sum should be approximately half of the number of the elements.
But running the code for \(10^8\) elements results in a number that is significantly lower.
Why is the number significantly lower than expected for large vectors? How can one fix this?
Try running the program for 100000000 elements. What is the expected reduction value? Compare it with what you are getting.
Solution
Even though the numbers we are summing have similar values (from 0 to 1), we are accumulating them into a single precision floating point number.
The sum becomes large, and at some point we are adding a small number to a big number.
Floating point numbers are stored as a set of significant digits and an exponent.
When adding two numbers, their exponents have to be equalized.
The significant digits of the smaller number are then shifted to match the exponent of the bigger number.
When the significant digits run out, the smaller number effectively becomes zero.
For instance, \(0.5 = 0.500*10^0 = 0.050*10^1 = 0.005*10^2 = 0.000*10^3\).
The number of significant digits for single precision floating point is about 8 in decimal arithmetic.
So, when we are adding about \(10^8\) numbers of approximately the same value, the later contributions will be lost.
The easiest way to solve this problem is to use double precision for the accumulated value.
Double precision has about 15 significant digits in decimal arithmetic.
However, a more robust approach is to do the summation by pairs, as illustrated in the figure below.
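A small host-side illustration of this effect (not part of the workshop code) is to accumulate a constant 0.5 into a float and a double:
#include <stdio.h>
int main()
{
    const long n = 100000000;  // 1e8 additions
    float sumFloat = 0.0f;
    double sumDouble = 0.0;
    for (long i = 0; i < n; i++)
    {
        sumFloat += 0.5f;
        sumDouble += 0.5;
    }
    // The double sum gives the expected 5e7, while the float sum stalls at
    // 2^23 = 8388608: adding 0.5 to it no longer changes the stored value.
    printf("float: %f, double: %f\n", sumFloat, sumDouble);
    return 0;
}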
There is another problem with the GPU code as well.
The reduction runs in many threads that all access the same location in memory atomically.
One should expect a huge queue of threads waiting to save their data.
The good thing is that solving the first problem also helps us solve the second one, as we will see below.
2. Pair-wise reduction
Let us first fix the CPU code, so that the result will be correct for larger arrays.
The figure below shows one of the options for how the correctness can be ensured even for large arrays.
The idea is to make sure that only numbers of similar value are added together.
This can be done by summing the elements by pairs.
These pairwise sums should be of similar value as well, so the procedure can be repeated until the final value is obtained.
A pair-wise reduction algorithm on a CPU.
The array is split into pairs, which are added together, resulting in an array of half the size.
The procedure is then repeated until all the values are added.
Let us fix the CPU code with the approach described by the figure above.
Fix the accuracy for large number of elements
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
#include <time.h>
int main(int argc, char* argv[])
{
int numElements = (argc > 1) ? atoi(argv[1]) : 100000000;
printf("Reducing over %d values.\n", numElements);
float* data = (float*)calloc(numElements, sizeof(float));
srand(1214134);
for (int i = 0; i < numElements; i++)
{
data[i] = float(rand())/float(RAND_MAX + 1.0);
}
float result = 0.0;
// Timing
clock_t start = clock();
// Main loop
for (int i = 0; i < numElements; i++)
{
result = result + data[i];
}
// Timing
clock_t finish = clock();
printf("The result is: %f\n", result);
printf("It took %f seconds\n", (double)(finish - start) / CLOCKS_PER_SEC);
// Release the memory
free(data);
return 0;
}
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
#include <time.h>
void reduce(const float* data, float* result, int numElements)
{
for (int i = 0; i < numElements/2; i++)
{
result[i] = data[2*i] + data[2*i + 1];
}
if (numElements % 2 != 0)
{
result[0] += data[numElements-1];
}
}
int main(int argc, char* argv[])
{
int numElements = (argc > 1) ? atoi(argv[1]) : 100000000;
printf("Reducing over %d values.\n", numElements);
float* data = (float*)calloc(numElements, sizeof(float));
srand(1214134);
for (int i = 0; i < numElements; i++)
{
data[i] = float(rand())/float(RAND_MAX + 1.0);
}
float* result = (float*)calloc(numElements, sizeof(float));
// Timing
clock_t start = clock();
reduce(data, result, numElements);
// Main loop
for (int numElementsCurrent = numElements/2; numElementsCurrent > 1; numElementsCurrent /= 2)
{
reduce(result, result, numElementsCurrent);
}
// Timing
clock_t finish = clock();
printf("The result is: %f\n", result[0]);
printf("It took %f seconds\n", (double)(finish - start) / CLOCKS_PER_SEC);
// Release the memory
free(data);
free(result);
return 0;
}
Since we are now reducing by pairs, we will need an array to hold the intermediate results.
Create a reduce function that will take an input array, do the pair-wise addition and save the results.
This function halves the number of elements to reduce; hence, it should be called many times, until the final value is obtained.
Since the elements are computed sequentially, one does not need to worry about overwriting data that was not yet used: the input index will always be ahead of the output index.
Hence there is no need for a separate array for the intermediate results: the pair-wise added values can be saved into the same array used for input.
As long as the number of elements is even, we are fine.
But in case it is odd, we need to deal with the last element of the array separately.
The easiest way to solve this problem is to add the last element to the first element of the result when the array has an odd number of values.
Construct a loop that will call the reduction function many times, until the reduction size converges to 1.
Compile and run the code.
Make sure it produces the right result with a large number of elements in the array (i.e. with \(N>10^8\)).
Having this CPU version gives us a reference that can be handy while adapting the GPU code.
Mapping the pair-wise addition algorithm to CUDA.
Each kernel call does one binary addition per GPU thread.
The execution is then returned to the CPU so that all the threads are in sync.
The kernel is called again with the new array as an input.
This continues until only one element is left.
The numbers in circles indicate which thread does the specific operation.
The values that are out of bounds are set to zero to make sure that all threads get the data.
Let us use the same approach to fix the GPU code.
Fix the accuracy for large number of elements
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
#include <time.h>
void reduce(const float* data, float* result, int numElements)
{
for (int i = 0; i < numElements/2; i++)
{
result[i] = data[2*i] + data[2*i + 1];
}
if (numElements % 2 != 0)
{
result[0] += data[numElements-1];
}
}
int main(int argc, char* argv[])
{
int numElements = (argc > 1) ? atoi(argv[1]) : 100000000;
printf("Reducing over %d values.\n", numElements);
float* data = (float*)calloc(numElements, sizeof(float));
srand(1214134);
for (int i = 0; i < numElements; i++)
{
data[i] = float(rand())/float(RAND_MAX + 1.0);
}
float* result = (float*)calloc(numElements, sizeof(float));
// Timing
clock_t start = clock();
reduce(data, result, numElements);
// Main loop
for (int numElementsCurrent = numElements/2; numElementsCurrent > 1; numElementsCurrent /= 2)
{
reduce(result, result, numElementsCurrent);
}
// Timing
clock_t finish = clock();
printf("The result is: %f\n", result[0]);
printf("It took %f seconds\n", (double)(finish - start) / CLOCKS_PER_SEC);
// Release the memory
free(data);
free(result);
return 0;
}
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
#include <time.h>
#define BLOCK_SIZE 256
__device__ __forceinline__ float getValue(const float* data, int index, int numElements)
{
if(index < numElements)
{
return data[index];
}
else
{
return 0.0f;
}
}
__global__ void reduce_kernel(const float* data, float* result, int numElements)
{
int d_i = threadIdx.x + blockIdx.x*blockDim.x;
result[d_i] = getValue(data, 2*d_i, numElements) + getValue(data, 2*d_i + 1, numElements);
if (d_i == 0 && numElements % 2 != 0)
{
result[d_i] += data[numElements-1];
}
}
int main(int argc, char* argv[])
{
int numElements = (argc > 1) ? atoi(argv[1]) : 100000000;
printf("Reducing over %d values.\n", numElements);
float* h_data = (float*)calloc(numElements, sizeof(float));
srand(1214134);
for (int i = 0; i < numElements; i++)
{
h_data[i] = float(rand())/float(RAND_MAX + 1.0);
}
float h_result = 0.0;
float* d_data;
cudaMalloc((void**)&d_data, numElements*sizeof(float));
cudaMemcpy(d_data, h_data, numElements*sizeof(float), cudaMemcpyHostToDevice);
int threadsPerBlock = BLOCK_SIZE;
int numBlocks = numElements/2/BLOCK_SIZE + 1;
float* d_result1;
float* d_result2;
cudaMalloc((void**)&d_result1, numElements*sizeof(float));
cudaMalloc((void**)&d_result2, numElements*sizeof(float));
// Timing
clock_t start = clock();
// Main loop
reduce_kernel<<<numBlocks, threadsPerBlock>>>(d_data, d_result1, numElements);
for (int numElementsCurrent = numElements/2; numElementsCurrent > 1; numElementsCurrent = numElementsCurrent/2)
{
int numBlocksCurrent = numElementsCurrent/2/BLOCK_SIZE + 1;
reduce_kernel<<<numBlocksCurrent, threadsPerBlock>>>(d_result1, d_result2, numElementsCurrent);
std::swap(d_result1, d_result2);
}
cudaMemcpy(&h_result, d_result1, 1*sizeof(float), cudaMemcpyDeviceToHost);
// Timing
clock_t finish = clock();
printf("The result is: %f\n", h_result);
printf("It took %f seconds\n", (double)(finish - start) / CLOCKS_PER_SEC);
// Release the memory
free(h_data);
cudaFree(d_data);
cudaFree(d_result1);
cudaFree(d_result2);
return 0;
}
Change the extension of the file to .cu so that nvcc expects GPU code in it.
Create a device-side array for the input and copy the data.
Contrary to the CPU, the execution on a GPU will not be sequential.
This can cause a problem if we use the same array for both input and output.
Hence, we will create two separate arrays for the output and swap them from one reduction call to the other.
Change the reduction function calls into kernel calls.
Make sure that you recompute the number of blocks as the reduction array becomes smaller.
Since the number of threads on the GPU is a multiple of the block size, it is convenient to create a helper function that will return the element of the array if it is in bounds and zero otherwise.
This function should have the __device__ specifier.
To ensure that having this in a separate function does not affect the performance, we can ask the compiler to inline it by adding a __forceinline__ specifier:
__device__ __forceinline__ float getValue(const float* data, int index, int numElements)
{
if(index < numElements)
{
return data[index];
}
else
{
return 0.0f;
}
}
Change the reduction function from CPU reduction code into a kernel.
The loop can now be removed with the thread index replacing the loop index.
This can go out of bounds, so use the helper function that we created to get the input elements.
In case the number of elements is odd, the last element should be dealt with only once, so we can designate the first thread (i.e. the thread with index 0) to do it.
Compile the code with the nvcc compiler.
Run it with arrays of large size to make sure that the results are correct.
Now we have ensured that the result is correct.
Also note that the performance of the new implementation is quite a lot better: we got rid of the bottleneck of many threads writing to the same memory address simultaneously.
In many cases, this first round of optimization would be sufficient for the GPU to outperform the CPU.
However, there is still huge room for improvement in terms of the performance.
3. Using shared memory
The first issue we are going to address is the number of the kernel launches we currently do.
Each CUDA API call has an overhead, which we want to reduce.
Also, we have to read the input data and write the output from and to the global memory in each kernel call.
We can address both of these issues by using shared memory.
Shared memory allows the GPU threads within a block to communicate with one another.
Hence, the reduction of all the values inside the thread block can be done in just one kernel call.
The shared memory can be allocated in two ways: statically and dynamically.
With the first option, we need to know at compile time how much shared memory we are going to need.
To have this memory available, add the following line inside the GPU kernel:
__shared__ float s_data[BLOCK_SIZE];
The __shared__ modifier will tell the compiler that this array should be allocated in the shared memory space.
Note that we used the s_ prefix to the array.
This is not necessary, but it helps with code transparency.
The second option allows one to define the size of the shared memory array at run time.
It is more flexible, since the size needed can vary from one kernel call to the other.
To declare the shared memory within the kernel, add the following line:
extern __shared__ float s_data[];
Note two differences here.
First, the definition now has the extern keyword.
This tells the compiler to expect the size of the shared memory to be defined dynamically.
For the same reason, the size of the array is not specified here.
Instead, we will need to provide a third argument to the kernel launch configuration:
gpu_kernel<<<numBlocks, threadsPerBlock, sharedMemorySizeInBytes>>>(..)
Note that the size should be specified in bytes (e.g. 4 bytes per element of an array of floats).
One extra benefit of dynamically defined shared memory is that it can be easily recycled within the kernel, i.e. having the following lines in the kernel allows one to use the shared memory for both floating point and integer values:
extern __shared__ float s_dataFloat[];
..
extern __shared__ int s_dataInt[];
Note that one should be careful not to overwrite the data: the same memory address will be used by both arrays.
So s_dataInt should only be used when s_dataFloat is not needed any more.
We will need one array element per thread in a block, i.e. the number of elements is equal to the block size.
This is defined at compile time, so both options are suitable for us.
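A minimal sketch (not the workshop solution) contrasting the two options:
#define BLOCK_SIZE 256
// Option 1: the size is fixed at compile time.
__global__ void kernel_static()
{
    __shared__ float s_data[BLOCK_SIZE];  // statically allocated shared memory
    s_data[threadIdx.x] = threadIdx.x;
}
// Option 2: the size is supplied at launch time.
__global__ void kernel_dynamic()
{
    extern __shared__ float s_data[];     // size comes from the launch configuration
    s_data[threadIdx.x] = threadIdx.x;
}
// Launch examples: the dynamic version takes the shared memory size in bytes.
// kernel_static<<<numBlocks, BLOCK_SIZE>>>();
// kernel_dynamic<<<numBlocks, BLOCK_SIZE, BLOCK_SIZE*sizeof(float)>>>();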
Since the threads within the block are executed in parallel, we will also need the means to synchronize them.
In CUDA, this can be done with the call to __syncthreads(..) function inside the GPU kernel:
__syncthreads(..)
void __syncthreads()
Calling this function will block all the threads from execution until they all reach the point where this function call is made.
Note that __syncthreads(..) should be called unconditionally, from all threads in the thread block, so that the point in code where it is called can be reached by all the threads.
The following figure shows how the modified code will work.
We read the data from global memory into shared memory and reduce it to a single value, which is then saved to global memory before the kernel exits.
Note that we will need to synchronize threads in multiple places to make sure that they all reached an intermediate checkpoint.
A reduction algorithm that uses the shared memory.
The data is copied to the GPU global memory.
Each thread then saves one value into the shared memory.
The kernel then executes until all the data in shared memory is reduced into one value.
The procedure repeats until there is only one thread block left and all the data fits into it.
Note that each thread uses its own address in shared memory to save the data.
This is done to ensure that the data is not overwritten and to avoid extra synchronizations between threads.
Use shared memory
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
#include <time.h>
void reduce(const float* data, float* result, int numElements)
{
for (int i = 0; i < numElements/2; i++)
{
result[i] = data[2*i] + data[2*i + 1];
}
if (numElements % 2 != 0)
{
result[0] += data[numElements-1];
}
}
int main(int argc, char* argv[])
{
int numElements = (argc > 1) ? atoi(argv[1]) : 100000000;
printf("Reducing over %d values.\n", numElements);
float* data = (float*)calloc(numElements, sizeof(float));
srand(1214134);
for (int i = 0; i < numElements; i++)
{
data[i] = float(rand())/float(RAND_MAX + 1.0);
}
float* result = (float*)calloc(numElements, sizeof(float));
// Timing
clock_t start = clock();
reduce(data, result, numElements);
// Main loop
for (int numElementsCurrent = numElements/2; numElementsCurrent > 1; numElementsCurrent /= 2)
{
reduce(result, result, numElementsCurrent);
}
// Timing
clock_t finish = clock();
printf("The result is: %f\n", result[0]);
printf("It took %f seconds\n", (double)(finish - start) / CLOCKS_PER_SEC);
// Release the memory
free(data);
free(result);
return 0;
}
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
#include <time.h>
#define BLOCK_SIZE 256
__device__ __forceinline__ float getValue(const float* data, int index, int numElements)
{
if(index < numElements)
{
return data[index];
}
else
{
return 0.0f;
}
}
__global__ void reduce_kernel(const float* data, float* result, int numElements)
{
extern __shared__ float s_data[];
int s_i = threadIdx.x;
int d_i = threadIdx.x + blockIdx.x*blockDim.x;
s_data[s_i] = getValue(data, d_i, numElements);
for (int offset = 1; offset < blockDim.x; offset *= 2)
{
__syncthreads();
if (s_i % (2*offset) == 0)
{
s_data[s_i] += s_data[s_i + offset];
}
}
if (s_i == 0)
{
result[blockIdx.x] = s_data[0];
}
}
int main(int argc, char* argv[])
{
int numElements = (argc > 1) ? atoi(argv[1]) : 100000000;
printf("Reducing over %d values.\n", numElements);
float* h_data = (float*)calloc(numElements, sizeof(float));
srand(1214134);
for (int i = 0; i < numElements; i++)
{
h_data[i] = float(rand())/float(RAND_MAX + 1.0);
}
float h_result = 0.0;
float* d_data;
cudaMalloc((void**)&d_data, numElements*sizeof(float));
cudaMemcpy(d_data, h_data, numElements*sizeof(float), cudaMemcpyHostToDevice);
int threadsPerBlock = BLOCK_SIZE;
int numBlocks = numElements/BLOCK_SIZE + 1;
float* d_result1;
float* d_result2;
cudaMalloc((void**)&d_result1, numBlocks*sizeof(float));
cudaMalloc((void**)&d_result2, numBlocks*sizeof(float));
// Timing
clock_t start = clock();
// Main loop
reduce_kernel<<<numBlocks, threadsPerBlock, threadsPerBlock*sizeof(float)>>>(d_data, d_result1, numElements);
for (int numElementsCurrent = numBlocks; numElementsCurrent > 1; )
{
int numBlocksCurrent = numElementsCurrent/BLOCK_SIZE + 1;
reduce_kernel<<<numBlocksCurrent, threadsPerBlock, threadsPerBlock*sizeof(float)>>>(d_result1, d_result2, numElementsCurrent);
numElementsCurrent = numBlocksCurrent;
std::swap(d_result1, d_result2);
}
cudaMemcpy(&h_result, d_result1, 1*sizeof(float), cudaMemcpyDeviceToHost);
// Timing
clock_t finish = clock();
printf("The result is: %f\n", h_result);
printf("It took %f seconds\n", (double)(finish - start) / CLOCKS_PER_SEC);
// Release the memory
free(h_data);
cudaFree(d_data);
cudaFree(d_result1);
cudaFree(d_result2);
return 0;
}
First, let us introduce the shared memory array to the code.
We simply add to the kernel:
extern __shared__ float s_data[];
And a third argument to the kernel launch:
reduce_kernel<<<numBlocks, threadsPerBlock, threadsPerBlock*sizeof(float)>>>(..)
In the kernel, we first read one element of the input data per thread and save it to the shared memory array:
int s_i = threadIdx.x;
int d_i = threadIdx.x + blockIdx.x*blockDim.x;
s_data[s_i] = getValue(data, d_i, numElements);
To ensure that all the data is in shared memory, add a synchronization point after that.
The kernel now reduces more than two elements per launch.
This means that we need to add a loop over an offset from the thread index.
The offset should start from 1 (two consecutive elements are reduced) and go up to half of the block size (when the last two numbers are reduced).
On every loop iteration, the offset doubles.
Only the threads whose index is a multiple of twice the current offset do the reduction, so we need a conditional on that.
For instance, when offset is 1, only every other thread is reducing.
When it is half of the thread block, only the first one does the reduction.
We will also need a synchronization point after every loop iteration to ensure that the values are ready for the next one.
Make sure that the __syncthreads(..) is called unconditionally.
At the end of the kernel function, we need to save the result.
We can designate the first thread in the block to do so.
The code that calls the kernels should also be modified: every kernel call now reduces the number of elements by a factor of the block size.
4. Reduce thread divergence
In the previous version, we asked the thread that corresponds to the value being reduced to do the work.
This is not efficient on a GPU: neighboring threads will diverge from one another.
The next optimization step will fix that.
Let us try to modify the code so that the first threads in the block do the reduction.
This figure may look similar to the one before.
But have a look at the numbers in the gray circles.
They are the indices of the threads that do the reduction.
As one can see, they are now sequential, meaning that neighboring threads are more likely to take the same path in the conditionals.
This is especially important for threads within one warp, where both paths are executed in case divergence occurs.
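The change boils down to the following two variants of the inner reduction step (both appear in the full listings below):
// Divergent variant: only threads whose index is a multiple of 2*offset work,
// so the active threads are scattered across all warps.
if (s_i % (2*offset) == 0)
{
    s_data[s_i] += s_data[s_i + offset];
}
// Compacted variant: the first blockDim.x/(2*offset) threads do the work,
// so whole warps are either fully active or fully idle.
int index = 2 * offset * s_i;
if (index < blockDim.x)
{
    s_data[index] += s_data[index + offset];
}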
Reduce thread divergence
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
#include <time.h>
#define BLOCK_SIZE 256
__device__ __forceinline__ float getValue(const float* data, int index, int numElements)
{
if(index < numElements)
{
return data[index];
}
else
{
return 0.0f;
}
}
__global__ void reduce_kernel(const float* data, float* result, int numElements)
{
extern __shared__ float s_data[];
int s_i = threadIdx.x;
int d_i = threadIdx.x + blockIdx.x*blockDim.x;
s_data[s_i] = getValue(data, d_i, numElements);
for (int offset = 1; offset < blockDim.x; offset *= 2)
{
__syncthreads();
if (s_i % (2*offset) == 0)
{
s_data[s_i] += s_data[s_i + offset];
}
}
if (s_i == 0)
{
result[blockIdx.x] = s_data[0];
}
}
int main(int argc, char* argv[])
{
int numElements = (argc > 1) ? atoi(argv[1]) : 100000000;
printf("Reducing over %d values.\n", numElements);
float* h_data = (float*)calloc(numElements, sizeof(float));
srand(1214134);
for (int i = 0; i < numElements; i++)
{
h_data[i] = float(rand())/float(RAND_MAX + 1.0);
}
float h_result = 0.0;
float* d_data;
cudaMalloc((void**)&d_data, numElements*sizeof(float));
cudaMemcpy(d_data, h_data, numElements*sizeof(float), cudaMemcpyHostToDevice);
int threadsPerBlock = BLOCK_SIZE;
int numBlocks = numElements/BLOCK_SIZE + 1;
float* d_result1;
float* d_result2;
cudaMalloc((void**)&d_result1, numBlocks*sizeof(float));
cudaMalloc((void**)&d_result2, numBlocks*sizeof(float));
// Timing
clock_t start = clock();
// Main loop
reduce_kernel<<<numBlocks, threadsPerBlock, threadsPerBlock*sizeof(float)>>>(d_data, d_result1, numElements);
for (int numElementsCurrent = numBlocks; numElementsCurrent > 1; )
{
int numBlocksCurrent = numElementsCurrent/BLOCK_SIZE + 1;
reduce_kernel<<<numBlocksCurrent, threadsPerBlock, threadsPerBlock*sizeof(float)>>>(d_result1, d_result2, numElementsCurrent);
numElementsCurrent = numBlocksCurrent;
std::swap(d_result1, d_result2);
}
cudaMemcpy(&h_result, d_result1, 1*sizeof(float), cudaMemcpyDeviceToHost);
// Timing
clock_t finish = clock();
printf("The result is: %f\n", h_result);
printf("It took %f seconds\n", (double)(finish - start) / CLOCKS_PER_SEC);
// Release the memory
free(h_data);
cudaFree(d_data);
cudaFree(d_result1);
cudaFree(d_result2);
return 0;
}
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
#include <time.h>
#define BLOCK_SIZE 256
__device__ __forceinline__ float getValue(const float* data, int index, int numElements)
{
if(index < numElements)
{
return data[index];
}
else
{
return 0.0f;
}
}
__global__ void reduce_kernel(const float* data, float* result, int numElements)
{
extern __shared__ float s_data[];
int s_i = threadIdx.x;
int d_i = threadIdx.x + blockIdx.x*blockDim.x;
s_data[s_i] = getValue(data, d_i, numElements);
for (int offset = 1; offset < blockDim.x; offset *= 2)
{
__syncthreads();
int index = 2 * offset * s_i;
if (index < blockDim.x)
{
s_data[index] += s_data[index + offset];
}
}
if (s_i == 0)
{
result[blockIdx.x] = s_data[0];
}
}
int main(int argc, char* argv[])
{
int numElements = (argc > 1) ? atoi(argv[1]) : 100000000;
printf("Reducing over %d values.\n", numElements);
float* h_data = (float*)calloc(numElements, sizeof(float));
srand(1214134);
for (int i = 0; i < numElements; i++)
{
h_data[i] = float(rand())/float(RAND_MAX + 1.0);
}
float h_result = 0.0;
float* d_data;
cudaMalloc((void**)&d_data, numElements*sizeof(float));
cudaMemcpy(d_data, h_data, numElements*sizeof(float), cudaMemcpyHostToDevice);
int threadsPerBlock = BLOCK_SIZE;
int numBlocks = numElements/BLOCK_SIZE + 1;
float* d_result1;
float* d_result2;
cudaMalloc((void**)&d_result1, numBlocks*sizeof(float));
cudaMalloc((void**)&d_result2, numBlocks*sizeof(float));
// Timing
clock_t start = clock();
// Main loop
reduce_kernel<<<numBlocks, threadsPerBlock, threadsPerBlock*sizeof(float)>>>(d_data, d_result1, numElements);
for (int numElementsCurrent = numBlocks; numElementsCurrent > 1; )
{
int numBlocksCurrent = numElementsCurrent/BLOCK_SIZE + 1;
reduce_kernel<<<numBlocksCurrent, threadsPerBlock, threadsPerBlock*sizeof(float)>>>(d_result1, d_result2, numElementsCurrent);
numElementsCurrent = numBlocksCurrent;
std::swap(d_result1, d_result2);
}
cudaMemcpy(&h_result, d_result1, 1*sizeof(float), cudaMemcpyDeviceToHost);
// Timing
clock_t finish = clock();
printf("The result is: %f\n", h_result);
printf("It took %f seconds\n", (double)(finish - start) / CLOCKS_PER_SEC);
// Release the memory
free(h_data);
cudaFree(d_data);
cudaFree(d_result1);
cudaFree(d_result2);
return 0;
}
Change the thread indexing to make sure that the first threads in the block do the reduction.
This is easier to do if one computes the index of the reduced value from the thread index.
5. Sequential memory access
Now that consecutive threads do the work, we can address another issue with the code: the memory access pattern.
Even though the GPU has a relatively fast memory bus, it is utilized by many threads simultaneously.
To add to the problem, the cache size is small relative to the CPU: GPUs are designed to pack in as many cores as possible, so fewer transistors are left for local memory.
This makes the memory access pattern one of the most important things to consider when optimizing kernels.
Let us change the kernel so that sequential GPU threads read sequential memory addresses.
Since two values are added at a time, they will be separated by an offset that is large enough to accommodate the other threads.
This means that the shared memory array is split into two parts at each iteration: one holding the first value for each thread, the other holding the second.
The offset, or separation value, decreases from one iteration to the next as fewer values are left to reduce.
A scheme for the algorithm, where the memory is accessed sequentially.
At each iteration the reduced values are split into two equal parts which are read sequentially by sequential threads.
With fewer values left to reduce, the offset decreases, until it is equal to one for the last pair.
Note that all the relevant values are kept at the beginning of the array, so the data reads are less scattered.
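With this access pattern, the inner loop of the kernel becomes the following (this is the form used in the solution below):
for (int offset = blockDim.x / 2; offset > 0; offset >>= 1)
{
    __syncthreads();
    if (s_i < offset)
    {
        s_data[s_i] += s_data[s_i + offset];  // consecutive threads read consecutive addresses
    }
}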
Sequential memory access
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
#include <time.h>
#define BLOCK_SIZE 256
__device__ __forceinline__ float getValue(const float* data, int index, int numElements)
{
if(index < numElements)
{
return data[index];
}
else
{
return 0.0f;
}
}
__global__ void reduce_kernel(const float* data, float* result, int numElements)
{
extern __shared__ float s_data[];
int s_i = threadIdx.x;
int d_i = threadIdx.x + blockIdx.x*blockDim.x;
s_data[s_i] = getValue(data, d_i, numElements);
for (int offset = 1; offset < blockDim.x; offset *= 2)
{
__syncthreads();
int index = 2 * offset * s_i;
if (index < blockDim.x)
{
s_data[index] += s_data[index + offset];
}
}
if (s_i == 0)
{
result[blockIdx.x] = s_data[0];
}
}
int main(int argc, char* argv[])
{
int numElements = (argc > 1) ? atoi(argv[1]) : 100000000;
printf("Reducing over %d values.\n", numElements);
float* h_data = (float*)calloc(numElements, sizeof(float));
srand(1214134);
for (int i = 0; i < numElements; i++)
{
h_data[i] = float(rand())/float(RAND_MAX + 1.0);
}
float h_result = 0.0;
float* d_data;
cudaMalloc((void**)&d_data, numElements*sizeof(float));
cudaMemcpy(d_data, h_data, numElements*sizeof(float), cudaMemcpyHostToDevice);
int threadsPerBlock = BLOCK_SIZE;
int numBlocks = numElements/BLOCK_SIZE + 1;
float* d_result1;
float* d_result2;
cudaMalloc((void**)&d_result1, numBlocks*sizeof(float));
cudaMalloc((void**)&d_result2, numBlocks*sizeof(float));
// Timing
clock_t start = clock();
// Main loop
reduce_kernel<<<numBlocks, threadsPerBlock, threadsPerBlock*sizeof(float)>>>(d_data, d_result1, numElements);
for (int numElementsCurrent = numBlocks; numElementsCurrent > 1; )
{
int numBlocksCurrent = numElementsCurrent/BLOCK_SIZE + 1;
reduce_kernel<<<numBlocksCurrent, threadsPerBlock, threadsPerBlock*sizeof(float)>>>(d_result1, d_result2, numElementsCurrent);
numElementsCurrent = numBlocksCurrent;
std::swap(d_result1, d_result2);
}
cudaMemcpy(&h_result, d_result1, 1*sizeof(float), cudaMemcpyDeviceToHost);
// Timing
clock_t finish = clock();
printf("The result is: %f\n", h_result);
printf("It took %f seconds\n", (double)(finish - start) / CLOCKS_PER_SEC);
// Release the memory
free(h_data);
cudaFree(d_data);
cudaFree(d_result1);
cudaFree(d_result2);
return 0;
}
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
#include <time.h>
#define BLOCK_SIZE 256
__device__ __forceinline__ float getValue(const float* data, int index, int numElements)
{
if(index < numElements)
{
return data[index];
}
else
{
return 0.0f;
}
}
__global__ void reduce_kernel(const float* data, float* result, int numElements)
{
extern __shared__ float s_data[];
int s_i = threadIdx.x;
int d_i = threadIdx.x + blockIdx.x*blockDim.x;
s_data[s_i] = getValue(data, d_i, numElements);
for (int offset = blockDim.x / 2; offset > 0; offset >>= 1)
{
__syncthreads();
if (s_i < offset)
{
s_data[s_i] += s_data[s_i + offset];
}
}
if (s_i == 0)
{
result[blockIdx.x] = s_data[0];
}
}
int main(int argc, char* argv[])
{
int numElements = (argc > 1) ? atoi(argv[1]) : 100000000;
printf("Reducing over %d values.\n", numElements);
float* h_data = (float*)calloc(numElements, sizeof(float));
srand(1214134);
for (int i = 0; i < numElements; i++)
{
h_data[i] = float(rand())/float(RAND_MAX + 1.0);
}
float h_result = 0.0;
float* d_data;
cudaMalloc((void**)&d_data, numElements*sizeof(float));
cudaMemcpy(d_data, h_data, numElements*sizeof(float), cudaMemcpyHostToDevice);
int threadsPerBlock = BLOCK_SIZE;
int numBlocks = numElements/BLOCK_SIZE + 1;
float* d_result1;
float* d_result2;
cudaMalloc((void**)&d_result1, numBlocks*sizeof(float));
cudaMalloc((void**)&d_result2, numBlocks*sizeof(float));
// Timing
clock_t start = clock();
// Main loop
reduce_kernel<<<numBlocks, threadsPerBlock, threadsPerBlock*sizeof(float)>>>(d_data, d_result1, numElements);
for (int numElementsCurrent = numBlocks; numElementsCurrent > 1; )
{
int numBlocksCurrent = numElementsCurrent/BLOCK_SIZE + 1;
reduce_kernel<<<numBlocksCurrent, threadsPerBlock, threadsPerBlock*sizeof(float)>>>(d_result1, d_result2, numElementsCurrent);
numElementsCurrent = numBlocksCurrent;
std::swap(d_result1, d_result2);
}
cudaMemcpy(&h_result, d_result1, 1*sizeof(float), cudaMemcpyDeviceToHost);
// Timing
clock_t finish = clock();
printf("The result is: %f\n", h_result);
printf("It took %f seconds\n", (double)(finish - start) / CLOCKS_PER_SEC);
// Release the memory
free(h_data);
cudaFree(d_data);
cudaFree(d_result1);
cudaFree(d_result2);
return 0;
}
Change the loop over the offset values so that the offset goes from half of the block size down to 1.
To get the block size, one can use the blockDim.x variable.
Make sure that each working thread reads the value that corresponds to it and adds the one at the current offset from it.
6. Load two values at a time
At the very first iteration, half of the threads are not doing any reduction.
The only thing that the second half of the threads are doing is loading the data into the shared memory.
This can be easily fixed by loading two numbers in each thread and reducing them before saving to the shared memory.
In this case all threads will have some computations to do and fewer resources will be wasted.
Only the part of the algorithm that needs changing is shown.
Each thread now takes two values from global memory and reduces them immediately into the respective location in shared memory.
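In the kernel, the change amounts to loading and pre-reducing two elements per thread (this is the form used in the solution below):
int d_i = threadIdx.x + blockIdx.x*2*blockDim.x;             // each block now covers 2*blockDim.x elements
s_data[s_i] = getValue(data, d_i, numElements)
            + getValue(data, d_i + blockDim.x, numElements); // first pair-wise reduction during the load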
Load two values at a time
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
#include <time.h>
#define BLOCK_SIZE 256
__device__ __forceinline__ float getValue(const float* data, int index, int numElements)
{
if(index < numElements)
{
return data[index];
}
else
{
return 0.0f;
}
}
__global__ void reduce_kernel(const float* data, float* result, int numElements)
{
extern __shared__ float s_data[];
int s_i = threadIdx.x;
int d_i = threadIdx.x + blockIdx.x*blockDim.x;
s_data[s_i] = getValue(data, d_i, numElements);
for (int offset = blockDim.x / 2; offset > 0; offset >>= 1)
{
__syncthreads();
if (s_i < offset)
{
s_data[s_i] += s_data[s_i + offset];
}
}
if (s_i == 0)
{
result[blockIdx.x] = s_data[0];
}
}
int main(int argc, char* argv[])
{
int numElements = (argc > 1) ? atoi(argv[1]) : 100000000;
printf("Reducing over %d values.\n", numElements);
float* h_data = (float*)calloc(numElements, sizeof(float));
srand(1214134);
for (int i = 0; i < numElements; i++)
{
h_data[i] = float(rand())/float(RAND_MAX + 1.0);
}
float h_result = 0.0;
float* d_data;
cudaMalloc((void**)&d_data, numElements*sizeof(float));
cudaMemcpy(d_data, h_data, numElements*sizeof(float), cudaMemcpyHostToDevice);
int threadsPerBlock = BLOCK_SIZE;
int numBlocks = numElements/BLOCK_SIZE + 1;
float* d_result1;
float* d_result2;
cudaMalloc((void**)&d_result1, numBlocks*sizeof(float));
cudaMalloc((void**)&d_result2, numBlocks*sizeof(float));
// Timing
clock_t start = clock();
// Main loop
reduce_kernel<<<numBlocks, threadsPerBlock, threadsPerBlock*sizeof(float)>>>(d_data, d_result1, numElements);
for (int numElementsCurrent = numBlocks; numElementsCurrent > 1; )
{
int numBlocksCurrent = numElementsCurrent/BLOCK_SIZE + 1;
reduce_kernel<<<numBlocksCurrent, threadsPerBlock, threadsPerBlock*sizeof(float)>>>(d_result1, d_result2, numElementsCurrent);
numElementsCurrent = numBlocksCurrent;
std::swap(d_result1, d_result2);
}
cudaMemcpy(&h_result, d_result1, 1*sizeof(float), cudaMemcpyDeviceToHost);
// Timing
clock_t finish = clock();
printf("The result is: %f\n", h_result);
printf("It took %f seconds\n", (double)(finish - start) / CLOCKS_PER_SEC);
// Release the memory
free(h_data);
cudaFree(d_data);
cudaFree(d_result1);
cudaFree(d_result2);
return 0;
}
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
#include <time.h>
#define BLOCK_SIZE 256
__device__ __forceinline__ float getValue(const float* data, int index, int numElements)
{
if(index < numElements)
{
return data[index];
}
else
{
return 0.0f;
}
}
__global__ void reduce_kernel(const float* data, float* result, int numElements)
{
extern __shared__ float s_data[];
int s_i = threadIdx.x;
int d_i = threadIdx.x + blockIdx.x*2*blockDim.x;
s_data[s_i] = getValue(data, d_i, numElements) + getValue(data, d_i + blockDim.x, numElements);
for (int offset = blockDim.x / 2; offset > 0; offset >>= 1)
{
__syncthreads();
if (s_i < offset)
{
s_data[s_i] += s_data[s_i + offset];
}
}
if (s_i == 0)
{
result[blockIdx.x] = s_data[0];
}
}
int main(int argc, char* argv[])
{
int numElements = (argc > 1) ? atoi(argv[1]) : 100000000;
printf("Reducing over %d values.\n", numElements);
float* h_data = (float*)calloc(numElements, sizeof(float));
srand(1214134);
for (int i = 0; i < numElements; i++)
{
h_data[i] = float(rand())/float(RAND_MAX + 1.0);
}
float h_result = 0.0;
float* d_data;
cudaMalloc((void**)&d_data, numElements*sizeof(float));
cudaMemcpy(d_data, h_data, numElements*sizeof(float), cudaMemcpyHostToDevice);
int threadsPerBlock = BLOCK_SIZE;
int numBlocks = numElements/2/BLOCK_SIZE + 1;
float* d_result1;
float* d_result2;
cudaMalloc((void**)&d_result1, numBlocks*sizeof(float));
cudaMalloc((void**)&d_result2, numBlocks*sizeof(float));
// Timing
clock_t start = clock();
// Main loop
reduce_kernel<<<numBlocks, threadsPerBlock, threadsPerBlock*sizeof(float)>>>(d_data, d_result1, numElements);
for (int numElementsCurrent = numBlocks; numElementsCurrent > 1; )
{
int numBlocksCurrent = numElementsCurrent/2/BLOCK_SIZE + 1;
reduce_kernel<<<numBlocksCurrent, threadsPerBlock, threadsPerBlock*sizeof(float)>>>(d_result1, d_result2, numElementsCurrent);
numElementsCurrent = numBlocksCurrent;
std::swap(d_result1, d_result2);
}
cudaMemcpy(&h_result, d_result1, 1*sizeof(float), cudaMemcpyDeviceToHost);
// Timing
clock_t finish = clock();
printf("The result is: %f\n", h_result);
printf("It took %f seconds\n", (double)(finish - start) / CLOCKS_PER_SEC);
// Release the memory
free(h_data);
cudaFree(d_data);
cudaFree(d_result1);
cudaFree(d_result2);
return 0;
}
Change the part of the code where the values are saved to the shared memory so that two values are read simultaneously and the first pairwise reduction is done.
Only half as many thread blocks are now needed, so the kernel launch configuration and loop over the kernel launches should be changed accordingly.
7. Unroll the last warp
GPUs are often referred to as having a Single Instruction Multiple Threads (SIMT) architecture.
This is to distinguish them from Single Instruction Multiple Data (SIMD) devices.
The main difference is that different threads can execute different instructions.
However, this is only true when the threads in question are not in the same warp.
A warp is a group of threads that executes the same instruction for all of its threads.
Within a warp, any thread divergence causes both paths to be executed, even when only one thread takes the alternative path.
On NVIDIA GPUs, a warp is a unit of 32 threads, which means that when we get down to that many active threads, special care should be taken to make sure that there is no divergence.
In fact, even evaluating the conditional will slow the execution down.
The good thing is that, inside the warp, all the threads do the same operation at the same time, which can be used to remove explicit synchronization calls.
In our code, we slowly reduce the number of active threads from the block width to 2 on the last iteration.
When the number of active threads reaches the size of a warp, all the active threads are within the same warp and we can manually unroll the last iterations.
While doing so, we will ask all the threads to do the reduction, not only those that produce the numbers needed at the next iteration.
It may look like we are asking the GPU to do extra work, but, in fact, we are removing extra conditional checks.
Indeed, the inactive threads would have taken a different path where they do nothing.
But since they are in the same warp as the threads that actually do the work, the inactive threads would be idling while this happens anyway.
Last warp reduction for a warp of size 4 (indicated by dashed lines).
Only the changed part of the algorithm is shown.
Every thread computes the binary reduction at each iteration, which allows one to remove the conditional.
Even though this leads to computing values that are not used, the reduction in thread divergence inside a warp gives better performance.
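On CUDA 9 and later, an alternative to the shared-memory unroll used in the solution below is to finish the reduction with warp shuffle intrinsics. A minimal sketch, assuming a full warp of 32 active threads, could look like this:
__device__ float warpReduceShuffle(float val)
{
    // 0xffffffff: all 32 lanes of the warp participate in the shuffle.
    for (int offset = 16; offset > 0; offset >>= 1)
    {
        val += __shfl_down_sync(0xffffffff, val, offset);  // add the value held by the lane `offset` lanes away
    }
    return val;  // lane 0 ends up holding the sum of the whole warp
}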
Unroll the last warp
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
#include <time.h>
#define BLOCK_SIZE 256
__device__ __forceinline__ float getValue(const float* data, int index, int numElements)
{
if(index < numElements)
{
return data[index];
}
else
{
return 0.0f;
}
}
__global__ void reduce_kernel(const float* data, float* result, int numElements)
{
extern __shared__ float s_data[];
int s_i = threadIdx.x;
int d_i = threadIdx.x + blockIdx.x*2*blockDim.x;
s_data[s_i] = getValue(data, d_i, numElements) + getValue(data, d_i + blockDim.x, numElements);
for (int offset = blockDim.x / 2; offset > 0; offset >>= 1)
{
__syncthreads();
if (s_i < offset)
{
s_data[s_i] += s_data[s_i + offset];
}
}
if (s_i == 0)
{
result[blockIdx.x] = s_data[0];
}
}
int main(int argc, char* argv[])
{
int numElements = (argc > 1) ? atoi(argv[1]) : 100000000;
printf("Reducing over %d values.\n", numElements);
float* h_data = (float*)calloc(numElements, sizeof(float));
srand(1214134);
for (int i = 0; i < numElements; i++)
{
h_data[i] = float(rand())/float(RAND_MAX + 1.0);
}
float h_result = 0.0;
float* d_data;
cudaMalloc((void**)&d_data, numElements*sizeof(float));
cudaMemcpy(d_data, h_data, numElements*sizeof(float), cudaMemcpyHostToDevice);
int threadsPerBlock = BLOCK_SIZE;
int numBlocks = numElements/2/BLOCK_SIZE + 1;
float* d_result1;
float* d_result2;
cudaMalloc((void**)&d_result1, numBlocks*sizeof(float));
cudaMalloc((void**)&d_result2, numBlocks*sizeof(float));
// Timing
clock_t start = clock();
// Main loop
reduce_kernel<<<numBlocks, threadsPerBlock, threadsPerBlock*sizeof(float)>>>(d_data, d_result1, numElements);
for (int numElementsCurrent = numBlocks; numElementsCurrent > 1; )
{
int numBlocksCurrent = numElementsCurrent/2/BLOCK_SIZE + 1;
reduce_kernel<<<numBlocksCurrent, threadsPerBlock, threadsPerBlock*sizeof(float)>>>(d_result1, d_result2, numElementsCurrent);
numElementsCurrent = numBlocksCurrent;
std::swap(d_result1, d_result2);
}
cudaMemcpy(&h_result, d_result1, 1*sizeof(float), cudaMemcpyDeviceToHost);
// Timing
clock_t finish = clock();
printf("The result is: %f\n", h_result);
printf("It took %f seconds\n", (double)(finish - start) / CLOCKS_PER_SEC);
// Release the memory
free(h_data);
cudaFree(d_data);
cudaFree(d_result1);
cudaFree(d_result2);
return 0;
}
#include <stdio.h>
#include <stdlib.h>
#include <algorithm>
#include <time.h>
#define BLOCK_SIZE 256
__device__ __forceinline__ float getValue(const float* data, int index, int numElements)
{
if(index < numElements)
{
return data[index];
}
else
{
return 0.0f;
}
}
__device__ void warpReduce(volatile float* sdata, int s_i)
{
sdata[s_i] += sdata[s_i + 32];
sdata[s_i] += sdata[s_i + 16];
sdata[s_i] += sdata[s_i + 8];
sdata[s_i] += sdata[s_i + 4];
sdata[s_i] += sdata[s_i + 2];
sdata[s_i] += sdata[s_i + 1];
}
__global__ void reduce_kernel(const float* data, float* result, int numElements)
{
extern __shared__ float s_data[];
int s_i = threadIdx.x;
int d_i = threadIdx.x + blockIdx.x*2*blockDim.x;
s_data[s_i] = getValue(data, d_i, numElements) + getValue(data, d_i + blockDim.x, numElements);
for (int offset = blockDim.x / 2; offset > 32; offset >>= 1)
{
__syncthreads();
if (s_i < offset)
{
s_data[s_i] += s_data[s_i + offset];
}
}
if (s_i < 32)
{
warpReduce(s_data, s_i);
}
if (s_i == 0)
{
result[blockIdx.x] = s_data[0];
}
}
int main(int argc, char* argv[])
{
int numElements = (argc > 1) ? atoi(argv[1]) : 100000000;
printf("Reducing over %d values.\n", numElements);
float* h_data = (float*)calloc(numElements, sizeof(float));
srand(1214134);
for (int i = 0; i < numElements; i++)
{
h_data[i] = float(rand())/float(RAND_MAX + 1.0);
}
float h_result = 0.0;
float* d_data;
cudaMalloc((void**)&d_data, numElements*sizeof(float));
cudaMemcpy(d_data, h_data, numElements*sizeof(float), cudaMemcpyHostToDevice);
int threadsPerBlock = BLOCK_SIZE;
int numBlocks = numElements/2/BLOCK_SIZE + 1;
float* d_result1;
float* d_result2;
cudaMalloc((void**)&d_result1, numBlocks*sizeof(float));
cudaMalloc((void**)&d_result2, numBlocks*sizeof(float));
// Timing
clock_t start = clock();
// Main loop
reduce_kernel<<<numBlocks, threadsPerBlock, threadsPerBlock*sizeof(float)>>>(d_data, d_result1, numElements);
for (int numElementsCurrent = numBlocks; numElementsCurrent > 1; )
{
int numBlocksCurrent = numElementsCurrent/2/BLOCK_SIZE + 1;
reduce_kernel<<<numBlocksCurrent, threadsPerBlock, threadsPerBlock*sizeof(float)>>>(d_result1, d_result2, numElementsCurrent);
numElementsCurrent = numBlocksCurrent;
std::swap(d_result1, d_result2);
}
cudaMemcpy(&h_result, d_result1, 1*sizeof(float), cudaMemcpyDeviceToHost);
// Timing
clock_t finish = clock();
printf("The result is: %f\n", h_result);
printf("It took %f seconds\n", (double)(finish - start) / CLOCKS_PER_SEC);
// Release the memory
free(h_data);
cudaFree(d_data);
cudaFree(d_result1);
cudaFree(d_result2);
return 0;
}
Create a separate __device__ function that will handle the last warp reduction.
This function should take the shared memory array of values and the index of the thread within the block.
Manually unroll the loop of 6 reductions (\(32 = 2^5\) plus one extra reduction to get the last value).
Note that the shared memory array argument should have the volatile qualifier to tell the compiler not to optimize these accesses away.
Reduce the number of iterations in the main kernel loop and call the new warp reduction function for the last 32 values.
8. Further improvements
There is more one can do with the current code to get even better performance.
Please see this excellent presentation from Mark Harris (NVIDIA) for some ideas.
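One such idea, shown here only as a sketch and not as part of the original exercise, is to replace the shared-memory last-warp step with warp shuffle intrinsics (__shfl_down_sync, available since CUDA 9), which exchange values directly between registers of the threads in a warp:

```cpp
// Sketch: warp-level reduction with shuffles instead of volatile shared memory.
// After the loop, lane 0 of the warp holds the sum of all 32 input values.
__device__ float warpReduceShfl(float value)
{
    for (int offset = 16; offset > 0; offset >>= 1)
    {
        value += __shfl_down_sync(0xffffffff, value, offset);
    }
    return value;
}
```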
© Copyright 2021, Artem Zhmurov and individual contributors.
|
Introduction
Hi everyone!
In this post, I will share with you all the steps to write an optimized FP32 matrix multiplication on an AMD RDNA3 GPU, outperforming rocBLAS by 60%. I will cover some basics and explain all the optimizations I have implemented. This will be done in an iterative way across 8 different kernels.
Figure 1: sneak peek of the performance results
I primarily intended to work on this to deepen my understanding of RDNA3 and try out HIP, and I felt like I needed to share what I learned doing this :).
A few things I would like to say before we start:
That being said, let's start!
Problem statement
There is a lot of research happening on the way to improve the performance of matrix multiplication nowadays. Being a core algorithm in ML applications, any FLOPS we can exploit is golden.
Before proceeding, let's recall the basics of matrix multiplication. Given two matrices, \(A\) of size \(M \times K\) and \(B\) of size \(K \times N\):
Their product, \(C\), is computed as follows:
\(\large C_{ij} = \sum_{k=1}^{K} A_{ik} \cdot B_{kj}\)
where \(C\) is the resulting matrix of size \(M \times N\).
For each output value of matrix C, we compute the dot product between the rows of matrix A and the columns of matrix B.
Figure 2: example for the first element of C
In terms of complexity, we have \(\large O(n^3)\) computational complexity and \(\large O(n^2)\) memory accesses.
If we don’t think about architectural details, this is clearly a compute bound problem and our goal will be to be compute bound on the GPU.
Let's say we manage to write the best implementation possible for the 7900 XTX. How fast could it run? To answer this question, we need to look a bit at the RDNA3 architecture.
RDNA3 GPUs are made of arrays of WorkGroup Processors (WGPs). Every WGP is split into 2 Compute Units (CUs), themselves split into 2 SIMDs. A SIMD handles the work of multiple threads organized in waves (or warps for CUDA folks) and has a set of components to do the work (like arithmetic operations). For floating point operations, each SIMD has two 32-wide VALU units.
Figure 3: simplified representation of WGPs
Figure 4: simplified representation of a single SIMD
We can compute our theoretical floating point operations per second with this formula:
\(\large \text{FLOPS} = N_{\text{SIMD}} \times \text{FLOP per cycle per SIMD} \times f_{\text{clock}}\)
Every SIMD can issue 2 floating point instructions per cycle (one on each VALU unit). If we use FMA instructions (Fused Multiply Add), each SIMD can issue \(32*2*2=128\) floating point operations per cycle.
The 7900 XTX has 48 WGPs, that's \(48*2*2=192\) SIMDs.
Our theoretical VRAM bandwidth is given by:
\(\large BW = \text{bus width} \times \text{data rate}\)
The 7900 XTX uses GDDR6 with a 384-bit bus running at 20 Gbps, which gives \(\large 384/8 \times 20 = 960\ \text{GB/s}\).
If we go back to our 4096x4096 matrix multiplication, we essentially need to do \(\large 2*4096*4096*4096\) operations.
With a 61 TFLOPS implementation, it would take roughly 2.23 ms to do the work, and the bandwidth required to sustain this rate would be \(\large \frac{4096 \times 4096 \times 4 \times 3}{2.23 \times 10^{-3}} = 90.2\ \text{GB/s}\).
Of course, these are oversimplified calculations as they totally ignore memory hierarchy but we see that the available bandwidth is sufficiently high so that we can increase the amount of data we read to be closer to compute bound.
Kernel 1: naive implementation
Let's start with a naive implementation like this:
__global__ void kernel1_naive(const float *A, const float *B, float *C, int M, int K, int N, float alpha, float beta)
{
int row = blockIdx.y * blockDim.y + threadIdx.y;
int col = blockIdx.x * blockDim.x + threadIdx.x;
if (row < M && col < N)
{
float acc_c = 0.0f;
for (int k = 0; k < K; ++k)
{
acc_c += A[row * K + k] * B[k * N + col];
}
C[row * N + col] = alpha * acc_c + beta * C[row * N + col];
}
}
You will notice I am doing \(\large C=\alpha \cdot A \cdot B+\beta \cdot C\) instead of \(\large C=A \cdot B\) here. This is because it makes it easier to compare with libraries like rocBLAS, where matrix multiplication is provided by SGEMM functions (Single-Precision General Matrix Multiply).
We launch 4096x4096 threads with a block size of 16x16, and each thread computes the inner dot product described before.
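For reference, a minimal sketch of the host-side launch just described (the device pointer names d_a, d_b, d_c and the scalars follow the rocBLAS snippet shown later; M = K = N = 4096 here):

```cpp
// One thread per output element, 16x16 threads per block.
dim3 block(16, 16);
dim3 grid((N + block.x - 1) / block.x, (M + block.y - 1) / block.y);
kernel1_naive<<<grid, block>>>(d_a, d_b, d_c, M, K, N, alpha, beta);
```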
The performance for this kernel is 136 ms (1010.60 GFLOPS/s). I know, that's pretty bad and far off our 61 TFLOPS target.
Kernel 0: rocBLAS reference implementation
Now that we have seen possibly the worst implementation in terms of performance, let’s look at the official rocBLAS implementation.
const int M = N;
const int K = N;
CHECK_ROCBLAS_STATUS(rocblas_sgemm(
handle,
rocblas_operation_none, // Transpose option for A
rocblas_operation_none, // Transpose option for B
M, // Number of rows in A and C
N, // Number of columns in B and C
K, // Number of columns in A and rows in B
&alpha, // alpha
d_a, // Matrix A on the device
M, // Leading dimension of A
d_b, // Matrix B on the device
K, // Leading dimension of B
&beta, // beta
d_c, // Matrix C on the device
M // Leading dimension of C
));
As discussed before, I used the rocblas_sgemm function with alpha and beta set to 1.0 [2].
The performance for this kernel is 4.49 ms (30547 GFLOPS/s). This is clearly much better than our kernel 1 but still far from our theoretical 61.4 TFLOPS.
By inspecting the ISA in RGP [3], I couldn't find any dual issue instructions in the kernel (only v_fmac_f32_e32) [4].
Figure 5: extract of rocBLAS ISA code
This is very surprising, as this essentially means one of the VALU units is sitting there doing nothing.
Considering this, the VALU utilization of this kernel is pretty impressive, at almost 100%. However, it's really surprising that we can't exploit these dual issue instructions properly. I'll come back to that later.
Kernel 2: LDS Tiling
The main issue with our naive kernel is that our inner loop directly accesses global memory. This is inefficient because fetching data from global memory has a high latency, typically on the order of hundreds of cycles. Since each memory read is followed by minimal computation (just one multiplication and one addition), the GPU struggles to hide this latency, even with a large number of concurrent threads. Moreover, the algorithm repeatedly reads the same rows and columns from global memory across different threads, leading to redundant memory accesses and further exacerbating the performance bottleneck.
A solution to this problem is to load the data once into faster local memory and then iterate efficiently over it with all the threads. On RDNA3, we have the Local Data Store (LDS), a high-speed, low-latency memory accessible by all threads within a workgroup.
Figure 6: simplified representation of the memory hierarchy
Since the LDS has a much smaller capacity than global memory, we need to use tiling to divide our problem into smaller sub-matrix multiplications. One way to facilitate this is to restructure the computation by moving the inner loop’s dot product to the outer loop. The key idea is to cache a column of matrix A and a row of matrix B, then perform the computation across the entire tile. This approach is more cache-efficient and significantly reduces memory access latency.
The pseudo code for our kernel 1 is:
for i from 0 to M - 1: # Loop over rows of A
for j from 0 to N - 1: # Loop over columns of B
sum = 0
for k from 0 to K - 1: # Loop over columns of A / rows of B
sum += A[i][k] * B[k][j]
end for
C[i][j] = sum
end for
end for
If we move the dot product to the outer loop, we have this :
for k from 0 to K - 1: # Outer loop over the shared dimension
for i from 0 to M - 1: # Loop over rows of A
for j from 0 to N - 1: # Loop over columns of B
C[i][j] += A[i][k] * B[k][j]
end for
end for
end for
Tiling in this form is straightforward: each workgroup operates on a tile and follows these steps (\(BK\) is the batch size, i.e. the number of rows/columns we load into the LDS):
Init c to 0
While kId is less than N:
# Load A and B to Tile As and Bs
Load BK columns of A to As
Load BK rows of B to Bs
Syncthreads
# Accumulate results using LDS
for k from 0 to BK
c += As[threadIdx.y][k] * Bs[k][threadIdx.x]
Syncthreads
Increment kId by BK
end for
c[row][col]=c
If we choose a tile size of 32x32 and \(BK=32\), our new kernel looks like this:
#define TILE_SIZE 32
__global__ void kernel2_lds(const float *A, const float *B, float *C, int N)
{
__shared__ float As[TILE_SIZE][TILE_SIZE];
__shared__ float Bs[TILE_SIZE][TILE_SIZE];
int row = blockIdx.y * TILE_SIZE + threadIdx.y;
int col = blockIdx.x * TILE_SIZE + threadIdx.x;
float sum = 0.0f;
for (int t = 0; t < N; t += TILE_SIZE)
{
Bs[threadIdx.y][threadIdx.x] = B[N * (threadIdx.y + t) + col];
As[threadIdx.y][threadIdx.x] = A[N * row + t + threadIdx.x];
__syncthreads();
for (int k = 0; k < TILE_SIZE; k++)
{
sum += As[threadIdx.y][k] * Bs[k][threadIdx.x];
}
__syncthreads();
}
if (row < N && col < N)
{
C[row * N + col] = sum;
}
}
__syncthreads(); is required here to ensure that all threads in the workgroup can see the data loaded into the LDS and to synchronize before any updates are made to the data.
We also ensure that the contents of both matrices A and B are loaded into the LDS by rows rather than columns to avoid uncoalesced memory accesses.
Indeed, if we were to read by columns, each thread in a wave would access a non-contiguous memory region, resulting in multiple separate transactions and reduced efficiency as shown in the 2 diagrams below.
Figure 7: coalesced loads for matrix A. A single 128 bytes memory transaction for all threads
Figure 8: non coalesced loads for matrix A. Multiple 32 bytes memory transactions for a single wave
From the ISA guide, device memory is accessed through 32-, 64-, or 128-byte transactions, which must be naturally aligned. Maximizing memory throughput requires coalescing memory accesses across threads within a wave to minimize the number of transactions [5].
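To make the distinction concrete, here is the load from the kernel above next to a hypothetical column-wise variant (the commented-out line is a counter-example added for illustration, not code from the kernel):

```cpp
// Coalesced: threads with consecutive threadIdx.x read consecutive floats of A,
// so a wave maps to a single wide memory transaction.
As[threadIdx.y][threadIdx.x] = A[N * row + t + threadIdx.x];

// Uncoalesced (avoid): consecutive threads would read elements N floats apart,
// producing one small transaction per thread.
// As[threadIdx.x][threadIdx.y] = A[N * (blockIdx.y * TILE_SIZE + threadIdx.x) + t + threadIdx.y];
```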
The performance for this kernel is 34.2 ms (4017 GFLOPS/s). 4 times faster than our naive kernel!
Let’s use RGP to understand what is going on.
Our occupancy is pretty good (100 %) but our VALU utilization is only 15%.
Figure 9 : stats taken from the instruction tab in RGP
If we look at the ISA in the instruction timing tab, we see a couple of interesting things :
Figure 10 : instruction timing
To understand what is happening, we need to quickly explain how SIMD scheduling works. During each clock cycle, the SIMD selects an instruction to issue from a pool of waves. A SIMD can manage up to 16 wavefronts in parallel. When we refer to occupancy, we are actually talking about the ratio of active waves to the theoretical maximum number of waves that a SIMD can support.
The more active wavefronts there are, the greater the likelihood that the SIMD can switch between waves, increasing the chances of hiding latency within individual wavefronts. [6]
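In other words, restating the definition above as a formula:
\(\large \text{occupancy} = \frac{\text{active waves per SIMD}}{16}\)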
If we go back to our case, we likely have something like this:
Figure 11 : wavefront scheduling within a SIMD
Here, we have a high-occupancy kernel launching many waves in parallel, all contending for access to the LDS. Since the time taken by our VALU operations is shorter than the LDS latency, the latency cannot be hidden, even with additional threads. This results in both LDS bandwidth congestion and resource waste due to latency.
One way to address this issue is by increasing the arithmetic intensity of our kernel, ensuring that the VALU operations per wave take longer than the LDS memory reads.
Kernel 3 : Register tiling
Now, we want to increase the arithmetic complexity of our kernel. This means having each thread perform more computations. Essentially, we aim to increase the ratio of computation to data read.
One way to achieve this is to compute a small output tile per thread—for example, an 8x8 tile. To do this, we introduce an additional level of tiling.
Each thread will be responsible for producing a small tile of the output matrix. We can cache the contents of matrices \(A\) and \(B\) into registers to enable very low-latency access. However, registers are limited on the GPU, with 1536 VGPRs (Vector General-Purpose Registers) available per SIMD and a maximum of 256 registers per kernel. Increasing register usage means we won’t be able to launch as many waves per SIMD, effectively reducing occupancy. However, this shouldn’t be an issue if we can maximize utilization of the SIMD’s VALUs (Vector Arithmetic Logic Units) with just a few waves.
Now, let’s look at the different levels of tiling:
Figure 12 : tiling levels
Essentially, it means each thread will now be responsible for computing an 8x8 output tile.
Our kernel parameters look like this:
#define BLOCK_SIZE 256
// Block Tile size
constexpr int BN = 128;
constexpr int BM = 128;
// Number of rows or columns we read per batch
constexpr int BK = 8;
// Thread tile size: 4x4
constexpr int TN = 4;
constexpr int TM = 4;
// Threads of a wave are arranged as an 8x4 block over the output matrix
constexpr int nbThreadXPerWave = 8;
constexpr int nbThreadYPerWave = 4;
// Number of waves in a block
constexpr int nbWavesPerBlock = BLOCK_SIZE / 32;
constexpr int WN = 64;
constexpr int WM = BN * BM / nbWavesPerBlock / WN;
constexpr int nbIterWaveN = WN / (nbThreadXPerWave * TN);
constexpr int nbIterWaveM = WM / (nbThreadYPerWave * TM);
// LDS Tile
__shared__ float As[BK][BM];
__shared__ float Bs[BK][BN];
// Column and row from A and B, stored into registers
float A_col[nbIterWaveM * TM];
float B_row[nbIterWaveN * TN];
//Wave Tile (registers)
float C_regs[TM * nbIterWaveM * TN * nbIterWaveN] = {0.0f};
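For reference, here is a quick sanity check of the values these constants work out to (my own arithmetic, matching the 8x8 per-thread tile mentioned above):

```cpp
// nbWavesPerBlock = 256 / 32           = 8
// WM              = 128 * 128 / 8 / 64 = 32
// nbIterWaveN     = 64 / (8 * 4)       = 2
// nbIterWaveM     = 32 / (4 * 4)       = 2
// Per-thread output tile: (TM * nbIterWaveM) x (TN * nbIterWaveN) = 8 x 8
```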
The pseudo code for our new kernel:
Initialize kId to 0
While kId is less than N:
# Loading Tile to LDS
Load BK columns from A to As
Load BK rows from B to Bs
Syncthreads
For k from 0 to BK - 1 do:
Load col k of As to A_col
Load row k of Bs to B_row
# Wave Tile
For idY from 0 to nbIterWaveM:
For idX from 0 to nbIterWaveN:
# Thread Tile
For i from 0 to TM:
For j from 0 to TN:
x = idX * TN + j;
y = idY * TM + i;
C_regs[y][x] += A_col[y] * B_row[x]
Syncthreads
Increment kId by BK
Write C_regs to C
Full kernel source code can be found here
The performance for this kernel is 6.03 ms (22777 GFLOPS/s), 5 times faster than our previous kernel!
We have a lower occupancy but VALU utilization has significantly increased.
Figure 13 : kernel 3 stats
The ISA looks good. We now have a lot of v_dual_fmac instructions—exactly what we wanted, even though some are still single fma.
Figure 14 : kernel 3 instruction timing
Even though this is a significant improvement over Kernel 2, we can still see that we are waiting for the LDS. This is especially true for the first batch of ds_load instructions, where we observe more than 100 clock cycles of cumulative non-hidden latency, as seen below:
Figure 15 : ds_load instructions latencies
Before diving into this, we need to first improve the way we read from global memory. According to RGP, this is now the biggest bottleneck in terms of performance.
Figure 16 : gmem wait latency
Our cumulative latency for global memory waits exceeds 12 million clock cycles, which is four times more than the LDS load wait in the inner loop.
To further optimize performance, we will focus on better hiding the Global memory read latency.
Kernel 4 : GMEM double buffering
With our current implementation, every wave must wait for global memory and then for the LDS write latency before doing any work. In a high-occupancy scenario, this shouldn't be an issue if the GPU can find other waves to hide this latency. However, in practice, we often have multiple waves in the same state running simultaneously because we use a __syncthreads() before and after reading from global memory.
Figure 17 : several waves waiting for GMEM loads
One way to mitigate this is by using double buffering. We could allocate twice the memory and perform reads and writes to the LDS in parallel.
Alternatively, we could use intermediate registers to load data from global memory while working on the LDS, only writing to LDS just before it is needed. This ensures no waiting on global memory.
I prefer this approach for now, as I don’t want to introduce additional LDS pressure in the inner loop just yet.
Figure 18 : double buffering on GMEM loads
If we update our pseudo code, we now have:
Initialize kId to 0
# Load first batch before loop
Load BK columns from A to As
Load BK rows from B to Bs
Syncthreads
While kId is less than N:
# Loading Tile to LDS
Load BK columns from A to A_TMP (no wait)
Load BK rows from B to B_TMP (no wait)
For k from 0 to BK - 1 do:
Load col k of As to A_col
Load row k of Bs to B_row
# Wave Tile
For idY from 0 to nbIterWaveM:
For idX from 0 to nbIterWaveN:
# Thread Tile
For i from 0 to TM:
For j from 0 to TN:
x = idX * TN + j;
y = idY * TM + i;
C_regs[y][x] += A_col[y] * B_row[x]
Syncthreads
Save A_TMP and B_TMP to As and Bs
Syncthreads
Increment kId by BK
Write C_regs to C
To my surprise, the performance for this kernel decreased to 14.3032 ms (9612.48 GFLOPS/s), more than 2 times slower than kernel 3!
Our double buffering algorithm utilizes more registers and reduces occupancy.
After inspecting the ISA in RGP, we see that the HIP compiler attempts to keep register usage low by using scratch memory instead, which is detrimental to performance [7].
Figure 19 : scratch_load instructions introduced to reduce register usage
Unfortunately, we cannot directly set the maximum number of registers per kernel in HIP (which is theoretically 256). However, we can use the launch_bounds extension to provide hints to the compiler.
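A minimal sketch of what that hint looks like (the kernel name and signature here are placeholders, not the actual kernel 4 code):

```cpp
// Telling the compiler that blocks never exceed BLOCK_SIZE threads lets it
// budget more VGPRs per thread instead of spilling to scratch memory.
__global__ void __launch_bounds__(BLOCK_SIZE)
kernel4_gmem_double_buffer(const float *A, const float *B, float *C, int N)
{
    // ... kernel body unchanged ...
}
```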
With this change, the performance is back to normal: 5.37 ms (25559.6 GFLOPS/s).
Full kernel source code can be found here
VALU utilization has increased from 43 % to 52 %.
Figure 20 : kernel 4 stats
We can now go back to our LDS loads in the inner loop which have become the new bottleneck, as shown below.
Figure 21 : latency on LDS loads
Kernel 5 : Optimize LDS usage
One thing I didn't look at in previous kernels is whether or not we had bank conflicts on the LDS. This information is actually not easily accessible in RGP. If we look at the ISA section where we write to the LDS, we see that the latencies are unexpectedly high.
Figure 22 : latencies on LDS writes
According to the RDNA3 programming guide, the LDS memory is split into 64 banks of DWORD-wide RAMs. These 64 banks are further sub-divided into two sets of 32 banks each, where 32 of the banks are affiliated with one pair of SIMD32s and the other 32 banks are affiliated with the other pair of SIMD32s within the WGP. Each bank is a 512x32 two-port RAM (1R/1W per clock cycle). DWORDs are placed in the banks serially, but all banks can execute a store or load simultaneously. [1]
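Put differently, assuming the serial DWORD placement just described, a byte address \(a\) maps to bank \(\large (a/4) \bmod 32\).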
So, if threads within a wave access the same bank, the memory transactions will be serialized, which is exactly what happens when we write a column of matrix A to As.
Figure 23 : Matrix A bank conflicts and how to remove them
Our current kernel reads the content of matrix A row by row to avoid uncoalesced memory loads. Given that we then operate on columns of matrix A, we transpose matrix A into matrix As so that each row of As corresponds to a tile column of A.
Now, if we look at how this work is mapped to waves, we see that we essentially write 8 times to 4 consecutive banks within each wave. One way to fix this is to add a padding of 4 elements to our LDS matrix As.
__shared__ float As[BK][BM+4]; // 4 padding to avoid bank conflicts
Doing another RGP capture with this change:
Figure 24 : updated latency with padding
LDS latency has decreased a lot and our VALU utilization is now 62.3%.
However, our kernel is still bound by these LDS loads. Let's do some napkin math and check whether we are reaching the limit of the LDS bandwidth.
As said before, each pair of SIMD32s has a 32-bank memory capable of serving one DWORD per bank per clock. Our theoretical bandwidth should be something like this:
\(\large BW_{LDS} = N_{WGP} \times 2 \times 32\ \text{banks} \times 4\ \text{bytes} \times f_{\text{clock}}\)
Now, let’s analyze what our current algorithm does:
That's for reading. We also write to the LDS: 4096x128x32x32x4x2 = 4294967296 bytes.
With our current execution time of 5.37 ms, the required LDS bandwidth is roughly 13.56 TBytes/s.
This is less than 46% of the maximum capacity, but it is highly likely that our kernel experiences congestion in the LDS when multiple waves attempt to read or write simultaneously.
To mitigate this, we can try these 2 things:
According to the RDNA3 programming guide, the LDS can operate in 2 distinct modes: WGP mode and CU mode. HIP uses WGP mode by default.
In WGP mode, the LDS is one large contiguous memory that all waves on the WGP can access meaning we are more likely to get congestion on the LDS.
In CU mode, the LDS is effectively split into a separate upper and lower LDS, each serving two SIMD32’s. Waves are allocated LDS space within the half of LDS which is associated with the SIMD the wave is running on.
By enabling CU mode, we should reduce the probability of waves contending for the LDS [8].
The second thing we can try is to increase our thread tile to 16x8 instead of 8x8. This will improve the computation-to-data-read ratio. It should still fit within the 256 VGPR budget we have for the kernel and reduce our bandwidth requirements to 10.3 TBytes/s.
With all these changes, the performance for this kernel is now 4.09 ms (33526 GFLOPS/s). That's better than rocBLAS!
Full kernel source code can be found here
We continue increasing VALU utilization, and now our kernel has doubled in register usage (which makes sense, as we have doubled our register space requirements). Even though occupancy is low, overall performance is better because we are making better use of the VALU units.
Figure 25 : kernel 5 stats
If we look at the ISA, we now have a small LDS latency of less than 30 cycles and most of it is hidden.
Figure 26 : kernel 5 instruction timing
OK, so our kernel outperforms rocBLAS, but even though we are now using dual_fmac instructions, the performance is still not as high as we would expect.
At this point, I tried several optimizations, but I struggled to get the HIP compiler to generate the code I wanted. Even small changes to the C++ code drastically altered the generated ISA, making optimization work very difficult. This was especially problematic with inline assembly, where the compiler would move instructions to incorrect locations due to a lack of explicit dependencies. Additionally, there is no way to manually assign specific VGPRs to particular instructions.
Because of these challenges, I decided to optimize directly at the ISA level, which is what we will focus on in the next steps.
Looking at RGP, one thing still puzzles me in the inner loop: the HIP compiler does not use dual_fmac instructions exclusively—we always see a few single FMA instructions mixed in. Another issue is that all the v_dual_fmac instructions have a minimum latency of 2–3 cycles. While this may seem insignificant, it adds up across all instructions and impacts overall performance at our current execution speed.
Kernel 6 : VALU optimization
Before we go into the next optimizations, I need to be able to directly modify the ISA. To do so, I will now use the Module Management API so that we can load pre-compiled kernel code. Of course the idea is that we generate the ISA of our kernel from C++ once and then iterate on the ISA for any further version.
To do so, I need to extract the ISA source file from my current C++ kernel and ask HIP to build the hsaco binary format:
hipcc --genco --offload-arch=gfx1100 kernel5_lds_optim.cpp -mcumode --save-temps -o tmp.hsaco
The --save-temps parameter will allow us to have access to the intermediate .s file containing the ISA.
HIP should produce these files:
kernel5_lds_optim-hip-amdgcn-amd-amdhsa-gfx1100.bc
kernel5_lds_optim-hip-amdgcn-amd-amdhsa-gfx1100.hipi
kernel5_lds_optim-hip-amdgcn-amd-amdhsa-gfx1100.o
kernel5_lds_optim-hip-amdgcn-amd-amdhsa-gfx1100.out
kernel5_lds_optim-hip-amdgcn-amd-amdhsa-gfx1100.out.resolution.txt
kernel5_lds_optim-hip-amdgcn-amd-amdhsa-gfx1100.s
kernel5_lds_optim-hip-amdgcn-amd-amdhsa-gfx1100.s is our guy.
Now we can take this file as a basis for our modifications and assemble it with the commands:
hipcc -target amdgcn-amd-amdhsa -mcpu=gfx1100 -mcumode -c kernel_modified.s -o kernel.o
ld.lld -shared kernel.o -o kernel.hsaco
The kernel.hsaco file can then be loaded at runtime using the Module Management API of HIP.
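A minimal sketch of that host-side flow (the kernel name, argument list and launch dimensions are placeholders, not the exact code used here):

```cpp
#include <hip/hip_runtime.h>

// Load a pre-assembled .hsaco and launch the kernel it contains through the
// HIP Module Management API.
void launchFromHsaco(float *d_a, float *d_b, float *d_c, int n)
{
    hipModule_t module;
    hipFunction_t kernel;
    hipModuleLoad(&module, "kernel.hsaco");
    hipModuleGetFunction(&kernel, module, "kernel5_lds_optim");

    // HIP passes the kernel arguments as one packed buffer.
    struct { float *a, *b, *c; int n; } args{d_a, d_b, d_c, n};
    size_t argsSize = sizeof(args);
    void *config[] = {HIP_LAUNCH_PARAM_BUFFER_POINTER, &args,
                      HIP_LAUNCH_PARAM_BUFFER_SIZE, &argsSize,
                      HIP_LAUNCH_PARAM_END};

    hipModuleLaunchKernel(kernel,
                          n / 128, n / 128, 1,   // grid (placeholder)
                          128, 1, 1,             // block (placeholder)
                          0, nullptr,            // dynamic LDS, default stream
                          nullptr, config);
    hipDeviceSynchronize();
    hipModuleUnload(module);
}
```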
Direct control over the ISA is great for micro-benchmarking and makes it easier to instrument the code for performance assessment without worrying about unexpected compiler optimizations.
For example, I tried duplicating our dual_fmac instructions 32 times in the inner loop to see if we could artificially become VALU-bound. However, it turns out that our VALU utilization cannot exceed 75%!
Figure 27 : Fake kernel with 32x more VALU operations
The next thing I tried was to launch a single workgroup with a single wave running. It turns out these 2-3 clock latencies are still there, meaning they must come from the VGPR distribution of these dual_fmac instructions.
Ok, so let’s take a closer look at these dual instructions and see if we can do something about it.
Dual instructions are of this form:
OpCodeX DSTX, SRCX0, SRCX1 :: OpCodeY DSTY, SRCY0, SRCY1
In our case:
v_dual_fmac_f32 DSTX, SRCX0, SRCX1 :: v_dual_fmac_f32 DSTY, SRCY0, SRCY1
The two instructions are executed at the same time, so there are no races between them if one reads a VGPR and the other writes the same VGPR. The ‘read’ gets the old value.
There are a number of constraints on the use of these instructions. Namely:
On top of that, the RDNA3 programming guide states:
Figure 28 : my visualization of register banks and dual instructions. Example of an instruction using bank 0,1,2,3
Figure 29 : SRCX0 and SRCY0 must use different banks
The bank number of register \(\large X\) is given by \(\large X\%4\).
Taking an example:
v_dual_fmac_f32 v10, v189, v207 :: v_dual_fmac_f32 v9, v190, v20
FMAC Bank2, Bank1, Bank3 :: FMAC Bank1, Bank2, Bank0
With this instruction, we are reading the 4 different banks in parallel and writing to bank 1 and 2 the next cycle.
In practice, we could read from the same bank in both OPX and OPY if it is not used by the same operand. For example, this is valid given that SRCX0 and SRCY0 use different banks:
v_dual_fmac_f32 v123, v139, v144 :: v_dual_fmac_f32 v114, v140, v143
FMAC Bank3, Bank3, Bank0 :: FMAC Bank2, Bank0, Bank3
Both instructions are reading the same banks (0 & 3). The way I see it (that's not covered in the ISA guide AFAIK), 2 things could happen here:
On top of that, we also need to take into account the VGPRs we are writing to even though we write on the next cycle.
So, even though we may have a valid VGPR distribution that successfully compiles, we could still encounter register bank conflicts, impacting performance.
Let’s take a look at what the HIP compiler has generated for us.
v_dual_fmac_f32 v127, v138, v144 :: v_dual_fmac_f32 v122, v139, v143
v_dual_fmac_f32 v128, v138, v145 :: v_dual_fmac_f32 v121, v139, v142
v_dual_fmac_f32 v123, v139, v144 :: v_dual_fmac_f32 v114, v140, v143
v_dual_fmac_f32 v124, v139, v145 :: v_dual_fmac_f32 v113, v140, v142
v_dual_fmac_f32 v115, v140, v144 :: v_dual_fmac_f32 v110, v141, v143
v_dual_fmac_f32 v116, v140, v145 :: v_dual_fmac_f32 v109, v141, v142
v_dual_fmac_f32 v111, v141, v144 :: v_dual_fmac_f32 v90, v138, v147
v_dual_fmac_f32 v112, v141, v145 :: v_dual_fmac_f32 v89, v138, v146
v_dual_fmac_f32 v91, v138, v148 :: v_dual_fmac_f32 v94, v139, v147
v_dual_fmac_f32 v92, v138, v149 :: v_dual_fmac_f32 v93, v139, v146
v_dual_fmac_f32 v95, v139, v148 :: v_dual_fmac_f32 v98, v140, v147
v_dual_fmac_f32 v96, v139, v149 :: v_dual_fmac_f32 v97, v140, v146
v_dual_fmac_f32 v99, v140, v148 :: v_dual_fmac_f32 v118, v141, v147
v_dual_fmac_f32 v100, v140, v149 :: v_dual_fmac_f32 v117, v141, v146
v_dual_fmac_f32 v119, v141, v148 :: v_dual_fmac_f32 v70, v138, v151
v_dual_fmac_f32 v120, v141, v149 :: v_dual_fmac_f32 v69, v138, v150
;...
If we analyse both the banks and the cache state for the first instructions, we get something like this:
I assumed that writes are performed with a 1-clock delay, which is why the first row of DSTX and DSTY is empty.
We can see that the compiler does a great job of reusing the cache, as we only read a small amount of data. However, the access pattern is not consistent over time, and we often use the same bank more than twice.
I started creating microbenchmarks to understand the latencies displayed in RGP based on different access patterns in terms of VGPR banks. However, this turned out to be quite complex, likely due to the underlying architecture’s complexity.
Instead of spending too much time on this, I tried to design an implementation following these principles:
The good news is that we can ignore alignment constraints on the output matrix C, given that the number of iterations in the inner loop is quite high. In other words, we can freely shuffle register allocations during the accumulation phase and reorder them only once before writing to memory. This effectively removes one constraint, as we no longer need to maintain a direct mapping between contiguous memory locations and contiguous registers. This might be the reason the HIP compiler was struggling to only use dual_fmac instructions (write of matrix C_reg to C by global_store_b128 requires 4 consecutive VGPRs)
Since kernel 4, our inner loop consists of doing the multiplication between the 8 elements of a column of A and the 16 elements of a row of B. We can assume both A and B are contiguously distributed across the 4 different VGPR banks. Something like this:
Figure 30 : inner loop - product between A_col and B_row
For the sake of simplicity, I will only represent the algorithm on an 8x4 tile from now on. A naive approach is to create dual instructions by shifting a small diagonal like this, making sure the SRC0s and SRC1s of the two halves use different banks.
Figure 31 : naive distribution
The cell number represents the instruction index.
We see it is perfectly doable to use only dual issue instructions; however, some of them use the same banks multiple times. This is something we wanted to avoid. One way to get rid of this is to store A and B on a set of non-overlapping banks, for example B only on banks 0-1 and A only on banks 2-3. The issue with this is that we won't be able to use the ds_load_b128 instruction anymore, as it targets 4 consecutive VGPRs. So instead of having 6 ds_load_b128 instructions like we have now, we will have 12 ds_load_b64 instead. If the performance uplift from our change is good enough, it shouldn't matter.
Figure 32 : separate banks between A and B
All green! However, if we look at the cache use and the read pattern, we have this:
Figure 33 : Cache usage per instruction
We have good reuse of the values from A, although instruction 8 performs two reads. However, if we look at the read pattern in the detailed table below, we can see that we mostly read from bank 0 and bank 1, and mostly from instruction Y (which is not as symmetrical as we would like it to be).
Instead of iterating over the values of A alone, we could iterate over both A and B, swapping registers between instructions X and Y to maximize cache usage.
Figure 34 : Optimal solution
Looking at the detailed view, we now have a nice and symmetrical access pattern. Both instructions X and Y read the same amount of data from the register file, and we iterate over the 4 banks in a sequential way (not just banks 0 and 1).
Right, now that we are happy with our new access pattern, how can we apply this change to the code?
Here are the steps:
List the VGPRs used
Let's start with the ds_load_b128 instructions:
ds_load_b128 v[184:187], v183
ds_load_b128 v[188:191], v183 offset:64
ds_load_b128 v[192:195], v204
ds_load_b128 v[196:199], v204 offset:128
ds_load_b128 v[200:203], v204 offset:256
ds_load_b128 v[204:207], v204 offset:384
The first 2 instructions are responsible for loading A.
If we look at the fma instructions now :
v_fmac_f32_e32 v124, v184, v192
v_fmac_f32_e32 v133, v184, v193
v_dual_fmac_f32 v132, v184, v194 :: v_dual_fmac_f32 v129, v185, v192
v_dual_fmac_f32 v131, v184, v195 :: v_dual_fmac_f32 v128, v185, v193
v_dual_fmac_f32 v127, v185, v194 :: v_dual_fmac_f32 v122, v186, v193
v_dual_fmac_f32 v126, v185, v195 :: v_dual_fmac_f32 v123, v186, v192
v_dual_fmac_f32 v121, v186, v194 :: v_dual_fmac_f32 v116, v187, v193
v_dual_fmac_f32 v120, v186, v195 :: v_dual_fmac_f32 v117, v187, v192
v_dual_fmac_f32 v115, v187, v194 :: v_dual_fmac_f32 v112, v184, v197
v_dual_fmac_f32 v114, v187, v195 :: v_dual_fmac_f32 v113, v184, v196
v_dual_fmac_f32 v111, v184, v198 :: v_dual_fmac_f32 v108, v185, v197
v_dual_fmac_f32 v110, v184, v199 :: v_dual_fmac_f32 v109, v185, v196
v_dual_fmac_f32 v107, v185, v198 :: v_dual_fmac_f32 v104, v186, v197
v_dual_fmac_f32 v106, v185, v199 :: v_dual_fmac_f32 v105, v186, v196
v_dual_fmac_f32 v103, v186, v198 :: v_dual_fmac_f32 v100, v187, v197
v_dual_fmac_f32 v102, v186, v199 :: v_dual_fmac_f32 v101, v187, v196
v_dual_fmac_f32 v99, v187, v198 :: v_dual_fmac_f32 v96, v184, v201
v_dual_fmac_f32 v98, v187, v199 :: v_dual_fmac_f32 v97, v184, v200
v_dual_fmac_f32 v95, v184, v202 :: v_dual_fmac_f32 v92, v185, v201
v_dual_fmac_f32 v94, v184, v203 :: v_dual_fmac_f32 v93, v185, v200
v_dual_fmac_f32 v91, v185, v202 :: v_dual_fmac_f32 v88, v186, v201
v_dual_fmac_f32 v90, v185, v203 :: v_dual_fmac_f32 v89, v186, v200
v_dual_fmac_f32 v87, v186, v202 :: v_dual_fmac_f32 v84, v187, v201
v_dual_fmac_f32 v86, v186, v203 :: v_dual_fmac_f32 v85, v187, v200
v_dual_fmac_f32 v83, v187, v202 :: v_dual_fmac_f32 v80, v184, v205
v_dual_fmac_f32 v82, v187, v203 :: v_dual_fmac_f32 v81, v184, v204
v_dual_fmac_f32 v79, v184, v206 :: v_dual_fmac_f32 v76, v185, v205
v_dual_fmac_f32 v78, v184, v207 :: v_dual_fmac_f32 v77, v185, v204
v_dual_fmac_f32 v75, v185, v206 :: v_dual_fmac_f32 v72, v186, v205
v_dual_fmac_f32 v74, v185, v207 :: v_dual_fmac_f32 v73, v186, v204
v_dual_fmac_f32 v71, v186, v206 :: v_dual_fmac_f32 v68, v187, v205
v_dual_fmac_f32 v70, v186, v207 :: v_dual_fmac_f32 v69, v187, v204
v_dual_fmac_f32 v67, v187, v206 :: v_dual_fmac_f32 v64, v188, v193
v_dual_fmac_f32 v66, v187, v207 :: v_dual_fmac_f32 v65, v188, v192
v_dual_fmac_f32 v63, v188, v194 :: v_dual_fmac_f32 v60, v189, v193
v_dual_fmac_f32 v62, v188, v195 :: v_dual_fmac_f32 v61, v189, v192
v_dual_fmac_f32 v59, v189, v194 :: v_dual_fmac_f32 v56, v190, v193
v_dual_fmac_f32 v58, v189, v195 :: v_dual_fmac_f32 v57, v190, v192
v_dual_fmac_f32 v55, v190, v194 :: v_dual_fmac_f32 v52, v191, v193
v_dual_fmac_f32 v54, v190, v195 :: v_dual_fmac_f32 v53, v191, v192
v_dual_fmac_f32 v51, v191, v194 :: v_dual_fmac_f32 v48, v188, v197
v_dual_fmac_f32 v50, v191, v195 :: v_dual_fmac_f32 v49, v188, v196
v_dual_fmac_f32 v47, v188, v198 :: v_dual_fmac_f32 v44, v189, v197
v_dual_fmac_f32 v46, v188, v199 :: v_dual_fmac_f32 v45, v189, v196
v_dual_fmac_f32 v43, v189, v198 :: v_dual_fmac_f32 v40, v190, v197
v_dual_fmac_f32 v42, v189, v199 :: v_dual_fmac_f32 v41, v190, v196
v_dual_fmac_f32 v39, v190, v198 :: v_dual_fmac_f32 v36, v191, v197
v_dual_fmac_f32 v38, v190, v199 :: v_dual_fmac_f32 v37, v191, v196
v_dual_fmac_f32 v35, v191, v198 :: v_dual_fmac_f32 v32, v188, v201
v_dual_fmac_f32 v34, v191, v199 :: v_dual_fmac_f32 v33, v188, v200
v_dual_fmac_f32 v31, v188, v202 :: v_dual_fmac_f32 v28, v189, v201
v_dual_fmac_f32 v30, v188, v203 :: v_dual_fmac_f32 v29, v189, v200
v_dual_fmac_f32 v27, v189, v202 :: v_dual_fmac_f32 v24, v190, v201
v_dual_fmac_f32 v26, v189, v203 :: v_dual_fmac_f32 v25, v190, v200
v_dual_fmac_f32 v23, v190, v202 :: v_dual_fmac_f32 v20, v191, v201
v_dual_fmac_f32 v22, v190, v203 :: v_dual_fmac_f32 v21, v191, v200
v_dual_fmac_f32 v19, v191, v202 :: v_dual_fmac_f32 v16, v188, v205
v_dual_fmac_f32 v18, v191, v203 :: v_dual_fmac_f32 v17, v188, v204
v_dual_fmac_f32 v15, v188, v206 :: v_dual_fmac_f32 v12, v189, v205
v_dual_fmac_f32 v14, v188, v207 :: v_dual_fmac_f32 v13, v189, v204
v_dual_fmac_f32 v11, v189, v206 :: v_dual_fmac_f32 v8, v190, v205
v_dual_fmac_f32 v10, v189, v207 :: v_dual_fmac_f32 v9, v190, v204
v_dual_fmac_f32 v7, v190, v206 :: v_dual_fmac_f32 v4, v191, v205
v_dual_fmac_f32 v6, v190, v207 :: v_dual_fmac_f32 v5, v191, v204
v_fmac_f32_e32 v3, v191, v206
v_fmac_f32_e32 v2, v191, v207
Matrix C_reg is spread across these ranges:
\(\large[2, 117], [120, 124], [126, 129], [131,133]\)
VGPR redistribution
It turns out that the VGPR allocation for C_reg is already close to what we need. We just need to add an extra bank 2 VGPR to ensure that all 128 VGPRs are allocated sequentially across banks 0-3.
This is good news, as it allows us to maintain compatibility with the initialization code for C_reg (setting all values to 0.0).
New allocation for C_reg : \([2, 117], [120, 124], [126, 129], [131,133], [214]\)
For A_col and B_row, we also need to allocate extra registers given that B_row will only use bank 0-1.
New allocation for A_col and B_row :
A_col : \([186,187],[190,191],[194,195],[198,199]\) (banks 2-3)
B_row : \([184,185] , [188,189] , [192,193] , [196,197] , [200,201], [204,205] , [208,209] , [212,213]\) (banks 0-1)
Re-write LDS loads
Our new code for loading A_col from As:
;A on bank 2-3
ds_load_b64 v[186:187], v183
ds_load_b64 v[190:191], v183 offset: 8
ds_load_b64 v[194:195], v183 offset: 64
ds_load_b64 v[198:199], v183 offset: 72
Loading B_row from Bs:
;B on bank 0-1
ds_load_b64 v[184:185], v202
ds_load_b64 v[188:189], v202 offset: 8
ds_load_b64 v[192:193], v202 offset: 128
ds_load_b64 v[196:197], v202 offset: 136
ds_load_b64 v[200:201], v202 offset: 256
ds_load_b64 v[204:205], v202 offset: 264
ds_load_b64 v[208:209], v202 offset: 384
ds_load_b64 v[212:213], v202 offset: 392
v183 and v202 are the new VGPRs holding the addresses of A and B in the LDS memory.
Re-write dual_fmas
We can then write our inner loop using only v_dual_fmac instructions:
v_dual_fmac_f32 v5, v186, v184 :: v_dual_fmac_f32 v2, v187, v185
v_dual_fmac_f32 v3, v186, v185 :: v_dual_fmac_f32 v4, v187, v184
v_dual_fmac_f32 v9, v186, v188 :: v_dual_fmac_f32 v6, v187, v189
v_dual_fmac_f32 v7, v187, v188 :: v_dual_fmac_f32 v8, v186, v189
v_dual_fmac_f32 v13, v190, v188 :: v_dual_fmac_f32 v10, v191, v189
v_dual_fmac_f32 v11, v190, v189 :: v_dual_fmac_f32 v12, v191, v188
v_dual_fmac_f32 v17, v190, v184 :: v_dual_fmac_f32 v14, v191, v185
v_dual_fmac_f32 v15, v191, v184 :: v_dual_fmac_f32 v16, v190, v185
v_dual_fmac_f32 v21, v194, v184 :: v_dual_fmac_f32 v18, v195, v185
v_dual_fmac_f32 v19, v194, v185 :: v_dual_fmac_f32 v20, v195, v184
v_dual_fmac_f32 v25, v194, v188 :: v_dual_fmac_f32 v22, v195, v189
v_dual_fmac_f32 v23, v195, v188 :: v_dual_fmac_f32 v24, v194, v189
v_dual_fmac_f32 v29, v198, v188 :: v_dual_fmac_f32 v26, v199, v189
v_dual_fmac_f32 v27, v198, v189 :: v_dual_fmac_f32 v28, v199, v188
v_dual_fmac_f32 v33, v198, v192 :: v_dual_fmac_f32 v30, v199, v193
v_dual_fmac_f32 v31, v199, v192 :: v_dual_fmac_f32 v32, v198, v193
v_dual_fmac_f32 v37, v186, v192 :: v_dual_fmac_f32 v34, v187, v193
v_dual_fmac_f32 v35, v186, v193 :: v_dual_fmac_f32 v36, v187, v192
v_dual_fmac_f32 v41, v186, v196 :: v_dual_fmac_f32 v38, v187, v197
v_dual_fmac_f32 v39, v187, v196 :: v_dual_fmac_f32 v40, v186, v197
v_dual_fmac_f32 v45, v190, v196 :: v_dual_fmac_f32 v42, v191, v197
v_dual_fmac_f32 v43, v190, v197 :: v_dual_fmac_f32 v44, v191, v196
v_dual_fmac_f32 v49, v190, v192 :: v_dual_fmac_f32 v46, v191, v193
v_dual_fmac_f32 v47, v191, v192 :: v_dual_fmac_f32 v48, v190, v193
v_dual_fmac_f32 v53, v194, v192 :: v_dual_fmac_f32 v50, v195, v193
v_dual_fmac_f32 v51, v194, v193 :: v_dual_fmac_f32 v52, v195, v192
v_dual_fmac_f32 v57, v194, v196 :: v_dual_fmac_f32 v54, v195, v197
v_dual_fmac_f32 v55, v195, v196 :: v_dual_fmac_f32 v56, v194, v197
v_dual_fmac_f32 v61, v198, v196 :: v_dual_fmac_f32 v58, v199, v197
v_dual_fmac_f32 v59, v198, v197 :: v_dual_fmac_f32 v60, v199, v196
v_dual_fmac_f32 v65, v198, v200 :: v_dual_fmac_f32 v62, v199, v201
v_dual_fmac_f32 v63, v199, v200 :: v_dual_fmac_f32 v64, v198, v201
v_dual_fmac_f32 v69, v186, v200 :: v_dual_fmac_f32 v66, v187, v201
v_dual_fmac_f32 v67, v186, v201 :: v_dual_fmac_f32 v68, v187, v200
v_dual_fmac_f32 v73, v186, v204 :: v_dual_fmac_f32 v70, v187, v205
v_dual_fmac_f32 v71, v187, v204 :: v_dual_fmac_f32 v72, v186, v205
v_dual_fmac_f32 v77, v190, v204 :: v_dual_fmac_f32 v74, v191, v205
v_dual_fmac_f32 v75, v190, v205 :: v_dual_fmac_f32 v76, v191, v204
v_dual_fmac_f32 v81, v190, v200 :: v_dual_fmac_f32 v78, v191, v201
v_dual_fmac_f32 v79, v191, v200 :: v_dual_fmac_f32 v80, v190, v201
v_dual_fmac_f32 v85, v194, v200 :: v_dual_fmac_f32 v82, v195, v201
v_dual_fmac_f32 v83, v194, v201 :: v_dual_fmac_f32 v84, v195, v200
v_dual_fmac_f32 v89, v194, v204 :: v_dual_fmac_f32 v86, v195, v205
v_dual_fmac_f32 v87, v195, v204 :: v_dual_fmac_f32 v88, v194, v205
v_dual_fmac_f32 v93, v198, v204 :: v_dual_fmac_f32 v90, v199, v205
v_dual_fmac_f32 v91, v198, v205 :: v_dual_fmac_f32 v92, v199, v204
v_dual_fmac_f32 v97, v198, v208 :: v_dual_fmac_f32 v94, v199, v209
v_dual_fmac_f32 v95, v199, v208 :: v_dual_fmac_f32 v96, v198, v209
v_dual_fmac_f32 v101, v186, v208 :: v_dual_fmac_f32 v98, v187, v209
v_dual_fmac_f32 v99, v186, v209 :: v_dual_fmac_f32 v100, v187, v208
v_dual_fmac_f32 v105, v186, v212 :: v_dual_fmac_f32 v102, v187, v213
v_dual_fmac_f32 v103, v187, v212 :: v_dual_fmac_f32 v104, v186, v213
v_dual_fmac_f32 v109, v190, v212 :: v_dual_fmac_f32 v106, v191, v213
v_dual_fmac_f32 v107, v190, v213 :: v_dual_fmac_f32 v108, v191, v212
v_dual_fmac_f32 v113, v190, v208 :: v_dual_fmac_f32 v110, v191, v209
v_dual_fmac_f32 v111, v191, v208 :: v_dual_fmac_f32 v112, v190, v209
v_dual_fmac_f32 v117, v194, v208 :: v_dual_fmac_f32 v114, v195, v209
v_dual_fmac_f32 v115, v194, v209 :: v_dual_fmac_f32 v116, v195, v208
v_dual_fmac_f32 v121, v194, v212 :: v_dual_fmac_f32 v122, v195, v213
v_dual_fmac_f32 v123, v195, v212 :: v_dual_fmac_f32 v120, v194, v213
v_dual_fmac_f32 v129, v198, v212 :: v_dual_fmac_f32 v126, v199, v213
v_dual_fmac_f32 v127, v198, v213 :: v_dual_fmac_f32 v124, v199, v212
v_dual_fmac_f32 v133, v198, v184 :: v_dual_fmac_f32 v214, v199, v185
v_dual_fmac_f32 v131, v199, v184 :: v_dual_fmac_f32 v128, v198, v185
Restore VGPR mapping
We use a temporary VGPR to restore the full mapping like this:
; v2 -> v128 & v128 -> v2
v_mov_b32 v200, v128
v_mov_b32 v128, v2
v_mov_b32 v2, v200
; v128 -> v56 & v56 -> v128
v_mov_b32 v200, v56
v_mov_b32 v56, v2
v_mov_b32 v2, v200
; v56 -> v46 & v46 -> v56
v_mov_b32 v200, v46
v_mov_b32 v46, v2
v_mov_b32 v2, v200
; v46 -> v100 & v100 -> v46
v_mov_b32 v200, v100
v_mov_b32 v100, v2
v_mov_b32 v2, v200
...
To facilitate these changes, I wrote a small C++ program to parse the ISA, extract the mapping between the old and new VGPR distribution, and automatically generate all the necessary instructions.
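A sketch of the parsing side of such a tool (my own illustration, not the author's program; the renumbering policy here is a placeholder for the bank-aware assignment described above):

```cpp
#include <iostream>
#include <map>
#include <regex>
#include <string>

// Read ISA lines on stdin, collect every VGPR referenced by the dual_fmac
// block, and print an old -> new mapping from which the v_mov swap sequence
// can be generated.
int main()
{
    std::regex vgpr("v(\\d+)");
    std::map<int, int> mapping;   // old VGPR -> new VGPR
    int next = 2;                 // placeholder: first VGPR of the C_regs range
    std::string line;
    while (std::getline(std::cin, line))
    {
        if (line.find("v_dual_fmac_f32") == std::string::npos)
            continue;
        for (std::sregex_iterator it(line.begin(), line.end(), vgpr), end; it != end; ++it)
        {
            int reg = std::stoi((*it)[1].str());
            if (mapping.count(reg) == 0)
                mapping[reg] = next++;
        }
    }
    for (const auto &kv : mapping)
        std::cout << "v" << kv.first << " -> v" << kv.second << "\n";
    return 0;
}
```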
Our kernel now uses 214 VGPRs instead of 208. We need to modify this in the .s file in the amdhsa.kernels section:
.vgpr_count: 214
Full kernel source code can be found here
Performance for this kernel is 3.63 ms (37791.2 GFLOP/s).
Our VALU utilization has gone up again to 76.2 % (more than our 75 % with 32x the inner loop).
Figure 35 : kernel 6 stats
If we look at this ISA, our inner loop consists solely of v_dual_fmac instructions, each with a 1-cycle latency. Beautiful!
Figure 36 : kernel 6 instruction timing
We can also see that many cycles are wasted at the end of the loop on branching. Let’s try to optimize that in the next kernel.
Kernel 7 : Loop unrolling
I previously tried unrolling the inner loop in the C++ HIP implementation, but it didn’t work out. The kernel became too large as the compiler pre-fetched more values from the LDS, and performance remained unchanged.
Now that we have a highly efficient loop and full control over the ISA, we might have better luck. For this step, I will simply duplicate the added code from Kernel 6 eight times and remove the loop mechanism.
s_cmpk_lg_i32 s14, 0x1000 ; Remove this line at the beginning of the loop
s_waitcnt lgkmcnt(0)
v_dual_fmac_f32 ...
v_dual_fmac_f32 ...
s_cbranch_scc1 .LBB0_9 ; Remove this line at the end of the loop
Duplicate our load and multiplication 8 times and make sure we increment the addresses:
v_add_nc_u32_e32 v183, 0x210, v183 ; B : 0x210 = (128+4)*4
v_add_nc_u32_e32 v202, 0x200, v202 ; A : 0x200 = (128)*4
Full kernel source code can be found here
The performance for this kernel is 3.33 ms (41255.6 GFLOPS/s).
VALU utilization is now above 80 %.
Figure 37 : kernel 7 stats
The instruction timing starts to look very good as well :
Figure 38 : instruction timing
So why aren't we faster?
If we look at the total latency clk in RGP, our biggest offender is the wait on the barrier. The s_waitcnt just before is the wait on the global memory loads.
Figure 39 : Remaining non hidden latencies
We can't eliminate the barrier since we need to synchronize the threads before writing to LDS. However, if we examine the generated code for global memory loads, we notice a large code segment dedicated to it (128 lines).
Figure 40 : GMEM loads
I didn't notice it before, but even though the latency is partly hidden, the cumulative latency for a single load is around 1.3 million clks. Given that we do 16 different loads (8 for each matrix), that's over 20 million clks of latency here!
Let’s see how we can improve this in the next kernel.
Kernel 8 : Batched GMEM loads
OK, let's start by looking at what HIP has generated for us (I have removed s_delay_alu instructions for better readability):
v_add_nc_u32_e32 v169, s4, v168
v_ashrrev_i32_e32 v170, 31, v169
v_lshlrev_b64 v[170:171], 2, v[169:170]
v_add_co_u32 v170, vcc_lo, s10, v170
v_add_co_ci_u32_e32 v171, vcc_lo, s11, v171, vcc_lo
global_load_b32 v168, v[170:171], off
v_add_nc_u32_e32 v170, s4, v169
v_ashrrev_i32_e32 v171, 31, v170
v_lshlrev_b64 v[171:172], 2, v[170:171]
v_add_co_u32 v171, vcc_lo, s10, v171
v_add_co_ci_u32_e32 v172, vcc_lo, s11, v172, vcc_lo
global_load_b32 v169, v[171:172], off
Here s[10:11] holds the address of matrix B. For each global_load_b32, the compiler computes the read offset using VGPRs from the previous iteration (v170 and v171 here). This is not ideal for a couple of reasons:
So ideally, we would like this 128 line section of code to be just 16 lines:
global_load_b32 v169, v[171:172], off
global_load_b32 v170, v[173:174], off
global_load_b32 v171, v[175:176], off
....
However, this would require us to maintain additional VGPRs and potentially use VALU instructions to update the memory addresses as well. Given that we are already using 214 VGPRs, this is clearly not feasible.
That said, we still have a fairly good SGPR budget, and according to the RDNA3 programming guide, global_load instructions can use SGPR for base addressing.
global_load_b32 v171, v214, s[10:11]
v214 is now an offset in bytes and s[10:11] a 64-bit address in memory.
So we could pre-compute all the needed base addresses for the 16 loads once and just increment the offset once per loop iteration. That would require an additional 16*2 SGPRs and 2 VGPRs to handle the offsets.
After inspecting the ISA, it looks like:
That’s all we need to compute the needed base addresses.
First, we load the first 128 bits (16 bytes) of the kernel arguments to get the matrix A and B addresses into s[20:21] and s[22:23]:
s_load_b128 s[20:23], s[0:1], 0x0 ; Matrix A and B
s_waitcnt lgkmcnt(0)
For matrix B, we will save the base addresses with the pre-computed offsets in s[24:39]. If we go back to our C++ code, each offset is separated by strideReadB*N = BLOCK_SIZE / BN * N, that’s 4096x4 = 0x4000 bytes
s_add_u32 s24, s22, 0x0000
s_addc_u32 s25, s23, 0
s_add_u32 s26, s22, 0x4000
s_addc_u32 s27, s23, 0
s_add_u32 s28, s22, 0x8000
s_addc_u32 s29, s23, 0
s_add_u32 s30, s22, 0xc000
s_addc_u32 s31, s23, 0
s_add_u32 s32, s22, 0x10000
s_addc_u32 s33, s23, 0
s_add_u32 s34, s22, 0x14000
s_addc_u32 s35, s23, 0
s_add_u32 s36, s22, 0x18000
s_addc_u32 s37, s23, 0
s_add_u32 s38, s22, 0x1c000
s_addc_u32 s39, s23, 0
And to compute the index in bytes, we can do:
; compute Matrix B offset
s_lshl_b32 s19, s14, 7 ; BN * blockIdx.x
v_add_nc_u32_e32 v203, s19, v0 ; index = BN * blockIdx.x + threadIdx.x
v_lshlrev_b32_e32 v203,2, v203 ; offset = 4*index (to bytes offset)
We apply the same logic for matrix A using s[40:55] for the base addresses and v215 for the offset.
s_add_u32 s40, s20, 0x0000
s_addc_u32 s41, s21, 0
s_add_u32 s42, s20, 0x40000
s_addc_u32 s43, s21, 0
s_add_u32 s44, s20, 0x80000
s_addc_u32 s45, s21, 0
s_add_u32 s46, s20, 0xc0000
s_addc_u32 s47, s21, 0
s_add_u32 s48, s20, 0x100000
s_addc_u32 s49, s21, 0
s_add_u32 s50, s20, 0x140000
s_addc_u32 s51, s21, 0
s_add_u32 s52, s20, 0x180000
s_addc_u32 s53, s21, 0
s_add_u32 s54, s20, 0x1c0000
s_addc_u32 s55, s21, 0
; compute Matrix A offset
s_lshl_b32 s19, s15, 19 ; 4096 * 128 * blockIdx.y
v_lshrrev_b32_e32 v1, 3, v0 ; threadIdx.x / 8
v_lshlrev_b32_e32 v1, 12, v1 ; 4096 * (threadIdx.x/8)
v_and_b32_e32 v215, 7, v0 ; threadIdx.x % 8
v_add_u32_e32 v215, v1, v215 ; index = 4096*(threadIdx.x/8) + threadIdx.x % 8
v_add_nc_u32_e32 v215, s19, v215 ; index += 4096*128*blockIdx.y
v_lshlrev_b32_e32 v215,2, v215 ; offset = 4*index
Now, in our main loop, we can replace the 128 lines of code with this:
v_add_nc_u32_e32 v203, 0x20000, v203
v_add_nc_u32_e32 v215, 0x20, v215
global_load_b32 v167, v203, s[24:25]
global_load_b32 v168, v203, s[26:27]
global_load_b32 v169, v203, s[28:29]
global_load_b32 v170, v203, s[30:31]
global_load_b32 v171, v203, s[32:33]
global_load_b32 v172, v203, s[34:35]
global_load_b32 v173, v203, s[36:37]
global_load_b32 v174, v203, s[38:39]
global_load_b32 v175, v215, s[40:41]
global_load_b32 v176, v215, s[42:43]
global_load_b32 v177, v215, s[44:45]
global_load_b32 v178, v215, s[46:47]
global_load_b32 v179, v215, s[48:49]
global_load_b32 v180, v215, s[50:51]
global_load_b32 v181, v215, s[52:53]
global_load_b32 v182, v215, s[54:55]
Our modified kernel now uses 55 SGPRs instead of 18 and 216 VGPRs instead of 214.
If we take another RGP capture, we can see that this is much better, with less than 2 million clock cycles of latency for the entire process now.
Figure 41 : Simplified GMEMs
After some experimentation, I found that spreading these 16 loads across the inner loop was more efficient.
Our kernel currently executes six wavefronts per SIMD. Since our workgroup consists of 128 threads (4 waves), every time we execute a syncthreads, at least 2 of the 6 waves on the SIMD will compete for GMEM access. Additionally, if any of the remaining 4 waves happen to be in the same state, even more waves could be contending for memory access.
Figure 42 : at least 2 waves requesting GMEM at the same time
By splitting these loads into chunks of 2, we reduce the likelihood of overlap between waves, as shown in the following diagram:
Figure 43 : splitting GMEM instructions in chunks of 2
Performance for this kernel is 2.80 ms (49047 GFLOPS/s). That's now 60% faster than our reference rocBLAS version and almost 50 times faster than our naive approach!
Full kernel source code can be found here
Figure 44 : performance results in GFLOPS/s
Conclusion
This has been an exciting journey. What started as a simple experiment to try out HIP on Windows turned into a deep dive into the hardware details of RDNA3. My biggest inspiration for this blog was Simon Boehm's technical post [9] on matrix multiplication in CUDA—an incredibly well-written piece that clearly influenced Kernel 3.
HIP tooling on Windows is quite limited. For instance, RGP does not display bank conflicts by default. However, with enough practice, it becomes possible to analyze most performance bottlenecks using the instruction timing view.
Even though the performance results are impressive—outperforming rocBLAS by 60%—this code is clearly not scalable in its current state. Furthermore, performing custom ISA optimizations makes these changes RDNA3-specific, limiting portability. As the codebase grows, modifications become increasingly difficult to implement.
That being said, the goal of this personal project was to push performance to the limit without worrying about maintainability or flexibility. While matrix multiplication can be implemented in just a few lines of code, writing an optimized implementation is incredibly challenging. We achieved a 50x speedup between the naive kernel and our best kernel, which, in my experience, would not have been possible using only HIP C++. This highlights the value of projects like OpenAI’s Triton, which I find particularly interesting and worth exploring in the future.
Although reaching almost 50 TFLOP/s is a solid achievement, we are still not fully VALU-bound, meaning there’s likely more performance left on the table. One technique I haven’t tested yet is LDS double buffering, which could eliminate one of the barriers and potentially improve the distribution of LDS instructions across the SIMD.
Finally, I want to thank Francois Guthmann for our brainstorming session on LDS optimization, which inspired the approach used in Kernel 4.
This project has been both fun and insightful, and I look forward to investigating further optimizations in the future.
All the code for the 8 kernels can be found in this GitHub repository.
Changelog
18/02/2024 : Updated Figures 28 and 29. Added missing SRC2 used by destination operand for FMAC_F32. Thanks to Aditya Atluri for pointing this out.
References
1. RDNA3 Instruction Set Architecture
2. I used ROCm 6.2.4, which was the latest version available on Windows 11 at the time I wrote this (link).
3. Radeon GPU Profiler is the recommended profiler on Windows.
4. I didn't spend much time analysing how rocBLAS works, but after exploring the rocBLAS repo, it looks like rocBLAS uses a project called Tensile to generate highly optimized GEMM code for AMD GPUs.
5. ROCm performance guidelines
6. There is a great blog post by Francois Guthmann (here) that goes into these details.
7. According to the RDNA3 ISA programming guide, scratch memory is similar to global memory, but instructions access a private (per-thread) memory space.
8. To enable CU mode, just add the -mcumode option when building with hipcc.
9. How to Optimize a CUDA Matmul Kernel for cuBLAS-like Performance: a Worklog
seb-v
Insights into GPU Performance, Optimization, and Beyond
|
Akshay Subramaniam
[email protected]
WHAT THE PROFILER IS TELLING YOU:
OPTIMIZING GPU KERNELS
BEFORE YOU START
The five steps to enlightenment:
1. Know your hardware
   • What are the target machines, how many nodes? Machine-specific optimizations okay?
2. Know your tools
   • Strengths and weaknesses of each tool? Learn how to use them (and learn one well!)
3. Know your application
   • What does it compute? How is it parallelized? What final performance is expected?
4. Know your process
   • Performance optimization is a constant learning process
5. Make it so!
THE APOD CYCLE
1. Assess
   • Identify Performance Limiter
   • Analyze Profile
   • Find Indicators
2. Parallelize
3. Optimize
3b. Build Knowledge
4. Deploy and Test
GUIDING OPTIMIZATION EFFORT
• Challenge: How to know where to start?
• Top-down approach:
  • Find the hotspot kernel
  • Identify the performance limiter of the hotspot
  • Find performance bottleneck indicators related to the limiter
  • Identify associated regions in the source code
  • Come up with a strategy to fix and change the code
  • Start again
"Drilling Down into the Metrics"
KNOW YOUR HARDWARE:
VOLTA ARCHITECTURE
VOLTA V100 FEATURES
• Volta Architecture: most productive GPU
• Tensor Cores: 120 programmable TFLOPS for deep learning
• Improved SIMT model: new algorithms
• Volta MPS: improved inference utilization
• Improved NVLink & HBM2: efficient bandwidth
GPU COMPARISON
                              P100 (SXM2)      V100 (SXM2)
Double/Single/Half TFlop/s    5.3/10.6/21.2    7.8/15.7/125 (Tensor Cores)
Memory Bandwidth (GB/s)       732              900
Memory Size                   16 GB            16 GB
L2 Cache Size                 4096 KB          6144 KB
Base/Boost Clock (MHz)        1328/1480        1312/1530
TDP (Watts)                   300              300
VOLTA SM
                  GV100     GP100
FP32 Cores        64        64
INT32 Cores       64        0
FP64 Cores        32        32
Register File     256 KB    256 KB
Active Threads    2048      2048
Active Blocks     32        32
Same active threads/warps/blocks on the SM and the same amount of registers: expect similar occupancy, if not limited by shared memory.
IMPROVED L1 CACHE
Pascal SM: 64 KB shared memory plus a separate 24 KB L1$, backed by a 4 MB L2$.
Volta SM: a unified 128 KB L1$ and shared memory (low latency, streaming), backed by a 6 MB L2$.
INSTRUCTION LATENCY
Dependent instruction issue latency for core FMA operations:
Volta: 4 clock cycles
Pascal: 6 clock cycles
TENSOR CORE
Each Tensor Core performs 64 FMA mixed-precision operations per clock
Mixed precision multiplication and accumulation
TENSOR CORE
cuBLAS/cuDNN: set TENSOR_OP_MATH
CUDA: nvcuda::wmma API
Example to use tensor core
#include <mma.h>
using namespace nvcuda;
__global__ void wmma_ker(half *a, half *b, float *c) {
wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::col_major> a_frag;
wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;
wmma::fill_fragment(c_frag, 0.0f);
wmma::load_matrix_sync(a_frag, a, 16);
wmma::load_matrix_sync(b_frag, b, 16);
wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);
wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}
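The kernel above works on a single 16x16x16 tile, and the wmma operations are collective across a warp, so it must be launched with (a multiple of) 32 threads. A minimal host-side sketch of how it could be invoked (my addition, not part of the slides; it assumes the headers from the example above are included and omits error handling):
// Launch the 16x16x16 WMMA example with exactly one warp (32 threads).
half *d_a, *d_b;
float *d_c;
cudaMalloc(&d_a, 16 * 16 * sizeof(half));
cudaMalloc(&d_b, 16 * 16 * sizeof(half));
cudaMalloc(&d_c, 16 * 16 * sizeof(float));
// ... fill d_a and d_b on the device or copy them from the host ...
wmma_ker<<<1, 32>>>(d_a, d_b, d_c);  // wmma ops are warp-wide collectives
cudaDeviceSynchronize();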
KNOW YOUR TOOLS:
PROFILERS
PROFILING TOOLS
From NVIDIA
Volta, Turing, Ampere and future:
• NVIDIA Nsight Systems
• NVIDIA Nsight Compute
Older generations
• nvprof
• NVIDIA Visual Profiler (nvvp)
• Nsight Visual Studio Edition
Third Party
• TAU Performance System
• VampirTrace
• PAPI CUDA component
• HPC Toolkit
• (Tools using CUPTI)
Many options!
Without loss of generality, in this talk we will be showing Nsight Systems and Nsight Compute screenshots.
Nsight Systems
System-level analysis tool (think timeline): the timeline shows both GPU and CPU activity.
nsys profile -o profile_v4_2O ./build/bin/hpgmg-fv 7 8
Nsight Compute
Kernel analysis tool (think metrics).
nv-nsight-cu-cli -o profile_v4_2O --launch-count 1 ./build/bin/hpgmg-fv 7 8
KNOW YOUR APPLICATION:
HPGMG
HPGMG
High-Performance Geometric Multi-Grid, Hybrid Implementation
Fine levels are executed on throughput-optimized processors (GPU)
Coarse levels are executed on latency-optimized processors (CPU)
[Figure: HPGMG F-cycle. Fine levels (smoother & residual, restriction, interpolation) run on the GPU; below a threshold level, the coarse V-cycle levels down to the direct solve run on the CPU.]
http://crd.lbl.gov/departments/computer-science/PAR/research/hpgmg/
MULTI-GRID BOTTLENECK
Cost of operations
[Figure: kernel time as a fraction of total time and of per-level time, plotted per level. Most time is spent on the stencils; stencil cost scales with the volume of a level, while other kernels scale with its surface.]
MAKE IT SO: ITERATION 1
2ND ORDER 7-POINT STENCIL
IDENTIFY HOTSPOT
Identify the hotspot: smooth_kernel()

Kernel              Time       Speedup
Original Version    2.079ms    1.00x
IDENTIFY PERFORMANCE LIMITER
Compare the kernel's memory utilization against its compute utilization.
PERFORMANCE LIMITER CATEGORIES
Memory utilization vs compute utilization gives four possible combinations:
• Compute bound (high compute, low memory utilization)
• Bandwidth bound (low compute, high memory utilization)
• Latency bound (both low)
• Compute and bandwidth bound (both high)
BANDWIDTH BOUND ON V100
DRILLING DOWN: LATENCY ANALYSIS (V100)
The profiler warns about low occupancy: the kernel is limited by a block size of only 8x4 = 32 threads.
OCCUPANCY
Each SM has limited resources:
• max. 64K registers (32-bit) distributed between threads
• max. 48 KB of shared memory per block (96 KB per SM)
• max. 32 active blocks per SM
• Full occupancy: 2048 threads per SM (64 warps)
When a resource is used up, occupancy (and with it GPU utilization) is reduced.
(*) Values vary with Compute Capability
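As a quick worked example using the limits above: to keep all 2048 threads resident on an SM, each thread can use at most 65536 / 2048 = 32 registers. At 64 registers per thread only 1024 threads fit (50% occupancy), and at 128 registers per thread only 512 threads fit (25% occupancy). The same arithmetic applies to shared memory per block and to the active-block limit.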
LATENCY
GPUs cover latencies by having a lot of work in flight.
[Figure: with many warps (warp 0 to warp 9), whenever one warp waits on a long-latency operation another warp can issue, so the latency is fully covered. With only a few warps (warp 0 to warp 3) there are cycles in which no warp issues, and the latency is exposed.]
LATENCY AT HIGH OCCUPANCY
Many active warps, but with high-latency instructions.
[Figure: even with warps 0 to 9 active, if all of them are stalled on high-latency instructions there are cycles in which no warp issues, so latency is exposed despite the high occupancy.]
GLOBAL MEMORY
Basic optimization is the same: coalescing, alignment, SoA pattern.
Granularity is 32 bytes, i.e. 8 threads (at 4B each) access a contiguous 32-byte sector.
Latency: what occupancy do we need to saturate global load/store?
On one V100:
BW = 4096 bit * 877 MHz * 2 / 8 = 898 GB/s, ~1.23x of P100 (theoretical)
SM ratio: 80/56 = 1.43x of P100
HBM2 increases bandwidth from 732 GB/s to 900 GB/s
LOOKING FOR MORE INDICATORS
12 global load transactions per 1 request.
For source code association (line numbers), compile with: nvcc -lineinfo
MEMORY TRANSACTIONS: BEST CASE
A warp issues a 32 x 4B aligned and consecutive load/store request: threads read different elements of the same 128B segment.
• 1x 128B load/store request per warp
• 1x 128B L1 transaction per warp (128B needed / 128B transferred)
• 4x 32B L2 transactions per warp (128B needed / 128B transferred)
MEMORY TRANSACTIONS: WORST CASE
Threads in a warp read/write 4B words with 128B between words (stride 32 x 4B): each thread reads the first 4B of a different 128B segment.
• 1x 128B load/store request per warp
• 1x 128B L1 transaction per thread, i.e. 32x L1 transactions (128B needed / 32 x 128B transferred)
• 1x 32B L2 transaction per thread, i.e. 32x L2 transactions (128B needed / 32 x 32B transferred)
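As a rough illustration (my sketch, not from the slides), the following CUDA kernels contrast a coalesced access pattern with the strided worst case described above; the stride of 32 floats is an assumption chosen to reproduce the 32 x 4B example.
// Coalesced: consecutive threads read consecutive 4B words,
// so a warp touches a single 128B segment per request.
__global__ void copy_coalesced(const float* __restrict__ in,
                               float* __restrict__ out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];
}

// Strided (worst case): consecutive threads read words 128B apart,
// so a warp touches 32 different 128B segments per request.
__global__ void copy_strided(const float* __restrict__ in,
                             float* __restrict__ out, int n, int stride)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    long j = (long)i * stride;  // stride = 32 reproduces the 32 x 4B case
    if (j < n) out[i] = in[j];
}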
TRANSACTIONS AND REPLAYS
With replays, requests take more time and use more resources
More instructions issued
More memory traffic
Increased execution time
[Figure: replay timeline. Instructions 0, 1 and 2 are each issued once but replayed for thread groups 0-7/24-31, 8-15 and 16-23 before completing, while data for each instruction is transferred. The replays add extra latency, extra work on the SM and extra memory traffic.]
FIX: BETTER GPU TILING
Block size up from (8,4,1) to (32,4,1): memory utilization up (+10%), transactions per access down to 9.

Kernel                    Time       Speedup
Original Version          2.079ms    1.00x
Better Memory Accesses    1.756ms    1.18x
PERF-OPT QUICK REFERENCE CARD
Category: Latency Bound – Occupancy
Problem: Latency is exposed due to low occupancy
Goal: Hide latency behind more parallel work
Indicators: Occupancy low (< 60%), execution dependency high
Strategy: Increase occupancy by:
• Varying block size
• Varying shared memory usage
• Varying register count (use __launch_bounds)
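The last strategy above can be sketched as follows; the bounds (256 threads, 4 blocks) and the placeholder kernel are illustrative values of mine, not a recommendation from the slides. The compiler will then cap register usage so that at least 4 blocks of 256 threads can be resident per SM.
// Ask the compiler to keep register usage low enough that
// 4 blocks of 256 threads can be active on one SM at a time.
__global__ void __launch_bounds__(256, 4)
smooth_like_kernel(const double* __restrict__ in, double* __restrict__ out, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) out[i] = in[i];  // placeholder body
}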
PERF-OPT QUICK REFERENCE CARD
Category: Latency Bound – Coalescing
Problem: Memory is accessed inefficiently => high latency
Goal: Reduce #transactions/request to reduce latency
Indicators: Low global load/store efficiency, high #transactions/#request compared to ideal
Strategy: Improve memory coalescing by:
• Cooperative loading inside a block
• Changing the block layout
• Aligning data
• Changing the data layout to improve locality
PERF-OPT QUICK REFERENCE CARD
Category: Bandwidth Bound – Coalescing
Problem: Too much unused data clogging the memory system
Goal: Reduce traffic, move more useful data per request
Indicators: Low global load/store efficiency, high #transactions/#request compared to ideal
Strategy: Improve memory coalescing by:
• Cooperative loading inside a block
• Changing the block layout
• Aligning data
• Changing the data layout to improve locality
ITERATION 2: DATA MIGRATION
PAGE FAULTS
MEMORY MANAGEMENT
Using Unified Memory:
• No changes to data structures
• No explicit data movements
• Single pointer for CPU and GPU data
• Use cudaMallocManaged for allocations
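A minimal sketch of this pattern (my example, not HPGMG code): one pointer is allocated with cudaMallocManaged and then used by both host and device code, with migration handled by page faults.
#include <cuda_runtime.h>

__global__ void scale(double* x, double a, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) x[i] *= a;
}

int main()
{
    int n = 1 << 20;
    double* x = nullptr;
    cudaMallocManaged(&x, n * sizeof(double));  // single pointer for CPU and GPU
    for (int i = 0; i < n; ++i) x[i] = 1.0;     // touched on the CPU first
    scale<<<(n + 255) / 256, 256>>>(x, 2.0, n); // pages migrate to the GPU on demand
    cudaDeviceSynchronize();                    // required before the CPU reads x again
    double sum = 0.0;
    for (int i = 0; i < n; ++i) sum += x[i];    // pages migrate back on CPU access
    cudaFree(x);
    return sum == 2.0 * n ? 0 : 1;
}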
UNIFIED MEMORY
Eliminating page migrations and faults
Solution: allocate the first CPU level with cudaMallocHost (zero-copy memory).
[Figure: F-cycle with GPU levels above the threshold and CPU levels below; the page faults occur at the GPU/CPU boundary.]
PAGE FAULTS
Almost gone: a significant speedup for the affected kernel.
MEM ADVICE API
(Not used here.)
cudaMemPrefetchAsync(ptr, length, destDevice, stream)
• Migrate data to destDevice: overlap with compute
• Update page table: much lower overhead than a page fault in the kernel
• Async operation that follows CUDA stream semantics
cudaMemAdvise(ptr, length, advice, device)
• Specifies allocation and usage policy for a memory region
• User can set and unset at any time
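Although the slides note these APIs were not used here, a hedged sketch of how they could be applied to a managed allocation looks like this, reusing the pointer x, size n and the scale kernel from the previous sketch; device 0 and the default stream are assumptions.
int device = 0;
cudaGetDevice(&device);

// Hint that x should live on the GPU, where it will mostly be accessed.
cudaMemAdvise(x, n * sizeof(double), cudaMemAdviseSetPreferredLocation, device);

// Migrate the pages up front, overlapping with other work on the stream,
// instead of paying a per-page fault cost inside the kernel.
cudaMemPrefetchAsync(x, n * sizeof(double), device, 0);
scale<<<(n + 255) / 256, 256>>>(x, 2.0, n);

// Bring the data back to the host before CPU access.
cudaMemPrefetchAsync(x, n * sizeof(double), cudaCpuDeviceId, 0);
cudaDeviceSynchronize();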
CONCURRENCY THROUGH PIPELINING
Use CUDA streams to hide data transfers.
[Figure: the serial version runs cudaMemcpyAsync(H2D), Kernel<<<>>> and cudaMemcpyAsync(D2H) back to back. The concurrent version splits the work into chunks (K1-K4, DH1-DH4) so that kernels overlap with D2H copies, giving a performance improvement.]
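A hedged sketch of this pipelining pattern, splitting one large transfer-compute-transfer job into chunks on multiple streams. It assumes h_x was allocated with cudaMallocHost (pinned memory, needed for true async overlap), d_x with cudaMalloc, scale_f is some float kernel, and n divides evenly by the chunk count; all of these names are mine.
const int nStreams = 4;
cudaStream_t streams[nStreams];
for (int s = 0; s < nStreams; ++s) cudaStreamCreate(&streams[s]);

int chunk = n / nStreams;  // assume n divides evenly for simplicity
for (int s = 0; s < nStreams; ++s)
{
    int offset = s * chunk;
    // The H2D copy, kernel and D2H copy of chunk s are serialized within
    // their stream, but chunks in different streams overlap with each other.
    cudaMemcpyAsync(d_x + offset, h_x + offset, chunk * sizeof(float),
                    cudaMemcpyHostToDevice, streams[s]);
    scale_f<<<(chunk + 255) / 256, 256, 0, streams[s]>>>(d_x + offset, 2.0f, chunk);
    cudaMemcpyAsync(h_x + offset, d_x + offset, chunk * sizeof(float),
                    cudaMemcpyDeviceToHost, streams[s]);
}
cudaDeviceSynchronize();
for (int s = 0; s < nStreams; ++s) cudaStreamDestroy(streams[s]);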
ITERATION 3:
REGISTER OPTIMIZATION AND CACHING
LIMITER: STILL MEMORY BANDWIDTH
GPU MEMORY HIERARCHY (V100)
Bring reused data closer to the SMs:
• Registers (256 KB/SM): good for intra-thread data reuse
• Shared memory / L1$ (128 KB/SM): good for explicit intra-block data reuse
• L2$ (6144 KB): implicit data reuse
[Figure: each SM contains functional units, a register file, and shared memory/L1$; all SMs share the L2$ in front of global memory (framebuffer).]
CACHING IN REGISTERS
• No data loaded initially
• Load the first set of data
• Perform the stencil calculation
• Naively load the next set of data? Reusing already loaded data is better: keep what is still needed and load only the new values
• Repeat
Higher register usage may result in reduced occupancy => trade off (run experiments!)
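The slides illustrate this with the HPGMG smoother. As a much simpler hedged sketch of the same idea, here is a 1D 3-point stencil where each thread sweeps one line and keeps two of the three inputs in registers from the previous step; the kernel name and the launch assumptions (one thread per line, len >= 2, boundaries left untouched) are mine.
// Each thread sweeps one line of length len, reusing two of the three
// stencil inputs from registers on every step instead of reloading them.
__global__ void stencil_1d_reg(const double* __restrict__ in,
                               double* __restrict__ out, int len)
{
    int line = blockIdx.x * blockDim.x + threadIdx.x;
    const double* p = in  + (size_t)line * len;
    double*       q = out + (size_t)line * len;

    double left = p[0], center = p[1], right;   // initial loads
    for (int i = 1; i < len - 1; ++i)
    {
        right = p[i + 1];                       // only one new load per step
        q[i] = -2.0 * center + left + right;
        left = center;                          // keep already loaded data
        center = right;
    }
}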
THE EFFECT OF REGISTER CACHING
Transactions for cached loads are reduced by a factor of 8. Memory utilization is still high, but we are transferring less redundant data.

Kernel                    Time       Speedup
Original Version          2.079ms    1.00x
Better Memory Accesses    1.756ms    1.18x
Register Caching          1.486ms    1.40x
SHARED MEMORY
• Programmer-managed cache
• Great for caching data reused across threads in a CTA
• 128KB split between shared memory and L1 cache per SM
• Each block can use at most 96KB shared memory on GV100
• Search for cudaFuncAttributePreferredSharedMemoryCarveout in the docs
__global__ void sharedMemExample(float *d, float *stencil) {
    __shared__ float s[64];
    int t = threadIdx.x;
    s[t] = d[t];               // stage the input in shared memory
    __syncthreads();
    if (t > 0 && t < 63)       // interior points only
        stencil[t] = -2.0f*s[t] + s[t-1] + s[t+1];
}
PERF-OPT QUICK REFERENCE CARD
Category: Bandwidth Bound – Register Caching
Problem: Data is reused within threads and memory bandwidth utilization is high
Goal: Reduce the amount of data traffic to/from global memory
Indicators: High device memory usage, latency exposed; data reuse within threads and a small-ish working set; low arithmetic intensity of the kernel
Strategy:
• Assign registers to cache data
• Avoid storing and reloading data (possibly by assigning work to threads differently)
• Avoid register spilling
PERF-OPT QUICK REFERENCE CARD
Category: Latency Bound – Texture Cache
Problem: The Load/Store Unit becomes the bottleneck
Goal: Relieve the Load/Store Unit from read-only data
Indicators: High utilization of the Load/Store Unit, pipe-busy stall reason, significant amount of read-only data
Strategy: Load read-only data through the Texture Units:
• Annotate read-only pointers with const __restrict__
• Use the __ldg() intrinsic
PERF-OPT QUICK REFERENCE CARD
Category: Device Mem Bandwidth Bound – Shared Memory
Problem: Too much data movement
Goal: Reduce the amount of data traffic to/from global memory
Indicators: Higher than expected memory traffic to/from global memory; low arithmetic intensity of the kernel
Strategy: (Cooperatively) move data closer to the SM:
• Shared Memory
• (or Registers)
• (or Constant Memory)
• (or Texture Cache)
PERF-OPT QUICK REFERENCE CARD
Category: Shared Mem Bandwidth Bound – Shared Memory
Problem: Shared memory bandwidth bottleneck
Goal: Reduce the amount of data traffic to/from global memory
Indicators: Shared memory loads or stores saturate
Strategy:
• Reduce bank conflicts (insert padding)
• Move data from shared memory into registers
• Change the data layout in shared memory
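For the padding strategy above, the classic trick is to add one extra column to a 2D shared-memory tile so that threads of a warp walking down a column hit different banks. A hedged sketch (the 32x32 tile, a (32, 32) thread block and a width that is a multiple of 32 are assumptions of mine):
// 32 x 32 tile padded to 32 x 33: column accesses in the transpose step
// no longer map every thread of a warp to the same shared-memory bank.
__global__ void transpose_padded(const float* __restrict__ in,
                                 float* __restrict__ out, int width)
{
    __shared__ float tile[32][33];  // +1 column of padding avoids bank conflicts

    int x = blockIdx.x * 32 + threadIdx.x;
    int y = blockIdx.y * 32 + threadIdx.y;
    tile[threadIdx.y][threadIdx.x] = in[y * width + x];   // coalesced load
    __syncthreads();

    x = blockIdx.y * 32 + threadIdx.x;                    // swapped block indices
    y = blockIdx.x * 32 + threadIdx.y;
    out[y * width + x] = tile[threadIdx.x][threadIdx.y];  // coalesced store
}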
ITERATION 4:
KERNELS WITH INCREASED
ARITHMETIC INTENSITY
OPERATIONAL INTENSITY
• Operational intensity = arithmetic operations / bytes written and read
• Our stencil kernels have very low operational intensity
• It might be beneficial to use a different algorithm with higher operational intensity
• In this case this might be achieved by using higher order stencils
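As a rough worked example (my estimates, not from the slides): a 2nd order 7-point stencil performs on the order of 10 FP64 flops per output point, and if cache reuse works well it streams roughly one 8-byte read and one 8-byte write per point, giving an operational intensity of about 10 / 16 ≈ 0.6 flop/byte. Using the V100 numbers from earlier, the kernel would need roughly 7.8 TFLOP/s / 900 GB/s ≈ 8.7 flop/byte to become compute bound, which is why these kernels stay bandwidth bound and why higher-order stencils (more flops per byte moved) can help.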
ILP VS OCCUPANCY
• Earlier we looked at how occupancy helps hide latency by providing independent threads of execution.
• When our code requires many registers, occupancy will be limited, but we can still get instruction level parallelism inside the threads.
• Occupancy is helpful to achieving performance, but not always required.
• Some algorithms, such as matrix multiplications, allow increases in operational intensity by using more registers for local storage while simultaneously offering decent ILP. In these cases it might be beneficial to maximize ILP and operational intensity at the cost of occupancy.

Independent instructions:
a = b + c;
d = e + f;
Dependent instructions:
a = b + c;
d = a + f;
STALL REASONS:
EXECUTION DEPENDENCY
Memory accesses may influence execution dependencies
Global accesses create longer dependencies than shared accesses
Read-only/texture dependencies are counted in Texture
Instruction level parallelism can reduce dependencies
a = b + c; // ADD
d = a + e; // ADD
a = b[i]; // LOAD
d = a + e; // ADD
a = b + c; // Independent ADDs
d = e + f;
ILP AND MEMORY ACCESSES
#pragma unroll is useful to extract ILP
Manually rewrite the code if it is not a simple loop

No ILP:
float a = 0.0f;
for( int i = 0 ; i < N ; ++i )
    a += logf(b[i]);
// resulting dependent stream:
// c = b[0]; a += logf(c); c = b[1]; a += logf(c); c = b[2]; a += logf(c); c = b[3]; a += logf(c); ...

2-way ILP (with loop unrolling):
float a, a0 = 0.0f, a1 = 0.0f;
for( int i = 0 ; i < N ; i += 2 )
{
    a0 += logf(b[i]);
    a1 += logf(b[i+1]);
}
a = a0 + a1;
// resulting stream with two independent chains:
// c0 = b[0]; a0 += logf(c0); c0 = b[2]; a0 += logf(c0);
// c1 = b[1]; a1 += logf(c1); c1 = b[3]; a1 += logf(c1); a = a0 + a1; ...
PERF-OPT QUICK REFERENCE CARD
Category: Latency Bound – Instruction Level Parallelism
Problem: Not enough independent work per thread
Goal: Do more parallel work inside single threads
Indicators: High execution dependency, increasing occupancy has no/little positive effect, registers still available
Strategy:
• Unroll loops (#pragma unroll)
• Refactor threads to compute n output values at the same time (code duplication)
PERF-OPT QUICK REFERENCE CARD
Category: Compute Bound – Algorithmic Changes
Problem: The GPU is computing as fast as possible
Goal: Reduce computation if possible
Indicators: Clearly a compute bound problem, speedup only with less computation
Strategy:
• Pre-compute or store (intermediate) results
• Trade memory for compute time
• Use a computationally less expensive algorithm
• Possibly: run with low occupancy and high ILP
SUMMARY
1. Know your application
2. Know your hardware
3. Know your tools
4. Know your process
   • Identify the hotspot
   • Classify the performance limiter
   • Look for indicators
5. Make it so!
Performance optimization is a constant learning process.
nsys profile -o profile_v4_2O ./build/bin/hpgmg-fv 7 8

nv-nsight-cu-cli -o profile_v4_2O --kernel-regex ".*smooth_kernel*" --launch-count 1 ./build/bin/hpgmg-fv 7 8
GUIDING OPTIMIZATION EFFORT
• Challenge: How to know where to start?
• Top-down approach:
  • Find the hotspot kernel
  • Identify the performance limiter of the hotspot
  • Find performance bottleneck indicators related to the limiter
  • Identify associated regions in the source code
  • Come up with a strategy to fix and change the code
  • Start again
"Drilling Down into the Metrics"
Tools: Nsight Systems (timeline), Nsight Compute (kernel metrics)
REFERENCES
CUDA Documentation
Best Practices: http://docs.nvidia.com/cuda/cuda-c-best-practices-guide/
Volta Tuning Guide: http://docs.nvidia.com/cuda/volta-tuning-guide/
Ampere Tuning Guide: https://docs.nvidia.com/cuda/ampere-tuning-guide/
NVIDIA Developer Blog on HPGMG
https://devblogs.nvidia.com/high-performance-geometric-multi-grid-gpu-acceleration/
Nsight Tools
https://devblogs.nvidia.com/migrating-nvidia-nsight-tools-nvvp-nvprof/
https://devblogs.nvidia.com/transitioning-nsight-systems-nvidia-visual-profiler-nvprof/
https://devblogs.nvidia.com/using-nsight-compute-to-inspect-your-kernels/ |
DeMoriarty
Beep Boop
Modifying Custom Matmul CUDA Kernels
I started to learn CUDA last year, and started writing matrix multiplication kernels as a learning project. After some struggles, I made them work, but then got disappointed when I saw that my kernels were 10 times slower than cuBLAS GEMM kernels. Maybe my expectations were a bit too high. I've tried lots of open-sourced matmul kernels on GitHub, but the best one I found was still about 5 times slower (some of them were optimized for older architectures). So I started the journey of optimizing my own matmul kernel. After a few months of trial and error, my matmul kernel finally has comparable speed to cuBLAS GEMM.
Why would you need to write your own matmul kernel? The answer is that in most cases you don't need to, and also shouldn't. It simply isn't worth the effort when there are highly optimized cuBLAS kernels available.
There are two reasons why I did it:
In this post, I mainly want to talk about the second point: some modifications that can be done on matmul kernels and their applications.
Note: all of the kernel optimizations are targeted towards devices with compute capability 7.5 (Tesla T4, RTX 20 series), all experiments are done with a Tesla T4 GPU on Google Colab. Performance on other GPUs may not be optimal.
Note: all source code can be found in this repository.
Table of contents:
Batch Matrix Multiplication (BMM)
BMM is basically multiplying a batch of (M x K) matrices with a batch of (K x N) matrices, and get a batch of (M x N) matrices as a result. When batch size is equal to 1, it becomes a regular matrix multiplication. here is an example code in pytorch:
a = torch.randn(batch_size, M, K)
b = torch.randn(batch_size, K, N)
c = torch.bmm(a, b)
# c has shape [batch_size, M, N]
Here is the source code of my BMM kernel.
Here are some speed comparisons between my BMM kernel and torch.bmm which calls cuBLAS GEMM.
Square matrices: batch size = 1, varying M, N, K (M = N = K)
Runtime (ms):
Batch size = 128, K = 128, varying M, N (M = N)
Runtime (ms):
Fused Reduce Matmul
Sometimes we want to apply reduction right after a matrix multiplication:
c = torch.bmm(a, b)
max_c, argmax_c = torch.max(c, dim=1)
sum_c = torch.sum(c, dim=2)
argmin_c = torch.argmin(c, dim=1)
Normally there isn’t a problem with doing this, however, when I was implementing k-means clustering for TorchPQ, this became an issue. One of the major steps of k-means clustering algorithm is the computations of pairwise distance between all data points and all centroids (cluster centers), then getting the index of the closest cluster for each data point (argmin).
When the number of data points (n_data) and the number of clusters (n_clusters) are very large, this step will produce a huge (n_data x n_clusters) matrix that might not fit into the GPU memory (imagine a 1,000,000 x 10,000 fp32 matrix)
One workaround is to split this step into multiple tiny steps: in each step, only compute the distance between a subset of data points and the centroids (let's say 10,000 x 10,000), then assign each data point in the subset to the closest cluster, and repeat.
A better solution could be to fuse argmin into the matmul kernel. Advantages of doing this are:
I will not go into details in this post, if you are interested, you can read implementation details, and check the source code.
I compared the new MinBMM kernel with 2 things:
values, indices = torch.min( torch.bmm(a, b), dim = dim)
and
values, indices = torch.min( custom_bmm(a, b), dim = dim)
And here are the results:
batch_size = 1, K = 16, varying M, N (M = N)
Runtime (ms)
Peak Memory Usage (MB)
Runtime (ms)
Peak Memory Usage (MB)
batch_size = 1, K = 64, varying M, N (M = N)
Runtime (ms)
Peak Memory Usage (MB)
Runtime (ms)
Peak Memory Usage (MB)
batch_size = 1, K = 256, varying M, N (M = N)
Runtime (ms)
Peak Memory Usage (MB)
Runtime (ms)
Peak Memory Usage (MB)
batch_size = 64, K = 64, varying M, N (M = N)
Runtime (ms)
Peak Memory Usage (MB)
Runtime (ms)
Peak Memory Usage (MB)
From the benchmark results we can see that the fused MinBMM kernel is much faster than calling the min and bmm kernels separately, especially when K is small. The speedup seems to decay as K grows larger. Another advantage of MinBMM is that it requires much less memory than the other methods: the inputs can be (1,000,000 x K) matrices, and we still don't need to worry about running out of memory.
L2 distance
For L2 distance, all we need to change is a single line in thread_matmul:
cCache[m].val[n] = fmaf(a, b, cCache[m].val[n]);
to
float dif = a - b;
cCache[m].val[n] = fmaf(dif, dif, cCache[m].val[n]);
Now the BMM kernel will calculate squared L2 distance instead of dot product. We don’t calculate square root since it doesn’t affect the order of nearest neighbors.
Topk Search
Nearest Neighbor Search (NNS) and Maximum Inner Product Search (MIPS) are some of the most important components of information retrieval and data science. Sometimes it's required to search millions or billions of vectors with thousands of queries within seconds.
The naive way is to calculate pairwise distance matrix (or dot product) of dataset vectors and query vectors first, then apply topk search after. This is also called “bruteforce search”. Here is an example in pytorch:
n_data = 1_000_000
n_query = 1000
d_vector = 256
a = torch.randn(n_data, d_vector)
b = torch.randn(n_query, d_vector)
c = 2 * a @ b.T - a.pow(2).sum(dim=-1)[:, None] - b.pow(2).sum(dim=-1)[None] # negative squared L2 distance
# c = a @ b.T # inner product
topk_val, topk_idx = torch.topk(c, dim=0, k=128)
Doing this is OK at a smaller scale, but as n_data grows larger, bruteforce search gets more and more expensive. It also has the same problem as "matmul & reduce" that I talked about in the previous section: the pairwise distance (or dot product) matrix may not fit into GPU RAM.
There are some approximate nearest neighbor search algorithms that try to solve the performance problem, for example Hierarchical Navigable Small World (HNSW), Product Quantization (PQ) and Locality Sensitive Hashing (LSH). I've implemented some variations of the PQ algorithm earlier this year, you can take a look if interested.
In this post I will not be focusing on approximation methods; instead I will try to speed up the "bruteforce search" by fusing the topk search into the matmul kernel. Like in the MinBMM kernel, all that needs to be changed is the part where we store the calculated results to global memory (DRAM). In MinBMM, after the matrix multiplication, instead of writing a 128 x 128 block of the C matrix to DRAM, we first did an in-block reduction (128 x 128) -> (128), then a global reduction using atomic operations. Similarly, in the TopkBMM kernel, we will first perform an in-block bitonic sort along one of the two axes of the C matrix, and a global sort after. We also need a mutex (mutual exclusion) in order to avoid race conditions (multiple threads trying to write to the same global memory location). The source code can be found here.
I compared it to
torch.topk(torch.bmm(a, b), k=128)
and here are the results:
A is a (n_data x d_vector) matrix, B is a (d_vector x n_query) matrix , n_data is the number of data points, n_query is the number of query vectors, d_vector is the vector dimensionality.
n_query = 1024, d_vector = 64, varying n_data:
Runtime (ms)
Peak Memory Usage (MB)
Runtime (ms)
Peak Memory Usage (MB)
n_query = 1024, d_vector = 256, varying n_data:
Runtime (ms)
Maximum Memory Footprint (MB)
Runtime (ms)
Maximum Memory Footprint (MB)
n_query = 1024, d_vector = 1024, varying n_data:
Runtime (ms)
Peak Memory Usage (MB)
Runtime (ms)
Peak Memory Usage (MB)
We can see that the TopkBMM kernel is extremely fast! It's up to 10x faster than the naive "bruteforce search" when n_data is close to a million and d_vector = 64. To be honest, I totally wasn't expecting this. And this is not the end! We can further improve its speed by switching to half precision (float16 or bfloat16), and it's also possible to use the Tensor Cores of more recent NVIDIA GPUs to accelerate the matrix multiplication. But that's for the future, I still haven't figured out how to use WMMA with CuPy.
One limitation is that TopkBMM can search up to 128 nearest neighbors at most. In order to search for more than 128 neighbors, we can do this iteratively. For example, first search the top 128 neighbors, then 128 ~ 256, then 256 ~ 384, and so on.
UPDATE
The TopkBMM kernel that I originally wrote was a prototype that only works under specific assumptions, such as:
I later wrote a more complete version of TopkBMM that works for both C order and Fortran order input matrices, supports any k value up to 128, and supports topk on both the first and second dimensions of c. However, when benchmarking the TopkBMM kernel with dim == 1, I found out that torch.topk was performing much faster than before. It turns out that torch.topk is faster on the contiguous dimension compared to other, non-contiguous dimensions.
c = torch.randn(M, N)
topk_value, topk_index = c.topk(dim=1, k=k) # this is fast
topk_value, topk_index = c.topk(dim=0, k=k) # this is a lot slower
This is why, when I benchmarked my original TopkBMM kernel, it was up to 10 times faster than torch.topk(torch.bmm(a, b)): it's not because my kernel was that much faster, it's because torch.topk was not used correctly. There is a trick to speed up torch.topk when it is performed on a non-contiguous dimension:
\((A \cdot B)^T = B^T \cdot A^T\)
c = torch.bmm(b.transpose(-1, -2), a.transpose(-1, -2))
topk_value, topk_index = c.topk(dim=-1, k=k)
TODO: add new benchmark results.
Masked BMM
Masked BMM means “hiding” some portion of the output of BMM using a binary mask. One use case of this operation is the self-attention layer of GPT style language models. After computing dot product of queries and keys, we fill the upper triangular half of the output matrix with -∞. Here is how it’s usually done in pytorch:
# shape of keys: [batch_size, M, K]
# shape of queries: [batch_size, K, N]
# shape of mask: [M, N]
dot = torch.bmm(keys, queries)
dot.masked_fill_(mask = mask, value = float("-inf") )
Mask pattern
If we could fuse masked_fill into the matmul kernel, we could not only get rid of the masked_fill computation, but the BMM kernel itself could take advantage of the sparsity given by the mask (by not computing the masked part), reducing computation even further.
The matmul kernel splits the output matrix into a grid of 128 x 128 submatrices, each submatrix is assigned to a thread block. Each thread block consists of 256 threads, and each thread computes an 8 x 8 block of the 128 x 128 submatrix.
First we need to do some preprocessing. We need 3 different levels of mask: block mask, thread mask and element mask. Block mask indicates which thread blocks should be ignored, and the same goes for thread mask and element mask.
M, N = mask.shape
assert M % 128 == 0
assert N % 128 == 0
element_mask = mask
thread_mask = mask.view(M//8, 8, N//8, 8)
thread_mask = thread_mask.sum(dim=(1, 3))
block_mask = mask.view(M//128, 128, N//128, 128)
block_mask = block_mask.sum(dim=(1, 3))
For simplicity, here we assume both M and N are divisible by 128.
We input all 3 masks to the MBMM kernel:
extern "C"
__global__ void mbmm_nn(
const float* __restrict__ A,
const float* __restrict__ B,
float* __restrict__ C,
const uint8_t* __restrict__ BlockMask, //
const uint8_t* __restrict__ ThreadMask, //
const uint8_t* __restrict__ ElementMask, //
int M, int N, int K
){
...
}
At the very start of the kernel, all threads within a thread block will read the block mask value, if it is 0, all threads within that thread block will exit. (No memory store/loads, no arithmetic ops.)
int bN = (N + 128 - 1) / 128;
uint8_t block_mask = BlockMask[(blockIdx.y)*bN + (blockIdx.x)];
if (block_mask == 0){
return;
}
Each thread in the grid also reads a unique thread mask value:
int vx = threadIdx.x % 16;
int vy = threadIdx.x / 16;
uint8_t thread_mask = ThreadMask[(blockIdx.y*16 + vy)*tN + (blockIdx.x*16 + vx) ];
If it’s 0, then that thread skips thread_matmul.
// main loop
for (int i=0; i<nItr; i++)
{
...
if (thread_mask != 0){
thread_matmul(aSM, bSM, cCache, vx, vy);
}
__syncthreads();
}
However, all threads still participate in loading matrix A and B from global memory.
At the end of the kernel, before storing cached results to C, each thread will check its thread mask value once again, if it’s between 0 and 64, that thread will mask the cached result using element mask.
if (thread_mask > 0 && thread_mask < 64){
mask_cCache(cCache, ElementMask, gStartx, gStarty, vx, vy, bid, M, N);
}
write_c(cCache, C, gStartx, gStarty, vx, vy, bid, M, N);
This is kind of similar to the idea of Block Sparse. The difference is that instead of designing a new kernel, we're taking advantage of the grid / thread block / thread hierarchy of the BMM kernel to introduce sparsity. This works very well when there are lots of (128 x 128) masked blocks in our mask.
Now let’s see how the new MBMM kernel performs compared to the original BMM + masked_fill and torch.bmm (cuBLAS) + masked_fill:
1. batch size = 128, M = N = 1024, varying K
Runtime (ms):
2. batch size = 128, K = 64, varying M, N (M = N)
Runtime (ms):
3. M = N = 1024, K = 128, varying batch size
Runtime (ms):
We can see that, the new MBMM kernel has roughly 2 times the speed of the original BMM kernel + masked_fill, and 1.2 ~ 1.4 times the speed of torch.bmm + masked_fill.
Selective BMM
to be continued…
|
CUDA Matrix Multiplication Optimization
Introduction
General matrix multiplication (GEMM) is a fundamental operation in linear algebra. It is also a very important operation in many scientific computing applications, such as machine learning and deep learning.
In this article, we will discuss how to optimize the performance of FP32 GEMM on NVIDIA GPUs using CUDA and how to extend the FP32 GEMM optimizations to FP16 GEMM using NVIDIA Tensor Cores.
General Matrix Multiplication
GEMM operation computes $D = AB + C$, where $D \in \mathbb{R}^{m \times n}$, $A \in \mathbb{R}^{m \times k}$, $B \in \mathbb{R}^{k \times n}$, $C \in \mathbb{R}^{m \times n}$. In computer programs, usually $A$ and $B$ are constant input matrices and $C$ will be overwritten by the output matrix $D$.
In our implementations, we assume all the matrices, $A$, $B$, $C$ and $D$, are stored in the row-major order on memory with the leading dimension padded to 64 bytes for FP32 matrices and 32 bytes for FP16 matrices.
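For example, a hedged helper for computing such a padded leading dimension might look like the following; the function name and the byte-alignment parameter are mine, not from the article.
// Round the number of elements per row up so that each row starts on a
// `byte_alignment`-byte boundary (e.g. 64 for FP32 and 32 for FP16 here).
template <typename T>
size_t padded_leading_dimension(size_t num_cols, size_t byte_alignment)
{
    size_t const row_bytes{num_cols * sizeof(T)};
    size_t const padded_row_bytes{(row_bytes + byte_alignment - 1U) /
                                  byte_alignment * byte_alignment};
    return padded_row_bytes / sizeof(T);
}

// Example: an FP32 matrix with 1000 columns gets lda = 1008
// (1008 * 4 bytes = 4032 bytes, a multiple of 64).
// size_t const lda{padded_leading_dimension<float>(1000U, 64U)};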
Naive Implementation with Non-Coalesced Memory Access
The naive implementation is to use 2D blocks, where each thread is responsible for computing one element of the output matrix. Concretely, for each thread with global thread index $(t_m, t_n)$, where $t_m \in [1, m]$ and $t_n \in [1, n]$, it computes $D_{t_m, t_n} = \sum_{t_k=1}^{k} A_{t_m, t_k} B_{t_k, t_n} + C_{t_m, t_n}$.
The following code snippet shows the naive implementation.
template <typename T>
__global__ void gemm_v00(size_t m, size_t n, size_t k, T alpha, T const* A,
                         size_t lda, T const* B, size_t ldb, T beta, T* C,
                         size_t ldc)
{
    // Compute the row and column of C that this thread is responsible for.
    size_t const C_row_idx{blockIdx.x * blockDim.x + threadIdx.x};
    size_t const C_col_idx{blockIdx.y * blockDim.y + threadIdx.y};

    // Each thread computes
    // C[C_row_idx, C_col_idx] = alpha * A[C_row_idx, :] * B[:, C_col_idx] +
    //                           beta * C[C_row_idx, C_col_idx].
    if (C_row_idx < m && C_col_idx < n)
    {
        T sum{static_cast<T>(0)};
        for (size_t k_idx{0U}; k_idx < k; ++k_idx)
        {
            sum += A[C_row_idx * lda + k_idx] * B[k_idx * ldb + C_col_idx];
        }
        C[C_row_idx * ldc + C_col_idx] =
            alpha * sum + beta * C[C_row_idx * ldc + C_col_idx];
    }
}

template <typename T>
void launch_gemm_kernel_v00(size_t m, size_t n, size_t k, T const* alpha,
                            T const* A, size_t lda, T const* B, size_t ldb,
                            T const* beta, T* C, size_t ldc,
                            cudaStream_t stream)
{
    dim3 const block_dim{32U, 32U, 1U};
    dim3 const grid_dim{
        (static_cast<unsigned int>(m) + block_dim.x - 1U) / block_dim.x,
        (static_cast<unsigned int>(n) + block_dim.y - 1U) / block_dim.y, 1U};
    gemm_v00<T><<<grid_dim, block_dim, 0U, stream>>>(m, n, k, *alpha, A, lda,
                                                     B, ldb, *beta, C, ldc);
    CHECK_LAST_CUDA_ERROR();
}
In addition to other drawbacks from the naive algorithm, however, there is a major problem with this implementation, which is the non-coalesced memory access for both reading and writing the global memory. In our implementation specifically, because of the oversight that the fast thread index is used for indexing the row of $A$ and $C$, the threads in the same warp read the elements from the same column of $A$ that is stored in row-major order on memory, resulting in a non-coalesced memory access as the reads are completely non-consecutive. The same problem also happens when the warp overwrites the elements of $C$. The threads in the same warp read the same element of $B$, resulting in a broadcast memory access which is not affected by the oversight.
The performance of this FP32 GEMM implementation is only 0.27 TFLOPS on an NVIDIA GeForce RTX 3090 GPU, which is very poor.
Naive Implementation with Coalesced Memory Access
The fix to the non-coalesced memory access is to use the fast thread index for indexing the row of matrices that are stored in row-major order on memory instead so that the threads in the same warp read or overwrite the elements from the same row of the matrices are coalesced. In our implementation, we just need to swap the fast thread index and the slow thread index in the kernel function.
The following code snippet shows the naive implementation with coalesced memory access.
template <typename T>
__global__ void gemm_v01(size_t m, size_t n, size_t k, T alpha, T const* A,
                         size_t lda, T const* B, size_t ldb, T beta, T* C,
                         size_t ldc)
{
    // Compute the row and column of C that this thread is responsible for.
    size_t const C_col_idx{blockIdx.x * blockDim.x + threadIdx.x};
    size_t const C_row_idx{blockIdx.y * blockDim.y + threadIdx.y};

    // Each thread computes
    // C[C_row_idx, C_col_idx] = alpha * A[C_row_idx, :] * B[:, C_col_idx] +
    //                           beta * C[C_row_idx, C_col_idx].
    if (C_row_idx < m && C_col_idx < n)
    {
        T sum{static_cast<T>(0)};
        for (size_t k_idx{0U}; k_idx < k; ++k_idx)
        {
            sum += A[C_row_idx * lda + k_idx] * B[k_idx * ldb + C_col_idx];
        }
        C[C_row_idx * ldc + C_col_idx] =
            alpha * sum + beta * C[C_row_idx * ldc + C_col_idx];
    }
}

template <typename T>
void launch_gemm_kernel_v01(size_t m, size_t n, size_t k, T const* alpha,
                            T const* A, size_t lda, T const* B, size_t ldb,
                            T const* beta, T* C, size_t ldc,
                            cudaStream_t stream)
{
    dim3 const block_dim{32U, 32U, 1U};
    dim3 const grid_dim{
        (static_cast<unsigned int>(n) + block_dim.x - 1U) / block_dim.x,
        (static_cast<unsigned int>(m) + block_dim.y - 1U) / block_dim.y, 1U};
    gemm_v01<T><<<grid_dim, block_dim, 0U, stream>>>(m, n, k, *alpha, A, lda,
                                                     B, ldb, *beta, C, ldc);
    CHECK_LAST_CUDA_ERROR();
}
Now, because of the fix, the threads in the same warp read the elements from the same row of $B$ that is stored in row-major order on memory, resulting in a coalesced memory access. The same thing also happens when the warp overwrites the elements of $C$. The threads in the same warp read the same element of $A$, resulting in a broadcast memory access. Therefore, this implementation should perform much better than the one with non-coalesced memory access.
The performance of this FP32 GEMM implementation becomes 1.72 TFLOPS on an NVIDIA GeForce RTX 3090 GPU, which is much better than the previous implementation. However, considering the theoretical peak performance of the GPU is 35.58 TFLOPS, the performance of this implementation is still very poor.
Implementation with 2D Block Tiling
Because the previous implementation accesses the global memory frequently, the GEMM implementation becomes memory-bound. Because accessing the shared memory is much faster than accessing the global memory, to improve the performance, we can use the shared memory to cache the input matrices $A$ and $B$ for data reuse.
However, because the shared memory size is limited, we cannot cache the entire input matrices $A$ and $B$ in the shared memory. Instead, we can cache a 2D tile of $A$ and $B$ in the shared memory and use the 2D tile to compute a 2D tile of the output matrix $D$. Then, we can load the next 2D tile of $A$ and $B$ to the shared memory and compute the next 2D tile of $D$.
Mathematically, given a GEMM operation $D = AB + C$, where $D \in \mathbb{R}^{m \times n}$, $A \in \mathbb{R}^{m \times k}$, $B \in \mathbb{R}^{k \times n}$, $C \in \mathbb{R}^{m \times n}$, the matrices could be divided into smaller matrices.
$$A =\begin{bmatrix}A_{1,1}^{d_{bm} \times d_{bk}} & A_{1,2}^{d_{bm} \times d_{bk}} & \cdots & A_{1,k/d_{bk}}^{d_{bm} \times d_{bk}} \\A_{2,1}^{d_{bm} \times d_{bk}} & A_{2,2}^{d_{bm} \times d_{bk}} & \cdots & A_{2,k/d_{bk}}^{d_{bm} \times d_{bk}} \\\vdots & \vdots & \ddots & \vdots \\A_{m/d_{bm},1}^{d_{bm} \times d_{bk}} & A_{m/d_{bm},2}^{d_{bm} \times d_{bk}} & \cdots & A_{m/d_{bm},k/d_{bk}}^{d_{bm} \times d_{bk}} \\\end{bmatrix}$$
$$B =\begin{bmatrix}B_{1,1}^{d_{bk} \times d_{bn}} & B_{1,2}^{d_{bk} \times d_{bn}} & \cdots & B_{1,n/d_{bn}}^{d_{bk} \times d_{bn}} \\B_{2,1}^{d_{bk} \times d_{bn}} & B_{2,2}^{d_{bk} \times d_{bn}} & \cdots & B_{2,n/d_{bn}}^{d_{bk} \times d_{bn}} \\\vdots & \vdots & \ddots & \vdots \\B_{k/d_{bk},1}^{d_{bk} \times d_{bn}} & B_{k/d_{bk},2}^{d_{bk} \times d_{bn}} & \cdots & B_{k/d_{bk},n/d_{bn}}^{d_{bk} \times d_{bn}} \\\end{bmatrix}$$
$$C =\begin{bmatrix}C_{1,1}^{d_{bm} \times d_{bn}} & C_{1,2}^{d_{bm} \times d_{bn}} & \cdots & C_{1,n/d_{bn}}^{d_{bm} \times d_{bn}} \\C_{2,1}^{d_{bm} \times d_{bn}} & C_{2,2}^{d_{bm} \times d_{bn}} & \cdots & C_{2,n/d_{bn}}^{d_{bm} \times d_{bn}} \\\vdots & \vdots & \ddots & \vdots \\C_{m/d_{bm},1}^{d_{bm} \times d_{bn}} & C_{m/d_{bm},2}^{d_{bm} \times d_{bn}} & \cdots & C_{m/d_{bm},n/d_{bn}}^{d_{bm} \times d_{bn}} \\\end{bmatrix}$$
$$D =\begin{bmatrix}D_{1,1}^{d_{bm} \times d_{bn}} & D_{1,2}^{d_{bm} \times d_{bn}} & \cdots & D_{1,n/d_{bn}}^{d_{bm} \times d_{bn}} \\D_{2,1}^{d_{bm} \times d_{bn}} & D_{2,2}^{d_{bm} \times d_{bn}} & \cdots & D_{2,n/d_{bn}}^{d_{bm} \times d_{bn}} \\\vdots & \vdots & \ddots & \vdots \\D_{m/d_{bm},1}^{d_{bm} \times d_{bn}} & D_{m/d_{bm},2}^{d_{bm} \times d_{bn}} & \cdots & D_{m/d_{bm},n/d_{bn}}^{d_{bm} \times d_{bn}} \\\end{bmatrix}$$
Each small matrix in $D$ is computed as multiple small matrix multiplications and accumulations.
$$D_{b_m,b_n}^{d_{bm} \times d_{bn}} = \sum_{b_k=1}^{k/d_{bk}} A_{b_m,b_k}^{d_{bm} \times d_{bk}} B_{b_k,b_n}^{d_{bk} \times d_{bn}} + C_{b_m,b_n}^{d_{bm} \times d_{bn}}$$
In this implementation, each 2D block with block index $(b_m, b_n)$, where $b_m \in [1, m/d_{bm}]$ and $b_n \in [1, n/d_{bn}]$, is responsible for computing one small matrix $D_{b_m,b_n}^{d_{bm} \times d_{bn}}$. The shared memory is used to cache a 2D tile of $A$ and $B$ with size $d_{bm} \times d_{bk}$ and $d_{bk} \times d_{bn}$, respectively. The 2D tile of $A$ is indexed by $(b_m, b_k)$, where $b_m \in [1, m/d_{bm}]$ and $b_k \in [1, k/d_{bk}]$. The 2D tile of $B$ is indexed by $(b_k, b_n)$, where $b_k \in [1, k/d_{bk}]$ and $b_n \in [1, n/d_{bn}]$. The cache and small matrix multiplication compute process is repeated for $k/d_{bk}$ times until the entire small matrix $D_{b_m,b_n}^{d_{bm} \times d_{bn}}$ is accumulated.
Similar to the previous implementations, each block requires $d_{bm} \times d_{bn}$ threads to compute the small matrix $D_{b_m, b_n}^{d_{bm} \times d_{bn}}$ and each thread with block thread index $(t_m, t_n)$, where $t_m \in [1, d_{bm}]$ and $t_n \in [1, d_{bn}]$, is responsible for computing one element of the small matrix.
$$\begin{aligned}\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{t_m,t_n}&= \left( \sum_{b_k=1}^{k/d_{bk}} A_{b_m,b_k}^{d_{bm} \times d_{bk}} B_{b_k,b_n}^{d_{bk} \times d_{bn}} + C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{t_m,t_n} \\&= \sum_{b_k=1}^{k/d_{bk}} \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{t_m,t_n} + \left( C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{t_m,t_n} \\&= \sum_{b_k=1}^{k/d_{bk}} \left( \sum_{t_k=1}^{d_{bk}} \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{t_m,t_k} \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{t_k,t_n} \right) + \left( C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{t_m,t_n} \\\end{aligned}$$
The following code snippet shows the implementation with 2D block tiling.
template <typename T, size_t BLOCK_TILE_SIZE_X, size_t BLOCK_TILE_SIZE_Y, size_t BLOCK_TILE_SIZE_K, size_t NUM_THREADS, size_t BLOCK_TILE_SKEW_SIZE_X = 0U, size_t BLOCK_TILE_SKEW_SIZE_K = 0U>__device__ void load_data_to_shared_memory(T const* A, size_t lda, T const* B, size_t ldb, T A_thread_block_tile[BLOCK_TILE_SIZE_Y][BLOCK_TILE_SIZE_K + BLOCK_TILE_SKEW_SIZE_K], T B_thread_block_tile[BLOCK_TILE_SIZE_K][BLOCK_TILE_SIZE_X + BLOCK_TILE_SKEW_SIZE_X], size_t thread_block_tile_idx, size_t thread_linear_idx, size_t m, size_t n, size_t k){ // Load data from A on DRAM to A_thread_block_tile on shared memory.#pragma unroll for (size_t load_idx{0U}; load_idx < (BLOCK_TILE_SIZE_Y * BLOCK_TILE_SIZE_K + NUM_THREADS - 1U) / NUM_THREADS; ++load_idx) { size_t const A_thread_block_tile_row_idx{ (thread_linear_idx + load_idx * NUM_THREADS) / BLOCK_TILE_SIZE_K}; size_t const A_thread_block_tile_col_idx{ (thread_linear_idx + load_idx * NUM_THREADS) % BLOCK_TILE_SIZE_K}; size_t const A_row_idx{blockIdx.y * BLOCK_TILE_SIZE_Y + A_thread_block_tile_row_idx}; size_t const A_col_idx{thread_block_tile_idx * BLOCK_TILE_SIZE_K + A_thread_block_tile_col_idx}; // These boundary checks might slow down the kernel to some extent. // But they guarantee the correctness of the kernel for all // different GEMM configurations. T val{static_cast<T>(0)}; if (A_row_idx < m && A_col_idx < k) { val = A[A_row_idx * lda + A_col_idx]; } // This if will slow down the kernel. // Add static asserts from the host code to guarantee this if is // always true. static_assert(BLOCK_TILE_SIZE_K * BLOCK_TILE_SIZE_Y % NUM_THREADS == 0U); // if (A_thread_block_tile_row_idx < BLOCK_TILE_SIZE_Y && // A_thread_block_tile_col_idx < BLOCK_TILE_SIZE_K) // { // A_thread_block_tile[A_thread_block_tile_row_idx] // [A_thread_block_tile_col_idx] = val; // } A_thread_block_tile[A_thread_block_tile_row_idx] [A_thread_block_tile_col_idx] = val; }// Load data from B on DRAM to B_thread_block_tile on shared memory.#pragma unroll for (size_t load_idx{0U}; load_idx < (BLOCK_TILE_SIZE_K * BLOCK_TILE_SIZE_X + NUM_THREADS - 1U) / NUM_THREADS; ++load_idx) { size_t const B_thread_block_tile_row_idx{ (thread_linear_idx + load_idx * NUM_THREADS) / BLOCK_TILE_SIZE_X}; size_t const B_thread_block_tile_col_idx{ (thread_linear_idx + load_idx * NUM_THREADS) % BLOCK_TILE_SIZE_X}; size_t const B_row_idx{thread_block_tile_idx * BLOCK_TILE_SIZE_K + B_thread_block_tile_row_idx}; size_t const B_col_idx{blockIdx.x * BLOCK_TILE_SIZE_X + B_thread_block_tile_col_idx}; // These boundary checks might slow down the kernel to some extent. // But they guarantee the correctness of the kernel for all // different GEMM configurations. T val{static_cast<T>(0)}; if (B_row_idx < k && B_col_idx < n) { val = B[B_row_idx * ldb + B_col_idx]; } // This if will slow down the kernel. // Add static asserts from the host code to guarantee this if is // always true. 
static_assert(BLOCK_TILE_SIZE_X * BLOCK_TILE_SIZE_K % NUM_THREADS == 0U); // if (B_thread_block_tile_row_idx < BLOCK_TILE_SIZE_K && // B_thread_block_tile_col_idx < BLOCK_TILE_SIZE_X) // { // B_thread_block_tile[B_thread_block_tile_row_idx] // [B_thread_block_tile_col_idx] = val; // } B_thread_block_tile[B_thread_block_tile_row_idx] [B_thread_block_tile_col_idx] = val; }}template <typename T, size_t BLOCK_TILE_SIZE_X, size_t BLOCK_TILE_SIZE_Y, size_t BLOCK_TILE_SIZE_K>__global__ void gemm_v02(size_t m, size_t n, size_t k, T alpha, T const* A, size_t lda, T const* B, size_t ldb, T beta, T* C, size_t ldc){ // Avoid using blockDim.x * blockDim.y as the number of threads per block. // Because it is a runtime constant and the compiler cannot optimize the // loop unrolling based on that. // Use a compile time constant instead. constexpr size_t NUM_THREADS{BLOCK_TILE_SIZE_X * BLOCK_TILE_SIZE_Y}; size_t const thread_linear_idx{threadIdx.y * blockDim.x + threadIdx.x}; // Compute the row and column of C that this thread is responsible for. size_t const C_col_idx{blockIdx.x * blockDim.x + threadIdx.x}; size_t const C_row_idx{blockIdx.y * blockDim.y + threadIdx.y}; // Cache a tile of A and B in shared memory for data reuse. __shared__ T A_thread_block_tile[BLOCK_TILE_SIZE_Y][BLOCK_TILE_SIZE_K]; __shared__ T B_thread_block_tile[BLOCK_TILE_SIZE_K][BLOCK_TILE_SIZE_X]; size_t const num_thread_block_tiles{(k + BLOCK_TILE_SIZE_K - 1) / BLOCK_TILE_SIZE_K}; T sum{static_cast<T>(0)}; for (size_t thread_block_tile_idx{0U}; thread_block_tile_idx < num_thread_block_tiles; ++thread_block_tile_idx) { load_data_to_shared_memory<T, BLOCK_TILE_SIZE_X, BLOCK_TILE_SIZE_Y, BLOCK_TILE_SIZE_K, NUM_THREADS>( A, lda, B, ldb, A_thread_block_tile, B_thread_block_tile, thread_block_tile_idx, thread_linear_idx, m, n, k); __syncthreads();#pragma unroll for (size_t k_i{0U}; k_i < BLOCK_TILE_SIZE_K; ++k_i) { // Doing this results in 2 TOPS. // Suppose blockDim.x = blockDim.y = 32. // Effectively, for a warp, in one iteration, we read the value from // A_thread_block_tile at the same location on the shared memory // resulting in a broadcast, we also read 32 values that have no // bank conflicts from B_thread_block_tile. Even with that, all the // values have to be read from the shared memory and consequence is // the shared memory instruction runs very intensively just to // compute a small number of values using simple arithmetic // instructions, which is not efficient. sum += A_thread_block_tile[threadIdx.y][k_i] * B_thread_block_tile[k_i][threadIdx.x]; } __syncthreads(); } if (C_row_idx < m && C_col_idx < n) { C[C_row_idx * ldc + C_col_idx] = alpha * sum + beta * C[C_row_idx * ldc + C_col_idx]; }}template <typename T>void launch_gemm_kernel_v02(size_t m, size_t n, size_t k, T const* alpha, T const* A, size_t lda, T const* B, size_t ldb, T const* beta, T* C, size_t ldc, cudaStream_t stream){ // Feel free to play with the block tile sizes. // The algorithm correctness should always be guaranteed. 
constexpr unsigned int BLOCK_TILE_SIZE_X{32U}; constexpr unsigned int BLOCK_TILE_SIZE_Y{32U}; constexpr unsigned int BLOCK_TILE_SIZE_K{32U}; constexpr unsigned int NUM_THREADS{BLOCK_TILE_SIZE_X * BLOCK_TILE_SIZE_Y}; static_assert(BLOCK_TILE_SIZE_K * BLOCK_TILE_SIZE_Y % NUM_THREADS == 0U); static_assert(BLOCK_TILE_SIZE_X * BLOCK_TILE_SIZE_K % NUM_THREADS == 0U); dim3 const block_dim{BLOCK_TILE_SIZE_X, BLOCK_TILE_SIZE_Y, 1U}; dim3 const grid_dim{ (static_cast<unsigned int>(n) + block_dim.x - 1U) / block_dim.x, (static_cast<unsigned int>(m) + block_dim.y - 1U) / block_dim.y, 1U}; gemm_v02<T, BLOCK_TILE_SIZE_X, BLOCK_TILE_SIZE_Y, BLOCK_TILE_SIZE_K> <<<grid_dim, block_dim, 0U, stream>>>(m, n, k, *alpha, A, lda, B, ldb, *beta, C, ldc); CHECK_LAST_CUDA_ERROR();}
The performance of this FP32 GEMM implementation becomes 2.66 TFLOPS on an NVIDIA GeForce RTX 3090 GPU, which is much better than the previous implementation. However, it is still far from the theoretical peak performance of the GPU.
The problem of this implementation is that the shared memory is accessed very frequently. Even if accessing the shared memory is much faster than accessing the global memory, the shared memory instruction runs very intensively just to compute a small number of values using simple arithmetic instructions, which is not efficient. Therefore, the performance of this implementation is still limited by the memory bandwidth, this time from the shared memory.
Implementation with 2D Block Tiling and 1D Thread Tiling
To further improve the performance, we can alleviate the shared memory bandwidth problem by further caching some even smaller tiles of the input matrices $A$ and $B$ from the shared memory to the registers of the threads. This time, each thread is responsible for computing a small tile of the output matrix $D$ instead of one single element. Because the registers are the fastest to access, the performance of this implementation should be much better than the previous one.
We start with only caching the data of matrix $B$ from the shared memory to the registers. Each thread with block thread index $(t_m, t_n)$, where $t_m \in [1, d_{bm} / d_{tm}]$ and $t_n \in [1, d_{bn}]$, is now responsible for computing $d_{tm}$ elements of the small matrix, where $d_{tm}$ is the thread tile size.
$$\begin{aligned}\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{t_m : t_m + d_{tm},t_n}&= \left( \sum_{b_k=1}^{k/d_{bk}} A_{b_m,b_k}^{d_{bm} \times d_{bk}} B_{b_k,b_n}^{d_{bk} \times d_{bn}} + C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{t_m : t_m + d_{tm},t_n} \\&= \sum_{b_k=1}^{k/d_{bk}} \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{t_m : t_m + d_{tm},t_n} + \left( C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{t_m : t_m + d_{tm},t_n} \\&= \sum_{b_k=1}^{k/d_{bk}} \left( \sum_{t_k=1}^{d_{bk}} \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{t_m : t_m + d_{tm},t_k} \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{t_k,t_n} \right) + \left( C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{t_m : t_m + d_{tm},t_n} \\\end{aligned}$$
In our previous implementation without thread tiling, to compute one element of the small matrix, we need to read $d_{bk}$ values from the cached matrix $A$ in the shared memory and $d_{bk}$ values from the cached matrix $B$ in the shared memory. In total, we need to read $2d_{bk}$ values from the shared memory.
Now, with 1D thread tiling, to compute $d_{tm}$ elements of the small matrix, we only need to read $d_{bk} \times d_{tm}$ values from the cached matrix $A$ in the shared memory and $d_{bk}$ values from the cached matrix $B$ in the shared memory. Specifically, in each inner loop, $\left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{t_k,t_n}$ is cached in the register to be reused for $d_{tm}$ times. In total, we need to read $d_{bk} \times d_{tm} + d_{bk}$ values from the shared memory. On average, to compute one element of the small matrix, we need to read $d_{bk} + d_{bk} / d_{tm}$ values from the shared memory.
Because $d_{bk} + d_{bk} / d_{tm} < 2d_{bk}$, the shared memory is accessed less frequently and the shared memory bandwidth problem is alleviated.
The following code snippet shows the implementation with 2D block tiling and 1D thread tiling.
template <typename T, size_t BLOCK_TILE_SIZE_X, size_t BLOCK_TILE_SIZE_Y,
          size_t BLOCK_TILE_SIZE_K, size_t THREAD_TILE_SIZE_Y>
__global__ void gemm_v03(size_t m, size_t n, size_t k, T alpha, T const* A,
                         size_t lda, T const* B, size_t ldb, T beta, T* C,
                         size_t ldc)
{
    // Avoid using blockDim.x * blockDim.y as the number of threads per block.
    // Because it is a runtime constant and the compiler cannot optimize the
    // loop unrolling based on that.
    // Use a compile time constant instead.
    constexpr size_t NUM_THREADS{BLOCK_TILE_SIZE_X * BLOCK_TILE_SIZE_Y /
                                 THREAD_TILE_SIZE_Y};
    size_t const thread_linear_idx{threadIdx.y * blockDim.x + threadIdx.x};

    // Cache a tile of A and B in shared memory for data reuse.
    __shared__ T A_thread_block_tile[BLOCK_TILE_SIZE_Y][BLOCK_TILE_SIZE_K];
    __shared__ T B_thread_block_tile[BLOCK_TILE_SIZE_K][BLOCK_TILE_SIZE_X];

    size_t const num_thread_block_tiles{(k + BLOCK_TILE_SIZE_K - 1) /
                                        BLOCK_TILE_SIZE_K};

    // Each thread in the block processes THREAD_TILE_SIZE_Y output values.
    // Specifically, these values correspond to
    // C[blockIdx.y * BLOCK_TILE_SIZE_Y + threadIdx.x / BLOCK_TILE_SIZE_X *
    //   THREAD_TILE_SIZE_Y : blockIdx.y * BLOCK_TILE_SIZE_Y + (threadIdx.x /
    //   BLOCK_TILE_SIZE_X + 1) * THREAD_TILE_SIZE_Y][blockIdx.x *
    //   BLOCK_TILE_SIZE_X + threadIdx.x % BLOCK_TILE_SIZE_X]
    T C_thread_results[THREAD_TILE_SIZE_Y] = {static_cast<T>(0)};

    for (size_t thread_block_tile_idx{0U};
         thread_block_tile_idx < num_thread_block_tiles;
         ++thread_block_tile_idx)
    {
        load_data_to_shared_memory<T, BLOCK_TILE_SIZE_X, BLOCK_TILE_SIZE_Y,
                                   BLOCK_TILE_SIZE_K, NUM_THREADS>(
            A, lda, B, ldb, A_thread_block_tile, B_thread_block_tile,
            thread_block_tile_idx, thread_linear_idx, m, n, k);
        __syncthreads();

#pragma unroll
        for (size_t k_i{0U}; k_i < BLOCK_TILE_SIZE_K; ++k_i)
        {
            size_t const B_thread_block_tile_row_idx{k_i};
            // B_val is cached in the register to alleviate the pressure on the
            // shared memory access.
            T const B_val{
                B_thread_block_tile[B_thread_block_tile_row_idx]
                                   [thread_linear_idx % BLOCK_TILE_SIZE_X]};
#pragma unroll
            for (size_t thread_tile_row_idx{0U};
                 thread_tile_row_idx < THREAD_TILE_SIZE_Y;
                 ++thread_tile_row_idx)
            {
                size_t const A_thread_block_tile_row_idx{
                    thread_linear_idx / BLOCK_TILE_SIZE_X * THREAD_TILE_SIZE_Y +
                    thread_tile_row_idx};
                size_t const A_thread_block_tile_col_idx{k_i};
                T const A_val{A_thread_block_tile[A_thread_block_tile_row_idx]
                                                 [A_thread_block_tile_col_idx]};
                C_thread_results[thread_tile_row_idx] += A_val * B_val;
            }
        }
        __syncthreads();
    }

    // Write the results to DRAM.
#pragma unroll
    for (size_t thread_tile_row_idx{0U};
         thread_tile_row_idx < THREAD_TILE_SIZE_Y; ++thread_tile_row_idx)
    {
        size_t const C_row_idx{blockIdx.y * BLOCK_TILE_SIZE_Y +
                               thread_linear_idx / BLOCK_TILE_SIZE_X *
                                   THREAD_TILE_SIZE_Y +
                               thread_tile_row_idx};
        size_t const C_col_idx{blockIdx.x * BLOCK_TILE_SIZE_X +
                               thread_linear_idx % BLOCK_TILE_SIZE_X};
        if (C_row_idx < m && C_col_idx < n)
        {
            C[C_row_idx * ldc + C_col_idx] =
                alpha * C_thread_results[thread_tile_row_idx] +
                beta * C[C_row_idx * ldc + C_col_idx];
        }
    }
}

template <typename T>
void launch_gemm_kernel_v03(size_t m, size_t n, size_t k, T const* alpha,
                            T const* A, size_t lda, T const* B, size_t ldb,
                            T const* beta, T* C, size_t ldc,
                            cudaStream_t stream)
{
    // Feel free to play with the block tile sizes.
    // The algorithm correctness should always be guaranteed.
    constexpr unsigned int BLOCK_TILE_SIZE_X{64U};
    constexpr unsigned int BLOCK_TILE_SIZE_Y{64U};
    constexpr unsigned int BLOCK_TILE_SIZE_K{8U};
    // Each thread computes THREAD_TILE_SIZE_Y values of C.
    constexpr unsigned int THREAD_TILE_SIZE_Y{8U};
    constexpr unsigned int NUM_THREADS_PER_BLOCK{
        BLOCK_TILE_SIZE_X * BLOCK_TILE_SIZE_Y / THREAD_TILE_SIZE_Y};
    static_assert(BLOCK_TILE_SIZE_Y % THREAD_TILE_SIZE_Y == 0U);
    static_assert(NUM_THREADS_PER_BLOCK % BLOCK_TILE_SIZE_K == 0U);
    static_assert(NUM_THREADS_PER_BLOCK % BLOCK_TILE_SIZE_X == 0U);
    dim3 const block_dim{NUM_THREADS_PER_BLOCK, 1U, 1U};
    dim3 const grid_dim{
        (static_cast<unsigned int>(n) + BLOCK_TILE_SIZE_X - 1U) /
            BLOCK_TILE_SIZE_X,
        (static_cast<unsigned int>(m) + BLOCK_TILE_SIZE_Y - 1U) /
            BLOCK_TILE_SIZE_Y,
        1U};
    gemm_v03<T, BLOCK_TILE_SIZE_X, BLOCK_TILE_SIZE_Y, BLOCK_TILE_SIZE_K,
             THREAD_TILE_SIZE_Y><<<grid_dim, block_dim, 0U, stream>>>(
        m, n, k, *alpha, A, lda, B, ldb, *beta, C, ldc);
    CHECK_LAST_CUDA_ERROR();
}
The performance of this FP32 GEMM implementation reaches 8.91 TFLOPS on an NVIDIA GeForce RTX 3090 GPU. We seem to be making good progress.
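For reference, here is a minimal host-side sketch of how launch_gemm_kernel_v03 might be invoked. The matrix sizes, buffer names, and initialization values are arbitrary placeholders, error checking is omitted for brevity, and the kernel and launcher above are assumed to be available in the same translation unit. The leading dimensions follow the row-major convention used by the kernel: lda = k, ldb = n, ldc = n.

#include <cuda_runtime.h>
#include <vector>

int main()
{
    size_t const m{1024U}, n{1024U}, k{1024U};
    float const alpha{1.0f}, beta{0.0f};

    // Host buffers with placeholder values.
    std::vector<float> A_host(m * k, 1.0f), B_host(k * n, 1.0f), C_host(m * n, 0.0f);

    // Device buffers.
    float *A_device{nullptr}, *B_device{nullptr}, *C_device{nullptr};
    cudaMalloc(&A_device, m * k * sizeof(float));
    cudaMalloc(&B_device, k * n * sizeof(float));
    cudaMalloc(&C_device, m * n * sizeof(float));
    cudaMemcpy(A_device, A_host.data(), m * k * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(B_device, B_host.data(), k * n * sizeof(float), cudaMemcpyHostToDevice);
    cudaMemcpy(C_device, C_host.data(), m * n * sizeof(float), cudaMemcpyHostToDevice);

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Row-major leading dimensions: lda = k, ldb = n, ldc = n.
    launch_gemm_kernel_v03<float>(m, n, k, &alpha, A_device, k, B_device, n,
                                  &beta, C_device, n, stream);
    cudaStreamSynchronize(stream);

    cudaMemcpy(C_host.data(), C_device, m * n * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(A_device);
    cudaFree(B_device);
    cudaFree(C_device);
    cudaStreamDestroy(stream);
    return 0;
}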
Implementation with 2D Block Tiling and 2D Thread Tiling
If the number of registers is not a performance bottleneck, we can further improve performance by caching data from both matrix $A$ and matrix $B$ in registers, instead of re-reading them from shared memory for every multiply-accumulate. Each thread with block thread index $(t_m, t_n)$, where $t_m \in [1, d_{bm} / d_{tm}]$ and $t_n \in [1, d_{bn} / d_{tn}]$, is now responsible for computing $d_{tm} \times d_{tn}$ elements of the small matrix, where $d_{tm}$ and $d_{tn}$ are the thread tile sizes for the row and column, respectively.
$$\begin{aligned}\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{t_m : t_m + d_{tm},t_n : t_n + d_{tn}}&= \left( \sum_{b_k=1}^{k/d_{bk}} A_{b_m,b_k}^{d_{bm} \times d_{bk}} B_{b_k,b_n}^{d_{bk} \times d_{bn}} + C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{t_m : t_m + d_{tm},t_n : t_n + d_{tn}} \\&= \sum_{b_k=1}^{k/d_{bk}} \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{t_m : t_m + d_{tm},t_n : t_n + d_{tn}} + \left( C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{t_m : t_m + d_{tm},t_n : t_n + d_{tn}} \\&= \sum_{b_k=1}^{k/d_{bk}} \left( \sum_{t_k=1}^{d_{bk}} \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{t_m : t_m + d_{tm},t_k} \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{t_k,t_n : t_n + d_{tn}} \right) + \left( C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{t_m : t_m + d_{tm},t_n : t_n + d_{tn}} \\\end{aligned}$$
In our previous implementation with 1D thread tiling, to compute one element of the small matrix, we need to read $d_{bk} + d_{bk} / d_{tm}$ values from the shared memory on average.
Now, with 2D thread tiling, computing $d_{tm} \times d_{tn}$ elements of the small matrix only requires reading $d_{bk} \times (d_{tm} + d_{tn})$ values from the shared memory. Specifically, in each inner loop iteration, $\left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{t_m : t_m + d_{tm},t_k}$ and $\left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{t_k,t_n : t_n + d_{tn}}$ are cached in registers and reused to compute the matrix multiplication $\left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{t_m : t_m + d_{tm},t_k} \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{t_k,t_n : t_n + d_{tn}}$. On average, computing one element of the small matrix therefore requires reading $d_{bk} / d_{tm} + d_{bk} / d_{tn}$ values from the shared memory.
Because $d_{bk} / d_{tm} + d_{bk} / d_{tn} < d_{bk} + d_{bk} / d_{tm}$, the shared memory is accessed even less frequently and the shared memory bandwidth problem is further alleviated.
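To make the saving concrete, plug in the tile sizes used by the 2D thread tiling launch function below, $d_{bk} = 16$ and $d_{tm} = d_{tn} = 8$ (using the same $d_{bk}$ and $d_{tm}$ for the 1D case purely for comparison):

$$\underbrace{d_{bk} + \frac{d_{bk}}{d_{tm}}}_{\text{1D thread tiling}} = 16 + \frac{16}{8} = 18 \qquad \text{vs.} \qquad \underbrace{\frac{d_{bk}}{d_{tm}} + \frac{d_{bk}}{d_{tn}}}_{\text{2D thread tiling}} = \frac{16}{8} + \frac{16}{8} = 4$$

So the average shared memory traffic per output element drops by roughly a factor of $4.5$.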
There is an alternative way to describe the 2D thread tiling implementation.
Mathematically, given a matrix multiplication and accumulation operation $D_{b_m,b_n}^{d_{bm} \times d_{bn}} = \sum_{b_k=1}^{k/d_{bk}} A_{b_m,b_k}^{d_{bm} \times d_{bk}} B_{b_k,b_n}^{d_{bk} \times d_{bn}} + C_{b_m,b_n}^{d_{bm} \times d_{bn}}$, where $D_{b_m,b_n} \in \mathbb{R}^{d_{bm} \times d_{bn}}$, $A_{b_m,b_k} \in \mathbb{R}^{d_{bm} \times d_{bk}}$, $B_{b_k,b_n} \in \mathbb{R}^{d_{bk} \times d_{bn}}$, $C_{b_m,b_n} \in \mathbb{R}^{d_{bm} \times d_{bn}}$, the matrices could be divided into smaller matrices.
$$A_{b_m,b_k}^{d_{bm} \times d_{bk}} =\begin{bmatrix}\left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{1,1}^{d_{tm} \times d_{tk}} & \left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{1,2}^{d_{tm} \times d_{tk}} & \cdots & \left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{1,d_{bk}/d_{tk}}^{d_{tm} \times d_{tk}} \\\left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{2,1}^{d_{tm} \times d_{tk}} & \left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{2,2}^{d_{tm} \times d_{tk}} & \cdots & \left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{2,d_{bk}/d_{tk}}^{d_{tm} \times d_{tk}} \\\vdots & \vdots & \ddots & \vdots \\\left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{d_{bm}/d_{tm},1}^{d_{tm} \times d_{tk}} & \left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{d_{bm}/d_{tm},2}^{d_{tm} \times d_{tk}} & \cdots & \left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{d_{bm}/d_{tm},d_{bk}/d_{tk}}^{d_{tm} \times d_{tk}} \\\end{bmatrix}$$
$$B_{b_k,b_n}^{d_{bk} \times d_{bn}} =\begin{bmatrix}\left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{1,1}^{d_{tk} \times d_{tn}} & \left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{1,2}^{d_{tk} \times d_{tn}} & \cdots & \left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{1,d_{bn}/d_{tn}}^{d_{tk} \times d_{tn}} \\\left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{2,1}^{d_{tk} \times d_{tn}} & \left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{2,2}^{d_{tk} \times d_{tn}} & \cdots & \left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{2,d_{bn}/d_{tn}}^{d_{tk} \times d_{tn}} \\\vdots & \vdots & \ddots & \vdots \\\left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{d_{bk}/d_{tk},1}^{d_{tk} \times d_{tn}} & \left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{d_{bk}/d_{tk},2}^{d_{tk} \times d_{tn}} & \cdots & \left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{d_{bk}/d_{tk},d_{bn}/d_{tn}}^{d_{tk} \times d_{tn}} \\\end{bmatrix}$$
$$C_{b_m,b_n}^{d_{bm} \times d_{bn}} =\begin{bmatrix}\left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{1,1}^{d_{tm} \times d_{tn}} & \left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{1,2}^{d_{tm} \times d_{tn}} & \cdots & \left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{1,d_{bn}/d_{tn}}^{d_{tm} \times d_{tn}} \\\left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{2,1}^{d_{tm} \times d_{tn}} & \left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{2,2}^{d_{tm} \times d_{tn}} & \cdots & \left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{2,d_{bn}/d_{tn}}^{d_{tm} \times d_{tn}} \\\vdots & \vdots & \ddots & \vdots \\\left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{d_{bm}/d_{tm},1}^{d_{tm} \times d_{tn}} & \left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{d_{bm}/d_{tm},2}^{d_{tm} \times d_{tn}} & \cdots & \left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{d_{bm}/d_{tm},d_{bn}/d_{tn}}^{d_{tm} \times d_{tn}} \\\end{bmatrix}$$
$$D_{b_m,b_n}^{d_{bm} \times d_{bn}} =\begin{bmatrix}\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{1,1}^{d_{tm} \times d_{tn}} & \left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{1,2}^{d_{tm} \times d_{tn}} & \cdots & \left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{1,d_{bn}/d_{tn}}^{d_{tm} \times d_{tn}} \\\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{2,1}^{d_{tm} \times d_{tn}} & \left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{2,2}^{d_{tm} \times d_{tn}} & \cdots & \left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{2,d_{bn}/d_{tn}}^{d_{tm} \times d_{tn}} \\\vdots & \vdots & \ddots & \vdots \\\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{d_{bm}/d_{tm},1}^{d_{tm} \times d_{tn}} & \left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{d_{bm}/d_{tm},2}^{d_{tm} \times d_{tn}} & \cdots & \left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{d_{bm}/d_{tm},d_{bn}/d_{tn}}^{d_{tm} \times d_{tn}} \\\end{bmatrix}$$
Each small matrix in $D_{b_m,b_n}^{d_{bm} \times d_{bn}}$ is computed as multiple small matrix multiplications and accumulations.
$$\begin{aligned}\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{t_m,t_n}^{d_{tm} \times d_{tn}}&= \left( \sum_{b_k=1}^{k/d_{bk}} A_{b_m,b_k}^{d_{bm} \times d_{bk}} B_{b_k,b_n}^{d_{bk} \times d_{bn}} + C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{t_m,t_n}^{d_{tm} \times d_{tn}} \\&= \sum_{b_k=1}^{k/d_{bk}} \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{t_m,t_n}^{d_{tm} \times d_{tn}} + \left( C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{t_m,t_n}^{d_{tm} \times d_{tn}} \\&= \sum_{b_k=1}^{k/d_{bk}} \left( \sum_{t_k=1}^{d_{bk} / d_{tk}} \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{t_m,t_k}^{d_{tm} \times d_{tk}} \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{t_k,t_n}^{d_{tk} \times d_{tn}} \right) + \left( C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{t_m,t_n}^{d_{tm} \times d_{tn}} \\\end{aligned}$$
Each thread with block thread index $(t_m, t_n)$, where $t_m \in [1, d_{bm} / d_{tm}]$ and $t_n \in [1, d_{bn} / d_{tn}]$, in the block with block index $(b_m, b_n)$, where $b_m \in [1, m/d_{bm}]$ and $b_n \in [1, n/d_{bn}]$, is responsible for computing one small matrix multiplication and accumulation $\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{t_m,t_n}^{d_{tm} \times d_{tn}}$.
In the case of 1D thread tiling described in this article, we have $d_{tm} > 1$, $d_{tk} = 1$ and $d_{tn} = 1$. In the case of 2D thread tiling in this article, we have $d_{tm} > 1$, $d_{tk} = 1$ and $d_{tn} > 1$. It is also technically feasible to do thread tiling with $d_{tk} > 1$. In the case of no thread tiling, which is actually a special case of the thread tiling, we have $d_{tm} = 1$, $d_{tk} = 1$ and $d_{tn} = 1$.
The following code snippet shows the implementation with 2D block tiling and 2D thread tiling.
// GEMM kernel v04.// Coalesced read and write from global memory.template <typename T, size_t BLOCK_TILE_SIZE_X, size_t BLOCK_TILE_SIZE_Y, size_t BLOCK_TILE_SIZE_K, size_t THREAD_TILE_SIZE_X, size_t THREAD_TILE_SIZE_Y>__global__ void gemm_v04(size_t m, size_t n, size_t k, T alpha, T const* A, size_t lda, T const* B, size_t ldb, T beta, T* C, size_t ldc){ // Avoid using blockDim.x * blockDim.y as the number of threads per block. // Because it is a runtime constant and the compiler cannot optimize the // loop unrolling based on that. // Use a compile time constant instead. constexpr size_t NUM_THREADS{BLOCK_TILE_SIZE_X * BLOCK_TILE_SIZE_Y / (THREAD_TILE_SIZE_X * THREAD_TILE_SIZE_Y)}; size_t const thread_linear_idx{threadIdx.y * blockDim.x + threadIdx.x}; // Cache a tile of A and B in shared memory for data reuse. __shared__ T A_thread_block_tile[BLOCK_TILE_SIZE_Y][BLOCK_TILE_SIZE_K]; __shared__ T B_thread_block_tile[BLOCK_TILE_SIZE_K][BLOCK_TILE_SIZE_X]; size_t const num_thread_block_tiles{(k + BLOCK_TILE_SIZE_K - 1) / BLOCK_TILE_SIZE_K}; // Each thread in the block processes BLOCK_TILE_SIZE_Y output values. // Specifically, these values corresponds to // C[blockIdx.y * BLOCK_TILE_SIZE_Y + threadIdx.x / BLOCK_TILE_SIZE_X * // THREAD_TILE_SIZE_Y : blockIdx.y * BLOCK_TILE_SIZE_Y + (threadIdx.x / // BLOCK_TILE_SIZE_X + 1) * THREAD_TILE_SIZE_Y][blockIdx.x * // BLOCK_TILE_SIZE_X + threadIdx.x % BLOCK_TILE_SIZE_X * // THREAD_TILE_SIZE_X : blockIdx.x * BLOCK_TILE_SIZE_X + (threadIdx.x % // BLOCK_TILE_SIZE_X + 1) * THREAD_TILE_SIZE_X] T C_thread_results[THREAD_TILE_SIZE_Y][THREAD_TILE_SIZE_X] = { static_cast<T>(0)}; // A_vals is cached in the register. T A_vals[THREAD_TILE_SIZE_Y] = {static_cast<T>(0)}; // B_vals is cached in the register. T B_vals[THREAD_TILE_SIZE_X] = {static_cast<T>(0)}; for (size_t thread_block_tile_idx{0U}; thread_block_tile_idx < num_thread_block_tiles; ++thread_block_tile_idx) { load_data_to_shared_memory<T, BLOCK_TILE_SIZE_X, BLOCK_TILE_SIZE_Y, BLOCK_TILE_SIZE_K, NUM_THREADS>( A, lda, B, ldb, A_thread_block_tile, B_thread_block_tile, thread_block_tile_idx, thread_linear_idx, m, n, k); __syncthreads();#pragma unroll for (size_t k_i{0U}; k_i < BLOCK_TILE_SIZE_K; ++k_i) { size_t const A_thread_block_tile_row_idx{ thread_linear_idx / (BLOCK_TILE_SIZE_X / THREAD_TILE_SIZE_X) * THREAD_TILE_SIZE_Y}; size_t const A_thread_block_tile_col_idx{k_i};#pragma unroll for (size_t thread_tile_row_idx{0U}; thread_tile_row_idx < THREAD_TILE_SIZE_Y; ++thread_tile_row_idx) { // There will be shared memory bank conflicts accessing the // values from A_thread_block_tile. We can do it better by // transposing the A_thread_block_tile when we load the data // from DRAM. 
A_vals[thread_tile_row_idx] = A_thread_block_tile[A_thread_block_tile_row_idx + thread_tile_row_idx] [A_thread_block_tile_col_idx]; } size_t const B_thread_block_tile_row_idx{k_i}; size_t const B_thread_block_tile_col_idx{ thread_linear_idx % (BLOCK_TILE_SIZE_X / THREAD_TILE_SIZE_X) * THREAD_TILE_SIZE_X};#pragma unroll for (size_t thread_tile_col_idx{0U}; thread_tile_col_idx < THREAD_TILE_SIZE_X; ++thread_tile_col_idx) { B_vals[thread_tile_col_idx] = B_thread_block_tile[B_thread_block_tile_row_idx] [B_thread_block_tile_col_idx + thread_tile_col_idx]; } for (size_t thread_tile_row_idx{0U}; thread_tile_row_idx < THREAD_TILE_SIZE_Y; ++thread_tile_row_idx) { for (size_t thread_tile_col_idx{0U}; thread_tile_col_idx < THREAD_TILE_SIZE_X; ++thread_tile_col_idx) { C_thread_results[thread_tile_row_idx] [thread_tile_col_idx] += A_vals[thread_tile_row_idx] * B_vals[thread_tile_col_idx]; } } } __syncthreads(); } // Write the results to DRAM. for (size_t thread_tile_row_idx{0U}; thread_tile_row_idx < THREAD_TILE_SIZE_Y; ++thread_tile_row_idx) { for (size_t thread_tile_col_idx{0U}; thread_tile_col_idx < THREAD_TILE_SIZE_X; ++thread_tile_col_idx) { size_t const C_row_idx{ blockIdx.y * BLOCK_TILE_SIZE_Y + threadIdx.x / (BLOCK_TILE_SIZE_X / THREAD_TILE_SIZE_X) * THREAD_TILE_SIZE_Y + thread_tile_row_idx}; size_t const C_col_idx{ blockIdx.x * BLOCK_TILE_SIZE_X + threadIdx.x % (BLOCK_TILE_SIZE_X / THREAD_TILE_SIZE_X) * THREAD_TILE_SIZE_X + thread_tile_col_idx}; if (C_row_idx < m && C_col_idx < n) { C[C_row_idx * ldc + C_col_idx] = alpha * C_thread_results[thread_tile_row_idx] [thread_tile_col_idx] + beta * C[C_row_idx * ldc + C_col_idx]; } } }}template <typename T>void launch_gemm_kernel_v04(size_t m, size_t n, size_t k, T const* alpha, T const* A, size_t lda, T const* B, size_t ldb, T const* beta, T* C, size_t ldc, cudaStream_t stream){ // Feel free to play with the block tile sizes. // The algorithm correctness should always be guaranteed. constexpr unsigned int BLOCK_TILE_SIZE_X{128U}; constexpr unsigned int BLOCK_TILE_SIZE_Y{128U}; constexpr unsigned int BLOCK_TILE_SIZE_K{16U}; // Each thread computes THREAD_TILE_SIZE_X * THREAD_TILE_SIZE_Y values of C. constexpr unsigned int THREAD_TILE_SIZE_X{8U}; constexpr unsigned int THREAD_TILE_SIZE_Y{8U}; constexpr unsigned int NUM_THREADS_PER_BLOCK{ BLOCK_TILE_SIZE_X * BLOCK_TILE_SIZE_Y / (THREAD_TILE_SIZE_X * THREAD_TILE_SIZE_Y)}; static_assert(BLOCK_TILE_SIZE_X % THREAD_TILE_SIZE_X == 0U); static_assert(BLOCK_TILE_SIZE_Y % THREAD_TILE_SIZE_Y == 0U); static_assert(NUM_THREADS_PER_BLOCK % BLOCK_TILE_SIZE_K == 0U); static_assert(NUM_THREADS_PER_BLOCK % BLOCK_TILE_SIZE_X == 0U); static_assert( BLOCK_TILE_SIZE_X * BLOCK_TILE_SIZE_K % NUM_THREADS_PER_BLOCK == 0U); static_assert( BLOCK_TILE_SIZE_K * BLOCK_TILE_SIZE_Y % NUM_THREADS_PER_BLOCK == 0U); dim3 const block_dim{NUM_THREADS_PER_BLOCK, 1U, 1U}; dim3 const grid_dim{ (static_cast<unsigned int>(n) + BLOCK_TILE_SIZE_X - 1U) / BLOCK_TILE_SIZE_X, (static_cast<unsigned int>(m) + BLOCK_TILE_SIZE_Y - 1U) / BLOCK_TILE_SIZE_Y, 1U}; gemm_v04<T, BLOCK_TILE_SIZE_X, BLOCK_TILE_SIZE_Y, BLOCK_TILE_SIZE_K, THREAD_TILE_SIZE_X, THREAD_TILE_SIZE_Y> <<<grid_dim, block_dim, 0U, stream>>>(m, n, k, *alpha, A, lda, B, ldb, *beta, C, ldc); CHECK_LAST_CUDA_ERROR();}
The performance of this FP32 GEMM implementation reaches 13.02 TFLOPS on an NVIDIA GeForce RTX 3090 GPU.
Implementation with 2D Block Tiling and 2D Thread Tiling and Vectorized Memory Access
In my previous article “CUDA Vectorized Memory Access”, I showed how to use vectorized memory access to improve the performance of a trivial memory copy kernel. Vectorized memory access reduces the number of memory transactions and therefore improves the memory bandwidth utilization. The same trick can be applied to this GEMM kernel to accelerate the data loading from global memory to the shared memory and the data loading from the shared memory to the registers.
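As a quick reminder of the idea, here is a minimal sketch of a vectorized copy kernel (not the exact kernel from that article), assuming 16-byte-aligned buffers and an element count that is a multiple of four: each thread moves four float values per iteration through a single 128-bit load and store.

// Minimal sketch of a vectorized memory copy using float4.
// Assumes input and output are 16-byte aligned and n is a multiple of 4.
__global__ void copy_vectorized(float* output, float const* input, size_t n)
{
    size_t const num_vectors{n / 4U};
    // Grid-stride loop over 128-bit vectors.
    for (size_t i{blockIdx.x * blockDim.x + threadIdx.x}; i < num_vectors;
         i += gridDim.x * blockDim.x)
    {
        reinterpret_cast<float4*>(output)[i] =
            reinterpret_cast<float4 const*>(input)[i];
    }
}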
In the previous implementation, to compute matrix multiplication, each thread would have to read a column of matrix $A$ and a row of matrix $B$ from the shared memory and cache them in the registers. Because reading the data from a column of matrix $A$ would prevent vectorized memory access, we would like to transpose the matrix $A$ when loading the data from global memory to the shared memory, so that each thread can access a row of transposed matrix $A$ and a row of matrix $B$ from the shared memory in a vectorized fashion and cache them in the registers.
The following code snippet shows the implementation with 2D block tiling and 2D thread tiling and vectorized memory access.
template <typename T, size_t BLOCK_TILE_SIZE_X, size_t BLOCK_TILE_SIZE_Y, size_t BLOCK_TILE_SIZE_K, size_t NUM_THREADS, size_t BLOCK_TILE_SKEW_SIZE_X = 0U, size_t BLOCK_TILE_SKEW_SIZE_Y = 0U, typename VECTOR_TYPE = int4>__device__ void load_data_to_shared_memory_transposed_vectorized(T const* A, size_t lda, T const* B, size_t ldb, T A_thread_block_tile_transposed[BLOCK_TILE_SIZE_K][BLOCK_TILE_SIZE_Y + BLOCK_TILE_SKEW_SIZE_Y], T B_thread_block_tile[BLOCK_TILE_SIZE_K][BLOCK_TILE_SIZE_X + BLOCK_TILE_SKEW_SIZE_X], size_t thread_block_tile_idx, size_t thread_linear_idx, size_t m, size_t n, size_t k){ constexpr size_t NUM_VECTOR_UNITS{sizeof(VECTOR_TYPE) / sizeof(T)}; static_assert(sizeof(VECTOR_TYPE) % sizeof(T) == 0U); static_assert(BLOCK_TILE_SIZE_K % NUM_VECTOR_UNITS == 0U); static_assert(BLOCK_TILE_SIZE_X % NUM_VECTOR_UNITS == 0U); constexpr size_t VECTORIZED_BLOCK_TILE_SIZE_K{BLOCK_TILE_SIZE_K / NUM_VECTOR_UNITS}; static_assert(BLOCK_TILE_SIZE_K % NUM_VECTOR_UNITS == 0U); constexpr size_t VECTORIZED_BLOCK_TILE_SIZE_X{BLOCK_TILE_SIZE_X / NUM_VECTOR_UNITS}; static_assert(BLOCK_TILE_SIZE_X % NUM_VECTOR_UNITS == 0U); // The skew size could affect the data alignment in shared memory when we use vectorized load. // We need to make sure the data alignment is correct. static_assert((BLOCK_TILE_SIZE_Y) * sizeof(T) % sizeof(VECTOR_TYPE) == 0U); static_assert((BLOCK_TILE_SIZE_X) * sizeof(T) % sizeof(VECTOR_TYPE) == 0U); static_assert((BLOCK_TILE_SIZE_Y + BLOCK_TILE_SKEW_SIZE_Y) * sizeof(T) % sizeof(VECTOR_TYPE) == 0U); static_assert((BLOCK_TILE_SIZE_X + BLOCK_TILE_SKEW_SIZE_X) * sizeof(T) % sizeof(VECTOR_TYPE) == 0U);// Load data from A on DRAM to A_thread_block_tile on shared memory.#pragma unroll for (size_t load_idx{0U}; load_idx < (BLOCK_TILE_SIZE_Y * VECTORIZED_BLOCK_TILE_SIZE_K + NUM_THREADS - 1U) / NUM_THREADS; ++load_idx) { size_t const A_thread_block_tile_row_idx{ (thread_linear_idx + load_idx * NUM_THREADS) / VECTORIZED_BLOCK_TILE_SIZE_K}; size_t const A_thread_block_tile_col_idx{ (thread_linear_idx + load_idx * NUM_THREADS) % VECTORIZED_BLOCK_TILE_SIZE_K * NUM_VECTOR_UNITS}; size_t const A_row_idx{blockIdx.y * BLOCK_TILE_SIZE_Y + A_thread_block_tile_row_idx}; size_t const A_col_idx{thread_block_tile_idx * BLOCK_TILE_SIZE_K + A_thread_block_tile_col_idx}; // These boundary checks might slow down the kernel to some extent. // But they guarantee the correctness of the kernel for all // different GEMM configurations. int4 A_row_vector_vals{0, 0, 0, 0}; if (A_row_idx < m && A_col_idx < k) { A_row_vector_vals = *reinterpret_cast<int4 const*>( &A[A_row_idx * lda + A_col_idx]); } if (A_col_idx + NUM_VECTOR_UNITS > k) { // Number of invalid elements in the last vector. size_t const num_invalid_elements{A_col_idx + NUM_VECTOR_UNITS - k}; // Mask out the invalid elements. T* const A_row_vector_vals_ptr{ reinterpret_cast<T*>(&A_row_vector_vals)}; for (size_t i{0U}; i < num_invalid_elements; ++i) { A_row_vector_vals_ptr[NUM_VECTOR_UNITS - 1U - i] = static_cast<T>(0); } } // If this is true, the following if can be removed. 
// static_assert(VECTORIZED_BLOCK_TILE_SIZE_K * BLOCK_TILE_SIZE_Y % // NUM_THREADS == // 0U); if (A_thread_block_tile_row_idx < BLOCK_TILE_SIZE_Y && A_thread_block_tile_col_idx < BLOCK_TILE_SIZE_K) { for (size_t i{0U}; i < NUM_VECTOR_UNITS; ++i) { A_thread_block_tile_transposed [A_thread_block_tile_col_idx + i] [A_thread_block_tile_row_idx] = reinterpret_cast<T const*>(&A_row_vector_vals)[i]; } } }// Load data from B on DRAM to B_thread_block_tile on shared memory.#pragma unroll for (size_t load_idx{0U}; load_idx < (BLOCK_TILE_SIZE_K * VECTORIZED_BLOCK_TILE_SIZE_X + NUM_THREADS - 1U) / NUM_THREADS; ++load_idx) { size_t const B_thread_block_tile_row_idx{ (thread_linear_idx + load_idx * NUM_THREADS) / VECTORIZED_BLOCK_TILE_SIZE_X}; size_t const B_thread_block_tile_col_idx{ (thread_linear_idx + load_idx * NUM_THREADS) % VECTORIZED_BLOCK_TILE_SIZE_X * NUM_VECTOR_UNITS}; size_t const B_row_idx{thread_block_tile_idx * BLOCK_TILE_SIZE_K + B_thread_block_tile_row_idx}; size_t const B_col_idx{blockIdx.x * BLOCK_TILE_SIZE_X + B_thread_block_tile_col_idx}; // These boundary checks might slow down the kernel to some extent. // But they guarantee the correctness of the kernel for all // different GEMM configurations. int4 B_row_vector_vals{0, 0, 0, 0}; if (B_row_idx < k && B_col_idx < n) { B_row_vector_vals = *reinterpret_cast<int4 const*>( &B[B_row_idx * ldb + B_col_idx]); } if (B_col_idx + NUM_VECTOR_UNITS > n) { // Number of invalid elements in the last vector. size_t const num_invalid_elements{B_col_idx + NUM_VECTOR_UNITS - n}; // Mask out the invalid elements. T* const B_row_vector_vals_ptr{ reinterpret_cast<T*>(&B_row_vector_vals)}; for (size_t i{0U}; i < num_invalid_elements; ++i) { B_row_vector_vals_ptr[NUM_VECTOR_UNITS - 1U - i] = static_cast<T>(0); } } // If this is true, the following if can be removed. // static_assert(VECTORIZED_BLOCK_TILE_SIZE_X * BLOCK_TILE_SIZE_K % // NUM_THREADS == // 0U); if (B_thread_block_tile_row_idx < BLOCK_TILE_SIZE_K && B_thread_block_tile_col_idx < BLOCK_TILE_SIZE_X) { *reinterpret_cast<int4*>( &B_thread_block_tile[B_thread_block_tile_row_idx] [B_thread_block_tile_col_idx]) = B_row_vector_vals; } }}// GEMM kernel v05.// Coalesced read and write from global memory.template <typename T, size_t BLOCK_TILE_SIZE_X, size_t BLOCK_TILE_SIZE_Y, size_t BLOCK_TILE_SIZE_K, size_t THREAD_TILE_SIZE_X, size_t THREAD_TILE_SIZE_Y>__global__ void gemm_v05_vectorized(size_t m, size_t n, size_t k, T alpha, T const* A, size_t lda, T const* B, size_t ldb, T beta, T* C, size_t ldc){ // Avoid using blockDim.x * blockDim.y as the number of threads per block. // Because it is a runtime constant and the compiler cannot optimize the // loop unrolling based on that. // Use a compile time constant instead. constexpr size_t NUM_THREADS{BLOCK_TILE_SIZE_X * BLOCK_TILE_SIZE_Y / (THREAD_TILE_SIZE_X * THREAD_TILE_SIZE_Y)}; size_t const thread_linear_idx{threadIdx.y * blockDim.x + threadIdx.x}; // Cache a tile of A and B in shared memory for data reuse. __shared__ T A_thread_block_tile_transposed[BLOCK_TILE_SIZE_K][BLOCK_TILE_SIZE_Y]; __shared__ T B_thread_block_tile[BLOCK_TILE_SIZE_K][BLOCK_TILE_SIZE_X]; size_t const num_thread_block_tiles{(k + BLOCK_TILE_SIZE_K - 1) / BLOCK_TILE_SIZE_K}; // Each thread in the block processes BLOCK_TILE_SIZE_Y output values. 
// Specifically, these values corresponds to // C[blockIdx.y * BLOCK_TILE_SIZE_Y + threadIdx.x / BLOCK_TILE_SIZE_X * // THREAD_TILE_SIZE_Y : blockIdx.y * BLOCK_TILE_SIZE_Y + (threadIdx.x / // BLOCK_TILE_SIZE_X + 1) * THREAD_TILE_SIZE_Y][blockIdx.x * // BLOCK_TILE_SIZE_X + threadIdx.x % BLOCK_TILE_SIZE_X * // THREAD_TILE_SIZE_X : blockIdx.x * BLOCK_TILE_SIZE_X + (threadIdx.x % // BLOCK_TILE_SIZE_X + 1) * THREAD_TILE_SIZE_X] T C_thread_results[THREAD_TILE_SIZE_Y][THREAD_TILE_SIZE_X] = { static_cast<T>(0)}; // A_vals is cached in the register. T A_vals[THREAD_TILE_SIZE_Y] = {static_cast<T>(0)}; // B_vals is cached in the register. T B_vals[THREAD_TILE_SIZE_X] = {static_cast<T>(0)}; constexpr size_t NUM_VECTOR_UNITS{sizeof(int4) / sizeof(T)}; static_assert(sizeof(int4) % sizeof(T) == 0U); static_assert(BLOCK_TILE_SIZE_K % NUM_VECTOR_UNITS == 0U); static_assert(BLOCK_TILE_SIZE_X % NUM_VECTOR_UNITS == 0U); constexpr size_t VECTORIZED_THREAD_TILE_SIZE_X{THREAD_TILE_SIZE_X / NUM_VECTOR_UNITS}; static_assert(THREAD_TILE_SIZE_X % NUM_VECTOR_UNITS == 0U); for (size_t thread_block_tile_idx{0U}; thread_block_tile_idx < num_thread_block_tiles; ++thread_block_tile_idx) { load_data_to_shared_memory_transposed_vectorized< T, BLOCK_TILE_SIZE_X, BLOCK_TILE_SIZE_Y, BLOCK_TILE_SIZE_K, NUM_THREADS>(A, lda, B, ldb, A_thread_block_tile_transposed, B_thread_block_tile, thread_block_tile_idx, thread_linear_idx, m, n, k); __syncthreads();#pragma unroll for (size_t k_i{0U}; k_i < BLOCK_TILE_SIZE_K; ++k_i) { size_t const A_thread_block_tile_row_idx{ thread_linear_idx / (BLOCK_TILE_SIZE_X / THREAD_TILE_SIZE_X) * THREAD_TILE_SIZE_Y}; size_t const A_thread_block_tile_col_idx{k_i};#pragma unroll for (size_t thread_tile_row_idx{0U}; thread_tile_row_idx < THREAD_TILE_SIZE_Y; ++thread_tile_row_idx) { A_vals[thread_tile_row_idx] = A_thread_block_tile_transposed[A_thread_block_tile_col_idx] [A_thread_block_tile_row_idx + thread_tile_row_idx]; } size_t const B_thread_block_tile_row_idx{k_i}; size_t const B_thread_block_tile_col_idx{ thread_linear_idx % (BLOCK_TILE_SIZE_X / THREAD_TILE_SIZE_X) * THREAD_TILE_SIZE_X};// Although the read from A_thread_block_tile cannot be vectorized, the read// from B_thread_block_tile can be vectorized.#pragma unroll for (size_t thread_tile_col_vector_idx{0U}; thread_tile_col_vector_idx < VECTORIZED_THREAD_TILE_SIZE_X; ++thread_tile_col_vector_idx) { *reinterpret_cast<int4*>( &B_vals[thread_tile_col_vector_idx * NUM_VECTOR_UNITS]) = *reinterpret_cast<int4 const*>( &B_thread_block_tile[B_thread_block_tile_row_idx] [B_thread_block_tile_col_idx + thread_tile_col_vector_idx * NUM_VECTOR_UNITS]); } for (size_t thread_tile_row_idx{0U}; thread_tile_row_idx < THREAD_TILE_SIZE_Y; ++thread_tile_row_idx) { for (size_t thread_tile_col_idx{0U}; thread_tile_col_idx < THREAD_TILE_SIZE_X; ++thread_tile_col_idx) { C_thread_results[thread_tile_row_idx] [thread_tile_col_idx] += A_vals[thread_tile_row_idx] * B_vals[thread_tile_col_idx]; } } } __syncthreads(); } // Vectorized writing the results to DRAM. 
for (size_t thread_tile_row_idx{0U}; thread_tile_row_idx < THREAD_TILE_SIZE_Y; ++thread_tile_row_idx) { for (size_t thread_tile_col_vector_idx{0U}; thread_tile_col_vector_idx < VECTORIZED_THREAD_TILE_SIZE_X; ++thread_tile_col_vector_idx) { size_t const C_row_idx{ blockIdx.y * BLOCK_TILE_SIZE_Y + thread_linear_idx / (BLOCK_TILE_SIZE_X / THREAD_TILE_SIZE_X) * THREAD_TILE_SIZE_Y + thread_tile_row_idx}; size_t const C_col_idx{ blockIdx.x * BLOCK_TILE_SIZE_X + thread_linear_idx % (BLOCK_TILE_SIZE_X / THREAD_TILE_SIZE_X) * THREAD_TILE_SIZE_X + thread_tile_col_vector_idx * NUM_VECTOR_UNITS}; // Vectorized read from C. int4 C_row_vector_vals{*reinterpret_cast<int4 const*>( &C[C_row_idx * ldc + C_col_idx])}; // Vectorized read from C_thread_results. int4 const C_thread_results_row_vector_vals{ *reinterpret_cast<int4 const*>( &C_thread_results[thread_tile_row_idx] [thread_tile_col_vector_idx * NUM_VECTOR_UNITS])}; // Update the values in C_row_vector_vals for (size_t i{0U}; i < NUM_VECTOR_UNITS; ++i) { reinterpret_cast<T*>(&C_row_vector_vals)[i] = alpha * reinterpret_cast<T const*>( &C_thread_results_row_vector_vals)[i] + beta * reinterpret_cast<T const*>(&C_row_vector_vals)[i]; } // Vectorized write to C. if (C_row_idx < m && C_col_idx < n) { // No need to mask out the out-of-bound invalid elements, // because the row of C matrix is 32-byte aligned. *reinterpret_cast<int4*>(&C[C_row_idx * ldc + C_col_idx]) = C_row_vector_vals; } } }}template <typename T>void launch_gemm_kernel_v05_vectorized(size_t m, size_t n, size_t k, T const* alpha, T const* A, size_t lda, T const* B, size_t ldb, T const* beta, T* C, size_t ldc, cudaStream_t stream){ // Feel free to play with the block tile sizes. // The algorithm correctness should always be guaranteed. constexpr unsigned int BLOCK_TILE_SIZE_X{128U}; constexpr unsigned int BLOCK_TILE_SIZE_Y{128U}; constexpr unsigned int BLOCK_TILE_SIZE_K{16U}; // Each thread computes THREAD_TILE_SIZE_X * THREAD_TILE_SIZE_Y values of C. constexpr unsigned int THREAD_TILE_SIZE_X{8U}; constexpr unsigned int THREAD_TILE_SIZE_Y{8U}; constexpr unsigned int NUM_THREADS_PER_BLOCK{ BLOCK_TILE_SIZE_X * BLOCK_TILE_SIZE_Y / (THREAD_TILE_SIZE_X * THREAD_TILE_SIZE_Y)}; static_assert(BLOCK_TILE_SIZE_X % THREAD_TILE_SIZE_X == 0U); static_assert(BLOCK_TILE_SIZE_Y % THREAD_TILE_SIZE_Y == 0U); static_assert(NUM_THREADS_PER_BLOCK % BLOCK_TILE_SIZE_K == 0U); static_assert(NUM_THREADS_PER_BLOCK % BLOCK_TILE_SIZE_X == 0U); static_assert( BLOCK_TILE_SIZE_X * BLOCK_TILE_SIZE_K % NUM_THREADS_PER_BLOCK == 0U); static_assert( BLOCK_TILE_SIZE_K * BLOCK_TILE_SIZE_Y % NUM_THREADS_PER_BLOCK == 0U); dim3 const block_dim{NUM_THREADS_PER_BLOCK, 1U, 1U}; dim3 const grid_dim{ (static_cast<unsigned int>(n) + BLOCK_TILE_SIZE_X - 1U) / BLOCK_TILE_SIZE_X, (static_cast<unsigned int>(m) + BLOCK_TILE_SIZE_Y - 1U) / BLOCK_TILE_SIZE_Y, 1U}; gemm_v05_vectorized<T, BLOCK_TILE_SIZE_X, BLOCK_TILE_SIZE_Y, BLOCK_TILE_SIZE_K, THREAD_TILE_SIZE_X, THREAD_TILE_SIZE_Y> <<<grid_dim, block_dim, 0U, stream>>>(m, n, k, *alpha, A, lda, B, ldb, *beta, C, ldc); CHECK_LAST_CUDA_ERROR();}
Except for the data loading, which now uses vectorized memory access, the rest of the kernel is the same as the previous implementation with 2D block tiling and 2D thread tiling. There is, however, a caveat for vectorized memory access in our use case that did not exist in the previous implementation. When we load data from global memory to shared memory, and from shared memory to registers, the matrices are 2D, so we need to make sure the data is correctly aligned for the vectorized memory access data type; otherwise, the behavior is undefined. For example, if we use int4 as the vectorized memory access data type, every address we access must be aligned to a multiple of 16 bytes. This is why we may have to pad the leading dimensions of matrix $A$ and matrix $B$ in global memory, and why the shared memory tile dimensions have to be chosen carefully.
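A small sketch of the kind of alignment bookkeeping implied here; the helper names are illustrative and not part of any library. Before reinterpreting a T* as an int4*, both the base pointer and the row stride in bytes must be multiples of sizeof(int4), so the leading dimension can be padded up front.

#include <cuda_runtime.h>
#include <cstddef>
#include <cstdint>

// Round a leading dimension (in elements) up so that every row starts at a
// 16-byte boundary, assuming the base pointer itself is 16-byte aligned and
// int4 (16 bytes) is the vector access type.
template <typename T>
constexpr size_t pad_leading_dimension(size_t ld)
{
    constexpr size_t NUM_VECTOR_UNITS{sizeof(int4) / sizeof(T)};
    return (ld + NUM_VECTOR_UNITS - 1U) / NUM_VECTOR_UNITS * NUM_VECTOR_UNITS;
}

// Check that a buffer with the given leading dimension can be accessed as int4.
template <typename T>
bool is_vectorized_access_safe(T const* ptr, size_t ld)
{
    bool const pointer_aligned{
        reinterpret_cast<std::uintptr_t>(ptr) % sizeof(int4) == 0U};
    bool const row_stride_aligned{(ld * sizeof(T)) % sizeof(int4) == 0U};
    return pointer_aligned && row_stride_aligned;
}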
The performance of this FP32 GEMM implementation reaches 19.66 TFLOPS on an NVIDIA GeForce RTX 3090 GPU.
Implementation with 2D Block Tiling and 2D Warp Tiling and 2D Thread Tiling and Vectorized Memory Access
In the CUDA programming model, a warp, which consists of 32 threads, is the smallest unit of scheduling and execution. Shared memory bank conflicts happen when threads in a warp access different addresses that map to the same shared memory bank in the same transaction. In our previous implementation, because the GEMM CUDA kernel was not organized in a warp-centric way, it was less obvious how to anticipate and avoid shared memory bank conflicts.
In this implementation, we will organize the GEMM CUDA kernel in a warp-centric way and use 2D warp tiling and 2D thread tiling so that the shared memory bank conflicts can be anticipated and optimized much easier.
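Concretely, the warp-centric organization boils down to integer arithmetic on the block's linear thread index. The following device helper is a sketch of that decomposition; the kernel below computes the same quantities inline, and NUM_WARPS_X and NUM_THREADS_PER_WARP_X are the compile-time constants it uses.

// Sketch: map a block's linear thread index onto (warp row, warp column)
// within the block tile and (thread row, thread column) within the warp tile.
template <size_t NUM_WARPS_X, size_t NUM_THREADS_PER_WARP_X>
__device__ void warp_centric_indices(size_t thread_linear_idx,
                                     size_t& warp_row_idx,
                                     size_t& warp_col_idx,
                                     size_t& thread_row_idx_in_warp,
                                     size_t& thread_col_idx_in_warp)
{
    // Which warp within the block, laid out row-major over the warp grid.
    size_t const warp_linear_idx{thread_linear_idx / 32U};
    warp_row_idx = warp_linear_idx / NUM_WARPS_X;
    warp_col_idx = warp_linear_idx % NUM_WARPS_X;
    // Which lane within the warp, laid out row-major over the thread grid.
    size_t const thread_linear_idx_in_warp{thread_linear_idx % 32U};
    thread_row_idx_in_warp = thread_linear_idx_in_warp / NUM_THREADS_PER_WARP_X;
    thread_col_idx_in_warp = thread_linear_idx_in_warp % NUM_THREADS_PER_WARP_X;
}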
Understanding warp tiling is almost exactly the same as understanding thread tiling.
Mathematically, given a matrix multiplication and accumulation operation $D_{b_m,b_n}^{d_{bm} \times d_{bn}} = \sum_{b_k=1}^{k/d_{bk}} A_{b_m,b_k}^{d_{bm} \times d_{bk}} B_{b_k,b_n}^{d_{bk} \times d_{bn}} + C_{b_m,b_n}^{d_{bm} \times d_{bn}}$, where $D_{b_m,b_n} \in \mathbb{R}^{d_{bm} \times d_{bn}}$, $A_{b_m,b_k} \in \mathbb{R}^{d_{bm} \times d_{bk}}$, $B_{b_k,b_n} \in \mathbb{R}^{d_{bk} \times d_{bn}}$, $C_{b_m,b_n} \in \mathbb{R}^{d_{bm} \times d_{bn}}$, the matrices could be divided into smaller matrices.
$$A_{b_m,b_k}^{d_{bm} \times d_{bk}} =\begin{bmatrix}\left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{1,1}^{d_{wm} \times d_{wk}} & \left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{1,2}^{d_{wm} \times d_{wk}} & \cdots & \left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{1,d_{bk}/d_{wk}}^{d_{wm} \times d_{wk}} \\\left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{2,1}^{d_{wm} \times d_{wk}} & \left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{2,2}^{d_{wm} \times d_{wk}} & \cdots & \left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{2,d_{bk}/d_{wk}}^{d_{wm} \times d_{wk}} \\\vdots & \vdots & \ddots & \vdots \\\left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{d_{bm}/d_{wm},1}^{d_{wm} \times d_{wk}} & \left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{d_{bm}/d_{wm},2}^{d_{wm} \times d_{wk}} & \cdots & \left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{d_{bm}/d_{wm},d_{bk}/d_{wk}}^{d_{wm} \times d_{wk}} \\\end{bmatrix}$$
$$B_{b_k,b_n}^{d_{bk} \times d_{bn}} =\begin{bmatrix}\left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{1,1}^{d_{wk} \times d_{wn}} & \left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{1,2}^{d_{wk} \times d_{wn}} & \cdots & \left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{1,d_{bn}/d_{wn}}^{d_{wk} \times d_{wn}} \\\left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{2,1}^{d_{wk} \times d_{wn}} & \left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{2,2}^{d_{wk} \times d_{wn}} & \cdots & \left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{2,d_{bn}/d_{wn}}^{d_{wk} \times d_{wn}} \\\vdots & \vdots & \ddots & \vdots \\\left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{d_{bk}/d_{wk},1}^{d_{wk} \times d_{wn}} & \left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{d_{bk}/d_{wk},2}^{d_{wk} \times d_{wn}} & \cdots & \left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{d_{bk}/d_{wk},d_{bn}/d_{wn}}^{d_{wk} \times d_{wn}} \\\end{bmatrix}$$
$$C_{b_m,b_n}^{d_{bm} \times d_{bn}} =\begin{bmatrix}\left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{1,1}^{d_{wm} \times d_{wn}} & \left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{1,2}^{d_{wm} \times d_{wn}} & \cdots & \left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{1,d_{bn}/d_{wn}}^{d_{wm} \times d_{wn}} \\\left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{2,1}^{d_{wm} \times d_{wn}} & \left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{2,2}^{d_{wm} \times d_{wn}} & \cdots & \left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{2,d_{bn}/d_{wn}}^{d_{wm} \times d_{wn}} \\\vdots & \vdots & \ddots & \vdots \\\left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{d_{bm}/d_{wm},1}^{d_{wm} \times d_{wn}} & \left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{d_{bm}/d_{wm},2}^{d_{wm} \times d_{wn}} & \cdots & \left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{d_{bm}/d_{wm},d_{bn}/d_{wn}}^{d_{wm} \times d_{wn}} \\\end{bmatrix}$$
$$D_{b_m,b_n}^{d_{bm} \times d_{bn}} =\begin{bmatrix}\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{1,1}^{d_{wm} \times d_{wn}} & \left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{1,2}^{d_{wm} \times d_{wn}} & \cdots & \left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{1,d_{bn}/d_{wn}}^{d_{wm} \times d_{wn}} \\\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{2,1}^{d_{wm} \times d_{wn}} & \left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{2,2}^{d_{wm} \times d_{wn}} & \cdots & \left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{2,d_{bn}/d_{wn}}^{d_{wm} \times d_{wn}} \\\vdots & \vdots & \ddots & \vdots \\\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{d_{bm}/d_{wm},1}^{d_{wm} \times d_{wn}} & \left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{d_{bm}/d_{wm},2}^{d_{wm} \times d_{wn}} & \cdots & \left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{d_{bm}/d_{wm},d_{bn}/d_{wn}}^{d_{wm} \times d_{wn}} \\\end{bmatrix}$$
Each small matrix in $D_{b_m,b_n}^{d_{bm} \times d_{bn}}$ is computed as multiple small matrix multiplications and accumulations.
$$\begin{aligned}\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{w_m,w_n}^{d_{wm} \times d_{wn}}&= \left( \sum_{b_k=1}^{k/d_{bk}} A_{b_m,b_k}^{d_{bm} \times d_{bk}} B_{b_k,b_n}^{d_{bk} \times d_{bn}} + C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{w_m,w_n}^{d_{wm} \times d_{wn}} \\&= \sum_{b_k=1}^{k/d_{bk}} \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_m,w_n}^{d_{wm} \times d_{wn}} + \left( C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{w_m,w_n}^{d_{wm} \times d_{wn}} \\&= \sum_{b_k=1}^{k/d_{bk}} \left( \sum_{w_k=1}^{d_{bk} / d_{wk}} \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right) + \left( C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{w_m,w_n}^{d_{wm} \times d_{wn}} \\\end{aligned}$$
Each warp with block warp index $(w_m, w_n)$, where $w_m \in [1, d_{bm} / d_{wm}]$ and $w_n \in [1, d_{bn} / d_{wn}]$, in the block with block index $(b_m, b_n)$, where $b_m \in [1, m/d_{bm}]$ and $b_n \in [1, n/d_{bn}]$, is responsible for computing one small matrix multiplication and accumulation $\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{w_m,w_n}^{d_{wm} \times d_{wn}}$.
So far, everything looks the same as the mathematical descriptions for the 2D thread tiling, except that the thread indices and thread tile sizes are replaced by the warp indices and warp tile sizes.
The remaining question is how to use all 32 threads in the warp with block warp index $(w_m, w_n)$ to compute $\left(\left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}}\right)^{d_{wm} \times d_{wn}}$. There is no unique way to do this; the way we chose is 2D thread tiling. Suppose the number of threads per warp along the row is $m_{t}$ and the number of threads per warp along the column is $n_{t}$; then we must have $m_{t} \times n_{t} = 32$. Each thread in the warp is responsible for computing $\left(d_{wm} / m_{t}\right) \times \left(d_{wn} / n_{t}\right)$ values of the output matrix $\left(\left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}}\right)^{d_{wm} \times d_{wn}}$. We then set the thread tile sizes to $d_{tm}$ for the row and $d_{tn}$ for the column, such that $\left(d_{wm} / m_{t} \right) \mod d_{tm} = 0$ and $\left(d_{wn} / n_{t} \right) \mod d_{tn} = 0$. Each thread in the warp therefore computes $\left(\left(d_{wm} / m_{t} \right) / d_{tm} \right) \times \left(\left(d_{wn} / n_{t} \right) / d_{tn} \right)$ tiles of size $d_{tm} \times d_{tn}$ of the output matrix $\left(\left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}}\right)^{d_{wm} \times d_{wn}}$.
Suppose the thread tile index is $(t_m, t_n)$, where $t_m \in [1, d_{wm} / d_{tm}]$ and $t_n \in [1, d_{wn} / d_{tn}]$. The thread responsible for computing this tile has the warp thread index $(t_m \mod m_t, t_n \mod n_t)$. The matrix $\left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}}$ can be divided along the row dimension into $d_{wm} / d_{tm}$ fragments, and the matrix $\left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}}$ can be divided along the column dimension into $d_{wn} / d_{tn}$ fragments. We have
$$\left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} =\begin{bmatrix}\left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{1,1}^{d_{tm} \times d_{tk}} & \left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{1,2}^{d_{tm} \times d_{tk}} & \cdots & \left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{1,d_{wk}/d_{tk}}^{d_{tm} \times d_{tk}} \\\left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{2,1}^{d_{tm} \times d_{tk}} & \left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{2,2}^{d_{tm} \times d_{tk}} & \cdots & \left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{2,d_{wk}/d_{tk}}^{d_{tm} \times d_{tk}} \\\vdots & \vdots & \ddots & \vdots \\\left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{d_{wm}/d_{tm},1}^{d_{tm} \times d_{tk}} & \left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{d_{wm}/d_{tm},2}^{d_{tm} \times d_{tk}} & \cdots & \left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{d_{wm}/d_{tm},d_{wk}/d_{tk}}^{d_{tm} \times d_{tk}} \\\end{bmatrix}$$
$$\left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} =\begin{bmatrix}\left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{1,1}^{d_{tk} \times d_{tn}} & \left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{1,2}^{d_{tk} \times d_{tn}} & \cdots & \left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{1,d_{wn}/d_{tn}}^{d_{tk} \times d_{tn}} \\\left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{2,1}^{d_{tk} \times d_{tn}} & \left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{2,2}^{d_{tk} \times d_{tn}} & \cdots & \left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{2,d_{wn}/d_{tn}}^{d_{tk} \times d_{tn}} \\\vdots & \vdots & \ddots & \vdots \\\left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{d_{wk}/d_{tk},1}^{d_{tk} \times d_{tn}} & \left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{d_{wk}/d_{tk},2}^{d_{tk} \times d_{tn}} & \cdots & \left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{d_{wk}/d_{tk},d_{wn}/d_{tn}}^{d_{tk} \times d_{tn}} \\\end{bmatrix}$$
Each thread with warp thread index $(t_m \mod m_t, t_n \mod n_t)$ is responsible for computing one small matrix multiplication and accumulation $\left(\left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}}\right)_{t_m, t_n}^{d_{tm} \times d_{tn}}$.
The thread tile $\left(\left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}}\right)_{t_m, t_n}^{d_{tm} \times d_{tn}}$ can be computed as follows.
$$\begin{aligned}\left(\left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}}\right)_{t_m, t_n}^{d_{tm} \times d_{tn}}&= \sum_{t_k=1}^{d_{wk} / d_{tk}} \left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{t_m,t_k}^{d_{tm} \times d_{tk}} \left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{t_k,t_n}^{d_{tk} \times d_{tn}} \\\end{aligned}$$
Taken together, the thread tile $\left(\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{w_m,w_n}^{d_{wm} \times d_{wn}}\right)_{t_m, t_n}^{d_{tm} \times d_{tn}}$ can be computed as follows.
$$\begin{aligned}\left(\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{w_m,w_n}^{d_{wm} \times d_{wn}}\right)_{t_m, t_n}^{d_{tm} \times d_{tn}}&= \left(\sum_{b_k=1}^{k/d_{bk}} \left( \sum_{w_k=1}^{d_{bk} / d_{wk}} \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right) + \left( C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{w_m,w_n}^{d_{wm} \times d_{wn}}\right)_{t_m, t_n}^{d_{tm} \times d_{tn}} \\&= \sum_{b_k=1}^{k/d_{bk}} \left( \sum_{w_k=1}^{d_{bk} / d_{wk}} \left( \sum_{t_k=1}^{d_{wk} / d_{tk}} \left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{t_m,t_k}^{d_{tm} \times d_{tk}} \left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{t_k,t_n}^{d_{tk} \times d_{tn}} \right) + \left( C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{w_m,w_n}^{d_{wm} \times d_{wn}}\right)_{t_m, t_n}^{d_{tm} \times d_{tn}} \\\end{aligned}$$
In this implementation, we set $d_{wk} = d_{tk}$ to make the thread tiling algorithm simpler.
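As a concrete check, the launch function below uses $d_{bm} = d_{bn} = 128$, $d_{wm} = 64$, $d_{wn} = 32$, $m_t = 8$, $n_t = 4$, and $d_{tm} = d_{tn} = 8$, which gives

$$\frac{d_{bm}}{d_{wm}} \times \frac{d_{bn}}{d_{wn}} = \frac{128}{64} \times \frac{128}{32} = 8 \text{ warps per block}, \qquad \frac{d_{wm}}{m_t} \times \frac{d_{wn}}{n_t} = \frac{64}{8} \times \frac{32}{4} = 8 \times 8 \text{ output values per thread},$$

$$\frac{d_{wm} / m_t}{d_{tm}} \times \frac{d_{wn} / n_t}{d_{tn}} = \frac{8}{8} \times \frac{8}{8} = 1 \times 1 \text{ thread tile of size } d_{tm} \times d_{tn} = 8 \times 8 \text{ per warp tile.}$$

That is, each block runs 8 warps (256 threads), and each thread owns a single $8 \times 8$ register tile within its warp tile.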
The following code snippet shows the implementation with 2D block tiling and 2D warp tiling and 2D thread tiling and vectorized memory access.
// GEMM kernel v06.// Each thread in the block processes THREAD_TILE_SIZE_Y *// THREAD_TILE_SIZE_X output values. Number of threads BLOCK_TILE_SIZE_Y *// BLOCK_TILE_SIZE_X / (THREAD_TILE_SIZE_Y * THREAD_TILE_SIZE_X)template <typename T, size_t BLOCK_TILE_SIZE_X, size_t BLOCK_TILE_SIZE_Y, size_t BLOCK_TILE_SIZE_K, size_t WARP_TILE_SIZE_X, size_t WARP_TILE_SIZE_Y, size_t THREAD_TILE_SIZE_X, size_t THREAD_TILE_SIZE_Y, size_t NUM_THREADS_PER_WARP_X, size_t NUM_THREADS_PER_WARP_Y>__global__ void gemm_v06_vectorized(size_t m, size_t n, size_t k, T alpha, T const* A, size_t lda, T const* B, size_t ldb, T beta, T* C, size_t ldc){ static_assert(NUM_THREADS_PER_WARP_X * NUM_THREADS_PER_WARP_Y == 32U); constexpr size_t NUM_WARPS_X{BLOCK_TILE_SIZE_X / WARP_TILE_SIZE_X}; static_assert(BLOCK_TILE_SIZE_X % WARP_TILE_SIZE_X == 0U); constexpr size_t NUM_WARPS_Y{BLOCK_TILE_SIZE_Y / WARP_TILE_SIZE_Y}; static_assert(BLOCK_TILE_SIZE_Y % WARP_TILE_SIZE_Y == 0U); constexpr unsigned int NUM_THREAD_TILES_PER_WARP_X{ WARP_TILE_SIZE_X / (THREAD_TILE_SIZE_X * NUM_THREADS_PER_WARP_X)}; constexpr unsigned int NUM_THREAD_TILES_PER_WARP_Y{ WARP_TILE_SIZE_Y / (THREAD_TILE_SIZE_Y * NUM_THREADS_PER_WARP_Y)}; static_assert( WARP_TILE_SIZE_X % (THREAD_TILE_SIZE_X * NUM_THREADS_PER_WARP_X) == 0U); static_assert( WARP_TILE_SIZE_Y % (THREAD_TILE_SIZE_Y * NUM_THREADS_PER_WARP_Y) == 0U); constexpr unsigned int NUM_THREADS_X{NUM_WARPS_X * NUM_THREADS_PER_WARP_X}; constexpr unsigned int NUM_THREADS_Y{NUM_WARPS_Y * NUM_THREADS_PER_WARP_Y}; // Avoid using blockDim.x * blockDim.y as the number of threads per block. // Because it is a runtime constant and the compiler cannot optimize the // loop unrolling based on that. // Use a compile time constant instead. constexpr size_t NUM_THREADS{NUM_THREADS_X * NUM_THREADS_Y}; // Cache a tile of A and B in shared memory for data reuse. __shared__ T A_thread_block_tile_transposed[BLOCK_TILE_SIZE_K][BLOCK_TILE_SIZE_Y]; __shared__ T B_thread_block_tile[BLOCK_TILE_SIZE_K][BLOCK_TILE_SIZE_X]; // A_vals is cached in the register. T A_vals[NUM_THREAD_TILES_PER_WARP_Y][THREAD_TILE_SIZE_Y] = { static_cast<T>(0)}; // B_vals is cached in the register. T B_vals[NUM_THREAD_TILES_PER_WARP_X][THREAD_TILE_SIZE_X] = { static_cast<T>(0)}; size_t const thread_linear_idx{threadIdx.y * blockDim.x + threadIdx.x}; size_t const warp_linear_idx{thread_linear_idx / 32U}; size_t const warp_row_idx{warp_linear_idx / NUM_WARPS_X}; size_t const warp_col_idx{warp_linear_idx % NUM_WARPS_X}; size_t const thread_linear_idx_in_warp{thread_linear_idx % 32U}; size_t const thread_linear_row_idx_in_warp{thread_linear_idx_in_warp / NUM_THREADS_PER_WARP_X}; size_t const thread_linear_col_idx_in_warp{thread_linear_idx_in_warp % NUM_THREADS_PER_WARP_X}; // Number of outer loops to perform the sum of inner products. // C_thread_block_tile = // \sigma_{thread_block_tile_idx=0}^{num_thread_block_tiles-1} A[:, // thread_block_tile_idx:BLOCK_TILE_SIZE_K] * // B[thread_block_tile_idx:BLOCK_TILE_SIZE_K, :] size_t const num_thread_block_tiles{(k + BLOCK_TILE_SIZE_K - 1) / BLOCK_TILE_SIZE_K}; // Each thread in the block processes NUM_THREAD_TILES_PER_WARP_Y * // NUM_THREAD_TILES_PER_WARP_X * THREAD_TILE_SIZE_Y * // THREAD_TILE_SIZE_X output values. 
T C_thread_results[NUM_THREAD_TILES_PER_WARP_Y][NUM_THREAD_TILES_PER_WARP_X] [THREAD_TILE_SIZE_Y][THREAD_TILE_SIZE_X] = { static_cast<T>(0)}; constexpr size_t NUM_VECTOR_UNITS{sizeof(int4) / sizeof(T)}; static_assert(sizeof(int4) % sizeof(T) == 0U); static_assert(BLOCK_TILE_SIZE_K % NUM_VECTOR_UNITS == 0U); static_assert(BLOCK_TILE_SIZE_X % NUM_VECTOR_UNITS == 0U); constexpr size_t VECTORIZED_THREAD_TILE_SIZE_X{THREAD_TILE_SIZE_X / NUM_VECTOR_UNITS}; static_assert(THREAD_TILE_SIZE_X % NUM_VECTOR_UNITS == 0U); constexpr size_t VECTORIZED_THREAD_TILE_SIZE_Y{THREAD_TILE_SIZE_Y / NUM_VECTOR_UNITS}; static_assert(THREAD_TILE_SIZE_Y % NUM_VECTOR_UNITS == 0U); for (size_t thread_block_tile_idx{0U}; thread_block_tile_idx < num_thread_block_tiles; ++thread_block_tile_idx) { load_data_to_shared_memory_transposed_vectorized< T, BLOCK_TILE_SIZE_X, BLOCK_TILE_SIZE_Y, BLOCK_TILE_SIZE_K, NUM_THREADS>(A, lda, B, ldb, A_thread_block_tile_transposed, B_thread_block_tile, thread_block_tile_idx, thread_linear_idx, m, n, k); __syncthreads();// Perform A[:, thread_block_tile_idx:BLOCK_TILE_SIZE_K] *// B[thread_block_tile_idx:BLOCK_TILE_SIZE_K, :] where A[:,// thread_block_tile_idx:BLOCK_TILE_SIZE_K] and// B[thread_block_tile_idx:BLOCK_TILE_SIZE_K, :] are cached in the// shared memory as A_thread_block_tile and B_thread_block_tile,// respectively. This inner product is further decomposed to// BLOCK_TILE_SIZE_K outer products. A_thread_block_tile *// B_thread_block_tile = \sigma_{k_i=0}^{BLOCK_TILE_SIZE_K-1}// A_thread_block_tile[:, k_i] @ B_thread_block_tile[k_i, :] Note that// both A_thread_block_tile and B_thread_block_tile can be cached in the// register.#pragma unroll for (size_t k_i{0U}; k_i < BLOCK_TILE_SIZE_K; ++k_i) {#pragma unroll for (size_t thread_tile_repeat_row_idx{0U}; thread_tile_repeat_row_idx < NUM_THREAD_TILES_PER_WARP_Y; ++thread_tile_repeat_row_idx) { size_t const A_thread_block_tile_row_idx{ warp_row_idx * WARP_TILE_SIZE_Y + thread_tile_repeat_row_idx * (WARP_TILE_SIZE_Y / NUM_THREAD_TILES_PER_WARP_Y) + thread_linear_row_idx_in_warp * THREAD_TILE_SIZE_Y}; size_t const A_thread_block_tile_col_idx{k_i};#pragma unroll for (size_t thread_tile_y_vector_idx{0U}; thread_tile_y_vector_idx < VECTORIZED_THREAD_TILE_SIZE_Y; ++thread_tile_y_vector_idx) { *reinterpret_cast<int4*>( &A_vals[thread_tile_repeat_row_idx] [thread_tile_y_vector_idx * NUM_VECTOR_UNITS]) = *reinterpret_cast<int4 const*>( &A_thread_block_tile_transposed [A_thread_block_tile_col_idx] [A_thread_block_tile_row_idx + thread_tile_y_vector_idx * NUM_VECTOR_UNITS]); } }#pragma unroll for (size_t thread_tile_repeat_col_idx{0U}; thread_tile_repeat_col_idx < NUM_THREAD_TILES_PER_WARP_X; ++thread_tile_repeat_col_idx) { size_t const B_thread_block_tile_row_idx{k_i}; size_t const B_thread_block_tile_col_idx{ warp_col_idx * WARP_TILE_SIZE_X + thread_tile_repeat_col_idx * (WARP_TILE_SIZE_X / NUM_THREAD_TILES_PER_WARP_X) + thread_linear_col_idx_in_warp * THREAD_TILE_SIZE_X};#pragma unroll for (size_t thread_tile_x_vector_idx{0U}; thread_tile_x_vector_idx < VECTORIZED_THREAD_TILE_SIZE_X; ++thread_tile_x_vector_idx) { *reinterpret_cast<int4*>( &B_vals[thread_tile_repeat_col_idx] [thread_tile_x_vector_idx * NUM_VECTOR_UNITS]) = *reinterpret_cast<int4 const*>( &B_thread_block_tile[B_thread_block_tile_row_idx] [B_thread_block_tile_col_idx + thread_tile_x_vector_idx * NUM_VECTOR_UNITS]); } }// Compute NUM_THREAD_TILES_PER_WARP_Y * NUM_THREAD_TILES_PER_WARP_X outer// products.#pragma unroll for (size_t thread_tile_repeat_row_idx{0U}; 
thread_tile_repeat_row_idx < NUM_THREAD_TILES_PER_WARP_Y; ++thread_tile_repeat_row_idx) {#pragma unroll for (size_t thread_tile_repeat_col_idx{0U}; thread_tile_repeat_col_idx < NUM_THREAD_TILES_PER_WARP_X; ++thread_tile_repeat_col_idx) {#pragma unroll for (size_t thread_tile_y_idx{0U}; thread_tile_y_idx < THREAD_TILE_SIZE_Y; ++thread_tile_y_idx) {#pragma unroll for (size_t thread_tile_x_idx{0U}; thread_tile_x_idx < THREAD_TILE_SIZE_X; ++thread_tile_x_idx) { C_thread_results[thread_tile_repeat_row_idx] [thread_tile_repeat_col_idx] [thread_tile_y_idx] [thread_tile_x_idx] += A_vals[thread_tile_repeat_row_idx] [thread_tile_y_idx] * B_vals[thread_tile_repeat_col_idx] [thread_tile_x_idx]; } } } } } __syncthreads(); }// Write the results to DRAM.#pragma unroll for (size_t thread_tile_repeat_row_idx{0U}; thread_tile_repeat_row_idx < NUM_THREAD_TILES_PER_WARP_Y; ++thread_tile_repeat_row_idx) {#pragma unroll for (size_t thread_tile_repeat_col_idx{0U}; thread_tile_repeat_col_idx < NUM_THREAD_TILES_PER_WARP_X; ++thread_tile_repeat_col_idx) {#pragma unroll for (size_t thread_tile_y_idx{0U}; thread_tile_y_idx < THREAD_TILE_SIZE_Y; ++thread_tile_y_idx) {#pragma unroll for (size_t thread_tile_x_vector_idx{0U}; thread_tile_x_vector_idx < VECTORIZED_THREAD_TILE_SIZE_X; ++thread_tile_x_vector_idx) { size_t const C_row_idx{ blockIdx.y * BLOCK_TILE_SIZE_Y + warp_row_idx * WARP_TILE_SIZE_Y + thread_tile_repeat_row_idx * (WARP_TILE_SIZE_Y / NUM_THREAD_TILES_PER_WARP_Y) + thread_linear_row_idx_in_warp * THREAD_TILE_SIZE_Y + thread_tile_y_idx}; size_t const C_col_idx{ blockIdx.x * BLOCK_TILE_SIZE_X + warp_col_idx * WARP_TILE_SIZE_X + thread_tile_repeat_col_idx * (WARP_TILE_SIZE_X / NUM_THREAD_TILES_PER_WARP_X) + thread_linear_col_idx_in_warp * THREAD_TILE_SIZE_X + thread_tile_x_vector_idx * NUM_VECTOR_UNITS}; if (C_row_idx < m && C_col_idx < n) { int4 C_vals{*reinterpret_cast<int4 const*>( &C[C_row_idx * ldc + C_col_idx])};#pragma unroll for (size_t i{0U}; i < NUM_VECTOR_UNITS; ++i) { reinterpret_cast<T*>(&C_vals)[i] = alpha * C_thread_results[thread_tile_repeat_row_idx] [thread_tile_repeat_col_idx] [thread_tile_y_idx] [thread_tile_x_vector_idx * NUM_VECTOR_UNITS + i] + beta * reinterpret_cast<T const*>(&C_vals)[i]; } *reinterpret_cast<int4*>( &C[C_row_idx * ldc + C_col_idx]) = C_vals; } } } } }}template <typename T>void launch_gemm_kernel_v06_vectorized(size_t m, size_t n, size_t k, T const* alpha, T const* A, size_t lda, T const* B, size_t ldb, T const* beta, T* C, size_t ldc, cudaStream_t stream){ // Feel free to play with the block tile sizes. // The algorithm correctness should always be guaranteed. 
constexpr unsigned int BLOCK_TILE_SIZE_X{128U}; constexpr unsigned int BLOCK_TILE_SIZE_Y{128U}; constexpr unsigned int BLOCK_TILE_SIZE_K{16U}; constexpr unsigned int WARP_TILE_SIZE_X{32U}; constexpr unsigned int WARP_TILE_SIZE_Y{64U}; constexpr unsigned int NUM_WARPS_X{BLOCK_TILE_SIZE_X / WARP_TILE_SIZE_X}; constexpr unsigned int NUM_WARPS_Y{BLOCK_TILE_SIZE_Y / WARP_TILE_SIZE_Y}; static_assert(BLOCK_TILE_SIZE_X % WARP_TILE_SIZE_X == 0U); static_assert(BLOCK_TILE_SIZE_Y % WARP_TILE_SIZE_Y == 0U); constexpr unsigned int THREAD_TILE_SIZE_X{8U}; constexpr unsigned int THREAD_TILE_SIZE_Y{8U}; constexpr unsigned int NUM_THREADS_PER_WARP_X{4U}; constexpr unsigned int NUM_THREADS_PER_WARP_Y{8U}; static_assert(NUM_THREADS_PER_WARP_X * NUM_THREADS_PER_WARP_Y == 32U); static_assert( WARP_TILE_SIZE_X % (THREAD_TILE_SIZE_X * NUM_THREADS_PER_WARP_X) == 0U); static_assert( WARP_TILE_SIZE_Y % (THREAD_TILE_SIZE_Y * NUM_THREADS_PER_WARP_Y) == 0U); constexpr unsigned int NUM_THREADS_X{NUM_WARPS_X * NUM_THREADS_PER_WARP_X}; constexpr unsigned int NUM_THREADS_Y{NUM_WARPS_Y * NUM_THREADS_PER_WARP_Y}; constexpr unsigned int NUM_THREADS_PER_BLOCK{NUM_THREADS_X * NUM_THREADS_Y}; dim3 const block_dim{NUM_THREADS_PER_BLOCK, 1U, 1U}; dim3 const grid_dim{ (static_cast<unsigned int>(n) + BLOCK_TILE_SIZE_X - 1U) / BLOCK_TILE_SIZE_X, (static_cast<unsigned int>(m) + BLOCK_TILE_SIZE_Y - 1U) / BLOCK_TILE_SIZE_Y, 1U}; gemm_v06_vectorized<T, BLOCK_TILE_SIZE_X, BLOCK_TILE_SIZE_Y, BLOCK_TILE_SIZE_K, WARP_TILE_SIZE_X, WARP_TILE_SIZE_Y, THREAD_TILE_SIZE_X, THREAD_TILE_SIZE_Y, NUM_THREADS_PER_WARP_X, NUM_THREADS_PER_WARP_Y> <<<grid_dim, block_dim, 0U, stream>>>(m, n, k, *alpha, A, lda, B, ldb, *beta, C, ldc); CHECK_LAST_CUDA_ERROR();}
The performance of this FP32 GEMM implementation reaches 20.16 TFLOPS on an NVIDIA GeForce RTX 3090 GPU. Compared to the cuBLAS FP32 GEMM performance, which is 24.59 TFLOPS, this implementation has been optimized reasonably well.
Implementation with 2D Block Tiling and 2D Warp Tiling and Tensor Core and Vectorized Memory Access
Because we have already organized the GEMM CUDA kernel in a warp-centric way, and NVIDIA Tensor Core instructions are exposed at the warp level, it is straightforward to use the NVIDIA Tensor Core WMMA APIs to further accelerate the GEMM computation. Because the NVIDIA Tensor Core does not support IEEE FP32 computation, we will make this CUDA kernel run FP16 GEMM instead.
Compared to the implementation with 2D block tiling, 2D warp tiling, 2D thread tiling, and vectorized memory access, the implementation with 2D block tiling, 2D warp tiling, Tensor Core, and vectorized memory access is simpler, because the thread tiling is abstracted away by the NVIDIA Tensor Core warp-level WMMA APIs.
Mathematically, given a matrix multiplication and accumulation operation $D_{b_m,b_n}^{d_{bm} \times d_{bn}} = \sum_{b_k=1}^{k/d_{bk}} A_{b_m,b_k}^{d_{bm} \times d_{bk}} B_{b_k,b_n}^{d_{bk} \times d_{bn}} + C_{b_m,b_n}^{d_{bm} \times d_{bn}}$, where $D_{b_m,b_n} \in \mathbb{R}^{d_{bm} \times d_{bn}}$, $A_{b_m,b_k} \in \mathbb{R}^{d_{bm} \times d_{bk}}$, $B_{b_k,b_n} \in \mathbb{R}^{d_{bk} \times d_{bn}}$, $C_{b_m,b_n} \in \mathbb{R}^{d_{bm} \times d_{bn}}$, the matrices could be divided into smaller matrices.
$$A_{b_m,b_k}^{d_{bm} \times d_{bk}} =\begin{bmatrix}\left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{1,1}^{d_{wm} \times d_{wk}} & \left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{1,2}^{d_{wm} \times d_{wk}} & \cdots & \left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{1,d_{bk}/d_{wk}}^{d_{wm} \times d_{wk}} \\\left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{2,1}^{d_{wm} \times d_{wk}} & \left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{2,2}^{d_{wm} \times d_{wk}} & \cdots & \left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{2,d_{bk}/d_{wk}}^{d_{wm} \times d_{wk}} \\\vdots & \vdots & \ddots & \vdots \\\left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{d_{bm}/d_{wm},1}^{d_{wm} \times d_{wk}} & \left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{d_{bm}/d_{wm},2}^{d_{wm} \times d_{wk}} & \cdots & \left(A_{b_m,b_k}^{d_{bm} \times d_{bk}}\right)_{d_{bm}/d_{wm},d_{bk}/d_{wk}}^{d_{wm} \times d_{wk}} \\\end{bmatrix}$$
$$B_{b_k,b_n}^{d_{bk} \times d_{bn}} =\begin{bmatrix}\left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{1,1}^{d_{wk} \times d_{wn}} & \left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{1,2}^{d_{wk} \times d_{wn}} & \cdots & \left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{1,d_{bn}/d_{wn}}^{d_{wk} \times d_{wn}} \\\left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{2,1}^{d_{wk} \times d_{wn}} & \left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{2,2}^{d_{wk} \times d_{wn}} & \cdots & \left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{2,d_{bn}/d_{wn}}^{d_{wk} \times d_{wn}} \\\vdots & \vdots & \ddots & \vdots \\\left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{d_{bk}/d_{wk},1}^{d_{wk} \times d_{wn}} & \left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{d_{bk}/d_{wk},2}^{d_{wk} \times d_{wn}} & \cdots & \left(B_{b_k,b_n}^{d_{bk} \times d_{bn}}\right)_{d_{bk}/d_{wk},d_{bn}/d_{wn}}^{d_{wk} \times d_{wn}} \\\end{bmatrix}$$
$$C_{b_m,b_n}^{d_{bm} \times d_{bn}} =\begin{bmatrix}\left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{1,1}^{d_{wm} \times d_{wn}} & \left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{1,2}^{d_{wm} \times d_{wn}} & \cdots & \left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{1,d_{bn}/d_{wn}}^{d_{wm} \times d_{wn}} \\\left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{2,1}^{d_{wm} \times d_{wn}} & \left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{2,2}^{d_{wm} \times d_{wn}} & \cdots & \left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{2,d_{bn}/d_{wn}}^{d_{wm} \times d_{wn}} \\\vdots & \vdots & \ddots & \vdots \\\left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{d_{bm}/d_{wm},1}^{d_{wm} \times d_{wn}} & \left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{d_{bm}/d_{wm},2}^{d_{wm} \times d_{wn}} & \cdots & \left(C_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{d_{bm}/d_{wm},d_{bn}/d_{wn}}^{d_{wm} \times d_{wn}} \\\end{bmatrix}$$
$$D_{b_m,b_n}^{d_{bm} \times d_{bn}} =\begin{bmatrix}\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{1,1}^{d_{wm} \times d_{wn}} & \left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{1,2}^{d_{wm} \times d_{wn}} & \cdots & \left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{1,d_{bn}/d_{wn}}^{d_{wm} \times d_{wn}} \\\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{2,1}^{d_{wm} \times d_{wn}} & \left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{2,2}^{d_{wm} \times d_{wn}} & \cdots & \left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{2,d_{bn}/d_{wn}}^{d_{wm} \times d_{wn}} \\\vdots & \vdots & \ddots & \vdots \\\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{d_{bm}/d_{wm},1}^{d_{wm} \times d_{wn}} & \left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{d_{bm}/d_{wm},2}^{d_{wm} \times d_{wn}} & \cdots & \left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{d_{bm}/d_{wm},d_{bn}/d_{wn}}^{d_{wm} \times d_{wn}} \\\end{bmatrix}$$
Each small matrix in $D_{b_m,b_n}^{d_{bm} \times d_{bn}}$ is computed as multiple small matrix multiplications and accumulations.
$$\begin{aligned}\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{w_m,w_n}^{d_{wm} \times d_{wn}}&= \left( \sum_{b_k=1}^{k/d_{bk}} A_{b_m,b_k}^{d_{bm} \times d_{bk}} B_{b_k,b_n}^{d_{bk} \times d_{bn}} + C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{w_m,w_n}^{d_{wm} \times d_{wn}} \\&= \sum_{b_k=1}^{k/d_{bk}} \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_m,w_n}^{d_{wm} \times d_{wn}} + \left( C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{w_m,w_n}^{d_{wm} \times d_{wn}} \\&= \sum_{b_k=1}^{k/d_{bk}} \left( \sum_{w_k=1}^{d_{bk} / d_{wk}} \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right) + \left( C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{w_m,w_n}^{d_{wm} \times d_{wn}} \\\end{aligned}$$
Each warp with block warp index $(w_m, w_n)$, where $w_m \in [1, d_{bm} / d_{wm}]$ and $w_n \in [1, d_{bn} / d_{wn}]$, in the block with block index $(b_m, b_n)$, where $b_m \in [1, m/d_{bm}]$ and $b_n \in [1, n/d_{bn}]$, is responsible for computing one small matrix multiplication and accumulation $\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{w_m,w_n}^{d_{wm} \times d_{wn}}$.
Suppose the Tensor Core WMMA GEMM size is $d_{tm} \times d_{tn} \times d_{tk}$. The matrix $\left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}}$ can be divided along the row dimension into $d_{wm} / d_{tm}$ fragments, and the matrix $\left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}}$ can be divided along the column dimension into $d_{wn} / d_{tn}$ fragments. We have
$$\left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} =\begin{bmatrix}\left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{1,1}^{d_{tm} \times d_{tk}} & \left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{1,2}^{d_{tm} \times d_{tk}} & \cdots & \left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{1,d_{wk}/d_{tk}}^{d_{tm} \times d_{tk}} \\\left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{2,1}^{d_{tm} \times d_{tk}} & \left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{2,2}^{d_{tm} \times d_{tk}} & \cdots & \left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{2,d_{wk}/d_{tk}}^{d_{tm} \times d_{tk}} \\\vdots & \vdots & \ddots & \vdots \\\left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{d_{wm}/d_{tm},1}^{d_{tm} \times d_{tk}} & \left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{d_{wm}/d_{tm},2}^{d_{tm} \times d_{tk}} & \cdots & \left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{d_{wm}/d_{tm},d_{wk}/d_{tk}}^{d_{tm} \times d_{tk}} \\\end{bmatrix}$$
$$\left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} =\begin{bmatrix}\left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{1,1}^{d_{tk} \times d_{tn}} & \left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{1,2}^{d_{tk} \times d_{tn}} & \cdots & \left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{1,d_{wn}/d_{tn}}^{d_{tk} \times d_{tn}} \\\left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{2,1}^{d_{tk} \times d_{tn}} & \left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{2,2}^{d_{tk} \times d_{tn}} & \cdots & \left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{2,d_{wn}/d_{tn}}^{d_{tk} \times d_{tn}} \\\vdots & \vdots & \ddots & \vdots \\\left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{d_{wk}/d_{tk},1}^{d_{tk} \times d_{tn}} & \left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{d_{wk}/d_{tk},2}^{d_{tk} \times d_{tn}} & \cdots & \left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{d_{wk}/d_{tk},d_{wn}/d_{tn}}^{d_{tk} \times d_{tn}} \\\end{bmatrix}$$
Instead of issuing thread-level instructions, each warp calls the warp-level WMMA Tensor Core APIs to compute all the $\left(\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{w_m,w_n}^{d_{wm} \times d_{wn}}\right)_{t_m, t_n}^{d_{tm} \times d_{tn}}$ for $\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{w_m,w_n}^{d_{wm} \times d_{wn}}$ iteratively.
$$\begin{aligned}\left(\left(D_{b_m,b_n}^{d_{bm} \times d_{bn}}\right)_{w_m,w_n}^{d_{wm} \times d_{wn}}\right)_{t_m, t_n}^{d_{tm} \times d_{tn}}&= \left(\sum_{b_k=1}^{k/d_{bk}} \left( \sum_{w_k=1}^{d_{bk} / d_{wk}} \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right) + \left( C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{w_m,w_n}^{d_{wm} \times d_{wn}}\right)_{t_m, t_n}^{d_{tm} \times d_{tn}} \\&= \sum_{b_k=1}^{k/d_{bk}} \left( \sum_{w_k=1}^{d_{bk} / d_{wk}} \left( \sum_{t_k=1}^{d_{wk} / d_{tk}} \left( \left( A_{b_m,b_k}^{d_{bm} \times d_{bk}} \right)_{w_m,w_k}^{d_{wm} \times d_{wk}} \right)_{t_m,t_k}^{d_{tm} \times d_{tk}} \left( \left( B_{b_k,b_n}^{d_{bk} \times d_{bn}} \right)_{w_k,w_n}^{d_{wk} \times d_{wn}} \right)_{t_k,t_n}^{d_{tk} \times d_{tn}} \right) + \left( C_{b_m,b_n}^{d_{bm} \times d_{bn}} \right)_{w_m,w_n}^{d_{wm} \times d_{wn}}\right)_{t_m, t_n}^{d_{tm} \times d_{tn}} \\\end{aligned}$$
In this implementation, because of the WMMA Tensor Core API restrictions, $d_{tm} = 16$, $d_{tn} = 16$, $d_{tk} = 16$.
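As a quick illustration of the warp-level interface (a minimal sketch for orientation only, not a piece of the kernel below; the FP32 accumulator and the row-major layouts are arbitrary choices for this sketch), a single $16 \times 16 \times 16$ FP16 WMMA multiply-accumulate computed by one warp looks roughly like this:
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;
// Computes a single 16x16x16 tile product D = A * B with one warp.
// Launch with exactly one warp, e.g. wmma_16x16x16_demo<<<1, 32>>>(a, b, d);
__global__ void wmma_16x16x16_demo(half const* a, half const* b, float* d)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::row_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc_frag;
    wmma::fill_fragment(acc_frag, 0.0f);                // start the accumulation from 0
    wmma::load_matrix_sync(a_frag, a, 16);              // load a 16x16 tile of A (leading dimension 16)
    wmma::load_matrix_sync(b_frag, b, 16);              // load a 16x16 tile of B (leading dimension 16)
    wmma::mma_sync(acc_frag, a_frag, b_frag, acc_frag); // acc += A * B on the Tensor Cores
    wmma::store_matrix_sync(d, acc_frag, 16, wmma::mem_row_major);
}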
The following code snippet shows the implementation with 2D block tiling and 2D warp tiling and Tensor Core and vectorized memory access.
// GEMM kernel v07.// Each thread in the block processes THREAD_TILE_SIZE_Y *// THREAD_TILE_SIZE_X output values. Number of threads BLOCK_TILE_SIZE_Y *// BLOCK_TILE_SIZE_X / (THREAD_TILE_SIZE_Y * THREAD_TILE_SIZE_X)template <typename T, size_t BLOCK_TILE_SIZE_X, size_t BLOCK_TILE_SIZE_Y, size_t BLOCK_TILE_SIZE_K, size_t BLOCK_TILE_SKEW_SIZE_X, size_t BLOCK_TILE_SKEW_SIZE_Y, size_t WARP_TILE_SIZE_X, size_t WARP_TILE_SIZE_Y, size_t WMMA_TILE_SIZE_X, size_t WMMA_TILE_SIZE_Y, size_t WMMA_TILE_SIZE_K, size_t NUM_THREADS>__global__ void gemm_v07_vectorized(size_t m, size_t n, size_t k, T alpha, T const* A, size_t lda, T const* B, size_t ldb, T beta, T* C, size_t ldc){ constexpr size_t NUM_WARPS_X{BLOCK_TILE_SIZE_X / WARP_TILE_SIZE_X}; static_assert(BLOCK_TILE_SIZE_X % WARP_TILE_SIZE_X == 0U); static_assert(BLOCK_TILE_SIZE_Y % WARP_TILE_SIZE_Y == 0U); // Cache a tile of A and B in shared memory for data reuse. __shared__ T A_thread_block_tile_transposed[BLOCK_TILE_SIZE_K] [BLOCK_TILE_SIZE_Y + BLOCK_TILE_SKEW_SIZE_Y]; __shared__ T B_thread_block_tile[BLOCK_TILE_SIZE_K][BLOCK_TILE_SIZE_X + BLOCK_TILE_SKEW_SIZE_X]; constexpr size_t NUM_WMMA_TILES_X{WARP_TILE_SIZE_X / WMMA_TILE_SIZE_X}; static_assert(WARP_TILE_SIZE_X % WMMA_TILE_SIZE_X == 0U); constexpr size_t NUM_WMMA_TILES_Y{WARP_TILE_SIZE_Y / WMMA_TILE_SIZE_Y}; static_assert(WARP_TILE_SIZE_Y % WMMA_TILE_SIZE_Y == 0U); constexpr size_t NUM_WMMA_TILES_K{BLOCK_TILE_SIZE_K / WMMA_TILE_SIZE_K}; static_assert(BLOCK_TILE_SIZE_K % WMMA_TILE_SIZE_K == 0U); // Declare the fragments. nvcuda::wmma::fragment<nvcuda::wmma::matrix_a, WMMA_TILE_SIZE_Y, WMMA_TILE_SIZE_X, WMMA_TILE_SIZE_K, T, nvcuda::wmma::col_major> a_frags[NUM_WMMA_TILES_Y]; nvcuda::wmma::fragment<nvcuda::wmma::matrix_b, WMMA_TILE_SIZE_Y, WMMA_TILE_SIZE_X, WMMA_TILE_SIZE_K, T, nvcuda::wmma::row_major> b_frags[NUM_WMMA_TILES_X]; nvcuda::wmma::fragment<nvcuda::wmma::accumulator, WMMA_TILE_SIZE_Y, WMMA_TILE_SIZE_X, WMMA_TILE_SIZE_K, T> acc_frags[NUM_WMMA_TILES_Y][NUM_WMMA_TILES_X]; nvcuda::wmma::fragment<nvcuda::wmma::accumulator, WMMA_TILE_SIZE_Y, WMMA_TILE_SIZE_X, WMMA_TILE_SIZE_K, T> c_frag;// Make sure the accumulator starts from 0.#pragma unroll for (size_t wmma_tile_row_idx{0U}; wmma_tile_row_idx < NUM_WMMA_TILES_Y; ++wmma_tile_row_idx) { for (size_t wmma_tile_col_idx{0U}; wmma_tile_col_idx < NUM_WMMA_TILES_X; ++wmma_tile_col_idx) { nvcuda::wmma::fill_fragment( acc_frags[wmma_tile_row_idx][wmma_tile_col_idx], static_cast<T>(0)); } } size_t const thread_linear_idx{threadIdx.y * blockDim.x + threadIdx.x}; size_t const warp_linear_idx{thread_linear_idx / 32U}; size_t const warp_row_idx{warp_linear_idx / NUM_WARPS_X}; size_t const warp_col_idx{warp_linear_idx % NUM_WARPS_X}; // Number of outer loops to perform the sum of inner products. 
// C_thread_block_tile = // \sigma_{thread_block_tile_idx=0}^{num_thread_block_tiles-1} A[:, // thread_block_tile_idx:BLOCK_TILE_SIZE_K] * // B[thread_block_tile_idx:BLOCK_TILE_SIZE_K, :] size_t const num_thread_block_tiles{(k + BLOCK_TILE_SIZE_K - 1) / BLOCK_TILE_SIZE_K}; for (size_t thread_block_tile_idx{0U}; thread_block_tile_idx < num_thread_block_tiles; ++thread_block_tile_idx) { load_data_to_shared_memory_transposed_vectorized< T, BLOCK_TILE_SIZE_X, BLOCK_TILE_SIZE_Y, BLOCK_TILE_SIZE_K, NUM_THREADS, BLOCK_TILE_SKEW_SIZE_X, BLOCK_TILE_SKEW_SIZE_Y>( A, lda, B, ldb, A_thread_block_tile_transposed, B_thread_block_tile, thread_block_tile_idx, thread_linear_idx, m, n, k); __syncthreads();// Perform A[:, thread_block_tile_idx:BLOCK_TILE_SIZE_K] *// B[thread_block_tile_idx:BLOCK_TILE_SIZE_K, :] where A[:,// thread_block_tile_idx:BLOCK_TILE_SIZE_K] and// B[thread_block_tile_idx:BLOCK_TILE_SIZE_K, :] are cached in the// shared memory as A_thread_block_tile and B_thread_block_tile,// respectively. This inner product is further decomposed to// BLOCK_TILE_SIZE_K outer products. A_thread_block_tile *// B_thread_block_tile = \sigma_{k_i=0}^{BLOCK_TILE_SIZE_K-1}// A_thread_block_tile[:, k_i] @ B_thread_block_tile[k_i, :] Note that// both A_thread_block_tile and B_thread_block_tile can be cached in the// register.#pragma unroll for (size_t k_i{0U}; k_i < NUM_WMMA_TILES_K; ++k_i) {#pragma unroll for (size_t wmma_tile_row_idx{0U}; wmma_tile_row_idx < NUM_WMMA_TILES_Y; ++wmma_tile_row_idx) { nvcuda::wmma::load_matrix_sync( a_frags[wmma_tile_row_idx], &A_thread_block_tile_transposed[k_i * WMMA_TILE_SIZE_K] [warp_row_idx * WARP_TILE_SIZE_Y + wmma_tile_row_idx * WMMA_TILE_SIZE_Y], BLOCK_TILE_SIZE_Y + BLOCK_TILE_SKEW_SIZE_Y);#pragma unroll for (size_t wmma_tile_col_idx{0U}; wmma_tile_col_idx < NUM_WMMA_TILES_X; ++wmma_tile_col_idx) { // These loads are extremely slow somehow, which affects the // performance a lot. Load the fragment from shared memory. nvcuda::wmma::load_matrix_sync( b_frags[wmma_tile_col_idx], &B_thread_block_tile[k_i * WMMA_TILE_SIZE_K] [warp_col_idx * WARP_TILE_SIZE_X + wmma_tile_col_idx * WMMA_TILE_SIZE_Y], BLOCK_TILE_SIZE_X + BLOCK_TILE_SKEW_SIZE_X); // Perform the matrix multiplication. nvcuda::wmma::mma_sync( acc_frags[wmma_tile_row_idx][wmma_tile_col_idx], a_frags[wmma_tile_row_idx], b_frags[wmma_tile_col_idx], acc_frags[wmma_tile_row_idx][wmma_tile_col_idx]); } } } __syncthreads(); }// Write the results to DRAM.#pragma unroll for (size_t wmma_tile_row_idx{0U}; wmma_tile_row_idx < NUM_WMMA_TILES_Y; ++wmma_tile_row_idx) {#pragma unroll for (size_t wmma_tile_col_idx{0U}; wmma_tile_col_idx < NUM_WMMA_TILES_X; ++wmma_tile_col_idx) { // Load the fragment from shared memory. nvcuda::wmma::load_matrix_sync( c_frag, &C[(blockIdx.y * BLOCK_TILE_SIZE_Y + warp_row_idx * WARP_TILE_SIZE_Y + wmma_tile_row_idx * WMMA_TILE_SIZE_Y) * n + blockIdx.x * BLOCK_TILE_SIZE_X + warp_col_idx * WARP_TILE_SIZE_X + wmma_tile_col_idx * WMMA_TILE_SIZE_X], n, nvcuda::wmma::mem_row_major); // Perform scaling and addition. for (size_t i{0}; i < c_frag.num_elements; ++i) { c_frag.x[i] = alpha * acc_frags[wmma_tile_row_idx][wmma_tile_col_idx].x[i] + beta * c_frag.x[i]; } // Store the fragment back to shared memory. 
nvcuda::wmma::store_matrix_sync( &C[(blockIdx.y * BLOCK_TILE_SIZE_Y + warp_row_idx * WARP_TILE_SIZE_Y + wmma_tile_row_idx * WMMA_TILE_SIZE_Y) * n + blockIdx.x * BLOCK_TILE_SIZE_X + warp_col_idx * WARP_TILE_SIZE_X + wmma_tile_col_idx * WMMA_TILE_SIZE_X], c_frag, n, nvcuda::wmma::mem_row_major); } }}template <typename T>void launch_gemm_kernel_v07_vectorized(size_t m, size_t n, size_t k, T const* alpha, T const* A, size_t lda, T const* B, size_t ldb, T const* beta, T* C, size_t ldc, cudaStream_t stream){ // Feel free to play with the block tile sizes. // The algorithm correctness should always be guaranteed. constexpr unsigned int BLOCK_TILE_SIZE_X{128U}; constexpr unsigned int BLOCK_TILE_SIZE_Y{128U}; constexpr unsigned int BLOCK_TILE_SIZE_K{16U}; // The skew size is used to avoid bank conflicts in shared memory. constexpr size_t BLOCK_TILE_SKEW_SIZE_X{16U}; constexpr size_t BLOCK_TILE_SKEW_SIZE_Y{16U}; constexpr unsigned int WARP_TILE_SIZE_X{32U}; constexpr unsigned int WARP_TILE_SIZE_Y{64U}; constexpr unsigned int NUM_WARPS_X{BLOCK_TILE_SIZE_X / WARP_TILE_SIZE_X}; constexpr unsigned int NUM_WARPS_Y{BLOCK_TILE_SIZE_Y / WARP_TILE_SIZE_Y}; static_assert(BLOCK_TILE_SIZE_X % WARP_TILE_SIZE_X == 0U); static_assert(BLOCK_TILE_SIZE_Y % WARP_TILE_SIZE_Y == 0U); constexpr unsigned int WMMA_TILE_SIZE_X{16U}; constexpr unsigned int WMMA_TILE_SIZE_Y{16U}; constexpr unsigned int WMMA_TILE_SIZE_K{16U}; constexpr unsigned int NUM_THREADS_PER_BLOCK{NUM_WARPS_X * NUM_WARPS_Y * 32U}; dim3 const block_dim{NUM_THREADS_PER_BLOCK, 1U, 1U}; dim3 const grid_dim{ (static_cast<unsigned int>(n) + BLOCK_TILE_SIZE_X - 1U) / BLOCK_TILE_SIZE_X, (static_cast<unsigned int>(m) + BLOCK_TILE_SIZE_Y - 1U) / BLOCK_TILE_SIZE_Y, 1U}; gemm_v07_vectorized<T, BLOCK_TILE_SIZE_X, BLOCK_TILE_SIZE_Y, BLOCK_TILE_SIZE_K, BLOCK_TILE_SKEW_SIZE_X, BLOCK_TILE_SKEW_SIZE_Y, WARP_TILE_SIZE_X, WARP_TILE_SIZE_Y, WMMA_TILE_SIZE_X, WMMA_TILE_SIZE_Y, WMMA_TILE_SIZE_K, NUM_THREADS_PER_BLOCK> <<<grid_dim, block_dim, 0U, stream>>>(m, n, k, *alpha, A, lda, B, ldb, *beta, C, ldc); CHECK_LAST_CUDA_ERROR();}
Because the fundamental WMMA size is $16 \times 16 \times 16$, all 32 threads in the same warp have to cooperatively access the shared memory where the WMMA fragment is cached, so shared memory bank conflicts are very likely to happen. To avoid them, we pad the shared memory at the leading dimension, which is why the skew sizes are used in the implementation.
The performance of this FP16 GEMM implementation reaches 46.78 TFLOPS on an NVIDIA GeForce RTX 3090 GPU. Compared to the cuBLAS FP16 GEMM performance, which is 138.95 TFLOPS, this implementation only achieves 33.7% of the cuBLAS FP16 GEMM performance. We will leave the performance optimization of this implementation as future work.
Conclusions
The optimizations we performed on the GEMM CUDA kernels mainly follow the diagrams in the article “CUTLASS: Fast Linear Algebra in CUDA C++”.
With the optimization techniques, such as 2D block tiling, 2D warp tiling, 2D thread tiling, and vectorized memory access, we can achieve 20.16 TFLOPS FP32 GEMM performance on an NVIDIA GeForce RTX 3090 GPU, which is 80% - 90% of the cuBLAS FP32 GEMM performance.
Source Code
The source code of the GEMM CUDA kernels can be found in my GitHub repository “CUDA GEMM Optimization”.
Beating cuBLAS in Single-Precision General Matrix Multiplication
Jan 12, 2025
• Aman Salykov
This project is inspired by the outstanding works of Andrej Karpathy, George Hotz, Scott Gray, Horace He, Philippe Tillet, Jeremy Howard, Lei Mao and the best CUDA hackers from the GPU MODE community (Discord server). A special thanks to Mark Saroufim and Andreas Köpf for running GPU MODE and all you’ve done for the community.
The code is available at sgemm.cu. This article complements my blog post, which covers the implementation of FP32 matrix multiplication that outperforms BLAS libraries on modern Intel and AMD CPUs. Today we'll walk through a GPU implementation of the SGEMM (Single-precision GEneral Matrix Multiply) operation, defined as C := alpha*A*B + beta*C. The blog delves into benchmarking code on CUDA devices and explains the algorithm's design along with optimization techniques. These include inlined PTX, asynchronous memory copies, double-buffering, avoiding shared memory bank conflicts, and efficient coalesced storage through shared memory.
I'd also like to mention that the high-level algorithm design used in this project was developed by the excellent engineers at NVIDIA and has been extensively studied in prior works on cuBLAS and CUTLASS. My main contribution was translating it into efficient CUDA/PTX code. The goal of this project wasn't to build an SGEMM that would magically outperform cuBLAS on all GPUs and all matrix sizes. This is especially pointless, given the open-sourced, lightweight CUTLASS library. Instead, the project primarily targets CUDA learners and aims to bridge the gap between the SGEMM implementations explained in books/blogs and those used in NVIDIA's BLAS libraries. While the implementation is expected to deliver high performance on Ada/Ampere/Volta/Turing devices, it was specifically fine-tuned for and tested on a local NVIDIA RTX 3090 (= GA102 chip: RTX 3080, A10, A40, A6000). The achieved performance is shown below, comparing results with locked and unlocked GPU core frequencies against cuBLAS and Simon Boehm's highly cited work (used in llamafile, aka tinyBLAS).
I plan to continue publishing educational content on high-performance kernels used in AI/ML. Let me know what topics you'd like to see next! Projects currently in development: beating NVIDIA on Tensor Cores, Stream-K GEMM, FlashAttention, xLSTM. If you enjoy educational content like this and would like to see more, please share this article. Your feedback would be greatly appreciated!
P.S. Please feel free to get in touch if you are interested in collaborating. My contact information is available on the homepage.
1. Introduction
I clearly remember Andrej's post on the current state of the existing CUDA learning materials vs. the CUDA code used in high-performance libraries:
Indeed, when it comes to SGEMM implementations, there are some excellent educational blog posts, such as
that break down, step by step, how to optimize a CUDA matmul kernel. However, in terms of achieved performance, none of them come close to matching the speed of cuBLAS or CUTLASS, especially when using recent CUDA versions and if benchmarked properly. From my experiments, these implementations achieve 50-70% of cuBLAS’ performance at best. Additionally, I found the explanations in both blog posts a bit overcomplicated in the final optimization steps. Nevertheless, I still think these resources are great for anyone starting with CUDA programming since they provide good foundational knowledge.
On the other hand, I’ve seen some really fast SGEMM implementations with cuBLAS-level performance:
The problem is that they are undocumented, difficult to find and understand, especially for a CUDA beginner. A similar problem exists with CUTLASS. While it is highly performant, there is a lack of introductory or educational materials explaining how it is internally designed and implemented in efficient CUDA/PTX. Another notable project is MaxAs, an assembler for the Maxwell architecture developed over a decade ago by Scott Gray. This tool enables programming directly in SASS (the assembly language for NVIDIA GPUs), allowing direct communication with the hardware instead of relying on the hardware-agnostic CUDA/PTX. Using MaxAs, Scott wrote an SGEMM implementation that achieved around 98% of the GM204 chip’s theoretical maximum FLOPS, surpassing cuBLAS by an average of 5%. While the results are impressive, programming in SASS is inflexible and requires deep understanding of the underlying hardware. Furthermore, with significant advancements in the compiler since then, programming directly in SASS is only advantageous in exceptional cases (for example, if you build tinygrad). CUTLASS achieves performance on par with cuBLAS across various GPU architectures and matrix sizes using only CUDA/PTX code.
But can we actually exceed the cuBLAS barrier? In the following chapters, we will briefly review the high-level SGEMM design used in CUTLASS, and discuss how to translate this design into efficient CUDA/PTX. This guide assumes only a basic knowledge of the CUDA programming model and linear algebra. If you are new to CUDA programming, I strongly recommend starting with these short introductory articles:
Before we proceed with implementation, let’s talk about benchmarking code on NVIDIA GPUs - a topic often overlooked. Properly benchmarking code is just as important as the code itself, particularly when comparing different implementations.
2. How to Benchmark Code on CUDA Devices?
The most reliable way to measure kernel duration is by profiling with NVIDIA Nsight Compute and manually extracting performance data. To obtain deterministic and reproducible results, Nsight Compute automatically applies the following settings:
Alternatively, you can apply these settings manually and measure kernel duration at runtime without relying on external profilers. On Ubuntu, you can retrieve the base core clock frequency using:
nvidia-smi base-clocks
For instance, on an RTX 3090, the base core clock frequency is 1395 MHz. Next, you’ll need the memory clock frequencies, which work in combination with the base core clock:
nvidia-smi -q -d supported_clocks
From the list of supported frequencies, choose the fastest memory clock compatible with the base core frequency. Memory clock speeds are generally more stable than core clock speeds. To lock the clock frequencies and enable persistence mode, run the following commands:
sudo nvidia-smi --persistence-mode=1
# NVIDIA RTX 3090
sudo nvidia-smi --lock-gpu-clocks=1395
sudo nvidia-smi --lock-memory-clocks=9501
To reset the core and memory clock frequencies, you can use:
sudo nvidia-smi --reset-gpu-clocks
sudo nvidia-smi --reset-memory-clocks
sudo nvidia-smi --persistence-mode=0
GPU clock frequencies may drop due to the GPU’s thermal state, but for high-performance applications, throttling is often caused by power limits. Faulty hardware can also lead to throttling. It’s a good idea to monitor the GPU’s state at least during a test run. Use the following command to keep track of power draw, clock speeds, and throttling reasons in real time:
watch -n 0.1 nvidia-smi --query-gpu=power.draw,clocks.sm,clocks.mem,clocks_throttle_reasons.active --format=csv
A sample output might look like this:
308.50 W, 1395 MHz, 9501 MHz, 0x0000000000000000
The bit mask 0x0000000000000000 indicates no throttling, and the clocks are running at their maximum speeds. A value of 0x0000000000000001 indicates an idle state. Any other values suggest throttling is occurring. For a full list of bit mask values and their meanings, refer to the NvmlClocksThrottleReasons documentation.
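If you prefer to read the same bit mask programmatically instead of parsing nvidia-smi output, NVML exposes it directly. A minimal sketch (link with -lnvidia-ml; error handling reduced to early returns) could look like this:
#include <cstdio>
#include <nvml.h>
// Prints the current clock throttle reason bit mask for GPU 0.
int main()
{
    nvmlDevice_t device{};
    unsigned long long reasons{0ULL};
    if (nvmlInit() != NVML_SUCCESS) { return 1; }
    if (nvmlDeviceGetHandleByIndex(0, &device) == NVML_SUCCESS &&
        nvmlDeviceGetCurrentClocksThrottleReasons(device, &reasons) == NVML_SUCCESS)
    {
        std::printf("throttle reasons: 0x%016llx\n", reasons);
        if (reasons == nvmlClocksThrottleReasonNone) { std::printf("no throttling\n"); }
        if (reasons & nvmlClocksThrottleReasonGpuIdle) { std::printf("GPU is idle\n"); }
        if (reasons & nvmlClocksThrottleReasonSwPowerCap) { std::printf("software power cap\n"); }
        if (reasons & nvmlClocksThrottleReasonHwSlowdown) { std::printf("hardware slowdown\n"); }
    }
    nvmlShutdown();
    return 0;
}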
Once you’ve locked the clock frequencies, you can measure the kernel duration directly in CUDA using CUDA events. Here’s an example:
cudaEvent_t start, stop;
cudaEventCreate(&start); cudaEventCreate(&stop);
float elapsed_time_ms = 0.0;
cudaEventRecord(start);
kernel<<<...>>>(...);
cudaEventRecord(stop);
cudaEventSynchronize(stop);
cudaEventElapsedTime(&elapsed_time_ms, start, stop);
For reliable measurements, multiple replay passes are typically used. In such cases, the GPU cache should be flushed before each kernel replay. This can be done using cudaMemsetAsync as shown in nvbench:
// Flush L2 cache
int dev_id{};
int m_l2_size{};
void* buffer;
checkCudaErrors(cudaGetDevice(&dev_id));
checkCudaErrors(cudaDeviceGetAttribute(&m_l2_size, cudaDevAttrL2CacheSize, dev_id));
if (m_l2_size > 0) {
checkCudaErrors(cudaMalloc(&buffer, static_cast<std::size_t>(m_l2_size)));
int* m_l2_buffer = reinterpret_cast<int*>(buffer);
checkCudaErrors(cudaMemsetAsync(m_l2_buffer, 0, static_cast<std::size_t>(m_l2_size)));
checkCudaErrors(cudaFree(m_l2_buffer));
}
Locking the clock frequencies to their base values is a reliable way to measure the speed of your kernel. However, in real-world scenarios, algorithms don't typically run with locked clocks. To achieve optimal performance, your algorithm needs to be both fast and power-efficient. The less power your algorithm consumes, the higher the clock speeds your hardware can maintain. NVIDIA GPUs often reduce clock frequencies aggressively, well before hitting their power limits, which can significantly degrade application performance. To account for this, we benchmark our implementation under both locked and unlocked clock conditions, testing for both speed and power efficiency.
In our benchmarks, we evaluate matrix sizes ranging from 1024 to 12,800 with a step size of 128. For each matrix size, we launch 1000*exp((-matsize+1024)/3100.0) kernel replays and calculate the runtime as the average of the second half of the replays. For example, given the matrix size problem m=n=k=4096, we run the sgemm 1000*exp((-4096+1024)/3100.0) = 371 times and measure the average duration of the last 185 runs, ensuring the clocks have stabilized. This profiling strategy leads to consistent and reproducible results, even when GPU clocks are unlocked.
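For reference, the replay schedule and the resulting TFLOP/s number can be computed as in the sketch below (variable names are illustrative; the elapsed time is assumed to come from the CUDA-event timing shown earlier):
#include <cmath>
#include <cstdio>
int main()
{
    int const m{4096}, n{4096}, k{4096};
    // The number of kernel replays decays exponentially with the matrix size.
    int const num_replays{static_cast<int>(1000.0 * std::exp((-m + 1024) / 3100.0))}; // 371 for m=n=k=4096
    int const num_measured{num_replays / 2}; // only the second half of the replays is averaged
    // Placeholder: sum of cudaEventElapsedTime() results over the measured replays, in milliseconds.
    double const total_ms_second_half{100.0};
    double const avg_seconds{total_ms_second_half / num_measured / 1000.0};
    double const tflops{2.0 * m * n * k / avg_seconds / 1e12};
    std::printf("replays: %d, measured: %d, %.2f TFLOP/s\n", num_replays, num_measured, tflops);
    return 0;
}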
Avoid using WSL for performance measurements. To ensure accurate and reliable results, please use a native Linux environment.
3. Memory Layout
Without loss of generality, in this implementation we assume matrices are stored in row-major order. A matrix A with dimensions M x N is stored as a contiguous array of length M*N. Elements A[row][col] are accessed via a 1D raw C pointer as ptr[row*N + col] with 0<=col<=N-1 and 0<=row<=M-1. Matrix multiplication is denoted as $C=AB$, where the shapes of matrices $A, B, C$ are $M \times K, K \times N$ and $M \times N$, respectively.
To adapt this implementation for matrices stored in column-major order, simply swap the operands $A$ and $B$, because:
$$C^\text{T} = (AB)^\text{T} = B^\text{T} A^\text{T}$$
Here, $A, B, C$ are matrices stored in row-major order, while $A^\text{T}, B^\text{T}, C^\text{T}$ are the corresponding transposed matrices (i.e., stored in column-major order).
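As a concrete illustration, assume a hypothetical row-major entry point sgemm_row_major(M, N, K, A, lda, B, ldb, C, ldc) that computes the row-major product C = A*B. A column-major product can then be obtained by swapping the operands and the m/n sizes:
// A, B, C point to column-major buffers of shapes M x K, K x N and M x N.
// Reinterpreted as row-major arrays they hold A^T (K x M), B^T (N x K) and C^T (N x M),
// so computing C^T = B^T * A^T in row-major order yields the desired column-major C.
sgemm_row_major(/*M=*/n, /*N=*/m, /*K=*/k,
                /*A=*/B, /*lda=*/ldb,
                /*B=*/A, /*ldb=*/lda,
                /*C=*/C, /*ldc=*/ldc);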
cuBLAS provides an API to calculate SGEMM:
cublasSgemm(m, n, k, A, lda, B, ldb, C, ldc); // simplified form
where m, n, k denote the matrix sizes $M, N, K$. The parameters lda, ldb, ldc are the leading dimensions of matrices $A, B, C$, respectively. The leading dimension is the length of the fastest-varying dimension when iterating over the matrix elements (for row-major storage, the stride in elements between the starts of consecutive rows). For matrices stored in row-major order, the leading dimension is usually the number of columns, so typically lda=k, ldb=n, ldc=n. However, this isn't always the case. In scenarios where you need to compute a submatrix of a larger matrix, the leading dimension might be larger than the number of columns.
Matrices may also be padded with zeros to support vectorized memory loads or tensor cores. Vectorized load instructions allow loading multiple elements at once with a single instruction. Though vectorized loads reduce the total number of instructions and improve bandwidth utilization, they also impose an alignment constraint on the input data: the leading dimension must be divisible by 2 (for 64-bit loads) or 4 (for 128-bit loads). The figure below illustrates the case of 128-bit (= 4 floats) loads.
Note how it's impossible to load the elements of the first row without touching the elements of the next row if the leading dimension is not divisible by 4. Padding with zeros helps, but requires additional memory. Another solution would be to check at runtime whether the leading dimension is divisible by 4: if it is, use vectorized loads; if not, fall back to scalar loads. Additionally, zero padding was commonly used in the past to enable tensor core computations. For instance, in cuBLAS versions < 11, Tensor Core FP16 operations required m, n, k to be multiples of 8.
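A runtime check of this kind could look roughly as follows (an illustrative helper, not part of the final kernel): the vectorized path is only taken when both the base pointers and the leading dimensions satisfy the 16-byte alignment requirement of float4 accesses.
#include <cstdint>
// Returns true if 128-bit (float4) loads/stores are safe for all three matrices.
bool can_use_float4(float const* A, int lda, float const* B, int ldb,
                    float const* C, int ldc)
{
    auto aligned_16B = [](void const* p) {
        return reinterpret_cast<std::uintptr_t>(p) % 16 == 0;
    };
    // The leading dimensions must be multiples of 4 floats so that every row start stays 16-byte aligned.
    return lda % 4 == 0 && ldb % 4 == 0 && ldc % 4 == 0 &&
           aligned_16B(A) && aligned_16B(B) && aligned_16B(C);
}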
4. Parallel Thread Execution
The CUDA compilation trajectory of a .cu file looks as follows:
During Stage 1, CUDA code is compiled to PTX (parallel thread execution) instructions - intermediate high-level code that can be considered assembly for a virtual GPU architecture. Such a virtual GPU is defined entirely by the set of capabilities, or features, that it provides to the application. PTX doesn't run directly on any real architecture; it must be optimized and translated to native target-architecture instructions (Stage 2). NVIDIA provides a mechanism to insert PTX code into your CUDA program, so that you can mix CUDA and PTX in the source code and still benefit from code optimizations during PTX generation. By rewriting parts of your code in PTX, you can 1) reduce the total number of generated PTX instructions, 2) specify exactly the PTX instructions you need, 3) tune the instructions through qualifiers, and 4) apply optimizations that are either lacking in the compiler or prohibited by C++ language extensions. Important! Using inline PTX assembly will not automatically make your code faster than the one written in CUDA. It will only be faster if your hand-written PTX is better than what the compiler generates.
In this implementation we will program some parts of the algorithm directly in PTX, so I highly recommend checking this short overview of inline PTX assembly if you have never used it before. The PTX instructions are well documented and can be found in the PTX Instruction Set. We will now briefly review the PTX instructions used in this implementation.
4.1. Global Memory Loads
For global memory loads we will use ld.global.f32 instruction. Here, “ld” denotes “load” and “f32” - “32-bit float”. The following CUDA code
float reg; // single float register
float* gmem_ptr = data_in_global_memory; // pointer to global memory
reg = *gmem_ptr; // global memory -> register transfer
can be implemented in PTX as:
float reg; // single float register
float* gmem_ptr = data_in_global_memory; // pointer to global memory
asm volatile("ld.global.f32 %0, [%1];" : "=f"(reg) : "l"(gmem_ptr));
The f in "=f" denotes the float datatype and the = modifier specifies that the register is written to. The l constraint represents an unsigned 64-bit integer. We also use the volatile keyword to ensure that the instruction is not deleted or moved during PTX generation.
4.2. Global Memory Stores
For global memory stores there is the st.global.f32 instruction:
float reg; // single float register
float* gmem_ptr = data_in_global_memory; // pointer to global memory
// *gmem_ptr = reg; can be implemented in PTX as:
asm volatile("st.global.f32 [%0], %1;" : : "l"(gmem_ptr), "f"(reg));
4.3. Global to Shared Memory Transfers
When you write something like this in CUDA:
__shared__ float smem_ptr[n]; // pointer to shared memory
float* gmem_ptr = data_in_global_memory; // pointer to global memory
*smem_ptr = *gmem_ptr; // global to shared memory transfer
a two-step process occurs. First, the data is fetched from global memory into registers and then that data is copied from registers into shared memory. Additionally the data is cached in all cache levels during the transfer.
Global to shared memory transfers
For this reason, a global to shared memory transfer in PTX consists of two data movement instructions ld.global and st.shared:
__shared__ float smem_ptr[n]; // pointer to shared memory
uint64_t smem_addr;
// convert generic address to shared address (store location for st.shared instruction)
asm volatile("cvta.to.shared.u64 %0, %1;" : "=l"(smem_addr) : "l"(smem_ptr));
float* gmem_ptr = data_in_global_memory; // pointer to global memory
float buffer;
// global memory -> register
asm volatile("ld.global.f32 %0, [%1];" : "=f"(buffer) : "l"(gmem_ptr));
// register -> shared memory
asm volatile("st.shared.f32 [%0], %1;" : : "l"(smem_addr), "f"(buffer));
Prior to the Ampere architecture, it was not possible to transfer data from global memory directly to shared memory without staging it in registers. Starting with the Ampere architecture, there are asynchronous copy instructions that allow this. The usage of these instructions will be demonstrated later.
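As a rough sketch only (the actual usage in this implementation is demonstrated later), such an asynchronous copy that bypasses the registers can be written in inline PTX as follows. It requires compiling for sm_80 or newer, and cp.async expects a 32-bit shared-state-space address for its destination, which can be obtained with the __cvta_generic_to_shared() intrinsic:
__global__ void async_copy_demo(float const* data_in_global_memory)
{
    __shared__ float smem[32];
    // Convert the generic shared memory pointer to a 32-bit shared-state-space address.
    unsigned const smem_addr =
        static_cast<unsigned>(__cvta_generic_to_shared(&smem[threadIdx.x]));
    float const* gmem_ptr = data_in_global_memory + threadIdx.x;
    asm volatile("cp.async.ca.shared.global [%0], [%1], 4;\n\t" // async 4-byte copy: global -> shared
                 "cp.async.commit_group;\n\t"                   // close the group of outstanding copies
                 "cp.async.wait_group 0;\n\t"                   // wait until the group has completed
                 :
                 : "r"(smem_addr), "l"(gmem_ptr));
    __syncthreads();
}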
4.4. Vectorized Shared Memory Loads and Stores
In PTX you can also implement vectorized memory operations (loading/storing multiple elements with one instruction). Here, v4 denotes vector with four elements:
float reg0, reg1, reg2, reg3;
uint64_t addr;
...
// Shared memory 128-bit loads
asm volatile("ld.shared.v4.f32 {%0, %1, %2, %3}, [%4];"
: "=f"(reg0), "=f"(reg1), "=f"(reg2), "=f"(reg3)
: "l"(addr));
// Shared memory 128-bit stores
asm volatile("st.shared.v4.f32 [%0], {%1, %2, %3, %4};"
:
: "l"(addr), "f"(reg0), "f"(reg1), "f"(reg2), "f"(reg3));
4.5. Predicated Execution
In PTX, conditional execution is implemented using optional guard predicates. The following CUDA code:
float reg;
float* ptr; //pointer to global memory
unsigned guard;
...
if (guard != 0) {
reg = *ptr;
}
can be converted to PTX as:
float reg;
float* ptr;
unsigned guard;
...
asm volatile(".reg .pred p;\n\t" // declare predicate 'p'
".setp.ne.u32 p, %2, 0;\n\t" // set 'p' to true if (guard != 0); ne="not equal"
"@p ld.global.f32 %0, [%1];\n\t" // execute instruction if 'p' is true
: "=f"(reg)
: "l"(ptr), "r"(guard));
We use guard predicates in combination with global load/store instructions to perform global memory access only if it is not out of bounds.
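For stores, the same pattern applies. A sketch in the same style as the guarded load above (with a differently named predicate so that both can coexist in one function) could look like this:
float reg;
float* ptr;      // pointer to global memory
unsigned guard;
...
asm volatile(".reg .pred q;\n\t"              // declare predicate 'q'
             ".setp.ne.u32 q, %2, 0;\n\t"     // set 'q' to true if (guard != 0)
             "@q st.global.f32 [%0], %1;\n\t" // store only if 'q' is true
             :
             : "l"(ptr), "f"(reg), "r"(guard));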
5. SGEMM Design
Let's now break down the high-level design of the algorithm. The paper Strassen's Algorithm Reloaded on GPUs contains, in my opinion, one of the best visualizations of the SGEMM design from the CUTLASS library. The SGEMM algorithm can be roughly divided into three main parts: (1) transferring data from global memory to shared memory, (2) loading data from shared memory and performing the arithmetic operations, and (3) storing the results back to global memory through shared memory using coalesced writes.
Each of these steps must be carefully optimized to achieve high overall performance. In the following sections, we’ll explore each step in detail and discuss efficient implementation strategies. It’s worth mentioning that the first step - “transferring data from global memory to shared memory” is the most challenging to grasp. However, once you understand this part, the remaining steps become much easier to follow.
5.1. Transferring data from global to shared memory
Source: Strassen’s Algorithm Reloaded on GPUs
To parallelize $C=AB$ on the GPU, the matrix $C$ is partitioned into sub-matrices $\tilde{C}$ of size $m_S \times n_S$ and the sub-matrices are processed in parallel, with one thread block computing one sub-matrix $\tilde{C}$ independently of the other thread blocks. To compute $\tilde{C}$, we iterate over the dimension $K$. In each iteration, a submatrix $\tilde{A}$ of size $m_s \times k_s$ and a submatrix $\tilde{B}$ of size $k_s \times n_s$ are loaded from global into shared memory (see the figure above). These submatrices are then multiplied, and the result is used to update $\tilde{C}$ as $\tilde{C} += \tilde{A} \tilde{B}$. The sub-matrices $\tilde{A}, \tilde{B}, \tilde{C}$ are often called blocks or tiles. In total there are $K / k_s$ iterations (assuming the simplest case, where $K$ is divisible by $k_s$). The limited shared memory capacity is the reason why the dimension $K$ is divided into smaller $k_s$ blocks. Full $m_s \times K, K \times n_s$ blocks simply wouldn't fit in the available shared memory. For now, don't be distracted by why the matrices are loaded into shared memory and how exactly the matrices $\tilde{A}, \tilde{B}$ are multiplied; we will discuss that in the next chapter. Let's focus on the efficient data movement from global to shared memory as our first step towards a fast SGEMM.
The pseudo code of the algorithm, from the perspective of a thread block, is as follows:
// The shapes of block_a, block_b, block_c are (ms x ks), (ks x ns), (ms x ns)
// Each thread block computes one block of C:
block_c = 0
__shared__ float block_a[block_a_size]
__shared__ float block_b[block_b_size]
for (i=0; i<K/ks; i++) {
block_a = load ith block of matrix A // from global into shared memory
block_b = load ith block of matrix B // from global into shared memory
block_c += block_a * block_b // compute matrix product and update block_c
}
store(block_c) // store to global memory
Data transfers from global memory to shared memory have significantly higher latency compared to arithmetic operations. During this time, threads are forced to stall, idly waiting for the data needed to compute block_a * block_b. One way to mitigate this latency is by overlapping data transfers with computations, leveraging instruction-level parallelism (ILP). In GEMM implementations, a technique known as double buffering is commonly used to achieve this overlap:
block_c = 0
// Shared Memory Double buffering
__shared__ float block_a[2][block_a_size] // 2x shared memory usage
__shared__ float block_b[2][block_b_size] // 2x shared memory usage
block_a[0] = load first block of matrix A
block_b[0] = load first block of matrix B
for (i=0; i<(K/ks-1); i++) {
idx = i%2
prefetch_idx = (i+1)%2
// prefetch next blocks
block_a[prefetch_idx] = load next block of matrix A
block_b[prefetch_idx] = load next block of matrix B
// use blocks loaded in previous iteration to calculate matrix product
block_c += block_a[idx] * block_b[idx]
}
// final update of the accumulator using last blocks
block_c += block_a[prefetch_idx] * block_b[prefetch_idx]
store_to_global_memory(block_c)
Note that block_c += block_a[idx] * block_b[idx] doesn’t depend on blocks[prefetch_idx] allowing the arithmetic instructions to be issued in parallel with the data movement instructions. However, this comes at the cost of doubled shared memory usage, as we need to store two blocks instead of one. The good news is that modern GPUs have sufficient shared memory to support double-buffering.
We’ve already introduced several parameters such as block sizes $m_s, k_s, n_s$ and number of threads per thread block. The choice of these parameters highly depends on the shapes of the operands $A, B, C$, as well as the underlying GPU architecture. For example, cuBLAS implements multiple SGEMM kernels optimized for various matrix shapes and GPU architectures. At runtime, it selects the most appropriate kernel using a heuristic approach. The block sizes $m_s, k_s, n_s$ affect not only how the data will be fetched from global memory, but also how the work in all subsequent steps (shared memory loads, arithmetic operations, global memory stores) is organized among the threads to achieve the best possible performance. The choice of the block sizes and the number of threads per thread block also impact shared memory / register usage, which can result in decreased performance if not taken into account. As you might expect, identifying optimal parameter values requires excellent understanding of hardware and extensive experimentation. Fortunately, SGEMM is a well-studied problem and we can use the results from previous studies of cuBLAS and CUTLASS. For large square matrices (M=N=K > 1024) the combinations of $m_S \times n_S$ such as $128 \times 256$, $128 \times 128$ and $256 \times 128$ lead to optimal performance. From my tests, the configuration $m_s \times n_s \times k_s = 128 \times 128 \times 8$ with 256 threads per thread block achieved the highest TFLOP/S on my local RTX 3090 for matrix size problems 1024 <= M=N=K <= 2500. Therefore, we will start with implementation of a 128x128x8 SGEMM kernel. Now that we know the block dimensions and the number of threads per thread block, let’s discuss how to efficiently organize data loading from global memory and storage into shared memory.
First, we need to load the 128x8 submatrix $\tilde{A}$ using 256 threads. This results in each thread loading 128*8/256 = 4 float elements from global memory. There are several different ways to organize the loading of the block. For global memory reads/stores you always want your accesses to be contiguous or coalesced, so that 32 threads in a warp access 32 consecutive floats in memory. If a memory access is coalesced, the minimum number of memory transactions will be used. However, this is not possible in the case of the $\tilde{A}$ block: each row of the block contains only 8 consecutive elements. Nevertheless, even in such cases, consecutive threads in a warp accessing consecutive elements in memory is preferable and usually results in better performance. The figure below shows how the loading of the block $\tilde{A}$ is implemented. Here, different colors represent different threads, while only the first 16 threads are shown for simplicity. Four consecutive rows are loaded by 8 consecutive threads: the rows 1-4 are loaded by threads 0-7, the rows 5-8 are loaded by threads 8-15, the rows 9-12 are loaded by threads 16-23 and so on, with the last rows 125-128 loaded by threads 248-255. We also transpose the block $\tilde{A}$ while storing it in shared memory for a better memory access pattern during the next computation step. Note how each thread stores 4 consecutive elements in shared memory. This allows us to use PTX vectorized stores st.shared.v4.f32.
Storing to shared memory using this naive scheme would result in shared memory bank conflicts. From the CUDA programming guide:
To achieve high bandwidth, shared memory is divided into equally-sized memory modules, called banks, which can be accessed simultaneously. Any memory read or write request made of n addresses that fall in n distinct memory banks can therefore be serviced simultaneously, yielding an overall bandwidth that is n times as high as the bandwidth of a single module.
However, if two addresses of a memory request fall in the same memory bank, there is a bank conflict and the access has to be serialized. The hardware splits a memory request with bank conflicts into as many separate conflict-free requests as necessary, decreasing throughput by a factor equal to the number of separate memory requests. If the number of separate memory requests is n, the initial memory request is said to cause n-way bank conflicts.
Shared memory has 32 banks that are organized such that successive 32-bit words map to successive banks. Imagine a float32 array of size 8x32 stored in row-major order as shown below.
In this context, colors and their shades represent memory banks: each row corresponds to 32 distinct memory banks, while each column represents a single memory bank. Here are two important notes about shared memory bank conflicts:
When you store (or load) 4 bytes (= 1 float) per thread, which is 4*32=128 bytes per warp, a CUDA device issues a single warp-wide memory transaction, so the shared memory access must be conflict-free across the whole warp (= 32 threads). In our case, we store 16 bytes (= 4 floats) per thread using the vector instructions. Warp-wide that is a total of 512 bytes per request. The GPU splits the request into 4 memory transactions (threads 0-7 make up a transaction, threads 8-15 a transaction and so on), each of which is 128 bytes wide. If we stored according to our scheme, then each thread within threads 0-7 would store to the same four columns (red color shades), or in other words to the same four memory banks, causing bank conflicts. The same applies to the other memory transactions, i.e. threads 8-15, threads 16-23 and so on. One possible way to completely avoid bank conflicts would be to pad the leading dimension with 16 bytes (= 4 floats) as shown below.
Now, if we store the data according to our scheme, each thread within threads 0-7 would access distinct memory banks, resulting in 32 memory banks being accessed per memory transaction. The same applies to the remaining memory transactions, i.e. t8-t15, t16-t23 and so on. This is the reason why the leading dimension is 132 and not 128 in the implementation:
const int smem_a_ld = 132; // 128 + 4
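To see the effect of the padding in isolation, the following small host-side illustration (not part of the kernel) prints the bank of the first 32-bit word of each row: a 32-bit word at linear index i maps to bank i % 32, so a leading dimension of 128 puts every row start into the same bank, while 132 rotates the starting bank by 4 from row to row.
#include <cstdio>
int main()
{
    for (int ld : {128, 132}) {
        std::printf("ld = %3d, first-word bank per row:", ld);
        for (int row = 0; row < 8; ++row) {
            std::printf(" %2d", (row * ld) % 32); // bank of the word at shared memory index row * ld
        }
        std::printf("\n");
    }
    // ld = 128: 0 0 0 0 0 0 0 0        -> all rows start in bank 0
    // ld = 132: 0 4 8 12 16 20 24 28   -> row starts spread across the banks
    return 0;
}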
To implement double-buffering and store two $\tilde{A}$ blocks, theoretically, we would need shared memory of size 2*132*8*4 bytes. However, we increase the size to the nearest power of 2 = 2*256*8*4 to enable fast switching. Compare the following code with the pseudocode presented at the beginning of the chapter:
// Double-buffering (blocks_b is omitted for simplicity)
__shared__ float __align__(2*256*8*sizeof(float)) blocks_a[2*256*8]
uint64_t lds_a_addr;
uint64_t sts_a_addr;
float* lds_a_ptr = blocks_a; // lds = load shared
float* sts_a_ptr = blocks_a; // sts = store shared
lds_a_addr = convert_to_addr(lds_a_ptr); // convert pointer to address for PTX load/store instructions
sts_a_addr = convert_to_addr(sts_a_ptr); // convert pointer to address for PTX load/store instructions
// store first block to first half of shared memory
sts_ptx(sts_a_addr);
// switch address to second half of shared memory
sts_a_addr ^= 8192;
for (int i=0; i<(K/ks-1); i++) {
...
// store next block to second(first) half of shared memory
sts_ptx(sts_a_addr);
...
// load block from first(second) half of shared memory to compute c+=block_a*block_b
lds_ptx(lds_a_addr);
...
// swap the addresses for next iteration: lds_a_addr = sts_a_addr, sts_a_addr = lds_a_addr
lds_a_addr ^= 8192;
sts_a_addr ^= 8192;
...
}
...
First, we require blocks_a to be 2*256*8*4=2^14=16384-byte aligned. This implies that the address of the first element of blocks_a is divisible by 16384, or in other words, that the last 14 bits of the address are zero:
As each block is 8192=2^13 bytes in size, switching between the blocks can now be implemented with just a single XOR instruction ^= 8192. The only drawback of this method is the unused shared memory (in this case 2*8*128*4 bytes). However, this can be ignored considering the maximum amount of shared memory available per thread block on modern GPUs.
Loading and storing the 8 x 128 submatrix $\tilde{B}$ is much simpler to manage due to its shape. Since this sub-matrix does not need to be transposed, the loading and storing schemes are identical:
We use 32 consecutive threads to load 32 consecutive elements, with each thread loading 4 elements, spaced apart by a stride of 32. Note that since we store data in 32 distinct shared memory banks, no padding is required, and bank conflicts are avoided. Furthermore, the block size 128*8 is naturally a power of two, eliminating the need for additional padding and allowing block switching with a single XOR ^=4096 instruction.
5.2. Shared Memory Loads and Arithmetic Operations
With blocks $\tilde{A}$ and $\tilde{B}$ now residing in shared memory, let’s discuss how to efficiently load from shared memory and compute block $\tilde{C}$. To do this, we’ll dive one level deeper into our parallelization strategy and describe the algorithm from a warp’s perspective:
The launched thread block consists of 256 threads, which corresponds to 256/32=8 warps. The block $\tilde{C}$, with dimensions $128 \times 128$, is therefore divided into 8 regions $\tilde{C}_W$ labeled $W1, …, W8$ in the figure. Each region $\tilde{C}_W$ has dimensions $m_W \times n_W = 32 \times 64$ and is computed by a single warp: $W1$ is computed by threads t0-t31, $W2$ is computed by threads t32-t63, and so on, with $W8$ computed by threads t224-t255. The figure above uses $W8$ as an example to demonstrate how a single $\tilde{C}_W$ region is computed. We iterate over the $k_S$ dimension of the blocks, and in each iteration we load a fragment of $\tilde{A}$ (the $m_W \times 1$ column slice belonging to the warp's rows) and a fragment of $\tilde{B}$ (the corresponding $1 \times n_W$ row slice) from shared memory into registers, and update $\tilde{C}_W$ with their outer product.
As $k_S = 8$, there will be in total 8 iterations. This explanation is from the perspective of a warp. Now, let’s delve one final level deeper and examine how the work within a warp is distributed among its 32 threads.
Each thread in a warp computes four 4x4 sub-matrices (= accumulators) within $\tilde{C}_W$, or, if concatenated, an 8x8 accumulator. To do this, each thread loads 8 elements from fragment_a and 8 elements from fragment_b (as illustrated for thread t0 in the figure), multiplies them and updates the accumulator using fused multiply-add (FMA) instructions. Since block_a was transposed in the previous step, the elements in fragment_a are stored contiguously in memory, allowing faster access through vectorized loads. The threads are arranged in a way that avoids bank conflicts and works around NVIDIA's shared memory broadcast limitation. This limitation occurs when 4 floats loaded using a 16-byte vector instruction must be broadcast to more than 4 consecutive threads within a warp.
Bringing everything together, the entire SGEMM algorithm can be visualized as follows:
As you might expect, the accumulators are frequently updated during the computation and need to be stored in the fastest memory - the register files. Each thread allocates float accumulator[8][8], so that the entire block $\tilde{C}$ of size $128 \times 128$ is stored in registers by the 256 threads. This works because 256=16*16, and the combined arrangement (16*8)x(16*8)=128x128 matches the size of $\tilde{C}$. Just as we used double buffering to load the blocks $\tilde{A}$ and $\tilde{B}$ (from global memory to shared memory), we now also double buffer the fragments to minimize memory transfer latencies when moving data from shared memory to registers. The pseudocode for the algorithm can be written as follows:
// Pseudocode
__shared__ float block_a[2][block_a_size]
__shared__ float block_b[2][block_b_size]
float fragment_a[2][8]
float fragment_b[2][8]
float accumulator[8][8]
block_a[0] = load first block of matrix A
block_b[0] = load first block of matrix B
fragment_a[0] = load first fragment from block_a[0]
fragment_b[0] = load first fragment from block_b[0]
for (block_k=0; block_k<(K/ks-1); block_k++) {
block_idx = block_k % 2
block_prefetch_idx = (block_k+1) % 2
// prefetch next blocks (Shared Memory Double buffering)
block_a[block_prefetch_idx] = load next block of matrix A
block_b[block_prefetch_idx] = load next block of matrix B
for (int warp_k=0; warp_k<8; warp_k++) {
frag_idx = warp_k % 2
frag_prefetch_idx = (warp_k + 1) % 2
// prefetch next fragments (Register Double buffering)
fragment_a[frag_prefetch_idx] = load next fragment from block_a[block_idx]
fragment_b[frag_prefetch_idx] = load next fragment from block_b[block_idx]
// use fragments loaded in previous iteration to calculate matrix product
for (int i = 0; i < 8; i++) {
for (int j = 0; j < 8; j++) {
accumulator[i][j] += fragment_a[frag_idx][i] * fragment_b[frag_idx][j];
}
}
}
fragment_a[0] = load first fragment from block_a[block_prefetch_idx]
fragment_b[0] = load first fragment from block_b[block_prefetch_idx]
}
// final update of the accumulator using last blocks
for (int warp_k=0; warp_k<8; warp_k++) {
frag_idx = warp_k % 2
frag_prefetch_idx = (warp_k + 1) % 2
// prefetch next fragments (Register Double buffering)
fragment_a[frag_prefetch_idx] = load next fragment from block_a[block_prefetch_idx]
fragment_b[frag_prefetch_idx] = load next fragment from block_b[block_prefetch_idx]
// use fragments loaded in previous iteration to calculate matrix product
for (int i = 0; i < 8; i++) {
for (int j = 0; j < 8; j++) {
accumulator[i][j] += fragment_a[frag_idx][i] * fragment_b[frag_idx][j];
}
}
}
// After completing the matrix multiplication C=A*B, we perform one final update to the accumulator
// to compute C=alpha*A*B before storing the result back to global memory:
for (int i=0; i<8; i++) {
for (int j=0; j<8; j++) {
accumulator[i][j] *= alpha;
}
}
store_to_global_memory(accumulator)
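As a rough resource estimate (assuming the tile sizes implied by the kernel name, i.e. $m_S = n_S = 128$ and $k_S = 8$): the double-buffered shared memory blocks occupy $2 \times (128 \times 8 + 8 \times 128) \times 4 = 16384$ bytes (16 KB) per thread block, while each thread keeps $8 \times 8 = 64$ accumulator floats plus $2 \times (8 + 8) = 32$ fragment floats in registers.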
5.3. Coalesced Global Memory Stores Through Shared Memory
Just as with global memory reads, we want our global memory writes to be coalesced. However, directly storing the accumulators to global memory based on our current mapping
would result in scattered, uncoalesced memory accesses, significantly hurting performance. To fix this, we use shared memory as a buffer to rearrange the accumulators, enabling coalesced global memory writes. At this stage, the accumulators have already been computed, so we no longer need shared memory for computation, and the overhead of the additional register-to-shared-memory transfers is negligible compared to the performance gained through coalesced writes. We write the accumulator’s elements to shared memory row by row according to the following scheme:
The first row, containing 32 elements, is copied to the first 32 consecutive memory addresses in shared memory. Similarly, the second row is copied to the next 32 consecutive memory addresses, and so on, until all 16 rows have been copied to shared memory. Next, we iterate through the rows in shared memory, and in each iteration, we store a row (containing 32 elements) to global memory using coalesced writes:
The process is then repeated for the other three 4x4 accumulators of the threads.
To compute $C := \alpha AB + \beta C$, we make a slight adjustment to the process of storing the data to global memory. After copying the accumulator from registers to shared memory, we check if beta != 0.0. If true, we load (using coalesced loads) the corresponding element from global memory into a register, multiply it by beta and add the result to the accumulator stored in shared memory. Finally, we store the updated accumulator alpha*A*B+beta*C from shared memory to global memory using coalesced writes.
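A minimal sketch of this per-element epilogue, reduced to a single 32-float row staged in shared memory (names such as smem_row and C_row are illustrative and not taken from the actual kernel):
// Hedged sketch of the C := alpha*A*B + beta*C epilogue for one 32-element row.
// The accumulator values have already been copied from registers into smem_row.
__device__ void store_row_epilogue(float* C_row,          // row of C in global memory
                                   const float* smem_row, // 32 staged accumulator values
                                   float alpha, float beta)
{
    int lane = threadIdx.x & 31;          // one warp handles one 32-element row
    float value = alpha * smem_row[lane]; // scale the accumulated product
    if (beta != 0.0f)
    {
        value += beta * C_row[lane];      // coalesced load of the existing C values
    }
    C_row[lane] = value;                  // coalesced store of the final result
}
In the real kernel the beta-scaled value is added to the accumulator held in shared memory before the final coalesced store, but the per-element arithmetic is the same.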
6. Performance Analysis
So far, we have discussed the design of the 128x128x8 SGEMM kernel. Its implementation is available at 128x128x8.cuh and closely follows the pseudo-code outlined earlier. Let’s now benchmark this kernel to evaluate its performance. First, we conduct a benchmark with locked clock frequencies:
The benchmark results show that the implementation outperforms cuBLAS when clock speeds remain constant. However, performance alone is not enough; we also need to consider power consumption. To evaluate both metrics, we run the benchmark with unlocked clock frequencies:
This reveals the effect of throttling due to reaching power limits. While the 128x128x8 kernel is, on average, 3–4% faster than cuBLAS, it consumes 12% more power. The increased power consumption causes the GPU to operate near the power limit for matrix sizes m=n=k>4000, resulting in reduced clock speeds and overall performance degradation. This is why optimizing for both runtime and power consumption is required to achieve a balanced and efficient implementation.
We can slightly improve the runtime of the kernel by utilizing vectorized global texture loads. The new kernel is available at 128x128x8_texld. Since the vectorized load instructions impose alignment constraints on the input data, we first verify the memory alignment and ensure the leading dimensions of matrices A and B are divisible by 4:
bool is_aligned = (((unsigned)lda & 3u) == 0) && (((unsigned)ldb & 3u) == 0)
&& (((unsigned long)A & 15u) == 0) && (((unsigned long)B & 15u) == 0);
If the input data is aligned, we can use the vectorized load instructions. First, we need to create texture objects along with their texture and resource descriptors. These are configured to handle the float data type with four 32-bit channels (x, y, z, w). The texture objects are then bound to the operands A and B and passed to the kernel instead of the raw pointers A and B.
cudaResourceDesc resDesc;
cudaTextureDesc texDesc;
cudaTextureObject_t tex_a = 0;
cudaTextureObject_t tex_b = 0;
...
if (is_aligned) {
memset(&texDesc, 0, sizeof(texDesc));
texDesc.readMode = cudaReadModeElementType;
texDesc.normalizedCoords = 0;
memset(&resDesc, 0, sizeof(resDesc));
resDesc.resType = cudaResourceTypeLinear;
resDesc.res.linear.desc.f = cudaChannelFormatKindFloat;
resDesc.res.linear.desc.x = 32;
resDesc.res.linear.desc.y = 32;
resDesc.res.linear.desc.z = 32;
resDesc.res.linear.desc.w = 32;
resDesc.res.linear.devPtr = A;
resDesc.res.linear.sizeInBytes = m * lda * sizeof(float);
cudaCreateTextureObject(&tex_a, &resDesc, &texDesc, NULL);
resDesc.res.linear.devPtr = B;
resDesc.res.linear.sizeInBytes = k * ldb * sizeof(float);
cudaCreateTextureObject(&tex_b, &resDesc, &texDesc, NULL);
sgemm_texld_128x128x8<<<grid, threads>>>(m,
n,
k,
*alpha,
tex_a,
lda,
tex_b,
ldb,
*beta,
C,
ldc);
cudaDestroyTextureObject(tex_a);
cudaDestroyTextureObject(tex_b);
}
Within the kernel, we load data through texture objects using the tex1Dfetch function, which compiles to a single PTX instruction:
float4 texld_a_buffer;
texld_a_buffer = tex1Dfetch<float4>(tex_a, texld_a_offset);
We use global texture loads over normal vectorized global loads (ld.global.v4.f32) because texture loads handle out-of-bounds reads gracefully by returning zeros, avoiding the need for predicated execution. This simplification leads to more efficient code:
Lastly, we developed a 128x256x8 SGEMM kernel leveraging a bigger block size $n_S=256$ and asynchronous copy instructions (cp.async.ca.shared.global), which are supported starting with the Ampere architecture. The main advantage of these instructions is that one can overlap computation with memory transfers and avoid pipeline stalls. Additionally, they allow copying data from global memory directly into shared memory, bypassing registers:
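A minimal sketch of the idea, using the CUDA pipeline primitives that compile down to cp.async on Ampere and newer (the 256-element float4 tile and all names here are illustrative, not the kernel's actual code):
#include <cuda_pipeline.h>
// Stage one tile from global to shared memory asynchronously, bypassing registers.
// Requires compute capability 8.0+ and a block of 256 threads in this sketch.
__global__ void load_tile_async(const float4* __restrict__ gmem_tile)
{
    __shared__ float4 smem_tile[256];   // one tile staged in shared memory
    int t = threadIdx.x;                // each thread issues one 16-byte copy
    __pipeline_memcpy_async(&smem_tile[t], &gmem_tile[t], sizeof(float4));
    __pipeline_commit();                // close this batch of async copies
    __pipeline_wait_prior(0);           // block until the batch has landed
    __syncthreads();
    // ... compute on smem_tile; the copies for the next tile could already be in flight ...
}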
Simply replacing the normal global load instructions with cp.async in the 128x128x8 kernel results in degraded performance, possibly due to the higher latency of the cp.async instructions or suboptimal compiler optimizations. However, combining the increased block size with cp.async yields superior results in both speed and power efficiency:
Our final implementation combines the 128x128x8 and 128x256x8 kernels: for smaller matrices (m = n < 2500) we use the 128x128x8 kernel; otherwise, we use the 128x256x8 kernel.
|
Harshit Kumar
Matrix Multiplication in CUDA
Matrix multiplication is at the heart of deep learning. In this evolving world of LLMs, the need for fast and efficient matrix multiplication is paramount. NVIDIA CUDA allows you to perform matrix operations on the GPU much faster than on the CPU.
CUDA (Compute Unified Device Architecture) is a parallel computing platform and application programming interface (API) model. The CUDA programming model provides an abstraction of the GPU architecture.
In this blog post, we will explore how to implement matrix multiplication using CUDA. We will start with a naive implementation on the CPU and then demonstrate how to significantly speed up the process using CUDA.
Naive C++ Implementation on CPU
Since matrices are typically stored in row-major format in memory, let’s define our 2D matrices as row-major 1D arrays.
struct Matrix
{
int height;
int width;
float *elements; // height x width
// you can also use std::vector<float> elements for automatic memory management
};
Matrix multiplication for computing each element of matrix C from matrices A and B can be written as C[i][j] = sum_k A[i][k] * B[k][j],
where i and j are the row and column indices of the resulting matrix C and k is the index used for the summation over the common dimension.
Our naive matrix multiplication in C++ on the CPU is:
void matMulCPU(const Matrix &A, const Matrix &B, Matrix &C)
{
for (int row = 0; row < A.height; ++row)
{
for (int col = 0; col < B.width; ++col)
{
float cValue = 0;
// C[i][j] = sum_k A[i][k] * B[k][j]
for (int k = 0; k < A.width; ++k)
cValue += A.elements[row * A.width + k] * B.elements[k * B.width + col];
C.elements[row * C.width + col] = cValue;
}
}
}
We can use the main() function below to call matMulCPU() and measure its performance.
// Function to initialize a matrix with random values
void initializeMatrix(Matrix &mat)
{
for (int i = 0; i < mat.height * mat.width; ++i)
mat.elements[i] = static_cast<float>(rand() % 100);
}
int main()
{
int M = 1024; // Rows of A and C
int K = 768; // Columns of A and rows of B
int N = 1024; // Columns of B and C
// Allocate matrices A, B, and C
Matrix A = {M, K, new float[M * K]}; // 1024x768
Matrix B = {K, N, new float[K * N]}; // 768x1024
Matrix C = {M, N, new float[M * N]}; // 1024x1024
// Initialize matrices A and B with random values
initializeMatrix(A);
initializeMatrix(B);
// Measure the time taken for matrix multiplication on the CPU
auto start = std::chrono::high_resolution_clock::now();
matMulCPU(A, B, C);
auto stop = std::chrono::high_resolution_clock::now();
std::chrono::duration<float> duration = stop - start;
cout << "CPU matrix multiplication time: " << duration.count() * 1000.0f << " ms" << endl;
// Clean up memory
delete[] A.elements;
delete[] B.elements;
delete[] C.elements;
return 0;
}
Naive CUDA Kernel
In CUDA, we define a kernel, which is a C++ function that is executed on the GPU.
In the CUDA programming model, there is a three-level hierarchy. Threads are the smallest unit of execution. Threads are grouped into CUDA thread blocks, and blocks are grouped into arrays called grids. The kernel is written from the perspective of a single thread. Thus, a kernel is executed as a grid of blocks of threads.
On a CPU, matrix multiplication is typically performed sequentially, where each element of the output matrix is computed one after another. This process can be slow for large matrices due to the limited number of CPU cores available for parallel execution. In contrast, the GPU excels at parallel processing. A CUDA kernel is executed by many threads running simultaneously, allowing for significant speedup in computations like matrix multiplication. The GPU’s architecture enables it to handle thousands of threads concurrently, making it well-suited for tasks with high levels of parallelism.
Let’s re-write the above matrix multiplication code in CUDA. We use the __global__ keyword to define a CUDA kernel. Here, we assign one thread to the calculation of each element of the output matrix C, and many such threads run in parallel. Each thread reads one row of A and one column of B to compute one element of C.
Threads and blocks are indexed using the built-in 3D variables threadIdx and blockIdx, and blockDim gives the dimensions of the thread block. Their components are accessed with the dot operator, e.g. threadIdx.x, threadIdx.y, and threadIdx.z. Thus, for a 2D thread block, we can address a particular element of C using a combination of these, as shown in the code below.
__global__ void matMulNaiveKernel(Matrix A, Matrix B, Matrix C)
{
int row = blockIdx.y * blockDim.y + threadIdx.y;
int col = blockIdx.x * blockDim.x + threadIdx.x;
// Each thread accumulates one element of C by accumulating results into cValue
float cValue = 0;
// C[i][j] = sum_k A[i][k] * B[k][j]
// Iterates over common dimensions of A and B (k = A.width = B.height)
if (row < A.height && col < B.width)
{
for (int k = 0; k < A.width; ++k)
cValue += A.elements[row * A.width + k] * B.elements[k * B.width + col];
C.elements[row * C.width + col] = cValue;
}
}
We create a 16x16 thread block (256 threads, 16 in each of the x and y directions) and define (B.width/BLOCK_SIZE, A.height/BLOCK_SIZE) blocks per grid. The extra arithmetic below takes care of the last tile when the matrix size isn’t perfectly divisible by the block size.
#define BLOCK_SIZE 16
dim3 threadsPerBlock(BLOCK_SIZE, BLOCK_SIZE);
dim3 blocksPerGrid((B.width + threadsPerBlock.x - 1) / threadsPerBlock.x,
(A.height + threadsPerBlock.y - 1) / threadsPerBlock.y);
runKernel(matMulNaiveKernel, A, B, C, blocksPerGrid, threadsPerBlock);
This kernel is called with the device (GPU) matrices A, B, and C as follows:
kernel<<<blocksPerGrid, threadsPerBlock>>>(d_A, d_B, d_C);
This setup ensures that the CUDA kernel efficiently processes the entire matrix by dividing the workload among the available threads and blocks.
To execute the CUDA program, we pass our kernel to the runKernel() function, which also takes the CPU matrices A and B. It copies the data from the CPU to the GPU, runs the kernel, copies the result from the GPU back to the CPU, and returns the result in matrix C.
void runKernel(void(*kernel)(Matrix, Matrix, Matrix),
const Matrix &A, const Matrix &B, Matrix &C,
dim3 gridDim, dim3 blockDim)
{
// Load matrices to device memory
Matrix d_A, d_B, d_C;
size_t size_A = A.width * A.height * sizeof(float);
size_t size_B = B.width * B.height * sizeof(float);
size_t size_C = C.width * C.height * sizeof(float);
d_A.width = A.width; d_A.height = A.height;
d_B.width = B.width; d_B.height = B.height;
d_C.width = C.width; d_C.height = C.height;
// Allocate device memory
CUDA_CHECK_ERROR(cudaMalloc(&d_A.elements, size_A));
CUDA_CHECK_ERROR(cudaMalloc(&d_B.elements, size_B));
CUDA_CHECK_ERROR(cudaMalloc(&d_C.elements, size_C));
// Copy A, B to device memory
CUDA_CHECK_ERROR(cudaMemcpy(d_A.elements, A.elements, size_A, cudaMemcpyHostToDevice));
CUDA_CHECK_ERROR(cudaMemcpy(d_B.elements, B.elements, size_B, cudaMemcpyHostToDevice));
auto start = std::chrono::high_resolution_clock::now();
// Launch kernel
kernel<<<gridDim, blockDim>>>(d_A, d_B, d_C);
// Synchronize device memory
CUDA_CHECK_ERROR(cudaDeviceSynchronize());
auto end = std::chrono::high_resolution_clock::now();
std::chrono::duration<float> duration = end - start;
std::cout << "Kernel execution time: " << duration.count() * 1000.0f << " ms" << std::endl;
// Copy C from device memory to host memory
CUDA_CHECK_ERROR(cudaMemcpy(C.elements, d_C.elements, size_C, cudaMemcpyDeviceToHost));
// Free device memory
CUDA_CHECK_ERROR(cudaFree(d_A.elements));
CUDA_CHECK_ERROR(cudaFree(d_B.elements));
CUDA_CHECK_ERROR(cudaFree(d_C.elements));
}
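The CUDA_CHECK_ERROR macro used in runKernel() is not defined in this excerpt; a minimal sketch of such an error-checking helper (the definition in the author's repository may differ) is:
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>
// Abort with file/line information if a CUDA runtime call returns an error.
#define CUDA_CHECK_ERROR(call)                                          \
    do {                                                                \
        cudaError_t err_ = (call);                                      \
        if (err_ != cudaSuccess) {                                      \
            std::fprintf(stderr, "CUDA error '%s' at %s:%d\n",          \
                         cudaGetErrorString(err_), __FILE__, __LINE__); \
            std::exit(EXIT_FAILURE);                                    \
        }                                                               \
    } while (0)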
Finally, we call runKernel() from the main() function defined earlier.
CUDA Shared Memory Kernel
The previous CUDA kernel reads all of its data from global memory (DRAM), but we can optimize performance by leveraging the GPU’s shared memory. Shared memory is faster but has limited capacity, so we cannot load entire matrices at once. Instead, we divide the matrices into smaller sub-matrices, or tiles, that fit into shared memory.
Shared memory is allocated per thread block, allowing threads within the same block to communicate efficiently. Each thread block is responsible for computing one square sub-matrix \(C_{sub}\) of C by loading tiles of input matrices A and B from global memory to shared memory. Each thread within the block computes a single element of \(C_{sub}\) by iterating over the corresponding elements in the shared memory tiles, accumulating the results of the products. Finally, each thread writes its computed value to the appropriate position in global memory.
#define TILE_SIZE 16
// Kernel for matrix multiplication using tiling and shared memory
__global__ void matMulSharedMemoryKernel(Matrix A, Matrix B, Matrix C)
{
// Shared memory for tiles of A and B
__shared__ float shared_A[TILE_SIZE][TILE_SIZE];
__shared__ float shared_B[TILE_SIZE][TILE_SIZE];
// Calculate the global row and column index of the element
int globalRow = blockIdx.y * blockDim.y + threadIdx.y;
int globalCol = blockIdx.x * blockDim.x + threadIdx.x;
float Cvalue = 0.0f;
// Thread row and column within Csub
int row = threadIdx.y;
int col = threadIdx.x;
// Loop over the tiles of the input matrices
// A.width/TILE_SIZE and B.height/TILE_SIZE; take care of the last tile
for (int m = 0; m < (A.width + TILE_SIZE - 1) / TILE_SIZE; ++m)
{
// Load elements of A into shared memory
// if shared memory defined using 1d array, we'd have used shared_A[row * TILE_SIZE + col]
if (row < A.height && (m * TILE_SIZE + col) < A.width)
{
shared_A[row][col] = A.elements[globalRow * A.width + m * TILE_SIZE + col];
} else
{
// When matrix dimensions are not exact multiples of the tile size,
// some threads in the last blocks might access elements outside
// the matrix boundaries. By setting out-of-bounds elements to zero,
// we ensure that these threads do not contribute invalid values to final result.
// e.g. Matrix A = [100x100] and TILE_SIZE = 16
shared_A[row][col] = 0.0f;
}
// Load elements of B into shared memory
if (col < B.width && (m * TILE_SIZE + row) < B.height)
{
shared_B[row][col] = B.elements[(m * TILE_SIZE + row) * B.width + globalCol];
} else
{
shared_B[row][col] = 0.0f;
}
// Synchronize to ensure all threads have loaded their elements
__syncthreads();
// Compute the partial result
for (int k = 0; k < TILE_SIZE; ++k)
Cvalue += shared_A[row][k] * shared_B[k][col];
// Synchronize to ensure all threads have completed the computation
__syncthreads();
}
// Write the result to global memory
if (globalRow < C.height && globalCol < C.width)
C.elements[globalRow * C.width + globalCol] = Cvalue;
}
We can call our kernel as follows:
dim3 blockDim(TILE_SIZE, TILE_SIZE);
dim3 gridDim((C.width + TILE_SIZE - 1) / TILE_SIZE, (C.height + TILE_SIZE - 1) / TILE_SIZE);
runKernel(matMulSharedMemoryKernel, A, B, C, gridDim, blockDim);
CUDA Matrix Multiplication Comparison
The kernel execution times of the above kernels on a Tesla T4 on Google Colab are as follows.
CUDA parallelism improves dramatically over the CPU computation time, and the shared memory kernel achieves the fastest execution time.
The full code is available at https://github.com/kHarshit/cuda-programming
Further Optimization
There are other ways to optimize the CUDA matrix multiplication kernel further. By combining such advanced optimization techniques with shared memory, you can achieve even greater performance gains for matrix multiplication on CUDA-enabled GPUs.
|
Optimizing Matrix Transpose in CUDA
January 2009
Greg Ruetsch ([email protected])
Paulius Micikevicius ([email protected])
Chapter 1. Introduction
Optimizing CUDA Memory Management in Matrix Transpose
This document discusses aspects of CUDA application performance related to
efficient use of GPU memories and data management as applied to a matrix
transpose. In particular, this document discusses the following issues of memory
usage:
coalescing data transfers to and from global memory
shared memory bank conflicts
partition camping
There are other aspects of efficient memory usage not discussed here, such as
data transfers between host and device, as well as constant and texture memories.
Both coalescing and partition camping deal with data transfers between global
device and on-chip memories, while shared memory bank conflicts deal with on-
chip shared memory.
The reader should be familiar with basic CUDA programming concepts such as
kernels, threads, and blocks, as well as a basic understanding of the different
memory spaces accessible by CUDA threads. A good introduction to CUDA
programming is given in the CUDA Programming Guide as well as other
resources on CUDA Zone (http://www.nvidia.com/cuda).
The matrix transpose problem statement is given next, followed by a brief
discussion of performance metrics, after which the remainder of the document
presents a sequence of CUDA matrix transpose kernels which progressively
address various performance bottlenecks.
Matrix Transpose Characteristics
In this document we optimize a transpose of a matrix of floats that operates out-
of-place, i.e. the input and output matrices address separate memory locations.
For simplicity and brevity in presentation, we consider only square matrices
whose dimensions are integral multiples of 32 on a side, the tile size, through the
document. However, modifications of code required to accommodate matrices of
arbitrary size are straightforward.
Code Highlights and Performance Measurements
The host code for all the transpose cases is given in Appendix A. The host code
performs typical tasks: data allocation and transfer between host and device, the
launching and timing of several kernels, result validation, and the deallocation of
host and device memory.
In addition to different matrix transposes, we run kernels that execute matrix
copies. The performance of the matrix copies serve as benchmarks that we
would like the matrix transpose to achieve.
For both the matrix copy and transpose, the relevant performance metric is the
effective bandwidth, calculated in GB/s as twice the size of the matrix – once for
reading the matrix and once for writing – divided by the time of execution. Since
timing is performed in loops executed NUM_REPS times, which is defined at the
top of the code, the effective bandwidth is also normalized by NUM_REPS.
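As a worked example using the same convention as the host code in Appendix A: a 2048x2048 float matrix occupies 2048 × 2048 × 4 bytes = 16 MiB, so one copy or transpose moves 2 × 16 MiB = 32 MiB. If a single kernel launch took, say, 0.4 ms, the effective bandwidth would be (32 MiB / 2^30) / 0.0004 s ≈ 78 GB/s. (The 0.4 ms figure is purely illustrative.)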
Looping NUM_REPS times over code for measurement is done in two different
fashions: looping over kernel launches, and looping within the kernel over the
load and stores. The host code for these measurements is given below:
// take measurements for loop over kernel launches
cudaEventRecord(start, 0);
for (int i=0; i < NUM_REPS; i++) {
kernel<<<grid, threads>>>(d_odata, d_idata,size_x,size_y,1);
}
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
float outerTime;
cudaEventElapsedTime(&outerTime, start, stop);
...
// take measurements for loop inside kernel
cudaEventRecord(start, 0);
kernel<<<grid,threads>>>
(d_odata, d_idata, size_x, size_y, NUM_REPS);
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
float innerTime;
cudaEventElapsedTime(&innerTime, start, stop);
The first timing is done by a for loop in the host code, the second by passing
NUM_REPS as a parameter to the kernel. A simple copy kernel is shown below:
__global__ void copy(float *odata, float* idata, int width,
int height, int nreps)
{
int xIndex = blockIdx.x*TILE_DIM + threadIdx.x;
int yIndex = blockIdx.y*TILE_DIM + threadIdx.y;
int index = xIndex + width*yIndex;
for (int r=0; r < nreps; r++) {
for (int i=0; i<TILE_DIM; i+=BLOCK_ROWS) {
odata[index+i*width] = idata[index+i*width];
}
}
}
The difference between these two timings is the overhead of the kernel launch,
which should be consistent between different kernels, as well as the time spent in
calculating the matrix indices at the beginning of each kernel. In addition,
looping over kernel launches also acts as a synchronization mechanism. When
the kernel is launched multiple times from a loop in host code, all blocks from
one kernel launch must complete execution before any block of a following
launch can begin. As a result the set of active blocks and hence memory access
patterns resets every loop iteration. When the loop is performed within the
kernels, the set of active thread blocks has more opportunity to diverge as
execution progresses through the timing loop.
Both methods of timing code provide useful measurements, the first indicating
what one would typically use as an overall performance metric, and the second as
a means of comparing the data movement times between kernels.
In the following section we present different kernels called from the host code,
each addressing different performance issues. All kernels in this study launch
thread blocks of dimension 32x8, where each block transposes (or copies) a tile
of dimension 32x32. As such, the parameters TILE_DIM and BLOCK_ROWS
are set to 32 and 8, respectively. Using a thread block with fewer threads than
elements in a tile is advantageous for the matrix transpose in that each thread
transposes several matrix elements, four in our case, and much of the cost of
calculating the indices is amortized over these elements.
2. Copy and Transpose Kernels
Simple copy
The first two cases we consider are a naïve transpose and simple copy, each
using blocks of 32x8 threads on a 32x32 matrix tiles. The copy kernel was given
in the previous section, and shows the basic layout for all of the kernels. The
first two arguments odata and idata are pointers to the input and output
matrices, width and height are the matrix x and y dimensions, and nreps
determines how many times the loop over data movement between matrices is
performed. In this kernel, the global 2D matrix indices xIndex and yIndex
are calculated, which are in turn used to calculate index, the 1D index used by
each thread to access matrix elements. The loop over i adds additional offsets
to index so that each thread copies multiple elements of the array, and the loop
over r is used for timing the data transfer from input to output array multiple
times.
Naïve transpose
The naïve transpose:
__global__ void transposeNaive(float *odata, float* idata,
int width, int height, int nreps)
{
int xIndex = blockIdx.x*TILE_DIM + threadIdx.x;
int yIndex = blockIdx.y*TILE_DIM + threadIdx.y;
int index_in = xIndex + width * yIndex;
int index_out = yIndex + height * xIndex;
for (int r=0; r < nreps; r++) {
for (int i=0; i<TILE_DIM; i+=BLOCK_ROWS) {
odata[index_out+i] = idata[index_in+i*width];
}
}
}
is nearly identical to the copy kernel above, with the exception that index, the
array index used to access elements in both input and output arrays for the copy
kernel, is replaced by the two indices index_in (equivalent to index in the
copy kernel), and index_out. Each thread executing the kernel transposes
four elements from one column of the input matrix to their transposed locations
in one row of the output matrix.
The performance of these two kernels on a 2048x2048 matrix using a GTX280 is
given in the following table:
Effective Bandwidth (GB/s), 2048x2048, GTX 280

Kernel             Loop over kernel    Loop in kernel
Simple Copy                    96.9              81.6
Naïve Transpose                 2.2               2.2
The minor differences in code between the copy and naïve transpose kernels have
a profound effect on performance - nearly two orders of magnitude. This brings
us to our first optimization technique: global memory coalescing.
Coalesced Transpose
Because device memory has a much higher latency and lower bandwidth than on-
chip memory, special attention must be paid to how global memory accesses are
performed, in our case loading data from idata and storing data in odata. All
global memory accesses by a half-warp of threads can be coalesced into one or
two transactions if certain criteria are met. These criteria depend on the compute
capability of the device, which can be determined, for example, by running the
deviceQuery SDK example. For compute capabilities of 1.0 and 1.1, the
following conditions are required for coalescing:
threads must access either 32-, 64-, or 128-bit words, resulting in either one
transaction (for 32- and 64-bit words) or two transactions (for 128-bit words)
All 16 words must lie in the same aligned segment of 64 or 128 bytes for 32-
and 64-bit words, and for 128-bit words the data must lie in two contiguous
128 byte aligned segments
The threads need to access words in sequence. If the k-th thread is to access
a word, it must access the k-th word, although not all threads need to
participate.
For devices with compute capabilities of 1.2, requirements for coalescing are
relaxed. Coalescing into a single transaction can occur when data lies in 32-, 64-,
and 128-byte aligned segments, regardless of the access pattern by threads within
the segment. In general, if a half-warp of threads access N segments of memory,
N memory transactions are issued.
In a nutshell, if a memory access coalesces on a device of compute capability 1.0
or 1.1, then it will coalesce on a device of compute capability 1.2 and higher. If
it doesn’t coalesce on a device of compute capability 1.0 or 1.1, then it may
either completely coalesce or perhaps result in a reduced number of memory
transactions, on a device of compute capability 1.2 or higher.
For both the simple copy and naïve transpose, all loads from idata coalesce on
devices with any of the compute capabilities discussed above. For each iteration
within the i-loop, each half warp reads 16 contiguous 32-bit words, or one half
of a row of a tile. Allocating device memory through cudaMalloc() and
choosing TILE_DIM to be a multiple of 16 ensures alignment with a segment of
memory, therefore all loads are coalesced.
Coalescing behavior differs between the simple copy and naïve transpose kernels
when writing to odata. For the simple copy, during each iteration of the i-
loop, a half warp writes one half of a row of a tile in a coalesced manner. In the
case of the naïve transpose, for each iteration of the i-loop a half warp writes one
half of a column of floats to different segments of memory, resulting in 16
separate memory transactions, regardless of the compute capability.
The way to avoid uncoalesced global memory access is to read the data into
shared memory, and have each half warp access noncontiguous locations in
shared memory in order to write contiguous data to odata. There is no
performance penalty for noncontiguous access patterns in shared memory as there
is in global memory; however, the above procedure requires that each element in
a tile be accessed by different threads, so a __syncthreads() call is
required to ensure that all reads from idata to shared memory have completed
before writes from shared memory to odata commence. A coalesced transpose
is listed below:
__global__ void transposeCoalesced(float *odata,
float *idata, int width, int height, int nreps)
{
__shared__ float tile[TILE_DIM][TILE_DIM];
int xIndex = blockIdx.x*TILE_DIM + threadIdx.x;
int yIndex = blockIdx.y*TILE_DIM + threadIdx.y;
int index_in = xIndex + (yIndex)*width;
xIndex = blockIdx.y * TILE_DIM + threadIdx.x;
yIndex = blockIdx.x * TILE_DIM + threadIdx.y;
int index_out = xIndex + (yIndex)*height;
for (int r=0; r < nreps; r++) {
for (int i=0; i<TILE_DIM; i+=BLOCK_ROWS) {
tile[threadIdx.y+i][threadIdx.x] =
idata[index_in+i*width];
}
__syncthreads();
for (int i=0; i<TILE_DIM; i+=BLOCK_ROWS) {
odata[index_out+i*height] =
tile[threadIdx.x][threadIdx.y+i];
}
}
}
A depiction of the data flow of a half warp in the coalesced transpose kernel is
given below. The half warp writes four half rows of the idata matrix tile to the
shared memory 32x32 array “tile” indicated by the yellow line segments.
After a __syncthreads() call to ensure all writes to tile are completed,
the half warp writes four half columns of tile to four half rows of an odata
matrix tile, indicated by the green line segments.
With the improved access pattern to memory in odata, the writes are coalesced
and we see an improved performance:
Effective Bandwidth (GB/s), 2048x2048, GTX 280

Kernel               Loop over kernel    Loop in kernel
Simple Copy                      96.9              81.6
Naïve Transpose                   2.2               2.2
Coalesced Transpose              16.5              17.1
While there is a dramatic increase in effective bandwidth of the coalesced
transpose over the naïve transpose, there still remains a large performance gap
between the coalesced transpose and the copy. The additional indexing required
by the transpose doesn’t appear to be the cause for the performance gap, since the
results in the “Loop in kernel” column, where the index calculation is amortized
over 100 iterations of the data movement, also show a large performance
difference. One possible cause of this performance gap is the synchronization
barrier required in the coalesced transpose. This can be easily assessed using the
following copy kernel which utilizes shared memory and contains a
__syncthreads() call:
__global__ void copySharedMem(float *odata, float *idata,
int width, int height, int nreps)
{
__shared__ float tile[TILE_DIM][TILE_DIM];
int xIndex = blockIdx.x*TILE_DIM + threadIdx.x;
int yIndex = blockIdx.y*TILE_DIM + threadIdx.y;
int index = xIndex + width*yIndex;
for (int r=0; r < nreps; r++) {
for (int i=0; i<TILE_DIM; i+=BLOCK_ROWS) {
tile[threadIdx.y+i][threadIdx.x] =
idata[index+i*width];
}
__syncthreads();
for (int i=0; i<TILE_DIM; i+=BLOCK_ROWS) {
odata[index+i*width] =
tile[threadIdx.y+i][threadIdx.x];
}
}
}
The __syncthreads() call is not needed for successful execution of this
kernel, as threads do not share data, and is included only to assess the cost of the
synchronization barrier in the coalesced transpose. The results are shown in the
following modified table:
Effective Bandwidth (GB/s), 2048x2048, GTX 280

Kernel               Loop over kernel    Loop in kernel
Simple Copy                      96.9              81.6
Shared Memory Copy               80.9              81.1
Naïve Transpose                   2.2               2.2
Coalesced Transpose              16.5              17.1
The shared memory copy results seem to suggest that the use of shared memory
with a synchronization barrier has little effect on the performance, certainly as far
as the “Loop in kernel” column indicates when comparing the simple copy and
shared memory copy. When comparing the coalesced transpose and shared
memory copy kernels, however, there is one performance bottleneck regarding
how shared memory is accessed that needs to be addressed: shared memory bank
conflicts.
Shared memory bank conflicts
Shared memory is divided into 16 equally-sized memory modules, called banks,
which are organized such that successive 32-bit words are assigned to successive
banks. These banks can be accessed simultaneously, and to achieve maximum
bandwidth to and from shared memory the threads in a half warp should access
shared memory associated with different banks. The exception to this rule is
when all threads in a half warp read the same shared memory address, which
results in a broadcast where the data at that address is sent to all threads of the
half warp in one transaction.
One can use the warp_serialize flag when profiling CUDA applications to
determine whether shared memory bank conflicts occur in any kernel. In
general, this flag also reflects use of atomics and constant memory, however
neither of these are present in our example.
The coalesced transpose uses a 32x32 shared memory array of floats. For this
sized array, all data in columns k and k+16 are mapped to the same bank. As a
result, when writing partial columns from tile in shared memory to rows in
odata the half warp experiences a 16-way bank conflict and serializes the
request. A simple way to avoid this conflict is to pad the shared memory array by one
column:
__shared__ float tile[TILE_DIM][TILE_DIM+1];
The padding does not affect the shared memory bank access pattern when writing a half warp to shared memory, which remains conflict free; with the extra column, the access of a half warp of data down a column is now also conflict free.
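To see why the extra column helps, consider the word offset of tile[r][c]: without padding it is 32r + c, so its bank is (32r + c) mod 16 = c mod 16, and the 16 threads of a half warp reading down a column (fixed c, r = 0..15) all hit the same bank - a 16-way conflict. With the padded width of 33, the offset becomes 33r + c and the bank is (33r + c) mod 16 = (r + c) mod 16, which takes 16 distinct values as r varies, so the column access becomes conflict free.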
The performance of the kernel, now coalesced and memory bank conflict free, is
added to our table below:
Effective Bandwidth (GB/s), 2048x2048, GTX 280

Kernel                         Loop over kernel    Loop in kernel
Simple Copy                                96.9              81.6
Shared Memory Copy                         80.9              81.1
Naïve Transpose                             2.2               2.2
Coalesced Transpose                        16.5              17.1
Bank Conflict Free Transpose               16.6              17.2
While padding the shared memory array did eliminate shared memory bank
conflicts, as was confirmed by checking the warp_serialize flag with the
CUDA profiler, it has little effect (when implemented at this stage) on
performance. As a result, there is still a large performance gap between the
coalesced and shared memory bank conflict free transpose and the shared
memory copy. In the next section we break the transpose into components to
determine the cause for the performance degradation.
Decomposing Transpose
There is over a factor of four performance difference between the best optimized
transpose and the shared memory copy in the table above. This is the case not
only for measurements which loop over the kernel launches, but also for
measurements obtained from looping within the kernel where the costs associated
with the additional index calculations are amortized over the 100 iterations.
To investigate further, we revisit the data flow for the transpose and compare it to
that of the copy, both of which are indicated in the top portion of the diagram
below. There are essentially two differences between the copy code and the
transpose: transposing the data within a tile, and writing data to transposed tile.
We can isolate the performance between each of these two components by
implementing two kernels that individually perform just one of these
components. As indicated in the bottom half of the diagram below, the fine-
grained transpose kernel transposes the data within a tile, but writes the tile to the
location that a copy would write the tile. The coarse-grained transpose kernel
writes the tile to the transposed location in the odata matrix, but does not
transpose the data within the tile.
[Figure: data flow for the copy, full transpose, fine-grained transpose, and coarse-grained transpose kernels between idata, the shared memory tile, and odata]
The source code for these two kernels is given below:
__global__ void transposeFineGrained(float *odata,
float *idata, int width, int height, int nreps)
{
__shared__ float block[TILE_DIM][TILE_DIM+1];
int xIndex = blockIdx.x * TILE_DIM + threadIdx.x;
int yIndex = blockIdx.y * TILE_DIM + threadIdx.y;
int index = xIndex + (yIndex)*width;
for (int r=0; r<nreps; r++) {
for (int i=0; i < TILE_DIM; i += BLOCK_ROWS) {
block[threadIdx.y+i][threadIdx.x] =
idata[index+i*width];
}
__syncthreads();
for (int i=0; i < TILE_DIM; i += BLOCK_ROWS) {
odata[index+i*height] =
block[threadIdx.x][threadIdx.y+i];
}
}
}
__global__ void transposeCoarseGrained(float *odata,
float *idata, int width, int height, int nreps)
{
__shared__ float block[TILE_DIM][TILE_DIM+1];
int xIndex = blockIdx.x * TILE_DIM + threadIdx.x;
int yIndex = blockIdx.y * TILE_DIM + threadIdx.y;
int index_in = xIndex + (yIndex)*width;
xIndex = blockIdx.y * TILE_DIM + threadIdx.x;
yIndex = blockIdx.x * TILE_DIM + threadIdx.y;
int index_out = xIndex + (yIndex)*height;
for (int r=0; r<nreps; r++) {
for (int i=0; i<TILE_DIM; i += BLOCK_ROWS) {
block[threadIdx.y+i][threadIdx.x] =
idata[index_in+i*width];
}
__syncthreads();
for (int i=0; i<TILE_DIM; i += BLOCK_ROWS) {
odata[index_out+i*height] =
block[threadIdx.y+i][threadIdx.x];
}
}
}
Note that the fine- and coarse-grained kernels are not actual transposes since in
either case odata is not a transpose of idata, but as you will see they are
useful in analyzing performance bottlenecks. The performance results for these
two cases are added to our table below:
Effective Bandwidth (GB/s), 2048x2048, GTX 280

Kernel                         Loop over kernel    Loop in kernel
Simple Copy                                96.9              81.6
Shared Memory Copy                         80.9              81.1
Naïve Transpose                             2.2               2.2
Coalesced Transpose                        16.5              17.1
Bank Conflict Free Transpose               16.6              17.2
Fine-grained Transpose                     80.4              81.5
Coarse-grained Transpose                   16.7              17.1
The fine-grained transpose has performance similar to the shared memory copy,
whereas the coarse-grained transpose has roughly the performance of the
coalesced and bank conflict free transposes. Thus the performance bottleneck
lies in writing data to the transposed location in global memory. Just as shared
memory performance can be degraded via bank conflicts, an analogous
performance degradation can occur with global memory access through partition
camping, which we investigate next.
Partition Camping
Just as shared memory is divided into 16 banks of 32-bit width, global memory is
divided into either 6 partitions (on 8- and 9-series GPUs) or 8 partitions (on 200-
and 10-series GPUs) of 256-byte width. We previously discussed that to use
shared memory effectively, threads within a half warp should access different
banks so that these accesses can occur simultaneously. If threads within a half
warp access shared memory through only a few banks, then bank conflicts occur.
To use global memory effectively, concurrent accesses to global memory by all
active warps should be divided evenly amongst partitions. The term partition
camping is used to describe the case when global memory accesses are directed
through a subset of partitions, causing requests to queue up at some partitions
while other partitions go unused.
While coalescing concerns global memory accesses within a half warp, partition
camping concerns global memory accesses amongst active half warps. Since
partition camping concerns how active thread blocks behave, the issue of how
thread blocks are scheduled on multiprocessors is important. When a kernel is
launched, the order in which blocks are assigned to multiprocessors is determined
by the one-dimensional block ID defined as:
bid = blockIdx.x + gridDim.x*blockIdx.y;
which is a row-major ordering of the blocks in the grid. Once maximum
occupancy is reached, additional blocks are assigned to multiprocessors as
needed. How quickly and the order in which blocks complete cannot be
determined, so active blocks are initially contiguous but become less contiguous
as execution of the kernel progresses.
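As a quick worked example for our 2048x2048 matrix with 32x32 tiles: gridDim.x = 2048/32 = 64, so the tile at blockIdx = (2,1) has bid = 2 + 64*1 = 66 and is assigned to a multiprocessor just after the 64 tiles of the first row of the grid.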
If we return to our matrix transpose and look at how tiles in our 2048x2048
matrices map to partitions on a GTX 280, as depicted in the figure below, we
immediately see that partition camping is a problem.
With 8 partitions of 256-byte width, all data in strides of 2048 bytes (or 512
floats) map to the same partition. Any float matrix with an integral multiple of
512 columns, such as our 2048x2048 matrix, will contain columns whose
elements map to a single partition. With tiles of 32x32 floats (or 128x128 bytes),
whose one-dimensional block IDs are shown in the figure, all the data within the
first two columns of tiles map to the same partition, and likewise for other pairs
of tile columns (assuming the matrices are aligned to a partition segment).
Combining how the matrix elements map to partitions, and how blocks are
scheduled, we can see that concurrent blocks will be accessing tiles row-wise in
idata which will be roughly equally distributed amongst partitions, however
these blocks will access tiles column-wise in odata which will typically access
global memory through just a few partitions.
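A little arithmetic makes this concrete. On the 200-series, element (r, c) of the 2048x2048 float matrix lies at byte offset 8192r + 4c, and with 8 partitions of 256 bytes its partition is ((8192r + 4c) / 256) mod 8 = (32r + c/64) mod 8 = (c/64) mod 8 (integer division), since 32r is a multiple of 8. The partition therefore depends only on the column index, which is why each pair of adjacent 32-float-wide tile columns maps entirely to a single partition.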
Having diagnosed the problem as partition camping, the question now turns to
what can be done about it. Just as with shared memory, padding is an option.
Adding an additional 64 columns (one partition width) to odata will cause rows
of a tile to map sequentially to different partitions. However, such padding can
become prohibitive to certain applications. There is a simpler solution that
essentially involves rescheduling how blocks are executed.
[Figure: one-dimensional block IDs of the 32x32 tiles in the 2048x2048 idata and odata matrices, showing that columns of tiles in odata map to the same partitions]
Diagonal block reordering
While the programmer does not have direct control of the order in which blocks
are scheduled, which is determined by the value of the automatic kernel variable
blockIdx, the programmer does have the flexibility in how to interpret the
components of blockIdx. Given how the components blockIdx are named,
i.e. x and y, one generally assumes these components refer to a cartesian
coordinate system. This does not need to be the case, however, and one can
choose otherwise. Within the cartesian interpretation one could swap the roles of
these two components, which would eliminate the partition camping problem in
writing to odata, however this would merely move the problem to reading data
from idata.
One way to avoid partition camping in both reading from idata and writing to
odata is to use a diagonal interpretation of the components of blockIdx: the
y component represents different diagonal slices of tiles through the matrix and
the x component indicates the distance along each diagonal. Both cartesian and
diagonal interpretations of blockIdx components are shown in the top portion
of the diagram below for a 4x4-block matrix, along with the resulting one-
dimensional block ID on the bottom.
[Figure: Cartesian and diagonal interpretations of the blockIdx components for a 4x4-block matrix, with the resulting one-dimensional block IDs bid = blockIdx.x + gridDim.x*blockIdx.y shown beneath each interpretation]
Before we discuss the merits of using the diagonal interpretation of blockIdx
components in the matrix transpose, we briefly mention how it can be efficiently
implemented using a mapping of coordinates. This technique is useful when
writing new kernels, but even more so when modifying existing kernels to use
diagonal (or other) interpretations of blockIdx fields. If blockIdx.x and
blockIdx.y represent the diagonal coordinates, then (for block-square
matrixes) the corresponding cartesian coordinates are given by the following
mapping:
blockIdx_y = blockIdx.x;
blockIdx_x = (blockIdx.x+blockIdx.y)%gridDim.x;
One would simply include the previous two lines of code at the beginning of the
kernel, and write the kernel assuming the cartesian interpretation of blockIdx
fields, except using blockIdx_x and blockIdx_y in place of blockIdx.x
and blockIdx.y, respectively, throughout the kernel. This is precisely what is
done in the transposeDiagonal kernel below:
__global__ void transposeDiagonal(float *odata,
float *idata, int width, int height, int nreps)
{
__shared__ float tile[TILE_DIM][TILE_DIM+1];
int blockIdx_x, blockIdx_y;
// diagonal reordering
if (width == height) {
blockIdx_y = blockIdx.x;
blockIdx_x = (blockIdx.x+blockIdx.y)%gridDim.x;
} else {
int bid = blockIdx.x + gridDim.x*blockIdx.y;
blockIdx_y = bid%gridDim.y;
blockIdx_x = ((bid/gridDim.y)+blockIdx_y)%gridDim.x;
}
int xIndex = blockIdx_x*TILE_DIM + threadIdx.x;
int yIndex = blockIdx_y*TILE_DIM + threadIdx.y;
int index_in = xIndex + (yIndex)*width;
xIndex = blockIdx_y*TILE_DIM + threadIdx.x;
yIndex = blockIdx_x*TILE_DIM + threadIdx.y;
int index_out = xIndex + (yIndex)*height;
for (int r=0; r < nreps; r++) {
for (int i=0; i<TILE_DIM; i+=BLOCK_ROWS) {
tile[threadIdx.y+i][threadIdx.x] =
idata[index_in+i*width];
}
__syncthreads();
for (int i=0; i<TILE_DIM; i+=BLOCK_ROWS) {
odata[index_out+i*height] =
tile[threadIdx.x][threadIdx.y+i];
}
}
}
Here we allow for both square and nonsquare matrices. The mapping for
nonsquare matrices can be used in the general case, however the simpler
expressions for square matrices evaluate quicker and are preferable when
appropriate.
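As a concrete example for a 4x4-block grid (gridDim.x = 4), the diagonal coordinates blockIdx = (0,1), (1,1), (2,1), (3,1) map to the cartesian coordinates (blockIdx_x, blockIdx_y) = (1,0), (2,1), (3,2), (0,3): the four blocks of one diagonal slice wrap around the grid, touching each tile column and each tile row exactly once.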
If we revisit our 2048x2048 matrix in the figure below, we can see how the
diagonal reordering solves the partition camping problem. When reading from
idata and writing to odata in the diagonal case, pairs of tiles cycle through
partitions just as in the cartesian case when reading data from idata.
The performance of the diagonal kernel in the table below reflects this. The
bandwidth measured when looping within the kernel over the read and writes to
global memory is within a few percent of the shared memory copy. When
looping over the kernel, the performance degrades slightly, likely due to
additional computation involved in calculating blockIdx_x and
blockIdx_y. However, even with this performance degradation the diagonal
transpose has over four times the bandwidth of the other complete transposes.
[Figure: tile-to-partition mapping for reads from idata and writes to odata under the Cartesian and diagonal interpretations of blockIdx; with the diagonal interpretation, both reads and writes cycle evenly through the partitions]
Effective Bandwidth (GB/s), 2048x2048, GTX 280

Kernel                         Loop over kernel    Loop in kernel
Simple Copy                                96.9              81.6
Shared Memory Copy                         80.9              81.1
Naïve Transpose                             2.2               2.2
Coalesced Transpose                        16.5              17.1
Bank Conflict Free Transpose               16.6              17.2
Fine-grained Transpose                     80.4              81.5
Coarse-grained Transpose                   16.7              17.1
Diagonal Transpose                         69.5              78.3
Summary
In this paper we have discussed several aspects of GPU memory management
through a sequence of progressively optimized transpose kernels. The sequence
is typical of performance tuning using CUDA. The first step in improving
effective bandwidth is to ensure that global memory accesses are coalesced,
which can improve performance by an order of magnitude.
The second step was to look at shared memory bank conflicts. In this study
eliminating shared memory bank conflicts appeared to have little effect on
performance, however that is largely due to when it was applied in relation to
other optimizations: the effect of bank conflicts was masked by partition
camping. By removing the padding of the shared memory array in the diagonally
reordered transpose, one can see that bank conflicts have a sizeable effect on
performance.
While coalescing and bank conflicts will remain relatively consistent as the
problem size varies, partition camping is dependent on problem size, and varies
across different generations of hardware. The particular sized matrix in this
example will experience far less performance degradation due to partition
camping on a G80-based card due to the different number of partitions: 6
partitions on the 8-series rather than 8 on the 200-series.
The final version of the transpose kernel by no means represents the highest level
of optimization that can be achieved. Tile size, number of elements per thread,
and instruction optimizations can improve performance, both of the transpose
and the copy kernels. But in the study we merely focused on the issues that have
the largest impact.
Appendix A - Host Code
#include <stdio.h>
// kernels transpose/copy a tile of TILE_DIM x TILE_DIM elements
// using a TILE_DIM x BLOCK_ROWS thread block, so that each thread
// transposes TILE_DIM/BLOCK_ROWS elements. TILE_DIM must be an
// integral multiple of BLOCK_ROWS
#define TILE_DIM 32
#define BLOCK_ROWS 8
// Number of repetitions used for timing.
#define NUM_REPS 100
int
main( int argc, char** argv)
{
// set matrix size
const int size_x = 2048, size_y = 2048;
// kernel pointer and descriptor
void (*kernel)(float *, float *, int, int, int);
char *kernelName;
// execution configuration parameters
dim3 grid(size_x/TILE_DIM, size_y/TILE_DIM),
threads(TILE_DIM,BLOCK_ROWS);
// CUDA events
cudaEvent_t start, stop;
// size of memory required to store the matrix
const int mem_size = sizeof(float) * size_x*size_y;
// allocate host memory
float *h_idata = (float*) malloc(mem_size);
float *h_odata = (float*) malloc(mem_size);
float *transposeGold = (float *) malloc(mem_size);
float *gold;
// allocate device memory
float *d_idata, *d_odata;
cudaMalloc( (void**) &d_idata, mem_size);
cudaMalloc( (void**) &d_odata, mem_size);
// initalize host data
for(int i = 0; i < (size_x*size_y); ++i)
h_idata[i] = (float) i;
// copy host data to device
cudaMemcpy(d_idata, h_idata, mem_size,
cudaMemcpyHostToDevice );
// Compute reference transpose solution
computeTransposeGold(transposeGold, h_idata, size_x, size_y);
// print out common data for all kernels
printf("\nMatrix size: %dx%d, tile: %dx%d, block: %dx%d\n\n",
size_x, size_y, TILE_DIM, TILE_DIM, TILE_DIM, BLOCK_ROWS);
printf("Kernel\t\t\tLoop over kernel\tLoop within kernel\n");
printf("------\t\t\t----------------\t------------------\n");
//
// loop over different kernels
//
for (int k = 0; k<8; k++) {
// set kernel pointer
switch (k) {
case 0:
kernel = &copy;
kernelName = "simple copy "; break;
case 1:
kernel = &copySharedMem;
kernelName = "shared memory copy "; break;
case 2:
kernel = &transposeNaive;
kernelName = "naive transpose "; break;
case 3:
kernel = &transposeCoalesced;
kernelName = "coalesced transpose "; break;
case 4:
kernel = &transposeNoBankConflicts;
kernelName = "no bank conflict trans"; break;
case 5:
kernel = &transposeCoarseGrained;
kernelName = "coarse-grained "; break;
case 6:
kernel = &transposeFineGrained;
kernelName = "fine-grained "; break;
case 7:
kernel = &transposeDiagonal;
kernelName = "diagonal transpose "; break;
}
// set reference solution
// NB: fine- and coarse-grained kernels are not full
// transposes, so bypass check
if (kernel == &copy || kernel == &copySharedMem) {
gold = h_idata;
} else if (kernel == &transposeCoarseGrained ||
kernel == &transposeFineGrained) {
gold = h_odata;
} else {
gold = transposeGold;
}
// initialize events, EC parameters
cudaEventCreate(&start);
cudaEventCreate(&stop);
// warmup to avoid timing startup
kernel<<<grid, threads>>>(d_odata, d_idata, size_x,size_y, 1);
// take measurements for loop over kernel launches
cudaEventRecord(start, 0);
for (int i=0; i < NUM_REPS; i++) {
kernel<<<grid, threads>>>(d_odata, d_idata,size_x,size_y,1);
}
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
float outerTime;
cudaEventElapsedTime(&outerTime, start, stop);
cudaMemcpy(h_odata,d_odata, mem_size, cudaMemcpyDeviceToHost);
int res = comparef(gold, h_odata, size_x*size_y);
if (res != 1)
printf("*** %s kernel FAILED ***\n", kernelName);
// take measurements for loop inside kernel
cudaEventRecord(start, 0);
kernel<<<grid,threads>>>
(d_odata, d_idata, size_x, size_y, NUM_REPS);
cudaEventRecord(stop, 0);
cudaEventSynchronize(stop);
float innerTime;
cudaEventElapsedTime(&innerTime, start, stop);
cudaMemcpy(h_odata,d_odata, mem_size, cudaMemcpyDeviceToHost);
res = comparef(gold, h_odata, size_x*size_y);
if (res != 1)
printf("*** %s kernel FAILED ***\n", kernelName);
// report effective bandwidths
float outerBandwidth =
2.*1000*mem_size/(1024*1024*1024)/(outerTime/NUM_REPS);
float innerBandwidth =
2.*1000*mem_size/(1024*1024*1024)/(innerTime/NUM_REPS);
printf("%s\t%5.2f GB/s\t\t%5.2f GB/s\n",
kernelName, outerBandwidth, innerBandwidth);
}
// cleanup
free(h_idata); free(h_odata); free(transposeGold);
cudaFree(d_idata); cudaFree(d_odata);
cudaEventDestroy(start); cudaEventDestroy(stop);
return 0;
}
Copyright
© 2009 NVIDIA Corporation. All rights reserved. |
CUDA Shared Memory Capacity
Introduction
CUDA shared memory is an extremely powerful feature for CUDA kernel implementation and optimization. Because CUDA shared memory is located on chip, its memory bandwidth is much larger than that of global memory, which is located off chip. Therefore, optimizing a CUDA kernel by caching memory accesses in shared memory can improve performance significantly, especially for memory-bound operations.
However, CUDA shared memory has a per-thread-block size limit, which is 48 KB by default. Sometimes we would like to use a little more shared memory than that in our implementations. In this blog post, I would like to discuss how to allocate static shared memory and dynamic shared memory, and how to request more than 48 KB of dynamic shared memory.
Stencil Kernel
We have implemented a stencil kernel to demonstrate the allocation of CUDA shared memory. A stencil is mathematically almost equivalent to a special case of convolution whose weights are all exactly 1, with valid padding.
For example, given a 1D array $\{1, 1, 1, 1, 1, 1, 1\}$ and a stencil kernel with a radius of $2$, the output 1D array is $\{1, 1, 5, 5, 5, 1, 1\}$ (only the positions with a full neighborhood are updated).
The stencil operation performs many redundant reads of the input tensor (each input element is read by up to $2 \times \text{radius} + 1$ neighboring output positions) and is therefore a memory-bound operation. If these reads are not cached and the program always reads from global memory, performance will be poor. Therefore, we take advantage of on-chip shared memory to cache the memory reads and improve performance.
Static Shared Memory
In this implementation, we allocate static shared memory, whose size must be known at compile time. The implementation supports an arbitrary “valid” array size, radius, and CUDA thread block size. Notice that the kernel has to pay special attention to the cases where the radius is larger than the CUDA thread block size and where the “valid” array size is not divisible by the thread block size, as these are easy to get wrong.
#include <cassert>
#include <iostream>
#include <vector>

#define CHECK_CUDA_ERROR(val) check((val), #val, __FILE__, __LINE__)
void check(cudaError_t err, char const* const func, char const* const file,
           int const line)
{
    if (err != cudaSuccess)
    {
        std::cerr << "CUDA Runtime Error at: " << file << ":" << line
                  << std::endl;
        std::cerr << cudaGetErrorString(err) << " " << func << std::endl;
        std::exit(EXIT_FAILURE);
    }
}

#define CHECK_LAST_CUDA_ERROR() checkLast(__FILE__, __LINE__)
void checkLast(char const* const file, int const line)
{
    cudaError_t const err{cudaGetLastError()};
    if (err != cudaSuccess)
    {
        std::cerr << "CUDA Runtime Error at: " << file << ":" << line
                  << std::endl;
        std::cerr << cudaGetErrorString(err) << std::endl;
        std::exit(EXIT_FAILURE);
    }
}

template <int BLOCK_SIZE = 1024, int RADIUS = 5>
__global__ void stencil_1d_kernel(int const* d_in, int* d_out,
                                  int valid_array_size)
{
    __shared__ int temp[BLOCK_SIZE + 2 * RADIUS];

    // This has to be int because we will use negative indices.
    int const gindex{static_cast<int>(threadIdx.x + blockIdx.x * blockDim.x)};
    int const lindex{static_cast<int>(threadIdx.x) + RADIUS};

    int const valid_block_size{
        min(BLOCK_SIZE,
            valid_array_size - static_cast<int>(blockIdx.x * blockDim.x))};

    // Read input elements into shared memory
    if (gindex < valid_array_size)
    {
        temp[lindex] = d_in[gindex];
        if (RADIUS <= valid_block_size)
        {
            if (threadIdx.x < RADIUS)
            {
                temp[lindex - RADIUS] = d_in[gindex - RADIUS];
                temp[lindex + valid_block_size] =
                    d_in[gindex + valid_block_size];
            }
        }
        else
        {
            for (int i{0}; i < RADIUS; i += valid_block_size)
            {
                // Some threads might have to do one more job than other
                // threads.
                if (lindex - RADIUS + i < RADIUS)
                {
                    temp[lindex - RADIUS + i] = d_in[gindex - RADIUS + i];
                    temp[lindex + valid_block_size + i] =
                        d_in[gindex + valid_block_size + i];
                }
            }
        }
    }

    // Synchronize (ensure all the data is available)
    __syncthreads();

    if (gindex >= valid_array_size)
    {
        return;
    }

    // Apply the stencil
    int result{0};
    for (int offset{-RADIUS}; offset <= RADIUS; offset++)
    {
        result += temp[lindex + offset];
    }

    // Store the result
    d_out[gindex] = result;
}

void stencil_1d_cpu(int const* h_in, int* h_out, int radius,
                    int valid_array_size)
{
    for (int i{0}; i < valid_array_size; ++i)
    {
        int result{0};
        for (int offset{-radius}; offset <= radius; offset++)
        {
            result += h_in[i + offset];
        }
        h_out[i] = result;
    }
}

int main(int argc, char** argv)
{
    constexpr int const valid_array_size{1024 * 100 + 1};
    constexpr int const block_size{1024};
    constexpr int const grid_size{(valid_array_size + block_size - 1) /
                                  block_size};
    constexpr int const radius{1025};

    int const array_size{valid_array_size + 2 * radius};
    std::vector<int> const h_in(array_size, 1);
    std::vector<int> h_out{h_in};
    std::vector<int> h_out_reference{h_in};

    stencil_1d_cpu(h_in.data() + radius, h_out_reference.data() + radius,
                   radius, valid_array_size);

    int* d_in;
    int* d_out;
    CHECK_CUDA_ERROR(cudaMalloc(&d_in, array_size * sizeof(int)));
    CHECK_CUDA_ERROR(cudaMalloc(&d_out, array_size * sizeof(int)));
    CHECK_CUDA_ERROR(cudaMemcpy(d_in, h_in.data(), array_size * sizeof(int),
                                cudaMemcpyHostToDevice));
    CHECK_CUDA_ERROR(cudaMemcpy(d_out, h_out.data(), array_size * sizeof(int),
                                cudaMemcpyHostToDevice));

    stencil_1d_kernel<block_size, radius><<<grid_size, block_size>>>(
        d_in + radius, d_out + radius, valid_array_size);
    CHECK_LAST_CUDA_ERROR();
    CHECK_CUDA_ERROR(cudaDeviceSynchronize());
    CHECK_CUDA_ERROR(cudaMemcpy(h_out.data(), d_out, array_size * sizeof(int),
                                cudaMemcpyDeviceToHost));

    for (int i{0}; i < h_out_reference.size(); ++i)
    {
        assert(h_out[i] == h_out_reference[i]);
    }

    CHECK_CUDA_ERROR(cudaFree(d_in));
    CHECK_CUDA_ERROR(cudaFree(d_out));
}
$ nvcc stencil_static_shared_memory.cu -o stencil_static_shared_memory
$ ./stencil_static_shared_memory
If we increase the radius from 1025 to a larger value such as 6000, we will get the following compilation error.
$ nvcc stencil_static_shared_memory.cu -o stencil_static_shared_memory
ptxas error : Entry function '_Z17stencil_1d_kernelILi1024ELi6000EEvPKiPii' uses too much shared data (0xcb80 bytes, 0xc000 max)
This is because the user can only allocate up to 48 KB of CUDA static shared memory. In our use case, BLOCK_SIZE + 2 * RADIUS = $1024 + 2 \times 6000 = 13024$ and the size of an int is $4$ bytes; therefore, the shared memory required is $13024 \times 4 / 1024 = 50.875$ KB, which is larger than the maximum static shared memory we could have.
Dynamic Shared Memory
To use more than 48 KB of shared memory, we have to use dynamic shared memory, and the available capacity is architecture specific. Specifically, the CUDA Runtime API function cudaFuncSetAttribute has to be called to raise the dynamic shared memory limit, in addition to passing the requested dynamic shared memory size as the third argument of the <<<...>>> launch configuration. We should always check the return value of cudaFuncSetAttribute, as the request can fail at runtime on architectures that do not support the requested size.
The platform GPU is an NVIDIA RTX 2080 Ti. According to the CUDA C Programming Guide, devices of compute capability 7.5 (Turing) allow a single thread block to dynamically allocate up to 64 KB of shared memory. So we can run the stencil program with a radius of 6000 on the RTX 2080 Ti.
This implementation with dynamic shared memory is almost the same as the one with static shared memory.
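Before the full listing, here is a minimal sketch of the parts that differ from the static version (the names match the listing below): the shared memory array becomes an unsized extern declaration, the per-kernel dynamic shared memory limit is raised with cudaFuncSetAttribute, and the requested size is passed as the third launch-configuration argument.
// Inside the kernel: dynamic shared memory is declared without a size.
extern __shared__ int temp[];

// On the host: opt in to more than 48 KB, then pass the size at launch.
int const shared_memory_bytes{(block_size + radius * 2) * sizeof(int)};
CHECK_CUDA_ERROR(cudaFuncSetAttribute(
    stencil_1d_kernel<block_size, radius>,
    cudaFuncAttributeMaxDynamicSharedMemorySize, shared_memory_bytes));
stencil_1d_kernel<block_size, radius>
    <<<grid_size, block_size, shared_memory_bytes>>>(
        d_in + radius, d_out + radius, valid_array_size);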
#include <cassert>
#include <iostream>
#include <vector>

#define CHECK_CUDA_ERROR(val) check((val), #val, __FILE__, __LINE__)
void check(cudaError_t err, char const* const func, char const* const file,
           int const line)
{
    if (err != cudaSuccess)
    {
        std::cerr << "CUDA Runtime Error at: " << file << ":" << line
                  << std::endl;
        std::cerr << cudaGetErrorString(err) << " " << func << std::endl;
        std::exit(EXIT_FAILURE);
    }
}

#define CHECK_LAST_CUDA_ERROR() checkLast(__FILE__, __LINE__)
void checkLast(char const* const file, int const line)
{
    cudaError_t const err{cudaGetLastError()};
    if (err != cudaSuccess)
    {
        std::cerr << "CUDA Runtime Error at: " << file << ":" << line
                  << std::endl;
        std::cerr << cudaGetErrorString(err) << std::endl;
        std::exit(EXIT_FAILURE);
    }
}

template <int BLOCK_SIZE = 1024, int RADIUS = 5>
__global__ void stencil_1d_kernel(int const* d_in, int* d_out,
                                  int valid_array_size)
{
    extern __shared__ int temp[];

    // This has to be int because we will use negative indices.
    int const gindex{static_cast<int>(threadIdx.x + blockIdx.x * blockDim.x)};
    int const lindex{static_cast<int>(threadIdx.x) + RADIUS};

    int const valid_block_size{
        min(BLOCK_SIZE,
            valid_array_size - static_cast<int>(blockIdx.x * blockDim.x))};

    // Read input elements into shared memory
    if (gindex < valid_array_size)
    {
        temp[lindex] = d_in[gindex];
        if (RADIUS <= valid_block_size)
        {
            if (threadIdx.x < RADIUS)
            {
                temp[lindex - RADIUS] = d_in[gindex - RADIUS];
                temp[lindex + valid_block_size] =
                    d_in[gindex + valid_block_size];
            }
        }
        else
        {
            for (int i{0}; i < RADIUS; i += valid_block_size)
            {
                // Some threads might have to do one more job than other
                // threads.
                if (lindex - RADIUS + i < RADIUS)
                {
                    temp[lindex - RADIUS + i] = d_in[gindex - RADIUS + i];
                    temp[lindex + valid_block_size + i] =
                        d_in[gindex + valid_block_size + i];
                }
            }
        }
    }

    // Synchronize (ensure all the data is available)
    __syncthreads();

    if (gindex >= valid_array_size)
    {
        return;
    }

    // Apply the stencil
    int result{0};
    for (int offset{-RADIUS}; offset <= RADIUS; offset++)
    {
        result += temp[lindex + offset];
    }

    // Store the result
    d_out[gindex] = result;
}

void stencil_1d_cpu(int const* h_in, int* h_out, int radius,
                    int valid_array_size)
{
    for (int i{0}; i < valid_array_size; ++i)
    {
        int result{0};
        for (int offset{-radius}; offset <= radius; offset++)
        {
            result += h_in[i + offset];
        }
        h_out[i] = result;
    }
}

int main(int argc, char** argv)
{
    constexpr int const valid_array_size{1024 * 100 + 1};
    constexpr int const block_size{1024};
    constexpr int const grid_size{(valid_array_size + block_size - 1) /
                                  block_size};
    constexpr int const radius{6000};

    int const array_size{valid_array_size + 2 * radius};
    std::vector<int> const h_in(array_size, 1);
    std::vector<int> h_out{h_in};
    std::vector<int> h_out_reference{h_in};

    stencil_1d_cpu(h_in.data() + radius, h_out_reference.data() + radius,
                   radius, valid_array_size);

    int* d_in;
    int* d_out;
    CHECK_CUDA_ERROR(cudaMalloc(&d_in, array_size * sizeof(int)));
    CHECK_CUDA_ERROR(cudaMalloc(&d_out, array_size * sizeof(int)));
    CHECK_CUDA_ERROR(cudaMemcpy(d_in, h_in.data(), array_size * sizeof(int),
                                cudaMemcpyHostToDevice));
    CHECK_CUDA_ERROR(cudaMemcpy(d_out, h_out.data(), array_size * sizeof(int),
                                cudaMemcpyHostToDevice));

    int const shared_memory_bytes{(block_size + radius * 2) * sizeof(int)};
    CHECK_CUDA_ERROR(cudaFuncSetAttribute(
        stencil_1d_kernel<block_size, radius>,
        cudaFuncAttributeMaxDynamicSharedMemorySize, shared_memory_bytes));
    stencil_1d_kernel<block_size, radius>
        <<<grid_size, block_size, shared_memory_bytes>>>(
            d_in + radius, d_out + radius, valid_array_size);
    CHECK_LAST_CUDA_ERROR();
    CHECK_CUDA_ERROR(cudaDeviceSynchronize());
    CHECK_CUDA_ERROR(cudaMemcpy(h_out.data(), d_out, array_size * sizeof(int),
                                cudaMemcpyDeviceToHost));

    for (int i{0}; i < h_out_reference.size(); ++i)
    {
        assert(h_out[i] == h_out_reference[i]);
    }

    CHECK_CUDA_ERROR(cudaFree(d_in));
    CHECK_CUDA_ERROR(cudaFree(d_out));
}
$ nvcc stencil_dynamic_shared_memory.cu -o stencil_dynamic_shared_memory --gpu-architecture=compute_75 --gpu-code=sm_75
$ ./stencil_dynamic_shared_memory
Conclusion
The reason why shared memory larger than 48 KB can only be allocated dynamically is that not every GPU architecture supports shared memory sizes beyond 48 KB. If static shared memory larger than 48 KB were allowed, a CUDA program would compile but then fail on some GPU architectures, which is not desirable. Therefore, shared memory larger than 48 KB has to be requested as dynamic shared memory at runtime. If the GPU architecture does not support the requested size, a CUDA runtime error is returned.
References
CUDA Shared Memory Capacity
https://leimao.github.io/blog/CUDA-Shared-Memory-Capacity/
Author: Lei Mao
Posted on 07-04-2022, updated on 12-26-2023
|
Kernel Tuner
Design documentation¶
This section provides detailed information about the design and internals
of the Kernel Tuner. This information is mostly relevant for developers.
The Kernel Tuner is designed to be extensible and to support
different search and execution strategies. The current architecture of
the Kernel Tuner is layered as follows:
At the top we have the kernel code and the Python script that tunes it,
which uses any of the main functions exposed in the user interface.
The strategies are responsible for iterating over and searching through
the search space. The default strategy is brute_force, which
iterates over all valid kernel configurations in the search space.
random_sample simply takes a random sample of the search space. More
advanced strategies are continuously being implemented and improved in
Kernel Tuner. The full list of supported strategies and how to use these
is explained in the API Documentation, see the options strategy and
strategy_options.
The runners are responsible for compiling and benchmarking the kernel
configurations selected by the strategy. The sequential runner is currently
the only supported runner, which does exactly what its name says. It compiles
and benchmarks configurations using a single sequential Python process.
Other runners are foreseen in future releases.
The runners are implemented on top of the core, which implements a
high-level Device Interface,
which wraps all the functionality for compiling and benchmarking
kernel configurations based on the low-level Device Function Interface.
Currently, we have
five different implementations of the device function interface, which
basically abstracts the different backends into a set of simple
functions such as ready_argument_list which allocates GPU memory and
moves data to the GPU, and functions like compile, benchmark, or
run_kernel. The functions in the core are basically the main
building blocks for implementing runners.
The observers are explained in Observers.
At the bottom, the backends are shown.
PyCUDA, CuPy, cuda-python, PyOpenCL and PyHIP are for tuning either CUDA, OpenCL, or HIP kernels.
The CompilerFunctions implementation can call any compiler; typically NVCC
or GCC is used. There is also limited support for tuning Fortran kernels.
This backend was created not just to be able to tune C
functions, but in particular to tune C functions that in turn launch GPU kernels.
The rest of this section contains the API documentation of the modules
discussed above. For the documentation of the user API see the
API Documentation.
Strategies¶
Strategies are explained in Optimization strategies.
Many of the strategies use helper functions that are collected in kernel_tuner.strategies.common.
kernel_tuner.strategies.common¶
Get the strategy-specific options or their defaults from user-supplied strategy_options.
Generate docstring for a ‘tune’ method of a strategy.
Generate documentation for the supported strategy options and their defaults.
Helper func to do the inverse of the ‘unscale’ function.
Prepare method specific arguments.
Prepare method specific options.
Helper func that for each param selects the closest actual value.
Helper func that snaps a scaled variable to the nearest config.
Runners¶
kernel_tuner.runners.sequential.SequentialRunner¶
SequentialRunner is used for tuning with a single process/thread.
Instantiate the SequentialRunner.
kernel_source (kernel_tuner.core.KernelSource) – The kernel source
kernel_options (kernel_tuner.interface.Options) – A dictionary with all options for the kernel.
device_options (kernel_tuner.interface.Options) – A dictionary with all options for the device
on which the kernel should be tuned.
iterations (int) – The number of iterations used for benchmarking
each kernel instance.
Iterate through the entire parameter space using a single Python process.
parameter_space (iterable) – The parameter space as an iterable.
tuning_options (kernel_tuner.interface.Options) – A dictionary with all options regarding the tuning
process.
A list of dictionaries for executed kernel configurations and their
execution times.
dict()
kernel_tuner.runners.sequential.SimulationRunner¶
SimulationRunner is used for tuning with a single process/thread.
Instantiate the SimulationRunner.
kernel_source (kernel_tuner.core.KernelSource) – The kernel source
kernel_options (kernel_tuner.interface.Options) – A dictionary with all options for the kernel.
device_options (kernel_tuner.interface.Options) – A dictionary with all options for the device
on which the kernel should be tuned.
iterations (int) – The number of iterations used for benchmarking
each kernel instance.
Iterate through the entire parameter space using a single Python process.
parameter_space (iterable) – The parameter space as an iterable.
tuning_options (kernel_tuner.interface.Options) – A dictionary with all options regarding the tuning
process.
A list of dictionaries for executed kernel configurations and their
execution times.
dict()
Device Interfaces¶
kernel_tuner.core.DeviceInterface¶
Class that offers a High-Level Device Interface to the rest of the Kernel Tuner
Instantiate the DeviceInterface, based on language in kernel source
kernel_source (kernel_tuner.core.KernelSource) – The kernel sources
device (int) – CUDA/OpenCL device to use, in case you have multiple
CUDA-capable GPUs or OpenCL devices you may use this to select one,
0 by default. Ignored if you are tuning host code by passing lang=”C”.
platform – OpenCL platform to use, in case you have multiple
OpenCL platforms you may use this to select one,
0 by default. Ignored if not using OpenCL.
lang (string) – Specifies the language used for GPU kernels.
Currently supported: “CUDA”, “OpenCL”, “HIP” or “C”
compiler_options (list of strings) – The compiler options to use when compiling kernels for this device.
iterations (int) – Number of iterations to be used when benchmarking using this device.
times (bool) – Return the execution time of all iterations.
benchmark the kernel instance
Benchmark continuously for at least ‘duration’ seconds
Benchmark one kernel execution at a time
runs the kernel once and checks the result against answer
compile the kernel for this specific instance
adds constant memory arguments to the most recently compiled module
adds shared memory arguments to the most recently compiled module
adds texture memory arguments to the most recently compiled module
create kernel instance from kernel source, parameters, problem size, grid divisors, and so on
Return dictionary with information about the environment
perform a device to host memory copy
Get a flat list of arguments based on the configuration given by params
ready argument list to be passed to the kernel, allocates gpu mem if necessary
Run a compiled kernel instance on a device
kernel_tuner.backends.pycuda.PyCudaFunctions¶
Class that groups the CUDA functions and maintains state about the device.
Instantiate PyCudaFunctions object used for interacting with the CUDA device.
Instantiating this object will inspect and store certain device properties at
runtime, which are used during compilation and/or execution of kernels by the
kernel tuner. It also maintains a reference to the most recently compiled
source module for copying data to constant memory before kernel launch.
device (int) – Number of CUDA device to use for this context
iterations (int) – Number of iterations used while benchmarking a kernel, 7 by default.
Call the CUDA compiler to compile the kernel, return the device function.
kernel_name (string) – The name of the kernel to be compiled, used to lookup the
function after compilation.
kernel_string (string) – The CUDA kernel code that contains the function kernel_name
A CUDA kernel that can be called directly.
pycuda.driver.Function
Adds constant memory arguments to the most recently compiled module.
cmem_args (dict( string: numpy.ndarray, ... )) – A dictionary containing the data to be passed to the
device constant memory. The format to be used is as follows: A
string key is used to name the constant memory symbol to which the
value needs to be copied. Similar to regular arguments, these need
to be numpy objects, such as numpy.ndarray or numpy.int32, and so on.
Add shared memory arguments to the kernel.
Adds texture memory arguments to the most recently compiled module.
texmem_args (dict) – A dictionary containing the data to be passed to the
device texture memory. See tune_kernel().
Returns True if the kernel has finished, False otherwise.
Perform a device to host memory copy.
dest (numpy.ndarray) – A numpy array in host memory to store the data
src (pycuda.driver.DeviceAllocation) – A GPU memory allocation unit
Perform a host to device memory copy.
dest (pycuda.driver.DeviceAllocation) – A GPU memory allocation unit
src (numpy.ndarray) – A numpy array in host memory to store the data
Set the memory in allocation to the value in value.
allocation (pycuda.driver.DeviceAllocation) – A GPU memory allocation unit
value (a single 8-bit unsigned int) – The value to set the memory to
size (int) – The size of the allocation unit in bytes
Ready argument list to be passed to the kernel, allocates gpu mem.
arguments (list(numpy objects)) – List of arguments to be passed to the kernel.
The order should match the argument list on the CUDA kernel.
Allowed values are numpy.ndarray, and/or numpy.int32, numpy.float32, and so on.
A list of arguments that can be passed to a CUDA kernel.
list( pycuda.driver.DeviceAllocation, numpy.int32, … )
Runs the CUDA kernel passed as ‘func’.
func (pycuda.driver.Function) – A PyCuda kernel compiled for this specific kernel configuration
gpu_args (list( pycuda.driver.DeviceAllocation, numpy.int32, ...)) – A list of arguments to the kernel, order should match the
order in the code. Allowed values are either variables in global memory
or single values passed by value.
threads (tuple(int, int, int)) – A tuple listing the number of threads in each dimension of
the thread block
grid (tuple(int, int)) – A tuple listing the number of thread blocks in each dimension
of the grid
Records the event that marks the start of a measurement.
Records the event that marks the end of a measurement.
Halts execution until device has finished its tasks.
kernel_tuner.backends.cupy.CupyFunctions¶
Class that groups the Cupy functions and maintains state about the device.
Instantiate CupyFunctions object used for interacting with the CUDA device.
Instantiating this object will inspect and store certain device properties at
runtime, which are used during compilation and/or execution of kernels by the
kernel tuner. It also maintains a reference to the most recently compiled
source module for copying data to constant memory before kernel launch.
device (int) – Number of CUDA device to use for this context
iterations (int) – Number of iterations used while benchmarking a kernel, 7 by default.
Call the CUDA compiler to compile the kernel, return the device function.
kernel_name (string) – The name of the kernel to be compiled, used to lookup the
function after compilation.
kernel_string (string) – The CUDA kernel code that contains the function kernel_name
A CUDA kernel that can be called directly.
cupy.RawKernel
Adds constant memory arguments to the most recently compiled module.
cmem_args (dict( string: numpy.ndarray, ... )) – A dictionary containing the data to be passed to the
device constant memory. The format to be used is as follows: A
string key is used to name the constant memory symbol to which the
value needs to be copied. Similar to regular arguments, these need
to be numpy objects, such as numpy.ndarray or numpy.int32, and so on.
Add shared memory arguments to the kernel.
Adds texture memory arguments to the most recently compiled module.
texmem_args (dict) – A dictionary containing the data to be passed to the
device texture memory. See tune_kernel().
Returns True if the kernel has finished, False otherwise.
Perform a device to host memory copy.
dest (numpy.ndarray) – A numpy array in host memory to store the data
src (cupy.ndarray) – A GPU memory allocation unit
Perform a host to device memory copy.
dest (cupy.ndarray) – A GPU memory allocation unit
src (numpy.ndarray) – A numpy array in host memory to store the data
Set the memory in allocation to the value in value.
allocation (cupy.ndarray) – A GPU memory allocation unit
value (a single 8-bit unsigned int) – The value to set the memory to
size (int) – The size of the allocation unit in bytes
Ready argument list to be passed to the kernel, allocates gpu mem.
arguments (list(numpy objects)) – List of arguments to be passed to the kernel.
The order should match the argument list on the CUDA kernel.
Allowed values are numpy.ndarray, and/or numpy.int32, numpy.float32, and so on.
A list of arguments that can be passed to a CUDA kernel.
list( cupy.ndarray, numpy.int32, … )
Runs the CUDA kernel passed as ‘func’.
func (cupy.RawKernel) – A cupy kernel compiled for this specific kernel configuration
gpu_args (list( cupy.ndarray, numpy.int32, ...)) – A list of arguments to the kernel, order should match the
order in the code. Allowed values are either variables in global memory
or single values passed by value.
threads (tuple(int, int, int)) – A tuple listing the number of threads in each dimension of
the thread block
grid (tuple(int, int)) – A tuple listing the number of thread blocks in each dimension
of the grid
Records the event that marks the start of a measurement.
Records the event that marks the end of a measurement.
Halts execution until device has finished its tasks.
kernel_tuner.backends.nvcuda.CudaFunctions¶
Class that groups the Cuda functions and maintains state about the device.
Instantiate CudaFunctions object used for interacting with the CUDA device.
Instantiating this object will inspect and store certain device properties at
runtime, which are used during compilation and/or execution of kernels by the
kernel tuner. It also maintains a reference to the most recently compiled
source module for copying data to constant memory before kernel launch.
device (int) – Number of CUDA device to use for this context
iterations (int) – Number of iterations used while benchmarking a kernel, 7 by default.
compiler_options – Compiler options for the CUDA runtime compiler
observers – List of Observer type objects
Call the CUDA compiler to compile the kernel, return the device function.
kernel_name (string) – The name of the kernel to be compiled, used to lookup the
function after compilation.
kernel_string (string) – The CUDA kernel code that contains the function kernel_name
A kernel that can be launched by the CUDA runtime
Adds constant memory arguments to the most recently compiled module.
cmem_args (dict( string: numpy.ndarray, ... )) – A dictionary containing the data to be passed to the
device constant memory. The format to be used is as follows: A
string key is used to name the constant memory symbol to which the
value needs to be copied. Similar to regular arguments, these need
to be numpy objects, such as numpy.ndarray or numpy.int32, and so on.
Add shared memory arguments to the kernel.
Adds texture memory arguments to the most recently compiled module.
texmem_args (dict) – A dictionary containing the data to be passed to the
device texture memory. See tune_kernel().
Returns True if the kernel has finished, False otherwise.
Perform a device to host memory copy.
dest (numpy.ndarray) – A numpy array in host memory to store the data
src (cuda.CUdeviceptr) – A GPU memory allocation unit
Perform a host to device memory copy.
dest (cuda.CUdeviceptr) – A GPU memory allocation unit
src (numpy.ndarray) – A numpy array in host memory to store the data
Set the memory in allocation to the value in value.
allocation (cupy.ndarray) – A GPU memory allocation unit
value (a single 8-bit unsigned int) – The value to set the memory to
size (int) – The size of the allocation unit in bytes
Ready argument list to be passed to the kernel, allocates gpu mem.
arguments (list(numpy objects)) – List of arguments to be passed to the kernel.
The order should match the argument list on the CUDA kernel.
Allowed values are numpy.ndarray, and/or numpy.int32, numpy.float32, and so on.
A list of arguments that can be passed to a CUDA kernel.
list( pycuda.driver.DeviceAllocation, numpy.int32, … )
Runs the CUDA kernel passed as ‘func’.
func (cuda.CUfunction) – A CUDA kernel compiled for this specific kernel configuration
gpu_args (list( cupy.ndarray, numpy.int32, ...)) – A list of arguments to the kernel, order should match the
order in the code. Allowed values are either variables in global memory
or single values passed by value.
threads (tuple(int, int, int)) – A tuple listing the number of threads in each dimension of
the thread block
grid (tuple(int, int)) – A tuple listing the number of thread blocks in each dimension
of the grid
Records the event that marks the start of a measurement.
Records the event that marks the end of a measurement.
Halts execution until device has finished its tasks.
kernel_tuner.backends.opencl.OpenCLFunctions¶
Class that groups the OpenCL functions and maintains some state about the device.
Creates OpenCL device context and reads device properties.
device (int) – The ID of the OpenCL device to use for benchmarking
iterations (int) – The number of iterations to run the kernel during benchmarking, 7 by default.
Call the OpenCL compiler to compile the kernel, return the device function.
kernel_name (string) – The name of the kernel to be compiled, used to lookup the
function after compilation.
kernel_string (string) – The OpenCL kernel code that contains the function kernel_name
An OpenCL kernel that can be called directly.
pyopencl.Kernel
This method must implement the allocation and copy of constant memory to the GPU.
This method must implement the dynamic allocation of shared memory on the GPU.
This method must implement the allocation and copy of texture memory to the GPU.
Returns True if the kernel has finished, False otherwise.
Perform a device to host memory copy.
dest (numpy.ndarray) – A numpy array in host memory to store the data
src (pyopencl.Buffer) – An OpenCL Buffer to copy data from
Perform a host to device memory copy.
dest (pyopencl.Buffer) – An OpenCL Buffer to copy data from
src (numpy.ndarray) – A numpy array in host memory to store the data
Set the memory in allocation to the value in value.
allocation (pyopencl.Buffer) – An OpenCL Buffer to fill
value (a single 32-bit int) – The value to set the memory to
size (int) – The size of the allocation unit in bytes
Ready argument list to be passed to the kernel, allocates gpu mem.
arguments (list(numpy objects)) – List of arguments to be passed to the kernel.
The order should match the argument list on the OpenCL kernel.
Allowed values are numpy.ndarray, and/or numpy.int32, numpy.float32, and so on.
A list of arguments that can be passed to an OpenCL kernel.
list( pyopencl.Buffer, numpy.int32, … )
Runs the OpenCL kernel passed as ‘func’.
func (pyopencl.Kernel) – An OpenCL Kernel
gpu_args (list( pyopencl.Buffer, numpy.int32, ...)) – A list of arguments to the kernel, order should match the
order in the code. Allowed values are either variables in global memory
or single values passed by value.
threads (tuple(int, int, int)) – A tuple listing the number of work items in each dimension of
the work group.
grid (tuple(int, int)) – A tuple listing the number of work groups in each dimension
of the NDRange.
Records the event that marks the start of a measurement.
In OpenCL the event is created when the kernel is launched
Records the event that marks the end of a measurement.
In OpenCL the event is created when the kernel is launched
Halts execution until device has finished its tasks.
kernel_tuner.backends.compiler.CompilerFunctions¶
Class that groups the code for running and compiling C functions
instantiate CFunctions object used for interacting with C code
iterations (int) – Number of iterations used while benchmarking a kernel, 7 by default.
unload the previously loaded shared library
call the C compiler to compile the kernel, return the function
kernel_instance (kernel_tuner.core.KernelInstance) – An object representing the specific instance of the tunable kernel
in the parameter space.
A ctypes function that can be called directly.
ctypes._FuncPtr
Returns True if the kernel has finished, False otherwise
C backend does not support asynchronous launches
a simple memcpy copying from an Argument to a numpy array
dest (np.ndarray or cupy.ndarray) – A numpy or cupy array to store the data
src (Argument) – An Argument for some memory allocation
a simple memcpy copying from a numpy array to an Argument
dest (Argument) – An Argument for some memory allocation
src (np.ndarray or cupy.ndarray) – A numpy or cupy array containing the source data
set the memory in allocation to the value in value
allocation (Argument) – An Argument for some memory allocation unit
value (a single 8-bit unsigned int) – The value to set the memory to
size (int) – The size of the allocation unit in bytes
ready argument list to be passed to the C function
arguments (list(numpy or cupy objects)) – List of arguments to be passed to the C function.
The order should match the argument list on the C function.
Allowed values are np.ndarray, cupy.ndarray, and/or np.int32, np.float32, and so on.
A list of arguments that can be passed to the C function.
list(Argument)
runs the kernel once, returns whatever the kernel returns
func (ctypes._FuncPtr) – A C function compiled for this specific configuration
c_args (list(Argument)) – A list of arguments to the function, order should match the
order in the code. The list should be prepared using
ready_argument_list().
threads (any) – Ignored, but left as argument for now to have the same
interface as CudaFunctions and OpenCLFunctions.
grid (any) – Ignored, but left as argument for now to have the same
interface as CudaFunctions and OpenCLFunctions.
A robust average of values returned by the C function.
float
Records the event that marks the start of a measurement
C backend does not use events
Records the event that marks the end of a measurement
C backend does not use events
Halts execution until device has finished its tasks
C backend does not support asynchronous launches
kernel_tuner.backends.hip.HipFunctions¶
Class that groups the HIP functions on maintains state about the device.
Instantiate HipFunctions object used for interacting with the HIP device.
Instantiating this object will inspect and store certain device properties at
runtime, which are used during compilation and/or execution of kernels by the
kernel tuner. It also maintains a reference to the most recently compiled
source module for copying data to constant memory before kernel launch.
device (int) – Number of HIP device to use for this context
iterations (int) – Number of iterations used while benchmarking a kernel, 7 by default.
Call the HIP compiler to compile the kernel, return the function.
kernel_instance (kernel_tuner.core.KernelInstance) – An object representing the specific instance of the tunable kernel
in the parameter space.
A ctypes function that can be called directly.
ctypes._FuncPtr
Adds constant memory arguments to the most recently compiled module.
cmem_args (dict( string: numpy.ndarray, ... )) – A dictionary containing the data to be passed to the
device constant memory. The format to be used is as follows: A
string key is used to name the constant memory symbol to which the
value needs to be copied. Similar to regular arguments, these need
to be numpy objects, such as numpy.ndarray or numpy.int32, and so on.
Add shared memory arguments to the kernel.
Copy texture memory arguments. Not yet implemented.
Returns True if the kernel has finished, False otherwise.
Perform a device to host memory copy.
dest (numpy.ndarray) – A numpy array in host memory to store the data
src (ctypes ptr) – A GPU memory allocation unit
Perform a host to device memory copy.
dest (ctypes ptr) – A GPU memory allocation unit
src (numpy.ndarray) – A numpy array in host memory to store the data
Set the memory in allocation to the value in value.
allocation (ctypes ptr) – A GPU memory allocation unit
value (a single 8-bit unsigned int) – The value to set the memory to
size (int) – The size of the allocation unit in bytes
Ready argument list to be passed to the HIP function.
arguments (list(numpy objects)) – List of arguments to be passed to the HIP function.
The order should match the argument list on the HIP function.
Allowed values are np.ndarray, and/or np.int32, np.float32, and so on.
Ctypes structure of arguments to be passed to the HIP function.
ctypes structure
Runs the HIP kernel passed as ‘func’.
func (ctypes pointer) – A HIP kernel compiled for this specific kernel configuration
gpu_args (ctypes structure) – A ctypes structure of arguments to the kernel, order should match the
order in the code. Allowed values are either variables in global memory
or single values passed by value.
threads (tuple(int, int, int)) – A tuple listing the number of threads in each dimension of
the thread block
grid (tuple(int, int, int)) – A tuple listing the number of thread blocks in each dimension
of the grid
Records the event that marks the start of a measurement.
Records the event that marks the end of a measurement.
Halts execution until device has finished its tasks.
Util Functions¶
kernel_tuner.util¶
Module for kernel tuner utility functions.
Class we use for dumping Numpy objects to JSON.
Implement this method in a subclass such that it returns
a serializable object for o, or calls the base implementation
(to raise a TypeError).
For example, to support arbitrary iterators, you could
implement default like this:
def default(self, o):
try:
iterable = iter(o)
except TypeError:
pass
else:
return list(iterable)
# Let the base class default method raise the TypeError
return JSONEncoder.default(self, o)
Exception used to raise when compiling or launching a kernel fails for a reason that can be expected.
Exception thrown when a stop criterion has been reached.
Raise an exception if kernel arguments do not match host arguments.
Check if the numpy.dtype matches the type used in the code.
Check whether a specific configuration meets the search space restrictions.
Checks if max_fevals is reached or time limit is exceeded.
Check on maximum thread block dimensions.
Raise an exception if a tune parameter has a forbidden name.
Parses restrictions from a list of strings into a list of strings, Functions, or Constraints (if try_to_constraint) and parameters used, or a single Function if monolithic is true.
Combines restrictions and a check on the max thread block dimension to check config validity.
Convert the python-constraint to a function for backwards compatibility.
if cache file was not properly closed, pretend it was properly closed
Checking the status of CUDA calls using the NVIDIA cuda-python backend.
Delete a temporary file, don’t complain if no longer exists.
Attempt to detect language from the kernel_string.
Dumps a string in the cache, this omits the several checks of store_cache() to speed up the process - with great power comes great responsibility!
Returns the best configuration from a list of results according to some objective.
Return a compact string representation of a measurement.
Compute grid dims based on problem sizes and listed grid divisors.
Combine the parameters into a string, mostly used for debug output; use of a dict is advised.
Retrieve the kernel source and return as a string.
This function processes the passed kernel_source argument, which could be
a function, a string with a filename, or just a string with code already.
If kernel_source is a function, the function is called with instance
parameters in ‘params’ as the only argument.
If kernel_source looks like filename, the file is read in, but if
the file does not exist, it is assumed that the string is not a filename
after all.
kernel_source (string or callable) – One of the sources for the kernel, could be a
function that generates the kernel code, a string containing a filename
that points to the kernel source, or just a string that contains the code.
params – Dictionary containing the tunable parameters for this specific
kernel instance, only needed when kernel_source is a generator.
A string containing the kernel code.
string
Compute current problem size.
Return a dict with kernel instance specific size.
Return a string in the form of temp_X, where X is a large integer.
Thread block size from tuning params, currently using convention.
Sum all timings and put their totals in the env.
Attempt to detect whether source code or a filename was passed.
Normalize a user-specified verify function.
The user-specified function has two required positional arguments (answer, result_host),
and an optional keyword (or keyword-only) argument atol. We normalize it to always accept
an atol keyword argument.
Undefined behaviour if the passed function does not match the required signatures.
Parses restrictions from a list of strings into compilable functions and constraints, or a single compilable function (if monolithic is True). Returns a list of tuples of (strings or constraints) and parameters.
Prepare kernel string for compilation.
Prepends the kernel with a series of C preprocessor defines specific
to this kernel instance:
the thread block dimensions
the grid dimensions
tunable parameters
kernel_name (string) – Name of the kernel.
kernel_string (string) – One of the source files of the kernel as a string containing code.
params (dict) – A dictionary containing the tunable parameters specific to this instance.
grid (tuple(x,y,z)) – A tuple with the grid dimensions for this specific instance.
threads (tuple(x,y,z)) – A tuple with the thread block dimensions for this specific instance.
block_size_names (tuple(string)) – A tuple with the names of the thread block dimensions used
in the code. By default this is [“block_size_x”, …], but the user
may supply different names if they prefer.
defines (dict or None) – A dict that describes the variables that should be defined as
preprocessor macros. Each keys should be the variable names and each value
is either a string or a function that returns a string. If None, each
tunable parameter is defined as preprocessor macro instead.
A string containing the source code made specific to this kernel instance.
string
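For illustration, a tunable CUDA kernel typically relies on these injected defines instead of hard-coding values. The sketch below is hypothetical and not taken from the Kernel Tuner documentation; block_size_x is one of the default block size names mentioned above, and vector_size stands for an assumed tunable parameter.
__global__ void vector_add(const float* A, const float* B, float* C, int n)
{
    // block_size_x and vector_size are intentionally not defined here:
    // Kernel Tuner is expected to prepend them as preprocessor defines
    // for each configuration it benchmarks.
    int i = (blockIdx.x * block_size_x + threadIdx.x) * vector_size;
    for (int v = 0; v < vector_size; v++)
    {
        if (i + v < n)
        {
            C[i + v] = A[i + v] + B[i + v];
        }
    }
}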
Print the configuration string with tunable parameters and benchmark results.
Print the configuration string with tunable parameters and benchmark results.
Cache file for storing tuned configurations.
the cache file is stored using JSON and uses the following format:
{ device_name: "name of device"
kernel_name: "name of kernel"
problem_size: (int, int, int)
tune_params_keys: list
tune_params:
cache: {
"x1,x2,..xN": {"block_size_x": x1, ..., time=0.234342},
"y1,y2,..yN": {"block_size_x": y1, ..., time=0.134233},
}
}
The last two closing brackets are not required, and everything
should work as expected if they are missing. This is to allow continuing
from an earlier (abruptly ended) tuning session.
Process user-defined metrics for derived benchmark results.
Metrics must be a dictionary to support composable metrics. The dictionary keys describe
the name given to this user-defined metric and will be used as the key in the results dictionaries
return by Kernel Tuner. The values describe how to calculate the user-defined metric, using either a
string expression in which the tunable parameters and benchmark results can be used as variables, or
as a function that accepts a dictionary as argument.
Example:
metrics = dict()
metrics["x"] = "10000 / time"
metrics["x2"] = "x*x"
Note that the values in the metric dictionary can also be functions that accept params as argument.
Example:
metrics = dict()
metrics["GFLOP/s"] = lambda p : 10000 / p["time"]
params (dict) – A dictionary with tunable parameters and benchmark results.
metrics (dict) – A dictionary with user-defined metrics that can be used to create derived benchmark results.
An updated params dictionary with the derived metrics inserted along with the benchmark results.
dict
Read the cachefile into a dictionary, if open_cache=True prepare the cachefile for appending.
Return the contents of the file named filename or None if file not found.
Replace occurrences of the tuning params with their current value.
Compute problem size, thread block and grid dimensions for this kernel.
Stores a new entry (key, params) to the cachefile.
Dump the contents of string to a file called filename.
© Copyright 2016-2024, Ben van Werkhoven, Alessio Sclocco, Stijn Heldens, Floris-Jan Willemsen, Willem-Jan Palenstijn, Bram Veenboer and Richard Schoonhoven.
|
Registers, Global, and Local Memory
Last updated on 2024-11-19
Estimated time: 45 minutes
Now that we know how to write a CUDA kernel to run code on the GPU,
and how to use the Python interface provided by CuPy to execute it, it
is time to look at the different memory spaces in the CUDA programming
model.
Registers
Registers are fast on-chip memories that are used to store operands
for the operations executed by the computing cores.
Did we encounter registers in the vector_add code used
in the previous episode? Yes we did! The variable item is,
in fact, stored in a register for at least part, if not all, of a
thread’s execution. In general all scalar variables defined in CUDA code
are stored in registers.
Registers are local to a thread, and each thread has exclusive access
to its own registers: values in registers cannot be accessed by other
threads, even from the same block, and are not available for the host.
Registers are also not permanent, therefore data stored in registers is
only available during the execution of a thread.
Challenge: how many registers are we using?
Can you guess how many registers we are using in the following
vector_add code?
C
extern "C"
__global__ void vector_add(const float * A, const float * B, float * C, const int size)
{
int item = (blockIdx.x * blockDim.x) + threadIdx.x;
if ( item < size )
{
C[item] = A[item] + B[item];
}
}
Show me the solution
In general, it is not possible to exactly know how many registers the
compiler will use without examining the output generated by the compiler
itself. However, we can roughly estimate the amount of necessary
registers based on the variables used. We most probably need one
register to store the variable item, two registers to store
the content of A[item] and B[item], and one
additional register to store the sum A[item] + B[item]. So
the number of registers that vector_add probably uses is
4.
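If you want to check the compiler's actual choice rather than estimating it, nvcc can report per-kernel resource usage; treat the output described below as illustrative, since the exact numbers depend on the GPU architecture and compiler version.
$ nvcc -Xptxas -v vector_add.cu -o vector_add
# ptxas then prints one resource summary per kernel, including a line of the
# form: ptxas info : Used <N> registers, ...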
If we want to make register use more explicit in the
vector_add code, we can try to rewrite it in a slightly
different, but equivalent, way.
C
extern "C"
__global__ void vector_add(const float * A, const float * B, float * C, const int size)
{
int item = (blockIdx.x * blockDim.x) + threadIdx.x;
float temp_a, temp_b, temp_c;
if ( item < size )
{
temp_a = A[item];
temp_b = B[item];
temp_c = temp_a + temp_b;
C[item] = temp_c;
}
}
In this new version of vector_add we explicitly declare
three float variables to store the values loaded from
memory and the sum of our input items, making the estimation of used
registers more obvious.
This is totally unnecessary in the case of our example, because the
compiler will determine on its own the right number of registers to
allocate per thread, and what to store in them. However, explicit
register usage can be important for reusing items already loaded from
memory.
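As a small sketch (not part of the lesson's exercises), the following kernel loads a value from global memory once into a register and reuses it, instead of reading A[item] several times:
C
extern "C"
__global__ void scale_and_square(const float * A, float * B, const int size)
{
    int item = (blockIdx.x * blockDim.x) + threadIdx.x;
    if ( item < size )
    {
        float a = A[item];          // one global memory read into a register
        B[item] = 2.0f * a + a * a; // the register value is reused twice
    }
}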
Callout
Registers are the fastest memory on the GPU, so using them to
increase data reuse is an important performance optimization. We will
look at some examples of manually using registers to improve performance
in future episodes.
Small CUDA arrays, whose size is known at compile time, will also be
allocated in registers by the compiler. We can rewrite the previous
version of vector_add to work with an array of
registers.
C
extern "C"
__global__ void vector_add(const float * A, const float * B, float * C, const int size)
{
int item = (blockIdx.x * blockDim.x) + threadIdx.x;
float temp[3];
if ( item < size )
{
temp[0] = A[item];
temp[1] = B[item];
temp[2] = temp[0] + temp[1];
C[item] = temp[2];
}
}
Once again, this is not something that we would normally do, and it
is provided only as an example of how to work with arrays of
registers.
Global Memory
Global memory can be considered the main memory space of the GPU in
CUDA. It is allocated, and managed, by the host, and it is accessible to
both the host and the GPU, and for this reason the global memory space
can be used to exchange data between the two. It is the largest memory
space available, and therefore it can contain much more data than
registers, but it is also slower to access. This memory space does not
require any special memory space identifier.
Challenge: identify when global memory is used
Observe the code of the following vector_add and
identify where global memory is used.
C
extern "C"
__global__ void vector_add(const float * A, const float * B, float * C, const int size)
{
int item = (blockIdx.x * blockDim.x) + threadIdx.x;
if ( item < size )
{
C[item] = A[item] + B[item];
}
}
Show me the solution
The vectors A, B, and C are
stored in global memory.
Memory allocated on the host, and passed as a parameter to a kernel,
is by default allocated in global memory.
Global memory is accessible by all threads, from all thread blocks.
This means that a thread can read and write any value in global
memory.
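In this lesson the allocations and transfers are handled from Python with CuPy, but as a rough sketch of what happens underneath (assuming a host array a_host with size elements), managing global memory directly from CUDA C looks like this:
C
float * a_gpu;
cudaMalloc(&a_gpu, size * sizeof(float));                                // allocate global memory on the device
cudaMemcpy(a_gpu, a_host, size * sizeof(float), cudaMemcpyHostToDevice); // copy input data to the GPU
// ... launch a kernel that reads and writes a_gpu ...
cudaMemcpy(a_host, a_gpu, size * sizeof(float), cudaMemcpyDeviceToHost); // copy the results back to the host
cudaFree(a_gpu);                                                         // release the allocation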
Callout
While global memory is visible to all threads, remember that global
memory is not coherent, and changes made by one thread block may not be
available to other thread blocks during the kernel execution. However,
all memory operations are finalized when the kernel terminates.
Local Memory
Memory can also be statically allocated from within a kernel, and
according to the CUDA programming model such memory will not be global
but local memory. Local memory is only visible, and therefore
accessible, by the thread allocating it. So all threads executing a
kernel will have their own privately allocated local memory.
Challenge: use local memory
Modify the following vector_add so that intermediate
data products are stored in local memory, and only the final result is
saved into global memory.
C
extern "C"
__global__ void vector_add(const float * A, const float * B, float * C, const int size)
{
int item = (blockIdx.x * blockDim.x) + threadIdx.x;
if ( item < size )
{
C[item] = A[item] + B[item];
}
}
Hint: have a look at the example using an array of registers, but
find a way to use a variable and not a constant for the size.
Show me the solution
We need to pass the size of the local array as a new parameter to the
kernel, because if we just specified 3 in the code, the
compiler would allocate registers and not local memory.
C
extern "C"
__global__ void vector_add(const float * A, const float * B, float * C, const int size, const int local_memory_size)
{
int item = (blockIdx.x * blockDim.x) + threadIdx.x;
float local_memory[local_memory_size];
if ( item < size )
{
local_memory[0] = A[item];
local_memory[1] = B[item];
local_memory[2] = local_memory[0] + local_memory[1];
C[item] = local_memory[2];
}
}
The host code could be modified by adding one line and changing the way
the kernel is called.
PYTHON
local_memory_size = 3
vector_add_gpu((2, 1, 1), (size // 2, 1, 1), (a_gpu, b_gpu, c_gpu, size, local_memory_size))
Local memory is not a particularly fast memory, and in fact it
has throughput and latency similar to global memory, but it is much
larger than registers. As an example, local memory is automatically used
by the CUDA compiler to store spilled registers, i.e. to temporarily
store variables that cannot be kept in registers anymore because there
is not enough space in the register file, but that will be used again in
the future and so cannot be erased.
Materials licensed under CC-BY 4.0 by the authors
|
Launching the GPU kernel
CUDA kernels
Now that we have learned how to interact with the CUDA API, we can ask the GPU to execute code.
A GPU is an accelerator, which means that it was designed to be used alongside a conventional CPU.
Any code that uses a GPU must have two parts: one that is executed on the CPU and one that is ported to the GPU.
The CPU still controls the workflow, with the GPU helping out with the more compute-intensive parts of the workflow.
This is why the CPU is normally referred to as the host, and the GPU as the device.
With this hardware structure, the API should have a means to switch from CPU to GPU execution.
This is done using special functions, called kernels.
To separate these functions from usual functions, they are marked by function specifier __global__:
__global__ void gpu_kernel(..)
What __global__ essentially means is that the function should be called from the host code, but will be executed on the device.
Since this function will be executed in many threads, the return value must be void: otherwise it would not be clear which of the threads should do the return.
The rest of the function definition is the same as with any C/C++ function: its name has the same limitations as a normal C function, it can have any number of arguments of any type, and it can even be templated.
Since the call of the kernel function happens in the host code but it is executed on the device, this place in the code marks a transition from single-thread execution to many-thread execution.
One can think of it as a loop in which every iteration is executed simultaneously.
As in a loop, one needs an index to differentiate the threads.
Here it gets a little bit complicated and we need to step back and re-iterate on how the GPUs are organized on a hardware level.
Figure: a simple example of the division of threads (green squares) into blocks (cyan rectangles). The equally-sized blocks contain four threads each. The thread index starts from zero in each block, so the “global” thread index has to be computed from the thread index, block index and block size, as illustrated for thread #3 in block #2 (blue numbers).
The thread blocks are mapped to SMs for execution, with all threads within a block executing on the same SM.
The number of threads within one block does not have to be equal to the number of execution units within the multiprocessor.
In fact, GPUs can switch between software threads very efficiently, putting threads that are currently waiting for data on hold and releasing the resources for threads that are ready for computations.
For efficient GPU utilization, the number of threads per block has to be a couple of factors higher than the number of computing units on the multiprocessor.
The same is true for the number of thread blocks, which can and should be higher than the number of available multiprocessors in order to use the GPU computational resources efficiently.
The GPU contains several Streaming Multiprocessors (SMs), each with many compute units (see the figure above).
Every compute unit can execute commands.
So the entire GPU is first divided into streaming multiprocessors, and each multiprocessor contains many execution units.
To reflect this hierarchy on a software level, threads are grouped in identically sized blocks.
Each block is assigned into a streaming module for execution.
This collection of the thread blocks is usually called “grid”, which also can be multi-dimensional.
Although it may seem a bit complicated at the beginning, the grouping of threads opens extra opportunities for synchronization and data exchange.
Since threads in a block are executed on the same SM, they can share data and communicate quickly.
This can be leveraged when designing and optimizing the code for GPU execution, and we will touch on this topic later.
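As a small preview of that topic, threads in the same block can exchange data through shared memory, synchronizing with __syncthreads() so that all writes are visible before the data is read. The kernel below is only an illustration and assumes blocks of exactly 256 threads and arrays at least as long as the total number of threads:
__global__ void reverse_within_block(const float *in, float *out)
{
    __shared__ float buffer[256];                  // visible to all threads of the block
    int i = threadIdx.x + blockIdx.x*blockDim.x;
    buffer[threadIdx.x] = in[i];                   // each thread writes one element
    __syncthreads();                               // wait until the whole block has written
    out[i] = buffer[blockDim.x - 1 - threadIdx.x]; // read an element written by another thread
}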
Given that the threads on a GPU are organized in a hierarchical manner, the global index of a thread should be computed from its in-block index, the index of execution block and the execution block size.
To get the global thread index, one can start the kernel function with:
__global__ void gpu_kernel(..)
{
int i = threadIdx.x + blockIdx.x*blockDim.x;
}
Here, threadIdx.x, blockIdx.x and blockDim.x are internal variables that are always available inside the device function.
They are, respectively, index of thread in a block, index of the block and the size of the block.
Here, we use one-dimensional arrangement of blocks and threads (hence, the .x).
More on multi-dimensional grids and CUDA built-in simple types later; for now we assume that the rest of the components are equal to 1.
Since the index i is unique for each thread in an entire grid, it is usually called “global” index.
The global index can then be used to identify the GPU thread and assign data elements to it.
For example, if we are applying the same function on different data elements in an array, we can use the global index to identify the element of this array for a particular thread.
In a CPU code, this would normally be done in a loop over all consecutive values in an array.
In a GPU code, we assign a thread to each element of the array.
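To make this concrete, here is a minimal sketch (not part of the original text; the array name data, its length n, and the squaring operation are assumptions) contrasting the CPU loop with the equivalent GPU kernel:
// CPU version: a single thread loops over all elements.
void square_cpu(float* data, int n)
{
    for (int i = 0; i < n; i++)
    {
        data[i] = data[i] * data[i];
    }
}
// GPU version: each thread handles the one element selected by its global index.
__global__ void square_gpu(float* data, int n)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    if (i < n) // guard the threads whose global index falls outside the array
    {
        data[i] = data[i] * data[i];
    }
}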
Now that the kernel is defined, we can call it from the host code.
Since the kernel will be executed in a grid of threads, the kernel launch has to be supplied with the configuration of the grid.
In CUDA this is done by adding the kernel configuration, <<<numBlocks, threadsPerBlock>>>, to the function call:
gpu_kernel<<<numBlocks, threadsPerBlock>>>(..)
Here, numBlocks is the total number of thread blocks in the grid, threadsPerBlock is the number of threads in a single block.
Note that these values can be integers, or two-dimensional or three-dimensional vectors, if this is more suitable for the kernel.
More on that later.
In case of one-dimensional grid, the kernel configuration can be specified by two integer values.
The threadsPerBlock value can be chosen fairly freely.
It should be larger than the number of CUDA cores in the SM to fully occupy the device, but not exceed the limit of 1024 threads per block (see the technical specifications).
Values of 256 or 512 are frequently used.
The total number of threads that will be created is the product of numBlocks and threadsPerBlock.
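For example, to give every element of an array its own thread (a small sketch; the value numElements and the kernel name gpu_kernel are assumptions), the number of blocks is usually computed with an integer ceiling division:
int threadsPerBlock = 256;
int numBlocks = (numElements + threadsPerBlock - 1) / threadsPerBlock;
gpu_kernel<<<numBlocks, threadsPerBlock>>>(..);
Because numBlocks is rounded up, the total number of threads can slightly exceed numElements, which is why kernels normally contain a bounds check on the global index.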
Kernels are asynchronous
In CUDA, the execution of the kernel is asynchronous.
This means that the execution will return to the CPU immediately after the kernel is launched.
Later we will see how this can be used to our advantage, since it allows us to keep CPU busy while GPU is executing the kernel.
But for the following example we will need to explicitly ask the CPU to wait until the GPU is done with the kernel execution.
This can be done with the following function from CUDA API:
cudaDeviceSynchronize(..)
__host__ __device__ cudaError_t cudaDeviceSynchronize()
We are already familiar with __host__ and __device__ specifiers: this function can be used in both host and device code.
As usual, the return type is cudaError_t, which may indicate an execution error; the function does not take any arguments.
This is all we are going to need for our next example, in which we are going to ask a thread to print its global index.
Exercise
Printing messages from the CUDA kernel
#include <stdio.h>
int main()
{
printf("I am a CPU running one thread.\n");
return 0;
}
#include <stdio.h>
__global__ void hello_kernel()
{
printf("Hello from GPU thread %d\n", threadIdx.x);
}
int main()
{
hello_kernel<<<1, 32>>>();
cudaDeviceSynchronize();
return 0;
}
#include <stdio.h>
__global__ void hello_kernel()
{
int threadInBlock = threadIdx.x;
int blockIndex = blockIdx.x;
int blockSize = blockDim.x;
int threadIndex = blockIndex * blockSize + threadInBlock;
printf("Hello from GPU thread %d = %d * %d + %d\n", threadIndex, blockIndex, blockSize, threadInBlock);
}
int main()
{
int numThreadsInBlock = 32;
int numBlocks = 3;
hello_kernel<<<numBlocks, numThreadsInBlock>>>();
cudaDeviceSynchronize();
return 0;
}
#include <stdio.h>
__global__ void hello_kernel()
{
printf("Hello from GPU thread (%d, %d) = (%d, %d) * (%d, %d) + (%d, %d)\n",
threadIdx.x, threadIdx.y,
blockIdx.x, blockIdx.y,
blockDim.x, blockDim.y,
blockIdx.x * blockDim.x + threadIdx.x, blockIdx.y * blockDim.y + threadIdx.y);
}
int main()
{
dim3 numThreadsInBlock(8, 4);
dim3 numBlocks(1, 2);
hello_kernel<<<numBlocks, numThreadsInBlock>>>();
cudaDeviceSynchronize();
return 0;
}
Change the file extension to .cu to inform the compiler that it will contain GPU code.
Create a kernel function. Remember that kernel should be marked with __global__ specifier and should return void.
In the kernel function, get the thread index using threadIdx.x and print it out.
Call the kernel in a single block of 32 threads.
Add cudaDeviceSynchronize(..) call after the kernel call to ensure that the host will wait for the GPU to complete the task.
What will happen if we don’t add the cudaDeviceSynchronize(..) call?
1. Everything will execute as normal: the CPU will wait for the GPU to complete the execution before terminating.
2. An error will occur, since the GPU will not be able to complete the task before the end of the program is reached.
3. Only some of the threads will print their indices.
4. Nothing will be printed.
Solution
The correct answer is 4: nothing will be printed, since the program terminates right after the kernel launch.
You can also add a sleep(..) function call after the kernel to ensure that it completes before the program terminates (make sure to include unistd.h to make the sleep function available).
Compile the code using nvcc, run the executable.
Modify the code to run in 4 blocks of 32 threads.
Apart from threadIdx.x, you will need blockIdx.x and blockDim.x to compute the “global” thread index.
Print these values and the computed global index.
(*) Modify the code to use two-dimensional grid.
Remember that the total number of threads per block is limited to 1024 on NVIDIA GPUs.
Why is the order of the threads in the output random?
Try executing the program several times to see if there is a pattern in the way the output is printed.
Try increasing the number of threads per block to 64.
Can you notice anything interesting in the order of threads within the block?
Solution
The driver assigns the threads to multiprocessors block by block.
There is no guarantee that the first multiprocessor will complete its operations before the second.
The output is printed as the threads reach the corresponding line of the code, and which one gets there first depends on many different factors.
Within the block, the order seems to be consistent if the block size is 32.
When the number is larger, you can notice that the order of threads within chunks of 32 threads is consistent.
However, the order of these chunks can vary.
On NVIDIA GPUs, execution is performed by so-called warps of threads, and the size of a warp is exactly 32 for all NVIDIA GPUs.
Within the warp, the threads execute the same command simultaneously.
This is why the order within warp is consistent.
And this is also why one has to be very careful about thread divergence within a warp.
Even if just one thread diverges within a warp, the rest of the threads will wait until the divergent thread completes its operations.
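As an illustration (a sketch that is not part of the exercise), the following kernel makes one thread of every warp take a much more expensive branch than the other 31 threads; the whole warp has to wait for it:
__global__ void divergent_kernel(float* data)
{
    int i = threadIdx.x + blockIdx.x * blockDim.x;
    if (threadIdx.x % 32 == 0)
    {
        // One thread per warp does a lot of work here, while the other
        // 31 threads of the warp sit idle until this branch is finished.
        for (int k = 0; k < 1000; k++)
        {
            data[i] = data[i] * 0.999f + 0.001f;
        }
    }
    else
    {
        // The cheap branch taken by the remaining threads of the warp.
        data[i] = data[i] * 2.0f;
    }
}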
© Copyright 2021, Artem Zhmurov and individual contributors.
|
Lecture: Manycore GPU Architectures
and Programming, Part 2
1
CSCE 569 Parallel Computing
Department of Computer Science and Engineering
Yonghong Yan
[email protected]
https://passlab.github.io/CSCE569/
Manycore GPU Architectures and
Programming: Outline
• Introduction
– GPU architectures, GPGPUs, and CUDA
• GPU Execution model
• CUDA Programming model
• Working with Memory in CUDA
– Global memory, shared and constant memory
• Streams and concurrency
• CUDA instruction intrinsic and library
• Performance, profiling, debugging, and error handling
• Directive-based high-level programming model
– OpenACC and OpenMP
2
How is the GPU controlled?
• The CUDA API is split into:
– The CUDA Management API
– The CUDA Kernel API
• The CUDA Management API is for a variety of operations
– GPU memory allocation, data transfer, execution, resource
creation
– Mostly regular C function and calls
• The CUDA Kernel API is used to define the computation to
be performed by the GPU
– C extensions
3
How is the GPU controlled?
• A CUDA kernel:
– Defines the operations to be performed by a single thread on
the GPU
– Just as a C/C++ function defines work to be done on the CPU
– Syntactically, a kernel looks like C/C++ with some extensions
__global__ void kernel(...) {
...
}
– Every CUDA thread executes the same kernel logic (SIMT)
– Initially, the only difference between threads are their thread
coordinates
4
How are GPU threads organized?
• CUDA thread hierarchy
– Warp = SIMT Group
– Thread Block = SIMT Groups that run
concurrently on an SM
– Grid = All Thread Blocks created by the same
kernel launch
• Launching a kernel is simple and similar to a function call.
– kernel name and arguments
– # of thread blocks/grid and # of threads/block to create:
kernel<<<nblocks,
threads_per_block>>>(arg1, arg2, ...);
5
How are GPU threads organized?
• In CUDA, only thread blocks and grids are first-class
citizens of the programming model.
• The number of warps created and their organization are
implicitly controlled by the kernel launch configuration, but
never set explicitly.
kernel<<<nblocks,
threads_per_block>>>(arg1, arg2, ...);
kernel launch
configuration
6
How are GPU threads organized?
• GPU threads can be configured in one-, two-, or three-
dimensional layouts
– One-dimensional blocks and grids:
int nblocks = 4;
int threads_per_block = 8;
kernel<<<nblocks, threads_per_block>>>(...);
7
[Figure: a one-dimensional grid of four blocks, Block 0 through Block 3, each with 8 threads]
How are GPU threads organized?
• GPU threads can be configured in one-, two-, or three-
dimensional layouts
– Two-dimensional blocks and grids:
dim3 nblocks(2,2);
dim3 threads_per_block(4,2);
kernel<<<nblocks, threads_per_block>>>(...);
8
How are GPU threads organized?
• GPU threads can be configured in one-, two-, or three-
dimensional layouts
– Two-dimensional grid and one-dimensional blocks:
dim3 nblocks(2,2);
int threads_per_block = 8;
kernel<<<nblocks, threads_per_block>>>(...);
9
How are GPU threads organized?
• On the GPU, the number of blocks and threads per block is
exposed through intrinsic thread coordinate variables:
– Dimensions
– IDs
– gridDim.x, gridDim.y, gridDim.z: number of blocks in a kernel launch
– blockIdx.x, blockIdx.y, blockIdx.z: unique ID of the block that contains the current thread
– blockDim.x, blockDim.y, blockDim.z: number of threads in each block
– threadIdx.x, threadIdx.y, threadIdx.z: unique ID of the current thread within its block
10
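As a small illustration (a sketch, not code from the lecture; the array width nx and height ny are assumptions), these variables can be combined into global 2D coordinates and a flattened offset:
__global__ void kernel_2d(float* data, int nx, int ny)
{
    // Global coordinates of this thread within the whole grid.
    int ix = blockIdx.x * blockDim.x + threadIdx.x;
    int iy = blockIdx.y * blockDim.y + threadIdx.y;
    if (ix < nx && iy < ny)
    {
        // Row-major offset into the flattened 2D array.
        int idx = iy * nx + ix;
        data[idx] = 2.0f * data[idx];
    }
}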
How are GPU threads organized?
• Thread coordinate variables can be used to calculate a globally unique ID for a thread on the GPU inside a one-dimensional grid and one-dimensional block:
kernel<<<4, 8>>>(...);
__global__ void kernel(...) {
int tid = blockIdx.x * blockDim.x + threadIdx.x;
...
}
11
[Figure: four blocks of 8 threads; for the highlighted thread, blockIdx.x = 2, blockDim.x = 8, threadIdx.x = 2, so tid = 2 * 8 + 2 = 18]
How are GPU threads organized?
• Thread coordinates offer a way to differentiate threads and
identify thread-specific input data or code paths.
– Link data and computation, a mapping
__global__ void kernel(int *arr) {
int tid = blockIdx.x * blockDim.x + threadIdx.x;
if (tid < 32) {
arr[tid] = f(arr[tid]);
} else {
arr[tid] = g(arr[tid]);
}
}
12
code path for threads with tid < 32
code path for threads with tid >= 32
Thread Divergence: recall that useless code path is
executed, but then disabled in SIMT execution model
How is GPU memory managed?
• CUDA Memory Management API
– Allocation of GPU memory
– Transfer of data from the host to GPU memory
– Free-ing GPU memory
Host Function → CUDA Analogue:
– malloc → cudaMalloc
– memcpy → cudaMemcpy
– free → cudaFree
13
How is GPU memory managed?
cudaError_t cudaMalloc(void **devPtr,
size_t size);
– Allocate size bytes of GPU memory and store their address
at *devPtr
cudaError_t cudaFree(void *devPtr);
– Release the device memory allocation stored at devPtr
– Must be an allocation that was created using cudaMalloc
14
How is GPU memory managed?
cudaError_t cudaMemcpy(
void *dst, const void *src, size_t count,
enum cudaMemcpyKind kind);
– Transfers count bytes from the memory pointed to by src to
dst
– kind can be:
• cudaMemcpyHostToHost,
• cudaMemcpyHostToDevice,
• cudaMemcpyDeviceToHost,
• cudaMemcpyDeviceToDevice
– The locations of dst and src must match kind, e.g. if kind is
cudaMemcpyHostToDevice then src must be a host array and
dst must be a device array
15
How is GPU memory managed?
void *d_arr, *h_arr;
h_arr = … ; /* init host memory and data */
// Allocate memory on GPU and its address is in d_arr
cudaMalloc((void **)&d_arr, nbytes);
// Transfer data from host to device
cudaMemcpy(d_arr, h_arr, nbytes,
cudaMemcpyHostToDevice);
// Transfer data from a device to a host
cudaMemcpy(h_arr, d_arr, nbytes,
cudaMemcpyDeviceToHost);
// Free the allocated memory
cudaFree(d_arr);
16
CUDA Program Flow
• At its most basic, the flow of a CUDA program is as follows:
1. Allocate GPU memory
2. Populate GPU memory with inputs from the host
3. Execute a GPU kernel on those inputs
4. Transfer outputs from the GPU back to the host
5. Free GPU memory
• Let’s take a look at a simple example that manipulates data
17
AXPY Example with OpenMP: Multicore
• y = α·x + y
– x and y are vectors of size n
– α is scalar
• Data (x, y and a) are shared
– Parallelization is relatively easy
18
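The OpenMP code itself is not reproduced here; a minimal multicore sketch (assuming float vectors x and y of length n, compiled with -fopenmp) could look like:
void axpy_omp(int n, float a, const float* x, float* y)
{
    // Iterations are independent, so the loop is split across CPU threads.
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
    {
        y[i] = a * x[i] + y[i];
    }
}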
CUDA Program Flow
• AXPY is an embarrassingly parallel problem
– How can vector addition be parallelized?
– How can we map this to GPUs?
• Each thread does one element
[Figure: vectors A and B added element-wise into C, one thread per element]
19
AXPY Offloading To a GPU using CUDA
20
Memory allocation on device
Memcpy from host to device
Launch parallel execution
Memcpy from device to host
Deallocation of dev memory
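The slide shows these steps as an annotated figure; a minimal sketch of the corresponding host code and kernel (names such as axpy_kernel, h_x, and d_x are assumptions, not the lecture's exact code) could be:
__global__ void axpy_kernel(int n, float a, const float* x, float* y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) y[i] = a * x[i] + y[i];
}
void axpy_cuda(int n, float a, const float* h_x, float* h_y)
{
    float *d_x, *d_y;
    size_t nbytes = n * sizeof(float);
    // Memory allocation on device
    cudaMalloc((void**)&d_x, nbytes);
    cudaMalloc((void**)&d_y, nbytes);
    // Memcpy from host to device
    cudaMemcpy(d_x, h_x, nbytes, cudaMemcpyHostToDevice);
    cudaMemcpy(d_y, h_y, nbytes, cudaMemcpyHostToDevice);
    // Launch parallel execution
    int threads_per_block = 256;
    int nblocks = (n + threads_per_block - 1) / threads_per_block;
    axpy_kernel<<<nblocks, threads_per_block>>>(n, a, d_x, d_y);
    // Memcpy from device to host
    cudaMemcpy(h_y, d_y, nbytes, cudaMemcpyDeviceToHost);
    // Deallocation of dev memory
    cudaFree(d_x);
    cudaFree(d_y);
}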
CUDA Program Flow
• Consider the workflow of the example vector addition vecAdd.cu:
1. Allocate space for A, B, and C on the GPU
2. Transfer the initial contents of A and B to the GPU
3. Execute a kernel in which each thread sums Ai and Bi, and stores the result in Ci
4. Transfer the final contents of C back to the host
5. Free A, B, and C on the GPU
• Exercise: modify the example to compute C = A + B + C and A = B * C; we will need both C and A on the host side after the GPU computation.
• Compiling and running on bridges:
– https://passlab.github.io/CSCE569/resources/HardwareSoftware.html#interactive
– copy the gpu_code_examples folder from my home folder:
• cp -r ~yan/gpu_code_examples ~
– $ nvcc -Xcompiler -fopenmp vectorAdd.cu
– $ ./a.out
21
More Examples and Exercises
• Matvec:
– Version 1: each thread computes one element of the final
vector
– Version 2:
• Matmul in assignment #4
– Version 1: each thread computes one row of the final matrix C
22
CUDA SDK Examples
• CUDA Programming Manual:
– http://docs.nvidia.com/cuda/cuda-c-programming-guide
• CUDA SDK Examples on bridges
– module load gcc/5.3.0 cuda/8.0
– export CUDA_PATH=/opt/packages/cuda/8.0
– /opt/packages/cuda/8.0/samples
• Copy to your home folder
– cp -r /opt/packages/cuda/8.0/samples ~/CUDA_samples
• Do a “make” in the folder, and it will build all the sources
• Or go to a specific example folder and make, it will build only the
binary
• Find ones you are interested in and run to see
23
Inspecting CUDA Programs
• Debugging CUDA program:
– cuda-gdb debugging tool, like gdb
• Profiling a program to examine the performance
– nvprof tool, like gprof
– nvprof ./vecAdd
24
Manycore GPU Architectures and
Programming: Outline
• Introduction
– GPU architectures, GPGPUs, and CUDA
• GPU Execution model
• CUDA Programming model
• Working with Memory in CUDA
– Global memory, shared and constant memory
• Streams and concurrency
• CUDA instruction intrinsic and library
• Performance, profiling, debugging, and error handling
• Directive-based high-level programming model
– OpenACC and OpenMP
25
Storing Data on the CPU
• A memory hierarchy emulates a large amount of low-
latency memory
– Cache data from a large, high-latency memory bank in a small
low-latency memory bank
[Figure: CPU memory hierarchy — CPU, L1 cache, L2 cache, DRAM]
26
GPU Memory Hierarchy
27
[Figure: GPU memory hierarchy — registers and local memory per SIMT thread group, on-chip shared memory/cache per SM, and global, constant, and texture memory shared by all SMs on the GPU]
• More complex than
the CPU memory
– Many different types
of memory, each with
special-purpose
characteristics
• SRAM
• DRAM
– More explicit control
over data movement
Storing Data on the GPU
28
• Registers (SRAM)
– Lowest latency memory space on the
GPU
– Private to each CUDA thread
– Constant pool of registers per-SM
divided among threads in resident
thread blocks
– Architecture-dependent limit on
number of registers per thread
– Registers are not explicitly used by the
programmer, implicitly allocated by
the compiler
– -maxrregcount compiler option
allows you to limit # registers per
thread
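For example (a hypothetical compile line, not from the slides), the per-thread register limit can be requested when compiling with nvcc:
nvcc -maxrregcount=32 -o my_kernel my_kernel.cu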
Storing Data on the GPU
29
• Shared Memory (SRAM)
– Declared with the __shared__
keyword
– Low-latency, high bandwidth
– Shared by all threads in a thread block
– Explicitly allocated and managed by
the programmer, manual L1 cache
– Stored on-SM, same physical memory
as the GPU L1 cache
– On-SM memory is statically
partitioned between L1 cache and
shared memory
Storing Data on the GPU
30
• GPU Caches (SRAM)
– Behaviour of GPU caches is
architecture-dependent
– Per-SM L1 cache stored on-chip
– Per-GPU L2 cache stored off-chip,
caches values for all SMs
– Due to parallelism of accesses, GPU
caches do not follow the same LRU
rules as CPU caches
Storing Data on the GPU
31
• Constant Memory (DRAM)
– Declared with the __constant__
keyword
– Read-only
– Limited in size: 64KB
– Stored in device memory (same
physical location as Global Memory)
– Cached in a per-SM constant cache
– Optimized for all threads in a warp
accessing the same memory cell
Storing Data on the GPU
32
• Texture Memory (DRAM)
– Read-only
– Stored in device memory (same
physical location as Global Memory)
– Cached in a texture-only on-SM cache
– Optimized for 2D spatial locality
(caches commonly only optimized for
1D locality)
– Explicitly used by the programmer
– Special-purpose memory
Storing Data on the GPU
33
• Global Memory (DRAM)
– Large, high-latency memory
– Stored in device memory (along
with constant and texture memory)
– Can be declared statically with
__device__
– Can be allocated dynamically with
cudaMalloc
– Explicitly managed by the
programmer
– Optimized for all threads in a warp
accessing neighbouring memory
cells
Static Global Memory
• Static Global Memory has a fixed size throughout execution
time:
__device__ float devData;
__global__ void checkGlobalVariable() {
printf("devData has value %f\n", devData);
}
• Initialized using cudaMemcpyToSymbol:
cudaMemcpyToSymbol(devData, &hostData, sizeof(float));
• Fetched using cudaMemcpyFromSymbol:
cudaMemcpyFromSymbol(&hostData, devData,
sizeof(float));
36
Dynamic Global Memory
• We have already seen dynamic global memory
– cudaMalloc dynamically allocates global memory
– cudaMemcpy transfers to/from global memory
– cudaFree frees global memory allocated by cudaMalloc
• cudaMemcpy supports 4 types of transfer:
– cudaMemcpyHostToHost,
cudaMemcpyHostToDevice,
cudaMemcpyDeviceToHost,
cudaMemcpyDeviceToDevice
• You can also memset global memory
cudaError_t cudaMemset(void *devPtr, int value,
size_t count);
37
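For instance (a small sketch; d_arr and n are assumed to come from an earlier cudaMalloc), zero-initializing a device array of floats looks like:
// cudaMemset works byte-wise; setting every byte to 0 yields 0.0f floats.
cudaMemset(d_arr, 0, n * sizeof(float));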
Global Memory Access Patterns
• CPU caches are optimized for linear, iterative memory
accesses
– Cache lines ensure that accessing one memory cell brings
neighbouring memory cells into cache
– If an application exhibits good spatial or temporal locality
(which many do), later references will also hit in cache
CPU
System
Memory
Cache
38
Global Memory Access Patterns
• GPU caching is a more challenging problem
– Thousands of threads cooperating on a problem
– Difficult to predict the next round of accesses for all threads
• For efficient global memory access, GPUs instead rely on:
– Large device memory bandwidth
– Aligned and coalesced memory access patterns
– Maintaining sufficient pending I/O operations to keep the
memory bus saturated and hide global memory latency
39
Global Memory Access Patterns
• Achieving aligned and coalesced global memory accesses is
key to optimizing an application’s use of global memory
bandwidth
– Coalesced: the threads within a warp reference memory
addresses that can all be serviced by a single global memory
transaction (think of a memory transaction as the process of
bringing a cache line into the cache)
– Aligned: the global memory accesses by threads within a warp
start at an address boundary that is an even multiple of the
size of a global memory transaction
40
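To make the difference concrete, here is a small sketch (not from the slides; bounds checks omitted for brevity) of a coalesced access pattern next to a strided one:
__global__ void coalesced_read(const float* in, float* out)
{
    // Neighbouring threads read neighbouring 4-byte elements,
    // so a warp touches one contiguous 128-byte region.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    out[i] = in[i];
}
__global__ void strided_read(const float* in, float* out, int stride)
{
    // Neighbouring threads read elements that are 'stride' apart,
    // so a warp's accesses are spread over many memory transactions.
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    out[i] = in[i * stride];
}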
Global Memory Access Patterns
• A global memory transaction is either 32 or 128 bytes
– The size of a memory transaction depends on which caches it
passes through
– If L1 + L2: 128 byte
– If L2 only: 32 bytes
– Which caches a global memory transaction passes through
depends on GPU architecture and the type of access (read vs.
write)
41
Global Memory Access Patterns
• Aligned and Coalesced Memory Access (w/ L1 cache)
– 32-thread warp, 128-byte memory transactions
• With 128-byte access, a single transaction is required and
all of the loaded bytes are used
42
Global Memory Access Patterns
• Misaligned and Coalesced Memory Access (w/ L1 cache)
• With 128-byte access, two memory transactions are
required to load all requested bytes. Only half of the loaded
bytes are used.
43
Global Memory Access Patterns
• Misaligned and Uncoalesced Memory Access (w/ L1 cache)
• With uncoalesced loads, many more bytes loaded than
requested
44
Global Memory Access Patterns
• Misaligned and Uncoalesced Memory Access (w/ L1 cache)
• One factor to consider with uncoalesced loads: while the
efficiency of this access is very low it may bring many cache
lines into L1/L2 cache which are used by later memory
accesses. The GPU is flexible enough to perform well, even
for applications that present suboptimal access patterns.
45
Global Memory Access Patterns
• Memory accesses that are not cached in L1 cache are
serviced by 32-byte transactions
– This can improve memory bandwidth utilization
– However, the L2 cache is device-wide, higher latency than L1,
and still relatively small → many applications may take a
performance hit if L1 cache is not used for reads
46
Global Memory Access Patterns
• Aligned and Coalesced Memory Access (w/o L1 cache)
• With 32-byte transactions, four transactions are required
and all of the loaded bytes are used
47
Global Memory Access Patterns
• Misaligned and Coalesced Memory Access (w/o L1 cache)
• With 32-byte transactions, extra memory transactions are
still required to load all requested bytes but the number of
wasted bytes is likely reduced, compared to 128-byte
transactions.
48
Global Memory Access Patterns
• Misaligned and Uncoalesced Memory Access (w/o L1
cache)
• With uncoalesced loads, more bytes loaded than requested
but better efficiency than with 128-byte transactions
49
Global Memory Access Patterns
• Global Memory Writes are always serviced by 32-byte
transactions
50
Global Memory and Special-Purpose Memory
• Global memory is widely useful and as easy to use as CPU DRAM
• Limitations
– Easy to find applications with memory access patterns that are
intrinsically poor for global memory
– Many threads accessing the same memory cell → poor global
memory efficiency
– Many threads accessing sparse memory cells → poor global
memory efficiency
• Special-purpose memory spaces to address these deficiencies in
global memory
– Specialized for different types of data, different access patterns
– Give more control over data movement and data placement than
CPU architectures do
53
Shared Memory
54
• Declared with the __shared__
keyword
• Low-latency, high bandwidth
• Shared by all threads in a thread
block
• Explicitly allocated and managed by
the programmer, manual L1 cache
• Stored on-SM, same physical memory
as the GPU L1 cache
• On-SM memory is statically
partitioned between L1 cache and
shared memory
Shared Memory Allocation
• Shared memory can be allocated statically or dynamically
• Statically Allocated Shared Memory
– Size is fixed at compile-time
– Can declare many statically allocated shared memory variables
– Can be declared globally or inside a device function
– Can be multi-dimensional arrays
__shared__ int s_arr[256][256];
55
Shared Memory Allocation
• Dynamically Allocated Shared Memory
– Size in bytes is set at kernel launch with a third kernel launch
configuration parameter
– Can only have one dynamically allocated shared memory array
per kernel
– Must be one-dimensional arrays
__global__ void kernel(...) {
extern __shared__ int s_arr[];
...
}
kernel<<<nblocks, threads_per_block,
shared_memory_bytes>>>(...);
56
Matvec using shared memory
57
Matrix Vector Multiplication
58
Matrix Vector Multiplication
59
Matrix Multiplication V1 and V2 in Assignment
#4
• https://docs.nvidia.com/cuda/cuda-c-programming-
guide/#shared-memory
60
GPU Memory Hierarchy
68
[Figure: GPU memory hierarchy, repeated from slide 27 — registers and local memory per thread group, on-chip shared memory/cache per SM, and global, constant, and texture memory]
Constant Memory
69
• Declared with the __constant__
keyword
• Read-only
• Limited in size: 64KB
• Stored in device memory (same
physical location as Global Memory)
• Cached in a per-SM constant cache
• Optimized for all threads in a warp
accessing the same memory cell
Constant Memory
• As its name suggests, constant memory is best used for
storing constants
– Values which are read-only
– Values that are accessed identically by all threads
• For example: suppose all threads are evaluating the
equation
y = mx + b
for different values of x, but identical values of m and b
– All threads would reference m and b with the same memory
operation
– This broadcast access pattern is optimal for constant memory
70
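A minimal sketch of this pattern (variable names are assumptions, not code from the lecture) could be:
// m and b live in constant memory; every thread reads the same values,
// which is exactly the broadcast pattern the constant cache is optimized for.
__constant__ float c_m;
__constant__ float c_b;
__global__ void evaluate_line(const float* x, float* y, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
    {
        y[i] = c_m * x[i] + c_b;
    }
}
// On the host, the constants are set once before the kernel launch:
// cudaMemcpyToSymbol(c_m, &h_m, sizeof(float));
// cudaMemcpyToSymbol(c_b, &h_b, sizeof(float));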
Constant Memory
• A simple 1D stencil
– target cell is updated based on its 8 neighbors, weighted by
some constants c0, c1, c2, c3
71
Constant Memory
• constantStencil.cu contains an example 1D stencil
that uses constant memory
__constant__ float coef[RADIUS + 1];
cudaMemcpyToSymbol(coef, h_coef, (RADIUS + 1) *
sizeof(float));
__global__ void stencil_1d(float *in, float *out, int N)
{
...
for (int i = 1; i <= RADIUS; i++) {
tmp += coef[i] * (smem[sidx + i] - smem[sidx - i]);
}
}
72
CUDA Synchronization
• When using shared memory, you often must coordinate
accesses by multiple threads to the same data
• CUDA offers synchronization primitives that allow you to
synchronize among threads
73
CUDA Synchronization
__syncthreads
– Synchronizes execution across all threads in a thread block
– No thread in a thread block can progress past a __syncthreads
before all other threads have reached it
– __syncthreads ensures that all changes to shared and
global memory by threads in this block are visible to all other
threads in this block
__threadfence_block
– All writes to shared and global memory by the calling thread
are visible to all other threads in its block after this fence
– Does not block thread execution
74
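As an illustration of why the barrier is needed (a sketch, not an example from the lecture; it assumes the kernel is launched with TILE threads per block), consider a block that stages data in shared memory before each thread reads its neighbour's value:
#define TILE 256
__global__ void shift_left(const float* in, float* out, int n)
{
    __shared__ float tile[TILE];
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
    {
        tile[threadIdx.x] = in[i];
    }
    // Without this barrier, a thread could read tile[threadIdx.x + 1]
    // before the neighbouring thread has written it.
    __syncthreads();
    if (i < n && threadIdx.x + 1 < TILE)
    {
        out[i] = tile[threadIdx.x + 1];
    }
}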
CUDA Synchronization
__threadfence
– All writes to global memory by the calling thread are visible to
all other threads in its grid after this fence
– Does not block thread execution
__threadfence_system
– All writes to global memory, page-locked host memory, and
memory of other CUDA devices by the calling thread are
visible to all other threads on all CUDA devices and all host
threads after this fence
– Does not block thread execution
75
Suggested Readings
1. Chapter 2, 4, 5 in Professional CUDA C Programming
2. Cliff Woolley. GPU Optimization Fundamentals. 2013. https://www.olcf.ornl.gov/wp-content/uploads/2013/02/GPU_Opt_Fund-CW1.pdf
3. Mark Harris. Using Shared Memory in CUDA C/C++. http://devblogs.nvidia.com/parallelforall/using-shared-memory-cuda-cc/
4. Mark Harris. Optimizing Parallel Reduction in CUDA. http://developer.download.nvidia.com/assets/cuda/files/reduction.pdf
76 |
CUDA/HIP
GPU Accelerated Small Matrix Multiplications
libsmm_acc is a library for small matrix-matrix multiplication on a GPU-accelerator. Stacks of matrix-matrix multiplication indices are passed from DBCSR to libsmm_acc which performs the multiplications on the GPU.
For a description of the library (some details are outdated, but this nevertheless provides a very good introduction), see Chapter 8.4 of:
WALKER, R. C., & GOETZ, A. W. (2016). Electronic structure calculations on graphics processing units: from quantum chemistry to condensed matter physics.
Compilation
libsmm_acc is compiled from within DBCSR, there is no separate compilation.
Directory Organization
Matrix-matrix Multiplication Kernels and Parameters
For a given matrix-matrix multiplication triplet characterized by its dimensions (m, n, k),
libsmm_acc can run 5 different matrix-matrix multiplication kernels:
which take between 3 and 7 parameters (see the figure at the top):
The performance of the matrix-matrix multiplication kernels is highly dependent on the choice of algorithm and parameters. For this reason, libsmm_acc provides lists of optimal parameters for different GPU cards and different (m, n, k)-triplets.
Contributing to libsmm_acc
We expect users to contribute to the library by providing new optimized kernels and support for new GPUs.
Autotuning procedure
Follow the autotuning procedure
Adding a new kernel
Choose a kernel name
Add the kernel's code (it must compile with both nvcc and hip) in the file kernels/smm_acc_dnt_name.h
Add a Python kernel class inheriting from the base kernel class in kernels/smm_acc_dnt_name.py
Adding support for a new GPU card
Add the GPU's compute architecture properties to kernels/gpu_properties.json. For more information on where to find these properties, please refer to the "info" field of kernels/gpu_properties.json.
Add the GPU to the gpu_architectures data structure in kernels/smm_acc.py.
Add the necessary code for setting ARCH_NUMBER correctly in the CMakeLists. Also add this GPU to the list of SUPPORTED_CUDA_ARCHITECTURES or SUPPORTED_HIP_ARCHITECTURES in the CMakeLists.
Add a minimal JSON file parameters_GPU.json, containing:
{
}
then add matrix-matrix multiplication parameters for this GPU using autotuning.
|
CUDA Shared Memory Swizzling
Introduction
When we write CUDA kernels that use shared memory, we have to be careful about shared memory bank conflicts. Having severe shared memory bank conflicts can introduce a significant performance penalty.
One simple way to deal with shared memory bank conflicts is to use padding. However, padding can waste shared memory and can have other drawbacks.
In this blog post, I would like to discuss how to deal with shared memory bank conflicts using swizzling. Swizzling is a more complicated technique that can be used to avoid shared memory bank conflicts without wasting shared memory.
CUDA Shared Memory Swizzling
Swizzling Example
When we use CUDA shared memory to cache data without using padding, it’s very common that either reading from or writing to shared memory by a warp can cause shared memory bank conflicts. Swizzling is a technique that rearranges the mapping of the shared memory index to avoid shared memory bank conflicts. Matrix transpose is a perfect example that can have shared memory bank conflicts if the implementation does not use padding or swizzling.
In the above example, the shared memory is a 2D array of float with size 32 × 16. In terms of matrix transpose, each warp reads a row of 32 values from the global memory and writes them to the shared memory with swizzling. There will be no shared memory bank conflicts when writing to the shared memory. To perform matrix transpose, each warp reads two swizzled “columns” of 32 values from the shared memory and writes them to the global memory. For example, the swizzled columns 0 and 1 are colored in yellow and cyan, respectively. In this way, there will be only one shared memory bank conflict when reading from the shared memory. Without swizzling, there would be 16 (2-way) shared memory bank conflicts when reading from the shared memory. Of course, if the shared memory is a 2D array of float with size 32 × 32, there will be no shared memory bank conflicts when writing to or reading from the shared memory.
Swizzling Formula
Given an array T array[][NX] in shared memory, we define NX × sizeof(T) == SWIZZLE_SIZE. The allowed values of SWIZZLE_SIZE are powers of 2 that are larger than or equal to 32, such as 32, 64, 128, 256, etc.
Given the index [y][x] in T array[][NX], we can compute the swizzled index x_swz as follows:
Compute the index of the TC-byte chunk within the SWIZZLE_SIZE-byte segment:
i_chunk = (y × NX + x) × sizeof(T) / sizeof(TC)
y_chunk = i_chunk / (SWIZZLE_SIZE / sizeof(TC))
x_chunk = i_chunk % (SWIZZLE_SIZE / sizeof(TC))
Compute the swizzled index of the TC-byte chunk using the XOR operation:
x_chunk_swz = y_chunk ^ x_chunk
Compute the swizzled index:
x_swz = x_chunk_swz × sizeof(TC) / sizeof(T) % NX + x % (sizeof(TC) / sizeof(T))
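A direct translation of these three steps into code (a sketch that follows the formula above; T, TC, and NX are the types and row width defined in the text) could look like:
template <typename T, typename TC, size_t NX>
__host__ __device__ size_t swizzled_x(size_t y, size_t x)
{
    constexpr size_t SWIZZLE_SIZE = NX * sizeof(T);
    // Step 1: index of the TC-byte chunk within the SWIZZLE_SIZE-byte segment.
    size_t const i_chunk = (y * NX + x) * sizeof(T) / sizeof(TC);
    size_t const y_chunk = i_chunk / (SWIZZLE_SIZE / sizeof(TC));
    size_t const x_chunk = i_chunk % (SWIZZLE_SIZE / sizeof(TC));
    // Step 2: swizzle the chunk index with XOR.
    size_t const x_chunk_swz = y_chunk ^ x_chunk;
    // Step 3: map the swizzled chunk index back to an element index in the row.
    return x_chunk_swz * sizeof(TC) / sizeof(T) % NX + x % (sizeof(TC) / sizeof(T));
}
The shared memory would then be accessed as, for example, shm[y][swizzled_x<float, float, 32>(y, x)], which with TC == T reduces to the (x ^ y) % NX mapping used in the transpose kernel below.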
Swizzling Properties
This swizzling formula has the following properties:
Property 1 ensures that there is no data loss during swizzling. Property 2 ensures that the indices before and after swizzling are mapped one to one.
Here I am going to show some informal mathematical proofs for the properties from the swizzling formula.
Proof
We will first prove the property 1.
$$\begin{align}x_{\text{chunk}}&= i_{\text{chunk}} \% (\text{SWIZZLE_SIZE} / \text{sizeof}(\text{TC})) \\&= \left(\left(y × \text{NX} + x\right) × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})\right) \% (\text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})) \\&= \left(y × \text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) + x × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})\right) \% (\text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})) \\&= \left(y × \text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) \% (\text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})) + x × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) \% (\text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})) \right) \% (\text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})) \\&= \left(x × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) \% (\text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})) \right) \% (\text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})) \\&= x × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) \% (\text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})) \\&= \left( x \% \text{NX} \right) × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) \\&= x × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) \\\end{align}$$
It seems that we have derived another equivalent formula for $x_{\text{chunk}}$. Note that $\text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})$ is a bit (right) shifting operation when $\text{sizeof}(\text{TC}) / \text{sizeof}(\text{T})$ is a power of 2.
$$\begin{align}y_{\text{chunk}}&= i_{\text{chunk}} / (\text{SWIZZLE_SIZE} / \text{sizeof}(\text{TC})) \\&= \left(\left(y × \text{NX} + x\right) × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})\right) / (\text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})) \\&= \left(y × \text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) + x × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})\right) / (\text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})) \\&= y × \text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) / (\text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})) + x × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) / (\text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})) \\&= y + x / \text{NX} \\&= y \\\end{align}$$
It seems that we have also derived another equivalent formula for $y_{\text{chunk}}$.
$$\begin{align}x_{\text{chunk_swz}}&= y_{\text{chunk}} \oplus x_{\text{chunk}} \\&= y \oplus \left( x × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) \right) \\&= y / (\text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})) × \text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) + \left( y \% (\text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})) \right) \oplus \left( x × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) \right) \\&= y / (\text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})) × \text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) + \left( \left( y \% \text{NX} \right) × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) \right) \oplus \left( x × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) \right) \\&= y / (\text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})) × \text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) + \left( y \% \text{NX} \right) \oplus x × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) \\\end{align}$$
Note that $\oplus$ is the bitwise XOR operation. If either $y_{\text{chunk}}$ or $x_{\text{chunk}}$ is a constant, the mapping is a one to one mapping.
$$\begin{align}x_{\text{swz}}&= x_{\text{chunk_swz}} × \text{sizeof}(\text{TC}) / \text{sizeof}(\text{T}) \% \text{NX} + x \% (\text{sizeof}(\text{TC}) / \text{sizeof}(\text{T})) \\\end{align}$$
Here the proof becomes a little bit informal.
Because a run of $\text{sizeof}(\text{TC}) / \text{sizeof}(\text{T})$ consecutive $x$ values will be mapped to one unique chunk index $x_{\text{chunk}}$, and the mapping between $x_{\text{chunk}}$ and $x_{\text{chunk_swz}}$ is a one-to-one mapping, one $x_{\text{chunk_swz}}$ value will map to one unique $x_{\text{chunk_swz}} × \text{sizeof}(\text{TC}) / \text{sizeof}(\text{T}) \% \text{NX}$ value. To create the one-to-one mapping between the swizzled index $x_{\text{swz}}$ and the original index $x$, the offset $x \% (\text{sizeof}(\text{TC}) / \text{sizeof}(\text{T}))$ is added. Therefore, the indices before and after swizzling must be mapped one to one.
Property 2 is trivial to show.
Property 3 might be somewhat easier to see given the following expression for $x_{\text{swz}}$.
$$\begin{align}x_{\text{swz}}&= x_{\text{chunk_swz}} × \text{sizeof}(\text{TC}) / \text{sizeof}(\text{T}) \% \text{NX} + x \% (\text{sizeof}(\text{TC}) / \text{sizeof}(\text{T})) \\&= \left( y / (\text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC})) × \text{NX} × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) + \left( y \% \text{NX} \right) \oplus x × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) \right) × \text{sizeof}(\text{TC}) / \text{sizeof}(\text{T}) \% \text{NX} + x \% (\text{sizeof}(\text{TC}) / \text{sizeof}(\text{T})) \\&= \left( y \% \text{NX} \right) \oplus x × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) × \text{sizeof}(\text{TC}) / \text{sizeof}(\text{T}) \% \text{NX} + x \% (\text{sizeof}(\text{TC}) / \text{sizeof}(\text{T})) \\&= \left( y \% \text{NX} \right) \oplus x × \text{sizeof}(\text{T}) / \text{sizeof}(\text{TC}) × \text{sizeof}(\text{TC}) / \text{sizeof}(\text{T}) + x \% (\text{sizeof}(\text{TC}) / \text{sizeof}(\text{T})) \\\end{align}$$
Given any $x$ and any $\{y, y+1, y+2, \cdots, y+\text{NX}-1\}$, the number of unique swizzled index $x_{\text{swz}}$ is $\text{NX}$ which is maximized.
Examples
Matrix Transpose
In this example, we implemented matrix transpose CUDA kernels using shared memory in three different ways:
#include <algorithm>#include <cassert>#include <chrono>#include <cstdio>#include <functional>#include <iomanip>#include <iostream>#include <random>#include <vector>#include <cuda_runtime.h>#define CHECK_CUDA_ERROR(val) check((val), #val, __FILE__, __LINE__)void check(cudaError_t err, char const* func, char const* file, int line){ if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << " " << func << std::endl; std::exit(EXIT_FAILURE); }}#define CHECK_LAST_CUDA_ERROR() check_last(__FILE__, __LINE__)void check_last(char const* file, int line){ cudaError_t const err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); }}template <class T>float measure_performance(std::function<T(cudaStream_t)> bound_function, cudaStream_t stream, size_t num_repeats = 10, size_t num_warmups = 10){ cudaEvent_t start, stop; float time; CHECK_CUDA_ERROR(cudaEventCreate(&start)); CHECK_CUDA_ERROR(cudaEventCreate(&stop)); for (size_t i{0}; i < num_warmups; ++i) { bound_function(stream); } CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); CHECK_CUDA_ERROR(cudaEventRecord(start, stream)); for (size_t i{0}; i < num_repeats; ++i) { bound_function(stream); } CHECK_CUDA_ERROR(cudaEventRecord(stop, stream)); CHECK_CUDA_ERROR(cudaEventSynchronize(stop)); CHECK_LAST_CUDA_ERROR(); CHECK_CUDA_ERROR(cudaEventElapsedTime(&time, start, stop)); CHECK_CUDA_ERROR(cudaEventDestroy(start)); CHECK_CUDA_ERROR(cudaEventDestroy(stop)); float const latency{time / num_repeats}; return latency;}constexpr size_t div_up(size_t a, size_t b) { return (a + b - 1) / b; }template <typename T, size_t BLOCK_TILE_SIZE_X = 32, size_t BLOCK_TILE_SIZE_Y = 32, size_t BLOCK_TILE_SKEW_SIZE_X = 0>__global__ void transpose(T* output_matrix, T const* input_matrix, size_t M, size_t N){ // Waste some shared memory to avoid bank conflicts if // BLOCK_TILE_SKEW_SIZE_X != 0. __shared__ T shm[BLOCK_TILE_SIZE_Y][BLOCK_TILE_SIZE_X + BLOCK_TILE_SKEW_SIZE_X]; // In some algorithms, such as matrix multiplication, // a warp of threads have to access a column of the 2D matrix in the shared // memory. Using the conventional index mapping, if the column size is not a // multiple of the warp size, there will be bank conflicts. size_t const input_matrix_from_idx_x{threadIdx.x + blockIdx.x * blockDim.x}; size_t const input_matrix_from_idx_y{threadIdx.y + blockIdx.y * blockDim.y}; size_t const input_matrix_from_idx{input_matrix_from_idx_x + input_matrix_from_idx_y * N}; size_t const shm_to_idx_x{threadIdx.x}; size_t const shm_to_idx_y{threadIdx.y}; if ((input_matrix_from_idx_y < M) && (input_matrix_from_idx_x < N)) { // Coalesced global memory access. // No shared memory bank conflict. shm[shm_to_idx_y][shm_to_idx_x] = input_matrix[input_matrix_from_idx]; } // Make sure the buffer in a block is filled. 
__syncthreads(); size_t const block_thread_idx{threadIdx.x + threadIdx.y * blockDim.x}; size_t const shm_from_idx_x{block_thread_idx / BLOCK_TILE_SIZE_Y}; size_t const shm_from_idx_y{block_thread_idx % BLOCK_TILE_SIZE_Y}; size_t const output_matrix_to_idx_x{shm_from_idx_y + blockIdx.y * blockDim.y}; size_t const output_matrix_to_idx_y{shm_from_idx_x + blockIdx.x * blockDim.x}; size_t const output_matrix_to_idx{output_matrix_to_idx_x + output_matrix_to_idx_y * M}; if ((output_matrix_to_idx_y < N) && (output_matrix_to_idx_x < M)) { // Coalesced global memory access. // No shared memory bank conflict if BLOCK_TILE_SKEW_SIZE_X = 1. output_matrix[output_matrix_to_idx] = shm[shm_from_idx_y][shm_from_idx_x]; }}template <typename T, size_t BLOCK_TILE_SIZE_X = 32, size_t BLOCK_TILE_SIZE_Y = 32>__global__ void transpose_swizzling(T* output_matrix, T const* input_matrix, size_t M, size_t N){ __shared__ T shm[BLOCK_TILE_SIZE_Y][BLOCK_TILE_SIZE_X]; // In some algorithms, such as matrix multiplication, // a warp of threads have to access a column of the 2D matrix in the shared // memory. Using the conventional index mapping, if the column size is not a // multiple of the warp size, there will be bank conflicts. size_t const input_matrix_from_idx_x{threadIdx.x + blockIdx.x * blockDim.x}; size_t const input_matrix_from_idx_y{threadIdx.y + blockIdx.y * blockDim.y}; size_t const input_matrix_from_idx{input_matrix_from_idx_x + input_matrix_from_idx_y * N}; size_t const shm_to_idx_x{threadIdx.x}; size_t const shm_to_idx_y{threadIdx.y}; size_t const shm_to_idx_x_swizzled{(shm_to_idx_x ^ shm_to_idx_y) % BLOCK_TILE_SIZE_X}; if ((input_matrix_from_idx_y < M) && (input_matrix_from_idx_x < N)) { // Coalesced global memory access. // No shared memory bank conflict. shm[shm_to_idx_y][shm_to_idx_x_swizzled] = input_matrix[input_matrix_from_idx]; } // Make sure the buffer in a block is filled. __syncthreads(); size_t const block_thread_idx{threadIdx.x + threadIdx.y * blockDim.x}; size_t const shm_from_idx_x{block_thread_idx / BLOCK_TILE_SIZE_Y}; size_t const shm_from_idx_y{block_thread_idx % BLOCK_TILE_SIZE_Y}; size_t const shm_from_idx_x_swizzled{(shm_from_idx_x ^ shm_from_idx_y) % BLOCK_TILE_SIZE_X}; size_t const output_matrix_to_idx_x{shm_from_idx_y + blockIdx.y * blockDim.y}; size_t const output_matrix_to_idx_y{shm_from_idx_x + blockIdx.x * blockDim.x}; size_t const output_matrix_to_idx{output_matrix_to_idx_x + output_matrix_to_idx_y * M}; if ((output_matrix_to_idx_y < N) && (output_matrix_to_idx_x < M)) { // Coalesced global memory access. // No shared memory bank conflict. 
output_matrix[output_matrix_to_idx] = shm[shm_from_idx_y][shm_from_idx_x_swizzled]; }}template <typename T>void launch_transpose_with_shm_bank_conflict(T* d_output_matrix, T const* d_input_matrix, size_t M, size_t N, cudaStream_t stream){ constexpr size_t BLOCK_TILE_SIZE_X{32}; constexpr size_t BLOCK_TILE_SIZE_Y{32}; constexpr size_t BLOCK_TILE_SKEW_SIZE_X{0}; dim3 const block_size{BLOCK_TILE_SIZE_X, BLOCK_TILE_SIZE_Y}; dim3 const grid_size{static_cast<unsigned int>(div_up(N, block_size.x)), static_cast<unsigned int>(div_up(M, block_size.y))}; transpose<T, BLOCK_TILE_SIZE_X, BLOCK_TILE_SIZE_Y, BLOCK_TILE_SKEW_SIZE_X> <<<grid_size, block_size, 0, stream>>>(d_output_matrix, d_input_matrix, M, N); CHECK_LAST_CUDA_ERROR();}template <typename T>void launch_transpose_without_shm_bank_conflict_via_padding( T* d_output_matrix, T const* d_input_matrix, size_t M, size_t N, cudaStream_t stream){ constexpr size_t BLOCK_TILE_SIZE_X{32}; constexpr size_t BLOCK_TILE_SIZE_Y{32}; constexpr size_t BLOCK_TILE_SKEW_SIZE_X{1}; dim3 const block_size{BLOCK_TILE_SIZE_X, BLOCK_TILE_SIZE_Y}; dim3 const grid_size{static_cast<unsigned int>(div_up(N, block_size.x)), static_cast<unsigned int>(div_up(M, block_size.y))}; transpose<T, BLOCK_TILE_SIZE_X, BLOCK_TILE_SIZE_Y, BLOCK_TILE_SKEW_SIZE_X> <<<grid_size, block_size, 0, stream>>>(d_output_matrix, d_input_matrix, M, N); CHECK_LAST_CUDA_ERROR();}template <typename T>void launch_transpose_without_shm_bank_conflict_via_swizzling( T* d_output_matrix, T const* d_input_matrix, size_t M, size_t N, cudaStream_t stream){ constexpr size_t BLOCK_TILE_SIZE_X{32}; constexpr size_t BLOCK_TILE_SIZE_Y{32}; dim3 const block_size{BLOCK_TILE_SIZE_X, BLOCK_TILE_SIZE_Y}; dim3 const grid_size{static_cast<unsigned int>(div_up(N, block_size.x)), static_cast<unsigned int>(div_up(M, block_size.y))}; transpose_swizzling<T, BLOCK_TILE_SIZE_X, BLOCK_TILE_SIZE_Y><<<grid_size, block_size, 0, stream>>>( d_output_matrix, d_input_matrix, M, N); CHECK_LAST_CUDA_ERROR();}template <typename T>bool is_equal(T const* data_1, T const* data_2, size_t size){ for (size_t i{0}; i < size; ++i) { if (data_1[i] != data_2[i]) { return false; } } return true;}template <typename T>bool verify_transpose_implementation( std::function<void(T*, T const*, size_t, size_t, cudaStream_t)> transpose_function, size_t M, size_t N){ // Fixed random seed for reproducibility std::mt19937 gen{0}; cudaStream_t stream; size_t const matrix_size{M * N}; std::vector<T> matrix(matrix_size, 0.0f); std::vector<T> matrix_transposed(matrix_size, 1.0f); std::vector<T> matrix_transposed_reference(matrix_size, 2.0f); std::uniform_real_distribution<T> uniform_dist(-256, 256); for (size_t i{0}; i < matrix_size; ++i) { matrix[i] = uniform_dist(gen); } // Create the reference transposed matrix using CPU. 
for (size_t i{0}; i < M; ++i) { for (size_t j{0}; j < N; ++j) { size_t const from_idx{i * N + j}; size_t const to_idx{j * M + i}; matrix_transposed_reference[to_idx] = matrix[from_idx]; } } T* d_matrix; T* d_matrix_transposed; CHECK_CUDA_ERROR(cudaMalloc(&d_matrix, matrix_size * sizeof(T))); CHECK_CUDA_ERROR(cudaMalloc(&d_matrix_transposed, matrix_size * sizeof(T))); CHECK_CUDA_ERROR(cudaStreamCreate(&stream)); CHECK_CUDA_ERROR(cudaMemcpy(d_matrix, matrix.data(), matrix_size * sizeof(T), cudaMemcpyHostToDevice)); transpose_function(d_matrix_transposed, d_matrix, M, N, stream); CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); CHECK_CUDA_ERROR(cudaMemcpy(matrix_transposed.data(), d_matrix_transposed, matrix_size * sizeof(T), cudaMemcpyDeviceToHost)); bool const correctness{is_equal(matrix_transposed.data(), matrix_transposed_reference.data(), matrix_size)}; CHECK_CUDA_ERROR(cudaFree(d_matrix)); CHECK_CUDA_ERROR(cudaFree(d_matrix_transposed)); CHECK_CUDA_ERROR(cudaStreamDestroy(stream)); return correctness;}template <typename T>float profile_transpose_implementation( std::function<void(T*, T const*, size_t, size_t, cudaStream_t)> transpose_function, size_t M, size_t N){ constexpr int num_repeats{100}; constexpr int num_warmups{10}; cudaStream_t stream; size_t const matrix_size{M * N}; T* d_matrix; T* d_matrix_transposed; CHECK_CUDA_ERROR(cudaMalloc(&d_matrix, matrix_size * sizeof(T))); CHECK_CUDA_ERROR(cudaMalloc(&d_matrix_transposed, matrix_size * sizeof(T))); CHECK_CUDA_ERROR(cudaStreamCreate(&stream)); std::function<void(cudaStream_t)> const transpose_function_wrapped{ std::bind(transpose_function, d_matrix_transposed, d_matrix, M, N, std::placeholders::_1)}; float const transpose_function_latency{measure_performance( transpose_function_wrapped, stream, num_repeats, num_warmups)}; CHECK_CUDA_ERROR(cudaFree(d_matrix)); CHECK_CUDA_ERROR(cudaFree(d_matrix_transposed)); CHECK_CUDA_ERROR(cudaStreamDestroy(stream)); return transpose_function_latency;}void print_latencty(std::string const& kernel_name, float latency){ std::cout << kernel_name << ": " << std::fixed << std::setprecision(2) << latency << " ms" << std::endl;}int main(){ // Unit tests. for (size_t m{1}; m <= 64; ++m) { for (size_t n{1}; n <= 64; ++n) { assert(verify_transpose_implementation<float>( &launch_transpose_with_shm_bank_conflict<float>, m, n)); assert(verify_transpose_implementation<float>( &launch_transpose_without_shm_bank_conflict_via_padding<float>, m, n)); assert(verify_transpose_implementation<float>( &launch_transpose_without_shm_bank_conflict_via_swizzling< float>, m, n)); } } // M: Number of rows. size_t const M{8192}; // N: Number of columns. 
size_t const N{8192}; std::cout << M << " x " << N << " Matrix" << std::endl; float const latency_with_shm_bank_conflict{ profile_transpose_implementation<float>( &launch_transpose_with_shm_bank_conflict<float>, M, N)}; print_latencty("Transpose with Shared Memory Bank Conflict", latency_with_shm_bank_conflict); float const latency_without_shm_bank_conflict_via_padding{ profile_transpose_implementation<float>( &launch_transpose_without_shm_bank_conflict_via_padding<float>, M, N)}; print_latencty("Transpose without Shared Memory Bank Conflict via Padding", latency_without_shm_bank_conflict_via_padding); float const latency_without_shm_bank_conflict_via_swizzling{ profile_transpose_implementation<float>( &launch_transpose_without_shm_bank_conflict_via_swizzling<float>, M, N)}; print_latencty( "Transpose without Shared Memory Bank Conflict via Swizzling", latency_without_shm_bank_conflict_via_swizzling); return 0;}
The program was built and performed on a platform that has an Intel i9-9900K CPU and an NVIDIA RTX 3090 GPU.
$ nvcc transpose.cu -o transpose
$ ./transpose
8192 x 8192 Matrix
Transpose with Shared Memory Bank Conflict: 1.10 ms
Transpose without Shared Memory Bank Conflict via Padding: 0.92 ms
Transpose without Shared Memory Bank Conflict via Swizzling: 0.92 ms
We can see that the transpose kernel with shared memory bank conflicts has the highest latency, while the transpose kernels without shared memory bank conflicts via padding and via swizzling have the same latency and run about 20% faster than the kernel with bank conflicts in this case.
Note that this implementation achieves ~65% of the peak memory bandwidth of an RTX 3090 GPU. The performance can be further improved significantly using vectorized memory access if the implementation assumes the matrix is always padded (and usually allocated using cudaMallocPitch) so that each row will continue to meet the coalescing requirement.
Swizzling vs Padding
Swizzling and padding are two common techniques to deal with shared memory bank conflicts.
The advantage of swizzling is that it does not waste shared memory space. The disadvantage of swizzling is that it is more complicated to implement and understand because the index mapping is not linear.
The advantage of padding is that it is simple to implement and understand. The disadvantage of padding is that it wastes shared memory space and can break the address alignment of the data if the padding size is not selected carefully, especially when we access the data in large chunks using reinterpret_cast, which causes undefined behavior. This usually happens when vectorized memory access is performed on 2D padded arrays, which accidentally breaks the alignment of the data.
References
CUDA Shared Memory Swizzling
https://leimao.github.io/blog/CUDA-Shared-Memory-Swizzling/
|
CUDA Matrix Multiplication
Introduction
CUDA is a parallel computing platform and programming model that allows software to use certain types of graphics processing units (GPUs) for general-purpose processing, an approach called general-purpose computing on GPUs (GPGPU). It can significantly enhance the performance of programs that can be computed with massive parallelism.
Matrix multiplication is a typical application that could be computed with massive parallelism. In this blog post, I would like to present a “hello-world” CUDA example of matrix multiplications and its preliminary optimizations.
Matrix Multiplication
There are two common matrix multiplication forms. The ordinary matrix multiplication mm and the batched matrix multiplication bmm.
$$\begin{align}\mathbf{C}^{n \times p} &= \mathbf{A}^{n \times m} \mathbf{B}^{m \times p} \\\mathbf{C}^{b \times n \times p} &= \mathbf{A}^{b \times n \times m} \mathbf{B}^{b \times m \times p} \\\end{align}$$
The reader could find the specifications of mm and bmm from PyTorch documentation torch.mm and torch.bmm.
In the following example, we first implemented the mm and bmm using C++. Then we implemented the mm using CUDA and naturally extended the mm implementation to the bmm implementation. Finally, we verified the correctness of the mm and bmm CUDA implementations.
Naive Implementation
This is the single source code file that contains the CPU and CUDA implementations for the matrix multiplication mm and the batched matrix multiplication bmm.
#include <cassert>#include <cstddef>#include <cstdint>#include <iomanip>#include <iostream>#include <random>#include <stdexcept>#include <vector>#define BLOCK_DIM 32#define checkCuda(val) check((val), #val, __FILE__, __LINE__)void check(cudaError_t err, const char* const func, const char* const file, const int line){ if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << " " << func << std::endl; std::exit(EXIT_FAILURE); }}template <typename T>std::vector<T> create_rand_vector(size_t n){ std::random_device r; std::default_random_engine e(r()); std::uniform_int_distribution<int> uniform_dist(-256, 256); std::vector<T> vec(n); for (size_t i{0}; i < n; ++i) { vec.at(i) = static_cast<T>(uniform_dist(e)); } return vec;}// mat_1: m x n// mat_2: n x p// mat_3: m x ptemplate <typename T>void mm(T const* mat_1, T const* mat_2, T* mat_3, size_t m, size_t n, size_t p){ // Compute the cells in mat_3 sequentially. for (size_t i{0}; i < m; ++i) { for (size_t j{0}; j < p; ++j) { T acc_sum{0}; for (size_t k{0}; k < n; ++k) { acc_sum += mat_1[i * n + k] * mat_2[k * p + j]; } mat_3[i * p + j] = acc_sum; } }}// mat_1: b x m x n// mat_2: b x n x p// mat_3: b x m x ptemplate <typename T>void bmm(T const* mat_1, T const* mat_2, T* mat_3, size_t b, size_t m, size_t n, size_t p){ // Iterate through the batch dimension. for (size_t i{0}; i < b; ++i) { mm(mat_1 + i * (m * n), mat_2 + i * (n * p), mat_3 + i * (m * p), m, n, p); }}template <typename T>__global__ void mm_kernel(T const* mat_1, T const* mat_2, T* mat_3, size_t m, size_t n, size_t p){ // 2D block and 2D thread // Each thread computes one cell in mat_3. size_t i{blockIdx.y * blockDim.y + threadIdx.y}; size_t j{blockIdx.x * blockDim.x + threadIdx.x}; // Do not process outside the matrix. // Do not forget the equal sign! if ((i >= m) || (j >= p)) { return; } T acc_sum{0}; for (size_t k{0}; k < n; ++k) { acc_sum += mat_1[i * n + k] * mat_2[k * p + j]; } mat_3[i * p + j] = acc_sum;}// It should be straightforward to extend a kernel to support batching.template <typename T>__global__ void bmm_kernel(T const* mat_1, T const* mat_2, T* mat_3, size_t b, size_t m, size_t n, size_t p){ // 2D block and 2D thread // Each thread computes one cell in mat_3. size_t i{blockIdx.y * blockDim.y + threadIdx.y}; size_t j{blockIdx.x * blockDim.x + threadIdx.x}; size_t l{blockIdx.z}; // Do not process outside the matrix. // Do not forget the equal sign! 
if ((i >= m) || (j >= p)) { return; } T acc_sum{0}; for (size_t k{0}; k < n; ++k) { acc_sum += mat_1[l * m * n + i * n + k] * mat_2[l * n * p + k * p + j]; } mat_3[l * m * p + i * p + j] = acc_sum;}template <typename T>void mm_cuda(T const* mat_1, T const* mat_2, T* mat_3, size_t m, size_t n, size_t p){ dim3 threads_per_block(BLOCK_DIM, BLOCK_DIM); dim3 blocks_per_grid(1, 1); blocks_per_grid.x = std::ceil(static_cast<double>(p) / static_cast<double>(threads_per_block.x)); blocks_per_grid.y = std::ceil(static_cast<double>(m) / static_cast<double>(threads_per_block.y)); mm_kernel<<<blocks_per_grid, threads_per_block>>>(mat_1, mat_2, mat_3, m, n, p);}template <typename T>void bmm_cuda(T const* mat_1, T const* mat_2, T* mat_3, size_t b, size_t m, size_t n, size_t p){ dim3 threads_per_block(BLOCK_DIM, BLOCK_DIM); dim3 blocks_per_grid(1, 1, 1); blocks_per_grid.x = std::ceil(static_cast<double>(p) / static_cast<double>(threads_per_block.x)); blocks_per_grid.y = std::ceil(static_cast<double>(m) / static_cast<double>(threads_per_block.y)); blocks_per_grid.z = b; bmm_kernel<<<blocks_per_grid, threads_per_block>>>(mat_1, mat_2, mat_3, b, m, n, p);}template <typename T>bool allclose(std::vector<T> const& vec_1, std::vector<T> const& vec_2, T const& abs_tol){ if (vec_1.size() != vec_2.size()) { return false; } for (size_t i{0}; i < vec_1.size(); ++i) { if (std::abs(vec_1.at(i) - vec_2.at(i)) > abs_tol) { std::cout << vec_1.at(i) << " " << vec_2.at(i) << std::endl; return false; } } return true;}template <typename T>bool random_test_mm_cuda(size_t m, size_t n, size_t p){ std::vector<T> const mat_1_vec{create_rand_vector<T>(m * n)}; std::vector<T> const mat_2_vec{create_rand_vector<T>(n * p)}; std::vector<T> mat_3_vec(m * p); std::vector<T> mat_4_vec(m * p); T const* mat_1{mat_1_vec.data()}; T const* mat_2{mat_2_vec.data()}; T* mat_3{mat_3_vec.data()}; T* mat_4{mat_4_vec.data()}; mm(mat_1, mat_2, mat_3, m, n, p); T *d_mat_1, *d_mat_2, *d_mat_4; // Allocate device buffer. checkCuda(cudaMalloc(&d_mat_1, sizeof(T) * mat_1_vec.size())); checkCuda(cudaMalloc(&d_mat_2, sizeof(T) * mat_2_vec.size())); checkCuda(cudaMalloc(&d_mat_4, sizeof(T) * mat_4_vec.size())); // Copy data from host to device. checkCuda(cudaMemcpy(d_mat_1, mat_1, sizeof(T) * mat_1_vec.size(), cudaMemcpyHostToDevice)); checkCuda(cudaMemcpy(d_mat_2, mat_2, sizeof(T) * mat_2_vec.size(), cudaMemcpyHostToDevice)); // Run matrix multiplication on GPU. mm_cuda(d_mat_1, d_mat_2, d_mat_4, m, n, p); cudaDeviceSynchronize(); cudaError_t err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA Matrix Multiplication kernel failed to execute." << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); } // Copy data from device to host. checkCuda(cudaMemcpy(mat_4, d_mat_4, sizeof(T) * mat_4_vec.size(), cudaMemcpyDeviceToHost)); // Free device buffer. checkCuda(cudaFree(d_mat_1)); checkCuda(cudaFree(d_mat_2)); checkCuda(cudaFree(d_mat_4)); return allclose<T>(mat_3_vec, mat_4_vec, 1e-4);}template <typename T>bool random_test_bmm_cuda(size_t b, size_t m, size_t n, size_t p){ std::vector<T> const mat_1_vec{create_rand_vector<T>(b * m * n)}; std::vector<T> const mat_2_vec{create_rand_vector<T>(b * n * p)}; std::vector<T> mat_3_vec(b * m * p); std::vector<T> mat_4_vec(b * m * p); T const* mat_1{mat_1_vec.data()}; T const* mat_2{mat_2_vec.data()}; T* mat_3{mat_3_vec.data()}; T* mat_4{mat_4_vec.data()}; bmm(mat_1, mat_2, mat_3, b, m, n, p); T *d_mat_1, *d_mat_2, *d_mat_4; // Allocate device buffer. 
checkCuda(cudaMalloc(&d_mat_1, sizeof(T) * mat_1_vec.size())); checkCuda(cudaMalloc(&d_mat_2, sizeof(T) * mat_2_vec.size())); checkCuda(cudaMalloc(&d_mat_4, sizeof(T) * mat_4_vec.size())); // Copy data from host to device. checkCuda(cudaMemcpy(d_mat_1, mat_1, sizeof(T) * mat_1_vec.size(), cudaMemcpyHostToDevice)); checkCuda(cudaMemcpy(d_mat_2, mat_2, sizeof(T) * mat_2_vec.size(), cudaMemcpyHostToDevice)); // Run matrix multiplication on GPU. bmm_cuda(d_mat_1, d_mat_2, d_mat_4, b, m, n, p); cudaDeviceSynchronize(); cudaError_t err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA Matrix Multiplication kernel failed to execute." << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); } // Copy data from device to host. checkCuda(cudaMemcpy(mat_4, d_mat_4, sizeof(T) * mat_4_vec.size(), cudaMemcpyDeviceToHost)); // Free device buffer. checkCuda(cudaFree(d_mat_1)); checkCuda(cudaFree(d_mat_2)); checkCuda(cudaFree(d_mat_4)); return allclose<T>(mat_3_vec, mat_4_vec, 1e-4);}template <typename T>bool random_multiple_test_mm_cuda(size_t num_tests){ std::random_device r; std::default_random_engine e(r()); std::uniform_int_distribution<int> uniform_dist(1, 256); size_t m{0}, n{0}, p{0}; bool success{false}; for (size_t i{0}; i < num_tests; ++i) { m = static_cast<size_t>(uniform_dist(e)); n = static_cast<size_t>(uniform_dist(e)); p = static_cast<size_t>(uniform_dist(e)); success = random_test_mm_cuda<T>(m, n, p); if (!success) { return false; } } return true;}template <typename T>bool random_multiple_test_bmm_cuda(size_t num_tests){ std::random_device r; std::default_random_engine e(r()); std::uniform_int_distribution<int> uniform_dist(1, 256); size_t b{0}, m{0}, n{0}, p{0}; bool success{false}; for (size_t i{0}; i < num_tests; ++i) { b = static_cast<size_t>(uniform_dist(e)); m = static_cast<size_t>(uniform_dist(e)); n = static_cast<size_t>(uniform_dist(e)); p = static_cast<size_t>(uniform_dist(e)); success = random_test_bmm_cuda<T>(b, m, n, p); if (!success) { return false; } } return true;}template <typename T>float measure_latency_mm_cuda(size_t m, size_t n, size_t p, size_t num_tests, size_t num_warmups){ cudaEvent_t startEvent, stopEvent; float time{0.0f}; checkCuda(cudaEventCreate(&startEvent)); checkCuda(cudaEventCreate(&stopEvent)); T *d_mat_1, *d_mat_2, *d_mat_4; // Allocate device buffer. checkCuda(cudaMalloc(&d_mat_1, sizeof(T) * m * n)); checkCuda(cudaMalloc(&d_mat_2, sizeof(T) * n * p)); checkCuda(cudaMalloc(&d_mat_4, sizeof(T) * m * p)); for (size_t i{0}; i < num_warmups; ++i) { mm_cuda(d_mat_1, d_mat_2, d_mat_4, m, n, p); } checkCuda(cudaEventRecord(startEvent, 0)); for (size_t i{0}; i < num_tests; ++i) { mm_cuda(d_mat_1, d_mat_2, d_mat_4, m, n, p); } checkCuda(cudaEventRecord(stopEvent, 0)); checkCuda(cudaEventSynchronize(stopEvent)); cudaError_t err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA Matrix Multiplication kernel failed to execute." << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); } checkCuda(cudaEventElapsedTime(&time, startEvent, stopEvent)); // Free device buffer. 
checkCuda(cudaFree(d_mat_1)); checkCuda(cudaFree(d_mat_2)); checkCuda(cudaFree(d_mat_4)); float latency{time / num_tests}; return latency;}template <typename T>float measure_latency_bmm_cuda(size_t b, size_t m, size_t n, size_t p, size_t num_tests, size_t num_warmups){ cudaEvent_t startEvent, stopEvent; float time{0.0f}; checkCuda(cudaEventCreate(&startEvent)); checkCuda(cudaEventCreate(&stopEvent)); T *d_mat_1, *d_mat_2, *d_mat_4; // Allocate device buffer. checkCuda(cudaMalloc(&d_mat_1, sizeof(T) * b * m * n)); checkCuda(cudaMalloc(&d_mat_2, sizeof(T) * b * n * p)); checkCuda(cudaMalloc(&d_mat_4, sizeof(T) * b * m * p)); for (size_t i{0}; i < num_warmups; ++i) { bmm_cuda(d_mat_1, d_mat_2, d_mat_4, b, m, n, p); } checkCuda(cudaEventRecord(startEvent, 0)); for (size_t i{0}; i < num_tests; ++i) { bmm_cuda(d_mat_1, d_mat_2, d_mat_4, b, m, n, p); } checkCuda(cudaEventRecord(stopEvent, 0)); checkCuda(cudaEventSynchronize(stopEvent)); cudaError_t err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA Matrix Multiplication kernel failed to execute." << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); } checkCuda(cudaEventElapsedTime(&time, startEvent, stopEvent)); // Free device buffer. checkCuda(cudaFree(d_mat_1)); checkCuda(cudaFree(d_mat_2)); checkCuda(cudaFree(d_mat_4)); float latency{time / num_tests}; return latency;}int main(){ constexpr size_t num_tests{10}; assert(random_multiple_test_mm_cuda<int32_t>(num_tests)); assert(random_multiple_test_mm_cuda<float>(num_tests)); assert(random_multiple_test_mm_cuda<double>(num_tests)); assert(random_multiple_test_bmm_cuda<int32_t>(num_tests)); assert(random_multiple_test_bmm_cuda<float>(num_tests)); assert(random_multiple_test_bmm_cuda<double>(num_tests)); constexpr size_t num_measurement_tests{100}; constexpr size_t num_measurement_warmups{10}; size_t b{128}, m{1024}, n{1024}, p{1024}; float mm_cuda_int32_latency{measure_latency_mm_cuda<int32_t>( m, n, p, num_measurement_tests, num_measurement_warmups)}; float mm_cuda_float_latency{measure_latency_mm_cuda<float>( m, n, p, num_measurement_tests, num_measurement_warmups)}; float mm_cuda_double_latency{measure_latency_mm_cuda<double>( m, n, p, num_measurement_tests, num_measurement_warmups)}; float bmm_cuda_int32_latency{measure_latency_bmm_cuda<int32_t>( b, m, n, p, num_measurement_tests, num_measurement_warmups)}; float bmm_cuda_float_latency{measure_latency_bmm_cuda<float>( b, m, n, p, num_measurement_tests, num_measurement_warmups)}; float bmm_cuda_double_latency{measure_latency_bmm_cuda<double>( b, m, n, p, num_measurement_tests, num_measurement_warmups)}; std::cout << "Matrix Multiplication CUDA Latency" << std::endl; std::cout << "m: " << m << " " << "n: " << n << " " << "p: " << p << std::endl; std::cout << "INT32: " << std::fixed << std::setprecision(5) << mm_cuda_int32_latency << " ms" << std::endl; std::cout << "FLOAT: " << std::fixed << std::setprecision(5) << mm_cuda_float_latency << " ms" << std::endl; std::cout << "DOUBLE: " << std::fixed << std::setprecision(5) << mm_cuda_double_latency << " ms" << std::endl; std::cout << "Batched Matrix Multiplication CUDA Latency" << std::endl; std::cout << "b: " << b << " " << "m: " << m << " " << "n: " << n << " " << "p: " << p << std::endl; std::cout << "INT32: " << std::fixed << std::setprecision(5) << bmm_cuda_int32_latency << " ms" << std::endl; std::cout << "FLOAT: " << std::fixed << std::setprecision(5) << bmm_cuda_float_latency << " ms" << std::endl; std::cout << "DOUBLE: " << std::fixed 
<< std::setprecision(5) << bmm_cuda_double_latency << " ms" << std::endl;}
Run Naive Example
Building and running the example requires an NVIDIA GPU. We used the official NVIDIA CUDA Docker container to set up the build environment.
To start the Docker container, please run the following command on the host computer.
$ docker run -it --rm --gpus all --cap-add=SYS_PTRACE --security-opt seccomp=unconfined -v $(pwd):/mnt -w /mnt nvcr.io/nvidia/cuda:11.7.1-devel-ubuntu22.04
To build and run the application, please run the following command in the Docker container.
$ cd /mnt/
$ nvcc mm.cu -o mm -std=c++14
$ ./mm
Matrix Multiplication CUDA Latency
m: 1024 n: 1024 p: 1024
INT32: 1.11436 ms
FLOAT: 0.98451 ms
DOUBLE: 4.10433 ms
Batched Matrix Multiplication CUDA Latency
b: 128 m: 1024 n: 1024 p: 1024
INT32: 125.26781 ms
FLOAT: 124.67697 ms
DOUBLE: 487.87039 ms
We should expect no assertion errors or any other kind of error during build and execution. The latencies were measured on an NVIDIA RTX 3090 GPU.
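As a rough back-of-the-envelope check (an estimate added here for context, not one of the original measurements), the naive FP32 mm latency above (0.98451 ms) corresponds to a throughput of
$$\begin{align}\frac{2mnp}{t} &= \frac{2 \times 1024^3}{0.98451 \times 10^{-3} \text{ s}} \approx 2.2 \text{ TFLOPS} \\\end{align}$$
This is far below the 35.58 TFLOPS FP32 peak of the NVIDIA RTX 3090, which suggests the naive kernel is limited by memory traffic rather than by math. The next section quantifies this.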
Matrix Multiplication Optimizations
CUDA kernel optimization is usually about accelerating the data traffic without changing the number of math operations. Getting a CUDA kernel fully optimized for the GPU requires deep experience with low-level GPU features, hardware specifications, and CUDA programming. But this does not prevent us from doing some preliminary optimization based on a basic understanding of the GPU.
Make Matrix Multiplication More Math-Bound
GPUs are very good at math-bound operations. According to my previous blog post "Math-Bound VS Memory-Bound Operations", if the number of operations remains the same while the number of memory IO bytes is reduced, the operation becomes more math-bound. That is to say, we want
$$\begin{gather}\frac{N_{\text{op}}}{N_{\text{byte}}} > \frac{\text{BW}_{\text{math}}}{\text{BW}_{\text{mem}}}\end{gather}$$
In our matrix multiplication naive CUDA implementation,
$$\begin{align}\mathbf{C}^{n \times p} &= \mathbf{A}^{n \times m} \mathbf{B}^{m \times p} \\\end{align}$$
We have to perform $mnp$ multiplications and $mnp$ additions, $2mnp$ reads from memory, and $np$ writes to memory. We can ignore the $np$ writes in the memory IO count because the $2mnp$ reads are usually far more numerous than the $np$ writes.
Suppose we are doing FP32 matrix multiplication,
$$\begin{align}\frac{N_{\text{op}}}{N_{\text{byte}}}&= \frac{2 \times mnp}{2mnp \times 4} \\&= \frac{1}{4} \\\end{align}$$
For a modern GPU such as NVIDIA RTX 3090, for FP32 math,
$$\begin{align}\frac{\text{BW}_{\text{math}}}{\text{BW}_{\text{mem}}} &= \frac{35.58}{0.936} \\&= 38.0 \\\end{align}$$
We can see that the naive CUDA matrix multiplication implementation does not come close to being math-bound. Since $N_{\text{op}}$ is a constant for a given matrix multiplication, let's see if we can reduce $N_{\text{byte}}$ by caching.
Ideally, if we could cache the two full operand matrices $\mathbf{A}^{n \times m}$ and $\mathbf{B}^{m \times p}$, we would make the matrix multiplication as math-bound as possible. However, since the cache size is limited and the implementation is supposed to support matrix multiplications of arbitrary sizes, caching the full matrices is generally not feasible.
Matrix Multiplication Decomposition
It is possible to decompose matrix multiplication mm into smaller matrix multiplications.
$$\mathbf{A} =\begin{bmatrix}\mathbf{A}_{1,1}^{d \times d} & \mathbf{A}_{1,2}^{d \times d} & \cdots & \mathbf{A}_{1,n/d}^{d \times d} \\\mathbf{A}_{2,1}^{d \times d} & \mathbf{A}_{2,2}^{d \times d} & \cdots & \mathbf{A}_{2,n/d}^{d \times d} \\\vdots & \vdots & \ddots & \vdots \\\mathbf{A}_{m/d,1}^{d \times d} & \mathbf{A}_{m/d,2}^{d \times d} & \cdots & \mathbf{A}_{m/d,n/d}^{d \times d} \\\end{bmatrix}$$
$$\mathbf{B} =\begin{bmatrix}\mathbf{B}_{1,1}^{d \times d} & \mathbf{B}_{1,2}^{d \times d} & \cdots & \mathbf{B}_{1,p/d}^{d \times d} \\\mathbf{B}_{2,1}^{d \times d} & \mathbf{B}_{2,2}^{d \times d} & \cdots & \mathbf{B}_{2,p/d}^{d \times d} \\\vdots & \vdots & \ddots & \vdots \\\mathbf{B}_{n/d,1}^{d \times d} & \mathbf{B}_{n/d,2}^{d \times d} & \cdots & \mathbf{B}_{n/d,p/d}^{d \times d} \\\end{bmatrix}$$
$$\mathbf{C} =\begin{bmatrix}\mathbf{C}_{1,1}^{d \times d} & \mathbf{C}_{1,2}^{d \times d} & \cdots & \mathbf{C}_{1,p/d}^{d \times d} \\\mathbf{C}_{2,1}^{d \times d} & \mathbf{C}_{2,2}^{d \times d} & \cdots & \mathbf{C}_{2,p/d}^{d \times d} \\\vdots & \vdots & \ddots & \vdots \\\mathbf{C}_{m/d,1}^{d \times d} & \mathbf{C}_{m/d,2}^{d \times d} & \cdots & \mathbf{C}_{m/d,p/d}^{d \times d} \\\end{bmatrix}$$
$$\mathbf{C}_{i,j}^{d \times d} = \sum_{k=1}^{n/d} \mathbf{A}_{i,k}^{d \times d} \mathbf{B}_{k,j}^{d \times d}$$
The decomposition does not alter the number of operations $N_{\text{op}}$.
$$\begin{align}N_{\text{op}} &= 2d^3 \left( \frac{n}{d} \right) \left( \frac{m}{d} \frac{p}{d}\right) \\&= 2mnp \\\end{align}$$
Because the small matrices $\mathbf{A}_{i,k}^{d \times d}$ and $\mathbf{B}_{k,j}^{d \times d}$ can be cached, the number of memory IO bytes can be reduced and the overall matrix multiplication becomes more math-bound. Let's calculate how many memory IO bytes are needed in this case.
$$\begin{align}N_{\text{byte}} &= 2d^2 \times 4 \times \left( \frac{n}{d} \right) \left( \frac{m}{d} \frac{p}{d}\right) \\&= \frac{8mnp}{d} \\\end{align}$$
Therefore,
$$\begin{align}\frac{N_{\text{op}}}{N_{\text{byte}}}&= \frac{2mnp}{\frac{8mnp}{d}} \\&= \frac{d}{4} \\\end{align}$$
Notice that when $d=1$, the matrix multiplication falls back to the naive matrix multiplication. When $d$ becomes larger, the implementation becomes more math-bound.
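For example, the tiled implementation below uses a tile size of $d = 32$ (the BLOCK_DIM constant in the code), which gives
$$\begin{align}\frac{N_{\text{op}}}{N_{\text{byte}}} &= \frac{d}{4} = \frac{32}{4} = 8 \\\end{align}$$
This is a large improvement over the naive ratio of $\frac{1}{4}$, but still well below the RTX 3090 ratio of $38.0$, so the tiled kernel is more math-bound than the naive one yet still memory-bound.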
Optimized Implementation
The following implementation decomposes the matrix multiplication into many small matrix multiplications, caching the small operand tiles in shared memory. The source code can be found on GitHub.
#include <cassert>#include <cstddef>#include <cstdint>#include <iomanip>#include <iostream>#include <random>#include <stdexcept>#include <vector>#define BLOCK_DIM 32#define checkCuda(val) check((val), #val, __FILE__, __LINE__)void check(cudaError_t err, const char* const func, const char* const file, const int line){ if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << " " << func << std::endl; std::exit(EXIT_FAILURE); }}template <typename T>std::vector<T> create_rand_vector(size_t n){ std::random_device r; std::default_random_engine e(r()); std::uniform_int_distribution<int> uniform_dist(-256, 256); std::vector<T> vec(n); for (size_t i{0}; i < n; ++i) { vec.at(i) = static_cast<T>(uniform_dist(e)); } return vec;}// mat_1: m x n// mat_2: n x p// mat_3: m x ptemplate <typename T>void mm(T const* mat_1, T const* mat_2, T* mat_3, size_t m, size_t n, size_t p){ // Compute the cells in mat_3 sequentially. for (size_t i{0}; i < m; ++i) { for (size_t j{0}; j < p; ++j) { T acc_sum{0}; for (size_t k{0}; k < n; ++k) { acc_sum += mat_1[i * n + k] * mat_2[k * p + j]; } mat_3[i * p + j] = acc_sum; } }}template <typename T>__global__ void mm_kernel(T const* mat_1, T const* mat_2, T* mat_3, size_t m, size_t n, size_t p){ // 2D block and 2D thread // Each thread computes one cell in mat_3. size_t i{blockIdx.y * blockDim.y + threadIdx.y}; size_t j{blockIdx.x * blockDim.x + threadIdx.x}; // Do not process outside the matrix. // Do not forget the equal sign! if ((i >= m) || (j >= p)) { return; } T acc_sum{0}; for (size_t k{0}; k < n; ++k) { acc_sum += mat_1[i * n + k] * mat_2[k * p + j]; } mat_3[i * p + j] = acc_sum;}template <typename T>__global__ void mm_kernel_optimized(T const* mat_1, T const* mat_2, T* mat_3, size_t m, size_t n, size_t p){ __shared__ T mat_1_tile[BLOCK_DIM][BLOCK_DIM]; __shared__ T mat_2_tile[BLOCK_DIM][BLOCK_DIM]; T acc_sum{0}; for (size_t tile_idx{0}; tile_idx < ceilf(static_cast<float>(n) / BLOCK_DIM); ++tile_idx) { size_t i{blockIdx.y * blockDim.y + threadIdx.y}; size_t j{tile_idx * blockDim.x + threadIdx.x}; if ((i < m) && (j < n)) { mat_1_tile[threadIdx.y][threadIdx.x] = mat_1[i * n + j]; } else { mat_1_tile[threadIdx.y][threadIdx.x] = 0; } i = tile_idx * blockDim.y + threadIdx.y; j = blockIdx.x * blockDim.x + threadIdx.x; if ((i < n) && (j < p)) { mat_2_tile[threadIdx.y][threadIdx.x] = mat_2[i * p + j]; } else { mat_2_tile[threadIdx.y][threadIdx.x] = 0; } __syncthreads(); for (size_t k{0}; k < BLOCK_DIM; ++k) { acc_sum += mat_1_tile[threadIdx.y][k] * mat_2_tile[k][threadIdx.x]; } __syncthreads(); } // 2D block and 2D thread // Each thread computes one cell in mat_3. 
size_t i{blockIdx.y * blockDim.y + threadIdx.y}; size_t j{blockIdx.x * blockDim.x + threadIdx.x}; if ((i < m) && (j < p)) { mat_3[i * p + j] = acc_sum; }}template <typename T>void mm_cuda(T const* mat_1, T const* mat_2, T* mat_3, size_t m, size_t n, size_t p, void (*f)(T const*, T const*, T*, size_t, size_t, size_t)){ dim3 threads_per_block(BLOCK_DIM, BLOCK_DIM); dim3 blocks_per_grid(1, 1); blocks_per_grid.x = std::ceil(static_cast<double>(p) / static_cast<double>(threads_per_block.x)); blocks_per_grid.y = std::ceil(static_cast<double>(m) / static_cast<double>(threads_per_block.y)); f<<<blocks_per_grid, threads_per_block>>>(mat_1, mat_2, mat_3, m, n, p);}template <typename T>bool allclose(std::vector<T> const& vec_1, std::vector<T> const& vec_2, T const& abs_tol){ if (vec_1.size() != vec_2.size()) { return false; } for (size_t i{0}; i < vec_1.size(); ++i) { if (std::abs(vec_1.at(i) - vec_2.at(i)) > abs_tol) { std::cout << vec_1.at(i) << " " << vec_2.at(i) << std::endl; return false; } } return true;}template <typename T>bool random_test_mm_cuda(size_t m, size_t n, size_t p, void (*f)(T const*, T const*, T*, size_t, size_t, size_t)){ std::vector<T> const mat_1_vec{create_rand_vector<T>(m * n)}; std::vector<T> const mat_2_vec{create_rand_vector<T>(n * p)}; std::vector<T> mat_3_vec(m * p); std::vector<T> mat_4_vec(m * p); T const* mat_1{mat_1_vec.data()}; T const* mat_2{mat_2_vec.data()}; T* mat_3{mat_3_vec.data()}; T* mat_4{mat_4_vec.data()}; mm(mat_1, mat_2, mat_3, m, n, p); T *d_mat_1, *d_mat_2, *d_mat_4; // Allocate device buffer. checkCuda(cudaMalloc(&d_mat_1, sizeof(T) * mat_1_vec.size())); checkCuda(cudaMalloc(&d_mat_2, sizeof(T) * mat_2_vec.size())); checkCuda(cudaMalloc(&d_mat_4, sizeof(T) * mat_4_vec.size())); // Copy data from host to device. checkCuda(cudaMemcpy(d_mat_1, mat_1, sizeof(T) * mat_1_vec.size(), cudaMemcpyHostToDevice)); checkCuda(cudaMemcpy(d_mat_2, mat_2, sizeof(T) * mat_2_vec.size(), cudaMemcpyHostToDevice)); // Run matrix multiplication on GPU. mm_cuda(d_mat_1, d_mat_2, d_mat_4, m, n, p, f); cudaDeviceSynchronize(); cudaError_t err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA Matrix Multiplication kernel failed to execute." << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); } // Copy data from device to host. checkCuda(cudaMemcpy(mat_4, d_mat_4, sizeof(T) * mat_4_vec.size(), cudaMemcpyDeviceToHost)); // Free device buffer. checkCuda(cudaFree(d_mat_1)); checkCuda(cudaFree(d_mat_2)); checkCuda(cudaFree(d_mat_4)); return allclose<T>(mat_3_vec, mat_4_vec, 1e-4);}template <typename T>bool random_multiple_test_mm_cuda(size_t num_tests, void (*f)(T const*, T const*, T*, size_t, size_t, size_t)){ std::random_device r; std::default_random_engine e(r()); std::uniform_int_distribution<int> uniform_dist(1, 256); size_t m{0}, n{0}, p{0}; bool success{false}; for (size_t i{0}; i < num_tests; ++i) { m = static_cast<size_t>(uniform_dist(e)); n = static_cast<size_t>(uniform_dist(e)); p = static_cast<size_t>(uniform_dist(e)); success = random_test_mm_cuda<T>(m, n, p, f); if (!success) { return false; } } return true;}template <typename T>float measure_latency_mm_cuda(size_t m, size_t n, size_t p, size_t num_tests, size_t num_warmups, void (*f)(T const*, T const*, T*, size_t, size_t, size_t)){ cudaEvent_t startEvent, stopEvent; float time{0.0f}; checkCuda(cudaEventCreate(&startEvent)); checkCuda(cudaEventCreate(&stopEvent)); T *d_mat_1, *d_mat_2, *d_mat_4; // Allocate device buffer. 
checkCuda(cudaMalloc(&d_mat_1, sizeof(T) * m * n)); checkCuda(cudaMalloc(&d_mat_2, sizeof(T) * n * p)); checkCuda(cudaMalloc(&d_mat_4, sizeof(T) * m * p)); for (size_t i{0}; i < num_warmups; ++i) { mm_cuda(d_mat_1, d_mat_2, d_mat_4, m, n, p, f); } checkCuda(cudaEventRecord(startEvent, 0)); for (size_t i{0}; i < num_tests; ++i) { mm_cuda(d_mat_1, d_mat_2, d_mat_4, m, n, p, f); } checkCuda(cudaEventRecord(stopEvent, 0)); checkCuda(cudaEventSynchronize(stopEvent)); cudaError_t err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA Matrix Multiplication kernel failed to execute." << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); } checkCuda(cudaEventElapsedTime(&time, startEvent, stopEvent)); // Free device buffer. checkCuda(cudaFree(d_mat_1)); checkCuda(cudaFree(d_mat_2)); checkCuda(cudaFree(d_mat_4)); float latency{time / num_tests}; return latency;}int main(){ constexpr size_t num_tests{10}; assert(random_multiple_test_mm_cuda<int32_t>(num_tests, mm_kernel)); assert(random_multiple_test_mm_cuda<float>(num_tests, mm_kernel)); assert(random_multiple_test_mm_cuda<double>(num_tests, mm_kernel)); assert( random_multiple_test_mm_cuda<int32_t>(num_tests, mm_kernel_optimized)); assert(random_multiple_test_mm_cuda<float>(num_tests, mm_kernel_optimized)); assert( random_multiple_test_mm_cuda<double>(num_tests, mm_kernel_optimized)); constexpr size_t num_measurement_tests{100}; constexpr size_t num_measurement_warmups{10}; const size_t m{1024}, n{1024}, p{1024}; float mm_cuda_int32_latency{measure_latency_mm_cuda<int32_t>( m, n, p, num_measurement_tests, num_measurement_warmups, mm_kernel)}; float mm_cuda_float_latency{measure_latency_mm_cuda<float>( m, n, p, num_measurement_tests, num_measurement_warmups, mm_kernel)}; float mm_cuda_double_latency{measure_latency_mm_cuda<double>( m, n, p, num_measurement_tests, num_measurement_warmups, mm_kernel)}; std::cout << "Matrix Multiplication CUDA Latency" << std::endl; std::cout << "m: " << m << " " << "n: " << n << " " << "p: " << p << std::endl; std::cout << "INT32: " << std::fixed << std::setprecision(5) << mm_cuda_int32_latency << " ms" << std::endl; std::cout << "FLOAT: " << std::fixed << std::setprecision(5) << mm_cuda_float_latency << " ms" << std::endl; std::cout << "DOUBLE: " << std::fixed << std::setprecision(5) << mm_cuda_double_latency << " ms" << std::endl; mm_cuda_int32_latency = measure_latency_mm_cuda<int32_t>( m, n, p, num_measurement_tests, num_measurement_warmups, mm_kernel_optimized); mm_cuda_float_latency = measure_latency_mm_cuda<float>( m, n, p, num_measurement_tests, num_measurement_warmups, mm_kernel_optimized); mm_cuda_double_latency = measure_latency_mm_cuda<double>( m, n, p, num_measurement_tests, num_measurement_warmups, mm_kernel_optimized); std::cout << "Optimized Matrix Multiplication CUDA Latency" << std::endl; std::cout << "m: " << m << " " << "n: " << n << " " << "p: " << p << std::endl; std::cout << "INT32: " << std::fixed << std::setprecision(5) << mm_cuda_int32_latency << " ms" << std::endl; std::cout << "FLOAT: " << std::fixed << std::setprecision(5) << mm_cuda_float_latency << " ms" << std::endl; std::cout << "DOUBLE: " << std::fixed << std::setprecision(5) << mm_cuda_double_latency << " ms" << std::endl;}
Run Optimized Example
In the same Docker container, build and run the following application. We can see that the latencies of the INT32 and FP32 matrix multiplications improved to different degrees.
$ nvcc mm_optimization.cu -o mm_optimization --std c++14
$ ./mm_optimization
Matrix Multiplication CUDA Latency
m: 1024 n: 1024 p: 1024
INT32: 1.04373 ms
FLOAT: 1.02149 ms
DOUBLE: 3.83370 ms
Optimized Matrix Multiplication CUDA Latency
m: 1024 n: 1024 p: 1024
INT32: 0.84207 ms
FLOAT: 0.81759 ms
DOUBLE: 3.95231 ms
Miscellaneous
There are more subtle factors affecting the performance, and there are further opportunities to optimize the matrix multiplication implementation. But those require a more thorough understanding of the GPU and CUDA.
References
CUDA Matrix Multiplication
https://leimao.github.io/blog/CUDA-Matrix-Multiplication/
|
Kernel Tuner
Guides
Features
Reference
Getting Started¶
So you have installed Kernel Tuner! That’s great! But now you’d like to get started tuning some GPU code.
Let’s say we have a simple CUDA kernel stored in a file called vector_add_kernel.cu:
__global__ void vector_add(float * c, float * a, float * b, int n) {
int i = (blockIdx.x * blockDim.x) + threadIdx.x;
if ( i < n ) {
c[i] = a[i] + b[i];
}
}
This kernel simply performs a point-wise addition of vectors a and b and stores the result in c.
To tune this kernel with Kernel Tuner, we are going to create the input and output data in Python using Numpy arrays.
import numpy as np
import kernel_tuner
size = 1000000
a = np.random.randn(size).astype(np.float32)
b = np.random.randn(size).astype(np.float32)
c = np.zeros_like(b)
n = np.int32(size)
To tell Kernel Tuner how it should call the kernel, we can create a list in Python that should correspond to
our CUDA kernel’s argument list with the same order and types.
args = [c, a, b, n]
So far, we have created the data structures needed by Kernel Tuner to call our kernel, but we have not yet specified what we
want Kernel Tuner to tune in our kernel. For that, we create a dictionary that we call tune_params, in which keys correspond
to tunable parameters in our kernel and the values are lists of values that these parameters may take.
tune_params = dict()
tune_params["block_size_x"] = [32, 64, 128, 256, 512, 1024]
In the code above, we have inserted a key into our dictionary, namely "block_size_x". This is a special name for a tunable
parameter that is recognized by Kernel Tuner to denote the size of our thread block in the x-dimension.
For a full list of special parameter names, please see the Parameter Vocabulary.
Alright, we are all set to start calling Kernel Tuner’s main function, which is called tune_kernel.
results, env = kernel_tuner.tune_kernel("vector_add", "vector_add_kernel.cu", size, args, tune_params)
In the above, tune_kernel takes five arguments:
The kernel name passed as a string
The filename of the kernel, also as a string
The problem_size, which corresponds to the total number of elements/threads in our kernel
The argument list used to call our kernel
The dictionary holding our tunable parameters
What happens now is that Kernel Tuner will copy our kernel's input and output data to the GPU, iteratively compile and
benchmark our kernel for every possible combination of all values of all tunable parameters (a.k.a. brute-force tuning), and
return the benchmarking results as a list of dictionaries, along with an env dictionary that lists important information
about the hardware and software on which the benchmarking took place.
This wraps up the most basic use case of Kernel Tuner. There is a lot more functionality, which is explained in various
guides, examples, and feature articles.
|
CUDA SHARED MEMORY
NVIDIA Corporation
REVIEW (1 OF 2)
Difference between host and device
Host
CPU
Device
GPU
Using __global__
to declare a function as device code
Executes on the device
Called from the host (or possibly from other device code)
Passing parameters from host code to a device function
REVIEW (2 OF 2)
Basic device memory management
cudaMalloc()
cudaMemcpy()
cudaFree()
Launching parallel kernels
Launch N copies of add() with add<<<N,1>>>(…);
Use blockIdx.x to access block index
1D STENCIL
Consider applying a 1D stencil to a 1D array of elements
Each output element is the sum of input elements within a radius
If radius is 3, then each output element is the sum of 7 input elements:
IMPLEMENTING WITHIN A BLOCK
Each thread processes one output element
blockDim.x elements per block
Input elements are read several times
With radius 3, each input element is read seven times
SHARING DATA BETWEEN THREADS
Terminology: within a block, threads share data via shared memory
Extremely fast on-chip memory, user-managed
Declare using __shared__, allocated per block
Data is not visible to threads in other blocks
IMPLEMENTING WITH SHARED MEMORY
Cache data in shared memory
Read (blockDim.x + 2 * radius) input elements from global memory to shared memory
Compute blockDim.x output elements
Write blockDim.x output elements to global memory
Each block needs a halo of radius elements at each boundary
[Diagram: blockDim.x output elements per block, with a halo of radius input elements on the left and on the right]
STENCIL KERNEL
__global__ void stencil_1d(int *in, int *out) {
__shared__ int temp[BLOCK_SIZE + 2 * RADIUS];
int gindex = threadIdx.x + blockIdx.x * blockDim.x;
int lindex = threadIdx.x + RADIUS;
// Read input elements into shared memory
temp[lindex] = in[gindex];
if (threadIdx.x < RADIUS) {
temp[lindex - RADIUS] = in[gindex - RADIUS];
temp[lindex + BLOCK_SIZE] =
in[gindex + BLOCK_SIZE];
}
STENCIL KERNEL
// Apply the stencil
int result = 0;
for (int offset = -RADIUS ; offset <= RADIUS ; offset++)
result += temp[lindex + offset];
// Store the result
out[gindex] = result;
}
DATA RACE!
The stencil example will not work…
Suppose thread 15 reads the halo before thread 0 has fetched it (with BLOCK_SIZE = 16 and RADIUS = 3, as the temp indices below imply):
temp[lindex] = in[gindex];                       // thread 15 stores at temp[18]
if (threadIdx.x < RADIUS) {                      // skipped by thread 15: threadIdx.x > RADIUS
temp[lindex - RADIUS] = in[gindex - RADIUS];
temp[lindex + BLOCK_SIZE] = in[gindex + BLOCK_SIZE];   // thread 0 stores the halo element temp[19]
}
int result = 0;
result += temp[lindex + 1];                      // thread 15 loads from temp[19], possibly before it is written
__SYNCTHREADS()
void __syncthreads();
Synchronizes all threads within a block
Used to prevent RAW / WAR / WAW hazards
All threads must reach the barrier
In conditional code, the condition must be uniform across the block
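A minimal sketch (not from the slides) of this uniformity requirement, reusing the stencil's halo-loading pattern; the RADIUS and BLOCK_SIZE values are assumed here for illustration.
#define RADIUS 3
#define BLOCK_SIZE 256
// WRONG: the barrier sits inside a non-uniform conditional; only threads with
// threadIdx.x < RADIUS reach it, so the block may hang or behave unpredictably.
__global__ void halo_load_wrong(const int* in, int* out)
{
    __shared__ int temp[BLOCK_SIZE + 2 * RADIUS];
    int gindex = threadIdx.x + blockIdx.x * blockDim.x;
    int lindex = threadIdx.x + RADIUS;
    temp[lindex] = in[gindex];
    if (threadIdx.x < RADIUS)
    {
        temp[lindex - RADIUS] = in[gindex - RADIUS];
        temp[lindex + BLOCK_SIZE] = in[gindex + BLOCK_SIZE];
        __syncthreads();                 // divergent barrier: do not do this
    }
    out[gindex] = temp[lindex];
}
// RIGHT: the barrier is outside the conditional, so every thread in the block
// participates, exactly as in the corrected stencil kernel below.
__global__ void halo_load_right(const int* in, int* out)
{
    __shared__ int temp[BLOCK_SIZE + 2 * RADIUS];
    int gindex = threadIdx.x + blockIdx.x * blockDim.x;
    int lindex = threadIdx.x + RADIUS;
    temp[lindex] = in[gindex];
    if (threadIdx.x < RADIUS)
    {
        temp[lindex - RADIUS] = in[gindex - RADIUS];
        temp[lindex + BLOCK_SIZE] = in[gindex + BLOCK_SIZE];
    }
    __syncthreads();                     // uniform: all threads reach the barrier
    out[gindex] = temp[lindex];
}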
STENCIL KERNEL
__global__ void stencil_1d(int *in, int *out) {
__shared__ int temp[BLOCK_SIZE + 2 * RADIUS];
int gindex = threadIdx.x + blockIdx.x * blockDim.x;
int lindex = threadIdx.x + RADIUS;
// Read input elements into shared memory
temp[lindex] = in[gindex];
if (threadIdx.x < RADIUS) {
temp[lindex - RADIUS] = in[gindex - RADIUS];
temp[lindex + BLOCK_SIZE] = in[gindex + BLOCK_SIZE];
}
// Synchronize (ensure all the data is available)
__syncthreads();
STENCIL KERNEL
// Apply the stencil
int result = 0;
for (int offset = -RADIUS ; offset <= RADIUS ; offset++)
result += temp[lindex + offset];
// Store the result
out[gindex] = result;
}
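For completeness, here is a minimal host-side sketch (not shown in the slides) of how stencil_1d might be launched. The RADIUS, BLOCK_SIZE, and N values are assumptions; the defines must appear before the kernel definition, and the input buffer is allocated with RADIUS extra elements on each side so the halo loads at the array boundaries stay in bounds.
#define RADIUS 3
#define BLOCK_SIZE 256
int main()
{
    const int N = 1 << 20;                       // number of output elements (assumed)
    int *d_in, *d_out;
    cudaMalloc(&d_in, (N + 2 * RADIUS) * sizeof(int));
    cudaMalloc(&d_out, N * sizeof(int));
    cudaMemset(d_in, 0, (N + 2 * RADIUS) * sizeof(int));
    // Offset the input pointer by RADIUS so that in[gindex - RADIUS] is valid
    // even for gindex = 0 in the first block.
    stencil_1d<<<N / BLOCK_SIZE, BLOCK_SIZE>>>(d_in + RADIUS, d_out);
    cudaDeviceSynchronize();
    cudaFree(d_in);
    cudaFree(d_out);
    return 0;
}
In a complete program, a cudaMemcpy would populate d_in with real data before the launch and read d_out back afterwards.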
REVIEW
Use __shared__ to declare a variable/array in shared memory
Data is shared between threads in a block
Not visible to threads in other blocks
Use __syncthreads() as a barrier
Use to prevent data hazards
LOOKING FORWARD
Cooperative Groups: a flexible model for synchronization and communication within groups of threads.
At a glance (for developers):
Scalable cooperation among groups of threads
Flexible parallel decompositions
Composition across software boundaries
Deploy everywhere
Benefits all applications. Examples include:
Persistent RNNs
Physics
Search Algorithms
Sorting
FOR EXAMPLE: THREAD BLOCK
Implicit group of all the threads in the launched thread block
Implements the same interface as thread_group:
void sync();
// Synchronize the threads in the group
unsigned size();
// Total number of threads in the group
unsigned thread_rank();
// Rank of the calling thread within [0, size)
bool is_valid();
// Whether the group violated any API constraints
And additional thread_block specific functions:
dim3 group_index();
// 3-dimensional block index within the grid
dim3 thread_index();
// 3-dimensional thread index within the block
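Below is a minimal sketch (not from the slides; the kernel name, the block size of 256, and the block-wide sum it computes are illustrative assumptions) showing the thread_block interface in use.
#include <cooperative_groups.h>
namespace cg = cooperative_groups;
__global__ void block_sum(const int* in, int* out)
{
    __shared__ int partial[256];                 // assumes blockDim.x == 256
    cg::thread_block block = cg::this_thread_block();
    unsigned rank = block.thread_rank();         // rank of this thread within [0, size)
    unsigned gid = block.group_index().x * block.size() + rank;
    partial[rank] = in[gid];
    block.sync();                                // same effect as __syncthreads()
    // Tree reduction within the block.
    for (unsigned stride = block.size() / 2; stride > 0; stride /= 2)
    {
        if (rank < stride)
        {
            partial[rank] += partial[rank + stride];
        }
        block.sync();
    }
    if (rank == 0)
    {
        out[block.group_index().x] = partial[0];
    }
}
Here block.sync() plays the same role as __syncthreads(), and group_index()/thread_rank() replace the usual blockIdx/threadIdx arithmetic.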
NARROWING THE SHARED MEMORY GAP
with the GV100 L1 cache
Cache vs. shared:
Easier to use
90%+ as good
Shared vs. cache:
Faster atomics
More banks
More predictable
[Bar chart comparing Pascal and Volta; labels: Average Shared Memory Benefit, 70%, 93%; directed testing: shared in global]
FUTURE SESSIONS
CUDA GPU architecture and basic optimizations
Atomics, Reductions, Warp Shuffle
Using Managed Memory
Concurrency (streams, copy/compute overlap, multi-GPU)
Analysis Driven Optimization
Cooperative Groups
FURTHER STUDY
Shared memory:
https://devblogs.nvidia.com/using-shared-memory-cuda-cc/
CUDA Programming Guide:
https://docs.nvidia.com/cuda/cuda-c-programming-guide/index.html#shared-memory
CUDA Documentation:
https://docs.nvidia.com/cuda/index.html
https://docs.nvidia.com/cuda/cuda-runtime-api/index.html (runtime API)
HOMEWORK
Log into Summit (ssh [email protected] -> ssh summit)
Clone GitHub repository:
git clone git@github.com:olcf/cuda-training-series.git
Follow the instructions in the readme.md file:
https://github.com/olcf/cuda-training-series/blob/master/exercises/hw2/readme.md
Prerequisites: basic linux skills, e.g. ls, cd, etc., knowledge of a text editor like vi/emacs, and some
knowledge of C/C++ programming
QUESTIONS? |
CUDA Execution Provider
The CUDA Execution Provider enables hardware accelerated computation on Nvidia CUDA-enabled GPUs.
Contents
Install
Pre-built binaries of ONNX Runtime with CUDA EP are published for most language bindings. Please reference Install ORT.
Build from source
See Build instructions.
Requirements
Please reference table below for official GPU packages dependencies for the ONNX Runtime inferencing package. Note that ONNX Runtime Training is aligned with PyTorch CUDA versions; refer to the Optimize Training tab on onnxruntime.ai for supported versions.
Because of Nvidia CUDA Minor Version Compatibility, ONNX Runtime built with CUDA 11.8 is compatible with any CUDA 11.x version; ONNX Runtime built with CUDA 12.x is compatible with any CUDA 12.x version.
ONNX Runtime built with cuDNN 8.x is not compatible with cuDNN 9.x, and vice versa. You can choose the package based on CUDA and cuDNN major versions that match your runtime environment (e.g., PyTorch 2.3 uses cuDNN 8.x, while PyTorch 2.4 or later uses cuDNN 9.x).
Note: Starting with version 1.19, CUDA 12.x becomes the default version when distributing ONNX Runtime GPU packages in PyPI.
CUDA 12.x
CUDA 11.x
CUDA 10.x
For older versions, please reference the readme and build pages on the release branch.
Build
For build instructions, please see the BUILD page.
Configuration Options
The CUDA Execution Provider supports the following configuration options.
device_id
The device ID.
Default value: 0
user_compute_stream
Defines the compute stream for the inference to run on. It implicitly sets the has_user_compute_stream option. It cannot be set through UpdateCUDAProviderOptions, but rather UpdateCUDAProviderOptionsWithValue. This cannot be used in combination with an external allocator.
Example python usage:
providers = [("CUDAExecutionProvider", {"device_id": torch.cuda.current_device(),
"user_compute_stream": str(torch.cuda.current_stream().cuda_stream)})]
sess_options = ort.SessionOptions()
sess = ort.InferenceSession("my_model.onnx", sess_options=sess_options, providers=providers)
To take advantage of user compute stream, it is recommended to use I/O Binding to bind inputs and outputs to tensors in device.
do_copy_in_default_stream
Whether to do copies in the default stream or use separate streams. The recommended setting is true. If false, there are race conditions and possibly better performance.
Default value: true
use_ep_level_unified_stream
Uses the same CUDA stream for all threads of the CUDA EP. This is implicitly enabled by has_user_compute_stream, enable_cuda_graph or when using an external allocator.
Default value: false
gpu_mem_limit
The size limit of the device memory arena in bytes. This size limit is only for the execution provider's arena. The total device memory usage may be higher.
Default value: max value of C++ size_t type (effectively unlimited)
Note: Will be overridden by the contents of default_memory_arena_cfg (if specified)
arena_extend_strategy
The strategy for extending the device memory arena.
Default value: kNextPowerOfTwo
Note: Will be overridden by the contents of default_memory_arena_cfg (if specified)
cudnn_conv_algo_search
The type of search done for cuDNN convolution algorithms.
Default value: EXHAUSTIVE
cudnn_conv_use_max_workspace
Check tuning performance for convolution-heavy models for details on what this flag does. This flag is only supported from the V2 version of the provider options struct when using the C API (sample below).
Default value: 1 for versions 1.14 and later; 0 for previous versions
cudnn_conv1d_pad_to_nc1d
Check convolution input padding in the CUDA EP for details on what this flag does. This flag is only supported from the V2 version of the provider options struct when using the C API (sample below).
Default value: 0
enable_cuda_graph
Check using CUDA Graphs in the CUDA EP for details on what this flag does. This flag is only supported from the V2 version of the provider options struct when using the C API (sample below).
Default value: 0
enable_skip_layer_norm_strict_mode
Whether to use strict mode in the SkipLayerNormalization CUDA implementation. The default and recommended setting is false. If enabled, an accuracy improvement and a performance drop can be expected. This flag is only supported from the V2 version of the provider options struct when using the C API (sample below).
Default value: 0
use_tf32
TF32 is a math mode available on NVIDIA GPUs since Ampere. It allows certain float32 matrix multiplications and convolutions to run much faster on tensor cores with TensorFloat-32 reduced precision: float32 inputs are rounded with 10 bits of mantissa and results are accumulated with float32 precision.
Default value: 1
TensorFloat-32 is enabled by default. Starting from ONNX Runtime 1.18, you can use this flag to disable it for an inference session.
Example python usage:
providers = [("CUDAExecutionProvider", {"use_tf32": 0})]
sess_options = ort.SessionOptions()
sess = ort.InferenceSession("my_model.onnx", sess_options=sess_options, providers=providers)
This flag is only supported from the V2 version of the provider options struct when using the C API (sample below).
gpu_external_[alloc|free|empty_cache]
gpu_external_* is used to pass external allocators. Example python usage:
from onnxruntime.training.ortmodule.torch_cpp_extensions import torch_gpu_allocator
provider_option_map["gpu_external_alloc"] = str(torch_gpu_allocator.gpu_caching_allocator_raw_alloc_address())
provider_option_map["gpu_external_free"] = str(torch_gpu_allocator.gpu_caching_allocator_raw_delete_address())
provider_option_map["gpu_external_empty_cache"] = str(torch_gpu_allocator.gpu_caching_allocator_empty_cache_address())
Default value: 0
prefer_nhwc
This option is not available in default builds! One has to compile ONNX Runtime with onnxruntime_USE_CUDA_NHWC_OPS=ON. If this is enabled, the EP prefers NHWC operators over NCHW. Needed transforms will be added to the model. As NVIDIA tensor cores can only work on the NHWC layout, this can increase performance if the model consists of many supported operators and does not need too many new transpose nodes. Wider operator support is planned in the future.
This flag is only supported from the V2 version of the provider options struct when used using the C API. The V2 provider options struct can be created using CreateCUDAProviderOptions and updated using UpdateCUDAProviderOptions.
Default value: 0
Performance Tuning
The I/O Binding feature should be utilized to avoid overhead resulting from copies of inputs and outputs. Ideally, uploads and downloads of inputs can be hidden behind the inference. This can be achieved by doing asynchronous copies while running inference, as demonstrated in this PR.
Ort::RunOptions run_options;
run_options.AddConfigEntry("disable_synchronize_execution_providers", "1");
session->Run(run_options, io_binding);
By disabling the synchronization on the inference, the user has to take care of synchronizing the compute stream after execution. This feature should only be used with device-local memory or an ORT value allocated in pinned memory; otherwise the issued download will be blocking and not behave as desired.
Convolution-heavy models
ORT leverages CuDNN for convolution operations, and the first step in this process is to determine which "optimal" convolution algorithm to use while performing the convolution operation for the given input configuration (input shape, filter shape, etc.) in each Conv node. This sub-step involves querying CuDNN for a "workspace" memory size and having this allocated so that CuDNN can use this auxiliary memory while determining the "optimal" convolution algorithm to use.
The default value of cudnn_conv_use_max_workspace is 1 for versions 1.14 or later, and 0 for previous versions. When its value is 0, ORT clamps the workspace size to 32 MB which may lead to a sub-optimal convolution algorithm getting picked by CuDNN. To allow ORT to allocate the maximum possible workspace as determined by CuDNN, a provider option named cudnn_conv_use_max_workspace needs to get set (as shown below).
Keep in mind that using this flag may increase the peak memory usage by a factor (sometimes a few GBs) but this does help CuDNN pick the best convolution algorithm for the given input. We have found that this is an important flag to use while using an fp16 model as this allows CuDNN to pick tensor core algorithms for the convolution operations (if the hardware supports tensor core operations). This flag may or may not result in performance gains for other data types (float and double).
providers = [("CUDAExecutionProvider", {"cudnn_conv_use_max_workspace": '1'})]
sess_options = ort.SessionOptions()
sess = ort.InferenceSession("my_conv_heavy_fp16_model.onnx", sess_options=sess_options, providers=providers)
OrtCUDAProviderOptionsV2* cuda_options = nullptr;
CreateCUDAProviderOptions(&cuda_options);
std::vector<const char*> keys{"cudnn_conv_use_max_workspace"};
std::vector<const char*> values{"1"};
UpdateCUDAProviderOptions(cuda_options, keys.data(), values.data(), 1);
OrtSessionOptions* session_options = /* ... */;
SessionOptionsAppendExecutionProvider_CUDA_V2(session_options, cuda_options);
// Finally, don't forget to release the provider options
ReleaseCUDAProviderOptions(cuda_options);
var cudaProviderOptions = new OrtCUDAProviderOptions(); // Dispose this finally
var providerOptionsDict = new Dictionary<string, string>();
providerOptionsDict["cudnn_conv_use_max_workspace"] = "1";
cudaProviderOptions.UpdateOptions(providerOptionsDict);
SessionOptions options = SessionOptions.MakeSessionOptionWithCudaProvider(cudaProviderOptions); // Dispose this finally
Convolution Input Padding
ORT leverages CuDNN for convolution operations. While CuDNN only takes 4-D or 5-D tensor as input for convolution operations, dimension padding is needed if the input is 3-D tensor. Given an input tensor of shape [N, C, D], it can be padded to [N, C, D, 1] or [N, C, 1, D]. While both of these two padding ways produce same output, the performance may be a lot different because different convolution algorithms are selected, especially on some devices such as A100. By default the input is padded to [N, C, D, 1]. A provider option named cudnn_conv1d_pad_to_nc1d needs to get set (as shown below) if [N, C, 1, D] is preferred.
providers = [("CUDAExecutionProvider", {"cudnn_conv1d_pad_to_nc1d": '1'})]
sess_options = ort.SessionOptions()
sess = ort.InferenceSession("my_conv_model.onnx", sess_options=sess_options, providers=providers)
OrtCUDAProviderOptionsV2* cuda_options = nullptr;
CreateCUDAProviderOptions(&cuda_options);
std::vector<const char*> keys{"cudnn_conv1d_pad_to_nc1d"};
std::vector<const char*> values{"1"};
UpdateCUDAProviderOptions(cuda_options, keys.data(), values.data(), 1);
OrtSessionOptions* session_options = /* ... */;
SessionOptionsAppendExecutionProvider_CUDA_V2(session_options, cuda_options);
// Finally, don't forget to release the provider options
ReleaseCUDAProviderOptions(cuda_options);
var cudaProviderOptions = new OrtCUDAProviderOptions(); // Dispose this finally
var providerOptionsDict = new Dictionary<string, string>();
providerOptionsDict["cudnn_conv1d_pad_to_nc1d"] = "1";
cudaProviderOptions.UpdateOptions(providerOptionsDict);
SessionOptions options = SessionOptions.MakeSessionOptionWithCudaProvider(cudaProviderOptions); // Dispose this finally
Using CUDA Graphs (Preview)
While using the CUDA EP, ORT supports the usage of CUDA Graphs to remove CPU overhead associated with launching CUDA kernels sequentially. To enable the usage of CUDA Graphs, use the provider options as shown in the samples below. ORT supports multi-graph capture capability by passing the user specified gpu_graph_id to the run options. gpu_graph_id is optional when the session uses one cuda graph. If not set, the default value is 0. If the gpu_graph_id is set to -1, cuda graph capture/replay is disabled in that run.
Currently, there are some constraints with regards to using the CUDA Graphs feature:
Models with control-flow ops (i.e. If, Loop and Scan ops) are not supported.
Usage of CUDA Graphs is limited to models where-in all the model ops (graph nodes) can be partitioned to the CUDA EP.
The input/output types of models need to be tensors.
Shapes and addresses of inputs/outputs cannot change across inference calls for the same graph annotation id. Input tensors for replay shall be copied to the address of input tensors used in graph capture.
In multi-graph capture mode, the captured graphs will remain in the session’s lifetime and the captured graph deletion feature is not supported at the moment.
By design, a CUDA Graph reads from and writes to the same CUDA virtual memory addresses during the graph replay step as it does during the graph capture step. Due to this requirement, usage of this feature requires using IOBinding to bind memory which will be used as input(s)/output(s) for the CUDA Graph machinery to read from and write to (please see samples below).
While updating the input(s) for subsequent inference calls, the fresh input(s) need to be copied over to the corresponding CUDA memory location(s) of the bound OrtValue input(s) (please see samples below to see how this can be achieved). This is due to the fact that the “graph replay” will require reading inputs from the same CUDA virtual memory addresses.
Multi-threaded usage is currently not supported, i.e. Run() MAY NOT be invoked on the same InferenceSession object from multiple threads while using CUDA Graphs.
NOTE: The very first Run() performs a variety of tasks under the hood like making CUDA memory allocations, capturing the CUDA graph for the model, and then performing a graph replay to ensure that the graph runs. Due to this, the latency associated with the first Run() is bound to be high. Subsequent Run()s only perform graph replays of the graph captured and cached in the first Run().
Python
providers = [("CUDAExecutionProvider", {"enable_cuda_graph": '1'})]
sess_options = ort.SessionOptions()
sess = ort.InferenceSession("my_model.onnx", sess_options=sess_options, providers=providers)
providers = [("CUDAExecutionProvider", {'enable_cuda_graph': True})]
x = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]], dtype=np.float32)
y = np.array([[0.0], [0.0], [0.0]], dtype=np.float32)
x_ortvalue = onnxrt.OrtValue.ortvalue_from_numpy(x, 'cuda', 0)
y_ortvalue = onnxrt.OrtValue.ortvalue_from_numpy(y, 'cuda', 0)
session = onnxrt.InferenceSession("matmul_2.onnx", providers=providers)
io_binding = session.io_binding()
# Pass gpu_graph_id to RunOptions through RunConfigs
ro = onnxrt.RunOptions()
# gpu_graph_id is optional if the session uses only one cuda graph
ro.add_run_config_entry("gpu_graph_id", "1")
# Bind the input and output
io_binding.bind_ortvalue_input('X', x_ortvalue)
io_binding.bind_ortvalue_output('Y', y_ortvalue)
# One regular run for the necessary memory allocation and cuda graph capturing
session.run_with_iobinding(io_binding, ro)
expected_y = np.array([[5.0], [11.0], [17.0]], dtype=np.float32)
np.testing.assert_allclose(expected_y, y_ortvalue.numpy(), rtol=1e-05, atol=1e-05)
# After capturing, CUDA graph replay happens from this Run onwards
session.run_with_iobinding(io_binding, ro)
np.testing.assert_allclose(expected_y, y_ortvalue.numpy(), rtol=1e-05, atol=1e-05)
# Update input and then replay CUDA graph with the updated input
x_ortvalue.update_inplace(np.array([[10.0, 20.0], [30.0, 40.0], [50.0, 60.0]], dtype=np.float32))
session.run_with_iobinding(io_binding, ro)
const auto& api = Ort::GetApi();
struct CudaMemoryDeleter {
explicit CudaMemoryDeleter(const Ort::Allocator* alloc) {
alloc_ = alloc;
}
void operator()(void* ptr) const {
alloc_->Free(ptr);
}
const Ort::Allocator* alloc_;
};
// Enable cuda graph in cuda provider option.
OrtCUDAProviderOptionsV2* cuda_options = nullptr;
api.CreateCUDAProviderOptions(&cuda_options);
std::unique_ptr<OrtCUDAProviderOptionsV2, decltype(api.ReleaseCUDAProviderOptions)> rel_cuda_options(cuda_options, api.ReleaseCUDAProviderOptions);
std::vector<const char*> keys{"enable_cuda_graph"};
std::vector<const char*> values{"1"};
api.UpdateCUDAProviderOptions(rel_cuda_options.get(), keys.data(), values.data(), 1);
Ort::SessionOptions session_options;
api.SessionOptionsAppendExecutionProvider_CUDA_V2(static_cast<OrtSessionOptions*>(session_options), rel_cuda_options.get());
// Pass gpu_graph_id to RunOptions through RunConfigs
Ort::RunOptions run_option;
// gpu_graph_id is optional if the session uses only one cuda graph
run_option.AddConfigEntry("gpu_graph_id", "1");
// Create IO bound inputs and outputs.
Ort::Session session(*ort_env, ORT_TSTR("matmul_2.onnx"), session_options);
Ort::MemoryInfo info_cuda("Cuda", OrtAllocatorType::OrtArenaAllocator, 0, OrtMemTypeDefault);
Ort::Allocator cuda_allocator(session, info_cuda);
const std::array<int64_t, 2> x_shape = {3, 2};
std::array<float, 3 * 2> x_values = {1.0f, 2.0f, 3.0f, 4.0f, 5.0f, 6.0f};
auto input_data = std::unique_ptr<void, CudaMemoryDeleter>(cuda_allocator.Alloc(x_values.size() * sizeof(float)),
CudaMemoryDeleter(&cuda_allocator));
cudaMemcpy(input_data.get(), x_values.data(), sizeof(float) * x_values.size(), cudaMemcpyHostToDevice);
// Create an OrtValue tensor backed by data on CUDA memory
Ort::Value bound_x = Ort::Value::CreateTensor(info_cuda, reinterpret_cast<float*>(input_data.get()), x_values.size(),
x_shape.data(), x_shape.size());
const std::array<int64_t, 2> expected_y_shape = {3, 2};
std::array<float, 3 * 2> expected_y = {1.0f, 4.0f, 9.0f, 16.0f, 25.0f, 36.0f};
auto output_data = std::unique_ptr<void, CudaMemoryDeleter>(cuda_allocator.Alloc(expected_y.size() * sizeof(float)),
CudaMemoryDeleter(&cuda_allocator));
// Create an OrtValue tensor backed by data on CUDA memory
Ort::Value bound_y = Ort::Value::CreateTensor(info_cuda, reinterpret_cast<float*>(output_data.get()),
expected_y.size(), expected_y_shape.data(), expected_y_shape.size());
Ort::IoBinding binding(session);
binding.BindInput("X", bound_x);
binding.BindOutput("Y", bound_y);
// One regular run for necessary memory allocation and graph capturing
session.Run(run_option, binding);
// After capturing, CUDA graph replay happens from this Run onwards
session.Run(run_option, binding);
// Update input and then replay CUDA graph with the updated input
x_values = {10.0f, 20.0f, 30.0f, 40.0f, 50.0f, 60.0f};
cudaMemcpy(input_data.get(), x_values.data(), sizeof(float) * x_values.size(), cudaMemcpyHostToDevice);
session.Run(run_option, binding);
Samples
Python
import onnxruntime as ort
model_path = '<path to model>'
providers = [
('CUDAExecutionProvider', {
'device_id': 0,
'arena_extend_strategy': 'kNextPowerOfTwo',
'gpu_mem_limit': 2 * 1024 * 1024 * 1024,
'cudnn_conv_algo_search': 'EXHAUSTIVE',
'do_copy_in_default_stream': True,
}),
'CPUExecutionProvider',
]
session = ort.InferenceSession(model_path, providers=providers)
C/C++
Using legacy provider options struct
OrtSessionOptions* session_options = /* ... */;
OrtCUDAProviderOptions options;
options.device_id = 0;
options.arena_extend_strategy = 0;
options.gpu_mem_limit = 2ULL * 1024 * 1024 * 1024;  // 2 GiB; use an unsigned 64-bit literal to avoid signed integer overflow
options.cudnn_conv_algo_search = OrtCudnnConvAlgoSearchExhaustive;
options.do_copy_in_default_stream = 1;
SessionOptionsAppendExecutionProvider_CUDA(session_options, &options);
Using V2 provider options struct
OrtCUDAProviderOptionsV2* cuda_options = nullptr;
CreateCUDAProviderOptions(&cuda_options);
std::vector<const char*> keys{"device_id", "gpu_mem_limit", "arena_extend_strategy", "cudnn_conv_algo_search", "do_copy_in_default_stream", "cudnn_conv_use_max_workspace", "cudnn_conv1d_pad_to_nc1d"};
std::vector<const char*> values{"0", "2147483648", "kSameAsRequested", "DEFAULT", "1", "1", "1"};
UpdateCUDAProviderOptions(cuda_options, keys.data(), values.data(), keys.size());
cudaStream_t cuda_stream;
cudaStreamCreate(&cuda_stream);
// this implicitly sets "has_user_compute_stream"
UpdateCUDAProviderOptionsWithValue(cuda_options, "user_compute_stream", cuda_stream);
OrtSessionOptions* session_options = /* ... */;
SessionOptionsAppendExecutionProvider_CUDA_V2(session_options, cuda_options);
// Finally, don't forget to release the provider options
ReleaseCUDAProviderOptions(cuda_options);
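The C API functions used above return OrtStatus* values that are worth checking in real code. Below is a minimal, hedged sketch of the same V2 configuration written against the OrtApi struct obtained from Ort::GetApi(), with explicit status checks; the helper name, the error-handling policy, and the reduced option set are my own choices, not part of the official sample.
#include <cstdlib>
#include <iostream>
#include <vector>
#include <onnxruntime_cxx_api.h>

Ort::SessionOptions MakeCudaSessionOptions()
{
    const OrtApi& api = Ort::GetApi();
    auto check = [&api](OrtStatus* status) {
        // A non-null status carries an error message and must be released.
        if (status != nullptr)
        {
            std::cerr << api.GetErrorMessage(status) << std::endl;
            api.ReleaseStatus(status);
            std::abort();
        }
    };

    OrtCUDAProviderOptionsV2* cuda_options = nullptr;
    check(api.CreateCUDAProviderOptions(&cuda_options));

    std::vector<const char*> keys{"device_id", "gpu_mem_limit", "arena_extend_strategy"};
    std::vector<const char*> values{"0", "2147483648", "kSameAsRequested"};
    check(api.UpdateCUDAProviderOptions(cuda_options, keys.data(), values.data(), keys.size()));

    Ort::SessionOptions session_options;
    check(api.SessionOptionsAppendExecutionProvider_CUDA_V2(
        static_cast<OrtSessionOptions*>(session_options), cuda_options));

    // Finally, don't forget to release the provider options.
    api.ReleaseCUDAProviderOptions(cuda_options);
    return session_options;
}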
C#
var cudaProviderOptions = new OrtCUDAProviderOptions(); // Dispose this finally
var providerOptionsDict = new Dictionary<string, string>();
providerOptionsDict["device_id"] = "0";
providerOptionsDict["gpu_mem_limit"] = "2147483648";
providerOptionsDict["arena_extend_strategy"] = "kSameAsRequested";
providerOptionsDict["cudnn_conv_algo_search"] = "DEFAULT";
providerOptionsDict["do_copy_in_default_stream"] = "1";
providerOptionsDict["cudnn_conv_use_max_workspace"] = "1";
providerOptionsDict["cudnn_conv1d_pad_to_nc1d"] = "1";
cudaProviderOptions.UpdateOptions(providerOptionsDict);
SessionOptions options = SessionOptions.MakeSessionOptionWithCudaProvider(cudaProviderOptions); // Dispose this finally
Also see the tutorial here on how to configure CUDA for C# on Windows.
Java
OrtCUDAProviderOptions cudaProviderOptions = new OrtCUDAProviderOptions(/*device id*/0); // Must be closed after the session closes
cudaProviderOptions.add("gpu_mem_limit","2147483648");
cudaProviderOptions.add("arena_extend_strategy","kSameAsRequested");
cudaProviderOptions.add("cudnn_conv_algo_search","DEFAULT");
cudaProviderOptions.add("do_copy_in_default_stream","1");
cudaProviderOptions.add("cudnn_conv_use_max_workspace","1");
cudaProviderOptions.add("cudnn_conv1d_pad_to_nc1d","1");
OrtSession.SessionOptions options = new OrtSession.SessionOptions(); // Must be closed after the session closes
options.addCUDA(cudaProviderOptions);
|
CuTe Tiled MMA
Introduction
Matrix multiplication and accumulation (MMA) is a key operation for general matrix multiplication (GEMM). In CUTLASS, CuTe provides APIs for configuring MMA from MMA atoms to MMA tiles so that larger MMA problems can be solved.
In this blog post, I would like to discuss the CuTe tiled MMA configurations, layouts, and API usages, using an example.
CuTe Tiled MMA Preview Example
The following CuTe tiled MMA preview example does not actually perform any MMA computation, because it is completely a host program. Instead, it demonstrates how to configure the MMA atom, MMA tile, and MMA layout using CuTe APIs.
#include <cassert>#include <fstream>#include <iomanip>#include <iostream>#include <cute/layout.hpp>#include <cute/swizzle.hpp>#include <cute/tensor.hpp>#include <thrust/host_vector.h>int main(int argc, const char** argv){ // Tiled MMA requires everything to be static. // Therefore, this preview program does not allow user to configure // dynamically. To preview a new tiled MMA configuration, the user has to // modify this program and recompile. // Configure data type. using TA = cute::half_t; using TB = cute::half_t; using TC = cute::half_t; // Configure static "shared memory". // The "shared memory" is actually on host for preview purpose. // For tiled mma, the shared memory layout has to be static. constexpr int bM{128 * 2 / sizeof(TA)}; constexpr int bN{128 * 2 / sizeof(TB)}; constexpr int bK{32}; auto const blk_M = cute::Int<bM>{}; auto const blk_N = cute::Int<bN>{}; auto const blk_K = cute::Int<bK>{}; auto const smem_shape_A{cute::make_shape(blk_M, blk_K)}; auto const smem_shape_B{cute::make_shape(blk_N, blk_K)}; auto const smem_shape_C{cute::make_shape(blk_M, blk_N)}; auto const smem_stride_A{ cute::make_stride(cute::Int<1>{}, blk_M)}; // Column-major auto const smem_stride_B{ cute::make_stride(cute::Int<1>{}, blk_N)}; // Column-major auto const smem_stride_C{ cute::make_stride(cute::Int<1>{}, blk_M)}; // Column-major auto const smem_layout_A{ cute::make_layout(smem_shape_A, smem_stride_A)}; // (blk_M, blk_K) auto const smem_layout_B{ cute::make_layout(smem_shape_B, smem_stride_B)}; // (blk_N, blk_K) auto const smem_layout_C{ cute::make_layout(smem_shape_C, smem_stride_C)}; // (blk_M, blk_N) auto const size_a{blk_M * blk_K}; auto const size_b{blk_N * blk_K}; auto const size_c{blk_M * blk_N}; auto h_A = thrust::host_vector<TA>(size_a); auto h_B = thrust::host_vector<TB>(size_b); auto h_C = thrust::host_vector<TC>(size_c); // Make tensor for smem_A and smem_B. auto smem_tensor_A{cute::make_tensor(h_A.data(), smem_layout_A)}; auto smem_tensor_B{cute::make_tensor(h_B.data(), smem_layout_B)}; auto smem_tensor_C{cute::make_tensor(h_C.data(), smem_layout_C)}; std::cout << "smem_tensor_A" << std::endl; cute::print(smem_tensor_A); std::cout << std::endl; std::cout << "smem_tensor_B" << std::endl; cute::print(smem_tensor_B); std::cout << std::endl; std::cout << "smem_tensor_C" << std::endl; cute::print(smem_tensor_C); std::cout << std::endl; // Configure tiled MMA. using MmaTraits = cute::MMA_Traits<cute::SM80_16x8x16_F16F16F16F16_TN>; using MmaAtomShape = MmaTraits::Shape_MNK; auto const mma_atom = cute::MMA_Atom<MmaTraits>{}; auto const mma_atom_shape = MmaAtomShape{}; // Repeating the mma atom along the M, N, and K dimensions. // This increases the number of threads to process the tiled MMA. constexpr int MMA_LAYOUT_M{2}; constexpr int MMA_LAYOUT_N{2}; constexpr int MMA_LAYOUT_K{1}; auto mma_layout{cute::make_layout( cute::make_shape(cute::Int<MMA_LAYOUT_M>{}, cute::Int<MMA_LAYOUT_N>{}, cute::Int<MMA_LAYOUT_K>{}))}; // Repeating the mma processing along the M, N, and K dimensions. // This does not increase the number of threads to process the tiled MMA. // But the number of registers required for processing the tiled MMA // increases. 
constexpr int NUM_MMA_TILE_M{1}; constexpr int NUM_MMA_TILE_N{2}; constexpr int NUM_MMA_TILE_K{1}; constexpr int MMA_TILE_M{cute::get<0>(mma_atom_shape) * MMA_LAYOUT_M * NUM_MMA_TILE_M}; constexpr int MMA_TILE_N{cute::get<1>(mma_atom_shape) * MMA_LAYOUT_N * NUM_MMA_TILE_N}; constexpr int MMA_TILE_K{cute::get<2>(mma_atom_shape) * MMA_LAYOUT_K * NUM_MMA_TILE_K}; auto mma_tile{cute::make_tile(cute::Int<MMA_TILE_M>{}, cute::Int<MMA_TILE_N>{}, cute::Int<MMA_TILE_K>{})}; auto tiled_mma{cute::make_tiled_mma(mma_atom, mma_layout, mma_tile)}; constexpr auto NUM_THREADS{cute::size(tiled_mma)}; CUTE_STATIC_ASSERT(NUM_THREADS == MMA_LAYOUT_M * MMA_LAYOUT_N * MMA_LAYOUT_K * cute::size(decltype(mma_atom)::ThrID{})); std::cout << "mma_atom" << std::endl; cute::print(mma_atom); std::cout << std::endl; std::cout << "tiled_mma" << std::endl; cute::print(tiled_mma); std::cout << std::endl; // ThrLayoutVMNK static asserts. CUTE_STATIC_ASSERT_V(cute::shape<0>(decltype(tiled_mma)::ThrLayoutVMNK{}) == cute::shape(decltype(mma_atom)::ThrID{})); CUTE_STATIC_ASSERT_V(cute::shape<1>(decltype(tiled_mma)::ThrLayoutVMNK{}) == cute::Int<MMA_LAYOUT_M>{}); CUTE_STATIC_ASSERT_V(cute::shape<2>(decltype(tiled_mma)::ThrLayoutVMNK{}) == cute::Int<MMA_LAYOUT_N>{}); CUTE_STATIC_ASSERT_V(cute::shape<3>(decltype(tiled_mma)::ThrLayoutVMNK{}) == cute::Int<MMA_LAYOUT_K>{}); // PermutationMNK static asserts. CUTE_STATIC_ASSERT_V(tiled_mma.tile_size_mnk<0>() == cute::Int<MMA_TILE_M>{}); CUTE_STATIC_ASSERT_V(tiled_mma.tile_size_mnk<1>() == cute::Int<MMA_TILE_N>{}); CUTE_STATIC_ASSERT_V(tiled_mma.tile_size_mnk<2>() == cute::Int<MMA_TILE_K>{}); // Partition via MMA. // set an arbitrary thread index. constexpr int THREAD_IDX{0}; CUTE_STATIC_ASSERT(THREAD_IDX < NUM_THREADS); CUTE_STATIC_ASSERT(THREAD_IDX >= 0); auto thread_mma{tiled_mma.get_slice(THREAD_IDX)}; // Register tensors used for MMA. auto thread_layout_C_register_tensor_A{ thread_mma.partition_fragment_A(smem_tensor_A)}; // (MMA, MMA_M, MMA_K) auto thread_layout_C_register_tensor_B{ thread_mma.partition_fragment_B(smem_tensor_B)}; // (MMA, MMA_N, MMA_K) auto thread_layout_C_register_tensor_C{ thread_mma.partition_fragment_C(smem_tensor_C)}; // (MMA, MMA_M, MMA_N) CUTE_STATIC_ASSERT_V( cute::shape<1>(decltype(mma_atom)::LayoutA_TV{}) == cute::shape<0>(thread_layout_C_register_tensor_A)); // MMA_A CUTE_STATIC_ASSERT_V( cute::shape<1>(decltype(mma_atom)::LayoutB_TV{}) == cute::shape<0>(thread_layout_C_register_tensor_B)); // MMA_B // Use no tiled copy from shared memory to register. auto thread_layout_C_smem_tensor_A_no_tiled_copy{ thread_mma.partition_A(smem_tensor_A)}; // (MMA, MMA_M, MMA_K) auto thread_layout_C_smem_tensor_B_no_tiled_copy{ thread_mma.partition_B(smem_tensor_B)}; // (MMA, MMA_N, MMA_K) auto thread_layout_C_smem_tensor_C_no_tiled_copy{ thread_mma.partition_C(smem_tensor_C)}; // (MMA, MMA_M, MMA_N) // thread_layout_C_smem_tensor_A_no_tiled_copy and // thread_layout_C_register_tensor_A shall have the same shape. 
CUTE_STATIC_ASSERT_V( cute::shape(thread_layout_C_smem_tensor_A_no_tiled_copy) == cute::shape(thread_layout_C_register_tensor_A)); std::cout << "thread_layout_C_register_tensor_A" << std::endl; cute::print(thread_layout_C_register_tensor_A); std::cout << std::endl; std::cout << "thread_layout_C_register_tensor_B" << std::endl; cute::print(thread_layout_C_register_tensor_B); std::cout << std::endl; std::cout << "thread_layout_C_register_tensor_C" << std::endl; cute::print(thread_layout_C_register_tensor_C); std::cout << std::endl; std::cout << "thread_layout_C_smem_tensor_A_no_tiled_copy" << std::endl; cute::print(thread_layout_C_smem_tensor_A_no_tiled_copy); std::cout << std::endl; std::cout << "thread_layout_C_smem_tensor_B_no_tiled_copy" << std::endl; cute::print(thread_layout_C_smem_tensor_B_no_tiled_copy); std::cout << std::endl; std::cout << "thread_layout_C_smem_tensor_C_no_tiled_copy" << std::endl; cute::print(thread_layout_C_smem_tensor_C_no_tiled_copy); std::cout << std::endl; // Use tiled copy from shared memory to register. auto copy_atom_A = cute::Copy_Atom<cute::SM75_U16x8_LDSM_T, TA>{}; auto copy_atom_B = cute::Copy_Atom<cute::SM75_U16x8_LDSM_T, TB>{}; auto smem_tiled_copy_A{cute::make_tiled_copy_A(copy_atom_A, tiled_mma)}; auto smem_tiled_copy_B{cute::make_tiled_copy_B(copy_atom_B, tiled_mma)}; CUTE_STATIC_ASSERT_V( cute::shape<0>(decltype(smem_tiled_copy_A)::Tiler_MN{}) == tiled_mma.tile_size_mnk<0>()); // MMA_TILE_M CUTE_STATIC_ASSERT_V( cute::shape<1>(decltype(smem_tiled_copy_A)::Tiler_MN{}) == tiled_mma.tile_size_mnk<2>()); // MMA_TILE_K CUTE_STATIC_ASSERT_V( cute::shape<0>(decltype(smem_tiled_copy_B)::Tiler_MN{}) == tiled_mma.tile_size_mnk<1>()); // MMA_TILE_N CUTE_STATIC_ASSERT_V( cute::shape<1>(decltype(smem_tiled_copy_B)::Tiler_MN{}) == tiled_mma.tile_size_mnk<2>()); // MMA_TILE_K auto smem_thread_copy_A{smem_tiled_copy_A.get_slice(THREAD_IDX)}; auto smem_thread_copy_B{smem_tiled_copy_B.get_slice(THREAD_IDX)}; auto thread_layout_C_smem_tensor_A_tiled_copy{ smem_thread_copy_A.partition_S(smem_tensor_A)}; auto thread_layout_C_smem_tensor_B_tiled_copy{ smem_thread_copy_B.partition_S(smem_tensor_B)}; auto thread_layout_C_register_tensor_A_copy_view{ smem_thread_copy_A.retile_D(thread_layout_C_register_tensor_A)}; auto thread_layout_C_register_tensor_B_copy_view{ smem_thread_copy_B.retile_D(thread_layout_C_register_tensor_B)}; CUTE_STATIC_ASSERT_V( cute::shape(thread_layout_C_smem_tensor_A_tiled_copy) == cute::shape(thread_layout_C_register_tensor_A_copy_view)); CUTE_STATIC_ASSERT_V( cute::shape(thread_layout_C_smem_tensor_B_tiled_copy) == cute::shape(thread_layout_C_register_tensor_B_copy_view)); std::cout << "copy_atom_A" << std::endl; cute::print(copy_atom_A); std::cout << std::endl; std::cout << "copy_atom_B" << std::endl; cute::print(copy_atom_B); std::cout << std::endl; std::cout << "smem_tiled_copy_A" << std::endl; cute::print(smem_tiled_copy_A); std::cout << std::endl; std::cout << "smem_tiled_copy_B" << std::endl; cute::print(smem_tiled_copy_B); std::cout << std::endl; std::cout << "thread_layout_C_smem_tensor_A_tiled_copy" << std::endl; cute::print(thread_layout_C_smem_tensor_A_tiled_copy); std::cout << std::endl; std::cout << "thread_layout_C_smem_tensor_B_tiled_copy" << std::endl; cute::print(thread_layout_C_smem_tensor_B_tiled_copy); std::cout << std::endl; std::cout << "thread_layout_C_register_tensor_A_copy_view" << std::endl; cute::print(thread_layout_C_register_tensor_A_copy_view); std::cout << std::endl; std::cout << 
"thread_layout_C_register_tensor_B_copy_view" << std::endl; cute::print(thread_layout_C_register_tensor_B_copy_view); std::cout << std::endl; return 0;}
A high-performance example that uses almost the same tiled MMA configurations to perform the GEMM computation can be found on my GitHub.
CuTe Tiled MMA Configurations and Layouts
MMA Problem Size and Shared Memory Configuration
In one thread block, per one main loop iteration, the MMA problem size is $M \times N \times K = 128 \times 128 \times 32$. The static shared memory is used to store the $M \times K = 128 \times 32$ sub-matrix of matrix $A$ in a column-major layout and the $K \times N = 32 \times 128$ sub-matrix of matrix $B$ in a row-major layout. Using the convention of MMA, we typically describe the sub-matrix of matrix $B$ as $N \times K = 128 \times 32$ column-major.
smem_tensor_A
ptr[16b](0x57b7b93248c0) o (_128,_32):(_1,_128)
smem_tensor_B
ptr[16b](0x57b7b93268d0) o (_128,_32):(_1,_128)
The shared memory configuration has to be compatible with the tiled MMA we configured later. Otherwise, the tiled MMA will not be able to process the MMA problem correctly because the cute::gemm API takes no predicates (for performance reasons).
MMA Atom Configuration
The MMA atom processes an MMA problem of size $M^{\prime} \times N^{\prime} \times K^{\prime}$ using a certain number of threads.
In our case, the MMA atom cute::SM80_16x8x16_F16F16F16F16_TN processes an MMA problem of size $M^{\prime} \times N^{\prime} \times K^{\prime} = 16 \times 8 \times 16$, and this MMA atom consists of a warp of 32 threads. The MMA atom is responsible for processing the $M^{\prime} \times K^{\prime} = 16 \times 16$ sub-matrix of matrix $A$ and the $K^{\prime} \times N^{\prime} = 16 \times 8$ sub-matrix of matrix $B$.
mma_atom
MMA_Atom
  ThrID:      _32:_1
  Shape_MNK:  (_16,_8,_16)
  LayoutA_TV: ((_4,_8),(_2,_2,_2)):((_32,_1),(_16,_8,_128))
  LayoutB_TV: ((_4,_8),(_2,_2)):((_16,_1),(_8,_64))
  LayoutC_TV: ((_4,_8),(_2,_2)):((_32,_1),(_16,_8))
To process the MMA problem of size $M \times N \times K = 128 \times 128 \times 32$, theoretically we could tile the MMA atoms in one of the following configurations:
Note that we did not configure parallelism in the $K$ dimension in the above configurations. But theoretically it’s also possible to do it, especially when $\frac{K}{K^{\prime}}$ is large.
The MMA atom also defines the layouts of the MMA matrices it works on. The thread-value layouts of the MMA matrices are usually very complicated. But fortunately, we could usually visualize them using the CuTe cute::print_latex or cute::print_svg functions.
In our case, the MMA atom cute::SM80_16x8x16_F16F16F16F16_TN defines the thread-value layouts of the MMA matrices as follows:
LayoutA_TV: ((_4,_8),(_2,_2,_2)):((_32,_1),(_16,_8,_128))
LayoutB_TV: ((_4,_8),(_2,_2)):((_16,_1),(_8,_64))
LayoutC_TV: ((_4,_8),(_2,_2)):((_32,_1),(_16,_8))
From these layouts, in one MMA operation each thread in the MMA atom accesses $2 \times 2 \times 2 = 8$ elements of the sub-matrix of matrix $A$ and $2 \times 2 = 4$ elements of the sub-matrix of matrix $B$, and produces $2 \times 2 = 4$ elements of the sub-matrix of matrix $C$.
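These per-thread counts can also be cross-checked with simple arithmetic, since the atom's 32 threads cooperatively cover the whole $16 \times 16$, $16 \times 8$, and $16 \times 8$ sub-matrices of $A$, $B$, and $C$. A minimal compile-time sketch in plain C++ (the constant names are mine):
// Per-thread element counts for the SM80_16x8x16 MMA atom (32 threads).
constexpr int kAtomM{16};
constexpr int kAtomN{8};
constexpr int kAtomK{16};
constexpr int kThreadsPerAtom{32};

constexpr int kAElementsPerThread{kAtomM * kAtomK / kThreadsPerAtom}; // 16 * 16 / 32 = 8
constexpr int kBElementsPerThread{kAtomN * kAtomK / kThreadsPerAtom}; // 8 * 16 / 32 = 4
constexpr int kCElementsPerThread{kAtomM * kAtomN / kThreadsPerAtom}; // 16 * 8 / 32 = 4

// These match the value modes of LayoutA_TV, LayoutB_TV, and LayoutC_TV above.
static_assert(kAElementsPerThread == 2 * 2 * 2, "A: (_2,_2,_2) values per thread");
static_assert(kBElementsPerThread == 2 * 2, "B: (_2,_2) values per thread");
static_assert(kCElementsPerThread == 2 * 2, "C: (_2,_2) values per thread");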
Tiled MMA Configuration
The MMA tile configuration is a trade-off between resource usage and performance. The more MMA atoms we use, the higher the parallelism we can achieve, at the cost of more threads and greater memory-access pressure. The fewer MMA atoms we use, the lower the parallelism, but the fewer threads we need and the lower the memory-access pressure. To achieve the best performance, we need to find the sweet spot between the two extremes.
// Configure tiled MMA.
using MmaTraits = cute::MMA_Traits<cute::SM80_16x8x16_F16F16F16F16_TN>;
using MmaAtomShape = MmaTraits::Shape_MNK;
auto const mma_atom = cute::MMA_Atom<MmaTraits>{};
auto const mma_atom_shape = MmaAtomShape{};
// Repeating the mma atom along the M, N, and K dimensions.
// This increases the number of threads to process the tiled MMA.
constexpr int MMA_LAYOUT_M{2};
constexpr int MMA_LAYOUT_N{2};
constexpr int MMA_LAYOUT_K{1};
auto mma_layout{cute::make_layout(
    cute::make_shape(cute::Int<MMA_LAYOUT_M>{}, cute::Int<MMA_LAYOUT_N>{},
                     cute::Int<MMA_LAYOUT_K>{}))};
// Repeating the mma processing along the M, N, and K dimensions.
// This does not increase the number of threads to process the tiled MMA.
// But the number of registers required for processing the tiled MMA
// increases.
constexpr int NUM_MMA_TILE_M{1};
constexpr int NUM_MMA_TILE_N{2};
constexpr int NUM_MMA_TILE_K{1};
constexpr int MMA_TILE_M{cute::get<0>(mma_atom_shape) * MMA_LAYOUT_M * NUM_MMA_TILE_M};
constexpr int MMA_TILE_N{cute::get<1>(mma_atom_shape) * MMA_LAYOUT_N * NUM_MMA_TILE_N};
constexpr int MMA_TILE_K{cute::get<2>(mma_atom_shape) * MMA_LAYOUT_K * NUM_MMA_TILE_K};
auto mma_tile{cute::make_tile(cute::Int<MMA_TILE_M>{}, cute::Int<MMA_TILE_N>{},
                              cute::Int<MMA_TILE_K>{})};
auto tiled_mma{cute::make_tiled_mma(mma_atom, mma_layout, mma_tile)};
In our tiled MMA configuration, we have 2 MMA atoms along the $M$ dimension, 2 MMA atoms along the $N$ dimension, and 1 MMA atom along the $K$ dimension, i.e., no parallelism in the $K$ dimension. Therefore, we have $2 \times 2 \times 1 = 4$ MMA atoms in total. Because each MMA atom consists of a warp of 32 threads, the tiled MMA has ThrLayoutVMNK = (_32,_2,_2,_1):(_1,_32,_64,_0), and the number of threads involved is $32 \times 2 \times 2 \times 1 = 128$. In one pass, this tiled MMA solves an MMA problem of size $(2 \times M^{\prime}) \times (2 \times N^{\prime}) \times (1 \times K^{\prime}) = 32 \times 16 \times 16$. The same tiled MMA can be applied multiple times along different dimensions to solve larger MMA problems. In our case, we configured the tiled MMA to be repeated 2 times along the $N$ dimension. As a result, with such permutation, we have the tiled MMA PermutationMNK: (_32,_32,_16), which solves an MMA problem of size $32 \times 32 \times 16$.
tiled_mma
TiledMMA
  ThrLayoutVMNK:  (_32,_2,_2,_1):(_1,_32,_64,_0)
  PermutationMNK: (_32,_32,_16)
MMA_Atom
  ThrID:      _32:_1
  Shape_MNK:  (_16,_8,_16)
  LayoutA_TV: ((_4,_8),(_2,_2,_2)):((_32,_1),(_16,_8,_128))
  LayoutB_TV: ((_4,_8),(_2,_2)):((_16,_1),(_8,_64))
  LayoutC_TV: ((_4,_8),(_2,_2)):((_32,_1),(_16,_8))
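The numbers reported in this printout can be reproduced with a few lines of compile-time arithmetic. The sketch below, in plain C++ with constant names of my own choosing, recomputes the thread count and the permuted tile sizes from the configuration constants:
// Atom shape and tiling factors from the configuration above.
constexpr int kAtomM{16};
constexpr int kAtomN{8};
constexpr int kAtomK{16};
constexpr int kThreadsPerAtom{32};
constexpr int kLayoutM{2}, kLayoutN{2}, kLayoutK{1};    // MMA_LAYOUT_{M,N,K}
constexpr int kNumTileM{1}, kNumTileN{2}, kNumTileK{1}; // NUM_MMA_TILE_{M,N,K}

// 4 atoms of 32 threads each -> 128 threads, matching ThrLayoutVMNK = (_32,_2,_2,_1).
constexpr int kNumThreads{kThreadsPerAtom * kLayoutM * kLayoutN * kLayoutK};
static_assert(kNumThreads == 128, "tiled MMA uses 128 threads");

// Permuted tile sizes, matching PermutationMNK = (_32,_32,_16).
constexpr int kTileM{kAtomM * kLayoutM * kNumTileM}; // 32
constexpr int kTileN{kAtomN * kLayoutN * kNumTileN}; // 32
constexpr int kTileK{kAtomK * kLayoutK * kNumTileK}; // 16
static_assert(kTileM == 32 && kTileN == 32 && kTileK == 16,
              "permuted MMA tile is 32 x 32 x 16");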
The tiled MMA layouts for the MMA matrices can also be visualized using the CuTe cute::print_latex or cute::print_svg functions.
Tiled MMA Memory Copy Partition
The tiled MMA can then be used as the building block for solving even larger MMA problems. Given large MMA matrix tensors, the tiled MMA can be decomposed into thread MMAs, and each thread MMA has the methods partition_A, partition_B, and partition_C to partition the MMA matrix tensors into the per-thread sub-tensors that the tiled MMA can process. The partitioned MMA matrix tensors are then used as the input to the tiled MMA, and larger MMA problems are solved by repeating the tiled MMA along different dimensions using the cute::gemm API.
auto thread_mma{tiled_mma.get_slice(THREAD_IDX)};

// Use no tiled copy from shared memory to register.
auto thread_layout_C_smem_tensor_A_no_tiled_copy{
    thread_mma.partition_A(smem_tensor_A)}; // (MMA, MMA_M, MMA_K)
auto thread_layout_C_smem_tensor_B_no_tiled_copy{
    thread_mma.partition_B(smem_tensor_B)}; // (MMA, MMA_N, MMA_K)
auto thread_layout_C_smem_tensor_C_no_tiled_copy{
    thread_mma.partition_C(smem_tensor_C)}; // (MMA, MMA_M, MMA_N)
In our case, the tiled MMA partitions the matrix A, matrix B, and matrix C as follows:
thread_layout_C_smem_tensor_A_no_tiled_copy
ptr[16b](0x5c6c073e78c0) o ((_2,_2,_2),_4,_2):((_128,_8,_1024),_32,_2048)
thread_layout_C_smem_tensor_B_no_tiled_copy
ptr[16b](0x5c6c073e98d0) o ((_2,_2),_8,_2):((_128,_1024),_16,_2048)
thread_layout_C_smem_tensor_C_no_tiled_copy
ptr[16b](0x5c6c073eb8e0) o ((_2,_2),_4,_8):((_128,_8),_32,_2048)
Note that, again, in one MMA operation each thread in the MMA atom accesses $2 \times 2 \times 2 = 8$ elements of the sub-matrix of matrix $A$ and $2 \times 2 = 4$ elements of the sub-matrix of matrix $B$, and produces $2 \times 2 = 4$ elements of the sub-matrix of matrix $C$. Such patterns are repeated $4$ times along the $M$ dimension and $2$ times along the $K$ dimension for matrix $A$, $8$ times along the $N$ dimension and $2$ times along the $K$ dimension for matrix $B$, and $4$ times along the $M$ dimension and $8$ times along the $N$ dimension for matrix $C$. The cute::gemm API is responsible for accessing the desired data using the correct indices across the MMA iterations.
Instead of accessing data from global or shared memory, sometimes we would like to access data from register for data reuse and better performance. The thread MMA also provides the methods partition_fragment_A, partition_fragment_B, and partition_fragment_C that configure the minimum amount of registers needed for the tiled MMA operations in each thread.
// Register tensors used for MMA.
auto thread_layout_C_register_tensor_A{
    thread_mma.partition_fragment_A(smem_tensor_A)}; // (MMA, MMA_M, MMA_K)
auto thread_layout_C_register_tensor_B{
    thread_mma.partition_fragment_B(smem_tensor_B)}; // (MMA, MMA_N, MMA_K)
auto thread_layout_C_register_tensor_C{
    thread_mma.partition_fragment_C(smem_tensor_C)}; // (MMA, MMA_M, MMA_N)
We can see that the shapes of the register MMA tensors are exactly the same as the shapes of the corresponding tensors on shared memory or global memory. However, because of their compact strides compared to the ones on shared memory or global memory, no registers are wasted. This is important because the number of registers per thread is limited, and performance can suffer if the number of registers configured is too large or if there are wasted registers (the compiler might not be able to identify the wasted registers and optimize them out).
thread_layout_C_register_tensor_A
ptr[16b](0x7ffc34e465f0) o ((_2,_2,_2),_4,_2):((_1,_2,_4),_8,_32)
thread_layout_C_register_tensor_B
ptr[16b](0x7ffc34e46670) o ((_2,_2),_8,_2):((_1,_2),_4,_32)
thread_layout_C_register_tensor_C
ptr[16b](0x7ffc34e466f0) o ((_2,_2),_4,_8):((_1,_2),_4,_16)
Tiled MMA Memory Tiled Copy Partition
In this case, there are some performance issues when the threads try to access the data in shared memory or global memory. When each thread in the MMA atom accesses the $2 \times 2 \times 2 = 8$ elements of the sub-matrix of matrix $A$, the $2 \times 2 = 4$ elements of the sub-matrix of matrix $B$, and the $2 \times 2 = 4$ elements of the sub-matrix of matrix $C$, multiple memory transactions have to be performed because those elements are not contiguous in memory; in fact, no two of them are adjacent.
CUDA has a special warp-level matrix load instruction, ldmatrix, that specifically addresses this problem, and it is wrapped into the CuTe copy atoms. In our case, for the cute::half_t data type, the copy atom we use is cute::SM75_U16x8_LDSM_T for both matrix $A$ and matrix $B$. This copy atom consists of a warp of 32 threads. From the ValLayoutSrc: (_32,_8):(_8,_1) we learn that, by abstraction, each thread copies $8$ contiguous elements from the source memory to the destination memory. It is only by abstraction because under the hood ldmatrix does not work in this way. The copy atom copies $32 \times 8 = 256$ elements in total.
copy_atom_A
Copy_Atom
  ThrID:        _32:_1
  ValLayoutSrc: (_32,_8):(_8,_1)
  ValLayoutDst: ((_4,_8),(_1,_2,_4)):((_16,_1),(_1,_8,_64))
  ValLayoutRef: ((_4,_8),(_1,_2,_4)):((_16,_1),(_1,_8,_64))
  ValueType:    16b

copy_atom_B
Copy_Atom
  ThrID:        _32:_1
  ValLayoutSrc: (_32,_8):(_8,_1)
  ValLayoutDst: ((_4,_8),(_1,_2,_4)):((_16,_1),(_1,_8,_64))
  ValLayoutRef: ((_4,_8),(_1,_2,_4)):((_16,_1),(_1,_8,_64))
  ValueType:    16b
The copy atoms can then be combined with the tiled MMA into tiled copies for the matrix $A$ and matrix $B$ sub-matrices using the cute::make_tiled_copy_A and cute::make_tiled_copy_B functions. In our case, because the tiled MMA (which has permutations) solves an MMA problem of size $32 \times 32 \times 16$ (not $32 \times 16 \times 16$), the tiled copy is responsible for a tile of size $32 \times 16$ for matrix $A$ and $32 \times 16$ for matrix $B$. In fact, without the permutation in the tiled MMA, the tile for matrix $B$ would shrink to $16 \times 16$, which is too small to give every thread of the tiled MMA the values that the copy atom delivers per thread, and such a tiled copy would be rejected by CuTe.
// Use tiled copy from shared memory to register.
auto copy_atom_A = cute::Copy_Atom<cute::SM75_U16x8_LDSM_T, TA>{};
auto copy_atom_B = cute::Copy_Atom<cute::SM75_U16x8_LDSM_T, TB>{};

auto smem_tiled_copy_A{cute::make_tiled_copy_A(copy_atom_A, tiled_mma)};
auto smem_tiled_copy_B{cute::make_tiled_copy_B(copy_atom_B, tiled_mma)};
smem_tiled_copy_A
TiledCopy
  Tiler_MN:       (_32,_16)
  TiledLayout_TV: ((_4,_8,_2,_2),((_2,_2,_2),(_1,_1))):((_64,_1,_16,_0),((_32,_8,_256),(_0,_0)))
Copy_Atom
  ThrID:        _32:_1
  ValLayoutSrc: (_32,_8):(_8,_1)
  ValLayoutDst: ((_4,_8),(_1,_2,_4)):((_16,_1),(_1,_8,_64))
  ValLayoutRef: ((_4,_8),(_1,_2,_4)):((_16,_1),(_1,_8,_64))
  ValueType:    16b

smem_tiled_copy_B
TiledCopy
  Tiler_MN:       (_32,_16)
  TiledLayout_TV: ((_4,_8,_2,_2),((_2,_2),(_2,_1))):((_64,_1,_0,_8),((_32,_256),(_16,_0)))
Copy_Atom
  ThrID:        _32:_1
  ValLayoutSrc: (_32,_8):(_8,_1)
  ValLayoutDst: ((_4,_8),(_1,_2,_4)):((_16,_1),(_1,_8,_64))
  ValLayoutRef: ((_4,_8),(_1,_2,_4)):((_16,_1),(_1,_8,_64))
  ValueType:    16b
The tiled copy layouts can be printed using the cute::print_latex function as well. In our case, the left layout is the source layout, and it is actually misleading here: the value mapping between the left source layout and the right destination layout is incorrect. For example, the $T0V1$ value from the left layout will not be copied to the $T0V1$ value in the right layout. The left source layout shows a thread copying $8$ elements from the same column because the threads need to be located at the beginning of the column so that the column address can be correctly passed to ldmatrix.
The tiled copy can be decomposed into (abstracted) thread copies and each thread copy has the method partition_S to partition the source layout. Similarly, the thread copy source layout is misleading as one thread will not actually load $8$ contiguous elements from the source memory using the cute::SM75_U16x8_LDSM_T copy atom. But the CuTe tiled copy abstraction will at least ensure the consequent copy behavior is correct.
thread_layout_C_smem_tensor_A_tiled_copy
ptr[16b](0x57b7b93248c0) o ((_8,_1),_4,_2):((_1,_0),_32,_2048)
thread_layout_C_smem_tensor_B_tiled_copy
ptr[16b](0x57b7b93268d0) o ((_8,_1),_4,_2):((_1,_0),_32,_2048)
To perform the tiled copy, there is still one problem. The layouts for the destination register tensors for tiled MMA are not immediately compatible with the source tensor layouts.
thread_layout_C_register_tensor_A
ptr[16b](0x7ffc34e465f0) o ((_2,_2,_2),_4,_2):((_1,_2,_4),_8,_32)
thread_layout_C_register_tensor_B
ptr[16b](0x7ffc34e46670) o ((_2,_2),_8,_2):((_1,_2),_4,_32)
However, we realize that the sub-layout of the destination register tensor A, (_2,_2,_2):(_1,_2,_4), is equivalent to the sub-layout of the source shared memory tensor A, (_8,_1):(_1,_0). The sub-layout of the destination register tensor B, ((_2,_2),_8):((_1,_2),_4), is equivalent to the sub-layout of the source shared memory tensor B, ((_8,_1),_4):((_1,_0),_8). So before running the tiled copy using the cute::copy API, the destination register tensors should be retiled using the retile_D method of the thread copy.
After retiling the destination register tensors, the layouts become compatible with the source shared memory tensors for tiled copy.
thread_layout_C_register_tensor_A_copy_view
ptr[16b](0x7ffd7a129350) o ((_8,_1),_4,_2):((_1,_0),_8,_32)
thread_layout_C_register_tensor_B_copy_view
ptr[16b](0x7ffd7a1293d0) o ((_8,_1),_4,_2):((_1,_0),_8,_32)
Without using tiled copy, to load $8$ elements from matrix $A$ for tiled MMA in each thread, 8 memory access instructions have to be performed. But with tiled copy, only 1 memory access instruction is needed. Thus, tiled copy from shared memory to register could usually improve the performance.
Of course, when the tiled copy is performed, there could be shared memory bank conflicts. We could try minimizing the shared memory bank conflicts using CuTe swizzle.
|
cuBLAS GEMM API Usages for Column-Major and Row-Major Matrices
Introduction
The cuBLAS GEMM API has very strict requirements on the storage format of the input and output matrices. If all the matrices are stored in column-major format, the cuBLAS GEMM API can be used straightforwardly. But if some of the matrices are stored in row-major format, setting the parameters for the cuBLAS GEMM API for such matrix multiplications can be error-prone.
In this blog post, we will discuss the relationship between the transpose and column-major storage of matrices and how cuBLAS GEMM API should be used for different cases.
cuBLAS GEMM
cuBLAS GEMM API
The cuBLAS single-precision GEMM API is declared as follows.
cublasStatus_t cublasSgemm(cublasHandle_t handle,
                           cublasOperation_t transa, cublasOperation_t transb,
                           int m, int n, int k,
                           const float *alpha,
                           const float *A, int lda,
                           const float *B, int ldb,
                           const float *beta,
                           float *C, int ldc)
This function performs the general matrix-matrix multiplication
$$\begin{align}C = \alpha \text{op}(A) \text{op}(B) + \beta C\end{align}$$
where $\alpha$ and $\beta$ are scalars, and $A$, $B$, and $C$ are matrices stored in column-major format with dimensions of $\text{op}(A)$ being $m \times k$, $\text{op}(B)$ being $k \times n$, and $C$ being $m \times n$, respectively. Also for matrix $A$
$$\begin{align}\text{op}(A) =\begin{cases}A & \text{if transa = CUBLAS_OP_N} \\A^{\top} & \text{if transa = CUBLAS_OP_T} \\A^{\dagger} & \text{if transa = CUBLAS_OP_C} \\\end{cases}\end{align}$$
cuBLAS GEMM and Row-Major Matrices
But what if some of the matrices are stored in row-major format? Let’s see a few examples.
Suppose $m^{\prime} \times k^{\prime}$ matrix $A^{\prime}$ is stored in row-major format, and $k^{\prime} \times n^{\prime}$ matrix $B^{\prime}$ and $m^{\prime} \times n^{\prime}$ matrix $C^{\prime}$ are stored in column-major format. The transpose of $A^{\prime}$, $k^{\prime} \times m^{\prime}$ matrix $A^{\prime\top}$, stored in column-major format, is equivalent to the original $A^{\prime}$ stored in row-major format. But in order to perform the general matrix-matrix multiplication using cuBLAS, $A^{\prime\top}$ has to be transposed to $A^{\prime}$. In this case, transa = CUBLAS_OP_T, transb = CUBLAS_OP_N, m = m', n = n', k = k', A = A', B = B', and C = C'.
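Spelled out as an actual call, this first case could look like the following sketch. The wrapper name, the dimension names mp, np, kp (for $m^{\prime}$, $n^{\prime}$, $k^{\prime}$), and the device pointers d_A, d_B, d_C are my own; only the transa/transb, dimension, and leading-dimension choices come from the discussion above.
#include <cublas_v2.h>

// Case 1: A' (m' x k') row-major, B' (k' x n') and C' (m' x n') column-major.
// The buffer of A' is read by cuBLAS as the k' x m' column-major matrix A'^T,
// so transa = CUBLAS_OP_T recovers op(A) = A'.
cublasStatus_t gemm_row_major_a(cublasHandle_t handle, int mp, int np, int kp,
                                float const* d_A, float const* d_B, float* d_C,
                                float alpha, float beta)
{
    return cublasSgemm(handle, CUBLAS_OP_T, CUBLAS_OP_N,
                       mp, np, kp,
                       &alpha,
                       d_A, kp,  // lda: leading dimension of the stored k' x m' column-major view
                       d_B, kp,  // ldb: leading dimension of the k' x n' column-major B'
                       &beta,
                       d_C, mp); // ldc: leading dimension of the m' x n' column-major C'
}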
Suppose $m^{\prime} \times k^{\prime}$ matrix $A^{\prime}$ and $k^{\prime} \times n^{\prime}$ matrix $B^{\prime}$ are stored in column-major format, and $m^{\prime} \times n^{\prime}$ matrix $C^{\prime}$ is stored in row-major format. In this case, there is no way to transpose $C^{\prime}$ via the cuBLAS API.
We notice that we could transpose matrix $C$ in the formula first before performing the general matrix-matrix multiplication.
$$\begin{align}C^{\top} &= \alpha \left(\text{op}(A) \text{op}(B)\right)^{\top} + \beta C^{\top} \\&= \alpha \text{op}(B)^{\top} \text{op}(A)^{\top} + \beta C^{\top} \\&= \alpha \text{op}(B^{\top}) \text{op}(A^{\top}) + \beta C^{\top}\end{align}$$
So if $B^{\top}$, $A^{\top}$, and $C^{\top}$ are stored in column-major format, we could still perform the general matrix-matrix multiplication using the existing cuBLAS API.
In this case, the transpose of $C^{\prime}$, $n^{\prime} \times m^{\prime}$ matrix $C^{\prime\top}$, stored in column-major format, is equivalent to the original $C^{\prime}$ stored in row-major format. In addition, the matrix $A^{\prime}$ and $B^{\prime}$ have to be transposed as well. The transpose of $A^{\prime}$, $k^{\prime} \times m^{\prime}$ matrix $A^{\prime\top}$, stored in row-major format, is equivalent to the original $A^{\prime}$ stored in column-major format. The transpose of $B^{\prime}$, $n^{\prime} \times k^{\prime}$ matrix $B^{\prime\top}$, stored in row-major format, is equivalent to the original $B^{\prime}$ stored in column-major format. In this case, transa = CUBLAS_OP_T, transb = CUBLAS_OP_T, m = n', n = m', k = k', A = B', B = A', and C = C'.
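The corresponding call for this case swaps the roles of the two input matrices. Again, the wrapper name, mp, np, kp, and the device pointers are my own; the parameter choices are the ones derived above.
#include <cublas_v2.h>

// Case 2: A' (m' x k') and B' (k' x n') column-major, C' (m' x n') row-major.
// We compute C'^T = alpha * B'^T * A'^T + beta * C'^T, where C'^T is the n' x m'
// column-major view of the row-major C' buffer.
cublasStatus_t gemm_row_major_c(cublasHandle_t handle, int mp, int np, int kp,
                                float const* d_A, float const* d_B, float* d_C,
                                float alpha, float beta)
{
    return cublasSgemm(handle, CUBLAS_OP_T, CUBLAS_OP_T,
                       np, mp, kp,
                       &alpha,
                       d_B, kp,  // "A" argument: B' stored as k' x n' column-major, op gives B'^T
                       d_A, mp,  // "B" argument: A' stored as m' x k' column-major, op gives A'^T
                       &beta,
                       d_C, np); // "C" argument: C'^T stored as n' x m' column-major
}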
Conclusions
Suppose we want to perform matrix multiplication $C^{\prime} = \alpha A^{\prime} B^{\prime} + \beta C^{\prime}$, where $A^{\prime}$, $B^{\prime}$, and $C^{\prime}$ are matrices of shapes $m^{\prime} \times k^{\prime}$, $k^{\prime} \times n^{\prime}$, and $m^{\prime} \times n^{\prime}$, respectively, using cuBLAS API. The following table summarizes the relationship between the transpose and column-major storage of matrices $A^{\prime}$, $B^{\prime}$, and $C^{\prime}$, and how cuBLAS API should be used.
|
CuTe Swizzle
Introduction
In my article “CUDA Shared Memory Swizzling”, we have discussed how to use swizzling to avoid bank conflicts when a warp of threads accesses shared memory in a strided pattern. Because the swizzle operation and the mathematical proof involve both integer and bit operations, it might not be straightforward to understand and could be error-prone to implement.
CuTe provides a shared memory swizzling abstraction class Swizzle to simplify the shared memory swizzling implementation. Its implementation only involves bit operations; therefore, it is more readable and somewhat easier to prove correct.
In this blog post, I would like to quickly discuss the implementation of the CuTe shared memory swizzling abstraction class Swizzle and its configurations in practice.
CuTe Swizzle
CuTe Swizzle Implementation
The CuTe shared memory swizzling abstraction class Swizzle from the source code is as follows. Only three parameters are used for the swizzle configuration: BBits, MBase, and SShift, where BBits is the number of bits in the mask, MBase is the number of least-significant bits to keep constant, and SShift is the distance to shift the mask. This might be obscure at first glance, so let's walk through a quick example.
For simplicity, suppose we have a 16-bit integer offset whose value is 65 and a swizzle configuration Swizzle<5, 0, 6>. The bit representation of offset is 0b0000000001000001. The bit_msk is 0b0000000000011111, the yyy_msk is 0b0000011111000000, the zzz_msk is 0b0000000000011111, and msk_sft is 6. To swizzle the offset, offset & yyy_msk{} is 0b0000000001000000 where only the bits in the masked region are kept, and shiftr(offset & yyy_msk{}, msk_sft{}) is 0b0000000000000001 where the masked bits are shifted to the right. The final result is offset ^ shiftr(offset & yyy_msk{}, msk_sft{}) which is 0b0000000001000001 xor 0b0000000000000001 equals 0b0000000001000000 whose value is 64. This means the swizzle operation Swizzle<5, 0, 6> projects the offset from 65 to 64.
// A generic Swizzle functor
/* 0bxxxxxxxxxxxxxxxYYYxxxxxxxZZZxxxx
 *                               ^--^  MBase is the number of least-sig bits to keep constant
 *                  ^-^       ^-^      BBits is the number of bits in the mask
 *                    ^---------^      SShift is the distance to shift the YYY mask
 *                                     (pos shifts YYY to the right, neg shifts YYY to the left)
 *
 * e.g. Given
 * 0bxxxxxxxxxxxxxxxxYYxxxxxxxxxZZxxx
 * the result is
 * 0bxxxxxxxxxxxxxxxxYYxxxxxxxxxAAxxx where AA = ZZ xor YY
 */
template <int BBits, int MBase, int SShift = BBits>
struct Swizzle
{
    static constexpr int num_bits = BBits;
    static constexpr int num_base = MBase;
    static constexpr int num_shft = SShift;

    static_assert(num_base >= 0, "MBase must be positive.");
    static_assert(num_bits >= 0, "BBits must be positive.");
    static_assert(abs(num_shft) >= num_bits, "abs(SShift) must be more than BBits.");

    // using 'int' type here to avoid unintentially casting to unsigned... unsure.
    using bit_msk = cute::constant<int, (1 << num_bits) - 1>;
    using yyy_msk = cute::constant<int, bit_msk{} << (num_base + max(0,num_shft))>;
    using zzz_msk = cute::constant<int, bit_msk{} << (num_base - min(0,num_shft))>;
    using msk_sft = cute::constant<int, num_shft>;

    static constexpr uint32_t swizzle_code = uint32_t(yyy_msk{} | zzz_msk{});

    template <class Offset>
    CUTE_HOST_DEVICE constexpr static auto apply(Offset const& offset)
    {
        return offset ^ shiftr(offset & yyy_msk{}, msk_sft{}); // ZZZ ^= YYY
    }

    template <class Offset>
    CUTE_HOST_DEVICE constexpr auto operator()(Offset const& offset) const
    {
        return apply(offset);
    }

    template <int B, int M, int S>
    CUTE_HOST_DEVICE constexpr auto operator==(Swizzle<B,M,S> const&) const
    {
        return B == BBits && M == MBase && S == SShift;
    }
};
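The Swizzle<5, 0, 6> walkthrough above can be reproduced with plain bit operations, without pulling in CuTe at all. The following is a minimal sketch, assuming a non-negative SShift; the function and the test value are mine, not part of the CuTe source:
#include <cstdint>

// Plain re-implementation of Swizzle<BBits, MBase, SShift>::apply for non-negative SShift.
constexpr uint32_t swizzle(uint32_t offset, int BBits, int MBase, int SShift)
{
    uint32_t const bit_msk{(1u << BBits) - 1u};
    uint32_t const yyy_msk{bit_msk << (MBase + SShift)};
    return offset ^ ((offset & yyy_msk) >> SShift); // ZZZ ^= YYY
}

// Swizzle<5, 0, 6> projects offset 65 to 64, as in the walkthrough above.
static_assert(swizzle(65u, 5, 0, 6) == 64u, "Swizzle<5, 0, 6> maps offset 65 to 64");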
Offset Bijection
Given an integer $m$, a domain $X = [0, 2^m - 1]$, and a constant $c \in X$, the function $f: X \to X$ defined as $f(x) = x \oplus c$, where $\oplus$ is the XOR operation, is a bijection.
Proof
It is trivial to see the XOR operation is commutative and associative. Suppose there exist two different values $x_1, x_2 \in X$ such that $f(x_1) = f(x_2)$, then $x_1 \oplus c = x_2 \oplus c$, which implies $x_1 = x_2$. Therefore, the function $f$ is injective. Because $X = [0, 2^m - 1]$ and the XOR operation cannot produce a value outside of $X$, the function $f$ is surjective.
This concludes the proof. $\square$
Therefore, assuming MBase is zero and BBits = m, for the offsets $x$ in an aligned range $X = [k \cdot 2^m, (k + 1) \cdot 2^m - 1]$, because $c$ is a constant (it is determined by bits at positions higher than $m$, which do not change within the range), the swizzle operation permutes the offsets in $X$ bijectively.
Shared Memory 2D Layout and Shared Memory Bank Conflicts
On devices of compute capability 5.x or newer, each bank has a bandwidth of 32 bits every clock cycle, and successive 32-bit words are assigned to successive banks. So element size matters for shared memory bank conflicts.
In many use cases, the shared memory layout is a 2D row-major matrix whose row size is a multiple of 32 and whose element size is 32-bit. When a warp of threads performs strided accesses down a column of the matrix, severe 32-way shared memory bank conflicts will occur. If the elements in each column are mapped to different shared memory banks, the shared memory bank conflicts can be mitigated.
Assuming MBase is zero, the element size is 32-bit, and the matrix row size is $n$, we can design the swizzle operation such that accessing each column of the matrix is free of shared memory bank conflicts, based on the offset bijection property we just proved above. To configure BBits and SShift such that offset % 32 is distinct when a warp of threads accesses a column of the matrix, we set SShift to $\log_2 n$ and BBits to $\log_2 32 = 5$, so that the $c$ used in $f(x) = x \oplus c$ is different for each row, resulting in distinct offset % 32 values when a warp of threads accesses the column of the matrix.
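This configuration can be checked numerically. A minimal host-side sketch, assuming a row-major matrix of 32-bit elements with a row size of 64 (hence Swizzle<5, 0, 6>); the variable names and the chosen column are mine:
#include <cstdint>
#include <iostream>
#include <set>

int main()
{
    constexpr int kNumRows{32};
    constexpr int kRowSize{64}; // 64 x 32-bit elements per row.
    constexpr int kColumn{3};   // Any fixed column gives the same result.

    std::set<uint32_t> bank_ids{};
    for (int row{0}; row < kNumRows; ++row)
    {
        uint32_t const offset{static_cast<uint32_t>(row * kRowSize + kColumn)};
        // Swizzle<5, 0, 6>: XOR the low 5 bits with bits [6, 11) of the offset.
        uint32_t const swizzled_offset{offset ^ ((offset & (0x1Fu << 6)) >> 6)};
        bank_ids.insert(swizzled_offset % 32u);
    }
    // All 32 accesses land in distinct banks, so the column read is conflict-free.
    std::cout << "distinct banks: " << bank_ids.size() << std::endl; // Prints 32.
    return 0;
}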
Vectorized Memory Access
In CuTe, when we want to perform vectorized access, which is common in CUDA kernels such as matrix transpose, all the elements in the vector must be contiguous in memory for both the source and the target. If MBase is zero, the swizzle operation cannot guarantee contiguous memory access because of the XOR operation. However, when we know the number of elements in the vector, we can set MBase to the log2 of the number of elements in the vector, and, as we will prove below, the swizzle operation will then guarantee contiguous memory access.
Suppose a vector holds $n$ values where $n$ is a power of 2, then all the bits of offsets $kn, kn + 1, \ldots, kn + n - 1$, where $k$ is an integer, are the same except the least significant $\log_2 n$ bits.
Proof
It is trivial to see that the least significant $\log_2 n$ bits of $kn, kn + 1, \ldots, kn + n - 1$ are $0, 1, \ldots, n - 1$ in decimal. Because each value can be written as $kn + r$ with $0 \leq r < n$, and $kn$ has all of its least significant $\log_2 n$ bits equal to zero, adding $r$ never carries into the higher bits. Therefore, the higher bits of all the values are identical to those of $kn$.
This concludes the proof. $\square$
Therefore, no matter how the swizzle operation is performed, as long as MBase is set to $\log_2 n$ and the memory access starting offset is a multiple of $n$, the contiguous memory access is guaranteed.
If the element size is less than or equal to 32-bit, such as 1-bit, 2-bit, 4-bit, 8-bit, 16-bit, and 32-bit, we could set the number of values in the vector to be 32, 16, 8, 4, 2, and 1, and set MBase to be $\log_2 32 = 5$, $\log_2 16 = 4$, $\log_2 8 = 3$, $\log_2 4 = 2$, $\log_2 2 = 1$, and $\log_2 1 = 0$, respectively, such that each vectorized memory access transaction is 32-bit. Consequently, the vector is just treated as if it were just a 32-bit element, and the offset can be re-indexed by offset >>= MBase before the swizzle operation. Then all the swizzle properties we just proved above still hold and the swizzle configurations can be reused.
If the element size is greater than 32-bit, such as 64-bit, 128-bit, 256-bit, etc., we could treat it as a vectorized memory access of multiple 32-bit elements. The offset can be re-indexed by offset <<= log2(element_size / 32) before the swizzle operation, and the swizzle configurations can be reused. This, however, has the consequence that accessing the column of the matrix can never be completely free of shared memory bank conflicts, because the image of the shared memory bank ids offset % 32 becomes smaller than $[0, 32)$. By the pigeonhole principle, there will always be at least an (element_size / 32)-way shared memory bank conflict.
Universal Swizzle Equations and Configurations
Suppose the element size is $S$-byte, the number of elements in a vector is $N$, the number of elements in the fast dimension of the shared memory is $X$. The MBase should be set to $\log_2 N$, the BBits should be set to $\log_2 (32 \times 4 / S) - \text{MBase}$, and the SShift should be set to $\log_2 X - \text{MBase}$.
An example of the universal swizzle configurations could be implemented as follows.
constexpr int constexpr_log2(int n)
{
    return ((n < 2) ? 0 : 1 + constexpr_log2(n / 2));
}

using VectorType = cute::uint128_t;
CUTE_STATIC_ASSERT(sizeof(VectorType) % sizeof(T) == 0,
                   "sizeof(VectorType) must be a multiple of sizeof(T)");
constexpr unsigned int NUM_VECTOR_ELEMENTS{sizeof(VectorType) / sizeof(T)};

using TileSizeX = cute::Int<128>; // Fast dimension size on shared memory.
using TileSizeY = cute::Int<32>;  // Slow dimension size on shared memory.

constexpr int NUM_BASE_BITS{constexpr_log2(NUM_VECTOR_ELEMENTS)};
constexpr int NUM_MASK_BITS{constexpr_log2(32 * 4 / sizeof(T)) - NUM_BASE_BITS};
constexpr int NUM_SHIFT_BITS{constexpr_log2(TileSizeX::value) - NUM_BASE_BITS};
Examples
Let’s see a few more complicated examples in which the data type is not of 32-bit size and vectorized memory access is used.
Suppose we have an INT8 8 x 128 row-major matrix, and we want to use 128-bit vectorized memory access. In this case, there are 16 elements in a vector, and MBase should be set to $\log_2 16 = 4$. Because there are 32 shared memory banks of 32-bit size, each 32-bit word contains 4 elements, so BBits should be set to $\log_2 (32 \times 4) - \text{MBase} = 7 - 4 = 3$. The SShift is set to $\log_2 128 - \text{MBase} = 7 - 4 = 3$ to ensure the constant $c$ used for the XOR operation is different for each row. Therefore, the swizzle configuration is Swizzle<3, 4, 3>.
Suppose we have an FP16 8 x 64 row-major matrix, and we want to use 128-bit vectorized memory access. In this case, there are 8 elements in a vector, and MBase should be set to $\log_2 8 = 3$. Because there are 32 shared memory banks of 32-bit size, each 32-bit word contains 2 elements, so BBits should be set to $\log_2 (32 \times 2) - \text{MBase} = 6 - 3 = 3$. The SShift is set to $\log_2 64 - \text{MBase} = 6 - 3 = 3$ to ensure the constant $c$ used for the XOR operation is different for each row. Therefore, the swizzle configuration is Swizzle<3, 3, 3>.
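Both configurations follow mechanically from the universal equations above. The following compile-time sketch recomputes them; the SwizzleParams struct and the swizzle_params helper are my own names, not CuTe APIs:
// Compute (BBits, MBase, SShift) from the element size S (bytes), the vector
// length N (elements), and the fast-dimension size X (elements).
constexpr int constexpr_log2(int n)
{
    return ((n < 2) ? 0 : 1 + constexpr_log2(n / 2));
}

struct SwizzleParams
{
    int BBits;
    int MBase;
    int SShift;
};

constexpr SwizzleParams swizzle_params(int S, int N, int X)
{
    return SwizzleParams{constexpr_log2(32 * 4 / S) - constexpr_log2(N),
                         constexpr_log2(N),
                         constexpr_log2(X) - constexpr_log2(N)};
}

// INT8 8 x 128 row-major matrix with 128-bit vectorized access -> Swizzle<3, 4, 3>.
static_assert(swizzle_params(1, 16, 128).BBits == 3 &&
              swizzle_params(1, 16, 128).MBase == 4 &&
              swizzle_params(1, 16, 128).SShift == 3, "INT8 example");
// FP16 8 x 64 row-major matrix with 128-bit vectorized access -> Swizzle<3, 3, 3>.
static_assert(swizzle_params(2, 8, 64).BBits == 3 &&
              swizzle_params(2, 8, 64).MBase == 3 &&
              swizzle_params(2, 8, 64).SShift == 3, "FP16 example");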
CuTe Swizzle Preview
The shared memory bank ids of CuTe swizzled layout can be previewed using the CuTe Swizzle Preview App I created. The app saves the shared memory bank ids of the swizzled layout to a LaTeX file, and the LaTeX file can be compiled to a PDF file for previewing.
For example, given a 2D row-major $32 \times 64$ matrix consisting of 32-bit elements, we printed the shared memory bank ids of the swizzled layouts, including Swizzle<5, 0, 6>, Swizzle<5, 0, 8>, Swizzle<5, 2, 6>, and Swizzle<5, 2, 8>. Only Swizzle<5, 0, 6> results in bank-conflict-free accesses when reading each column of the matrix.
|
CuTe Matrix Transpose
Introduction
CuTe is a C++ template library that provides a high-level abstraction for layout and tensor operations in CUDA kernels. CUTLASS 3.0 and beyond adopts CuTe throughout the GEMM hierarchy in its templates, allowing the implementation to be more readable and maintainable.
Previously, I have created an article “CuTe Layout Algebra” on the mathematical foundations of CuTe. In this blog post, we will have some hands-on experience and have a better understanding of CuTe by implementing matrix transpose CUDA kernels.
CuTe Matrix Transpose
Matrix transpose CUDA kernels are probably the example CUDA kernels that I have implemented most often. In my previous examples, the thread and data index mappings in the CUDA kernels were completely manual. There were also some hard-coded assumptions, such as each CUDA thread processing only a single element in the matrix transpose, to make the implementation easier and more human readable. To handle more complex configurations while keeping the implementation human readable, we can create matrix transpose CUDA kernels using CuTe.
To transpose a matrix in a CUDA kernel, performing strided memory reads or writes in a warp is inevitable and it will lead to uncoalesced memory accesses, resulting in performance degradation. To mitigate the performance degradation, the strided memory reads or writes could be performed on shared memory instead of global memory. When the strided memory reads or writes are performed on shared memory, special optimizations have also to be performed to avoid shared memory bank conflicts.
All the CuTe matrix transpose CUDA kernels implemented in this article and their unit tests could be found from my CUTLASS Examples GitHub repository.
CuTe Naive Matrix Transpose
In the CuTe naive matrix transpose CUDA kernel implementation, we will not use shared memory. Two slightly different CUDA kernel variants have been implemented. One performs coalesced global memory reads and strided global memory writes, and the other performs strided global memory reads and coalesced global memory writes. It turns out that the difference between the implementations of the two variants is just one line of code.
CuTe Naive Matrix Transpose Implementation
The CuTe naive matrix transpose CUDA kernel implementation could also be found from my CUTLASS Examples GitHub repository.
#include <cuda_runtime.h>#include <cute/tensor.hpp>#include "cute_matrix_transpose.hpp"template <class TensorSrc, class TensorDst, class ThreadLayout>static __global__ void matrix_transpose_naive(TensorSrc tensor_src, TensorDst tensor_dst_transposed, ThreadLayout){ using Element = typename TensorSrc::value_type; auto global_tile_src{tensor_src(cute::make_coord(cute::_, cute::_), blockIdx.y, blockIdx.x)}; // (TileSizeY, TileSizeX) auto global_tile_dst_transposed{ tensor_dst_transposed(cute::make_coord(cute::_, cute::_), blockIdx.y, blockIdx.x)}; // (TileSizeY, TileSizeX) auto thread_global_tile_src{cute::local_partition( global_tile_src, ThreadLayout{}, threadIdx.x)}; // (ThreadValueSizeY, ThreadValueSizeX) auto thread_global_tile_dst_transposed{cute::local_partition( global_tile_dst_transposed, ThreadLayout{}, threadIdx.x)}; // (ThreadValueSizeY, ThreadValueSizeX) // A 2D array of tuples that maps (x, y) to (x, y). auto const identity_tensor{cute::make_identity_tensor(cute::make_shape( cute::size<0>(global_tile_src), cute::size<1>(global_tile_src)))}; auto const thread_identity_tensor{ cute::local_partition(identity_tensor, ThreadLayout{}, threadIdx.x)}; auto fragment{cute::make_tensor_like(thread_global_tile_src)}; auto predicator{cute::make_tensor<bool>( cute::make_shape(cute::size<0>(fragment), cute::size<1>(fragment)))}; auto const num_max_columns{cute::stride<0>(global_tile_src)}; auto const num_max_rows{cute::stride<1>(global_tile_dst_transposed)}; constexpr auto global_tile_columns{cute::size<1>(global_tile_src)}; constexpr auto global_tile_rows{cute::size<0>(global_tile_src)}; CUTE_UNROLL for (unsigned int i{0}; i < cute::size<0>(predicator); ++i) { CUTE_UNROLL for (unsigned int j{0}; j < cute::size<1>(predicator); ++j) { auto const thread_identity{thread_identity_tensor(i, j)}; bool const is_row_in_bound{cute::get<0>(thread_identity) + blockIdx.y * global_tile_rows < num_max_rows}; bool const is_column_in_bound{cute::get<1>(thread_identity) + blockIdx.x * global_tile_columns < num_max_columns}; predicator(i, j) = is_row_in_bound && is_column_in_bound; } } cute::copy_if(predicator, thread_global_tile_src, fragment); cute::copy_if(predicator, fragment, thread_global_tile_dst_transposed); // Alternatively, we could just do the following instead. // cute::copy_if(predicator, thread_global_tile_src, // thread_global_tile_dst_transposed);}enum class GlobalMemoryCoalescedAccessMode{ Read, Write};template <typename T>static cudaError_t launch_matrix_transpose_naive_base( T const* input_matrix, T* output_matrix, unsigned int M, unsigned int N, GlobalMemoryCoalescedAccessMode coalesced_access_mode, cudaStream_t stream){ auto const tensor_shape{cute::make_shape(M, N)}; auto const tensor_shape_transposed{cute::make_shape(N, M)}; // Input matrix: row-major M x N matrix. auto const global_memory_layout_src{cute::make_layout( tensor_shape, cute::GenRowMajor{})}; // (M, N) : (N, 1) // Output matrix: row-major N x M matrix. auto const global_memory_layout_dst{cute::make_layout( tensor_shape_transposed, cute::GenRowMajor{})}; // (N, M) : (M, 1) // Same output matrix, but different view: column-major M x N matrix. 
auto const global_memory_layout_dst_transposed{cute::make_layout( tensor_shape, cute::GenColMajor{})}; // (M, N) : (1, M) auto const tensor_src{cute::make_tensor(cute::make_gmem_ptr(input_matrix), global_memory_layout_src)}; auto const tensor_dst{cute::make_tensor(cute::make_gmem_ptr(output_matrix), global_memory_layout_dst)}; auto const tensor_dst_transposed{ cute::make_tensor(cute::make_gmem_ptr(output_matrix), global_memory_layout_dst_transposed)}; using TileSizeX = cute::Int<64>; // bN using TileSizeY = cute::Int<32>; // bM constexpr auto block_shape{cute::make_shape(TileSizeY{}, TileSizeX{})}; auto const tiled_tensor_src{cute::tiled_divide( tensor_src, block_shape)}; // ((TileSizeY, TileSizeX), M / // TileSizeY, N / TileSizeX) auto const tiled_tensor_dst_transposed{cute::tiled_divide( tensor_dst_transposed, block_shape)}; // ((TileSizeY, TileSizeX), M // / TileSizeY, N / TileSizeX) using ThreadBlockSizeX = cute::Int<32>; // tN using ThreadBlockSizeY = cute::Int<8>; // tM constexpr auto thread_block_shape{ cute::make_shape(ThreadBlockSizeY{}, ThreadBlockSizeX{})}; constexpr auto thread_block_shape_transposed{ cute::make_shape(ThreadBlockSizeX{}, ThreadBlockSizeY{})}; // Coalesced memory read. constexpr auto thread_layout{ cute::make_layout(thread_block_shape, cute::GenRowMajor{})}; // Coalesced memory write. constexpr auto thread_layout_transposed{ cute::make_layout(thread_block_shape_transposed, cute::GenColMajor{})}; dim3 const grid_dim{cute::size<2>(tiled_tensor_src), cute::size<1>(tiled_tensor_src)}; dim3 const thread_dim{ cute::size(ThreadBlockSizeX::value * ThreadBlockSizeY::value)}; if (coalesced_access_mode == GlobalMemoryCoalescedAccessMode::Read) { CUTE_STATIC_ASSERT_V(TileSizeX{} % ThreadBlockSizeX{} == cute::Int<0>{}, "TileSizeX must be divisible by ThreadBlockSizeX"); CUTE_STATIC_ASSERT_V(TileSizeY{} % ThreadBlockSizeY{} == cute::Int<0>{}, "TileSizeY must be divisible by ThreadBlockSizeY"); matrix_transpose_naive<<<grid_dim, thread_dim, 0, stream>>>( tiled_tensor_src, tiled_tensor_dst_transposed, thread_layout); } else { CUTE_STATIC_ASSERT_V(TileSizeX{} % ThreadBlockSizeY{} == cute::Int<0>{}, "TileSizeX must be divisible by ThreadBlockSizeY"); CUTE_STATIC_ASSERT_V(TileSizeY{} % ThreadBlockSizeX{} == cute::Int<0>{}, "TileSizeY must be divisible by ThreadBlockSizeX"); matrix_transpose_naive<<<grid_dim, thread_dim, 0, stream>>>( tiled_tensor_src, tiled_tensor_dst_transposed, thread_layout_transposed); } return cudaGetLastError();}template <typename T>cudaError_t launch_matrix_transpose_naive_coalesced_read(T const* input_matrix, T* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream){ return launch_matrix_transpose_naive_base( input_matrix, output_matrix, M, N, GlobalMemoryCoalescedAccessMode::Read, stream);}template <typename T>cudaError_t launch_matrix_transpose_naive_coalesced_write(T const* input_matrix, T* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream){ return launch_matrix_transpose_naive_base( input_matrix, output_matrix, M, N, GlobalMemoryCoalescedAccessMode::Write, stream);}// Explicit instantiation.template cudaError_t launch_matrix_transpose_naive_coalesced_read<float>( float const* input_matrix, float* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream);template cudaError_t launch_matrix_transpose_naive_coalesced_read<double>( double const* input_matrix, double* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream);template cudaError_t launch_matrix_transpose_naive_coalesced_write<float>( float 
const* input_matrix, float* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream);template cudaError_t launch_matrix_transpose_naive_coalesced_write<double>( double const* input_matrix, double* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream);
Matrix Layout
There are typically two ways to describe a matrix stored in linear storage: row-major and column-major. In row-major layout, the elements of each row are contiguous in memory, while in column-major layout, the elements of each column are contiguous in memory. Given an $M \times N$ input matrix $A$ with $M$ rows and $N$ columns in row-major layout, typically we want to transpose the input matrix $A$ to an $N \times M$ output matrix $A^{\top}$ with $N$ rows and $M$ columns that is also in row-major layout. In this case, the output matrix $A^{\top}$ can not only be viewed as a row-major $N \times M$ matrix but also as a column-major $M \times N$ matrix.
The matrix transpose operation maps the element $A_{i, j}$ of the input matrix $A$ to the element $A^{\top}_{j, i}$ of the output matrix $A^{\top}$. On one hand, if the matrices are described using row-major layouts, the input matrix $A$ has shape $(M, N)$ and the element $A_{i, j}$ is stored at the coordinate $(i, j)$ in the row-major layout of $A$, while the output matrix $A^{\top}$ has shape $(N, M)$ and the element $A^{\top}_{j, i}$ is stored at the coordinate $(j, i)$ in the row-major layout of $A^{\top}$. On the other hand, if the matrices are described using column-major layouts, the same input matrix $A$ has shape $(N, M)$ and the element $A_{i, j}$ is stored at the coordinate $(j, i)$ in the column-major layout of $A$, while the output matrix $A^{\top}$ has shape $(M, N)$ and the element $A^{\top}_{j, i}$ is stored at the coordinate $(i, j)$ in the column-major layout of $A^{\top}$.
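To make the offset bookkeeping concrete, here is a small standalone sketch (plain C++, not taken from the implementation in this post; the function name and buffer types are mine) that transposes a row-major $M \times N$ matrix by iterating a single $(i, j)$ coordinate, reading through the row-major $(M, N)$ offset and writing through the column-major $(M, N)$ view of the output.

#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical helper for illustration only: transpose a row-major M x N matrix A into
// a row-major N x M matrix AT by writing through the column-major (M, N) view of AT.
void transpose_row_major(std::vector<float> const& A, std::vector<float>& AT,
                         std::size_t M, std::size_t N)
{
    assert(A.size() == M * N && AT.size() == M * N);
    for (std::size_t i{0}; i < M; ++i)
    {
        for (std::size_t j{0}; j < N; ++j)
        {
            // Row-major (M, N) : (N, 1) offset of A(i, j).
            std::size_t const src_offset{i * N + j};
            // Column-major (M, N) : (1, M) offset of the same coordinate (i, j), which is
            // exactly the row-major (N, M) : (M, 1) offset of the transposed element AT(j, i).
            std::size_t const dst_offset{i + j * M};
            AT[dst_offset] = A[src_offset];
        }
    }
}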
Although a little brain-twisting, matrix transpose therefore maps the element at coordinate $(i, j)$ in the row-major layout of the input matrix to the element at coordinate $(i, j)$ in the column-major layout of the output matrix. In CuTe, given a 1D coordinate, as long as the layouts of the input tensor and the output tensor both have shape $(M, N)$, the 1D coordinate is always mapped to the same natural coordinate in both tensors. When CuTe iterates over the $M \times N$ 1D coordinates, the corresponding elements of the input matrix and the output matrix thus form a transposed pair. This is the key reason why we have to describe the input matrix with a row-major layout and the output matrix with a column-major layout in CuTe. Otherwise, if the input matrix and the output matrix were both described using the same layout, iterating over the $M \times N$ 1D coordinates would not pair up transposed elements.
auto const tensor_shape{cute::make_shape(M, N)};
auto const tensor_shape_transposed{cute::make_shape(N, M)};

// Input matrix: row-major M x N matrix.
auto const global_memory_layout_src{cute::make_layout(
    tensor_shape, cute::GenRowMajor{})}; // (M, N) : (N, 1)
// Output matrix: row-major N x M matrix.
auto const global_memory_layout_dst{cute::make_layout(
    tensor_shape_transposed, cute::GenRowMajor{})}; // (N, M) : (M, 1)
// Same output matrix, but different view: column-major M x N matrix.
auto const global_memory_layout_dst_transposed{cute::make_layout(
    tensor_shape, cute::GenColMajor{})}; // (M, N) : (1, M)

auto const tensor_src{cute::make_tensor(cute::make_gmem_ptr(input_matrix),
                                        global_memory_layout_src)};
auto const tensor_dst{cute::make_tensor(cute::make_gmem_ptr(output_matrix),
                                        global_memory_layout_dst)};
auto const tensor_dst_transposed{
    cute::make_tensor(cute::make_gmem_ptr(output_matrix),
                      global_memory_layout_dst_transposed)};
Divide and Conquer and Matrix Tiling
To accelerate matrix transpose for large problems, we will have to divide the input matrix and the output matrix into smaller tiles and compute the transpose of each tile in parallel. In this example, we divide the input matrix and the output matrix into tiles of shape $(bM, bN)$, where $bM$ and $bN$ are the number of rows and columns in each tile, respectively. Both the input matrix and the output matrix will be divided into $\left\lceil \frac{M}{bM} \right\rceil \times \left\lceil \frac{N}{bN} \right\rceil$ tiles. The matrix transpose in each tile is independent and can be processed in parallel.
The divided input matrix and the divided output matrix now have new layouts, whose shapes are both $\left((bM, bN), \left\lceil \frac{M}{bM} \right\rceil, \left\lceil \frac{N}{bN} \right\rceil\right)$. The row-major and column-major notations are no longer applicable for describing the divided matrices, because the shapes now have 3 modes, i.e., a rank of 3. To describe the storage layout of a tensor of higher (arbitrary) rank, CuTe uses strides. In our particular problem, this is not too important because CuTe handles these strides for us automatically; in other problems, that might not be the case.
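As a quick sanity check of what cute::tiled_divide produces, the following host-only sketch (my own illustration, not part of the original transpose code; the tiny $(8, 4)$ layout is chosen only for readability) tiles a small row-major layout and prints the resulting rank-3 layout.

#include <cstdio>
#include <cute/tensor.hpp>

int main()
{
    // A small row-major (8, 4) : (4, 1) layout, tiled by a (4, 2) tile.
    auto const layout{cute::make_layout(
        cute::make_shape(cute::Int<8>{}, cute::Int<4>{}), cute::GenRowMajor{})};
    auto const tile{cute::make_shape(cute::Int<4>{}, cute::Int<2>{})};
    // The tiled layout has shape ((4, 2), 2, 2): the tile mode followed by the two
    // tile-count modes, i.e., a rank-3 layout with hierarchical strides.
    auto const tiled_layout{cute::tiled_divide(layout, tile)};
    cute::print(tiled_layout);
    std::printf("\n");
    return 0;
}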
using TileSizeX = cute::Int<64>; // bN
using TileSizeY = cute::Int<32>; // bM

constexpr auto block_shape{cute::make_shape(TileSizeY{}, TileSizeX{})};

auto const tiled_tensor_src{cute::tiled_divide(
    tensor_src, block_shape)}; // ((TileSizeY, TileSizeX), M /
                               // TileSizeY, N / TileSizeX)
auto const tiled_tensor_dst_transposed{cute::tiled_divide(
    tensor_dst_transposed, block_shape)}; // ((TileSizeY, TileSizeX), M
                                          // / TileSizeY, N / TileSizeX)
CUDA Thread Block Layout and Coalesced Memory Access
Each tile of the input matrix and the output matrix is processed by a CUDA thread block that consists of multiple CUDA threads. In our case, we use a thread block of shape $(tM, tN)$ with a row-major layout, or of shape $(tN, tM)$ with a column-major layout. The number of threads in a CUDA thread block is $tM \times tN$. The number of CUDA thread blocks to launch is simply the number of tiles, i.e., $\left\lceil \frac{M}{bM} \right\rceil \times \left\lceil \frac{N}{bN} \right\rceil$. This is feasible because $bM$, $bN$, $tM$, and $tN$ are all compile-time constants.
Because the input matrix and its tiles are of row-major layout, and the output matrix and its tiles are of column-major layout, when the thread block is of row-major layout, each warp in the thread block will read from the input matrix on global memory in a coalesced fashion but write to the output matrix on global memory in a strided fashion. Similarly, when the thread block is of column-major layout, each warp in the thread block will read from the input matrix on global memory in a strided fashion but write to the output matrix on global memory in a coalesced fashion.
using ThreadBlockSizeX = cute::Int<32>; // tN
using ThreadBlockSizeY = cute::Int<8>;  // tM

constexpr auto thread_block_shape{
    cute::make_shape(ThreadBlockSizeY{}, ThreadBlockSizeX{})};
constexpr auto thread_block_shape_transposed{
    cute::make_shape(ThreadBlockSizeX{}, ThreadBlockSizeY{})};
// Coalesced memory read.
constexpr auto thread_layout{
    cute::make_layout(thread_block_shape, cute::GenRowMajor{})};
// Coalesced memory write.
constexpr auto thread_layout_transposed{
    cute::make_layout(thread_block_shape_transposed, cute::GenColMajor{})};

dim3 const grid_dim{cute::size<2>(tiled_tensor_src),
                    cute::size<1>(tiled_tensor_src)};
dim3 const thread_dim{
    cute::size(ThreadBlockSizeX::value * ThreadBlockSizeY::value)};

if (coalesced_access_mode == GlobalMemoryCoalescedAccessMode::Read)
{
    CUTE_STATIC_ASSERT_V(TileSizeX{} % ThreadBlockSizeX{} == cute::Int<0>{},
                         "TileSizeX must be divisible by ThreadBlockSizeX");
    CUTE_STATIC_ASSERT_V(TileSizeY{} % ThreadBlockSizeY{} == cute::Int<0>{},
                         "TileSizeY must be divisible by ThreadBlockSizeY");
    matrix_transpose_naive<<<grid_dim, thread_dim, 0, stream>>>(
        tiled_tensor_src, tiled_tensor_dst_transposed, thread_layout);
}
else
{
    CUTE_STATIC_ASSERT_V(TileSizeX{} % ThreadBlockSizeY{} == cute::Int<0>{},
                         "TileSizeX must be divisible by ThreadBlockSizeY");
    CUTE_STATIC_ASSERT_V(TileSizeY{} % ThreadBlockSizeX{} == cute::Int<0>{},
                         "TileSizeY must be divisible by ThreadBlockSizeX");
    matrix_transpose_naive<<<grid_dim, thread_dim, 0, stream>>>(
        tiled_tensor_src, tiled_tensor_dst_transposed,
        thread_layout_transposed);
}
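To see why the two thread layouts lead to coalesced reads versus coalesced writes, the following standalone sketch (my own illustration, not from the original code) prints the tile coordinate owned by the first few threads under each layout: under the row-major $(tM, tN) : (tN, 1)$ layout, consecutive threads advance along a row, which is contiguous in the row-major input tile, while under the column-major $(tN, tM) : (1, tN)$ layout they advance down a column, which is contiguous in the column-major view of the output.

#include <iostream>

int main()
{
    int constexpr tM{8};
    int constexpr tN{32};
    for (int t{0}; t < 4; ++t)
    {
        // Row-major (tM, tN) : (tN, 1) thread layout: thread t sits at (t / tN, t % tN).
        int const row_read{t / tN};
        int const column_read{t % tN};
        // Column-major (tN, tM) : (1, tN) thread layout: thread t sits at (t % tN, t / tN).
        int const row_write{t % tN};
        int const column_write{t / tN};
        std::cout << "thread " << t << ": read layout -> (" << row_read << ", "
                  << column_read << "), write layout -> (" << row_write << ", "
                  << column_write << ")" << std::endl;
    }
    return 0;
}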
Tensor Partitions
There are three major kinds of partitions in CuTe: inner-partition, outer-partition, and thread-value partition.
Inner-partition has already been performed above, where we divided the input matrix and the output matrix into tiles. Inner-partition is usually performed at the CUDA thread block level, distributing a large problem into smaller problems that can each be solved by a single CUDA thread block.
Outer-partition is usually performed at the CUDA thread level, distributing the smaller problems into even smaller problems that can each be solved by a single CUDA thread. There is a difference between inner-partition and outer-partition, without understanding which the implementation cannot work correctly.
Suppose we have a CuTe layout $(8, 4) : (4, 1)$ and a tile layout $(4, 2) : (2, 1)$. Inner-partition will result in a layout of shape $\left((4, 2), \frac{8}{4}, \frac{4}{2}\right) = \left((4, 2), 2, 2\right)$, and outer-partition will result in a layout of shape $\left(\left(\frac{8}{4}, \frac{4}{2}\right), 4, 2\right) = \left((2, 2), 4, 2\right)$. Assuming the partitions are indexed using the last two modes of the layout, the inner-partition layout has 4 partitions whereas the outer-partition layout has 8 partitions. The starting coordinates of the partitions are also different. In this case, the starting coordinates of the inner-partitions are $(0, 0)$, $(4, 0)$, $(0, 2)$, and $(4, 2)$, whereas the starting coordinates of the outer-partitions are $(0, 0)$, $(1, 0)$, $(2, 0)$, $(3, 0)$, $(0, 1)$, $(1, 1)$, $(2, 1)$, and $(3, 1)$. Outer-partition is usually performed at the CUDA thread level because consecutive threads in a warp that access contiguous data, especially on global memory, achieve better performance thanks to coalesced memory access.
In each partition, the partition tensor will follow the layout algebra and apply the correct strides to access the data during iteration.
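The difference can also be inspected directly with CuTe. The following host-only sketch (my own illustration; it is not part of the transpose implementation) applies cute::local_tile, which performs an inner-partition, and cute::local_partition, which performs an outer-partition, to the $(8, 4) : (4, 1)$ example from the previous paragraph and prints the per-partition layouts.

#include <cstddef>
#include <cstdio>
#include <vector>
#include <cute/tensor.hpp>

int main()
{
    auto const layout{cute::make_layout(
        cute::make_shape(cute::Int<8>{}, cute::Int<4>{}),
        cute::GenRowMajor{})}; // (8, 4) : (4, 1)
    std::vector<float> buffer(static_cast<std::size_t>(cute::cosize(layout)));
    auto const tensor{cute::make_tensor(buffer.data(), layout)};

    auto const tile_shape{cute::make_shape(cute::Int<4>{}, cute::Int<2>{})};
    auto const tile_layout{
        cute::make_layout(tile_shape, cute::GenRowMajor{})}; // (4, 2) : (2, 1)

    // Inner-partition: the (4, 2) tile at tile coordinate (0, 0); there are 2 x 2 = 4 such tiles.
    auto const inner{cute::local_tile(tensor, tile_shape, cute::make_coord(0, 0))};
    // Outer-partition: the (2, 2) elements owned by "thread" 0 of the tile layout;
    // there are 4 x 2 = 8 such partitions, one per thread.
    auto const outer{cute::local_partition(tensor, tile_layout, 0)};

    cute::print(inner.layout()); // (4, 2) per-partition layout
    std::printf("\n");
    cute::print(outer.layout()); // (2, 2) per-partition layout
    std::printf("\n");
    return 0;
}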
auto global_tile_src{tensor_src(cute::make_coord(cute::_, cute::_), blockIdx.y,
                                blockIdx.x)}; // (TileSizeY, TileSizeX)
auto global_tile_dst_transposed{
    tensor_dst_transposed(cute::make_coord(cute::_, cute::_), blockIdx.y,
                          blockIdx.x)}; // (TileSizeY, TileSizeX)

auto thread_global_tile_src{cute::local_partition(
    global_tile_src, ThreadLayout{},
    threadIdx.x)}; // (ThreadValueSizeY, ThreadValueSizeX)
auto thread_global_tile_dst_transposed{cute::local_partition(
    global_tile_dst_transposed, ThreadLayout{},
    threadIdx.x)}; // (ThreadValueSizeY, ThreadValueSizeX)
Predicates and Boundary Checking
CUDA memory access boundary checking is critical in practice, whenever the problem sizes are not perfectly divisible by the tile sizes. In CuTe, boundary checking is performed via predicates. In our particular implementation, we query the matrix sizes $M$ and $N$ from the strides of the tiled tensors; it is also common to simply pass these two values to the CUDA kernel.
During the iteration over the CuTe tensor, the iterator has to know its 2D coordinate and check whether the element it is about to access is within the boundary. So we create a 2D identity tensor (a 2D array of coordinate tuples) whose shape is exactly the same as the tile partitioned from global memory. The 2D identity tensor takes a 2D coordinate as input and produces the same 2D coordinate as output. If the partition tensor and the identity tensor are iterated together, the iterator can recover its current coordinate within the partition tensor, making boundary checking possible. At the CUDA thread level, the 2D identity tensor is further partitioned into a 2D thread identity tensor according to the same thread layout that is used for partitioning the data. The predicates used for accessing the partitioned input and output tensors can then be computed from the iterator's 2D coordinates, the block index, and the shape of the original full tensor.
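For intuition, the following host-only sketch (my own illustration, not part of the kernel) builds a small identity tensor and prints its elements: each element is simply its own coordinate, which is exactly what the predicate construction shown below relies on.

#include <cstdio>
#include <cute/tensor.hpp>

int main()
{
    // A (4, 2) identity tensor: the value at coordinate (i, j) is the coordinate (i, j) itself.
    auto const identity_tensor{
        cute::make_identity_tensor(cute::make_shape(cute::Int<4>{}, cute::Int<2>{}))};
    for (int i{0}; i < 4; ++i)
    {
        for (int j{0}; j < 2; ++j)
        {
            auto const coord{identity_tensor(i, j)};
            std::printf("(%d, %d) ", int(cute::get<0>(coord)), int(cute::get<1>(coord)));
        }
        std::printf("\n");
    }
    return 0;
}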
// A 2D array of tuples that maps (x, y) to (x, y).
auto const identity_tensor{cute::make_identity_tensor(cute::make_shape(
    cute::size<0>(global_tile_src), cute::size<1>(global_tile_src)))};
auto const thread_identity_tensor{
    cute::local_partition(identity_tensor, ThreadLayout{}, threadIdx.x)};

auto fragment{cute::make_tensor_like(thread_global_tile_src)};
auto predicator{cute::make_tensor<bool>(
    cute::make_shape(cute::size<0>(fragment), cute::size<1>(fragment)))};

auto const num_max_columns{cute::stride<0>(global_tile_src)};
auto const num_max_rows{cute::stride<1>(global_tile_dst_transposed)};
constexpr auto global_tile_columns{cute::size<1>(global_tile_src)};
constexpr auto global_tile_rows{cute::size<0>(global_tile_src)};

CUTE_UNROLL
for (unsigned int i{0}; i < cute::size<0>(predicator); ++i)
{
    CUTE_UNROLL
    for (unsigned int j{0}; j < cute::size<1>(predicator); ++j)
    {
        auto const thread_identity{thread_identity_tensor(i, j)};
        bool const is_row_in_bound{cute::get<0>(thread_identity) +
                                       blockIdx.y * global_tile_rows <
                                   num_max_rows};
        bool const is_column_in_bound{cute::get<1>(thread_identity) +
                                          blockIdx.x * global_tile_columns <
                                      num_max_columns};
        predicator(i, j) = is_row_in_bound && is_column_in_bound;
    }
}

cute::copy_if(predicator, thread_global_tile_src, fragment);
cute::copy_if(predicator, fragment, thread_global_tile_dst_transposed);

// Alternatively, we could just do the following instead.
// cute::copy_if(predicator, thread_global_tile_src,
//               thread_global_tile_dst_transposed);
Using predicates and performing boundary checking can degrade CUDA kernel performance, because a warp instruction, such as a load from global memory, has to stall until the predicates of all the threads in the warp have been evaluated. To get the best performance, specialized kernels are usually generated for each problem configuration so that boundary checking can be eliminated from the CUDA kernel.
In that case, cute::copy should be used instead of cute::copy_if.
cute::copy(thread_global_tile_src, fragment);
cute::copy(fragment, thread_global_tile_dst_transposed);
CuTe Matrix Transpose Using Shared Memory
In the CuTe matrix transpose CUDA kernel implementation using shared memory, we will perform strided memory reads and writes on shared memory instead of global memory. Using shared memory naively will result in shared memory bank conflicts when performing strided memory reads or writes on shared memory, which will degrade the performance. To mitigate the shared memory bank conflicts, we will also perform special optimizations, such as shared memory padding and swizzling.
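As a quick illustration of the bank conflict problem and the padding mitigation (a host-only sketch of my own, not from the original code; the $(32, 64)$ tile and 4-byte elements mirror a configuration used later, and the bank id is computed as the word offset modulo 32), the following program walks down one column of an unpadded row-major shared memory layout and of a padded $(32, 64) : (65, 1)$ layout: without padding every element of the column maps to bank 0, so a 32-thread strided access is a 32-way conflict, while with padding the bank id advances by one per row and the access is conflict-free.

#include <iostream>
#include <cute/tensor.hpp>

int main()
{
    // Unpadded row-major (32, 64) : (64, 1) shared memory layout of 4-byte elements.
    auto const layout_unpadded{cute::make_layout(
        cute::make_shape(cute::Int<32>{}, cute::Int<64>{}), cute::GenRowMajor{})};
    // Padded (32, 64) : (65, 1) layout: one extra element of stride per row.
    auto const layout_padded{cute::make_layout(
        cute::make_shape(cute::Int<32>{}, cute::Int<64>{}),
        cute::make_stride(cute::Int<65>{}, cute::Int<1>{}))};
    // Walk down column 0, i.e., the strided access pattern of the transpose.
    for (int i{0}; i < 32; ++i)
    {
        std::cout << "row " << i << ": unpadded bank " << layout_unpadded(i, 0) % 32
                  << ", padded bank " << layout_padded(i, 0) % 32 << std::endl;
    }
    return 0;
}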
CuTe Matrix Transpose Using Shared Memory Implementation
The CuTe matrix transpose CUDA kernel implementation using shared memory can also be found in my CUTLASS Examples GitHub repository.
#include <iomanip>#include <iostream>#include <cuda_runtime.h>#include <cute/tensor.hpp>#include "cute_matrix_transpose.hpp"template <class TensorSrc, class TensorDst, class SharedMemoryLayoutSrc, class SharedMemoryLayoutDst, class ThreadLayoutSrc, class ThreadLayoutDst>__global__ void static matrix_transpose_shared_memory( TensorSrc tensor_src, TensorDst tensor_dst, SharedMemoryLayoutSrc, SharedMemoryLayoutDst, ThreadLayoutSrc, ThreadLayoutDst){ using Element = typename TensorSrc::value_type; CUTE_STATIC_ASSERT_V(cute::size(SharedMemoryLayoutSrc{}) == cute::size(SharedMemoryLayoutDst{}), "SharedMemoryLayoutSrc and SharedMemoryLayoutDst " "must have the same size."); __shared__ Element shared_memory[cute::cosize(SharedMemoryLayoutSrc{})]; auto tensor_cache_src{cute::make_tensor(cute::make_smem_ptr(shared_memory), SharedMemoryLayoutSrc{})}; auto tensor_cache_dst{cute::make_tensor(cute::make_smem_ptr(shared_memory), SharedMemoryLayoutDst{})}; auto global_tile_src{tensor_src(cute::make_coord(cute::_, cute::_), blockIdx.y, blockIdx.x)}; // (TileSizeY, TileSizeX) auto global_tile_dst{tensor_dst(cute::make_coord(cute::_, cute::_), blockIdx.y, blockIdx.x)}; // (TileSizeY, TileSizeX) auto thread_global_tile_src{cute::local_partition( global_tile_src, ThreadLayoutSrc{}, threadIdx.x)}; // (ThreadValueSizeY, ThreadValueSizeX) auto thread_global_tile_dst{cute::local_partition( global_tile_dst, ThreadLayoutDst{}, threadIdx.x)}; // (ThreadValueSizeX, ThreadValueSizeY) auto thread_shared_tile_src{cute::local_partition( tensor_cache_src, ThreadLayoutSrc{}, threadIdx.x)}; // (ThreadValueSizeY, ThreadValueSizeX) auto thread_shared_tile_dst{cute::local_partition( tensor_cache_dst, ThreadLayoutDst{}, threadIdx.x)}; // (ThreadValueSizeX, ThreadValueSizeY) // A 2D array of tuples that maps (x, y) to (x, y). 
auto const identity_tensor_src{cute::make_identity_tensor(cute::make_shape( cute::size<0>(global_tile_src), cute::size<1>(global_tile_src)))}; auto const thread_identity_tensor_src{cute::local_partition( identity_tensor_src, ThreadLayoutSrc{}, threadIdx.x)}; auto predicator_src{cute::make_tensor<bool>( cute::make_shape(cute::size<0>(thread_global_tile_src), cute::size<1>(thread_global_tile_src)))}; auto const identity_tensor_dst{cute::make_identity_tensor(cute::make_shape( cute::size<0>(global_tile_dst), cute::size<1>(global_tile_dst)))}; auto const thread_identity_tensor_dst{cute::local_partition( identity_tensor_dst, ThreadLayoutDst{}, threadIdx.x)}; auto predicator_dst{cute::make_tensor<bool>( cute::make_shape(cute::size<0>(thread_global_tile_dst), cute::size<1>(thread_global_tile_dst)))}; auto const num_max_columns{cute::stride<0>(global_tile_src)}; auto const num_max_rows{cute::stride<1>(global_tile_dst)}; constexpr auto global_tile_columns{cute::size<1>(global_tile_src)}; constexpr auto global_tile_rows{cute::size<0>(global_tile_src)}; CUTE_UNROLL for (unsigned int i{0}; i < cute::size<0>(predicator_src); ++i) { CUTE_UNROLL for (unsigned int j{0}; j < cute::size<1>(predicator_src); ++j) { auto const thread_identity{thread_identity_tensor_src(i, j)}; bool const is_row_in_bound{cute::get<0>(thread_identity) + blockIdx.y * global_tile_rows < num_max_rows}; bool const is_column_in_bound{cute::get<1>(thread_identity) + blockIdx.x * global_tile_columns < num_max_columns}; predicator_src(i, j) = is_row_in_bound && is_column_in_bound; } } CUTE_UNROLL for (unsigned int i{0}; i < cute::size<0>(predicator_dst); ++i) { CUTE_UNROLL for (unsigned int j{0}; j < cute::size<1>(predicator_dst); ++j) { auto const thread_identity{thread_identity_tensor_dst(i, j)}; bool const is_row_in_bound{cute::get<0>(thread_identity) + blockIdx.y * global_tile_rows < num_max_rows}; bool const is_column_in_bound{cute::get<1>(thread_identity) + blockIdx.x * global_tile_columns < num_max_columns}; predicator_dst(i, j) = is_row_in_bound && is_column_in_bound; } } cute::copy_if(predicator_src, thread_global_tile_src, thread_shared_tile_src); cute::cp_async_fence(); cute::cp_async_wait<0>(); __syncthreads(); cute::copy_if(predicator_dst, thread_shared_tile_dst, thread_global_tile_dst);}template <class TensorSrc, class TensorDst, class SharedMemoryLayoutSrc, class SharedMemoryLayoutDst, class ThreadLayoutSrc, class ThreadLayoutDst, class VectorLayout>__global__ void static matrix_transpose_shared_memory_vectorized( TensorSrc tensor_src, TensorDst tensor_dst, SharedMemoryLayoutSrc, SharedMemoryLayoutDst, ThreadLayoutSrc, ThreadLayoutDst, VectorLayout){ using Element = typename TensorSrc::value_type; CUTE_STATIC_ASSERT_V(cute::size(SharedMemoryLayoutSrc{}) == cute::size(SharedMemoryLayoutDst{}), "SharedMemoryLayoutSrc and SharedMemoryLayoutDst " "must have the same size."); __shared__ Element shared_memory[cute::cosize(SharedMemoryLayoutSrc{})]; auto tensor_cache_src{cute::make_tensor(cute::make_smem_ptr(shared_memory), SharedMemoryLayoutSrc{})}; auto tensor_cache_dst{cute::make_tensor(cute::make_smem_ptr(shared_memory), SharedMemoryLayoutDst{})}; auto global_tile_src{tensor_src(cute::make_coord(cute::_, cute::_), blockIdx.y, blockIdx.x)}; // (TileSizeY, TileSizeX) auto global_tile_dst{tensor_dst(cute::make_coord(cute::_, cute::_), blockIdx.y, blockIdx.x)}; // (TileSizeY, TileSizeX) using AccessType = cutlass::AlignedArray<Element, cute::size(VectorLayout{})>; using CopyAtom = cute::Copy_Atom<cute::UniversalCopy<AccessType>, 
Element>; auto tiled_copy_src{ cute::make_tiled_copy(CopyAtom{}, ThreadLayoutSrc{}, VectorLayout{})}; auto thread_copy_src{tiled_copy_src.get_thread_slice(threadIdx.x)}; auto thread_global_tile_src{thread_copy_src.partition_S( global_tile_src)}; // (CopyAtomShape, NumCopyTile) auto thread_shared_tile_src{thread_copy_src.partition_D( tensor_cache_src)}; // (CopyAtomShape, NumCopyTile) auto thread_global_tile_dst{cute::local_partition( global_tile_dst, ThreadLayoutDst{}, threadIdx.x)}; // (ThreadValueSizeX, ThreadValueSizeY) auto thread_shared_tile_dst{cute::local_partition( tensor_cache_dst, ThreadLayoutDst{}, threadIdx.x)}; // (ThreadValueSizeX, ThreadValueSizeY) auto const num_max_columns{cute::stride<0>(global_tile_src)}; auto const num_max_rows{cute::stride<1>(global_tile_dst)}; constexpr auto global_tile_columns{cute::size<1>(global_tile_src)}; constexpr auto global_tile_rows{cute::size<0>(global_tile_src)}; // A 2D array of tuples that maps (x, y) to (x, y). auto const identity_tensor_src{cute::make_identity_tensor(cute::make_shape( cute::size<0>(global_tile_src), cute::size<1>(global_tile_src)))}; auto thread_identity_tensor_src{thread_copy_src.partition_S( identity_tensor_src)}; // (CopyAtomShape, NumCopyTile) auto predicator_src{cute::make_tensor<bool>( cute::make_shape(cute::size<1>(thread_global_tile_src), cute::size<2>(thread_global_tile_src)))}; CUTE_UNROLL for (unsigned int i{0}; i < cute::size<0>(predicator_src); ++i) { CUTE_UNROLL for (unsigned int j{0}; j < cute::size<1>(predicator_src); ++j) { auto const thread_identity{thread_identity_tensor_src(0, i, j)}; bool const is_row_in_bound{cute::get<0>(thread_identity) + blockIdx.y * global_tile_rows < num_max_rows}; bool const is_column_in_bound{cute::get<1>(thread_identity) + blockIdx.x * global_tile_columns < num_max_columns}; predicator_src(i, j) = is_row_in_bound && is_column_in_bound; } } auto const identity_tensor_dst{cute::make_identity_tensor(cute::make_shape( cute::size<0>(global_tile_dst), cute::size<1>(global_tile_dst)))}; auto const thread_identity_tensor_dst{cute::local_partition( identity_tensor_dst, ThreadLayoutDst{}, threadIdx.x)}; auto predicator_dst{cute::make_tensor<bool>( cute::make_shape(cute::size<0>(thread_global_tile_dst), cute::size<1>(thread_global_tile_dst)))}; CUTE_UNROLL for (unsigned int i{0}; i < cute::size<0>(predicator_dst); ++i) { CUTE_UNROLL for (unsigned int j{0}; j < cute::size<1>(predicator_dst); ++j) { auto const thread_identity{thread_identity_tensor_dst(i, j)}; bool const is_row_in_bound{cute::get<0>(thread_identity) + blockIdx.y * global_tile_rows < num_max_rows}; bool const is_column_in_bound{cute::get<1>(thread_identity) + blockIdx.x * global_tile_columns < num_max_columns}; predicator_dst(i, j) = is_row_in_bound && is_column_in_bound; } } cute::copy_if(tiled_copy_src, predicator_src, thread_global_tile_src, thread_shared_tile_src); cute::cp_async_fence(); cute::cp_async_wait<0>(); __syncthreads(); cute::copy_if(predicator_dst, thread_shared_tile_dst, thread_global_tile_dst);}enum class SharedMemoryBankConflictAccessMode{ Read, Write};template <typename T>static cudaError_t launch_matrix_transpose_shared_memory_bank_conflict_base( T const* input_matrix, T* output_matrix, unsigned int M, unsigned int N, SharedMemoryBankConflictAccessMode bank_conflict_access_mode, cudaStream_t stream){ auto const tensor_shape{cute::make_shape(M, N)}; auto const tensor_shape_transposed{cute::make_shape(N, M)}; // Input matrix: row-major M x N matrix. 
auto const global_memory_layout_src{cute::make_layout( tensor_shape, cute::GenRowMajor{})}; // (M, N) : (N, 1) // Output matrix: row-major N x M matrix. auto const global_memory_layout_dst{cute::make_layout( tensor_shape_transposed, cute::GenRowMajor{})}; // (N, M) : (M, 1) // Same output matrix, but different view: column-major M x N matrix. auto const global_memory_layout_dst_transposed{cute::make_layout( tensor_shape, cute::GenColMajor{})}; // (M, N) : (1, M) auto const tensor_src{cute::make_tensor(cute::make_gmem_ptr(input_matrix), global_memory_layout_src)}; auto const tensor_dst{cute::make_tensor(cute::make_gmem_ptr(output_matrix), global_memory_layout_dst)}; auto const tensor_dst_transposed{ cute::make_tensor(cute::make_gmem_ptr(output_matrix), global_memory_layout_dst_transposed)}; using TileSizeX = cute::Int<128>; // bN using TileSizeY = cute::Int<32>; // bM constexpr auto block_shape{cute::make_shape(TileSizeY{}, TileSizeX{})}; constexpr auto block_shape_transposed{ cute::make_shape(TileSizeX{}, TileSizeY{})}; auto const shared_memory_layout_src{cute::make_layout( block_shape, cute::GenRowMajor{})}; // (bM, bN) : (bN, 1) auto const shared_memory_layout_dst{cute::make_layout( block_shape_transposed, cute::GenRowMajor{})}; // (bN, bM) : (bM, 1) auto const shared_memory_layout_dst_transposed{cute::make_layout( block_shape, cute::GenColMajor{})}; // (bM, bN) : (1, bM) auto const tiled_tensor_src{cute::tiled_divide( tensor_src, block_shape)}; // ((TileSizeY, TileSizeX), M / // TileSizeY, N / TileSizeX) auto const tiled_tensor_dst{cute::tiled_divide( tensor_dst, block_shape_transposed)}; // ((TileSizeX, TileSizeY), N // / TileSizeX, M / TileSizeY) auto const tiled_tensor_dst_transposed{cute::tiled_divide( tensor_dst_transposed, block_shape)}; // ((TileSizeY, TileSizeX), M // / TileSizeY, N / TileSizeX) using ThreadBlockSizeX = cute::Int<32>; // tN using ThreadBlockSizeY = cute::Int<8>; // tM CUTE_STATIC_ASSERT_V(TileSizeX{} % ThreadBlockSizeX{} == cute::Int<0>{}, "TileSizeX must be divisible by ThreadBlockSizeX"); CUTE_STATIC_ASSERT_V(TileSizeY{} % ThreadBlockSizeY{} == cute::Int<0>{}, "TileSizeY must be divisible by ThreadBlockSizeY"); constexpr auto thread_block_shape{ cute::make_shape(ThreadBlockSizeY{}, ThreadBlockSizeX{})}; constexpr auto thread_block_shape_transposed{ cute::make_shape(ThreadBlockSizeX{}, ThreadBlockSizeY{})}; constexpr auto thread_layout{ cute::make_layout(thread_block_shape, cute::GenRowMajor{})}; constexpr auto thread_layout_transposed{ cute::make_layout(thread_block_shape_transposed, cute::GenColMajor{})}; dim3 const grid_dim{cute::size<2>(tiled_tensor_src), cute::size<1>(tiled_tensor_src)}; dim3 const thread_dim{ThreadBlockSizeX::value * ThreadBlockSizeY::value}; if (bank_conflict_access_mode == SharedMemoryBankConflictAccessMode::Read) { matrix_transpose_shared_memory<<<grid_dim, thread_dim, 0, stream>>>( tiled_tensor_src, tiled_tensor_dst_transposed, shared_memory_layout_src, shared_memory_layout_src, thread_layout, thread_layout_transposed); } else { matrix_transpose_shared_memory<<<grid_dim, thread_dim, 0, stream>>>( tiled_tensor_src, tiled_tensor_dst_transposed, shared_memory_layout_dst_transposed, shared_memory_layout_dst_transposed, thread_layout, thread_layout_transposed); } return cudaGetLastError();}template <typename T>static cudaError_tlaunch_matrix_transpose_shared_memory_vectorized_bank_conflict_base( T const* input_matrix, T* output_matrix, unsigned int M, unsigned int N, SharedMemoryBankConflictAccessMode bank_conflict_access_mode, 
cudaStream_t stream){ using VectorType = cute::uint128_t; CUTE_STATIC_ASSERT(sizeof(VectorType) % sizeof(T) == 0, "sizeof(VectorType) must be a multiple of sizeof(T)"); constexpr unsigned int NUM_VECTOR_ELEMENTS{sizeof(VectorType) / sizeof(T)}; if (N % NUM_VECTOR_ELEMENTS != 0) { return cudaErrorInvalidValue; } auto const tensor_shape{cute::make_shape(M, N)}; auto const tensor_shape_transposed{cute::make_shape(N, M)}; // Input matrix: row-major M x N matrix. auto const global_memory_layout_src{cute::make_layout( tensor_shape, cute::GenRowMajor{})}; // (M, N) : (N, 1) // Output matrix: row-major N x M matrix. auto const global_memory_layout_dst{cute::make_layout( tensor_shape_transposed, cute::GenRowMajor{})}; // (N, M) : (M, 1) // Same output matrix, but different view: column-major M x N matrix. auto const global_memory_layout_dst_transposed{cute::make_layout( tensor_shape, cute::GenColMajor{})}; // (M, N) : (1, M) auto const tensor_src{cute::make_tensor(cute::make_gmem_ptr(input_matrix), global_memory_layout_src)}; auto const tensor_dst{cute::make_tensor(cute::make_gmem_ptr(output_matrix), global_memory_layout_dst)}; auto const tensor_dst_transposed{ cute::make_tensor(cute::make_gmem_ptr(output_matrix), global_memory_layout_dst_transposed)}; using TileSizeX = cute::Int<128>; // bN using TileSizeY = cute::Int<32>; // bM constexpr auto block_shape{cute::make_shape(TileSizeY{}, TileSizeX{})}; constexpr auto block_shape_transposed{ cute::make_shape(TileSizeX{}, TileSizeY{})}; auto const shared_memory_layout_src{cute::make_layout( block_shape, cute::GenRowMajor{})}; // (bM, bN) : (bN, 1) auto const shared_memory_layout_dst{cute::make_layout( block_shape_transposed, cute::GenRowMajor{})}; // (bN, bM) : (bM, 1) auto const shared_memory_layout_dst_transposed{cute::make_layout( block_shape, cute::GenColMajor{})}; // (bM, bN) : (1, bM) auto const tiled_tensor_src{cute::tiled_divide( tensor_src, block_shape)}; // ((TileSizeY, TileSizeX), M / // TileSizeY, N / TileSizeX) auto const tiled_tensor_dst{cute::tiled_divide( tensor_dst, block_shape_transposed)}; // ((TileSizeX, TileSizeY), N // / TileSizeX, M / TileSizeY) auto const tiled_tensor_dst_transposed{cute::tiled_divide( tensor_dst_transposed, block_shape)}; // ((TileSizeY, TileSizeX), M // / TileSizeY, N / TileSizeX) using ThreadBlockSizeX = cute::Int<32>; // tN using ThreadBlockSizeY = cute::Int<8>; // tM CUTE_STATIC_ASSERT_V(TileSizeX{} % ThreadBlockSizeX{} == cute::Int<0>{}, "TileSizeX must be divisible by ThreadBlockSizeX"); CUTE_STATIC_ASSERT_V(TileSizeY{} % ThreadBlockSizeY{} == cute::Int<0>{}, "TileSizeY must be divisible by ThreadBlockSizeY"); constexpr auto thread_block_shape{ cute::make_shape(ThreadBlockSizeY{}, ThreadBlockSizeX{})}; constexpr auto thread_block_shape_transposed{ cute::make_shape(ThreadBlockSizeX{}, ThreadBlockSizeY{})}; constexpr auto thread_layout{ cute::make_layout(thread_block_shape, cute::GenRowMajor{})}; constexpr auto thread_layout_transposed{ cute::make_layout(thread_block_shape_transposed, cute::GenColMajor{})}; using VECTOR_SIZE_X = cute::Int<NUM_VECTOR_ELEMENTS>; constexpr auto vector_shape{ cute::make_shape(cute::Int<1>{}, VECTOR_SIZE_X{})}; // Copy atom vector layout. 
constexpr auto vector_layout{ cute::make_layout(vector_shape, cute::GenRowMajor{})}; dim3 const grid_dim{cute::size<2>(tiled_tensor_src), cute::size<1>(tiled_tensor_src)}; dim3 const thread_dim{ThreadBlockSizeX::value * ThreadBlockSizeY::value}; if (bank_conflict_access_mode == SharedMemoryBankConflictAccessMode::Read) { matrix_transpose_shared_memory_vectorized<<<grid_dim, thread_dim, 0, stream>>>( tiled_tensor_src, tiled_tensor_dst_transposed, shared_memory_layout_src, shared_memory_layout_src, thread_layout, thread_layout_transposed, vector_layout); } else { return cudaErrorInvalidValue; } return cudaGetLastError();}template <typename T>cudaError_t launch_matrix_transpose_shared_memory_bank_conflict_read( T const* input_matrix, T* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream){ return launch_matrix_transpose_shared_memory_bank_conflict_base( input_matrix, output_matrix, M, N, SharedMemoryBankConflictAccessMode::Read, stream);}template <typename T>cudaError_t launch_matrix_transpose_shared_memory_bank_conflict_write( T const* input_matrix, T* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream){ return launch_matrix_transpose_shared_memory_bank_conflict_base( input_matrix, output_matrix, M, N, SharedMemoryBankConflictAccessMode::Write, stream);}template <typename T>cudaError_t launch_matrix_transpose_shared_memory_vectorized_bank_conflict_read( T const* input_matrix, T* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream){ return launch_matrix_transpose_shared_memory_vectorized_bank_conflict_base< T>(input_matrix, output_matrix, M, N, SharedMemoryBankConflictAccessMode::Read, stream);}template <typename T>static cudaError_t launch_matrix_transpose_shared_memory_padded( T const* input_matrix, T* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream){ auto const tensor_shape{cute::make_shape(M, N)}; auto const tensor_shape_transposed{cute::make_shape(N, M)}; // Input matrix: row-major M x N matrix. auto const global_memory_layout_src{cute::make_layout( tensor_shape, cute::GenRowMajor{})}; // (M, N) : (N, 1) // Output matrix: row-major N x M matrix. auto const global_memory_layout_dst{cute::make_layout( tensor_shape_transposed, cute::GenRowMajor{})}; // (N, M) : (M, 1) // Same output matrix, but different view: column-major M x N matrix. 
auto const global_memory_layout_dst_transposed{cute::make_layout( tensor_shape, cute::GenColMajor{})}; // (M, N) : (1, M) auto const tensor_src{cute::make_tensor(cute::make_gmem_ptr(input_matrix), global_memory_layout_src)}; auto const tensor_dst{cute::make_tensor(cute::make_gmem_ptr(output_matrix), global_memory_layout_dst)}; auto const tensor_dst_transposed{ cute::make_tensor(cute::make_gmem_ptr(output_matrix), global_memory_layout_dst_transposed)}; using TileSizeX = cute::Int<64>; // bN using TILE_SIZE_X_PADDED = cute::Int<65>; // bN + 1 using TileSizeY = cute::Int<32>; // bM constexpr auto block_shape{cute::make_shape(TileSizeY{}, TileSizeX{})}; constexpr auto block_shape_transposed{ cute::make_shape(TileSizeX{}, TileSizeY{})}; auto const shared_memory_layout_src{cute::make_layout( block_shape, cute::GenRowMajor{})}; // (bM, bN) : (bN, 1) auto const shared_memory_layout_src_padded{cute::make_layout( block_shape, cute::make_stride(TILE_SIZE_X_PADDED{}, cute::Int<1>{}))}; // (bM, bN) : (bN + 1, 1) auto const shared_memory_layout_dst{cute::make_layout( block_shape_transposed, cute::GenRowMajor{})}; // (bN, bM) : (bM, 1) auto const shared_memory_layout_dst_transposed{cute::make_layout( block_shape, cute::GenColMajor{})}; // (bM, bN) : (1, bM) auto const tiled_tensor_src{cute::tiled_divide( tensor_src, block_shape)}; // ((TileSizeY, TileSizeX), M / // TileSizeY, N / TileSizeX) auto const tiled_tensor_dst{cute::tiled_divide( tensor_dst, block_shape_transposed)}; // ((TileSizeX, TileSizeY), N // / TileSizeX, M / TileSizeY) auto const tiled_tensor_dst_transposed{cute::tiled_divide( tensor_dst_transposed, block_shape)}; // ((TileSizeY, TileSizeX), M // / TileSizeY, N / TileSizeX) using ThreadBlockSizeX = cute::Int<32>; // tN using ThreadBlockSizeY = cute::Int<8>; // tM CUTE_STATIC_ASSERT_V(TileSizeX{} % ThreadBlockSizeX{} == cute::Int<0>{}, "TileSizeX must be divisible by ThreadBlockSizeX"); CUTE_STATIC_ASSERT_V(TileSizeY{} % ThreadBlockSizeY{} == cute::Int<0>{}, "TileSizeY must be divisible by ThreadBlockSizeY"); constexpr auto thread_block_shape{ cute::make_shape(ThreadBlockSizeY{}, ThreadBlockSizeX{})}; constexpr auto thread_block_shape_transposed{ cute::make_shape(ThreadBlockSizeX{}, ThreadBlockSizeY{})}; constexpr auto thread_layout{ cute::make_layout(thread_block_shape, cute::GenRowMajor{})}; constexpr auto thread_layout_transposed{ cute::make_layout(thread_block_shape_transposed, cute::GenColMajor{})}; dim3 const grid_dim{cute::size<2>(tiled_tensor_src), cute::size<1>(tiled_tensor_src)}; dim3 const thread_dim{ThreadBlockSizeX::value * ThreadBlockSizeY::value}; matrix_transpose_shared_memory<<<grid_dim, thread_dim, 0, stream>>>( tiled_tensor_src, tiled_tensor_dst_transposed, shared_memory_layout_src_padded, shared_memory_layout_src_padded, thread_layout, thread_layout_transposed); return cudaGetLastError();}template <typename T>static cudaError_t launch_matrix_transpose_shared_memory_vectorized_padded( T const* input_matrix, T* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream){ using VectorType = cute::uint128_t; CUTE_STATIC_ASSERT(sizeof(VectorType) % sizeof(T) == 0, "sizeof(VectorType) must be a multiple of sizeof(T)"); constexpr unsigned int NUM_VECTOR_ELEMENTS{sizeof(VectorType) / sizeof(T)}; if (N % NUM_VECTOR_ELEMENTS != 0) { return cudaErrorInvalidValue; } auto const tensor_shape{cute::make_shape(M, N)}; auto const tensor_shape_transposed{cute::make_shape(N, M)}; // Input matrix: row-major M x N matrix. 
auto const global_memory_layout_src{cute::make_layout( tensor_shape, cute::GenRowMajor{})}; // (M, N) : (N, 1) // Output matrix: row-major N x M matrix. auto const global_memory_layout_dst{cute::make_layout( tensor_shape_transposed, cute::GenRowMajor{})}; // (N, M) : (M, 1) // Same output matrix, but different view: column-major M x N matrix. auto const global_memory_layout_dst_transposed{cute::make_layout( tensor_shape, cute::GenColMajor{})}; // (M, N) : (1, M) auto const tensor_src{cute::make_tensor(cute::make_gmem_ptr(input_matrix), global_memory_layout_src)}; auto const tensor_dst{cute::make_tensor(cute::make_gmem_ptr(output_matrix), global_memory_layout_dst)}; auto const tensor_dst_transposed{ cute::make_tensor(cute::make_gmem_ptr(output_matrix), global_memory_layout_dst_transposed)}; using TileSizeX = cute::Int<128>; // bN // Such padding is necessary for the byte alignment of the vectorized // access. However, the shared memory bank conflict mitigation can be // compromised. using TILE_SIZE_X_PADDED = cute::Int<128 + NUM_VECTOR_ELEMENTS>; // bN + NUM_VECTOR_ELEMENTS using TileSizeY = cute::Int<32>; // bM constexpr auto block_shape{cute::make_shape(TileSizeY{}, TileSizeX{})}; constexpr auto block_shape_transposed{ cute::make_shape(TileSizeX{}, TileSizeY{})}; auto const shared_memory_layout_src{cute::make_layout( block_shape, cute::GenRowMajor{})}; // (bM, bN) : (bN, 1) auto const shared_memory_layout_src_padded{cute::make_layout( block_shape, cute::make_stride(TILE_SIZE_X_PADDED{}, cute::Int<1>{}))}; // (bM, bN) : (bN + 1, 1) auto const shared_memory_layout_dst{cute::make_layout( block_shape_transposed, cute::GenRowMajor{})}; // (bN, bM) : (bM, 1) auto const shared_memory_layout_dst_transposed{cute::make_layout( block_shape, cute::GenColMajor{})}; // (bM, bN) : (1, bM) auto const tiled_tensor_src{cute::tiled_divide( tensor_src, block_shape)}; // ((TileSizeY, TileSizeX), M / // TileSizeY, N / TileSizeX) auto const tiled_tensor_dst{cute::tiled_divide( tensor_dst, block_shape_transposed)}; // ((TileSizeX, TileSizeY), N // / TileSizeX, M / TileSizeY) auto const tiled_tensor_dst_transposed{cute::tiled_divide( tensor_dst_transposed, block_shape)}; // ((TileSizeY, TileSizeX), M // / TileSizeY, N / TileSizeX) using ThreadBlockSizeX = cute::Int<32>; // tN using ThreadBlockSizeY = cute::Int<8>; // tM CUTE_STATIC_ASSERT_V(TileSizeX{} % ThreadBlockSizeX{} == cute::Int<0>{}, "TileSizeX must be divisible by ThreadBlockSizeX"); CUTE_STATIC_ASSERT_V(TileSizeY{} % ThreadBlockSizeY{} == cute::Int<0>{}, "TileSizeY must be divisible by ThreadBlockSizeY"); constexpr auto thread_block_shape{ cute::make_shape(ThreadBlockSizeY{}, ThreadBlockSizeX{})}; constexpr auto thread_block_shape_transposed{ cute::make_shape(ThreadBlockSizeX{}, ThreadBlockSizeY{})}; constexpr auto thread_layout{ cute::make_layout(thread_block_shape, cute::GenRowMajor{})}; constexpr auto thread_layout_transposed{ cute::make_layout(thread_block_shape_transposed, cute::GenColMajor{})}; dim3 const grid_dim{cute::size<2>(tiled_tensor_src), cute::size<1>(tiled_tensor_src)}; dim3 const thread_dim{ThreadBlockSizeX::value * ThreadBlockSizeY::value}; using VECTOR_SIZE_X = cute::Int<NUM_VECTOR_ELEMENTS>; constexpr auto vector_shape{ cute::make_shape(cute::Int<1>{}, VECTOR_SIZE_X{})}; // Copy atom vector layout. 
constexpr auto vector_layout{ cute::make_layout(vector_shape, cute::GenRowMajor{})}; matrix_transpose_shared_memory_vectorized<<<grid_dim, thread_dim, 0, stream>>>( tiled_tensor_src, tiled_tensor_dst_transposed, shared_memory_layout_src_padded, shared_memory_layout_src_padded, thread_layout, thread_layout_transposed, vector_layout); return cudaGetLastError();}template <class SHARED_MEMORY_LAYOUT>static voidprint_shared_memory_bank_ids(SHARED_MEMORY_LAYOUT shared_memory_layout){ // Print the shared memory bank ids. for (unsigned int i{0}; i < cute::size<0>(shared_memory_layout); ++i) { for (unsigned int j{0}; j < cute::size<1>(shared_memory_layout); ++j) { std::cout << std::setw(2) << shared_memory_layout(i, j) % 32 << " "; } std::cout << std::endl; }}constexpr int constexpr_log2(int n){ return ((n < 2) ? 0 : 1 + constexpr_log2(n / 2));}template <typename T>static cudaError_t launch_matrix_transpose_shared_memory_swizzled( T const* input_matrix, T* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream){ auto const tensor_shape{cute::make_shape(M, N)}; auto const tensor_shape_transposed{cute::make_shape(N, M)}; // Input matrix: row-major M x N matrix. auto const global_memory_layout_src{cute::make_layout( tensor_shape, cute::GenRowMajor{})}; // (M, N) : (N, 1) // Output matrix: row-major N x M matrix. auto const global_memory_layout_dst{cute::make_layout( tensor_shape_transposed, cute::GenRowMajor{})}; // (N, M) : (M, 1) // Same output matrix, but different view: column-major M x N matrix. auto const global_memory_layout_dst_transposed{cute::make_layout( tensor_shape, cute::GenColMajor{})}; // (M, N) : (1, M) auto const tensor_src{cute::make_tensor(cute::make_gmem_ptr(input_matrix), global_memory_layout_src)}; auto const tensor_dst{cute::make_tensor(cute::make_gmem_ptr(output_matrix), global_memory_layout_dst)}; auto const tensor_dst_transposed{ cute::make_tensor(cute::make_gmem_ptr(output_matrix), global_memory_layout_dst_transposed)}; using TileSizeX = cute::Int<64>; // bN using TileSizeY = cute::Int<32>; // bM constexpr int NUM_BASE_BITS{constexpr_log2(1)}; constexpr int NUM_MASK_BITS{constexpr_log2(32 * 4 / sizeof(T)) - NUM_BASE_BITS}; constexpr int NUM_SHIFT_BITS{constexpr_log2(TileSizeX::value) - NUM_BASE_BITS}; constexpr auto block_shape{cute::make_shape(TileSizeY{}, TileSizeX{})}; constexpr auto block_shape_transposed{ cute::make_shape(TileSizeX{}, TileSizeY{})}; auto const shared_memory_layout_src{cute::make_layout( block_shape, cute::GenRowMajor{})}; // (bM, bN) : (bN, 1) auto const shared_memory_layout_dst{cute::make_layout( block_shape_transposed, cute::GenRowMajor{})}; // (bN, bM) : (bM, 1) auto const shared_memory_layout_dst_transposed{cute::make_layout( block_shape, cute::GenColMajor{})}; // (bM, bN) : (1, bM) auto const swizzle_src{ cute::Swizzle<NUM_MASK_BITS, NUM_BASE_BITS, NUM_SHIFT_BITS>{}}; auto const shared_memory_layout_swizzled_src{ cute::composition(swizzle_src, shared_memory_layout_src)}; // Inspect if the swizzling reduces the shared memory bank conflicts. 
// print_shared_memory_bank_ids(shared_memory_layout_swizzled_src); auto const tiled_tensor_src{cute::tiled_divide( tensor_src, block_shape)}; // ((TileSizeY, TileSizeX), M / // TileSizeY, N / TileSizeX) auto const tiled_tensor_dst{cute::tiled_divide( tensor_dst, block_shape_transposed)}; // ((TileSizeX, TileSizeY), N // / TileSizeX, M / TileSizeY) auto const tiled_tensor_dst_transposed{cute::tiled_divide( tensor_dst_transposed, block_shape)}; // ((TileSizeY, TileSizeX), M // / TileSizeY, N / TileSizeX) using ThreadBlockSizeX = cute::Int<32>; // tN using ThreadBlockSizeY = cute::Int<8>; // tM CUTE_STATIC_ASSERT_V(TileSizeX{} % ThreadBlockSizeX{} == cute::Int<0>{}, "TileSizeX must be divisible by ThreadBlockSizeX"); CUTE_STATIC_ASSERT_V(TileSizeY{} % ThreadBlockSizeY{} == cute::Int<0>{}, "TileSizeY must be divisible by ThreadBlockSizeY"); constexpr auto thread_block_shape{ cute::make_shape(ThreadBlockSizeY{}, ThreadBlockSizeX{})}; constexpr auto thread_block_shape_transposed{ cute::make_shape(ThreadBlockSizeX{}, ThreadBlockSizeY{})}; constexpr auto thread_layout{ cute::make_layout(thread_block_shape, cute::GenRowMajor{})}; constexpr auto thread_layout_transposed{ cute::make_layout(thread_block_shape_transposed, cute::GenColMajor{})}; dim3 const grid_dim{cute::size<2>(tiled_tensor_src), cute::size<1>(tiled_tensor_src)}; dim3 const thread_dim{ThreadBlockSizeX::value * ThreadBlockSizeY::value}; matrix_transpose_shared_memory<<<grid_dim, thread_dim, 0, stream>>>( tiled_tensor_src, tiled_tensor_dst_transposed, shared_memory_layout_swizzled_src, shared_memory_layout_swizzled_src, thread_layout, thread_layout_transposed); return cudaGetLastError();}template <typename T>static cudaError_t launch_matrix_transpose_shared_memory_vectorized_swizzled( T const* input_matrix, T* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream){ using VectorType = cute::uint128_t; CUTE_STATIC_ASSERT(sizeof(VectorType) % sizeof(T) == 0, "sizeof(VectorType) must be a multiple of sizeof(T)"); constexpr unsigned int NUM_VECTOR_ELEMENTS{sizeof(VectorType) / sizeof(T)}; if (N % NUM_VECTOR_ELEMENTS != 0) { return cudaErrorInvalidValue; } auto const tensor_shape{cute::make_shape(M, N)}; auto const tensor_shape_transposed{cute::make_shape(N, M)}; // Input matrix: row-major M x N matrix. auto const global_memory_layout_src{cute::make_layout( tensor_shape, cute::GenRowMajor{})}; // (M, N) : (N, 1) // Output matrix: row-major N x M matrix. auto const global_memory_layout_dst{cute::make_layout( tensor_shape_transposed, cute::GenRowMajor{})}; // (N, M) : (M, 1) // Same output matrix, but different view: column-major M x N matrix. 
auto const global_memory_layout_dst_transposed{cute::make_layout( tensor_shape, cute::GenColMajor{})}; // (M, N) : (1, M) auto const tensor_src{cute::make_tensor(cute::make_gmem_ptr(input_matrix), global_memory_layout_src)}; auto const tensor_dst{cute::make_tensor(cute::make_gmem_ptr(output_matrix), global_memory_layout_dst)}; auto const tensor_dst_transposed{ cute::make_tensor(cute::make_gmem_ptr(output_matrix), global_memory_layout_dst_transposed)}; using TileSizeX = cute::Int<128>; // bN using TileSizeY = cute::Int<32>; // bM constexpr int NUM_BASE_BITS{constexpr_log2(NUM_VECTOR_ELEMENTS)}; constexpr int NUM_MASK_BITS{constexpr_log2(32 * 4 / sizeof(T)) - NUM_BASE_BITS}; constexpr int NUM_SHIFT_BITS{constexpr_log2(TileSizeX::value) - NUM_BASE_BITS}; constexpr auto block_shape{cute::make_shape(TileSizeY{}, TileSizeX{})}; constexpr auto block_shape_transposed{ cute::make_shape(TileSizeX{}, TileSizeY{})}; auto const shared_memory_layout_src{cute::make_layout( block_shape, cute::GenRowMajor{})}; // (bM, bN) : (bN, 1) auto const shared_memory_layout_dst{cute::make_layout( block_shape_transposed, cute::GenRowMajor{})}; // (bN, bM) : (bM, 1) auto const shared_memory_layout_dst_transposed{cute::make_layout( block_shape, cute::GenColMajor{})}; // (bM, bN) : (1, bM) // Because of the vectorized access, NUM_BASE_BITS cannot be zero. // The shared memory bank conflict mitigation can be compromised. // Print the shared memory bank ids to see the details. auto const swizzle_src{ cute::Swizzle<NUM_MASK_BITS, NUM_BASE_BITS, NUM_SHIFT_BITS>{}}; auto const shared_memory_layout_swizzled_src{ cute::composition(swizzle_src, shared_memory_layout_src)}; // Inspect if the swizzling reduces the shared memory bank conflicts. // print_shared_memory_bank_ids(shared_memory_layout_swizzled_src); auto const tiled_tensor_src{cute::tiled_divide( tensor_src, block_shape)}; // ((TileSizeY, TileSizeX), M / // TileSizeY, N / TileSizeX) auto const tiled_tensor_dst{cute::tiled_divide( tensor_dst, block_shape_transposed)}; // ((TileSizeX, TileSizeY), N // / TileSizeX, M / TileSizeY) auto const tiled_tensor_dst_transposed{cute::tiled_divide( tensor_dst_transposed, block_shape)}; // ((TileSizeY, TileSizeX), M // / TileSizeY, N / TileSizeX) using ThreadBlockSizeX = cute::Int<32>; // tN using ThreadBlockSizeY = cute::Int<8>; // tM CUTE_STATIC_ASSERT_V(TileSizeX{} % ThreadBlockSizeX{} == cute::Int<0>{}, "TileSizeX must be divisible by ThreadBlockSizeX"); CUTE_STATIC_ASSERT_V(TileSizeY{} % ThreadBlockSizeY{} == cute::Int<0>{}, "TileSizeY must be divisible by ThreadBlockSizeY"); constexpr auto thread_block_shape{ cute::make_shape(ThreadBlockSizeY{}, ThreadBlockSizeX{})}; constexpr auto thread_block_shape_transposed{ cute::make_shape(ThreadBlockSizeX{}, ThreadBlockSizeY{})}; constexpr auto thread_layout{ cute::make_layout(thread_block_shape, cute::GenRowMajor{})}; constexpr auto thread_layout_transposed{ cute::make_layout(thread_block_shape_transposed, cute::GenColMajor{})}; using VECTOR_SIZE_X = cute::Int<NUM_VECTOR_ELEMENTS>; constexpr auto vector_shape{ cute::make_shape(cute::Int<1>{}, VECTOR_SIZE_X{})}; // Copy atom vector layout. 
constexpr auto vector_layout{ cute::make_layout(vector_shape, cute::GenRowMajor{})}; dim3 const grid_dim{cute::size<2>(tiled_tensor_src), cute::size<1>(tiled_tensor_src)}; dim3 const thread_dim{ThreadBlockSizeX::value * ThreadBlockSizeY::value}; matrix_transpose_shared_memory_vectorized<<<grid_dim, thread_dim, 0, stream>>>( tiled_tensor_src, tiled_tensor_dst_transposed, shared_memory_layout_swizzled_src, shared_memory_layout_swizzled_src, thread_layout, thread_layout_transposed, vector_layout); return cudaGetLastError();}// Explicit instantiation.template cudaError_tlaunch_matrix_transpose_shared_memory_bank_conflict_read<float>( float const* input_matrix, float* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream);template cudaError_tlaunch_matrix_transpose_shared_memory_bank_conflict_read<double>( double const* input_matrix, double* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream);template cudaError_tlaunch_matrix_transpose_shared_memory_vectorized_bank_conflict_read<float>( float const* input_matrix, float* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream);template cudaError_tlaunch_matrix_transpose_shared_memory_vectorized_bank_conflict_read<double>( double const* input_matrix, double* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream);template cudaError_tlaunch_matrix_transpose_shared_memory_bank_conflict_write<float>( float const* input_matrix, float* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream);template cudaError_tlaunch_matrix_transpose_shared_memory_bank_conflict_write<double>( double const* input_matrix, double* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream);template cudaError_t launch_matrix_transpose_shared_memory_padded<float>( float const* input_matrix, float* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream);template cudaError_t launch_matrix_transpose_shared_memory_padded<double>( double const* input_matrix, double* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream);template cudaError_tlaunch_matrix_transpose_shared_memory_vectorized_padded<float>( float const* input_matrix, float* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream);template cudaError_tlaunch_matrix_transpose_shared_memory_vectorized_padded<double>( double const* input_matrix, double* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream);template cudaError_t launch_matrix_transpose_shared_memory_swizzled<float>( float const* input_matrix, float* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream);template cudaError_t launch_matrix_transpose_shared_memory_swizzled<double>( double const* input_matrix, double* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream);template cudaError_tlaunch_matrix_transpose_shared_memory_vectorized_swizzled<float>( float const* input_matrix, float* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream);template cudaError_tlaunch_matrix_transpose_shared_memory_vectorized_swizzled<double>( double const* input_matrix, double* output_matrix, unsigned int M, unsigned int N, cudaStream_t stream);
Shared Memory Layout and CUDA Thread Block Layout
Because the strided memory accesses are performed on shared memory, the global memory reads and writes can be fully coalesced. We then have two options for where to perform the strided accesses on shared memory. The first option is to perform the transpose while reading from global memory into shared memory, and then a plain copy from shared memory to global memory, resulting in strided memory writes on shared memory. The second option is to perform a plain copy while reading from global memory into shared memory, and then the transpose while writing from shared memory to global memory, resulting in strided memory reads on shared memory.
To implement the first option, the shared memory layout has to be column-major if the input matrix layout is row-major. Two different CUDA thread block layouts are used for reading from global memory to shared memory and writing from shared memory to global memory. The first CUDA thread block layout is row-major if the input matrix layout is row-major, resulting in coalesced memory reads from global memory and strided memory writes to shared memory. The second CUDA thread block layout is column-major if the input matrix layout is row-major, which is the same as the output matrix layout, resulting in coalesced memory reads from shared memory and coalesced memory writes to global memory.
To implement the second option, the shared memory layout has to be row-major if the input matrix layout is row-major. Two different CUDA thread block layouts are used for reading from global memory to shared memory and writing from shared memory to global memory. The first CUDA thread block layout is row-major if the input matrix layout is row-major, resulting in coalesced memory reads from global memory and coalesced memory writes to shared memory. The second CUDA thread block layout is column-major if the input matrix layout is row-major, which is the same as the output matrix layout, resulting in strided memory reads from shared memory and coalesced memory writes to global memory.
The strided reads and writes on shared memory can result in shared memory bank conflicts as severe as 32-way. On certain platforms, this significantly reduces performance.
using ThreadBlockSizeX = cute::Int<32>; // tN
using ThreadBlockSizeY = cute::Int<8>;  // tM

CUTE_STATIC_ASSERT_V(TileSizeX{} % ThreadBlockSizeX{} == cute::Int<0>{},
                     "TileSizeX must be divisible by ThreadBlockSizeX");
CUTE_STATIC_ASSERT_V(TileSizeY{} % ThreadBlockSizeY{} == cute::Int<0>{},
                     "TileSizeY must be divisible by ThreadBlockSizeY");

constexpr auto thread_block_shape{
    cute::make_shape(ThreadBlockSizeY{}, ThreadBlockSizeX{})};
constexpr auto thread_block_shape_transposed{
    cute::make_shape(ThreadBlockSizeX{}, ThreadBlockSizeY{})};
constexpr auto thread_layout{
    cute::make_layout(thread_block_shape, cute::GenRowMajor{})};
constexpr auto thread_layout_transposed{
    cute::make_layout(thread_block_shape_transposed, cute::GenColMajor{})};

dim3 const grid_dim{cute::size<2>(tiled_tensor_src),
                    cute::size<1>(tiled_tensor_src)};
dim3 const thread_dim{ThreadBlockSizeX::value * ThreadBlockSizeY::value};

if (bank_conflict_access_mode == SharedMemoryBankConflictAccessMode::Read)
{
    matrix_transpose_shared_memory<<<grid_dim, thread_dim, 0, stream>>>(
        tiled_tensor_src, tiled_tensor_dst_transposed,
        shared_memory_layout_src, shared_memory_layout_src, thread_layout,
        thread_layout_transposed);
}
else
{
    matrix_transpose_shared_memory<<<grid_dim, thread_dim, 0, stream>>>(
        tiled_tensor_src, tiled_tensor_dst_transposed,
        shared_memory_layout_dst_transposed,
        shared_memory_layout_dst_transposed, thread_layout,
        thread_layout_transposed);
}
Predicates and Boundary Checking
One typical mistake in the implementation is to use the same predicate for both reading from global memory to shared memory and writing from shared memory to global memory, because the shapes of the global memory input matrix tile layout, the global memory output matrix tile layout, and the shared memory layout are all the same. However, the predicate cannot be reused, because the thread layouts for the two stages are different. Even for the same thread, different identity tuples are assigned for reading from global memory to shared memory and for writing from shared memory to global memory, so two sets of predicates are required.
// A 2D array of tuples that maps (x, y) to (x, y).
auto const identity_tensor_src{cute::make_identity_tensor(cute::make_shape(
    cute::size<0>(global_tile_src), cute::size<1>(global_tile_src)))};
auto const thread_identity_tensor_src{cute::local_partition(
    identity_tensor_src, ThreadLayoutSrc{}, threadIdx.x)};
auto predicator_src{cute::make_tensor<bool>(
    cute::make_shape(cute::size<0>(thread_global_tile_src),
                     cute::size<1>(thread_global_tile_src)))};

auto const identity_tensor_dst{cute::make_identity_tensor(cute::make_shape(
    cute::size<0>(global_tile_dst), cute::size<1>(global_tile_dst)))};
auto const thread_identity_tensor_dst{cute::local_partition(
    identity_tensor_dst, ThreadLayoutDst{}, threadIdx.x)};
auto predicator_dst{cute::make_tensor<bool>(
    cute::make_shape(cute::size<0>(thread_global_tile_dst),
                     cute::size<1>(thread_global_tile_dst)))};

auto const num_max_columns{cute::stride<0>(global_tile_src)};
auto const num_max_rows{cute::stride<1>(global_tile_dst)};
constexpr auto global_tile_columns{cute::size<1>(global_tile_src)};
constexpr auto global_tile_rows{cute::size<0>(global_tile_src)};

CUTE_UNROLL
for (unsigned int i{0}; i < cute::size<0>(predicator_src); ++i)
{
    CUTE_UNROLL
    for (unsigned int j{0}; j < cute::size<1>(predicator_src); ++j)
    {
        auto const thread_identity{thread_identity_tensor_src(i, j)};
        bool const is_row_in_bound{cute::get<0>(thread_identity) +
                                       blockIdx.y * global_tile_rows <
                                   num_max_rows};
        bool const is_column_in_bound{cute::get<1>(thread_identity) +
                                          blockIdx.x * global_tile_columns <
                                      num_max_columns};
        predicator_src(i, j) = is_row_in_bound && is_column_in_bound;
    }
}

CUTE_UNROLL
for (unsigned int i{0}; i < cute::size<0>(predicator_dst); ++i)
{
    CUTE_UNROLL
    for (unsigned int j{0}; j < cute::size<1>(predicator_dst); ++j)
    {
        auto const thread_identity{thread_identity_tensor_dst(i, j)};
        bool const is_row_in_bound{cute::get<0>(thread_identity) +
                                       blockIdx.y * global_tile_rows <
                                   num_max_rows};
        bool const is_column_in_bound{cute::get<1>(thread_identity) +
                                          blockIdx.x * global_tile_columns <
                                      num_max_columns};
        predicator_dst(i, j) = is_row_in_bound && is_column_in_bound;
    }
}
Thread Block Synchronization
Shared memory is used as a cache to store the intermediate matrix tile for the transpose, and all the threads in the same thread block cooperatively read the matrix tile from global memory into shared memory. Before writing the matrix tile from shared memory back to global memory, we therefore have to make sure that all the threads in the same thread block have finished reading the matrix tile from global memory into shared memory. In addition to the commonly used __syncthreads(), cute::cp_async_fence() and cute::cp_async_wait<0>() are also used in CuTe for thread block synchronization. This is because cute::copy_if and cute::copy can be asynchronous operations on SM80 and above platforms. cute::cp_async_fence() and cute::cp_async_wait<0>() are no-ops on platforms lower than SM80.
cute::copy_if(predicator_src, thread_global_tile_src, thread_shared_tile_src);
cute::cp_async_fence();
cute::cp_async_wait<0>();
__syncthreads();

cute::copy_if(predicator_dst, thread_shared_tile_dst, thread_global_tile_dst);
Shared Memory Padding
The shared memory padding is a common trick to avoid shared memory bank conflicts when a warp of threads is accessing shared memory.
In our case, assume the strided memory reads happen on shared memory. Then instead of using the shared memory layout $(bM, bN) : (bN, 1)$, the padded shared memory layout should be $(bM, bN) : (bN + 1, 1)$. Notice that the shared memory shape remains unchanged, but the stride of the shared memory layout changes, so the shared memory cosize, i.e., the amount of shared memory that needs to be allocated, also changes. Using the shared memory layout $(bM, bN + 1) : (bN + 1, 1)$ is incorrect.
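The same back-of-the-envelope bank computation, again assuming 4-byte elements and 32 banks of 4 bytes, shows why padding the stride to $bN + 1 = 65$ removes the conflicts:

$$\begin{align}\text{bank}(i, j) = (i \cdot (bN + 1) + j) \bmod 32 = (65 i + j) \bmod 32 = (i + j) \bmod 32\end{align}$$

so the 32 threads of a warp that access one column (fixed $j$, $i = 0, 1, \ldots, 31$) now touch 32 distinct banks.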
using TILE_SIZE_X = cute::Int<64>;        // bN
using TILE_SIZE_X_PADDED = cute::Int<65>; // bN + 1
using TILE_SIZE_Y = cute::Int<32>;        // bM

constexpr auto block_shape{cute::make_shape(TILE_SIZE_Y{}, TILE_SIZE_X{})};
constexpr auto block_shape_transposed{
    cute::make_shape(TILE_SIZE_X{}, TILE_SIZE_Y{})};

auto const shared_memory_layout_src{cute::make_layout(
    block_shape, cute::GenRowMajor{})}; // (bM, bN) : (bN, 1)
auto const shared_memory_layout_src_padded{cute::make_layout(
    block_shape,
    cute::make_stride(TILE_SIZE_X_PADDED{},
                      cute::Int<1>{}))}; // (bM, bN) : (bN + 1, 1)

transpose_shared_memory<<<grid_dim, thread_dim, 0, stream>>>(
    tiled_tensor_src, tiled_tensor_dst_transposed,
    shared_memory_layout_src_padded, shared_memory_layout_src_padded,
    thread_layout, thread_layout_transposed);
Because the shared memory shape remains the same, the previously implemented CUDA kernel can simply be reused.
Shared Memory Swizzling
Shared memory swizzling is another common trick to avoid shared memory bank conflicts when a warp of threads accesses shared memory. Compared to shared memory padding, swizzling does not allocate extra shared memory that is never used, and is therefore a more favorable approach. However, the formula for shared memory swizzling is brain-twisting and the implementation can be error-prone. Fortunately, in CuTe, shared memory swizzling is implemented as a simple template class, and it can be applied to a shared memory layout via CuTe layout composition. After verifying that the swizzled shared memory bank ids meet our requirement, we can simply reuse the previously implemented CUDA kernel with the swizzled shared memory layout.
using TileSizeX = cute::Int<64>; // bN
using TileSizeY = cute::Int<32>; // bM

constexpr int NUM_BASE_BITS{constexpr_log2(1)};
constexpr int NUM_MASK_BITS{constexpr_log2(32 * 4 / sizeof(T)) - NUM_BASE_BITS};
constexpr int NUM_SHIFT_BITS{constexpr_log2(TileSizeX::value) - NUM_BASE_BITS};

constexpr auto block_shape{cute::make_shape(TileSizeY{}, TileSizeX{})};
constexpr auto block_shape_transposed{
    cute::make_shape(TileSizeX{}, TileSizeY{})};

auto const shared_memory_layout_src{cute::make_layout(
    block_shape, cute::GenRowMajor{})}; // (bM, bN) : (bN, 1)
auto const shared_memory_layout_dst{cute::make_layout(
    block_shape_transposed, cute::GenRowMajor{})}; // (bN, bM) : (bM, 1)
auto const shared_memory_layout_dst_transposed{cute::make_layout(
    block_shape, cute::GenColMajor{})}; // (bM, bN) : (1, bM)

auto const swizzle_src{
    cute::Swizzle<NUM_MASK_BITS, NUM_BASE_BITS, NUM_SHIFT_BITS>{}};
auto const shared_memory_layout_swizzled_src{
    cute::composition(swizzle_src, shared_memory_layout_src)};
In our case, given the shared memory of shape $(bM, bN) : (bN, 1) = (32, 64) : (64, 1)$, the shared memory bank id before and after applying CuTe swizzling are as follows, respectively.
Before swizzling, each of the 32 rows of the $(bM, bN) = (32, 64)$ row-major tile maps to banks $0, 1, 2, \ldots, 31, 0, 1, \ldots, 31$ in exactly the same order, so the 32 elements of any single column all fall into one bank. (The full $32 \times 64$ bank-id table is omitted here.)
After swizzling, row $i$ of the tile maps to banks $(0 \oplus i), (1 \oplus i), \ldots, (31 \oplus i), (0 \oplus i), \ldots, (31 \oplus i)$, i.e., each row is a distinct XOR permutation of the bank ids $0$ through $31$, so the 32 elements of any single column fall into 32 distinct banks. (The full swizzled bank-id table is omitted here.)
We could see that after swizzling, when a warp of threads reads or writes a column of the row-major shared memory tile, the access is free of shared memory bank conflicts.
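As a sanity check that does not require CuTe at all, the sketch below is a small standalone C++ program that reproduces the bank pattern described above by applying the bit trick one would expect a Swizzle<5, 0, 6> configuration to perform in this setting: XOR bits [6, 11) of the row-major offset into bits [0, 5). This is only an illustration under the stated assumptions (4-byte elements, 32 banks, a (32, 64) tile); the actual CuTe implementation should be used inside kernels.

#include <cstdio>

int main()
{
    constexpr int bM{32};
    constexpr int bN{64};
    // For float: 5 mask bits, base 0, shift 6, i.e., XOR bits [6, 11) into bits [0, 5).
    constexpr int num_mask_bits{5};
    constexpr int num_shift_bits{6};
    constexpr unsigned int mask{(1u << num_mask_bits) - 1u};

    for (int i{0}; i < bM; ++i)
    {
        for (int j{0}; j < bN; ++j)
        {
            unsigned int const offset{static_cast<unsigned int>(i * bN + j)};
            // Fold the row bits into the low (bank) bits via XOR.
            unsigned int const swizzled_offset{
                offset ^ ((offset >> num_shift_bits) & mask)};
            // 32 banks of 4 bytes and 4-byte elements: bank id = element offset mod 32.
            unsigned int const bank_id{swizzled_offset % 32u};
            std::printf("%2u ", bank_id);
        }
        std::printf("\n");
    }
    return 0;
}

For row $i$ and column $j$, the printed bank id is $(j \bmod 32) \oplus i$, which matches the swizzled pattern described above.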
Performance
The following tables show the performance measurements of the matrix transpose CUDA kernels on an NVIDIA GeForce RTX 3090.
It’s somewhat surprising that except the native coalesced read CUDA kernel, all the other CUDA kernels have similar effective bandwidth and the bandwidth is very close to the ones that can be achieved in practice. Whether having shared memory bank conflicts in this CUDA kernel does not affect the performance significantly, because the performance bottleneck is in the global memory access.
After profiling using NVIDIA Nsight Compute, we could confirm that the global memory access is not fully coalesced for the native coalesced read and the native coalesced write CUDA kernels, shared memory bank load conflicts present in the shared memory bank conflict read CUDA kernel, shared memory bank store conflicts present in the shared memory bank conflict write CUDA kernel, and shared memory bank conflicts are free in the shared memory padded and shared memory swizzled CUDA kernels.
References
CuTe Matrix Transpose
https://leimao.github.io/article/CuTe-Matrix-Transpose/
Build and Develop CUTLASS CUDA Kernels
Introduction
CUTLASS is a header-only library that consists of a collection of CUDA C++ template abstractions for implementing high-performance matrix-matrix multiplication (GEMM) and related computations at all levels and scales within CUDA.
In this blog post, we will build CUTLASS and CuTe CUDA kernels using CMake in a CUDA Docker container.
CUDA Docker Container
When creating a CUDA Docker container for CUTLASS kernel development, we have two options: either git clone the CUTLASS header-only library inside the Docker container, or treat the CUTLASS header-only library as part of the CUDA kernel source code.
In the beginning, I cloned the CUTLASS header-only library inside the Docker container. However, it became inconvenient when I tried to inspect the header-only library implementation from inside the Docker container. Although I could still inspect the CUTLASS implementation from the Docker container if the container is a VS Code Dev Container, this approach becomes unfriendly once I want to modify and contribute to the CUTLASS header-only library. Therefore, I decided to treat the CUTLASS header-only library as part of the CUDA kernel source code.
Build Docker Image
The following CUDA Dockerfile will be used for CUTLASS kernel development. It can also be found in my CUTLASS Examples GitHub repository.
FROM nvcr.io/nvidia/cuda:12.4.1-devel-ubuntu22.04

ARG CMAKE_VERSION=3.30.5
ARG GOOGLETEST_VERSION=1.15.2
ARG NUM_JOBS=8

ENV DEBIAN_FRONTEND=noninteractive

# Install package dependencies
RUN apt-get update && apt-get install -y --no-install-recommends \
        build-essential \
        software-properties-common \
        autoconf \
        automake \
        libtool \
        pkg-config \
        ca-certificates \
        locales \
        locales-all \
        python3 \
        python3-dev \
        python3-pip \
        python3-setuptools \
        wget \
        git && \
    apt-get clean

# System locale
# Important for UTF-8
ENV LC_ALL=en_US.UTF-8
ENV LANG=en_US.UTF-8
ENV LANGUAGE=en_US.UTF-8

# Install CMake
RUN cd /tmp && \
    wget https://github.com/Kitware/CMake/releases/download/v${CMAKE_VERSION}/cmake-${CMAKE_VERSION}-linux-x86_64.sh && \
    bash cmake-${CMAKE_VERSION}-linux-x86_64.sh --prefix=/usr/local --exclude-subdir --skip-license && \
    rm -rf /tmp/*

# Install GoogleTest
RUN cd /tmp && \
    wget https://github.com/google/googletest/archive/refs/tags/v${GOOGLETEST_VERSION}.tar.gz && \
    tar -xzf v${GOOGLETEST_VERSION}.tar.gz && \
    cd googletest-${GOOGLETEST_VERSION} && \
    mkdir build && \
    cd build && \
    cmake .. && \
    make -j${NUM_JOBS} && \
    make install && \
    rm -rf /tmp/*

# Install QT6 and its dependencies for Nsight Compute GUI
# https://leimao.github.io/blog/Docker-Nsight-Compute/
RUN apt-get update -y && \
    apt-get install -y --no-install-recommends \
        apt-transport-https \
        ca-certificates \
        dbus \
        fontconfig \
        gnupg \
        libasound2 \
        libfreetype6 \
        libglib2.0-0 \
        libnss3 \
        libsqlite3-0 \
        libx11-xcb1 \
        libxcb-glx0 \
        libxcb-xkb1 \
        libxcomposite1 \
        libxcursor1 \
        libxdamage1 \
        libxi6 \
        libxml2 \
        libxrandr2 \
        libxrender1 \
        libxtst6 \
        libgl1-mesa-glx \
        libxkbfile-dev \
        openssh-client \
        xcb \
        xkb-data \
        libxcb-cursor0 \
        qt6-base-dev && \
    apt-get clean

RUN cd /usr/local/bin && \
    ln -s /usr/bin/python3 python && \
    ln -s /usr/bin/pip3 pip && \
    pip install --upgrade pip setuptools wheel
To build the CUTLASS Docker image locally, please run the following command.
$ docker build -f docker/cuda.Dockerfile --no-cache --tag cuda:12.4.1 .
Run Docker Container
To run the custom Docker container, please run the following command.
$ docker run -it --rm --gpus device=0 -v $(pwd):/mnt -w /mnt cuda:12.4.1
To run the custom Docker container with NVIDIA Nsight Compute, please run the following command.
$ xhost +
$ docker run -it --rm --gpus device=0 -v $(pwd):/mnt -w /mnt -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix --cap-add=SYS_ADMIN --security-opt seccomp=unconfined --network host cuda:12.4.1
$ xhost -
CUTLASS Examples
To show that the CUTLASS we installed works inside the Docker container, we will build and run two CUTLASS C++ examples copied from the CUTLASS GitHub repository without any modification.
CUTLASS is header-only. There are two key header directories to include for each CUTLASS build target: cutlass/include and cutlass/tools/util/include.
cmake_minimum_required(VERSION 3.28)

project(CUTLASS-Examples VERSION 0.0.1 LANGUAGES CXX CUDA)

set(CMAKE_CXX_STANDARD 17)
set(CMAKE_CXX_STANDARD_REQUIRED ON)

# Find CUDA Toolkit
find_package(CUDAToolkit REQUIRED)

# Set CUTLASS include directories
find_path(CUTLASS_INCLUDE_DIR cutlass/cutlass.h HINTS cutlass/include)
find_path(CUTLASS_UTILS_INCLUDE_DIR cutlass/util/host_tensor.h HINTS cutlass/tools/util/include)

add_subdirectory(examples)
For each build target, the experimental flag --expt-relaxed-constexpr is needed so that the NVCC compiler can use some constexpr functions and variables from the host code in the device code.
cmake_minimum_required(VERSION 3.28)

project(CUTLASS-GEMM-API-V3 VERSION 0.0.1 LANGUAGES CXX CUDA)

# Set the CUDA architecture to compile the code for
# https://cmake.org/cmake/help/latest/prop_tgt/CUDA_ARCHITECTURES.html
add_executable(${PROJECT_NAME} main.cu)
target_include_directories(${PROJECT_NAME} PRIVATE ${CUTLASS_INCLUDE_DIR} ${CUTLASS_UTILS_INCLUDE_DIR})
set_target_properties(${PROJECT_NAME} PROPERTIES CUDA_ARCHITECTURES native)
target_compile_options(${PROJECT_NAME} PRIVATE --expt-relaxed-constexpr)
Build Examples
To build the CUTLASS examples using CMake, please run the following command.
$ cmake -B build
$ cmake --build build --config Release --parallel
Run Examples
To run the CUTLASS examples, please run the following commands.
$ ./build/examples/gemm_api_v2/CUTLASS-GEMM-API-V2
$ echo $?
0
$ ./build/examples/gemm_api_v3/CUTLASS-GEMM-API-V3
10000 timing iterations of 2048 x 2048 x 2048 matrix-matrix multiply

Basic data-parallel GEMM
  Disposition: Passed
  Avg runtime: 0.175606 ms
  GFLOPs: 97831.9

StreamK GEMM with default load-balancing
  Disposition: Passed
  Avg runtime: 0.149729 ms
  GFLOPs: 114740
  Speedup vs Basic-DP: 1.173

StreamK emulating basic data-parallel GEMM
  Disposition: Passed
  Avg runtime: 0.177553 ms
  GFLOPs: 96759.2
  Speedup vs Basic-DP: 0.989

Basic split-K GEMM with tile-splitting factor 2
  Disposition: Passed
  Avg runtime: 0.183542 ms
  GFLOPs: 93601.7

StreamK emulating Split-K GEMM with tile-splitting factor 2
  Disposition: Passed
  Avg runtime: 0.173763 ms
  GFLOPs: 98869.8
  Speedup vs Basic-SplitK: 1.056
References
Build and Develop CUTLASS CUDA Kernels
https://leimao.github.io/blog/Build-Develop-CUTLASS-CUDA-Kernels/
CuTe Layout Algebra
Introduction
CuTe layout algebra is extremely important for understanding and applying CUTLASS for accelerated computing. Although CuTe has documentation for its layout algebra, the documentation cannot be understood completely without first understanding its mathematical foundations. I tried to create some proofs for the CuTe layout algebra on my own and realized that it was a huge amount of work. Fortunately, Jay Shah has created a paper "A Note on the Algebra of CuTe Layouts" that completes the mathematical foundations of the CuTe layout algebra that I wanted to create.
In my proofreading, I found Jay Shah's paper mostly error-free, except for a few very minor oversights and typos. However, it does skip some details without which the paper is a little hard to understand. In this article, based on Jay Shah's paper, I would like to provide more details for the proofs and explanations of the CuTe layout algebra. Most of the definitions and notations follow Jay Shah's paper.
This article can be read as a complement to Jay Shah's paper, but it also stands alone for understanding the CuTe layout algebra.
Layout Algebra Preliminaries
Definition 2.1: Layout
A layout $L$ is a pair of positive integer tuples $\mathbf{S}$ and $\mathbf{D}$ of matching dimensions. We call $\mathbf{S}$ the shape and $\mathbf{D}$ the stride. We write $L = \mathbf{S}:\mathbf{D}$.
A flattened layout means that there is no internal parentheses in the shape and stride. For example, $L = (5, 2, 2):(16, 80, 4)$ is a flattened layout, whereas $L = (5, (2, 2)):(16, (80, 4))$ is not. Flattening a layout will not change the semantics and operations of the layout.
Definition 2.2: Layout Size, Length, and Mode
Let $\alpha \geq 0$ be an integer and $L = \mathbf{S}:\mathbf{D} = (M_0, M_1, \ldots, M_{\alpha}):(d_0, d_1, \ldots, d_{\alpha})$ be a layout. Then the size of $L$ is $M_0 \cdot M_1 \cdot \ldots \cdot M_{\alpha}$, the length of $L$ is $\alpha + 1$, and for each $0 \leq k \leq \alpha$, the $k$-th mode of $L$ is the length-one layout $(M_k):(d_k)$.
Concatenation
Given two layouts $L = \mathbf{S}:\mathbf{D}$ and $L^{\prime} = \mathbf{S}^{\prime}:\mathbf{D}^{\prime}$, let $\mathbf{S}^{\prime\prime}$ and $\mathbf{D}^{\prime\prime}$ be the shape and stride tuples given by (the flattening of) $(\mathbf{S}, \mathbf{S}^{\prime})$ and $(\mathbf{D}, \mathbf{D}^{\prime})$ respectively. Then the concatenation of $L$ and $L^{\prime}$ is given by the layout
$$(L, L^{\prime}) = \mathbf{S}^{\prime\prime}:\mathbf{D}^{\prime\prime}$$
and we say that $(L, L^{\prime})$ is decomposed by $L$ and $L^{\prime}$.
Inductively, given layouts $L_0, L_1, \ldots, L_N$, we can then form the concatenation $(L_0, L_1, \ldots, L_N)$. Conversely, given $L$ a layout, $L$ is maximally decomposed by its modes.
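As a quick concrete illustration (my own example, not one from the paper), the concatenation of $L = (4):(1)$ and $L^{\prime} = (2):(4)$ is

$$\begin{align}(L, L^{\prime}) = (4, 2):(1, 4)\end{align}$$

and $(L, L^{\prime})$ is decomposed by $L$ and $L^{\prime}$.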
Isomorphism
Let $\mathbf{S} = (M_0, M_1, \ldots, M_{\alpha})$ and $\mathbf{D} = (d_0, d_1, \ldots, d_{\alpha})$ be the respective shape and stride tuples of $L = \mathbf{S}:\mathbf{D}$. Let $M = M_0 \cdot M_1 \cdot \ldots \cdot M_{\alpha}$ be the size of $L$ and let $[0, M) \subset \mathbb{N}$ be the subset of the natural numbers given by $\{0, 1, 2, \ldots, M - 1\}$. Then we have an isomorphism
$$\begin{align}\iota: [0, M) \cong [0, M_0) \times [0, M_1) \times \ldots \times [0, M_{\alpha}) \\\end{align}$$
Given any $x \in [0, M)$, the isomorphism $\iota$ maps $x$ to the tuple
$$\begin{align}x \mapsto \left(x \mod M_0, \left\lfloor \frac{x}{M_0} \right\rfloor \mod M_1, \ldots, \left\lfloor \frac{x}{M_0 \cdot M_1 \cdot \ldots \cdot M_{\alpha - 1}} \right\rfloor \mod M_{\alpha}\right)\end{align}$$
The isomorphism mapping is bijective. In our case, given any tuple $(x_0, x_1, \ldots, x_{\alpha}) \in [0, M_0) \times [0, M_1) \times \ldots \times [0, M_{\alpha})$, the isomorphism inverse maps the tuple to the integer
$$\begin{align}\left(x_0, x_1, \ldots, x_{\alpha}\right) \mapsto x_0 + x_1 \cdot M_0 + x_2 \cdot M_0 \cdot M_1 + \ldots + x_{\alpha} \cdot M_0 \cdot M_1 \cdot \ldots \cdot M_{\alpha - 1}\end{align}$$
It’s straightforward to verify that the above isomorphism mapping is valid and to prove that it is bijective (by contradiction).
One could imagine the isomorphism as a mapping between a one-dimensional coordinate and a multi-dimensional coordinate.
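As a small worked example of the isomorphism (my own, for illustration), take $\mathbf{S} = (3, 2)$, so that $M = 6$. Then

$$\begin{align}\iota(4) = \left(4 \mod 3, \left\lfloor \frac{4}{3} \right\rfloor \mod 2\right) = (1, 1), \quad (1, 1) \mapsto 1 + 1 \cdot 3 = 4\end{align}$$

so applying the inverse mapping recovers the original one-dimensional coordinate.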
Definition 2.3: Layout Function
Given a layout $L$, its layout function is the function $f_L: [0, M) \to \mathbb{N}$ defined to be the composite
$$\begin{align}[0, M) \cong [0, M_0) \times [0, M_1) \times \ldots \times [0, M_{\alpha}) \subset \mathbb{N}^{\times (\alpha + 1)} \xrightarrow{\cdot d_0, \cdot d_1, \ldots, \cdot d_{\alpha}} \mathbb{N}^{\times (\alpha + 1)} \xrightarrow{+} \mathbb{N}\end{align}$$
In other words, $f_L$ is the composition of the multilinear function
$$\begin{align}[0, M_0) \times [0, M_1) \times \ldots \times [0, M_{\alpha}) \to \mathbb{N} \\(x_0, x_1, \ldots, x_{\alpha}) \mapsto x_0 \cdot d_0 + x_1 \cdot d_1 + \ldots + x_{\alpha} \cdot d_{\alpha}\end{align}$$
determined by the stride, with the isomorphism $\iota$, determined by the shape.
Computing the value of a layout function $f_L$ at a point $x \in [0, M)$ can be decomposed into computing the sum of the values of the layout function at multiple points. This is sometimes useful for computing the value of the layout function at a point handily.
Given a layout $L = (M_0, M_1, \ldots, M_{\alpha}):(d_0, d_1, \ldots, d_{\alpha})$ and $x \in [0, M)$,
$$\begin{align}x \mapsto \left(x_0, x_1, \ldots, x_{\alpha}\right) \mapsto x_0 \cdot d_0 + x_1 \cdot d_1 + \ldots + x_{\alpha} \cdot d_{\alpha}\end{align}$$
We also have
$$\begin{align}x^{\prime}_{0} &\mapsto \left(x_0, 0, 0, \ldots, 0\right) \mapsto x_0 \cdot d_0 \\x^{\prime}_{1} &\mapsto \left(0, x_1, 0, \ldots, 0\right) \mapsto x_1 \cdot d_1 \\&\vdots \\x^{\prime}_{\alpha} &\mapsto \left(0, 0, 0, \ldots, x_{\alpha}\right) \mapsto x_{\alpha} \cdot d_{\alpha}\end{align}$$
Therefore, we have
$$\begin{align}f_L(x) = f_L(x^{\prime}_{0}) + f_L(x^{\prime}_{1}) + \ldots + f_L(x^{\prime}_{\alpha})\end{align}$$
where
$$\begin{align}x^{\prime}_{0} &= x \mod M_0 \\x^{\prime}_{1} &= \left(\left\lfloor \frac{x}{M_0} \right\rfloor \mod M_1\right) \cdot M_0 \\&\vdots \\x^{\prime}_{\alpha} &= \left(\left\lfloor \frac{x}{M_0 \cdot M_1 \cdot \ldots \cdot M_{\alpha - 1}} \right\rfloor \mod M_{\alpha}\right) \cdot M_0 \cdot M_1 \cdot \ldots \cdot M_{\alpha - 1}\end{align}$$
For example, given a layout $L = (3, 2):(2, 3)$ and $x = 5$, we have
$$\begin{align}f_L(5) &= f_L(5 \mod 3) + f_L\left(\left(\left\lfloor \frac{5}{3} \right\rfloor \mod 2\right) \cdot 3\right) \\&= f_L(2) + f_L(3) \\&= 2 \cdot 2 + \left\lfloor \frac{3}{3} \right\rfloor \cdot 3 \\&= 4 + 3 \\&= 7\end{align}$$
Extension of Layout Function
Based on the definition of layout function, the extension of the layout function $f_L$ is the function, $\widehat{f}_L: \mathbb{N} \to \mathbb{N}$, defined by replacing $M_{\alpha}$ with $\infty$ in the definition of $f_L$, i.e., the composite
$$\begin{align}\mathbb{N} \cong [0, M_0) \times [0, M_1) \times \ldots \times [0, M_{\alpha - 1}) \times \mathbb{N} \subset \mathbb{N}^{\times (\alpha + 1)} \xrightarrow{\cdot d_0, \cdot d_1, \ldots, \cdot d_{\alpha}} \mathbb{N}^{\times (\alpha + 1)} \xrightarrow{+} \mathbb{N}\end{align}$$
where the extension of the isomorphism $\iota$, $\widehat{\iota}$, is given by
$$\begin{align}x \mapsto \left(x \mod M_0, \left\lfloor \frac{x}{M_0} \right\rfloor \mod M_1, \ldots, \left\lfloor \frac{x}{M_0 \cdot M_1 \cdot \ldots \cdot M_{\alpha - 2}} \right\rfloor \mod M_{\alpha - 1}, \left\lfloor \frac{x}{M_0 \cdot M_1 \cdot \ldots \cdot M_{\alpha - 1}} \right\rfloor\right)\end{align}$$
The extension of the isomorphism mapping is also bijective. The inverse mapping of the extension of the isomorphism is also given by
$$\begin{align}\left(x_0, x_1, \ldots, x_{\alpha - 1}, x_{\alpha}\right) \mapsto x_0 + x_1 \cdot M_0 + x_2 \cdot M_0 \cdot M_1 + \ldots + x_{\alpha} \cdot M_0 \cdot M_1 \cdot \ldots \cdot M_{\alpha - 1}\end{align}$$
One could imagine the extension of the isomorphism defines the last dimension of the shape to be a “batch” dimension and the batch size can be infinite.
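As a small worked example of the extension (my own, for illustration), take $L = (3, 2):(2, 3)$ again, and evaluate $\widehat{f}_L$ at $x = 7$, which lies outside $[0, 6)$:

$$\begin{align}\widehat{\iota}(7) = \left(7 \mod 3, \left\lfloor \frac{7}{3} \right\rfloor\right) = (1, 2), \quad \widehat{f}_L(7) = 1 \cdot 2 + 2 \cdot 3 = 8\end{align}$$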
Complementation
Definition 2.4: Sorted Layout
Let $A = (N_0, N_1, \ldots, N_{\alpha}):(d_0, d_1, \ldots, d_{\alpha})$ be a layout. We say that $A$ is sorted if $d_0 \leq d_1 \leq \ldots \leq d_{\alpha}$ and for every $i < j$, if $d_i = d_j$, then $N_i \leq N_j$.
Note that sorting a layout, or more generally, changing the order of modes of a layout, will change the semantics and operations of the layout.
For example, suppose we have a layout $A = (2, 4):(4, 1)$ and a layout $B = (4, 2):(1, 4)$. We could see that $B$ is the sorted version of $A$. We could compute the layout function of $A$ and $B$ as follows using lookup tables:
$$\begin{align}f_A(0) &= f_A(0, 0) = 0 \cdot 4 + 0 \cdot 1 = 0 \\f_A(1) &= f_A(1, 0) = 1 \cdot 4 + 0 \cdot 1 = 4 \\f_A(2) &= f_A(0, 1) = 0 \cdot 4 + 1 \cdot 1 = 1 \\f_A(3) &= f_A(1, 1) = 1 \cdot 4 + 1 \cdot 1 = 5 \\f_A(4) &= f_A(0, 2) = 0 \cdot 4 + 2 \cdot 1 = 2 \\f_A(5) &= f_A(1, 2) = 1 \cdot 4 + 2 \cdot 1 = 6 \\f_A(6) &= f_A(0, 3) = 0 \cdot 4 + 3 \cdot 1 = 3 \\f_A(7) &= f_A(1, 3) = 1 \cdot 4 + 3 \cdot 1 = 7\end{align}$$
$$\begin{align}f_B(0) &= f_B(0, 0) = 0 \cdot 1 + 0 \cdot 4 = 0 \\f_B(1) &= f_B(1, 0) = 1 \cdot 1 + 0 \cdot 4 = 1 \\f_B(2) &= f_B(2, 0) = 2 \cdot 1 + 0 \cdot 4 = 2 \\f_B(3) &= f_B(3, 0) = 3 \cdot 1 + 0 \cdot 4 = 3 \\f_B(4) &= f_B(0, 1) = 0 \cdot 1 + 1 \cdot 4 = 4 \\f_B(5) &= f_B(1, 1) = 1 \cdot 1 + 1 \cdot 4 = 5 \\f_B(6) &= f_B(2, 1) = 2 \cdot 1 + 1 \cdot 4 = 6 \\f_B(7) &= f_B(3, 1) = 3 \cdot 1 + 1 \cdot 4 = 7\end{align}$$
We could see that the layout $B$ is typically referred to as the column-major layout, and the layout $A$ is typically referred to as the row-major layout. They are completely different layouts.
More generally, a sorted layout is just like a "generalization" of the column-major layout.
Definition 2.5: Admission for Complementation
Let $A = (N_0, N_1, \ldots, N_{\alpha}):(d_0, d_1, \ldots, d_{\alpha})$ be a layout and $M$ be a positive integer. If $A$ is not sorted, then replace $A$ with its sorted version. We say that the pair $\{A, M\}$ is admissible for complementation (or simply admissible) if $N_i d_i$ divides $d_{i + 1}$ for every $0 \leq i \leq \alpha - 1$, and $N_{\alpha} d_{\alpha}$ divides $M$.
That $\{A, M\}$ is admissible for complementation also implies that $N_i d_i \leq d_{i + 1}$ for every $0 \leq i \leq \alpha - 1$, that $N_{\alpha} d_{\alpha} \leq M$, and that $M$ is divisible by $\text{size}(A) = N_0 N_1 \ldots N_{\alpha}$, so every entry in the shape of the complement defined below is a positive integer.
Definition 2.6: Complementation
Let $A = (N_0, N_1, \ldots, N_{\alpha}):(d_0, d_1, \ldots, d_{\alpha})$ be a layout and $M$ be a positive integer. If $\{A, M\}$ is admissible for complementation, then if $A$ is not sorted, replace $A$ with its sorted version. The complement of $\{A, M\}$ is defined to be the layout
$$\begin{align}\text{complement}(A, M) = \left(d_0, \frac{d_1}{N_0 d_0}, \frac{d_2}{N_1 d_1}, \ldots, \frac{d_{\alpha}}{N_{\alpha - 1} d_{\alpha - 1}}, \frac{M}{N_{\alpha} d_{\alpha}} \right): \left(1, N_0 d_0, N_1 d_1, \ldots, N_{\alpha} d_{\alpha}\right)\end{align}$$
Note that the size of the complement of $\{A, M\}$, $\text{size}(\text{complement}(A, M))$, is $\frac{M}{\text{size}(A)} = \frac{M}{N_0 \cdot N_1 \cdot \ldots \cdot N_{\alpha}}$.
By definition, the complement of $\{A, M\}$ is insensitive to the order of the modes of $A$, since it will always be sorted before complementation.
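Here is a small worked example (my own, for illustration). Take $A = (4, 2):(1, 8)$ and $M = 32$. $A$ is already sorted, $N_0 d_0 = 4$ divides $d_1 = 8$, and $N_1 d_1 = 16$ divides $M = 32$, so $\{A, M\}$ is admissible. Then

$$\begin{align}\text{complement}(A, 32) = \left(d_0, \frac{d_1}{N_0 d_0}, \frac{M}{N_1 d_1}\right):\left(1, N_0 d_0, N_1 d_1\right) = (1, 2, 2):(1, 4, 16)\end{align}$$

whose size is $\frac{32}{\text{size}(A)} = 4$. On their domains, $f_A$ attains $\{0, 1, 2, 3, 8, 9, 10, 11\}$ and the complement attains $\{0, 4, 16, 20\}$, and the pairwise sums of these two sets cover $[0, 32)$ exactly once, which previews Proposition 2.7 below.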
The complement of $\{A, M\}$ is strictly increasing. This might not be very obvious, so we will show a proof.
Proof
Suppose $B = \text{complement}(A, M)$, to show that the layout function $f_{B}$, whose domain is a set of natural numbers, is strictly increasing, we need to show that for every two adjacent natural numbers $x$ and $x + 1$, $0 \leq x < x + 1 < \text{size}(B)$, we have $f_{B}(x) < f_{B}(x + 1)$.
Because of the isomorphism, suppose the mapping of $x$ is as follows:
$$\begin{align}x &\mapsto \left(x_0, x_1, \ldots, x_{\alpha}, x_{\alpha + 1}\right) \\\end{align}$$
By definition of the layout function $f_{B}$, we have
$$\begin{align}f_{B}(x) &= x_0 + x_1 \cdot N_0 d_0 + x_2 \cdot N_1 d_1 + \ldots + x_{\alpha} \cdot N_{\alpha - 1} d_{\alpha - 1} + x_{\alpha + 1} \cdot N_{\alpha} d_{\alpha} \\\end{align}$$
The mapping of $x + 1$ can have many different cases.
In the simplest case,
$$\begin{align}x + 1 &\mapsto \left(x_0 + 1, x_1, \ldots, x_{\alpha}, x_{\alpha + 1}\right) \\\end{align}$$
Then we have
$$\begin{align}f_{B}(x + 1) &= x_0 + 1 + x_1 \cdot N_0 d_0 + x_2 \cdot N_1 d_1 + \ldots + x_{\alpha} \cdot N_{\alpha - 1} d_{\alpha - 1} + x_{\alpha + 1} \cdot N_{\alpha} d_{\alpha} \\&= f_{B}(x) + 1 \\&> f_{B}(x)\end{align}$$
In a more complicated case, where $x_0 = d_0 - 1$ and $x_1 < \frac{d_1}{N_0 d_0} - 1$, we have
$$\begin{align}x + 1 &\mapsto \left(0, x_1 + 1, \ldots, x_{\alpha}, x_{\alpha + 1}\right) \\\end{align}$$
Then we have
$$\begin{align}f_{B}(x + 1) &= 0 + (x_1 + 1) \cdot N_0 d_0 + x_2 \cdot N_1 d_1 + \ldots + x_{\alpha} \cdot N_{\alpha - 1} d_{\alpha - 1} + x_{\alpha + 1} \cdot N_{\alpha} d_{\alpha} \\&= f_{B}(x) - x_0 + N_0 d_0 \\&= f_{B}(x) - (d_0 - 1) + N_0 d_0 \\&= f_{B}(x) + 1 + (N_0 - 1) d_0 \\&> f_{B}(x)\end{align}$$
Because $N_0 \geq 1$, we have $(N_0 - 1) d_0 \geq 0$, so we have
$$\begin{align}f_{B}(x + 1) &> f_{B}(x)\end{align}$$
In general, when $x_0 = d_0 - 1$, for some $k \in [1, \alpha - 1]$, $x_i = \frac{d_i}{N_{i - 1} d_{i - 1}} - 1$ for every $i \in [1, k]$, $x_{k + 1} < \frac{d_{k + 1}}{N_k d_k} - 1$, we have
$$\begin{align}x + 1 &\mapsto \left(0, 0, \ldots, 0, x_{k + 1} + 1, \ldots, x_{\alpha}, x_{\alpha + 1}\right) \\\end{align}$$
Then we have
$$\begin{align}f_{B}(x + 1) &= 0 + 0 \cdot N_0 d_0 + \ldots + 0 \cdot N_{k - 1} d_{k - 1} + (x_{k + 1} + 1) \cdot N_k d_k + \ldots + x_{\alpha} \cdot N_{\alpha - 1} d_{\alpha - 1} + x_{\alpha + 1} \cdot N_{\alpha} d_{\alpha} \\&= f_{B}(x) - x_0 - \left(\sum_{i = 1}^{k} x_i \cdot N_{i - 1} d_{i - 1}\right) + N_k d_k \\&= f_{B}(x) - (d_0 - 1) - \left(\sum_{i = 1}^{k} \left(\frac{d_i}{N_{i - 1} d_{i - 1}} - 1\right) \cdot N_{i - 1} d_{i - 1}\right) + N_k d_k \\&= f_{B}(x) - (d_0 - 1) - \left(\sum_{i = 1}^{k} \left(d_i - N_{i - 1} d_{i - 1}\right)\right) + N_k d_k \\&= f_{B}(x) - (d_0 - 1) + \sum_{i = 1}^{k} N_{i - 1} d_{i - 1} - \sum_{i = 1}^{k} d_i + N_k d_k \\&= f_{B}(x) + \sum_{i = 0}^{k} \left(N_{i} - 1\right) d_{i} + 1 \\\end{align}$$
Because $N_{i} \geq 1$ for every $i$, we have $\left(N_{i} - 1\right) d_{i} \geq 0$ for every $i$, so we have
$$\begin{align}f_{B}(x + 1) &> f_{B}(x)\end{align}$$
This concludes the proof. $\square$
Similarly, we could also prove that the extension of the complement of $\{A, M\}$ is strictly increasing.
Proposition 2.7
Let $\{A = (N_0, N_1, \ldots, N_{\alpha}):(d_0, d_1, \ldots, d_{\alpha}), M\}$ be admissible for complementation and $B = \text{complement}(A, M)$. Let $C = (A, B)$ be the concatenated layout. Then the size of $C$ is $M$ and $f_C: [0, M) \to \mathbb{N}$ restricts to a bijection $[0, M) \cong [0, M)$.
Proof
Because $\text{size}(A) = \prod_{i = 0}^{\alpha} N_i$ and $\text{size}(B) = \frac{M}{\prod_{i = 0}^{\alpha} N_i}$, we have $\text{size}(C) = \text{size}(A) \cdot \text{size}(B) = M$. Thus the domain of $f_C$ is $[0, M)$.
Note that the image of $f_C$ is the same as that of $f_{C^{\prime}}$ for any permutation $C^{\prime}$ of $C$.
To see this, suppose we have the following layout $C$ and its permutation $C^{\prime}$ in which only one pair of the modes is permuted.
$$\begin{align}C &= \left(N_0, N_1, \ldots, N_{i}, \ldots, N_{j}, \ldots, N_{\alpha} \right): \left(d_0, d_1, \ldots, d_{i}, \ldots, d_{j}, \ldots, d_{\alpha}\right) \\C^{\prime} &= \left(N_0, N_1, \ldots, N_{j}, \ldots, N_{i}, \ldots, N_{\alpha} \right): \left(d_0, d_1, \ldots, d_{j}, \ldots, d_{i}, \ldots, d_{\alpha}\right)\end{align}$$
The domains of $f_C$ and $f_{C^{\prime}}$ are both $[0, M)$. For any $x_C \in [0, M)$, we have
$$\begin{align}x_C &\mapsto \left(x_0, x_1, \ldots, x_{i}, \ldots, x_{j}, \ldots, x_{\alpha}\right) \\x_{C^{\prime}} &\mapsto \left(x_0, x_1, \ldots, x_{j}, \ldots, x_{i}, \ldots, x_{\alpha}\right)\end{align}$$
and $x_C$ and $x_{C^{\prime}}$ are bijective.
Because by definition, $f_C(x_C) = f_{C^{\prime}}(x_{C^{\prime}})$, the image of $f_C$ is the same as that of $f_{C^{\prime}}$.
For any permutation $C^{\prime}$ of $C$, it can be obtained by permuting one pair of the modes of $C$ at a time and each time the image of $f_C$ is the same as that of $f_{C^{\prime}}$. Therefore, the image of $f_C$ is the same as that of $f_{C^{\prime}}$ for any permutation $C^{\prime}$ of $C$.
When computing the image of $f_C$ we may sort $C$. Without loss of generality, suppose $A = (N_0, N_1, \ldots, N_{\alpha}):(d_0, d_1, \ldots, d_{\alpha})$ is already sorted. After sorting $C$, the sorted $C^{\prime}$ could only be as follows:
$$\begin{align}C^{\prime} &= \left(d_0, N_0, \frac{d_1}{N_0 d_0}, N_1, \frac{d_2}{N_1 d_1}, N_2, \ldots, \frac{d_{\alpha}}{N_{\alpha - 1} d_{\alpha - 1}}, N_{\alpha}, \frac{M}{N_{\alpha} d_{\alpha}} \right): \left(1, d_0, N_0 d_0, d_1, N_1 d_1, d_2, \ldots, N_{\alpha - 1} d_{\alpha - 1}, d_{\alpha}, N_{\alpha} d_{\alpha}\right)\end{align}$$
Because $d_i \leq N_i d_i$ and $N_i d_i \leq d_{i + 1}$ for every $i$, when $N_i = 1$, $N_i \leq \frac{d_{i + 1}}{N_i d_i}$, when $N_i d_i = d_{i + 1}$, $\frac{d_{i + 1}}{N_i d_i} \leq N_{i + 1}$, thus $C^{\prime}$ is sorted and any permutation of $C^{\prime}$ will make it not sorted.
Then we may rewrite
$$\begin{align}C^{\prime} &= \left(r_0, r_1, r_2, \ldots, r_{\beta} \right): \left(1, r_0, r_0 r_1, \ldots, r_0 r_1 \ldots r_{\beta - 1}\right)\end{align}$$
where $\beta = 2 \alpha + 1$ and the maximum value that $f_{C^{\prime}}$ attains is computed as follows:
$$\begin{align}f_{C^{\prime}}(M - 1) &= f_{C^{\prime}}(r_0 - 1, r_1 - 1, r_2 - 1, \ldots, r_{\beta - 1} - 1, r_{\beta} - 1) \\&= (r_0 - 1) + (r_1 - 1) \cdot r_0 + (r_2 - 1) \cdot r_0 r_1 + \ldots + (r_{\beta - 1} - 1) \cdot r_0 r_1 \ldots r_{\beta - 2} + (r_{\beta} - 1) \cdot r_0 r_1 \ldots r_{\beta - 1} \\&= r_0 - 1 + r_0 r_1 - r_0 + r_0 r_1 r_2 - r_0 r_1 + \ldots + r_0 r_1 \ldots r_{\beta - 1} - r_0 r_1 \ldots r_{\beta - 2} + r_0 r_1 \ldots r_{\beta} - r_0 r_1 \ldots r_{\beta - 1} \\&= r_0 r_1 \ldots r_{\beta} - 1 \\&= M - 1\end{align}$$
Then in this case, to establish the bijectivity assertion, it’s sufficient to just show $f_{C^{\prime}}(x)$ is injective, i.e., for any $x, y \in [0, M)$, if $f_{C^{\prime}}(x) = f_{C^{\prime}}(y)$, then $x = y$.
Suppose the isomorphism mapping of $x$ and $y$ are as follows:
$$\begin{align}x &\mapsto \left(x_0, x_1, \ldots, x_{\beta}\right) \\y &\mapsto \left(y_0, y_1, \ldots, y_{\beta}\right)\end{align}$$
Because $f_{C^{\prime}}(x) = f_{C^{\prime}}(y)$, we have
$$\begin{align}x_0 + x_1 \cdot r_0 + x_2 \cdot r_0 r_1 + \ldots + x_{\beta} \cdot r_0 r_1 \ldots r_{\beta - 1} = y_0 + y_1 \cdot r_0 + y_2 \cdot r_0 r_1 + \ldots + y_{\beta} \cdot r_0 r_1 \ldots r_{\beta - 1}\end{align}$$
We will use strong induction to show that $x_i = y_i$ for every $i \in [0, \beta]$.
Because $f_{C^{\prime}}(x) \mod r_0 = f_{C^{\prime}}(y) \mod r_0$, we have $x_0 = y_0$.
Now suppose, by strong induction, that for a given $i \in (0, \beta]$ we have $x_j = y_j$ for all $j < i$. Then we have
$$\begin{align}x_i \cdot r_0 r_1 \ldots r_{i - 1} + x_{i + 1} \cdot r_0 r_1 \ldots r_{i} + \ldots + x_{\beta} \cdot r_0 r_1 \ldots r_{\beta - 1} = y_i \cdot r_0 r_1 \ldots r_{i - 1} + y_{i + 1} \cdot r_0 r_1 \ldots r_{i} + \ldots + y_{\beta} \cdot r_0 r_1 \ldots r_{\beta - 1}\end{align}$$
Because $x_i \in [0, r_i)$ and $y_i \in [0, r_i)$, taking this equation modulo $r_0 r_1 \ldots r_{i}$ and dividing by $r_0 r_1 \ldots r_{i - 1}$, we have $x_i = y_i$.
Because $(x_0, x_1, \ldots, x_{\beta}) = (y_0, y_1, \ldots, y_{\beta})$, and the isomorphism mapping is bijective, we have $x = y$.
Therefore $f_{C^{\prime}}: [0, M) \to \mathbb{N}$ restricts to a bijection $[0, M) \cong [0, M)$. So does $f_C$.
This concludes the proof. $\square$
Corollary 2.8 Complementation Disjointness
The Corollary 2.8 explains what it means of taking a complement of a layout.
In the setting of Proposition 2.7, let $I = [0, \text{size}(A)) = [0, N_0 N_1 \ldots N_{\alpha})$ be the domain of $f_A$. Then
$$\begin{align}f_A(I) \cap \widehat{f}_B(I) = \{0\}\end{align}$$
In other words, $\widehat{f}_A$ and $\widehat{f}_B$ have disjoint image when restricted to the domain of $f_A$, apart from 0.
Note that in the corollary, $f_A$ and $\widehat{f}_A$ are actually interchangeable, because the function domain is restricted to the domain of $f_A$.
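Continuing the small example from the complementation section (my own, for illustration): with $A = (4, 2):(1, 8)$, $M = 32$, and $B = \text{complement}(A, 32) = (1, 2, 2):(1, 4, 16)$, we have $I = [0, 8)$ and

$$\begin{align}f_A(I) = \{0, 1, 2, 3, 8, 9, 10, 11\}, \quad \widehat{f}_B(I) = \{0, 4, 16, 20, 32, 36, 48, 52\}, \quad f_A(I) \cap \widehat{f}_B(I) = \{0\}\end{align}$$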
Proof
Let $J = [0, \text{size}(B)) = [0, \frac{M}{N_0 N_1 \ldots N_{\alpha}})$ be the domain of $f_B$. Then by Proposition 2.7, we have
$$\begin{align}f_A(I) \cap f_B(J) = \{0\}\end{align}$$
To understand this, for any $x_A \in I$ and any $x_B \in J$, because of the isomorphism, we have
$$\begin{align}x_A &\mapsto \left(x_{A, 0}, x_{A, 1}, \ldots, x_{A, \alpha} \right) \\x_B &\mapsto \left(x_{B, 0}, x_{B, 1}, \ldots, x_{B, \alpha}, x_{B, \alpha + 1} \right)\end{align}$$
Then we have
$$\begin{align}f_A(x_A) &= x_{A, 0} + x_{A, 1} \cdot N_0 + x_{A, 2} \cdot N_0 N_1 + \ldots + x_{A, \alpha} \cdot N_0 N_1 \ldots N_{\alpha - 1} \\f_B(x_B) &= x_{B, 0} + x_{B, 1} \cdot N_0 d_0 + x_{B, 2} \cdot N_1 d_1 + \ldots + x_{B, \alpha} \cdot N_{\alpha - 1} d_{\alpha - 1} + x_{B, \alpha + 1} \cdot N_{\alpha} d_{\alpha}\end{align}$$
We orchestrate new coordinates for layout $C$ as follows:
$$\begin{align}x^{\prime}_A &\mapsto \left(0, x_{A, 0}, 0, x_{A, 1}, 0, x_{A, 2}, \ldots, 0, x_{A, \alpha}, 0 \right) \\x^{\prime}_B &\mapsto \left(x_{B, 0}, 0, x_{B, 1}, 0, x_{B, 2}, \ldots, x_{B, \alpha}, 0, x_{B, \alpha + 1} \right)\end{align}$$
Then we have
$$\begin{align}f_C(x^{\prime}_A) &= x_{A, 0} + x_{A, 1} \cdot N_0 + x_{A, 2} \cdot N_0 N_1 + \ldots + x_{A, \alpha} \cdot N_0 N_1 \ldots N_{\alpha - 1} \\&= f_A(x_A) \\f_C(x^{\prime}_B) &= x_{B, 0} + x_{B, 1} \cdot N_0 d_0 + x_{B, 2} \cdot N_1 d_1 + \ldots + x_{B, \alpha} \cdot N_{\alpha - 1} d_{\alpha - 1} + x_{B, \alpha + 1} \cdot N_{\alpha} d_{\alpha} \\&= f_B(x_B)\end{align}$$
By the Proposition 2.7, we have $f_C: [0, M) \to \mathbb{N}$ restricts to a bijection $[0, M) \cong [0, M)$. If $x^{\prime}_A \neq x^{\prime}_B$, then $f_C(x^{\prime}_A) \neq f_C(x^{\prime}_B)$, and $f_A(x_A) \neq f_B(x_B)$.
Obviously, other than $(0, 0, \ldots, 0)$, for any values of $x_{A, 0}, x_{A, 1}, \ldots, x_{A, \alpha}$ and $x_{B, 0}, x_{B, 1}, \ldots, x_{B, \alpha}, x_{B, \alpha + 1}$, $\left(0, x_{A, 0}, 0, x_{A, 1}, 0, x_{A, 2}, \ldots, 0, x_{A, \alpha}, 0 \right) \neq \left(x_{B, 0}, 0, x_{B, 1}, 0, x_{B, 2}, \ldots, x_{B, \alpha}, 0, x_{B, \alpha + 1} \right)$, $x^{\prime}_A \neq x^{\prime}_B$, $f_C(x^{\prime}_A) \neq f_C(x^{\prime}_B)$, and $f_A(x_A) \neq f_B(x_B)$.
This means, for any $x \in I$ that $x \neq 0$, there is no $y \in J$ such that $f_A(x) = f_B(y)$.
When $x = 0$, we have $f_A(x) = f_B(x) = 0$. Thus we could claim that
$$\begin{align}f_A(I) \cap f_B(J) = \{0\}\end{align}$$
In the Definition 2.6: Complementation, we have shown that the complement of $\{A, M\}$, $f_B$, as well as its extension $\widehat{f}_B$, are strictly increasing.
In addition, by the extension of the isomorphism, we have
$$\begin{align}\text{size}(B) \mapsto \left(0, 0, \ldots, 0, \frac{M}{N_{\alpha} d_{\alpha}} \right) \\\end{align}$$
Then we have
$$\begin{align}\widehat{f}_{B}(\text{size}(B)) &= 0 + 0 \cdot 1 + 0 \cdot N_0 d_0 + \ldots + 0 \cdot N_{\alpha - 1} d_{\alpha - 1} + \frac{M}{N_{\alpha} d_{\alpha}} \cdot N_{\alpha} d_{\alpha} \\&= M\end{align}$$
The largest value attained by $f_A$ is at $N_0 N_1 \ldots N_{\alpha} - 1$, and $f_A(N_0 N_1 \ldots N_{\alpha} - 1) = (N_0 - 1) d_0 + (N_1 - 1) d_1 + \ldots + (N_{\alpha} - 1) d_{\alpha}$.
Because $(N_0 - 1) d_0 < N_0 d_0$ and $N_i d_i \leq d_{i + 1}$ for every $i \in [0, \alpha - 1]$, $N_{\alpha} d_{\alpha} \leq M$, we have
$$\begin{align}f_A(N_0 N_1 \ldots N_{\alpha} - 1) &= (N_0 - 1) d_0 + (N_1 - 1) d_1 + \ldots + (N_{\alpha} - 1) d_{\alpha} \\&< N_0 d_0 + N_1 d_1 - d_1 + N_2 d_2 - d_2 + \ldots + N_{\alpha} d_{\alpha} - d_{\alpha} \\&\leq d_1 + N_1 d_1 - d_1 + N_2 d_2 - d_2 + \ldots + N_{\alpha} d_{\alpha} - d_{\alpha} \\&= N_1 d_1 + N_2 d_2 - d_2 + \ldots + N_{\alpha} d_{\alpha} - d_{\alpha} \\&\leq d_2 + N_2 d_2 - d_2 + \ldots + N_{\alpha} d_{\alpha} - d_{\alpha} \\&\vdots \\&\leq d_{\alpha} + N_{\alpha} d_{\alpha} - d_{\alpha} \\&= N_{\alpha} d_{\alpha} \\&\leq M\end{align}$$
Thus $f_A(N_0 N_1 \ldots N_{\alpha} - 1) < \widehat{f}_{B}(\text{size}(B))$.
First consider the case $I \cap J = I$, i.e., $\text{size}(A) \leq \text{size}(B)$. Then we have
$$\begin{align}f_A(I) \cap f_B(I) = \{0\}\end{align}$$
Because in this case, $f_B(I) = \widehat{f}_B(I)$, we have
$$\begin{align}f_A(I) \cap \widehat{f}_B(I) = \{0\}\end{align}$$
Now consider the other case, $I \cap J = J$, i.e., $\text{size}(A) \geq \text{size}(B)$. The largest value attained by $f_A$ is $f_A(N_0 N_1 \ldots N_{\alpha} - 1)$, and $f_A(N_0 N_1 \ldots N_{\alpha} - 1) < \widehat{f}_{B}(\text{size}(B))$. Moreover, because $\widehat{f}_{B}$ is strictly increasing, for any $x \in I/J$ we have $\widehat{f}_{B}(x) \geq \widehat{f}_{B}(\text{size}(B)) > f_A(y)$ for every $y \in I$.
Thus,
$$\begin{align}f_A(I) \cap \widehat{f}_B(I/J) = {\emptyset}\end{align}$$
Therefore,
$$\begin{align}f_A(I) \cap \widehat{f}_B(I) &= f_A(I) \cap \left(\widehat{f}_B(J) \cup \widehat{f}_B(I/J)\right) \\&= f_A(I) \cap \left(f_B(J) \cup \widehat{f}_B(I/J)\right) \\&= \left(f_A(I) \cap f_B(J)\right) \cup \left(f_A(I) \cap \widehat{f}_B(I/J)\right) \\&= \{0\} \cup {\emptyset} \\&= \{0\}\end{align}$$
Taken together, we have
$$\begin{align}f_A(I) \cap \widehat{f}_B(I) = \{0\}\end{align}$$
This concludes the proof. $\square$
A short note on the original proof of Corollary 2.8 in the paper is that Jay Shah claimed $f_A(I \cap J) \cap f_B(I \cap J) = \{0\}$, which is insufficient to show the proof. The sufficient statement should be $f_A(I) \cap f_B(J) = \{0\}$.
Remark 2.9 Complementation Disjointness, Ordering, and Boundedness
The complement $B$ of a layout $A$ with respect to an integer $M$ should satisfy three properties: (1) disjointness: apart from $0$, the image of $f_A$ is disjoint from the image of $\widehat{f}_B$ restricted to the domain of $f_A$; (2) ordering: the layout function of $B$ and its extension are strictly increasing; (3) boundedness: $\text{size}(B) = \frac{M}{\text{size}(A)}$ and $\text{cosize}(B) \leq \left\lfloor\frac{M}{\text{cosize}(A)}\right\rfloor \cdot \text{cosize}(A)$.
Properties 1 and 2 have been proved in Corollary 2.8 and in the discussion of Definition 2.6, respectively. We will now show a proof of property 3.
Proof
By Definition 2.6, we have $\text{size}(B) = \frac{M}{\text{size}(A)}$.
Because cosize is insensitive to the ordering of the layout, without loss of generality, we sorted $A$ so that $A = (N_0, N_1, \ldots, N_{\alpha}):(d_0, d_1, \ldots, d_{\alpha})$.
By the definition of cosize, we have
$$\begin{align}\text{cosize}(B) &= f_B(\text{size}(B) - 1) + 1 \\&= f_B\left(d_0 - 1, \frac{d_1}{N_0 d_0} - 1, \ldots, \frac{d_{\alpha}}{N_{\alpha - 1} d_{\alpha - 1}} - 1, \frac{M}{N_{\alpha} d_{\alpha}} - 1\right) + 1 \\&= (d_0 - 1) + \left(\frac{d_1}{N_0 d_0} - 1\right) \cdot N_0 d_0 + \ldots + \left(\frac{d_{\alpha}}{N_{\alpha - 1} d_{\alpha - 1}} - 1\right) \cdot N_{\alpha - 1} d_{\alpha - 1} + \left(\frac{M}{N_{\alpha} d_{\alpha}} - 1\right) \cdot N_{\alpha} d_{\alpha} + 1 \\&= d_0 + d_1 + \ldots + d_{\alpha} - N_0 d_0 - N_1 d_1 - \ldots - N_{\alpha} d_{\alpha} + M \\&= M - \left(\left(N_0 - 1\right) d_0 + \left(N_1 - 1\right) d_1 + \ldots + \left(N_{\alpha} - 1\right) d_{\alpha}\right) \\&= M - f_A(\text{size}(A) - 1) \\&= M - \left( \text{cosize}(A) - 1 \right) \\\end{align}$$
To obtain the inequality $\text{cosize}(B) \leq \left\lfloor\frac{M}{\text{cosize}(A)}\right\rfloor \cdot \text{cosize}(A)$, we divide the above equation by $\text{cosize}(A)$.
$$\begin{align}\frac{\text{cosize}(B)}{\text{cosize}(A)} &= \frac{M - \left( \text{cosize}(A) - 1 \right)}{\text{cosize}(A)} \\&= \frac{M}{\text{cosize}(A)} - 1 + \frac{1}{\text{cosize}(A)} \\\end{align}$$
and we have to show that
$$\begin{align}\frac{M}{\text{cosize}(A)} - 1 + \frac{1}{\text{cosize}(A)} \leq \left\lfloor\frac{M}{\text{cosize}(A)}\right\rfloor\end{align}$$
In fact, for any $a, b \in \mathbb{N}$ and $a \geq 1$, we have
$$\begin{align}\frac{b}{a} - 1 + \frac{1}{a} \leq \left\lfloor\frac{b}{a}\right\rfloor\end{align}$$
To see this, suppose $\frac{b}{a} = \left\lfloor\frac{b}{a}\right\rfloor + c$, where $c = \frac{k}{a}$ and $k$ is an integer such that $0 \leq k < a$. Then we have
$\frac{1}{a} \leq c < 1$ and $1 \leq ac < a$. Then we want to show that
$$\begin{align}\frac{b}{a} - 1 + \frac{1}{a} \leq \frac{b}{a} - c \\\end{align}$$
$$\begin{align}-a + 1 \leq -ac \\\end{align}$$
$$\begin{align}a - ac \geq 1 \\\end{align}$$
$$\begin{align}a - k \geq 1 \\\end{align}$$
Because $a$ and $k$ are both integers and $0 \leq k < a$, we have $a - k \geq 1$. Thus the inequality holds, and applying it with $a = \text{cosize}(A)$ and $b = M$ gives the desired bound on $\text{cosize}(B)$.
This concludes the proof. $\square$
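Continuing with the same illustrative pair as above ($A = (2, 2):(1, 4)$, $M = 16$, $B = (1, 2, 2):(1, 2, 8)$), a short standalone C++ check confirms both the equality $\text{cosize}(B) = M - (\text{cosize}(A) - 1)$ derived above and the boundedness property.

#include <cassert>
#include <cstdio>
#include <vector>

// f(x) = sum_i x_i * d_i with column-major coordinates; cosize = f(size - 1) + 1.
int layout_function(int x, const std::vector<int>& shape, const std::vector<int>& stride)
{
    int value = 0;
    for (size_t i = 0; i < shape.size(); ++i)
    {
        value += (x % shape[i]) * stride[i];
        x /= shape[i];
    }
    return value;
}

int main()
{
    // A = (2, 2):(1, 4), M = 16, B = complement(A, 16) = (1, 2, 2):(1, 2, 8).
    int const M = 16;
    std::vector<int> const shape_A{2, 2}, stride_A{1, 4};
    std::vector<int> const shape_B{1, 2, 2}, stride_B{1, 2, 8};

    int const size_A = 2 * 2;
    int const size_B = 1 * 2 * 2;
    int const cosize_A = layout_function(size_A - 1, shape_A, stride_A) + 1;  // 6
    int const cosize_B = layout_function(size_B - 1, shape_B, stride_B) + 1;  // 11

    assert(cosize_B == M - (cosize_A - 1));          // equality from the proof
    assert(cosize_B <= (M / cosize_A) * cosize_A);   // boundedness property
    std::printf("cosize(A) = %d, cosize(B) = %d, bound = %d\n", cosize_A, cosize_B,
                (M / cosize_A) * cosize_A);
    return 0;
}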
Coalescence
Coalescence simplifies the layout and does not change the layout function.
Coalescence Rules
Considering a layout with just two integral modes, $A = (N_{0}, N_{1}):(d_{0}, d_{1})$, we have four cases to consider: (1) $N_{1} = 1$; (2) $N_{0} = 1$; (3) $d_{1} = N_{0} d_{0}$; (4) none of the above.
In the first case, obviously, $A = (N_{0}, 1):(d_{0}, d_{1}) = (N_{0}):(d_{0})$.
In the second case, also obviously, $A = (1, N_{1}):(d_{0}, d_{1}) = (N_{1}):(d_{1})$.
In the third case, we have $A = (N_{0}, N_{1}):(d_{0}, N_{0} d_{0}) = (N_{0} N_{1}):(d_{0})$.
In the fourth case, no coalescence is possible and $A$ remains as it is.
One case that is often misunderstood is $d_{0} = N_{1} d_{1}$. In this case, $A = (N_{0}, N_{1}):(N_{1} d_{1}, d_{1})$. At first glance, it seems that we could coalesce $A$ to $(N_{0} N_{1}):(d_{1})$. However, this is not correct, because it changes the layout function.
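A minimal standalone C++ check makes this concrete for a hand-picked instance with $N_{0} = 2$, $N_{1} = 3$, and $d_{1} = 1$, so that $d_{0} = N_{1} d_{1} = 3$: the layout $(2, 3):(3, 1)$ and the tempting coalescence $(6):(1)$ have different layout functions.

#include <cstdio>

int main()
{
    // A = (2, 3):(3, 1), so d0 = N1 * d1. The tempting "coalesced" layout is (6):(1).
    // Printing both layout functions over [0, 6) shows that they are different
    // functions, so this coalescence is not allowed.
    for (int x = 0; x < 6; ++x)
    {
        int const f_A = (x % 2) * 3 + (x / 2) * 1;  // (2, 3):(3, 1)
        int const f_C = x * 1;                      // (6):(1)
        std::printf("x = %d, f_A(x) = %d, f_C(x) = %d\n", x, f_A, f_C);
    }
    return 0;
}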
Composition
Definition 2.11 Left Divisibility
Let $M, d > 0$ be positive integers and let $M = M_{0} \cdot M_{1} \cdot \ldots \cdot M_{\alpha}$ be a given factorization of $M$ by integers $M_{k} > 1$ for $k \in [0, \alpha]$. Replacing $M_{\alpha}$ by $\infty$, let
$$\begin{align}\widehat{M} = M_{0} \cdot M_{1} \cdot \ldots \cdot M_{\alpha - 1} \cdot \infty\end{align}$$
and consider $\infty$ to be divisible by every positive integer. We say that $M$ is left divisible by $d$ (implicitly, with respect to the given factorization) if there exists $0 \leq i \leq \alpha$ such that: (1) $M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i - 1}$ divides $d$ (the empty product for $i = 0$ being $1$), so that we may write $d = M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i - 1} \cdot c$ for a positive integer $c$; (2) $1 \leq c < M_{i}$ when $i < \alpha$ (for $i = \alpha$, the last factor has been replaced by $\infty$, so no upper bound is imposed); (3) $c$ divides $M_{i}$ (again automatic for $i = \alpha$, since $\infty$ is divisible by every positive integer).
Here $i$ is necessarily unique if it exists, which we can prove by contradiction.
Proof
Suppose there exist two distinct indices $i$ and $j$ that both satisfy the three conditions. Without loss of generality, suppose $i < j$.
There are two cases to consider.
In the case where $j < \alpha$, we will also have $i < \alpha$. Then we have
$$\begin{align}d &= c \cdot M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i - 1} \\\end{align}$$
where $c$ is some positive integer such that $1 \leq c < M_{i}$.
Similarly,
$$\begin{align}d &= c^{\prime} \cdot M_{0} \cdot M_{1} \cdot \ldots \cdot M_{j - 1} \\\end{align}$$
where $c^{\prime}$ is some positive integer such that $1 \leq c^{\prime} < M_{j}$.
Thus,
$$\begin{align}c \cdot M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i - 1} &= c^{\prime} \cdot M_{0} \cdot M_{1} \cdot \ldots \cdot M_{j - 1} \\\end{align}$$
$$\begin{align}c &= c^{\prime} \cdot M_{i} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 1} \\\end{align}$$
For this equation to hold, we would need
$$\begin{align}c^{\prime} \cdot \frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 1} = 1\end{align}$$
However, because $M_{k} > 1$ for $k \in [0, \alpha]$, $\frac{M_{i}}{c} > 1$, and $c^{\prime} \geq 1$, the left-hand side is strictly greater than $1$. This is a contradiction.
In the case where $j = \alpha$, we will also have $i < \alpha$. Then we have
$$\begin{align}d &= c \cdot M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i - 1} \\\end{align}$$
where $c$ is some positive integer such that $1 \leq c < M_{i}$.
Similarly,
$$\begin{align}d &= c^{\prime} \cdot M_{0} \cdot M_{1} \cdot \ldots \cdot M_{\alpha - 1} \\\end{align}$$
where $c^{\prime}$ is some positive integer.
Thus,
$$\begin{align}c \cdot M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i - 1} &= c^{\prime} \cdot M_{0} \cdot M_{1} \cdot \ldots \cdot M_{\alpha - 1} \\\end{align}$$
$$\begin{align}c &= c^{\prime} \cdot M_{i} \cdot M_{i + 1} \cdot \ldots \cdot M_{\alpha - 1} \\\end{align}$$
For this equation to hold, we would need
$$\begin{align}c^{\prime} \cdot \frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{\alpha - 1} = 1\end{align}$$
However, because $M_{k} > 1$ for $k \in [0, \alpha]$, $\frac{M_{i}}{c} > 1$, and $c^{\prime} \geq 1$, the left-hand side is strictly greater than $1$. This is a contradiction.
Taken together, $i$ is unique if it exists.
This concludes the proof. $\square$
If $i$ exists, we will refer to $i$ as the division index and write $\widehat{M} = d \cdot \widehat{M}^{\prime}$, where $\widehat{M}^{\prime}$ is endowed with the induced factorization $\widehat{M}^{\prime} = \frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{\alpha - 1} \cdot \infty$ if $i < \alpha$, and $\widehat{M}^{\prime} = \infty$ if $i = \alpha$.
To see this, in the case where $0 \leq i < \alpha$, we have
$$\begin{align}\widehat{M} &= M_{0} \cdot M_{1} \cdot \ldots \cdot M_{\alpha - 1} \cdot \infty \\&= M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i - 1} \cdot M_{i} \cdot M_{i + 1} \cdot \ldots \cdot M_{\alpha - 1} \cdot \infty \\&= \frac{d}{c} \cdot M_{i} \cdot M_{i + 1} \cdot \ldots \cdot M_{\alpha - 1} \cdot \infty \\&= d \cdot \frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{\alpha - 1} \cdot \infty \\&= d \cdot M_{0}^{\prime} \cdot M_{1}^{\prime} \cdot \ldots \cdot M_{\alpha - i - 1}^{\prime} \cdot \infty \\&= d \cdot \widehat{M}^{\prime}\end{align}$$
where $\widehat{M}^{\prime} = M_{0}^{\prime} \cdot M_{1}^{\prime} \cdot \ldots \cdot M_{\alpha - i - 1}^{\prime} \cdot \infty$ with $M_{0}^{\prime} = \frac{M_{i}}{c} > 1$ and $M_{j}^{\prime} = M_{i + j}$ for $0 < j < \alpha - i$.
In the case where $i = \alpha$, we have
$$\begin{align}\widehat{M} &= M_{0} \cdot M_{1} \cdot \ldots \cdot M_{\alpha - 1} \cdot \infty \\&= \frac{d}{c} \cdot \infty \\&= d \cdot \infty \\&= d \cdot \widehat{M}^{\prime}\end{align}$$
where $\widehat{M}^{\prime} = \infty$.
Furthermore, we say that $M$ is weakly left divisible by $d$ if there exists $0 \leq i \leq \alpha$ such that conditions 1 and 2 of left divisibility are satisfied, but not necessarily condition 3.
Notice that the proof of the uniqueness of the division index $i$ never uses condition 3. Therefore, we can still call the necessarily unique $i$ the division index for weak left divisibility, but we no longer have the induced factorization of $\widehat{M}$, because that factorization relies on condition 3 of left divisibility.
Also notice that $\widehat{M}^{\prime}$ with its induced factorization can itself be considered for left divisibility or weak left divisibility (with the step of replacing the last factor by $\infty$ now being superfluous). More specifically, because $\widehat{M}^{\prime} > 0$, $M_{j}^{\prime} > 1$ for $j \in [0, \alpha - i - 1]$, and $\widehat{M}^{\prime} = M_{0}^{\prime} \cdot M_{1}^{\prime} \cdot \ldots \cdot M_{\alpha - i - 1}^{\prime} \cdot \infty$, given another positive integer $d^{\prime} > 0$, we can test whether the properties of left divisibility or weak left divisibility hold for $\widehat{M}^{\prime}$ with respect to $d^{\prime}$. Replacing the last factor by $\infty$ is unnecessary because it is already $\infty$.
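The following standalone C++ sketch tests left divisibility and weak left divisibility for a given factorization, under the reading of the three conditions used in the proofs above ($d = M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i-1} \cdot c$ with $1 \leq c < M_{i}$ for $i < \alpha$, and, for left divisibility proper, $c$ dividing $M_{i}$); the concrete factorization used in the main function is a hand-picked example.

#include <cstdio>
#include <vector>

// Search for the division index i such that d = M_0 * ... * M_{i-1} * c with
// 1 <= c < M_i (no upper bound when i == alpha, where M_alpha is replaced by
// infinity). For left divisibility proper, c must additionally divide M_i.
// Returns true and sets i_out and c_out if such an i exists.
bool left_divisible(const std::vector<int>& M, int d, bool weak, int& i_out, int& c_out)
{
    int const alpha = static_cast<int>(M.size()) - 1;
    long long prefix = 1;
    for (int i = 0; i <= alpha; ++i)
    {
        if (d % prefix == 0)
        {
            long long const c = d / prefix;
            bool const in_range = (i == alpha) || (c < M[i]);
            bool const divides = (i == alpha) || (M[i] % c == 0);
            if (in_range && (weak || divides))
            {
                i_out = i;
                c_out = static_cast<int>(c);
                return true;
            }
        }
        prefix *= M[i];
    }
    return false;
}

int main()
{
    std::vector<int> const M{8, 6, 8};  // a hand-picked factorization
    int i = 0;
    int c = 0;
    if (left_divisible(M, 4, false, i, c))
    {
        std::printf("d = 4: left divisible, division index i = %d, c = %d\n", i, c);
    }
    if (left_divisible(M, 32, true, i, c) && !left_divisible(M, 32, false, i, c))
    {
        std::printf("d = 32: weakly left divisible only, division index i = %d, c = %d\n", i, c);
    }
    return 0;
}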
Definition 2.12 Admission for Composition - Restricted Case
We first consider composition in the restricted case of length 1 layouts for the second layout.
Let $\mathbf{S} = (M_{0}, M_{1}, \ldots, M_{\alpha})$ be a shape tuple, let $M = M_{0} \cdot M_{1} \cdot \ldots \cdot M_{\alpha}$, and let $B = (N):(r)$ be a layout of length 1. Then we say that the pair $\{\mathbf{S}, B\}$ is admissible for composition (or simply admissible) if: (1) $M$ is left divisible by $r$; and (2) writing $\widehat{M} = r \cdot \widehat{M}^{\prime}$ as in Definition 2.11, $\widehat{M}^{\prime}$ is weakly left divisible by $N$.
Definition 2.13 Composition - Restricted Case
The idea of admissibility is that the composition $A \circ B$ of layouts will entail “dividing $B$ along the modes of $A$”. More precisely, we have the following:
Suppose that $\mathbf{S} = (M_{0}, M_{1}, \ldots, M_{\alpha})$ is a shape tuple, and $B = (N):(r)$ is a layout of length 1 such that $\{\mathbf{S}, B\}$ is admissible for composition. Let $\mathbf{D} = (d_{0}, d_{1}, \ldots, d_{\alpha})$ be any stride tuple and let $A = (\mathbf{S}:\mathbf{D})$ be a coalesced layout.
Note that in Jay Shah’s original paper, the layout $A$ was not required to be coalesced, which can make some compositions invalid.
As in Definition 2.11, let $M = M_{0} \cdot M_{1} \cdot \ldots \cdot M_{\alpha}$ and $\widehat{M} = r \cdot \widehat{M}^{\prime}$ with division index $0 \leq i \leq \alpha$. We separate the definition of $A \circ B$ into two cases.
First suppose that $0 \leq i < \alpha$, so that
$$\begin{align}r &= M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i - 1} \cdot c \\\widehat{M}^{\prime} &= \frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{\alpha - 1} \cdot \infty\end{align}$$
Then if $N \leq \frac{M_{i}}{c}$, we let $A \circ B = (N):(cd_{i})$.
Otherwise, there exists a $j \in [i + 1, \alpha]$ such that $N = \frac{M_{i}}{c} \cdot \ldots \cdot M_{j-1} \cdot c^{\prime}$, where $1 \leq c^{\prime} < M_{j}$ if $j \neq \alpha$ (when $j = i + 1$, $N = \frac{M_{i}}{c} \cdot c^{\prime}$).
An important fact here is that $c^{\prime}$ must be an integer because of the second condition for admission for composition, namely that $\widehat{M}^{\prime}$ is weakly left divisible by $N$: we must have $\frac{M_{i}}{c} \cdot \ldots \cdot M_{j-1}$ dividing $N$, so $c^{\prime}$ is an integer.
We let
$$\begin{align}A \circ B =\begin{cases}\left(\frac{M_{i}}{c}, M_{i + 1}, \ldots, M_{j - 1}, c^{\prime} \right) : \left(cd_{i}, d_{i + 1}, \ldots, d_{j - 1}, d_{j} \right) & \text{if } c^{\prime} > 1 \\\left(\frac{M_{i}}{c}, M_{i + 1}, \ldots, M_{j - 1} \right) : \left(cd_{i}, d_{i + 1}, \ldots, d_{j - 1} \right) & \text{if } c^{\prime} = 1\end{cases}\end{align}$$
If instead $i = \alpha$, then we have $r = M_{0} \cdot M_{1} \cdot \ldots \cdot M_{\alpha - 1} \cdot c$ as before but $\widehat{M}^{\prime} = \infty$, and we let $A \circ B = (N):(cd_{\alpha})$.
Let’s look at this definition more closely.
Essentially, we are feeding the one-dimensional coordinates $k \cdot r$, for $k \in [0, N - 1]$, into the layout $A$, where $r = M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i - 1} \cdot c$ and $c$ divides $M_{i}$.
Let's first consider the case of $0 \leq i < \alpha$.
If $N \leq \frac{M_{i}}{c}$, then the first mode in the layout $A$ is sufficient for dividing $B$. Consequently, the composition layout $A \circ B = (N):(cd_{i})$.
Otherwise if $N > \frac{M_{i}}{c}$, more modes in the layout $A$ will be involved for dividing $B$, and consequently the composition layout
$$\begin{align}A \circ B =\begin{cases}\left(\frac{M_{i}}{c}, M_{i + 1}, \ldots, M_{j - 1}, c^{\prime} \right) : \left(cd_{i}, d_{i + 1}, \ldots, d_{j - 1}, d_{j} \right) & \text{if } c^{\prime} > 1 \\\left(\frac{M_{i}}{c}, M_{i + 1}, \ldots, M_{j - 1} \right) : \left(cd_{i}, d_{i + 1}, \ldots, d_{j - 1} \right) & \text{if } c^{\prime} = 1\end{cases}\end{align}$$
Let’s then consider the case of $i = \alpha$. We have $r = M_{0} \cdot M_{1} \cdot \ldots \cdot M_{\alpha - 1} \cdot c$ and $\widehat{M}^{\prime} = \infty$. Here $c$ is a positive integer that can be arbitrarily large. So only the last mode in the layout $A$ is involved in dividing $B$, and consequently the composition layout is $A \circ B = (N):(cd_{\alpha})$.
Note that by this definition, $\text{size}(A \circ B) = \text{size}(B)$. This is a critical property which we will use later.
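As a concrete, hand-worked instance of Definition 2.13 (our own example, not from the paper): take $A = (6, 2):(8, 2)$ and $B = (4):(3)$. Then $r = 3$ gives division index $i = 0$ with $c = 3$, and $N = 4 > \frac{M_{0}}{c} = 2$ forces $j = \alpha = 1$ with $c^{\prime} = 2$, so $A \circ B = (2, 2):(24, 2)$. The standalone C++ check below confirms that $f_{A \circ B} = \widehat{f}_A \circ f_B$ on the domain of $B$.

#include <cassert>
#include <cstdio>

int main()
{
    // Hand-worked instance of Definition 2.13 (chosen for illustration):
    // A = (6, 2):(8, 2), B = (4):(3). Here r = 3 = c with division index i = 0,
    // and N = 4 > M_0 / c = 2, so A o B = (2, 2):(24, 2).
    for (int k = 0; k < 4; ++k)
    {
        int const f_B = k * 3;                                     // B = (4):(3)
        int const f_A_of_f_B = (f_B % 6) * 8 + (f_B / 6 % 2) * 2;  // A = (6, 2):(8, 2)
        int const f_AoB = (k % 2) * 24 + (k / 2 % 2) * 2;          // A o B = (2, 2):(24, 2)
        assert(f_A_of_f_B == f_AoB);
        std::printf("k = %d, f_A(f_B(k)) = %d, f_{A o B}(k) = %d\n", k, f_A_of_f_B, f_AoB);
    }
    return 0;
}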
Proposition 2.14
In the situation of Definition 2.13, we have that $f_{A \circ B} = \widehat{f}_A \circ f_B$.
Proof
This proves more formally the intuition we explained for Definition 2.13.
We carry over the notation from Definition 2.13.
Given an index $0 \leq k \leq \alpha$, let $\delta_{k} \in \mathbb{N}^{\times(\alpha + 1)}$ denote the coordinate that is zero everywhere except in the $k$-th position, where it is 1. Concretely,
$$\begin{align}\delta_{0} &= \underbrace{(1, 0, 0, \ldots, 0)}_{\alpha + 1} \\\delta_{1} &= \underbrace{(0, 1, 0, \ldots, 0)}_{\alpha + 1} \\&\vdots \\\delta_{\alpha} &= \underbrace{(0, 0, 0, \ldots, 1)}_{\alpha + 1}\end{align}$$
With respect to the isomorphism of the extended layout $A$, we have
$$\begin{align}\widehat{\iota}: \mathbb{N} \cong [0, M_{0}) \times [0, M_{1}) \times \ldots \times [0, M_{\alpha - 1}) \times \mathbb{N}\end{align}$$
Because $B = (N):(r)$, we have
$$\begin{align}f_B(k) &= k \cdot r \\&= M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i - 1} \cdot k \cdot c \\\end{align}$$
where $k \in [0, N - 1]$.
Let's first consider the case of $0 \leq i < \alpha$.
If $N \leq \frac{M_{i}}{c}$, i.e. $N \cdot c \leq M_{i}$, then we must have $k \cdot c < M_{i}$ for all $k \in [0, N - 1]$. Because of the isomorphism of the extended layout $A$, we have
$$\begin{align}f_B(k) \mapsto \delta_{i} \cdot k \cdot c\end{align}$$
Then we have
$$\begin{align}\left(\widehat{f}_A \circ f_B \right)(k) &= \widehat{f}_A \left(f_B\left(k\right)\right) \\&= \widehat{f}_A \left(\delta_{i} \cdot k \cdot c\right) \\&= k \cdot c \cdot d_{i} \\\end{align}$$
According to Definition 2.13, we have
$$\begin{align}f_{A \circ B}(k) &= k \cdot c \cdot d_{i} \\\end{align}$$
Therefore, $f_{A \circ B} = \widehat{f}_A \circ f_B$.
Otherwise, if $N > \frac{M_{i}}{c}$, then $N = \frac{M_{i}}{c} \cdot \ldots \cdot M_{j-1} \cdot c^{\prime}$ as above. Because of the isomorphism of the extended layout $A$, by definition, we have
$$\begin{align}f_B(k) &\mapsto \left(f_B(k) \mod M_{0}, \left\lfloor \frac{f_B(k)}{M_{0}} \right\rfloor \mod M_{1}, \left\lfloor \frac{f_B(k)}{M_{0} \cdot M_{1}} \right\rfloor \mod M_{2}, \ldots, \left\lfloor \frac{f_B(k)}{M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i - 1}} \right\rfloor \mod M_{i}, \ldots, \left\lfloor \frac{f_B(k)}{M_{0} \cdot M_{1} \cdot \ldots \cdot M_{\alpha - 2}} \right\rfloor \mod M_{\alpha - 1}, \left\lfloor \frac{f_B(k)}{M_{0} \cdot M_{1} \cdot \ldots \cdot M_{\alpha - 1}} \right\rfloor \right) \\&= \left(0, 0, \ldots, \left(k \cdot c\right) \mod M_{i}, \left\lfloor \frac{k \cdot c}{M_{i}} \right\rfloor \mod M_{i + 1}, \ldots, \left\lfloor \frac{k \cdot c}{M_{i} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 1}} \right\rfloor \mod M_{j}, \ldots, \left\lfloor \frac{k \cdot c}{M_{i} \cdot M_{i + 1} \cdot \ldots \cdot M_{\alpha - 2}} \right\rfloor \mod M_{\alpha - 1}, \left\lfloor \frac{k \cdot c}{M_{i} \cdot M_{i + 1} \cdot \ldots \cdot M_{\alpha - 1}} \right\rfloor \right) \\&= \left(0, 0, \ldots, \left( k \mod \frac{M_{i}}{c} \right) \cdot c, \left\lfloor \frac{k}{\frac{M_{i}}{c}} \right\rfloor \mod M_{i + 1}, \ldots, \left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 1}} \right\rfloor \mod M_{j}, \ldots, \left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{\alpha - 2}} \right\rfloor \mod M_{\alpha - 1}, \left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{\alpha - 1}} \right\rfloor \right) \\\end{align}$$
Note that here we used the property $\left(k \cdot c\right) \mod M_{i} = \left( k \mod \frac{M_{i}}{c} \right) \cdot c$, if $c$ divides $M_{i}$.
To see this, suppose $k \cdot c = p \cdot M_{i} + r$, where $0 \leq r < M_{i}$. Then we have $k = \frac{p \cdot M_{i} + r}{c} = p \cdot \frac{M_{i}}{c} + \frac{r}{c}$. Because $c$ divides $M_{i}$ and $k$ is an integer, $\frac{r}{c} = k - p \cdot \frac{M_{i}}{c}$ must be an integer, and $0 \leq \frac{r}{c} < \frac{M_{i}}{c}$. Thus, we have $\left(k \cdot c\right) \mod M_{i} = r$, and $\left( k \mod \frac{M_{i}}{c} \right) \cdot c = \frac{r}{c} \cdot c = r$. Therefore, $\left(k \cdot c\right) \mod M_{i} = \left( k \mod \frac{M_{i}}{c} \right) \cdot c$.
Furthermore, because $0 \leq k < N$, we have $0 \leq k \cdot c < N \cdot c = M_{i} \cdot \ldots \cdot M_{j - 1} \cdot c^{\prime}$, where $1 \leq c^{\prime} < M_{j}$. Thus $0 \leq k \cdot c < M_{i} \cdot \ldots \cdot M_{j - 1} \cdot M_{j}$.
When $c^{\prime} > 1$, we have
$$\begin{align}\left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j}} \right\rfloor&= \left\lfloor \frac{k \cdot c}{M_{i} \cdot M_{i + 1} \cdot \ldots \cdot M_{j}} \right\rfloor \\&= 0\end{align}$$
$$\begin{align}\left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j}} \right\rfloor \mod M_{j + 1}&= 0\end{align}$$
and of course for any $l \in [j + 1, \alpha - 1]$, we have
$$\begin{align}\left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{l}} \right\rfloor&= \left\lfloor \frac{k \cdot c}{M_{i} \cdot M_{i + 1} \cdot \ldots \cdot M_{l}} \right\rfloor \\&= 0\end{align}$$
$$\begin{align}\left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{l}} \right\rfloor \mod M_{l + 1}&= 0\end{align}$$
Thus, we have
$$\begin{align}f_B(k) &\mapsto \left(0, 0, \ldots, \left( k \mod \frac{M_{i}}{c} \right) \cdot c, \left\lfloor \frac{k}{\frac{M_{i}}{c}} \right\rfloor \mod M_{i + 1}, \ldots, \left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 1}} \right\rfloor \mod M_{j}, 0, 0, \ldots, 0 \right) \\\end{align}$$
What’s more, because $k \leq (N - 1)$ and $\frac{M_{i}}{c} \cdot \ldots \cdot M_{j - 1} = \frac{N}{c^{\prime}}$, we have
$$\begin{align}\left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 1}} \right\rfloor&= \left\lfloor \frac{k}{\frac{N}{c^{\prime}}} \right\rfloor \\&= \left\lfloor \frac{k}{N} \cdot c^{\prime} \right\rfloor \\&\leq \left\lfloor \frac{N - 1}{N} \cdot c^{\prime} \right\rfloor \\&= \left\lfloor c^{\prime} - \frac{c^{\prime}}{N} \right\rfloor \\&\leq c^{\prime} - 1\end{align}$$
Because this value is at most $c^{\prime} - 1$, which is smaller than both $c^{\prime}$ and $M_{j}$, we have
$$\begin{align}\left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 1}} \right\rfloor \mod M_{j}&= \left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 1}} \right\rfloor \mod c^{\prime} \\\end{align}$$
Thus, we have
$$\begin{align}f_B(k) &\mapsto \left(0, 0, \ldots, \left( k \mod \frac{M_{i}}{c} \right) \cdot c, \left\lfloor \frac{k}{\frac{M_{i}}{c}} \right\rfloor \mod M_{i + 1}, \ldots, \left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 1}} \right\rfloor \mod c^{\prime}, 0, 0, \ldots, 0 \right) \\\end{align}$$
Then we have
$$\begin{align}\left(\widehat{f}_A \circ f_B \right)(k) &= \widehat{f}_A \left(f_B\left(k\right)\right) \\&= \widehat{f}_A \left(0, 0, \ldots, \left( k \mod \frac{M_{i}}{c} \right) \cdot c, \left\lfloor \frac{k}{\frac{M_{i}}{c}} \right\rfloor \mod M_{i + 1}, \ldots, \left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 1}} \right\rfloor \mod c^{\prime}, 0, 0, \ldots, 0 \right) \\&= 0 \cdot d_{0} + 0 \cdot d_{1} + \ldots + \left( k \mod \frac{M_{i}}{c} \right) \cdot c \cdot d_{i} + \left( \left\lfloor \frac{k}{\frac{M_{i}}{c}} \right\rfloor \mod M_{i + 1} \right) \cdot d_{i + 1} + \ldots + \left( \left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 1}} \right\rfloor \mod c^{\prime} \right) \cdot d_{j} + 0 \cdot d_{j + 1} + \ldots + 0 \cdot d_{\alpha} \\&= \left( k \mod \frac{M_{i}}{c} \right) \cdot c \cdot d_{i} + \left( \left\lfloor \frac{k}{\frac{M_{i}}{c}} \right\rfloor \mod M_{i + 1} \right) \cdot d_{i + 1} + \ldots + \left( \left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 1}} \right\rfloor \mod c^{\prime} \right) \cdot d_{j} \\\end{align}$$
According to Definition 2.13, we have
$$\begin{align}f_{A \circ B}(k) &= \left( k \mod \frac{M_{i}}{c} \right) \cdot c \cdot d_{i} + \left( \left\lfloor \frac{k}{\frac{M_{i}}{c}} \right\rfloor \mod M_{i + 1} \right) \cdot d_{i + 1} + \ldots + \left( \left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 1}} \right\rfloor \mod c^{\prime} \right) \cdot d_{j} \\\end{align}$$
Therefore, $f_{A \circ B} = \widehat{f}_A \circ f_B$.
When $c^{\prime} = 1$, we have $0 \leq k \cdot c < N \cdot c = M_{i} \cdot \ldots \cdot M_{j - 1} \cdot c^{\prime} = M_{i} \cdot \ldots \cdot M_{j - 1}$.
Thus, we have
$$\begin{align}\left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 1}} \right\rfloor&= \left\lfloor \frac{k \cdot c}{M_{i} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 1}} \right\rfloor \\&= 0\end{align}$$
$$\begin{align}\left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 1}} \right\rfloor \mod M_{j}&= 0\end{align}$$
Thus, we have
$$\begin{align}f_B(k) &\mapsto \left(0, 0, \ldots, \left( k \mod \frac{M_{i}}{c} \right) \cdot c, \left\lfloor \frac{k}{\frac{M_{i}}{c}} \right\rfloor \mod M_{i + 1}, \ldots, \left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 2}} \right\rfloor \mod M_{j - 1}, 0, 0, \ldots, 0 \right) \\\end{align}$$
Then we have
$$\begin{align}\left(\widehat{f}_A \circ f_B \right)(k) &= \widehat{f}_A \left(f_B\left(k\right)\right) \\&= \widehat{f}_A \left(0, 0, \ldots, \left( k \mod \frac{M_{i}}{c} \right) \cdot c, \left\lfloor \frac{k}{\frac{M_{i}}{c}} \right\rfloor \mod M_{i + 1}, \ldots, \left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 2}} \right\rfloor \mod M_{j - 1}, 0, 0, \ldots, 0 \right) \\&= 0 \cdot d_{0} + 0 \cdot d_{1} + \ldots + \left( k \mod \frac{M_{i}}{c} \right) \cdot c \cdot d_{i} + \left( \left\lfloor \frac{k}{\frac{M_{i}}{c}} \right\rfloor \mod M_{i + 1} \right) \cdot d_{i + 1} + \ldots + \left( \left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 2}} \right\rfloor \mod M_{j - 1} \right) \cdot d_{j - 1} + 0 \cdot d_{j} + 0 \cdot d_{j + 1} + \ldots + 0 \cdot d_{\alpha} \\&= \left( k \mod \frac{M_{i}}{c} \right) \cdot c \cdot d_{i} + \left( \left\lfloor \frac{k}{\frac{M_{i}}{c}} \right\rfloor \mod M_{i + 1} \right) \cdot d_{i + 1} + \ldots + \left( \left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 2}} \right\rfloor \mod M_{j - 1} \right) \cdot d_{j - 1} \\\end{align}$$
According to Definition 2.13, we have
$$\begin{align}f_{A \circ B}(k) &= \left( k \mod \frac{M_{i}}{c} \right) \cdot c \cdot d_{i} + \left( \left\lfloor \frac{k}{\frac{M_{i}}{c}} \right\rfloor \mod M_{i + 1} \right) \cdot d_{i + 1} + \ldots + \left( \left\lfloor \frac{k}{\frac{M_{i}}{c} \cdot M_{i + 1} \cdot \ldots \cdot M_{j - 2}} \right\rfloor \mod M_{j - 1} \right) \cdot d_{j - 1} \\\end{align}$$
Therefore, $f_{A \circ B} = \widehat{f}_A \circ f_B$.
Let’s then consider the case of $i = \alpha$.
$$\begin{align}f_B(k) &= k \cdot r \\&= k \cdot M_{0} \cdot M_{1} \cdot \ldots \cdot M_{\alpha - 1} \cdot c \\\end{align}$$
where $k \in [0, N - 1]$.
Because of the isomorphism of the extended layout $A$, we have
$$\begin{align}f_B(k) &\mapsto \delta_{\alpha} \cdot k \cdot c \\\end{align}$$
Then we have
$$\begin{align}\left(\widehat{f}_A \circ f_B \right)(k) &= \widehat{f}_A \left(f_B\left(k\right)\right) \\&= \widehat{f}_A \left(\delta_{\alpha} \cdot k \cdot c\right) \\&= k \cdot c \cdot d_{\alpha} \\\end{align}$$
According to Definition 2.13, we have
$$\begin{align}f_{A \circ B}(k) &= k \cdot c \cdot d_{\alpha} \\\end{align}$$
Therefore, $f_{A \circ B} = \widehat{f}_A \circ f_B$.
Taken together, we have $f_{A \circ B} = \widehat{f}_A \circ f_B$ for all the cases in Definition 2.13.
This concludes the proof. $\square$
One might ask why the second condition for admission for composition is necessary. If we don’t have it, $c^{\prime}$ can be fractional and we can still define $A \circ B$ to be
$$\begin{align}A \circ B =\begin{cases}\left(\frac{M_{i}}{c}, M_{i + 1}, \ldots, M_{j - 1}, \left\lceil c^{\prime} \right\rceil \right) : \left(cd_{i}, d_{i + 1}, \ldots, d_{j - 1}, d_{j} \right) & \text{if } c^{\prime} > 1 \\\left(\frac{M_{i}}{c}, M_{i + 1}, \ldots, M_{j - 1} \right) : \left(cd_{i}, d_{i + 1}, \ldots, d_{j - 1} \right) & \text{if } c^{\prime} = 1\end{cases}\end{align}$$
It is not too difficult to show that we still have $f_{A \circ B} = \widehat{f}_A \circ f_B$ on the domain of $f_B$ when the length of $B$ is 1.
However, the critical property $\text{size}(A \circ B) = \text{size}(B)$ no longer holds in this case. As we will see later, without this property, $f_{A \circ B} = \widehat{f}_A \circ f_B$ can fail when $B$ is multi-modal, i.e., when the length of $B$ is greater than 1.
Definition 2.16 Interval of Definition
In the situation of Definition 2.12, where layout $B$ is of length 1, let $f_B: [0, N) \to \mathbb{N}$ be the layout function, and let $I = [r, r(N - 1)]$ be the interval given by the convex closure of the image $f_B([1, N))$. Let $M^{\prime} = M_{0} \cdot M_{1} \cdot \ldots \cdot M_{\alpha - 1}$ and $J = I \cap [1, M^{\prime})$ (so $J = \emptyset$ if $\alpha = 0$). Then the interval of definition for $\{\mathbf{S}, B\}$ is $J$.
Definition 2.17 Composition - General Case
Let $\mathbf{S} = (M_{0}, M_{1}, \ldots, M_{\alpha})$ be a shape tuple, let $B = (N_{0}, N_{1}, \ldots, N_{\beta}) : (r_{0}, r_{1}, \ldots, r_{\beta})$ be a layout, and let $B_{k} = (N_{k}) : (r_{k})$ for $0 \leq k \leq \beta$. Then we say that the pair $\{\mathbf{S}, B\}$ is admissible for composition if: (1) each pair $\{\mathbf{S}, B_{k}\}$ is admissible for composition in the sense of Definition 2.12; and (2) the intervals of definition (Definition 2.16) of the pairs $\{\mathbf{S}, B_{k}\}_{0 \leq k \leq \beta}$ are pairwise disjoint.
In this case, if $\mathbf{D} = (d_{0}, d_{1}, \ldots, d_{\alpha})$ is a stride tuple and $A = \mathbf{S} : \mathbf{D}$, then we define the composition $A \circ B$ to be the concatenated layout
$$\begin{align}A \circ B := \left(A \circ B_{0}, A \circ B_{1}, \ldots, A \circ B_{\beta}\right)\end{align}$$
where each $A \circ B_{k}$ is defined as in Definition 2.13.
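As a hand-worked instance of Definition 2.17 (again our own illustrative example): take $A = (4, 3):(3, 1)$ and $B = (2, 3):(1, 4)$. Mode by mode, Definition 2.13 gives $A \circ B_{0} = (2):(3)$ and $A \circ B_{1} = (3):(1)$, so the concatenation is $A \circ B = (2, 3):(3, 1)$. The standalone C++ check below confirms $f_{A \circ B} = \widehat{f}_A \circ f_B$ on the whole domain of $B$.

#include <cassert>
#include <cstdio>

int main()
{
    // Hand-worked instance of Definition 2.17 (chosen for illustration):
    // A = (4, 3):(3, 1), B = (2, 3):(1, 4).
    // Per-mode (Definition 2.13): A o (2):(1) = (2):(3) and A o (3):(4) = (3):(1),
    // so the concatenation gives A o B = (2, 3):(3, 1).
    for (int x = 0; x < 6; ++x)
    {
        int const x0 = x % 2;
        int const x1 = x / 2;
        int const f_B = x0 * 1 + x1 * 4;                           // B = (2, 3):(1, 4)
        int const f_A_of_f_B = (f_B % 4) * 3 + (f_B / 4 % 3) * 1;  // A = (4, 3):(3, 1)
        int const f_AoB = x0 * 3 + x1 * 1;                         // A o B = (2, 3):(3, 1)
        assert(f_A_of_f_B == f_AoB);
        std::printf("x = %d, f_A(f_B(x)) = %d, f_{A o B}(x) = %d\n", x, f_A_of_f_B, f_AoB);
    }
    return 0;
}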
Theorem 2.18 Composition - General Case
In the situation of Definition 2.17, we have that $f_{A \circ B} = \widehat{f}_A \circ f_B$.
Proof
By Definition 2.13, $\text{size}(A \circ B_{k}) = \text{size}(B_{k}) = N_{k}$ for all $0 \leq k \leq \beta$. We have the following isomorphism for both the layout $A \circ B$ and the layout $B$.
$$\begin{align}\iota: [0, N_{0} \cdot N_{1} \cdot \ldots \cdot N_{\beta}) \cong [0, N_{0}) \times [0, N_{1}) \times \ldots \times [0, N_{\beta})\end{align}$$
Given any $x \in [0, N_{0} \cdot N_{1} \cdot \ldots \cdot N_{\beta})$, because of the isomorphism $\iota$, we have
$$\begin{align}x &\mapsto \left(x_{0}, x_{1}, \ldots, x_{\beta}\right)\end{align}$$
By Lemma 2.19, we have
$$\begin{align}\widehat{f}_A \circ f_B(x) &= \widehat{f}_A \left(f_B(x)\right) \\&= \widehat{f}_A \left(f_{B_{0}}(x_{0}) + f_{B_{1}}(x_{1}) + \ldots + f_{B_{\beta}}(x_{\beta})\right) \\\end{align}$$
By Definition 2.17, Lemma 2.19, and Definition 2.13, we have
$$\begin{align}f_{A \circ B}(x) &= f_{A \circ B_{0}}(x_{0}) + f_{A \circ B_{1}}(x_{1}) + \ldots + f_{A \circ B_{\beta}}(x_{\beta}) \\&= \widehat{f}_A \circ f_{B_{0}}(x_{0}) + \widehat{f}_A \circ f_{B_{1}}(x_{1}) + \ldots + \widehat{f}_A \circ f_{B_{\beta}}(x_{\beta}) \\&= \widehat{f}_A \left(f_{B_{0}}(x_{0})\right) + \widehat{f}_A \left(f_{B_{1}}(x_{1})\right) + \ldots + \widehat{f}_A \left(f_{B_{\beta}}(x_{\beta})\right) \\\end{align}$$
Normally, we don’t have $\widehat{f}_A \left(x_{A, 0} + x_{A, 1} + \ldots + x_{A, \beta}\right) = \widehat{f}_A \left(x_{A, 0}\right) + \widehat{f}_A \left(x_{A, 1}\right) + \ldots + \widehat{f}_A \left(x_{A, \beta}\right)$, because the layout function $\widehat{f}_A$ is not linear. For example, suppose $A = (2, 3) : (1, 4)$, and we have $\widehat{f}_A(1) = 1$ and $\widehat{f}_A(3) = 5$. $\widehat{f}_A(3) = \widehat{f}_A(1 + 1 +1) \neq \widehat{f}_A(1) + \widehat{f}_A(1) + \widehat{f}_A(1)$.
However, there are special cases where the above equation does hold. For example, suppose for simplicity that $\beta = \alpha$ and that
$$\begin{align}x_{A, 0} &\in [0, M_{0}) \\x_{A, 1} &\in \{0, M_{0}, 2 M_{0}, \ldots\} \cap [0, M_{0} M_{1}) \\x_{A, 2} &\in \{0, M_{0} M_{1}, 2 M_{0} M_{1}, \ldots\} \cap [0, M_{0} M_{1} M_{2}) \\&\vdots \\x_{A, \alpha} &\in \{0, M_{0} M_{1} \cdots M_{\alpha - 1}, 2 M_{0} M_{1} \cdots M_{\alpha - 1}, \ldots\} \cap [0, M_{0} M_{1} \cdots M_{\alpha}) \\\end{align}$$
By definition,
$$\begin{align}\widehat{f}_A \left(x\right)&= \left( x \mod M_{0} \right) \cdot d_{0} + \left( \left\lfloor \frac{x}{M_{0}} \right\rfloor \mod M_{1} \right) \cdot d_{1} + \ldots + \left( \left\lfloor \frac{x}{M_{0} \cdot M_{1} \cdot \ldots \cdot M_{\alpha - 1}} \right\rfloor \mod M_{\alpha} \right) \cdot d_{\alpha} \\\end{align}$$
So in our case, we have
$$\begin{align}\widehat{f}_A \left(x_{A, 0} + x_{A, 1} + \ldots + x_{A, \beta}\right)&= \left( \left( x_{A, 0} + x_{A, 1} + \ldots + x_{A, \beta} \right) \mod M_{0} \right) \cdot d_{0} + \left( \left\lfloor \frac{x_{A, 0} + x_{A, 1} + \ldots + x_{A, \beta}}{M_{0}} \right\rfloor \mod M_{1} \right) \cdot d_{1} + \ldots + \left( \left\lfloor \frac{x_{A, 0} + x_{A, 1} + \ldots + x_{A, \beta}}{M_{0} \cdot M_{1} \cdot \ldots \cdot M_{\alpha - 1}} \right\rfloor \mod M_{\alpha} \right) \cdot d_{\alpha} \\&= \left( x_{A, 0} \mod M_{0} \right) \cdot d_{0} + \left( \left\lfloor \frac{x_{A, 1}}{M_{0}} \right\rfloor \mod M_{1} \right) \cdot d_{1} + \ldots + \left( \left\lfloor \frac{x_{A, \beta}}{M_{0} \cdot M_{1} \cdot \ldots \cdot M_{\alpha - 1}} \right\rfloor \mod M_{\alpha} \right) \cdot d_{\alpha} \\&= \widehat{f}_A \left(x_{A, 0}\right) + \widehat{f}_A \left(x_{A, 1}\right) + \ldots + \widehat{f}_A \left(x_{A, \beta}\right) \\\end{align}$$
The idea behind the second condition for admission for composition, i.e., that the intervals of definition for the pairs $\{\mathbf{S}, B_{k}\}_{0 \leq k \leq \beta}$ are disjoint, is exactly the same.
Because $r_{k} = M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i_{k} - 1} \cdot c_{k}$, for $x_{k} \in [0, N_{k})$, we have
$$\begin{align}f_{B_{k}}(x_{k}) &\in \{0, r_{k}, 2 r_{k}, \ldots, (N_{k} - 1) r_{k}\} \\&= \{0, M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i_{k} - 1} \cdot c_{k}, 2 \cdot M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i_{k} - 1} \cdot c_{k}, \ldots, (N_{k} - 1) \cdot M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i_{k} - 1} \cdot c_{k}\} \\\end{align}$$
Because of the isomorphism of the layout $A$, we have
$$\begin{align}f_{B_{k}}(x_{k}) &\mapsto \left(x_{A, 0}, x_{A, 1}, \ldots, x_{A, \alpha}\right) \\\end{align}$$
where
$$\begin{align}x_{A, i} = \left\lfloor \frac{f_{B_{k}}(x_{k})}{M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i - 1}} \right\rfloor \mod M_{i}\end{align}$$
Then there must be integers $p_{k} \leq q_{k}$ such that $x_{A, i} = 0$ for $i < p_{k}$ and for $i > q_{k}$.
The second condition for admission for composition ensures that the intervals $[p_{k}, q_{k}]$ are pairwise disjoint over $0 \leq k \leq \beta$. Therefore, we have:
$$\begin{align}\widehat{f}_A \left(x_{A, 0} + x_{A, 1} + \ldots + x_{A, \beta}\right) = \widehat{f}_A \left(x_{A, 0}\right) + \widehat{f}_A \left(x_{A, 1}\right) + \ldots + \widehat{f}_A \left(x_{A, \beta}\right)\end{align}$$
This concludes the proof. $\square$
Let us now return to the question of why the second condition for admission for composition in the restricted case (see the discussion after Proposition 2.14) is necessary.
Without it, the critical property $\text{size}(A \circ B) = \text{size}(B)$ no longer holds, so the layouts $A \circ B$ and $B$ would come with two different isomorphisms, and the mode-wise argument above would break down.
Lemma 2.19 Concatenation of Layouts
Let $C = (C_{0}, C_{1}, \ldots, C_{\gamma})$ be a concatenated layout. Let
$$\begin{align}\iota: [0, \text{size}(C)) \cong [0, \text{size}(C_{0})) \times \cdots \times [0, \text{size}(C_{\gamma}))\end{align}$$
be the usual isomorphism (as in Definition 2.3). Then the following diagram commutes:
$$\begin{equation}\begin{CD}[0, \text{size}(C)) @>{\iota}>{\cong}> [0, \text{size}(C_0)) \times \dots \times [0, \text{size}(C_\gamma)) \\@V{f_C}VV @VV{(f_{C_0}, \dots, f_{C_\gamma})}V \\\mathbb{N} @<+<< \mathbb{N} \times \dots \times \mathbb{N}\end{CD}\end{equation}$$
Proof
If $C_{0}, \ldots, C_{\gamma}$ are all length 1 layouts, then this is immediate from Definition 2.3.
Concretely, suppose $C_{k} = (M_{k}) : (d_{k})$ for all $0 \leq k \leq \gamma$. The concatenated layout becomes
$$\begin{align}C &= (C_{0}, C_{1}, \ldots, C_{\gamma}) \\&= (M_{0}:d_{0}, M_{1}:d_{1}, \ldots, M_{\gamma}:d_{\gamma}) \\&= (M_{0}, M_{1}, \ldots, M_{\gamma}) : (d_{0}, d_{1}, \ldots, d_{\gamma})\end{align}$$
Because of the isomorphism of the layout $C$, we have
$$\begin{align}x &\mapsto \left(x_{0}, x_{1}, \ldots, x_{\gamma}\right) \\\end{align}$$
Then by definition, the concatenated layout function is
$$\begin{align}f_C(x) &= x_{0} \cdot d_{0} + x_{1} \cdot d_{1} + \ldots + x_{\gamma} \cdot d_{\gamma} \\\end{align}$$
For each of the length 1 layouts $C_{k}$, by definition, the layout function is
$$\begin{align}f_{C_{k}}(x_{k}) &= x_{k} \cdot d_{k} \\\end{align}$$
Therefore, we have
$$\begin{align}f_C(x) &= f_{C_{0}}(x_{0}) + f_{C_{1}}(x_{1}) + \ldots + f_{C_{\gamma}}(x_{\gamma}) \\\end{align}$$
In the case where some of the layouts $C_{k}$ are not length 1, we can apply the same argument to each of the sublayouts $C_{k}$, and the result follows by induction.
Concretely, suppose $C_{k}$ are not length 1 and $C_{k} = (C_{k, 0}, C_{k, 1}, \ldots, C_{k, \gamma_{k}})$, where $C_{k, 0}, \ldots, C_{k, \gamma_{k}}$ are length 1 layouts. Based on what we have proved above, we have
$$\begin{align}f_{C_{k}}(x_{k}) &= f_{C_{k, 0}}(x_{k, 0}) + f_{C_{k, 1}}(x_{k, 1}) + \ldots + f_{C_{k, \gamma_{k}}}(x_{k, \gamma_{k}}) \\\end{align}$$
where
$$\begin{align}x_{k} &\mapsto \left(x_{k, 0}, x_{k, 1}, \ldots, x_{k, \gamma_{k}}\right) \\\end{align}$$
Suppose the layout $C$ can be maximally decomposed into layouts of length 1.
$$\begin{align}C &= (C_{0}, C_{1}, \ldots, C_{\gamma}) \\&= (C_{0, 0}, C_{0, 1}, \ldots, C_{0, \gamma_{0}}, C_{1, 0}, C_{1, 1}, \ldots, C_{1, \gamma_{1}}, \ldots, C_{\gamma, 0}, C_{\gamma, 1}, \ldots, C_{\gamma, \gamma_{\gamma}}) \\\end{align}$$
Then we have
$$\begin{align}f_C(x) &= f_{C_{0,0}}(x_{0,0}) + f_{C_{0,1}}(x_{0,1}) + \ldots + f_{C_{0,\gamma_{0}}}(x_{0,\gamma_{0}}) + f_{C_{1,0}}(x_{1,0}) + f_{C_{1,1}}(x_{1,1}) + \ldots + f_{C_{1,\gamma_{1}}}(x_{1,\gamma_{1}}) + \ldots + f_{C_{\gamma,0}}(x_{\gamma,0}) + f_{C_{\gamma,1}}(x_{\gamma,1}) + \ldots + f_{C_{\gamma,\gamma_{\gamma}}}(x_{\gamma,\gamma_{\gamma}}) \\&= f_{C_{0}}(x_{0}) + f_{C_{1}}(x_{1}) + \ldots + f_{C_{\gamma}}(x_{\gamma}) \\\end{align}$$
where
$$\begin{align}x &\mapsto \left(x_{0}, x_{1}, \ldots, x_{\gamma}\right) \\\end{align}$$
This concludes the proof. $\square$
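Lemma 2.19 is also easy to check numerically. The standalone C++ sketch below uses a hand-picked concatenation $C = (C_{0}, C_{1})$ with $C_{0} = (2):(1)$ and $C_{1} = (3, 2):(2, 6)$ and verifies that $f_C(x) = f_{C_{0}}(x_{0}) + f_{C_{1}}(x_{1})$.

#include <cassert>
#include <cstdio>

int main()
{
    // Hand-picked concatenation: C = (C_0, C_1) with C_0 = (2):(1) and
    // C_1 = (3, 2):(2, 6), i.e., C = (2, 3, 2):(1, 2, 6).
    for (int x = 0; x < 12; ++x)
    {
        int const x0 = x % 2;  // coordinate for C_0
        int const x1 = x / 2;  // coordinate for C_1
        int const f_C0 = x0 * 1;
        int const f_C1 = (x1 % 3) * 2 + (x1 / 3) * 6;
        int const f_C = (x % 2) * 1 + (x / 2 % 3) * 2 + (x / 6) * 6;
        assert(f_C == f_C0 + f_C1);
    }
    std::printf("f_C(x) = f_C0(x_0) + f_C1(x_1) for all x\n");
    return 0;
}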
Definition 2.21 CUTLASS Admission for Composition - Restricted Case
The CUTLASS admission for composition in the restricted case is more restrictive.
Let $\mathbf{S} = (M_{0}, M_{1}, \ldots, M_{\alpha})$ be a shape tuple, let $M = M_{0} \cdot M_{1} \cdot \ldots \cdot M_{\alpha}$, and let $B = (N):(r)$ be a layout of length 1. Then we say that the pair $\{\mathbf{S}, B\}$ is admissible for composition (or simply admissible) if: (1) $M$ is left divisible by $r$; and (2) writing $\widehat{M} = r \cdot \widehat{M}^{\prime}$ as in Definition 2.11, $\widehat{M}^{\prime}$ is left divisible by $N$.
Note that the second condition requires left divisibility, instead of the weak left divisibility in Definition 2.12.
For example, suppose $A = (8, 6, 8) : (1, 16, 108)$ and $B = (8) : (4)$. According to Definition 2.12, $A \circ B = (2, 4) : (4, 16)$. However, if we run composition for $A$ and $B$ in CUTLASS, we will encounter an error because CUTLASS requires left divisibility for the second condition.
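Independently of CUTLASS, we can verify numerically that $(2, 4):(4, 16)$ really does agree with $\widehat{f}_A \circ f_B$ on the domain of $B$ for this example; the standalone C++ check below does exactly that.

#include <cassert>
#include <cstdio>

int main()
{
    // Numeric check of the example above: A = (8, 6, 8):(1, 16, 108), B = (8):(4),
    // and, per Definition 2.12/2.13, A o B = (2, 4):(4, 16).
    for (int k = 0; k < 8; ++k)
    {
        int const f_B = k * 4;
        int const f_A = (f_B % 8) * 1 + (f_B / 8 % 6) * 16 + (f_B / 48 % 8) * 108;
        int const f_AoB = (k % 2) * 4 + (k / 2 % 4) * 16;
        assert(f_A == f_AoB);
        std::printf("k = %d, f_A(f_B(k)) = %d, f_{A o B}(k) = %d\n", k, f_A, f_AoB);
    }
    return 0;
}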
More specifically, in the CUTLASS composition layout algebra implementation, we have
void shape_div(int* shapeA, int N, int& strideB)
{
    for (int i = 0; i < N; ++i)
    {
        // Each mode of the shape must either be evenly divided by the stride
        // or evenly divide it; otherwise the composition is rejected.
        assert(shapeA[i] % strideB == 0 or strideB % shapeA[i] == 0);
        int new_shape = ceil_div(shapeA[i], strideB);
        int new_stride = ceil_div(strideB, shapeA[i]);
        shapeA[i] = new_shape;
        strideB = new_stride;
    }
}
void shape_mod(int* shapeA, int N, int& shapeB)
{
    for (int i = 0; i < N; ++i)
    {
        // Each mode of the shape must either evenly divide the remaining size
        // or be evenly divided by it; otherwise the composition is rejected.
        assert(shapeA[i] % shapeB == 0 or shapeB % shapeA[i] == 0);
        int new_shapeA = min(shapeA[i], shapeB);
        int new_shapeB = ceil_div(shapeB, shapeA[i]);
        shapeA[i] = new_shapeA;
        shapeB = new_shapeB;
    }
}
The reason CUTLASS enforces this is the logical division operation: without this restriction, logical division would not be defined in some cases.
Logical Division
Definition 2.22 Logical Division
Let $A = \mathbf{S} : \mathbf{D}$ and $B$ be layouts, and let $M$ be the size of $A$. Suppose that the pairs $\{B, M\}$ and $\{\mathbf{S}, B\}$ are admissible (for complementation and composition, respectively). Then we define the logical division $A / B$ to be the layout
$$\begin{align}A / B := A \circ \left(B, \text{complement}(B, M)\right)\end{align}$$
Note that here the conditions of admission for composition follow Definition 2.21 rather than Definition 2.12.
Lemma 2.23 is implicitly used in Definition 2.22.
Lemma 2.23 Logical Division Implication
Suppose $A = \mathbf{S} : \mathbf{D}$, $M = \text{size}(A)$, and $B$ are as in Definition 2.22. Then $\{\mathbf{S}, \left(B, \text{complement}(B, M)\right)\}$ is admissible for composition.
Proof
We denote $A = \mathbf{S} : \mathbf{D} = (M_{0}, M_{1}, \ldots, M_{\alpha}) : (d_{0}, d_{1}, \ldots, d_{\alpha})$, and $B = (N_{0}, N_{1}, \ldots, N_{\beta}) : (r_{0}, r_{1}, \ldots, r_{\beta})$. Let
$$\begin{align}\varphi: [0, \beta] \xrightarrow{\cong} [0, \beta]\end{align}$$
be the automorphism such that $B^{\varphi} := (N_{\varphi(0)}, N_{\varphi(1)}, \ldots, N_{\varphi(\beta)}) : (r_{\varphi(0)}, r_{\varphi(1)}, \ldots, r_{\varphi(\beta)})$ is sorted.
Then by Definition 2.6, we have
$$\begin{align}B^{\prime}&= \text{complement}(B, M) \\&= \left(r_{\varphi(0)}, \frac{r_{\varphi(1)}}{N_{\varphi(0)}r_{\varphi(0)}}, \frac{r_{\varphi(2)}}{N_{\varphi(1)}r_{\varphi(1)}}, \ldots, \frac{r_{\varphi(\beta)}}{N_{\varphi(\beta - 1)}r_{\varphi(\beta - 1)}}, \frac{M}{N_{\varphi(\beta)}r_{\varphi(\beta)}}\right) : (1, N_{\varphi(0)}r_{\varphi(0)}, N_{\varphi(1)}r_{\varphi(1)}, \ldots, N_{\varphi(\beta - 1)}r_{\varphi(\beta - 1)}, N_{\varphi(\beta)}r_{\varphi(\beta)})\end{align}$$
Now we denote each mode of $B^{\prime}$ as
$$\begin{align}B^{\prime}_{k}&=\begin{cases}\left(r_{\varphi(0)}\right) : (1) & \text{if } k = 0 \\\left(\frac{r_{\varphi(k)}}{N_{\varphi(k - 1)}r_{\varphi(k - 1)}}\right) : (N_{\varphi(k - 1)}r_{\varphi(k - 1)}) & \text{if } 1 \leq k \leq \beta \\\left(\frac{M}{N_{\varphi(\beta)}r_{\varphi(\beta)}}\right) : (N_{\varphi(\beta)}r_{\varphi(\beta)}) & \text{if } k = \beta + 1\end{cases}\end{align}$$
for $k \in [0, \beta + 1]$.
Because the pair $\{\mathbf{S}, B\}$ is admissible for composition, for each mode in $B$, $B_{k} = (N_{k}) : (r_{k})$ for $k \in [0, \beta]$, by Definition 2.17, the pair $\{\mathbf{S}, B_{k}\}$ is admissible for composition. Therefore, by Definition 2.21, $M$ is left divisible by $r_{k}$, and the corresponding quotient $\widehat{M}^{\prime}$ (from $\widehat{M} = r_{k} \cdot \widehat{M}^{\prime}$) is left divisible (not merely weakly left divisible) by $N_{k}$ for all $k \in [0, \beta]$.
It is trivial to see $M$ is left divisible by $1$. Let’s see if $M$ is also left divisible by $N_{\varphi(k - 1)}r_{\varphi(k - 1)}$ for all $k \in [1, \beta + 1]$.
Suppose $\varphi(k - 1) = h$. Because $M$ is left divisible by $r_{h}$, we have
$$\begin{align}r_{\varphi(k - 1)} &= r_{h} \\&= M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i_{k - 1} - 1} \cdot c_{k - 1}\end{align}$$
where $c_{k - 1}$ divides $M_{i_{k - 1}}$.
$$\begin{align}N_{\varphi(k - 1)} &= N_{h} \\&= \frac{M_{i_{k - 1}}}{c_{k - 1}} \cdot M_{i_{k - 1} + 1} \cdot \ldots \cdot M_{j_{k - 1} - 1} \cdot c_{k - 1}^{\prime}\end{align}$$
where $c_{k - 1}^{\prime}$ divides $M_{j_{k - 1}}$.
Thus, $M$ is also left divisible by $N_{\varphi(k - 1)}r_{\varphi(k - 1)}$, because
$$\begin{align}N_{\varphi(k - 1)}r_{\varphi(k - 1)} &= M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i_{k - 1} - 1} \cdot c_{k - 1} \cdot \frac{M_{i_{k - 1}}}{c_{k - 1}} \cdot M_{i_{k - 1} + 1} \cdot \ldots \cdot M_{j_{k - 1} - 1} \cdot c_{k - 1}^{\prime} \\&= M_{0} \cdot M_{1} \cdot \ldots \cdot M_{j_{k - 1} - 1} \cdot c_{k - 1}^{\prime}\end{align}$$
where $c_{k - 1}^{\prime}$ divides $M_{j_{k - 1}}$.
Next, we will have to show $M$ is left divisible by $\frac{r_{\varphi(k)}}{N_{\varphi(k - 1)}r_{\varphi(k - 1)}}$ for all $k \in [1, \beta]$.
$$\begin{align}r_{\varphi(k)} &= M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i_{k} - 1} \cdot c_{k} \\\end{align}$$
Because $r_{\varphi(k)} \geq N_{\varphi(k - 1)}r_{\varphi(k - 1)}$, we must have $i_{k} \geq j_{k - 1}$. Thus,
$$\begin{align}\frac{r_{\varphi(k)}}{N_{\varphi(k - 1)}r_{\varphi(k - 1)}}&= \frac{M_{0} \cdot M_{1} \cdot \ldots \cdot M_{i_{k} - 1} \cdot c_{k}}{M_{0} \cdot M_{1} \cdot \ldots \cdot M_{j_{k - 1} - 1} \cdot c_{k - 1}^{\prime}} \\&= \frac{M_{j_{k - 1}} \cdot M_{j_{k - 1} + 1} \cdot \ldots \cdot M_{i_{k} - 1} \cdot c_{k}}{c_{k - 1}^{\prime}} \\&= \frac{M_{j_{k - 1}}}{c_{k - 1}^{\prime}} \cdot M_{j_{k - 1} + 1} \cdot \ldots \cdot M_{i_{k} - 1} \cdot c_{k} \\\end{align}$$
Thus, $M$ is left divisible by $\frac{r_{\varphi(k)}}{N_{\varphi(k - 1)}r_{\varphi(k - 1)}}$ for all $k \in [1, \beta]$.
It is trivial to see $M$ is left divisible by $\frac{M}{N_{\varphi(\beta)}r_{\varphi(\beta)}}$.
Therefore, the pair $\{\mathbf{S}, B^{\prime}_{k}\}$ is admissible for composition for all $k \in [0, \beta + 1]$.
By Definition 2.17, in order to show that $\{\mathbf{S}, (B, \text{complement}(B, M))\}$ is admissible for composition, we also have to show that the intervals of definition for the pairs $\{\mathbf{S}, B_{k}\}_{0 \leq k \leq \beta}$ and $\{\mathbf{S}, B^{\prime}_{k}\}_{0 \leq k \leq \beta + 1}$ are pairwise disjoint.
By Proposition 2.7, the concatenated layout $(B, \text{complement}(B, M))$ automatically satisfies this disjointness requirement.
Therefore, $\{\mathbf{S}, (B, \text{complement}(B, M))\}$ is admissible for composition.
This concludes the proof. $\square$
Note that in Definition 2.22, if the conditions of admission for composition followed Definition 2.12, the proof above would not be valid. That is why CUTLASS enforces the conditions of admission for composition following Definition 2.21.
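For completeness, here is a minimal CuTe usage sketch of logical division (assuming the CUTLASS headers are available on the include path; the layouts are hand-picked for illustration and the snippet is a sketch rather than a verified build). The call logical_divide(A, B) computes $A \circ (B, \text{complement}(B, \text{size}(A)))$ as in Definition 2.22, provided the admissibility conditions of Definition 2.21 hold.

#include <cstdio>
#include <cute/tensor.hpp>

int main()
{
    using namespace cute;
    // A = (4, 6):(1, 4) with size M = 24, and the tiler B = (2):(1).
    auto const A = make_layout(make_shape(Int<4>{}, Int<6>{}),
                               make_stride(Int<1>{}, Int<4>{}));
    auto const B = make_layout(make_shape(Int<2>{}), make_stride(Int<1>{}));
    // Logical division per Definition 2.22: A o (B, complement(B, size(A))).
    auto const C = logical_divide(A, B);
    print(C);
    std::printf("\n");
    return 0;
}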
Permutation Expressible As Layout Functions
This section explains how to retrieve all permutations that are expressible as layout functions in a structured way. The basic language of category theory is used to describe the process.
Definition 3.1 Ordered Factorization
We define the set $\text{ob}(\textbf{Fact})$ of ordered factorizations to consist of all expressions $[p_1 \ldots p_k]$ where $k \geq 0$ and the $p_{i}$ are primes (not necessarily distinct). The case $k = 0$ corresponds to the empty factorization, which we denote as $[\ ]$.
For example, the set $\text{ob}(\textbf{Fact})$ includes expressions such as $[\ ]$, $[2]$, $[3]$, $[22]$, $[23]$, $[32]$, $[232]$, etc.
Notation 3.3
Let $\underline{k}$ denote the set $\{1, 2, \ldots, k\}$ consisting of $k$ elements. (If $k = 0$, then $\underline{0} = \emptyset$ is the empty set.)
Definition 3.4 Category of Ordered Factorizations
We define the category $\textbf{Fact}$ of ordered factorizations as follows. Its objects are the ordered factorizations of Definition 3.1. Given an object $E = [p_1 \ldots p_k]$ of length $k$ and a morphism of finite sets $\alpha: \underline{n} \to \underline{k}$, we declare a morphism
$$\begin{align}E^{\alpha} = [p_{\alpha(1)} \ldots p_{\alpha(n)}] \xrightarrow{\alpha_{E}} E = [p_1 \ldots p_k]\end{align}$$
in $\textbf{Fact}$. This defines the set of all morphisms with codomain $E$, and ranging over all $E$ thus defines the set of all morphisms in $\textbf{Fact}$.
To define composition, suppose we are additionally given a morphism of finite sets $\beta: \underline{m} \to \underline{n}$, and abbreviate
$$\begin{align}E^{\alpha} = [p_{\alpha(1)} \ldots p_{\alpha(n)}] = [q_1 \ldots q_n]\end{align}$$
Let $\gamma = \alpha \circ \beta: \underline{m} \to \underline{k}$. Then the composition of morphisms
$$\begin{align}\alpha_{E}: E^{\alpha} = [p_{\alpha(1)} \ldots p_{\alpha(n)}] \xrightarrow{} E = [p_1 \ldots p_k]\end{align}$$
$$\begin{align}\beta_{E^{\alpha}}: E^{\beta} = [q_{\beta(1)} \ldots q_{\beta(m)}] \xrightarrow{} E^{\alpha} = [q_1 \ldots q_n]\end{align}$$
is given by $\gamma_{E}: E^{\gamma} \xrightarrow{} E$, where we used that $[q_{\beta(1)} \ldots q_{\beta(m)}] = [p_{\gamma(1)} \ldots p_{\gamma(m)}]$.
It’s easy to check that the composition of morphisms in $\textbf{Fact}$ is associative and has identities, which are the two axioms that composition in a category must satisfy, so Definition 3.4 really does define a category.
To see why the composition of morphisms is associative, suppose we have morphisms of finite sets $\alpha: \underline{n} \to \underline{k}$, $\beta: \underline{m} \to \underline{n}$, and $\gamma: \underline{l} \to \underline{m}$. Then we have
$$\begin{align}\alpha \circ (\beta \circ \gamma) = (\alpha \circ \beta) \circ \gamma\end{align}$$
To see why the composition of morphisms has identities, suppose for every $n$ we have a morphism of finite sets $\text{id}_{\underline{n}}: \underline{n} \to \underline{n}$, such that $\text{id}_{\underline{n}}(i) = i$ for all $i \in \underline{n}$. Then we have
$$\begin{align}E^{\text{id}_{\underline{n}}} = [p_{\text{id}_{\underline{n}}(1)} \ldots p_{\text{id}_{\underline{n}}(n)}] \xrightarrow{} E = [p_{1} \ldots p_{n}]\end{align}$$
For every morphism $\alpha: \underline{n} \to \underline{k}$, we have
$$\begin{align}\alpha \circ \text{id}_{\underline{n}} = \text{id}_{\underline{k}} \circ \alpha = \alpha\end{align}$$
Therefore, $\text{id}_{\underline{n}}$ is the identity morphism for $\underline{n}$.
Notation 3.5
Let $\Sigma_{k}$ denote the symmetric group on $k$ letters. Given an element $\varphi \in \Sigma_{k}$, we also denote the associated automorphism of $\underline{k}$ by $\varphi$.
In mathematics, a group is a set with an operation that associates an element of the set to every pair of elements of the set (as does every binary operation) and satisfies the following constraints: the operation is associative, it has an identity element, and every element of the set has an inverse element.
In this sense, the symmetric group is a set of all permutations of a set of $k$ elements with an operation of composition of permutations (applying one permutation after another).
Suppose $E = [222]$. Then every permutation $\varphi \in \Sigma_{3}$ defines an automorphism $E^{\varphi} = E \xrightarrow{} E$ in $\textbf{Fact}$.
Suppose $E = [232]$. Then the transposition $\sigma = (13) \in \Sigma_{3}$ defines an automorphism $E^{\sigma} = E \xrightarrow{} E$ in $\textbf{Fact}$. On the other hand, the transposition $\tau = (12) \in \Sigma_{3}$ defines a morphism $E^{\tau} = [322] \xrightarrow{} E = [232]$ in $\textbf{Fact}$.
Remark 3.7
Let $\textbf{FinSet}$ denote the category of finite sets (or rather a skeleton, with objects given by the sets $\underline{n}$ for $n \geq 0$). Given an object $\underline{k} \in \textbf{FinSet}$, let $\textbf{FinSet}^{/\underline{k}}$ denote the overcategory, whose objects are morphisms $[\alpha: \underline{n} \to \underline{k}]$ and whose morphisms are commuting triangles. Recall that this category has a final object given by the identity morphism $[\text{id}_{\underline{k}}]$.
Then for every expression $E = [p_1 \ldots p_k]$ of length $k$, we have a functor
$$\begin{align}F_{E}: \textbf{FinSet}^{/\underline{k}} \to \textbf{Fact}\end{align}$$
that sends the object $[\alpha: \underline{n} \to \underline{k}]$ to the expression $E^{\alpha}$ and the unique morphism $[\alpha] \xrightarrow{} [\text{id}_{\underline{k}}]$ to $\alpha_{E}: E^{\alpha} \xrightarrow{} E$. This functor has every morphism in $\textbf{Fact}$ with codomain $E$ in its image.
Suppose we have an object $[\alpha: \underline{n} \to \underline{k}]$ and another object $[\beta: \underline{m} \to \underline{k}]$ in $\textbf{FinSet}^{/\underline{k}}$. A morphism of the overcategory from $[\beta: \underline{m} \to \underline{k}]$ to $[\alpha: \underline{n} \to \underline{k}]$ is a commuting triangle, i.e., a morphism $\gamma: \underline{m} \to \underline{n}$ such that $\alpha \circ \gamma = \beta$.
The identity morphism $[\text{id}_{\underline{k}}]$ is the final object of $\textbf{FinSet}^{/\underline{k}}$ because every object of $\textbf{FinSet}^{/\underline{k}}$ has a unique morphism to $[\text{id}_{\underline{k}}]$.
To see this, given an object $[\alpha: \underline{n} \to \underline{k}]$, a morphism to $[\text{id}_{\underline{k}}]$ is a map $\gamma: \underline{n} \to \underline{k}$ making the triangle commute, i.e., $\text{id}_{\underline{k}} \circ \gamma = \alpha$. This forces $\gamma = \alpha$, so such a morphism exists and is unique. Therefore, $[\text{id}_{\underline{k}}]$ is indeed the final object.
By definition, let $C$ and $D$ be categories. A functor $F$ from $C$ to $D$ is a mapping that sends each object $X$ of $C$ to an object $F(X)$ of $D$, sends each morphism $f: X \to Y$ of $C$ to a morphism $F(f): F(X) \to F(Y)$ of $D$, and preserves identities and composition, i.e., $F(\text{id}_{X}) = \text{id}_{F(X)}$ and $F(g \circ f) = F(g) \circ F(f)$.
In the category $\textbf{FinSet}^{/\underline{k}}$, we have $X = [\alpha: \underline{n} \to \underline{k}]$ and $Y = [\gamma: \underline{m} \to \underline{k}]$, $Z = [\text{id}_{\underline{k}}]$, and the morphisms are commuting triangles $f = [\alpha] \xrightarrow{} [\gamma]$, $g = [\gamma] \xrightarrow{} [\text{id}_{\underline{k}}]$, and $g \circ f = [\alpha] \xrightarrow{} [\text{id}_{\underline{k}}]$.
In the category $\textbf{Fact}$, by the functor $F_{E}$, we have $F_{E}(X) = E^{\alpha} = [p_{\alpha(1)} \ldots p_{\alpha(n)}]$, $F_{E}(Y) = E^{\gamma} = [p_{\gamma(1)} \ldots p_{\gamma(m)}]$, $F_{E}(Z) = E = [p_1 \ldots p_k]$, and the morphisms are $F_{E}(f): E^{\alpha} \xrightarrow{} E^{\gamma}$, $F_{E}(g) = \gamma_{E}: E^{\gamma} \xrightarrow{} E$, and $F_{E}(g) \circ F_{E}(f) = \alpha_{E}: E^{\alpha} \xrightarrow{} E$.
Remark 3.8
In fact, we can identify $\textbf{Fact}$ itself as a certain overcategory (or rather, a full subcategory thereof). Namely, let $\mathcal{P}$ denote the infinite set of primes $\{2, 3, 5, \ldots\}$, let $\textbf{Set}$ be the category of sets, and let $\textbf{FinSet}^{/\mathcal{P}}$ be the full subcategory of $\textbf{Set}^{/\mathcal{P}}$ on those morphisms $X \xrightarrow{} \mathcal{P}$ where $X$ is a finite set. Then we have an equivalence of categories
$$\begin{align}\textbf{Fact} \simeq \textbf{FinSet}^{/\mathcal{P}}\end{align}$$
that sends an expression $E = [p_1 \ldots p_k]$ to the morphism $E_{\bullet}: \underline{k} \xrightarrow{} \mathcal{P}$ given by $i \mapsto p_i$.
Under this equivalence, the functor $F_{E}$ of Remark 3.7 identifies with the functor
$$\begin{align}\textbf{FinSet}^{/\underline{k}} \simeq \left(\textbf{FinSet}^{/\mathcal{P}}\right)^{/E_{\bullet}} \xrightarrow{} \textbf{FinSet}^{/\mathcal{P}}\end{align}$$
that forgets the map to $E_{\bullet}$.
To understand this, let’s consider the following example.
Suppose we have an object $E = [232]$ from $\textbf{Fact}$. Then we have a morphism $E_{\bullet}: \underline{3} \xrightarrow{} \mathcal{P}$ given by
$$\begin{align}i = 1 &\mapsto p_{1} = 2 \\i = 2 &\mapsto p_{2} = 3 \\i = 3 &\mapsto p_{3} = 2\end{align}$$
Every object of $\textbf{FinSet}^{/\mathcal{P}}$ arises in this way from an object of $\textbf{Fact}$, and thus we have an equivalence of categories $\textbf{Fact} \simeq \textbf{FinSet}^{/\mathcal{P}}$.
Because of the functor $F_{E}$ of Remark 3.7, with the equivalence above, we have
$$\begin{align}\textbf{FinSet}^{/\underline{k}} \to \textbf{FinSet}^{/\mathcal{P}}\end{align}$$
Definition 3.9
Suppose $E = [p_1 \ldots p_k]$ and $\alpha: \underline{n} \to \underline{k}$. We define a layout $L_{(E, \alpha)}$ as follows: its shape is $(p_{\alpha(1)}, p_{\alpha(2)}, \ldots, p_{\alpha(n)})$ and its stride is $(d_{1}, d_{2}, \ldots, d_{n})$ with $d_{i} = \prod_{j < \alpha(i)} p_{j}$, i.e., $L_{(E, \alpha)} = (p_{\alpha(1)}, p_{\alpha(2)}, \ldots, p_{\alpha(n)}) : (d_{1}, d_{2}, \ldots, d_{n})$.
We also let $f_{(E, \alpha)}$ denote the associated layout function.
Suppose $E = [23]$ and $\varphi = (12) \in \Sigma_{2}$ is the nontrivial transposition. Then $L_{(E, \varphi)} = (3, 2) : (2, 1)$.
Suppose $E = [23]$, and $\varphi: \underline{3} \to \underline{2}$ with $\varphi(1) = 2$, $\varphi(2) = 1$, $\varphi(3) = 2$. Then $L_{(E, \varphi)} = (3, 2, 3) : (2, 1, 2)$. This layout may seem strange because its layout function is not injective. However, it is still a valid layout by Definition 2.2.
Suppose $E = [222]$ and $\varphi = (231) \in \Sigma_{3}$, so $\varphi$ is a cycle of order $3$ with $\varphi(1) = 2$, $\varphi(2) = 3$, and $\varphi(3) = 1$. Then $L_{(E, \varphi)} = (2, 2, 2) : (2, 4, 1)$.
We can now see why the $p_{i}$ are prime numbers: they allow us to construct stride tuples of any kind for the layout, because any natural number can be uniquely factored into a product of primes.
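The construction in Definition 3.9 is easy to mechanize. The standalone C++ sketch below rebuilds the shape and stride tuples of $L_{(E, \alpha)}$ from an expression $E$ and a map $\alpha$ (given 1-based), reproducing two of the examples above.

#include <cstdio>
#include <vector>

// Build L_(E, alpha) from an expression E = [p_1 ... p_k] and a map
// alpha: {1, ..., n} -> {1, ..., k} (1-based, as in Definition 3.9):
// the i-th mode has shape p_{alpha(i)} and stride prod_{j < alpha(i)} p_j.
void print_layout(const std::vector<int>& E, const std::vector<int>& alpha)
{
    std::printf("(");
    for (size_t i = 0; i < alpha.size(); ++i)
    {
        std::printf("%d%s", E[alpha[i] - 1], i + 1 < alpha.size() ? ", " : "");
    }
    std::printf("):(");
    for (size_t i = 0; i < alpha.size(); ++i)
    {
        int stride = 1;
        for (int j = 1; j < alpha[i]; ++j)
        {
            stride *= E[j - 1];
        }
        std::printf("%d%s", stride, i + 1 < alpha.size() ? ", " : "");
    }
    std::printf(")\n");
}

int main()
{
    print_layout({2, 3}, {2, 1});        // E = [23],  phi = (12)  -> (3, 2):(2, 1)
    print_layout({2, 2, 2}, {2, 3, 1});  // E = [222], phi = (231) -> (2, 2, 2):(2, 4, 1)
    return 0;
}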
Remark 3.11
Let $E = [p_1 \ldots p_k]$ and $\alpha: \underline{n} \to \underline{k}$. Let $N = p_{1} \cdot p_{2} \cdot \ldots \cdot p_{k}$ and $N^{\alpha} = p_{\alpha(1)} \cdot p_{\alpha(2)} \cdot \ldots \cdot p_{\alpha(n)}$. In what follows, consider the canonical isomorphisms
$$\begin{align}[0, N) &\cong [0, p_1) \times [0, p_2) \times \ldots \times [0, p_k) \\[0, N^{\alpha}) &\cong [0, p_{\alpha(1)}) \times [0, p_{\alpha(2)}) \times \ldots \times [0, p_{\alpha(n)})\end{align}$$
Then the associated layout function $f_{(E, \alpha)}: [0, N^{\alpha}) \to [0, N) \subset \mathbb{N}$ can be described as the multilinear function
$$\begin{align}[0, p_{\alpha(1)}) \times [0, p_{\alpha(2)}) \times \ldots \times [0, p_{\alpha(n)}) \xrightarrow{} [0, p_1) \times [0, p_2) \times \ldots \times [0, p_k)\end{align}$$
that sends the basis vector $\delta_{i}$ of one vector space to the basis vector $\beta_{\alpha(i)}$ of the other for $i \in [1, n]$.
In particular, if $\alpha$ is itself a bijection, then $f_{(E, \alpha)}$ restricts to an automorphism of $[0, N)$.
Proof
Notice that $f_{(E, \alpha)}: [0, N^{\alpha}) \to [0, N) \subset \mathbb{N}$. That the domain of $f_{(E, \alpha)}$ is $[0, N^{\alpha})$ is obvious. That the codomain can be taken to be $[0, N) \subset \mathbb{N}$, however, is less obvious.
$$\begin{align}f_{(E, \alpha)}(x) = x_{1} d_{1} + x_{2} d_{2} + \ldots + x_{n} d_{n}\end{align}$$
where $x_{i} \in [0, p_{\alpha(i)})$ is the $i$-th natural coordinate of $x$ and $d_{i} = \prod_{j < \alpha(i)} p_{j}$.
So we have
$$\begin{align}\max \left( f_{(E, \alpha)} \right) &= (p_{\alpha(1)} - 1) d_{1} + (p_{\alpha(2)} - 1) d_{2} + \ldots + (p_{\alpha(n)} - 1) d_{n} \\&= (p_{\alpha(1)} - 1) \prod_{j < \alpha(1)} p_{j} + (p_{\alpha(2)} - 1) \prod_{j < \alpha(2)} p_{j} + \ldots + (p_{\alpha(n)} - 1) \prod_{j < \alpha(n)} p_{j} \\\end{align}$$
Without loss of generality (reordering the modes of $L_{(E, \alpha)}$ does not change this maximum), we assume $\alpha(1) \leq \alpha(2) \leq \ldots \leq \alpha(n)$, i.e., $d_{1} \leq d_{2} \leq \ldots \leq d_{n}$. Then we have
$$\begin{align}\max \left( f_{(E, \alpha)} \right) &= (p_{\alpha(1)} - 1) \prod_{j < \alpha(1)} p_{j} + (p_{\alpha(2)} - 1) \prod_{j < \alpha(2)} p_{j} + \ldots + (p_{\alpha(n)} - 1) \prod_{j < \alpha(n)} p_{j} \\&\leq p_{\alpha(1)} \prod_{j < \alpha(1)} p_{j} + (p_{\alpha(2)} - 1) \prod_{j < \alpha(2)} p_{j} + \ldots + (p_{\alpha(n)} - 1) \prod_{j < \alpha(n)} p_{j} \\&\leq \prod_{j < \alpha(2)} p_{j} + (p_{\alpha(2)} - 1) \prod_{j < \alpha(2)} p_{j} + \ldots + (p_{\alpha(n)} - 1) \prod_{j < \alpha(n)} p_{j} \\&= p_{\alpha(2)} \prod_{j < \alpha(2)} p_{j} + \ldots + (p_{\alpha(n)} - 1) \prod_{j < \alpha(n)} p_{j} \\&\leq \prod_{j < \alpha(3)} p_{j} + \ldots + (p_{\alpha(n)} - 1) \prod_{j < \alpha(n)} p_{j} \\&\leq \ldots \\&\leq \prod_{j < \alpha(n)} p_{j} + (p_{\alpha(n)} - 1) \prod_{j < \alpha(n)} p_{j} \\&= p_{\alpha(n)} \prod_{j < \alpha(n)} p_{j} \\&= \prod_{j \leq \alpha(n)} p_{j} \\&\leq \prod_{j \leq k} p_{j} \\&= N\end{align}$$
Thus, we have $f_{(E, \alpha)}: [0, N^{\alpha}) \to [0, N) \subset \mathbb{N}$.
Because $f_{(E, \alpha)}$ is a multilinear function, and because of the canonical isomorphisms, $f_{(E, \alpha)}$ can be described as the multilinear function
$$\begin{align}[0, p_{\alpha(1)}) \times [0, p_{\alpha(2)}) \times \ldots \times [0, p_{\alpha(n)}) \xrightarrow{} [0, p_1) \times [0, p_2) \times \ldots \times [0, p_k)\end{align}$$
We denote a vector space $V = [0, p_{\alpha(1)}) \times [0, p_{\alpha(2)}) \times \ldots \times [0, p_{\alpha(n)})$ and a vector space $W = [0, p_1) \times [0, p_2) \times \ldots \times [0, p_k)$. Then the layout function $f_{(E, \alpha)}$ is a linear map $V \xrightarrow{} W$.
Suppose $v_{1}, v_{2}, av_{1}, bv_{2}, av_{1} + bv_{2} \in V$, $f_{(E, \alpha)}(v_{1}) = w_{1}$, and $f_{(E, \alpha)}(v_{2}) = w_{2}$. Then we have
$$\begin{align}f_{(E, \alpha)}(v_{1}) &= v_{1, 1} d_{1} + v_{1, 2} d_{2} + \ldots + v_{1, n} d_{n} \\f_{(E, \alpha)}(v_{2}) &= v_{2, 1} d_{1} + v_{2, 2} d_{2} + \ldots + v_{2, n} d_{n} \\f_{(E, \alpha)}(av_{1}) &= av_{1, 1} d_{1} + av_{1, 2} d_{2} + \ldots + av_{1, n} d_{n} \\&= af_{(E, \alpha)}(v_{1}) \\f_{(E, \alpha)}(bv_{2}) &= bv_{2, 1} d_{1} + bv_{2, 2} d_{2} + \ldots + bv_{2, n} d_{n} \\&= bf_{(E, \alpha)}(v_{2}) \\f_{(E, \alpha)}(av_{1} + bv_{2}) &= (av_{1} + bv_{2})_{1} d_{1} + (av_{1} + bv_{2})_{2} d_{2} + \ldots + (av_{1} + bv_{2})_{n} d_{n} \\&= af_{(E, \alpha)}(v_{1}) + bf_{(E, \alpha)}(v_{2})\end{align}$$
So $f_{(E, \alpha)}: V \xrightarrow{} W$ is indeed a linear (multilinear) map.
Given an index $1 \leq i \leq n$, let $\delta_{i} \in \mathbb{N}^{\times n}$ denote the coordinate that is zero everywhere except in the $i$-th position, where it is 1. Note that here the indexing is 1-based, unlike the similar 0-based indexing used in Proposition 2.14. $\delta_{i}$ is the basis vector of the vector space $V$ for $1 \leq i \leq n$.
Applying $f_{(E, \alpha)}$ to $\delta_{i}$ for $1 \leq i \leq n$, we have
$$\begin{align}f_{(E, \alpha)}(\delta_{i}) &= d_{i} \\&= \prod_{j < \alpha(i)} p_{j}\end{align}$$
Given the canonical isomorphism $[0, N) \cong [0, p_1) \times [0, p_2) \times \ldots \times [0, p_k)$, we have the multilinear function $g: W \xrightarrow{} \mathbb{N}$
$$\begin{align}g(w) = w_{1} + w_{2} p_{1} + w_{3} p_{1} p_{2} + \ldots + w_{k} \prod_{j < k} p_{j}\end{align}$$
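For instance (an illustrative computation, not from the original article), with $(p_{1}, p_{2}, p_{3}) = (2, 3, 5)$ and $w = (1, 2, 3) \in [0, 2) \times [0, 3) \times [0, 5)$, we have $g(w) = 1 + 2 \cdot 2 + 3 \cdot (2 \cdot 3) = 23 \in [0, 30)$, which is exactly the integer whose coordinates under the canonical isomorphism are $(1, 2, 3)$.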
Given an index $1 \leq i \leq k$, let $\beta_{i} \in \mathbb{N}^{\times k}$ denote the coordinate that is zero everywhere except in the $i$-th position, where it is 1. $\beta_{i}$ is the basis vector of the vector space $W$ for $1 \leq i \leq k$.
Thus, we have
$$\begin{align}f_{(E, \alpha)}(\delta_{i})&= \prod_{j < \alpha(i)} p_{j} \\&= g(\beta_{\alpha(i)})\end{align}$$
This means the basis vector $\delta_{i}$ in the vector space $V$ is sent to the basis vector $\beta_{\alpha(i)}$ in the vector space $W$ by the multilinear function.
Suppose $v = c_{1} \delta_{1} + c_{2} \delta_{2} + \ldots + c_{n} \delta_{n} \in V$. Then we have
$$\begin{align}f_{(E, \alpha)}(v) &= f_{(E, \alpha)}(c_{1} \delta_{1} + c_{2} \delta_{2} + \ldots + c_{n} \delta_{n}) \\&= (c_{1} \delta_{1} + c_{2} \delta_{2} + \ldots + c_{n} \delta_{n})_{1} d_{1} + (c_{1} \delta_{1} + c_{2} \delta_{2} + \ldots + c_{n} \delta_{n})_{2} d_{2} + \ldots + (c_{1} \delta_{1} + c_{2} \delta_{2} + \ldots + c_{n} \delta_{n})_{n} d_{n} \\&= c_{1} d_{1} + c_{2} d_{2} + \ldots + c_{n} d_{n} \\&= c_{1} f_{(E, \alpha)}(\delta_{1}) + c_{2} f_{(E, \alpha)}(\delta_{2}) + \ldots + c_{n} f_{(E, \alpha)}(\delta_{n}) \\&= c_{1} g(\beta_{\alpha(1)}) + c_{2} g(\beta_{\alpha(2)}) + \ldots + c_{n} g(\beta_{\alpha(n)}) \\&= g(c_{1} \beta_{\alpha(1)} + c_{2} \beta_{\alpha(2)} + \ldots + c_{n} \beta_{\alpha(n)}) \\\end{align}$$
Therefore, we have set up the basis vector mapping for the multilinear function $f_{(E, \alpha)}: V \xrightarrow{} W$. Given $v = c_{1} \delta_{1} + c_{2} \delta_{2} + \ldots + c_{n} \delta_{n} \in V$, it maps to $w = c_{1} \beta_{\alpha(1)} + c_{2} \beta_{\alpha(2)} + \ldots + c_{n} \beta_{\alpha(n)} \in W$.
This concludes the proof. $\square$
Lemma 3.12
Elaborating on Remark 3.11, we have the following lemma, which indicates that composition in the category $\textbf{Fact}$ is compatible with the composition of layout functions.
Suppose we have morphisms of finite sets $\alpha: \underline{n} \to \underline{k}$, $\beta: \underline{m} \to \underline{n}$, and an expression $E = [p_1 p_2 \ldots p_k]$. Write $\gamma = \alpha \circ \beta$. Consider the composition
$$\begin{align}\gamma_{E}: E^{\gamma} = (E^{\alpha})^{\beta} \xrightarrow{\beta_{E^{\alpha}}} E^{\alpha} \xrightarrow{\alpha_{E}} E\end{align}$$
in $\textbf{Fact}$. Then the associated layout functions satisfy the composition equality
$$\begin{align}f_{(E, \gamma)} = f_{(E, \alpha)} \circ f_{(E^{\alpha}, \beta)}\end{align}$$
Proof
Let $N = p_{1} \cdot p_{2} \cdot \ldots \cdot p_{k}$, $N^{\alpha} = p_{\alpha(1)} \cdot p_{\alpha(2)} \cdot \ldots \cdot p_{\alpha(n)}$, and $N^{\gamma} = p_{\gamma(1)} \cdot p_{\gamma(2)} \cdot \ldots \cdot p_{\gamma(m)}$. We use the canonical isomorphisms
$$\begin{align}[0, N) &\cong [0, p_1) \times [0, p_2) \times \ldots \times [0, p_k) \\[0, N^{\alpha}) &\cong [0, p_{\alpha(1)}) \times [0, p_{\alpha(2)}) \times \ldots \times [0, p_{\alpha(n)}) \\[0, N^{\gamma}) &\cong [0, p_{\gamma(1)}) \times [0, p_{\gamma(2)}) \times \ldots \times [0, p_{\gamma(m)})\end{align}$$
to write the domains and codomains of the layout functions in question.
More specifically, we have
$$\begin{align}f_{(E, \alpha)}: [0, N^{\alpha}) &\to [0, N) \\f_{(E^{\alpha}, \beta)}: [0, N^{\gamma}) &\to [0, N^{\alpha}) \\f_{(E, \gamma)}: [0, N^{\gamma}) &\to [0, N)\end{align}$$
We are trying to equate the multilinear function
$$\begin{align}f_{(E, \gamma)}: [0, p_{\gamma(1)}) \times [0, p_{\gamma(2)}) \times \ldots \times [0, p_{\gamma(m)}) \xrightarrow{} [0, p_1) \times [0, p_2) \times \ldots \times [0, p_k)\end{align}$$
with the composition of the two multilinear functions
$$\begin{align}f_{(E, \alpha)}&: [0, p_{\alpha(1)}) \times [0, p_{\alpha(2)}) \times \ldots \times [0, p_{\alpha(n)}) \xrightarrow{} [0, p_1) \times [0, p_2) \times \ldots \times [0, p_k) \\f_{(E^{\alpha}, \beta)}&: [0, p_{\gamma(1)}) \times [0, p_{\gamma(2)}) \times \ldots \times [0, p_{\gamma(m)}) \xrightarrow{} [0, p_{\alpha(1)}) \times [0, p_{\alpha(2)}) \times \ldots \times [0, p_{\alpha(n)})\end{align}$$
We denote a vector space $V = [0, p_{\alpha(1)}) \times [0, p_{\alpha(2)}) \times \ldots \times [0, p_{\alpha(n)})$, a vector space $W = [0, p_1) \times [0, p_2) \times \ldots \times [0, p_k)$, and a vector space $U = [0, p_{\gamma(1)}) \times [0, p_{\gamma(2)}) \times \ldots \times [0, p_{\gamma(m)})$. The basis vectors of $V$, $W$, and $U$ are $\delta_{i}$, $\sigma_{j}$, and $\tau_{l}$ for $1 \leq i \leq n$, $1 \leq j \leq k$, and $1 \leq l \leq m$.
Based on the basis vector mapping by Remark 3.11, given $u = c_{1} \tau_{1} + c_{2} \tau_{2} + \ldots + c_{m} \tau_{m} \in U$, by $f_{(E^{\alpha}, \beta)}$, it maps to $v = c_{1} \delta_{\beta(1)} + c_{2} \delta_{\beta(2)} + \ldots + c_{m} \delta_{\beta(m)} \in V$. Then by $f_{(E, \alpha)}$, it maps to $w = c_{1} \sigma_{\alpha(\beta(1))} + c_{2} \sigma_{\alpha(\beta(2))} + \ldots + c_{m} \sigma_{\alpha(\beta(m))} \in W$.
Given $u = c_{1} \tau_{1} + c_{2} \tau_{2} + \ldots + c_{m} \tau_{m} \in U$, by $f_{(E, \gamma)}$, because $\gamma = \alpha \circ \beta$, $\gamma(i) = \alpha(\beta(i))$, it maps to $w^{\prime} = c_{1} \sigma_{\gamma(1)} + c_{2} \sigma_{\gamma(2)} + \ldots + c_{m} \sigma_{\gamma(m)} = c_{1} \sigma_{\alpha(\beta(1))} + c_{2} \sigma_{\alpha(\beta(2))} + \ldots + c_{m} \sigma_{\alpha(\beta(m))} \in W$.
Because $w = w^{\prime}$, we have $f_{(E, \gamma)} = f_{(E, \alpha)} \circ f_{(E^{\alpha}, \beta)}$.
This concludes the proof. $\square$
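To make the composition concrete, consider a small illustrative example (not from the original article). Take $E = [2\ 3\ 5]$, $\alpha: \underline{2} \to \underline{3}$ with $\alpha(1) = 3$ and $\alpha(2) = 1$, and $\beta: \underline{1} \to \underline{2}$ with $\beta(1) = 2$, so that $\gamma = \alpha \circ \beta$ has $\gamma(1) = 1$. Then $E^{\alpha} = [5\ 2]$ and
$$\begin{align}f_{(E^{\alpha}, \beta)}(x) = x \prod_{j < 2} p_{\alpha(j)} = 5 x, \quad f_{(E, \alpha)}(y_{1}, y_{2}) = 6 y_{1} + y_{2}, \quad f_{(E, \gamma)}(x) = x \prod_{j < 1} p_{j} = x\end{align}$$
For $x = 1$, $f_{(E^{\alpha}, \beta)}(1) = 5$, which has coordinates $(y_{1}, y_{2}) = (0, 1)$ in $[0, 5) \times [0, 2)$, and $f_{(E, \alpha)}(0, 1) = 1 = f_{(E, \gamma)}(1)$, as the lemma predicts.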
In Lemma 3.12, the per-mode condition of admissibility for composition (Definition 2.12) is satisfied. To see this, we have
$$\begin{align}E &= [p_1 p_2 \ldots p_k] \\E^{\alpha} &= [p_{\alpha(1)} p_{\alpha(2)} \ldots p_{\alpha(n)}] \\E^{\gamma} &= [p_{\gamma(1)} p_{\gamma(2)} \ldots p_{\gamma(m)}]\end{align}$$
$$\begin{align}L_{(E, \alpha)} &= \left(p_{\alpha(1)}, p_{\alpha(2)}, \ldots, p_{\alpha(n)}\right) : \left(d_{1}, d_{2}, \ldots, d_{n}\right) \\\end{align}$$
where $d_{i} = \prod_{j < \alpha(i)} p_{j}$.
$$\begin{align}L_{(E^{\alpha}, \beta)} &= \left(p_{\alpha(\beta(1))}, p_{\alpha(\beta(2))}, \ldots, p_{\alpha(\beta(m))}\right) : \left(d^{\prime}_{1}, d^{\prime}_{2}, \ldots, d^{\prime}_{m}\right) \\\end{align}$$
where $d^{\prime}_{i} = \prod_{j < \beta(i)} p_{\alpha(j)}$.
Because
$$\begin{align}M &= p_{\alpha(1)} \cdot p_{\alpha(2)} \cdot \ldots \cdot p_{\alpha(n)} \\&= \left( \prod_{j < \beta(i)} p_{\alpha(j)} \right) \cdot p_{\alpha(\beta(i))} \cdot p_{\alpha(\beta(i) + 1)} \cdot \ldots \cdot p_{\alpha(n)} \\&= \left( \prod_{j < \beta(i)} p_{\alpha(j)} \right) \cdot M^{\prime}\end{align}$$
Thus, $M$ is left divisible by $d^{\prime}_{i}$, $M^{\prime}$ is weakly left divisible and also left divisible by $p_{\alpha(\beta(i))}$, and the per-mode condition of admissibility for composition is satisfied.
The disjointness condition in Definition 2.17 is satisfied when $\beta: \underline{m} \to \underline{n}$ is an injective function and may be violated when it is not.
When $\beta: \underline{m} \to \underline{n}$ is injective, we have $m \leq n$, and $\beta{(i)} \neq \beta{(j)}$ for $i \neq j$. By Definition 2.16, for each mode $i \in [1, m]$, we have $N_{i} = p_{\alpha(\beta(i))}$, $I_{i} = [d^{\prime}_{i}, d^{\prime}_{i} (N_{i} - 1)]$. $M^{\prime} = p_{\alpha(1)} \cdot p_{\alpha(2)} \cdot \ldots \cdot p_{\alpha(n - 1)}$. So the interval of definition is $J_{i} = I_{i} \cap [1, M^{\prime})$. Because $d^{\prime}_{i} = \prod_{j < \beta(i)} p_{\alpha(j)} \geq 1$, $d^{\prime}_{i}(N_{i} - 1) = \prod_{j < \beta(i)} p_{\alpha(j)} \cdot (p_{\alpha(\beta(i))} - 1) = \prod_{j \leq \beta(i)} p_{\alpha(j)} - \prod_{j < \beta(i)} p_{\alpha(j)} < M^{\prime}$. Thus, $J_{i} = I_{i} = [\prod_{j < \beta(i)} p_{\alpha(j)}, \prod_{j \leq \beta(i)} p_{\alpha(j)} - \prod_{j < \beta(i)} p_{\alpha(j)}]$. Suppose we have a different mode $k$, $k \neq i$. Then $J_{k} = I_{k} = [\prod_{j < \beta(k)} p_{\alpha(j)}, \prod_{j \leq \beta(k)} p_{\alpha(j)} - \prod_{j < \beta(k)} p_{\alpha(j)}]$. Because $\beta(i) \neq \beta(k)$, without losing generality, we assume $\beta(i) < \beta(k)$. Then we have
$$\begin{align}\prod_{j < \beta(k)} p_{\alpha(j)} - \left( \prod_{j \leq \beta(i)} p_{\alpha(j)} - \prod_{j < \beta(i)} p_{\alpha(j)} \right)&= \prod_{j < \beta(k)} p_{\alpha(j)} - \prod_{j \leq \beta(i)} p_{\alpha(j)} + \prod_{j < \beta(i)} p_{\alpha(j)} \\&> 0\end{align}$$
Thus, $J_{i} \cap J_{k} = \emptyset$ for any $i \neq k$. The disjointness condition is satisfied.
When $\beta: \underline{m} \to \underline{n}$ is not injective, we don’t have $\beta{(i)} \neq \beta{(j)}$ for $i \neq j$. The disjointness condition may be violated.
So Lemma 3.12 in effect proves Theorem 2.18 for the case where the second layout has arbitrary strides (though not yet arbitrary shapes) and satisfies Definition 2.17.
Definition 3.14
We now define a “realization” functor from the category $\textbf{Fact}$ to the category $\textbf{FinSet}$ that sends morphisms of ordered factorizations to their associated layout functions.
Let $R: \textbf{Fact} \to \textbf{FinSet}$ be the functor defined as follows: on objects, $R$ sends an ordered factorization $E = [p_1 p_2 \ldots p_k]$ to the finite set $[0, N)$ where $N = p_{1} \cdot p_{2} \cdot \ldots \cdot p_{k}$; on morphisms, $R$ sends $\alpha_{E}: E^{\alpha} \to E$ to the associated layout function $f_{(E, \alpha)}: [0, N^{\alpha}) \to [0, N)$.
By Lemma 3.12, $R: \textbf{Fact} \to \textbf{FinSet}$ does indeed define a functor since it respects the composition of morphisms and identities as well.
We note that, as mentioned previously, $R$ does not contain every possible function expressible as a layout function in its image. However, it does contain every automorphism of $[0, N) \xrightarrow{\cong} [0, N)$ expressible as a layout function in its image.
Proposition 3.15
Let $N > 0$ be a positive integer and let $f: [0, N) \to [0, N)$ be an automorphism such that there exists a layout $L$ of size $N$ with $f = f_{L}$. Then $f_{L}$ is in the image of the realization functor $R$.
Proof
Without loss of generality, we may suppose that the shape tuple of $L$ is given by $(p_{1}, p_{2}, \ldots, p_{k})$ where the $p_{i}$ are all prime numbers and $N = p_{1} \cdot p_{2} \cdot \ldots \cdot p_{k}$.
In order for $f_{L}$ to be an automorphism of $[0, N)$, the sorted $L$, $L^{\varphi}$, must be of the form
$$\begin{align}L^{\varphi} = \left(p_{\varphi(1)}, p_{\varphi(2)}, \ldots, p_{\varphi(k)}\right) : \left(1, p_{\varphi(1)}, p_{\varphi(1)} p_{\varphi(2)}, \ldots, \prod_{1 \leq i < k} p_{\varphi(i)}\right)\end{align}$$
for some permutation $\varphi \in \Sigma_{k}$. This means that if we let $\psi = \varphi^{-1}$ be the inverse permutation, then
$$\begin{align}\psi_{E}: E^{\psi} = [p_{1} p_{2} \ldots p_{k}] = [p_{\psi(\varphi(1))} p_{\psi(\varphi(2))} \ldots p_{\psi(\varphi(k))}] \xrightarrow{} E = [p_{\varphi(1)} p_{\varphi(2)} \ldots p_{\varphi(k)}]\end{align}$$
is a morphism in $\textbf{Fact}$ such that $R(\psi_{E}) = f_{L}$.
This concludes the proof. $\square$
Remark 3.16
One way to interpret Proposition 3.15 is that if we take the maximal subgroupoid $\textbf{Fact}^{\simeq}$ inside $\textbf{Fact}$, i.e., the subcategory of all invertible morphisms, then
$$\begin{align}R: \textbf{Fact}^{\simeq} \to \textbf{FinSet}\end{align}$$
carves out exactly those permutations expressible as layouts. Our motivation for this description is that for a fixed integer $N > 0$, the subset $\Sigma^{L}_{N}$ of $\Sigma_{N}$ on those automorphisms expressible as layout functions is typically not a subgroup (being not generally closed under the group multiplication, i.e., composition).
Instead, if we let
$$\begin{align}\textbf{Fact}^{\simeq}_{N} \subset \textbf{Fact}^{\simeq}\end{align}$$
be the full subgroupoid of those objects $[p_{1} p_{2} \ldots p_{k}]$ with $N = p_{1} \cdot p_{2} \cdot \ldots \cdot p_{k}$, then $\Sigma^{L}_{N}$ consists of those morphisms in the image of $R$ on $\textbf{Fact}^{\simeq}_{N}$. However, we see that $\Sigma^{L}_{N}$ is closed under the operation of taking the group inverse (the morphisms taking permutations to their inverses also lie in $\textbf{Fact}^{\simeq}_{N}$). Moreover, in the special case that $N$ is a prime power $p^{k}$, $\Sigma^{L}_{N}$ is in fact a subgroup and is isomorphic to the symmetric group $\Sigma_{k}$. This corresponds to $\textbf{Fact}^{\simeq}_{p^{k}}$ being a groupoid with a single object $[p p \ldots p]$, i.e., a group.
References
CuTe Layout Algebra
https://leimao.github.io/article/CuTe-Layout-Algebra/
CUDA Cooperative Groups
Introduction
CUDA cooperative groups is a feature that allows developers to create and manage groups of threads that can synchronize and communicate with each other. Cooperative groups provide a more flexible and efficient way to write parallel algorithms on the GPU compared to traditional CUDA programming models.
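As a minimal sketch of the API (my own illustration, not code from this post; the kernel and variable names are made up), the snippet below partitions a thread block into warp-sized tiles and reduces each tile with group-scoped shuffles. The output buffer is assumed to hold one float per warp and the block size is assumed to be a multiple of 32.
#include <cooperative_groups.h>
namespace cg = cooperative_groups;
// Each warp-sized tile sums its 32 values; lane 0 of each tile writes the
// partial sum to output[warp_index].
__global__ void warp_tile_sum(float const* __restrict__ input, float* __restrict__ output, size_t n)
{
    cg::thread_block block{cg::this_thread_block()};
    cg::thread_block_tile<32> warp{cg::tiled_partition<32>(block)};
    size_t const idx{blockIdx.x * static_cast<size_t>(blockDim.x) + threadIdx.x};
    float val{idx < n ? input[idx] : 0.0f};
    // Tree reduction within the tile using group-scoped shuffles.
    for (unsigned int offset{warp.size() / 2U}; offset > 0U; offset /= 2U)
    {
        val += warp.shfl_down(val, offset);
    }
    if (warp.thread_rank() == 0U)
    {
        output[idx / 32U] = val;
    }
}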
In this blog post, we will discuss the parallel reduction algorithm and its implementation in CUDA using cooperative groups.
Batched Reduce Sum and Full Reduce Sum Using Cooperative Groups
In this example, we modified the two batched reduce sum kernels implemented in the previous blog post “CUDA Reduction” to use cooperative groups for synchronization and communication between threads. The reduction algorithms remain exactly the same; only the APIs used for synchronizing groups of threads are different. We also implemented a full reduce sum kernel that reduces an array of elements to a single value using cooperative groups with a single kernel launch.
#include <cassert>#include <functional>#include <iostream>#include <string>#include <vector>#include <cooperative_groups.h>#include <cuda_runtime.h>#define CHECK_CUDA_ERROR(val) check((val), #val, __FILE__, __LINE__)void check(cudaError_t err, char const* func, char const* file, int line){ if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << " " << func << std::endl; std::exit(EXIT_FAILURE); }}#define CHECK_LAST_CUDA_ERROR() check_last(__FILE__, __LINE__)void check_last(char const* file, int line){ cudaError_t const err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); }}template <class T>float measure_performance(std::function<T(cudaStream_t)> bound_function, cudaStream_t stream, size_t num_repeats = 10, size_t num_warmups = 10){ cudaEvent_t start, stop; float time; CHECK_CUDA_ERROR(cudaEventCreate(&start)); CHECK_CUDA_ERROR(cudaEventCreate(&stop)); for (size_t i{0}; i < num_warmups; ++i) { bound_function(stream); } CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); CHECK_CUDA_ERROR(cudaEventRecord(start, stream)); for (size_t i{0}; i < num_repeats; ++i) { bound_function(stream); } CHECK_CUDA_ERROR(cudaEventRecord(stop, stream)); CHECK_CUDA_ERROR(cudaEventSynchronize(stop)); CHECK_LAST_CUDA_ERROR(); CHECK_CUDA_ERROR(cudaEventElapsedTime(&time, start, stop)); CHECK_CUDA_ERROR(cudaEventDestroy(start)); CHECK_CUDA_ERROR(cudaEventDestroy(stop)); float const latency{time / num_repeats}; return latency;}std::string std_string_centered(std::string const& s, size_t width, char pad = ' '){ size_t const l{s.length()}; // Throw an exception if width is too small. if (width < l) { throw std::runtime_error("Width is too small."); } size_t const left_pad{(width - l) / 2}; size_t const right_pad{width - l - left_pad}; std::string const s_centered{std::string(left_pad, pad) + s + std::string(right_pad, pad)}; return s_centered;}template <size_t NUM_THREADS>__device__ float thread_block_reduce_sum( cooperative_groups::thread_block_tile<NUM_THREADS> group, float shared_data[NUM_THREADS], float val){ static_assert(NUM_THREADS % 32 == 0, "NUM_THREADS must be a multiple of 32"); size_t thread_idx{group.thread_rank()}; shared_data[thread_idx] = val; group.sync();#pragma unroll for (size_t offset{group.size() / 2}; offset > 0; offset /= 2) { if (thread_idx < offset) { shared_data[thread_idx] += shared_data[thread_idx + offset]; } group.sync(); } // There will be no shared memory bank conflicts here. // Because multiple threads in a warp address the same shared memory // location, resulting in a broadcast. return shared_data[0];}__device__ float thread_block_reduce_sum(cooperative_groups::thread_block group, float* shared_data, float val){ size_t const thread_idx{group.thread_rank()}; shared_data[thread_idx] = val; group.sync(); for (size_t stride{group.size() / 2}; stride > 0; stride /= 2) { if (thread_idx < stride) { shared_data[thread_idx] += shared_data[thread_idx + stride]; } group.sync(); } return shared_data[0];}template <size_t NUM_WARPS>__device__ float thread_block_reduce_sum(float shared_data[NUM_WARPS]){ float sum{0.0f};#pragma unroll for (size_t i{0}; i < NUM_WARPS; ++i) { // There will be no shared memory bank conflicts here. // Because multiple threads in a warp address the same shared memory // location, resulting in a broadcast. 
sum += shared_data[i]; } return sum;}__device__ float thread_reduce_sum(float const* __restrict__ input_data, size_t start_offset, size_t num_elements, size_t stride){ float sum{0.0f}; for (size_t i{start_offset}; i < num_elements; i += stride) { sum += input_data[i]; } return sum;}__device__ floatwarp_reduce_sum(cooperative_groups::thread_block_tile<32> group, float val){#pragma unroll for (size_t offset{group.size() / 2}; offset > 0; offset /= 2) { // The shfl_down function is a warp shuffle operation that only exists // for thread block tiles of size 32. val += group.shfl_down(val, offset); } // Only the first thread in the warp will return the correct result. return val;}template <size_t NUM_THREADS>__device__ floatthread_block_reduce_sum_v1(float const* __restrict__ input_data, size_t num_elements){ static_assert(NUM_THREADS % 32 == 0, "NUM_THREADS must be a multiple of 32"); __shared__ float shared_data[NUM_THREADS]; size_t const thread_idx{ cooperative_groups::this_thread_block().thread_index().x}; float sum{ thread_reduce_sum(input_data, thread_idx, num_elements, NUM_THREADS)}; shared_data[thread_idx] = sum; // This somehow does not work. // static thread block cooperative groups is still not supported. // cooperative_groups::thread_block_tile<NUM_THREADS> const // thread_block{cooperative_groups::tiled_partition<NUM_THREADS>(cooperative_groups::this_thread_block())}; // float const block_sum{thread_block_reduce_sum<NUM_THREADS>(thread_block, // shared_data, sum)}; This works. float const block_sum{thread_block_reduce_sum( cooperative_groups::this_thread_block(), shared_data, sum)}; return block_sum;}template <size_t NUM_THREADS, size_t NUM_WARPS = NUM_THREADS / 32>__device__ floatthread_block_reduce_sum_v2(float const* __restrict__ input_data, size_t num_elements){ static_assert(NUM_THREADS % 32 == 0, "NUM_THREADS must be a multiple of 32"); __shared__ float shared_data[NUM_WARPS]; size_t const thread_idx{ cooperative_groups::this_thread_block().thread_index().x}; float sum{ thread_reduce_sum(input_data, thread_idx, num_elements, NUM_THREADS)}; cooperative_groups::thread_block_tile<32> const warp{ cooperative_groups::tiled_partition<32>( cooperative_groups::this_thread_block())}; sum = warp_reduce_sum(warp, sum); if (warp.thread_rank() == 0) { shared_data[cooperative_groups::this_thread_block().thread_rank() / 32] = sum; } cooperative_groups::this_thread_block().sync(); float const block_sum{thread_block_reduce_sum<NUM_WARPS>(shared_data)}; return block_sum;}template <size_t NUM_THREADS>__global__ void batched_reduce_sum_v1(float* __restrict__ output_data, float const* __restrict__ input_data, size_t num_elements_per_batch){ static_assert(NUM_THREADS % 32 == 0, "NUM_THREADS must be a multiple of 32"); size_t const block_idx{cooperative_groups::this_grid().block_rank()}; size_t const thread_idx{ cooperative_groups::this_thread_block().thread_rank()}; float const block_sum{thread_block_reduce_sum_v1<NUM_THREADS>( input_data + block_idx * num_elements_per_batch, num_elements_per_batch)}; if (thread_idx == 0) { output_data[block_idx] = block_sum; }}template <size_t NUM_THREADS>__global__ void batched_reduce_sum_v2(float* __restrict__ output_data, float const* __restrict__ input_data, size_t num_elements_per_batch){ static_assert(NUM_THREADS % 32 == 0, "NUM_THREADS must be a multiple of 32"); constexpr size_t NUM_WARPS{NUM_THREADS / 32}; size_t const block_idx{cooperative_groups::this_grid().block_rank()}; size_t const thread_idx{ cooperative_groups::this_thread_block().thread_rank()}; 
float const block_sum{thread_block_reduce_sum_v2<NUM_THREADS, NUM_WARPS>( input_data + block_idx * num_elements_per_batch, num_elements_per_batch)}; if (thread_idx == 0) { output_data[block_idx] = block_sum; }}template <size_t NUM_THREADS, size_t NUM_BLOCK_ELEMENTS>__global__ void full_reduce_sum(float* output, float const* __restrict__ input_data, size_t num_elements, float* workspace){ static_assert(NUM_THREADS % 32 == 0, "NUM_THREADS must be a multiple of 32"); static_assert(NUM_BLOCK_ELEMENTS % NUM_THREADS == 0, "NUM_BLOCK_ELEMENTS must be a multiple of NUM_THREADS"); // Workspace size: num_elements. size_t const num_grid_elements{ NUM_BLOCK_ELEMENTS * cooperative_groups::this_grid().num_blocks()}; float* const workspace_ptr_1{workspace}; float* const workspace_ptr_2{workspace + num_elements / 2}; size_t remaining_elements{num_elements}; // The first iteration of the reduction. float* workspace_output_data{workspace_ptr_1}; size_t const num_grid_iterations{ (remaining_elements + num_grid_elements - 1) / num_grid_elements}; for (size_t i{0}; i < num_grid_iterations; ++i) { size_t const grid_offset{i * num_grid_elements}; size_t const block_offset{grid_offset + cooperative_groups::this_grid().block_rank() * NUM_BLOCK_ELEMENTS}; size_t const num_actual_elements_to_reduce_per_block{ remaining_elements >= block_offset ? min(NUM_BLOCK_ELEMENTS, remaining_elements - block_offset) : 0}; float const block_sum{thread_block_reduce_sum_v1<NUM_THREADS>( input_data + block_offset, num_actual_elements_to_reduce_per_block)}; if (cooperative_groups::this_thread_block().thread_rank() == 0) { workspace_output_data [i * cooperative_groups::this_grid().num_blocks() + cooperative_groups::this_grid().block_rank()] = block_sum; } } cooperative_groups::this_grid().sync(); remaining_elements = (remaining_elements + NUM_BLOCK_ELEMENTS - 1) / NUM_BLOCK_ELEMENTS; // The rest iterations of the reduction. float* workspace_input_data{workspace_output_data}; workspace_output_data = workspace_ptr_2; while (remaining_elements > 1) { size_t const num_grid_iterations{ (remaining_elements + num_grid_elements - 1) / num_grid_elements}; for (size_t i{0}; i < num_grid_iterations; ++i) { size_t const grid_offset{i * num_grid_elements}; size_t const block_offset{ grid_offset + cooperative_groups::this_grid().block_rank() * NUM_BLOCK_ELEMENTS}; size_t const num_actual_elements_to_reduce_per_block{ remaining_elements >= block_offset ? min(NUM_BLOCK_ELEMENTS, remaining_elements - block_offset) : 0}; float const block_sum{thread_block_reduce_sum_v1<NUM_THREADS>( workspace_input_data + block_offset, num_actual_elements_to_reduce_per_block)}; if (cooperative_groups::this_thread_block().thread_rank() == 0) { workspace_output_data [i * cooperative_groups::this_grid().num_blocks() + cooperative_groups::this_grid().block_rank()] = block_sum; } } cooperative_groups::this_grid().sync(); remaining_elements = (remaining_elements + NUM_BLOCK_ELEMENTS - 1) / NUM_BLOCK_ELEMENTS; // Swap the input and output data. float* const temp{workspace_input_data}; workspace_input_data = workspace_output_data; workspace_output_data = temp; } // Copy the final result to the output. 
workspace_output_data = workspace_input_data; if (cooperative_groups::this_grid().thread_rank() == 0) { *output = workspace_output_data[0]; }}template <size_t NUM_THREADS>void launch_batched_reduce_sum_v1(float* output_data, float const* input_data, size_t batch_size, size_t num_elements_per_batch, cudaStream_t stream){ size_t const num_blocks{batch_size}; batched_reduce_sum_v1<NUM_THREADS><<<num_blocks, NUM_THREADS, 0, stream>>>( output_data, input_data, num_elements_per_batch); CHECK_LAST_CUDA_ERROR();}template <size_t NUM_THREADS>void launch_batched_reduce_sum_v2(float* output_data, float const* input_data, size_t batch_size, size_t num_elements_per_batch, cudaStream_t stream){ size_t const num_blocks{batch_size}; batched_reduce_sum_v2<NUM_THREADS><<<num_blocks, NUM_THREADS, 0, stream>>>( output_data, input_data, num_elements_per_batch); CHECK_LAST_CUDA_ERROR();}template <size_t NUM_THREADS, size_t NUM_BLOCK_ELEMENTS>void launch_full_reduce_sum(float* output, float const* input_data, size_t num_elements, float* workspace, cudaStream_t stream){ // https://docs.nvidia.com/cuda/archive/12.4.1/cuda-c-programming-guide/index.html#grid-synchronization void const* func{reinterpret_cast<void const*>( full_reduce_sum<NUM_THREADS, NUM_BLOCK_ELEMENTS>)}; int dev{0}; cudaDeviceProp deviceProp; CHECK_CUDA_ERROR(cudaGetDeviceProperties(&deviceProp, dev)); dim3 const grid_dim{ static_cast<unsigned int>(deviceProp.multiProcessorCount)}; dim3 const block_dim{NUM_THREADS}; // This will launch a grid that can maximally fill the GPU, on the // default stream with kernel arguments. // In practice, it's not always the best. // void const* func{reinterpret_cast<void const*>( // full_reduce_sum<NUM_THREADS, NUM_BLOCK_ELEMENTS>)}; // int dev{0}; // dim3 const block_dim{NUM_THREADS}; // int num_blocks_per_sm{0}; // cudaDeviceProp deviceProp; // cudaGetDeviceProperties(&deviceProp, dev); // cudaOccupancyMaxActiveBlocksPerMultiprocessor(&num_blocks_per_sm, func, // NUM_THREADS, 0); // dim3 const grid_dim{static_cast<unsigned int>(num_blocks_per_sm)}; void* args[]{static_cast<void*>(&output), static_cast<void*>(&input_data), static_cast<void*>(&num_elements), static_cast<void*>(&workspace)}; CHECK_CUDA_ERROR(cudaLaunchCooperativeKernel(func, grid_dim, block_dim, args, 0, stream)); CHECK_LAST_CUDA_ERROR();}float profile_full_reduce_sum( std::function<void(float*, float const*, size_t, float*, cudaStream_t)> full_reduce_sum_launch_function, size_t num_elements){ cudaStream_t stream; CHECK_CUDA_ERROR(cudaStreamCreate(&stream)); constexpr float element_value{1.0f}; std::vector<float> input_data(num_elements, element_value); float output{0.0f}; float* d_input_data; float* d_workspace; float* d_output; CHECK_CUDA_ERROR(cudaMalloc(&d_input_data, num_elements * sizeof(float))); CHECK_CUDA_ERROR(cudaMalloc(&d_workspace, num_elements * sizeof(float))); CHECK_CUDA_ERROR(cudaMalloc(&d_output, sizeof(float))); CHECK_CUDA_ERROR(cudaMemcpy(d_input_data, input_data.data(), num_elements * sizeof(float), cudaMemcpyHostToDevice)); full_reduce_sum_launch_function(d_output, d_input_data, num_elements, d_workspace, stream); CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); // Verify the correctness of the kernel. 
CHECK_CUDA_ERROR( cudaMemcpy(&output, d_output, sizeof(float), cudaMemcpyDeviceToHost)); if (output != num_elements * element_value) { std::cout << "Expected: " << num_elements * element_value << " but got: " << output << std::endl; throw std::runtime_error("Error: incorrect sum"); } std::function<void(cudaStream_t)> const bound_function{ std::bind(full_reduce_sum_launch_function, d_output, d_input_data, num_elements, d_workspace, std::placeholders::_1)}; float const latency{measure_performance<void>(bound_function, stream)}; std::cout << "Latency: " << latency << " ms" << std::endl; // Compute effective bandwidth. size_t num_bytes{num_elements * sizeof(float) + sizeof(float)}; float const bandwidth{(num_bytes * 1e-6f) / latency}; std::cout << "Effective Bandwidth: " << bandwidth << " GB/s" << std::endl; CHECK_CUDA_ERROR(cudaFree(d_input_data)); CHECK_CUDA_ERROR(cudaFree(d_workspace)); CHECK_CUDA_ERROR(cudaFree(d_output)); CHECK_CUDA_ERROR(cudaStreamDestroy(stream)); return latency;}float profile_batched_reduce_sum( std::function<void(float*, float const*, size_t, size_t, cudaStream_t)> batched_reduce_sum_launch_function, size_t batch_size, size_t num_elements_per_batch){ size_t const num_elements{batch_size * num_elements_per_batch}; cudaStream_t stream; CHECK_CUDA_ERROR(cudaStreamCreate(&stream)); constexpr float element_value{1.0f}; std::vector<float> input_data(num_elements, element_value); std::vector<float> output_data(batch_size, 0.0f); float* d_input_data; float* d_output_data; CHECK_CUDA_ERROR(cudaMalloc(&d_input_data, num_elements * sizeof(float))); CHECK_CUDA_ERROR(cudaMalloc(&d_output_data, batch_size * sizeof(float))); CHECK_CUDA_ERROR(cudaMemcpy(d_input_data, input_data.data(), num_elements * sizeof(float), cudaMemcpyHostToDevice)); batched_reduce_sum_launch_function(d_output_data, d_input_data, batch_size, num_elements_per_batch, stream); CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); // Verify the correctness of the kernel. CHECK_CUDA_ERROR(cudaMemcpy(output_data.data(), d_output_data, batch_size * sizeof(float), cudaMemcpyDeviceToHost)); for (size_t i{0}; i < batch_size; ++i) { if (output_data.at(i) != num_elements_per_batch * element_value) { std::cout << "Expected: " << num_elements_per_batch * element_value << " but got: " << output_data.at(i) << std::endl; throw std::runtime_error("Error: incorrect sum"); } } std::function<void(cudaStream_t)> const bound_function{std::bind( batched_reduce_sum_launch_function, d_output_data, d_input_data, batch_size, num_elements_per_batch, std::placeholders::_1)}; float const latency{measure_performance<void>(bound_function, stream)}; std::cout << "Latency: " << latency << " ms" << std::endl; // Compute effective bandwidth. size_t num_bytes{num_elements * sizeof(float) + batch_size * sizeof(float)}; float const bandwidth{(num_bytes * 1e-6f) / latency}; std::cout << "Effective Bandwidth: " << bandwidth << " GB/s" << std::endl; CHECK_CUDA_ERROR(cudaFree(d_input_data)); CHECK_CUDA_ERROR(cudaFree(d_output_data)); CHECK_CUDA_ERROR(cudaStreamDestroy(stream)); return latency;}int main(){ size_t const batch_size{2048}; size_t const num_elements_per_batch{1024 * 256}; constexpr size_t string_width{50U}; std::cout << std_string_centered("", string_width, '~') << std::endl; std::cout << std_string_centered("NVIDIA GPU Device Info", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '~') << std::endl; // Query deive name and peak memory bandwidth. 
int device_id{0}; cudaGetDevice(&device_id); cudaDeviceProp device_prop; cudaGetDeviceProperties(&device_prop, device_id); std::cout << "Device Name: " << device_prop.name << std::endl; float const memory_size{static_cast<float>(device_prop.totalGlobalMem) / (1 << 30)}; std::cout << "Memory Size: " << memory_size << " GB" << std::endl; float const peak_bandwidth{ static_cast<float>(2.0f * device_prop.memoryClockRate * (device_prop.memoryBusWidth / 8) / 1.0e6)}; std::cout << "Peak Bandwitdh: " << peak_bandwidth << " GB/s" << std::endl; std::cout << std_string_centered("", string_width, '~') << std::endl; std::cout << std_string_centered("Reduce Sum Profiling", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '~') << std::endl; std::cout << std_string_centered("", string_width, '=') << std::endl; std::cout << "Batch Size: " << batch_size << std::endl; std::cout << "Number of Elements Per Batch: " << num_elements_per_batch << std::endl; std::cout << std_string_centered("", string_width, '=') << std::endl; constexpr size_t NUM_THREADS_PER_BATCH{256}; static_assert(NUM_THREADS_PER_BATCH % 32 == 0, "NUM_THREADS_PER_BATCH must be a multiple of 32"); static_assert(NUM_THREADS_PER_BATCH <= 1024, "NUM_THREADS_PER_BATCH must be less than or equal to 1024"); std::cout << "Batched Reduce Sum V1" << std::endl; float const latency_v1{profile_batched_reduce_sum( launch_batched_reduce_sum_v1<NUM_THREADS_PER_BATCH>, batch_size, num_elements_per_batch)}; std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << "Batched Reduce Sum V2" << std::endl; float const latency_v2{profile_batched_reduce_sum( launch_batched_reduce_sum_v2<NUM_THREADS_PER_BATCH>, batch_size, num_elements_per_batch)}; std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << "Full Reduce Sum" << std::endl; constexpr size_t NUM_THREADS{256}; constexpr size_t NUM_BLOCK_ELEMENTS{NUM_THREADS * 1024}; float const latency_v3{profile_full_reduce_sum( launch_full_reduce_sum<NUM_THREADS, NUM_BLOCK_ELEMENTS>, batch_size * num_elements_per_batch)}; std::cout << std_string_centered("", string_width, '-') << std::endl;}
To build and run the reduce sum example, please run the following commands.
$ nvcc reduce_sum_cooperative_groups.cu -o reduce_sum_cooperative_groups
$ ./reduce_sum_cooperative_groups
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
              NVIDIA GPU Device Info
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Device Name: NVIDIA GeForce RTX 3090
Memory Size: 23.6694 GB
Peak Bandwitdh: 936.096 GB/s
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
               Reduce Sum Profiling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
==================================================
Batch Size: 2048
Number of Elements Per Batch: 262144
==================================================
Batched Reduce Sum V1
Latency: 2.43301 ms
Effective Bandwidth: 882.649 GB/s
--------------------------------------------------
Batched Reduce Sum V2
Latency: 2.43445 ms
Effective Bandwidth: 882.126 GB/s
--------------------------------------------------
Full Reduce Sum
Latency: 2.47788 ms
Effective Bandwidth: 866.663 GB/s
--------------------------------------------------
The performance of the batched reduce sum kernels using cooperative groups is similar to the performance of the batched reduce sum kernels using traditional CUDA programming models.
Large Array Reduce Sum
There could be three approaches to implement a large array reduce sum kernel.
Without using cooperative groups, we could only synchronize threads within a thread block, which leads to the first approach. But there is additional kernel launch overhead due to the multiple kernel launches.
With cooperative groups, we could synchronize threads across thread blocks, which leads to the second approach. The second approach, however, also has a drawback compared to the first approach: in the later stages of the reduction, only a small fraction of the launched grid is actually utilized because the reduction problem size becomes smaller, which wastes computational resources.
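As a minimal sketch of the second approach (my own illustration, not code from this post; the kernel and helper names are made up), the key ingredients are a grid-wide barrier via cooperative groups and a cooperative kernel launch.
#include <cooperative_groups.h>
#include <cuda_runtime.h>
namespace cg = cooperative_groups;
// Phase 1: every block accumulates its partial sum into workspace[block_rank]
// (workspace holds one float per block). Phase 2: after a grid-wide barrier,
// thread 0 of the grid combines the per-block partial sums.
__global__ void two_phase_reduce_sum(float* __restrict__ output, float const* __restrict__ input, float* __restrict__ workspace, size_t n)
{
    cg::grid_group grid{cg::this_grid()};
    cg::thread_block block{cg::this_thread_block()};
    // Each block zeroes its own workspace slot before any thread adds to it.
    if (block.thread_rank() == 0U)
    {
        workspace[grid.block_rank()] = 0.0f;
    }
    block.sync();
    float sum{0.0f};
    for (size_t i{grid.thread_rank()}; i < n; i += grid.size())
    {
        sum += input[i];
    }
    // A real kernel would first reduce 'sum' within the block; this sketch
    // uses one atomic per thread for brevity.
    atomicAdd(&workspace[grid.block_rank()], sum);
    // Wait until every block in the grid has produced its partial sum.
    grid.sync();
    if (grid.thread_rank() == 0)
    {
        float total{0.0f};
        for (size_t b{0}; b < grid.num_blocks(); ++b)
        {
            total += workspace[b];
        }
        *output = total;
    }
}
void launch_two_phase_reduce_sum(float* output, float const* input, float* workspace, size_t n, cudaStream_t stream)
{
    // grid.sync() is only valid for cooperative launches, and all blocks of
    // the grid must be able to be co-resident on the device at once.
    dim3 const block_dim{256U};
    dim3 const grid_dim{32U}; // must not exceed the co-resident block limit
    void* args[]{&output, &input, &workspace, &n};
    cudaLaunchCooperativeKernel(reinterpret_cast<void const*>(two_phase_reduce_sum), grid_dim, block_dim, args, 0, stream);
}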
References
CUDA Cooperative Groups
https://leimao.github.io/blog/CUDA-Cooperative-Groups/
CUDA Reduction
Introduction
Reduction is a common operation in parallel computing. Usually the reduction operation is used to compute the sum, the maximum, the minimum, or the product of a sequence of elements.
In this blog post, we will discuss the parallel reduction algorithm and its implementation in CUDA.
Batched Reduce Sum
In this example, we implemented two batched reduce sum kernels in CUDA. The batched reduce sum kernel computes the sum for each array of elements in a batch of arrays.
The idea of the reduction algorithm is simple. For each array in the batch, we will assign a thread block consisting of a fixed number of threads to compute the sum of the elements in the array. Each thread will access multiple elements in the array from the global memory and store the partial sum in the register file. After all the threads have computed the partial sum, we have two ways to further reduce the partial sum to the final sum. One way is to use shared memory to store the partial sum and reduce the partial sum in the shared memory. The other way is to use warp-level primitives to reduce the partial sum in the register file in a warp followed by a smaller scale reduction in the shared memory.
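As a minimal sketch of the second strategy (my own illustration; the full listing below contains the complete version), a warp can reduce 32 partial sums held in registers with shuffle intrinsics, without touching shared memory.
// Warp-level tree reduction: after the loop, lane 0 holds the sum of the 32
// lane values. Only lane 0 should consume the result.
__device__ float warp_reduce_sum_sketch(float val)
{
    constexpr unsigned int full_mask{0xffffffffU};
    for (unsigned int offset{16U}; offset > 0U; offset /= 2U)
    {
        val += __shfl_down_sync(full_mask, val, offset);
    }
    return val;
}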
#include <cassert>#include <functional>#include <iostream>#include <string>#include <vector>#include <cuda_runtime.h>#define CHECK_CUDA_ERROR(val) check((val), #val, __FILE__, __LINE__)void check(cudaError_t err, char const* func, char const* file, int line){ if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << " " << func << std::endl; std::exit(EXIT_FAILURE); }}#define CHECK_LAST_CUDA_ERROR() check_last(__FILE__, __LINE__)void check_last(char const* file, int line){ cudaError_t const err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); }}template <class T>float measure_performance(std::function<T(cudaStream_t)> bound_function, cudaStream_t stream, size_t num_repeats = 10, size_t num_warmups = 10){ cudaEvent_t start, stop; float time; CHECK_CUDA_ERROR(cudaEventCreate(&start)); CHECK_CUDA_ERROR(cudaEventCreate(&stop)); for (size_t i{0}; i < num_warmups; ++i) { bound_function(stream); } CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); CHECK_CUDA_ERROR(cudaEventRecord(start, stream)); for (size_t i{0}; i < num_repeats; ++i) { bound_function(stream); } CHECK_CUDA_ERROR(cudaEventRecord(stop, stream)); CHECK_CUDA_ERROR(cudaEventSynchronize(stop)); CHECK_LAST_CUDA_ERROR(); CHECK_CUDA_ERROR(cudaEventElapsedTime(&time, start, stop)); CHECK_CUDA_ERROR(cudaEventDestroy(start)); CHECK_CUDA_ERROR(cudaEventDestroy(stop)); float const latency{time / num_repeats}; return latency;}std::string std_string_centered(std::string const& s, size_t width, char pad = ' '){ size_t const l{s.length()}; // Throw an exception if width is too small. if (width < l) { throw std::runtime_error("Width is too small."); } size_t const left_pad{(width - l) / 2}; size_t const right_pad{width - l - left_pad}; std::string const s_centered{std::string(left_pad, pad) + s + std::string(right_pad, pad)}; return s_centered;}template <size_t NUM_THREADS>__device__ float shared_data_reduce_sum_v1(float shared_data[NUM_THREADS]){ static_assert(NUM_THREADS % 32 == 0, "NUM_THREADS must be a multiple of 32"); size_t const thread_idx{threadIdx.x};#pragma unroll for (size_t stride{NUM_THREADS / 2}; stride > 0; stride /= 2) { if (thread_idx < stride) { shared_data[thread_idx] += shared_data[thread_idx + stride]; } __syncthreads(); } return shared_data[0];}template <size_t NUM_WARPS>__device__ float shared_data_reduce_sum_v2(float shared_data[NUM_WARPS]){ float sum{0.0f};#pragma unroll for (size_t i{0}; i < NUM_WARPS; ++i) { // There will be no shared memory bank conflicts here. // Because multiple threads in a warp address the same shared memory // location, resulting in a broadcast. sum += shared_data[i]; } return sum;}__device__ float warp_reduce_sum(float val){ constexpr unsigned int FULL_MASK{0xffffffff};#pragma unroll for (size_t offset{16}; offset > 0; offset /= 2) { val += __shfl_down_sync(FULL_MASK, val, offset); } // Only the first thread in the warp will return the correct result. 
return val;}template <size_t NUM_THREADS>__device__ float block_reduce_sum_v1(float const* __restrict__ input_data, float shared_data[NUM_THREADS], size_t num_elements){ static_assert(NUM_THREADS % 32 == 0, "NUM_THREADS must be a multiple of 32"); size_t const num_elements_per_thread{(num_elements + NUM_THREADS - 1) / NUM_THREADS}; size_t const thread_idx{threadIdx.x}; float sum{0.0f}; for (size_t i{0}; i < num_elements_per_thread; ++i) { size_t const offset{thread_idx + i * NUM_THREADS}; if (offset < num_elements) { sum += input_data[offset]; } } shared_data[thread_idx] = sum; __syncthreads(); float const block_sum{shared_data_reduce_sum_v1<NUM_THREADS>(shared_data)}; return block_sum;}template <size_t NUM_THREADS, size_t NUM_WARPS = NUM_THREADS / 32>__device__ float block_reduce_sum_v2(float const* __restrict__ input_data, float shared_data[NUM_WARPS], size_t num_elements){ size_t const num_elements_per_thread{(num_elements + NUM_THREADS - 1) / NUM_THREADS}; size_t const thread_idx{threadIdx.x}; float sum{0.0f}; for (size_t i{0}; i < num_elements_per_thread; ++i) { size_t const offset{thread_idx + i * NUM_THREADS}; if (offset < num_elements) { sum += input_data[offset]; } } sum = warp_reduce_sum(sum); if (threadIdx.x % 32 == 0) { shared_data[threadIdx.x / 32] = sum; } __syncthreads(); float const block_sum{shared_data_reduce_sum_v2<NUM_WARPS>(shared_data)}; return block_sum;}template <size_t NUM_THREADS>__global__ void batched_reduce_sum_v1(float* __restrict__ output_data, float const* __restrict__ input_data, size_t num_elements_per_batch){ static_assert(NUM_THREADS % 32 == 0, "NUM_THREADS must be a multiple of 32"); size_t const block_idx{blockIdx.x}; size_t const thread_idx{threadIdx.x}; __shared__ float shared_data[NUM_THREADS]; float const block_sum{block_reduce_sum_v1<NUM_THREADS>( input_data + block_idx * num_elements_per_batch, shared_data, num_elements_per_batch)}; if (thread_idx == 0) { output_data[block_idx] = block_sum; }}template <size_t NUM_THREADS>__global__ void batched_reduce_sum_v2(float* __restrict__ output_data, float const* __restrict__ input_data, size_t num_elements_per_batch){ static_assert(NUM_THREADS % 32 == 0, "NUM_THREADS must be a multiple of 32"); constexpr size_t NUM_WARPS{NUM_THREADS / 32}; size_t const block_idx{blockIdx.x}; size_t const thread_idx{threadIdx.x}; __shared__ float shared_data[NUM_WARPS]; float const block_sum{block_reduce_sum_v2<NUM_THREADS, NUM_WARPS>( input_data + block_idx * num_elements_per_batch, shared_data, num_elements_per_batch)}; if (thread_idx == 0) { output_data[block_idx] = block_sum; }}template <size_t NUM_THREADS>void launch_batched_reduce_sum_v1(float* output_data, float const* input_data, size_t batch_size, size_t num_elements_per_batch, cudaStream_t stream){ size_t const num_blocks{batch_size}; batched_reduce_sum_v1<NUM_THREADS><<<num_blocks, NUM_THREADS, 0, stream>>>( output_data, input_data, num_elements_per_batch); CHECK_LAST_CUDA_ERROR();}template <size_t NUM_THREADS>void launch_batched_reduce_sum_v2(float* output_data, float const* input_data, size_t batch_size, size_t num_elements_per_batch, cudaStream_t stream){ size_t const num_blocks{batch_size}; batched_reduce_sum_v2<NUM_THREADS><<<num_blocks, NUM_THREADS, 0, stream>>>( output_data, input_data, num_elements_per_batch); CHECK_LAST_CUDA_ERROR();}float profile_batched_reduce_sum( std::function<void(float*, float const*, size_t, size_t, cudaStream_t)> batched_reduce_sum_launch_function, size_t batch_size, size_t num_elements_per_batch){ size_t const 
num_elements{batch_size * num_elements_per_batch}; cudaStream_t stream; CHECK_CUDA_ERROR(cudaStreamCreate(&stream)); constexpr float element_value{1.0f}; std::vector<float> input_data(num_elements, element_value); std::vector<float> output_data(batch_size, 0.0f); float* d_input_data; float* d_output_data; CHECK_CUDA_ERROR(cudaMalloc(&d_input_data, num_elements * sizeof(float))); CHECK_CUDA_ERROR(cudaMalloc(&d_output_data, batch_size * sizeof(float))); CHECK_CUDA_ERROR(cudaMemcpy(d_input_data, input_data.data(), num_elements * sizeof(float), cudaMemcpyHostToDevice)); batched_reduce_sum_launch_function(d_output_data, d_input_data, batch_size, num_elements_per_batch, stream); CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); // Verify the correctness of the kernel. CHECK_CUDA_ERROR(cudaMemcpy(output_data.data(), d_output_data, batch_size * sizeof(float), cudaMemcpyDeviceToHost)); for (size_t i{0}; i < batch_size; ++i) { if (output_data.at(i) != num_elements_per_batch * element_value) { std::cout << "Expected: " << num_elements_per_batch * element_value << " but got: " << output_data.at(i) << std::endl; throw std::runtime_error("Error: incorrect sum"); } } std::function<void(cudaStream_t)> const bound_function{std::bind( batched_reduce_sum_launch_function, d_output_data, d_input_data, batch_size, num_elements_per_batch, std::placeholders::_1)}; float const latency{measure_performance<void>(bound_function, stream)}; std::cout << "Latency: " << latency << " ms" << std::endl; // Compute effective bandwidth. size_t num_bytes{num_elements * sizeof(float) + batch_size * sizeof(float)}; float const bandwidth{(num_bytes * 1e-6f) / latency}; std::cout << "Effective Bandwidth: " << bandwidth << " GB/s" << std::endl; CHECK_CUDA_ERROR(cudaFree(d_input_data)); CHECK_CUDA_ERROR(cudaFree(d_output_data)); CHECK_CUDA_ERROR(cudaStreamDestroy(stream)); return latency;}int main(){ size_t const batch_size{2048}; size_t const num_elements_per_batch{1024 * 256}; constexpr size_t string_width{50U}; std::cout << std_string_centered("", string_width, '~') << std::endl; std::cout << std_string_centered("NVIDIA GPU Device Info", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '~') << std::endl; // Query deive name and peak memory bandwidth. 
int device_id{0}; cudaGetDevice(&device_id); cudaDeviceProp device_prop; cudaGetDeviceProperties(&device_prop, device_id); std::cout << "Device Name: " << device_prop.name << std::endl; float const memory_size{static_cast<float>(device_prop.totalGlobalMem) / (1 << 30)}; std::cout << "Memory Size: " << memory_size << " GB" << std::endl; float const peak_bandwidth{ static_cast<float>(2.0f * device_prop.memoryClockRate * (device_prop.memoryBusWidth / 8) / 1.0e6)}; std::cout << "Peak Bandwitdh: " << peak_bandwidth << " GB/s" << std::endl; std::cout << std_string_centered("", string_width, '~') << std::endl; std::cout << std_string_centered("Reduce Sum Profiling", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '~') << std::endl; std::cout << std_string_centered("", string_width, '=') << std::endl; std::cout << "Batch Size: " << batch_size << std::endl; std::cout << "Number of Elements Per Batch: " << num_elements_per_batch << std::endl; std::cout << std_string_centered("", string_width, '=') << std::endl; constexpr size_t NUM_THREADS_PER_BATCH{256}; static_assert(NUM_THREADS_PER_BATCH % 32 == 0, "NUM_THREADS_PER_BATCH must be a multiple of 32"); static_assert(NUM_THREADS_PER_BATCH <= 1024, "NUM_THREADS_PER_BATCH must be less than or equal to 1024"); std::cout << "Batched Reduce Sum V1" << std::endl; float const latency_v1{profile_batched_reduce_sum( launch_batched_reduce_sum_v1<NUM_THREADS_PER_BATCH>, batch_size, num_elements_per_batch)}; std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << "Batched Reduce Sum V2" << std::endl; float const latency_v2{profile_batched_reduce_sum( launch_batched_reduce_sum_v2<NUM_THREADS_PER_BATCH>, batch_size, num_elements_per_batch)}; std::cout << std_string_centered("", string_width, '-') << std::endl;}
To build and run the reduce sum example, please run the following commands.
$ nvcc reduce_sum.cu -o reduce_sum
$ ./reduce_sum
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
              NVIDIA GPU Device Info
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Device Name: NVIDIA GeForce RTX 3090
Memory Size: 23.6694 GB
Peak Bandwitdh: 936.096 GB/s
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
               Reduce Sum Profiling
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
==================================================
Batch Size: 2048
Number of Elements Per Batch: 262144
==================================================
Batched Reduce Sum V1
Latency: 2.42976 ms
Effective Bandwidth: 883.83 GB/s
--------------------------------------------------
Batched Reduce Sum V2
Latency: 2.44303 ms
Effective Bandwidth: 879.028 GB/s
--------------------------------------------------
It turns out that the two batched reduce sum kernels have similar performance. The effective bandwidth is about 94% of the peak bandwidth of the GPU. It should be noted that on my system, the effective bandwidth can vary from run to run at different times of the day, from 750 GB/s to 900 GB/s.
Large Array Reduce Sum
What if we have much larger arrays and much smaller batch sizes? The maximum number of threads in a thread block is 1024. If only one thread block is assigned to compute the sum of the elements in a much larger array and the batch size is very small, the GPU utilization and the effective bandwidth will be very low.
In this case, we will need to split a large array into multiple smaller arrays as if each large array is a batch of arrays. We will assign multiple thread blocks to compute the sum of the elements in each smaller array. Once the sum of the elements in each smaller array is computed, we will further reduce the partial sum to the final sum using the batched reduce sum kernels again.
Concretely, suppose a batch of data is of shape (batch_size, num_elements_per_batch). If num_elements_per_batch is very large and batch_size is very small, we can always reshape the data into a shape of (batch_size * inner_batch_size, inner_num_elements_per_batch) and run the batched reduce sum kernel. The resulting reduced sum will be of shape (batch_size * inner_batch_size, 1). We can further reshape the reduced sum into a shape of (batch_size, inner_batch_size) (let’s call it (batch_size, num_elements_per_batch) again) and run the batched reduce sum kernel. This process can be repeated until num_elements_per_batch is no longer too large.
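A host-side driver for this repeated reduction might look like the following sketch (my own illustration; it reuses the launch_batched_reduce_sum_v1 helper from the listing above and assumes the number of elements per batch is a multiple of the chosen inner_num_elements_per_batch at every iteration).
#include <cstddef>
#include <cuda_runtime.h>
// launch_batched_reduce_sum_v1 is assumed to be the helper defined in the
// listing above. Each workspace buffer must hold at least
// batch_size * (num_elements_per_batch / inner_num_elements_per_batch) floats.
void reduce_sum_large_arrays(float* d_output, float const* d_input, float* d_workspace_a, float* d_workspace_b, size_t batch_size, size_t num_elements_per_batch, cudaStream_t stream)
{
    constexpr size_t NUM_THREADS{256};
    constexpr size_t inner_num_elements_per_batch{1024};
    size_t num_cols{num_elements_per_batch};
    float const* in{d_input};
    float* buffers[2]{d_workspace_a, d_workspace_b};
    int which{0};
    while (num_cols > inner_num_elements_per_batch)
    {
        // Reshape (batch_size, num_cols) into
        // (batch_size * inner_batch_size, inner_num_elements_per_batch).
        size_t const inner_batch_size{num_cols / inner_num_elements_per_batch};
        float* const out{buffers[which]};
        launch_batched_reduce_sum_v1<NUM_THREADS>(out, in, batch_size * inner_batch_size, inner_num_elements_per_batch, stream);
        // The output of this pass becomes the input of the next pass.
        in = out;
        num_cols = inner_batch_size;
        which = 1 - which;
    }
    // Final pass: each original batch has num_cols partial sums left.
    launch_batched_reduce_sum_v1<NUM_THREADS>(d_output, in, batch_size, num_cols, stream);
}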
Of course, instead of running the batched reduce sum kernel and synchronizing multiple times, we can also try adding the partial sum of each smaller array to the final sum in global memory using atomic operations. This, however, may or may not degrade performance compared to running the batched reduce sum kernel and synchronizing multiple times.
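A minimal sketch of the atomic variant (my own illustration, not code from this post; the kernel name is made up): each block reduces its slice in shared memory and one thread per block atomically accumulates the block's partial sum into a single global accumulator, which must be zero-initialized before the launch.
template <unsigned int NUM_THREADS>
__global__ void reduce_sum_atomic(float* __restrict__ output, float const* __restrict__ input, size_t n)
{
    __shared__ float shared_data[NUM_THREADS];
    size_t const idx{blockIdx.x * static_cast<size_t>(blockDim.x) + threadIdx.x};
    size_t const stride{static_cast<size_t>(blockDim.x) * gridDim.x};
    float sum{0.0f};
    // Grid-stride loop: each thread accumulates a strided slice of the input.
    for (size_t i{idx}; i < n; i += stride)
    {
        sum += input[i];
    }
    shared_data[threadIdx.x] = sum;
    __syncthreads();
    // Shared memory tree reduction within the block.
    for (unsigned int s{NUM_THREADS / 2U}; s > 0U; s /= 2U)
    {
        if (threadIdx.x < s)
        {
            shared_data[threadIdx.x] += shared_data[threadIdx.x + s];
        }
        __syncthreads();
    }
    if (threadIdx.x == 0U)
    {
        // *output must be zero-initialized before the launch.
        atomicAdd(output, shared_data[0]);
    }
}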
References
CUDA Reduction
https://leimao.github.io/blog/CUDA-Reduction/
CUDA Vectorized Memory Access
Introduction
Reading and writing data from and to the DRAM is one of the fundamental operations in CUDA programming. The effective memory bandwidth of the CUDA device is one of the most critical factors that affect the performance of a CUDA function, especially when the CUDA function is memory-bound.
In this blog post, we will show how to improve the effective memory bandwidth of a CUDA function by using vectorized memory access.
CUDA Vectorized Memory Access
In the following example, we will implement a naive custom device memcpy function and show how to improve its effective memory bandwidth via 8-byte or 16-byte per-thread vectorized memory accesses for contiguous data of different data types. Using 8-byte or 16-byte per-thread vectorized accesses reduces the number of memory transactions required for the data copy, which improves the effective memory bandwidth in almost all use cases.
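As a minimal sketch of the idea (my own simplified illustration; the full listing below implements a more general version that handles arbitrary data types and a misaligned tail), a float copy kernel can be vectorized by reinterpreting the pointers as float4, provided the buffers are 16-byte aligned (which cudaMalloc guarantees).
// Copy float data 16 bytes (one float4) per thread per iteration, with a
// scalar tail loop for the remaining n % 4 elements.
__global__ void memcpy_float4(float* __restrict__ output, float const* __restrict__ input, size_t n)
{
    size_t const num_float4{n / 4U};
    size_t const idx{blockIdx.x * static_cast<size_t>(blockDim.x) + threadIdx.x};
    size_t const stride{static_cast<size_t>(blockDim.x) * gridDim.x};
    for (size_t i{idx}; i < num_float4; i += stride)
    {
        reinterpret_cast<float4*>(output)[i] = reinterpret_cast<float4 const*>(input)[i];
    }
    for (size_t i{num_float4 * 4U + idx}; i < n; i += stride)
    {
        output[i] = input[i];
    }
}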
#include <chrono>#include <functional>#include <iomanip>#include <iostream>#include <tuple>#include <type_traits>#include <vector>#include <cuda_runtime.h>#define CHECK_CUDA_ERROR(val) check((val), #val, __FILE__, __LINE__)void check(cudaError_t err, const char* const func, const char* const file, const int line){ if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << " " << func << std::endl; std::exit(EXIT_FAILURE); }}#define CHECK_LAST_CUDA_ERROR() check_last(__FILE__, __LINE__)void check_last(const char* const file, const int line){ cudaError_t const err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); }}std::string std_string_centered(std::string const& s, size_t width, char pad = ' '){ size_t const l{s.length()}; // Throw an exception if width is too small. if (width < l) { throw std::runtime_error("Width is too small."); } size_t const left_pad{(width - l) / 2}; size_t const right_pad{width - l - left_pad}; std::string const s_centered{std::string(left_pad, pad) + s + std::string(right_pad, pad)}; return s_centered;}template <class T>float measure_performance(std::function<T(cudaStream_t)> const& bound_function, cudaStream_t stream, unsigned int num_repeats = 100, unsigned int num_warmups = 100){ cudaEvent_t start, stop; float time; CHECK_CUDA_ERROR(cudaEventCreate(&start)); CHECK_CUDA_ERROR(cudaEventCreate(&stop)); for (unsigned int i{0U}; i < num_warmups; ++i) { bound_function(stream); } CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); CHECK_CUDA_ERROR(cudaEventRecord(start, stream)); for (unsigned int i{0U}; i < num_repeats; ++i) { bound_function(stream); } CHECK_CUDA_ERROR(cudaEventRecord(stop, stream)); CHECK_CUDA_ERROR(cudaEventSynchronize(stop)); CHECK_LAST_CUDA_ERROR(); CHECK_CUDA_ERROR(cudaEventElapsedTime(&time, start, stop)); CHECK_CUDA_ERROR(cudaEventDestroy(start)); CHECK_CUDA_ERROR(cudaEventDestroy(stop)); float const latency{time / num_repeats}; return latency;}template <typename T>__global__ void custom_device_memcpy(T* __restrict__ output, T const* __restrict__ input, size_t n){ size_t const idx{blockDim.x * blockIdx.x + threadIdx.x}; size_t const stride{blockDim.x * gridDim.x}; for (size_t i{idx}; i < n; i += stride) { output[i] = input[i]; }}template <typename T>void launch_custom_device_memcpy(T* output, T const* input, size_t n, cudaStream_t stream){ dim3 const threads_per_block{1024}; dim3 const blocks_per_grid{static_cast<unsigned int>(std::min( (n + threads_per_block.x - 1U) / threads_per_block.x, static_cast<size_t>(std::numeric_limits<unsigned int>::max())))}; custom_device_memcpy<<<blocks_per_grid, threads_per_block, 0, stream>>>( output, input, n); CHECK_LAST_CUDA_ERROR();}template <typename T, unsigned int BLOCK_DIM_X>__global__ void custom_device_memcpy_shared_memory(T* __restrict__ output, T const* __restrict__ input, size_t n){ // Using shared memory as intermediate buffer. __shared__ T shared_memory[BLOCK_DIM_X]; size_t const idx{blockDim.x * blockIdx.x + threadIdx.x}; size_t const stride{blockDim.x * gridDim.x}; for (size_t i{idx}; i < n; i += stride) { shared_memory[threadIdx.x] = input[i]; // Synchronization is not necessary in this case. 
// __syncthreads(); output[i] = shared_memory[threadIdx.x]; }}template <typename T>void launch_custom_device_memcpy_shared_memory(T* output, T const* input, size_t n, cudaStream_t stream){ constexpr dim3 threads_per_block{1024}; dim3 const blocks_per_grid{static_cast<unsigned int>(std::min( (n + threads_per_block.x - 1U) / threads_per_block.x, static_cast<size_t>(std::numeric_limits<unsigned int>::max())))}; custom_device_memcpy_shared_memory<T, threads_per_block.x> <<<blocks_per_grid, threads_per_block, 0, stream>>>(output, input, n); CHECK_LAST_CUDA_ERROR();}// One thread copies sizeof(R) bytes of data.// One warp copies 32 x sizeof(R) bytes of data via one of few memory// transactions.template <typename T, typename R = uint64_t>__global__ void custom_device_memcpy_optimized(T* __restrict__ output, T const* __restrict__ input, size_t n){ size_t const idx{blockDim.x * blockIdx.x + threadIdx.x}; size_t const stride{blockDim.x * gridDim.x}; for (size_t i{idx}; i * sizeof(R) / sizeof(T) < n; i += stride) { if ((i + 1U) * sizeof(R) / sizeof(T) < n) { reinterpret_cast<R*>(output)[i] = reinterpret_cast<R const*>(input)[i]; } else { // Remaining units to copy. size_t const start_index{i * sizeof(R) / sizeof(T)}; size_t const remaining_units_to_copy{(n - start_index)}; for (size_t j{0}; j < remaining_units_to_copy; ++j) { output[start_index + j] = input[start_index + j]; } } }}template <typename T, typename R = uint64_t>void launch_custom_device_memcpy_optimized(T* output, T const* input, size_t n, cudaStream_t stream){ dim3 const threads_per_block{1024}; size_t const num_units_to_copy_round_up{(n * sizeof(T) + sizeof(R) - 1U) / sizeof(R)}; dim3 const blocks_per_grid{static_cast<unsigned int>(std::min( (num_units_to_copy_round_up + threads_per_block.x - 1U) / threads_per_block.x, static_cast<size_t>(std::numeric_limits<unsigned int>::max())))}; custom_device_memcpy_optimized<<<blocks_per_grid, threads_per_block, 0, stream>>>(output, input, n); CHECK_LAST_CUDA_ERROR();}template <typename T>void launch_official_device_memcpy(T* output, T const* input, size_t n, cudaStream_t stream){ CHECK_CUDA_ERROR(cudaMemcpyAsync(output, input, n * sizeof(T), cudaMemcpyDeviceToDevice, stream));}// Initialize the buffer so that the unit of the data is the index of the data.template <typename T, std::enable_if_t<std::is_integral<T>::value, bool> = true>void initialize_buffer(T* buffer, size_t n){ for (size_t i{0}; i < n; ++i) { buffer[i] = static_cast<T>( i % static_cast<size_t>(std::numeric_limits<T>::max())); }}template <typename T, std::enable_if_t<std::is_integral<T>::value, bool> = true>void verify_buffer(T* buffer, size_t n){ for (size_t i{0}; i < n; ++i) { if (buffer[i] != static_cast<T>(i % static_cast<size_t>( std::numeric_limits<T>::max()))) { std::cerr << "Verification failed at index: " << i << std::endl; std::exit(EXIT_FAILURE); } }}// Measure custom device memcpy performance given the number of units to copy,// the device memcpy function to use, and the number of repeats and warmups.template <typename T>float measure_custom_device_memcpy_performance( size_t n, std::function<void(T*, T const*, size_t, cudaStream_t)> const& device_memcpy_function, int num_repeats = 100, int num_warmups = 100){ cudaStream_t stream; CHECK_CUDA_ERROR(cudaStreamCreateWithFlags(&stream, cudaStreamNonBlocking)); std::vector<T> input(n); std::vector<T> output(n, static_cast<T>(0)); initialize_buffer(input.data(), n); T* d_input; T* d_output; CHECK_CUDA_ERROR(cudaMalloc(&d_input, n * sizeof(T))); 
CHECK_CUDA_ERROR(cudaMalloc(&d_output, n * sizeof(T))); CHECK_CUDA_ERROR(cudaMemcpyAsync(d_input, input.data(), n * sizeof(T), cudaMemcpyHostToDevice, stream)); CHECK_CUDA_ERROR(cudaMemcpyAsync(d_output, output.data(), n * sizeof(T), cudaMemcpyHostToDevice, stream)); // Run device memcpy once to check correcness. device_memcpy_function(d_output, d_input, n, stream); CHECK_CUDA_ERROR(cudaMemcpyAsync(output.data(), d_output, n * sizeof(T), cudaMemcpyDeviceToHost, stream)); CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); // Verify the correctness of the device memcpy. verify_buffer(output.data(), n); size_t const num_bytes{n * sizeof(T)}; float const num_giga_bytes{static_cast<float>(num_bytes) / (1 << 30)}; std::function<void(cudaStream_t)> function{std::bind( device_memcpy_function, d_output, d_input, n, std::placeholders::_1)}; float const latency{ measure_performance(function, stream, num_repeats, num_warmups)}; std::cout << std::fixed << std::setprecision(3) << "Latency: " << latency << " ms" << std::endl; std::cout << "Effective Bandwitdh: " << 2.f * num_giga_bytes / (latency / 1000) << " GB/s" << std::endl; CHECK_CUDA_ERROR(cudaFree(d_input)); CHECK_CUDA_ERROR(cudaFree(d_output)); CHECK_CUDA_ERROR(cudaStreamDestroy(stream)); // Query deive name and peak memory bandwidth. int device_id{0}; cudaGetDevice(&device_id); cudaDeviceProp device_prop; cudaGetDeviceProperties(&device_prop, device_id); float const peak_bandwidth{ static_cast<float>(2.0 * device_prop.memoryClockRate * (device_prop.memoryBusWidth / 8) / 1.0e6)}; std::cout << "Percentage of Peak Bandwitdh: " << 2.f * num_giga_bytes / (latency / 1000) / peak_bandwidth * 100 << "%" << std::endl; return latency;}int main(){ constexpr unsigned int num_repeats{10U}; constexpr unsigned int num_warmups{10U}; constexpr size_t tensor_size_small{1U * 64U * 64U * 64U}; constexpr size_t tensor_size_medium{1U * 128U * 128U * 128U}; constexpr size_t tensor_size_large{1U * 512U * 512U * 512U}; constexpr size_t string_width{50U}; std::cout << std_string_centered("", string_width, '~') << std::endl; std::cout << std_string_centered("NVIDIA GPU Device Info", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '~') << std::endl; // Query deive name and peak memory bandwidth. int device_id{0}; cudaGetDevice(&device_id); cudaDeviceProp device_prop; cudaGetDeviceProperties(&device_prop, device_id); std::cout << "Device Name: " << device_prop.name << std::endl; float const memory_size{static_cast<float>(device_prop.totalGlobalMem) / (1 << 30)}; std::cout << "Memory Size: " << memory_size << " GB" << std::endl; float const peak_bandwidth{ static_cast<float>(2.0f * device_prop.memoryClockRate * (device_prop.memoryBusWidth / 8) / 1.0e6)}; std::cout << "Peak Bandwitdh: " << peak_bandwidth << " GB/s" << std::endl; std::cout << std::endl; // Measure CUDA official memcpy performance for different tensor sizes. 
std::cout << std_string_centered("", string_width, '*') << std::endl; std::cout << std_string_centered("CUDA Official Memcpy", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '*') << std::endl; for (size_t tensor_size : {tensor_size_small, tensor_size_medium, tensor_size_large}) { std::string const tensor_size_string{std::string("Tensor Size: ") + std::to_string(tensor_size) + std::string(" Units")}; std::cout << std_string_centered("", string_width, '=') << std::endl; std::cout << std_string_centered(tensor_size_string, string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '=') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 1 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int8_t>( tensor_size, launch_official_device_memcpy<int8_t>, num_repeats, num_warmups); std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 2 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int16_t>( tensor_size, launch_official_device_memcpy<int16_t>, num_repeats, num_warmups); std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 4 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int32_t>( tensor_size, launch_official_device_memcpy<int32_t>, num_repeats, num_warmups); std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 8 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int64_t>( tensor_size, launch_official_device_memcpy<int64_t>, num_repeats, num_warmups); } std::cout << std::endl; // Measure the latency and bandwidth of custom device memcpy for different // tensor sizes. 
std::cout << std_string_centered("", string_width, '*') << std::endl; std::cout << std_string_centered("Custom Device Memcpy", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '*') << std::endl; for (size_t tensor_size : {tensor_size_small, tensor_size_medium, tensor_size_large}) { std::string const tensor_size_string{std::string("Tensor Size: ") + std::to_string(tensor_size) + std::string(" Units")}; std::cout << std_string_centered("", string_width, '=') << std::endl; std::cout << std_string_centered(tensor_size_string, string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '=') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 1 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int8_t>( tensor_size, launch_custom_device_memcpy<int8_t>, num_repeats, num_warmups); std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 2 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int16_t>( tensor_size, launch_custom_device_memcpy<int16_t>, num_repeats, num_warmups); std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 4 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int32_t>( tensor_size, launch_custom_device_memcpy<int32_t>, num_repeats, num_warmups); std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 8 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int64_t>( tensor_size, launch_custom_device_memcpy<int64_t>, num_repeats, num_warmups); } std::cout << std::endl; // Conclusions: // 1. The more units of data we copy, the higher the bandwidth. // 2. The larger the unit of the data, the higher the bandwidth. // Check if shared memory can improve the latency of custom device memcpy. 
std::cout << std_string_centered("", string_width, '*') << std::endl; std::cout << std_string_centered("Custom Device Memcpy with Shared Memory", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '*') << std::endl; for (size_t tensor_size : {tensor_size_small, tensor_size_medium, tensor_size_large}) { std::string const tensor_size_string{std::string("Tensor Size: ") + std::to_string(tensor_size) + std::string(" Units")}; std::cout << std_string_centered("", string_width, '=') << std::endl; std::cout << std_string_centered(tensor_size_string, string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '=') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 1 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int8_t>( tensor_size, launch_custom_device_memcpy_shared_memory<int8_t>, num_repeats, num_warmups); std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 2 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int16_t>( tensor_size, launch_custom_device_memcpy_shared_memory<int16_t>, num_repeats, num_warmups); std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 4 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int32_t>( tensor_size, launch_custom_device_memcpy_shared_memory<int32_t>, num_repeats, num_warmups); std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 8 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int64_t>( tensor_size, launch_custom_device_memcpy_shared_memory<int64_t>, num_repeats, num_warmups); } std::cout << std::endl; // Conclusions: // 1. The effect of using shared memory for improving the latency of custom // device memcpy is not obvious. // Improve the latency of custom device memcpy when the unit of the data is // small. 
std::cout << std_string_centered("", string_width, '*') << std::endl; std::cout << std_string_centered( "Custom Device Memcpy 4-Byte Copy Per Thread", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '*') << std::endl; for (size_t tensor_size : {tensor_size_small, tensor_size_medium, tensor_size_large}) { std::string const tensor_size_string{std::string("Tensor Size: ") + std::to_string(tensor_size) + std::string(" Units")}; std::cout << std_string_centered("", string_width, '=') << std::endl; std::cout << std_string_centered(tensor_size_string, string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '=') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 1 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int8_t>( tensor_size, launch_custom_device_memcpy_optimized<int8_t, uint32_t>, num_repeats, num_warmups); std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 2 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int16_t>( tensor_size, launch_custom_device_memcpy_optimized<int16_t, uint32_t>, num_repeats, num_warmups); std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 4 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int32_t>( tensor_size, launch_custom_device_memcpy_optimized<int32_t, uint32_t>, num_repeats, num_warmups); std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 8 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int64_t>( tensor_size, launch_custom_device_memcpy_optimized<int64_t, uint32_t>, num_repeats, num_warmups); } std::cout << std::endl; std::cout << std_string_centered("", string_width, '*') << std::endl; std::cout << std_string_centered( "Custom Device Memcpy 8-Byte Copy Per Thread", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '*') << std::endl; for (size_t tensor_size : {tensor_size_small, tensor_size_medium, tensor_size_large}) { std::string const tensor_size_string{std::string("Tensor Size: ") + std::to_string(tensor_size) + std::string(" Units")}; std::cout << std_string_centered("", string_width, '=') << std::endl; std::cout << std_string_centered(tensor_size_string, string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '=') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 1 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int8_t>( tensor_size, launch_custom_device_memcpy_optimized<int8_t, uint64_t>, num_repeats, num_warmups); std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 2 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; 
measure_custom_device_memcpy_performance<int16_t>( tensor_size, launch_custom_device_memcpy_optimized<int16_t, uint64_t>, num_repeats, num_warmups); std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 4 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int32_t>( tensor_size, launch_custom_device_memcpy_optimized<int32_t, uint64_t>, num_repeats, num_warmups); std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 8 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int64_t>( tensor_size, launch_custom_device_memcpy_optimized<int64_t, uint64_t>, num_repeats, num_warmups); } std::cout << std::endl; std::cout << std_string_centered("", string_width, '*') << std::endl; std::cout << std_string_centered( "Custom Device Memcpy 16-Byte Copy Per Thread", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '*') << std::endl; for (size_t tensor_size : {tensor_size_small, tensor_size_medium, tensor_size_large}) { std::string const tensor_size_string{std::string("Tensor Size: ") + std::to_string(tensor_size) + std::string(" Units")}; std::cout << std_string_centered("", string_width, '=') << std::endl; std::cout << std_string_centered(tensor_size_string, string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '=') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 1 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int8_t>( tensor_size, launch_custom_device_memcpy_optimized<int8_t, uint4>, num_repeats, num_warmups); std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 2 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int16_t>( tensor_size, launch_custom_device_memcpy_optimized<int16_t, uint4>, num_repeats, num_warmups); std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 4 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int32_t>( tensor_size, launch_custom_device_memcpy_optimized<int32_t, uint4>, num_repeats, num_warmups); std::cout << std_string_centered("", string_width, '-') << std::endl; std::cout << std_string_centered("Unit Size: 8 Byte", string_width, ' ') << std::endl; std::cout << std_string_centered("", string_width, '-') << std::endl; measure_custom_device_memcpy_performance<int64_t>( tensor_size, launch_custom_device_memcpy_optimized<int64_t, uint4>, num_repeats, num_warmups); } std::cout << std::endl; // Conclusions: // 1. Copying data in units of 8 bytes or 16 bytes can improve the latency // of custom device memcpy.}
The CUDA program was compiled and profiled on an NVIDIA RTX 3090 GPU with CUDA 12.0.
$ nvcc memcpy.cu -o memcpy -std=c++14$ ./memcpy~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ NVIDIA GPU Device Info~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~Device Name: NVIDIA GeForce RTX 3090Memory Size: 23.6694 GBPeak Bandwitdh: 936.096 GB/s************************************************** CUDA Official Memcpy**************************************************================================================== Tensor Size: 262144 Units==================================================-------------------------------------------------- Unit Size: 1 Byte--------------------------------------------------Latency: 0.002 msEffective Bandwitdh: 217.362 GB/sPercentage of Peak Bandwitdh: 23.220%-------------------------------------------------- Unit Size: 2 Byte--------------------------------------------------Latency: 0.002 msEffective Bandwitdh: 414.641 GB/sPercentage of Peak Bandwitdh: 44.295%-------------------------------------------------- Unit Size: 4 Byte--------------------------------------------------Latency: 0.003 msEffective Bandwitdh: 706.425 GB/sPercentage of Peak Bandwitdh: 75.465%-------------------------------------------------- Unit Size: 8 Byte--------------------------------------------------Latency: 0.004 msEffective Bandwitdh: 1030.999 GB/sPercentage of Peak Bandwitdh: 110.138%================================================== Tensor Size: 2097152 Units==================================================-------------------------------------------------- Unit Size: 1 Byte--------------------------------------------------Latency: 0.004 msEffective Bandwitdh: 1059.638 GB/sPercentage of Peak Bandwitdh: 113.198%-------------------------------------------------- Unit Size: 2 Byte--------------------------------------------------Latency: 0.011 msEffective Bandwitdh: 719.754 GB/sPercentage of Peak Bandwitdh: 76.889%-------------------------------------------------- Unit Size: 4 Byte--------------------------------------------------Latency: 0.023 msEffective Bandwitdh: 675.261 GB/sPercentage of Peak Bandwitdh: 72.136%-------------------------------------------------- Unit Size: 8 Byte--------------------------------------------------Latency: 0.043 msEffective Bandwitdh: 719.330 GB/sPercentage of Peak Bandwitdh: 76.844%================================================== Tensor Size: 134217728 Units==================================================-------------------------------------------------- Unit Size: 1 Byte--------------------------------------------------Latency: 0.321 msEffective Bandwitdh: 778.091 GB/sPercentage of Peak Bandwitdh: 83.121%-------------------------------------------------- Unit Size: 2 Byte--------------------------------------------------Latency: 0.640 msEffective Bandwitdh: 781.539 GB/sPercentage of Peak Bandwitdh: 83.489%-------------------------------------------------- Unit Size: 4 Byte--------------------------------------------------Latency: 1.275 msEffective Bandwitdh: 784.214 GB/sPercentage of Peak Bandwitdh: 83.775%-------------------------------------------------- Unit Size: 8 Byte--------------------------------------------------Latency: 2.560 msEffective Bandwitdh: 781.282 GB/sPercentage of Peak Bandwitdh: 83.462%************************************************** Custom Device Memcpy**************************************************================================================== Tensor Size: 262144 Units==================================================-------------------------------------------------- Unit Size: 1 
Byte--------------------------------------------------Latency: 0.003 msEffective Bandwitdh: 183.399 GB/sPercentage of Peak Bandwitdh: 19.592%-------------------------------------------------- Unit Size: 2 Byte--------------------------------------------------Latency: 0.003 msEffective Bandwitdh: 354.443 GB/sPercentage of Peak Bandwitdh: 37.864%-------------------------------------------------- Unit Size: 4 Byte--------------------------------------------------Latency: 0.003 msEffective Bandwitdh: 681.196 GB/sPercentage of Peak Bandwitdh: 72.770%-------------------------------------------------- Unit Size: 8 Byte--------------------------------------------------Latency: 0.003 msEffective Bandwitdh: 1192.093 GB/sPercentage of Peak Bandwitdh: 127.347%================================================== Tensor Size: 2097152 Units==================================================-------------------------------------------------- Unit Size: 1 Byte--------------------------------------------------Latency: 0.010 msEffective Bandwitdh: 378.747 GB/sPercentage of Peak Bandwitdh: 40.460%-------------------------------------------------- Unit Size: 2 Byte--------------------------------------------------Latency: 0.018 msEffective Bandwitdh: 445.593 GB/sPercentage of Peak Bandwitdh: 47.601%-------------------------------------------------- Unit Size: 4 Byte--------------------------------------------------Latency: 0.024 msEffective Bandwitdh: 660.732 GB/sPercentage of Peak Bandwitdh: 70.584%-------------------------------------------------- Unit Size: 8 Byte--------------------------------------------------Latency: 0.042 msEffective Bandwitdh: 737.140 GB/sPercentage of Peak Bandwitdh: 78.746%================================================== Tensor Size: 134217728 Units==================================================-------------------------------------------------- Unit Size: 1 Byte--------------------------------------------------Latency: 0.972 msEffective Bandwitdh: 257.207 GB/sPercentage of Peak Bandwitdh: 27.477%-------------------------------------------------- Unit Size: 2 Byte--------------------------------------------------Latency: 1.076 msEffective Bandwitdh: 464.543 GB/sPercentage of Peak Bandwitdh: 49.626%-------------------------------------------------- Unit Size: 4 Byte--------------------------------------------------Latency: 1.369 msEffective Bandwitdh: 730.586 GB/sPercentage of Peak Bandwitdh: 78.046%-------------------------------------------------- Unit Size: 8 Byte--------------------------------------------------Latency: 2.536 msEffective Bandwitdh: 788.727 GB/sPercentage of Peak Bandwitdh: 84.257%************************************************** Custom Device Memcpy with Shared Memory**************************************************================================================== Tensor Size: 262144 Units==================================================-------------------------------------------------- Unit Size: 1 Byte--------------------------------------------------Latency: 0.003 msEffective Bandwitdh: 175.995 GB/sPercentage of Peak Bandwitdh: 18.801%-------------------------------------------------- Unit Size: 2 Byte--------------------------------------------------Latency: 0.003 msEffective Bandwitdh: 328.853 GB/sPercentage of Peak Bandwitdh: 35.130%-------------------------------------------------- Unit Size: 4 Byte--------------------------------------------------Latency: 0.003 msEffective Bandwitdh: 653.481 GB/sPercentage of Peak Bandwitdh: 
69.809%-------------------------------------------------- Unit Size: 8 Byte--------------------------------------------------Latency: 0.003 msEffective Bandwitdh: 1128.192 GB/sPercentage of Peak Bandwitdh: 120.521%================================================== Tensor Size: 2097152 Units==================================================-------------------------------------------------- Unit Size: 1 Byte--------------------------------------------------Latency: 0.011 msEffective Bandwitdh: 353.213 GB/sPercentage of Peak Bandwitdh: 37.733%-------------------------------------------------- Unit Size: 2 Byte--------------------------------------------------Latency: 0.018 msEffective Bandwitdh: 433.488 GB/sPercentage of Peak Bandwitdh: 46.308%-------------------------------------------------- Unit Size: 4 Byte--------------------------------------------------Latency: 0.024 msEffective Bandwitdh: 650.261 GB/sPercentage of Peak Bandwitdh: 69.465%-------------------------------------------------- Unit Size: 8 Byte--------------------------------------------------Latency: 0.042 msEffective Bandwitdh: 737.864 GB/sPercentage of Peak Bandwitdh: 78.824%================================================== Tensor Size: 134217728 Units==================================================-------------------------------------------------- Unit Size: 1 Byte--------------------------------------------------Latency: 1.011 msEffective Bandwitdh: 247.181 GB/sPercentage of Peak Bandwitdh: 26.406%-------------------------------------------------- Unit Size: 2 Byte--------------------------------------------------Latency: 1.113 msEffective Bandwitdh: 449.172 GB/sPercentage of Peak Bandwitdh: 47.984%-------------------------------------------------- Unit Size: 4 Byte--------------------------------------------------Latency: 1.391 msEffective Bandwitdh: 718.748 GB/sPercentage of Peak Bandwitdh: 76.781%-------------------------------------------------- Unit Size: 8 Byte--------------------------------------------------Latency: 2.546 msEffective Bandwitdh: 785.429 GB/sPercentage of Peak Bandwitdh: 83.905%************************************************** Custom Device Memcpy 4-Byte Copy Per Thread**************************************************================================================== Tensor Size: 262144 Units==================================================-------------------------------------------------- Unit Size: 1 Byte--------------------------------------------------Latency: 0.002 msEffective Bandwitdh: 238.419 GB/sPercentage of Peak Bandwitdh: 25.469%-------------------------------------------------- Unit Size: 2 Byte--------------------------------------------------Latency: 0.002 msEffective Bandwitdh: 437.842 GB/sPercentage of Peak Bandwitdh: 46.773%-------------------------------------------------- Unit Size: 4 Byte--------------------------------------------------Latency: 0.003 msEffective Bandwitdh: 684.251 GB/sPercentage of Peak Bandwitdh: 73.096%-------------------------------------------------- Unit Size: 8 Byte--------------------------------------------------Latency: 0.004 msEffective Bandwitdh: 1003.868 GB/sPercentage of Peak Bandwitdh: 107.240%================================================== Tensor Size: 2097152 Units==================================================-------------------------------------------------- Unit Size: 1 Byte--------------------------------------------------Latency: 0.004 msEffective Bandwitdh: 968.812 GB/sPercentage of Peak Bandwitdh: 
103.495%-------------------------------------------------- Unit Size: 2 Byte--------------------------------------------------Latency: 0.012 msEffective Bandwitdh: 675.168 GB/sPercentage of Peak Bandwitdh: 72.126%-------------------------------------------------- Unit Size: 4 Byte--------------------------------------------------Latency: 0.024 msEffective Bandwitdh: 660.196 GB/sPercentage of Peak Bandwitdh: 70.527%-------------------------------------------------- Unit Size: 8 Byte--------------------------------------------------Latency: 0.045 msEffective Bandwitdh: 690.443 GB/sPercentage of Peak Bandwitdh: 73.758%================================================== Tensor Size: 134217728 Units==================================================-------------------------------------------------- Unit Size: 1 Byte--------------------------------------------------Latency: 0.366 msEffective Bandwitdh: 682.529 GB/sPercentage of Peak Bandwitdh: 72.912%-------------------------------------------------- Unit Size: 2 Byte--------------------------------------------------Latency: 0.722 msEffective Bandwitdh: 692.125 GB/sPercentage of Peak Bandwitdh: 73.937%-------------------------------------------------- Unit Size: 4 Byte--------------------------------------------------Latency: 1.422 msEffective Bandwitdh: 703.431 GB/sPercentage of Peak Bandwitdh: 75.145%-------------------------------------------------- Unit Size: 8 Byte--------------------------------------------------Latency: 2.824 msEffective Bandwitdh: 708.144 GB/sPercentage of Peak Bandwitdh: 75.649%************************************************** Custom Device Memcpy 8-Byte Copy Per Thread**************************************************================================================== Tensor Size: 262144 Units==================================================-------------------------------------------------- Unit Size: 1 Byte--------------------------------------------------Latency: 0.002 msEffective Bandwitdh: 238.792 GB/sPercentage of Peak Bandwitdh: 25.509%-------------------------------------------------- Unit Size: 2 Byte--------------------------------------------------Latency: 0.002 msEffective Bandwitdh: 434.723 GB/sPercentage of Peak Bandwitdh: 46.440%-------------------------------------------------- Unit Size: 4 Byte--------------------------------------------------Latency: 0.003 msEffective Bandwitdh: 681.196 GB/sPercentage of Peak Bandwitdh: 72.770%-------------------------------------------------- Unit Size: 8 Byte--------------------------------------------------Latency: 0.004 msEffective Bandwitdh: 1030.999 GB/sPercentage of Peak Bandwitdh: 110.138%================================================== Tensor Size: 2097152 Units==================================================-------------------------------------------------- Unit Size: 1 Byte--------------------------------------------------Latency: 0.004 msEffective Bandwitdh: 978.128 GB/sPercentage of Peak Bandwitdh: 104.490%-------------------------------------------------- Unit Size: 2 Byte--------------------------------------------------Latency: 0.012 msEffective Bandwitdh: 677.416 GB/sPercentage of Peak Bandwitdh: 72.366%-------------------------------------------------- Unit Size: 4 Byte--------------------------------------------------Latency: 0.022 msEffective Bandwitdh: 696.748 GB/sPercentage of Peak Bandwitdh: 74.431%-------------------------------------------------- Unit Size: 8 Byte--------------------------------------------------Latency: 0.042 msEffective 
Bandwitdh: 738.924 GB/sPercentage of Peak Bandwitdh: 78.937%================================================== Tensor Size: 134217728 Units==================================================-------------------------------------------------- Unit Size: 1 Byte--------------------------------------------------Latency: 0.320 msEffective Bandwitdh: 781.750 GB/sPercentage of Peak Bandwitdh: 83.512%-------------------------------------------------- Unit Size: 2 Byte--------------------------------------------------Latency: 0.636 msEffective Bandwitdh: 786.536 GB/sPercentage of Peak Bandwitdh: 84.023%-------------------------------------------------- Unit Size: 4 Byte--------------------------------------------------Latency: 1.265 msEffective Bandwitdh: 790.547 GB/sPercentage of Peak Bandwitdh: 84.451%-------------------------------------------------- Unit Size: 8 Byte--------------------------------------------------Latency: 2.530 msEffective Bandwitdh: 790.419 GB/sPercentage of Peak Bandwitdh: 84.438%************************************************** Custom Device Memcpy 16-Byte Copy Per Thread**************************************************================================================== Tensor Size: 262144 Units==================================================-------------------------------------------------- Unit Size: 1 Byte--------------------------------------------------Latency: 0.002 msEffective Bandwitdh: 216.744 GB/sPercentage of Peak Bandwitdh: 23.154%-------------------------------------------------- Unit Size: 2 Byte--------------------------------------------------Latency: 0.002 msEffective Bandwitdh: 414.641 GB/sPercentage of Peak Bandwitdh: 44.295%-------------------------------------------------- Unit Size: 4 Byte--------------------------------------------------Latency: 0.002 msEffective Bandwitdh: 829.282 GB/sPercentage of Peak Bandwitdh: 88.589%-------------------------------------------------- Unit Size: 8 Byte--------------------------------------------------Latency: 0.003 msEffective Bandwitdh: 1192.093 GB/sPercentage of Peak Bandwitdh: 127.347%================================================== Tensor Size: 2097152 Units==================================================-------------------------------------------------- Unit Size: 1 Byte--------------------------------------------------Latency: 0.003 msEffective Bandwitdh: 1128.192 GB/sPercentage of Peak Bandwitdh: 120.521%-------------------------------------------------- Unit Size: 2 Byte--------------------------------------------------Latency: 0.010 msEffective Bandwitdh: 755.386 GB/sPercentage of Peak Bandwitdh: 80.695%-------------------------------------------------- Unit Size: 4 Byte--------------------------------------------------Latency: 0.023 msEffective Bandwitdh: 687.333 GB/sPercentage of Peak Bandwitdh: 73.425%-------------------------------------------------- Unit Size: 8 Byte--------------------------------------------------Latency: 0.043 msEffective Bandwitdh: 728.343 GB/sPercentage of Peak Bandwitdh: 77.806%================================================== Tensor Size: 134217728 Units==================================================-------------------------------------------------- Unit Size: 1 Byte--------------------------------------------------Latency: 0.321 msEffective Bandwitdh: 779.006 GB/sPercentage of Peak Bandwitdh: 83.219%-------------------------------------------------- Unit Size: 2 Byte--------------------------------------------------Latency: 0.639 msEffective Bandwitdh: 782.639 
GB/sPercentage of Peak Bandwitdh: 83.607%-------------------------------------------------- Unit Size: 4 Byte--------------------------------------------------Latency: 1.280 msEffective Bandwitdh: 781.520 GB/sPercentage of Peak Bandwitdh: 83.487%-------------------------------------------------- Unit Size: 8 Byte--------------------------------------------------Latency: 2.552 msEffective Bandwitdh: 783.602 GB/sPercentage of Peak Bandwitdh: 83.710%
Conclusions
We could see from the results that:
- The more units of data we copy, the higher the effective bandwidth.
- The larger the data unit, the higher the effective bandwidth; copying the data in units of 8 bytes or 16 bytes per thread therefore improves the latency when the data unit itself is small.
- Staging the copy through shared memory does not bring an obvious improvement.
Note that even though we could just use the official CUDA memcpy for this use case, it is still good to know how to write and improve a custom device memcpy function, because in more practical CUDA applications the data to copy may not be contiguous in memory, and we may need to copy data from multiple sources to multiple destinations.
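As a condensed illustration of the trick that gave the best results above, here is a minimal sketch of a vectorized copy kernel that moves 16 bytes per thread per iteration via uint4 and falls back to scalar copies for the tail. The kernel name vectorized_copy, the specialization to float, and the launch configuration in main are illustrative choices made here; they are not the exact kernel from the listing above.

#include <cuda_runtime.h>
#include <cstdio>

__global__ void vectorized_copy(float* __restrict__ dst,
                                float const* __restrict__ src, size_t n)
{
    // Number of full 4-float (16-byte) chunks.
    size_t const num_vec4{n / 4U};
    size_t const idx{blockDim.x * blockIdx.x + threadIdx.x};
    size_t const stride{blockDim.x * gridDim.x};
    // Each iteration moves one uint4, i.e. 16 bytes, per thread. Pointers from
    // cudaMalloc are at least 256-byte aligned, so the reinterpret_cast is
    // safe for the base pointers.
    for (size_t i{idx}; i < num_vec4; i += stride)
    {
        reinterpret_cast<uint4*>(dst)[i] = reinterpret_cast<uint4 const*>(src)[i];
    }
    // Clean up the tail elements that do not fill a whole uint4.
    for (size_t i{num_vec4 * 4U + idx}; i < n; i += stride)
    {
        dst[i] = src[i];
    }
}

int main()
{
    size_t const n{1U << 24};
    float *d_src, *d_dst;
    cudaMalloc(&d_src, n * sizeof(float));
    cudaMalloc(&d_dst, n * sizeof(float));
    cudaMemset(d_src, 0, n * sizeof(float));
    vectorized_copy<<<2048, 256>>>(d_dst, d_src, n);
    cudaDeviceSynchronize();
    printf("Last CUDA error: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(d_src);
    cudaFree(d_dst);
    return 0;
}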
References
CUDA Vectorized Memory Access
https://leimao.github.io/blog/CUDA-Vectorized-Memory-Access/
CUDA Constant Memory
Introduction
CUDA constant memory is a special memory space on the device. It’s cached and read-only.
There are some caveats to using constant memory. In this post, we will discuss the usage and caveats of constant memory.
Constant Memory
There is a total of 64 KB of constant memory on a device. The constant memory space is cached. As a result, a read from constant memory costs one memory read from device memory only on a cache miss; otherwise, it just costs one read from the constant cache. Accesses to different addresses by threads within a warp are serialized, so the cost scales linearly with the number of unique addresses read by all threads within a warp. As such, the constant cache is best when threads in the same warp access only a few distinct locations. If all threads of a warp access the same location, then constant memory can be as fast as a register access.
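As a minimal sketch of the mechanics: constant memory is declared at file scope with __constant__, filled from the host with cudaMemcpyToSymbol, and read directly by kernels. In the example below, every thread in a block reads the same element, which is the broadcast-friendly pattern described above. The table c_coefficients, its size of 256, and the scale kernel are made up for illustration and are not part of the benchmark that follows.

#include <cuda_runtime.h>
#include <cstdio>

// A small table kept in the 64 KB constant memory space.
__constant__ float c_coefficients[256];

__global__ void scale(float* data, unsigned int n)
{
    unsigned int const idx{blockIdx.x * blockDim.x + threadIdx.x};
    if (idx < n)
    {
        // All threads in the block read the same constant address,
        // so the constant cache can broadcast the value.
        data[idx] *= c_coefficients[blockIdx.x % 256U];
    }
}

int main()
{
    float h_coefficients[256];
    for (unsigned int i{0U}; i < 256U; ++i)
    {
        h_coefficients[i] = 0.5f;
    }
    // Constant memory is written from the host via cudaMemcpyToSymbol.
    cudaMemcpyToSymbol(c_coefficients, h_coefficients, sizeof(h_coefficients));

    constexpr unsigned int n{1U << 20};
    float* d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));
    scale<<<(n + 255U) / 256U, 256U>>>(d_data, n);
    cudaDeviceSynchronize();
    printf("Last CUDA error: %s\n", cudaGetErrorString(cudaGetLastError()));
    cudaFree(d_data);
    return 0;
}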
Constant Memory Usage and Performance
In the following example, we perform additions on an array. The input array is stored in global memory, while the array of constant values being added is stored either in global memory or in constant memory. We compare the performance of accessing constant memory and global memory under different access patterns.
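The heart of the benchmark is how each thread picks which element of the values array to read. The helper below is a condensed restatement of the switch statement that appears in both the CPU reference and the two GPU kernels of the listing; the name select_index is introduced here for readability and does not appear in the original code, while the enum and the magic number are taken verbatim from it.

enum struct AccessPattern
{
    OneAccessPerBlock,
    OneAccessPerWarp,
    OneAccessPerThread,
    PseudoRandom
};

// Magic number for generating the pseudo-random access pattern.
constexpr unsigned int magic_number{1357U};

__host__ __device__ unsigned int select_index(AccessPattern access_pattern,
                                              unsigned int block_id,
                                              unsigned int warp_id,
                                              unsigned int thread_id,
                                              unsigned int num_values)
{
    switch (access_pattern)
    {
        case AccessPattern::OneAccessPerBlock:
            // Every thread in a block reads the same element: broadcast.
            return block_id % num_values;
        case AccessPattern::OneAccessPerWarp:
            // Every thread in a warp reads the same element: still broadcast.
            return warp_id % num_values;
        case AccessPattern::OneAccessPerThread:
            // Threads in a warp read different elements: reads are serialized.
            return thread_id % num_values;
        case AccessPattern::PseudoRandom:
        default:
            // Scattered reads: the worst case for the constant cache.
            return (thread_id * magic_number) % num_values;
    }
}

The full benchmark is listed below.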
#include <functional>#include <iostream>#include <string>#include <vector>#include <cuda_runtime.h>#define CHECK_CUDA_ERROR(val) check((val), #val, __FILE__, __LINE__)void check(cudaError_t err, const char* const func, const char* const file, const int line){ if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << " " << func << std::endl; std::exit(EXIT_FAILURE); }}#define CHECK_LAST_CUDA_ERROR() checkLast(__FILE__, __LINE__)void checkLast(const char* const file, const int line){ cudaError_t const err{cudaGetLastError()}; if (err != cudaSuccess) { std::cerr << "CUDA Runtime Error at: " << file << ":" << line << std::endl; std::cerr << cudaGetErrorString(err) << std::endl; std::exit(EXIT_FAILURE); }}template <class T>float measure_performance(std::function<T(cudaStream_t)> bound_function, cudaStream_t stream, unsigned int num_repeats = 100, unsigned int num_warmups = 100){ cudaEvent_t start, stop; float time; CHECK_CUDA_ERROR(cudaEventCreate(&start)); CHECK_CUDA_ERROR(cudaEventCreate(&stop)); for (unsigned int i{0}; i < num_warmups; ++i) { bound_function(stream); } CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); CHECK_CUDA_ERROR(cudaEventRecord(start, stream)); for (unsigned int i{0}; i < num_repeats; ++i) { bound_function(stream); } CHECK_CUDA_ERROR(cudaEventRecord(stop, stream)); CHECK_CUDA_ERROR(cudaEventSynchronize(stop)); CHECK_LAST_CUDA_ERROR(); CHECK_CUDA_ERROR(cudaEventElapsedTime(&time, start, stop)); CHECK_CUDA_ERROR(cudaEventDestroy(start)); CHECK_CUDA_ERROR(cudaEventDestroy(stop)); float const latency{time / num_repeats}; return latency;}// Use all the constant memory.constexpr unsigned int N{64U * 1024U / sizeof(int)};__constant__ int const_values[N];// Magic number for generating the pseudo-random access pattern.constexpr unsigned int magic_number{1357U};enum struct AccessPattern{ OneAccessPerBlock, OneAccessPerWarp, OneAccessPerThread, PseudoRandom};void add_constant_cpu(int* sums, int const* inputs, int const* values, unsigned int num_sums, unsigned int num_values, unsigned int block_size, AccessPattern access_pattern){ for (unsigned int i{0U}; i < num_sums; ++i) { unsigned int const block_id{i / block_size}; unsigned int const thread_id{i % block_size}; unsigned int const warp_id{thread_id / 32U}; unsigned int index{0U}; switch (access_pattern) { case AccessPattern::OneAccessPerBlock: index = block_id % num_values; break; case AccessPattern::OneAccessPerWarp: index = warp_id % num_values; break; case AccessPattern::OneAccessPerThread: index = thread_id % num_values; break; case AccessPattern::PseudoRandom: index = (thread_id * magic_number) % num_values; break; } sums[i] = inputs[i] + values[index]; }}__global__ void add_constant_global_memory( int* sums, int const* inputs, int const* values, unsigned int num_sums, unsigned int num_values, AccessPattern access_pattern = AccessPattern::OneAccessPerBlock){ unsigned int const i{blockIdx.x * blockDim.x + threadIdx.x}; unsigned int const block_id{blockIdx.x}; unsigned int const thread_id{threadIdx.x}; unsigned int const warp_id{threadIdx.x / warpSize}; unsigned int index{0U}; switch (access_pattern) { case AccessPattern::OneAccessPerBlock: index = block_id % num_values; break; case AccessPattern::OneAccessPerWarp: index = warp_id % num_values; break; case AccessPattern::OneAccessPerThread: index = thread_id % num_values; break; case AccessPattern::PseudoRandom: index = (thread_id * magic_number) % num_values; break; } if (i < num_sums) { 
sums[i] = inputs[i] + values[index]; }}void launch_add_constant_global_memory(int* sums, int const* inputs, int const* values, unsigned int num_sums, unsigned int num_values, unsigned int block_size, AccessPattern access_pattern, cudaStream_t stream){ add_constant_global_memory<<<(num_sums + block_size - 1) / block_size, block_size, 0, stream>>>( sums, inputs, values, num_sums, num_values, access_pattern); CHECK_LAST_CUDA_ERROR();}__global__ void add_constant_constant_memory(int* sums, int const* inputs, unsigned int num_sums, AccessPattern access_pattern){ unsigned int const i{blockIdx.x * blockDim.x + threadIdx.x}; unsigned int const block_id{blockIdx.x}; unsigned int const thread_id{threadIdx.x}; unsigned int const warp_id{threadIdx.x / warpSize}; unsigned int index{0U}; switch (access_pattern) { case AccessPattern::OneAccessPerBlock: index = block_id % N; break; case AccessPattern::OneAccessPerWarp: index = warp_id % N; break; case AccessPattern::OneAccessPerThread: index = thread_id % N; break; case AccessPattern::PseudoRandom: index = (thread_id * magic_number) % N; break; } if (i < num_sums) { sums[i] = inputs[i] + const_values[index]; }}void launch_add_constant_constant_memory(int* sums, int const* inputs, unsigned int num_sums, unsigned int block_size, AccessPattern access_pattern, cudaStream_t stream){ add_constant_constant_memory<<<(num_sums + block_size - 1) / block_size, block_size, 0, stream>>>( sums, inputs, num_sums, access_pattern); CHECK_LAST_CUDA_ERROR();}void parse_args(int argc, char** argv, AccessPattern& access_pattern, unsigned int& block_size, unsigned int& num_sums){ if (argc < 4) { std::cerr << "Usage: " << argv[0] << " <access pattern> <block size> <number of sums>" << std::endl; std::exit(EXIT_FAILURE); } std::string const access_pattern_str{argv[1]}; if (access_pattern_str == "one_access_per_block") { access_pattern = AccessPattern::OneAccessPerBlock; } else if (access_pattern_str == "one_access_per_warp") { access_pattern = AccessPattern::OneAccessPerWarp; } else if (access_pattern_str == "one_access_per_thread") { access_pattern = AccessPattern::OneAccessPerThread; } else if (access_pattern_str == "pseudo_random") { access_pattern = AccessPattern::PseudoRandom; } else { std::cerr << "Invalid access pattern: " << access_pattern_str << std::endl; std::exit(EXIT_FAILURE); } block_size = std::stoi(argv[2]); num_sums = std::stoi(argv[3]);}int main(int argc, char** argv){ constexpr unsigned int num_warmups{100U}; constexpr unsigned int num_repeats{100U}; AccessPattern access_pattern{AccessPattern::OneAccessPerBlock}; unsigned int block_size{1024U}; unsigned int num_sums{12800000U}; // Modify access pattern, block size and number of sums from command line. parse_args(argc, argv, access_pattern, block_size, num_sums); cudaStream_t stream; CHECK_CUDA_ERROR(cudaStreamCreate(&stream)); int h_values[N]; // Initialize values on host memory. for (unsigned int i{0U}; i < N; ++i) { h_values[i] = i; } // Initialize values on global memory. int* d_values; CHECK_CUDA_ERROR(cudaMallocAsync(&d_values, N * sizeof(int), stream)); CHECK_CUDA_ERROR(cudaMemcpyAsync(d_values, h_values, N * sizeof(int), cudaMemcpyHostToDevice, stream)); // Initialize values on constant memory. 
CHECK_CUDA_ERROR(cudaMemcpyToSymbolAsync(const_values, h_values, N * sizeof(int), 0, cudaMemcpyHostToDevice, stream)); std::vector<int> inputs(num_sums, 0); int* h_inputs{inputs.data()}; int* d_inputs_for_constant; int* d_inputs_for_global; CHECK_CUDA_ERROR(cudaMallocAsync(&d_inputs_for_constant, num_sums * sizeof(int), stream)); CHECK_CUDA_ERROR( cudaMallocAsync(&d_inputs_for_global, num_sums * sizeof(int), stream)); CHECK_CUDA_ERROR(cudaMemcpyAsync(d_inputs_for_constant, h_inputs, num_sums * sizeof(int), cudaMemcpyHostToDevice, stream)); CHECK_CUDA_ERROR(cudaMemcpyAsync(d_inputs_for_global, h_inputs, num_sums * sizeof(int), cudaMemcpyHostToDevice, stream)); std::vector<int> reference_sums(num_sums, 0); std::vector<int> sums_from_constant(num_sums, 1); std::vector<int> sums_from_global(num_sums, 2); int* h_reference_sums{reference_sums.data()}; int* h_sums_from_constant{sums_from_constant.data()}; int* h_sums_from_global{sums_from_global.data()}; int* d_sums_from_constant; int* d_sums_from_global; CHECK_CUDA_ERROR( cudaMallocAsync(&d_sums_from_constant, num_sums * sizeof(int), stream)); CHECK_CUDA_ERROR( cudaMallocAsync(&d_sums_from_global, num_sums * sizeof(int), stream)); // Synchronize. CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); // Compute reference sums on CPU. add_constant_cpu(h_reference_sums, h_inputs, h_values, num_sums, N, block_size, access_pattern); // Compute reference sums on GPU using global memory. launch_add_constant_global_memory(d_sums_from_global, d_inputs_for_global, d_values, num_sums, N, block_size, access_pattern, stream); // Compute reference sums on GPU using constant memory. launch_add_constant_constant_memory(d_sums_from_constant, d_inputs_for_constant, num_sums, block_size, access_pattern, stream); // Copy results from device to host. CHECK_CUDA_ERROR(cudaMemcpyAsync(h_sums_from_constant, d_sums_from_constant, num_sums * sizeof(int), cudaMemcpyDeviceToHost, stream)); CHECK_CUDA_ERROR(cudaMemcpyAsync(h_sums_from_global, d_sums_from_global, num_sums * sizeof(int), cudaMemcpyDeviceToHost, stream)); // Synchronize. CHECK_CUDA_ERROR(cudaStreamSynchronize(stream)); // Verify results. for (unsigned int i{0U}; i < num_sums; ++i) { if (h_reference_sums[i] != h_sums_from_constant[i]) { std::cerr << "Error at index " << i << " for constant memory." << std::endl; std::exit(EXIT_FAILURE); } if (h_reference_sums[i] != h_sums_from_global[i]) { std::cerr << "Error at index " << i << " for global memory." << std::endl; std::exit(EXIT_FAILURE); } } // Measure performance. 
std::function<void(cudaStream_t)> bound_function_constant_memory{ std::bind(launch_add_constant_constant_memory, d_sums_from_constant, d_inputs_for_constant, num_sums, block_size, access_pattern, std::placeholders::_1)}; std::function<void(cudaStream_t)> bound_function_global_memory{ std::bind(launch_add_constant_global_memory, d_sums_from_global, d_inputs_for_global, d_values, num_sums, N, block_size, access_pattern, std::placeholders::_1)}; float const latency_constant_memory{measure_performance( bound_function_constant_memory, stream, num_repeats, num_warmups)}; float const latency_global_memory{measure_performance( bound_function_global_memory, stream, num_repeats, num_warmups)}; std::cout << "Latency for Add using constant memory: " << latency_constant_memory << " ms" << std::endl; std::cout << "Latency for Add using global memory: " << latency_global_memory << " ms" << std::endl; CHECK_CUDA_ERROR(cudaStreamDestroy(stream)); CHECK_CUDA_ERROR(cudaFree(d_values)); CHECK_CUDA_ERROR(cudaFree(d_inputs_for_constant)); CHECK_CUDA_ERROR(cudaFree(d_inputs_for_global)); CHECK_CUDA_ERROR(cudaFree(d_sums_from_constant)); CHECK_CUDA_ERROR(cudaFree(d_sums_from_global)); return 0;}
The program was compiled and executed on an NVIDIA RTX 3090 GPU.
$ nvcc add_constant.cu -o add_constant
If we have 12800000 additions to perform using 1024 threads per block:
$ ./add_constant one_access_per_block 1024 12800000
Latency for Add using constant memory: 0.151798 ms
Latency for Add using global memory: 0.171404 ms
$ ./add_constant one_access_per_warp 1024 12800000
Latency for Add using constant memory: 0.164012 ms
Latency for Add using global memory: 0.189501 ms
$ ./add_constant one_access_per_thread 1024 12800000
Latency for Add using constant memory: 0.281967 ms
Latency for Add using global memory: 0.164649 ms
$ ./add_constant pseudo_random 1024 12800000
Latency for Add using constant memory: 1.2925 ms
Latency for Add using global memory: 0.159621 ms
If we have 128000 additions to perform using 1024 threads per block:
$ ./add_constant one_access_per_block 1024 128000
Latency for Add using constant memory: 0.00289792 ms
Latency for Add using global memory: 0.00323584 ms
$ ./add_constant one_access_per_warp 1024 128000
Latency for Add using constant memory: 0.00315392 ms
Latency for Add using global memory: 0.00359392 ms
$ ./add_constant one_access_per_thread 1024 128000
Latency for Add using constant memory: 0.00596992 ms
Latency for Add using global memory: 0.00383264 ms
$ ./add_constant pseudo_random 1024 128000
Latency for Add using constant memory: 0.0215347 ms
Latency for Add using global memory: 0.00482304 ms
In both cases, we could see that accessing constant memory is ~10% faster than accessing global memory if it's one access per block or one access per warp. If it's one access per thread, then accessing constant memory is ~70% slower than accessing global memory. If the access is pseudo-random, then accessing constant memory is roughly 8x slower than accessing global memory.
Conclusions
To use constant memory, it’s important to roughly know the access pattern. If the access pattern is one access per block or one access per warp, which is typically used in broadcast, then constant memory is a good choice. If the access pattern is one access per thread or even pseudo random, then constant memory is a very bad choice.
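When the access pattern is one access per thread or scattered, a common alternative is to simply leave the lookup table in global memory and let the reads go through the read-only data cache, for example via __ldg or a const __restrict__ pointer. The kernel below is a sketch of that idea; the name add_from_global_table is introduced here for illustration, and the pseudo-random index computation mirrors the benchmark above but the kernel itself is not part of it.

__global__ void add_from_global_table(int* __restrict__ sums,
                                      int const* __restrict__ inputs,
                                      int const* __restrict__ table,
                                      unsigned int num_values,
                                      unsigned int num_sums)
{
    unsigned int const i{blockIdx.x * blockDim.x + threadIdx.x};
    if (i < num_sums)
    {
        unsigned int const index{(threadIdx.x * 1357U) % num_values};
        // __ldg routes the load through the read-only data cache (sm_35+);
        // a plain load via a const __restrict__ pointer usually compiles to
        // the same instruction.
        sums[i] = inputs[i] + __ldg(&table[index]);
    }
}

It can be launched exactly like the global-memory kernel in the listing above.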
References
CUDA Constant Memory
https://leimao.github.io/blog/CUDA-Constant-Memory/
CUDA Default Stream
Introduction
The CUDA default stream can have different synchronization behaviors in different scenarios. Sometimes, it helps the program run correctly even if we make mistakes in assigning CUDA streams to different kernels.
In this blog post, I would like to introduce the two types of CUDA default stream, the legacy default stream and the per-thread default stream, and discuss their synchronization behaviors in different scenarios.
Default Stream and Non-Default Blocking Stream
In the following example, I created a non-default blocking stream using cudaStreamCreate. For a series of CUDA kernels that are supposed to run in sequence on the same non-default blocking CUDA stream, I made a mistake and accidentally used the default stream for one of the kernels.
If the default stream is the legacy default stream, then when an action is taken in the legacy stream, such as a kernel launch or cudaStreamWaitEvent(), the legacy stream first waits on all blocking streams, the action is queued in the legacy stream, and then all blocking streams wait on the legacy stream. Therefore, even though I made a mistake, the CUDA kernels are still run in sequence and the correctness of the application is not affected.
If the default stream is a default per-thread stream, it is non-blocking and will not synchronize with other CUDA streams. Therefore, my mistake will cause the application to run incorrectly.
#include <cassert>
#include <iostream>
#include <vector>

#include <cuda_runtime.h>

#define CHECK_CUDA_ERROR(val) check((val), #val, __FILE__, __LINE__)
void check(cudaError_t err, const char* const func, const char* const file,
           const int line)
{
    if (err != cudaSuccess)
    {
        std::cerr << "CUDA Runtime Error at: " << file << ":" << line
                  << std::endl;
        std::cerr << cudaGetErrorString(err) << " " << func << std::endl;
        std::exit(EXIT_FAILURE);
    }
}

#define CHECK_LAST_CUDA_ERROR() checkLast(__FILE__, __LINE__)
void checkLast(const char* const file, const int line)
{
    cudaError_t const err{cudaGetLastError()};
    if (err != cudaSuccess)
    {
        std::cerr << "CUDA Runtime Error at: " << file << ":" << line
                  << std::endl;
        std::cerr << cudaGetErrorString(err) << std::endl;
        std::exit(EXIT_FAILURE);
    }
}

__global__ void add_val_in_place(int32_t* data, int32_t val, uint32_t n)
{
    uint32_t const idx{blockDim.x * blockIdx.x + threadIdx.x};
    uint32_t const stride{blockDim.x * gridDim.x};
    for (uint32_t i{idx}; i < n; i += stride)
    {
        data[i] += val;
    }
}

void launch_add_val_in_place(int32_t* data, int32_t val, uint32_t n,
                             cudaStream_t stream)
{
    dim3 const threads_per_block{1024};
    dim3 const blocks_per_grid{32};
    add_val_in_place<<<blocks_per_grid, threads_per_block, 0, stream>>>(data,
                                                                        val, n);
    CHECK_LAST_CUDA_ERROR();
}

bool check_array_value(int32_t const* data, uint32_t n, int32_t val)
{
    for (uint32_t i{0}; i < n; ++i)
    {
        if (data[i] != val)
        {
            return false;
        }
    }
    return true;
}

int main()
{
    constexpr uint32_t const n{1000000};
    constexpr int32_t const val_1{1};
    constexpr int32_t const val_2{2};
    constexpr int32_t const val_3{3};

    // Create a multi-stream application.
    cudaStream_t stream_1{0};
    cudaStream_t stream_2{0};
    // stream_1 is a non-default blocking stream.
    CHECK_CUDA_ERROR(cudaStreamCreate(&stream_1));

    std::vector<int32_t> vec(n, 0);
    int32_t* d_data{nullptr};
    CHECK_CUDA_ERROR(cudaMalloc(&d_data, n * sizeof(int32_t)));
    CHECK_CUDA_ERROR(cudaMemcpy(d_data, vec.data(), n * sizeof(int32_t),
                                cudaMemcpyHostToDevice));

    // Run a sequence of CUDA kernels in order on the same CUDA stream.
    launch_add_val_in_place(d_data, val_1, n, stream_1);
    // The second kernel launch is supposed to be run on stream_1.
    // However, the implementation has a typo such that the kernel launch
    // is run on the default stream_2.
    launch_add_val_in_place(d_data, val_2, n, stream_2);
    launch_add_val_in_place(d_data, val_3, n, stream_1);
    CHECK_CUDA_ERROR(cudaStreamSynchronize(stream_1));

    CHECK_CUDA_ERROR(cudaMemcpy(vec.data(), d_data, n * sizeof(int32_t),
                                cudaMemcpyDeviceToHost));

    // Check the correctness of the application.
    // The result will still be correct if the default stream_2
    // is a legacy default stream.
    assert(check_array_value(vec.data(), n, val_1 + val_2 + val_3));

    CHECK_CUDA_ERROR(cudaFree(d_data));
    CHECK_CUDA_ERROR(cudaStreamDestroy(stream_1));
}
We made a mistake in the implementation such that the three kernels are not run on the same CUDA stream, yet the result is still correct.
$ nvcc add.cu -o add -std=c++14
$ ./add
This is the same as running the following commands, since the default value for --default-stream is legacy.
$ nvcc add.cu -o add -std=c++14 --default-stream=legacy
$ ./add
Depending on the use case, this kind of mistake may sometimes affect application performance. It can usually be identified using CUDA profiling software, such as Nsight Systems.
However, if the default stream becomes per-thread, the result is no longer correct, because the kernels are no longer guaranteed to execute in sequence.
$ nvcc add.cu -o add -std=c++14 --default-stream=per-thread
$ ./add
add: add.cu:98: int main(): Assertion `check_array_value(vec.data(), n, val_1 + val_2 + val_3)' failed.
Aborted (core dumped)
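As a side note, instead of switching the semantics globally with the nvcc --default-stream flag, the CUDA runtime also exposes the special stream handles cudaStreamLegacy and cudaStreamPerThread, so the choice can be made per kernel launch. Below is a minimal sketch of my own, not part of the original example; dummy_kernel is just a placeholder.

#include <cuda_runtime.h>

__global__ void dummy_kernel() {}

int main()
{
    // cudaStreamLegacy and cudaStreamPerThread are special stream handles
    // defined by the CUDA runtime API. They select the default-stream
    // synchronization semantics for an individual launch.
    dummy_kernel<<<1, 1, 0, cudaStreamLegacy>>>();     // legacy default stream
    dummy_kernel<<<1, 1, 0, cudaStreamPerThread>>>();  // per-thread default stream
    cudaDeviceSynchronize();
}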
Default Stream and Non-Default Non-Blocking Stream
In some applications, a non-default stream is created using cudaStreamCreateWithFlags with the cudaStreamNonBlocking flag, which makes it non-blocking. In this case, the default stream, even if it is the legacy default stream, cannot synchronize with the non-default non-blocking stream. Therefore, my mistake will cause the application to run incorrectly, regardless of whether the default stream is legacy or per-thread.
#include <cassert>
#include <iostream>
#include <vector>

#include <cuda_runtime.h>

#define CHECK_CUDA_ERROR(val) check((val), #val, __FILE__, __LINE__)
void check(cudaError_t err, const char* const func, const char* const file,
           const int line)
{
    if (err != cudaSuccess)
    {
        std::cerr << "CUDA Runtime Error at: " << file << ":" << line
                  << std::endl;
        std::cerr << cudaGetErrorString(err) << " " << func << std::endl;
        std::exit(EXIT_FAILURE);
    }
}

#define CHECK_LAST_CUDA_ERROR() checkLast(__FILE__, __LINE__)
void checkLast(const char* const file, const int line)
{
    cudaError_t const err{cudaGetLastError()};
    if (err != cudaSuccess)
    {
        std::cerr << "CUDA Runtime Error at: " << file << ":" << line
                  << std::endl;
        std::cerr << cudaGetErrorString(err) << std::endl;
        std::exit(EXIT_FAILURE);
    }
}

__global__ void add_val_in_place(int32_t* data, int32_t val, uint32_t n)
{
    uint32_t const idx{blockDim.x * blockIdx.x + threadIdx.x};
    uint32_t const stride{blockDim.x * gridDim.x};
    for (uint32_t i{idx}; i < n; i += stride)
    {
        data[i] += val;
    }
}

void launch_add_val_in_place(int32_t* data, int32_t val, uint32_t n,
                             cudaStream_t stream)
{
    dim3 const threads_per_block{1024};
    dim3 const blocks_per_grid{32};
    add_val_in_place<<<blocks_per_grid, threads_per_block, 0, stream>>>(data,
                                                                        val, n);
    CHECK_LAST_CUDA_ERROR();
}

bool check_array_value(int32_t const* data, uint32_t n, int32_t val)
{
    for (uint32_t i{0}; i < n; ++i)
    {
        if (data[i] != val)
        {
            return false;
        }
    }
    return true;
}

int main()
{
    constexpr uint32_t const n{1000000};
    constexpr int32_t const val_1{1};
    constexpr int32_t const val_2{2};
    constexpr int32_t const val_3{3};

    // Create a multi-stream application.
    cudaStream_t stream_1{0};
    cudaStream_t stream_2{0};
    // stream_1 is a non-default non-blocking stream.
    CHECK_CUDA_ERROR(cudaStreamCreateWithFlags(&stream_1, cudaStreamNonBlocking));

    std::vector<int32_t> vec(n, 0);
    int32_t* d_data{nullptr};
    CHECK_CUDA_ERROR(cudaMalloc(&d_data, n * sizeof(int32_t)));
    CHECK_CUDA_ERROR(cudaMemcpy(d_data, vec.data(), n * sizeof(int32_t),
                                cudaMemcpyHostToDevice));

    // Run a sequence of CUDA kernels in order on the same CUDA stream.
    launch_add_val_in_place(d_data, val_1, n, stream_1);
    // The second kernel launch is supposed to be run on stream_1.
    // However, the implementation has a typo such that the kernel launch
    // is run on the default stream_2.
    launch_add_val_in_place(d_data, val_2, n, stream_2);
    launch_add_val_in_place(d_data, val_3, n, stream_1);
    CHECK_CUDA_ERROR(cudaStreamSynchronize(stream_1));

    CHECK_CUDA_ERROR(cudaMemcpy(vec.data(), d_data, n * sizeof(int32_t),
                                cudaMemcpyDeviceToHost));

    // Check the correctness of the application.
    // Because stream_1 is non-blocking and does not synchronize with the
    // default stream_2, the result is no longer guaranteed to be correct.
    assert(check_array_value(vec.data(), n, val_1 + val_2 + val_3));

    CHECK_CUDA_ERROR(cudaFree(d_data));
    CHECK_CUDA_ERROR(cudaStreamDestroy(stream_1));
}
$ nvcc add.cu -o add -std=c++14 --default-stream=legacy
$ ./add
add: add.cu:98: int main(): Assertion `check_array_value(vec.data(), n, val_1 + val_2 + val_3)' failed.
Aborted (core dumped)
$ nvcc add.cu -o add -std=c++14 --default-stream=per-thread
$ ./add
add: add.cu:98: int main(): Assertion `check_array_value(vec.data(), n, val_1 + val_2 + val_3)' failed.
Aborted (core dumped)
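If two different streams really have to be used but the kernels still need to execute in order, explicit synchronization with CUDA events is one option. The following is a minimal sketch of my own rather than part of the original example; it reuses the helpers, streams, and device buffer from the listing above and would replace the kernel-launch sequence in main().

    // Serialize work across stream_1 and stream_2 explicitly with a CUDA event.
    cudaEvent_t event;
    CHECK_CUDA_ERROR(cudaEventCreateWithFlags(&event, cudaEventDisableTiming));

    launch_add_val_in_place(d_data, val_1, n, stream_1);
    // Make stream_2 wait for all work previously submitted to stream_1.
    CHECK_CUDA_ERROR(cudaEventRecord(event, stream_1));
    CHECK_CUDA_ERROR(cudaStreamWaitEvent(stream_2, event, 0));
    launch_add_val_in_place(d_data, val_2, n, stream_2);
    // Make stream_1 wait for the work submitted to stream_2.
    CHECK_CUDA_ERROR(cudaEventRecord(event, stream_2));
    CHECK_CUDA_ERROR(cudaStreamWaitEvent(stream_1, event, 0));
    launch_add_val_in_place(d_data, val_3, n, stream_1);

    CHECK_CUDA_ERROR(cudaStreamSynchronize(stream_1));
    CHECK_CUDA_ERROR(cudaEventDestroy(event));

With these explicit dependencies, the three kernels execute in order regardless of whether stream_1 is blocking or non-blocking and regardless of the default-stream mode.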
References
CUDA Default Stream
https://leimao.github.io/blog/CUDA-Default-Stream/
CUDA Tensor Layouts for Convolution
Introduction
There are a few common layouts for the activation tensors involved in the convolution operations in neural networks: NCHW, NHWC, and NC/xHWx.
In general, convolution using NHWC is much faster than using NCHW. The NC/xHWx layout is a variant of NHWC that is prepared for NVIDIA Tensor Core operations.
In this blog post, I would like to discuss how to perform convolution on GPU and why NHWC and NC/xHWx activation tensor layouts are much more favored than the NCHW activation tensor layout for convolutional neural network inference.
Convolution
The description of convolution in neural networks can be found in the documentation of many deep learning frameworks, such as PyTorch.
Convolution Dimensions
The 2D convolution operation in neural networks consists of an input activation tensor, a filter tensor, an optional bias tensor, and an output activation tensor. We will ignore the bias tensor in this article since it is usually simple to deal with.
The input and output activation tensors and the filter tensor are all 4D tensors. We use N to describe the batch dimension for the input and output tensors. We use C, H, W to describe the number of channels, the spatial height, and the spatial width of the input activation tensor. In order to distinguish the output activation tensor from the input activation tensor, we use K, P, Q to describe the number of channels, the spatial height, and the spatial width of the output activation tensor instead. The filter height and width are described using R and S, respectively.
Implicit GEMM for Convolution
In my previous article “Fast Fourier Transform for Convolution”, I described how to perform convolution using the asymptotically faster fast Fourier transform. But this technique is still not the most common way of performing convolution nowadays on GPU and it is out of the scope of this article.
In my previous article “Convolution and Transposed Convolution as Matrix Multiplication”, I described how to perform convolution using matrix multiplication in which the activation tensors are dense but the filter tensor is sparse.
On GPUs, convolution is usually performed using a method called implicit GEMM. GEMM stands for general matrix multiplication. The difference between the implicit GEMM method that I am about to describe and the method I described in the article “Convolution and Transposed Convolution as Matrix Multiplication” is that all the matrices used in the implicit GEMM method are dense matrices.
The implicit GEMM method for convolution can be described using the following figure. We will focus on the forward propagation (a) only as the gradient updates (b and c) usually do not happen in the neural network inference.
Theoretically, if we transpose, expand, and reshape the input activation from a 4D tensor of shape $(N, C, H, W)$ to a 2D tensor of shape $(NPQ, CRS)$, transpose and reshape the weight tensor from the 4D shape $(K, C, S, R)$ to the 2D shape $(CRS, K)$, and multiply the two matrices, the resulting 2D output tensor has shape $(NPQ, K)$ and can be further transposed to the 4D output activation tensor of shape $(N, K, P, Q)$. For example, suppose $N = 1$, $C = K = 1$, $H = W = 3$, $R = S = 2$, $P = Q = 2$ (the convolution stride is 1 and the padding is “valid”). Because there is only one input channel, the only spatial feature in the input activation tensor is a matrix, and its values can be assumed to be
$$\begin{bmatrix}1 & 2 & 3 \\4 & 5 & 6 \\7 & 8 & 9 \\\end{bmatrix}$$
The reconstructed input activation matrix will be of shape $(4, 4)$ and its values are
$$\begin{bmatrix}1 & 2 & 4 & 5 \\2 & 3 & 5 & 6 \\4 & 5 & 7 & 8 \\5 & 6 & 8 & 9 \\\end{bmatrix}$$
The problem with this theoretical formulation is that a new matrix always needs to be constructed during inference, because the output activation tensor, which will usually be used as the input activation tensor for the next convolution layer, is not in the reconstructed format. Even though the ratio between the number of values in the reconstructed input activation matrix and the number of values in the original input activation tensor is $\frac{PQRS}{HW}$, which can sometimes be 1, constructing such a new matrix is not a no-op and will introduce computational overhead, not to mention consuming additional memory when this ratio is high. Therefore, in practice, this reconstructed input activation matrix is never constructed in the implicit GEMM method for convolution. The values are read from the input activation tensor in its original layout instead.
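As a sanity check of the reconstruction above, the following is a small host-side sketch of my own (for illustration only; as just noted, this matrix is never materialized in practice) that builds the $(NPQ, CRS) = (4, 4)$ im2col matrix for the toy example with $N = 1$, $C = K = 1$, $H = W = 3$, $R = S = 2$, stride 1, and “valid” padding.

#include <cstdio>

int main()
{
    // Toy example: N = 1, C = 1, H = W = 3, R = S = 2, stride 1, "valid" padding.
    constexpr int H{3}, W{3}, R{2}, S{2};
    constexpr int P{H - R + 1}, Q{W - S + 1};  // P = Q = 2
    int const input[H][W]{{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};

    // Each row of the (N * P * Q) x (C * R * S) im2col matrix gathers one
    // receptive field of the input activation.
    int im2col[P * Q][R * S]{};
    for (int p{0}; p < P; ++p)
    {
        for (int q{0}; q < Q; ++q)
        {
            for (int r{0}; r < R; ++r)
            {
                for (int s{0}; s < S; ++s)
                {
                    im2col[p * Q + q][r * S + s] = input[p + r][q + s];
                }
            }
        }
    }

    // Prints the reconstructed matrix shown above:
    // 1 2 4 5
    // 2 3 5 6
    // 4 5 7 8
    // 5 6 8 9
    for (int i{0}; i < P * Q; ++i)
    {
        for (int j{0}; j < R * S; ++j)
        {
            std::printf("%d ", im2col[i][j]);
        }
        std::printf("\n");
    }
}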
NVIDIA Tensor Core
NVIDIA Tensor Cores perform small matrix multiplications to accelerate GEMM with extremely high throughput. For example, a Tensor Core can perform a 16×16×16 GEMM, i.e., a 16×16 by 16×16 matrix multiplication (and accumulation), for half-precision floating-point data on a per-warp basis. Fundamentally, the mathematical motivation of Tensor Core GEMM acceleration has been described in my previous article CUDA Matrix Multiplication, although not explicitly at that time.
$$\mathbf{A} =\begin{bmatrix}\mathbf{A}_{1,1}^{d \times d} & \mathbf{A}_{1,2}^{d \times d} & \cdots & \mathbf{A}_{1,n/d}^{d \times d} \\\mathbf{A}_{2,1}^{d \times d} & \mathbf{A}_{2,2}^{d \times d} & \cdots & \mathbf{A}_{2,n/d}^{d \times d} \\\vdots & \vdots & \ddots & \vdots \\\mathbf{A}_{m/d,1}^{d \times d} & \mathbf{A}_{m/d,2}^{d \times d} & \cdots & \mathbf{A}_{m/d,n/d}^{d \times d} \\\end{bmatrix}$$
$$\mathbf{B} =\begin{bmatrix}\mathbf{B}_{1,1}^{d \times d} & \mathbf{B}_{1,2}^{d \times d} & \cdots & \mathbf{B}_{1,p/d}^{d \times d} \\\mathbf{B}_{2,1}^{d \times d} & \mathbf{B}_{2,2}^{d \times d} & \cdots & \mathbf{B}_{2,p/d}^{d \times d} \\\vdots & \vdots & \ddots & \vdots \\\mathbf{B}_{n/d,1}^{d \times d} & \mathbf{B}_{n/d,2}^{d \times d} & \cdots & \mathbf{B}_{n/d,p/d}^{d \times d} \\\end{bmatrix}$$
$$\mathbf{C} =\begin{bmatrix}\mathbf{C}_{1,1}^{d \times d} & \mathbf{C}_{1,2}^{d \times d} & \cdots & \mathbf{C}_{1,p/d}^{d \times d} \\\mathbf{C}_{2,1}^{d \times d} & \mathbf{C}_{2,2}^{d \times d} & \cdots & \mathbf{C}_{2,p/d}^{d \times d} \\\vdots & \vdots & \ddots & \vdots \\\mathbf{C}_{m/d,1}^{d \times d} & \mathbf{C}_{m/d,2}^{d \times d} & \cdots & \mathbf{C}_{m/d,p/d}^{d \times d} \\\end{bmatrix}$$
$$\mathbf{C}_{i,j}^{d \times d} = \sum_{k=1}^{n/d} \mathbf{A}_{i,k}^{d \times d} \mathbf{B}_{k,j}^{d \times d}$$
Basically, by decomposing the large matrix multiplication into smaller matrix multiplications and accumulations and caching the small matrices, we can make GEMM extremely math bound. Specifically, the small matrices $\mathbf{A}_{i,k}^{d \times d}$ and $\mathbf{B}_{k,j}^{d \times d}$ are cached in the registers of a warp, and each warp computes a $\mathbf{C}_{i,j}^{d \times d}$ using Tensor Cores by iterating the small matrix multiplication and accumulation $\frac{n}{d}$ times.
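To make the per-warp computation concrete, here is a minimal sketch of my own (not code from this post) of one warp computing a single $16 \times 16$ tile of $\mathbf{C}$ with the CUDA WMMA API for half-precision inputs. It assumes $k$ is a multiple of 16, $\mathbf{A}$ is row-major $16 \times k$, $\mathbf{B}$ is column-major $k \times 16$, and the kernel is launched with a single warp on a GPU of compute capability 7.0 or higher.

#include <cuda_fp16.h>
#include <mma.h>

using namespace nvcuda;

// One warp computes a 16 x 16 tile of C by iterating over the K dimension in
// steps of 16, mirroring C_{i,j} = sum_k A_{i,k} B_{k,j}. Launch with <<<1, 32>>>.
__global__ void wmma_gemm_16x16(half const* a, half const* b, float* c, int k)
{
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::fill_fragment(c_frag, 0.0f);
    for (int i{0}; i < k; i += 16)
    {
        // The leading dimension of both A (row-major) and B (column-major) is k.
        wmma::load_matrix_sync(a_frag, a + i, k);
        wmma::load_matrix_sync(b_frag, b + i, k);
        wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);
    }
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}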
Tensor Layouts
Now that we have some basic idea of how convolution is performed on the GPU via implicit GEMM, let's check the impact of different activation layouts on the performance of convolution.
NCHW
In the NCHW layout, C is not the fastest dimension. This means that, regardless of the implementation, gathering the entire channel dimension from the input activations for implicit GEMM (the $CRS$ axis) requires many strided reads, which significantly reduces the effective memory throughput on the GPU. For example, suppose the input activation tensor has $N = 1$, $C = 256$, and $H = W = 128$. To get an entire channel at the spatial indices $(12, 35)$ for a 1x1 convolution, the slicing operation we perform for the first sample is $X[1, :, 12, 35]$. Under the hood, getting an entire channel of size $C = 256$ requires $C = 256$ reads that are $HW = 16384$ elements apart, which prevents coalesced reads of the data from the DRAM on the GPU.
Therefore, the NCHW layout is not favored for the implicit GEMM for convolution.
NHWC
In the NHWC layout, C becomes the fastest dimension. Unlike slicing the C dimension in NCHW, slicing the C dimension in NHWC reads contiguous memory, so the reads can be fully coalesced from DRAM.
Therefore, the NHWC layout is favored over the NCHW layout for the implicit GEMM for convolution.
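The difference can be seen directly from the indexing arithmetic. The following is a small sketch of my own showing the linear offsets of an element $(n, c, h, w)$ in the two layouts: in NCHW, consecutive channel indices are $HW$ elements apart (16384 for the $C = 256$, $H = W = 128$ example above), whereas in NHWC they are adjacent, so reading a whole channel vector at a fixed spatial location touches contiguous memory.

#include <cstdint>

// Linear offset of element (n, c, h, w) in the NCHW layout.
// Consecutive c indices are H * W elements apart.
inline int64_t offset_nchw(int64_t n, int64_t c, int64_t h, int64_t w,
                           int64_t C, int64_t H, int64_t W)
{
    return ((n * C + c) * H + h) * W + w;
}

// Linear offset of element (n, c, h, w) in the NHWC layout.
// Consecutive c indices are adjacent, so channel reads coalesce.
inline int64_t offset_nhwc(int64_t n, int64_t c, int64_t h, int64_t w,
                           int64_t C, int64_t H, int64_t W)
{
    return ((n * H + h) * W + w) * C + c;
}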
NC/xHWx
To take advantage of NVIDIA Tensor Cores, the “virtual” reconstructed input activation matrix needs to be divided in a way that is compatible with the Tensor Core GEMM. This requires the “virtual” reconstructed input activation matrix to be padded (with zeros) so that $CRS$ is divisible by the small-matrix dimensions required by Tensor Core operations. The NHWC layout provides no such guarantee, so using an NHWC tensor for Tensor Core GEMM requires padding at runtime, which makes it a little cumbersome to use with Tensor Cores.
The NC/xHWx layout is always padded to x elements for the fastest (C) dimension, where x is usually the Tensor Core GEMM dimension requirement. Therefore, it is immediately ready to be used with Tensor Core.
One might ask: is there an NHWC variant layout whose C dimension is not divided like the NC/xHWx layout but is padded to a multiple of x elements according to the Tensor Core GEMM dimension requirement? The answer is yes, and that layout can also be very performant for the implicit GEMM method for convolution. My educated guess for why NC/xHWx is slightly more often seen than the padded NHWC layout is that indexing and slicing for the NC/xHWx layout might be more natural in the implementation than for the padded NHWC layout. For example, using the padded NHWC layout, the indexing and slicing of the input activation tensor would be like this.
$$\begin{align}&X[1, 0:4, 0:4, 0:16] \\&X[1, 0:4, 0:4, 16:32] \\&X[1, 0:4, 0:4, 32:48] \\\end{align}$$
Using the NC/16HW16 layout instead, the indexing and slicing of the input activation tensor to get the equivalent matrices would be like this.
$$\begin{align}&X[1, 0, 0:4, 0:4, :] \\&X[1, 1, 0:4, 0:4, :] \\&X[1, 2, 0:4, 0:4, :] \\\end{align}$$
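For completeness, here is one plausible indexing scheme for the NC/xHWx layout, written as a sketch of my own under the assumption that the channel dimension is split into $\lceil C / x \rceil$ groups of $x$ contiguous channels and padded with zeros up to a multiple of $x$.

#include <cstdint>

// Linear offset of element (n, c, h, w) in the NC/xHWx layout
// (e.g., x = 16 for NC/16HW16). The innermost x channels are contiguous,
// which matches the Tensor Core GEMM tiling requirement.
inline int64_t offset_nc_x_hwx(int64_t n, int64_t c, int64_t h, int64_t w,
                               int64_t C, int64_t H, int64_t W, int64_t x)
{
    int64_t const c_groups{(C + x - 1) / x};  // C padded up to a multiple of x
    int64_t const c_outer{c / x};
    int64_t const c_inner{c % x};
    return (((n * c_groups + c_outer) * H + h) * W + w) * x + c_inner;
}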
References
CUDA Tensor Layouts for Convolution
https://leimao.github.io/blog/CUDA-Convolution-Tensor-Layouts/