GPU

Fun with triangles

The part of the hardware that creates graphics in one form or another. To render things with it, one often uses some form of graphics API. This page focuses on how today's GPUs work; for past generations, look at the GPU History page. Here is a list of the major manufacturers of GPUs.

Overview

Note: This part is a work in progress, so some of it may be incorrect :)

To confuse everyone, including each other, the manufacturers all use different names for (almost) the same things in a GPU. So I use my own names here, and you can use the GPU Dictionary below to find out what each vendor calls each thing.

A GPU is made up of many small custom processors (GPUCores) that execute programs in the form of shaders. A GPUCore can execute all types of shaders, such as vertex shaders, pixel shaders or compute shaders.

Much of the work a GPU does is uniform. The same vertex shader needs to run on all the vertices in a mesh, and all the pixels in a triangle need to run the same pixel shader. The input for each one is different, but the code to run is the same. To exploit that, the GPUCores are grouped together and each group is controlled by a GPUCoreMaster. All the cores in a group run the same code and execute it in lock-step with each other, so if there are 32 cores in a group it runs 32 pixel shaders (or 32 vertex shaders) at the same time. This is known as a SIMT (Single Instruction, Multiple Threads) architecture. If you only draw a cube with 8 vertices, only 8 cores in the group do meaningful work.
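
To make the lock-step idea concrete, here is a minimal CUDA sketch (the kernel name and data are mine, chosen only for illustration). Every thread runs the same code and only its index, and therefore its data, differs; on NVIDIA hardware the threads of a block are executed in waves of 32.

```cuda
#include <cstdio>

// Every thread runs the same code; only its index (and therefore its data)
// differs. The threads of a block are executed in groups of 32 (a wave)
// that step through the instructions in lock-step.
__global__ void scale_positions(float* positions, float factor, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x; // unique index per thread
    if (i < count)                                 // threads past the end idle
        positions[i] *= factor;
}

int main()
{
    const int count = 8;                 // e.g. the 8 vertices of a cube
    float* positions;
    cudaMallocManaged(&positions, count * sizeof(float));
    for (int i = 0; i < count; ++i)
        positions[i] = float(i);

    // One block of 32 threads: a single wave, of which only 8 threads
    // have meaningful work to do for an 8-vertex cube.
    scale_positions<<<1, 32>>>(positions, 2.0f, count);
    cudaDeviceSynchronize();

    for (int i = 0; i < count; ++i)
        printf("%f\n", positions[i]);
    cudaFree(positions);
    return 0;
}
```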

While executing, a branch like an if statement might send the cores down different paths in the code. That is known as divergence. The cores keep running in lock-step and all of them step through the code inside the if statement; the cores for which the condition failed are masked out and throw away the result. When they exit the if statement the cores converge and the masked-out cores are activated again. Divergence lowers performance, as some of the cores do wasteful work that is thrown away in the end.
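
As a sketch of what divergence looks like in code, here is a hypothetical CUDA kernel (not taken from any real shader). Every wave that contains both even and odd indices has to execute both sides of the branch, with half of its cores masked out each time:

```cuda
// Hypothetical kernel to illustrate divergence inside one wave.
// Even-indexed threads take the if branch, odd-indexed threads take the
// else branch, so the wave runs both paths one after the other.
__global__ void divergent(float* data, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count)
        return;

    if ((i & 1) == 0)
        data[i] = data[i] * 2.0f; // even threads active, odd threads masked out
    else
        data[i] = data[i] + 1.0f; // odd threads active, even threads masked out
}
```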

A stall is when a core has to wait before it can run its next instruction. A common example is sampling a texture and waiting for the data to come back from memory. As the cores in a group run in lock-step, they all have to wait until everyone has received their result. This is solved with a form of threading: switching to other threads while waiting for the result of an operation. As the cores run in lock-step, all the threads in the group need to be switched out at the same time. Each core runs a thread, and all the threads running on a group in lock-step with each other are called a Wave. When a wave stalls, the GPUCoreMaster can switch to one of the other waves that are resident on the group.
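
How well stalls can be hidden depends on how many waves are resident on a group at the same time. As a rough illustration, this CUDA sketch (the kernel and the block size of 256 are arbitrary choices of mine) asks the driver how many waves of a given kernel can be resident on one GPUCoreMaster (an SM in NVIDIA terms):

```cuda
#include <cstdio>

__global__ void sample_kernel(const float* in, float* out, int count)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < count)
        out[i] = in[i] * 0.5f; // global memory read that a wave may stall on
}

int main()
{
    cudaDeviceProp prop;
    cudaGetDeviceProperties(&prop, 0);

    // How many blocks of 256 threads can be resident on one SM at the same
    // time for this kernel; more resident waves means more candidates to
    // switch to when one of them stalls on memory.
    int blocksPerSM = 0;
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, sample_kernel, 256, 0);

    int wavesPerSM = blocksPerSM * 256 / prop.warpSize;
    printf("wave size: %d threads\n", prop.warpSize);
    printf("resident waves per SM for this kernel: %d\n", wavesPerSM);
    return 0;
}
```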

GPU Dictionary

Thread - thread (NVIDIA) / work-item (AMD)

A thread is a single invocation of a program on the GPU. It can be a pixel shader or a vertex shader for example.

GPUCore - CUDA Core (NVIDIA) / Processing Element (AMD)

A single core that executes one thread of a shader program at a time.

Wave - warp (NVIDIA) / wavefront (AMD)

Threads are executed in a group called a wave and all the threads in the wave execute the same instruction in lock-step.

GPUCoreMaster - Streaming Multiprocessor (NVIDIA) / Compute Unit (AMD)

Controls a group of GPUCores and runs waves on them.

Reference

Understanding Latency Hiding on GPUs - 2016

GPU Programming - 2016

Uniform buffers vs texture buffers: The 2015 edition - 2015

Visual Computing Systems - 2014

How the rasterization process works, the RasterizerState object - 2011

From Shader Code to a Teraflop: How GPU Shader Cores Work - 2010

How the GPU works - appendix A - 2009

The Latest Graphics Processing Units - 2009

Scalable Multi Agent Simulation on the GPU - 2009

Bullet: A Case Study in Optimizing Physics Middleware for the GPU - 2009

Next-Generation Graphics DRAM: Challenges and Opportunities - 2009

GPU Pipeline for Everyone - 2008

GPU versus CPU - 2008

A Closer look at GPUs - 2008

How the GPU works - 2008. Part I, Part II and Part III.

[Mobile] Graphics Hardware - 2007

3D Pipeline Of SM3/DX9 GPUs - 2006

PC Hardware Collection

Tiled hardware (speculations) - c0de517e.blogspot.ca

Revisiting The Vertex Cache: Understanding and Optimizing Vertex Processing on the modern GPU - 2018

TL;DR of the paper 'Revisiting The Vertex Cache: Understanding and Optimizing Vertex Processing on the modern GPU' - 2018

How does a GPU shader core work? - 2018

Intro to GPU Scalarization - 2018

The Story of the 3dfx Voodoo1

GPU Architectures

IMG A-Series: the GPU for generation 2020

GPU resources

NVIDIA Ampere Architecture In-Depth

a history of nvidia stream multiprocessor

GPU architecture resources

https://interplayoflight.wordpress.com/2020/05/09/gpu-architecture-resources/

Optimizing for the RDNA architecture: presentation notes

https://interplayoflight.wordpress.com/2020/05/23/optimizing-for-the-rdna-architecture-presentation-notes/

GPU Optimization for GameDev

https://gist.github.com/silvesthu/505cf0cbf284bb4b971f6834b8fec93d#gpu-optimization-for-gamedev

Unified Radeon™ GPU Profiler and Radeon™ Memory Visualizer usage with Radeon™ Developer Panel 2.1

https://gpuopen.com/learn/using_rdp_2-1/

Capturing GPU Work

https://devblogs.microsoft.com/pix/capturinggpuwork/

Does subgroup/wave size matter?

http://jason-blog.jlekstrand.net/2020/10/does-subgroupwave-size-doesnt-matter.html

Loads, Stores, Passes, and Advanced GPU Pipelines

https://developer.oculus.com/blog/loads-stores-passes-and-advanced-gpu-pipelines/

GPU Captures: How we support placed and reserved resources

https://devblogs.microsoft.com/pix/gpu-captures-how-we-support-placed-and-reserved-resources/

Nsight: The Most Important Ampere Tools In Your Utility Belt

https://news.developer.nvidia.com/nsight-the-most-important-ampere-tools-in-your-utility-belt/

Five years of GPU DB

https://anteru.net/blog/2020/five-years-of-gpu-db/