Cache
Think of it as fast memory for the data you are working on right now.
- CPU caches are pools of memory that store information the CPU is most likely to need next.
- When the CPU reads a memory address, it first looks in the cache for the data.
- If the data is in the cache, it's called a cache hit, and the CPU can move on without accessing main memory.
- If it's not there, it's called a cache miss, and the CPU has to wait for the data to be retrieved from main memory. During a cache miss the CPU is more or less idle, wasting time doing nothing useful.
- When reading from main memory, the CPU fetches a whole chunk, called a cache line, and stores it in the cache. So if the CPU accesses memory close to the last access, there is a good chance that memory is already in the cache.
- Common cache line sizes are 32 or 64 bytes.
- There is often more than one cache, named L1, L2, and so on. Lower-level caches are faster and smaller than the higher ones, and at the end of the cache chain is main memory.
References
- A Survey of CPU Caches - 2017
- Why do CPUs have multiple cache levels? - 2016
- Performance Optimization, SIMD and Cache - 2015
- Cache And How To Work For It - 2015
- Cache In A Multi-Core Environment - 2015
- What's New in CPUs Since the 80s and How Does It Affect Programmers? - 2015
- Cache coherency primer - 2014
- Modern C++: What You Need to Know - 2014
- Native Code Performance and Memory: The Elephant in the CPU - 2013
- What Every Programmer Should Know About Memory (Parts 1-9) - 2007
- Memory Optimization (Christer Ericson, GDC) - 2003
- Intel's Overview of cache
- Modern Microprocessors: A 90-Minute Guide
- Handmade Hero Day 112 - A Mental Model of CPU Performance
- Cold, Hard Cache: Insomniac's Cache Simulator - 2017
- AoS vs SoA performance - 2019
- L1 cache lines - 2019