Synchronization

Atomic

An atomic operation is one that completes in a single step relative to other threads. No other thread can access the same memory and find the result of the operation half-complete. Atomic operations form the basis of many synchronization primitives.

Atomic Loads and Stores

These operations let you read or write a variable with the guarantee that you see its correct value, never one that has been half changed. If its value is 0xFF01 and some other thread writes 0xAA02, both threads must use atomic load and store operations to make sure neither reads a torn value such as 0xFF02 or 0xAA01.

Atomic Read-modify-write (RMW)

An atomic RMW reads a variable, modifies it, and writes the result back, all in a single step that cannot be interrupted by another thread. C++11 provides a number of these operations, but the important one is compare_exchange. It sets a variable to the desired value, but only if its current value matches an expected one. If the value is something else, the function does not change the variable and instead reports the current value.

That can be used in a while loop, a compare-and-swap (CAS) loop, to perform arbitrary read-modify-write operations on a variable:

uint32_t oldValue = shared.load();

while (!shared.compare_exchange_weak(oldValue, oldValue + 1)) {}

You Can Do Any Kind of Atomic Read-Modify-Write Operation - 2015

Comparison: Lockless programming with atomics in C++ 11 vs. mutex and RW-locks - 2015

Atomic vs. Non-Atomic Operations - 2013

The Happens-Before Relation - 2013

The Synchronizes-With Relation - 2013

Acquire and Release Fences - 2013

Double-Checked Locking is Fixed In C++11 - 2013

Acquire and Release Fences Don't Work the Way You'd Expect - 2013

Memory Models

Memory Reordering Caught in the Act - 2012

Memory Ordering at Compile Time - 2012

Memory Barriers Are Like Source Control Operations - 2012

Acquire and Release Semantics - 2012

Weak vs. Strong Memory Models - 2012

This Is Why They Call It a Weakly-Ordered CPU - 2012

C++ Memory Model, Martin Kempf - 2012

How does Java do it? Motivation for C++ programmers - 2008

C++ atomics and memory ordering - 2008

Memory Models and Synchronization - 2008

The Intel x86 Memory Ordering Guarantees and the C++ Memory Model - 2008

Foundations of the C++ Concurrency Memory Model - 2008

Intel and AMD Define Memory Ordering - 2007

Memory Model = Instruction Reordering + Store Atomicity - 2006

The Purpose of memory_order_consume in C++11 - 2014

Fixing GCC's Implementation of memory_order_consume - 2014

Memory Barriers: a Hardware View for Software Hackers - 2010

Memory Models: A Case for Rethinking Parallel Languages and Hardware - 2010

Lessons learnt while spinning - 2011

Understanding Memory Ordering - 2012

Understanding Atomic Operations - 2011

Optimizing the recursive read-write spinlock - 2014

Implementing a recursive read-write spinlock - 2014

Synchronizable objects for C++ - 2010

Exploit parallelism with the least effort - 2010

Synchronization

Mutex - 'Think of it like a Talking stick'

A mutex (mutual exclusion) is used to ensure that a shared resource is used by only one thread at any point in time. Only one thread at a time can own the mutex, and while owning it the thread may use the shared resource. When it is done it should release the mutex again so another thread can acquire it. To start owning a mutex, wait for it; to stop, release it. While waiting, the thread is blocked from progressing, so the system may swap out the thread.

Semaphores - 'It's a bouncer that limits the number of threads in the party'

A semaphore puts a limit on the number of threads that can use the same resource. When it is created, one sets the maximum number of threads that can use it at the same time. Each time a thread waits on it, the count decreases, and each release increases the count. If the count is zero, the wait function waits for a release before the thread can continue. While waiting, the thread is blocked from progressing, so the system may swap out the thread.

Spinlock - 'Are we there yet?'

A spinlock limits the use of a shared resource to a single thread. Unlike a mutex, it does not surrender its CPU time on its own. Instead, it keeps waiting (spinning) in place for the lock. The idea is that if the lock will be released soon, it is better to wait for it than to switch out the thread and lose its allotted CPU time.

Critical sections

Events