Concurrency
ISO C++
library
Design

Interface to Locks and Mutexes

The file <ext/concurrence.h> contains all the higher-level
constructs for playing with threads. In contrast to the atomics layer,
the concurrence layer consists largely of types. All types are defined
within namespace __gnu_cxx.
These types can be used in a portable manner, regardless of the
specific environment. They are carefully designed to provide optimum
efficiency and speed, abstracting out underlying thread calls and
accesses when compiling for single-threaded situations (even on hosts
that support multiple threads.)
The enumerated type _Lock_policy details the set of available locking
policies: _S_single, _S_mutex, and _S_atomic.

_S_single
Indicates single-threaded code that does not need locking.

_S_mutex
Indicates multi-threaded code using thread-layer abstractions.

_S_atomic
Indicates multi-threaded code using atomic operations.
The compile-time constant __default_lock_policy is set
to one of the three values above, depending on characteristics of the
host environment and the current compilation flags.
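For illustration, code can branch on the detected policy. The helper
below is a hypothetical sketch written for this document, not a
facility provided by the library.

#include <ext/concurrence.h>

// Hypothetical sketch: report whether this build needs locking at all.
inline bool
needs_locking()
{ return __gnu_cxx::__default_lock_policy != __gnu_cxx::_S_single; }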
Two more datatypes make up the rest of the
interface: __mutex, and __scoped_lock.
The scoped lock idiom is well-discussed within the C++
community. This version takes a __mutex reference, locks it during
construction of the __scoped_lock, and unlocks it during
destruction. This is an efficient way of locking critical sections
while retaining exception safety.
Interface to Atomic Functions
Two functions and one type form the base of atomic support.
The type _Atomic_word is a signed integral type
supporting atomic operations.
The two functions are:
_Atomic_word
__exchange_and_add_dispatch(volatile _Atomic_word*, int);
void
__atomic_add_dispatch(volatile _Atomic_word*, int);
Both of these functions are declared in the header file
<ext/atomicity.h>, and are in namespace __gnu_cxx.
__exchange_and_add_dispatch
Adds the second argument to the value pointed to by the first
argument, and returns the old value.

__atomic_add_dispatch
Adds the second argument to the value pointed to by the first
argument; it has no return value.
These functions forward to one of several specialized helper
functions, depending on the circumstances. For instance,
__exchange_and_add_dispatch calls through to either of:

__exchange_and_add
Multi-thread version. Inlined if compiler-generated builtin atomics
can be used, otherwise resolved at link time to a non-builtin code
sequence.

__exchange_and_add_single
Single-threaded version. Inlined.

However, only __exchange_and_add_dispatch
and __atomic_add_dispatch should be used. These functions
can be used in a portable manner, regardless of the specific
environment. They are carefully designed to provide optimum efficiency
and speed, abstracting out atomic accesses when they are not required
(even on hosts that support compiler intrinsics for atomic
operations.)
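As an illustration, the following is a minimal sketch of an atomically
maintained reference count built on the two dispatch functions; the
class and its member names are assumptions made for this example, not
library facilities.

#include <ext/atomicity.h>

// Minimal sketch of a reference count using the dispatch functions.
// The name _Ref_count is illustrative only.
class _Ref_count
{
  _Atomic_word _M_refs;

public:
  _Ref_count() : _M_refs(1) { }

  void
  _M_acquire()
  { __gnu_cxx::__atomic_add_dispatch(&_M_refs, 1); }

  // Returns true when the last reference has been dropped.
  bool
  _M_release()
  { return __gnu_cxx::__exchange_and_add_dispatch(&_M_refs, -1) == 1; }
};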
In addition, there are two macros:

_GLIBCXX_READ_MEM_BARRIER
_GLIBCXX_WRITE_MEM_BARRIER

These expand to the appropriate read and write memory barriers
required by the host hardware and operating system.
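As a rough illustration of their use in pre-C++11 code, the following
sketch publishes data through a flag. It assumes the macros are made
available via <ext/atomicity.h> on the host in question, and it is not
a complete synchronization protocol.

#include <ext/atomicity.h>

// Illustrative sketch only.
int data;
volatile int ready;

void
publish()
{
  data = 42;
  _GLIBCXX_WRITE_MEM_BARRIER;   // make the data visible before the flag
  ready = 1;
}

void
consume()
{
  if (ready)
    {
      _GLIBCXX_READ_MEM_BARRIER; // order the flag read before the data read
      // ... use data ...
    }
}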
Implementation

Using Builtin Atomic Functions

The functions for atomic operations described above are either
implemented via compiler intrinsics (if the underlying host is
capable) or by library fallbacks.

Compiler intrinsics (builtins) are always preferred. However, as
the compiler builtins for atomics are not universally implemented,
using them directly is problematic, and can result in undefined
function calls. (An example of an undefined symbol from the use
of __sync_fetch_and_add on an unsupported host is a
missing reference to __sync_fetch_and_add_4.)
In addition, on some hosts the compiler intrinsics are enabled
conditionally, via the -march command-line flag. This makes usage vary
depending on the target hardware and the flags used during
compilation.
If builtins are possible for bool-sized integral types,
_GLIBCXX_ATOMIC_BUILTINS_1 will be defined.
If builtins are possible for int-sized integral types,
_GLIBCXX_ATOMIC_BUILTINS_4 will be defined.
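For example, code that wants to call the int-sized builtin directly
could key off that macro and fall back to a mutex otherwise. The
helper below is a sketch written under that assumption; the function
name is not part of the library.

#include <ext/concurrence.h>

// Hypothetical helper: use the builtin when available, otherwise
// serialize the update with a mutex.
int
fetch_and_add(int* p, int val)
{
#ifdef _GLIBCXX_ATOMIC_BUILTINS_4
  return __sync_fetch_and_add(p, val);
#else
  static __gnu_cxx::__mutex m;
  __gnu_cxx::__scoped_lock lock(m);
  int old = *p;
  *p += val;
  return old;
#endif
}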
For the following hosts, intrinsics are enabled by default:

alpha
ia64
powerpc
s390

For others, some form of -march may work. On non-ancient x86 hardware,
-march=native usually does the trick.

For hosts without compiler intrinsics, but with capable hardware,
hand-crafted assembly is selected. This is the case for the following
hosts:

cris
hppa
i386
i486
m68k
mips
sparc

For the rest, a simulated atomic lock via pthreads is used.
Detailed information about compiler intrinsics for atomic operations can be found in the GCC documentation.
More details on the library fallbacks can be found in the porting section.
Thread Abstraction

A thin layer above IEEE 1003.1 (i.e., pthreads) is used to abstract
the thread interface for GCC. This layer is called "gthread" and
consists of one header file that wraps the host's default thread layer
with a POSIX-like interface.
The file <gthr-default.h> points to the deduced wrapper for
the current host. In libstdc++ implementation files,
<bits/gthr.h> is used to select the proper gthreads file.
Within libstdc++ sources, all calls to underlying thread functionality
use this layer. More detail as to the specific interface can be found in the source documentation.
By design, the gthread layer is interoperable with the types,
functions, and usage found in the usual <pthread.h> file,
including pthread_t, pthread_once_t, pthread_create,
etc.
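The following sketch shows the shape of that interface as it is used
inside the library. It assumes a host where the static initializer
__GTHREAD_MUTEX_INIT is provided (some hosts supply an initialization
function instead); user code would normally prefer the
<ext/concurrence.h> types over calling this layer directly.

#include <bits/gthr.h>

// Sketch of direct gthread usage, for illustration only.
namespace
{
  __gthread_mutex_t mtx = __GTHREAD_MUTEX_INIT;
}

void
locked_work()
{
  // __gthread_active_p() is false in single-threaded programs, so the
  // locking calls can be skipped entirely.
  if (__gthread_active_p())
    __gthread_mutex_lock(&mtx);

  // ... critical section ...

  if (__gthread_active_p())
    __gthread_mutex_unlock(&mtx);
}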
Use

Typical usage of the last two constructs is demonstrated as follows:
#include <ext/concurrence.h>

namespace
{
  __gnu_cxx::__mutex safe_base_mutex;
} // anonymous namespace

namespace other
{
  // max, __iter, and _Safe_iterator_base are assumed to be provided by
  // the surrounding code; this excerpt only demonstrates the locking.
  void
  foo()
  {
    __gnu_cxx::__scoped_lock sentry(safe_base_mutex);
    for (int i = 0; i < max; ++i)
      {
        _Safe_iterator_base* __old = __iter;
        __iter = __iter->_M_next;
        __old->_M_detach_single();
      }
  }
} // namespace other
In this sample code, an anonymous namespace is used to keep
the __mutex private to the compilation unit,
and __scoped_lock is used to guard access to the critical
section within the for loop, locking the mutex on creation and unlocking
the mutex as control moves out of this block.
Several exception classes are used to keep track of
concurrence-related errors. These classes
are: __concurrence_lock_error, __concurrence_unlock_error, __concurrence_wait_error,
and __concurrence_broadcast_error.
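A brief sketch of handling one of these follows; it assumes, as in the
current <ext/concurrence.h>, that the exception types derive from
std::exception so that what() is available.

#include <ext/concurrence.h>
#include <iostream>

// Illustrative only: a failed lock operation surfaces as a
// __concurrence_lock_error, which can be reported via what().
void
guarded_work(__gnu_cxx::__mutex& m)
{
  try
    {
      __gnu_cxx::__scoped_lock lock(m);
      // ... critical section ...
    }
  catch (const __gnu_cxx::__concurrence_lock_error& e)
    {
      std::cerr << e.what() << '\n';
    }
}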