ck_ec implements 32- and (on 64-bit platforms) 64-bit event
counts. Event counts let us easily integrate OS-level blocking (e.g.,
futexes) in lock-free protocols. Waking up waiters only takes locks inside
the OS kernel, and does not happen at all when no waiter is blocked.
Waiters only block conditionally, if the event count's value is
still equal to some prior value.
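A rough sketch of that protocol, using C11 atomics and a raw Linux futex
call; the names and layout below are illustrative and are not the ck_ec
interface:

#define _GNU_SOURCE
#include <limits.h>
#include <stdatomic.h>
#include <stdint.h>
#include <linux/futex.h>
#include <sys/syscall.h>
#include <unistd.h>

struct ec {
    _Atomic uint32_t counter;   /* event count value */
    _Atomic uint32_t waiters;   /* non-zero while a waiter may be blocked */
};

/* Consumer: block until the value moves past old_value. */
static void
ec_wait(struct ec *e, uint32_t old_value)
{

    atomic_fetch_add(&e->waiters, 1);
    while (atomic_load(&e->counter) == old_value) {
        /* The kernel re-checks the word and only sleeps if it is
         * still equal to old_value. */
        syscall(SYS_futex, &e->counter, FUTEX_WAIT,
            old_value, NULL, NULL, 0);
    }
    atomic_fetch_sub(&e->waiters, 1);
    return;
}

/* Producer: publish an event; wake sleepers only if any may exist. */
static void
ec_inc(struct ec *e)
{

    atomic_fetch_add(&e->counter, 1);
    if (atomic_load(&e->waiters) != 0)
        syscall(SYS_futex, &e->counter, FUTEX_WAKE,
            INT_MAX, NULL, NULL, 0);
    return;
}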
ck_ec supports multiple producers (wakers) and consumers (waiters),
and, on x86-TSO, has a more efficient specialisation for single
producer mode. In the latter mode, the overhead compared to a version
counter is on the order of 2-3 cycles and 1-2 instructions in the
fast path. The slow path, when there are threads blocked on the event
count, consists of one additional atomic instruction and a futex
syscall.
Similarly, the fast path for consumers, when an update comes quickly,
has no overhead compared to spinning on a read-only counter. After
a few thousand cycles, consumers (waiters) enter the slow path with
one atomic instruction and a few blocking syscalls.
The single-producer specialisation requires the x86-TSO memory model,
x86's non-atomic read-modify-write instructions, and, ideally, a
futex-like OS abstraction. On non-x86/x86_64 platforms, single producer
increments fall back to the multiple producer code path.
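A rough sketch of what such a single-producer increment can look like on
x86-64 (illustrative only, not the ck_ec code): with a single writer, the
read-modify-write does not need a lock prefix, and x86-TSO keeps the
resulting store visible to readers in order.

#include <stdint.h>

/* Non-atomic read-modify-write increment: no lock prefix. */
static inline void
sp_inc(uint32_t *counter)
{

    __asm__ __volatile__("incl %0"
        : "+m" (*counter)
        :
        : "memory", "cc");
    return;
}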
Fixes https://github.com/concurrencykit/ck/issues/79
This work is from jtl@FreeBSD.org. FreeBSD expects to call ck_epoch_poll
from a record that is in an active section. Previously, it was
considered an API violation to call write-side functions while in a read
section.
This is now permitted for poll as we serialize behind the global
epoch counter.
Note that these functions are not reentrant. In the case of the
FreeBSD kernel, all these functions are called with preemption disabled.
This work is from Jonathan T. Looney of the FreeBSD project
(jtl@).
The return value of ck_epoch_poll has also changed. It returns false
only if the epoch counter has not progressed, no memory was reclaimed
(in other words, no forward progress was made), and/or not all threads
have been observed in a quiescent state (grace period).
Below are his notes:
Epoch calls are stored in a 4-bucket hash table. The 4-bucket hash table
allows for calls to be stored for three epochs: the current epoch and
two previous ones. The comments at the beginning of ck_epoch.c explain
why this is necessary.
When there are active threads, ck_epoch_poll_deferred() currently runs the
epoch calls for the current global epoch + 1. Because of modulo
arithmetic, this is equivalent to running the calls for epoch - 3.
However, this means that ck_epoch_poll_deferred() is waiting
unnecessarily long to run epoch calls.
Further, there could be races in incrementing the global epoch. Imagine
all active threads have observed epoch n. CPU 0 sees this. It increments
the global epoch to (n + 1). It runs the epoch calls for (n - 3). Now,
CPU 1 checks. It sees that there are active threads which have not yet
observed the new global epoch (n + 1). In this case,
ck_epoch_poll_deferred() will return without running any epoch calls. In the
worst case (CPU 1 continually losing the race), these epoch calls could
be deferred indefinitely.
To fix this, always run any epoch calls for global epoch - 2. Further,
if all active threads have observed the global epoch, run epoch calls
for global epoch - 1.
The global epoch is only incremented when all active threads have
observed it. Therefore, all active threads must always have observed
global epoch - 1 or the current global epoch. Accordingly, it is safe to
always run epoch calls for global epoch - 2.
Further, if all active threads have observed the global epoch, it is
safe to run epoch calls for global epoch - 1.
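A sketch of the resulting dispatch rule, assuming the bucket index is the
epoch modulo CK_EPOCH_LENGTH (4); the real logic in ck_epoch_poll_deferred()
also advances the global epoch and maintains per-record state:

#include <stdbool.h>

#define CK_EPOCH_LENGTH 4   /* deferred-call buckets */

/* Stand-in for dispatching the deferred calls stored in one bucket. */
static void
run_bucket(unsigned int bucket)
{

    (void)bucket;
    return;
}

static void
poll_sketch(unsigned int global_epoch, bool all_threads_observed)
{

    /*
     * Every active thread has observed global_epoch or global_epoch - 1,
     * so calls deferred during global_epoch - 2 can no longer be
     * referenced and are always safe to run.
     */
    run_bucket((global_epoch - 2) & (CK_EPOCH_LENGTH - 1));

    /*
     * If all active threads have observed the current global epoch,
     * calls deferred during global_epoch - 1 are safe as well.
     */
    if (all_threads_observed)
        run_bucket((global_epoch - 1) & (CK_EPOCH_LENGTH - 1));
    return;
}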
This primarily affects the FreeBSD kernel, where the popcount builtin
can be problematic (relies on compiler-provided libraries). See the
history of __POPCNT__ for details [1].
- A new flag, CK_MD_CC_BUILTIN_DISABLE, can be set to indicate that CK
should avoid relying on compiler builtins where possible.
- ck_cc_clz has been removed; it was unused.
- ck_internal_bsf has been removed; it was a broken duplicate of ck_cc_ffs and
has been replaced in favor of ck_cc_ffs. Previous consumers were using the
bsf instruction either way.
- ck_{rhs,hs,ht} have been updated to use ck_cc_ffs*.
If FreeBSD requires the builtins for performance reasons, we will lift the
appropriate detection into ck_md (at least the bt*/bs* family of functions
does not have the same problems as popcount on most targets).
1: https://lists.freebsd.org/pipermail/svn-src-head/2015-March/069663.html
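For illustration, a builtin-free find-first-set of the kind ck_cc_ffs can
fall back to when CK_MD_CC_BUILTIN_DISABLE is set might look as follows
(a sketch, not the exact implementation in include/ck_cc.h; ffs(3)
semantics: 1-based index of the least significant set bit, 0 for zero):

static int
generic_ffs(unsigned int v)
{
    int i = 1;

    if (v == 0)
        return 0;

    while ((v & 1) == 0) {
        v >>= 1;
        i++;
    }

    return i;
}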
Annotate fall through cases in switch statements where that behavior is
desirable to quiet compiler warnings with the -Wimplicit-fallthrough
flag. The annotation format used is supported by both GCC and Clang.
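For example (the exact spelling used in the tree may differ), the attribute
form below is accepted by recent GCC and Clang, and GCC's
-Wimplicit-fallthrough also recognizes suitably worded comments:

#include <stdio.h>

static void
describe(int op)
{

    switch (op) {
    case 0:
        puts("zero");
        __attribute__((fallthrough));   /* deliberate fall through */
    case 1:
        puts("zero or one");
        break;
    default:
        puts("other");
        break;
    }

    return;
}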
Fixes #108.
Memoize the map into ck_hs_iterator_t to make iteration safer in the face of
growth or shrinkage of the map. Tests for same.
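A usage sketch, assuming the ck_hs_iterator_init()/ck_hs_next() interface;
memoizing the map in the iterator is what keeps a traversal like this safe
while the set grows or shrinks underneath it:

#include <ck_hs.h>

/* Visit every key visible to this iterator. */
static void
visit_all(struct ck_hs *hs)
{
    ck_hs_iterator_t iterator;
    void *key;

    ck_hs_iterator_init(&iterator);
    while (ck_hs_next(hs, &iterator, &key) == true) {
        /* ... use key ... */
    }

    return;
}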
Work from Riley Berton.
This is in preparation for upcoming work to allow record sharing.
The write-side operations rely only on global state. As future work, we can
play tricks by caching the latest call epoch while still building on the core
EBR concept.
An idle grace period requires all threads to be idle. This optimization
introduced a regression with idle detection if a subset of threads is
both active and idle. Unfortunately, none of our test machines detected
the problem.
This issue was reported by Julie Zhao <julie.zhao@sparkpos....>
- ck_epoch_begin: Disallow early load of epoch as it leads to measurable
performance degradation in some benchmarks.
- ck_epoch_synchronize: Enforce barrier semantics.
The default value is still 50, but that may be revisited later.
Also, pre-calculate the max number of entries before growing, to avoid
having to do it at each insert.
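To illustrate the second point with hypothetical names (assuming the 50
above is a percentage load factor), the grow threshold can be derived once
per resize rather than once per insert:

#include <stdbool.h>

struct table {
    unsigned long capacity;     /* number of slots */
    unsigned long n_entries;    /* entries currently stored */
    unsigned long max_entries;  /* precomputed grow threshold */
    unsigned int load_factor;   /* percentage, e.g. 50 */
};

static void
table_resized(struct table *t, unsigned long capacity)
{

    t->capacity = capacity;
    /* Computed once per resize instead of at each insert. */
    t->max_entries = (capacity * t->load_factor) / 100;
    return;
}

static bool
table_needs_grow(const struct table *t)
{

    return t->n_entries >= t->max_entries;
}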
We use some macro trickery to enforce that ck_pr_store_* is actually
storing the correct type into the target variable, without any actual
side effects: the assignment appears only as an rvalue inside a comma
expression, so the compiler optimizes it away.
On the load side, we simply cast the result to the type of the target
variable for pointer loads.
There is an unsafe version of the store_ptr macro called
ck_pr_store_ptr_unsafe for those times when you are _really_ sure that
you know what you're doing.
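A self-contained sketch of one way to get that effect, with hypothetical
names (the real macros in include/ck_pr.h dispatch to per-type atomic
primitives; a plain volatile store stands in for them here):

/*
 * "*(DST) = (VAL)" is only the operand of sizeof, so it is never
 * evaluated (no side effects), yet the compiler still rejects the
 * expansion if VAL is not assignable to *DST.
 */
#define STORE_PTR_SAFE(DST, VAL)                                \
    ((void)sizeof(*(DST) = (VAL)),                              \
     (void)(*(void *volatile *)(DST) = (void *)(VAL)))

/* Unsafe variant: same store, no compile-time type check. */
#define STORE_PTR_UNSAFE(DST, VAL)                              \
    ((void)(*(void *volatile *)(DST) = (void *)(VAL)))

/* Load side: cast the loaded pointer back to the target's own type. */
#define LOAD_PTR(SRC)                                           \
    ((__typeof__(*(SRC)))*(void *volatile *)(SRC))

static int value;
static int *slot;

static void
example(void)
{
    int *p;

    STORE_PTR_SAFE(&slot, &value);  /* int * into int *: accepted */
    /* STORE_PTR_SAFE(&slot, 1.0);     rejected at compile time */

    p = LOAD_PTR(&slot);
    (void)p;
    return;
}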
This commit also updates some of the source files (ck_ht, ck_hs,
ck_rhs): ck_ht now uses the unsafe macro, as its conversion between
uintptr_t and void * is invalid under the new macros. ck_hs and ck_rhs
have had some casts added to preserve validity.