This is similar to the set of changes made to ck_ht. In
addition, a bug was caught where the minimal probe sequence
was based on the last rather than the first tombstone observed.
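For illustration, a minimal sketch of the corrected probe
behavior, assuming a hypothetical open-addressed table; names
and layout here are illustrative, not ck_hs's actual code.

    #include <stddef.h>
    #include <stdint.h>

    /* Hypothetical sentinel for a deleted slot; NULL marks empty. */
    #define TOMBSTONE ((void *)(uintptr_t)1)

    /* Return the slot an insertion should use: the FIRST tombstone
     * seen along the probe sequence, else the terminating empty
     * slot, else NULL if the sequence is exhausted. The bug was
     * equivalent to omitting the first_tombstone == NULL guard
     * below, recording the LAST tombstone and so yielding a longer
     * minimal probe sequence. */
    static void **
    probe_for_insert(void **table, size_t mask, size_t h)
    {
            void **first_tombstone = NULL;
            size_t i;

            for (i = 0; i <= mask; i++) {
                    void **slot = &table[(h + i) & mask];

                    if (*slot == NULL)
                            return first_tombstone != NULL ?
                                first_tombstone : slot;
                    if (*slot == TOMBSTONE && first_tombstone == NULL)
                            first_tombstone = slot;
            }
            return first_tombstone;
    }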
This hashset will now handle long-running bursty
write -> delete -> write workloads much more effectively, and
includes significant performance improvements for delete-heavy
workloads (at least 2x, as measured).
Previously, an insertion had to probe to an empty slot in order
to avoid hash table growth. Now, as long as a tombstone is
available along the probe sequence, it is re-used. This prevents
excessive growth on workloads involving long bursts of writes
(near a 0.5 load factor) followed by long bursts of deletes.
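Continuing the hypothetical sketch above, the insertion policy
then becomes: grow only when the probe sequence yields neither
an empty slot nor a tombstone.

    #include <stdbool.h>

    /* Uses the hypothetical probe_for_insert() sketched earlier. */
    static bool
    insert(void **table, size_t mask, size_t h, void *key)
    {
            void **slot = probe_for_insert(table, mask, h);

            if (slot == NULL)
                    return false; /* no reusable slot: caller grows */

            *slot = key;          /* re-uses a tombstone if available */
            return true;
    }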
CK_LIST_INSERT_HEAD was incorrectly managing the prev pointer
on insertion into a non-empty list. This bug would cause
erroneous behavior on CK_LIST_REMOVE of non-head elements. The
unit test will be updated for this regression.
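As a sketch of the fix, consider a BSD-style list where prev
holds the address of the pointer that points at the element (a
simplified layout, not ck_queue.h's actual macros):

    #include <stddef.h>

    struct node {
            struct node *next;
            struct node **prev; /* address of the pointer to us */
    };

    /* On insertion into a non-empty list, the old head's prev must
     * be repointed at the new element's next field; the bug left
     * it stale. */
    static void
    list_insert_head(struct node **head, struct node *n)
    {
            if ((n->next = *head) != NULL)
                    (*head)->prev = &n->next;
            *head = n;
            n->prev = head;
    }

    /* A stale prev corrupts this unlink for non-head elements. */
    static void
    list_remove(struct node *n)
    {
            if (n->next != NULL)
                    n->next->prev = n->prev;
            *n->prev = n->next;
    }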
An off-by-one was introduced in the downgrade path from a
writer. This can cause deadlock when a writer downgrades its
write lock to a read lock.
Pointed out by Jeffrey Birnbaum <jmb...@...>.
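A sketch of the intended downgrade ordering, assuming a simple
flag-plus-counter rwlock; this is illustrative only, not ck's
actual lock structure or the exact site of the off-by-one.

    #include <ck_pr.h>

    struct rwlock {
            unsigned int writer;
            unsigned int n_readers;
    };

    /* Downgrade: take exactly one read reference for ourselves
     * before releasing write ownership. Miscounting that single
     * reference is the class of off-by-one fixed here; it can
     * leave the lock in a state no subsequent writer can acquire. */
    static void
    rwlock_downgrade(struct rwlock *rw)
    {
            ck_pr_inc_uint(&rw->n_readers);
            ck_pr_fence_store();
            ck_pr_store_uint(&rw->writer, 0);
    }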
Both LLVM-backed compilers and GCC incorrectly treat a
barrier-sandwiched load as a loop invariant in dequeue_spmc.
Forcing volatile atomic load semantics generates the correct
code.
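A reduced sketch of the pattern (not dequeue_spmc itself): the
plain-load version of the loop below may be hoisted as an
invariant, while ck_pr_load_uint carries volatile semantics and
is re-issued on every iteration.

    #include <ck_pr.h>

    /* Spin until a producer-updated index moves past our head. */
    static unsigned int
    wait_until_nonempty(unsigned int *p_tail, unsigned int c_head)
    {
            unsigned int snapshot;

            do {
                    snapshot = ck_pr_load_uint(p_tail);
            } while (snapshot == c_head);

            return snapshot;
    }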
Thanks to Devon O'Dell and Abel Mathew for help in catching
this issue.
The distinction between the additive/exponential implementation
and the geometric implementation does little but confuse users.
The terminology used in ck_backoff now reflects the terminology
used in the literature.
ck_backoff_gb has been removed.
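A usage sketch of the surviving exponential backoff interface;
the initializer name is assumed from ck_backoff.h.

    #include <ck_backoff.h>
    #include <ck_pr.h>

    static void
    spin_until_set(unsigned int *flag)
    {
            ck_backoff_t backoff = CK_BACKOFF_INITIALIZER;

            /* Back off for exponentially increasing intervals
             * while waiting for the flag to be published. */
            while (ck_pr_load_uint(flag) == 0)
                    ck_backoff_eb(&backoff);
    }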
An execution history exists in which the first thread dequeues
from an uninitialized ring buffer. The unit test has been
(un)fortunate in that it busy-waits during dequeue. Full barrier
semantics will enforce visibility of the initialization.
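The fix follows the standard publication pattern, sketched below
with illustrative names: initialize fully, fence, then signal
readiness.

    #include <ck_pr.h>

    struct ring_state {
            void *buffer[128];
            unsigned int ready;
    };

    static void
    publish(struct ring_state *r)
    {
            /* ... fully initialize r->buffer here ... */

            /* Full barrier: a consumer that observes ready != 0
             * must also observe the initialized buffer contents. */
            ck_pr_fence_memory();
            ck_pr_store_uint(&r->ready, 1);
    }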
Reported by: Maxime Henrion <mhenrion@gma....>.