This is similar to the set of changes done to ck_ht.
In addition to this, a bug was caught where the minimal
probe sequence was based on the last rather than the first
tombstone observed.
This hashset now handles long-running bursty
write -> delete -> write workloads much more effectively,
and includes significant performance improvements for
delete-heavy workloads (at least a 2x improvement was measured).
Previously, an insertion had to probe to an empty slot, so a
probe sequence full of tombstones would still trigger hash table
growth. Now, as long as a tombstone is available along the probe
sequence, it is re-used. This prevents excessive growth on workloads
involving long bursts of writes (near 0.5 load factor) followed by
long bursts of deletes, as sketched below.
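To make the mechanism concrete, here is a minimal sketch of
tombstone re-use in a hypothetical linear-probing table. It is
illustrative only and is not ck_hs's actual implementation: EMPTY,
TOMBSTONE, struct table and table_insert_slot are all invented for
the example, and keys are compared by pointer identity for brevity.

    #include <stddef.h>
    #include <stdint.h>
    #include <sys/types.h> /* ssize_t */

    #define EMPTY     ((void *)0)
    #define TOMBSTONE ((void *)(uintptr_t)1)

    struct table {
            void **slots;
            size_t mask; /* capacity - 1; capacity is a power of two */
    };

    /*
     * Returns the slot index chosen for key, or -1 if no slot is
     * available and the table must grow.
     */
    static ssize_t
    table_insert_slot(struct table *t, void *key, size_t hash)
    {
            ssize_t first_tombstone = -1;
            size_t i;

            for (i = 0; i <= t->mask; i++) {
                    size_t cursor = (hash + i) & t->mask;
                    void *slot = t->slots[cursor];

                    /*
                     * Remember the first tombstone on the probe
                     * sequence; basing this on the last one observed
                     * was the bug described above.
                     */
                    if (slot == TOMBSTONE) {
                            if (first_tombstone == -1)
                                    first_tombstone = (ssize_t)cursor;
                            continue;
                    }

                    /*
                     * An empty slot terminates the probe: the key is
                     * absent. Prefer re-using the tombstone so bursts
                     * of deletes don't force growth.
                     */
                    if (slot == EMPTY) {
                            if (first_tombstone != -1)
                                    return first_tombstone;
                            return (ssize_t)cursor;
                    }

                    if (slot == key)
                            return (ssize_t)cursor; /* already present */
            }

            /* No empty slot, but a tombstone still permits insertion. */
            return first_tombstone;
    }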
The distinction between the additive/exponential implementation
and the geometric implementation does little but confuse users.
The terminology used in ck_backoff now reflects the terminology
used in the literature.
ck_backoff_gb has been removed.
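Usage is otherwise unchanged. Below is a minimal sketch of a
test-and-set lock spinning with the remaining exponential backoff
primitive, assuming the ck_backoff_eb and CK_BACKOFF_INITIALIZER
interface from ck_backoff.h; lock_acquire and lock_release are
invented for the example.

    #include <stdbool.h>
    #include <ck_backoff.h>
    #include <ck_pr.h>

    static unsigned int lock;

    static void
    lock_acquire(void)
    {
            ck_backoff_t backoff = CK_BACKOFF_INITIALIZER;

            /* Spin with exponential backoff until the CAS succeeds. */
            while (ck_pr_cas_uint(&lock, 0, 1) == false)
                    ck_backoff_eb(&backoff);

            ck_pr_fence_acquire();
    }

    static void
    lock_release(void)
    {
            ck_pr_fence_release();
            ck_pr_store_uint(&lock, 0);
    }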
This allows us to wait on only two rather than three epoch-protected
sections, which is a considerable performance improvement. Thanks
much to Richard Bornat for clarifications and feedback.
I had the pleasure of spending a significant amount of time at the most
recent LPC with Mathieu Desnoyers and Paul McKenney. In discussing
RCU semantics in relation to epoch reclamation, it was argued that
epoch reclamation is a specialization of RCU (rather than a generalization).
In light of this discussion, I thought it would make more sense to not expose
write-side synchronization semantics aside from ck_epoch_call (similar to
RCU's call_rcu), ck_epoch_poll (identical to tick), ck_epoch_barrier and
ck_epoch_synchronize (similar to RCU's synchronize_rcu). Writers will
no longer have to use write-side epoch sections but can instead rely on
ck_epoch_barrier/ck_epoch_synchronize for blocking semantics and
ck_epoch_poll for old tick semantics.
One advantage of this is that we can avoid write-side recursion for certain
workloads. Additionally, for infrequent writes, ck_epoch_barrier and
ck_epoch_synchronize both allow blocking semantics to be used, so you don't
have to pay the cost of a ck_epoch_entry for non-blocking dispatch.
Example usage:

    e = stack_pop(mystack);
    ck_epoch_synchronize(...);
    free(e);
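Fleshed out slightly below. This is a hedged sketch: exact ck_epoch
prototypes have varied across CK versions (the epoch/record argument
pairing here is an assumption), and stack_pop/mystack remain the
hypothetical stack from the example above.

    #include <ck_epoch.h>
    #include <stdlib.h>

    struct node;                           /* application data */
    extern struct node *stack_pop(void *); /* hypothetical, as above */
    extern void *mystack;

    static ck_epoch_t epoch;         /* ck_epoch_init(&epoch) at startup */
    static ck_epoch_record_t record; /* registered per-thread via
                                      * ck_epoch_register */

    static void
    writer_blocking(void)
    {
            struct node *e = stack_pop(mystack);

            /*
             * Block until every reader that could still hold a
             * reference to e has left its epoch section.
             */
            ck_epoch_synchronize(&epoch, &record);
            free(e);
    }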
read_begin and read_end have been replaced with ck_epoch_begin and ck_epoch_end.
If multiple writers need SMR guarantees, then they can also use ck_epoch_begin
and ck_epoch_end. Any dispatch in the presence of multiple writers should be
done within an epoch section (for now), as sketched below.
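Concretely, under the same hedged assumptions as the sketch above
(prototypes vary across CK versions, and the epoch and record objects
are the ones declared there), reader sections and non-blocking
dispatch look roughly like this:

    /* Reader: perform lookups inside an epoch section. */
    static void
    reader(void)
    {
            ck_epoch_begin(&epoch, &record);
            /* ... read-only traversal ... */
            ck_epoch_end(&epoch, &record);
    }

    /*
     * Non-blocking dispatch: embed a ck_epoch_entry_t in the object
     * and defer destruction with ck_epoch_call. With multiple
     * writers, the dispatch itself happens within an epoch section
     * (for now).
     */
    struct object {
            ck_epoch_entry_t epoch_entry; /* first member, so freeing
                                           * the entry frees the object */
            /* ... application data ... */
    };

    static void
    object_destroy(ck_epoch_entry_t *entry)
    {
            free(entry);
    }

    static void
    writer_nonblocking(struct object *o)
    {
            ck_epoch_begin(&epoch, &record);
            ck_epoch_call(&epoch, &record, &o->epoch_entry, object_destroy);
            ck_epoch_end(&epoch, &record);
    }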
There are some follow-up commits to come.