ck_epoch_poll: improve reliability and reclaim sooner.

This work is from Jonathan T. Looney from the FreeBSD project
(jtl@).

The return value of ck_epoch_poll has also changed. It now returns false
only when no forward progress was made: the epoch counter has not
advanced, no memory was reclaimed, and not all threads have been
observed in a quiescent state (no grace period was detected).
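
For illustration, a minimal caller sketch under these semantics (the
function name and back-off policy are hypothetical, not part of this
change): since a false return now means no forward progress of any
kind, backing off before retrying is reasonable.

#include <ck_epoch.h>
#include <sched.h>

/* Hypothetical reclamation loop: yield and retry until the poll makes
 * forward progress (advances the epoch, reclaims memory or detects a
 * grace period). */
static void
reclaim_until_progress(ck_epoch_record_t *record)
{

	while (ck_epoch_poll(record) == false)
		sched_yield();

	return;
}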

Below are his notes:
Epoch calls are stored in a 4-bucket hash table. The 4-bucket hash table
allows for calls to be stored for three epochs: the current epoch and
two previous ones. The comments at the beginning of ck_epoch.c explain
why this is necessary.
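
As a sketch of the bucket arithmetic this implies (CK_EPOCH_LENGTH is
the bucket count from ck_epoch.h; the helper name is illustrative): an
epoch value selects its bucket modulo four, so the slots for the
current epoch and the two preceding ones can hold pending calls while
the fourth slot drains.

#define CK_EPOCH_LENGTH 4	/* Bucket count, as in ck_epoch.h. */

/* Map an epoch value to its pending-call bucket. */
static unsigned int
epoch_bucket(unsigned int e)
{

	return e & (CK_EPOCH_LENGTH - 1);
}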

When there are active threads, ck_epoch_poll_deferred() currently runs the
epoch calls for the current global epoch + 1. Because of modulo
arithmetic, this is equivalent to running the calls for epoch - 3.
However, this means that ck_epoch_poll_deferred() is waiting
unnecessarily long to run epoch calls.
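
The aliasing is easy to check in isolation; a standalone sanity test
(illustrative, not part of the change):

#include <assert.h>

int
main(void)
{
	unsigned int e;

	/* With CK_EPOCH_LENGTH == 4, epoch + 1 and epoch - 3 select the
	 * same bucket: for e = 6, both (6 + 1) & 3 and (6 - 3) & 3
	 * evaluate to 3. */
	for (e = 3; e < 64; e++)
		assert(((e + 1) & 3) == ((e - 3) & 3));

	return 0;
}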

Further, there could be races in incrementing the global epoch. Imagine
all active threads have observed epoch n. CPU 0 sees this. It increments
the global epoch to (n + 1). It runs the epoch calls for (n - 3). Now,
CPU 1 checks. It sees that there are active threads which have not yet
observed the new global epoch (n + 1). In this case,
ck_epoch_poll_deferred() will return without running any epoch calls. In the
worst case (CPU 1 continually losing the race), these epoch calls could
be deferred indefinitely.

To fix this, always run any epoch calls for global epoch - 2. Further,
if all active threads have observed the global epoch, run epoch calls
for global epoch - 1.

The global epoch is only incremented when all active threads have
observed it. Therefore, all active threads must always have observed
global epoch - 1 or the current global epoch. Accordingly, it is safe to
always run epoch calls for global epoch - 2.

Further, if all active threads have observed the global epoch, it is
safe to run epoch calls for global epoch - 1.
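
Restated as a bucket map (a worked illustration of the argument above,
with the global epoch at e and CK_EPOCH_LENGTH == 4):

/*
 * (e)     & 3: calls deferred during the current epoch.
 * (e - 1) & 3: dispatchable only once every active thread has
 *              observed e.
 * (e - 2) & 3: observable by no active thread; always dispatchable.
 * (e - 3) & 3: aliases (e + 1) & 3; must be empty before the global
 *              epoch can advance past e.
 */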
Samy Al Bahra
parent dbfe282866
commit dac27da321

@@ -348,7 +348,7 @@ ck_epoch_scan(struct ck_epoch *global,
 	return NULL;
 }
 
-static void
+static unsigned int
 ck_epoch_dispatch(struct ck_epoch_record *record, unsigned int e, ck_stack_t *deferred)
 {
 	unsigned int epoch = e & (CK_EPOCH_LENGTH - 1);
@@ -366,6 +366,7 @@ ck_epoch_dispatch(struct ck_epoch_record *record, unsigned int e, ck_stack_t *de
 			ck_stack_push_spnc(deferred, &entry->stack_entry);
 		else
 			entry->function(entry);
+
 		i++;
 	}
@@ -381,7 +382,7 @@ ck_epoch_dispatch(struct ck_epoch_record *record, unsigned int e, ck_stack_t *de
 		ck_pr_sub_uint(&record->n_pending, i);
 	}
 
-	return;
+	return i;
 }
 
 /*
@@ -560,15 +561,29 @@ ck_epoch_poll_deferred(struct ck_epoch_record *record, ck_stack_t *deferred)
 	unsigned int epoch;
 	struct ck_epoch_record *cr = NULL;
 	struct ck_epoch *global = record->global;
+	unsigned int n_dispatch;
 
 	epoch = ck_pr_load_uint(&global->epoch);
 
 	/* Serialize epoch snapshots with respect to global epoch. */
 	ck_pr_fence_memory();
+
+	/*
+	 * At this point, epoch is the current global epoch value.
+	 * There may or may not be active threads which observed epoch - 1.
+	 * (ck_epoch_scan() will tell us that). However, there should be
+	 * no active threads which observed epoch - 2.
+	 *
+	 * Note that checking epoch - 2 is necessary, as race conditions can
+	 * allow another thread to increment the global epoch before this
+	 * thread runs.
+	 */
+	n_dispatch = ck_epoch_dispatch(record, epoch - 2, deferred);
+
 	cr = ck_epoch_scan(global, cr, epoch, &active);
 	if (cr != NULL) {
 		record->epoch = epoch;
-		return false;
+		return (n_dispatch > 0);
 	}
 
 	/* We are at a grace period if all threads are inactive. */
@@ -580,10 +595,17 @@ ck_epoch_poll_deferred(struct ck_epoch_record *record, ck_stack_t *deferred)
 		return true;
 	}
 
-	/* If an active thread exists, rely on epoch observation. */
+	/*
+	 * If an active thread exists, rely on epoch observation.
+	 *
+	 * All the active threads entered the epoch section during
+	 * the current epoch. Therefore, we can now run the handlers
+	 * for the immediately preceding epoch and attempt to
+	 * advance the epoch if it hasn't been already.
+	 */
 	(void)ck_pr_cas_uint(&global->epoch, epoch, epoch + 1);
-	ck_epoch_dispatch(record, epoch + 1, deferred);
+
+	ck_epoch_dispatch(record, epoch - 1, deferred);
 
 	return true;
 }
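
For context, a sketch of how a caller might consume the deferred stack
that ck_epoch_poll_deferred() fills (drain_deferred and
epoch_entry_container are illustrative names; CK_STACK_CONTAINER,
ck_stack_init and ck_stack_pop_npsc come from ck_stack.h):

#include <ck_epoch.h>
#include <ck_stack.h>

CK_STACK_CONTAINER(struct ck_epoch_entry, stack_entry,
    epoch_entry_container)

static void
drain_deferred(ck_epoch_record_t *record)
{
	ck_stack_t deferred;
	ck_stack_entry_t *cursor;

	ck_stack_init(&deferred);

	/* With the new semantics, false means no forward progress was
	 * made, so there is nothing to drain. */
	if (ck_epoch_poll_deferred(record, &deferred) == false)
		return;

	/* Execute the callbacks the poll handed back to us. */
	while ((cursor = ck_stack_pop_npsc(&deferred)) != NULL) {
		ck_epoch_entry_t *entry = epoch_entry_container(cursor);

		entry->function(entry);
	}

	return;
}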
