Add a new configure option, --enable-lse, which is only effective for
the AArch64 architecture. When used, most ck_pr_* atomics will use Large
System Extensions instructions as per the ARMv8.1 specification, rather
than LL/SC instruction pairs.
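The call sites themselves do not change either way; as a purely
illustrative sketch (the function and variable names below are not part
of this change), a compare-and-swap such as this compiles to an LL/SC
loop by default and to a single ARMv8.1 CAS instruction when built with
--enable-lse:

    #include <ck_pr.h>
    #include <stdbool.h>
    #include <stdint.h>

    /* Illustrative only: whether this becomes an LL/SC loop or an LSE
     * CAS instruction is decided at configure time; the C-level call
     * is identical in both cases. */
    static bool
    counter_try_bump(uint64_t *counter, uint64_t old)
    {

        return ck_pr_cas_64(counter, old, old + 1);
    }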
We don't have to claim that we read the value of variables when we do
not; that was only done to work around a bug in some versions of gcc
for arm a while ago, and hopefully the workaround won't be needed here.
This should fix the (harmless) warnings described in issue #83.
- ck_epoch_begin: Disallow early load of epoch as it leads to measurable
performance degradation in some benchmarks.
- ck_epoch_synchronize: Enforce barrier semantics.
Break out the internal implementations into _mp and _sc variants on
which the public interface is built. Do not rely on a macro. Adopt
CK_CC_RESTRICT instead of using restrict directly.
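A rough sketch of the pattern, with hypothetical names (none of these
are actual CK symbols): the _mp and _sc variants hold the
multiprocessor and single-core implementations, and the public entry
point is a real function built on top of them rather than a macro.

    #include <stdbool.h>

    static bool
    example_trylock_mp(unsigned int *lock)
    {

        /* Multiprocessor-safe variant: atomic exchange with acquire. */
        return __sync_lock_test_and_set(lock, 1) == 0;
    }

    static bool
    example_trylock_sc(unsigned int *lock)
    {

        /* Single-core variant may take cheaper, non-atomic paths. */
        if (*lock != 0)
            return false;

        *lock = 1;
        return true;
    }

    bool
    example_trylock(unsigned int *lock)
    {

    #ifdef EXAMPLE_SINGLE_CORE
        return example_trylock_sc(lock);
    #else
        return example_trylock_mp(lock);
    #endif
    }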
--platform lets you set the platform, instead of relying on uname -m
--use-cc-builtins forces the use of gcc atomic builtins, instead of the ones provided by CK.
The atomicity of the sequence number's increment is unnecessary, since
there should be only one writer at any given time. Fix it by changing
it to a regular increment + store.
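A minimal sketch of the resulting single-writer pattern (the struct and
field names here are illustrative, not the actual code):

    #include <ck_pr.h>

    struct versioned {
        unsigned int seq;
    };

    /* With a single writer, a plain increment followed by an atomic
     * store is sufficient; concurrent readers still observe the update
     * through ck_pr_load_uint. */
    static void
    version_bump(struct versioned *v)
    {
        unsigned int s = ck_pr_load_uint(&v->seq);

        ck_pr_store_uint(&v->seq, s + 1);
        return;
    }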
Signed-off-by: Emilio G. Cota <cota@braap.org>
This only affects RMO. This adds stricter semantics for critical section
serialization. In addition to this, asymmetric synchronization primitives will
now provide load ordering with respect to readers.
This also modifies locked operations to have acquire semantics
(they're there for elision predicates, and this doesn't impact them
in any way). There are several performance improvements included in this
as well (a redundant fence, left over from the days of wanting to
support Alpha, was removed).
These primitives are meant to be used in lock implementations
where control dependency ordering is sufficient to enforce
ordering of critical sections. At the moment, this only affects
PPC. Currently, we rely on lwsync for entry into critical sections,
which is insufficient. sync is rather heavy-weight, and assuming
we aren't falling victim to compiler re-ordering, isync should
be sufficient.
There is follow-up work to be done on ARM, as we may have cheaper
(but target-specialized) ISB-tricks for load-load ordering.
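As an illustration of where fences of this kind sit in a lock (a
generic sketch; this is not the code changed here, and the
acquire/release fences below are stand-ins for whichever primitives end
up being used):

    #include <ck_pr.h>

    /* Sketch of a test-and-set lock.  On architectures where a control
     * dependency plus a lighter fence is enough to order the critical
     * section, the acquire fence below can map to something cheaper
     * than a full sync. */
    static void
    example_spin_lock(unsigned int *lock)
    {

        while (ck_pr_fas_uint(lock, 1) != 0)
            ck_pr_stall();

        ck_pr_fence_acquire();
        return;
    }

    static void
    example_spin_unlock(unsigned int *lock)
    {

        ck_pr_fence_release();
        ck_pr_store_uint(lock, 0);
        return;
    }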
On TSO architectures, this relies on atomic ordering guarantees
rather than a full barrier. On Pentium M, this results in
an approximately 30% improvement in latency for the stack.
The default value is still 50, but that may be revisited later.
Also, pre-calculate the maximum number of entries before growing, to avoid
having to do it at each insert.
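A sketch of the idea with made-up names (these are not the actual
ck_hs/ck_rhs fields): the threshold is recomputed once per grow, and
each insert only compares against the cached value.

    struct example_set {
        unsigned long capacity;
        unsigned long n_entries;
        unsigned long max_entries;  /* capacity scaled by load factor */
    };

    static void
    example_set_grew(struct example_set *s)
    {

        /* Recomputed once per grow rather than on every insert;
         * 50 is the default load factor mentioned above. */
        s->max_entries = (s->capacity * 50) / 100;
        return;
    }

    static int
    example_set_needs_grow(const struct example_set *s)
    {

        return s->n_entries >= s->max_entries;
    }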
DECONST_PTR is a hack to deconstify void pointer values
that is safe in the presence of -Wcast-qual. CK_CC_RESTRICT
is a restrict qualifier that can be disabled for compilers
that are only partially C99-compliant.
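A minimal sketch of the deconst technique (illustrative macro name; the
actual definition in CK may differ): casting through uintptr_t never
discards a qualifier from a pointer target type, so -Wcast-qual stays
quiet.

    #include <stdint.h>

    #define EXAMPLE_DECONST_PTR(p) ((void *)(uintptr_t)(const void *)(p))

    static void
    consume(void *p)
    {

        (void)p;
        return;
    }

    static void
    example(const char *name)
    {

        /* A direct (void *)name cast would trip -Wcast-qual; the
         * pointer-to-integer-to-pointer round trip does not. */
        consume(EXAMPLE_DECONST_PTR(name));
        return;
    }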
We use some macro trickery to enforce that ck_pr_store_* is actually
storing the correct type into the target variable, without any actual
side effects: the assignment is made into an rvalue within a comma
expression, so the compiler should optimize it away.
On the load side, we simply cast the result to the type of the target
variable for pointer loads.
There is an unsafe version of the store_ptr macro called
ck_pr_store_ptr_unsafe for those times when you are _really_ sure that
you know what you're doing.
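A minimal sketch of the mechanism (not the verbatim CK macro): the
assignment appears only as an operand of sizeof inside a comma
expression, so the compiler type-checks it without evaluating it, and
the destination pointer is then passed on to the underlying store.

    static void
    example_store_ptr_raw(void *target, void *value)
    {

        /* Stand-in for the underlying atomic pointer store. */
        *(void **)target = value;
        return;
    }

    /* The sizeof operand forces the compiler to check that `val' is
     * assignable to `*dst' without generating any code or side effects;
     * the comma expression then forwards `dst' to the real store. */
    #define example_store_ptr(dst, val)                 \
        example_store_ptr_raw(                          \
            ((void)sizeof(*(dst) = (val)), (dst)),      \
            (val))

With this in place, example_store_ptr(&head, node) compiles, while
example_store_ptr(&head, 42) is rejected because the hidden assignment
does not type-check.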
This commit also updates some of the source files (ck_ht, ck_hs,
ck_rhs): ck_ht now uses the unsafe macro, as its conversion between
uintptr_t and void * is invalid under the new macros. ck_hs and ck_rhs
have had some casts added to preserve validity.
Commit 554e2f08 removed underscores from _CK prefixes. It missed
the arm bits, breaking the arm build (which checks for the
non-existent _CK_PR_H). Fix it.
While at it, fix the mismatch between _CK_ISB and __CK_ISB; convert
them both to CK_ISB.
Signed-off-by: Emilio G. Cota <cota@braap.org>
This was accidentally grouped into the previous commit.
The fate of this interface for internal use is still unclear (in the
context of its use in the built-in data structures). The interface is
enabled by default on x86, as it is compatible with read-side prefetch*
operations and binary-compatible with the 3DNow! extension. Older
compilers will waste an additional byte on this (they generate the
3DNow! variant), but they waste more on spillage if we encode the
instruction ourselves. Power support is coming soon.
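For illustration, a write-intent prefetch of this sort can be expressed
with GCC-style builtins (hypothetical wrapper name; this is not
necessarily the CK interface being discussed):

    /* __builtin_prefetch's second argument requests a prefetch with
     * intent to write, which can map to prefetchw on x86 targets that
     * support it (3DNow!/PRFCHW). */
    static inline void
    example_prefetchw(const void *p)
    {

        __builtin_prefetch(p, 1, 3);
        return;
    }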
This avoids memory traffic in busy-wait loops. This has been on the
TODO list for a while, so we may as well bite the bullet. No
regressions were introduced with recent versions of
GCC, clang and ICC.
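For context, a generic sketch of the shape of loop this concerns (not
the specific code touched here): the wait spins on plain loads, so it
produces no write traffic on the cache line, and ck_pr_stall() asks the
core to relax between iterations.

    #include <ck_pr.h>

    static void
    example_wait_for_flag(unsigned int *flag)
    {

        while (ck_pr_load_uint(flag) == 0)
            ck_pr_stall();

        ck_pr_fence_acquire();
        return;
    }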
gcc is smart enough to use an even-numbered register for 64-bit
operations, and provides a way to access the first and second words, so
use that instead of hardcoding registers.
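A sketch of what that looks like on 32-bit ARM with GCC-style inline
asm (this is an illustration, not necessarily the exact CK code; %H is
assumed to name the second register of the operand's register pair, as
it does in GCC's ARM backend):

    #include <stdint.h>

    static inline uint64_t
    example_load_64(const uint64_t *target)
    {
        uint64_t value;

        /* The 64-bit "r" operand is allocated to an even/odd register
         * pair; %0 names the first register and %H0 the second, so no
         * register is hard-coded. */
        __asm__ __volatile__("ldrexd %0, %H0, [%1]"
            : "=&r" (value)
            : "r" (target)
            : "memory");

        return value;
    }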