Trait frame_support::dispatch::marker::Sync
1.0.0 · pub unsafe auto trait Sync { }
Types for which it is safe to share references between threads.

This trait is automatically implemented when the compiler determines it’s appropriate.

The precise definition is: a type T is Sync if and only if &T is Send. In other words, if there is no possibility of undefined behavior (including data races) when passing &T references between threads.
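A minimal sketch of the definition in action: a shared reference may cross into a scoped thread exactly because its referent is Sync.

```rust
use std::thread;

fn main() {
    let message = String::from("hello");
    let shared: &String = &message;
    // &String may cross the thread boundary precisely because String
    // is Sync; equivalently, &String is Send.
    thread::scope(|s| {
        s.spawn(move || println!("{shared}"));
    });
}
```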
As one would expect, primitive types like u8 and f64 are all Sync, and so are simple aggregate types containing them, like tuples, structs and enums. More examples of basic Sync types include “immutable” types like &T, and those with simple inherited mutability, such as Box<T>, Vec<T> and most other collection types. (Generic parameters need to be Sync for their container to be Sync.)
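For instance, one shared Vec can be read by several threads at once:

```rust
use std::thread;

fn main() {
    // Vec<u32> is Sync because u32 is Sync, so one shared reference
    // can be read from several threads at once.
    let data = vec![1u32, 2, 3, 4];
    let shared = &data;
    thread::scope(|s| {
        s.spawn(move || println!("sum = {}", shared.iter().sum::<u32>()));
        s.spawn(move || println!("max = {:?}", shared.iter().max()));
    });
}
```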
A somewhat surprising consequence of the definition is that &mut T is Sync (if T is Sync) even though it seems like that might provide unsynchronized mutation. The trick is that a mutable reference behind a shared reference (that is, & &mut T) becomes read-only, as if it were a & &T. Hence there is no risk of a data race.
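A small demonstration: both threads may read through & &mut T, but neither can write through it.

```rust
use std::thread;

fn main() {
    let mut value = 10u32;
    let exclusive = &mut value;
    // A shared reference to a mutable reference is read-only: neither
    // thread can write through it, so no data race is possible.
    let shared: &&mut u32 = &exclusive;
    thread::scope(|s| {
        s.spawn(move || println!("reader A sees {}", **shared));
        s.spawn(move || println!("reader B sees {}", **shared));
    });
}
```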
A shorter overview of how Sync and Send relate to referencing:

- &T is Send if and only if T is Sync
- &mut T is Send if and only if T is Send
- &T and &mut T are Sync if and only if T is Sync
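These rules can be checked mechanically with bound-only helper functions; each call below compiles only because the corresponding rule holds for u8.

```rust
// Bound-only helpers: each call fails to compile if the rule is violated.
fn assert_send<T: Send + ?Sized>() {}
fn assert_sync<T: Sync + ?Sized>() {}

fn main() {
    assert_send::<&u8>();      // &T is Send because u8 is Sync
    assert_send::<&mut u8>();  // &mut T is Send because u8 is Send
    assert_sync::<&u8>();      // &T is Sync because u8 is Sync
    assert_sync::<&mut u8>();  // &mut T is Sync because u8 is Sync
}
```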
Types that are not Sync are those that have “interior mutability” in a non-thread-safe form, such as Cell and RefCell. These types allow for mutation of their contents even through an immutable, shared reference. For example, the set method on Cell<T> takes &self, so it requires only a shared reference &Cell<T>. The method performs no synchronization, thus Cell cannot be Sync.
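A minimal sketch of that interface, with the non-compiling cross-thread use left as a comment:

```rust
use std::cell::Cell;

fn main() {
    let cell = Cell::new(1u32);
    let shared: &Cell<u32> = &cell;
    // Mutation through a shared reference, with no synchronization.
    shared.set(2);
    assert_eq!(cell.get(), 2);

    // Cell<u32> is not Sync, so handing `shared` to another thread is
    // rejected at compile time:
    // std::thread::scope(|s| { s.spawn(move || shared.set(3)); });
}
```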
Another example of a non-Sync type is the reference-counting pointer Rc. Given any reference &Rc<T>, you can clone a new Rc<T>, modifying the reference counts in a non-atomic way.
For cases when one does need thread-safe interior mutability, Rust provides atomic data types, as well as explicit locking via sync::Mutex and sync::RwLock. These types ensure that any mutation cannot cause data races, hence the types are Sync. Likewise, sync::Arc provides a thread-safe analogue of Rc.
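A typical pattern, sharing a counter behind Arc<Mutex<_>>:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Mutex<T> is Sync for T: Send, so Arc<Mutex<u64>> can be shared
    // freely; every mutation goes through the lock.
    let counter = Arc::new(Mutex::new(0u64));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || *counter.lock().unwrap() += 1)
        })
        .collect();
    for handle in handles {
        handle.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
}
```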
Any types with interior mutability must also use the cell::UnsafeCell wrapper around the value(s) which can be mutated through a shared reference. Failing to do this is undefined behavior. For example, transmute-ing from &T to &mut T is invalid.
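A sketch of the correct pattern: SpinCell is a hypothetical type invented here for illustration (real code should prefer sync::Mutex). It keeps its state inside an UnsafeCell and justifies its unsafe Sync impl by serializing every access through an atomic flag.

```rust
use std::cell::UnsafeCell;
use std::sync::atomic::{AtomicBool, Ordering};

// Hypothetical type for illustration; real code should prefer Mutex.
// The mutable state lives inside UnsafeCell, and an atomic flag
// serializes every access, which is what justifies the Sync impl.
pub struct SpinCell<T> {
    locked: AtomicBool,
    value: UnsafeCell<T>,
}

// SAFETY: all access to `value` goes through `with`, which guarantees
// mutual exclusion via the `locked` flag.
unsafe impl<T: Send> Sync for SpinCell<T> {}

impl<T> SpinCell<T> {
    pub fn new(value: T) -> Self {
        Self {
            locked: AtomicBool::new(false),
            value: UnsafeCell::new(value),
        }
    }

    pub fn with<R>(&self, f: impl FnOnce(&mut T) -> R) -> R {
        // Spin until the flag is won; Acquire pairs with Release below.
        while self.locked.swap(true, Ordering::Acquire) {
            std::hint::spin_loop();
        }
        // SAFETY: the flag grants exclusive access to the cell.
        let result = f(unsafe { &mut *self.value.get() });
        self.locked.store(false, Ordering::Release);
        result
    }
}
```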
See the Nomicon for more details about Sync.
Implementors
impl !Sync for Args
impl !Sync for ArgsOs
impl Sync for TableElement where VMExternRef: Sync,
impl Sync for Bytes
impl Sync for BytesMut
impl Sync for Select<'_>
impl Sync for Collector
impl Sync for Unparker
impl Sync for Scope<'_>
impl Sync for AtomicWaker
impl Sync for GuardNoSend
impl Sync for GdbJitImageRegistration
impl Sync for ExportFunction
impl Sync for ExportGlobal
impl Sync for ExportMemory
impl Sync for ExportTable
impl Sync for VMExternRef
impl Sync for InstanceHandle
impl Sync for VMCallerCheckedAnyfunc
impl Sync for VMFunctionImport
impl Sync for VMGlobalImport
impl Sync for VMMemoryImport
impl Sync for VMRuntimeLimits
impl Sync for VMTableImport
impl Sync for VMHostFuncContext
impl Sync for alloc::string::Drain<'_>
impl Sync for AtomicBool
impl Sync for AtomicI8
impl Sync for AtomicI16
impl Sync for AtomicI32
impl Sync for AtomicI64
impl Sync for AtomicI128
impl Sync for AtomicIsize
impl Sync for AtomicU8
impl Sync for AtomicU16
impl Sync for AtomicU32
impl Sync for AtomicU64
impl Sync for AtomicU128
impl Sync for AtomicUsize
impl Sync for Waker
impl<'a> Sync for CDict<'a>
impl<'a> Sync for DDict<'a>
impl<'a> Sync for IoSlice<'a>
impl<'a> Sync for IoSliceMut<'a>
impl<'a, 'b, K, Q, V, S, A> Sync for OccupiedEntryRef<'a, 'b, K, Q, V, S, A> where K: Sync, Q: Sync + ?Sized, V: Sync, S: Sync, A: Sync + Allocator + Clone,
impl<'a, A> Sync for arrayvec::Drain<'a, A> where A: Array + Sync,
impl<'a, K, V> Sync for lru::Iter<'a, K, V> where K: Sync, V: Sync,
impl<'a, K, V> Sync for lru::IterMut<'a, K, V> where K: Sync, V: Sync,
impl<'a, R, G, T> Sync for MappedReentrantMutexGuard<'a, R, G, T> where R: RawMutex + Sync + 'a, G: GetThreadId + Sync + 'a, T: Sync + 'a + ?Sized,
impl<'a, R, G, T> Sync for ReentrantMutexGuard<'a, R, G, T> where R: RawMutex + Sync + 'a, G: GetThreadId + Sync + 'a, T: Sync + 'a + ?Sized,
impl<'a, R, T> Sync for lock_api::mutex::MappedMutexGuard<'a, R, T> where R: RawMutex + Sync + 'a, T: Sync + 'a + ?Sized,
impl<'a, R, T> Sync for lock_api::mutex::MutexGuard<'a, R, T> where R: RawMutex + Sync + 'a, T: Sync + 'a + ?Sized,
impl<'a, R, T> Sync for MappedRwLockReadGuard<'a, R, T> where R: RawRwLock + 'a, T: Sync + 'a + ?Sized,
impl<'a, R, T> Sync for MappedRwLockWriteGuard<'a, R, T> where R: RawRwLock + 'a, T: Sync + 'a + ?Sized,
impl<'a, R, T> Sync for RwLockUpgradableReadGuard<'a, R, T> where R: RawRwLockUpgrade + 'a, T: Sync + 'a + ?Sized,
impl<'a, T> Sync for OnceRef<'a, T> where T: Sync,
impl<'a, T> Sync for smallvec::Drain<'a, T> where T: Sync + Array,
impl<'a, T, const CAP: usize> Sync for arrayvec::arrayvec::Drain<'a, T, CAP> where T: Sync,
impl<C> Sync for Secp256k1<C> where C: Context,
impl<Dyn> Sync for DynMetadata<Dyn> where Dyn: ?Sized,
impl<Fut> Sync for futures_util::stream::futures_unordered::iter::IntoIter<Fut> where Fut: Sync + Unpin,
impl<Fut> Sync for IterPinMut<'_, Fut> where Fut: Sync,
impl<Fut> Sync for IterPinRef<'_, Fut> where Fut: Sync,
impl<Fut> Sync for FuturesUnordered<Fut> where Fut: Sync,
impl<K, V> Sync for indexmap::map::core::raw::OccupiedEntry<'_, K, V> where K: Sync, V: Sync,
impl<K, V, S> Sync for LruCache<K, V, S> where K: Sync, V: Sync, S: Sync,
impl<K, V, S, A> Sync for hashbrown::map::OccupiedEntry<'_, K, V, S, A> where K: Sync, V: Sync, S: Sync, A: Sync + Allocator + Clone,
impl<K, V, S, A> Sync for RawOccupiedEntryMut<'_, K, V, S, A> where K: Sync, V: Sync, S: Sync, A: Sync + Allocator + Clone,
impl<M, T, O> Sync for BitRef<'_, M, T, O> where M: Mutability, T: BitStore + Sync, O: BitOrder,
impl<R, G> Sync for RawReentrantMutex<R, G> where R: RawMutex + Sync, G: GetThreadId + Sync,
impl<R, G, T> Sync for ReentrantMutex<R, G, T> where R: RawMutex + Sync, G: GetThreadId + Sync, T: Send + ?Sized,
impl<R, T> Sync for lock_api::mutex::Mutex<R, T> where R: RawMutex + Sync, T: Send + ?Sized,
impl<R, T> Sync for lock_api::rwlock::RwLock<R, T> where R: RawRwLock + Sync, T: Send + Sync + ?Sized,
impl<T> !Sync for *const T where T: ?Sized,
impl<T> !Sync for *mut T where T: ?Sized,
impl<T> !Sync for Rc<T> where T: ?Sized,
impl<T> !Sync for alloc::rc::Weak<T> where T: ?Sized,
impl<T> !Sync for OnceCell<T>
impl<T> !Sync for Cell<T> where T: ?Sized,
impl<T> !Sync for RefCell<T> where T: ?Sized,
impl<T> !Sync for UnsafeCell<T> where T: ?Sized,
impl<T> !Sync for NonNull<T> where T: ?Sized,
NonNull pointers are not Sync because the data they reference may be aliased.
impl<T> !Sync for std::sync::mpsc::Receiver<T>
impl<T> !Sync for std::sync::mpsc::Sender<T>
impl<T> Sync for BitSpanError<T> where T: BitStore,
impl<T> Sync for MisalignError<T>
impl<T> Sync for crossbeam_channel::channel::Receiver<T> where T: Send,
impl<T> Sync for crossbeam_channel::channel::Sender<T> where T: Send,
impl<T> Sync for Injector<T> where T: Send,
impl<T> Sync for Stealer<T> where T: Send,
impl<T> Sync for Atomic<T> where T: Pointable + Send + Sync + ?Sized,
impl<T> Sync for AtomicCell<T> where T: Send,
impl<T> Sync for CachePadded<T> where T: Sync,
impl<T> Sync for ShardedLock<T> where T: Send + Sync + ?Sized,
impl<T> Sync for ShardedLockReadGuard<'_, T> where T: Sync + ?Sized,
impl<T> Sync for ShardedLockWriteGuard<'_, T> where T: Sync + ?Sized,
impl<T> Sync for ScopedJoinHandle<'_, T>
impl<T> Sync for BiLockGuard<'_, T> where T: Send + Sync,
impl<T> Sync for futures_util::lock::mutex::Mutex<T> where T: Send + ?Sized,
impl<T> Sync for futures_util::lock::mutex::MutexGuard<'_, T> where T: Sync + ?Sized,
impl<T> Sync for MutexLockFuture<'_, T> where T: ?Sized,
impl<T> Sync for OwnedMutexGuard<T> where T: Sync + ?Sized,
impl<T> Sync for OwnedMutexLockFuture<T> where T: ?Sized,
impl<T> Sync for OnceBox<T> where T: Sync + Send,
impl<T> Sync for ThreadLocal<T> where T: Send,
impl<T> Sync for ThinBox<T> where T: Sync + ?Sized,
ThinBox<T> is Sync if T is Sync because the data is owned.
impl<T> Sync for alloc::collections::linked_list::Iter<'_, T> where T: Sync,
impl<T> Sync for alloc::collections::linked_list::IterMut<'_, T> where T: Sync,
impl<T> Sync for Arc<T> where T: Sync + Send + ?Sized,
impl<T> Sync for alloc::sync::Weak<T> where T: Sync + Send + ?Sized,
impl<T> Sync for SyncUnsafeCell<T> where T: Sync + ?Sized,
impl<T> Sync for ChunksExactMut<'_, T> where T: Sync,
impl<T> Sync for ChunksMut<'_, T> where T: Sync,
impl<T> Sync for core::slice::iter::Iter<'_, T> where T: Sync,
impl<T> Sync for core::slice::iter::IterMut<'_, T> where T: Sync,
impl<T> Sync for RChunksExactMut<'_, T> where T: Sync,
impl<T> Sync for RChunksMut<'_, T> where T: Sync,
impl<T> Sync for AtomicPtr<T>
impl<T> Sync for Exclusive<T> where T: ?Sized,
impl<T> Sync for std::sync::mutex::Mutex<T> where T: Send + ?Sized,
impl<T> Sync for std::sync::mutex::MutexGuard<'_, T> where T: Sync + ?Sized,
impl<T> Sync for OnceLock<T> where T: Sync + Send,
impl<T> Sync for std::sync::rwlock::RwLock<T> where T: Send + Sync + ?Sized,
impl<T> Sync for RwLockReadGuard<'_, T> where T: Sync + ?Sized,
impl<T> Sync for RwLockWriteGuard<'_, T> where T: Sync + ?Sized,
impl<T> Sync for JoinHandle<T>
impl<T, A> Sync for RawDrain<'_, T, A> where A: Allocator + Copy + Sync, T: Sync,
impl<T, A> Sync for RawIntoIter<T, A> where A: Allocator + Clone + Sync, T: Sync,
impl<T, A> Sync for RawTable<T, A> where A: Allocator + Clone + Sync, T: Sync,
impl<T, A> Sync for Cursor<'_, T, A> where T: Sync, A: Allocator + Sync,
impl<T, A> Sync for CursorMut<'_, T, A> where T: Sync, A: Allocator + Sync,
impl<T, A> Sync for LinkedList<T, A> where T: Sync, A: Allocator + Sync,
impl<T, A> Sync for alloc::collections::vec_deque::drain::Drain<'_, T, A> where T: Sync, A: Allocator + Sync,
impl<T, A> Sync for alloc::vec::drain::Drain<'_, T, A> where T: Sync, A: Sync + Allocator,
impl<T, A> Sync for alloc::vec::into_iter::IntoIter<T, A> where T: Sync, A: Allocator + Sync,
impl<T, C> Sync for OwnedRef<T, C> where T: Sync + Clear + Default, C: Config,
impl<T, C> Sync for OwnedRefMut<T, C> where T: Sync + Clear + Default, C: Config,
impl<T, C> Sync for Pool<T, C> where T: Sync + Clear + Default, C: Config,
impl<T, C> Sync for OwnedEntry<T, C> where T: Sync, C: Config,
impl<T, C> Sync for Slab<T, C> where T: Sync, C: Config,
impl<T, F> Sync for Lazy<T, F> where F: Send, OnceCell<T>: Sync,
impl<T, F> Sync for LazyLock<T, F> where T: Sync + Send, F: Send,
impl<T, F, S> Sync for ScopeGuard<T, F, S> where T: Sync, F: FnOnce(T), S: Strategy,
impl<T, N> Sync for generic_array::GenericArray<T, N> where T: Sync, N: ArrayLength<T>,
impl<T, O> Sync for bitvec::boxed::iter::IntoIter<T, O> where T: BitStore + Sync, O: BitOrder,
impl<T, O> Sync for BitBox<T, O> where T: BitStore, O: BitOrder,
impl<T, O> Sync for bitvec::slice::iter::Iter<'_, T, O> where T: BitStore, O: BitOrder, BitSlice<T, O>: Sync,
impl<T, O> Sync for bitvec::slice::iter::IterMut<'_, T, O> where T: BitStore, O: BitOrder, BitSlice<T, O>: Sync,
impl<T, O> Sync for BitSlice<T, O> where T: BitStore + Sync, O: BitOrder,
Bit-Slice Thread Safety

This allows bit-slice references to be moved across thread boundaries only when the underlying T element can tolerate concurrency.

All BitSlice references, shared or exclusive, are only thread-safe if the T element type is Send, because any given bit-slice reference may only have partial control of a memory element that is also being shared by a bit-slice reference on another thread. As such, this is never implemented for Cell<U>, but always implemented for AtomicU and U for a given unsigned integer type U.

Atomic integers safely handle concurrent writes, and cells do not allow concurrency at all, so the only missing piece is &mut BitSlice<_, U: Unsigned>. This is handled by the aliasing system that the mutable splitters employ: a mutable reference to an unsynchronized bit-slice can only cross threads when no other handle can exist to the elements it governs. Splitting a mutable bit-slice causes the split halves to change over to either atomics or cells, so concurrency is either safe or impossible.
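A sketch of that hand-off using scoped threads, assuming bitvec's default atomic feature (so the aliased halves use atomic storage and are therefore Send):

```rust
use bitvec::prelude::*;
use std::thread;

fn main() {
    let mut bits = bitvec![u8, Lsb0; 0; 16];
    // Splitting re-types both halves with the aliased (atomic) store,
    // so each &mut half may cross into a scoped thread.
    let (left, right) = bits.split_at_mut(8);
    thread::scope(|s| {
        s.spawn(move || left.fill(true));
        s.spawn(move || right.fill(false));
    });
    assert_eq!(bits.count_ones(), 8);
}
```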