#[repr(transparent)]
pub struct BitSlice<T = usize, O = Lsb0>
where
    T: BitStore,
    O: BitOrder,
{ /* private fields */ }
Bit-Addressable Memory
A slice of individual bits, anywhere in memory.
BitSlice<T, O>
is an unsized region type; you interact with it through
&BitSlice<T, O>
and &mut BitSlice<T, O>
references, which work exactly like
all other Rust references. As with the standard slice’s relationship to arrays
and vectors, this is bitvec
’s primary working type, but you will probably
hold it through one of the provided BitArray
, BitBox
, or BitVec
containers.
BitSlice
is conceptually a [bool]
slice, and provides a nearly complete
mirror of [bool]
’s API.
Every bit-vector crate can give you an opaque type that hides shift/mask
calculations from you. BitSlice
does far more than this: it offers you the
full Rust guarantees about reference behavior, including lifetime tracking,
mutability and aliasing awareness, and explicit memory control, as well as the
full set of tools and APIs available to the standard [bool]
slice type.
BitSlice
can arbitrarily split and subslice, just like [bool]
. You can write
a linear consuming function and keep the patterns you already know.
For example, to trim all the bits off either edge that match a condition, you could write
use bitvec::prelude::*;

fn trim<T: BitStore, O: BitOrder>(
    bits: &BitSlice<T, O>,
    to_trim: bool,
) -> &BitSlice<T, O> {
    let stop = |b: bool| b != to_trim;
    let front = bits.iter()
        .by_vals()
        .position(stop)
        .unwrap_or(0);
    let back = bits.iter()
        .by_vals()
        .rposition(stop)
        .map_or(0, |p| p + 1);
    &bits[front .. back]
}
to get behavior something like
trim(bits![0, 0, 1, 1, 0, 1, 0], false) == bits![1, 1, 0, 1].
Documentation
All APIs that mirror something in the standard library will have an Original
section linking to the corresponding item. All APIs that have a different
signature or behavior than the original will have an API Differences
section
explaining what has changed, and how to adapt your existing code to the change.
These sections look like this:
Original
API Differences
The slice type [bool]
has no type parameters. BitSlice<T, O>
has two: one
for the integer type used as backing storage, and one for the order of bits
within that integer type.
&BitSlice<T, O>
is capable of producing &bool
references to read bits out
of its memory, but is not capable of producing &mut bool
references to write
bits into its memory. Any [bool]
API that would produce a &mut bool
will
instead produce a BitRef<Mut, T, O>
proxy reference.
Behavior
BitSlice
is a wrapper over [T]
. It describes a region of memory, and must be
handled indirectly. This is most commonly done through the reference types
&BitSlice
and &mut BitSlice
, which borrow memory owned by some other value
in the program. These buffers can be directly owned by the sibling types
BitBox, which behaves like Box<[bool]>, and BitVec, which behaves like
Vec<bool>. It cannot be used as the type parameter to a
pointer type such as Box
, Rc
, Arc
, or any other indirection.
The BitSlice
region provides access to each individual bit in the region, as
if each bit had a memory address that you could use to dereference it. It packs
each logical bit into exactly one bit of storage memory, just like
std::bitset
and std::vector<bool>
in C++.
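This packing model can be sketched in plain Rust, without bitvec. The pack helper below is a hypothetical illustration of the one-bit-per-bit storage scheme, not a bitvec API:

```rust
// Illustration only: eight logical bits packed into one u8, the same
// one-storage-bit-per-logical-bit model that BitSlice uses.
fn pack(bools: [bool; 8]) -> u8 {
    bools
        .iter()
        .enumerate()
        .fold(0u8, |acc, (i, &b)| acc | ((b as u8) << i))
}

fn main() {
    let flags = [true, false, false, true, false, false, false, false];
    // Bits 0 and 3 are set; one byte replaces the eight bytes of [bool; 8].
    assert_eq!(pack(flags), 0b0000_1001);
}
```

BitSlice performs this packing, and the corresponding unpacking, behind its slice-style API.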
Type Parameters
BitSlice
has two type parameters which propagate through nearly every public
API in the crate. These are very important to its operation, and your choice
of type arguments informs nearly every part of this library’s behavior.
T: BitStore
BitStore
is the simpler of the two parameters. It refers to the integer type
used to hold bits. It must be one of the Rust unsigned integer fundamentals:
u8
, u16
, u32
, usize
, and on 64-bit systems only, u64
. In addition, it
can also be an alias-safe wrapper over them (see the access
module) in
order to permit bit-slices to share underlying memory without interfering with
each other.
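The hazard these wrappers guard against can be sketched with plain std atomics. Here AtomicU8 merely stands in for bitvec's alias-safe wrapper types; this shows the idea, not bitvec's actual implementation:

```rust
use std::sync::atomic::{AtomicU8, Ordering};

fn main() {
    // Two mutable bit-slices produced by a split may cover different bits
    // of the SAME storage byte. Writes therefore go through read-modify-write
    // operations that cannot clobber the bits owned by the other half.
    let byte = AtomicU8::new(0);
    byte.fetch_or(0b0000_0001, Ordering::Relaxed); // "left" half sets bit 0
    byte.fetch_or(0b1000_0000, Ordering::Relaxed); // "right" half sets bit 7
    assert_eq!(byte.load(Ordering::Relaxed), 0b1000_0001);
}
```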
BitSlice
references can only be constructed over the integers, not over their
aliasing wrappers. BitSlice
will only use aliasing types in its T
slots when
you invoke APIs that produce them, such as .split_at_mut()
.
The default type argument is usize
.
The argument you choose is used as the basis of a [T]
slice, over which the
BitSlice
view is produced. BitSlice<T, _>
is subject to all of the rules
about alignment that [T]
is. If you are working with in-memory representation
formats, chances are that you already have a T
type with which you’ve been
working, and should use it here.
If you are only using this crate to discard the seven wasted bits per bool
in a collection of bool
s, and are not too concerned about the in-memory
representation, then you should use the default type argument of usize
. This
is because most processors work best when moving an entire usize
between
memory and the processor itself, and using a smaller type may cause it to slow
down. Additionally, processor instructions are typically optimized for the whole
register, and the processor might need to do additional clearing work for
narrower types.
O: BitOrder
BitOrder
is the more complex parameter. It has a default argument which,
like usize
, is a good baseline choice when you do not explicitly need to
control the representation of bits in memory.
This parameter determines how bitvec
indexes the bits within a single T
memory element. Computers all agree that in a slice of T
elements, the element
with the lower index has a lower memory address than the element with the higher
index. But the individual bits within an element do not have addresses, and so
there is no uniform standard of which bit is the zeroth, which is the first,
which is the penultimate, and which is the last.
To make matters even more confusing, there are two predominant ideas of
in-element ordering that often correlate with the in-element byte ordering
of integer types, but are in fact wholly unrelated! bitvec
provides these two
main orderings as types for you, and if you need a different one, it also
provides the tools you need to write your own.
Least Significant Bit Comes First
This ordering, named the Lsb0
type, indexes bits within an element by
placing the 0 index at the least significant bit (numeric value 1) and the
final index at the most significant bit (numeric value T::MIN for signed
integers).
For example, this is the ordering used by most C compilers to lay out bit-field struct members on little-endian byte-ordered machines.
Most Significant Bit Comes First
This ordering, named the Msb0
type, indexes bits within an element by
placing the 0 index at the most significant bit (numeric value T::MIN for
signed integers) and the final index at the least significant bit (numeric
value 1).
For example, this is the ordering used by the TCP wire format, and by most C compilers to lay out bit-field struct members on big-endian byte-ordered machines.
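The difference between the two orderings can be sketched with plain shifts on a u8. The lsb0_mask and msb0_mask functions below are hypothetical illustrations whose names merely echo bitvec's Lsb0 and Msb0 types:

```rust
// Which single-bit mask does logical index `i` select within a u8 element?
fn lsb0_mask(i: u32) -> u8 {
    1u8 << i // index 0 sits at the least significant bit
}

fn msb0_mask(i: u32) -> u8 {
    0b1000_0000u8 >> i // index 0 sits at the most significant bit
}

fn main() {
    assert_eq!(lsb0_mask(0), 0b0000_0001);
    assert_eq!(msb0_mask(0), 0b1000_0000);
    // The orderings are mirror images: index i in one is index 7 - i in the other.
    assert_eq!(lsb0_mask(3), msb0_mask(4));
}
```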
Default Ordering
The default ordering is Lsb0
, as it typically produces shorter object code
than Msb0
does. If you are implementing a collection, then Lsb0
will
likely give you better performance; if you are implementing a buffer protocol,
then your choice of ordering is dictated by the protocol definition.
Safety
BitSlice
is designed to never introduce new memory unsafety that you did not
provide yourself, either before or during the use of this crate. However, safety
bugs have been identified before, and you are welcome to submit any discovered
flaws as a defect report.
The &BitSlice
reference type uses a private encoding scheme to hold all of the
information needed in its stack value. This encoding is not part of the
public API of the library, and is not binary-compatible with &[T]
.
Furthermore, in order to satisfy Rust’s requirements about alias conditions,
BitSlice
performs type transformations on the T
parameter to ensure that it
never creates the potential for undefined behavior or data races.
You must never attempt to type-cast a reference to BitSlice
in any way. You
must not use mem::transmute
with BitSlice
anywhere in its type arguments.
You must not use as
-casting to convert between *BitSlice
and any other type.
You must not attempt to modify the binary representation of a &BitSlice
reference value. These actions will all lead to runtime memory unsafety, are
(hopefully) likely to induce a program crash, and may possibly cause undefined
behavior at compile-time.
Everything in the BitSlice public API, even the unsafe parts, is guaranteed
to have no more unsafety than the equivalent items in the standard library.
All unsafe
APIs will have documentation explicitly detailing what the API
requires you to uphold in order for it to function safely and correctly. All
safe APIs will do so themselves.
Performance
Like the standard library’s [T]
slice, BitSlice
is designed to be very easy
to use safely, while supporting unsafe
usage when necessary. Rust has a
powerful optimizing engine, and BitSlice
will frequently be compiled to have
zero runtime cost. Where it is slower, it will not be significantly slower than
a manual replacement.
As the machine instructions operate on registers rather than bits, your choice
of T: BitStore type parameter can influence your bit-slice's performance.
Using larger register types means that bit-slices can gallop over
completely-used interior elements faster, while narrower register types permit
more graceful handling of subslicing and aliased splits.
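A rough sketch, in plain Rust, of that galloping advantage: when every bit of a storage element belongs to the bit-slice, the element can be processed as a whole word. The count_ones function here is only a stand-in for the kind of word-at-a-time loop a bit-slice can use internally, not bitvec's real code:

```rust
// Population count one whole word at a time instead of one bit at a time.
fn count_ones(words: &[usize]) -> u32 {
    words.iter().map(|w| w.count_ones()).sum()
}

fn main() {
    assert_eq!(count_ones(&[0b1011, 0, usize::MAX]), 3 + usize::BITS);
}
```

A narrower T such as u8 means more elements, and more loop iterations, for the same number of bits, but finer granularity when splits create aliased edges.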
Construction
BitSlice
views of memory can be constructed over borrowed data in a number of
ways. As this is a reference-only type, it can only ever be built by borrowing
an existing memory buffer and taking temporary control of your program’s view of
the region.
Macro Constructor
BitSlice
buffers can be constructed at compile-time through the bits!
macro. This macro accepts a superset of the vec!
arguments, and creates an
appropriate buffer in the local scope. The macro expands to a borrowed
BitArray
temporary, which will live for the duration of the bound name.
use bitvec::prelude::*;
let immut = bits![u8, Lsb0; 0, 1, 0, 0, 1, 0, 0, 1];
let mutable: &mut BitSlice<_, _> = bits![mut u8, Msb0; 0; 8];
assert_ne!(immut, mutable);
mutable.clone_from_bitslice(immut);
assert_eq!(immut, mutable);
Borrowing Constructors
You may borrow existing elements or slices with the following functions:
- from_element and from_element_mut,
- from_slice and from_slice_mut,
- try_from_slice and try_from_slice_mut.
These take references to existing memory and construct BitSlice
references
from them. These are the most basic ways to borrow memory and view it as bits;
however, you should prefer the BitView
trait methods instead.
use bitvec::prelude::*;
let data = [0u16; 3];
let local_borrow = BitSlice::<_, Lsb0>::from_slice(&data);
let mut data = [0u8; 5];
let local_mut = BitSlice::<_, Lsb0>::from_slice_mut(&mut data);
Trait Method Constructors
The BitView
trait implements .view_bits::<O>()
and
.view_bits_mut::<O>()
methods on elements, arrays, and slices. This trait,
imported in the crate prelude, is probably the easiest way for you to borrow
memory as bits.
use bitvec::prelude::*;
let data = [0u32; 5];
let trait_view = data.view_bits::<Lsb0>();
let mut data = 0usize;
let trait_mut = data.view_bits_mut::<Msb0>();
Owned Bit Slices
If you wish to take ownership of a memory region and enforce that it is always
viewed as a BitSlice
by default, you can use one of the BitArray
,
BitBox
, or BitVec
types, rather than pairing ordinary buffer types with
the borrowing constructors.
use bitvec::prelude::*;
let slice = bits![0; 27];
let array = bitarr![u8, LocalBits; 0; 10];
let boxed = bitbox![0; 10];
let vec = bitvec![0; 20];
// arrays always round up
assert_eq!(array.as_bitslice(), &slice[.. 16]);
assert_eq!(boxed.as_bitslice(), &slice[.. 10]);
assert_eq!(vec.as_bitslice(), &slice[.. 20]);
Usage
BitSlice
implements the full standard-library [bool]
API. The documentation
for these API surfaces is intentionally sparse, and forwards to the standard
library rather than trying to replicate it.
BitSlice
also has a great deal of novel API surfaces. These are broken into
separate impl
blocks below. A short summary:
- Since there is no BitSlice literal, the constructor functions ::empty(),
  ::from_element(), ::from_slice(), and ::try_from_slice(), and their _mut
  counterparts, create bit-slices as needed.
- Since bits[idx] = value does not exist, you can use .set() or .replace()
  (as well as their _unchecked and _aliased counterparts) to write into a
  bit-slice.
- Raw memory can be inspected with .domain() and .domain_mut(), and a
  bit-slice can be split on aliasing lines with .bit_domain() and
  .bit_domain_mut().
- The population can be queried for which indices have 0 or 1 bits by
  iterating across all such indices, counting them, or counting leading or
  trailing blocks. Additionally, .any(), .all(), .not_any(), .not_all(), and
  .some() test whether bit-slices satisfy aggregate Boolean qualities.
- Buffer contents can be relocated internally by shifting or rotating to the
  left or right.
Trait Implementations
BitSlice
adds trait implementations that [bool]
and [T]
do not necessarily
have, including numeric formatting and Boolean arithmetic operators.
Additionally, the BitField
trait allows bit-slices to act as a buffer for
wide-value storage.
Implementations
impl<T, O> BitSlice<T, O>
where
    T: BitStore,
    O: BitOrder,
Port of the [T]
inherent API.
pub fn len(&self) -> usize
sourcepub fn is_empty(&self) -> bool
pub fn is_empty(&self) -> bool
pub fn first(&self) -> Option<BitRef<'_, Const, T, O>>
Gets a reference to the first bit of the bit-slice, or None
if it is
empty.
Original
API Differences
bitvec
uses a custom structure for both read-only and mutable
references to bool
.
Examples
use bitvec::prelude::*;
let bits = bits![1, 0, 0];
assert_eq!(bits.first().as_deref(), Some(&true));
assert!(bits![].first().is_none());
pub fn first_mut(&mut self) -> Option<BitRef<'_, Mut, T, O>>
Gets a mutable reference to the first bit of the bit-slice, or None
if
it is empty.
Original
API Differences
bitvec
uses a custom structure for both read-only and mutable
references to bool
. This must be bound as mut
in order to write
through it.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
if let Some(mut first) = bits.first_mut() {
    *first = true;
}
assert_eq!(bits, bits![1, 0, 0]);
assert!(bits![mut].first_mut().is_none());
pub fn split_first(&self) -> Option<(BitRef<'_, Const, T, O>, &Self)>
Splits the bit-slice into a reference to its first bit, and the rest of
the bit-slice. Returns None
when empty.
Original
API Differences
bitvec
uses a custom structure for both read-only and mutable
references to bool
.
Examples
use bitvec::prelude::*;
let bits = bits![1, 0, 0];
let (first, rest) = bits.split_first().unwrap();
assert_eq!(first, &true);
assert_eq!(rest, bits![0; 2]);
pub fn split_first_mut(
    &mut self,
) -> Option<(BitRef<'_, Mut, T::Alias, O>, &mut BitSlice<T::Alias, O>)>
Splits the bit-slice into mutable references of its first bit, and the
rest of the bit-slice. Returns None
when empty.
Original
API Differences
bitvec
uses a custom structure for both read-only and mutable
references to bool
. This must be bound as mut
in order to write
through it.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
if let Some((mut first, rest)) = bits.split_first_mut() {
    *first = true;
    assert_eq!(rest, bits![0; 2]);
}
assert_eq!(bits, bits![1, 0, 0]);
pub fn split_last(&self) -> Option<(BitRef<'_, Const, T, O>, &Self)>
Splits the bit-slice into a reference to its last bit, and the rest of
the bit-slice. Returns None
when empty.
Original
API Differences
bitvec
uses a custom structure for both read-only and mutable
references to bool
.
Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1];
let (last, rest) = bits.split_last().unwrap();
assert_eq!(last, &true);
assert_eq!(rest, bits![0; 2]);
pub fn split_last_mut(
    &mut self,
) -> Option<(BitRef<'_, Mut, T::Alias, O>, &mut BitSlice<T::Alias, O>)>
Splits the bit-slice into mutable references to its last bit, and the
rest of the bit-slice. Returns None
when empty.
Original
API Differences
bitvec
uses a custom structure for both read-only and mutable
references to bool
. This must be bound as mut
in order to write
through it.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
if let Some((mut last, rest)) = bits.split_last_mut() {
    *last = true;
    assert_eq!(rest, bits![0; 2]);
}
assert_eq!(bits, bits![0, 0, 1]);
pub fn last(&self) -> Option<BitRef<'_, Const, T, O>>
Gets a reference to the last bit of the bit-slice, or None
if it is
empty.
Original
API Differences
bitvec
uses a custom structure for both read-only and mutable
references to bool
.
Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1];
assert_eq!(bits.last().as_deref(), Some(&true));
assert!(bits![].last().is_none());
pub fn last_mut(&mut self) -> Option<BitRef<'_, Mut, T, O>>
Gets a mutable reference to the last bit of the bit-slice, or None
if
it is empty.
Original
API Differences
bitvec
uses a custom structure for both read-only and mutable
references to bool
. This must be bound as mut
in order to write
through it.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
if let Some(mut last) = bits.last_mut() {
    *last = true;
}
assert_eq!(bits, bits![0, 0, 1]);
assert!(bits![mut].last_mut().is_none());
pub fn get<'a, I>(&'a self, index: I) -> Option<I::Immut>
where
    I: BitSliceIndex<'a, T, O>,
Gets a reference to a single bit or a subsection of the bit-slice,
depending on the type of index
.
- If given a usize, this produces a reference structure to the bool at the
  position.
- If given any form of range, this produces a smaller bit-slice.
This returns None
if the index
departs the bounds of self
.
Original
API Differences
BitSliceIndex
uses discrete types for immutable and mutable
references, rather than a single referent type.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0];
assert_eq!(bits.get(1).as_deref(), Some(&true));
assert_eq!(bits.get(0 .. 2), Some(bits![0, 1]));
assert!(bits.get(3).is_none());
assert!(bits.get(0 .. 4).is_none());
pub fn get_mut<'a, I>(&'a mut self, index: I) -> Option<I::Mut>
where
    I: BitSliceIndex<'a, T, O>,
Gets a mutable reference to a single bit or a subsection of the
bit-slice, depending on the type of index
.
- If given a usize, this produces a reference structure to the bool at the
  position.
- If given any form of range, this produces a smaller bit-slice.
This returns None
if the index
departs the bounds of self
.
Original
API Differences
BitSliceIndex
uses discrete types for immutable and mutable
references, rather than a single referent type.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 3];
*bits.get_mut(0).unwrap() = true;
bits.get_mut(1 ..).unwrap().fill(true);
assert_eq!(bits, bits![1; 3]);
pub unsafe fn get_unchecked<'a, I>(&'a self, index: I) -> I::Immut
where
    I: BitSliceIndex<'a, T, O>,
Gets a reference to a single bit or to a subsection of the bit-slice, without bounds checking.
This has the same arguments and behavior as .get()
, except that it
does not check that index
is in bounds.
Original
Safety
You must ensure that index is within bounds (within the range
0 .. self.len()); otherwise, this method introduces memory unsafety and/or
undefined behavior.
It is library-level undefined behavior to index beyond the length of any bit-slice, even if you know that the offset remains within an allocation as measured by Rust or LLVM.
Examples
use bitvec::prelude::*;
let data = 0b0001_0010u8;
let bits = &data.view_bits::<Lsb0>()[.. 3];
unsafe {
    assert!(bits.get_unchecked(1));
    assert!(bits.get_unchecked(4));
}
pub unsafe fn get_unchecked_mut<'a, I>(&'a mut self, index: I) -> I::Mut
where
    I: BitSliceIndex<'a, T, O>,
Gets a mutable reference to a single bit or a subsection of the
bit-slice, depending on the type of index
.
This has the same arguments and behavior as .get_mut()
, except that
it does not check that index
is in bounds.
Original
Safety
You must ensure that index is within bounds (within the range
0 .. self.len()); otherwise, this method introduces memory unsafety and/or
undefined behavior.
It is library-level undefined behavior to index beyond the length of any bit-slice, even if you know that the offset remains within an allocation as measured by Rust or LLVM.
Examples
use bitvec::prelude::*;
let mut data = 0u8;
let bits = &mut data.view_bits_mut::<Lsb0>()[.. 3];
unsafe {
    bits.get_unchecked_mut(1).commit(true);
    bits.get_unchecked_mut(4 .. 6).fill(true);
}
assert_eq!(data, 0b0011_0010);
pub fn as_ptr(&self) -> BitPtr<Const, T, O>
Deprecated: use .as_bitptr() instead.
pub fn as_mut_ptr(&mut self) -> BitPtr<Mut, T, O>
Deprecated: use .as_mut_bitptr() instead.
pub fn as_ptr_range(&self) -> Range<BitPtr<Const, T, O>>
Produces a range of bit-pointers to each bit in the bit-slice.
This is a standard-library range, which has no real functionality for
pointer types. You should prefer .as_bitptr_range()
instead, as it
produces a custom structure that provides expected ranging
functionality.
Original
pub fn as_mut_ptr_range(&mut self) -> Range<BitPtr<Mut, T, O>>
Produces a range of mutable bit-pointers to each bit in the bit-slice.
This is a standard-library range, which has no real functionality for
pointer types. You should prefer .as_mut_bitptr_range()
instead, as
it produces a custom structure that provides expected ranging
functionality.
Original
pub fn swap(&mut self, a: usize, b: usize)
pub fn reverse(&mut self)
pub fn iter(&self) -> Iter<'_, T, O>
Produces an iterator over each bit in the bit-slice.
Original
API Differences
This iterator yields proxy-reference structures, not &bool
. It can be
adapted to yield &bool
with the .by_refs()
method, or bool
with
.by_vals()
.
This iterator, and its adapters, are fast. Do not try to be more clever
than them by abusing .as_bitptr_range()
.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 1];
let mut iter = bits.iter();
assert!(!iter.next().unwrap());
assert!( iter.next().unwrap());
assert!( iter.next_back().unwrap());
assert!(!iter.next_back().unwrap());
assert!( iter.next().is_none());
pub fn iter_mut(&mut self) -> IterMut<'_, T, O>
Produces a mutable iterator over each bit in the bit-slice.
Original
API Differences
This iterator yields proxy-reference structures, not &mut bool
. In
addition, it marks each proxy as alias-tainted.
If you are using this in an ordinary loop and not keeping multiple
yielded proxy-references alive at the same scope, you may use the
.remove_alias()
adapter to undo the alias marking.
This iterator is fast. Do not try to be more clever than it by abusing
.as_mut_bitptr_range()
.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 4];
let mut iter = bits.iter_mut();
iter.nth(1).unwrap().commit(true); // index 1
iter.next_back().unwrap().commit(true); // index 3
assert!(iter.next().is_some()); // index 2
assert!(iter.next().is_none()); // complete
assert_eq!(bits, bits![0, 1, 0, 1]);
pub fn windows(&self, size: usize) -> Windows<'_, T, O>
Iterates over consecutive windowing subslices in a bit-slice.
Windows are overlapping views of the bit-slice. Each window advances one
bit from the previous, so in a bit-slice [A, B, C, D, E]
, calling
.windows(3)
will yield [A, B, C]
, [B, C, D]
, and [C, D, E]
.
Original
Panics
This panics if size
is 0
.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.windows(3);
assert_eq!(iter.next(), Some(bits![0, 1, 0]));
assert_eq!(iter.next(), Some(bits![1, 0, 0]));
assert_eq!(iter.next(), Some(bits![0, 0, 1]));
assert!(iter.next().is_none());
pub fn chunks(&self, chunk_size: usize) -> Chunks<'_, T, O>
Iterates over non-overlapping subslices of a bit-slice.
Unlike .windows()
, the subslices this yields do not overlap with each
other. If self.len()
is not an even multiple of chunk_size
, then the
last chunk yielded will be shorter.
Original
Sibling Methods
- .chunks_mut() has the same division logic, but each yielded bit-slice is
  mutable.
- .chunks_exact() does not yield the final chunk if it is shorter than
  chunk_size.
- .rchunks() iterates from the back of the bit-slice to the front, with the
  final, possibly-shorter, segment at the front edge.
Panics
This panics if chunk_size
is 0
.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.chunks(2);
assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![0, 0]));
assert_eq!(iter.next(), Some(bits![1]));
assert!(iter.next().is_none());
pub fn chunks_mut(&mut self, chunk_size: usize) -> ChunksMut<'_, T, O>
Iterates over non-overlapping mutable subslices of a bit-slice.
Iterators do not require that each yielded item is destroyed before the
next is produced. This means that each bit-slice yielded must be marked
as aliased. If you are using this in a loop that does not collect
multiple yielded subslices for the same scope, then you can remove the
alias marking by calling the (unsafe
) method .remove_alias()
on
the iterator.
Original
Sibling Methods
- .chunks() has the same division logic, but each yielded bit-slice is
  immutable.
- .chunks_exact_mut() does not yield the final chunk if it is shorter than
  chunk_size.
- .rchunks_mut() iterates from the back of the bit-slice to the front, with
  the final, possibly-shorter, segment at the front edge.
Panics
This panics if chunk_size
is 0
.
Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 5];
for (idx, chunk) in unsafe {
    bits.chunks_mut(2).remove_alias()
}.enumerate() {
    chunk.store(idx + 1);
}
assert_eq!(bits, bits![0, 1, 1, 0, 1]);
// ^^^^ ^^^^ ^
pub fn chunks_exact(&self, chunk_size: usize) -> ChunksExact<'_, T, O>
Iterates over non-overlapping subslices of a bit-slice.
If self.len()
is not an even multiple of chunk_size
, then the last
few bits are not yielded by the iterator at all. They can be accessed
with the .remainder()
method if the iterator is bound to a name.
Original
Sibling Methods
- .chunks() yields any leftover bits at the end as a shorter chunk during
  iteration.
- .chunks_exact_mut() has the same division logic, but each yielded bit-slice
  is mutable.
- .rchunks_exact() iterates from the back of the bit-slice to the front, with
  the unyielded remainder segment at the front edge.
Panics
This panics if chunk_size
is 0
.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.chunks_exact(2);
assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![0, 0]));
assert!(iter.next().is_none());
assert_eq!(iter.remainder(), bits![1]);
pub fn chunks_exact_mut(
    &mut self,
    chunk_size: usize,
) -> ChunksExactMut<'_, T, O>
Iterates over non-overlapping mutable subslices of a bit-slice.
If self.len()
is not an even multiple of chunk_size
, then the last
few bits are not yielded by the iterator at all. They can be accessed
with the .into_remainder()
method if the iterator is bound to a
name.
Iterators do not require that each yielded item is destroyed before the
next is produced. This means that each bit-slice yielded must be marked
as aliased. If you are using this in a loop that does not collect
multiple yielded subslices for the same scope, then you can remove the
alias marking by calling the (unsafe
) method .remove_alias()
on
the iterator.
Original
Sibling Methods
- .chunks_mut() yields any leftover bits at the end as a shorter chunk during
  iteration.
- .chunks_exact() has the same division logic, but each yielded bit-slice is
  immutable.
- .rchunks_exact_mut() iterates from the back of the bit-slice forwards, with
  the unyielded remainder segment at the front edge.
Panics
This panics if chunk_size
is 0
.
Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 5];
let mut iter = bits.chunks_exact_mut(2);
for (idx, chunk) in iter.by_ref().enumerate() {
    chunk.store(idx + 1);
}
iter.into_remainder().store(1u8);
assert_eq!(bits, bits![0, 1, 1, 0, 1]);
// remainder ^
pub fn rchunks(&self, chunk_size: usize) -> RChunks<'_, T, O>
Iterates over non-overlapping subslices of a bit-slice, from the back edge.
Unlike .chunks()
, this aligns its chunks to the back edge of self
.
If self.len()
is not an even multiple of chunk_size
, then the
leftover partial chunk is self[0 .. len % chunk_size]
.
Original
Sibling Methods
- .rchunks_mut() has the same division logic, but each yielded bit-slice is
  mutable.
- .rchunks_exact() does not yield the final chunk if it is shorter than
  chunk_size.
- .chunks() iterates from the front of the bit-slice to the back, with the
  final, possibly-shorter, segment at the back edge.
Panics
This panics if chunk_size
is 0
.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.rchunks(2);
assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![1, 0]));
assert_eq!(iter.next(), Some(bits![0]));
assert!(iter.next().is_none());
pub fn rchunks_mut(&mut self, chunk_size: usize) -> RChunksMut<'_, T, O>
Iterates over non-overlapping mutable subslices of a bit-slice, from the back edge.
Unlike .chunks_mut()
, this aligns its chunks to the back edge of
self
. If self.len()
is not an even multiple of chunk_size
, then
the leftover partial chunk is self[0 .. len % chunk_size]
.
Iterators do not require that each yielded item is destroyed before the
next is produced. This means that each bit-slice yielded must be marked
as aliased. If you are using this in a loop that does not collect
multiple yielded values for the same scope, then you can remove the
alias marking by calling the (unsafe
) method .remove_alias()
on
the iterator.
Original
Sibling Methods
- .rchunks() has the same division logic, but each yielded bit-slice is
  immutable.
- .rchunks_exact_mut() does not yield the final chunk if it is shorter than
  chunk_size.
- .chunks_mut() iterates from the front of the bit-slice to the back, with
  the final, possibly-shorter, segment at the back edge.
Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 5];
for (idx, chunk) in unsafe {
    bits.rchunks_mut(2).remove_alias()
}.enumerate() {
    chunk.store(idx + 1);
}
assert_eq!(bits, bits![1, 1, 0, 0, 1]);
//           remainder ^  ^^^^  ^^^^
pub fn rchunks_exact(&self, chunk_size: usize) -> RChunksExact<'_, T, O>
Iterates over non-overlapping subslices of a bit-slice, from the back edge.
If self.len()
is not an even multiple of chunk_size
, then the first
few bits are not yielded by the iterator at all. They can be accessed
with the .remainder()
method if the iterator is bound to a name.
Original
Sibling Methods
- .rchunks() yields any leftover bits at the front as a shorter chunk during iteration.
- .rchunks_exact_mut() has the same division logic, but each yielded bit-slice is mutable.
- .chunks_exact() iterates from the front of the bit-slice to the back, with the unyielded remainder segment at the back edge.
Panics
This panics if chunk_size
is 0
.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1];
let mut iter = bits.rchunks_exact(2);
assert_eq!(iter.next(), Some(bits![0, 1]));
assert_eq!(iter.next(), Some(bits![1, 0]));
assert!(iter.next().is_none());
assert_eq!(iter.remainder(), bits![0]);
pub fn rchunks_exact_mut(&mut self, chunk_size: usize) -> RChunksExactMut<'_, T, O>
Iterates over non-overlapping mutable subslices of a bit-slice, from the back edge.
If self.len()
is not an even multiple of chunk_size
, then the first
few bits are not yielded by the iterator at all. They can be accessed
with the .into_remainder()
method if the iterator is bound to a
name.
Iterators do not require that each yielded item is destroyed before the
next is produced. This means that each bit-slice yielded must be marked
as aliased. If you are using this in a loop that does not collect
multiple yielded subslices for the same scope, then you can remove the
alias marking by calling the (unsafe
) method .remove_alias()
on
the iterator.
Sibling Methods
- .rchunks_mut() yields any leftover bits at the front as a shorter chunk during iteration.
- .rchunks_exact() has the same division logic, but each yielded bit-slice is immutable.
- .chunks_exact_mut() iterates from the front of the bit-slice to the back, with the unyielded remainder segment at the back edge.
Panics
This panics if chunk_size
is 0
.
Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 5];
let mut iter = bits.rchunks_exact_mut(2);
for (idx, chunk) in iter.by_ref().enumerate() {
chunk.store(idx + 1);
}
iter.into_remainder().store(1u8);
assert_eq!(bits, bits![1, 1, 0, 0, 1]);
//           remainder ^
pub fn split_at(&self, mid: usize) -> (&Self, &Self)
Splits a bit-slice in two parts at an index.
The returned bit-slices are self[.. mid]
and self[mid ..]
. mid
is
included in the right bit-slice, not the left.
If mid
is 0
then the left bit-slice is empty; if it is self.len()
then the right bit-slice is empty.
This method guarantees that even when either partition is empty, the
encoded bit-pointer values of the returned bit-slice references are
&self[0] and &self[mid].
Original
Panics
This panics if mid
is greater than self.len()
. It is allowed to be
equal to the length, in which case the right bit-slice is simply empty.
Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 0, 1, 1, 1];
let base = bits.as_bitptr();
let (a, b) = bits.split_at(0);
assert_eq!(unsafe { a.as_bitptr().offset_from(base) }, 0);
assert_eq!(unsafe { b.as_bitptr().offset_from(base) }, 0);
let (a, b) = bits.split_at(6);
assert_eq!(unsafe { b.as_bitptr().offset_from(base) }, 6);
let (a, b) = bits.split_at(3);
assert_eq!(a, bits![0; 3]);
assert_eq!(b, bits![1; 3]);
pub fn split_at_mut(&mut self, mid: usize) -> (&mut BitSlice<T::Alias, O>, &mut BitSlice<T::Alias, O>)
Splits a mutable bit-slice in two parts at an index.
The returned bit-slices are self[.. mid]
and self[mid ..]
. mid
is
included in the right bit-slice, not the left.
If mid
is 0
then the left bit-slice is empty; if it is self.len()
then the right bit-slice is empty.
This method guarantees that even when either partition is empty, the
encoded bit-pointer values of the returned bit-slice references are
&self[0] and &self[mid].
Original
API Differences
The end bits of the left half and the start bits of the right half might
be stored in the same memory element. In order to avoid breaking
bitvec
’s memory-safety guarantees, both bit-slices are marked as
T::Alias
. This marking allows them to be used without interfering with
each other when they interact with memory.
Panics
This panics if mid
is greater than self.len()
. It is allowed to be
equal to the length, in which case the right bit-slice is simply empty.
Examples
use bitvec::prelude::*;
let bits = bits![mut u8, Msb0; 0; 6];
let base = bits.as_mut_bitptr();
let (a, b) = bits.split_at_mut(0);
assert_eq!(unsafe { a.as_mut_bitptr().offset_from(base) }, 0);
assert_eq!(unsafe { b.as_mut_bitptr().offset_from(base) }, 0);
let (a, b) = bits.split_at_mut(6);
assert_eq!(unsafe { b.as_mut_bitptr().offset_from(base) }, 6);
let (a, b) = bits.split_at_mut(3);
a.store(3);
b.store(5);
assert_eq!(bits, bits![0, 1, 1, 1, 0, 1]);
pub fn split<F>(&self, pred: F) -> Split<'_, T, O, F> where F: FnMut(usize, &bool) -> bool
Iterates over subslices separated by bits that match a predicate. The matched bit is not contained in the yielded bit-slices.
Original
API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
Sibling Methods
- .split_mut() has the same splitting logic, but each yielded bit-slice is mutable.
- .split_inclusive() includes the matched bit in the yielded bit-slice.
- .rsplit() iterates from the back of the bit-slice instead of the front.
- .splitn() times out after n yields.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 1, 0];
// ^
let mut iter = bits.split(|pos, _bit| pos % 3 == 2);
assert_eq!(iter.next().unwrap(), bits![0, 1]);
assert_eq!(iter.next().unwrap(), bits![0]);
assert!(iter.next().is_none());
If the first bit is matched, then an empty bit-slice will be the first item yielded by the iterator. Similarly, if the last bit in the bit-slice matches, then an empty bit-slice will be the last item yielded.
use bitvec::prelude::*;
let bits = bits![0, 0, 1];
// ^
let mut iter = bits.split(|_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), bits![0; 2]);
assert!(iter.next().unwrap().is_empty());
assert!(iter.next().is_none());
If two matched bits are directly adjacent, then an empty bit-slice will be yielded between them:
use bitvec::prelude::*;
let bits = bits![1, 0, 0, 1];
// ^ ^
let mut iter = bits.split(|_pos, bit| !*bit);
assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().unwrap().is_empty());
assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().is_none());
pub fn split_mut<F>(&mut self, pred: F) -> SplitMut<'_, T, O, F> where F: FnMut(usize, &bool) -> bool
Iterates over mutable subslices separated by bits that match a predicate. The matched bit is not contained in the yielded bit-slices.
Iterators do not require that each yielded item is destroyed before the
next is produced. This means that each bit-slice yielded must be marked
as aliased. If you are using this in a loop that does not collect
multiple yielded subslices for the same scope, then you can remove the
alias marking by calling the (unsafe
) method .remove_alias()
on
the iterator.
Original
API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
Sibling Methods
- .split() has the same splitting logic, but each yielded bit-slice is immutable.
- .split_inclusive_mut() includes the matched bit in the yielded bit-slice.
- .rsplit_mut() iterates from the back of the bit-slice instead of the front.
- .splitn_mut() times out after n yields.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 1, 0];
// ^ ^
for group in bits.split_mut(|_pos, bit| *bit) {
group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 1, 1, 1]);
pub fn split_inclusive<F>(&self, pred: F) -> SplitInclusive<'_, T, O, F> where F: FnMut(usize, &bool) -> bool
Iterates over subslices separated by bits that match a predicate. Unlike
.split()
, this does include the matching bit as the last bit in the
yielded bit-slice.
Original
API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
Sibling Methods
- .split_inclusive_mut() has the same splitting logic, but each yielded bit-slice is mutable.
- .split() does not include the matched bit in the yielded bit-slice.
Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1, 0, 1];
// ^ ^
let mut iter = bits.split_inclusive(|_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), bits![0, 0, 1]);
assert_eq!(iter.next().unwrap(), bits![0, 1]);
assert!(iter.next().is_none());
pub fn split_inclusive_mut<F>(&mut self, pred: F) -> SplitInclusiveMut<'_, T, O, F> where F: FnMut(usize, &bool) -> bool
Iterates over mutable subslices separated by bits that match a
predicate. Unlike .split_mut()
, this does include the matching bit
as the last bit in the bit-slice.
Iterators do not require that each yielded item is destroyed before the
next is produced. This means that each bit-slice yielded must be marked
as aliased. If you are using this in a loop that does not collect
multiple yielded subslices for the same scope, then you can remove the
alias marking by calling the (unsafe
) method .remove_alias()
on
the iterator.
Original
API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
Sibling Methods
- .split_inclusive() has the same splitting logic, but each yielded bit-slice is immutable.
- .split_mut() does not include the matched bit in the yielded bit-slice.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 0, 0, 0];
// ^
for group in bits.split_inclusive_mut(|pos, _bit| pos % 3 == 2) {
group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 0, 1, 0]);
pub fn rsplit<F>(&self, pred: F) -> RSplit<'_, T, O, F> where F: FnMut(usize, &bool) -> bool
Iterates over subslices separated by bits that match a predicate, from the back edge. The matched bit is not contained in the yielded bit-slices.
Original
API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
Sibling Methods
- .rsplit_mut() has the same splitting logic, but each yielded bit-slice is mutable.
- .split() iterates from the front of the bit-slice instead of the back.
- .rsplitn() times out after n yields.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 1, 0];
// ^
let mut iter = bits.rsplit(|pos, _bit| pos % 3 == 2);
assert_eq!(iter.next().unwrap(), bits![0]);
assert_eq!(iter.next().unwrap(), bits![0, 1]);
assert!(iter.next().is_none());
If the last bit is matched, then an empty bit-slice will be the first item yielded by the iterator. Similarly, if the first bit in the bit-slice matches, then an empty bit-slice will be the last item yielded.
use bitvec::prelude::*;
let bits = bits![0, 0, 1];
// ^
let mut iter = bits.rsplit(|_pos, bit| *bit);
assert!(iter.next().unwrap().is_empty());
assert_eq!(iter.next().unwrap(), bits![0; 2]);
assert!(iter.next().is_none());
If two matched bits are directly adjacent, then an empty bit-slice will be yielded between them:
use bitvec::prelude::*;
let bits = bits![1, 0, 0, 1];
// ^ ^
let mut iter = bits.rsplit(|_pos, bit| !*bit);
assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().unwrap().is_empty());
assert_eq!(iter.next().unwrap(), bits![1]);
assert!(iter.next().is_none());
pub fn rsplit_mut<F>(&mut self, pred: F) -> RSplitMut<'_, T, O, F> where F: FnMut(usize, &bool) -> bool
Iterates over mutable subslices separated by bits that match a predicate, from the back. The matched bit is not contained in the yielded bit-slices.
Iterators do not require that each yielded item is destroyed before the
next is produced. This means that each bit-slice yielded must be marked
as aliased. If you are using this in a loop that does not collect
multiple yielded subslices for the same scope, then you can remove the
alias marking by calling the (unsafe
) method .remove_alias()
on
the iterator.
Original
API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
Sibling Methods
- .rsplit() has the same splitting logic, but each yielded bit-slice is immutable.
- .split_mut() iterates from the front of the bit-slice instead of the back.
- .rsplitn_mut() times out after n yields.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 1, 0];
// ^ ^
for group in bits.rsplit_mut(|_pos, bit| *bit) {
group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 1, 1, 1]);
pub fn splitn<F>(&self, n: usize, pred: F) -> SplitN<'_, T, O, F> where F: FnMut(usize, &bool) -> bool
Iterates over subslices separated by bits that match a predicate, giving
up after yielding n
times. The n
th yield contains the rest of the
bit-slice. As with .split()
, the yielded bit-slices do not contain the
matched bit.
Original
API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
Sibling Methods
- .splitn_mut() has the same splitting logic, but each yielded bit-slice is mutable.
- .rsplitn() iterates from the back of the bit-slice instead of the front.
- .split() has the same splitting logic, but never times out.
Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1, 0, 1, 0];
let mut iter = bits.splitn(2, |_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), bits![0, 0]);
assert_eq!(iter.next().unwrap(), bits![0, 1, 0]);
assert!(iter.next().is_none());
pub fn splitn_mut<F>(&mut self, n: usize, pred: F) -> SplitNMut<'_, T, O, F> where F: FnMut(usize, &bool) -> bool
Iterates over mutable subslices separated by bits that match a
predicate, giving up after yielding n
times. The n
th yield contains
the rest of the bit-slice. As with .split_mut()
, the yielded
bit-slices do not contain the matched bit.
Iterators do not require that each yielded item is destroyed before the
next is produced. This means that each bit-slice yielded must be marked
as aliased. If you are using this in a loop that does not collect
multiple yielded subslices for the same scope, then you can remove the
alias marking by calling the (unsafe
) method .remove_alias()
on
the iterator.
Original
API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
Sibling Methods
.splitn()
has the same splitting logic, but each yielded bit-slice is immutable..rsplitn_mut()
iterates from the back of the bit-slice instead of the front..split_mut()
has the same splitting logic, but never times out.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 1, 0];
for group in bits.splitn_mut(2, |_pos, bit| *bit) {
group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 1, 1, 0]);
pub fn rsplitn<F>(&self, n: usize, pred: F) -> RSplitN<'_, T, O, F> where F: FnMut(usize, &bool) -> bool
Iterates over subslices separated by bits that match a predicate from
the back edge, giving up after yielding n times. The nth yield contains
the rest of the bit-slice. As with .rsplit(), the yielded bit-slices do
not contain the matched bit.
Original
API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
Sibling Methods
- .rsplitn_mut() has the same splitting logic, but each yielded bit-slice is mutable.
- .splitn() iterates from the front of the bit-slice instead of the back.
- .rsplit() has the same splitting logic, but never times out.
Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1, 1, 0];
// ^
let mut iter = bits.rsplitn(2, |_pos, bit| *bit);
assert_eq!(iter.next().unwrap(), bits![0]);
assert_eq!(iter.next().unwrap(), bits![0, 0, 1]);
assert!(iter.next().is_none());
pub fn rsplitn_mut<F>(&mut self, n: usize, pred: F) -> RSplitNMut<'_, T, O, F> where F: FnMut(usize, &bool) -> bool
Iterates over mutable subslices separated by bits that match a
predicate from the back edge, giving up after yielding n times. The nth
yield contains the rest of the bit-slice. As with .rsplit_mut(), the
yielded bit-slices do not contain the matched bit.
Iterators do not require that each yielded item is destroyed before the
next is produced. This means that each bit-slice yielded must be marked
as aliased. If you are using this in a loop that does not collect
multiple yielded subslices for the same scope, then you can remove the
alias marking by calling the (unsafe
) method .remove_alias()
on
the iterator.
Original
API Differences
The predicate function receives the index being tested as well as the bit value at that index. This allows the predicate to have more than one bit of information about the bit-slice being traversed.
Sibling Methods
- .rsplitn() has the same splitting logic, but each yielded bit-slice is immutable.
- .splitn_mut() iterates from the front of the bit-slice instead of the back.
- .rsplit_mut() has the same splitting logic, but never times out.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 0, 1, 0, 0, 0];
for group in bits.rsplitn_mut(2, |_idx, bit| *bit) {
group.set(0, true);
}
assert_eq!(bits, bits![1, 0, 1, 0, 0, 1, 1, 0, 0]);
// ^ group 2 ^ group 1
pub fn contains<T2, O2>(&self, other: &BitSlice<T2, O2>) -> bool where T2: BitStore, O2: BitOrder
Tests if the bit-slice contains the given sequence anywhere within it.
This scans over self.windows(other.len())
until one of the windows
matches. The search key does not need to share type parameters with the
bit-slice being tested, as the comparison is bit-wise. However, sharing
type parameters will accelerate the comparison.
Original
Examples
use bitvec::prelude::*;
let bits = bits![0, 0, 1, 0, 1, 1, 0, 0];
assert!( bits.contains(bits![0, 1, 1, 0]));
assert!(!bits.contains(bits![1, 0, 0, 1]));
pub fn starts_with<T2, O2>(&self, needle: &BitSlice<T2, O2>) -> bool where T2: BitStore, O2: BitOrder
Tests if the bit-slice begins with the given sequence.
The search key does not need to share type parameters with the bit-slice being tested, as the comparison is bit-wise. However, sharing type parameters will accelerate the comparison.
Original
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 1, 0];
assert!( bits.starts_with(bits![0, 1]));
assert!(!bits.starts_with(bits![1, 0]));
This always returns true
if the needle is empty:
use bitvec::prelude::*;
let bits = bits![0, 1, 0];
let empty = bits![];
assert!(bits.starts_with(empty));
assert!(empty.starts_with(empty));
pub fn ends_with<T2, O2>(&self, needle: &BitSlice<T2, O2>) -> bool where T2: BitStore, O2: BitOrder
Tests if the bit-slice ends with the given sequence.
The search key does not need to share type parameters with the bit-slice being tested, as the comparison is bit-wise. However, sharing type parameters will accelerate the comparison.
Original
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 1, 0];
assert!( bits.ends_with(bits![1, 0]));
assert!(!bits.ends_with(bits![0, 1]));
This always returns true
if the needle is empty:
use bitvec::prelude::*;
let bits = bits![0, 1, 0];
let empty = bits![];
assert!(bits.ends_with(empty));
assert!(empty.ends_with(empty));
pub fn strip_prefix<T2, O2>(&self, prefix: &BitSlice<T2, O2>) -> Option<&Self> where T2: BitStore, O2: BitOrder
Removes a prefix bit-slice, if present.
Like .starts_with(), the search key does not need to share type
parameters with the bit-slice being stripped. If
self.starts_with(prefix), then this returns Some(&self[prefix.len() ..]);
otherwise it returns None.
Original
API Differences
BitSlice
does not support pattern searches; instead, it permits self
and prefix
to differ in type parameters.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1, 0, 1, 1, 0];
assert_eq!(bits.strip_prefix(bits![0, 1]).unwrap(), bits[2 ..]);
assert_eq!(bits.strip_prefix(bits![0, 1, 0, 0]).unwrap(), bits[4 ..]);
assert!(bits.strip_prefix(bits![1, 0]).is_none());
pub fn strip_suffix<T2, O2>(&self, suffix: &BitSlice<T2, O2>) -> Option<&Self> where T2: BitStore, O2: BitOrder
Removes a suffix bit-slice, if present.
Like .ends_with()
, the search key does not need to share type
parameters with the bit-slice being stripped. If
self.ends_with(suffix)
, then this returns Some(&self[.. self.len() - suffix.len()])
, otherwise it returns None
.
Original
API Differences
BitSlice
does not support pattern searches; instead, it permits self
and suffix
to differ in type parameters.
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1, 0, 1, 1, 0];
assert_eq!(bits.strip_suffix(bits![1, 0]).unwrap(), bits[.. 7]);
assert_eq!(bits.strip_suffix(bits![0, 1, 1, 0]).unwrap(), bits[.. 5]);
assert!(bits.strip_suffix(bits![0, 1]).is_none());
pub fn rotate_left(&mut self, by: usize)
Rotates the contents of a bit-slice to the left (towards the zero index).
This essentially splits the bit-slice at by, then exchanges the two
pieces: self[by ..] becomes the first section, and is then followed by
self[.. by].
The implementation is batch-accelerated where possible. It should have a
runtime complexity much lower than O(by)
.
Original
Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 0, 1, 0];
// split occurs here ^
bits.rotate_left(2);
assert_eq!(bits, bits![1, 0, 1, 0, 0, 0]);
pub fn rotate_right(&mut self, by: usize)
Rotates the contents of a bit-slice to the right (away from the zero index).
This essentially splits the bit-slice at self.len() - by
, then
exchanges the two pieces. self[len - by ..]
becomes the first section,
and is then followed by self[.. len - by]
.
The implementation is batch-accelerated where possible. It should have a
runtime complexity much lower than O(by)
.
Original
Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 0, 1, 1, 1, 0];
// split occurs here ^
bits.rotate_right(2);
assert_eq!(bits, bits![1, 0, 0, 0, 1, 1]);
pub fn fill(&mut self, value: bool)
Fills the bit-slice with a given bit.
This is a recent stabilization in the standard library. bitvec
previously offered this behavior as the novel API .set_all()
. That
method name is now removed in favor of this standard-library analogue.
Original
Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 5];
bits.fill(true);
assert_eq!(bits, bits![1; 5]);
pub fn fill_with<F>(&mut self, func: F) where F: FnMut(usize) -> bool
Fills the bit-slice with bits produced by a generator function.
Original
API Differences
The generator function receives the index of the bit being initialized as an argument.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0; 5];
bits.fill_with(|idx| idx % 2 == 0);
assert_eq!(bits, bits![1, 0, 1, 0, 1]);
pub fn clone_from_slice<T2, O2>(&mut self, src: &BitSlice<T2, O2>) where T2: BitStore, O2: BitOrder
Deprecated: use .clone_from_bitslice() instead.
pub fn copy_from_slice(&mut self, src: &Self)
Deprecated: use .copy_from_bitslice() instead.
pub fn copy_within<R>(&mut self, src: R, dest: usize) where R: RangeExt<usize>
Copies a span of bits to another location in the bit-slice.
src is the range of bit-indices in the bit-slice to copy, and dest is
the starting index of the destination range. src and
dest .. dest + src.len() are permitted to overlap; the copy will
automatically detect and manage this. However, both src and
dest .. dest + src.len() must fall within the bounds of self.
Original
Panics
This panics if either the source or destination range exceed
self.len()
.
Examples
use bitvec::prelude::*;
let bits = bits![mut 1, 1, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0];
bits.copy_within(1 .. 5, 8);
// v v v v
assert_eq!(bits, bits![1, 1, 1, 1, 0, 0, 0, 0, 1, 1, 1, 0]);
// ^ ^ ^ ^
pub fn swap_with_slice<T2, O2>(&mut self, other: &mut BitSlice<T2, O2>) where T2: BitStore, O2: BitOrder
Deprecated: use .swap_with_bitslice() instead.
pub unsafe fn align_to<U>(&self) -> (&Self, &BitSlice<U, O>, &Self) where U: BitStore
Produces bit-slice view(s) with different underlying storage types.
This may have unexpected effects, and you cannot assume that
before[idx] == after[idx]
! Consult the tables in the manual
for information about memory layouts.
Original
Notes
Unlike the standard library documentation, this explicitly guarantees that the middle bit-slice will have maximal size. You may rely on this property.
Safety
You may not use this to cast away alias protections. Rust does not have
support for higher-kinded types, so this cannot express the relation
Outer<T> -> Outer<U> where Outer: BitStoreContainer
, but memory safety
does require that you respect this rule. Reälign integers to integers,
Cell
s to Cell
s, and atomics to atomics, but do not cross these
boundaries.
Examples
use bitvec::prelude::*;
let bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7];
let bits = bytes.view_bits::<Lsb0>();
let (pfx, mid, sfx) = unsafe {
bits.align_to::<u16>()
};
assert!(pfx.len() <= 8);
assert_eq!(mid.len(), 48);
assert!(sfx.len() <= 8);
pub unsafe fn align_to_mut<U>(&mut self) -> (&mut Self, &mut BitSlice<U, O>, &mut Self) where U: BitStore
Produces bit-slice view(s) with different underlying storage types.
This may have unexpected effects, and you cannot assume that
before[idx] == after[idx]
! Consult the tables in the manual
for information about memory layouts.
Original
Notes
Unlike the standard library documentation, this explicitly guarantees that the middle bit-slice will have maximal size. You may rely on this property.
Safety
You may not use this to cast away alias protections. Rust does not have
support for higher-kinded types, so this cannot express the relation
Outer<T> -> Outer<U> where Outer: BitStoreContainer
, but memory safety
does require that you respect this rule. Reälign integers to integers,
Cell
s to Cell
s, and atomics to atomics, but do not cross these
boundaries.
Examples
use bitvec::prelude::*;
let mut bytes: [u8; 7] = [1, 2, 3, 4, 5, 6, 7];
let bits = bytes.view_bits_mut::<Lsb0>();
let (pfx, mid, sfx) = unsafe {
bits.align_to_mut::<u16>()
};
assert!(pfx.len() <= 8);
assert_eq!(mid.len(), 48);
assert!(sfx.len() <= 8);
impl<T, O> BitSlice<T, O> where T: BitStore, O: BitOrder
pub fn to_vec(&self) -> BitVec<T::Unalias, O>
Deprecated: use .to_bitvec() instead.
pub fn repeat(&self, n: usize) -> BitVec<T::Unalias, O>
Creates a bit-vector by repeating a bit-slice n
times.
Original
Panics
This method panics if self.len() * n
exceeds the BitVec
capacity.
Examples
use bitvec::prelude::*;
assert_eq!(bits![0, 1].repeat(3), bitvec![0, 1, 0, 1, 0, 1]);
This panics by exceeding bit-vector maximum capacity:
use bitvec::prelude::*;
bits![0, 1].repeat(BitSlice::<usize, Lsb0>::MAX_BITS);
impl<T, O> BitSlice<T, O> where T: BitStore, O: BitOrder
Constructors.
pub fn empty_mut<'a>() -> &'a mut Self
Produces an empty bit-slice with an arbitrary lifetime, through an exclusive reference.
pub fn from_element(elem: &T) -> &Self
Constructs a shared &BitSlice
reference over a shared element.
The BitView
trait, implemented on all BitStore
implementors,
provides a .view_bits::<O>()
method which delegates to this function
and may be more convenient for you to write.
Parameters
elem
: A shared reference to a memory element.
Returns
A shared &BitSlice
over elem
.
Examples
use bitvec::prelude::*;
let elem = 0u8;
let bits = BitSlice::<_, Lsb0>::from_element(&elem);
assert_eq!(bits.len(), 8);
let bits = elem.view_bits::<Lsb0>();
pub fn from_element_mut(elem: &mut T) -> &mut Self
Constructs an exclusive &mut BitSlice
reference over an element.
The BitView
trait, implemented on all BitStore
implementors,
provides a .view_bits_mut::<O>()
method which delegates to this
function and may be more convenient for you to write.
Parameters
elem
: An exclusive reference to a memory element.
Returns
An exclusive &mut BitSlice
over elem
.
Note that the original elem
reference will be inaccessible for the
duration of the returned bit-slice handle’s lifetime.
Examples
use bitvec::prelude::*;
let mut elem = 0u8;
let bits = BitSlice::<_, Lsb0>::from_element_mut(&mut elem);
bits.set(1, true);
assert!(bits[1]);
assert_eq!(elem, 2);
let bits = elem.view_bits_mut::<Lsb0>();
pub fn from_slice(slice: &[T]) -> &Self
Constructs a shared &BitSlice
reference over a slice of elements.
The BitView
trait, implemented on all [T]
slices, provides a
.view_bits::<O>()
method which delegates to this function and may be
more convenient for you to write.
Parameters
slice: A shared reference to a slice of memory elements.
Returns
A shared BitSlice reference over all of slice.
Panics
This will panic if slice
is too long to encode as a bit-slice view.
Examples
use bitvec::prelude::*;
let data = [0u16, 1];
let bits = BitSlice::<_, Lsb0>::from_slice(&data);
assert!(bits[16]);
let bits = data.view_bits::<Lsb0>();
pub fn try_from_slice(slice: &[T]) -> Result<&Self, BitSpanError<T>>
Attempts to construct a shared &BitSlice
reference over a slice of
elements.
The BitView trait, implemented on all [T] slices, provides a .try_view_bits::<O>() method which delegates to this function and may be more convenient for you to write.
This is difficult, if not impossible, to cause to fail: Rust will not create arrays large enough to overflow the bit-slice encoding on 64-bit architectures.
Parameters
slice: A shared reference to a slice of memory elements.
Returns
A shared &BitSlice over slice. If slice is longer than can be encoded into a &BitSlice (see MAX_ELTS), this will fail and return the original slice as an error.
Examples
use bitvec::prelude::*;
let data = [0u8, 1];
let bits = BitSlice::<_, Msb0>::try_from_slice(&data).unwrap();
assert!(bits[15]);
let bits = data.try_view_bits::<Msb0>().unwrap();
pub fn from_slice_mut(slice: &mut [T]) -> &mut Self
Constructs an exclusive &mut BitSlice
reference over a slice of
elements.
The BitView
trait, implemented on all [T]
slices, provides a
.view_bits_mut::<O>()
method which delegates to this function and
may be more convenient for you to write.
Parameters
slice: An exclusive reference to a slice of memory elements.
Returns
An exclusive &mut BitSlice over all of slice.
Panics
This panics if slice
is too long to encode as a bit-slice view.
Examples
use bitvec::prelude::*;
let mut data = [0u16; 2];
let bits = BitSlice::<_, Lsb0>::from_slice_mut(&mut data);
bits.set(0, true);
bits.set(17, true);
assert_eq!(data, [1, 2]);
let bits = data.view_bits_mut::<Lsb0>();
pub fn try_from_slice_mut(slice: &mut [T]) -> Result<&mut Self, BitSpanError<T>>
Attempts to construct an exclusive &mut BitSlice
reference over a
slice of elements.
The BitView
trait, implemented on all [T]
slices, provides a
.try_view_bits_mut::<O>()
method which delegates to this function
and may be more convenient for you to write.
Parameters
slice: An exclusive reference to a slice of memory elements.
Returns
An exclusive &mut BitSlice over slice. If slice is longer than can be encoded into a &mut BitSlice (see MAX_ELTS), this will fail and return the original slice as an error.
Examples
use bitvec::prelude::*;
let mut data = [0u8; 2];
let bits = BitSlice::<_, Msb0>::try_from_slice_mut(&mut data).unwrap();
bits.set(7, true);
bits.set(15, true);
assert_eq!(data, [1; 2]);
let bits = data.try_view_bits_mut::<Msb0>().unwrap();
pub unsafe fn from_slice_unchecked(slice: &[T]) -> &Self
Constructs a shared &BitSlice
over an element slice, without checking
its length.
If slice is too long to encode into a &BitSlice, then the produced bit-slice’s length is unspecified.
Safety
You must ensure that slice.len() < BitSlice::MAX_ELTS.
Calling this function with an over-long slice is library-level undefined behavior. You may not assume anything about its implementation or behavior, and must conservatively assume that over-long slices cause compiler UB.
pub unsafe fn from_slice_unchecked_mut(slice: &mut [T]) -> &mut Self
Constructs an exclusive &mut BitSlice
over an element slice, without
checking its length.
If slice is too long to encode into a &mut BitSlice, then the produced bit-slice’s length is unspecified.
Safety
You must ensure that slice.len() < BitSlice::MAX_ELTS.
Calling this function with an over-long slice is library-level undefined behavior. You may not assume anything about its implementation or behavior, and must conservatively assume that over-long slices cause compiler UB.
impl<T, O> BitSlice<T, O> where T: BitStore, O: BitOrder
Alternates of standard APIs.
pub fn as_bitptr(&self) -> BitPtr<Const, T, O>
Gets a raw pointer to the zeroth bit of the bit-slice.
Original
API Differences
This is renamed in order to indicate that it is returning a bitvec
structure, not a raw pointer.
pub fn as_mut_bitptr(&mut self) -> BitPtr<Mut, T, O>
Gets a raw, write-capable pointer to the zeroth bit of the bit-slice.
Original
API Differences
This is renamed in order to indicate that it is returning a bitvec
structure, not a raw pointer.
pub fn as_bitptr_range(&self) -> BitPtrRange<Const, T, O>
Views the bit-slice as a half-open range of bit-pointers, from its first bit to the first bit beyond it.
Original
API Differences
This is renamed to indicate that it returns a bitvec
structure, rather
than an ordinary Range
.
Notes
BitSlice
does define a .as_ptr_range()
, which returns a
Range<BitPtr>
. BitPtrRange
has additional capabilities that
Range<*const T>
and Range<BitPtr>
do not.
pub fn as_mut_bitptr_range(&mut self) -> BitPtrRange<Mut, T, O>
Views the bit-slice as a half-open range of write-capable bit-pointers, from its first bit to the first bit beyond it.
Original
API Differences
This is renamed to indicate that it returns a bitvec
structure, rather
than an ordinary Range
.
Notes
BitSlice
does define an .as_mut_ptr_range(), which returns a
Range<BitPtr>
. BitPtrRange
has additional capabilities that
Range<*mut T>
and Range<BitPtr>
do not.
pub fn clone_from_bitslice<T2, O2>(&mut self, src: &BitSlice<T2, O2>) where T2: BitStore, O2: BitOrder
Copies the bits from src
into self
.
self
and src
must have the same length.
Performance
If src
has the same type arguments as self
, it will use the same
implementation as .copy_from_bitslice()
; if you know that this will
always be the case, you should prefer to use that method directly.
Only .copy_from_bitslice()
is able to perform acceleration; this
method is always required to perform a bit-by-bit crawl over both
bit-slices.
Original
API Differences
This is renamed to reflect that it copies from another bit-slice, not from an element slice.
In order to support general usage, it allows src
to have different
type parameters than self
, at the cost of performance optimizations.
Panics
This panics if the two bit-slices have different lengths.
Examples
use bitvec::prelude::*;
pub fn copy_from_bitslice(&mut self, src: &Self)
pub fn swap_with_bitslice<T2, O2>(&mut self, other: &mut BitSlice<T2, O2>) where T2: BitStore, O2: BitOrder
Swaps the contents of two bit-slices.
self
and other
must have the same length.
Original
API Differences
This method is renamed, as it takes a bit-slice rather than an element slice.
Panics
This panics if the two bit-slices have different lengths.
Examples
use bitvec::prelude::*;
let mut one = [0xA5u8, 0x69];
let mut two = 0x1234u16;
let one_bits = one.view_bits_mut::<Msb0>();
let two_bits = two.view_bits_mut::<Lsb0>();
one_bits.swap_with_bitslice(two_bits);
assert_eq!(one, [0x2C, 0x48]);
assert_eq!(two, 0x96A5);
impl<T, O> BitSlice<T, O> where T: BitStore, O: BitOrder
Extensions of standard APIs.
pub fn set(&mut self, index: usize, value: bool)
Writes a new value into a single bit.
This is the replacement for *slice[index] = value;
, as bitvec
is not
able to express that under the current IndexMut
API signature.
Parameters
&mut self
index: The bit-index to set. It must be in 0 .. self.len().
value: The new bit-value to write into the bit at index.
Panics
This panics if index
is out of bounds.
Examples
use bitvec::prelude::*;
let bits = bits![mut 0, 1];
bits.set(0, true);
bits.set(1, false);
assert_eq!(bits, bits![1, 0]);
pub unsafe fn set_unchecked(&mut self, index: usize, value: bool)
Writes a new value into a single bit, without bounds checking.
Parameters
&mut self
index: The bit-index to set. It must be in 0 .. self.len().
value: The new bit-value to write into the bit at index.
Safety
You must ensure that index
is in the range 0 .. self.len()
.
This performs bit-pointer offset arithmetic without doing any bounds
checks. If index
is out of bounds, then this will issue an
out-of-bounds access and will trigger memory unsafety.
Examples
use bitvec::prelude::*;
let mut data = 0u8;
let bits = &mut data.view_bits_mut::<Lsb0>()[.. 2];
assert_eq!(bits.len(), 2);
unsafe {
bits.set_unchecked(3, true);
}
assert_eq!(data, 8);
pub unsafe fn replace_unchecked(&mut self, index: usize, value: bool) -> bool
pub unsafe fn swap_unchecked(&mut self, a: usize, b: usize)
Swaps two bits in a bit-slice, without bounds checking.
See .swap()
for documentation.
Safety
You must ensure that a
and b
are both in the range 0 .. self.len()
.
This method performs bit-pointer offset arithmetic without doing any
bounds checks. If a
or b
are out of bounds, then this will issue an
out-of-bounds access and will trigger memory unsafety.
pub unsafe fn split_at_unchecked(&self, mid: usize) -> (&Self, &Self)
Splits a bit-slice at an index, without bounds checking.
See .split_at()
for documentation.
Safety
You must ensure that mid
is in the range 0 ..= self.len()
.
This method produces new bit-slice references. If mid
is out of
bounds, its behavior is library-level undefined. You must
conservatively assume that an out-of-bounds split point produces
compiler-level UB.
pub unsafe fn split_at_unchecked_mut(&mut self, mid: usize) -> (&mut BitSlice<T::Alias, O>, &mut BitSlice<T::Alias, O>)
Splits a mutable bit-slice at an index, without bounds checking.
See .split_at_mut()
for documentation.
Safety
You must ensure that mid
is in the range 0 ..= self.len()
.
This method produces new bit-slice references. If mid
is out of
bounds, its behavior is library-level undefined. You must
conservatively assume that an out-of-bounds split point produces
compiler-level UB.
pub unsafe fn copy_within_unchecked<R>(&mut self, src: R, dest: usize) where R: RangeExt<usize>
Copies bits from one region of the bit-slice to another region of itself, without doing bounds checks.
The regions are allowed to overlap.
Parameters
&mut self
src: The range within self from which to copy.
dest: The starting index within self at which to paste.
Effects
self[src] is copied to self[dest .. dest + src.len()]. The bits of self[src] are left in an unspecified, but initialized, state.
Safety
src.end()
and dest + src.len()
must be entirely within bounds.
Examples
use bitvec::prelude::*;
let mut data = 0b1011_0000u8;
let bits = data.view_bits_mut::<Msb0>();
unsafe {
bits.copy_within_unchecked(.. 4, 2);
}
assert_eq!(data, 0b1010_1100);
impl<T, O> BitSlice<T, O> where T: BitStore, O: BitOrder
Views of underlying memory.
pub fn bit_domain(&self) -> BitDomain<'_, Const, T, O>
Partitions a bit-slice into maybe-contended and known-uncontended parts.
The documentation of BitDomain
goes into this in more detail. In
short, this produces a &BitSlice
that is as large as possible without
requiring alias protection, as well as any bits that were not able to be
included in the unaliased bit-slice.
pub fn bit_domain_mut(&mut self) -> BitDomain<'_, Mut, T, O>
Partitions a mutable bit-slice into maybe-contended and known-uncontended parts.
The documentation of BitDomain
goes into this in more detail. In
short, this produces a &mut BitSlice
that is as large as possible
without requiring alias protection, as well as any bits that were not
able to be included in the unaliased bit-slice.
pub fn domain(&self) -> Domain<'_, Const, T, O>
Views the underlying memory of a bit-slice, removing alias protections where possible.
The documentation of Domain
goes into this in more detail. In short,
this produces a &[T]
slice with alias protections removed, covering
all elements that self
completely fills. Partially-used elements on
either the front or back edge of the slice are returned separately.
pub fn domain_mut(&mut self) -> Domain<'_, Mut, T, O>
Views the underlying memory of a bit-slice, removing alias protections where possible.
The documentation of Domain
goes into this in more detail. In short,
this produces a &mut [T]
slice with alias protections removed,
covering all elements that self
completely fills. Partially-used
elements on the front or back edge of the slice are returned separately.
impl<T, O> BitSlice<T, O> where T: BitStore, O: BitOrder
Bit-value queries.
pub fn count_ones(&self) -> usize
Counts the number of bits set to 1
in the bit-slice contents.
Examples
use bitvec::prelude::*;
let bits = bits![1, 1, 0, 0];
assert_eq!(bits[.. 2].count_ones(), 2);
assert_eq!(bits[2 ..].count_ones(), 0);
assert_eq!(bits![].count_ones(), 0);
pub fn count_zeros(&self) -> usize
Counts the number of bits cleared to 0
in the bit-slice contents.
Examples
use bitvec::prelude::*;
let bits = bits![1, 1, 0, 0];
assert_eq!(bits[.. 2].count_zeros(), 0);
assert_eq!(bits[2 ..].count_zeros(), 2);
assert_eq!(bits![].count_zeros(), 0);
pub fn iter_ones(&self) -> IterOnes<'_, T, O>
Enumerates the index of each bit in a bit-slice set to 1
.
This is a shorthand for a .enumerate().filter_map()
iterator that
selects the index of each true
bit; however, its implementation is
eligible for optimizations that the individual-bit iterator is not.
Specializations for the Lsb0
and Msb0
orderings allow processors
with instructions that seek particular bits within an element to operate
on whole elements, rather than on each bit individually.
Examples
This example uses .iter_ones()
, a .filter_map()
that finds the index
of each set bit, and the known indices, in order to show that they have
equivalent behavior.
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 0, 1, 0, 0, 0, 1];
let iter_ones = bits.iter_ones();
let known_indices = [1, 4, 8].iter().copied();
let filter = bits.iter()
.by_vals()
.enumerate()
.filter_map(|(idx, bit)| if bit { Some(idx) } else { None });
let all = iter_ones.zip(known_indices).zip(filter);
for ((iter_one, known), filtered) in all {
assert_eq!(iter_one, known);
assert_eq!(known, filtered);
}
pub fn iter_zeros(&self) -> IterZeros<'_, T, O>
Enumerates the index of each bit in a bit-slice cleared to 0
.
This is a shorthand for a .enumerate().filter_map()
iterator that
selects the index of each false
bit; however, its implementation is
eligible for optimizations that the individual-bit iterator is not.
Specializations for the Lsb0
and Msb0
orderings allow processors
with instructions that seek particular bits within an element to operate
on whole elements, rather than on each bit individually.
Examples
This example uses .iter_zeros()
, a .filter_map()
that finds the
index of each cleared bit, and the known indices, in order to show that
they have equivalent behavior.
use bitvec::prelude::*;
let bits = bits![1, 0, 1, 1, 0, 1, 1, 1, 0];
let iter_zeros = bits.iter_zeros();
let known_indices = [1, 4, 8].iter().copied();
let filter = bits.iter()
.by_vals()
.enumerate()
.filter_map(|(idx, bit)| if !bit { Some(idx) } else { None });
let all = iter_zeros.zip(known_indices).zip(filter);
for ((iter_zero, known), filtered) in all {
assert_eq!(iter_zero, known);
assert_eq!(known, filtered);
}
pub fn first_one(&self) -> Option<usize>
Finds the index of the first bit in the bit-slice set to 1
.
Returns None
if there is no true
bit in the bit-slice.
Examples
use bitvec::prelude::*;
assert!(bits![].first_one().is_none());
assert!(bits![0].first_one().is_none());
assert_eq!(bits![0, 1].first_one(), Some(1));
pub fn first_zero(&self) -> Option<usize>
Finds the index of the first bit in the bit-slice cleared to 0
.
Returns None
if there is no false
bit in the bit-slice.
Examples
use bitvec::prelude::*;
assert!(bits![].first_zero().is_none());
assert!(bits![1].first_zero().is_none());
assert_eq!(bits![1, 0].first_zero(), Some(1));
pub fn last_one(&self) -> Option<usize>
Finds the index of the last bit in the bit-slice set to 1
.
Returns None
if there is no true
bit in the bit-slice.
Examples
use bitvec::prelude::*;
assert!(bits![].last_one().is_none());
assert!(bits![0].last_one().is_none());
assert_eq!(bits![1, 0].last_one(), Some(0));
pub fn last_zero(&self) -> Option<usize>
Finds the index of the last bit in the bit-slice cleared to 0
.
Returns None
if there is no false
bit in the bit-slice.
Examples
use bitvec::prelude::*;
assert!(bits![].last_zero().is_none());
assert!(bits![1].last_zero().is_none());
assert_eq!(bits![0, 1].last_zero(), Some(0));
pub fn leading_ones(&self) -> usize
Counts the number of bits from the start of the bit-slice to the first
bit set to 0
.
This returns 0
if the bit-slice is empty.
Examples
use bitvec::prelude::*;
assert_eq!(bits![].leading_ones(), 0);
assert_eq!(bits![0].leading_ones(), 0);
assert_eq!(bits![1, 0].leading_ones(), 1);
pub fn leading_zeros(&self) -> usize
Counts the number of bits from the start of the bit-slice to the first
bit set to 1
.
This returns 0
if the bit-slice is empty.
Examples
use bitvec::prelude::*;
assert_eq!(bits![].leading_zeros(), 0);
assert_eq!(bits![1].leading_zeros(), 0);
assert_eq!(bits![0, 1].leading_zeros(), 1);
pub fn trailing_ones(&self) -> usize
Counts the number of bits from the end of the bit-slice to the last bit
set to 0
.
This returns 0
if the bit-slice is empty.
Examples
use bitvec::prelude::*;
assert_eq!(bits![].trailing_ones(), 0);
assert_eq!(bits![0].trailing_ones(), 0);
assert_eq!(bits![0, 1].trailing_ones(), 1);
pub fn trailing_zeros(&self) -> usize
Counts the number of bits from the end of the bit-slice to the last bit
set to 1
.
This returns 0
if the bit-slice is empty.
Examples
use bitvec::prelude::*;
assert_eq!(bits![].trailing_zeros(), 0);
assert_eq!(bits![1].trailing_zeros(), 0);
assert_eq!(bits![1, 0].trailing_zeros(), 1);
pub fn any(&self) -> bool
Tests if there is at least one bit set to 1
in the bit-slice.
Returns false
when self
is empty.
Examples
use bitvec::prelude::*;
assert!(!bits![].any());
assert!(!bits![0].any());
assert!(bits![0, 1].any());
pub fn all(&self) -> bool
Tests if every bit is set to 1
in the bit-slice.
Returns true
when self
is empty.
Examples
use bitvec::prelude::*;
assert!( bits![].all());
assert!(!bits![0].all());
assert!( bits![1].all());
pub fn not_any(&self) -> bool
Tests if every bit is cleared to 0
in the bit-slice.
Returns true
when self
is empty.
Examples
use bitvec::prelude::*;
assert!( bits![].not_any());
assert!(!bits![1].not_any());
assert!( bits![0].not_any());
pub fn not_all(&self) -> bool
Tests if at least one bit is cleared to 0
in the bit-slice.
Returns false
when self
is empty.
Examples
use bitvec::prelude::*;
assert!(!bits![].not_all());
assert!(!bits![1].not_all());
assert!( bits![0].not_all());
pub fn some(&self) -> bool
Tests if at least one bit is set to 1
, and at least one bit is cleared
to 0
, in the bit-slice.
Returns false
when self
is empty.
Examples
use bitvec::prelude::*;
assert!(!bits![].some());
assert!(!bits![0].some());
assert!(!bits![1].some());
assert!( bits![0, 1].some());
impl<T, O> BitSlice<T, O> where T: BitStore, O: BitOrder
Buffer manipulation.
pub fn shift_left(&mut self, by: usize)
Shifts the contents of a bit-slice “left” (towards the zero-index),
clearing the “right” bits to 0
.
This is a strictly-worse analogue to taking bits = &bits[by ..]
: it
has to modify the entire memory region that bits
governs, and destroys
contained information. Unless the actual memory layout and contents of
your bit-slice matter to your program, you should probably prefer to
munch your way forward through a bit-slice handle.
Note also that the “left” here is semantic only, and does not necessarily correspond to a left-shift instruction applied to the underlying integer storage.
This has no effect when by
is 0
. When by
is self.len()
, the
bit-slice is entirely cleared to 0
.
Panics
This panics if by
is not less than self.len()
.
Examples
use bitvec::prelude::*;
let bits = bits![mut 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1];
// these bits are retained ^--------------------------^
bits.shift_left(2);
assert_eq!(bits, bits![1, 1, 0, 0, 1, 0, 1, 1, 1, 1, 0, 0]);
// and move here ^--------------------------^
let bits = bits![mut 1; 2];
bits.shift_left(2);
assert_eq!(bits, bits![0; 2]);
pub fn shift_right(&mut self, by: usize)
Shifts the contents of a bit-slice “right” (away from the zero-index),
clearing the “left” bits to 0
.
This is a strictly-worse analogue to taking bits = &bits[.. bits.len() - by]: it must modify the entire memory region that bits governs, and destroys contained information. Unless the actual memory layout and contents of your bit-slice matter to your program, you should probably prefer to munch your way backward through a bit-slice handle.
Note also that the “right” here is semantic only, and does not necessarily correspond to a right-shift instruction applied to the underlying integer storage.
This has no effect when by
is 0
. When by
is self.len()
, the
bit-slice is entirely cleared to 0
.
Panics
This panics if by
is not less than self.len()
.
Examples
use bitvec::prelude::*;
let bits = bits![mut 1, 1, 1, 1, 0, 0, 1, 0, 1, 1, 1, 1];
// these bits stay ^--------------------------^
bits.shift_right(2);
assert_eq!(bits, bits![0, 0, 1, 1, 1, 1, 0, 0, 1, 0, 1, 1]);
// and move here ^--------------------------^
let bits = bits![mut 1; 2];
bits.shift_right(2);
assert_eq!(bits, bits![0; 2]);
impl<T, O> BitSlice<T, O> where T: BitStore + Radium, O: BitOrder
Methods available only when T
allows shared mutability.
pub fn set_aliased(&self, index: usize, value: bool)
Writes a new value into a single bit, using alias-safe operations.
This is equivalent to .set()
, except that it does not require an
&mut
reference, and allows bit-slices with alias-safe storage to share
write permissions.
Parameters
&self: This method only exists on bit-slices with alias-safe storage, and so does not require exclusive access.
index: The bit index to set. It must be in 0 .. self.len().
value: The new bit-value to write into the bit at index.
Panics
This panics if index
is out of bounds.
Examples
use bitvec::prelude::*;
use core::cell::Cell;
let bits: &BitSlice<_, _> = bits![Cell<usize>, Lsb0; 0, 1];
bits.set_aliased(0, true);
bits.set_aliased(1, false);
assert_eq!(bits, bits![1, 0]);
pub unsafe fn set_aliased_unchecked(&self, index: usize, value: bool)
Writes a new value into a single bit, using alias-safe operations and without bounds checking.
This is equivalent to .set_unchecked()
, except that it does not
require an &mut
reference, and allows bit-slices with alias-safe
storage to share write permissions.
Parameters
&self: This method only exists on bit-slices with alias-safe storage, and so does not require exclusive access.
index: The bit index to set. It must be in 0 .. self.len().
value: The new bit-value to write into the bit at index.
Safety
The caller must ensure that index
is not out of bounds.
Examples
use bitvec::prelude::*;
use core::cell::Cell;
let data = Cell::new(0u8);
let bits = &data.view_bits::<Lsb0>()[.. 2];
unsafe {
bits.set_aliased_unchecked(3, true);
}
assert_eq!(data.get(), 8);
impl<T, O> BitSlice<T, O> where T: BitStore, O: BitOrder
Miscellaneous information.
pub const MAX_BITS: usize = 2_305_843_009_213_693_951usize
The inclusive maximum length of a BitSlice. As BitSlice is zero-indexed, the largest possible index is one less than this value.
CPU word width | Value
---|---
32 bits | 0x1fff_ffff
64 bits | 0x1fff_ffff_ffff_ffff
pub const MAX_ELTS: usize = BitSpan<Const, T, O>::REGION_MAX_ELTS
The inclusive maximum length that a [T] slice can be for BitSlice<T, O> to cover it.
A bit-slice that begins in the interior of an element and contains the maximum number of bits will extend one element past the cutoff that would occur if the bit-slice began at the zeroth bit. Such a bit-slice is difficult to manually construct, but would not otherwise fail.
Type Bits | Max Elements (32-bit) | Max Elements (64-bit)
---|---|---
8 | 0x0400_0001 | 0x0400_0000_0000_0001
16 | 0x0200_0001 | 0x0200_0000_0000_0001
32 | 0x0100_0001 | 0x0100_0000_0000_0001
64 | 0x0080_0001 | 0x0080_0000_0000_0001
impl<T, O> BitSlice<T, O> where T: BitStore, O: BitOrder
pub fn to_bitvec(&self) -> BitVec<T::Unalias, O>
Copies a bit-slice into an owned bit-vector.
Since the new vector is freshly owned, this gets marked as ::Unalias
to remove any guards that may have been inserted by the bit-slice’s
history.
It does not change the underlying memory type, so a BitSlice<Cell<_>, _> will produce a BitVec<Cell<_>, _>.
Original
Examples
use bitvec::prelude::*;
let bits = bits![0, 1, 0, 1];
let bv = bits.to_bitvec();
assert_eq!(bits, bv);
Trait Implementations§
impl<A, O> AsMut<BitSlice<<A as BitView>::Store, O>> for BitArray<A, O> where A: BitViewSized, O: BitOrder
impl<A, O> AsRef<BitSlice<<A as BitView>::Store, O>> for BitArray<A, O> where A: BitViewSized, O: BitOrder
impl<T, O> AsRef<BitSlice<<T as BitStore>::Alias, O>> for IterMut<'_, T, O> where T: BitStore, O: BitOrder
impl<T, O> Binary for BitSlice<T, O> where T: BitStore, O: BitOrder
Bit-Slice Rendering
This implementation prints the contents of a &BitSlice
in one of binary,
octal, or hexadecimal. It is important to note that this does not render the
raw underlying memory! It renders the semantically-ordered contents of the
bit-slice as numerals. This distinction matters if you use type parameters that
differ from those presumed by your debugger (which is usually <u8, Msb0>
).
The output separates the T
elements as individual list items, and renders each
element as a base-2, base-8, or base-16 numeric string. When walking an element, the bits
traversed by the bit-slice are considered to be stored in
most-significant-bit-first ordering. This means that index [0]
is the high bit
of the left-most digit, and index [n]
is the low bit of the right-most digit,
in a given printed word.
In order to render according to expectations of the Arabic numeral system, an
element being transcribed is chunked into digits from the least-significant end
of its rendered form. This is most noticeable in octal, which will always have a
smaller ceiling on the left-most digit in a printed word, while the right-most
digit in that word is able to use the full 0 ..= 7
numeral range.
Examples
use bitvec::prelude::*;
let data = [
0b000000_10u8,
// digits print LTR
0b10_001_101,
// significance is computed RTL
0b01_000000,
];
let bits = &data.view_bits::<Msb0>()[6 .. 18];
assert_eq!(format!("{:b}", bits), "[10, 10001101, 01]");
assert_eq!(format!("{:o}", bits), "[2, 215, 1]");
assert_eq!(format!("{:X}", bits), "[2, 8D, 1]");
The {:#}
format modifier causes the standard 0b
, 0o
, or 0x
prefix to be
applied to each printed word. The other format specifiers are not interpreted by
this implementation, and apply to the entire rendered text, not to individual
words.
impl<A, O> BitAndAssign<&BitArray<A, O>> for BitSlice<A::Store, O> where A: BitViewSized, O: BitOrder
fn bitand_assign(&mut self, rhs: &BitArray<A, O>)
Performs the &= operation.
impl<T, O> BitAndAssign<&BitBox<T, O>> for BitSlice<T, O> where T: BitStore, O: BitOrder
fn bitand_assign(&mut self, rhs: &BitBox<T, O>)
Performs the &= operation.
impl<T1, T2, O1, O2> BitAndAssign<&BitSlice<T2, O2>> for BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
fn bitand_assign(&mut self, rhs: &BitSlice<T2, O2>)
Boolean Arithmetic
This merges another bit-slice into self with a Boolean arithmetic operation. If the other bit-slice is shorter than self, it is zero-extended. For BitAnd, this clears all excess bits of self to 0; for BitOr and BitXor, it leaves them untouched.
Behavior
The Boolean operation proceeds across each bit-slice in iteration order. This is O(n) in the length of the shorter of self and rhs. However, it can be accelerated if rhs has the same type parameters as self, and both are using one of the orderings provided by bitvec. In this case, the implementation specializes to use BitField batch operations to operate on the slices one word at a time, rather than one bit at a time.
Acceleration is not currently provided for custom bit-orderings that use the same storage type.
Pre-1.0
Behavior
In the 0.
development series, Boolean arithmetic was implemented against all
I: Iterator<Item = bool>
. This allowed code such as bits |= [false, true];
,
but forbad acceleration in the most common use case (combining two bit-slices)
because BitSlice
is not such an iterator.
Usage surveys indicate that it is better for the arithmetic operators to operate
on bit-slices, and to allow the possibility of specialized acceleration, rather
than to allow folding against any iterator of bool
s.
If pre-1.0
code relies on this behavior specifically, and has non-BitSlice
arguments to the Boolean sigils, then they will need to be replaced with the
equivalent loop.
Examples
use bitvec::prelude::*;

let a = bits![mut 0, 0, 1, 1];
let b = bits![    0, 1, 0, 1];
*a ^= b;
assert_eq!(a, bits![0, 1, 1, 0]);

let c = bits![mut 0, 0, 1, 1];
let d = [false, true, false, true];
// no longer allowed:
// c &= d.into_iter().by_vals();
for (mut c, d) in c.iter_mut().zip(d.into_iter()) {
  *c ^= d;
}
assert_eq!(c, bits![0, 1, 1, 0]);
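The zero-extension rule described above can also be illustrated without bitvec at all. The following is a minimal plain-Rust sketch, assuming u8 storage, Lsb0 bit counting, and a slice that begins at an element boundary; it demonstrates the rule, and is not the crate's actual implementation:

```rust
// Sketch: `&=` between two bit-slices, one whole u8 word at a time.
// A shorter `src` (of bit-length `src_len`) is treated as zero-extended,
// so the excess bits of `dst` are cleared, as `BitAnd` specifies.
fn and_assign_bits(dst: &mut [u8], src: &[u8], src_len: usize) {
    for (i, d) in dst.iter_mut().enumerate() {
        let mut s = src.get(i).copied().unwrap_or(0); // missing words read as 0
        let live = src_len.saturating_sub(i * 8).min(8); // live bits in this word
        if live < 8 {
            s &= (1u8 << live).wrapping_sub(1); // mask off dead tail bits
        }
        *d &= s;
    }
}

fn main() {
    let mut dst = [0b1111_1111u8, 0b1111_1111];
    let src = [0b0000_0101u8]; // a 3-bit slice: 1, 0, 1 (Lsb0 counting)
    and_assign_bits(&mut dst, &src, 3);
    assert_eq!(dst, [0b0000_0101, 0]); // excess bits of `dst` are cleared
}
```

The word-at-a-time loop is the shape of the specialization the text describes; the real implementation must additionally handle slices that start mid-element.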
impl<T, O> BitAndAssign<&BitVec<T, O>> for BitSlice<T, O> where T: BitStore, O: BitOrder
fn bitand_assign(&mut self, rhs: &BitVec<T, O>)
Performs the &= operation.

impl<A, O> BitAndAssign<BitArray<A, O>> for BitSlice<A::Store, O> where A: BitViewSized, O: BitOrder
fn bitand_assign(&mut self, rhs: BitArray<A, O>)
Performs the &= operation.

impl<T, O> BitAndAssign<BitBox<T, O>> for BitSlice<T, O> where T: BitStore, O: BitOrder
fn bitand_assign(&mut self, rhs: BitBox<T, O>)
Performs the &= operation.

impl<T, O> BitAndAssign<BitVec<T, O>> for BitSlice<T, O> where T: BitStore, O: BitOrder
fn bitand_assign(&mut self, rhs: BitVec<T, O>)
Performs the &= operation.

impl<T> BitField for BitSlice<T, Lsb0> where T: BitStore
Lsb0 Bit-Field Behavior
BitField has no requirements about the in-memory representation or layout of stored integers within a bit-slice, only that round-tripping an integer through a store and a load of the same element suffix on the same bit-slice is idempotent (with respect to sign truncation).
Lsb0 provides a contiguous translation from bit-index to real memory: for any given bit index n and its position P(n), P(n + 1) is P(n) + 1. This allows it to provide batched behavior: since the section of contiguous indices used within an element translates to a section of contiguous bits in real memory, the transaction is always a single shift/mask operation.
Each implemented method contains documentation and examples showing exactly how the abstract integer space is mapped to real memory.
fn load_le<I>(&self) -> I where I: Integral
Lsb0 Little-Endian Integer Loading
This implementation uses the Lsb0 bit-ordering to determine which bits in a partially-occupied memory element contain the contents of an integer to be loaded, using little-endian element ordering.
See the trait method definition for an overview of what element ordering means.
Signed-Integer Loading
As described in the trait definition, when loading as a signed integer, the most significant bit loaded from memory is sign-extended to the full width of the returned type. In this method, that means the most-significant loaded bit of the final element.
Examples
In each memory element, the Lsb0 ordering counts indices leftward from the right edge:
use bitvec::prelude::*;
let raw = 0b00_10110_0u8;
// 76 54321 0
// ^ sign bit
assert_eq!(
raw.view_bits::<Lsb0>()
[1 .. 6]
.load_le::<u8>(),
0b000_10110,
);
assert_eq!(
raw.view_bits::<Lsb0>()
[1 .. 6]
.load_le::<i8>(),
0b111_10110u8 as i8,
);
In bit-slices that span multiple elements, the little-endian element ordering means that the slice index increases with numerical significance:
use bitvec::prelude::*;
let raw = [
0x8_Fu8,
// 7 0
0x0_1u8,
// 15 8
0b1111_0010u8,
// ^ sign bit
// 23 16
];
assert_eq!(
raw.view_bits::<Lsb0>()
[4 .. 20]
.load_le::<u16>(),
0x2018u16,
);
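The multi-element example above can be reproduced with nothing but integer arithmetic. This plain-Rust sketch (no bitvec dependency; u8 storage assumed) hand-computes the same Lsb0/little-endian extraction:

```rust
// Reproduce `raw.view_bits::<Lsb0>()[start .. end].load_le::<u16>()` by hand.
fn load_le_lsb0(raw: &[u8], start: usize, end: usize) -> u16 {
    let mut value = 0u16;
    for (i, abs) in (start..end).enumerate() {
        let bit = (raw[abs / 8] >> (abs % 8)) & 1; // Lsb0: index n is real bit n % 8
        value |= (bit as u16) << i; // little-endian: later slice bits are more significant
    }
    value
}

fn main() {
    let raw = [0x8Fu8, 0x01, 0xF2];
    assert_eq!(load_le_lsb0(&raw, 4, 20), 0x2018); // matches the example above
}
```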
Note that while these examples use u8 storage for convenience in displaying the literals, BitField operates identically with any storage type. As most machines use little-endian byte ordering within wider element types, and bitvec exclusively operates on elements, the actual bytes of memory may rapidly start to behave oddly when translating between numeric literals and in-memory representation.
The user guide has a chapter that translates bit indices into memory positions for each combination of <T: BitStore, O: BitOrder>, and may be of additional use when choosing a combination of type parameters and load functions.
fn load_be<I>(&self) -> I where I: Integral
Lsb0 Big-Endian Integer Loading
This implementation uses the Lsb0 bit-ordering to determine which bits in a partially-occupied memory element contain the contents of an integer to be loaded, using big-endian element ordering.
See the trait method definition for an overview of what element ordering means.
Signed-Integer Loading
As described in the trait definition, when loading as a signed integer, the most significant bit loaded from memory is sign-extended to the full width of the returned type. In this method, that means the most-significant loaded bit of the first element.
Examples
In each memory element, the Lsb0 ordering counts indices leftward from the right edge:
use bitvec::prelude::*;
let raw = 0b00_10110_0u8;
// 76 54321 0
// ^ sign bit
assert_eq!(
raw.view_bits::<Lsb0>()
[1 .. 6]
.load_be::<u8>(),
0b000_10110,
);
assert_eq!(
raw.view_bits::<Lsb0>()
[1 .. 6]
.load_be::<i8>(),
0b111_10110u8 as i8,
);
In bit-slices that span multiple elements, the big-endian element ordering means that the slice index increases while numeric significance decreases:
use bitvec::prelude::*;
let raw = [
0b0010_1111u8,
// ^ sign bit
// 7 0
0x0_1u8,
// 15 8
0xF_8u8,
// 23 16
];
assert_eq!(
raw.view_bits::<Lsb0>()
[4 .. 20]
.load_be::<u16>(),
0x2018u16,
);
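The same result can be checked by hand for the big-endian element ordering. In this plain-Rust sketch (no bitvec dependency; u8 storage assumed), each element contributes one chunk, and earlier elements are more significant:

```rust
// Reproduce `raw.view_bits::<Lsb0>()[start .. end].load_be::<u16>()` by hand.
fn load_be_lsb0(raw: &[u8], start: usize, end: usize) -> u16 {
    let mut value = 0u16;
    let (first, last) = (start / 8, (end - 1) / 8);
    for k in first..=last {
        // Lsb0 positions of this element's live bits
        let lo = if k == first { start % 8 } else { 0 };
        let hi = if k == last { (end - 1) % 8 + 1 } else { 8 };
        let mask = ((1u16 << (hi - lo)) as u8).wrapping_sub(1);
        let chunk = ((raw[k] >> lo) & mask) as u16;
        // big-endian element ordering: earlier elements are more significant
        value = (value << (hi - lo)) | chunk;
    }
    value
}

fn main() {
    let raw = [0x2Fu8, 0x01, 0xF8]; // same bytes as the example above
    assert_eq!(load_be_lsb0(&raw, 4, 20), 0x2018);
}
```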
Note that while these examples use u8 storage for convenience in displaying the literals, BitField operates identically with any storage type. As most machines use little-endian byte ordering within wider element types, and bitvec exclusively operates on elements, the actual bytes of memory may rapidly start to behave oddly when translating between numeric literals and in-memory representation.
The user guide has a chapter that translates bit indices into memory positions for each combination of <T: BitStore, O: BitOrder>, and may be of additional use when choosing a combination of type parameters and load functions.
fn store_le<I>(&mut self, value: I) where I: Integral
Lsb0 Little-Endian Integer Storing
This implementation uses the Lsb0 bit-ordering to determine which bits in a partially-occupied memory element are used for storage, using little-endian element ordering.
See the trait method definition for an overview of what element ordering means.
Narrowing Behavior
Integers are truncated from the high end. When storing into a bit-slice of length n, the n least numerically significant bits are stored, and any remaining high bits are ignored.
Be aware of this behavior if you are storing signed integers! The signed integer -14i8 (bit pattern 0b1111_0010u8) will, when stored into and loaded back from a 4-bit slice, become the value 2i8.
Examples
use bitvec::prelude::*;
let mut raw = 0u8;
raw.view_bits_mut::<Lsb0>()
[1 .. 6]
.store_le(22u8);
assert_eq!(raw, 0b00_10110_0);
// 76 54321 0
raw.view_bits_mut::<Lsb0>()
[1 .. 6]
.store_le(-10i8);
assert_eq!(raw, 0b00_10110_0);
In bit-slices that span multiple elements, the little-endian element ordering means that the slice index increases with numerical significance:
use bitvec::prelude::*;
let mut raw = [!0u8; 3];
raw.view_bits_mut::<Lsb0>()
[4 .. 20]
.store_le(0x2018u16);
assert_eq!(raw, [
0x8_F,
// 7 0
0x0_1,
// 15 8
0xF_2,
// 23 16
]);
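The multi-element store above can likewise be reproduced bit by bit. A plain-Rust sketch (no bitvec dependency; u8 storage assumed) of the Lsb0/little-endian placement:

```rust
// Reproduce `raw.view_bits_mut::<Lsb0>()[start .. end].store_le(value)` by hand.
fn store_le_lsb0(raw: &mut [u8], start: usize, end: usize, value: u16) {
    for (i, abs) in (start..end).enumerate() {
        let bit = ((value >> i) & 1) as u8; // little-endian: value bit i goes to slice bit i
        let mask = 1u8 << (abs % 8);        // Lsb0 position within the element
        if bit == 1 {
            raw[abs / 8] |= mask;
        } else {
            raw[abs / 8] &= !mask;
        }
    }
}

fn main() {
    let mut raw = [!0u8; 3];
    store_le_lsb0(&mut raw, 4, 20, 0x2018);
    assert_eq!(raw, [0x8F, 0x01, 0xF2]); // matches the example above
}
```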
Note that while these examples use u8 storage for convenience in displaying the literals, BitField operates identically with any storage type. As most machines use little-endian byte ordering within wider element types, and bitvec exclusively operates on elements, the actual bytes of memory may rapidly start to behave oddly when translating between numeric literals and in-memory representation.
The user guide has a chapter that translates bit indices into memory positions for each combination of <T: BitStore, O: BitOrder>, and may be of additional use when choosing a combination of type parameters and store functions.
fn store_be<I>(&mut self, value: I) where I: Integral
Lsb0 Big-Endian Integer Storing
This implementation uses the Lsb0 bit-ordering to determine which bits in a partially-occupied memory element are used for storage, using big-endian element ordering.
See the trait method definition for an overview of what element ordering means.
Narrowing Behavior
Integers are truncated from the high end. When storing into a bit-slice of length n, the n least numerically significant bits are stored, and any remaining high bits are ignored.
Be aware of this behavior if you are storing signed integers! The signed integer -14i8 (bit pattern 0b1111_0010u8) will, when stored into and loaded back from a 4-bit slice, become the value 2i8.
Examples
use bitvec::prelude::*;
let mut raw = 0u8;
raw.view_bits_mut::<Lsb0>()
[1 .. 6]
.store_be(22u8);
assert_eq!(raw, 0b00_10110_0);
// 76 54321 0
raw.view_bits_mut::<Lsb0>()
[1 .. 6]
.store_be(-10i8);
assert_eq!(raw, 0b00_10110_0);
In bit-slices that span multiple elements, the big-endian element ordering means that the slice index increases while numerical significance decreases:
use bitvec::prelude::*;
let mut raw = [!0u8; 3];
raw.view_bits_mut::<Lsb0>()
[4 .. 20]
.store_be(0x2018u16);
assert_eq!(raw, [
0x2_F,
// 7 0
0x0_1,
// 15 8
0xF_8,
// 23 16
]);
Note that while these examples use u8 storage for convenience in displaying the literals, BitField operates identically with any storage type. As most machines use little-endian byte ordering within wider element types, and bitvec exclusively operates on elements, the actual bytes of memory may rapidly start to behave oddly when translating between numeric literals and in-memory representation.
The user guide has a chapter that translates bit indices into memory positions for each combination of <T: BitStore, O: BitOrder>, and may be of additional use when choosing a combination of type parameters and store functions.
impl<T> BitField for BitSlice<T, Msb0> where T: BitStore
Msb0 Bit-Field Behavior
BitField has no requirements about the in-memory representation or layout of stored integers within a bit-slice, only that round-tripping an integer through a store and a load of the same element suffix on the same bit-slice is idempotent (with respect to sign truncation).
Msb0 provides a contiguous translation from bit-index to real memory: for any given bit index n and its position P(n), P(n + 1) is P(n) - 1. This allows it to provide batched behavior: since the section of contiguous indices used within an element translates to a section of contiguous bits in real memory, the transaction is always a single shift/mask operation.
Each implemented method contains documentation and examples showing exactly how the abstract integer space is mapped to real memory.
Notes
In particular, note that while Msb0 indexes bits from the most significant down to the least, and integers index from the least up to the most, this does not reörder any bits of the integer value! This ordering only finds a region in real memory; it does not affect the partial-integer contents stored in that region.
fn load_le<I>(&self) -> I where I: Integral
Msb0 Little-Endian Integer Loading
This implementation uses the Msb0 bit-ordering to determine which bits in a partially-occupied memory element contain the contents of an integer to be loaded, using little-endian element ordering.
See the trait method definition for an overview of what element ordering means.
Signed-Integer Loading
As described in the trait definition, when loading as a signed integer, the most significant bit loaded from memory is sign-extended to the full width of the returned type. In this method, that means the most-significant loaded bit of the final element.
Examples
In each memory element, the Msb0 ordering counts indices rightward from the left edge:
use bitvec::prelude::*;
let raw = 0b00_10110_0u8;
// 01 23456 7
// ^ sign bit
assert_eq!(
raw.view_bits::<Msb0>()
[2 .. 7]
.load_le::<u8>(),
0b000_10110,
);
assert_eq!(
raw.view_bits::<Msb0>()
[2 .. 7]
.load_le::<i8>(),
0b111_10110u8 as i8,
);
In bit-slices that span multiple elements, the little-endian element ordering means that the slice index increases with numerical significance:
use bitvec::prelude::*;
let raw = [
0xF_8u8,
// 0 7
0x0_1u8,
// 8 15
0b0010_1111u8,
// ^ sign bit
// 16 23
];
assert_eq!(
raw.view_bits::<Msb0>()
[4 .. 20]
.load_le::<u16>(),
0x2018u16,
);
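Under Msb0 the index-to-position map within each byte is P(n) = 7 - (n % 8). This plain-Rust sketch (no bitvec dependency; u8 storage assumed) hand-computes the multi-element little-endian load shown above:

```rust
// Reproduce `raw.view_bits::<Msb0>()[start .. end].load_le::<u16>()` by hand.
fn load_le_msb0(raw: &[u8], start: usize, end: usize) -> u16 {
    let mut value = 0u16;
    let mut shift = 0;
    let (first, last) = (start / 8, (end - 1) / 8);
    for k in first..=last {
        // Msb0 index i within an element addresses real bit (7 - i), so
        // indices i_lo .. i_hi cover real bits (8 - i_hi) .. (8 - i_lo).
        let i_lo = if k == first { start % 8 } else { 0 };
        let i_hi = if k == last { (end - 1) % 8 + 1 } else { 8 };
        let width = i_hi - i_lo;
        let chunk = (raw[k] as u16 >> (8 - i_hi)) & ((1u16 << width) - 1);
        value |= chunk << shift; // little-endian: later elements are more significant
        shift += width;
    }
    value
}

fn main() {
    let raw = [0xF8u8, 0x01, 0x2F]; // same bytes as the example above
    assert_eq!(load_le_msb0(&raw, 4, 20), 0x2018);
}
```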
Note that while these examples use u8 storage for convenience in displaying the literals, BitField operates identically with any storage type. As most machines use little-endian byte ordering within wider element types, and bitvec exclusively operates on elements, the actual bytes of memory may rapidly start to behave oddly when translating between numeric literals and in-memory representation.
The user guide has a chapter that translates bit indices into memory positions for each combination of <T: BitStore, O: BitOrder>, and may be of additional use when choosing a combination of type parameters and load functions.
fn load_be<I>(&self) -> I where I: Integral
Msb0 Big-Endian Integer Loading
This implementation uses the Msb0 bit-ordering to determine which bits in a partially-occupied memory element contain the contents of an integer to be loaded, using big-endian element ordering.
See the trait method definition for an overview of what element ordering means.
Signed-Integer Loading
As described in the trait definition, when loading as a signed integer, the most significant bit loaded from memory is sign-extended to the full width of the returned type. In this method, that means the most-significant loaded bit of the first element.
Examples
In each memory element, the Msb0 ordering counts indices rightward from the left edge:
use bitvec::prelude::*;
let raw = 0b00_10110_0u8;
// 01 23456 7
// ^ sign bit
assert_eq!(
raw.view_bits::<Msb0>()
[2 .. 7]
.load_be::<u8>(),
0b000_10110,
);
assert_eq!(
raw.view_bits::<Msb0>()
[2 .. 7]
.load_be::<i8>(),
0b111_10110u8 as i8,
);
In bit-slices that span multiple elements, the big-endian element ordering means that the slice index increases while numerical significance decreases:
use bitvec::prelude::*;
let raw = [
0b1111_0010u8,
// ^ sign bit
// 0 7
0x0_1u8,
// 8 15
0x8_Fu8,
// 16 23
];
assert_eq!(
raw.view_bits::<Msb0>()
[4 .. 20]
.load_be::<u16>(),
0x2018u16,
);
Note that while these examples use u8 storage for convenience in displaying the literals, BitField operates identically with any storage type. As most machines use little-endian byte ordering within wider element types, and bitvec exclusively operates on elements, the actual bytes of memory may rapidly start to behave oddly when translating between numeric literals and in-memory representation.
The user guide has a chapter that translates bit indices into memory positions for each combination of <T: BitStore, O: BitOrder>, and may be of additional use when choosing a combination of type parameters and load functions.
fn store_le<I>(&mut self, value: I) where I: Integral
Msb0 Little-Endian Integer Storing
This implementation uses the Msb0 bit-ordering to determine which bits in a partially-occupied memory element are used for storage, using little-endian element ordering.
See the trait method definition for an overview of what element ordering means.
Narrowing Behavior
Integers are truncated from the high end. When storing into a bit-slice of length n, the n least numerically significant bits are stored, and any remaining high bits are ignored.
Be aware of this behavior if you are storing signed integers! The signed integer -14i8 (bit pattern 0b1111_0010u8) will, when stored into and loaded back from a 4-bit slice, become the value 2i8.
Examples
use bitvec::prelude::*;
let mut raw = 0u8;
raw.view_bits_mut::<Msb0>()
[2 .. 7]
.store_le(22u8);
assert_eq!(raw, 0b00_10110_0);
// 01 23456 7
raw.view_bits_mut::<Msb0>()
[2 .. 7]
.store_le(-10i8);
assert_eq!(raw, 0b00_10110_0);
In bit-slices that span multiple elements, the little-endian element ordering means that the slice index increases with numerical significance:
use bitvec::prelude::*;
let mut raw = [!0u8; 3];
raw.view_bits_mut::<Msb0>()
[4 .. 20]
.store_le(0x2018u16);
assert_eq!(raw, [
0xF_8,
// 0 7
0x0_1,
// 8 15
0x2_F,
// 16 23
]);
Note that while these examples use u8 storage for convenience in displaying the literals, BitField operates identically with any storage type. As most machines use little-endian byte ordering within wider element types, and bitvec exclusively operates on elements, the actual bytes of memory may rapidly start to behave oddly when translating between numeric literals and in-memory representation.
The user guide has a chapter that translates bit indices into memory positions for each combination of <T: BitStore, O: BitOrder>, and may be of additional use when choosing a combination of type parameters and store functions.
fn store_be<I>(&mut self, value: I) where I: Integral
Msb0 Big-Endian Integer Storing
This implementation uses the Msb0 bit-ordering to determine which bits in a partially-occupied memory element are used for storage, using big-endian element ordering.
See the trait method definition for an overview of what element ordering means.
Narrowing Behavior
Integers are truncated from the high end. When storing into a bit-slice of length n, the n least numerically significant bits are stored, and any remaining high bits are ignored.
Be aware of this behavior if you are storing signed integers! The signed integer -14i8 (bit pattern 0b1111_0010u8) will, when stored into and loaded back from a 4-bit slice, become the value 2i8.
Examples
use bitvec::prelude::*;
let mut raw = 0u8;
raw.view_bits_mut::<Msb0>()
[2 .. 7]
.store_be(22u8);
assert_eq!(raw, 0b00_10110_0);
// 01 23456 7
raw.view_bits_mut::<Msb0>()
[2 .. 7]
.store_be(-10i8);
assert_eq!(raw, 0b00_10110_0);
In bit-slices that span multiple elements, the big-endian element ordering means that the slice index increases while numerical significance decreases:
use bitvec::prelude::*;
let mut raw = [!0u8; 3];
raw.view_bits_mut::<Msb0>()
[4 .. 20]
.store_be(0x2018u16);
assert_eq!(raw, [
0xF_2,
// 0 7
0x0_1,
// 8 15
0x8_F,
// 16 23
]);
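The big-endian store above can be reproduced chunk by chunk. A plain-Rust sketch (no bitvec dependency; u8 storage assumed) of the Msb0/big-endian placement:

```rust
// Reproduce `raw.view_bits_mut::<Msb0>()[start .. end].store_be(value)` by hand.
fn store_be_msb0(raw: &mut [u8], start: usize, end: usize, value: u16) {
    let mut remaining = end - start; // bits of the value still unwritten
    let (first, last) = (start / 8, (end - 1) / 8);
    for k in first..=last {
        let i_lo = if k == first { start % 8 } else { 0 };
        let i_hi = if k == last { (end - 1) % 8 + 1 } else { 8 };
        let width = i_hi - i_lo;
        remaining -= width;
        // big-endian: the first element receives the most significant chunk
        let chunk = ((value >> remaining) as u8) & ((1u16 << width) as u8).wrapping_sub(1);
        // Msb0 indices i_lo .. i_hi occupy real bits (8 - i_hi) .. (8 - i_lo)
        let mask = ((1u16 << width) as u8).wrapping_sub(1) << (8 - i_hi);
        raw[k] = (raw[k] & !mask) | (chunk << (8 - i_hi));
    }
}

fn main() {
    let mut raw = [!0u8; 3];
    store_be_msb0(&mut raw, 4, 20, 0x2018);
    assert_eq!(raw, [0xF2, 0x01, 0x8F]); // matches the example above
}
```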
Note that while these examples use u8 storage for convenience in displaying the literals, BitField operates identically with any storage type. As most machines use little-endian byte ordering within wider element types, and bitvec exclusively operates on elements, the actual bytes of memory may rapidly start to behave oddly when translating between numeric literals and in-memory representation.
The user guide has a chapter that translates bit indices into memory positions for each combination of <T: BitStore, O: BitOrder>, and may be of additional use when choosing a combination of type parameters and store functions.
impl<A, O> BitOrAssign<&BitArray<A, O>> for BitSlice<A::Store, O> where A: BitViewSized, O: BitOrder
fn bitor_assign(&mut self, rhs: &BitArray<A, O>)
Performs the |= operation.

impl<T, O> BitOrAssign<&BitBox<T, O>> for BitSlice<T, O> where T: BitStore, O: BitOrder
fn bitor_assign(&mut self, rhs: &BitBox<T, O>)
Performs the |= operation.

impl<T1, T2, O1, O2> BitOrAssign<&BitSlice<T2, O2>> for BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
fn bitor_assign(&mut self, rhs: &BitSlice<T2, O2>)
Boolean Arithmetic
This merges another bit-slice into self with a Boolean arithmetic operation. If the other bit-slice is shorter than self, it is zero-extended. For BitAnd, this clears all excess bits of self to 0; for BitOr and BitXor, it leaves them untouched.
Behavior
The Boolean operation proceeds across each bit-slice in iteration order. This is O(n) in the length of the shorter of self and rhs. However, it can be accelerated if rhs has the same type parameters as self, and both are using one of the orderings provided by bitvec. In this case, the implementation specializes to use BitField batch operations to operate on the slices one word at a time, rather than one bit at a time.
Acceleration is not currently provided for custom bit-orderings that use the same storage type.
Pre-1.0 Behavior
In the 0.x development series, Boolean arithmetic was implemented against all I: Iterator<Item = bool>. This allowed code such as bits |= [false, true];, but forbade acceleration in the most common use case (combining two bit-slices) because BitSlice is not such an iterator.
Usage surveys indicate that it is better for the arithmetic operators to operate on bit-slices, and to allow the possibility of specialized acceleration, rather than to allow folding against any iterator of bools.
If pre-1.0 code relies on this behavior specifically, and has non-BitSlice arguments to the Boolean sigils, then those arguments will need to be replaced with the equivalent loop.
Examples
use bitvec::prelude::*;

let a = bits![mut 0, 0, 1, 1];
let b = bits![    0, 1, 0, 1];
*a ^= b;
assert_eq!(a, bits![0, 1, 1, 0]);

let c = bits![mut 0, 0, 1, 1];
let d = [false, true, false, true];
// no longer allowed:
// c &= d.into_iter().by_vals();
for (mut c, d) in c.iter_mut().zip(d.into_iter()) {
  *c ^= d;
}
assert_eq!(c, bits![0, 1, 1, 0]);
impl<T, O> BitOrAssign<&BitVec<T, O>> for BitSlice<T, O> where T: BitStore, O: BitOrder
fn bitor_assign(&mut self, rhs: &BitVec<T, O>)
Performs the |= operation.

impl<A, O> BitOrAssign<BitArray<A, O>> for BitSlice<A::Store, O> where A: BitViewSized, O: BitOrder
fn bitor_assign(&mut self, rhs: BitArray<A, O>)
Performs the |= operation.

impl<T, O> BitOrAssign<BitBox<T, O>> for BitSlice<T, O> where T: BitStore, O: BitOrder
fn bitor_assign(&mut self, rhs: BitBox<T, O>)
Performs the |= operation.

impl<T, O> BitOrAssign<BitVec<T, O>> for BitSlice<T, O> where T: BitStore, O: BitOrder
fn bitor_assign(&mut self, rhs: BitVec<T, O>)
Performs the |= operation.

impl<A, O> BitXorAssign<&BitArray<A, O>> for BitSlice<A::Store, O> where A: BitViewSized, O: BitOrder
fn bitxor_assign(&mut self, rhs: &BitArray<A, O>)
Performs the ^= operation.

impl<T, O> BitXorAssign<&BitBox<T, O>> for BitSlice<T, O> where T: BitStore, O: BitOrder
fn bitxor_assign(&mut self, rhs: &BitBox<T, O>)
Performs the ^= operation.

impl<T1, T2, O1, O2> BitXorAssign<&BitSlice<T2, O2>> for BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
fn bitxor_assign(&mut self, rhs: &BitSlice<T2, O2>)
Boolean Arithmetic
This merges another bit-slice into self with a Boolean arithmetic operation. If the other bit-slice is shorter than self, it is zero-extended. For BitAnd, this clears all excess bits of self to 0; for BitOr and BitXor, it leaves them untouched.
Behavior
The Boolean operation proceeds across each bit-slice in iteration order. This is O(n) in the length of the shorter of self and rhs. However, it can be accelerated if rhs has the same type parameters as self, and both are using one of the orderings provided by bitvec. In this case, the implementation specializes to use BitField batch operations to operate on the slices one word at a time, rather than one bit at a time.
Acceleration is not currently provided for custom bit-orderings that use the same storage type.
Pre-1.0 Behavior
In the 0.x development series, Boolean arithmetic was implemented against all I: Iterator<Item = bool>. This allowed code such as bits |= [false, true];, but forbade acceleration in the most common use case (combining two bit-slices) because BitSlice is not such an iterator.
Usage surveys indicate that it is better for the arithmetic operators to operate on bit-slices, and to allow the possibility of specialized acceleration, rather than to allow folding against any iterator of bools.
If pre-1.0 code relies on this behavior specifically, and has non-BitSlice arguments to the Boolean sigils, then those arguments will need to be replaced with the equivalent loop.
Examples
use bitvec::prelude::*;

let a = bits![mut 0, 0, 1, 1];
let b = bits![    0, 1, 0, 1];
*a ^= b;
assert_eq!(a, bits![0, 1, 1, 0]);

let c = bits![mut 0, 0, 1, 1];
let d = [false, true, false, true];
// no longer allowed:
// c &= d.into_iter().by_vals();
for (mut c, d) in c.iter_mut().zip(d.into_iter()) {
  *c ^= d;
}
assert_eq!(c, bits![0, 1, 1, 0]);
impl<T, O> BitXorAssign<&BitVec<T, O>> for BitSlice<T, O> where T: BitStore, O: BitOrder
fn bitxor_assign(&mut self, rhs: &BitVec<T, O>)
Performs the ^= operation.

impl<A, O> BitXorAssign<BitArray<A, O>> for BitSlice<A::Store, O> where A: BitViewSized, O: BitOrder
fn bitxor_assign(&mut self, rhs: BitArray<A, O>)
Performs the ^= operation.

impl<T, O> BitXorAssign<BitBox<T, O>> for BitSlice<T, O> where T: BitStore, O: BitOrder
fn bitxor_assign(&mut self, rhs: BitBox<T, O>)
Performs the ^= operation.

impl<T, O> BitXorAssign<BitVec<T, O>> for BitSlice<T, O> where T: BitStore, O: BitOrder
fn bitxor_assign(&mut self, rhs: BitVec<T, O>)
Performs the ^= operation.

impl<A, O> Borrow<BitSlice<<A as BitView>::Store, O>> for BitArray<A, O> where A: BitViewSized, O: BitOrder

impl<A, O> BorrowMut<BitSlice<<A as BitView>::Store, O>> for BitArray<A, O> where A: BitViewSized, O: BitOrder

impl<T, O> Index<usize> for BitSlice<T, O> where T: BitStore, O: BitOrder
fn index(&self, index: usize) -> &Self::Output
Looks up a single bit by its semantic index.
Examples
use bitvec::prelude::*;
let bits = bits![u8, Msb0; 0, 1, 0];
assert!(!bits[0]); // -----^ | |
assert!( bits[1]); // --------^ |
assert!(!bits[2]); // -----------^
If the index is greater than or equal to the length, indexing will panic.
The below test will panic when accessing index 1, as only index 0 is valid.
use bitvec::prelude::*;
let bits = bits![0];
bits[1]; // --------^
impl<T, O> IndexMut<RangeToInclusive<usize>> for BitSlice<T, O> where T: BitStore, O: BitOrder

impl<T, O> LowerHex for BitSlice<T, O> where T: BitStore, O: BitOrder
Bit-Slice Rendering
This implementation prints the contents of a &BitSlice in one of binary, octal, or hexadecimal. It is important to note that this does not render the raw underlying memory! It renders the semantically-ordered contents of the bit-slice as numerals. This distinction matters if you use type parameters that differ from those presumed by your debugger (which is usually <u8, Msb0>).
The output separates the T elements as individual list items, and renders each element as a base-2, -8, or -16 numeric string. When walking an element, the bits traversed by the bit-slice are considered to be stored in most-significant-bit-first ordering. This means that index [0] is the high bit of the left-most digit, and index [n] is the low bit of the right-most digit, in a given printed word.
In order to render according to expectations of the Arabic numeral system, an element being transcribed is chunked into digits from the least-significant end of its rendered form. This is most noticeable in octal, which will always have a smaller ceiling on the left-most digit in a printed word, while the right-most digit in that word is able to use the full 0 ..= 7 numeral range.
Examples
use bitvec::prelude::*;
let data = [
0b000000_10u8,
// digits print LTR
0b10_001_101,
// significance is computed RTL
0b01_000000,
];
let bits = &data.view_bits::<Msb0>()[6 .. 18];
assert_eq!(format!("{:b}", bits), "[10, 10001101, 01]");
assert_eq!(format!("{:o}", bits), "[2, 215, 1]");
assert_eq!(format!("{:X}", bits), "[2, 8D, 1]");
The {:#} format modifier causes the standard 0b, 0o, or 0x prefix to be applied to each printed word. The other format specifiers are not interpreted by this implementation, and apply to the entire rendered text, not to individual words.

impl<'a, T, O> Not for &'a mut BitSlice<T, O> where T: BitStore, O: BitOrder
Inverts each bit in the bit-slice.

Unlike the &, |, and ^ operators, this implementation is guaranteed to update each memory element only once, and is not required to traverse every live bit in the underlying region.
impl<T, O> Octal for BitSlice<T, O> where T: BitStore, O: BitOrder
Bit-Slice Rendering

This implementation behaves identically to the LowerHex implementation documented above, except that each element is rendered in base 8. See that implementation for the full rendering rules and examples.
impl<T1, T2, O1, O2> PartialEq<&BitSlice<T2, O2>> for BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
impl<T1, T2, O1, O2> PartialEq<&mut BitSlice<T2, O2>> for BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
impl<O1, A, O2, T> PartialEq<BitArray<A, O2>> for BitSlice<T, O1> where O1: BitOrder, O2: BitOrder, A: BitViewSized, T: BitStore
impl<O1, O2, T1, T2> PartialEq<BitBox<T2, O2>> for &BitSlice<T1, O1> where O1: BitOrder, O2: BitOrder, T1: BitStore, T2: BitStore
impl<O1, O2, T1, T2> PartialEq<BitBox<T2, O2>> for &mut BitSlice<T1, O1> where O1: BitOrder, O2: BitOrder, T1: BitStore, T2: BitStore
impl<O1, O2, T1, T2> PartialEq<BitBox<T2, O2>> for BitSlice<T1, O1> where O1: BitOrder, O2: BitOrder, T1: BitStore, T2: BitStore
impl<T1, T2, O1, O2> PartialEq<BitSlice<T2, O2>> for &BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
impl<T1, T2, O1, O2> PartialEq<BitSlice<T2, O2>> for &mut BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
impl<T1, T2, O1, O2> PartialEq<BitSlice<T2, O2>> for BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
Tests whether two BitSlice regions are semantically, not representationally, equal.

It is valid to compare bit-slices of different ordering or storage types. The equality condition requires that the two bit-slices have the same length and hold the same bit value at each index.
impl<T1, T2, O1, O2> PartialEq<BitVec<T2, O2>> for &BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
impl<T1, T2, O1, O2> PartialEq<BitVec<T2, O2>> for &mut BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
impl<T1, T2, O1, O2> PartialEq<BitVec<T2, O2>> for BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
impl<T1, T2, O1, O2> PartialOrd<&BitSlice<T2, O2>> for &mut BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
impl<T1, T2, O1, O2> PartialOrd<&BitSlice<T2, O2>> for BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
impl<T1, T2, O1, O2> PartialOrd<&mut BitSlice<T2, O2>> for &BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
impl<T1, T2, O1, O2> PartialOrd<&mut BitSlice<T2, O2>> for BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
impl<A, T, O> PartialOrd<BitArray<A, O>> for BitSlice<T, O> where A: BitViewSized, T: BitStore, O: BitOrder
impl<'a, O1, O2, T1, T2> PartialOrd<BitBox<T2, O2>> for &'a BitSlice<T1, O1> where O1: BitOrder, O2: BitOrder, T1: BitStore, T2: BitStore
impl<'a, O1, O2, T1, T2> PartialOrd<BitBox<T2, O2>> for &'a mut BitSlice<T1, O1> where O1: BitOrder, O2: BitOrder, T1: BitStore, T2: BitStore
impl<O1, O2, T1, T2> PartialOrd<BitBox<T2, O2>> for BitSlice<T1, O1> where O1: BitOrder, O2: BitOrder, T1: BitStore, T2: BitStore
impl<T1, T2, O1, O2> PartialOrd<BitSlice<T2, O2>> for &BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
impl<T1, T2, O1, O2> PartialOrd<BitSlice<T2, O2>> for &mut BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
impl<T1, T2, O1, O2> PartialOrd<BitSlice<T2, O2>> for BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder

Each implementation also inherits the default fn le(&self, other: &Rhs) -> bool, which tests self <= other and backs the <= operator.
Compares two BitSlice regions by semantic, not representational, ordering.

The comparison tests, at each index, whether one bit-slice has a high bit where the other has a low bit. At the first index where the bit-slices differ, the one holding the high bit is the greater. If the bit-slices are equal up to the point where at least one terminates, they are then compared by length.
impl<'a, T1, T2, O1, O2> PartialOrd<BitVec<T2, O2>> for &'a BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
impl<'a, T1, T2, O1, O2> PartialOrd<BitVec<T2, O2>> for &'a mut BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
impl<T1, T2, O1, O2> PartialOrd<BitVec<T2, O2>> for BitSlice<T1, O1> where T1: BitStore, T2: BitStore, O1: BitOrder, O2: BitOrder
impl<T, O> Read for &BitSlice<T, O> where T: BitStore, O: BitOrder, BitSlice<T, O>: BitField
Reading From a Bit-Slice

The implementation loads bytes out of the referenced bit-slice until either the destination buffer is filled or the source has no more bytes to provide. When .read() returns, the provided bit-slice handle will have been updated to no longer include the leading segment copied out as bytes into buf.

Note that the return value of .read() is always the number of bytes of buf filled!

The implementation uses BitField::load_be to collect bytes. Note that unlike the standard library, it is implemented on bit-slices of any underlying element type. However, using a BitSlice<_, u8> is still likely to be fastest.

Original
fn read(&mut self, buf: &mut [u8]) -> Result<usize>
fn read_vectored(&mut self, bufs: &mut [IoSliceMut<'_>]) -> Result<usize, Error>: like read, except that it reads into a slice of buffers.
fn is_read_vectored(&self) -> bool (unstable: can_vector)
fn read_to_end(&mut self, buf: &mut Vec<u8>) -> Result<usize, Error>: reads all bytes until EOF, appending them to buf.
fn read_to_string(&mut self, buf: &mut String) -> Result<usize, Error>: reads all bytes until EOF, appending them to buf as UTF-8.
fn read_exact(&mut self, buf: &mut [u8]) -> Result<(), Error>: reads exactly enough bytes to fill buf.
fn read_buf(&mut self, buf: BorrowedCursor<'_>) -> Result<(), Error> (unstable: read_buf)
fn read_buf_exact(&mut self, cursor: BorrowedCursor<'_>) -> Result<(), Error> (unstable: read_buf): reads exactly enough bytes to fill cursor.
fn by_ref(&mut self) -> &mut Self where Self: Sized: creates a "by reference" adapter for this instance of Read.

impl<T, O> ToOwned for BitSlice<T, O> where T: BitStore, O: BitOrder
impl<'a, T, O> TryFrom<&'a [T]> for &'a BitSlice<T, O> where T: BitStore, O: BitOrder

Calls BitSlice::try_from_slice, but returns the original Rust slice on error instead of the failure event. This only fails if slice.len() exceeds BitSlice::MAX_ELTS.
impl<'a, T, O> TryFrom<&'a mut [T]> for &'a mut BitSlice<T, O> where T: BitStore, O: BitOrder

Calls BitSlice::try_from_slice_mut, but returns the original Rust slice on error instead of the failure event. This only fails if slice.len() exceeds BitSlice::MAX_ELTS.
impl<A, O> TryFrom<&BitSlice<<A as BitView>::Store, O>> for &BitArray<A, O> where A: BitViewSized, O: BitOrder
impl<A, O> TryFrom<&BitSlice<<A as BitView>::Store, O>> for BitArray<A, O> where A: BitViewSized, O: BitOrder
impl<A, O> TryFrom<&mut BitSlice<<A as BitView>::Store, O>> for &mut BitArray<A, O> where A: BitViewSized, O: BitOrder
impl<T, O> UpperHex for BitSlice<T, O> where T: BitStore, O: BitOrder
Bit-Slice Rendering

This implementation behaves identically to the LowerHex implementation documented above, except that hexadecimal digits are rendered in uppercase. See that implementation for the full rendering rules and examples.
impl<T, O> Write for &mut BitSlice<T, O> where T: BitStore, O: BitOrder, BitSlice<T, O>: BitField
Writing Into a Bit-Slice

The implementation stores bytes into the referenced bit-slice until either the source buffer is exhausted or the destination has no more slots to fill. When .write() returns, the provided bit-slice handle will have been updated to no longer include the leading segment filled with bytes from buf.

Note that the return value of .write() is always the number of bytes of buf consumed!

The implementation uses BitField::store_be to fill bytes. Note that unlike the standard library, it is implemented on bit-slices of any underlying element type. However, using a BitSlice<_, u8> is still likely to be fastest.

Original
fn write(&mut self, buf: &[u8]) -> Result<usize>
fn flush(&mut self) -> Result<()>
fn is_write_vectored(&self) -> bool (unstable: can_vector)
fn write_all(&mut self, buf: &[u8]) -> Result<(), Error>
fn write_all_vectored(&mut self, bufs: &mut [IoSlice<'_>]) -> Result<(), Error> (unstable: write_all_vectored)

impl<T, O> Eq for BitSlice<T, O> where T: BitStore, O: BitOrder
impl<T, O> Send for BitSlice<T, O> where T: BitStore + Sync, O: BitOrder

Bit-Slice Thread Safety

This allows bit-slice references to be moved across thread boundaries only when the underlying T element can tolerate concurrency.

All BitSlice references, shared or exclusive, are only threadsafe if the T element type is Send, because any given bit-slice reference may only have partial control of a memory element that is also being shared by a bit-slice reference on another thread. As such, this is never implemented for Cell<U>, but always implemented for AtomicU and U for a given unsigned integer type U.

Atomic integers safely handle concurrent writes, while cells do not allow concurrency at all, so the only missing piece is &mut BitSlice<_, U: Unsigned>. This is handled by the aliasing system that the mutable splitters employ: a mutable reference to an unsynchronized bit-slice can only cross threads when no other handle is able to exist to the elements it governs. Splitting a mutable bit-slice causes the split halves to change over to either atomics or cells, so concurrency is either safe or impossible.
impl<T, O> Sync for BitSlice<T, O> where T: BitStore + Sync, O: BitOrder

Bit-Slice Thread Safety: this implementation follows the same reasoning as the Send implementation documented above.
impl<T, O> Unpin for BitSlice<T, O> where T: BitStore, O: BitOrder

Auto Trait Implementations

impl<T, O> RefUnwindSafe for BitSlice<T, O> where O: RefUnwindSafe, T: RefUnwindSafe
impl<T = usize, O = Lsb0> !Sized for BitSlice<T, O>
impl<T, O> UnwindSafe for BitSlice<T, O> where O: UnwindSafe, T: UnwindSafe

Blanket Implementations
impl<T> BorrowMut<T> for T where T: ?Sized

fn borrow_mut(&mut self) -> &mut T: mutably borrows from an owned value.
impl<T> FmtForward for T

fn fmt_binary(self) -> FmtBinary<Self> where Self: Binary: causes self to use its Binary implementation when Debug-formatted.
fn fmt_display(self) -> FmtDisplay<Self> where Self: Display: causes self to use its Display implementation when Debug-formatted.
fn fmt_lower_exp(self) -> FmtLowerExp<Self> where Self: LowerExp: causes self to use its LowerExp implementation when Debug-formatted.
fn fmt_lower_hex(self) -> FmtLowerHex<Self> where Self: LowerHex: causes self to use its LowerHex implementation when Debug-formatted.
fn fmt_octal(self) -> FmtOctal<Self> where Self: Octal: causes self to use its Octal implementation when Debug-formatted.
fn fmt_pointer(self) -> FmtPointer<Self> where Self: Pointer: causes self to use its Pointer implementation when Debug-formatted.
fn fmt_upper_exp(self) -> FmtUpperExp<Self> where Self: UpperExp: causes self to use its UpperExp implementation when Debug-formatted.
fn fmt_upper_hex(self) -> FmtUpperHex<Self> where Self: UpperHex: causes self to use its UpperHex implementation when Debug-formatted.

impl<T> Pipe for T where T: ?Sized
fn pipe<R>(self, func: impl FnOnce(Self) -> R) -> R where Self: Sized: passes self into the pipe function by value.
fn pipe_ref<'a, R>(&'a self, func: impl FnOnce(&'a Self) -> R) -> R where R: 'a: borrows self and passes that borrow into the pipe function.
fn pipe_ref_mut<'a, R>(&'a mut self, func: impl FnOnce(&'a mut Self) -> R) -> R where R: 'a: mutably borrows self and passes that borrow into the pipe function.
fn pipe_borrow<'a, B, R>(&'a self, func: impl FnOnce(&'a B) -> R) -> R where Self: Borrow<B>, B: 'a + ?Sized, R: 'a: borrows self, then passes self.borrow() into the pipe function.
fn pipe_borrow_mut<'a, B, R>(&'a mut self, func: impl FnOnce(&'a mut B) -> R) -> R where Self: BorrowMut<B>, B: 'a + ?Sized, R: 'a: mutably borrows self, then passes self.borrow_mut() into the pipe function.
fn pipe_as_ref<'a, U, R>(&'a self, func: impl FnOnce(&'a U) -> R) -> R where Self: AsRef<U>, U: 'a + ?Sized, R: 'a: borrows self, then passes self.as_ref() into the pipe function.
fn pipe_as_mut<'a, U, R>(&'a mut self, func: impl FnOnce(&'a mut U) -> R) -> R where Self: AsMut<U>, U: 'a + ?Sized, R: 'a: mutably borrows self, then passes self.as_mut() into the pipe function.

impl<T> Tap for T
fn tap_borrow<B>(self, func: impl FnOnce(&B)) -> Self where Self: Borrow<B>, B: ?Sized: immutable access to the Borrow<B> of a value.
fn tap_borrow_mut<B>(self, func: impl FnOnce(&mut B)) -> Self where Self: BorrowMut<B>, B: ?Sized: mutable access to the BorrowMut<B> of a value.
fn tap_ref<R>(self, func: impl FnOnce(&R)) -> Self where Self: AsRef<R>, R: ?Sized: immutable access to the AsRef<R> view of a value.
fn tap_ref_mut<R>(self, func: impl FnOnce(&mut R)) -> Self where Self: AsMut<R>, R: ?Sized: mutable access to the AsMut<R> view of a value.
fn tap_deref<T>(self, func: impl FnOnce(&T)) -> Self where Self: Deref<Target = T>, T: ?Sized: immutable access to the Deref::Target of a value.
fn tap_deref_mut<T>(self, func: impl FnOnce(&mut T)) -> Self where Self: DerefMut<Target = T> + Deref, T: ?Sized: mutable access to the Deref::Target of a value.
fn tap_dbg(self, func: impl FnOnce(&Self)) -> Self: calls .tap() only in debug builds, and is erased in release builds.
fn tap_mut_dbg(self, func: impl FnOnce(&mut Self)) -> Self: calls .tap_mut() only in debug builds, and is erased in release builds.
fn tap_borrow_dbg<B>(self, func: impl FnOnce(&B)) -> Self where Self: Borrow<B>, B: ?Sized: calls .tap_borrow() only in debug builds, and is erased in release builds.
fn tap_borrow_mut_dbg<B>(self, func: impl FnOnce(&mut B)) -> Self where Self: BorrowMut<B>, B: ?Sized: calls .tap_borrow_mut() only in debug builds, and is erased in release builds.
fn tap_ref_dbg<R>(self, func: impl FnOnce(&R)) -> Self where Self: AsRef<R>, R: ?Sized: calls .tap_ref() only in debug builds, and is erased in release builds.
fn tap_ref_mut_dbg<R>(self, func: impl FnOnce(&mut R)) -> Self where Self: AsMut<R>, R: ?Sized: calls .tap_ref_mut() only in debug builds, and is erased in release builds.