#[repr(C, packed)]
pub struct BitPtr<M, O = Lsb0, T = usize>
where
M: Mutability,
O: BitOrder,
T: BitStore,
{ /* private fields */ }
Pointer to an individual bit in a memory element. Analogous to *bool.
Original: *bool and NonNull<bool>
API Differences
This must be a structure, rather than a raw pointer, for two reasons:
- It is larger than a raw pointer.
- Raw pointers are not #[fundamental] and cannot have foreign implementations.
Additionally, rather than create two structures to map to *const bool and *mut bool, respectively, this takes mutability as a type parameter.
Because the encoded span pointer requires that memory addresses are well aligned, this type also imposes the alignment requirement and refuses construction for misaligned element addresses. While this type is used in the API equivalent of ordinary raw pointers, it is restricted in value to only be references to memory elements.
ABI Differences
This has alignment 1, rather than an alignment to the processor word. This is necessary for some crate-internal optimizations.
Type Parameters
- M: Marks whether the pointer permits mutation of memory through it.
- O: The ordering of bits within a memory element.
- T: A memory type used to select both the register size and the access behavior when performing loads/stores.
Usage
This structure is used as the bitvec equivalent to *bool. It is used in all raw-pointer APIs, and provides behavior to emulate raw pointers. It cannot be directly dereferenced, as it is not a pointer; it can only be transformed back into higher referential types, or used in bitvec::ptr free functions.
These pointers can never be null or misaligned.
Implementations
impl<M, O, T> BitPtr<M, O, T>
where
M: Mutability,
O: BitOrder,
T: BitStore,
pub const DANGLING: Self
The dangling pointer. This selects the starting bit of the T dangling address.
pub fn try_new<A>(addr: A, head: u8) -> Result<Self, BitPtrError<T>>
where
A: TryInto<Address<M, T>>,
BitPtrError<T>: From<A::Error>,
Tries to construct a BitPtr from a memory location and a bit index.
Type Parameters
- A: This accepts anything that may be used as a memory address.
Parameters
- addr: The memory address to use in the BitPtr. If this value violates the Address rules, then its conversion error will be returned.
- head: The index of the bit in *addr that this pointer selects. If this value violates the BitIdx rules, then its conversion error will be returned.
Returns
A new BitPtr, selecting the memory location addr and the bit head. If either addr or head is invalid, then this propagates its error.
pub fn new(addr: Address<M, T>, head: BitIdx<T::Mem>) -> Self
Constructs a BitPtr from a memory location and a bit index.
Since this requires that the address and bit index are already well-formed, it can assemble the BitPtr without inspecting their values.
Parameters
- addr: A well-formed memory address of T.
- head: A well-formed bit index within T.
Returns
A BitPtr selecting the head bit in the location addr.
pub fn raw_parts(self) -> (Address<M, T>, BitIdx<T::Mem>)
Decomposes the pointer into its element address and bit index.
Parameters
- self
Returns
- .0: The memory address in which the referent bit is located.
- .1: The index of the referent bit within *.0.
pub unsafe fn range(self, count: usize) -> BitPtrRange<M, O, T>
(BitPtrRange<M, O, T> implements Iterator with Item = BitPtr<M, O, T>.)
Produces a pointer range starting at self and running for count bits.
This calls self.add(count), then bundles the resulting pointer as the high end of the produced range.
Parameters
- self: The starting pointer of the produced range.
- count: The number of bits that the produced range includes.
Returns
A half-open range of pointers, beginning at (and including) self, running for count bits, and ending at (and excluding) self.add(count).
Safety
count cannot violate the constraints in add.
pub unsafe fn into_bitref<'a>(self) -> BitRef<'a, M, O, T>
pub unsafe fn assert_mut(self) -> BitPtr<Mut, O, T>
Adds write permissions to a bit-pointer.
Safety
This pointer must have been derived from a *mut pointer.
pub fn is_null(self) -> bool
👎 Deprecated: BitPtr is never null
Tests if a bit-pointer is the null value.
This is always false, as BitPtr is a NonNull internally. Use Option<BitPtr> to express the potential for a null pointer.
pub fn cast<U>(self) -> BitPtr<M, O, U>
where
U: BitStore,
Casts to a bit-pointer of another storage type, preserving the bit-ordering and mutability permissions.
Behavior
This is not a free typecast! It encodes the pointer as a crate-internal span descriptor, casts the span descriptor to the U storage element parameter, then decodes the result. This preserves general correctness, but will likely change both the virtual and physical bits addressed by this pointer.
pub unsafe fn as_ref<'a>(self) -> Option<BitRef<'a, Const, O, T>>
Produces a proxy reference to the referent bit.
Because BitPtr is a non-null, well-aligned pointer, this never returns None.
API Differences
This produces a proxy type rather than a true reference. The proxy implements Deref<Target = bool>, and can be converted to &bool with &*.
Safety
Since BitPtr does not permit null or misaligned pointers, this method will always dereference the pointer, and you must ensure the following conditions are met:
- the pointer must be dereferenceable as defined in the standard-library documentation
- the pointer must point to an initialized instance of T
- you must ensure that no other pointer will race to modify the referent location while this call is reading from memory to produce the proxy
Examples
use bitvec::prelude::*;
let data = 1u8;
let ptr = BitPtr::<_, Lsb0, _>::from_ref(&data);
let val = unsafe { ptr.as_ref() }.unwrap();
assert!(*val);
pub unsafe fn offset(self, count: isize) -> Self
Calculates the offset from a pointer. count is in units of bits.
Safety
If any of the following conditions are violated, the result is Undefined Behavior:
- Both the starting and resulting pointer must be either in bounds or one byte past the end of the same allocated object. Note that in Rust, every (stack-allocated) variable is considered a separate allocated object.
- The computed offset, in bytes, cannot overflow an isize.
- The offset being in bounds cannot rely on “wrapping around” the address space. That is, the infinite-precision sum, in bytes, must fit in a usize.
These pointers are almost always derived from BitSlice regions, which have an encoding limitation that the high three bits of the length counter are zero, so bitvec pointers are even less likely than ordinary pointers to run afoul of these limitations.
Use wrapping_offset if you expect to risk hitting the high edge of the address space.
Examples
use bitvec::prelude::*;
let data = 5u8;
let ptr = BitPtr::<_, Lsb0, _>::from_ref(&data);
assert!(unsafe { ptr.read() });
assert!(!unsafe { ptr.offset(1).read() });
assert!(unsafe { ptr.offset(2).read() });
pub fn wrapping_offset(self, count: isize) -> Self
Calculates the offset from a pointer using wrapping arithmetic. count is in units of bits.
Safety
The resulting pointer does not need to be in bounds, but it is potentially hazardous to dereference.
In particular, the resulting pointer remains attached to the same allocated object that self points to. It may not be used to access a different allocated object. Note that in Rust, every (stack-allocated) variable is considered a separate allocated object.
In other words, x.wrapping_offset((y as usize).wrapping_sub(x as usize)) is not the same as y, and dereferencing it is undefined behavior unless x and y point into the same allocated object.
Compared to offset, this method basically delays the requirement of staying within the same allocated object: offset is immediate Undefined Behavior when crossing object boundaries; wrapping_offset produces a pointer but still leads to Undefined Behavior if that pointer is dereferenced. offset can be optimized better and is thus preferable in performance-sensitive code.
If you need to cross object boundaries, destructure this pointer into its base address and bit index, cast the base address to an integer, and do the arithmetic in the purely integer space.
Examples
use bitvec::prelude::*;
let data = 0u8;
let mut ptr = BitPtr::<_, Lsb0, _>::from_ref(&data);
let end = ptr.wrapping_offset(8);
while ptr < end {
println!("{}", unsafe { ptr.read() });
ptr = ptr.wrapping_offset(3);
}
pub unsafe fn offset_from(self, origin: Self) -> isize
Calculates the distance between two pointers. The returned value is in units of bits.
This function is the inverse of offset.
Safety
If any of the following conditions are violated, the result is Undefined Behavior:
- Both the starting and other pointer must be either in bounds or one byte past the end of the same allocated object. Note that in Rust, every (stack-allocated) variable is considered a separate allocated object.
- Both pointers must be derived from a pointer to the same object.
- The distance between the pointers, in bytes, cannot overflow an isize.
- The distance being in bounds cannot rely on “wrapping around” the address space.
These pointers are almost always derived from BitSlice regions, which have an encoding limitation that the high three bits of the length counter are zero, so bitvec pointers are even less likely than ordinary pointers to run afoul of these limitations.
Examples
Basic usage:
use bitvec::prelude::*;
let data = 0u16;
let base = BitPtr::<_, Lsb0, _>::from_ref(&data);
let low = unsafe { base.add(5) };
let high = unsafe { low.add(6) };
unsafe {
assert_eq!(high.offset_from(low), 6);
assert_eq!(low.offset_from(high), -6);
assert_eq!(low.offset(6), high);
assert_eq!(high.offset(-6), low);
}
Incorrect usage:
use bitvec::prelude::*;
let a = 0u8;
let b = !0u8;
let a_ptr = BitPtr::<_, Lsb0, _>::from_ref(&a);
let b_ptr = BitPtr::<_, Lsb0, _>::from_ref(&b);
let diff = (b_ptr.pointer() as isize)
.wrapping_sub(a_ptr.pointer() as isize)
// Remember: raw pointers are byte-addressed,
// but these are bit-addressed.
.wrapping_mul(8);
// Create a pointer to `b`, derived from `a`.
let b_ptr_2 = a_ptr.wrapping_offset(diff);
// The pointers are *arithmetically* equal now
assert_eq!(b_ptr, b_ptr_2);
// Undefined Behavior!
unsafe {
b_ptr_2.offset_from(b_ptr);
}
pub unsafe fn add(self, count: usize) -> Self
pub unsafe fn sub(self, count: usize) -> Self
pub fn wrapping_add(self, count: usize) -> Self
Calculates the offset from a pointer using wrapping arithmetic (convenience for .wrapping_offset(count as isize)).
Safety
See wrapping_offset.
pub fn wrapping_sub(self, count: usize) -> Self
Calculates the offset from a pointer using wrapping arithmetic (convenience for .wrapping_offset((count as isize).wrapping_neg())).
Safety
See wrapping_offset.
pub unsafe fn read(self) -> bool
pub unsafe fn read_volatile(self) -> bool
Performs a volatile read of the bit from self.
Volatile operations are intended to act on I/O memory, and are guaranteed to not be elided or reördered by the compiler across other volatile operations.
Safety
See ptr::read_volatile for safety concerns and examples.
pub unsafe fn copy_to<O2, T2>(self, dest: BitPtr<Mut, O2, T2>, count: usize)
where
O2: BitOrder,
T2: BitStore,
pub unsafe fn copy_to_nonoverlapping<O2, T2>(self, dest: BitPtr<Mut, O2, T2>, count: usize)
where
O2: BitOrder,
T2: BitStore,
Copies count bits from self to dest. The source and destination may not overlap.
NOTE: this has the same argument order as ptr::copy_nonoverlapping.
Original: pointer::copy_to_nonoverlapping
Safety
See ptr::copy_nonoverlapping for safety concerns and examples.
pub fn align_offset(self, align: usize) -> usize
Computes the offset (in bits) that needs to be applied to the pointer in order to make it aligned to align.
“Alignment” here means that the pointer is selecting the start bit of a memory location whose address satisfies the requested alignment.
align is measured in bytes. If you wish to align your bit-pointer to a specific fraction (½, ¼, or ⅛ of one byte), please file an issue and this functionality will be added to BitIdx.
If the base-element address of the pointer is already aligned to align, then this will return the bit-offset required to select the first bit of the successor element.
If it is not possible to align the pointer, the implementation returns usize::MAX. It is permissible for the implementation to always return usize::MAX. Only your algorithm’s performance can depend on getting a usable offset here, not its correctness.
The offset is expressed in number of bits, and not in T elements or bytes. The value returned can be used with the wrapping_add method.
Safety
There are no guarantees whatsoëver that offsetting the pointer will not overflow or go beyond the allocation that the pointer points into. It is up to the caller to ensure that the returned offset is correct in all terms other than alignment.
Panics
The function panics if align is not a power of two.
Examples
use bitvec::prelude::*;
let data = [0u8; 3];
let ptr = BitPtr::<_, Lsb0, _>::from_ref(&data[0]);
let ptr = unsafe { ptr.add(2) };
let count = ptr.align_offset(2);
assert!(count > 0);
impl<O, T> BitPtr<Const, O, T>
where
O: BitOrder,
T: BitStore,
pub fn from_ref(elem: &T) -> Self
Constructs a BitPtr from an element reference.
Parameters
- elem: A borrowed memory element.
Returns
A read-only bit-pointer to the zeroth bit in the *elem location.
pub fn from_ptr(elem: *const T) -> Result<Self, BitPtrError<T>>
Attempts to construct a BitPtr from an element location.
Parameters
- elem: A read-only element address.
Returns
A read-only bit-pointer to the zeroth bit in the *elem location, if elem is well-formed.
pub fn from_slice(slice: &[T]) -> Self
Constructs a BitPtr from a slice reference.
This differs from from_ref in that the returned pointer keeps its provenance over the entire slice, whereas producing a pointer to the base bit of a slice with BitPtr::from_ref(&slice[0]) narrows its provenance to only the slice[0] element, and calling add to leave that element, even while remaining in the slice, may cause UB.
Parameters
- slice: An immutably borrowed slice of memory.
Returns
A read-only bit-pointer to the zeroth bit in the base location of the slice.
This pointer has provenance over the entire slice, and may safely use add to traverse memory elements as long as it stays within the slice.
impl<O, T> BitPtr<Mut, O, T>
where
O: BitOrder,
T: BitStore,
pub fn from_mut(elem: &mut T) -> Self
Constructs a BitPtr from an element reference.
Parameters
- elem: A mutably borrowed memory element.
Returns
A write-capable bit-pointer to the zeroth bit in the *elem location.
Note that even if elem is an address within a contiguous array or slice, the returned bit-pointer only has provenance for the elem location, and no other.
Safety
The exclusive borrow of elem is released after this function returns. However, you must not use any other pointer than that returned by this function to view or modify *elem, unless the T type supports aliased mutation.
pub fn from_mut_ptr(elem: *mut T) -> Result<Self, BitPtrError<T>>
Attempts to construct a BitPtr from an element location.
Parameters
- elem: A write-capable element address.
Returns
A write-capable bit-pointer to the zeroth bit in the *elem location, if elem is well-formed.
pub fn from_mut_slice(slice: &mut [T]) -> Self
Constructs a BitPtr from a slice reference.
This differs from from_mut in that the returned pointer keeps its provenance over the entire slice, whereas producing a pointer to the base bit of a slice with BitPtr::from_mut(&mut slice[0]) narrows its provenance to only the slice[0] element, and calling add to leave that element, even while remaining in the slice, may cause UB.
Parameters
- slice: A mutably borrowed slice of memory.
Returns
A write-capable bit-pointer to the zeroth bit in the base location of the slice.
This pointer has provenance over the entire slice, and may safely use add to traverse memory elements as long as it stays within the slice.
pub fn pointer(&self) -> *mut T
Gets the pointer to the base memory location containing the referent bit.
pub unsafe fn as_mut<'a>(self) -> Option<BitRef<'a, Mut, O, T>>
Produces a proxy mutable reference to the referent bit.
Because BitPtr is a non-null, well-aligned pointer, this never returns None.
API Differences
This produces a proxy type rather than a true reference. The proxy implements DerefMut<Target = bool>, and can be converted to &mut bool with &mut *. Writes to the proxy are not reflected in the proxied location until the proxy is destroyed, either through Drop or with its set method.
The proxy must be bound as mut in order to write through the binding.
Safety
Since BitPtr does not permit null or misaligned pointers, this method will always dereference the pointer, and you must ensure the following conditions are met:
- the pointer must be dereferenceable as defined in the standard-library documentation
- the pointer must point to an initialized instance of T
- you must ensure that no other pointer will race to modify the referent location while this call is reading from memory to produce the proxy
Examples
use bitvec::prelude::*;
let mut data = 0u8;
let ptr = BitPtr::<_, Lsb0, _>::from_mut(&mut data);
let mut val = unsafe { ptr.as_mut() }.unwrap();
assert!(!*val);
*val = true;
assert!(*val);
pub unsafe fn copy_from<O2, T2>(self, src: BitPtr<Const, O2, T2>, count: usize)
where
O2: BitOrder,
T2: BitStore,
pub unsafe fn copy_from_nonoverlapping<O2, T2>(self, src: BitPtr<Const, O2, T2>, count: usize)
where
O2: BitOrder,
T2: BitStore,
Copies count bits from src to self. The source and destination may not overlap.
NOTE: this has the opposite argument order of ptr::copy_nonoverlapping.
Original: pointer::copy_from_nonoverlapping
Safety
See ptr::copy_nonoverlapping for safety concerns and examples.
pub unsafe fn write(self, value: bool)
Overwrites a memory location with the given bit.
Safety
See ptr::write for safety concerns and examples.
pub unsafe fn write_volatile(self, val: bool)
Performs a volatile write of a memory location with the given bit.
Because processors do not have single-bit write instructions, this must perform a volatile read of the location, perform the bit modification within the processor register, and then perform a volatile write back to memory. These three steps are guaranteed to be sequential, but are not guaranteed to be atomic.
Volatile operations are intended to act on I/O memory, and are guaranteed to not be elided or reördered by the compiler across other volatile operations.
Safety
See ptr::write_volatile for safety concerns and examples.
Trait Implementations
impl<M, O, T> Ord for BitPtr<M, O, T>
where
M: Mutability,
O: BitOrder,
T: BitStore,
impl<M1, M2, O, T1, T2> PartialEq<BitPtr<M2, O, T2>> for BitPtr<M1, O, T1>
where
M1: Mutability,
M2: Mutability,
O: BitOrder,
T1: BitStore,
T2: BitStore,
impl<M1, M2, O, T1, T2> PartialOrd<BitPtr<M2, O, T2>> for BitPtr<M1, O, T1>
where
M1: Mutability,
M2: Mutability,
O: BitOrder,
T1: BitStore,
T2: BitStore,
fn partial_cmp(&self, other: &BitPtr<M2, O, T2>) -> Option<Ordering>
This method returns an ordering between self and other values if one exists. Read more
fn lt(&self, other: &Rhs) -> bool
This method tests less than (for self and other) and is used by the < operator. Read more
fn le(&self, other: &Rhs) -> bool
This method tests less than or equal to (for self and other) and is used by the <= operator. Read more
impl<M, O, T> RangeBounds<BitPtr<M, O, T>> for BitPtrRange<M, O, T>
where
M: Mutability,
O: BitOrder,
T: BitStore,
fn start_bound(&self) -> Bound<&BitPtr<M, O, T>>
Start index bound. Read more
fn contains<U>(&self, item: &U) -> bool
where
T: PartialOrd<U>,
U: PartialOrd<T> + ?Sized,
Returns true if item is contained in the range. Read more
impl<M, O, T> Copy for BitPtr<M, O, T> where
M: Mutability,
O: BitOrder,
T: BitStore,
impl<M, O, T> Eq for BitPtr<M, O, T> where
M: Mutability,
O: BitOrder,
T: BitStore,
Auto Trait Implementations
impl<M, O, T> RefUnwindSafe for BitPtr<M, O, T> where
M: RefUnwindSafe,
O: RefUnwindSafe,
T: RefUnwindSafe,
<T as BitStore>::Mem: RefUnwindSafe,
impl<M, O = Lsb0, T = usize> !Send for BitPtr<M, O, T>
impl<M, O = Lsb0, T = usize> !Sync for BitPtr<M, O, T>
impl<M, O, T> Unpin for BitPtr<M, O, T> where
M: Unpin,
O: Unpin,
impl<M, O, T> UnwindSafe for BitPtr<M, O, T> where
M: UnwindSafe,
O: UnwindSafe,
T: RefUnwindSafe,
<T as BitStore>::Mem: UnwindSafe,
Blanket Implementations
impl<T> BorrowMut<T> for T
where
T: ?Sized,
pub fn borrow_mut(&mut self) -> &mut T
Mutably borrows from an owned value. Read more
impl<T> FmtForward for T
fn fmt_binary(self) -> FmtBinary<Self> where Self: Binary,
Causes self to use its Binary implementation when Debug-formatted.
fn fmt_display(self) -> FmtDisplay<Self> where Self: Display,
Causes self to use its Display implementation when Debug-formatted. Read more
fn fmt_lower_exp(self) -> FmtLowerExp<Self> where Self: LowerExp,
Causes self to use its LowerExp implementation when Debug-formatted. Read more
fn fmt_lower_hex(self) -> FmtLowerHex<Self> where Self: LowerHex,
Causes self to use its LowerHex implementation when Debug-formatted. Read more
fn fmt_octal(self) -> FmtOctal<Self> where Self: Octal,
Causes self to use its Octal implementation when Debug-formatted.
fn fmt_pointer(self) -> FmtPointer<Self> where Self: Pointer,
Causes self to use its Pointer implementation when Debug-formatted. Read more
fn fmt_upper_exp(self) -> FmtUpperExp<Self> where Self: UpperExp,
Causes self to use its UpperExp implementation when Debug-formatted. Read more
fn fmt_upper_hex(self) -> FmtUpperHex<Self> where Self: UpperHex,
Causes self to use its UpperHex implementation when Debug-formatted. Read more
impl<T> Pipe for T
where
T: ?Sized,
fn pipe<R>(self, func: impl FnOnce(Self) -> R) -> R
Pipes by value. This is generally the method you want to use. Read more
fn pipe_ref<'a, R>(&'a self, func: impl FnOnce(&'a Self) -> R) -> R where R: 'a,
Borrows self and passes that borrow into the pipe function. Read more
fn pipe_ref_mut<'a, R>(&'a mut self, func: impl FnOnce(&'a mut Self) -> R) -> R where R: 'a,
Mutably borrows self and passes that borrow into the pipe function. Read more
fn pipe_borrow<'a, B, R>(&'a self, func: impl FnOnce(&'a B) -> R) -> R where Self: Borrow<B>, B: 'a + ?Sized, R: 'a,
Borrows self, then passes self.borrow() into the pipe function. Read more
fn pipe_borrow_mut<'a, B, R>(&'a mut self, func: impl FnOnce(&'a mut B) -> R) -> R where Self: BorrowMut<B>, B: 'a + ?Sized, R: 'a,
Mutably borrows self, then passes self.borrow_mut() into the pipe function. Read more
fn pipe_as_ref<'a, U, R>(&'a self, func: impl FnOnce(&'a U) -> R) -> R where Self: AsRef<U>, U: 'a + ?Sized, R: 'a,
Borrows self, then passes self.as_ref() into the pipe function.
fn pipe_as_mut<'a, U, R>(&'a mut self, func: impl FnOnce(&'a mut U) -> R) -> R where Self: AsMut<U>, U: 'a + ?Sized, R: 'a,
Mutably borrows self, then passes self.as_mut() into the pipe function. Read more
impl<T> PipeAsRef for T
impl<T> PipeBorrow for T
impl<T> PipeDeref for T
impl<T> PipeRef for T
impl<T> Tap for T
fn tap_borrow<B>(self, func: impl FnOnce(&B)) -> Self where Self: Borrow<B>, B: ?Sized,
Immutable access to the Borrow<B> of a value. Read more
fn tap_borrow_mut<B>(self, func: impl FnOnce(&mut B)) -> Self where Self: BorrowMut<B>, B: ?Sized,
Mutable access to the BorrowMut<B> of a value. Read more
fn tap_ref<R>(self, func: impl FnOnce(&R)) -> Self where Self: AsRef<R>, R: ?Sized,
Immutable access to the AsRef<R> view of a value. Read more
fn tap_ref_mut<R>(self, func: impl FnOnce(&mut R)) -> Self where Self: AsMut<R>, R: ?Sized,
Mutable access to the AsMut<R> view of a value. Read more
fn tap_deref<T>(self, func: impl FnOnce(&T)) -> Self where Self: Deref<Target = T>, T: ?Sized,
Immutable access to the Deref::Target of a value. Read more
fn tap_deref_mut<T>(self, func: impl FnOnce(&mut T)) -> Self where Self: DerefMut<Target = T> + Deref, T: ?Sized,
Mutable access to the Deref::Target of a value. Read more
fn tap_dbg(self, func: impl FnOnce(&Self)) -> Self
Calls .tap() only in debug builds, and is erased in release builds.
fn tap_mut_dbg(self, func: impl FnOnce(&mut Self)) -> Self
Calls .tap_mut() only in debug builds, and is erased in release builds. Read more
fn tap_borrow_dbg<B>(self, func: impl FnOnce(&B)) -> Self where Self: Borrow<B>, B: ?Sized,
Calls .tap_borrow() only in debug builds, and is erased in release builds. Read more
fn tap_borrow_mut_dbg<B>(self, func: impl FnOnce(&mut B)) -> Self where Self: BorrowMut<B>, B: ?Sized,
Calls .tap_borrow_mut() only in debug builds, and is erased in release builds. Read more
fn tap_ref_dbg<R>(self, func: impl FnOnce(&R)) -> Self where Self: AsRef<R>, R: ?Sized,
Calls .tap_ref() only in debug builds, and is erased in release builds. Read more
fn tap_ref_mut_dbg<R>(self, func: impl FnOnce(&mut R)) -> Self where Self: AsMut<R>, R: ?Sized,
Calls .tap_ref_mut() only in debug builds, and is erased in release builds. Read more
impl<T> Tap for T
fn tap<F, R>(self, func: F) -> Self where F: FnOnce(&Self) -> R,
Provides immutable access for inspection. Read more
fn tap_dbg<F, R>(self, func: F) -> Self where F: FnOnce(&Self) -> R,
Calls tap in debug builds, and does nothing in release builds.
fn tap_mut<F, R>(self, func: F) -> Self where F: FnOnce(&mut Self) -> R,
Provides mutable access for modification. Read more
fn tap_mut_dbg<F, R>(self, func: F) -> Self where F: FnOnce(&mut Self) -> R,
Calls tap_mut in debug builds, and does nothing in release builds.
impl<T, U> TapAsRef<U> for T
where
U: ?Sized,
fn tap_ref<F, R>(self, func: F) -> Self where Self: AsRef<T>, F: FnOnce(&T) -> R,
Provides immutable access to the reference for inspection.
fn tap_ref_dbg<F, R>(self, func: F) -> Self where Self: AsRef<T>, F: FnOnce(&T) -> R,
Calls tap_ref in debug builds, and does nothing in release builds.
fn tap_ref_mut<F, R>(self, func: F) -> Self where Self: AsMut<T>, F: FnOnce(&mut T) -> R,
Provides mutable access to the reference for modification.
fn tap_ref_mut_dbg<F, R>(self, func: F) -> Self where Self: AsMut<T>, F: FnOnce(&mut T) -> R,
Calls tap_ref_mut in debug builds, and does nothing in release builds.
impl<T, U> TapBorrow<U> for T
where
U: ?Sized,
fn tap_borrow<F, R>(self, func: F) -> Self where Self: Borrow<T>, F: FnOnce(&T) -> R,
Provides immutable access to the borrow for inspection. Read more
fn tap_borrow_dbg<F, R>(self, func: F) -> Self where Self: Borrow<T>, F: FnOnce(&T) -> R,
Calls tap_borrow in debug builds, and does nothing in release builds.
fn tap_borrow_mut<F, R>(self, func: F) -> Self where Self: BorrowMut<T>, F: FnOnce(&mut T) -> R,
Provides mutable access to the borrow for modification.
impl<T> TapDeref for T
fn tap_deref<F, R>(self, func: F) -> Self where Self: Deref, F: FnOnce(&Self::Target) -> R,
Immutably dereferences self for inspection.
fn tap_deref_dbg<F, R>(self, func: F) -> Self where Self: Deref, F: FnOnce(&Self::Target) -> R,
Calls tap_deref in debug builds, and does nothing in release builds.
fn tap_deref_mut<F, R>(self, func: F) -> Self where Self: DerefMut, F: FnOnce(&mut Self::Target) -> R,
Mutably dereferences self for modification.
impl<T> ToOwned for T
where
T: Clone,
type Owned = T
The resulting type after obtaining ownership.
pub fn to_owned(&self) -> T
Creates owned data from borrowed data, usually by cloning. Read more
pub fn clone_into(&self, target: &mut T)
(unstable: toowned_clone_into) Uses borrowed data to replace owned data, usually by cloning. Read more