Commit graph

135 commits

Author SHA1 Message Date
Oli Scherer
e96ce20b34 s/generator/coroutine/ 2023-10-20 21:14:01 +00:00
Oli Scherer
60956837cf s/Generator/Coroutine/ 2023-10-20 21:10:38 +00:00
ltdk
b9c2d0e4ab Fix typo in atomic docs 2023-10-20 00:57:29 -04:00
Ralf Jung
e494df436d remove 128-bit atomics; they are not exposed on those targets anyway 2023-10-17 07:56:49 +02:00
Ralf Jung
6605116463 use target-arch based table 2023-10-16 19:29:16 +02:00
Ralf Jung
9d8506d27f acquire loads can be done as relaxed load; acquire fence 2023-10-15 17:41:50 +02:00
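A minimal sketch of the equivalence this commit refers to (my illustration, not part of the commit): an acquire load can be implemented as a relaxed load followed by an acquire fence.

```rust
use std::sync::atomic::{fence, AtomicU32, Ordering};

static READY: AtomicU32 = AtomicU32::new(0);

fn main() {
    // At least as strong as READY.load(Ordering::Acquire):
    let v = READY.load(Ordering::Relaxed);
    fence(Ordering::Acquire); // later memory accesses cannot be reordered before the load
    assert_eq!(v, 0);
}
```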
Ralf Jung
9b8686d832 only guarantee for Relaxed; add ptr-size fallback 2023-10-15 17:41:50 +02:00
Ralf Jung
275d5c8251 wording 2023-10-15 17:41:50 +02:00
Ralf Jung
69b62ecc69 define 'read-only memory' 2023-10-15 17:41:50 +02:00
Ralf Jung
07b8c10ed8 add general powerpc64le bound
(some powerpc64le targets can guarantee more, but for now it doesn't seem worth separating by OS/vendor)
2023-10-15 17:41:50 +02:00
Ralf Jung
7453235feb add ARM and RISC-V values 2023-10-15 17:41:50 +02:00
Ralf Jung
b5e67a00d9 document when atomic loads are guaranteed read-only 2023-10-15 17:41:50 +02:00
bors
39acbed8d6 Auto merge of #116407 - Mark-Simulacrum:bootstrap-bump, r=onur-ozkan
Bump bootstrap compiler to just-released beta

https://forge.rust-lang.org/release/process.html#master-bootstrap-update-t-2-day-tuesday
2023-10-14 05:44:48 +00:00
Trevor Gross
227c844b16 Stabilize 'atomic_from_ptr', move const gate to 'const_atomic_from_ptr' 2023-10-13 16:10:33 -04:00
Trevor Gross
3209d2d46e Correct documentation for atomic_from_ptr
* Remove duplicate alignment note that mentioned `AtomicBool` with other
  types
* Update safety requirements about when non-atomic operations are
  allowed
2023-10-13 16:10:29 -04:00
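A hedged sketch of the stabilized `from_ptr` API in use (illustrative; the precise safety wording is what the commit above corrects):

```rust
use std::sync::atomic::{AtomicI32, Ordering};

fn main() {
    let mut value = 0_i32;
    let ptr: *mut i32 = &mut value;

    // Safety: `ptr` is valid and properly aligned, and while the returned
    // reference is live the value is accessed only through atomic operations.
    let atomic: &AtomicI32 = unsafe { AtomicI32::from_ptr(ptr) };
    atomic.store(5, Ordering::Relaxed);
    assert_eq!(atomic.load(Ordering::Relaxed), 5);
}
```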
Mark Rousskov
ea1066d0be Bump to latest beta 2023-10-08 19:57:43 -04:00
David Tolnay
a95f20c9ad Add Exclusive forwarding impls (FnOnce, FnMut, Generator) 2023-09-28 10:22:19 -07:00
est31
33970db8c6 Add #[rustc_never_returns_null_ptr] to std functions
Add the attribute to standard library functions that
are guaranteed to never return null pointers, as their
originating data wouldn't allow it.
2023-08-06 00:20:28 +02:00
bors
e7d6ce3a6f Auto merge of #114034 - Amanieu:riscv-atomicbool, r=thomcc
Optimize `AtomicBool` for targets that don't support byte-sized atomics

`AtomicBool` is defined to have the same layout as `bool`, which means that we guarantee it has a size of 1 byte. However, on certain architectures such as RISC-V, LLVM will emulate byte atomics using a masked CAS loop on an aligned word.

We can take advantage of the fact that `bool` only ever holds 0 or 1 to replace `swap` operations with `and`/`or` operations that LLVM can lower to word-sized atomic `and`/`or` operations. This is effective because the incoming value to a `swap` or `compare_exchange` on `AtomicBool` is often a compile-time constant.

### Example

```rust
pub fn swap_true(atomic: &AtomicBool) -> bool {
    atomic.swap(true, Ordering::Relaxed)
}
```

### Old

```asm
	andi	a1, a0, -4
	slli	a0, a0, 3
	li	a2, 255
	sllw	a2, a2, a0
	li	a3, 1
	sllw	a3, a3, a0
	slli	a3, a3, 32
	srli	a3, a3, 32
.LBB1_1:
	lr.w	a4, (a1)
	mv	a5, a3
	xor	a5, a5, a4
	and	a5, a5, a2
	xor	a5, a5, a4
	sc.w	a5, a5, (a1)
	bnez	a5, .LBB1_1
	srlw	a0, a4, a0
	andi	a0, a0, 255
	snez	a0, a0
	ret
```

### New

```asm
	andi	a1, a0, -4
	slli	a0, a0, 3
	li	a2, 1
	sllw	a2, a2, a0
	amoor.w	a1, a2, (a1)
	srlw	a0, a1, a0
	andi	a0, a0, 255
	snez	a0, a0
	ret
```
2023-07-27 01:00:12 +00:00
Amanieu d'Antras
3dbee5bc71 Optimize AtomicBool for targets that don't support byte-sized atomics
`AtomicBool` is defined to have the same layout as `bool`, which means
that we guarantee it has a size of 1 byte. However, on certain
architectures such as RISC-V, LLVM will emulate byte atomics using a
masked CAS loop on an aligned word.

We can take advantage of the fact that `bool` only ever holds 0 or 1
to replace `swap` operations with `and`/`or` operations that LLVM can
lower to word-sized atomic `and`/`or` operations. This is effective
because the incoming value to a `swap` or `compare_exchange` on
`AtomicBool` is often a compile-time constant.
2023-07-26 00:09:49 +01:00
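A sketch of the rewrite both messages describe (my illustration, not the patch itself): because `bool` only holds 0 or 1, a constant-input `swap` is equivalent to `fetch_or`/`fetch_and`, which LLVM can lower to the word-sized atomic `or`/`and` (the `amoor.w` in the new assembly).

```rust
use std::sync::atomic::{AtomicBool, Ordering};

fn main() {
    let flag = AtomicBool::new(false);

    // swap(true) == fetch_or(true): sets the flag, returns the old value.
    assert_eq!(flag.fetch_or(true, Ordering::Relaxed), false);

    // swap(false) == fetch_and(false): clears the flag, returns the old value.
    assert_eq!(flag.fetch_and(false, Ordering::Relaxed), true);
}
```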
KaDiWa
c3462a0676 remove the unstable core::sync::atomic::ATOMIC_*_INIT constants 2023-07-18 09:45:52 +02:00
Pietro Albini
4e04da6183 replace version placeholders 2023-04-28 08:47:55 -07:00
Deadbeef
76dbe29104 rm const traits in libcore 2023-04-16 06:49:27 +00:00
KaDiWa
ad2b34d0e3 remove some unneeded imports 2023-04-12 19:27:18 +02:00
Mark Rousskov
bb8a0ffa23 Bump to latest beta 2023-03-15 08:55:22 -04:00
Matthias Krüger
e670379b57 Rollup merge of #108419 - tgross35:atomic-as-ptr, r=m-ou-se
Stabilize `atomic_as_ptr`

Fixes #66893

This stabilizes the `as_ptr` methods for atomics. The stabilization feature gate used here is `atomic_as_ptr`, which supersedes `atomic_mut_ptr` to match the change in https://github.com/rust-lang/rust/pull/107736.

This needs FCP.

New stable API:

```rust
impl AtomicBool {
    pub const fn as_ptr(&self) -> *mut bool;
}

impl AtomicI32 {
    pub const fn as_ptr(&self) -> *mut i32;
}

// Includes all other atomic types

impl<T> AtomicPtr<T> {
    pub const fn as_ptr(&self) -> *mut *mut T;
}
```

r? libs-api
`@rustbot` label +needs-fcp
2023-03-13 21:55:35 +01:00
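A small usage sketch for the newly stable `as_ptr` (illustrative, not from the PR); the raw pointer is chiefly useful for FFI, and all access through it must still be synchronized:

```rust
use std::sync::atomic::{AtomicI32, Ordering};

fn main() {
    let a = AtomicI32::new(1);
    let p: *mut i32 = a.as_ptr();

    // Sound here only because no other thread can access `a` concurrently.
    unsafe { *p += 1 };
    assert_eq!(a.load(Ordering::Relaxed), 2);
}
```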
Maybe Waffle
a2baba09a2 Fill in tracking issue for feature("atomic_from_ptr") 2023-03-02 12:00:26 +00:00
Maybe Waffle
4a2c555904 Add Atomic*::from_ptr 2023-02-27 19:05:19 +00:00
Trevor Gross
318be2bee9 Stabilize atomic_as_ptr 2023-02-23 23:29:10 -05:00
Trevor Gross
787b1116e8 Rename atomic 'as_mut_ptr' to 'as_ptr' to match Cell (ref #66893) 2023-02-10 20:46:14 -05:00
Trevor Gross
b51d3b9443 Mark 'atomic_mut_ptr' methods const 2023-02-05 17:03:46 -05:00
Michal Rostecki
474ea87943 core: Support a variety of atomic widths in width-agnostic functions
Before this change, the following functions and macros were annotated
with `#[cfg(target_has_atomic = "8")]` or
`#[cfg(target_has_atomic_load_store = "8")]`:

* `atomic_int`
* `strongest_failure_ordering`
* `atomic_swap`
* `atomic_add`
* `atomic_sub`
* `atomic_compare_exchange`
* `atomic_compare_exchange_weak`
* `atomic_and`
* `atomic_nand`
* `atomic_or`
* `atomic_xor`
* `atomic_max`
* `atomic_min`
* `atomic_umax`
* `atomic_umin`

However, none of those functions and macros actually depend on 8-bit
width; they are needed for all atomic widths (16-bit, 32-bit, 64-bit,
etc.). Some targets might not support 8-bit atomics at all (e.g. BPF,
if we were to enable atomic CAS for it).

This change fixes that by removing the `"8"` argument from the
annotations, so they accept the whole range of widths.

Fixes #106845
Fixes #106795

Signed-off-by: Michal Rostecki <vadorovsky@gmail.com>
2023-01-25 10:44:03 +08:00
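For context, a sketch (my illustration, not part of the patch) of the stable `cfg` predicate that mirrors these internal gates; it is keyed by width, which is exactly what the `"8"` arguments above hard-coded:

```rust
// Compiled only on targets with native 64-bit atomic support.
#[cfg(target_has_atomic = "64")]
fn bump() -> u64 {
    use std::sync::atomic::{AtomicU64, Ordering};
    static HITS: AtomicU64 = AtomicU64::new(0);
    HITS.fetch_add(1, Ordering::Relaxed)
}

fn main() {
    #[cfg(target_has_atomic = "64")]
    let _ = bump();
}
```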
Maybe Waffle
22b4c68895 Make // SAFETY comment part of the doctest, and not surrounding code 2023-01-12 07:28:43 +00:00
Maybe Waffle
f1a63bc2dd Remove unused mut from a doctest 2023-01-12 07:27:51 +00:00
Maybe Waffle
a513c84a5b Add AtomicPtr::as_mut_ptr 2023-01-12 07:27:36 +00:00
Albert Larsan
736342bb46 Correct typos in core::sync::Exclusive::get_{pin_mut, mut} 2022-12-12 09:19:17 +01:00
clubby789
19bc8fb05a Remove extra spaces 2022-10-19 23:54:00 +01:00
Rageking8
7122abaddf more dupe word typos 2022-10-14 12:57:56 +08:00
Matthias Krüger
d13f7aef70 Rollup merge of #101774 - Riolku:atomic-update-aba, r=m-ou-se
Warn about safety of `fetch_update`

Specifically as it relates to the ABA problem.

`fetch_update` is a useful function, and one that isn't provided by, say, C++. However, this does not mean the function is magic: it is implemented in terms of `compare_exchange_weak` and, in particular, suffers from the ABA problem. See the following code, a naive implementation of `pop` for a lock-free queue:

```rust
fn pop(&self) -> Option<i32> {
    self.front
        .fetch_update(Ordering::Relaxed, Ordering::Acquire, |front| {
            if front.is_null() {
                None
            } else {
                Some(unsafe { (*front).next })
            }
        })
        .ok()
        // Field name illustrative: read the payload of the popped node.
        .map(|front| unsafe { (*front).value })
}
```

This code is unsound if called from multiple threads because of the ABA problem. Specifically, suppose nodes are allocated with `Box`. Suppose the following sequence happens:

```
Initial: Queue is X -> Y.

Thread A: Starts popping, is pre-empted.
Thread B: Pops successfully, twice, leaving the queue empty.
Thread C: Pushes, and `Box` returns X (very common for allocators)
Thread A: Wakes up, sees the head is still X, and stores Y as the new head.
```

But `Y` is deallocated. This is undefined behaviour.

Adding a note about this problem to `fetch_update` should help prevent users from being misled; linking to this common pitfall is also, in my opinion, an improvement to our docs on atomics.
2022-10-11 18:59:46 +02:00
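By contrast, a sketch of a sound use of `fetch_update` (my example, not from the PR): when only the numeric value matters and no allocation identity is involved, ABA is harmless.

```rust
use std::sync::atomic::{AtomicU32, Ordering};

/// Increment, saturating at u32::MAX; returns the previous value.
fn saturating_increment(counter: &AtomicU32) -> u32 {
    match counter.fetch_update(Ordering::Relaxed, Ordering::Relaxed, |n| n.checked_add(1)) {
        Ok(prev) | Err(prev) => prev,
    }
}

fn main() {
    let c = AtomicU32::new(0);
    assert_eq!(saturating_increment(&c), 0);
    assert_eq!(c.load(Ordering::Relaxed), 1);
}
```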
Thom Chiovoloni
29efe8c789 Add #[inline] to trivial functions on core::sync::Exclusive 2022-09-22 22:15:27 -07:00
Keenan Gugeler
3d28a1ad76 Warn about safety of fetch_update
Specifically as it relates to the ABA problem.
2022-09-14 13:25:14 -04:00
Maybe Waffle
b2625e24b9 fix nitpicks from review 2022-08-21 06:36:11 +04:00
Maybe Waffle
3ba393465f Make some docs nicer wrt pointer offsets 2022-08-21 02:22:20 +04:00
Mark Rousskov
154a09dd91 Adjust cfgs 2022-08-12 16:28:15 -04:00
Ralf Jung
2b269cad43 miri: prune some atomic operation details from stacktrace 2022-07-20 16:34:24 -04:00
Yuki Okushi
796bc7cae3 Rollup merge of #98383 - m-ou-se:remove-memory-order-restrictions, r=joshtriplett
Remove restrictions on compare-exchange memory ordering.

We currently don't allow the failure memory ordering of compare-exchange operations to be stronger than the success ordering, as was the case in C++11 when its memory model was copied to Rust. However, this restriction was lifted in C++17 as part of [p0418r2](https://wg21.link/p0418r2). It's time we lift the restriction too.

| Success | Failure | Before | After |
|---------|---------|--------|-------|
| Relaxed | Relaxed | ✔️ | ✔️ |
| Relaxed | Acquire |                 | ✔️ |
| Relaxed | SeqCst  |                 | ✔️ |
| Acquire | Relaxed | ✔️ | ✔️ |
| Acquire | Acquire | ✔️ | ✔️ |
| Acquire | SeqCst  |                 | ✔️ |
| Release | Relaxed | ✔️ | ✔️ |
| Release | Acquire |                 | ✔️ |
| Release | SeqCst  |                 | ✔️ |
| AcqRel  | Relaxed | ✔️ | ✔️ |
| AcqRel  | Acquire | ✔️ | ✔️ |
| AcqRel  | SeqCst  |                 | ✔️ |
| SeqCst  | Relaxed | ✔️ | ✔️ |
| SeqCst  | Acquire | ✔️ | ✔️ |
| SeqCst  | SeqCst  | ✔️ | ✔️ |
| \*      | Release |                 |                 |
| \*      | AcqRel  |                 |                 |

Fixes https://github.com/rust-lang/rust/issues/68464
2022-07-18 08:39:57 +09:00
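An illustrative sketch of one combination the table marks as newly allowed (failure ordering stronger than success):

```rust
use std::sync::atomic::{AtomicBool, Ordering};

fn main() {
    let flag = AtomicBool::new(false);

    // Success: Relaxed, failure: Acquire. Rejected before this change,
    // accepted after it (per p0418r2 semantics).
    let _ = flag.compare_exchange(false, true, Ordering::Relaxed, Ordering::Acquire);
}
```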
Matthias Krüger
342b666d59 Rollup merge of #99094 - AldaronLau:atomic-ptr-extra-space, r=Dylan-DPC
Remove extra space in AtomicPtr::new docs
2022-07-11 00:33:48 +02:00
Maybe Waffle
e9292b7652 fill new tracking issue for feature(strict_provenance_atomic_ptr) 2022-07-10 13:17:33 +04:00
Jeron Aldaron Lau
4944b5769b Remove extra space in AtomicPtr::new docs 2022-07-09 14:20:34 -05:00
Dylan DPC
3c35da224b Rollup merge of #99070 - tamird:update-tracking-issue, r=RalfJung
Update integer_atomics tracking issue

Updates #32976.
Updates #99069.

r? `@RalfJung`
2022-07-09 11:28:09 +05:30