Commit graph

994 commits

Author SHA1 Message Date
bors
aca749eefc Auto merge of #121801 - zetanumbers:async_drop_glue, r=oli-obk
Add simple async drop glue generation

This is a prototype of async drop glue generation for some simple types. Async drop glue is intended to behave very similarly to the regular drop glue, except for being asynchronous. Currently it does not execute synchronous drops but only calls user implementations of the `AsyncDrop::async_drop` associated function and awaits the returned future. It is not complete: it only recurses into arrays, slices, tuples, and structs, and it does not yet enforce the same sensible restrictions as the old `Drop` trait implementation (such as requiring the same bounds as the type definition), even though the code assumes they hold (this requires future work).

The current design uses a workaround: it does not create custom async-destructor state-machine types for ADTs, but instead uses future combinator types defined in the standard library (`deferred_async_drop`, `chain`, `ready_unit`).

Also I recommend reading my [explainer](https://zetanumbers.github.io/book/async-drop-design.html).

This is a part of the [MCP: Low level components for async drop](https://github.com/rust-lang/compiler-team/issues/727) work.
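
A minimal, self-contained sketch of the pattern the glue relies on (call the user's `async_drop` and await the returned future). The `AsyncDropLike` trait and `Connection` type below are illustrative stand-ins; the real `AsyncDrop` trait is unstable and its exact signature is defined by this PR and the explainer.

```rust
// Illustrative stand-in only: the real `AsyncDrop` trait is unstable and its
// exact shape is defined by this PR and the explainer. This mock trait just
// shows the pattern the glue relies on: call the user's `async_drop` and await
// the returned future instead of running the synchronous drop glue.
#![allow(dead_code)]
use std::future::Future;
use std::pin::Pin;

trait AsyncDropLike {
    fn async_drop<'a>(self: Pin<&'a mut Self>) -> Pin<Box<dyn Future<Output = ()> + 'a>>;
}

struct Connection {
    id: u32,
}

impl AsyncDropLike for Connection {
    fn async_drop<'a>(self: Pin<&'a mut Self>) -> Pin<Box<dyn Future<Output = ()> + 'a>> {
        Box::pin(async move {
            // Hypothetical asynchronous cleanup, e.g. sending a close frame
            // before the value goes away.
            println!("asynchronously closing connection {}", self.id);
        })
    }
}

fn main() {}
```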

Feature completeness:

 - [x] `AsyncDrop` trait
 - [ ] `async_drop_in_place_raw`/async drop glue generation support for
   - [x] Trivially destructible types (integers, bools, floats, string slices, pointers, references, etc.)
   - [x] Arrays and slices (array pointer is unsized into slice pointer)
   - [x] ADTs (enums, structs, unions)
   - [x] tuple-like types (tuples, closures)
   - [ ] Dynamic types (`dyn Trait`, see explainer's [proposed design](https://github.com/zetanumbers/posts/blob/main/async-drop-design.md#async-drop-glue-for-dyn-trait))
   - [ ] coroutines (https://github.com/rust-lang/rust/pull/123948)
 - [x] Async drop glue includes sync drop glue code
 - [x] Cleanup branch generation for `async_drop_in_place_raw`
 - [ ] Union rejects non-trivially async destructible fields
 - [ ] `AsyncDrop` implementation requires same bounds as type definition
 - [ ] Skip trivially destructible fields (optimization)
 - [ ] New [`TyKind::AdtAsyncDestructor`](https://github.com/zetanumbers/posts/blob/main/async-drop-design.md#adt-async-destructor-types) and get rid of combinators
 - [ ] [Synchronously undroppable types](https://github.com/zetanumbers/posts/blob/main/async-drop-design.md#exclusively-async-drop)
 - [ ] Automatic async drop at the end of the scope in async context
2024-04-23 02:10:23 +00:00
Scott McMurray
bb8d6f790b Address PR feedback 2024-04-21 11:08:37 -07:00
Scott McMurray
de64ff76f8 Use it in the library, and InstSimplify it away in the easy places 2024-04-21 11:08:37 -07:00
bors
ce3263e60e Auto merge of #124113 - RalfJung:interpret-scalar-ops, r=oli-obk
interpret: use ScalarInt for bin-ops; avoid PartialOrd for ScalarInt

Best reviewed commit-by-commit

r? `@oli-obk`
2024-04-19 17:00:28 +00:00
Ralf Jung
42220f0930 ScalarInt: add methods to assert being a (u)int of given size 2024-04-19 13:51:52 +02:00
Ralf Jung
5e6184cdb7 interpret/binary_int_op: avoid dropping to raw ints until we determined the sign 2024-04-18 14:25:06 +02:00
bors
c25473ff62 Auto merge of #124008 - nnethercote:simpler-static_assert_size, r=Nilstrieb
Simplify `static_assert_size`s.

We want to run them on all 64-bit platforms.

r? `@ghost`
2024-04-18 09:47:45 +00:00
Nicholas Nethercote
0d97669a17 Simplify static_assert_sizes.
We want to run them on all 64-bit platforms.
2024-04-18 15:36:25 +10:00
bors
5260893724 Auto merge of #122684 - oli-obk:delay_interning_errors_to_after_validaiton, r=RalfJung
Delay interning errors to after validation

fixes https://github.com/rust-lang/rust/issues/122398
fixes #122548

This improves diagnostics, since validation errors are usually more helpful than interning errors, which just make broad statements about the entire constant.

r? `@RalfJung`
2024-04-18 02:34:04 +00:00
Oli Scherer
126dcc618d Use less fragile error handling 2024-04-17 09:50:44 +00:00
Oli Scherer
77fe9f0a72 Validate before reporting interning errors.
Validation produces much higher-quality errors and already handles most of the cases.
2024-04-17 09:50:44 +00:00
Oli Scherer
8b2a4f8b43 Simplify alloc id mutability check 2024-04-17 09:50:44 +00:00
Oli Scherer
140c9e10bb Deduplicate logic for checking the mutability of allocations 2024-04-17 09:50:44 +00:00
Oli Scherer
d87e9636d5 Run the "is this static mutable" logic the same way as in in_mutable_memory 2024-04-17 09:50:44 +00:00
Oli Scherer
8c9cba2be7 Validate nested static items 2024-04-17 09:50:15 +00:00
Ralf Jung
ae7b07f2dc interpret: rename base_pointer -> root_pointer
also in Miri, "base tag" -> "root tag"
2024-04-17 07:35:48 +02:00
Ralf Jung
9e239bdc76 interpret: pass MemoryKind to adjust_alloc_base_pointer 2024-04-17 07:35:48 +02:00
zetanumbers
24a24ec6ba Add simple async drop glue generation
Explainer: https://zetanumbers.github.io/book/async-drop-design.html

https://github.com/rust-lang/rust/pull/121801
2024-04-16 20:45:07 +03:00
Matthias Krüger
4971d9ffe4
Rollup merge of #124024 - RalfJung:interpret-comment, r=oli-obk
interpret: remove outdated comment

In https://github.com/rust-lang/rust/pull/107756, allocation became generally fallible, so the "only panic if there is provenance" comment no longer applies.

r? ``@oli-obk``
2024-04-16 17:54:46 +02:00
Ralf Jung
5b8b9cfaaa interpret: remove outdated comment 2024-04-16 17:33:12 +02:00
Ralf Jung
18bfca50f1 interpret: pass MemoryKind to before_memory_deallocation 2024-04-16 16:37:34 +02:00
Oli Scherer
84acfe86de Actually create ranged int types in the type system. 2024-04-08 12:02:19 +00:00
Ben Kimock
a7912cb421 Put checks that detect UB under their own flag below debug_assertions 2024-04-06 11:21:47 -04:00
Jacob Pratt
4332498a6d
Rollup merge of #123401 - Zalathar:assert-size-aarch64, r=fmease
Check `x86_64` size assertions on `aarch64`, too

(Context: https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Checking.20size.20assertions.20on.20aarch64.3F)

Currently the compiler has around 30 sets of `static_assert_size!` for various size-critical data structures (e.g. various IR nodes), guarded by `#[cfg(all(target_arch = "x86_64", target_pointer_width = "64"))]`.

(Presumably this cfg avoids having to maintain separate size values for 32-bit targets and unusual 64-bit targets. Apparently it may have been necessary before the i128/u128 alignment changes, too.)

This is slightly inconvenient for people on aarch64 workstations (e.g. Macs), because the assertions normally aren't checked until we push to a PR. So this PR adds `aarch64` to the `#[cfg(..)]` guarding all of those assertions in the compiler.

---

Implemented with a simple find/replace. Verified by manually inspecting each `static_assert_size!` in `compiler/`, and checking that either the replacement succeeded, or adding aarch64 wouldn't have been appropriate.
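
For illustration, here is a self-contained sketch of the guard pattern being changed. The macro body mirrors the compiler's size-assertion trick, but `IrNode` and the asserted size of 16 are made up for this example.

```rust
// Stand-in for the compiler's `static_assert_size!` pattern: the constant only
// type-checks if the asserted size matches `size_of`.
macro_rules! static_assert_size {
    ($ty:ty, $size:expr) => {
        const _: [(); $size] = [(); ::std::mem::size_of::<$ty>()];
    };
}

struct IrNode {
    a: u64,
    b: u64,
}

// Previously guarded by `#[cfg(all(target_arch = "x86_64", target_pointer_width = "64"))]`;
// after this change the guard also admits aarch64, so contributors on ARM
// workstations see violations locally instead of only in CI.
#[cfg(all(
    any(target_arch = "x86_64", target_arch = "aarch64"),
    target_pointer_width = "64"
))]
static_assert_size!(IrNode, 16);

fn main() {}
```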
2024-04-03 20:17:06 -04:00
joboet
989660c3e6
rename expose_addr to expose_provenance 2024-04-03 16:00:38 +02:00
Zalathar
2d47cd77ac Check x86_64 size assertions on aarch64, too
This makes it easier for contributors on aarch64 workstations (e.g. Macs) to
notice when these assertions have been violated.
2024-04-03 16:53:03 +11:00
Jacob Pratt
e9ef8e1efa
Rollup merge of #122935 - RalfJung:with-exposed-provenance, r=Amanieu
rename ptr::from_exposed_addr -> ptr::with_exposed_provenance

As discussed on [Zulip](https://rust-lang.zulipchat.com/#narrow/stream/136281-t-opsem/topic/To.20expose.20or.20not.20to.20expose/near/427757066).

The old name, `from_exposed_addr`, makes little sense, as it's not the address that is exposed but the provenance. (`ptr.expose_addr()` stays unchanged, as we haven't found a better option yet. The intended interpretation is "expose the provenance and return the address".)

The new name nicely matches `ptr::without_provenance`.
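
A short sketch of the round trip these names describe, written with the post-rename names (at the time of this PR they were still feature-gated on nightly):

```rust
// Expose a pointer's provenance (getting back its address), then reconstruct a
// usable pointer from that address via a previously exposed provenance.
fn main() {
    let value = 42u32;
    let p: *const u32 = &value;

    // "Expose the provenance and return the address."
    let addr: usize = p.expose_provenance();

    // Pick up some previously exposed provenance for this address.
    let q: *const u32 = std::ptr::with_exposed_provenance(addr);

    // Safety: `q` points to `value`, whose provenance was exposed above.
    unsafe { assert_eq!(*q, 42) };
}
```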
2024-04-02 20:37:39 -04:00
bors
88c2f4f5f5 Auto merge of #123385 - matthiaskrgr:rollup-v69vjbn, r=matthiaskrgr
Rollup of 8 pull requests

Successful merges:

 - #123198 (Add fn const BuildHasherDefault::new)
 - #123226 (De-LLVM the unchecked shifts [MCP#693])
 - #123302 (Make sure to insert `Sized` bound first into clauses list)
 - #123348 (rustdoc: add a couple of regression tests)
 - #123362 (Check that nested statics in thread locals are duplicated per thread.)
 - #123368 (CFI: Support non-general coroutines)
 - #123375 (rustdoc: synthetic auto trait impls: accept unresolved region vars for now)
 - #123378 (Update sysinfo to 0.30.8)

Failed merges:

 - #123349 (Fix capture analysis for by-move closure bodies)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-04-02 21:23:53 +00:00
bors
a77322c16f Auto merge of #118310 - scottmcm:three-way-compare, r=davidtwco
Add `Ord::cmp` for primitives as a `BinOp` in MIR

Update: most of this OP was written months ago.  See https://github.com/rust-lang/rust/pull/118310#issuecomment-2016940014 below for where we got to recently that made it ready for review.

---

There are dozens of reasonable ways to implement `Ord::cmp` for integers using comparison, bit-ops, and branches.  Those differences are irrelevant at the rust level, however, so we can make things better by adding `BinOp::Cmp` at the MIR level:

1. Exactly how to implement it is left up to the backends, so LLVM can use whatever pattern its optimizer best recognizes and cranelift can use whichever pattern codegens the fastest.
2. By not inlining those details for every use of `cmp`, we drastically reduce the amount of MIR generated for `derive`d `PartialOrd`, while also making it more amenable to MIR-level optimizations.

Having extremely careful `if` ordering to μoptimize resource usage on broadwell (#63767) is great, but it really feels to me like libcore is the wrong place to put that logic.  Similarly, using subtraction [tricks](https://graphics.stanford.edu/~seander/bithacks.html#CopyIntegerSign) (#105840) is arguably even nicer, but depends on the optimizer understanding it (https://github.com/llvm/llvm-project/issues/73417) to be practical.  Or maybe [bitor is better than add](https://discourse.llvm.org/t/representing-in-ir/67369/2?u=scottmcm)?  But maybe only on a future version that [has `or disjoint` support](https://discourse.llvm.org/t/rfc-add-or-disjoint-flag/75036?u=scottmcm)?  And just because one of those forms happens to be good for LLVM, there's no guarantee that it'd be the same form that GCC or Cranelift would rather see -- especially given their very different optimizers.  Not to mention that if LLVM gets a spaceship intrinsic -- [which it should](https://rust-lang.zulipchat.com/#narrow/stream/131828-t-compiler/topic/Suboptimal.20inlining.20in.20std.20function.20.60binary_search.60/near/404250586) -- we'll need at least a rustc intrinsic to be able to call it.

As for simplifying it in Rust, we now regularly inline `{integer}::partial_cmp`, but it's quite a large amount of IR.  The best way to see that is with 8811efa88b (diff-d134c32d028fbe2bf835fef2df9aca9d13332dd82284ff21ee7ebf717bfa4765R113) -- I added a new pre-codegen MIR test for a simple 3-tuple struct, and this PR changes it from 36 locals and 26 basic blocks down to 24 locals and 8 basic blocks.  Even better, as soon as the construct-`Some`-then-match-it-in-same-BB noise is cleaned up, this'll expose the `Cmp == 0` branches clearly in MIR, so that an InstCombine (#105808) can simplify that to just a `BinOp::Eq` and thus fix some of our generated code perf issues.  (Tracking that through today's `if a < b { Less } else if a == b { Equal } else { Greater }` would be *much* harder.)

---

r? `@ghost`
But first I should check that perf is ok with this
~~...and my true nemesis, tidy.~~
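
For a sense of what `BinOp::Cmp` models (the change itself is in MIR building, not in user-facing code), here is roughly the branchy shape that derived code used to inline, next to the single three-way comparison:

```rust
use std::cmp::Ordering;

// Roughly the expansion derived `Ord`/`PartialOrd` used to produce per field.
fn cmp_expanded(a: i32, b: i32) -> Ordering {
    if a < b {
        Ordering::Less
    } else if a == b {
        Ordering::Equal
    } else {
        Ordering::Greater
    }
}

// With `BinOp::Cmp`, this lowers to a single three-way MIR operation, leaving
// the choice of branches or bit tricks to the backend.
fn cmp_via_builtin(a: i32, b: i32) -> Ordering {
    a.cmp(&b)
}

fn main() {
    for (a, b) in [(1, 2), (2, 2), (3, 2)] {
        assert_eq!(cmp_expanded(a, b), cmp_via_builtin(a, b));
    }
}
```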
2024-04-02 19:21:44 +00:00
Oli Scherer
64b75f736d Forbid implicit nested statics in thread local statics 2024-04-02 13:00:46 +00:00
Michael Goulet
4ff8a9bd6b Don't inherit codegen attrs from parent static 2024-03-31 22:34:00 -04:00
Scott McMurray
3da115a93b Add+Use mir::BinOp::Cmp 2024-03-23 23:23:41 -07:00
Ralf Jung
f2cff5ebb9 also rename the SIMD intrinsic 2024-03-23 23:03:37 +01:00
bors
2f090c30dd Auto merge of #122629 - RalfJung:assert-unsafe-precondition, r=saethlin
refactor check_{lang,library}_ub: use a single intrinsic

This enacts the plan I laid out [here](https://github.com/rust-lang/rust/pull/122282#issuecomment-1996917998): use a single intrinsic, called `ub_checks` (in anticipation of https://github.com/rust-lang/compiler-team/issues/725), that just exposes the value of `debug_assertions` (consistently implemented in both codegen and the interpreter). Put the language vs library UB logic into the library.

This makes it easier to do something like https://github.com/rust-lang/rust/pull/122282 in the future: that just slightly alters the semantics of `ub_checks` (making it more of an approximation when crates built with different flags are mixed), but it no longer affects whether these checks can happen in Miri or at compile time.

The first commit just moves things around; I don't think these macros and functions belong in `intrinsics.rs` as they are not intrinsics.

r? `@saethlin`
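
A rough, self-contained sketch of that split. The real code uses the `ub_checks` intrinsic together with `const_eval_select`; the plain functions below are stand-ins meant only to show where the language-vs-library policy ends up living.

```rust
// Stand-in for the `ub_checks` intrinsic, whose value reflects whether UB
// checks (tied to `debug_assertions`) are enabled for this build.
fn ub_checks() -> bool {
    cfg!(debug_assertions)
}

// Library UB checks run whenever checks are enabled, including under Miri and
// in compile-time interpretation, since the interpreter cannot detect library
// UB on its own.
fn check_library_ub() -> bool {
    ub_checks()
}

// Language UB checks are skipped where the interpreter already detects language
// UB itself (approximated here with `cfg!(miri)`).
fn check_language_ub() -> bool {
    ub_checks() && !cfg!(miri)
}

fn main() {
    println!(
        "library checks: {}, language checks: {}",
        check_library_ub(),
        check_language_ub()
    );
}
```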
2024-03-23 21:11:00 +00:00
Ralf Jung
6177530420 refactor check_{lang,library}_ub: use a single intrinsic, put policy into library 2024-03-23 18:45:05 +01:00
bors
020bbe46bd Auto merge of #122947 - matthiaskrgr:rollup-10j7orh, r=matthiaskrgr
Rollup of 11 pull requests

Successful merges:

 - #120577 (Stabilize slice_split_at_unchecked)
 - #122698 (Cancel `cargo update` job if there's no updates)
 - #122780 (Rename `hir::Local` into `hir::LetStmt`)
 - #122915 (Delay a bug if no RPITITs were found)
 - #122916 (docs(sync): normalize dot in fn summaries)
 - #122921 (Enable more mir-opt tests in debug builds)
 - #122922 (-Zprint-type-sizes: print the types of awaitees and unnamed coroutine locals.)
 - #122927 (Change an ICE regression test to use the original reproducer)
 - #122930 (add panic location to 'panicked while processing panic')
 - #122931 (Fix some typos in the pin.rs)
 - #122933 (tag_for_variant follow-ups)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-03-23 15:58:17 +00:00
bors
d6eb0f5a09 Auto merge of #122582 - scottmcm:swap-intrinsic-v2, r=oli-obk
Let codegen decide when to `mem::swap` with immediates

Making `libcore` decide this is silly; the backend has so much better information about when it's a good idea.

Thus this PR introduces a new `typed_swap` intrinsic with a fallback body, and replaces that fallback implementation when swapping immediates or scalar pairs.

r? oli-obk

Replaces #111744, and means we'll never need more libs PRs like #111803 or #107140
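
A hedged sketch of the fallback-body idea. The real `typed_swap` intrinsic lives in `core::intrinsics` and operates on raw pointers, so the name and signature below are simplified stand-ins.

```rust
// Generic fallback: a bytewise, non-overlapping swap. A backend that knows `T`
// fits in registers can replace this with simple loads and stores instead.
fn typed_swap_fallback<T>(x: &mut T, y: &mut T) {
    // SAFETY: `x` and `y` are distinct `&mut` references, so they do not overlap.
    unsafe {
        std::ptr::swap_nonoverlapping(x as *mut T, y as *mut T, 1);
    }
}

fn main() {
    let (mut a, mut b) = (String::from("left"), String::from("right"));
    typed_swap_fallback(&mut a, &mut b);
    assert_eq!((a.as_str(), b.as_str()), ("right", "left"));
}
```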
2024-03-23 13:57:55 +00:00
Ralf Jung
038e7c6c38 rename MIR int2ptr casts to match library name 2024-03-23 13:18:33 +01:00
Ralf Jung
928bd3b4e0 tag_for_variant follow-ups 2024-03-23 10:45:42 +01:00
bors
0ad5e0d2de Auto merge of #122900 - matthiaskrgr:rollup-nls90mb, r=matthiaskrgr
Rollup of 8 pull requests

Successful merges:

 - #114009 (compiler: allow transmute of ZST arrays with generics)
 - #122195 (Note that the caller chooses a type for type param)
 - #122651 (Suggest `_` for missing generic arguments in turbofish)
 - #122784 (Add `tag_for_variant` query)
 - #122839 (Split out `PredicatePolarity` from `ImplPolarity`)
 - #122873 (Merge my contributor emails into one using mailmap)
 - #122885 (Adjust better spastorino membership to triagebot's adhoc_groups)
 - #122888 (add a couple more tests)

r? `@ghost`
`@rustbot` modify labels: rollup
2024-03-22 22:35:11 +00:00
Matthias Krüger
96be3e7cc8
Rollup merge of #122784 - jswrenn:tag_for_variant, r=compiler-errors
Add `tag_for_variant` query

This query allows for sharing code between `rustc_const_eval` and `rustc_transmutability`. It's a precursor to a PR I'm working on to entirely replace the bespoke layout computations in `rustc_transmutability`.

r? `@compiler-errors`
2024-03-22 20:31:29 +01:00
Jack Wrenn
2de9010f66 Add tag_for_variant query
This query allows for sharing code between `rustc_const_eval` and
`rustc_transmutability`.

Also moves `DummyMachine` to `rustc_const_eval`.
2024-03-22 17:01:49 +00:00
Michael Goulet
7be0dbe772 Make RawPtr take Ty and Mutbl separately 2024-03-22 11:13:29 -04:00
Michael Goulet
ff0c31e6b9 Programmatically convert some of the pat ctors 2024-03-22 11:13:29 -04:00
Matthias Krüger
84e55be8da
Rollup merge of #122537 - RalfJung:interpret-allocation, r=oli-obk
interpret/allocation: fix aliasing issue in interpreter and refactor getters a bit

That new raw getter will be needed to let Miri pass pointers to natively executed FFI code ("extern-so" mode).

While doing that I realized our `get_bytes_mut` functions are named less scarily than `get_bytes_unchecked`, so I rectified that. Also I realized `mem_copy_repeatedly` would break if we called it for multiple overlapping copies, so I made sure this does not happen.

And I realized that we are actually [violating Stacked Borrows in the interpreter](https://rust-lang.zulipchat.com/#narrow/stream/136281-t-opsem/topic/I.20think.20Miri.20violates.20Stacked.20Borrows.20.F0.9F.99.88).^^ That was introduced in https://github.com/rust-lang/rust/pull/87777.

r? ```@oli-obk```
2024-03-22 11:36:59 +01:00
Matthias Krüger
4f3050b85a
Rollup merge of #121543 - onur-ozkan:clippy-args, r=oli-obk
various clippy fixes

We need to preserve the order of the given clippy lint rules before passing them along.
Since clap doesn't offer any useful interface for this purpose out of the box,
we have to handle it manually.

Additionally, this PR makes `-D` rules work as expected. Previously, lint rules were limited to `-W`. By enabling `-D`, clippy began to complain about numerous lines in the tree, all of which have been resolved in this PR as well.

Fixes #121481
cc `@matthiaskrgr`
2024-03-20 05:51:22 +01:00
onur-ozkan
81d7d7aabd resolve clippy errors
Signed-off-by: onur-ozkan <work@onurozkan.dev>
2024-03-20 00:12:00 +03:00
Oli Scherer
3a09680671 Ensure nested statics have a HIR node to prevent various queries from ICEing 2024-03-19 09:38:15 +00:00
bors
196ff446d2 Auto merge of #122493 - lukas-code:sized-constraint, r=lcnr
clean up `Sized` checking

This PR cleans up `sized_constraint` and related functions to make them simpler and faster. This should not make more or less code compile, but it can change error output in some rare cases.

## enums and unions are `Sized`, even if they are not WF

The previous code has some special handling for enums, which made them sized if and only if the last field of each variant is sized. For example given this definition (which is not WF)
```rust
enum E<T1: ?Sized, T2: ?Sized, U1: ?Sized, U2: ?Sized> {
    A(T1, T2),
    B(U1, U2),
}
```
the enum was sized if and only if `T2` and `U2` are sized, while `T1` and `U1` were ignored for `Sized` checking. After this PR this enum will always be sized.

Unsized enums are not a thing in Rust and removing this special case allows us to return an `Option<Ty>` from `sized_constraint`, rather than a `List<Ty>`.

Similarly, the old code made a union defined like this
```rust
union Union<T: ?Sized, U: ?Sized> {
    head: T,
    tail: U,
}
```
sized if and only if `U` is sized, completely ignoring `T`. This just makes no sense at all and now this union is always sized.

## apply the "perf hack" to all (non-error) types, instead of just type parameters

This "perf hack" skips evaluating `sized_constraint(adt): Sized` if `sized_constraint(adt): Sized` exactly matches a predicate defined on `adt`, for example:

```rust
// `Foo<T>: Sized` iff `T: Sized`, but we know `T: Sized` from a predicate of `Foo`
struct Foo<T /*: Sized */>(T);
```

Previously this was only applied to type parameters and now it is applied to every type. This means that for example this type is now always sized:

```rust
// Note that this definition is WF, but the type `S<T>` is not WF in the global/empty ParamEnv
struct S<T>([T]) where [T]: Sized;
```

I don't anticipate this to affect compile time of any real-world program, but it makes the code a bit nicer and it also makes error messages a bit more consistent if someone does write such a cursed type.

## tuples are sized if the last type is sized

The old solver already has this behavior and this PR also implements it for the new solver and `is_trivially_sized`. This makes it so that tuples work more like a struct defined like this:

```rust
struct TupleN<T1, T2, /* ... */ Tn: ?Sized>(T1, T2, /* ... */ Tn);
```

This might improve the compile time of programs with large tuples a little, but is mostly also a consistency fix.

## `is_trivially_sized` for more types

This function is used in post-typeck code (borrowck, const eval, codegen) to skip evaluating `T: Sized` in some cases. It will now return `true` in more cases, most notably `UnsafeCell<T>` and `ManuallyDrop<T>` where `T` is itself trivially sized.

I'm anticipating that this change will improve compile time for some real world programs.
2024-03-19 04:21:14 +00:00
Oli Scherer
adda9da604 Avoid various uses of Option<Span> in favor of using DUMMY_SP in the few cases that used None 2024-03-18 09:34:08 +00:00