Commit graph

330 commits

Mark Rousskov
a46ef2d01e Remove dead instructions in terminate blocks 2025-06-22 11:38:47 -04:00
Jakub Beránek
fd6b24f162
Rollup merge of #142383 - scottmcm:operandref-builder, r=workingjubilee
CodeGen: rework Aggregate implementation for rvalue_creates_operand cases

A non-trivial refactor pulled out from rust-lang/rust#138759
r? workingjubilee

The previous implementation I'd written here, based on `index_by_increasing_offset`, is complicated to follow and difficult to extend to non-structs.

This changes the implementation, without actually changing any codegen (thus no test changes either), to be more like the existing `extract_field` (<2b0274c71d/compiler/rustc_codegen_ssa/src/mir/operand.rs (L345-L425)>) in that it allows setting a particular field directly.

Notably, I've found this one much easier to get right, in particular because having the `OperandRef<Result<V, Scalar>>` gives a really useful value to include in ICE messages if something does happen to go wrong.
2025-06-18 18:06:50 +02:00
Scott McMurray
e4f196a7b4 CodeGen: rework Aggregate implementation for rvalue_creates_operand cases
Another refactor pulled out from 138759

The previous implementation I'd written here, based on `index_by_increasing_offset`, is complicated to follow and difficult to extend to non-structs.

This changes the implementation, without actually changing any codegen (thus no test changes either), to be more like the existing `extract_field` (<2b0274c71d/compiler/rustc_codegen_ssa/src/mir/operand.rs (L345-L425)>) in that it allows setting a particular field directly.

Notably, I've found this one much easier to get right, in particular because having the `OperandRef<Result<V, Scalar>>` gives a really useful value to include in ICE messages if something does happen to go wrong.
2025-06-17 18:59:22 -07:00
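
To make the builder approach in the two commits above concrete, here is a minimal, self-contained sketch; `Scalar`, the `u64` value handle, and `insert_field`/`build` are simplified stand-ins for the rustc_codegen_ssa types, not the actual implementation:

```
// Each field slot starts as Err(expected layout) and is overwritten with a
// backend value; any slot still Err at build time is exactly the useful
// ICE payload the commit message mentions.
#[derive(Debug, Clone, Copy)]
struct Scalar; // stand-in for the expected scalar layout of a field

#[derive(Debug)]
struct OperandRef<V> {
    fields: Vec<V>,
}

// The builder is just OperandRef over Result<value, layout>.
type Builder = OperandRef<Result<u64, Scalar>>; // u64 stands in for Bx::Value

impl Builder {
    fn uninit(n: usize) -> Self {
        OperandRef { fields: vec![Err(Scalar); n] }
    }
    // Set one particular field directly, mirroring how extract_field reads one.
    fn insert_field(&mut self, i: usize, v: u64) {
        self.fields[i] = Ok(v);
    }
    fn build(self) -> OperandRef<u64> {
        OperandRef {
            fields: self.fields.into_iter()
                .map(|f| f.expect("aggregate field was never set"))
                .collect(),
        }
    }
}

fn main() {
    let mut b = Builder::uninit(2);
    b.insert_field(0, 10);
    b.insert_field(1, 20);
    println!("{:?}", b.build());
}
```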
León Orell Valerian Liehr
0b249d3f85
Rollup merge of #141769 - bjorn3:codegen_metadata_module_rework, r=workingjubilee,saethlin
Move metadata object generation for dylibs to the linker code

This deduplicates some code between codegen backends and may in the future allow adding extra metadata that is only known at link time.

Prerequisite of https://github.com/rust-lang/rust/issues/96708.
2025-06-15 23:51:54 +02:00
bors
cc87afd8c0 Auto merge of #142259 - sayantn:simplify-intrinsics, r=workingjubilee
Simplify implementation of Rust intrinsics by using type parameters in the cache

The current implementation of intrinsics has a lot of duplication to handle the different overloads of overloaded LLVM intrinsics. This PR uses the **base name and the type parameters** in the cache instead of the full, overloaded name. This has the benefit that `call_intrinsic` doesn't need to provide the full name, just the type parameters (which are usually more readily available). This uses `LLVMIntrinsicCopyOverloadedName2` to get the overloaded name from the base name and the type parameters, and only uses it to declare the function.

(originally was part of rust-lang/rust#140763, split off later)

`@rustbot` label A-codegen A-LLVM
r? codegen
2025-06-14 16:43:34 +00:00
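
A hedged sketch of the caching scheme described above, with stand-in types (`Ty` as a string, `u32` as the declared-function handle); in the real code the miss path is where `LLVMIntrinsicCopyOverloadedName2` would compute the full overloaded name:

```
use std::collections::HashMap;

type Ty = &'static str; // stand-in for the backend type handle
type FnHandle = u32;    // stand-in for the declared LLVM function

struct IntrinsicCache {
    // Keyed by base name + type parameters, not the full overloaded name.
    cache: HashMap<(&'static str, Vec<Ty>), FnHandle>,
    next: FnHandle,
}

impl IntrinsicCache {
    fn get_or_declare(&mut self, base: &'static str, tys: &[Ty]) -> FnHandle {
        if let Some(&f) = self.cache.get(&(base, tys.to_vec())) {
            return f; // hit: no name mangling, no redeclaration
        }
        // Miss: the only place the full overloaded name is needed,
        // e.g. "llvm.ucmp" + ["i8", "i32"] -> "llvm.ucmp.i8.i32".
        let f = self.next;
        self.next += 1;
        self.cache.insert((base, tys.to_vec()), f);
        f
    }
}

fn main() {
    let mut c = IntrinsicCache { cache: HashMap::new(), next: 0 };
    let a = c.get_or_declare("llvm.ucmp", &["i8", "i32"]);
    let b = c.get_or_declare("llvm.ucmp", &["i8", "i32"]);
    assert_eq!(a, b); // same overload resolves to the same declaration
}
```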
sayantn
d56fcd968d
Simplify implementation of Rust intrinsics by using type parameters in the cache 2025-06-12 00:32:42 +05:30
Jubilee Young
b88c0061c4 compiler: Change c_int_width to be an integer type 2025-06-11 00:42:14 -07:00
bjorn3
0bd7aa1116 Move metadata object generation for dylibs to the linker code
This deduplicates some code between codegen backends and may in the
future allow adding extra metadata that is only known at link time.
2025-06-03 10:04:34 +00:00
bjorn3
badabab01f Only borrow EncodedMetadata in codegen_crate
And move passing it to the linker to the driver code.
2025-06-03 10:04:34 +00:00
bjorn3
2e8401ae5f Remove type_test from IntrinsicCallBuilderMethods
It is only used within cg_llvm.
2025-06-03 10:00:56 +00:00
bjorn3
00a88b903d Remove get_dbg_loc from DebugInfoBuilderMethods
It is only used within cg_llvm.
2025-06-03 10:00:11 +00:00
Matthias Krüger
ad2d91ce11
Rollup merge of #141507 - RalfJung:atomic-intrinsics, r=bjorn3
atomic_load intrinsic: use const generic parameter for ordering

We have a gazillion intrinsics for the atomics because we encode the ordering into the intrinsic name rather than making it a parameter. This is particularly bad for those operations that take two orderings. Let's fix that!

This PR only converts `load`, to see if there's any feedback that would fundamentally change the strategy we pursue for the const generic intrinsics.

The first two commits are preparation and could be a separate PR if you prefer.

`@BoxyUwU` -- I hope this is a use of const generics that is unlikely to explode? All we need is a const generic of enum type. We could funnel it through an integer if we had to but an enum is obviously nicer...

`@bjorn3` it seems like the cranelift backend entirely ignores the ordering?
2025-05-30 07:01:30 +02:00
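
A hedged sketch of the shape the `atomic_load` change above moves toward; to keep the example compilable on stable it funnels the ordering through an integer const parameter (the fallback the PR text mentions), whereas the actual intrinsic uses an enum-typed const generic:

```
use std::sync::atomic::{AtomicU32, Ordering};

const RELAXED: u8 = 0;
const ACQUIRE: u8 = 1;
const SEQ_CST: u8 = 2;

// One generic function replaces a family of per-ordering intrinsics
// (atomic_load_relaxed, atomic_load_acquire, ...).
fn atomic_load<const ORD: u8>(src: &AtomicU32) -> u32 {
    let ord = match ORD {
        RELAXED => Ordering::Relaxed,
        ACQUIRE => Ordering::Acquire,
        SEQ_CST => Ordering::SeqCst,
        _ => unreachable!("invalid ordering"),
    };
    src.load(ord)
}

fn main() {
    let x = AtomicU32::new(42);
    assert_eq!(atomic_load::<ACQUIRE>(&x), 42);
}
```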
Ralf Jung
a387c86a92 get rid of rustc_codegen_ssa::common::AtomicOrdering 2025-05-28 22:57:55 +02:00
bjorn3
865c7b9c78 Remove unused arg_memory_ty method 2025-05-28 20:55:00 +00:00
bjorn3
f0707fad31 Mark all optimize methods and the codegen method as safe
There is no safety contract, and I don't think any of them can actually
cause UB in more ways than passing malicious source code to rustc can.
While LtoModuleCodegen::optimize says that the returned ModuleCodegen
points into the LTO module, the LTO module has already been dropped by
the time this function returns; if the returned ModuleCodegen really
did point into the LTO module, we would have seen crashes on every LTO
compilation, which we don't. As such, the comment is outdated.
2025-05-28 20:55:00 +00:00
bjorn3
d7c0bde0c1 Remove methods from StaticCodegenMethods that are not called in cg_ssa itself 2025-05-28 20:55:00 +00:00
bjorn3
669e2ea848 Make predefine methods take &mut self 2025-05-28 20:55:00 +00:00
bjorn3
0809b41cd9 Move supports_parallel from CodegenBackend to ExtraBackendMethods
It is only relevant when using cg_ssa for driving compilation.
2025-05-28 20:55:00 +00:00
bjorn3
0fd257d66c Remove a couple of uses of interior mutability around statics 2025-05-28 20:55:00 +00:00
bjorn3
a4cb1c72c5 Reduce amount of types that need to be PartialEq 2025-05-28 20:55:00 +00:00
bjorn3
5b0ab2cbdd The personality function is a Function, not a Value 2025-05-28 20:55:00 +00:00
bjorn3
c593c01703 Remove codegen_unit from MiscCodegenMethods 2025-05-28 20:55:00 +00:00
bjorn3
0a14e1b2e7 Remove usage of FnAbi in codegen_intrinsic_call 2025-05-26 10:13:03 +00:00
bjorn3
6016f84e71 Pass PlaceRef rather than Bx::Value to codegen_intrinsic_call 2025-05-26 10:13:03 +00:00
Matthias Krüger
555df301f8
Rollup merge of #134232 - bjorn3:naked_asm_improvements, r=wesleywiser
Share the naked asm impl between cg_ssa and cg_clif

This was introduced in https://github.com/rust-lang/rust/pull/128004.
2025-04-30 17:27:57 +02:00
Trevor Gross
e3458dcf19 Update documentation for fn target_config
This was missed as part of [1].

[1]: https://github.com/rust-lang/rust/pull/140323
2025-04-29 06:13:02 +00:00
Trevor Gross
6ceeb0849e Implement the internal feature cfg_target_has_reliable_f16_f128
Support for `f16` and `f128` varies across targets, backends, and
backend versions. Eventually we would like to reach a point where all
backends support these approximately equally, but until then we have to
work around the fact that these nuances of support are observable.

Introduce the `cfg_target_has_reliable_f16_f128` internal feature, which
provides the following new configuration gates:

* `cfg(target_has_reliable_f16)`
* `cfg(target_has_reliable_f16_math)`
* `cfg(target_has_reliable_f128)`
* `cfg(target_has_reliable_f128_math)`

`reliable_f16` and `reliable_f128` indicate that basic arithmetic for
the type works correctly. The `_math` versions indicate that anything
relying on `libm` works correctly, since sometimes this hits a separate
class of codegen bugs.

These options match configuration set by the build script at [1]. The
logic for LLVM support is duplicated as-is from the same script. There
are a few possible updates that will come as a follow-up.

The config introduced here is not planned to ever become stable; it is
only intended to replace the build scripts for `std` tests and
`compiler-builtins` that don't have any way to configure based on the
codegen backend.

MCP: https://github.com/rust-lang/compiler-team/issues/866
Closes: https://github.com/rust-lang/compiler-team/issues/866

[1]: 555e1d0386/library/std/build.rs (L84-L186)
2025-04-27 19:58:44 +00:00
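
A hedged usage sketch (nightly-only, assuming the feature and cfg names exactly as listed above) of how a `std` or `compiler-builtins` test might gate on these:

```
// Hypothetical test file showing the gates; requires a nightly compiler.
#![feature(cfg_target_has_reliable_f16_f128)]
#![feature(f16, f128)]

#[cfg(target_has_reliable_f16)]
fn f16_basic() {
    // Gated on basic arithmetic for the type working correctly.
    let x: f16 = 1.0;
    assert!(x + x == 2.0);
}

#[cfg(target_has_reliable_f128_math)]
fn f128_math() {
    // The `_math` gate additionally covers operations that rely on libm.
    let _x: f128 = 1.0;
}

fn main() {
    #[cfg(target_has_reliable_f16)]
    f16_basic();
    #[cfg(target_has_reliable_f128_math)]
    f128_math();
}
```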
bjorn3
421f22e8bf Pass &mut self to codegen_global_asm 2025-04-14 09:38:04 +00:00
bors
1df5affaca Auto merge of #133984 - DaniPopes:scmp-ucmp, r=scottmcm
Lower BinOp::Cmp to llvm.{s,u}cmp.* intrinsics

Lowers `mir::BinOp::Cmp` (`three_way_compare` intrinsic) to the corresponding LLVM `llvm.{s,u}cmp.i8.*` intrinsics.

These are the intrinsics mentioned in https://github.com/rust-lang/rust/pull/118310, which are now available in LLVM 19.

I couldn't find any follow-up PRs/discussions about this; please let me know if I missed something.

r? `@scottmcm`
2025-03-24 22:53:12 +00:00
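
For illustration, Rust-level code that hits this lowering; the IR in the comment reflects what the PR describes for LLVM 19+, not output captured here:

```
use std::cmp::Ordering;

fn three_way(a: u32, b: u32) -> Ordering {
    // On LLVM 19+ this now lowers to `call i8 @llvm.ucmp.i8.i32(i32 %a, i32 %b)`
    // (signed integer types use @llvm.scmp.* instead).
    a.cmp(&b)
}

fn main() {
    assert_eq!(three_way(1, 2), Ordering::Less);
}
```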
bors
ebf0cf75d3 Auto merge of #137586 - nnethercote:SetImpliedBits, r=bjorn3
Speed up target feature computation

The LLVM backend calls `LLVMRustHasFeature` twice for every feature. In short-running rustc invocations, this accounts for a surprising amount of work.

r? `@bjorn3`
2025-03-11 12:05:16 +00:00
Matthias Krüger
63c548d82c
Rollup merge of #137549 - oli-obk:llvm-ffi, r=davidtwco
Clean up various LLVM FFI things in codegen_llvm

cc `@ZuseZ4` I touched some autodiff parts

The major change of this PR is [bfd88ce](https://github.com/rust-lang/rust/pull/137549/commits/bfd88cead0dd79717f123ad7e9a26ecad88653cb), which makes `CodegenCx` generic just like `GenericBuilder`.

The other commits mostly took advantage of the new feature of making extern functions safe, but also just used some wrappers that were already there and shrunk unsafe blocks.

best reviewed commit-by-commit
2025-03-07 19:15:34 +01:00
DaniPopes
58c10c66c1
Lower BinOp::Cmp to llvm.{s,u}cmp.* intrinsics
Lowers `mir::BinOp::Cmp` (`three_way_compare` intrinsic) to the corresponding
LLVM `llvm.{s,u}cmp.i8.*` intrinsics, added in LLVM 19.
2025-03-06 22:29:05 +08:00
Nicholas Nethercote
936a8232df Change signature of target_features_cfg.
Currently it is called twice, once with `allow_unstable` set to true and
once with it set to false. This results in some duplicated work. Most
notably, for the LLVM backend, `LLVMRustHasFeature` is called twice for
every feature, and it's moderately slow. For very short-running
compilations on platforms with many features (e.g. a `check` build of
hello-world on x86), this is a significant fraction of runtime.

This commit changes `target_features_cfg` so it is only called once, and
it now returns a pair of feature sets. This halves the number of
`LLVMRustHasFeature` calls.
2025-03-05 09:49:17 +11:00
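
The shape of the change, as a hedged sketch with simplified stand-in types and placeholder feature names (the real method sits on the codegen backend and works with a `Session` and interned `Symbol`s):

```
type Symbol = String; // stand-in for rustc's interned Symbol

// Before (shape): called twice, once per `allow_unstable` value, so the
// backend queried every feature twice:
//   fn target_features_cfg(allow_unstable: bool) -> Vec<Symbol>;

// After (shape): called once, returning both sets and halving the
// LLVMRustHasFeature queries.
fn target_features_cfg() -> (Vec<Symbol>, Vec<Symbol>) {
    let stable = vec!["sse2".to_string()];
    let with_unstable = vec!["sse2".to_string(), "avx512f".to_string()];
    (stable, with_unstable)
}

fn main() {
    let (stable, unstable_allowed) = target_features_cfg();
    assert!(stable.len() <= unstable_allowed.len());
}
```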
Michael Goulet
a59a8f9e75 Revert "Auto merge of #135335 - oli-obk:push-zxwssomxxtnq, r=saethlin"
This reverts commit a7a6c64a65, reversing
changes made to ebbe63891f.
2025-03-02 18:52:48 +00:00
bors
0c72c0d11a Auto merge of #133250 - DianQK:embed-bitcode-pgo, r=nikic
The embedded bitcode should always be prepared for LTO/ThinLTO

Fixes #115344. Fixes #117220.

There are currently two methods for generating bitcode that is used for LTO. One method involves using `-C linker-plugin-lto` to emit object files as bitcode, which is the typical setting used by cargo. The other method is through `-C embed-bitcode=yes`.

When using `-C embed-bitcode=yes -C lto=no`, we run a complete non-LTO LLVM pipeline to obtain the bitcode, and that bitcode is then used for LTO. As a result, we run the Call Graph Profile Pass twice on the same module.

This PR is doing something similar to LLVM's `buildFatLTODefaultPipeline`, obtaining the bitcode for embedding after running `buildThinLTOPreLinkDefaultPipeline`.

r? nikic
2025-03-01 08:22:18 +00:00
Oli Scherer
29440b84a9 Remove an unused lifetime param 2025-02-24 15:11:29 +00:00
Oli Scherer
840e31b29f Generalize BaseTypeCodegenMethods 2025-02-24 15:11:29 +00:00
Oli Scherer
d4379d2afd Remove an unnecessary lifetime 2025-02-24 15:05:56 +00:00
David Wood
5afa6a111b
ssa/mono: deduplicate type_has_metadata
The implementation of the `type_has_metadata` function is duplicated in
`rustc_codegen_ssa` and `rustc_monomorphize`, so move this to
`rustc_middle`.
2025-02-24 08:08:23 +00:00
bors
e0be1a0262 Auto merge of #137271 - nikic:gep-nuw-2, r=scottmcm
Emit getelementptr inbounds nuw for pointer::add()

Lower pointer::add (via intrinsic::offset with unsigned offset) to getelementptr inbounds nuw on LLVM versions that support it. This lets LLVM make use of the pre-condition that the offset addition does not wrap in an unsigned sense. Together with inbounds, this also implies that the offset is non-negative.

Fixes https://github.com/rust-lang/rust/issues/137217.
2025-02-24 03:06:16 +00:00
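
For illustration, a function whose `pointer::add` now carries the extra flag; the IR in the comment reflects the PR description (on LLVM versions with `nuw` GEP support), not captured output:

```
fn fourth(xs: &[u32]) -> u32 {
    // `add` requires the offset addition not to wrap, so this emits
    // `getelementptr inbounds nuw i32, ptr %xs, i64 3` on new-enough LLVM.
    unsafe { *xs.as_ptr().add(3) }
}

fn main() {
    let v = [1u32, 2, 3, 4, 5];
    assert_eq!(fourth(&v), 4);
}
```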
DianQK
da50297a6e
Save pre-link bitcode to ModuleCodegen 2025-02-23 21:23:38 +08:00
Scott McMurray
6f9cfd694d Rework OperandRef::extract_field to stop calling to_immediate_scalar on things which are already immediates
That means it stops trying to truncate things that are already `i1`s.
2025-02-19 12:03:40 -08:00
Scott McMurray
511bf307f0 Emit trunc nuw for unchecked shifts and to_immediate_scalar
- For shifts this shrinks the IR by no longer needing an `assume` while still providing the UB information
- Having this on the `i8`→`i1` truncations will hopefully help with some places that have to load `i8`s or pass those in LLVM structs without range information
2025-02-19 11:36:52 -08:00
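
As an illustration of the second bullet, a `bool` loaded from memory is an `i8` that codegen must narrow to `i1`; `trunc nuw` states that the high bits are already zero. A hedged sketch of source code that exercises this path:

```
fn is_set(flags: &[bool], i: usize) -> u32 {
    // The load of flags[i] produces an i8; turning it into an i1 is now
    // `trunc nuw i8 %x to i1`, encoding x is 0 or 1 without an `assume`.
    if flags[i] { 1 } else { 0 }
}

fn main() {
    assert_eq!(is_set(&[true, false], 0), 1);
}
```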
Nikita Popov
31cc4c074d Emit getelementptr inbounds nuw for pointer::add() 2025-02-19 11:32:32 +01:00
bors
3b022d8cee Auto merge of #133852 - x17jiri:cold_path, r=saethlin
improve cold_path()

#120370 added a new intrinsic `cold_path()` and used it to fix `likely` and `unlikely`

However, in order to limit scope, the information about cold code paths is only used in 2-target switch instructions. This is sufficient for `likely` and `unlikely`, but it limits the usefulness of `cold_path` for idiomatic Rust. For example, code like this:

```
if let Some(x) = y { ... }
```

may generate a 3-target switch:

```
switch y.discriminator:
0 => true branch
1 => false branch
_ => unreachable
```

and therefore marking a branch as cold will have no effect.

This PR improves `cold_path()` to work with arbitrary switch instructions.

Note that for 2-target switches, we can use `llvm.expect`, but for multiple targets we need to manually emit branch weights. I checked Clang and it also emits weights in this situation. Clang's weight calculation is more complex than this PR's, which I believe is mainly because `switch` in C/C++ can have multiple cases going to the same target.
2025-02-18 07:49:09 +00:00
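
A hedged usage sketch, assuming the unstable `std::hint::cold_path` wrapper around the intrinsic (nightly-only); with this PR the hint takes effect even though the `if let` below lowers to the 3-target switch shown above:

```
#![feature(cold_path)]
use std::hint::cold_path;

fn handle(y: Option<u32>) -> u32 {
    if let Some(x) = y {
        cold_path(); // marks this arm cold; now emitted as branch weights
        x * 2
    } else {
        0
    }
}

fn main() {
    assert_eq!(handle(None), 0);
    assert_eq!(handle(Some(3)), 6);
}
```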
Jiri Bobek
7bb5f4dd78 improve cold_path() 2025-02-17 06:39:58 +01:00
bors
bdc97d1046 Auto merge of #136575 - scottmcm:nsuw-math, r=nikic
Set both `nuw` and `nsw` in slice size calculation

There's an old note in the code to do this, and now that [LLVM-C has an API for it](f0b8ff1251/llvm/include/llvm-c/Core.h (L4403-L4408)), we might as well. And it's been there since what looks like LLVM 17 (de9b6aa341), so it doesn't even need to be conditional.

(There are other places, like `RawVecInner` or `Layout`, that might want to do things like this too, but I'll leave those for a future PR.)
2025-02-14 14:21:29 +00:00
Scott McMurray
9ad6839f7a Set both nuw and nsw in slice size calculation
There's an old note in the code to do this, and now that LLVM-C has an API for it, we might as well.
2025-02-13 21:26:48 -08:00
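
For context, a hedged illustration of the computation in question: the byte size of a slice is `len * size_of::<T>()`, and because Rust allocations are capped at `isize::MAX` bytes, the multiply can overflow in neither the unsigned nor the signed sense, so both flags are sound:

```
use std::mem;

fn byte_len<T>(xs: &[T]) -> usize {
    // Conceptually len * size_of::<T>(); the emitted `mul` may now carry
    // both `nuw` and `nsw`, since the result always fits in isize::MAX.
    mem::size_of_val(xs)
}

fn main() {
    let v = [0u32; 8];
    assert_eq!(byte_len(&v), 32);
}
```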
Scott McMurray
0cc14b688d transmute should also assume non-null pointers
Previously it only did integer-ABI things, but this way it does data pointers too. That gives the backend more information in general, and allows slightly simplifying one of the helpers in slice iterators.
2025-02-12 23:01:27 -08:00
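
For illustration, a hedged example of the data-pointer case: the target type's validity excludes null, so codegen can now attach a non-null assumption to the transmute result:

```
// Caller must guarantee `p` is non-null, aligned, and valid for 'a.
unsafe fn to_ref<'a>(p: *const u8) -> &'a u8 {
    // &u8 is a non-null data pointer, so the backend may now assume the
    // result is non-null rather than relying only on integer-range info.
    unsafe { std::mem::transmute(p) }
}

fn main() {
    let x = 7u8;
    let r = unsafe { to_ref(&x as *const u8) };
    assert_eq!(*r, 7);
}
```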
Jubilee Young
eddfe8f503 compiler: remove reexports from rustc_target::callconv 2025-02-07 11:25:18 -08:00