Merge pull request #2762 from rust-lang/tshepang/sembr

sembr a few files
This commit is contained in:
Tshepang Mbambo 2026-02-03 01:14:02 +02:00 committed by GitHub
commit 060ca1f06f
17 changed files with 392 additions and 375 deletions


@ -6,16 +6,15 @@
[codegen]: ./codegen.md
This section is about debugging compiler bugs in code generation (e.g. why the
compiler generated some piece of code or crashed in LLVM).
LLVM is a big project that probably needs to have its own debugging document,
but the following are some tips that are important in a rustc context.
### Minimize the example
As a general rule, compilers generate lots of information from analyzing code.
Thus, a useful first step is usually to find a minimal example.
One way to do this is to
1. create a new crate that reproduces the issue (e.g. adding whatever crate is
at fault as a dependency, and using it from there)
@ -35,14 +34,13 @@ For more discussion on methodology for steps 2 and 3 above, there is an
The official compilers (including nightlies) have LLVM assertions disabled,
which means that LLVM assertion failures can show up as compiler crashes (not
ICEs but "real" crashes) and other sorts of weird behavior.
If you are encountering these, it is a good idea to try using a compiler with LLVM
assertions enabled - either an "alt" nightly or a compiler you build yourself
by setting `llvm.assertions = true` in your bootstrap.toml - and see whether anything turns up.
The rustc build process builds the LLVM tools into `build/host/llvm/bin`.
They can be called directly.
These tools include:
* [`llc`], which compiles bitcode (`.bc` files) to executable code; this can be used to
replicate LLVM backend bugs.
@ -55,9 +53,10 @@ These tools include:
[`bugpoint`]: https://llvm.org/docs/Bugpoint.html
By default, the Rust build system does not check for changes to the LLVM source code or
its build configuration settings.
So, if you need to rebuild the LLVM that is linked
into `rustc`, first delete the file `.llvm-stamp`, which should be located
in `build/host/llvm/`.
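A forced LLVM rebuild might then look like this (a sketch; the stamp location assumes the default build directory layout described above):

```bash
# Delete the stamp so bootstrap notices that LLVM must be rebuilt,
# then build as usual.
rm build/host/llvm/.llvm-stamp
./x build
```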
The default rustc compilation pipeline has multiple codegen units, which is
hard to replicate manually and means that LLVM is called multiple times in
@ -66,26 +65,27 @@ disappear), passing `-C codegen-units=1` to rustc will make debugging easier.
### Get your hands on raw LLVM input
For rustc to generate LLVM IR, you need to pass the `--emit=llvm-ir` flag.
If you are building via cargo,
use the `RUSTFLAGS` environment variable (e.g. `RUSTFLAGS='--emit=llvm-ir'`).
This causes rustc to spit out LLVM IR into the target directory.
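For a cargo project, that might look like the following (a sketch; the exact artifact directory depends on the build profile):

```bash
# Emit textual LLVM IR (.ll files) alongside the normal build artifacts
RUSTFLAGS='--emit=llvm-ir' cargo build
# The IR lands next to the other intermediate artifacts
find target/debug/deps -name '*.ll'
```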
`cargo llvm-ir [options] path` spits out the LLVM IR for a particular function at `path`.
(`cargo install cargo-asm` installs `cargo asm` and `cargo llvm-ir`).
`--build-type=debug` emits code for debug builds.
There are also other useful options.
Also, debug info in LLVM IR can clutter the output a lot:
`RUSTFLAGS="-C debuginfo=0"` is really useful.
`RUSTFLAGS="-C save-temps"` outputs LLVM bitcode at
different stages during compilation, which is sometimes useful.
The output LLVM bitcode will be in `.bc` files in the compiler's output directory, set via the
`--out-dir DIR` argument to `rustc`.
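These `.bc` files lend themselves to scripted triage. A sketch (the artifact directory is an assumption that depends on where the output was written, and `build/host` follows the layout described earlier):

```bash
# Feed every saved bitcode file to llc and report which ones fail
for bc in target/debug/deps/*.bc; do
  echo "== $bc"
  ./build/host/llvm/bin/llc "$bc" -o /dev/null || echo "FAILED: $bc"
done
```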
* If you are hitting an assertion failure or segmentation fault from the LLVM
backend when invoking `rustc` itself, it is a good idea to try passing each
of these `.bc` files to the `llc` command, and see if you get the same failure.
(LLVM developers often prefer a bug reduced to a `.bc` file over one
that uses a Rust crate for its minimized reproduction.)
* To get human readable versions of the LLVM bitcode, one just needs to convert
@ -100,7 +100,7 @@ you should:
```bash
$ rustc +local my-file.rs --emit=llvm-ir -O -C no-prepopulate-passes \
-C codegen-units=1
$ OPT=build/$TRIPLE/llvm/bin/opt
$ $OPT -S -O2 < my-file.ll > my
```
@ -112,8 +112,8 @@ llvm-args='-filter-print-funcs=EXACT_FUNCTION_NAME` (e.g. `-C
llvm-args='-filter-print-funcs=_ZN11collections3str21_$LT$impl$u20$str$GT$\
7replace17hbe10ea2e7c809b0bE'`).
That produces a lot of output into standard error, so you'll want to pipe that to some file.
Also, if you are using neither `-filter-print-funcs` nor `-C
codegen-units=1`, then, because the multiple codegen units run in parallel, the
printouts will mix together and you won't be able to read anything.
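A sketch of such an invocation (file names are illustrative):

```bash
# One codegen unit keeps the per-pass printouts ordered;
# the dump goes to stderr, so redirect it to a file
RUSTFLAGS="-C codegen-units=1 -C llvm-args=-print-after-all" \
    cargo build 2> pass-dump.txt
```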
@ -125,8 +125,8 @@ printouts will mix together and you won't be able to read anything.
* Within LLVM itself, calling `F.getParent()->dump()` at the beginning of
`SafeStackLegacyPass::runOnFunction` will dump the whole module, which
may provide a better basis for reproduction.
(However, you should be able to get that same dump from the `.bc` files dumped by
`-C save-temps`.)
If you want just the IR for a specific function (say, you want to see why it
@ -145,29 +145,29 @@ $ ./build/$TRIPLE/llvm/bin/llvm-extract \
If you are seeing incorrect behavior due to an optimization pass, a very handy
LLVM option is `-opt-bisect-limit`, which takes an integer denoting the index
value of the highest pass to run.
Index values for taken passes are stable
from run to run; by coupling this with software that automates bisecting the
search space based on the resulting program, an errant pass can be quickly determined.
When an `-opt-bisect-limit` is specified, all runs are displayed
to standard error, along with their index and output indicating if the
pass was run or skipped. Setting the limit to an index of -1 (e.g.,
`RUSTFLAGS="-C llvm-args=-opt-bisect-limit=-1"`) will show all passes and
their corresponding index values.
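The bisection loop itself is easy to automate. Below is a minimal sketch in which `check_good` is a hypothetical stand-in for "rebuild with `-C llvm-args=-opt-bisect-limit=$N` and run the test case"; here it is mocked so the search logic can be seen in isolation:

```bash
# Binary-search for the first pass index whose inclusion breaks the program.
# check_good N returns success when the program built with
# -C llvm-args=-opt-bisect-limit=N behaves correctly; here it is mocked
# (assume passes below index 42 are fine and pass 42 introduces the bug).
check_good() { [ "$1" -lt 42 ]; }

lo=0     # a limit known to be good
hi=1000  # a limit known to exhibit the bug
while [ $((hi - lo)) -gt 1 ]; do
  mid=$(( (lo + hi) / 2 ))
  if check_good "$mid"; then lo=$mid; else hi=$mid; fi
done
echo "first bad pass index: $hi"   # prints: first bad pass index: 42
```

Index values are stable from run to run (as noted above), which is what makes this monotone search valid.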
If you want to play with the optimization pipeline, you can use the [`opt`] tool
from `./build/host/llvm/bin/` with the LLVM IR emitted by rustc.
When investigating the implementation of LLVM itself, you should be
aware of its [internal debug infrastructure][llvm-debug].
This is provided in LLVM Debug builds, which you enable for rustc
LLVM builds by changing this setting in the bootstrap.toml:
```
[llvm]
# Indicates whether the LLVM assertions are enabled or not
assertions = true
# Indicates whether the LLVM build is a Release or Debug build
optimize = false
```
The quick summary is:
* Setting `assertions=true` enables coarse-grain debug messaging.
@ -190,8 +190,8 @@ specifically the `#t-compiler/wg-llvm` channel.
### Compiler options to know and love
The `-C help` and `-Z help` compiler switches will list out a variety
of interesting options you may find useful.
Here are a few of the most common that pertain to LLVM development (some of them are employed in the
tutorial above):
- The `--emit llvm-ir` option emits a `<filename>.ll` file with LLVM IR in textual format
@ -201,7 +201,8 @@ tutorial above):
e.g. `-C llvm-args=-print-before-all` to print IR before every LLVM
pass.
- The `-C no-prepopulate-passes` option avoids pre-populating the LLVM pass
manager with a list of passes.
This will allow you to view the LLVM
IR that rustc generates, not the LLVM IR after optimizations.
- The `-C passes=val` option allows you to supply a space separated list of extra LLVM passes to run
- The `-C save-temps` option saves all temporary output files during compilation
@ -211,18 +212,17 @@ tutorial above):
- The `-Z no-parallel-backend` will disable parallel compilation of distinct compilation units
- The `-Z llvm-time-trace` option will output a Chrome profiler compatible JSON file
which contains details and timings for LLVM passes.
- The `-C llvm-args=-opt-bisect-limit=<index>` option allows for bisecting LLVM optimizations.
### Filing LLVM bug reports
When filing an LLVM bug report, you will probably want some sort of minimal
working example that demonstrates the problem.
The Godbolt compiler explorer is really helpful for this.
1. Once you have some LLVM IR for the problematic code (see above), you can
create a minimal working example with Godbolt.
Go to [llvm.godbolt.org](https://llvm.godbolt.org).
2. Choose `LLVM-IR` as programming language.
@ -230,8 +230,7 @@ create a minimal working example with Godbolt. Go to
- There are some useful flags: `-mattr` enables target features, `-march=`
selects the target, `-mcpu=` selects the CPU, etc.
- Commands like `llc -march=help` output all architectures available, which
is useful because sometimes the Rust arch names and the LLVM names do not match.
- If you have compiled rustc yourself somewhere, in the target directory
you have binaries for `llc`, `opt`, etc.
@ -239,7 +238,8 @@ create a minimal working example with Godbolt. Go to
optimizations transform it.
5. Once you have a godbolt link demonstrating the issue, it is pretty easy to
fill in an LLVM bug.
Just visit their [github issues page][llvm-issues].
[llvm-issues]: https://github.com/llvm/llvm-project/issues
@ -251,8 +251,8 @@ gotten the fix yet (or perhaps you are familiar enough with LLVM to fix it yours
In that case, we can sometimes opt to port the fix for the bug
directly to our own LLVM fork, so that rustc can use it more easily.
Our fork of LLVM is maintained in [rust-lang/llvm-project].
Once you've landed the fix there, you'll also need to land a PR modifying
our submodule commits -- ask around on Zulip for help.
[rust-lang/llvm-project]: https://github.com/rust-lang/llvm-project/


@ -3,16 +3,16 @@
<!-- date-check: Aug 2024 -->
Rust supports building against multiple LLVM versions:
* Tip-of-tree for the current LLVM development branch is usually supported within a few days.
PRs for such fixes are tagged with `llvm-main`.
* The latest released major version is always supported.
* The one or two preceding major versions are usually supported.
By default, Rust uses its own fork in the [rust-lang/llvm-project repository].
This fork is based on a `release/$N.x` branch of the upstream project, where
`$N` is either the latest released major version, or the current major version
in release candidate phase.
The fork is never based on the `main` development branch.
Our LLVM fork only accepts:
@ -32,16 +32,15 @@ There are three types of LLVM updates, with different procedures:
## Backports (upstream supported)
While the current major LLVM version is supported upstream, fixes should be
backported upstream first, and the release branch then merged back into the Rust fork.
1. Make sure the bugfix is in upstream LLVM.
2. If this hasn't happened already, request a backport to the upstream release branch.
If you have LLVM commit access, follow the [backport process].
Otherwise, open an issue requesting the backport.
Continue once the backport has been approved and merged.
3. Identify the branch that rustc is currently using.
The `src/llvm-project` submodule is always pinned to a branch of the
[rust-lang/llvm-project repository].
4. Fork the rust-lang/llvm-project repository.
5. Check out the appropriate branch (typically named `rustc/a.b-yyyy-mm-dd`).
@ -51,26 +50,23 @@ Rust fork.
7. Merge the `upstream/release/$N.x` branch.
8. Push this branch to your fork.
9. Send a Pull Request to rust-lang/llvm-project to the same branch as before.
Be sure to reference the Rust and/or LLVM issue that you're fixing in the PR description.
10. Wait for the PR to be merged.
11. Send a PR to rust-lang/rust updating the `src/llvm-project` submodule with your bugfix.
This can be done locally with `git submodule update --remote src/llvm-project` typically.
12. Wait for PR to be merged.
An example PR: [#59089](https://github.com/rust-lang/rust/pull/59089)
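On the command line, steps 4-10 might look like this (a sketch; `YOUR_USER` is your fork, the branch name follows the `rustc/a.b-yyyy-mm-dd` pattern, and `$N` is the LLVM major version):

```bash
git clone https://github.com/YOUR_USER/llvm-project.git
cd llvm-project
git checkout rustc/a.b-yyyy-mm-dd
git remote add upstream https://github.com/llvm/llvm-project.git
git fetch upstream
git merge upstream/release/$N.x
git push origin rustc/a.b-yyyy-mm-dd
# then open a PR against the same branch of rust-lang/llvm-project
```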
## Backports (upstream not supported)
Upstream LLVM releases are only supported for two to three months after the GA release.
Once upstream backports are no longer accepted, changes should be
cherry-picked directly to our fork.
1. Make sure the bugfix is in upstream LLVM.
2. Identify the branch that rustc is currently using.
The `src/llvm-project` submodule is always pinned to a branch of the
[rust-lang/llvm-project repository].
3. Fork the rust-lang/llvm-project repository.
4. Check out the appropriate branch (typically named `rustc/a.b-yyyy-mm-dd`).
@ -80,16 +76,13 @@ cherry-picked directly to our fork.
6. Cherry-pick the relevant commit(s) using `git cherry-pick -x`.
7. Push this branch to your fork.
8. Send a Pull Request to rust-lang/llvm-project to the same branch as before.
Be sure to reference the Rust and/or LLVM issue that you're fixing in the PR description.
9. Wait for the PR to be merged.
10. Send a PR to rust-lang/rust updating the `src/llvm-project` submodule with your bugfix.
This can be done locally with `git submodule update --remote src/llvm-project` typically.
11. Wait for PR to be merged.
An example PR: [#59089](https://github.com/rust-lang/rust/pull/59089)
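The cherry-pick itself (steps 4-7 above) might look like this (a sketch; `<sha>` stands for the upstream commit, and `-x` records its origin in the new commit message):

```bash
git checkout rustc/a.b-yyyy-mm-dd
git remote add upstream https://github.com/llvm/llvm-project.git
git fetch upstream
git cherry-pick -x <sha>
git push origin rustc/a.b-yyyy-mm-dd
```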
## New LLVM Release Updates
@ -110,14 +103,12 @@ so let's go through each in detail.
1. Create a new branch in the [rust-lang/llvm-project repository]
from this `release/$N.x` branch,
and name it `rustc/a.b-yyyy-mm-dd`,
where `a.b` is the current version number of LLVM in-tree at the time of the branch,
and the remaining part is the current date.
1. Apply Rust-specific patches to the llvm-project repository.
All features and bugfixes are upstream,
but there are often some weird build-related patches that don't make sense to upstream.
These patches are typically the latest patches in the
rust-lang/llvm-project branch that rustc is currently using.
@ -145,13 +136,14 @@ so let's go through each in detail.
This is done by having the following setting in `bootstrap.toml`:
```toml
[llvm]
download-ci-llvm = false
```
1. Test for regressions across other platforms.
LLVM often has at least one bug
for non-tier-1 architectures, so it's good to do some more testing before
sending this to bors!
If you're low on resources you can send the PR as-is
now to bors, though, and it'll get tested anyway.
Ideally, build LLVM and test it on a few platforms:
@ -168,9 +160,11 @@ so let's go through each in detail.
* `./src/ci/docker/run.sh dist-various-2`
* `./src/ci/docker/run.sh armhf-gnu`
1. Prepare a PR to `rust-lang/rust`.
Work with maintainers of
`rust-lang/llvm-project` to get your commit in a branch of that repository,
and then you can send a PR to `rust-lang/rust`.
You'll change at least
`src/llvm-project` and will likely also change [`llvm-wrapper`] as well.
<!-- date-check: mar 2025 -->
@ -192,14 +186,12 @@ so let's go through each in detail.
We will often want to have those bug fixes as well.
The merge process for that is to use `git merge` itself to merge LLVM's
`release/a.b` branch with the branch created in step 2.
This is typically done multiple times when necessary while LLVM's release branch is baking.
1. LLVM then announces the release of version `a.b`.
1. After LLVM's official release,
we follow the process of creating a new branch on the rust-lang/llvm-project repository again,
this time with a new date.
It is only then that the PR to update Rust to use that version is merged.


@ -39,8 +39,7 @@ like the standard library (std) or the compiler (rustc).
To create it by default with `x doc`, modify `bootstrap.toml`:
```toml
[build]
compiler-docs = true
```
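With that setting in place, the usual doc invocation also builds the compiler's internal API docs (a sketch):

```bash
# Build the documentation; with compiler-docs enabled this also
# includes the rustc internal API docs
./x doc
```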
Note that when enabled,


@ -4,22 +4,25 @@
For `profile = "library"` users, or users who use `download-rustc = true | "if-unchanged"`, please be advised that
the `./x test library/std` flow where `download-rustc` is active (i.e. no compiler changes) is currently broken.
This is tracked in <https://github.com/rust-lang/rust/issues/142505>.
Only the `./x test` flow is affected in this
case; `./x {check,build} library/std` should still work.
In the short-term, you may need to disable `download-rustc` for `./x test library/std`.
This can be done either by:
1. `./x test library/std --set rust.download-rustc=false`
2. Or set `rust.download-rustc = false` in `bootstrap.toml`.
Unfortunately that will require building the stage 1 compiler.
The bootstrap team is working on this, but
implementing a maintainable fix is taking some time.
</div>
The compiler is built using a tool called `x.py`.
You will need to have Python installed to run it.
## Quick Start
@ -28,7 +31,8 @@ For a less in-depth quick-start of getting the compiler running, see [quickstart
## Get the source code
The main repository is [`rust-lang/rust`][repo].
This contains the compiler,
the standard library (including `core`, `alloc`, `test`, `proc_macro`, etc),
and a bunch of tools (e.g. `rustdoc`, the bootstrapping infrastructure, etc).
@ -86,8 +90,8 @@ cd rust
## What is `x.py`?
`x.py` is the build tool for the `rust` repository.
It can build docs, run tests, and build the compiler and standard library.
This chapter focuses on the basics to be productive, but
if you want to learn more about `x.py`, [read this chapter][bootstrap].
@ -103,11 +107,13 @@ Also, using `x` rather than `x.py` is recommended as:
(You can find the platform related scripts around the `x.py`, like `x.ps1`)
Notice that this is not absolute.
For instance, using Nushell in VSCode on Win10,
typing `x` or `./x` still opens `x.py` in an editor rather than invoking the program.
:)
In the rest of this guide, we use `x` rather than `x.py` directly.
The following command:
```bash
./x check
@ -164,9 +170,10 @@ Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser
#### Running `x.py` slightly more conveniently
There is a binary that wraps `x.py` called `x` in `src/tools/x`.
All it does is run `x.py`, but it can be installed system-wide and run from any subdirectory
of a checkout.
It also looks up the appropriate version of `python` to use.
You can install it with `cargo install --path src/tools/x`.
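For example (a sketch; the subdirectory is illustrative):

```bash
cargo install --path src/tools/x
# `x` now works from any subdirectory of the checkout:
cd compiler/rustc_middle
x check
```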
@ -177,18 +184,20 @@ shell to run the platform related scripts.
## Create a `bootstrap.toml`
To start, run `./x setup` and select the `compiler` defaults.
This will do some initialization and create a `bootstrap.toml` for you with reasonable defaults.
If you use a different default (which
you'll likely want to do if you want to contribute to an area of Rust other than the compiler, such
as rustdoc), make sure to read information about that default (located in `src/bootstrap/defaults`)
as the build process may be different for other defaults.
Alternatively, you can write `bootstrap.toml` by hand.
See `bootstrap.example.toml` for all the available settings and what they do.
See `src/bootstrap/defaults` for common settings to change.
If you have already built `rustc` and you change settings related to LLVM, then you may have to
execute `./x clean --all` for subsequent configuration changes to take effect.
Note that `./x clean` will not cause a rebuild of LLVM.
## Common `x` commands
@ -202,28 +211,28 @@ working on `rustc`, `std`, `rustdoc`, and other tools.
| `./x test` | Runs all tests |
| `./x fmt` | Formats all code |
As written, these commands are reasonable starting points.
However, there are additional options and arguments for each of them that are worth learning for
serious development work.
In particular, `./x build` and `./x test`
provide many ways to compile or test a subset of the code, which can save a lot of time.
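A few illustrative subset invocations (each appears elsewhere in this guide):

```bash
./x build --stage 1 library   # build just the standard library
./x test tests/ui             # run just the ui test suite
./x test library/std          # run just std's tests
```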
Also, note that `x` supports all kinds of path suffixes for `compiler`, `library`,
and `src/tools` directories.
So, you can simply run `x test tidy` instead of `x test src/tools/tidy`.
Or, `x build std` instead of `x build library/std`.
[rust-analyzer]: suggested.html#configuring-rust-analyzer-for-rustc
See the chapters on [testing](../tests/running.md) and [rustdoc](../rustdoc.md) for more details.
### Building the compiler
Note that building will require a relatively large amount of storage space.
You may want to have upwards of 10 or 15 gigabytes available to build the compiler.
Once you've created a `bootstrap.toml`, you are now ready to run `x`.
There are a lot of options here, but let's start with what is
probably the best "go to" command for building a local compiler:
```console
@ -236,8 +245,7 @@ What this command does is:
- Assemble a working stage1 sysroot, containing the stage1 compiler and stage1 standard libraries.
This final product (stage1 compiler + libs built using that compiler)
is what you need to build other Rust programs (unless you use `#![no_std]` or `#![no_core]`).
You will probably find that building the stage1 `std` is a bottleneck for you,
but fear not, there is a (hacky) workaround...
@ -245,12 +253,12 @@ see [the section on avoiding rebuilds for std][keep-stage].
[keep-stage]: ./suggested.md#faster-rebuilds-with---keep-stage-std
Sometimes you don't need a full build.
When doing some kind of "type-based refactoring", like renaming a method, or changing the
signature of some function, you can use `./x check` instead for a much faster build.
Note that this whole command just gives you a subset of the full `rustc` build.
The **full** `rustc` build (what you get with `./x build
--stage 2 rustc`) has quite a few more steps:
- Build `rustc` with the stage1 compiler.
@ -262,8 +270,8 @@ You almost never need to do this.
### Build specific components
If you are working on the standard library, you probably don't need to build
every other default component.
Instead, you can build a specific component by providing its name, like this:
```bash
./x build --stage 1 library
@ -275,12 +283,13 @@ default).
## Creating a rustup toolchain
Once you have successfully built `rustc`, you will have created a bunch
of files in your `build` directory.
In order to actually run the resulting `rustc`, we recommend creating rustup toolchains.
The first command listed below creates the stage1 toolchain, which was built in the
steps above, with the name `stage1`.
The second command creates the stage2 toolchain using the stage1 compiler.
This will be needed in the future
if you run the entire test suite, but it is not built on this page.
Building stage2 is done with the same `./x build` command as for stage1,
specifying that the stage is 2 instead.
@ -289,8 +298,8 @@ rustup toolchain link stage1 build/host/stage1
rustup toolchain link stage2 build/host/stage2
```
Now you can run the `rustc` you built via the toolchain.
If you run with `-vV`, you should see a version number ending in `-dev`, indicating a build from
your local environment:
```bash
@ -308,15 +317,19 @@ The rustup toolchain points to the specified toolchain compiled in your `build`
so the rustup toolchain will be updated whenever `x build` or `x test` are run for
that toolchain/stage.
**Note:** the toolchain we've built does not include `cargo`.
In this case, `rustup` will
fall back to using `cargo` from the installed `nightly`, `beta`, or `stable` toolchain
(in that order).
If you need to use unstable `cargo` flags, be sure to run
`rustup install nightly` if you haven't already.
See the
[rustup documentation on custom toolchains](https://rust-lang.github.io/rustup/concepts/toolchains.html#custom-toolchains).
**Note:** rust-analyzer and IntelliJ Rust plugin use a component called
`rust-analyzer-proc-macro-srv` to work with proc macros.
If you intend to use a
custom toolchain for a project (e.g. via `rustup override set stage1`), you may
want to build this component:
```bash
@ -342,8 +355,7 @@ If you want to always build for other targets without needing to pass flags to `
you can configure this in the `[build]` section of your `bootstrap.toml` like so:
```toml
build.target = ["x86_64-unknown-linux-gnu", "wasm32-wasip1"]
```
Note that building for some targets requires having external dependencies installed
@ -369,16 +381,14 @@ cargo +stage1 build --target wasm32-wasip1
## Other `x` commands
Here are a few other useful `x` commands.
We'll cover some of them in detail in other sections:
- Building things:
- `./x build` builds everything using the stage 1 compiler,
not just up to `std`
- `./x build --stage 2` builds everything with the stage 2 compiler including `rustdoc`
- Running tests (see the [section on running tests](../tests/running.html) for more details):
- `./x test library/std` runs the unit tests and integration tests from `std`
- `./x test tests/ui` runs the `ui` test suite
- `./x test tests/ui/const-generics` - runs all the tests in
@ -390,8 +400,8 @@ in other sections:
Sometimes you need to start fresh, but this is normally not the case.
If you need to run this then bootstrap is most likely not acting right and
you should file a bug describing what is going wrong.
If you do need to clean everything up then you only need to run one command!
```bash
./x clean
@ -403,15 +413,17 @@ a long time even on fast computers.
## Remarks on disk space
Building the compiler (especially if beyond stage 1) can require significant amounts of free disk
space, possibly around 100GB.
This is compounded if you have a separate build directory for
rust-analyzer (e.g. `build-rust-analyzer`). This is easy to hit with dev-desktops which have a [set
disk
quota](https://github.com/rust-lang/simpleinfra/blob/8a59e4faeb75a09b072671c74a7cb70160ebef50/ansible/roles/dev-desktop/defaults/main.yml#L7)
for each user, but it applies to local development as well.
Occasionally, you may need to:
- Remove `build/` directory.
- Remove `build-rust-analyzer/` directory (if you have a separate rust-analyzer build directory).
- Uninstall unnecessary toolchains if you use `cargo-bisect-rustc`.
You can check which toolchains are installed with `rustup toolchain list`.
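As a sketch of the cleanup steps above (paths assumed to be relative to the root of your `rust` checkout):

```shell
# Show how much space the build directories use, then remove them.
# `build-rust-analyzer/` only exists if you configured a separate
# rust-analyzer build directory.
du -sh build/ build-rust-analyzer/ 2>/dev/null
rm -rf build/ build-rust-analyzer/
```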
[^1]: issue[#1707](https://github.com/rust-lang/rustc-dev-guide/issues/1707)
@ -52,7 +52,7 @@ own preinstalled LLVM, you will need to provide `FileCheck` in some other way.
On Debian-based systems, you can install the `llvm-N-tools` package (where `N`
is the LLVM version number, e.g. `llvm-8-tools`). Alternately, you can specify
the path to `FileCheck` with the `llvm-filecheck` config item in `bootstrap.toml`
or you can disable codegen tests with the `rust.codegen-tests` item in `bootstrap.toml`.
## Creating a target specification
@ -72,8 +72,7 @@ somewhat successfully, you can copy the specification into the compiler itself.
You will need to add a line to the big table inside of the
`supported_targets` macro in the `rustc_target::spec` module.
You will then add a corresponding file for your new target containing a `target` function.
Look for existing targets to use as examples.
@ -118,16 +117,10 @@ After this, run `cargo update -p libc` to update the lockfiles.
Beware that if you patch to a local `path` dependency, this will enable
warnings for that dependency.
Some dependencies are not warning-free, and due
to the `rust.deny-warnings` setting in `bootstrap.toml`, the build may suddenly start to fail.
To work around warnings, you may want to:
- Modify the dependency to remove the warnings
- Or for local development purposes, suppress the warnings by setting `rust.deny-warnings = false` in `bootstrap.toml`.
[`libc`]: https://crates.io/crates/libc
[`cc`]: https://crates.io/crates/cc
@ -2,20 +2,20 @@
There are multiple additional build configuration options and techniques that can be used to compile a
build of `rustc` that is as optimized as possible (for example when building `rustc` for a Linux
distribution).
The status of these configuration options for various Rust targets is tracked [here].
This page describes how you can use these approaches when building `rustc` yourself.
[here]: https://github.com/rust-lang/rust/issues/103595
## Link-time optimization
Link-time optimization is a powerful compiler technique that can increase program performance.
To enable (Thin-)LTO when building `rustc`, set the `rust.lto` config option to `"thin"`
in `bootstrap.toml`:
```toml
rust.lto = "thin"
```
> Note that LTO for `rustc` is currently supported and tested only for
@ -30,13 +30,12 @@ Enabling LTO on Linux has [produced] speed-ups by up to 10%.
## Memory allocator
Using a different memory allocator for `rustc` can provide significant performance benefits.
If you want to enable the `jemalloc` allocator, you can set the `rust.jemalloc` option to `true`
in `bootstrap.toml`:
```toml
rust.jemalloc = true
```
> Note that this option is currently only supported for Linux and macOS targets.
@ -48,17 +47,16 @@ You can modify the number of codegen units for `rustc` and `libstd` in `bootstra
following options:
```toml
rust.codegen-units = 1
rust.codegen-units-std = 1
```
## Instruction set
By default, `rustc` is compiled for a generic (and conservative) instruction set architecture
(depending on the selected target), to make it support as many CPUs as possible.
If you want to compile `rustc` for a specific instruction set architecture,
you can set the `target-cpu` compiler option in `RUSTFLAGS`:
```bash
RUSTFLAGS="-C target-cpu=x86-64-v3" ./x build ...
@ -68,22 +66,23 @@ If you also want to compile LLVM for a specific instruction set, you can set `ll
in `bootstrap.toml`:
```toml
llvm.cxxflags = "-march=x86-64-v3"
llvm.cflags = "-march=x86-64-v3"
```
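Taken together, the build-time options above can be sketched in a single `bootstrap.toml` (illustrative values; `-march=x86-64-v3` is an assumption about your target CPU, and each option can also be set independently):

```toml
# Illustrative combination of the options discussed above.
rust.lto = "thin"
rust.jemalloc = true
rust.codegen-units = 1
rust.codegen-units-std = 1
llvm.cxxflags = "-march=x86-64-v3"
llvm.cflags = "-march=x86-64-v3"
```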
## Profile-guided optimization
Applying profile-guided optimizations (or more generally, feedback-directed optimizations) can
produce a large increase in `rustc` performance, by up to 15% ([1], [2]).
However, these techniques
are not simply enabled by a configuration option, but rather they require a complex build workflow
that compiles `rustc` multiple times and profiles it on selected benchmarks.
There is a tool called `opt-dist` that is used to optimize `rustc` with [PGO] (profile-guided
optimizations) and [BOLT] (a post-link binary optimizer) for builds distributed to end users.
You can examine the tool, which is located in `src/tools/opt-dist`, and build a custom PGO build
workflow based on it, or try to use it directly.
Note that the tool is currently quite hardcoded to
the way we use it in Rust's continuous integration workflows, and it might require some custom
changes to make it work in a different environment.
@ -97,9 +96,9 @@ changes to make it work in a different environment.
To use the tool, you will need to provide some external dependencies:
- A Python3 interpreter (for executing `x.py`).
- A compiled LLVM toolchain, with the `llvm-profdata` binary.
Optionally, if you want to use BOLT,
the `llvm-bolt` and `merge-fdata` binaries have to be available in the toolchain.
These dependencies are provided to `opt-dist` by an implementation of the [`Environment`] struct.
It specifies the directories where the PGO/BOLT pipeline will take place, and also external dependencies
@ -108,9 +107,8 @@ like Python or LLVM.
Here is an example of how `opt-dist` can be used locally (outside of CI):
1. Enable metrics in your `bootstrap.toml` file, because `opt-dist` expects it to be enabled:
```toml
build.metrics = true
```
2. Build the tool with the following command:
```bash
@ -118,8 +118,7 @@ requires extra disk space.
Selecting `vscode` in `./x setup editor` will prompt you to create a
`.vscode/settings.json` file which will configure Visual Studio code.
The recommended `rust-analyzer` settings live at [`src/etc/rust_analyzer_settings.json`].
If running `./x check` on save is inconvenient, in VS Code you can use a [Build Task] instead:
@ -253,8 +252,7 @@ It can be configured through `.zed/settings.json`, as described
[here](https://zed.dev/docs/configuring-languages).
Selecting `zed` in `./x setup editor` will prompt you to create a `.zed/settings.json`
file which will configure Zed with the recommended configuration.
The recommended `rust-analyzer` settings live at [`src/etc/rust_analyzer_zed.json`].
## Check, check, and check again
@ -441,7 +439,7 @@ ln -s ./src/tools/nix-dev-shell/envrc-shell ./.envrc # Use nix-shell
### Note
Note that when using nix on a not-NixOS distribution, it may be necessary to set
**`build.patch-binaries-for-nix = true` in `bootstrap.toml`**. Bootstrap tries to detect
whether it's running in nix and enable patching automatically, but this
detection can have false negatives.
@ -1,26 +1,27 @@
# Debugging the compiler
This chapter contains a few tips to debug the compiler.
These tips aim to be useful no matter what you are working on.
Some of the other chapters have
advice about specific parts of the compiler (e.g. the [Queries Debugging and
Testing chapter](./incrcomp-debugging.html) or the [LLVM Debugging
chapter](./backend/debugging.md)).
## Configuring the compiler
By default, rustc is built without most debug information.
To enable debug info,
set `rust.debug = true` in your `bootstrap.toml`.
Setting `rust.debug = true` turns on many different debug options (e.g., `debug-assertions`,
`debug-logging`, etc.) which can be individually tweaked if you want to, but many people
simply set `rust.debug = true`.
If you want to use GDB to debug rustc, set the following options in `bootstrap.toml`:
```toml
rust.debug = true
rust.debuginfo-level = 2
```
> NOTE:
@ -36,8 +37,7 @@ This requires at least GDB v10.2,
otherwise you need to disable new symbol-mangling-version in `bootstrap.toml`.
```toml
rust.new-symbol-mangling = false
```
> See the comments in `bootstrap.example.toml` for more info.
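Putting the options above together, a debugging-oriented `bootstrap.toml` might look like this sketch (values illustrative; each option can be set independently):

```toml
# Illustrative debugging configuration for rustc.
rust.debug = true
rust.debuginfo-level = 2
# Only needed if your GDB is older than v10.2:
rust.new-symbol-mangling = false
```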
@ -47,16 +47,16 @@ You will need to rebuild the compiler after changing any configuration option.
## Suppressing the ICE file
By default, if rustc encounters an Internal Compiler Error (ICE) it will dump the ICE contents to an
ICE file within the current working directory named `rustc-ice-<timestamp>-<pid>.txt`.
If this is not desirable, you can prevent the ICE file from being created with `RUSTC_ICE=0`.
## Getting a backtrace
[getting-a-backtrace]: #getting-a-backtrace
When you have an ICE (panic in the compiler), you can set
`RUST_BACKTRACE=1` to get the stack trace of the `panic!` like in normal Rust programs.
IIRC backtraces **don't work** on MinGW, sorry.
If you have trouble or the backtraces are full of `unknown`,
you might want to find some way to use Linux, Mac, or MSVC on Windows.
In the default configuration (without `debug` set to `true`), you don't have line numbers
@ -100,9 +100,10 @@ stack backtrace:
## `-Z` flags
The compiler has a bunch of `-Z *` flags.
These are unstable flags that are only enabled on nightly.
Many of them are useful for debugging.
To get a full listing of `-Z` flags, use `-Z help`.
One useful flag is `-Z verbose-internals`, which generally enables printing more
info that could be useful for debugging.
@ -114,7 +115,8 @@ Right below you can find elaborate explainers on a selected few.
If you want to get a backtrace to the point where the compiler emits an
error message, you can pass the `-Z treat-err-as-bug=n`, which will make
the compiler panic on the `nth` error.
If you leave off `=n`, the compiler will
assume `1` for `n` and thus panic on the first error it encounters.
For example:
@ -190,13 +192,12 @@ Cool, now I have a backtrace for the error!
The `-Z eagerly-emit-delayed-bugs` option makes it easy to debug delayed bugs.
It turns them into normal errors, i.e. makes them visible. This can be used in
combination with `-Z treat-err-as-bug` to stop at a particular delayed bug and get a backtrace.
### Getting the error creation location
`-Z track-diagnostics` can help figure out where errors are emitted.
It uses `#[track_caller]` for this and prints its location alongside the error:
```
$ RUST_BACKTRACE=1 rustc +stage1 error.rs -Z track-diagnostics
@ -238,11 +239,11 @@ For details see [the guide section on tracing](./tracing.md)
## Narrowing (Bisecting) Regressions
The [cargo-bisect-rustc][bisect] tool can be used as a quick and easy way to
find exactly which PR caused a change in `rustc` behavior.
It automatically downloads `rustc` PR artifacts and tests them against a project you provide
until it finds the regression.
You can then look at the PR to get more context on *why* it was changed.
See [this tutorial][bisect-tutorial] on how to use it.
[bisect]: https://github.com/rust-lang/cargo-bisect-rustc
[bisect-tutorial]: https://rust-lang.github.io/cargo-bisect-rustc/tutorial.html
@ -252,8 +253,9 @@ it.
The [rustup-toolchain-install-master][rtim] tool by kennytm can be used to
download the artifacts produced by Rust's CI for a specific SHA1 -- this
basically corresponds to the successful landing of some PR -- and then sets
them up for your local use.
This also works for artifacts produced by `@bors try`.
This is helpful when you want to examine the resulting build of a PR
without doing the build yourself.
[rtim]: https://github.com/kennytm/rustup-toolchain-install-master
@ -261,10 +263,12 @@ without doing the build yourself.
## `#[rustc_*]` TEST attributes
The compiler defines a whole lot of internal (perma-unstable) attributes, some of which are useful
for debugging by dumping extra compiler-internal information.
These are prefixed with `rustc_` and
are gated behind the internal feature `rustc_attrs` (enabled via e.g. `#![feature(rustc_attrs)]`).
For a complete and up to date list, see [`builtin_attrs`].
More specifically, the ones marked `TEST`.
Here are some notable ones:
| Attribute | Description |
@ -313,7 +317,8 @@ $ firefox maybe_init_suffix.pdf # Or your favorite pdf viewer
### Debugging type layouts
The internal attribute `#[rustc_layout]` can be used to dump the [`Layout`] of
the type it is attached to.
For example:
```rust
#![feature(rustc_attrs)]
@ -2,16 +2,17 @@
## The `HostEffect` predicate
[`HostEffectPredicate`]s are a kind of predicate from `~const Tr` or `const Tr` bounds.
It has a trait reference, and a `constness` which could be `Maybe` or
`Const` depending on the bound.
Because `~const Tr`, or rather `Maybe` bounds
apply differently depending on the context they are in, they have different
behavior than normal bounds.
Where normal trait bounds on a function such as
`T: Tr` are collected within the [`predicates_of`] query to be proven when a
function is called and to be assumed within the function, bounds such as
`T: ~const Tr` will behave as a normal trait bound and add `T: Tr` to the result
from `predicates_of`, but also add a `HostEffectPredicate` to the [`const_conditions`] query.
On the other hand, `T: const Tr` bounds do not change meaning across contexts,
therefore they will result in `HostEffect(T: Tr, const)` being added to
@ -23,17 +24,17 @@ therefore they will result in `HostEffect(T: Tr, const)` being added to
## The `const_conditions` query
`predicates_of` represents a set of predicates that need to be proven to use an item.
For example, to use `foo` in the example below:
```rust
fn foo<T>() where T: Default {}
```
We must be able to prove that `T` implements `Default`.
In a similar vein,
`const_conditions` represents a set of predicates that need to be proven to use
an item *in const contexts*. If we adjust the example above to use `const` trait bounds:
```rust
const fn foo<T>() where T: ~const Default {}
@ -45,13 +46,13 @@ prove that `T` has a const implementation of `Default`.
## Enforcement of `const_conditions`
`const_conditions` are currently checked in various places.
Every call in HIR from a const context (which includes `const fn` and `const`
items) will check that `const_conditions` of the function we are calling hold.
This is done in [`FnCtxt::enforce_context_effects`].
Note that we don't check
if the function is only referred to but not called, as the following code needs to compile:
```rust
const fn hi<T: ~const Default>() -> T {
@ -61,8 +62,8 @@ const X: fn() -> u32 = hi::<u32>;
```
For a trait `impl` to be well-formed, we must be able to prove the
`const_conditions` of the trait from the `impl`'s environment.
This is checked in [`wfcheck::check_impl`].
Here's an example:
@ -77,10 +78,11 @@ impl const Foo for () {}
```
Methods of trait impls must not have stricter bounds than the method of the
trait that they are implementing.
To check that the methods are compatible, a
hybrid environment is constructed with the predicates of the `impl` plus the
predicates of the trait method, and we attempt to prove the predicates of the impl method.
We do the same for `const_conditions`:
```rust
const trait Foo {
@ -95,10 +97,10 @@ impl<T: ~const Clone> Foo for Vec<T> {
}
```
These checks are done in [`compare_method_predicate_entailment`].
A similar function that does the same check for associated types is called
[`compare_type_predicate_entailment`].
Both of these need to consider `const_conditions` when in const contexts.
In MIR, as part of const checking, `const_conditions` of items that are called
are revalidated again in [`Checker::revalidate_conditional_constness`].
@ -111,7 +113,9 @@ are revalidated again in [`Checker::revalidate_conditional_constness`].
## `explicit_implied_const_bounds` on associated types and traits
Bounds on associated types, opaque types, and supertraits such as the following
are represented differently:
```rust
trait Foo: ~const PartialEq {
type X: ~const PartialEq;
@ -122,11 +126,11 @@ fn foo() -> impl ~const PartialEq {
}
```
Unlike `const_conditions`, which need to be proved for callers,
and can be assumed inside the definition (e.g. trait
bounds on functions), these bounds need to be proved at definition (at the impl,
or when returning the opaque) but can be assumed for callers.
The non-const equivalent of these bounds is called [`explicit_item_bounds`].
These bounds are checked in [`compare_impl_item::check_type_bounds`] for HIR
typeck, [`evaluate_host_effect_from_item_bounds`] in the old solver and
@ -139,18 +143,16 @@ typeck, [`evaluate_host_effect_from_item_bounds`] in the old solver and
## Proving `HostEffectPredicate`s
`HostEffectPredicate`s are implemented both in the [old solver] and the [new trait solver].
In general, we can prove a `HostEffect` predicate when any of these conditions is met:
* The predicate can be assumed from caller bounds;
* The type has a `const` `impl` for the trait, *and* that const conditions on
the impl holds, *and* that the `explicit_implied_const_bounds` on the trait holds; or
* The type has a built-in implementation for the trait in const contexts.
For example, `Fn` may be implemented by function items if their const conditions
are satisfied, or `Destruct` is implemented in const contexts if the type can
be dropped at compile time.
[old solver]: https://doc.rust-lang.org/nightly/nightly-rustc/src/rustc_trait_selection/traits/effects.rs.html
[new trait solver]: https://doc.rust-lang.org/nightly/nightly-rustc/src/rustc_next_trait_solver/solve/effect_goals.rs.html
@ -161,10 +163,11 @@ To be expanded later.
### The `#[rustc_non_const_trait_method]` attribute
This is intended for internal (standard library) usage only.
With this attribute applied to a trait method, the compiler will not check the default body of this
method for the ability to run at compile time.
Users of the trait will also not be allowed to use this trait method in const contexts.
This attribute is primarily
used for constifying large traits such as `Iterator` without having to make all
its methods `const` at the same time.
@ -7,13 +7,13 @@ libraries and binaries with additional instructions and data, at compile time.
The coverage instrumentation injects calls to the LLVM intrinsic instruction
[`llvm.instrprof.increment`][llvm-instrprof-increment] at code branches
(based on a MIR-based control flow analysis), and LLVM converts these to
instructions that increment static counters, when executed.
The LLVM coverage instrumentation also requires a [Coverage Map] that encodes source metadata,
mapping counter IDs--directly and indirectly--to the file locations (with
start and end line and column).
Rust libraries, with or without coverage instrumentation, can be linked into instrumented binaries.
When the program is executed and cleanly terminates,
LLVM libraries write the final counter values to a file (`default.profraw` or
a custom file set through environment variable `LLVM_PROFILE_FILE`).
@ -21,9 +21,7 @@ Developers use existing LLVM coverage analysis tools to decode `.profraw`
files, with corresponding Coverage Maps (from matching binaries that produced
them), and generate various reports for analysis, for example:
![Screenshot of sample `llvm-cov show` result, for function add_quoted_string](img/llvm-cov-show-01.png)
Detailed instructions and examples are documented in the
[rustc book][rustc-book-instrument-coverage].
@ -39,31 +37,28 @@ When working on the coverage instrumentation code, it is usually necessary to
This allows the compiler to produce instrumented binaries, and makes it possible
to run the full coverage test suite.
Enabling debug assertions in the compiler and in LLVM is recommended, but not mandatory.
```toml
# Similar to the "compiler" profile, but also enables debug assertions in LLVM.
# These assertions can detect malformed coverage mappings in some cases.
profile = "codegen"
# IMPORTANT: This tells the build system to build the LLVM profiler runtime.
# Without it, the compiler can't produce coverage-instrumented binaries,
# and many of the coverage tests will be skipped.
build.profiler = true
# Enable debug assertions in the compiler.
rust.debug-assertions = true
```
## Rust symbol mangling
`-C instrument-coverage` automatically enables Rust symbol mangling `v0` (as
if the user specified `-C symbol-mangling-version=v0` option when invoking
`rustc`) to ensure consistent and reversible name mangling.
This has two important benefits:
1. LLVM coverage tools can analyze coverage over multiple runs, including some
changes to source code; so mangled names must be consistent across compilations.
@ -73,8 +68,8 @@ important benefits:
## The LLVM profiler runtime
Coverage data is only generated by running the executable Rust program.
`rustc` statically links coverage-instrumented binaries with LLVM runtime code
([compiler-rt][compiler-rt-profile]) that implements program hooks
(such as an `exit` hook) to write the counter values to the `.profraw` file.
@ -33,10 +33,11 @@ Since most of the time compiling rustc is spent in LLVM, the idea is that by
reducing the amount of code passed to LLVM, compiling rustc gets faster.
To use `cargo-llvm-lines` together with a somewhat custom rustc build process, you can use
`-C save-temps` to obtain the required LLVM IR.
The option preserves temporary work products created during compilation.
Among those is LLVM IR that represents an input to the
optimization pipeline; ideal for our purposes.
It is stored in files with `*.no-opt.bc` extension in LLVM bitcode format.
Example usage:
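One possible workflow, sketched with illustrative paths and crate name (`--files` lets `cargo-llvm-lines` analyze pre-existing bitcode files rather than compiling):

```bash
cargo install cargo-llvm-lines
# Clean first, so results from previous runs are not mixed in.
./x clean
# Build while preserving the unoptimized LLVM bitcode.
env RUSTFLAGS=-Csave-temps ./x build --stage 0 compiler/rustc
# Analyze a single crate's *.no-opt.bc file (path is illustrative).
cargo llvm-lines --files \
  build/x86_64-unknown-linux-gnu/stage0-rustc/*/debug/deps/rustc_middle-*.no-opt.bc | less
```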
Since this doesn't seem to work with incremental compilation or `./x check`,
you will be compiling rustc _a lot_.
I recommend changing a few settings in `bootstrap.toml` to make it bearable:
```
# A debug build takes _a third_ as long on my machine,
# but compiling more than stage0 rustc becomes unbearably slow.
rust.optimize = false
# We can't use incremental anyway, so we disable it for a little speed boost.
rust.incremental = false
# We won't be running it, so no point in compiling debug checks.
rust.debug = false
# Using a single codegen unit gives less output, but is slower to compile.
rust.codegen-units = 0 # num_cpus
```
The llvm-lines output is affected by several options.
`rust.optimize = false` increases it from 2.1GB to 3.5GB and `codegen-units = 0` to 4.1GB.
MIR optimizations have little impact.
Compared to the default `RUSTFLAGS="-Z
mir-opt-level=1"`, level 0 adds 0.3GB and level 2 removes 0.2GB.
As of <!-- date-check --> July 2022,
inlining happens in LLVM and GCC codegen backends,
## Introducing WPR and WPA
High-level performance analysis (including memory usage) can be performed with the Windows
Performance Recorder (WPR) and Windows Performance Analyzer (WPA).
As the names suggest, WPR is for recording system statistics (in the form of event trace log a.k.a.
ETL files), while WPA is for analyzing these ETL files.
WPR collects system-wide statistics, so it won't just record things relevant to rustc but also
everything else that's running on the machine.
During analysis, we can filter to just the things we find interesting.
These tools are quite powerful but also require a bit of learning
before we can successfully profile the Rust compiler.
### Installing WPR and WPA
You can install WPR and WPA as part of the Windows Performance Toolkit which itself is an option as
part of downloading the Windows Assessment and Deployment Kit (ADK).
You can download the ADK
installer [here](https://learn.microsoft.com/en-us/windows-hardware/get-started/adk-install).
Make sure to select the Windows Performance Toolkit (you don't need to select anything else).
## Recording
In order to perform system analysis, you'll first need to record your system with WPR.
Open WPR and at the bottom of the window select the "profiles" of the things you want to record.
For looking
into memory usage of the rustc bootstrap process, we'll want to select the following items:
* CPU usage
* VirtualAlloc usage
You might be tempted to record "Heap usage" as well, but this records every single heap allocation
and can be very, very expensive.
For high-level analysis, it might be best to leave that turned off.
Now we need to get our setup ready to record.
For memory usage analysis, it is best to record the
stage 2 compiler build with a stage 1 compiler build with debug symbols.
Having symbols in the
compiler we're using to build rustc will aid our analysis greatly by allowing WPA to resolve Rust
symbols correctly.
Unfortunately, the stage 0 compiler does not have symbols turned on,
which is why we'll need to build a stage 1 compiler,
and then a stage 2 compiler ourselves.
To do this, make sure you have set `rust.debuginfo-level = 1` in your `bootstrap.toml` file.
This tells rustc to generate debug information which includes stack frames when bootstrapping.
Now you can build the stage 1 compiler: `x build --stage 1 -i library` or however
else you want to build the stage 1 compiler.
Now that the stage 1 compiler is built, we can record the stage 2 build.
Go back to WPR, click the
"start" button and build the stage 2 compiler (e.g., `x build --stage=2 -i library`).
When this process finishes, stop the recording.
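WPR also has a command-line interface, so a rough equivalent of the GUI steps above might look like this (the profile names here are assumptions; run `wpr -profiles` to list the ones available on your machine):

```
wpr -start CPU -start VirtualAllocation
x build --stage=2 -i library
wpr -stop rustc-build.etl
```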
## Analysis
Now that our ETL file is open in WPA, we can analyze the results.
First, we'll want to apply the
pre-made "profile" which will put WPA into a state conducive to analyzing rustc bootstrap.
Download
the profile [here](https://github.com/wesleywiser/rustc-bootstrap-wpa-analysis/releases/download/1/rustc.generic.wpaProfile).
Select the "Profiles" menu at the top, then "apply" and then choose the downloaded profile.
You should see something resembling the following:
![WPA with profile applied](../img/wpa-initial-memory.png)
Next, we will need to tell WPA to load and process debug symbols so that it can properly demangle
the Rust stack traces.
To do this, click "Trace" and then choose "Load Symbols".
This step can take a while.
Once WPA has loaded symbols for rustc, we can expand the rustc.exe node and begin drilling down
into the stack with the largest allocations.
To do that, we'll expand the `[Root]` node in the "Commit Stack" column and continue
until we find interesting stack frames.
> Tip: After selecting the node you want to expand, press the right arrow key. This will expand the
node and put the selection on the next largest node in the expanded set.
You can continue pressing the right arrow key until you reach an interesting frame.
![WPA with expanded stack](../img/wpa-stack.png)
Highlights of the most important aspects of the implementation:
when enabled in `bootstrap.toml`:
```toml
build.sanitizers = true
```
The runtimes are [placed into target libdir][sanitizer-copy].
Sanitizers are validated by code generation tests in
[`tests/ui/sanitizer/`][test-ui] directory.
Testing sanitizer functionality requires the sanitizer runtimes (built when
`build.sanitizers = true` in `bootstrap.toml`) and a target providing support for a particular sanitizer.
When a sanitizer is unsupported on a given target, sanitizer tests will be ignored.
This behaviour is controlled by compiletest `needs-sanitizer-*` directives.
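For example, a test might declare its requirements in its header and is skipped wherever they are not met (the directive lines follow compiletest's `//@` syntax; the flags shown are illustrative):

```rust
//@ needs-sanitizer-support
//@ needs-sanitizer-address
//@ compile-flags: -Z sanitizer=address

fn main() {}
```

Such tests can then be exercised like any other suite, e.g. `./x test tests/ui/sanitizer`.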
the debugger currently being used:
gdb is in a range (inclusive)
- `min-lldb-version: 310` — ignores the test if the version of lldb is below the given version
- `rust-lldb` — ignores the test if lldb does not contain the Rust plugin.
NOTE: The "Rust" version of LLDB doesn't exist anymore, so this will always be ignored.
This should probably be removed.
By passing the `--debugger` option to compiletest, you can specify a single debugger to run tests with.
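For example, to restrict a debuginfo test run to gdb, you might forward the option through bootstrap's test args (the exact plumbing is an assumption; check `./x test --help` for your checkout):

```
./x test tests/debuginfo --test-args "--debugger=gdb"
```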
Instrumented binaries need to be linked against the LLVM profiler runtime, so these tests only run if the profiler runtime
is enabled in `bootstrap.toml`:
```toml
# bootstrap.toml
build.profiler = true
```
This also means that they typically don't run in PR CI jobs, though they do run
The following directives will check rustc build settings and target settings:
via `--target`, use `needs-llvm-components` instead to ensure the appropriate
backend is available.
- `needs-profiler-runtime` — ignores the test if the profiler runtime was not
enabled for the target (`build.profiler = true` in `bootstrap.toml`)
- `needs-sanitizer-support` — ignores if the sanitizer support was not enabled
for the target (`build.sanitizers = true` in `bootstrap.toml`)
- `needs-sanitizer-{address,hwaddress,leak,memory,thread}` — ignores if the
corresponding sanitizer is not enabled for the target (AddressSanitizer,
hardware-assisted AddressSanitizer, LeakSanitizer, MemorySanitizer or
# Testing with Docker
The [`src/ci/docker`] directory includes [Docker] image definitions for Linux-based jobs executed on GitHub Actions (non-Linux jobs run outside Docker).
You can run these jobs on your local development machine, which can be
helpful to test environments different from your local system.
You will need to install Docker on a Linux, Windows, or macOS system (typically Linux
will be much faster than Windows or macOS because the latter use virtual
machines to emulate a Linux environment).
Jobs running in CI are configured through a set of bash scripts, and it is not always trivial to reproduce their behavior locally.
If you want to run a CI job locally in the simplest way possible, you can use a provided helper `citool` that tries to replicate what happens on CI as closely as possible:
```bash
cargo run --manifest-path src/ci/citool/Cargo.toml run-local <job-name>
## The `run.sh` script
The [`src/ci/docker/run.sh`] script is used to build a specific Docker image, run it,
build Rust within the image, and either run tests or prepare a set of archives designed for distribution.
The script will mount your local Rust source tree in read-only mode, and an `obj` directory in read-write mode.
All the compiler artifacts will be stored in the `obj` directory.
The shell will start out in the `obj` directory.
From there, it will execute `../src/ci/run.sh` which starts the build as defined by the Docker image.
You can run `src/ci/docker/run.sh <image-name>` directly.
A few important notes regarding the `run.sh` script:
- When executed on CI, the script expects that all submodules are checked out.
If some submodule that is accessed by the job is not available, the build will result in an error.
You should thus make sure that you have all required submodules checked out locally.
You can either do that manually through git, or set `build.submodules = true` in your `bootstrap.toml` and run a command such as `x build` to let bootstrap download the most important submodules.
Note that this might not be enough for the given CI job that you are trying to execute though.
- `<image-name>` corresponds to a single directory located in one of the `src/ci/docker/host-*` directories.
Note that an image name does not necessarily correspond to a job name, as some jobs execute the same image, but with different environment variables or Docker build arguments.
This is a part of the complexity that makes it difficult to run CI jobs locally.
- If you are executing a "dist" job (job beginning with `dist-`), you should set the `DEPLOY=1` environment variable.
- If you are executing an "alternative dist" job (job beginning with `dist-` and ending with `-alt`), you should set the `DEPLOY_ALT=1` environment variable.
- Some of the std tests require IPv6 support.
Docker on Linux seems to have it disabled by default.
Run the commands in [`enable-docker-ipv6.sh`] to enable IPv6 before creating the container.
This only needs to be done once.
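Putting these notes together, a typical local invocation of a dist job might look like this (the image name is illustrative; pick one from the `src/ci/docker/host-*` directories):

```bash
# Dist jobs need DEPLOY=1, per the note above.
DEPLOY=1 src/ci/docker/run.sh dist-x86_64-linux
```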
### Interactive mode
Sometimes, it can be useful to build a specific Docker image, and then run custom commands inside it, so that you can experiment with how the given system behaves.
You can do that using an interactive mode, which will
start a bash shell in the container, using `src/ci/docker/run.sh --dev <image-name>`.
When inside the Docker container, you can run individual commands to do specific tasks.
For example, you can run `../x test tests/ui` to just run UI tests.
Some additional notes about using the interactive mode:
- The container will be deleted automatically when you exit the shell, however
the build artifacts persist in the `obj` directory.
If you are switching between different Docker images, the artifacts from previous environments
stored in the `obj` directory may confuse the build system.
Sometimes you will need to delete parts or all of the `obj` directory before building
inside the container.
- The container is bare-bones, with only a minimal set of packages.
You may want to install some things like `apt install less vim`.
- You can open multiple shells in the container.
First you need the container
name (a short hash), which is displayed in the shell prompt, or you can run
`docker container ls` outside of the container to list the available containers.
With the container name, run `docker exec -it <CONTAINER>
/bin/bash` where `<CONTAINER>` is the container name like `4ba195e95cef`.
[Docker]: https://www.docker.com/
The query arguments are included as a tracing field which means that you can
filter on the debug display of the arguments.
For example, the `typeck` query has an argument `key: LocalDefId` of what is being checked.
You can use a regex to match on that `LocalDefId` to log type checking for a specific function:
```
RUSTC_LOG=[typeck{key=.*name_of_item.*}]
```
calls to `debug!` and `trace!` are only included in the program if
`rust.debug-logging=true` is turned on in bootstrap.toml (it is
turned off by default), so if you don't see `DEBUG` logs, especially
if you run the compiler with `RUSTC_LOG=rustc rustc some.rs` and only see
`INFO` logs, make sure that `rust.debug-logging=true` is turned on in your bootstrap.toml.
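As a quick end-to-end check, assuming a stage 1 toolchain linked via rustup (`stage1` is an illustrative toolchain name):

```bash
# DEBUG lines appear only if the compiler was built with
# rust.debug-logging = true.
RUSTC_LOG=debug rustc +stage1 some.rs 2> log.txt
grep DEBUG log.txt | head
```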
## Logging etiquette and conventions