diff --git a/src/doc/rustc-dev-guide/src/tests/compiletest.md b/src/doc/rustc-dev-guide/src/tests/compiletest.md index 626525fa804d..06a4728c93e1 100644 --- a/src/doc/rustc-dev-guide/src/tests/compiletest.md +++ b/src/doc/rustc-dev-guide/src/tests/compiletest.md @@ -2,8 +2,8 @@ ## Introduction -`compiletest` is the main test harness of the Rust test suite. It allows test -authors to organize large numbers of tests (the Rust compiler has many +`compiletest` is the main test harness of the Rust test suite. +It allows test authors to organize large numbers of tests (the Rust compiler has many thousands), efficient test execution (parallel execution is supported), and allows the test author to configure behavior and expected results of both individual and groups of tests. @@ -22,9 +22,10 @@ individual and groups of tests. `compiletest` may check test code for compile-time or run-time success/failure. Tests are typically organized as a Rust source file with annotations in comments -before and/or within the test code. These comments serve to direct `compiletest` -on if or how to run the test, what behavior to expect, and more. See -[directives](directives.md) and the test suite documentation below for more details +before and/or within the test code. +These comments serve to direct `compiletest` +on if or how to run the test, what behavior to expect, and more. +See [directives](directives.md) and the test suite documentation below for more details on these annotations. See the [Adding new tests](adding.md) and [Best practices](best-practices.md) @@ -40,16 +41,18 @@ Additionally, bootstrap accepts several common arguments directly, e.g. `x test --no-capture --force-rerun --run --pass`. Compiletest itself tries to avoid running tests when the artifacts that are -involved (mainly the compiler) haven't changed. You can use `x test --test-args +involved (mainly the compiler) haven't changed. 
+You can use `x test --test-args --force-rerun` to rerun a test even when none of the inputs have changed. ## Test suites -All of the tests are in the [`tests`] directory. The tests are organized into -"suites", with each suite in a separate subdirectory. Each test suite behaves a -little differently, with different compiler behavior and different checks for -correctness. For example, the [`tests/incremental`] directory contains tests for -incremental compilation. The various suites are defined in +All of the tests are in the [`tests`] directory. +The tests are organized into "suites", with each suite in a separate subdirectory. +Each test suite behaves a +little differently, with different compiler behavior and different checks for correctness. +For example, the [`tests/incremental`] directory contains tests for incremental compilation. +The various suites are defined in [`src/tools/compiletest/src/common.rs`] in the `pub enum Mode` declaration. The following test suites are available, with links for more information: @@ -105,10 +108,9 @@ Run-make tests pertaining to rustdoc are typically named `run-make/rustdoc-*/`. ### Pretty-printer tests -The tests in [`tests/pretty`] exercise the "pretty-printing" functionality of -`rustc`. The `-Z unpretty` CLI option for `rustc` causes it to translate the -input source into various different formats, such as the Rust source after macro -expansion. +The tests in [`tests/pretty`] exercise the "pretty-printing" functionality of `rustc`. +The `-Z unpretty` CLI option for `rustc` causes it to translate the +input source into various different formats, such as the Rust source after macro expansion. The pretty-printer tests have several [directives](directives.md) described below. These commands can significantly change the behavior of the test, but the @@ -125,17 +127,20 @@ If any of the commands above fail, then the test fails. 
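+
+For example, a minimal pretty-printer test (a hypothetical file, shown only as an
+illustration) needs no directives at all: in the default `normal` mode, without
+`pp-exact`, the output is pretty-printed a second time and the two rounds are
+compared for convergence.
+
+```rust,ignore
+// No directives needed: compiletest pretty-prints this file, pretty-prints
+// the result again, and requires the two outputs to match.
+fn main() {
+    println!("hello");
+}
+```
+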
The directives for pretty-printing tests are: - `pretty-mode` specifies the mode pretty-print tests should run in (that is, - the argument to `-Zunpretty`). The default is `normal` if not specified. + the argument to `-Zunpretty`). + The default is `normal` if not specified. - `pretty-compare-only` causes a pretty test to only compare the pretty-printed - output (stopping after step 3 from above). It will not try to compile the - expanded output to type check it. This is needed for a pretty-mode that does - not expand to valid Rust, or for other situations where the expanded output - cannot be compiled. + output (stopping after step 3 from above). + It will not try to compile the expanded output to type check it. + This is needed for a pretty-mode that does + not expand to valid Rust, or for other situations where the expanded output cannot be compiled. - `pp-exact` is used to ensure a pretty-print test results in specific output. If specified without a value, then it means the pretty-print output should - match the original source. If specified with a value, as in `//@ + match the original source. + If specified with a value, as in `//@ pp-exact:foo.pp`, it will ensure that the pretty-printed output matches the - contents of the given file. Otherwise, if `pp-exact` is not specified, then + contents of the given file. + Otherwise, if `pp-exact` is not specified, then the pretty-printed output will be pretty-printed one more time, and the output of the two pretty-printing rounds will be compared to ensure that the pretty-printed output converges to a steady state. @@ -144,13 +149,12 @@ The directives for pretty-printing tests are: ### Incremental tests -The tests in [`tests/incremental`] exercise incremental compilation. They use -[`revisions` directive](#revisions) to tell compiletest to run the compiler in a +The tests in [`tests/incremental`] exercise incremental compilation. 
+They use the [`revisions` directive](#revisions) to tell compiletest to run the compiler in a
series of steps.

Compiletest starts with an empty directory with the `-C incremental` flag, and
-then runs the compiler for each revision, reusing the incremental results from
-previous steps.
+then runs the compiler for each revision, reusing the incremental results from previous steps.

The revisions should start with:

@@ -158,8 +162,7 @@ The revisions should start with:
* `rfail` — the test should compile successfully, but the executable should fail to run
* `cfail` — the test should fail to compile

-To make the revisions unique, you should add a suffix like `rpass1` and
-`rpass2`.
+To make the revisions unique, you should add a suffix like `rpass1` and `rpass2`.

To simulate changing the source, compiletest also passes a `--cfg` flag with the
current revision name.

@@ -183,20 +186,20 @@ fn main() { foo(); }
```

`cfail` tests support the `forbid-output` directive to specify that a certain
-substring must not appear anywhere in the compiler output. This can be useful to
-ensure certain errors do not appear, but this can be fragile as error messages
-change over time, and a test may no longer be checking the right thing but will
-still pass.
+substring must not appear anywhere in the compiler output.
+This can be useful to ensure certain errors do not appear, but this can be fragile as error messages
+change over time, and a test may no longer be checking the right thing but will still pass.

`cfail` tests support the `should-ice` directive to specify that a test should
-cause an Internal Compiler Error (ICE). This is a highly specialized directive
+cause an Internal Compiler Error (ICE).
+This is a highly specialized directive
to check that the incremental cache continues to work after an ICE.

-Incremental tests may use the attribute `#[rustc_clean(...)]` attribute. This attribute compares
-the fingerprint from the current compilation session with the previous one.
+Incremental tests may use the `#[rustc_clean(...)]` attribute.
+This attribute compares the fingerprint from the current compilation session with the previous one.
The first revision should never have an active `rustc_clean` attribute, since it will always be dirty.

-In the default mode, it asserts that the fingerprints must be the same.
+In the default mode, it asserts that the fingerprints must be the same.
The attribute takes the following arguments:

* `cfg=""` — checks the cfg condition ``, and only runs the check if the config condition evaluates to true.
@@ -204,9 +207,10 @@ The attribute takes the following arguments:
* `except=",,..."` — asserts that the query results for the listed queries
  must be different, rather than the same.
* `loaded_from_disk=",,..."` — asserts that the query results for the listed queries
-  were actually loaded from disk (not just marked green).
+  were actually loaded from disk (not just marked green).
  This can be useful to ensure that a test is actually exercising the deserialization
-  logic for a particular query result. This can be combined with `except`.
+  logic for a particular query result.
+  This can be combined with `except`.

A simple example of a test using `rustc_clean` is the [hello_world test].

@@ -215,9 +219,9 @@ A simple example of a test using `rustc_clean` is the [hello_world test].

### Debuginfo tests

-The tests in [`tests/debuginfo`] test debuginfo generation. They build a
-program, launch a debugger, and issue commands to the debugger. A single test
-can work with cdb, gdb, and lldb.
+The tests in [`tests/debuginfo`] test debuginfo generation.
+They build a program, launch a debugger, and issue commands to the debugger.
+A single test can work with cdb, gdb, and lldb.

Most tests should have the `//@ compile-flags: -g` directive or something
similar to generate the appropriate debuginfo.
@@ -228,8 +232,7 @@ The debuginfo tests consist of a series of debugger commands along with "check"
lines which specify output that is expected from the debugger.

The commands are comments of the form `// $DEBUGGER-command:$COMMAND` where
-`$DEBUGGER` is the debugger being used and `$COMMAND` is the debugger command
-to execute.
+`$DEBUGGER` is the debugger being used and `$COMMAND` is the debugger command to execute.

The debugger values can be:

@@ -245,8 +248,7 @@ The command to check the output are of the form `// $DEBUGGER-check:$OUTPUT`
where `$OUTPUT` is the output to expect.

For example, the following will build the test, start the debugger, set a
-breakpoint, launch the program, inspect a value, and check what the debugger
-prints:
+breakpoint, launch the program, inspect a value, and check what the debugger prints:

```rust,ignore
//@ compile-flags: -g
@@ -268,17 +270,16 @@ the debugger currently being used:

- `min-cdb-version: 10.0.18317.1001` — ignores the test if the version of cdb
  is below the given version
-- `min-gdb-version: 8.2` — ignores the test if the version of gdb is below the
-  given version
+- `min-gdb-version: 8.2` — ignores the test if the version of gdb is below the given version
- `ignore-gdb-version: 9.2` — ignores the test if the version of gdb is equal
  to the given version
- `ignore-gdb-version: 7.11.90 - 8.0.9` — ignores the test if the version of
  gdb is in a range (inclusive)
-- `min-lldb-version: 310` — ignores the test if the version of lldb is below
-  the given version
-- `rust-lldb` — ignores the test if lldb is not contain the Rust plugin. NOTE:
-  The "Rust" version of LLDB doesn't exist anymore, so this will always be
-  ignored. This should probably be removed.
+- `min-lldb-version: 310` — ignores the test if the version of lldb is below the given version
+- `rust-lldb` — ignores the test if lldb does not contain the Rust plugin.
+  NOTE: The "Rust" version of LLDB doesn't exist anymore, so this will always be
+  ignored.
+ This should probably be removed. By passing the `--debugger` option to compiletest, you can specify a single debugger to run tests with. For example, `./x test tests/debuginfo -- --debugger gdb` will only test GDB commands. @@ -311,17 +312,18 @@ For example, `./x test tests/debuginfo -- --debugger gdb` will only test GDB com ### Codegen tests -The tests in [`tests/codegen-llvm`] test LLVM code generation. They compile the -test with the `--emit=llvm-ir` flag to emit LLVM IR. They then run the LLVM -[FileCheck] tool. The test is annotated with various `// CHECK` comments to -check the generated code. See the [FileCheck] documentation for a tutorial and -more information. +The tests in [`tests/codegen-llvm`] test LLVM code generation. +They compile the test with the `--emit=llvm-ir` flag to emit LLVM IR. +They then run the LLVM [FileCheck] tool. +The test is annotated with various `// CHECK` comments to check the generated code. +See the [FileCheck] documentation for a tutorial and more information. See also the [assembly tests](#assembly-tests) for a similar set of tests. By default, codegen tests will have `//@ needs-target-std` *implied* (that the target needs to support std), *unless* the `#![no_std]`/`#![no_core]` attribute -was specified in the test source. You can override this behavior and explicitly +was specified in the test source. +You can override this behavior and explicitly write `//@ needs-target-std` to only run the test when target supports std, even if the test is `#![no_std]`/`#![no_core]`. @@ -334,17 +336,15 @@ If you need to work with `#![no_std]` cross-compiling tests, consult the ### Assembly tests -The tests in [`tests/assembly-llvm`] test LLVM assembly output. They compile the test -with the `--emit=asm` flag to emit a `.s` file with the assembly output. They -then run the LLVM [FileCheck] tool. +The tests in [`tests/assembly-llvm`] test LLVM assembly output. 
+They compile the test with the `--emit=asm` flag to emit a `.s` file with the assembly output. +They then run the LLVM [FileCheck] tool. Each test should be annotated with the `//@ assembly-output:` directive with a -value of either `emit-asm` or `ptx-linker` to indicate the type of assembly -output. +value of either `emit-asm` or `ptx-linker` to indicate the type of assembly output. -Then, they should be annotated with various `// CHECK` comments to check the -assembly output. See the [FileCheck] documentation for a tutorial and more -information. +Then, they should be annotated with various `// CHECK` comments to check the assembly output. +See the [FileCheck] documentation for a tutorial and more information. See also the [codegen tests](#codegen-tests) for a similar set of tests. @@ -364,13 +364,11 @@ monomorphization collection pass, i.e., `-Zprint-mono-items`, and then special annotations in the file are used to compare against that. Then, the test should be annotated with comments of the form `//~ MONO_ITEM -name` where `name` is the monomorphized string printed by rustc like `fn ::foo`. +name` where `name` is the monomorphized string printed by rustc like `fn ::foo`. To check for CGU partitioning, a comment of the form `//~ MONO_ITEM name @@ cgu` -where `cgu` is a space separated list of the CGU names and the linkage -information in brackets. For example: `//~ MONO_ITEM static function::FOO @@ -statics[Internal]` +where `cgu` is a space separated list of the CGU names and the linkage information in brackets. +For example: `//~ MONO_ITEM static function::FOO @@ statics[Internal]` [`tests/codegen-units`]: https://github.com/rust-lang/rust/tree/HEAD/tests/codegen-units @@ -378,8 +376,8 @@ statics[Internal]` ### Mir-opt tests The tests in [`tests/mir-opt`] check parts of the generated MIR to make sure it -is generated correctly and is doing the expected optimizations. Check out the -[MIR Optimizations](../mir/optimizations.md) chapter for more. 
+is generated correctly and is doing the expected optimizations. +Check out the [MIR Optimizations](../mir/optimizations.md) chapter for more. Compiletest will build the test with several flags to dump the MIR output and set a baseline for optimizations: @@ -391,23 +389,24 @@ set a baseline for optimizations: * `-Zdump-mir-exclude-pass-number` The test should be annotated with `// EMIT_MIR` comments that specify files that -will contain the expected MIR output. You can use `x test --bless` to create the -initial expected files. +will contain the expected MIR output. +You can use `x test --bless` to create the initial expected files. There are several forms the `EMIT_MIR` comment can take: - `// EMIT_MIR $MIR_PATH.mir` — This will check that the given filename matches - the exact output from the MIR dump. For example, + the exact output from the MIR dump. + For example, `my_test.main.SimplifyCfg-elaborate-drops.after.mir` will load that file from the test directory, and compare it against the dump from rustc. Checking the "after" file (which is after optimization) is useful if you are - interested in the final state after an optimization. Some rare cases may want - to use the "before" file for completeness. + interested in the final state after an optimization. + Some rare cases may want to use the "before" file for completeness. - `// EMIT_MIR $MIR_PATH.diff` — where `$MIR_PATH` is the filename of the MIR - dump, such as `my_test_name.my_function.EarlyOtherwiseBranch`. Compiletest - will diff the `.before.mir` and `.after.mir` files, and compare the diff + dump, such as `my_test_name.my_function.EarlyOtherwiseBranch`. + Compiletest will diff the `.before.mir` and `.after.mir` files, and compare the diff output to the expected `.diff` file from the `EMIT_MIR` comment. This is useful if you want to see how an optimization changes the MIR. @@ -417,8 +416,8 @@ There are several forms the `EMIT_MIR` comment can take: check that the output matches the given file. 
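+
+As a sketch (the file and function names here are hypothetical), a test asking for the
+diff of the `EarlyOtherwiseBranch` pass could look like the following, with the expected
+`my_test_name.my_function.EarlyOtherwiseBranch.diff` file created via `x test --bless`:
+
+```rust,ignore
+// EMIT_MIR my_test_name.my_function.EarlyOtherwiseBranch.diff
+fn my_function(x: Option<u32>) -> u32 {
+    match x {
+        Some(v) => v,
+        None => 0,
+    }
+}
+
+fn main() {
+    my_function(Some(1));
+}
+```
+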
By default 32 bit and 64 bit targets use the same dump files, which can be
-problematic in the presence of pointers in constants or other bit width
-dependent things. In that case you can add `// EMIT_MIR_FOR_EACH_BIT_WIDTH` to
+problematic in the presence of pointers in constants or other bit width dependent things.
+In that case you can add `// EMIT_MIR_FOR_EACH_BIT_WIDTH` to
your test, causing separate files to be generated for 32bit and 64bit systems.

[`tests/mir-opt`]: https://github.com/rust-lang/rust/tree/HEAD/tests/mir-opt

@@ -428,9 +427,8 @@ your test, causing separate files to be generated for 32bit and 64bit systems.

The tests in [`tests/run-make`] and [`tests/run-make-cargo`] are general-purpose
tests using Rust *recipes*, which are small programs (`rmake.rs`) allowing
-arbitrary Rust code such as `rustc` invocations, and is supported by a
-[`run_make_support`] library. Using Rust recipes provide the ultimate in
-flexibility.
+arbitrary Rust code such as `rustc` invocations, and are supported by a [`run_make_support`] library.
+Using Rust recipes provides the ultimate in flexibility.

`run-make` tests should be used if no other test suites better suit your needs.

@@ -441,9 +439,9 @@ faster-to-iterate test suite).

### `build-std` tests

-The tests in [`tests/build-std`] check that `-Zbuild-std` works. This is currently
-just a run-make test suite with a single recipe. The recipe generates test cases
-and runs them in parallel.
+The tests in [`tests/build-std`] check that `-Zbuild-std` works.
+This is currently just a run-make test suite with a single recipe.
+The recipe generates test cases and runs them in parallel.

[`tests/build-std`]: https://github.com/rust-lang/rust/tree/HEAD/tests/build-std

@@ -457,18 +455,16 @@ If you need new utilities or functionality, consider extending and improving
the [`run_make_support`] library.

Compiletest directives like `//@ only-` or `//@ ignore-` are
-supported in `rmake.rs`, like in UI tests.
However, revisions or building -auxiliary via directives are not currently supported. +supported in `rmake.rs`, like in UI tests. +However, revisions or building auxiliary via directives are not currently supported. `rmake.rs` and `run-make-support` may *not* use any nightly/unstable features, -as they must be compilable by a stage 0 rustc that may be a beta or even stable -rustc. +as they must be compilable by a stage 0 rustc that may be a beta or even stable rustc. #### Quickly check if `rmake.rs` tests can be compiled You can quickly check if `rmake.rs` tests can be compiled without having to -build stage1 rustc by forcing `rmake.rs` to be compiled with the stage0 -compiler: +build stage1 rustc by forcing `rmake.rs` to be compiled with the stage0 compiler: ```bash $ COMPILETEST_FORCE_STAGE0=1 x test --stage 0 tests/run-make/ @@ -525,7 +521,8 @@ Then add a corresponding entry to `"rust-analyzer.linkedProjects"` ### Coverage tests The tests in [`tests/coverage`] are shared by multiple test modes that test -coverage instrumentation in different ways. Running the `coverage` test suite +coverage instrumentation in different ways. +Running the `coverage` test suite will automatically run each test in all of the different coverage modes. Each mode also has an alias to run the coverage tests in just that mode: @@ -543,31 +540,28 @@ Each mode also has an alias to run the coverage tests in just that mode: ``` If a particular test should not be run in one of the coverage test modes for -some reason, use the `//@ ignore-coverage-map` or `//@ ignore-coverage-run` -directives. +some reason, use the `//@ ignore-coverage-map` or `//@ ignore-coverage-run` directives. #### `coverage-map` suite In `coverage-map` mode, these tests verify the mappings between source code -regions and coverage counters that are emitted by LLVM. 
They compile the test -with `--emit=llvm-ir`, then use a custom tool ([`src/tools/coverage-dump`]) to -extract and pretty-print the coverage mappings embedded in the IR. These tests -don't require the profiler runtime, so they run in PR CI jobs and are easy to +regions and coverage counters that are emitted by LLVM. +They compile the test with `--emit=llvm-ir`, then use a custom tool ([`src/tools/coverage-dump`]) to +extract and pretty-print the coverage mappings embedded in the IR. +These tests don't require the profiler runtime, so they run in PR CI jobs and are easy to run/bless locally. These coverage map tests can be sensitive to changes in MIR lowering or MIR -optimizations, producing mappings that are different but produce identical -coverage reports. +optimizations, producing mappings that are different but produce identical coverage reports. As a rule of thumb, any PR that doesn't change coverage-specific code should **feel free to re-bless** the `coverage-map` tests as necessary, without -worrying about the actual changes, as long as the `coverage-run` tests still -pass. +worrying about the actual changes, as long as the `coverage-run` tests still pass. #### `coverage-run` suite -In `coverage-run` mode, these tests perform an end-to-end test of coverage -reporting. They compile a test program with coverage instrumentation, run that +In `coverage-run` mode, these tests perform an end-to-end test of coverage reporting. +They compile a test program with coverage instrumentation, run that program to produce raw coverage data, and then use LLVM tools to process that data into a human-readable code coverage report. @@ -587,8 +581,8 @@ as part of the full set of CI jobs used for merging. #### `coverage-run-rustdoc` suite The tests in [`tests/coverage-run-rustdoc`] also run instrumented doctests and -include them in the coverage report. This avoids having to build rustdoc when -only running the main `coverage` suite. +include them in the coverage report. 
+This avoids having to build rustdoc when only running the main `coverage` suite.

[`tests/coverage`]: https://github.com/rust-lang/rust/tree/HEAD/tests/coverage
[`src/tools/coverage-dump`]: https://github.com/rust-lang/rust/tree/HEAD/src/tools/coverage-dump

@@ -597,13 +591,12 @@ only running the main `coverage` suite.

### Crash tests

[`tests/crashes`] serve as a collection of tests that are expected to cause the
-compiler to ICE, panic or crash in some other way, so that accidental fixes are
-tracked. Formerly, this was done at <https://github.com/rust-lang/glacier> but
+compiler to ICE, panic or crash in some other way, so that accidental fixes are tracked.
+Formerly, this was done at <https://github.com/rust-lang/glacier> but
doing it inside the rust-lang/rust testsuite is more convenient.

-It is imperative that a test in the suite causes rustc to ICE, panic, or
-crash in some other way. A test will "pass" if rustc exits with an exit status
-other than 1 or 0.
+It is imperative that a test in the suite causes rustc to ICE, panic, or crash in some other way.
+A test will "pass" if rustc exits with an exit status other than 1 or 0.

If you want to see verbose stdout/stderr, you need to set
`COMPILETEST_VERBOSE_CRASHES=1`, e.g.

@@ -612,16 +605,15 @@ If you want to see verbose stdout/stderr, you need to set
$ COMPILETEST_VERBOSE_CRASHES=1 ./x test tests/crashes/999999.rs --stage 1
```

-Anyone can add ["untracked" crashes] from the issue tracker. It's strongly
-recommended to include test cases from several issues in a single PR.
+Anyone can add ["untracked" crashes] from the issue tracker.
+It's strongly recommended to include test cases from several issues in a single PR.

When you do so, each issue number should be noted in the file name (`12345.rs`
-should suffice) and also inside the file by means of a `//@ known-bug: #12345`
-directive. Please [label][labeling] the relevant issues with `S-bug-has-test`
-once your PR is merged.
+should suffice) and also inside the file by means of a `//@ known-bug: #12345` directive.
+Please [label][labeling] the relevant issues with `S-bug-has-test` once your PR is merged. If you happen to fix one of the crashes, please move it to a fitting -subdirectory in `tests/ui` and give it a meaningful name. Please add a doc -comment at the top of the file explaining why this test exists, even better if +subdirectory in `tests/ui` and give it a meaningful name. +Please add a doc comment at the top of the file explaining why this test exists, even better if you can briefly explain how the example causes rustc to crash previously and what was done to prevent rustc to ICE / panic / crash. @@ -635,8 +627,8 @@ Fixes #MMMMM to the description of your pull request will ensure the corresponding tickets be closed automatically upon merge. -Make sure that your fix actually fixes the root cause of the issue and not just -a subset first. The issue numbers can be found in the file name or the `//@ +Make sure that your fix actually fixes the root cause of the issue and not just a subset first. +The issue numbers can be found in the file name or the `//@ known-bug` directive inside the test file. [`tests/crashes`]: https://github.com/rust-lang/rust/tree/HEAD/tests/crashes @@ -654,8 +646,8 @@ There are multiple [directives](directives.md) to assist with that: - `aux-codegen-backend` - `proc-macro` -`aux-build` will build a separate crate from the named source file. The source -file should be in a directory called `auxiliary` beside the test file. +`aux-build` will build a separate crate from the named source file. +The source file should be in a directory called `auxiliary` beside the test file. ```rust,ignore //@ aux-build: my-helper.rs @@ -665,44 +657,48 @@ extern crate my_helper; ``` The aux crate will be built as a dylib if possible (unless on a platform that -does not support them, or the `no-prefer-dynamic` header is specified in the aux -file). The `-L` flag is used to find the extern crates. 
+does not support them, or the `no-prefer-dynamic` header is specified in the aux file). +The `-L` flag is used to find the extern crates. -`aux-crate` is very similar to `aux-build`. However, it uses the `--extern` flag +`aux-crate` is very similar to `aux-build`. +However, it uses the `--extern` flag to link to the extern crate to make the crate be available as an extern prelude. That allows you to specify the additional syntax of the `--extern` flag, such as -renaming a dependency. For example, `//@ aux-crate:foo=bar.rs` will compile +renaming a dependency. +For example, `//@ aux-crate:foo=bar.rs` will compile `auxiliary/bar.rs` and make it available under then name `foo` within the test. -This is similar to how Cargo does dependency renaming. It is also possible to +This is similar to how Cargo does dependency renaming. +It is also possible to specify [`--extern` modifiers](https://github.com/rust-lang/rust/issues/98405). For example, `//@ aux-crate:noprelude:foo=bar.rs`. -`aux-bin` is similar to `aux-build` but will build a binary instead of a -library. The binary will be available in `auxiliary/bin` relative to the working -directory of the test. +`aux-bin` is similar to `aux-build` but will build a binary instead of a library. +The binary will be available in `auxiliary/bin` relative to the working directory of the test. `aux-codegen-backend` is similar to `aux-build`, but will then pass the compiled -dylib to `-Zcodegen-backend` when building the main file. This will only work -for tests in `tests/ui-fulldeps`, since it requires the use of compiler crates. +dylib to `-Zcodegen-backend` when building the main file. +This will only work for tests in `tests/ui-fulldeps`, since it requires the use of compiler crates. ### Auxiliary proc-macro If you want a proc-macro dependency, then you can use the `proc-macro` directive. This directive behaves just like `aux-build`, i.e. 
that you should place the proc-macro test auxiliary file under a `auxiliary` folder under the -same parent folder as the main test file. However, it also has four additional +same parent folder as the main test file. +However, it also has four additional preset behavior compared to `aux-build` for the proc-macro test auxiliary: 1. The aux test file is built with `--crate-type=proc-macro`. 2. The aux test file is built without `-C prefer-dynamic`, i.e. it will not try to produce a dylib for the aux crate. 3. The aux crate is made available to the test file via extern prelude with - `--extern `. Note that since UI tests default to edition + `--extern `. + Note that since UI tests default to edition 2015, you still need to specify `extern ` unless the main test file is using an edition that is 2018 or newer if you want to use the aux crate name in a `use` import. -4. The `proc_macro` crate is made available as an extern prelude module. Same - edition 2015 vs newer edition distinction for `extern proc_macro;` applies. +4. The `proc_macro` crate is made available as an extern prelude module. + Same edition 2015 vs newer edition distinction for `extern proc_macro;` applies. For example, you might have a test `tests/ui/cat/meow.rs` and proc-macro auxiliary `tests/ui/cat/auxiliary/whiskers.rs`: @@ -744,19 +740,19 @@ pub fn identity(ts: TokenStream) -> TokenStream { ## Revisions -Revisions allow a single test file to be used for multiple tests. This is done -by adding a special directive at the top of the file: +Revisions allow a single test file to be used for multiple tests. +This is done by adding a special directive at the top of the file: ```rust,ignore //@ revisions: foo bar baz ``` This will result in the test being compiled (and tested) three times, once with -`--cfg foo`, once with `--cfg bar`, and once with `--cfg baz`. You can therefore -use `#[cfg(foo)]` etc within the test to tweak each of these results. 
+`--cfg foo`, once with `--cfg bar`, and once with `--cfg baz`. +You can therefore use `#[cfg(foo)]` etc within the test to tweak each of these results. -You can also customize directives and expected error messages to a particular -revision. To do this, add `[revision-name]` after the `//@` for directives, and +You can also customize directives and expected error messages to a particular revision. +To do this, add `[revision-name]` after the `//@` for directives, and after `//` for UI error annotations, like so: ```rust,ignore @@ -769,8 +765,7 @@ fn test_foo() { } ``` -Multiple revisions can be specified in a comma-separated list, such as -`//[foo,bar,baz]~^`. +Multiple revisions can be specified in a comma-separated list, such as `//[foo,bar,baz]~^`. In test suites that use the LLVM [FileCheck] tool, the current revision name is also registered as an additional prefix for FileCheck directives: @@ -787,14 +782,13 @@ also registered as an additional prefix for FileCheck directives: fn main() {} ``` -Note that not all directives have meaning when customized to a revision. For -example, the `ignore-test` directives (and all "ignore" directives) currently -only apply to the test as a whole, not to particular revisions. The only -directives that are intended to really work when customized to a revision are +Note that not all directives have meaning when customized to a revision. +For example, the `ignore-test` directives (and all "ignore" directives) currently +only apply to the test as a whole, not to particular revisions. +The only directives that are intended to really work when customized to a revision are error patterns and compiler flags. 
-
-The following test suites support revisions:
+The following test suites support revisions:

- ui
- assembly

@@ -802,14 +796,13 @@ The following test suites support revisions:
- coverage
- debuginfo
- rustdoc UI tests
-- incremental (these are special in that they inherently cannot be run in
-  parallel)
+- incremental (these are special in that they inherently cannot be run in parallel)

### Ignoring unused revision names

Normally, revision names mentioned in other directives and error annotations
-must correspond to an actual revision declared in a `revisions` directive. This is
-enforced by an `./x test tidy` check.
+must correspond to an actual revision declared in a `revisions` directive.
+This is enforced by an `./x test tidy` check.

If a revision name needs to be temporarily removed from the revision list for
some reason, the above check can be suppressed by adding the revision name to an
@@ -825,8 +818,7 @@ used to compare the behavior of all tests with different compiler flags enabled.
This can help highlight what differences might appear with certain flags, and
check for any problems that might arise.

-To run the tests in a different mode, you need to pass the `--compare-mode` CLI
-flag:
+To run the tests in a different mode, you need to pass the `--compare-mode` CLI flag:

```bash
./x test tests/ui --compare-mode=chalk
```

The possible compare modes are:

- `polonius` — Runs with Polonius with `-Zpolonius`.
- `chalk` — Runs with Chalk with `-Zchalk`.
-- `split-dwarf` — Runs with unpacked split-DWARF with
-  `-Csplit-debuginfo=unpacked`.
-- `split-dwarf-single` — Runs with packed split-DWARF with
-  `-Csplit-debuginfo=packed`.
+- `split-dwarf` — Runs with unpacked split-DWARF with `-Csplit-debuginfo=unpacked`.
+- `split-dwarf-single` — Runs with packed split-DWARF with `-Csplit-debuginfo=packed`.

See [UI compare modes](ui.md#compare-modes) for more information about how UI
tests support different output for different modes.
-In CI, compare modes are only used in one Linux builder, and only with the
-following settings:
+In CI, compare modes are only used in one Linux builder, and only with the following settings:

-- `tests/debuginfo`: Uses `split-dwarf` mode. This helps ensure that none of the
-  debuginfo tests are affected when enabling split-DWARF.
+- `tests/debuginfo`: Uses `split-dwarf` mode.
+  This helps ensure that none of the debuginfo tests are affected when enabling split-DWARF.

-Note that compare modes are separate to [revisions](#revisions). All revisions
-are tested when running `./x test tests/ui`, however compare-modes must be
+Note that compare modes are separate from [revisions](#revisions).
+All revisions are tested when running `./x test tests/ui`; however, compare modes must be
manually run individually via the `--compare-mode` flag.