Auto merge of #152484 - matthiaskrgr:rollup-h4u26eb, r=matthiaskrgr

Rollup of 9 pull requests. Successful merges:

- rust-lang/rust#152419 (Move more query system code)
- rust-lang/rust#152431 (Restrict the set of things that const stability can be applied to)
- rust-lang/rust#152436 (Reenable a GCI+mGCA+GCPT test case)
- rust-lang/rust#152021 (Bump tvOS, visionOS and watchOS Aarch64 targets to tier 2)
- rust-lang/rust#152146 (mGCA: Add associated const type check)
- rust-lang/rust#152372 (style: remove unneeded trailing commas)
- rust-lang/rust#152383 (BikeshedGuaranteedNoDrop trait: add comments indicating that it can be observed on stable)
- rust-lang/rust#152397 (Update books)
- rust-lang/rust#152441 (Fix typos and grammar in top-level and src/doc documentation)

Commit: 5fdff787e6
61 changed files with 922 additions and 744 deletions
@@ -10,7 +10,7 @@ the Zulip stream is the best place to *ask* for help.
 
 Documentation for contributing to the compiler or tooling is located in the [Guide to Rustc
 Development][rustc-dev-guide], commonly known as the [rustc-dev-guide]. Documentation for the
-standard library in the [Standard library developers Guide][std-dev-guide], commonly known as the [std-dev-guide].
+standard library is in the [Standard library developers Guide][std-dev-guide], commonly known as the [std-dev-guide].
 
 ## Making changes to subtrees and submodules
 
@@ -4492,6 +4492,7 @@ name = "rustc_query_impl"
 version = "0.0.0"
 dependencies = [
  "measureme",
+ "rustc_abi",
 "rustc_data_structures",
 "rustc_errors",
 "rustc_hashes",
@@ -4501,7 +4502,9 @@ dependencies = [
 "rustc_middle",
 "rustc_query_system",
 "rustc_serialize",
+ "rustc_session",
 "rustc_span",
+ "rustc_thread_pool",
 "tracing",
 ]
@@ -233,7 +233,7 @@ itself back on after some time).
 
 ### MSVC
 
-MSVC builds of Rust additionally requires an installation of:
+MSVC builds of Rust additionally require an installation of:
 
 - Visual Studio 2022 (or later) build tools so `rustc` can use its linker. Older
   Visual Studio versions such as 2019 *may* work but aren't actively tested.
@@ -1546,7 +1546,7 @@ Compatibility Notes
 - [Check well-formedness of the source type's signature in fn pointer casts.](https://github.com/rust-lang/rust/pull/129021) This partly closes a soundness hole that comes when casting a function item to function pointer
 - [Use equality instead of subtyping when resolving type dependent paths.](https://github.com/rust-lang/rust/pull/129073)
 - Linking on macOS now correctly includes Rust's default deployment target. Due to a linker bug, you might have to pass `MACOSX_DEPLOYMENT_TARGET` or fix your `#[link]` attributes to point to the correct frameworks. See <https://github.com/rust-lang/rust/pull/129369>.
-- [Rust will now correctly raise an error for `repr(Rust)` written on non-`struct`/`enum`/`union` items, since it previous did not have any effect.](https://github.com/rust-lang/rust/pull/129422)
+- [Rust will now correctly raise an error for `repr(Rust)` written on non-`struct`/`enum`/`union` items, since it previously did not have any effect.](https://github.com/rust-lang/rust/pull/129422)
 - The future incompatibility lint `deprecated_cfg_attr_crate_type_name` [has been made into a hard error](https://github.com/rust-lang/rust/pull/129670). It was used to deny usage of `#![crate_type]` and `#![crate_name]` attributes in `#![cfg_attr]`, which required a hack in the compiler to be able to change the used crate type and crate name after cfg expansion.
   Users can use `--crate-type` instead of `#![cfg_attr(..., crate_type = "...")]` and `--crate-name` instead of `#![cfg_attr(..., crate_name = "...")]` when running `rustc`/`cargo rustc` on the command line.
   Use of those two attributes outside of `#![cfg_attr]` continue to be fully supported.
@@ -1722,7 +1722,7 @@ Cargo
 Compatibility Notes
 -------------------
 - We now [disallow setting some built-in cfgs via the command-line](https://github.com/rust-lang/rust/pull/126158) with the newly added [`explicit_builtin_cfgs_in_flags`](https://doc.rust-lang.org/rustc/lints/listing/deny-by-default.html#explicit-builtin-cfgs-in-flags) lint in order to prevent incoherent state, eg. `windows` cfg active but target is Linux based. The appropriate [`rustc` flag](https://doc.rust-lang.org/rustc/command-line-arguments.html) should be used instead.
-- The standard library has a new implementation of `binary_search` which is significantly improves performance ([#128254](https://github.com/rust-lang/rust/pull/128254)). However when a sorted slice has multiple values which compare equal, the new implementation may select a different value among the equal ones than the old implementation.
+- The standard library has a new implementation of `binary_search` which significantly improves performance ([#128254](https://github.com/rust-lang/rust/pull/128254)). However when a sorted slice has multiple values which compare equal, the new implementation may select a different value among the equal ones than the old implementation.
 - [illumos/Solaris now sets `MSG_NOSIGNAL` when writing to sockets](https://github.com/rust-lang/rust/pull/128259). This avoids killing the process with SIGPIPE when writing to a closed socket, which matches the existing behavior on other UNIX targets.
 - [Removes a problematic hack that always passed the --whole-archive linker flag for tests, which may cause linker errors for code accidentally relying on it.](https://github.com/rust-lang/rust/pull/128400)
 - The WebAssembly target features `multivalue` and `reference-types` are now
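The `binary_search` note above is easy to trip over in tests that pin an exact index. A minimal sketch of the portable way to write such code (assert membership, or use `partition_point` when you need a deterministic boundary):

```rust
fn main() {
    // A sorted slice with several elements that compare equal.
    let xs = [1, 2, 2, 2, 3];

    // `binary_search` only promises *some* index of a matching element,
    // so assert membership in the valid range rather than a fixed index.
    let idx = xs.binary_search(&2).unwrap();
    assert!((1..=3).contains(&idx));
    assert_eq!(xs[idx], 2);

    // `partition_point` pins down the boundaries deterministically.
    assert_eq!(xs.partition_point(|&x| x < 2), 1); // first index with value 2
    assert_eq!(xs.partition_point(|&x| x <= 2), 4); // one past the last 2
}
```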
@@ -1872,7 +1872,7 @@ These changes do not affect any public interfaces of Rust, but they represent
 significant improvements to the performance or internals of rustc and related
 tools.
 
-- [Add a Rust-for Linux `auto` CI job to check kernel builds.](https://github.com/rust-lang/rust/pull/125209/)
+- [Add a Rust-for-Linux `auto` CI job to check kernel builds.](https://github.com/rust-lang/rust/pull/125209/)
 
 Version 1.80.1 (2024-08-08)
 ===========================
@@ -4510,7 +4510,7 @@ Compatibility Notes
   saturating to `0` instead][89926]. In the real world the panic happened mostly
   on platforms with buggy monotonic clock implementations rather than catching
   programming errors like reversing the start and end times. Such programming
-  errors will now results in `0` rather than a panic.
+  errors will now result in `0` rather than a panic.
 - In a future release we're planning to increase the baseline requirements for
   the Linux kernel to version 3.2, and for glibc to version 2.17. We'd love
   your feedback in [PR #95026][95026].
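The saturating behavior described in that note can be seen directly: with the operands reversed, `Instant::duration_since` now returns a zero duration instead of panicking. A small sketch:

```rust
use std::time::{Duration, Instant};

fn main() {
    let earlier = Instant::now();
    let later = Instant::now();

    // `later` is at or after `earlier`, so measuring "how long after `later`
    // did `earlier` happen" is a reversed comparison: it saturates to zero
    // rather than panicking as it once did.
    assert_eq!(earlier.duration_since(later), Duration::ZERO);
    assert_eq!(earlier.saturating_duration_since(later), Duration::ZERO);
}
```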
@@ -244,7 +244,20 @@ impl<S: Stage> AttributeParser<S> for ConstStabilityParser {
             this.promotable = true;
         }),
     ];
-    const ALLOWED_TARGETS: AllowedTargets = ALLOWED_TARGETS;
+    const ALLOWED_TARGETS: AllowedTargets = AllowedTargets::AllowList(&[
+        Allow(Target::Fn),
+        Allow(Target::Method(MethodKind::Inherent)),
+        Allow(Target::Method(MethodKind::TraitImpl)),
+        Allow(Target::Method(MethodKind::Trait { body: true })),
+        Allow(Target::Impl { of_trait: false }),
+        Allow(Target::Impl { of_trait: true }),
+        Allow(Target::Use), // FIXME I don't think this does anything?
+        Allow(Target::Const),
+        Allow(Target::AssocConst),
+        Allow(Target::Trait),
+        Allow(Target::Static),
+        Allow(Target::Crate),
+    ]);
 
     fn finalize(mut self, cx: &FinalizeContext<'_, '_, S>) -> Option<AttributeKind> {
         if self.promotable {
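The hunk above replaces a blanket default with an explicit allow list of attribute targets. As a standalone sketch of the allow-list pattern itself (the `Target` variants and `CONST_STABILITY_TARGETS` name here are hypothetical simplifications, not the compiler's own types):

```rust
#[derive(Debug, PartialEq)]
enum Target {
    Fn,
    Const,
    Static,
    Struct,
}

enum AllowedTargets {
    AllowList(&'static [Target]),
}

impl AllowedTargets {
    /// An attribute is accepted only if its target appears in the list.
    fn is_allowed(&self, target: &Target) -> bool {
        match self {
            AllowedTargets::AllowList(list) => list.contains(target),
        }
    }
}

// Hypothetical allow list mirroring the shape of the real one in the diff.
const CONST_STABILITY_TARGETS: AllowedTargets =
    AllowedTargets::AllowList(&[Target::Fn, Target::Const, Target::Static]);

fn main() {
    assert!(CONST_STABILITY_TARGETS.is_allowed(&Target::Fn));
    // A struct definition is not in the allow list, so the attribute is rejected.
    assert!(!CONST_STABILITY_TARGETS.is_allowed(&Target::Struct));
}
```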
@@ -962,15 +962,6 @@ impl SyntaxExtension {
 
         let stability = find_attr!(attrs, AttributeKind::Stability { stability, .. } => *stability);
 
-        // FIXME(jdonszelmann): make it impossible to miss the or_else in the typesystem
-        if let Some(sp) =
-            find_attr!(attrs, AttributeKind::RustcConstStability { span, .. } => *span)
-        {
-            sess.dcx().emit_err(errors::MacroConstStability {
-                span: sp,
-                head_span: sess.source_map().guess_head_span(span),
-            });
-        }
         if let Some(sp) = find_attr!(attrs, AttributeKind::RustcBodyStability{ span, .. } => *span)
         {
             sess.dcx().emit_err(errors::MacroBodyStability {
@@ -80,16 +80,6 @@ pub(crate) struct ResolveRelativePath {
     pub path: String,
 }
 
-#[derive(Diagnostic)]
-#[diag("macros cannot have const stability attributes")]
-pub(crate) struct MacroConstStability {
-    #[primary_span]
-    #[label("invalid const stability attribute")]
-    pub span: Span,
-    #[label("const stability attribute affects this macro")]
-    pub head_span: Span,
-}
-
 #[derive(Diagnostic)]
 #[diag("macros cannot have body stability attributes")]
 pub(crate) struct MacroBodyStability {
@@ -1569,11 +1569,40 @@ pub(super) fn check_where_clauses<'tcx>(wfcx: &WfCheckingCtxt<'_, 'tcx>, def_id:
 
     let predicates = predicates.instantiate_identity(tcx);
 
+    let assoc_const_obligations: Vec<_> = predicates
+        .predicates
+        .iter()
+        .copied()
+        .zip(predicates.spans.iter().copied())
+        .filter_map(|(clause, sp)| {
+            let proj = clause.as_projection_clause()?;
+            let pred_binder = proj
+                .map_bound(|pred| {
+                    pred.term.as_const().map(|ct| {
+                        let assoc_const_ty = tcx
+                            .type_of(pred.projection_term.def_id)
+                            .instantiate(tcx, pred.projection_term.args);
+                        ty::ClauseKind::ConstArgHasType(ct, assoc_const_ty)
+                    })
+                })
+                .transpose();
+            pred_binder.map(|pred_binder| {
+                let cause = traits::ObligationCause::new(
+                    sp,
+                    wfcx.body_def_id,
+                    ObligationCauseCode::WhereClause(def_id.to_def_id(), sp),
+                );
+                Obligation::new(tcx, cause, wfcx.param_env, pred_binder)
+            })
+        })
+        .collect();
+
     assert_eq!(predicates.predicates.len(), predicates.spans.len());
     let wf_obligations = predicates.into_iter().flat_map(|(p, sp)| {
         traits::wf::clause_obligations(infcx, wfcx.param_env, wfcx.body_def_id, p, sp)
     });
-    let obligations: Vec<_> = wf_obligations.chain(default_obligations).collect();
+    let obligations: Vec<_> =
+        wf_obligations.chain(default_obligations).chain(assoc_const_obligations).collect();
     wfcx.register_obligations(obligations);
 }
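The new obligation collection above relies on `transpose()` to turn "a binder over an `Option`" into "an `Option` of a binder", so that `filter_map` can drop clauses whose term is not a const. The same inversion exists on the standard library's `Option`/`Result`, which makes for a simpler sketch of the idea:

```rust
fn main() {
    // `transpose` swaps the nesting of two wrapper types.
    let some_ok: Option<Result<i32, String>> = Some(Ok(5));
    let transposed: Result<Option<i32>, String> = some_ok.transpose();
    assert_eq!(transposed, Ok(Some(5)));

    // Combined with `filter_map`, entries whose inner value is absent are
    // skipped entirely, mirroring how the diff skips non-const clauses.
    let values = [Some(1), None, Some(3)];
    let kept: Vec<i32> = values.iter().copied().filter_map(|v| v).collect();
    assert_eq!(kept, vec![1, 3]);
}
```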
@@ -16,8 +16,7 @@ use rustc_parse::lexer::StripTokens;
 use rustc_parse::new_parser_from_source_str;
 use rustc_parse::parser::Recovery;
 use rustc_parse::parser::attr::AllowLeadingUnsafe;
-use rustc_query_impl::QueryCtxt;
-use rustc_query_system::query::print_query_stack;
+use rustc_query_impl::{QueryCtxt, print_query_stack};
 use rustc_session::config::{self, Cfg, CheckCfg, ExpectedValues, Input, OutFileName};
 use rustc_session::parse::ParseSess;
 use rustc_session::{CompilerIO, EarlyDiagCtxt, Session, lint};
@@ -184,8 +184,7 @@ pub(crate) fn run_in_thread_pool_with_globals<
     use rustc_data_structures::defer;
     use rustc_data_structures::sync::FromDyn;
     use rustc_middle::ty::tls;
-    use rustc_query_impl::QueryCtxt;
-    use rustc_query_system::query::{QueryContext, break_query_cycles};
+    use rustc_query_impl::{QueryCtxt, break_query_cycles};
 
     let thread_stack_size = init_stack_size(thread_builder_diag);
 
@@ -13,7 +13,6 @@ mod keys;
 pub mod on_disk_cache;
 #[macro_use]
 pub mod plumbing;
-pub mod values;
 
 pub fn describe_as_module(def_id: impl Into<LocalDefId>, tcx: TyCtxt<'_>) -> String {
     let def_id = def_id.into();
@@ -687,6 +687,9 @@ where
     ///
     /// because these impls overlap, and I'd rather not build a coherence hack for
     /// this harmless overlap.
+    ///
+    /// This trait is indirectly exposed on stable, so do *not* extend the set of types that
+    /// implement the trait without FCP!
     fn consider_builtin_bikeshed_guaranteed_no_drop_candidate(
         ecx: &mut EvalCtxt<'_, D>,
         goal: Goal<I, Self>,
@@ -6,6 +6,7 @@ edition = "2024"
 [dependencies]
 # tidy-alphabetical-start
 measureme = "12.0.1"
+rustc_abi = { path = "../rustc_abi" }
 rustc_data_structures = { path = "../rustc_data_structures" }
 rustc_errors = { path = "../rustc_errors" }
 rustc_hashes = { path = "../rustc_hashes" }
@@ -15,6 +16,8 @@ rustc_macros = { path = "../rustc_macros" }
 rustc_middle = { path = "../rustc_middle" }
 rustc_query_system = { path = "../rustc_query_system" }
 rustc_serialize = { path = "../rustc_serialize" }
+rustc_session = { path = "../rustc_session" }
 rustc_span = { path = "../rustc_span" }
+rustc_thread_pool = { path = "../rustc_thread_pool" }
 tracing = "0.1"
 # tidy-alphabetical-end
@@ -1,3 +1,4 @@
+use rustc_errors::codes::*;
 use rustc_hir::limit::Limit;
 use rustc_macros::{Diagnostic, Subdiagnostic};
 use rustc_span::{Span, Symbol};
@@ -22,3 +23,59 @@ pub(crate) struct QueryOverflowNote {
     pub desc: String,
     pub depth: usize,
 }
+
+#[derive(Subdiagnostic)]
+#[note("...which requires {$desc}...")]
+pub(crate) struct CycleStack {
+    #[primary_span]
+    pub span: Span,
+    pub desc: String,
+}
+
+#[derive(Subdiagnostic)]
+pub(crate) enum StackCount {
+    #[note("...which immediately requires {$stack_bottom} again")]
+    Single,
+    #[note("...which again requires {$stack_bottom}, completing the cycle")]
+    Multiple,
+}
+
+#[derive(Subdiagnostic)]
+pub(crate) enum Alias {
+    #[note("type aliases cannot be recursive")]
+    #[help("consider using a struct, enum, or union instead to break the cycle")]
+    #[help(
+        "see <https://doc.rust-lang.org/reference/types.html#recursive-types> for more information"
+    )]
+    Ty,
+    #[note("trait aliases cannot be recursive")]
+    Trait,
+}
+
+#[derive(Subdiagnostic)]
+#[note("cycle used when {$usage}")]
+pub(crate) struct CycleUsage {
+    #[primary_span]
+    pub span: Span,
+    pub usage: String,
+}
+
+#[derive(Diagnostic)]
+#[diag("cycle detected when {$stack_bottom}", code = E0391)]
+pub(crate) struct Cycle {
+    #[primary_span]
+    pub span: Span,
+    pub stack_bottom: String,
+    #[subdiagnostic]
+    pub cycle_stack: Vec<CycleStack>,
+    #[subdiagnostic]
+    pub stack_count: StackCount,
+    #[subdiagnostic]
+    pub alias: Option<Alias>,
+    #[subdiagnostic]
+    pub cycle_usage: Option<CycleUsage>,
+    #[note(
+        "see https://rustc-dev-guide.rust-lang.org/overview.html#queries and https://rustc-dev-guide.rust-lang.org/query.html for more information"
+    )]
+    pub note_span: (),
+}
@@ -9,13 +9,13 @@ use rustc_middle::dep_graph::DepsType;
 use rustc_middle::ty::TyCtxt;
 use rustc_query_system::dep_graph::{DepGraphData, DepNodeKey, HasDepContext};
 use rustc_query_system::query::{
-    ActiveKeyStatus, CycleError, CycleErrorHandling, QueryCache, QueryContext, QueryJob,
-    QueryJobId, QueryJobInfo, QueryLatch, QueryMap, QueryMode, QueryStackDeferred, QueryStackFrame,
-    QueryState, incremental_verify_ich, report_cycle,
+    ActiveKeyStatus, CycleError, CycleErrorHandling, QueryCache, QueryJob, QueryJobId, QueryLatch,
+    QueryMode, QueryStackDeferred, QueryStackFrame, QueryState, incremental_verify_ich,
 };
 use rustc_span::{DUMMY_SP, Span};
 
 use crate::dep_graph::{DepContext, DepNode, DepNodeIndex};
+use crate::job::{QueryJobInfo, QueryMap, find_cycle_in_stack, report_cycle};
 use crate::{QueryCtxt, QueryFlags, SemiDynamicQueryDispatcher};
 
 #[inline]
@@ -218,7 +218,7 @@ fn cycle_error<'tcx, C: QueryCache, const FLAGS: QueryFlags>(
         .ok()
         .expect("failed to collect active queries");
 
-    let error = try_execute.find_cycle_in_stack(query_map, &qcx.current_query_job(), span);
+    let error = find_cycle_in_stack(try_execute, query_map, &qcx.current_query_job(), span);
     (mk_cycle(query, qcx, error.lift()), None)
 }
 
compiler/rustc_query_impl/src/job.rs (new file, 500 lines)
@@ -0,0 +1,500 @@
+use std::io::Write;
+use std::iter;
+use std::sync::Arc;
+
+use rustc_data_structures::fx::{FxHashMap, FxHashSet};
+use rustc_errors::{Diag, DiagCtxtHandle};
+use rustc_hir::def::DefKind;
+use rustc_query_system::query::{
+    CycleError, QueryInfo, QueryJob, QueryJobId, QueryLatch, QueryStackDeferred, QueryStackFrame,
+    QueryWaiter,
+};
+use rustc_session::Session;
+use rustc_span::{DUMMY_SP, Span};
+
+use crate::QueryCtxt;
+use crate::dep_graph::DepContext;
+
+/// Map from query job IDs to job information collected by
+/// `collect_active_jobs_from_all_queries`.
+pub type QueryMap<'tcx> = FxHashMap<QueryJobId, QueryJobInfo<'tcx>>;
+
+fn query_job_id_frame<'a, 'tcx>(
+    id: QueryJobId,
+    map: &'a QueryMap<'tcx>,
+) -> QueryStackFrame<QueryStackDeferred<'tcx>> {
+    map.get(&id).unwrap().frame.clone()
+}
+
+fn query_job_id_span<'a, 'tcx>(id: QueryJobId, map: &'a QueryMap<'tcx>) -> Span {
+    map.get(&id).unwrap().job.span
+}
+
+fn query_job_id_parent<'a, 'tcx>(id: QueryJobId, map: &'a QueryMap<'tcx>) -> Option<QueryJobId> {
+    map.get(&id).unwrap().job.parent
+}
+
+fn query_job_id_latch<'a, 'tcx>(
+    id: QueryJobId,
+    map: &'a QueryMap<'tcx>,
+) -> Option<&'a QueryLatch<'tcx>> {
+    map.get(&id).unwrap().job.latch.as_ref()
+}
+
+#[derive(Clone, Debug)]
+pub struct QueryJobInfo<'tcx> {
+    pub frame: QueryStackFrame<QueryStackDeferred<'tcx>>,
+    pub job: QueryJob<'tcx>,
+}
+
+pub(crate) fn find_cycle_in_stack<'tcx>(
+    id: QueryJobId,
+    query_map: QueryMap<'tcx>,
+    current_job: &Option<QueryJobId>,
+    span: Span,
+) -> CycleError<QueryStackDeferred<'tcx>> {
+    // Find the waitee amongst `current_job` parents
+    let mut cycle = Vec::new();
+    let mut current_job = Option::clone(current_job);
+
+    while let Some(job) = current_job {
+        let info = query_map.get(&job).unwrap();
+        cycle.push(QueryInfo { span: info.job.span, frame: info.frame.clone() });
+
+        if job == id {
+            cycle.reverse();
+
+            // This is the end of the cycle
+            // The span entry we included was for the usage
+            // of the cycle itself, and not part of the cycle
+            // Replace it with the span which caused the cycle to form
+            cycle[0].span = span;
+            // Find out why the cycle itself was used
+            let usage = info
+                .job
+                .parent
+                .as_ref()
+                .map(|parent| (info.job.span, query_job_id_frame(*parent, &query_map)));
+            return CycleError { usage, cycle };
+        }
+
+        current_job = info.job.parent;
+    }
+
+    panic!("did not find a cycle")
+}
+
+#[cold]
+#[inline(never)]
+pub(crate) fn find_dep_kind_root<'tcx>(
+    id: QueryJobId,
+    query_map: QueryMap<'tcx>,
+) -> (QueryJobInfo<'tcx>, usize) {
+    let mut depth = 1;
+    let info = query_map.get(&id).unwrap();
+    let dep_kind = info.frame.dep_kind;
+    let mut current_id = info.job.parent;
+    let mut last_layout = (info.clone(), depth);
+
+    while let Some(id) = current_id {
+        let info = query_map.get(&id).unwrap();
+        if info.frame.dep_kind == dep_kind {
+            depth += 1;
+            last_layout = (info.clone(), depth);
+        }
+        current_id = info.job.parent;
+    }
+    last_layout
+}
+
+/// A resumable waiter of a query. The usize is the index into waiters in the query's latch
+type Waiter = (QueryJobId, usize);
+
+/// Visits all the non-resumable and resumable waiters of a query.
+/// Only waiters in a query are visited.
+/// `visit` is called for every waiter and is passed a query waiting on `query_ref`
+/// and a span indicating the reason the query waited on `query_ref`.
+/// If `visit` returns Some, this function returns.
+/// For visits of non-resumable waiters it returns the return value of `visit`.
+/// For visits of resumable waiters it returns Some(Some(Waiter)) which has the
+/// required information to resume the waiter.
+/// If all `visit` calls returns None, this function also returns None.
+fn visit_waiters<'tcx, F>(
+    query_map: &QueryMap<'tcx>,
+    query: QueryJobId,
+    mut visit: F,
+) -> Option<Option<Waiter>>
+where
+    F: FnMut(Span, QueryJobId) -> Option<Option<Waiter>>,
+{
+    // Visit the parent query which is a non-resumable waiter since it's on the same stack
+    if let Some(parent) = query_job_id_parent(query, query_map)
+        && let Some(cycle) = visit(query_job_id_span(query, query_map), parent)
+    {
+        return Some(cycle);
+    }
+
+    // Visit the explicit waiters which use condvars and are resumable
+    if let Some(latch) = query_job_id_latch(query, query_map) {
+        for (i, waiter) in latch.info.lock().waiters.iter().enumerate() {
+            if let Some(waiter_query) = waiter.query {
+                if visit(waiter.span, waiter_query).is_some() {
+                    // Return a value which indicates that this waiter can be resumed
+                    return Some(Some((query, i)));
+                }
+            }
+        }
+    }
+
+    None
+}
+
+/// Look for query cycles by doing a depth first search starting at `query`.
+/// `span` is the reason for the `query` to execute. This is initially DUMMY_SP.
+/// If a cycle is detected, this initial value is replaced with the span causing
+/// the cycle.
+fn cycle_check<'tcx>(
+    query_map: &QueryMap<'tcx>,
+    query: QueryJobId,
+    span: Span,
+    stack: &mut Vec<(Span, QueryJobId)>,
+    visited: &mut FxHashSet<QueryJobId>,
+) -> Option<Option<Waiter>> {
+    if !visited.insert(query) {
+        return if let Some(p) = stack.iter().position(|q| q.1 == query) {
+            // We detected a query cycle, fix up the initial span and return Some
+
+            // Remove previous stack entries
+            stack.drain(0..p);
+            // Replace the span for the first query with the cycle cause
+            stack[0].0 = span;
+            Some(None)
+        } else {
+            None
+        };
+    }
+
+    // Query marked as visited is added it to the stack
+    stack.push((span, query));
+
+    // Visit all the waiters
+    let r = visit_waiters(query_map, query, |span, successor| {
+        cycle_check(query_map, successor, span, stack, visited)
+    });
+
+    // Remove the entry in our stack if we didn't find a cycle
+    if r.is_none() {
+        stack.pop();
+    }
+
+    r
+}
|
|
||||||
|
/// Finds out if there's a path to the compiler root (aka. code which isn't in a query)
/// from `query` without going through any of the queries in `visited`.
/// This is achieved with a depth first search.
fn connected_to_root<'tcx>(
    query_map: &QueryMap<'tcx>,
    query: QueryJobId,
    visited: &mut FxHashSet<QueryJobId>,
) -> bool {
    // We already visited this or we're deliberately ignoring it
    if !visited.insert(query) {
        return false;
    }

    // This query is connected to the root (it has no query parent), return true
    if query_job_id_parent(query, query_map).is_none() {
        return true;
    }

    visit_waiters(query_map, query, |_, successor| {
        connected_to_root(query_map, successor, visited).then_some(None)
    })
    .is_some()
}

// Deterministically pick a query from a list
fn pick_query<'a, 'tcx, T, F>(query_map: &QueryMap<'tcx>, queries: &'a [T], f: F) -> &'a T
where
    F: Fn(&T) -> (Span, QueryJobId),
{
    // Deterministically pick an entry point
    // FIXME: Sort this instead
    queries
        .iter()
        .min_by_key(|v| {
            let (span, query) = f(v);
            let hash = query_job_id_frame(query, query_map).hash;
            // Prefer entry points which have valid spans for nicer error messages
            // We add an integer to the tuple ensuring that entry points
            // with valid spans are picked first
            let span_cmp = if span == DUMMY_SP { 1 } else { 0 };
            (span_cmp, hash)
        })
        .unwrap()
}

/// Looks for query cycles starting from the last query in `jobs`.
/// If a cycle is found, all queries in the cycle are removed from `jobs` and
/// the function returns true.
/// If a cycle was not found, the starting query is removed from `jobs` and
/// the function returns false.
fn remove_cycle<'tcx>(
    query_map: &QueryMap<'tcx>,
    jobs: &mut Vec<QueryJobId>,
    wakelist: &mut Vec<Arc<QueryWaiter<'tcx>>>,
) -> bool {
    let mut visited = FxHashSet::default();
    let mut stack = Vec::new();
    // Look for a cycle starting with the last query in `jobs`
    if let Some(waiter) =
        cycle_check(query_map, jobs.pop().unwrap(), DUMMY_SP, &mut stack, &mut visited)
    {
        // The stack is a vector of pairs of spans and queries; reverse it so that
        // the earlier entries require later entries
        let (mut spans, queries): (Vec<_>, Vec<_>) = stack.into_iter().rev().unzip();

        // Shift the spans so that queries are matched with the span for their waitee
        spans.rotate_right(1);

        // Zip them back together
        let mut stack: Vec<_> = iter::zip(spans, queries).collect();

        // Remove the queries in our cycle from the list of jobs to look at
        for r in &stack {
            if let Some(pos) = jobs.iter().position(|j| j == &r.1) {
                jobs.remove(pos);
            }
        }

        // Find the queries in the cycle which are
        // connected to queries outside the cycle
        let entry_points = stack
            .iter()
            .filter_map(|&(span, query)| {
                if query_job_id_parent(query, query_map).is_none() {
                    // This query is connected to the root (it has no query parent)
                    Some((span, query, None))
                } else {
                    let mut waiters = Vec::new();
                    // Find all the direct waiters who lead to the root
                    visit_waiters(query_map, query, |span, waiter| {
                        // Mark all the other queries in the cycle as already visited
                        let mut visited = FxHashSet::from_iter(stack.iter().map(|q| q.1));

                        if connected_to_root(query_map, waiter, &mut visited) {
                            waiters.push((span, waiter));
                        }

                        None
                    });
                    if waiters.is_empty() {
                        None
                    } else {
                        // Deterministically pick one of the waiters to show to the user
                        let waiter = *pick_query(query_map, &waiters, |s| *s);
                        Some((span, query, Some(waiter)))
                    }
                }
            })
            .collect::<Vec<(Span, QueryJobId, Option<(Span, QueryJobId)>)>>();

        // Deterministically pick an entry point
        let (_, entry_point, usage) = pick_query(query_map, &entry_points, |e| (e.0, e.1));

        // Shift the stack so that our entry point is first
        let entry_point_pos = stack.iter().position(|(_, query)| query == entry_point);
        if let Some(pos) = entry_point_pos {
            stack.rotate_left(pos);
        }

        let usage =
            usage.as_ref().map(|(span, query)| (*span, query_job_id_frame(*query, query_map)));

        // Create the cycle error
        let error = CycleError {
            usage,
            cycle: stack
                .iter()
                .map(|&(s, ref q)| QueryInfo { span: s, frame: query_job_id_frame(*q, query_map) })
                .collect(),
        };

        // We unwrap `waiter` here since there must always be one
        // edge which is resumable / waited using a query latch
        let (waitee_query, waiter_idx) = waiter.unwrap();

        // Extract the waiter we want to resume
        let waiter =
            query_job_id_latch(waitee_query, query_map).unwrap().extract_waiter(waiter_idx);

        // Set the cycle error so it will be picked up when resumed
        *waiter.cycle.lock() = Some(error);

        // Put the waiter on the list of things to resume
        wakelist.push(waiter);

        true
    } else {
        false
    }
}

/// Detects query cycles by using depth first search over all active query jobs.
/// If a query cycle is found it will break the cycle by finding an edge which
/// uses a query latch and then resuming that waiter.
/// There may be multiple cycles involved in a deadlock, so this searches
/// all active queries for cycles before finally resuming all the waiters at once.
pub fn break_query_cycles<'tcx>(query_map: QueryMap<'tcx>, registry: &rustc_thread_pool::Registry) {
    let mut wakelist = Vec::new();
    // It is OK per the comments:
    // - https://github.com/rust-lang/rust/pull/131200#issuecomment-2798854932
    // - https://github.com/rust-lang/rust/pull/131200#issuecomment-2798866392
    #[allow(rustc::potential_query_instability)]
    let mut jobs: Vec<QueryJobId> = query_map.keys().cloned().collect();

    let mut found_cycle = false;

    while jobs.len() > 0 {
        if remove_cycle(&query_map, &mut jobs, &mut wakelist) {
            found_cycle = true;
        }
    }

    // Check that a cycle was found. It is possible for a deadlock to occur without
    // a query cycle if a query which can be waited on uses Rayon to do multithreading
    // internally. Such a query (X) may be executing on 2 threads (A and B) and A may
    // wait using Rayon on B. Rayon may then switch to executing another query (Y)
    // which in turn will wait on X causing a deadlock. We have a false dependency from
    // X to Y due to Rayon waiting and a true dependency from Y to X. The algorithm here
    // only considers the true dependency and won't detect a cycle.
    if !found_cycle {
        panic!(
            "deadlock detected as we're unable to find a query cycle to break\n\
             current query map:\n{:#?}",
            query_map
        );
    }

    // Mark all the threads we're about to wake up as unblocked. This needs to be done before
    // we wake the threads up as otherwise Rayon could detect a deadlock if a thread we
    // resumed fell asleep and this thread had yet to mark the remaining threads as unblocked.
    for _ in 0..wakelist.len() {
        rustc_thread_pool::mark_unblocked(registry);
    }

    for waiter in wakelist.into_iter() {
        waiter.condvar.notify_one();
    }
}

pub fn print_query_stack<'tcx>(
    qcx: QueryCtxt<'tcx>,
    mut current_query: Option<QueryJobId>,
    dcx: DiagCtxtHandle<'_>,
    limit_frames: Option<usize>,
    mut file: Option<std::fs::File>,
) -> usize {
    // Be careful relying on global state here: this code is called from
    // a panic hook, which means that the global `DiagCtxt` may be in a weird
    // state if it was responsible for triggering the panic.
    let mut count_printed = 0;
    let mut count_total = 0;

    // Make use of a partial query map if we fail to take locks collecting active queries.
    let query_map = match qcx.collect_active_jobs_from_all_queries(false) {
        Ok(query_map) => query_map,
        Err(query_map) => query_map,
    };

    if let Some(ref mut file) = file {
        let _ = writeln!(file, "\n\nquery stack during panic:");
    }
    while let Some(query) = current_query {
        let Some(query_info) = query_map.get(&query) else {
            break;
        };
        let query_extra = query_info.frame.info.extract();
        if Some(count_printed) < limit_frames || limit_frames.is_none() {
            // Only print to stderr as many stack frames as `limit_frames` when present.
            dcx.struct_failure_note(format!(
                "#{} [{:?}] {}",
                count_printed, query_info.frame.dep_kind, query_extra.description
            ))
            .with_span(query_info.job.span)
            .emit();
            count_printed += 1;
        }

        if let Some(ref mut file) = file {
            let _ = writeln!(
                file,
                "#{} [{}] {}",
                count_total,
                qcx.tcx.dep_kind_vtable(query_info.frame.dep_kind).name,
                query_extra.description
            );
        }

        current_query = query_info.job.parent;
        count_total += 1;
    }

    if let Some(ref mut file) = file {
        let _ = writeln!(file, "end of query stack");
    }
    count_total
}

#[inline(never)]
#[cold]
pub(crate) fn report_cycle<'a>(
    sess: &'a Session,
    CycleError { usage, cycle: stack }: &CycleError,
) -> Diag<'a> {
    assert!(!stack.is_empty());

    let span = stack[0].frame.info.default_span(stack[1 % stack.len()].span);

    let mut cycle_stack = Vec::new();

    use crate::error::StackCount;
    let stack_count = if stack.len() == 1 { StackCount::Single } else { StackCount::Multiple };

    for i in 1..stack.len() {
        let frame = &stack[i].frame;
        let span = frame.info.default_span(stack[(i + 1) % stack.len()].span);
        cycle_stack
            .push(crate::error::CycleStack { span, desc: frame.info.description.to_owned() });
    }

    let mut cycle_usage = None;
    if let Some((span, ref query)) = *usage {
        cycle_usage = Some(crate::error::CycleUsage {
            span: query.info.default_span(span),
            usage: query.info.description.to_string(),
        });
    }

    let alias =
        if stack.iter().all(|entry| matches!(entry.frame.info.def_kind, Some(DefKind::TyAlias))) {
            Some(crate::error::Alias::Ty)
        } else if stack.iter().all(|entry| entry.frame.info.def_kind == Some(DefKind::TraitAlias)) {
            Some(crate::error::Alias::Trait)
        } else {
            None
        };

    let cycle_diag = crate::error::Cycle {
        span,
        cycle_stack,
        stack_bottom: stack[0].frame.info.description.to_owned(),
        alias,
        cycle_usage,
        stack_count,
        note_span: (),
    };

    sess.dcx().create_err(cycle_diag)
}

@@ -19,24 +19,27 @@ use rustc_middle::queries::{
 use rustc_middle::query::AsLocalKey;
 use rustc_middle::query::on_disk_cache::{CacheEncoder, EncodedDepNodeIndex, OnDiskCache};
 use rustc_middle::query::plumbing::{HashResult, QuerySystem, QuerySystemFns, QueryVTable};
-use rustc_middle::query::values::Value;
 use rustc_middle::ty::TyCtxt;
 use rustc_query_system::dep_graph::SerializedDepNodeIndex;
 use rustc_query_system::query::{
-    CycleError, CycleErrorHandling, QueryCache, QueryMap, QueryMode, QueryState,
+    CycleError, CycleErrorHandling, QueryCache, QueryMode, QueryState,
 };
 use rustc_span::{ErrorGuaranteed, Span};
 
+pub use crate::job::{QueryMap, break_query_cycles, print_query_stack};
 pub use crate::plumbing::{QueryCtxt, query_key_hash_verify_all};
 use crate::plumbing::{encode_all_query_results, try_mark_green};
 use crate::profiling_support::QueryKeyStringCache;
 pub use crate::profiling_support::alloc_self_profile_query_strings;
+use crate::values::Value;
 
 mod error;
 mod execution;
+mod job;
 #[macro_use]
 mod plumbing;
 mod profiling_support;
+mod values;
 
 #[derive(ConstParamTy)] // Allow this struct to be used for const-generic values.
 #[derive(Clone, Copy, Debug, PartialEq, Eq)]
@@ -28,14 +28,15 @@ use rustc_middle::ty::tls::{self, ImplicitCtxt};
 use rustc_middle::ty::{self, TyCtxt};
 use rustc_query_system::dep_graph::{DepNodeKey, FingerprintStyle, HasDepContext};
 use rustc_query_system::query::{
-    QueryCache, QueryContext, QueryJobId, QueryMap, QuerySideEffect, QueryStackDeferred,
-    QueryStackFrame, QueryStackFrameExtra,
+    QueryCache, QueryContext, QueryJobId, QuerySideEffect, QueryStackDeferred, QueryStackFrame,
+    QueryStackFrameExtra,
 };
 use rustc_serialize::{Decodable, Encodable};
 use rustc_span::def_id::LOCAL_CRATE;
 
 use crate::error::{QueryOverflow, QueryOverflowNote};
 use crate::execution::{all_inactive, force_query};
+use crate::job::{QueryMap, find_dep_kind_root};
 use crate::{QueryDispatcherUnerased, QueryFlags, SemiDynamicQueryDispatcher};
 
 /// Implements [`QueryContext`] for use by [`rustc_query_system`], since that
@@ -55,7 +56,7 @@ impl<'tcx> QueryCtxt<'tcx> {
         let query_map = self
             .collect_active_jobs_from_all_queries(true)
             .expect("failed to collect active queries");
-        let (info, depth) = job.find_dep_kind_root(query_map);
+        let (info, depth) = find_dep_kind_root(job, query_map);
 
         let suggested_limit = match self.tcx.recursion_limit() {
             Limit(0) => Limit(2),
@@ -116,6 +117,32 @@ impl<'tcx> QueryCtxt<'tcx> {
             tls::enter_context(&new_icx, compute)
         })
     }
+
+    /// Returns a map of currently active query jobs, collected from all queries.
+    ///
+    /// If `require_complete` is `true`, this function locks all shards of the
+    /// query results to produce a complete map, which always returns `Ok`.
+    /// Otherwise, it may return an incomplete map as an error if any shard
+    /// lock cannot be acquired.
+    ///
+    /// Prefer passing `false` to `require_complete` to avoid potential deadlocks,
+    /// especially when called from within a deadlock handler, unless a
+    /// complete map is needed and no deadlock is possible at this call site.
+    pub fn collect_active_jobs_from_all_queries(
+        self,
+        require_complete: bool,
+    ) -> Result<QueryMap<'tcx>, QueryMap<'tcx>> {
+        let mut jobs = QueryMap::default();
+        let mut complete = true;
+
+        for gather_fn in crate::PER_QUERY_GATHER_ACTIVE_JOBS_FNS.iter() {
+            if gather_fn(self.tcx, &mut jobs, require_complete).is_none() {
+                complete = false;
+            }
+        }
+
+        if complete { Ok(jobs) } else { Err(jobs) }
+    }
 }
 
 impl<'tcx> HasDepContext for QueryCtxt<'tcx> {
@@ -134,32 +161,6 @@ impl<'tcx> QueryContext<'tcx> for QueryCtxt<'tcx> {
         &self.tcx.jobserver_proxy
     }
 
-    /// Returns a map of currently active query jobs, collected from all queries.
-    ///
-    /// If `require_complete` is `true`, this function locks all shards of the
-    /// query results to produce a complete map, which always returns `Ok`.
-    /// Otherwise, it may return an incomplete map as an error if any shard
-    /// lock cannot be acquired.
-    ///
-    /// Prefer passing `false` to `require_complete` to avoid potential deadlocks,
-    /// especially when called from within a deadlock handler, unless a
-    /// complete map is needed and no deadlock is possible at this call site.
-    fn collect_active_jobs_from_all_queries(
-        self,
-        require_complete: bool,
-    ) -> Result<QueryMap<'tcx>, QueryMap<'tcx>> {
-        let mut jobs = QueryMap::default();
-        let mut complete = true;
-
-        for gather_fn in crate::PER_QUERY_GATHER_ACTIVE_JOBS_FNS.iter() {
-            if gather_fn(self.tcx, &mut jobs, require_complete).is_none() {
-                complete = false;
-            }
-        }
-
-        if complete { Ok(jobs) } else { Err(jobs) }
-    }
-
     // Interactions with on_disk_cache
     fn load_side_effect(
         self,
@@ -7,15 +7,17 @@ use rustc_errors::codes::*;
 use rustc_errors::{Applicability, MultiSpan, pluralize, struct_span_code_err};
 use rustc_hir as hir;
 use rustc_hir::def::{DefKind, Res};
-use rustc_query_system::query::{CycleError, report_cycle};
+use rustc_middle::dep_graph::dep_kinds;
+use rustc_middle::query::plumbing::CyclePlaceholder;
+use rustc_middle::ty::{self, Representability, Ty, TyCtxt};
+use rustc_middle::{bug, span_bug};
+use rustc_query_system::query::CycleError;
 use rustc_span::def_id::LocalDefId;
 use rustc_span::{ErrorGuaranteed, Span};
 
-use crate::dep_graph::dep_kinds;
-use crate::query::plumbing::CyclePlaceholder;
-use crate::ty::{self, Representability, Ty, TyCtxt};
+use crate::job::report_cycle;
 
-pub trait Value<'tcx>: Sized {
+pub(crate) trait Value<'tcx>: Sized {
     fn from_cycle_error(tcx: TyCtxt<'tcx>, cycle_error: &CycleError, guar: ErrorGuaranteed)
         -> Self;
 }
@@ -1,62 +1,4 @@
-use rustc_errors::codes::*;
-use rustc_macros::{Diagnostic, Subdiagnostic};
-use rustc_span::Span;
-
-#[derive(Subdiagnostic)]
-#[note("...which requires {$desc}...")]
-pub(crate) struct CycleStack {
-    #[primary_span]
-    pub span: Span,
-    pub desc: String,
-}
-
-#[derive(Subdiagnostic)]
-pub(crate) enum StackCount {
-    #[note("...which immediately requires {$stack_bottom} again")]
-    Single,
-    #[note("...which again requires {$stack_bottom}, completing the cycle")]
-    Multiple,
-}
-
-#[derive(Subdiagnostic)]
-pub(crate) enum Alias {
-    #[note("type aliases cannot be recursive")]
-    #[help("consider using a struct, enum, or union instead to break the cycle")]
-    #[help(
-        "see <https://doc.rust-lang.org/reference/types.html#recursive-types> for more information"
-    )]
-    Ty,
-    #[note("trait aliases cannot be recursive")]
-    Trait,
-}
-
-#[derive(Subdiagnostic)]
-#[note("cycle used when {$usage}")]
-pub(crate) struct CycleUsage {
-    #[primary_span]
-    pub span: Span,
-    pub usage: String,
-}
-
-#[derive(Diagnostic)]
-#[diag("cycle detected when {$stack_bottom}", code = E0391)]
-pub(crate) struct Cycle {
-    #[primary_span]
-    pub span: Span,
-    pub stack_bottom: String,
-    #[subdiagnostic]
-    pub cycle_stack: Vec<CycleStack>,
-    #[subdiagnostic]
-    pub stack_count: StackCount,
-    #[subdiagnostic]
-    pub alias: Option<Alias>,
-    #[subdiagnostic]
-    pub cycle_usage: Option<CycleUsage>,
-    #[note(
-        "see https://rustc-dev-guide.rust-lang.org/overview.html#queries and https://rustc-dev-guide.rust-lang.org/query.html for more information"
-    )]
-    pub note_span: (),
-}
+use rustc_macros::Diagnostic;
 
 #[derive(Diagnostic)]
 #[diag("internal compiler error: reentrant incremental verify failure, suppressing message")]
@@ -21,9 +21,6 @@ pub trait QueryCacheKey = Hash + Eq + Copy + Debug + for<'a> HashStable<StableHa
 /// Types implementing this trait are associated with actual key/value types
 /// by the `Cache` associated type of the `rustc_middle::query::Key` trait.
 pub trait QueryCache: Sized {
-    // `Key` and `Value` are `Copy` instead of `Clone` to ensure copying them stays cheap,
-    // but it isn't strictly necessary.
-    // FIXME: Is that comment still true?
     type Key: QueryCacheKey;
     type Value: Copy;
@@ -1,20 +1,12 @@
 use std::fmt::Debug;
 use std::hash::Hash;
-use std::io::Write;
-use std::iter;
 use std::num::NonZero;
 use std::sync::Arc;
 
 use parking_lot::{Condvar, Mutex};
-use rustc_data_structures::fx::{FxHashMap, FxHashSet};
-use rustc_errors::{Diag, DiagCtxtHandle};
-use rustc_hir::def::DefKind;
-use rustc_session::Session;
-use rustc_span::{DUMMY_SP, Span};
+use rustc_span::Span;
 
 use super::{QueryStackDeferred, QueryStackFrameExtra};
-use crate::dep_graph::DepContext;
-use crate::error::CycleStack;
 use crate::query::plumbing::CycleError;
 use crate::query::{QueryContext, QueryStackFrame};
 
@@ -32,38 +24,10 @@ impl<'tcx> QueryInfo<QueryStackDeferred<'tcx>> {
     }
 }
 
-/// Map from query job IDs to job information collected by
-/// [`QueryContext::collect_active_jobs_from_all_queries`].
-pub type QueryMap<'tcx> = FxHashMap<QueryJobId, QueryJobInfo<'tcx>>;
-
 /// A value uniquely identifying an active query job.
 #[derive(Copy, Clone, Eq, PartialEq, Hash, Debug)]
 pub struct QueryJobId(pub NonZero<u64>);
 
-impl QueryJobId {
-    fn frame<'a, 'tcx>(self, map: &'a QueryMap<'tcx>) -> QueryStackFrame<QueryStackDeferred<'tcx>> {
-        map.get(&self).unwrap().frame.clone()
-    }
-
-    fn span<'a, 'tcx>(self, map: &'a QueryMap<'tcx>) -> Span {
-        map.get(&self).unwrap().job.span
-    }
-
-    fn parent<'a, 'tcx>(self, map: &'a QueryMap<'tcx>) -> Option<QueryJobId> {
-        map.get(&self).unwrap().job.parent
-    }
-
-    fn latch<'a, 'tcx>(self, map: &'a QueryMap<'tcx>) -> Option<&'a QueryLatch<'tcx>> {
-        map.get(&self).unwrap().job.latch.as_ref()
-    }
-}
-
-#[derive(Clone, Debug)]
-pub struct QueryJobInfo<'tcx> {
-    pub frame: QueryStackFrame<QueryStackDeferred<'tcx>>,
-    pub job: QueryJob<'tcx>,
-}
-
 /// Represents an active query job.
 #[derive(Clone, Debug)]
 pub struct QueryJob<'tcx> {
@@ -76,7 +40,7 @@ pub struct QueryJob<'tcx> {
     pub parent: Option<QueryJobId>,
 
     /// The latch that is used to wait on this job.
-    latch: Option<QueryLatch<'tcx>>,
+    pub latch: Option<QueryLatch<'tcx>>,
 }
 
 impl<'tcx> QueryJob<'tcx> {
@@ -105,91 +69,23 @@ impl<'tcx> QueryJob<'tcx> {
     }
 }
 
-impl QueryJobId {
-    pub fn find_cycle_in_stack<'tcx>(
-        &self,
-        query_map: QueryMap<'tcx>,
-        current_job: &Option<QueryJobId>,
-        span: Span,
-    ) -> CycleError<QueryStackDeferred<'tcx>> {
-        // Find the waitee amongst `current_job` parents
-        let mut cycle = Vec::new();
-        let mut current_job = Option::clone(current_job);
-
-        while let Some(job) = current_job {
-            let info = query_map.get(&job).unwrap();
-            cycle.push(QueryInfo { span: info.job.span, frame: info.frame.clone() });
-
-            if job == *self {
-                cycle.reverse();
-
-                // This is the end of the cycle
-                // The span entry we included was for the usage
-                // of the cycle itself, and not part of the cycle
-                // Replace it with the span which caused the cycle to form
-                cycle[0].span = span;
-                // Find out why the cycle itself was used
-                let usage = info
-                    .job
-                    .parent
-                    .as_ref()
-                    .map(|parent| (info.job.span, parent.frame(&query_map)));
-                return CycleError { usage, cycle };
-            }
-
-            current_job = info.job.parent;
-        }
-
-        panic!("did not find a cycle")
-    }
-
-    #[cold]
-    #[inline(never)]
-    pub fn find_dep_kind_root<'tcx>(
-        &self,
-        query_map: QueryMap<'tcx>,
-    ) -> (QueryJobInfo<'tcx>, usize) {
-        let mut depth = 1;
-        let info = query_map.get(&self).unwrap();
-        let dep_kind = info.frame.dep_kind;
-        let mut current_id = info.job.parent;
-        let mut last_layout = (info.clone(), depth);
-
-        while let Some(id) = current_id {
-            let info = query_map.get(&id).unwrap();
-            if info.frame.dep_kind == dep_kind {
-                depth += 1;
-                last_layout = (info.clone(), depth);
-            }
-            current_id = info.job.parent;
-        }
-        last_layout
-    }
-}
+#[derive(Debug)]
+pub struct QueryWaiter<'tcx> {
+    pub query: Option<QueryJobId>,
+    pub condvar: Condvar,
+    pub span: Span,
+    pub cycle: Mutex<Option<CycleError<QueryStackDeferred<'tcx>>>>,
+}
 
 #[derive(Debug)]
-struct QueryWaiter<'tcx> {
-    query: Option<QueryJobId>,
-    condvar: Condvar,
-    span: Span,
-    cycle: Mutex<Option<CycleError<QueryStackDeferred<'tcx>>>>,
-}
+pub struct QueryLatchInfo<'tcx> {
+    pub complete: bool,
+    pub waiters: Vec<Arc<QueryWaiter<'tcx>>>,
+}
 
-#[derive(Debug)]
-struct QueryLatchInfo<'tcx> {
-    complete: bool,
-    waiters: Vec<Arc<QueryWaiter<'tcx>>>,
-}
-
-#[derive(Debug)]
+#[derive(Clone, Debug)]
 pub struct QueryLatch<'tcx> {
-    info: Arc<Mutex<QueryLatchInfo<'tcx>>>,
-}
-
-impl<'tcx> Clone for QueryLatch<'tcx> {
-    fn clone(&self) -> Self {
-        Self { info: Arc::clone(&self.info) }
-    }
+    pub info: Arc<Mutex<QueryLatchInfo<'tcx>>>,
 }
 
 impl<'tcx> QueryLatch<'tcx> {
@@ -256,399 +152,10 @@ impl<'tcx> QueryLatch<'tcx> {

     /// Removes a single waiter from the list of waiters.
     /// This is used to break query cycles.
-    fn extract_waiter(&self, waiter: usize) -> Arc<QueryWaiter<'tcx>> {
+    pub fn extract_waiter(&self, waiter: usize) -> Arc<QueryWaiter<'tcx>> {
         let mut info = self.info.lock();
         debug_assert!(!info.complete);
         // Remove the waiter from the list of waiters
         info.waiters.remove(waiter)
     }
 }
-
-/// A resumable waiter of a query. The usize is the index into waiters in the query's latch
-type Waiter = (QueryJobId, usize);
-
-/// Visits all the non-resumable and resumable waiters of a query.
-/// Only waiters in a query are visited.
-/// `visit` is called for every waiter and is passed a query waiting on `query_ref`
-/// and a span indicating the reason the query waited on `query_ref`.
-/// If `visit` returns Some, this function returns.
-/// For visits of non-resumable waiters it returns the return value of `visit`.
-/// For visits of resumable waiters it returns Some(Some(Waiter)) which has the
-/// required information to resume the waiter.
-/// If all `visit` calls returns None, this function also returns None.
-fn visit_waiters<'tcx, F>(
-    query_map: &QueryMap<'tcx>,
-    query: QueryJobId,
-    mut visit: F,
-) -> Option<Option<Waiter>>
-where
-    F: FnMut(Span, QueryJobId) -> Option<Option<Waiter>>,
-{
-    // Visit the parent query which is a non-resumable waiter since it's on the same stack
-    if let Some(parent) = query.parent(query_map)
-        && let Some(cycle) = visit(query.span(query_map), parent)
-    {
-        return Some(cycle);
-    }
-
-    // Visit the explicit waiters which use condvars and are resumable
-    if let Some(latch) = query.latch(query_map) {
-        for (i, waiter) in latch.info.lock().waiters.iter().enumerate() {
-            if let Some(waiter_query) = waiter.query {
-                if visit(waiter.span, waiter_query).is_some() {
-                    // Return a value which indicates that this waiter can be resumed
-                    return Some(Some((query, i)));
-                }
-            }
-        }
-    }
-
-    None
-}
-
-/// Look for query cycles by doing a depth first search starting at `query`.
-/// `span` is the reason for the `query` to execute. This is initially DUMMY_SP.
-/// If a cycle is detected, this initial value is replaced with the span causing
-/// the cycle.
-fn cycle_check<'tcx>(
-    query_map: &QueryMap<'tcx>,
-    query: QueryJobId,
-    span: Span,
-    stack: &mut Vec<(Span, QueryJobId)>,
-    visited: &mut FxHashSet<QueryJobId>,
-) -> Option<Option<Waiter>> {
-    if !visited.insert(query) {
-        return if let Some(p) = stack.iter().position(|q| q.1 == query) {
-            // We detected a query cycle, fix up the initial span and return Some
-
-            // Remove previous stack entries
-            stack.drain(0..p);
-            // Replace the span for the first query with the cycle cause
-            stack[0].0 = span;
-            Some(None)
-        } else {
-            None
-        };
-    }
-
-    // Query marked as visited is added it to the stack
-    stack.push((span, query));
-
-    // Visit all the waiters
-    let r = visit_waiters(query_map, query, |span, successor| {
-        cycle_check(query_map, successor, span, stack, visited)
-    });
-
-    // Remove the entry in our stack if we didn't find a cycle
-    if r.is_none() {
-        stack.pop();
-    }
-
-    r
-}
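The removed `cycle_check` above is a plain depth-first search that keeps the current path on an explicit stack and, on revisiting a node that is still on that path, trims the stack down to the looping suffix. A minimal standalone sketch of the same idea, with a hypothetical integer graph standing in for rustc's query/waiter types:

```rust
use std::collections::{HashMap, HashSet};

/// Depth-first search that returns the cycle as a node path, if any,
/// mirroring how `cycle_check` trims `stack` down to the looping suffix.
fn find_cycle(
    graph: &HashMap<u32, Vec<u32>>,
    node: u32,
    stack: &mut Vec<u32>,
    visited: &mut HashSet<u32>,
) -> Option<Vec<u32>> {
    if !visited.insert(node) {
        // Already seen: a cycle exists only if the node is on the current path.
        return stack.iter().position(|&n| n == node).map(|p| stack[p..].to_vec());
    }
    stack.push(node);
    for &succ in graph.get(&node).into_iter().flatten() {
        if let Some(cycle) = find_cycle(graph, succ, stack, visited) {
            return Some(cycle);
        }
    }
    // No cycle through this node: drop it from the path, keep it in `visited`.
    stack.pop();
    None
}

fn main() {
    // 1 -> 2 -> 3 -> 2 closes a loop through nodes 2 and 3.
    let graph = HashMap::from([(1, vec![2]), (2, vec![3]), (3, vec![2])]);
    let cycle = find_cycle(&graph, 1, &mut Vec::new(), &mut HashSet::new());
    assert_eq!(cycle, Some(vec![2, 3]));
}
```

Keeping `visited` entries after popping the stack is what makes the search linear: a node proven cycle-free is never re-explored, which matters when many queries share waiters.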
-
-/// Finds out if there's a path to the compiler root (aka. code which isn't in a query)
-/// from `query` without going through any of the queries in `visited`.
-/// This is achieved with a depth first search.
-fn connected_to_root<'tcx>(
-    query_map: &QueryMap<'tcx>,
-    query: QueryJobId,
-    visited: &mut FxHashSet<QueryJobId>,
-) -> bool {
-    // We already visited this or we're deliberately ignoring it
-    if !visited.insert(query) {
-        return false;
-    }
-
-    // This query is connected to the root (it has no query parent), return true
-    if query.parent(query_map).is_none() {
-        return true;
-    }
-
-    visit_waiters(query_map, query, |_, successor| {
-        connected_to_root(query_map, successor, visited).then_some(None)
-    })
-    .is_some()
-}
-
-// Deterministically pick an query from a list
-fn pick_query<'a, 'tcx, T, F>(query_map: &QueryMap<'tcx>, queries: &'a [T], f: F) -> &'a T
-where
-    F: Fn(&T) -> (Span, QueryJobId),
-{
-    // Deterministically pick an entry point
-    // FIXME: Sort this instead
-    queries
-        .iter()
-        .min_by_key(|v| {
-            let (span, query) = f(v);
-            let hash = query.frame(query_map).hash;
-            // Prefer entry points which have valid spans for nicer error messages
-            // We add an integer to the tuple ensuring that entry points
-            // with valid spans are picked first
-            let span_cmp = if span == DUMMY_SP { 1 } else { 0 };
-            (span_cmp, hash)
-        })
-        .unwrap()
-}
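The removed `pick_query` relies on `min_by_key` with a composite `(flag, hash)` key: entries carrying a real span always beat span-less ones, and ties resolve by a stable hash rather than by iteration order. A tiny standalone sketch of that selection rule (the `Entry` type and its fields are illustrative stand-ins for rustc's `(Span, QueryJobId)` pairs):

```rust
/// An entry with an optional source span and a stable hash.
#[derive(Debug, PartialEq)]
struct Entry {
    has_span: bool,
    hash: u64,
}

/// Deterministically pick an entry: prefer ones with a span, break ties by hash.
fn pick(entries: &[Entry]) -> &Entry {
    entries
        .iter()
        .min_by_key(|e| {
            // 0 sorts before 1, so span-carrying entries win regardless of hash.
            let span_cmp = if e.has_span { 0 } else { 1 };
            (span_cmp, e.hash)
        })
        .unwrap()
}

fn main() {
    let entries = [
        Entry { has_span: false, hash: 1 },
        Entry { has_span: true, hash: 9 },
        Entry { has_span: true, hash: 4 },
    ];
    // The span-less entry loses despite having the smallest hash; among the
    // remaining two, the smaller hash wins.
    assert_eq!(pick(&entries), &Entry { has_span: true, hash: 4 });
}
```

Tuples compare lexicographically in Rust, which is what lets the single `min_by_key` express "valid span first, then hash" without a custom comparator.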
-
-/// Looks for query cycles starting from the last query in `jobs`.
-/// If a cycle is found, all queries in the cycle is removed from `jobs` and
-/// the function return true.
-/// If a cycle was not found, the starting query is removed from `jobs` and
-/// the function returns false.
-fn remove_cycle<'tcx>(
-    query_map: &QueryMap<'tcx>,
-    jobs: &mut Vec<QueryJobId>,
-    wakelist: &mut Vec<Arc<QueryWaiter<'tcx>>>,
-) -> bool {
-    let mut visited = FxHashSet::default();
-    let mut stack = Vec::new();
-    // Look for a cycle starting with the last query in `jobs`
-    if let Some(waiter) =
-        cycle_check(query_map, jobs.pop().unwrap(), DUMMY_SP, &mut stack, &mut visited)
-    {
-        // The stack is a vector of pairs of spans and queries; reverse it so that
-        // the earlier entries require later entries
-        let (mut spans, queries): (Vec<_>, Vec<_>) = stack.into_iter().rev().unzip();
-
-        // Shift the spans so that queries are matched with the span for their waitee
-        spans.rotate_right(1);
-
-        // Zip them back together
-        let mut stack: Vec<_> = iter::zip(spans, queries).collect();
-
-        // Remove the queries in our cycle from the list of jobs to look at
-        for r in &stack {
-            if let Some(pos) = jobs.iter().position(|j| j == &r.1) {
-                jobs.remove(pos);
-            }
-        }
-
-        // Find the queries in the cycle which are
-        // connected to queries outside the cycle
-        let entry_points = stack
-            .iter()
-            .filter_map(|&(span, query)| {
-                if query.parent(query_map).is_none() {
-                    // This query is connected to the root (it has no query parent)
-                    Some((span, query, None))
-                } else {
-                    let mut waiters = Vec::new();
-                    // Find all the direct waiters who lead to the root
-                    visit_waiters(query_map, query, |span, waiter| {
-                        // Mark all the other queries in the cycle as already visited
-                        let mut visited = FxHashSet::from_iter(stack.iter().map(|q| q.1));
-
-                        if connected_to_root(query_map, waiter, &mut visited) {
-                            waiters.push((span, waiter));
-                        }
-
-                        None
-                    });
-                    if waiters.is_empty() {
-                        None
-                    } else {
-                        // Deterministically pick one of the waiters to show to the user
-                        let waiter = *pick_query(query_map, &waiters, |s| *s);
-                        Some((span, query, Some(waiter)))
-                    }
-                }
-            })
-            .collect::<Vec<(Span, QueryJobId, Option<(Span, QueryJobId)>)>>();
-
-        // Deterministically pick an entry point
-        let (_, entry_point, usage) = pick_query(query_map, &entry_points, |e| (e.0, e.1));
-
-        // Shift the stack so that our entry point is first
-        let entry_point_pos = stack.iter().position(|(_, query)| query == entry_point);
-        if let Some(pos) = entry_point_pos {
-            stack.rotate_left(pos);
-        }
-
-        let usage = usage.as_ref().map(|(span, query)| (*span, query.frame(query_map)));
-
-        // Create the cycle error
-        let error = CycleError {
-            usage,
-            cycle: stack
-                .iter()
-                .map(|&(s, ref q)| QueryInfo { span: s, frame: q.frame(query_map) })
-                .collect(),
-        };
-
-        // We unwrap `waiter` here since there must always be one
-        // edge which is resumable / waited using a query latch
-        let (waitee_query, waiter_idx) = waiter.unwrap();
-
-        // Extract the waiter we want to resume
-        let waiter = waitee_query.latch(query_map).unwrap().extract_waiter(waiter_idx);
-
-        // Set the cycle error so it will be picked up when resumed
-        *waiter.cycle.lock() = Some(error);
-
-        // Put the waiter on the list of things to resume
-        wakelist.push(waiter);
-
-        true
-    } else {
-        false
-    }
-}
-
-/// Detects query cycles by using depth first search over all active query jobs.
-/// If a query cycle is found it will break the cycle by finding an edge which
-/// uses a query latch and then resuming that waiter.
-/// There may be multiple cycles involved in a deadlock, so this searches
-/// all active queries for cycles before finally resuming all the waiters at once.
-pub fn break_query_cycles<'tcx>(query_map: QueryMap<'tcx>, registry: &rustc_thread_pool::Registry) {
-    let mut wakelist = Vec::new();
-    // It is OK per the comments:
-    // - https://github.com/rust-lang/rust/pull/131200#issuecomment-2798854932
-    // - https://github.com/rust-lang/rust/pull/131200#issuecomment-2798866392
-    #[allow(rustc::potential_query_instability)]
-    let mut jobs: Vec<QueryJobId> = query_map.keys().cloned().collect();
-
-    let mut found_cycle = false;
-
-    while jobs.len() > 0 {
-        if remove_cycle(&query_map, &mut jobs, &mut wakelist) {
-            found_cycle = true;
-        }
-    }
-
-    // Check that a cycle was found. It is possible for a deadlock to occur without
-    // a query cycle if a query which can be waited on uses Rayon to do multithreading
-    // internally. Such a query (X) may be executing on 2 threads (A and B) and A may
-    // wait using Rayon on B. Rayon may then switch to executing another query (Y)
-    // which in turn will wait on X causing a deadlock. We have a false dependency from
-    // X to Y due to Rayon waiting and a true dependency from Y to X. The algorithm here
-    // only considers the true dependency and won't detect a cycle.
-    if !found_cycle {
-        panic!(
-            "deadlock detected as we're unable to find a query cycle to break\n\
-            current query map:\n{:#?}",
-            query_map
-        );
-    }
-
-    // Mark all the thread we're about to wake up as unblocked. This needs to be done before
-    // we wake the threads up as otherwise Rayon could detect a deadlock if a thread we
-    // resumed fell asleep and this thread had yet to mark the remaining threads as unblocked.
-    for _ in 0..wakelist.len() {
-        rustc_thread_pool::mark_unblocked(registry);
-    }
-
-    for waiter in wakelist.into_iter() {
-        waiter.condvar.notify_one();
-    }
-}
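The removed `break_query_cycles` wakes each blocked waiter by first storing the error it should observe and only then signalling its condvar. That ordering is the load-bearing part of the handshake, and it can be sketched with plain `std::sync` primitives (the slot/payload names are illustrative, not rustc's):

```rust
use std::sync::{Arc, Condvar, Mutex};
use std::thread;

/// Park a thread on a condvar latch, deposit a payload, wake it, and return
/// what the waiter observed.
fn deliver_and_wake() -> String {
    // A latch: a slot for the "cycle error" plus a condvar to park the waiter on.
    let latch = Arc::new((Mutex::new(None::<String>), Condvar::new()));

    let waiter = {
        let latch = Arc::clone(&latch);
        thread::spawn(move || {
            let (slot, condvar) = &*latch;
            let mut guard = slot.lock().unwrap();
            // Sleep until the breaker deposits a payload, tolerating spurious wakeups.
            while guard.is_none() {
                guard = condvar.wait(guard).unwrap();
            }
            guard.take().unwrap()
        })
    };

    // The "cycle breaker": set the payload first, then notify, so the waiter
    // can never wake up and still find the slot empty.
    let (slot, condvar) = &*latch;
    *slot.lock().unwrap() = Some("cycle error".to_string());
    condvar.notify_one();

    waiter.join().unwrap()
}

fn main() {
    assert_eq!(deliver_and_wake(), "cycle error");
}
```

Because the payload is written under the same mutex the waiter holds while checking the slot, the waiter either sees `Some(..)` immediately and never sleeps, or sleeps and is woken by the notify; there is no window where the wakeup is lost.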
-
-#[inline(never)]
-#[cold]
-pub fn report_cycle<'a>(
-    sess: &'a Session,
-    CycleError { usage, cycle: stack }: &CycleError,
-) -> Diag<'a> {
-    assert!(!stack.is_empty());
-
-    let span = stack[0].frame.info.default_span(stack[1 % stack.len()].span);
-
-    let mut cycle_stack = Vec::new();
-
-    use crate::error::StackCount;
-    let stack_count = if stack.len() == 1 { StackCount::Single } else { StackCount::Multiple };
-
-    for i in 1..stack.len() {
-        let frame = &stack[i].frame;
-        let span = frame.info.default_span(stack[(i + 1) % stack.len()].span);
-        cycle_stack.push(CycleStack { span, desc: frame.info.description.to_owned() });
-    }
-
-    let mut cycle_usage = None;
-    if let Some((span, ref query)) = *usage {
-        cycle_usage = Some(crate::error::CycleUsage {
-            span: query.info.default_span(span),
-            usage: query.info.description.to_string(),
-        });
-    }
-
-    let alias =
-        if stack.iter().all(|entry| matches!(entry.frame.info.def_kind, Some(DefKind::TyAlias))) {
-            Some(crate::error::Alias::Ty)
-        } else if stack.iter().all(|entry| entry.frame.info.def_kind == Some(DefKind::TraitAlias)) {
-            Some(crate::error::Alias::Trait)
-        } else {
-            None
-        };
-
-    let cycle_diag = crate::error::Cycle {
-        span,
-        cycle_stack,
-        stack_bottom: stack[0].frame.info.description.to_owned(),
-        alias,
-        cycle_usage,
-        stack_count,
-        note_span: (),
-    };
-
-    sess.dcx().create_err(cycle_diag)
-}
-
-pub fn print_query_stack<'tcx, Qcx: QueryContext<'tcx>>(
-    qcx: Qcx,
-    mut current_query: Option<QueryJobId>,
-    dcx: DiagCtxtHandle<'_>,
-    limit_frames: Option<usize>,
-    mut file: Option<std::fs::File>,
-) -> usize {
-    // Be careful relying on global state here: this code is called from
-    // a panic hook, which means that the global `DiagCtxt` may be in a weird
-    // state if it was responsible for triggering the panic.
-    let mut count_printed = 0;
-    let mut count_total = 0;
-
-    // Make use of a partial query map if we fail to take locks collecting active queries.
-    let query_map = match qcx.collect_active_jobs_from_all_queries(false) {
-        Ok(query_map) => query_map,
-        Err(query_map) => query_map,
-    };
-
-    if let Some(ref mut file) = file {
-        let _ = writeln!(file, "\n\nquery stack during panic:");
-    }
-    while let Some(query) = current_query {
-        let Some(query_info) = query_map.get(&query) else {
-            break;
-        };
-        let query_extra = query_info.frame.info.extract();
-        if Some(count_printed) < limit_frames || limit_frames.is_none() {
-            // Only print to stderr as many stack frames as `num_frames` when present.
-            dcx.struct_failure_note(format!(
-                "#{} [{:?}] {}",
-                count_printed, query_info.frame.dep_kind, query_extra.description
-            ))
-            .with_span(query_info.job.span)
-            .emit();
-            count_printed += 1;
-        }
-
-        if let Some(ref mut file) = file {
-            let _ = writeln!(
-                file,
-                "#{} [{}] {}",
-                count_total,
-                qcx.dep_context().dep_kind_vtable(query_info.frame.dep_kind).name,
-                query_extra.description
-            );
-        }
-
-        current_query = query_info.job.parent;
-        count_total += 1;
-    }
-
-    if let Some(ref mut file) = file {
-        let _ = writeln!(file, "end of query stack");
-    }
-    count_total
-}
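The removed `report_cycle` pairs each frame with the span of the *next* frame, using `% stack.len()` so the last frame wraps around to the first; that wraparound is what makes the diagnostic read as a closed cycle. The pattern is easy to check in isolation (plain integers stand in for query frames):

```rust
/// Pair each element with its successor, wrapping the last back to the first,
/// like the `stack[(i + 1) % stack.len()]` lookups in `report_cycle`.
fn with_successor<T: Copy>(items: &[T]) -> Vec<(T, T)> {
    (0..items.len()).map(|i| (items[i], items[(i + 1) % items.len()])).collect()
}

fn main() {
    // Three frames close back onto the first.
    assert_eq!(with_successor(&[10, 20, 30]), vec![(10, 20), (20, 30), (30, 10)]);
    // A single-element "cycle" points back at itself, which is why the
    // `stack[1 % stack.len()]` lookup above is safe for one-frame cycles.
    assert_eq!(with_successor(&[7]), vec![(7, 7)]);
}
```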
@@ -15,10 +15,7 @@ use rustc_span::def_id::DefId;
 pub use self::caches::{
     DefIdCache, DefaultCache, QueryCache, QueryCacheKey, SingleCache, VecCache,
 };
-pub use self::job::{
-    QueryInfo, QueryJob, QueryJobId, QueryJobInfo, QueryLatch, QueryMap, break_query_cycles,
-    print_query_stack, report_cycle,
-};
+pub use self::job::{QueryInfo, QueryJob, QueryJobId, QueryLatch, QueryWaiter};
 pub use self::plumbing::*;
 use crate::dep_graph::{DepKind, DepNodeIndex, HasDepContext, SerializedDepNodeIndex};
@@ -52,7 +49,7 @@ pub struct QueryStackFrame<I> {
     pub dep_kind: DepKind,
     /// This hash is used to deterministically pick
     /// a query to remove cycles in the parallel compiler.
-    hash: Hash64,
+    pub hash: Hash64,
     pub def_id: Option<DefId>,
     /// A def-id that is extracted from a `Ty` in a query key
     pub def_id_for_ty_in_cycle: Option<DefId>,
@@ -161,11 +158,6 @@ pub trait QueryContext<'tcx>: HasDepContext {
     /// a token while waiting on a query.
     fn jobserver_proxy(&self) -> &Proxy;

-    fn collect_active_jobs_from_all_queries(
-        self,
-        require_complete: bool,
-    ) -> Result<QueryMap<'tcx>, QueryMap<'tcx>>;
-
     /// Load a side effect associated to the node in the previous session.
     fn load_side_effect(
         self,
@@ -7,7 +7,7 @@ pub(crate) fn target() -> Target {
         llvm_target,
         metadata: TargetMetadata {
             description: Some("ARM64 Apple tvOS".into()),
-            tier: Some(3),
+            tier: Some(2),
             host_tools: Some(false),
             std: Some(true),
         },

@@ -7,7 +7,7 @@ pub(crate) fn target() -> Target {
         llvm_target,
         metadata: TargetMetadata {
             description: Some("ARM64 Apple tvOS Simulator".into()),
-            tier: Some(3),
+            tier: Some(2),
             host_tools: Some(false),
             std: Some(true),
         },

@@ -7,7 +7,7 @@ pub(crate) fn target() -> Target {
         llvm_target,
         metadata: TargetMetadata {
             description: Some("ARM64 Apple visionOS".into()),
-            tier: Some(3),
+            tier: Some(2),
             host_tools: Some(false),
             std: Some(true),
         },

@@ -7,7 +7,7 @@ pub(crate) fn target() -> Target {
         llvm_target,
         metadata: TargetMetadata {
             description: Some("ARM64 Apple visionOS simulator".into()),
-            tier: Some(3),
+            tier: Some(2),
             host_tools: Some(false),
             std: Some(true),
         },

@@ -7,7 +7,7 @@ pub(crate) fn target() -> Target {
         llvm_target,
        metadata: TargetMetadata {
             description: Some("ARM64 Apple watchOS".into()),
-            tier: Some(3),
+            tier: Some(2),
             host_tools: Some(false),
             std: Some(true),
         },

@@ -7,7 +7,7 @@ pub(crate) fn target() -> Target {
         llvm_target,
         metadata: TargetMetadata {
             description: Some("ARM64 Apple watchOS Simulator".into()),
-            tier: Some(3),
+            tier: Some(2),
             host_tools: Some(false),
             std: Some(true),
         },
@@ -190,8 +190,8 @@ impl<'tcx> Visitor<'tcx> for TyPathVisitor<'tcx> {
             }

             Some(rbv::ResolvedArg::LateBound(debruijn_index, _, id)) => {
-                debug!("FindNestedTypeVisitor::visit_ty: LateBound depth = {:?}", debruijn_index,);
-                debug!("id={:?}", id);
+                debug!("FindNestedTypeVisitor::visit_ty: LateBound depth = {debruijn_index:?}");
+                debug!("id={id:?}");
                 if debruijn_index == self.current_index && id.to_def_id() == self.region_def_id {
                     return ControlFlow::Break(());
                 }
@@ -239,14 +239,14 @@ pub fn suggest_new_region_bound(
         };
         spans_suggs.push((fn_return.span.shrink_to_hi(), format!(" + {name} ")));
         err.multipart_suggestion_verbose(
-            format!("{declare} `{ty}` {captures}, {use_lt}",),
+            format!("{declare} `{ty}` {captures}, {use_lt}"),
             spans_suggs,
             Applicability::MaybeIncorrect,
         );
     } else {
         err.span_suggestion_verbose(
             fn_return.span.shrink_to_hi(),
-            format!("{declare} `{ty}` {captures}, {explicit}",),
+            format!("{declare} `{ty}` {captures}, {explicit}"),
             &plus_lt,
             Applicability::MaybeIncorrect,
         );
@@ -257,7 +257,7 @@ pub fn suggest_new_region_bound(
     if let LifetimeKind::ImplicitObjectLifetimeDefault = lt.kind {
         err.span_suggestion_verbose(
             fn_return.span.shrink_to_hi(),
-            format!("{declare} the trait object {captures}, {explicit}",),
+            format!("{declare} the trait object {captures}, {explicit}"),
             &plus_lt,
             Applicability::MaybeIncorrect,
         );
@@ -710,7 +710,7 @@ impl<'a, 'tcx> TypeErrCtxt<'a, 'tcx> {
             predicate
         );
         let post = if post.len() > 1 || (post.len() == 1 && post[0].contains('\n')) {
-            format!(":\n{}", post.iter().map(|p| format!("- {p}")).collect::<Vec<_>>().join("\n"),)
+            format!(":\n{}", post.iter().map(|p| format!("- {p}")).collect::<Vec<_>>().join("\n"))
         } else if post.len() == 1 {
             format!(": `{}`", post[0])
         } else {
@@ -722,7 +722,7 @@ impl<'a, 'tcx> TypeErrCtxt<'a, 'tcx> {
                 err.note(format!("cannot satisfy `{predicate}`"));
             }
             (0, _, 1) => {
-                err.note(format!("{} in the `{}` crate{}", msg, crates[0], post,));
+                err.note(format!("{msg} in the `{}` crate{post}", crates[0]));
             }
             (0, _, _) => {
                 err.note(format!(
@@ -739,7 +739,7 @@ impl<'a, 'tcx> TypeErrCtxt<'a, 'tcx> {
             (_, 1, 1) => {
                 let span: MultiSpan = spans.into();
                 err.span_note(span, msg);
-                err.note(format!("and another `impl` found in the `{}` crate{}", crates[0], post,));
+                err.note(format!("and another `impl` found in the `{}` crate{post}", crates[0]));
             }
             _ => {
                 let span: MultiSpan = spans.into();
@@ -2263,7 +2263,7 @@ impl<'a, 'tcx> TypeErrCtxt<'a, 'tcx> {
         err.highlighted_span_help(
             self.tcx.def_span(def_id),
             vec![
-                StringPart::normal(format!("the trait `{trait_}` ",)),
+                StringPart::normal(format!("the trait `{trait_}` ")),
                 StringPart::highlighted("is"),
                 StringPart::normal(desc),
                 StringPart::highlighted(self_ty),
@@ -888,7 +888,7 @@ impl<'tcx> OnUnimplementedFormatString {
             }
         } else {
             let reported =
-                struct_span_code_err!(tcx.dcx(), self.span, E0231, "{}", e.description,)
+                struct_span_code_err!(tcx.dcx(), self.span, E0231, "{}", e.description)
                     .emit();
             result = Err(reported);
         }
@@ -312,7 +312,7 @@ impl<'cx, 'tcx> SelectionContext<'cx, 'tcx> {
             // `async`/`gen` constructs get lowered to a special kind of coroutine that
             // should *not* `impl Coroutine`.
             ty::Coroutine(did, ..) if self.tcx().is_general_coroutine(*did) => {
-                debug!(?self_ty, ?obligation, "assemble_coroutine_candidates",);
+                debug!(?self_ty, ?obligation, "assemble_coroutine_candidates");

                 candidates.vec.push(CoroutineCandidate);
             }
@@ -334,7 +334,7 @@ impl<'cx, 'tcx> SelectionContext<'cx, 'tcx> {
             // async constructs get lowered to a special kind of coroutine that
             // should directly `impl Future`.
             if self.tcx().coroutine_is_async(*did) {
-                debug!(?self_ty, ?obligation, "assemble_future_candidates",);
+                debug!(?self_ty, ?obligation, "assemble_future_candidates");

                 candidates.vec.push(FutureCandidate);
             }
@@ -352,7 +352,7 @@ impl<'cx, 'tcx> SelectionContext<'cx, 'tcx> {
         if let ty::Coroutine(did, ..) = self_ty.kind()
             && self.tcx().coroutine_is_gen(*did)
         {
-            debug!(?self_ty, ?obligation, "assemble_iterator_candidates",);
+            debug!(?self_ty, ?obligation, "assemble_iterator_candidates");

             candidates.vec.push(IteratorCandidate);
         }
@@ -378,7 +378,7 @@ impl<'cx, 'tcx> SelectionContext<'cx, 'tcx> {
             // gen constructs get lowered to a special kind of coroutine that
             // should directly `impl AsyncIterator`.
             if self.tcx().coroutine_is_async_gen(did) {
-                debug!(?self_ty, ?obligation, "assemble_iterator_candidates",);
+                debug!(?self_ty, ?obligation, "assemble_iterator_candidates");

                 // Can only confirm this candidate if we have constrained
                 // the `Yield` type to at least `Poll<Option<?0>>`..
@@ -1246,6 +1246,8 @@ impl<'cx, 'tcx> SelectionContext<'cx, 'tcx> {
         })
     }

+    /// This trait is indirectly exposed on stable, so do *not* extend the set of types that
+    /// implement the trait without FCP!
     fn confirm_bikeshed_guaranteed_no_drop_candidate(
         &mut self,
         obligation: &PolyTraitObligation<'tcx>,
@@ -1223,7 +1223,7 @@ impl<'cx, 'tcx> SelectionContext<'cx, 'tcx> {
             && self.match_fresh_trait_preds(stack.fresh_trait_pred, prev.fresh_trait_pred)
         })
         {
-            debug!("evaluate_stack --> unbound argument, recursive --> giving up",);
+            debug!("evaluate_stack --> unbound argument, recursive --> giving up");
             return Ok(EvaluatedToAmbigStackDependent);
         }
@@ -951,6 +951,34 @@ impl<'a, 'tcx> TypeVisitor<TyCtxt<'tcx>> for WfPredicates<'a, 'tcx> {
                     ty::Binder::dummy(ty::PredicateKind::DynCompatible(principal)),
                 ));
             }

+            if !t.has_escaping_bound_vars() {
+                for projection in data.projection_bounds() {
+                    let pred_binder = projection
+                        .with_self_ty(tcx, t)
+                        .map_bound(|p| {
+                            p.term.as_const().map(|ct| {
+                                let assoc_const_ty = tcx
+                                    .type_of(p.projection_term.def_id)
+                                    .instantiate(tcx, p.projection_term.args);
+                                ty::PredicateKind::Clause(ty::ClauseKind::ConstArgHasType(
+                                    ct,
+                                    assoc_const_ty,
+                                ))
+                            })
+                        })
+                        .transpose();
+                    if let Some(pred_binder) = pred_binder {
+                        self.out.push(traits::Obligation::with_depth(
+                            tcx,
+                            self.cause(ObligationCauseCode::WellFormed(None)),
+                            self.recursion_depth,
+                            self.param_env,
+                            pred_binder,
+                        ));
+                    }
+                }
+            }
         }

         // Inference variables are the complicated case, since we don't
@@ -31,7 +31,6 @@ impl<'l, 'f, T, U, const N: usize, F: FnMut(T) -> U> Drain<'l, 'f, T, N, F> {
 }

 /// See [`Drain::new`]; this is our fake iterator.
-#[rustc_const_unstable(feature = "array_try_map", issue = "79711")]
 #[unstable(feature = "array_try_map", issue = "79711")]
 pub(super) struct Drain<'l, 'f, T, const N: usize, F> {
     // FIXME(const-hack): This is essentially a slice::IterMut<'static>, replace when possible.
@@ -466,10 +466,34 @@ auto:

     - name: dist-apple-various
       env:
-        SCRIPT: ./x.py dist bootstrap --include-default-paths --host='' --target=aarch64-apple-ios,x86_64-apple-ios,aarch64-apple-ios-sim,aarch64-apple-ios-macabi,x86_64-apple-ios-macabi
+        # Build and distribute the standard library for these targets.
+        TARGETS: "aarch64-apple-ios,\
+          aarch64-apple-ios-sim,\
+          x86_64-apple-ios,\
+          aarch64-apple-ios-macabi,\
+          x86_64-apple-ios-macabi,\
+          aarch64-apple-tvos,\
+          aarch64-apple-tvos-sim,\
+          aarch64-apple-visionos,\
+          aarch64-apple-visionos-sim,\
+          aarch64-apple-watchos,\
+          aarch64-apple-watchos-sim"
+        SCRIPT: ./x.py dist bootstrap --include-default-paths --host='' --target=$TARGETS
         # Mac Catalyst cannot currently compile the sanitizer:
         # https://github.com/rust-lang/rust/issues/129069
-        RUST_CONFIGURE_ARGS: --enable-sanitizers --enable-profiler --set rust.jemalloc --set target.aarch64-apple-ios-macabi.sanitizers=false --set target.x86_64-apple-ios-macabi.sanitizers=false
+        #
+        # And tvOS and watchOS don't currently support the profiler runtime:
+        # https://github.com/rust-lang/rust/issues/152426
+        RUST_CONFIGURE_ARGS: >-
+          --enable-sanitizers
+          --enable-profiler
+          --set rust.jemalloc
+          --set target.aarch64-apple-ios-macabi.sanitizers=false
+          --set target.x86_64-apple-ios-macabi.sanitizers=false
+          --set target.aarch64-apple-tvos.profiler=false
+          --set target.aarch64-apple-tvos-sim.profiler=false
+          --set target.aarch64-apple-watchos.profiler=false
+          --set target.aarch64-apple-watchos-sim.profiler=false
        # Ensure that host tooling is built to support our minimum support macOS version.
        # FIXME(madsmtm): This might be redundant, as we're not building host tooling here (?)
        MACOSX_DEPLOYMENT_TARGET: 10.12
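The workflow change above folds the target list into a `TARGETS` env var (a YAML double-quoted scalar whose trailing-backslash continuations join with no separator) and then interpolates it into the `x.py` invocation. A minimal Python sketch of that expansion — illustrative only, not code from the workflow:

```python
# The target names are copied from the diff; the join/format logic here
# just models how the YAML scalar and $TARGETS substitution behave.
targets = [
    "aarch64-apple-ios", "aarch64-apple-ios-sim", "x86_64-apple-ios",
    "aarch64-apple-ios-macabi", "x86_64-apple-ios-macabi",
    "aarch64-apple-tvos", "aarch64-apple-tvos-sim",
    "aarch64-apple-visionos", "aarch64-apple-visionos-sim",
    "aarch64-apple-watchos", "aarch64-apple-watchos-sim",
]
# Backslash continuations in a YAML double-quoted scalar join the pieces
# directly, so TARGETS ends up as one comma-separated string.
TARGETS = ",".join(targets)
SCRIPT = f"./x.py dist bootstrap --include-default-paths --host='' --target={TARGETS}"
print(SCRIPT)
```

The six tvOS/visionOS/watchOS entries are the ones newly added by this rollup's tier-2 promotion.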
@@ -1 +1 @@
-Subproject commit 39aeceaa3aeab845bc4517e7a44e48727d3b9dbe
+Subproject commit 05d114287b7d6f6c9253d5242540f00fbd6172ab
@@ -194,9 +194,8 @@ resources maintained by the [Embedded Working Group] useful.

 #### The Embedded Rust Book

-[The Embedded Rust Book] is targeted at developers familiar with embedded
-development and familiar with Rust, but have not used Rust for embedded
-development.
+[The Embedded Rust Book] is targeted at developers who are familiar with embedded
+development and Rust, but who have not used Rust for embedded development.

 [The Embedded Rust Book]: embedded-book/index.html
 [Rust project]: https://www.rust-lang.org
@@ -1 +1 @@
-Subproject commit 050c002a360fa45b701ea34feed7a860dc8a41bf
+Subproject commit b8f254a991b8b7e8f704527f0d4f343a4697dfa9
@@ -41,7 +41,7 @@ Some things that might be helpful to you though:
 <input type="submit" value="Search" id="search-but">
 <!--
     Don't show the options by default,
-    since "From the Standary Library" doesn't work without JavaScript
+    since "From the Standard Library" doesn't work without JavaScript
 -->
 <fieldset id="search-from" style="display:none">
 <label><input name="from" value="library" type="radio"> From the Standard Library</label>
@@ -1 +1 @@
-Subproject commit 990819b86c22bbf538c0526f0287670f3dc1a67a
+Subproject commit addd0602c819b6526b9cc97653b0fadca395528c
@@ -148,6 +148,12 @@ target | std | notes
 [`aarch64-apple-ios`](platform-support/apple-ios.md) | ✓ | ARM64 iOS
 [`aarch64-apple-ios-macabi`](platform-support/apple-ios-macabi.md) | ✓ | Mac Catalyst on ARM64
 [`aarch64-apple-ios-sim`](platform-support/apple-ios.md) | ✓ | Apple iOS Simulator on ARM64
+[`aarch64-apple-tvos`](platform-support/apple-tvos.md) | ✓ | ARM64 tvOS
+[`aarch64-apple-tvos-sim`](platform-support/apple-tvos.md) | ✓ | ARM64 tvOS Simulator
+[`aarch64-apple-visionos`](platform-support/apple-visionos.md) | ✓ | ARM64 Apple visionOS
+[`aarch64-apple-visionos-sim`](platform-support/apple-visionos.md) | ✓ | ARM64 Apple visionOS Simulator
+[`aarch64-apple-watchos`](platform-support/apple-watchos.md) | ✓ | ARM64 Apple WatchOS
+[`aarch64-apple-watchos-sim`](platform-support/apple-watchos.md) | ✓ | ARM64 Apple WatchOS Simulator
 [`aarch64-linux-android`](platform-support/android.md) | ✓ | ARM64 Android
 [`aarch64-unknown-fuchsia`](platform-support/fuchsia.md) | ✓ | ARM64 Fuchsia
 [`aarch64-unknown-none`](platform-support/aarch64-unknown-none.md) | * | Bare ARM64, hardfloat
@@ -250,12 +256,6 @@ host tools.

 target | std | host | notes
 -------|:---:|:----:|-------
-[`aarch64-apple-tvos`](platform-support/apple-tvos.md) | ✓ | | ARM64 tvOS
-[`aarch64-apple-tvos-sim`](platform-support/apple-tvos.md) | ✓ | | ARM64 tvOS Simulator
-[`aarch64-apple-visionos`](platform-support/apple-visionos.md) | ✓ | | ARM64 Apple visionOS
-[`aarch64-apple-visionos-sim`](platform-support/apple-visionos.md) | ✓ | | ARM64 Apple visionOS Simulator
-[`aarch64-apple-watchos`](platform-support/apple-watchos.md) | ✓ | | ARM64 Apple WatchOS
-[`aarch64-apple-watchos-sim`](platform-support/apple-watchos.md) | ✓ | | ARM64 Apple WatchOS Simulator
 [`aarch64-kmc-solid_asp3`](platform-support/kmc-solid.md) | ✓ | | ARM64 SOLID with TOPPERS/ASP3
 [`aarch64-nintendo-switch-freestanding`](platform-support/aarch64-nintendo-switch-freestanding.md) | * | | ARM64 Nintendo Switch, Horizon
 [`aarch64-unknown-freebsd`](platform-support/freebsd.md) | ✓ | ✓ | ARM64 FreeBSD
@@ -2,10 +2,13 @@

 Apple tvOS targets.

-**Tier: 3**
+**Tier: 2 (without Host Tools)**

 - `aarch64-apple-tvos`: Apple tvOS on ARM64.
 - `aarch64-apple-tvos-sim`: Apple tvOS Simulator on ARM64.

+**Tier: 3**
+
 - `x86_64-apple-tvos`: Apple tvOS Simulator on x86_64.

 ## Target maintainers
@@ -52,16 +55,13 @@ The following APIs are currently known to have missing or incomplete support:

 ## Building the target

-The targets can be built by enabling them for a `rustc` build in
-`bootstrap.toml`, by adding, for example:
-
-```toml
-[build]
-build-stage = 1
-target = ["aarch64-apple-tvos", "aarch64-apple-tvos-sim"]
+The tier 2 targets are distributed through `rustup`, and can be installed using one of:
+
+```console
+$ rustup target add aarch64-apple-tvos
+$ rustup target add aarch64-apple-tvos-sim
 ```

-Using the unstable `-Zbuild-std` with a nightly Cargo may also work.
+See [the instructions for iOS](./apple-ios.md#building-the-target) for how to build the tier 3 target.

 ## Building Rust programs
@@ -2,7 +2,7 @@

 Apple visionOS / xrOS targets.

-**Tier: 3**
+**Tier: 2 (without Host Tools)**

 - `aarch64-apple-visionos`: Apple visionOS on arm64.
 - `aarch64-apple-visionos-sim`: Apple visionOS Simulator on arm64.
@@ -31,19 +31,12 @@ case `XROS_DEPLOYMENT_TARGET`.

 ## Building the target

-The targets can be built by enabling them for a `rustc` build in
-`bootstrap.toml`, by adding, for example:
-
-```toml
-[build]
-target = ["aarch64-apple-visionos", "aarch64-apple-visionos-sim"]
+The targets are distributed through `rustup`, and can be installed using one of:
+
+```console
+$ rustup target add aarch64-apple-visionos
+$ rustup target add aarch64-apple-visionos-sim
 ```

-Using the unstable `-Zbuild-std` with a nightly Cargo may also work.
-
-Note: Currently, a newer version of `libc` and `cc` may be required, this will
-be fixed in [#124560](https://github.com/rust-lang/rust/pull/124560).
-
 ## Building Rust programs

 See [the instructions for iOS](./apple-ios.md#building-rust-programs).
@@ -2,10 +2,13 @@

 Apple watchOS targets.

-**Tier: 3**
+**Tier: 2 (without Host Tools)**

 - `aarch64-apple-watchos`: Apple WatchOS on ARM64.
 - `aarch64-apple-watchos-sim`: Apple WatchOS Simulator on ARM64.

+**Tier: 3**
+
 - `x86_64-apple-watchos-sim`: Apple WatchOS Simulator on 64-bit x86.
 - `arm64_32-apple-watchos`: Apple WatchOS on Arm 64_32.
 - `armv7k-apple-watchos`: Apple WatchOS on Armv7k.
@@ -37,16 +40,13 @@ case `WATCHOS_DEPLOYMENT_TARGET`.

 ## Building the target

-The targets can be built by enabling them for a `rustc` build in
-`bootstrap.toml`, by adding, for example:
-
-```toml
-[build]
-build-stage = 1
-target = ["aarch64-apple-watchos", "aarch64-apple-watchos-sim"]
+The tier 2 targets are distributed through `rustup`, and can be installed using one of:
+
+```console
+$ rustup target add aarch64-apple-watchos
+$ rustup target add aarch64-apple-watchos-sim
 ```

-Using the unstable `-Zbuild-std` with a nightly Cargo may also work.
+See [the instructions for iOS](./apple-ios.md#building-the-target) for how to build the tier 3 targets.

 ## Building Rust programs
@@ -2,13 +2,13 @@
 #![stable(feature = "rust1", since = "1.0.0")]

 #[rustc_const_stable(feature = "foo", since = "3.3.3")]
-//~^ ERROR macros cannot have const stability attributes
+//~^ ERROR attribute cannot be used on macro defs
 macro_rules! foo {
     () => {};
 }

-#[rustc_const_unstable(feature = "bar", issue="none")]
-//~^ ERROR macros cannot have const stability attributes
+#[rustc_const_unstable(feature = "bar", issue = "none")]
+//~^ ERROR attribute cannot be used on macro defs
 macro_rules! bar {
     () => {};
 }
@@ -1,20 +1,18 @@
-error: macros cannot have const stability attributes
+error: `#[rustc_const_stable]` attribute cannot be used on macro defs
   --> $DIR/const-stability-on-macro.rs:4:1
    |
 LL | #[rustc_const_stable(feature = "foo", since = "3.3.3")]
-   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ invalid const stability attribute
-LL |
-LL | macro_rules! foo {
-   | ---------------- const stability attribute affects this macro
+   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+   |
+   = help: `#[rustc_const_stable]` can be applied to associated consts, constants, crates, functions, impl blocks, statics, traits, and use statements

-error: macros cannot have const stability attributes
+error: `#[rustc_const_unstable]` attribute cannot be used on macro defs
   --> $DIR/const-stability-on-macro.rs:10:1
    |
-LL | #[rustc_const_unstable(feature = "bar", issue="none")]
-   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ invalid const stability attribute
-LL |
-LL | macro_rules! bar {
-   | ---------------- const stability attribute affects this macro
+LL | #[rustc_const_unstable(feature = "bar", issue = "none")]
+   | ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+   |
+   = help: `#[rustc_const_unstable]` can be applied to associated consts, constants, crates, functions, impl blocks, statics, traits, and use statements

 error: aborting due to 2 previous errors
@@ -15,6 +15,7 @@ trait Trait {
 struct Hold<T: ?Sized>(T);

 trait Bound = Trait<Y = { Hold::<Self> }>;
+//~^ ERROR the constant `Hold::<Self>` is not of type `i32`

 fn main() {
     let _: dyn Bound; //~ ERROR associated constant binding in trait object type mentions `Self`
@@ -1,5 +1,17 @@
+error: the constant `Hold::<Self>` is not of type `i32`
+  --> $DIR/dyn-compat-const-projection-behind-trait-alias-mentions-self.rs:17:21
+   |
+LL | trait Bound = Trait<Y = { Hold::<Self> }>;
+   |                     ^^^^^^^^^^^^^^^^^^^^ expected `i32`, found struct constructor
+   |
+note: required by a const generic parameter in `Bound`
+  --> $DIR/dyn-compat-const-projection-behind-trait-alias-mentions-self.rs:17:21
+   |
+LL | trait Bound = Trait<Y = { Hold::<Self> }>;
+   |                     ^^^^^^^^^^^^^^^^^^^^ required by this const generic parameter in `Bound`
+
 error: associated constant binding in trait object type mentions `Self`
-  --> $DIR/dyn-compat-const-projection-behind-trait-alias-mentions-self.rs:20:12
+  --> $DIR/dyn-compat-const-projection-behind-trait-alias-mentions-self.rs:21:12
    |
 LL | trait Bound = Trait<Y = { Hold::<Self> }>;
    |                     -------------------- this binding mentions `Self`
@@ -7,5 +19,5 @@ LL | trait Bound = Trait<Y = { Hold::<Self> }>;
 LL |     let _: dyn Bound;
    |            ^^^^^^^^^ contains a mention of `Self`

-error: aborting due to 1 previous error
+error: aborting due to 2 previous errors
@@ -0,0 +1,11 @@
+//! Check associated const binding with escaping bound vars doesn't cause ICE
+//! (#151642)
+//@ check-pass
+
+#![feature(min_generic_const_args)]
+#![expect(incomplete_features)]
+
+trait Trait2<'a> { type const ASSOC: i32; }
+fn g(_: for<'a> fn(Box<dyn Trait2<'a, ASSOC = 10>>)) {}
+
+fn main() {}
@@ -0,0 +1,11 @@
+//! Check that we correctly handle associated const bindings
+//! in `impl Trait` where the RHS is a const param (#151642).
+
+#![feature(min_generic_const_args)]
+#![expect(incomplete_features)]
+
+trait Trait { type const CT: bool; }
+
+fn f<const N: i32>(_: impl Trait<CT = { N }>) {}
+//~^ ERROR the constant `N` is not of type `bool`
+fn main() {}
@@ -0,0 +1,14 @@
+error: the constant `N` is not of type `bool`
+  --> $DIR/wf-mismatch-1.rs:9:34
+   |
+LL | fn f<const N: i32>(_: impl Trait<CT = { N }>) {}
+   |                                  ^^^^^^^^^^ expected `bool`, found `i32`
+   |
+note: required by a const generic parameter in `f`
+  --> $DIR/wf-mismatch-1.rs:9:34
+   |
+LL | fn f<const N: i32>(_: impl Trait<CT = { N }>) {}
+   |                                  ^^^^^^^^^^ required by this const generic parameter in `f`
+
+error: aborting due to 1 previous error
+
@@ -0,0 +1,13 @@
+//! Check that we correctly handle associated const bindings
+//! in `dyn Trait` where the RHS is a const param (#151642).
+
+#![feature(min_generic_const_args)]
+#![expect(incomplete_features)]
+
+trait Trait { type const CT: bool; }
+
+fn f<const N: i32>() {
+    let _: dyn Trait<CT = { N }>;
+    //~^ ERROR the constant `N` is not of type `bool`
+}
+fn main() {}
@@ -0,0 +1,8 @@
+error: the constant `N` is not of type `bool`
+  --> $DIR/wf-mismatch-2.rs:10:12
+   |
+LL |     let _: dyn Trait<CT = { N }>;
+   |            ^^^^^^^^^^^^^^^^^^^^^ expected `bool`, found `i32`
+
+error: aborting due to 1 previous error
+
@@ -0,0 +1,17 @@
+//! Check that we correctly handle associated const bindings
+//! where the RHS is a normalizable const projection (#151642).
+
+#![feature(min_generic_const_args)]
+#![expect(incomplete_features)]
+
+trait Trait { type const CT: bool; }
+
+trait Bound { type const N: u32; }
+impl Bound for () { type const N: u32 = 0; }
+
+fn f() { let _: dyn Trait<CT = { <() as Bound>::N }>; }
+//~^ ERROR the constant `0` is not of type `bool`
+fn g(_: impl Trait<CT = { <() as Bound>::N }>) {}
+//~^ ERROR the constant `0` is not of type `bool`
+
+fn main() {}
@@ -0,0 +1,20 @@
+error: the constant `0` is not of type `bool`
+  --> $DIR/wf-mismatch-3.rs:14:20
+   |
+LL | fn g(_: impl Trait<CT = { <() as Bound>::N }>) {}
+   |                    ^^^^^^^^^^^^^^^^^^^^^^^^^ expected `bool`, found `u32`
+   |
+note: required by a const generic parameter in `g`
+  --> $DIR/wf-mismatch-3.rs:14:20
+   |
+LL | fn g(_: impl Trait<CT = { <() as Bound>::N }>) {}
+   |                    ^^^^^^^^^^^^^^^^^^^^^^^^^ required by this const generic parameter in `g`
+
+error: the constant `0` is not of type `bool`
+  --> $DIR/wf-mismatch-3.rs:12:17
+   |
+LL | fn f() { let _: dyn Trait<CT = { <() as Bound>::N }>; }
+   |                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ expected `bool`, found `u32`
+
+error: aborting due to 2 previous errors
+
@@ -1,35 +1,33 @@
 //@ check-pass

 #![feature(generic_const_items, min_generic_const_args)]
-#![feature(adt_const_params)]
-#![allow(incomplete_features)]
+#![feature(adt_const_params, unsized_const_params, generic_const_parameter_types)]
+#![expect(incomplete_features)]

+use std::marker::{ConstParamTy, ConstParamTy_};
+
 trait Owner {
     type const C<const N: u32>: u32;
     type const K<const N: u32>: u32;
-    // #[type_const]
-    // const Q<T>: Maybe<T>;
+    type const Q<T: ConstParamTy_>: Maybe<T>;
 }

 impl Owner for () {
     type const C<const N: u32>: u32 = N;
     type const K<const N: u32>: u32 = const { 99 + 1 };
-    // FIXME(mgca): re-enable once we properly support ctors and generics on paths
-    // #[type_const]
-    // const Q<T>: Maybe<T> = Maybe::Nothing;
+    type const Q<T: ConstParamTy_>: Maybe<T> = Maybe::Nothing::<T>;
 }

 fn take0<const N: u32>(_: impl Owner<C<N> = { N }>) {}
 fn take1(_: impl Owner<K<99> = 100>) {}
-// FIXME(mgca): re-enable once we properly support ctors and generics on paths
-// fn take2(_: impl Owner<Q<()> = { Maybe::Just(()) }>) {}
+fn take2(_: impl Owner<Q<()> = { Maybe::Just::<()>(()) }>) {}

 fn main() {
     take0::<128>(());
     take1(());
 }

-#[derive(PartialEq, Eq, std::marker::ConstParamTy)]
+#[derive(PartialEq, Eq, ConstParamTy)]
 enum Maybe<T> {
     Nothing,
     Just(T),