Auto merge of #149114 - BoxyUwU:mgca_adt_exprs, r=lcnr

MGCA: Support struct expressions without intermediary anon consts

r? oli-obk

tracking issue: rust-lang/rust#132980

Fixes rust-lang/rust#127972
Fixes rust-lang/rust#137888
Fixes rust-lang/rust#140275

These are fixed due to delaying a bug instead of ICEing in HIR ty lowering.

### High level goal

Under `feature(min_generic_const_args)` this PR adds another kind of const argument: a struct/variant construction const arg kind. We represent the values of the fields as themselves being const arguments, which allows uses of generic parameters subject to the existing restrictions of `min_generic_const_args`:
```rust
fn foo<const N: Option<u32>>() {}

trait Trait {
    #[type_const]
    const ASSOC: usize;
}

fn bar<T: Trait, const N: u32>() {
    // the initializer of `_0` is a `N` which is a legal const argument
    // so this is ok.
    foo::<{ Some::<u32> { 0: N } }>();

    // this is allowed as mgca supports uses of assoc consts in the
    // type system. ie `<T as Trait>::ASSOC` is a legal const argument
    foo::<{ Some::<u32> { 0: <T as Trait>::ASSOC } }>();

    // this on the other hand is not allowed as `N + 1` is not a legal
    // const argument
    foo::<{ Some::<u32> { 0: N + 1 } }>();
}
```

This PR does not support uses of const ctors, e.g. `None`, and also does not support tuple constructors, e.g. `Some(N)`. I believe it would not be difficult to add support for such functionality after this PR lands, so I have left it out deliberately.

We currently require that all generic parameters on the type being constructed be explicitly specified. I haven't really looked into why that is, but it doesn't seem desirable to me, as it should be legal to write `Some { ... }` in a const argument inside of a body and have that desugar to `Some::<_> { ... }`. Regardless, this can definitely be a follow-up PR; I assume this is for some underlying consistency with the way that elided args are handled with type paths elsewhere.
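Concretely, the restriction means something like the following (a nightly-only sketch with the feature gate enabled; the exact diagnostic wording may differ):

```rust
#![feature(min_generic_const_args)]

fn foo<const N: Option<u32>>() {}

fn bar<const M: u32>() {
    // OK: all generic arguments on `Some` are explicit.
    foo::<{ Some::<u32> { 0: M } }>();

    // Currently rejected: the generic arguments on `Some` are elided,
    // even though `Some::<_> { 0: M }` would be the natural desugaring.
    foo::<{ Some { 0: M } }>();
}
```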

This PR's implementation of struct expression support is somewhat incomplete. We don't handle `Foo { ..expr }` at all and aren't handling privacy/stability. The printing of `ConstArgKind::Struct` HIR nodes doesn't really exist either :')

I've deliberately kept the implementation here somewhat incomplete, as I think a number of these issues are actually quite small and self-contained after this PR lands, and I'm hoping they could be a good set of issues to mentor newer contributors on 🤔 I just wanted the "bare minimum" required to actually demonstrate that the previous changes are "necessary".

### `ValTree` now recurses through `ty::Const`

In order to represent struct/variant construction in `ty::Const` without going through an anon const, we would need to introduce some new `ConstKind` variant, say a hypothetical `ConstKind::ADT(Ty<'tcx>, List<Const<'tcx>>)`.

This variant would represent things the same way that `ValTree` does: the first element representing the `VariantIdx` (if it's an enum), followed by a list of field values in definition order.

This *could* work but there are a few reasons why it's suboptimal.

First, it would mean we have a second kind of `Const` that can be normalized. Right now only `ConstKind::Unevaluated` possibly needs normalization; similarly, with `TyKind` we *only* have `TyKind::Alias`. If we introduced `ConstKind::ADT`, it would eventually need to be normalized to a `ConstKind::Value`. This feels to me like it has the potential to cause bugs in the long run, where some code paths only handle `ConstKind::Unevaluated`.

Secondly, it would make type equality/inference kind of... weird. It's desirable for `Some { 0: ?x } eq Some { 0: 1_u32 }` to result in `?x = 1_u32`. I can't see a way for this to work with this `ConstKind::ADT` design under the current architecture for how we represent types/consts and generally do equality operations.

We would need to wholly special-case these two variants in type equality and have a custom recursive walker separate from the existing architecture for doing type equality. It would also be somewhat unique in that it would be a non-rigid `ty::Const` (it can be normalized further later on in type inference) while also having somewhat "structural" equality behaviour.

Lastly, it's worth noting that it's not *actually* `ConstKind::ADT` that we want. It's desirable to extend this setup to also support tuples and arrays, or even references if we wind up supporting those in const generics. Therefore this isn't really `ConstKind::ADT` but a more general `ConstKind::ShallowValue` or something to that effect; it represents at least one "layer" of a type's value :')

Instead of making that implementation choice, we change `ValTree::Branch`:
```rust
enum ValTree<'tcx> {
    Leaf(ScalarInt),
    // Before this PR:
    Branch(Box<[ValTree<'tcx>]>),
    // After this PR
    Branch(Box<[Const<'tcx>]>),
}
```

The representation of these so-called "shallow values" is now the same as the representation of an *entire* value. The desired inference/type-equality behaviour just falls right out of this, and we don't wind up with these shallow values actually being non-rigid. `ValTree` *already* supports references/tuples/arrays, so we can handle those just fine.
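As a toy model of the encoding (illustrative only; `ToyValTree` and `encode_some_u32` are made up here, not rustc's actual types), the valtree for `Some(1u32)` is a branch whose first element is the variant index and whose remaining elements are the fields in definition order; after this PR, those branch elements are `ty::Const`s in rustc rather than bare valtrees:

```rust
// Toy model of a valtree. Before this PR `Branch` held valtrees
// directly; in rustc it now holds `Box<[Const<'tcx>]>`, but the shape
// of the encoding is the same either way.
#[derive(Debug, Clone, PartialEq)]
enum ToyValTree {
    Leaf(u128),              // stands in for `ScalarInt`
    Branch(Vec<ToyValTree>), // stands in for the list of children
}

// Encode `Some::<u32>(x)`: for enums the variant index (`None` = 0,
// `Some` = 1) is prepended before the fields in definition order.
fn encode_some_u32(x: u32) -> ToyValTree {
    ToyValTree::Branch(vec![ToyValTree::Leaf(1), ToyValTree::Leaf(x as u128)])
}

fn main() {
    let v = encode_some_u32(1);
    assert_eq!(
        v,
        ToyValTree::Branch(vec![ToyValTree::Leaf(1), ToyValTree::Leaf(1)])
    );
    println!("{v:?}");
}
```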

I think in the future it might be worth considering inlining `ValTree` into `ty::ConstKind`, e.g.:
```rust
enum ConstKind {
    Scalar(Ty<'tcx>, ScalarInt),
    ShallowValue(Ty<'tcx>, List<Const<'tcx>>),
    Unevaluated(UnevaluatedConst<'tcx>),
    ...
}
```

This would imply that the usage of `ValTree`s in patterns would now go through `ty::Const`, but they already kind of do anyway, and I think that's probably okay in the long run. It would also mean the set of things we *could* represent in const patterns grows, which may be desirable in the long run for supporting things such as const patterns of const generic parameters.

Regardless, this PR doesn't actually inline `ValTree` into `ty::ConstKind`; it only changes `Branch` to recurse through `Const`. This change could be split out of this PR if desired.

I'm not sure if there'll be a perf impact from this change. It's somewhat plausible as now all const pattern values that have nesting will be interning a lot more `Ty`s. We shall see :>

### Forbidding generic parameters under mgca

Under mgca we now allow all const arguments to resolve paths to generic parameters. We then *later* validate that the const arg is actually allowed to access generic parameters, if it did wind up resolving to any.

This winds up being a lot simpler to implement than trying to make name resolution "keep track" of whether we're inside a non-anon-const const arg and then, on encountering a `const { ... }`, stop allowing resolution to generic parameters.

It's also somewhat in line with what we'll need for `feature(generic_const_args)`, where we'll want to decide whether an anon const should have any generic parameters based on whether any generic parameters were used syntactically. Though that design is entirely hypothetical at this point :)
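As a sketch of the resulting behaviour (nightly-only with the feature gate; the error message is taken from the new check in this PR, though the exact positions it fires in may differ):

```rust
#![feature(min_generic_const_args)]

fn takes<const N: usize>() {}

fn f<const N: usize>() {
    // OK: a bare generic parameter is a legal const argument, and the
    // later validation pass permits it here.
    takes::<N>();

    // Error: inside the `const { ... }` block we're back in an anon
    // const, so the validation pass reports "generic parameters may
    // not be used in const operations".
    takes::<{ const { N } }>();
}
```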

### Followup Work

- Make HIR ty lowering check whether lowering generic parameters is supported and, if not, lower to an error type/const. This should make the code cleaner, fix some other bugs, and maybe(?) recover perf, since we'll be accessing fewer queries, which I think is part of the perf regression of this PR
- Make the `ValTree` setup less scuffed. We should find a new name for `ConstKind::Value` and the `Val` part of `ValTree` and `ty::Value`, as they no longer correspond to a fully normalized structure. It may also be worth looking into inlining `ValTreeKind` into `ConstKind`, or at least into `ty::Value` or sth 🤔
- Support tuple constructors and const constructors, not just struct expressions.
- Reduce code duplication between HIR ty lowering's handling of struct expressions and HIR typeck's handling of struct expressions
- Try to fix perf https://github.com/rust-lang/rust/pull/149114#issuecomment-3668038853. Maybe this will clear up once we clean up `ValTree` a bit and stop doing double interning and whatnot
This commit is contained in:
bors 2025-12-23 23:53:55 +00:00
commit 8796b3b8b4
80 changed files with 1163 additions and 450 deletions


@ -281,6 +281,13 @@ impl<'a, 'hir> Visitor<'hir> for NodeCollector<'a, 'hir> {
});
}
fn visit_const_arg_expr_field(&mut self, field: &'hir ConstArgExprField<'hir>) {
self.insert(field.span, field.hir_id, Node::ConstArgExprField(field));
self.with_parent(field.hir_id, |this| {
intravisit::walk_const_arg_expr_field(this, field);
})
}
fn visit_stmt(&mut self, stmt: &'hir Stmt<'hir>) {
self.insert(stmt.span, stmt.hir_id, Node::Stmt(stmt));


@ -2410,6 +2410,47 @@ impl<'a, 'hir> LoweringContext<'a, 'hir> {
ConstArg { hir_id: self.next_id(), kind: hir::ConstArgKind::Path(qpath) }
}
ExprKind::Struct(se) => {
let path = self.lower_qpath(
expr.id,
&se.qself,
&se.path,
// FIXME(mgca): we may want this to be `Optional` instead, but
// we would also need to make sure that HIR ty lowering errors
// when these paths wind up in signatures.
ParamMode::Explicit,
AllowReturnTypeNotation::No,
ImplTraitContext::Disallowed(ImplTraitPosition::Path),
None,
);
let fields = self.arena.alloc_from_iter(se.fields.iter().map(|f| {
let hir_id = self.lower_node_id(f.id);
// FIXME(mgca): This might result in lowering attributes that
// then go unused as the `Target::ExprField` is not actually
// corresponding to `Node::ExprField`.
self.lower_attrs(hir_id, &f.attrs, f.span, Target::ExprField);
let expr = if let ExprKind::ConstBlock(anon_const) = &f.expr.kind {
let def_id = self.local_def_id(anon_const.id);
let def_kind = self.tcx.def_kind(def_id);
assert_eq!(DefKind::AnonConst, def_kind);
self.lower_anon_const_to_const_arg_direct(anon_const)
} else {
self.lower_expr_to_const_arg_direct(&f.expr)
};
&*self.arena.alloc(hir::ConstArgExprField {
hir_id,
field: self.lower_ident(f.ident),
expr: self.arena.alloc(expr),
span: self.lower_span(f.span),
})
}));
ConstArg { hir_id: self.next_id(), kind: hir::ConstArgKind::Struct(path, fields) }
}
ExprKind::Underscore => ConstArg {
hir_id: self.lower_node_id(expr.id),
kind: hir::ConstArgKind::Infer(expr.span, ()),


@ -130,7 +130,7 @@ pub(super) fn codegen_simd_intrinsic_call<'tcx>(
return;
}
let idx = generic_args[2].expect_const().to_value().valtree.unwrap_branch();
let idx = generic_args[2].expect_const().to_branch();
assert_eq!(x.layout(), y.layout());
let layout = x.layout();
@ -143,7 +143,7 @@ pub(super) fn codegen_simd_intrinsic_call<'tcx>(
let total_len = lane_count * 2;
let indexes = idx.iter().map(|idx| idx.unwrap_leaf().to_u32()).collect::<Vec<u32>>();
let indexes = idx.iter().map(|idx| idx.to_leaf().to_u32()).collect::<Vec<u32>>();
for &idx in &indexes {
assert!(u64::from(idx) < total_len, "idx {} out of range 0..{}", idx, total_len);
@ -961,9 +961,8 @@ pub(super) fn codegen_simd_intrinsic_call<'tcx>(
let lane_clif_ty = fx.clif_type(val_lane_ty).unwrap();
let ptr_val = ptr.load_scalar(fx);
let alignment = generic_args[3].expect_const().to_value().valtree.unwrap_branch()[0]
.unwrap_leaf()
.to_simd_alignment();
let alignment =
generic_args[3].expect_const().to_branch()[0].to_leaf().to_simd_alignment();
let memflags = match alignment {
SimdAlign::Unaligned => MemFlags::new().with_notrap(),
@ -1006,9 +1005,8 @@ pub(super) fn codegen_simd_intrinsic_call<'tcx>(
let lane_clif_ty = fx.clif_type(val_lane_ty).unwrap();
let ret_lane_layout = fx.layout_of(ret_lane_ty);
let alignment = generic_args[3].expect_const().to_value().valtree.unwrap_branch()[0]
.unwrap_leaf()
.to_simd_alignment();
let alignment =
generic_args[3].expect_const().to_branch()[0].to_leaf().to_simd_alignment();
let memflags = match alignment {
SimdAlign::Unaligned => MemFlags::new().with_notrap(),
@ -1059,9 +1057,8 @@ pub(super) fn codegen_simd_intrinsic_call<'tcx>(
let ret_lane_layout = fx.layout_of(ret_lane_ty);
let ptr_val = ptr.load_scalar(fx);
let alignment = generic_args[3].expect_const().to_value().valtree.unwrap_branch()[0]
.unwrap_leaf()
.to_simd_alignment();
let alignment =
generic_args[3].expect_const().to_branch()[0].to_leaf().to_simd_alignment();
let memflags = match alignment {
SimdAlign::Unaligned => MemFlags::new().with_notrap(),


@ -345,7 +345,7 @@ impl<'ll, 'tcx> IntrinsicCallBuilderMethods<'tcx> for Builder<'_, 'll, 'tcx> {
_ => bug!(),
};
let ptr = args[0].immediate();
let locality = fn_args.const_at(1).to_value().valtree.unwrap_leaf().to_i32();
let locality = fn_args.const_at(1).to_leaf().to_i32();
self.call_intrinsic(
"llvm.prefetch",
&[self.val_ty(ptr)],
@ -1527,7 +1527,7 @@ fn generic_simd_intrinsic<'ll, 'tcx>(
}
if name == sym::simd_shuffle_const_generic {
let idx = fn_args[2].expect_const().to_value().valtree.unwrap_branch();
let idx = fn_args[2].expect_const().to_branch();
let n = idx.len() as u64;
let (out_len, out_ty) = require_simd!(ret_ty, SimdReturn);
@ -1546,7 +1546,7 @@ fn generic_simd_intrinsic<'ll, 'tcx>(
.iter()
.enumerate()
.map(|(arg_idx, val)| {
let idx = val.unwrap_leaf().to_i32();
let idx = val.to_leaf().to_i32();
if idx >= i32::try_from(total_len).unwrap() {
bx.sess().dcx().emit_err(InvalidMonomorphization::SimdIndexOutOfBounds {
span,
@ -1958,9 +1958,7 @@ fn generic_simd_intrinsic<'ll, 'tcx>(
// those lanes whose `mask` bit is enabled.
// The memory addresses corresponding to the “off” lanes are not accessed.
let alignment = fn_args[3].expect_const().to_value().valtree.unwrap_branch()[0]
.unwrap_leaf()
.to_simd_alignment();
let alignment = fn_args[3].expect_const().to_branch()[0].to_leaf().to_simd_alignment();
// The element type of the "mask" argument must be a signed integer type of any width
let mask_ty = in_ty;
@ -2053,9 +2051,7 @@ fn generic_simd_intrinsic<'ll, 'tcx>(
// those lanes whose `mask` bit is enabled.
// The memory addresses corresponding to the “off” lanes are not accessed.
let alignment = fn_args[3].expect_const().to_value().valtree.unwrap_branch()[0]
.unwrap_leaf()
.to_simd_alignment();
let alignment = fn_args[3].expect_const().to_branch()[0].to_leaf().to_simd_alignment();
// The element type of the "mask" argument must be a signed integer type of any width
let mask_ty = in_ty;


@ -77,22 +77,21 @@ impl<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>> FunctionCx<'a, 'tcx, Bx> {
.flatten()
.map(|val| {
// A SIMD type has a single field, which is an array.
let fields = val.unwrap_branch();
let fields = val.to_branch();
assert_eq!(fields.len(), 1);
let array = fields[0].unwrap_branch();
let array = fields[0].to_branch();
// Iterate over the array elements to obtain the values in the vector.
let values: Vec<_> = array
.iter()
.map(|field| {
if let Some(prim) = field.try_to_scalar() {
let layout = bx.layout_of(field_ty);
let BackendRepr::Scalar(scalar) = layout.backend_repr else {
bug!("from_const: invalid ByVal layout: {:#?}", layout);
};
bx.scalar_to_backend(prim, scalar, bx.immediate_backend_type(layout))
} else {
let Some(prim) = field.try_to_scalar() else {
bug!("field is not a scalar {:?}", field)
}
};
let layout = bx.layout_of(field_ty);
let BackendRepr::Scalar(scalar) = layout.backend_repr else {
bug!("from_const: invalid ByVal layout: {:#?}", layout);
};
bx.scalar_to_backend(prim, scalar, bx.immediate_backend_type(layout))
})
.collect();
bx.const_vector(&values)


@ -102,7 +102,7 @@ impl<'a, 'tcx, Bx: BuilderMethods<'a, 'tcx>> FunctionCx<'a, 'tcx, Bx> {
};
let parse_atomic_ordering = |ord: ty::Value<'tcx>| {
let discr = ord.valtree.unwrap_branch()[0].unwrap_leaf();
let discr = ord.to_branch()[0].to_leaf();
discr.to_atomic_ordering()
};


@ -36,13 +36,17 @@ fn branches<'tcx>(
// For enums, we prepend their variant index before the variant's fields so we can figure out
// the variant again when just seeing a valtree.
if let Some(variant) = variant {
branches.push(ty::ValTree::from_scalar_int(*ecx.tcx, variant.as_u32().into()));
branches.push(ty::Const::new_value(
*ecx.tcx,
ty::ValTree::from_scalar_int(*ecx.tcx, variant.as_u32().into()),
ecx.tcx.types.u32,
));
}
for i in 0..field_count {
let field = ecx.project_field(&place, FieldIdx::from_usize(i)).unwrap();
let valtree = const_to_valtree_inner(ecx, &field, num_nodes)?;
branches.push(valtree);
branches.push(ty::Const::new_value(*ecx.tcx, valtree, field.layout.ty));
}
// Have to account for ZSTs here
@ -65,7 +69,7 @@ fn slice_branches<'tcx>(
for i in 0..n {
let place_elem = ecx.project_index(place, i).unwrap();
let valtree = const_to_valtree_inner(ecx, &place_elem, num_nodes)?;
elems.push(valtree);
elems.push(ty::Const::new_value(*ecx.tcx, valtree, place_elem.layout.ty));
}
Ok(ty::ValTree::from_branches(*ecx.tcx, elems))
@ -200,8 +204,8 @@ fn reconstruct_place_meta<'tcx>(
&ObligationCause::dummy(),
|ty| ty,
|| {
let branches = last_valtree.unwrap_branch();
last_valtree = *branches.last().unwrap();
let branches = last_valtree.to_branch();
last_valtree = branches.last().unwrap().to_value().valtree;
debug!(?branches, ?last_valtree);
},
);
@ -212,7 +216,7 @@ fn reconstruct_place_meta<'tcx>(
};
// Get the number of elements in the unsized field.
let num_elems = last_valtree.unwrap_branch().len();
let num_elems = last_valtree.to_branch().len();
MemPlaceMeta::Meta(Scalar::from_target_usize(num_elems as u64, &tcx))
}
@ -274,7 +278,7 @@ pub fn valtree_to_const_value<'tcx>(
mir::ConstValue::ZeroSized
}
ty::Bool | ty::Int(_) | ty::Uint(_) | ty::Float(_) | ty::Char | ty::RawPtr(_, _) => {
mir::ConstValue::Scalar(Scalar::Int(cv.valtree.unwrap_leaf()))
mir::ConstValue::Scalar(Scalar::Int(cv.to_leaf()))
}
ty::Pat(ty, _) => {
let cv = ty::Value { valtree: cv.valtree, ty };
@ -301,12 +305,13 @@ pub fn valtree_to_const_value<'tcx>(
|| matches!(cv.ty.kind(), ty::Adt(def, _) if def.is_struct()))
{
// A Scalar tuple/struct; we can avoid creating an allocation.
let branches = cv.valtree.unwrap_branch();
let branches = cv.to_branch();
// Find the non-ZST field. (There can be aligned ZST!)
for (i, &inner_valtree) in branches.iter().enumerate() {
let field = layout.field(&LayoutCx::new(tcx, typing_env), i);
if !field.is_zst() {
let cv = ty::Value { valtree: inner_valtree, ty: field.ty };
let cv =
ty::Value { valtree: inner_valtree.to_value().valtree, ty: field.ty };
return valtree_to_const_value(tcx, typing_env, cv);
}
}
@ -381,7 +386,7 @@ fn valtree_into_mplace<'tcx>(
// Zero-sized type, nothing to do.
}
ty::Bool | ty::Int(_) | ty::Uint(_) | ty::Float(_) | ty::Char | ty::RawPtr(..) => {
let scalar_int = valtree.unwrap_leaf();
let scalar_int = valtree.to_leaf();
debug!("writing trivial valtree {:?} to place {:?}", scalar_int, place);
ecx.write_immediate(Immediate::Scalar(scalar_int.into()), place).unwrap();
}
@ -391,13 +396,13 @@ fn valtree_into_mplace<'tcx>(
ecx.write_immediate(imm, place).unwrap();
}
ty::Adt(_, _) | ty::Tuple(_) | ty::Array(_, _) | ty::Str | ty::Slice(_) => {
let branches = valtree.unwrap_branch();
let branches = valtree.to_branch();
// Need to downcast place for enums
let (place_adjusted, branches, variant_idx) = match ty.kind() {
ty::Adt(def, _) if def.is_enum() => {
// First element of valtree corresponds to variant
let scalar_int = branches[0].unwrap_leaf();
let scalar_int = branches[0].to_leaf();
let variant_idx = VariantIdx::from_u32(scalar_int.to_u32());
let variant = def.variant(variant_idx);
debug!(?variant);
@ -425,7 +430,7 @@ fn valtree_into_mplace<'tcx>(
};
debug!(?place_inner);
valtree_into_mplace(ecx, &place_inner, *inner_valtree);
valtree_into_mplace(ecx, &place_inner, inner_valtree.to_value().valtree);
dump_place(ecx, &place_inner);
}


@ -545,7 +545,7 @@ impl<'tcx, M: Machine<'tcx>> InterpCx<'tcx, M> {
let (right, right_len) = self.project_to_simd(&args[1])?;
let (dest, dest_len) = self.project_to_simd(&dest)?;
let index = generic_args[2].expect_const().to_value().valtree.unwrap_branch();
let index = generic_args[2].expect_const().to_branch();
let index_len = index.len();
assert_eq!(left_len, right_len);
@ -553,7 +553,7 @@ impl<'tcx, M: Machine<'tcx>> InterpCx<'tcx, M> {
for i in 0..dest_len {
let src_index: u64 =
index[usize::try_from(i).unwrap()].unwrap_leaf().to_u32().into();
index[usize::try_from(i).unwrap()].to_leaf().to_u32().into();
let dest = self.project_index(&dest, i)?;
let val = if src_index < left_len {
@ -657,9 +657,7 @@ impl<'tcx, M: Machine<'tcx>> InterpCx<'tcx, M> {
self.check_simd_ptr_alignment(
ptr,
dest_layout,
generic_args[3].expect_const().to_value().valtree.unwrap_branch()[0]
.unwrap_leaf()
.to_simd_alignment(),
generic_args[3].expect_const().to_branch()[0].to_leaf().to_simd_alignment(),
)?;
for i in 0..dest_len {
@ -689,9 +687,7 @@ impl<'tcx, M: Machine<'tcx>> InterpCx<'tcx, M> {
self.check_simd_ptr_alignment(
ptr,
args[2].layout,
generic_args[3].expect_const().to_value().valtree.unwrap_branch()[0]
.unwrap_leaf()
.to_simd_alignment(),
generic_args[3].expect_const().to_branch()[0].to_leaf().to_simd_alignment(),
)?;
for i in 0..vals_len {


@ -494,6 +494,7 @@ impl<'hir, Unambig> ConstArg<'hir, Unambig> {
pub fn span(&self) -> Span {
match self.kind {
ConstArgKind::Struct(path, _) => path.span(),
ConstArgKind::Path(path) => path.span(),
ConstArgKind::Anon(anon) => anon.span,
ConstArgKind::Error(span, _) => span,
@ -513,6 +514,8 @@ pub enum ConstArgKind<'hir, Unambig = ()> {
/// However, in the future, we'll be using it for all of those.
Path(QPath<'hir>),
Anon(&'hir AnonConst),
/// Represents construction of struct/struct variants
Struct(QPath<'hir>, &'hir [&'hir ConstArgExprField<'hir>]),
/// Error const
Error(Span, ErrorGuaranteed),
/// This variant is not always used to represent inference consts, sometimes
@ -520,6 +523,14 @@ pub enum ConstArgKind<'hir, Unambig = ()> {
Infer(Span, Unambig),
}
#[derive(Clone, Copy, Debug, HashStable_Generic)]
pub struct ConstArgExprField<'hir> {
pub hir_id: HirId,
pub span: Span,
pub field: Ident,
pub expr: &'hir ConstArg<'hir>,
}
#[derive(Clone, Copy, Debug, HashStable_Generic)]
pub struct InferArg {
#[stable_hasher(ignore)]
@ -4714,6 +4725,7 @@ pub enum Node<'hir> {
ConstArg(&'hir ConstArg<'hir>),
Expr(&'hir Expr<'hir>),
ExprField(&'hir ExprField<'hir>),
ConstArgExprField(&'hir ConstArgExprField<'hir>),
Stmt(&'hir Stmt<'hir>),
PathSegment(&'hir PathSegment<'hir>),
Ty(&'hir Ty<'hir>),
@ -4773,6 +4785,7 @@ impl<'hir> Node<'hir> {
Node::AssocItemConstraint(c) => Some(c.ident),
Node::PatField(f) => Some(f.ident),
Node::ExprField(f) => Some(f.ident),
Node::ConstArgExprField(f) => Some(f.field),
Node::PreciseCapturingNonLifetimeArg(a) => Some(a.ident),
Node::Param(..)
| Node::AnonConst(..)


@ -396,6 +396,9 @@ pub trait Visitor<'v>: Sized {
fn visit_expr_field(&mut self, field: &'v ExprField<'v>) -> Self::Result {
walk_expr_field(self, field)
}
fn visit_const_arg_expr_field(&mut self, field: &'v ConstArgExprField<'v>) -> Self::Result {
walk_const_arg_expr_field(self, field)
}
fn visit_pattern_type_pattern(&mut self, p: &'v TyPat<'v>) -> Self::Result {
walk_ty_pat(self, p)
}
@ -954,6 +957,17 @@ pub fn walk_expr_field<'v, V: Visitor<'v>>(visitor: &mut V, field: &'v ExprField
try_visit!(visitor.visit_ident(*ident));
visitor.visit_expr(*expr)
}
pub fn walk_const_arg_expr_field<'v, V: Visitor<'v>>(
visitor: &mut V,
field: &'v ConstArgExprField<'v>,
) -> V::Result {
let ConstArgExprField { hir_id, field, expr, span: _ } = field;
try_visit!(visitor.visit_id(*hir_id));
try_visit!(visitor.visit_ident(*field));
visitor.visit_const_arg_unambig(*expr)
}
/// We track whether an infer var is from a [`Ty`], [`ConstArg`], or [`GenericArg`] so that
/// HIR visitors overriding [`Visitor::visit_infer`] can determine what kind of infer is being visited
pub enum InferKind<'hir> {
@ -1068,6 +1082,15 @@ pub fn walk_const_arg<'v, V: Visitor<'v>>(
let ConstArg { hir_id, kind } = const_arg;
try_visit!(visitor.visit_id(*hir_id));
match kind {
ConstArgKind::Struct(qpath, field_exprs) => {
try_visit!(visitor.visit_qpath(qpath, *hir_id, qpath.span()));
for field_expr in *field_exprs {
try_visit!(visitor.visit_const_arg_expr_field(field_expr));
}
V::Result::output()
}
ConstArgKind::Path(qpath) => visitor.visit_qpath(qpath, *hir_id, qpath.span()),
ConstArgKind::Anon(anon) => visitor.visit_anon_const(*anon),
ConstArgKind::Error(_, _) => V::Result::output(), // errors and spans are not important


@ -2263,12 +2263,120 @@ impl<'tcx> dyn HirTyLowerer<'tcx> + '_ {
)
.unwrap_or_else(|guar| Const::new_error(tcx, guar))
}
hir::ConstArgKind::Anon(anon) => self.lower_anon_const(anon),
hir::ConstArgKind::Struct(qpath, inits) => {
self.lower_const_arg_struct(hir_id, qpath, inits, const_arg.span())
}
hir::ConstArgKind::Anon(anon) => self.lower_const_arg_anon(anon),
hir::ConstArgKind::Infer(span, ()) => self.ct_infer(None, span),
hir::ConstArgKind::Error(_, e) => ty::Const::new_error(tcx, e),
}
}
fn lower_const_arg_struct(
&self,
hir_id: HirId,
qpath: hir::QPath<'tcx>,
inits: &'tcx [&'tcx hir::ConstArgExprField<'tcx>],
span: Span,
) -> Const<'tcx> {
// FIXME(mgca): try to deduplicate this function with
// the equivalent HIR typeck logic.
let tcx = self.tcx();
let non_adt_or_variant_res = || {
let e = tcx.dcx().span_err(span, "struct expression with invalid base path");
ty::Const::new_error(tcx, e)
};
let (ty, variant_did) = match qpath {
hir::QPath::Resolved(maybe_qself, path) => {
debug!(?maybe_qself, ?path);
let opt_self_ty = maybe_qself.as_ref().map(|qself| self.lower_ty(qself));
let ty =
self.lower_resolved_ty_path(opt_self_ty, path, hir_id, PermitVariants::Yes);
let variant_did = match path.res {
Res::Def(DefKind::Variant | DefKind::Struct, did) => did,
_ => return non_adt_or_variant_res(),
};
(ty, variant_did)
}
hir::QPath::TypeRelative(hir_self_ty, segment) => {
debug!(?hir_self_ty, ?segment);
let self_ty = self.lower_ty(hir_self_ty);
let opt_res = self.lower_type_relative_ty_path(
self_ty,
hir_self_ty,
segment,
hir_id,
span,
PermitVariants::Yes,
);
let (ty, _, res_def_id) = match opt_res {
Ok(r @ (_, DefKind::Variant | DefKind::Struct, _)) => r,
Ok(_) => return non_adt_or_variant_res(),
Err(e) => return ty::Const::new_error(tcx, e),
};
(ty, res_def_id)
}
};
let ty::Adt(adt_def, adt_args) = ty.kind() else { unreachable!() };
let variant_def = adt_def.variant_with_id(variant_did);
let variant_idx = adt_def.variant_index_with_id(variant_did).as_u32();
let fields = variant_def
.fields
.iter()
.map(|field_def| {
// FIXME(mgca): we aren't really handling privacy, stability,
// or macro hygiene but we should.
let mut init_expr =
inits.iter().filter(|init_expr| init_expr.field.name == field_def.name);
match init_expr.next() {
Some(expr) => {
if let Some(expr) = init_expr.next() {
let e = tcx.dcx().span_err(
expr.span,
format!(
"struct expression with multiple initialisers for `{}`",
field_def.name,
),
);
return ty::Const::new_error(tcx, e);
}
self.lower_const_arg(expr.expr, FeedConstTy::Param(field_def.did, adt_args))
}
None => {
let e = tcx.dcx().span_err(
span,
format!(
"struct expression with missing field initialiser for `{}`",
field_def.name
),
);
ty::Const::new_error(tcx, e)
}
}
})
.collect::<Vec<_>>();
let opt_discr_const = if adt_def.is_enum() {
let valtree = ty::ValTree::from_scalar_int(tcx, variant_idx.into());
Some(ty::Const::new_value(tcx, valtree, tcx.types.u32))
} else {
None
};
let valtree = ty::ValTree::from_branches(tcx, opt_discr_const.into_iter().chain(fields));
ty::Const::new_value(tcx, valtree, ty)
}
/// Lower a [resolved][hir::QPath::Resolved] path to a (type-level) constant.
fn lower_resolved_const_path(
&self,
@ -2372,7 +2480,7 @@ impl<'tcx> dyn HirTyLowerer<'tcx> + '_ {
/// Literals are eagerly converted to a constant, everything else becomes `Unevaluated`.
#[instrument(skip(self), level = "debug")]
fn lower_anon_const(&self, anon: &AnonConst) -> Const<'tcx> {
fn lower_const_arg_anon(&self, anon: &AnonConst) -> Const<'tcx> {
let tcx = self.tcx();
let expr = &tcx.hir_body(anon.body).value;
@ -2403,8 +2511,8 @@ impl<'tcx> dyn HirTyLowerer<'tcx> + '_ {
) -> Option<Const<'tcx>> {
let tcx = self.tcx();
// Unwrap a block, so that e.g. `{ P }` is recognised as a parameter. Const arguments
// currently have to be wrapped in curly brackets, so it's necessary to special-case.
// Unwrap a block, so that e.g. `{ 1 }` is recognised as a literal. This makes the
// performance optimisation of directly lowering anon consts occur more often.
let expr = match &expr.kind {
hir::ExprKind::Block(block, _) if block.stmts.is_empty() && block.expr.is_some() => {
block.expr.as_ref().unwrap()
@ -2412,15 +2520,18 @@ impl<'tcx> dyn HirTyLowerer<'tcx> + '_ {
_ => expr,
};
// FIXME(mgca): remove this delayed bug once we start checking this
// when lowering `Ty/ConstKind::Param`s more generally.
if let hir::ExprKind::Path(hir::QPath::Resolved(
_,
&hir::Path { res: Res::Def(DefKind::ConstParam, _), .. },
)) = expr.kind
{
span_bug!(
let e = tcx.dcx().span_delayed_bug(
expr.span,
"try_lower_anon_const_lit: received const param which shouldn't be possible"
"try_lower_anon_const_lit: received const param which shouldn't be possible",
);
return Some(ty::Const::new_error(tcx, e));
};
let lit_input = match expr.kind {


@ -180,6 +180,8 @@ impl<'a> State<'a> {
Node::ConstArg(a) => self.print_const_arg(a),
Node::Expr(a) => self.print_expr(a),
Node::ExprField(a) => self.print_expr_field(a),
// FIXME(mgca): proper printing for struct exprs
Node::ConstArgExprField(_) => self.word("/* STRUCT EXPR */"),
Node::Stmt(a) => self.print_stmt(a),
Node::PathSegment(a) => self.print_path_segment(a),
Node::Ty(a) => self.print_type(a),
@ -1135,6 +1137,8 @@ impl<'a> State<'a> {
fn print_const_arg(&mut self, const_arg: &hir::ConstArg<'_>) {
match &const_arg.kind {
// FIXME(mgca): proper printing for struct exprs
ConstArgKind::Struct(..) => self.word("/* STRUCT EXPR */"),
ConstArgKind::Path(qpath) => self.print_qpath(qpath, true),
ConstArgKind::Anon(anon) => self.print_anon_const(anon),
ConstArgKind::Error(_, _) => self.word("/*ERROR*/"),


@ -115,6 +115,18 @@ fn typeck_with_inspect<'tcx>(
return tcx.typeck(typeck_root_def_id);
}
// We can't handle bodies containing generic parameters even though
// these generic parameters aren't part of its `generics_of` right now.
//
// See the FIXME on `check_anon_const_invalid_param_uses`.
if tcx.features().min_generic_const_args()
&& let DefKind::AnonConst = tcx.def_kind(def_id)
&& let ty::AnonConstKind::MCG = tcx.anon_const_kind(def_id)
&& let Err(e) = tcx.check_anon_const_invalid_param_uses(def_id)
{
e.raise_fatal();
}
let id = tcx.local_def_id_to_hir_id(def_id);
let node = tcx.hir_node(id);
let span = tcx.def_span(def_id);


@ -1440,6 +1440,7 @@ impl<'a, 'tcx> EncodeContext<'a, 'tcx> {
hir::Node::ConstArg(hir::ConstArg { kind, .. }) => match kind {
// Skip encoding defs for these as they should not have had a `DefId` created
hir::ConstArgKind::Error(..)
| hir::ConstArgKind::Struct(..)
| hir::ConstArgKind::Path(..)
| hir::ConstArgKind::Infer(..) => true,
hir::ConstArgKind::Anon(..) => false,


@ -92,7 +92,7 @@ macro_rules! arena_types {
[] name_set: rustc_data_structures::unord::UnordSet<rustc_span::Symbol>,
[] autodiff_item: rustc_ast::expand::autodiff_attrs::AutoDiffItem,
[] ordered_name_set: rustc_data_structures::fx::FxIndexSet<rustc_span::Symbol>,
[] valtree: rustc_middle::ty::ValTreeKind<'tcx>,
[] valtree: rustc_middle::ty::ValTreeKind<rustc_middle::ty::TyCtxt<'tcx>>,
[] stable_order_of_exportable_impls:
rustc_data_structures::fx::FxIndexMap<rustc_hir::def_id::DefId, usize>,


@ -2,6 +2,8 @@
//! eliminated, and all its methods are now on `TyCtxt`. But the module name
//! stays as `map` because there isn't an obviously better name for it.
use std::ops::ControlFlow;
use rustc_abi::ExternAbi;
use rustc_ast::visit::{VisitorResult, walk_list};
use rustc_data_structures::fingerprint::Fingerprint;
@ -737,6 +739,7 @@ impl<'tcx> TyCtxt<'tcx> {
Node::ConstArg(_) => node_str("const"),
Node::Expr(_) => node_str("expr"),
Node::ExprField(_) => node_str("expr field"),
Node::ConstArgExprField(_) => node_str("const arg expr field"),
Node::Stmt(_) => node_str("stmt"),
Node::PathSegment(_) => node_str("path segment"),
Node::Ty(_) => node_str("type"),
@ -1005,6 +1008,7 @@ impl<'tcx> TyCtxt<'tcx> {
Node::ConstArg(const_arg) => const_arg.span(),
Node::Expr(expr) => expr.span,
Node::ExprField(field) => field.span,
Node::ConstArgExprField(field) => field.span,
Node::Stmt(stmt) => stmt.span,
Node::PathSegment(seg) => {
let ident_span = seg.ident.span;
@ -1086,6 +1090,52 @@ impl<'tcx> TyCtxt<'tcx> {
None
}
// FIXME(mgca): this is pretty iffy. In the long term we should make
// HIR ty lowering able to return `Error` versions of types/consts when
// lowering them in contexts that aren't supposed to use generic parameters.
//
// This current impl strategy is incomplete and doesn't handle `Self` ty aliases.
pub fn check_anon_const_invalid_param_uses(
self,
anon: LocalDefId,
) -> Result<(), ErrorGuaranteed> {
struct GenericParamVisitor<'tcx>(TyCtxt<'tcx>);
impl<'tcx> Visitor<'tcx> for GenericParamVisitor<'tcx> {
type NestedFilter = nested_filter::OnlyBodies;
type Result = ControlFlow<ErrorGuaranteed>;
fn maybe_tcx(&mut self) -> TyCtxt<'tcx> {
self.0
}
fn visit_path(
&mut self,
path: &crate::hir::Path<'tcx>,
_id: HirId,
) -> ControlFlow<ErrorGuaranteed> {
if let Res::Def(
DefKind::TyParam | DefKind::ConstParam | DefKind::LifetimeParam,
_,
) = path.res
{
let e = self.0.dcx().struct_span_err(
path.span,
"generic parameters may not be used in const operations",
);
return ControlFlow::Break(e.emit());
}
intravisit::walk_path(self, path)
}
}
let body = self.hir_maybe_body_owned_by(anon).unwrap();
match GenericParamVisitor(self).visit_expr(&body.value) {
ControlFlow::Break(e) => Err(e),
ControlFlow::Continue(()) => Ok(()),
}
}
}
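
The short-circuiting walk above can be sketched in isolation. Below is a minimal standalone model (a toy `Expr` tree, not rustc's HIR types) of how `ControlFlow` propagates the first error out of a recursive visitor, the same shape as `GenericParamVisitor` breaking with an `ErrorGuaranteed`:

```rust
use std::ops::ControlFlow;

// A toy stand-in for HIR expressions: either a leaf path or a nested
// binary expression.
enum Expr {
    Path { is_generic_param: bool },
    Binary(Box<Expr>, Box<Expr>),
}

// Walk the tree, short-circuiting with an error message the moment a
// generic-parameter path is found. `?` on `ControlFlow` forwards a `Break`
// from the recursive call, just like the visitor's nested-filter walk.
fn check_no_params(expr: &Expr) -> ControlFlow<&'static str> {
    match expr {
        Expr::Path { is_generic_param: true } => {
            ControlFlow::Break("generic parameters may not be used in const operations")
        }
        Expr::Path { .. } => ControlFlow::Continue(()),
        Expr::Binary(lhs, rhs) => {
            check_no_params(lhs)?;
            check_no_params(rhs)
        }
    }
}
```

The caller then converts the `ControlFlow` into a `Result`, as `check_anon_const_invalid_param_uses` does with its `match` on `Break`/`Continue`.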
impl<'tcx> intravisit::HirTyCtxt<'tcx> for TyCtxt<'tcx> {


@ -359,6 +359,7 @@ impl<'tcx> TyCtxt<'tcx> {
| Node::Infer(_)
| Node::WherePredicate(_)
| Node::PreciseCapturingNonLifetimeArg(_)
| Node::ConstArgExprField(_)
| Node::OpaqueTy(_) => {
unreachable!("no sub-expr expected for {parent_node:?}")
}


@ -302,15 +302,7 @@ impl<'tcx> Const<'tcx> {
#[inline]
pub fn try_to_scalar(self) -> Option<Scalar> {
match self {
Const::Ty(_, c) => match c.kind() {
ty::ConstKind::Value(cv) if cv.ty.is_primitive() => {
// A valtree of a type where leaves directly represent the scalar const value.
// Just checking whether it is a leaf is insufficient as e.g. references are leafs
// but the leaf value is the value they point to, not the reference itself!
Some(cv.valtree.unwrap_leaf().into())
}
_ => None,
},
Const::Ty(_, c) => c.try_to_scalar(),
Const::Val(val, _) => val.try_to_scalar(),
Const::Unevaluated(..) => None,
}
@ -321,10 +313,7 @@ impl<'tcx> Const<'tcx> {
// This is equivalent to `self.try_to_scalar()?.try_to_int().ok()`, but measurably faster.
match self {
Const::Val(ConstValue::Scalar(Scalar::Int(x)), _) => Some(x),
Const::Ty(_, c) => match c.kind() {
ty::ConstKind::Value(cv) if cv.ty.is_primitive() => Some(cv.valtree.unwrap_leaf()),
_ => None,
},
Const::Ty(_, c) => c.try_to_leaf(),
_ => None,
}
}
@ -377,14 +366,10 @@ impl<'tcx> Const<'tcx> {
tcx: TyCtxt<'tcx>,
typing_env: ty::TypingEnv<'tcx>,
) -> Option<Scalar> {
if let Const::Ty(_, c) = self
&& let ty::ConstKind::Value(cv) = c.kind()
&& cv.ty.is_primitive()
{
// Avoid the `valtree_to_const_val` query. Can only be done on primitive types that
// are valtree leaves, and *not* on references. (References should return the
// pointer here, which valtrees don't represent.)
Some(cv.valtree.unwrap_leaf().into())
if let Const::Ty(_, c) = self {
// We don't evaluate anything for type system constants as normalizing
// the MIR will handle this for us
c.try_to_scalar()
} else {
self.eval(tcx, typing_env, DUMMY_SP).ok()?.try_to_scalar()
}


@ -928,7 +928,7 @@ impl<'tcx> PatRange<'tcx> {
let lo_is_min = match self.lo {
PatRangeBoundary::NegInfinity => true,
PatRangeBoundary::Finite(value) => {
let lo = value.try_to_scalar_int().unwrap().to_bits(size) ^ bias;
let lo = value.to_leaf().to_bits(size) ^ bias;
lo <= min
}
PatRangeBoundary::PosInfinity => false,
@ -937,7 +937,7 @@ impl<'tcx> PatRange<'tcx> {
let hi_is_max = match self.hi {
PatRangeBoundary::NegInfinity => false,
PatRangeBoundary::Finite(value) => {
let hi = value.try_to_scalar_int().unwrap().to_bits(size) ^ bias;
let hi = value.to_leaf().to_bits(size) ^ bias;
hi > max || hi == max && self.end == RangeEnd::Included
}
PatRangeBoundary::PosInfinity => true,
@ -1029,7 +1029,7 @@ impl<'tcx> PatRangeBoundary<'tcx> {
}
pub fn to_bits(self, ty: Ty<'tcx>, tcx: TyCtxt<'tcx>) -> u128 {
match self {
Self::Finite(value) => value.try_to_scalar_int().unwrap().to_bits_unchecked(),
Self::Finite(value) => value.to_leaf().to_bits_unchecked(),
Self::NegInfinity => {
// Unwrap is ok because the type is known to be numeric.
ty.numeric_min_and_max_as_bits(tcx).unwrap().0
@ -1057,7 +1057,7 @@ impl<'tcx> PatRangeBoundary<'tcx> {
// many ranges such as '\u{037A}'..='\u{037F}', and chars can be compared
// in this way.
(Finite(a), Finite(b)) if matches!(ty.kind(), ty::Int(_) | ty::Uint(_) | ty::Char) => {
if let (Some(a), Some(b)) = (a.try_to_scalar_int(), b.try_to_scalar_int()) {
if let (Some(a), Some(b)) = (a.try_to_leaf(), b.try_to_leaf()) {
let sz = ty.primitive_size(tcx);
let cmp = match ty.kind() {
ty::Uint(_) | ty::Char => a.to_uint(sz).cmp(&b.to_uint(sz)),


@ -6,6 +6,7 @@ use rustc_macros::{HashStable, TyDecodable, TyEncodable};
use rustc_type_ir::walk::TypeWalker;
use rustc_type_ir::{self as ir, TypeFlags, WithCachedTypeInfo};
use crate::mir::interpret::Scalar;
use crate::ty::{self, Ty, TyCtxt};
mod int;
@ -260,7 +261,7 @@ impl<'tcx> Const<'tcx> {
/// Attempts to convert to a value.
///
/// Note that this does not evaluate the constant.
/// Note that this does not normalize the constant.
pub fn try_to_value(self) -> Option<ty::Value<'tcx>> {
match self.kind() {
ty::ConstKind::Value(cv) => Some(cv),
@ -268,6 +269,45 @@ impl<'tcx> Const<'tcx> {
}
}
/// Converts to a `ValTreeKind::Leaf` value, panicking
/// if this constant is some other kind.
///
/// Note that this does not normalize the constant.
#[inline]
pub fn to_leaf(self) -> ScalarInt {
self.to_value().to_leaf()
}
/// Converts to a `ValTreeKind::Branch` value, panicking
/// if this constant is some other kind.
///
/// Note that this does not normalize the constant.
#[inline]
pub fn to_branch(self) -> &'tcx [ty::Const<'tcx>] {
self.to_value().to_branch()
}
/// Attempts to convert to a `ValTreeKind::Leaf` value.
///
/// Note that this does not normalize the constant.
pub fn try_to_leaf(self) -> Option<ScalarInt> {
self.try_to_value()?.try_to_leaf()
}
/// Attempts to convert to a `ValTreeKind::Leaf` value.
///
/// Note that this does not normalize the constant.
pub fn try_to_scalar(self) -> Option<Scalar> {
self.try_to_leaf().map(Scalar::Int)
}
/// Attempts to convert to a `ValTreeKind::Branch` value.
///
/// Note that this does not normalize the constant.
pub fn try_to_branch(self) -> Option<&'tcx [ty::Const<'tcx>]> {
self.try_to_value()?.try_to_branch()
}
/// Convenience method to extract the value of a usize constant,
/// useful to get the length of an array type.
///


@ -3,89 +3,38 @@ use std::ops::Deref;
use rustc_data_structures::intern::Interned;
use rustc_hir::def::Namespace;
use rustc_macros::{HashStable, Lift, TyDecodable, TyEncodable, TypeFoldable, TypeVisitable};
use rustc_macros::{
HashStable, Lift, TyDecodable, TyEncodable, TypeFoldable, TypeVisitable, extension,
};
use super::ScalarInt;
use crate::mir::interpret::{ErrorHandled, Scalar};
use crate::ty::print::{FmtPrinter, PrettyPrinter};
use crate::ty::{self, Ty, TyCtxt};
use crate::ty::{self, Ty, TyCtxt, ValTreeKind};
/// This datastructure is used to represent the value of constants used in the type system.
///
/// We explicitly choose a different datastructure from the way values are processed within
/// CTFE, as in the type system equal values (according to their `PartialEq`) must also have
/// equal representation (`==` on the rustc data structure, e.g. `ValTree`) and vice versa.
/// Since CTFE uses `AllocId` to represent pointers, it often happens that two different
/// `AllocId`s point to equal values. So we may end up with different representations for
/// two constants whose value is `&42`. Furthermore any kind of struct that has padding will
/// have arbitrary values within that padding, even if the values of the struct are the same.
///
/// `ValTree` does not have this problem with representation, as it only contains integers or
/// lists of (nested) `ValTree`.
#[derive(Clone, Debug, Hash, Eq, PartialEq)]
#[derive(HashStable, TyEncodable, TyDecodable)]
pub enum ValTreeKind<'tcx> {
/// integers, `bool`, `char` are represented as scalars.
/// See the `ScalarInt` documentation for how `ScalarInt` guarantees that equal values
/// of these types have the same representation.
Leaf(ScalarInt),
//SliceOrStr(ValSlice<'tcx>),
// don't use SliceOrStr for now
/// The fields of any kind of aggregate. Structs, tuples and arrays are represented by
/// listing their fields' values in order.
///
/// Enums are represented by storing their variant index as a u32 field, followed by all
/// the fields of the variant.
///
/// ZST types are represented as an empty slice.
Branch(Box<[ValTree<'tcx>]>),
}
impl<'tcx> ValTreeKind<'tcx> {
#[inline]
pub fn unwrap_leaf(&self) -> ScalarInt {
match self {
Self::Leaf(s) => *s,
_ => bug!("expected leaf, got {:?}", self),
}
}
#[inline]
pub fn unwrap_branch(&self) -> &[ValTree<'tcx>] {
match self {
Self::Branch(branch) => &**branch,
_ => bug!("expected branch, got {:?}", self),
}
}
pub fn try_to_scalar(&self) -> Option<Scalar> {
self.try_to_scalar_int().map(Scalar::Int)
}
pub fn try_to_scalar_int(&self) -> Option<ScalarInt> {
match self {
Self::Leaf(s) => Some(*s),
Self::Branch(_) => None,
}
}
pub fn try_to_branch(&self) -> Option<&[ValTree<'tcx>]> {
match self {
Self::Branch(branch) => Some(&**branch),
Self::Leaf(_) => None,
}
#[extension(pub trait ValTreeKindExt<'tcx>)]
impl<'tcx> ty::ValTreeKind<TyCtxt<'tcx>> {
fn try_to_scalar(&self) -> Option<Scalar> {
self.try_to_leaf().map(Scalar::Int)
}
}
/// An interned valtree. Use this rather than `ValTreeKind`, whenever possible.
///
/// See the docs of [`ValTreeKind`] or the [dev guide] for an explanation of this type.
/// See the docs of [`ty::ValTreeKind`] or the [dev guide] for an explanation of this type.
///
/// [dev guide]: https://rustc-dev-guide.rust-lang.org/mir/index.html#valtrees
#[derive(Copy, Clone, Hash, Eq, PartialEq)]
#[derive(HashStable)]
pub struct ValTree<'tcx>(pub(crate) Interned<'tcx, ValTreeKind<'tcx>>);
// FIXME(mgca): Try not interning here. We already intern `ty::Const` which `ValTreeKind`
// recurses through
pub struct ValTree<'tcx>(pub(crate) Interned<'tcx, ty::ValTreeKind<TyCtxt<'tcx>>>);
impl<'tcx> rustc_type_ir::inherent::ValTree<TyCtxt<'tcx>> for ValTree<'tcx> {
fn kind(&self) -> &ty::ValTreeKind<TyCtxt<'tcx>> {
&self
}
}
impl<'tcx> ValTree<'tcx> {
/// Returns the zero-sized valtree: `Branch([])`.
@ -94,28 +43,33 @@ impl<'tcx> ValTree<'tcx> {
}
pub fn is_zst(self) -> bool {
matches!(*self, ValTreeKind::Branch(box []))
matches!(*self, ty::ValTreeKind::Branch(box []))
}
pub fn from_raw_bytes(tcx: TyCtxt<'tcx>, bytes: &[u8]) -> Self {
let branches = bytes.iter().map(|&b| Self::from_scalar_int(tcx, b.into()));
let branches = bytes.iter().map(|&b| {
ty::Const::new_value(tcx, Self::from_scalar_int(tcx, b.into()), tcx.types.u8)
});
Self::from_branches(tcx, branches)
}
pub fn from_branches(tcx: TyCtxt<'tcx>, branches: impl IntoIterator<Item = Self>) -> Self {
tcx.intern_valtree(ValTreeKind::Branch(branches.into_iter().collect()))
pub fn from_branches(
tcx: TyCtxt<'tcx>,
branches: impl IntoIterator<Item = ty::Const<'tcx>>,
) -> Self {
tcx.intern_valtree(ty::ValTreeKind::Branch(branches.into_iter().collect()))
}
pub fn from_scalar_int(tcx: TyCtxt<'tcx>, i: ScalarInt) -> Self {
tcx.intern_valtree(ValTreeKind::Leaf(i))
tcx.intern_valtree(ty::ValTreeKind::Leaf(i))
}
}
impl<'tcx> Deref for ValTree<'tcx> {
type Target = &'tcx ValTreeKind<'tcx>;
type Target = &'tcx ty::ValTreeKind<TyCtxt<'tcx>>;
#[inline]
fn deref(&self) -> &&'tcx ValTreeKind<'tcx> {
fn deref(&self) -> &&'tcx ty::ValTreeKind<TyCtxt<'tcx>> {
&self.0.0
}
}
@ -154,7 +108,7 @@ impl<'tcx> Value<'tcx> {
let (ty::Bool | ty::Char | ty::Uint(_) | ty::Int(_) | ty::Float(_)) = self.ty.kind() else {
return None;
};
let scalar = self.valtree.try_to_scalar_int()?;
let scalar = self.try_to_leaf()?;
let input = typing_env.with_post_analysis_normalized(tcx).as_query_input(self.ty);
let size = tcx.layout_of(input).ok()?.size;
Some(scalar.to_bits(size))
@ -164,14 +118,14 @@ impl<'tcx> Value<'tcx> {
if !self.ty.is_bool() {
return None;
}
self.valtree.try_to_scalar_int()?.try_to_bool().ok()
self.try_to_leaf()?.try_to_bool().ok()
}
pub fn try_to_target_usize(self, tcx: TyCtxt<'tcx>) -> Option<u64> {
if !self.ty.is_usize() {
return None;
}
self.valtree.try_to_scalar_int().map(|s| s.to_target_usize(tcx))
self.try_to_leaf().map(|s| s.to_target_usize(tcx))
}
/// Get the values inside the ValTree as a slice of bytes. This only works for
@ -192,9 +146,48 @@ impl<'tcx> Value<'tcx> {
_ => return None,
}
Some(tcx.arena.alloc_from_iter(
self.valtree.unwrap_branch().into_iter().map(|v| v.unwrap_leaf().to_u8()),
))
Some(tcx.arena.alloc_from_iter(self.to_branch().into_iter().map(|ct| ct.to_leaf().to_u8())))
}
/// Converts to a `ValTreeKind::Leaf` value, panicking
/// if this constant is some other kind.
#[inline]
pub fn to_leaf(self) -> ScalarInt {
match &**self.valtree {
ValTreeKind::Leaf(s) => *s,
ValTreeKind::Branch(..) => bug!("expected leaf, got {:?}", self),
}
}
/// Converts to a `ValTreeKind::Branch` value, panicking
/// if this constant is some other kind.
#[inline]
pub fn to_branch(self) -> &'tcx [ty::Const<'tcx>] {
match &**self.valtree {
ValTreeKind::Branch(branch) => &**branch,
ValTreeKind::Leaf(..) => bug!("expected branch, got {:?}", self),
}
}
/// Attempts to convert to a `ValTreeKind::Leaf` value.
pub fn try_to_leaf(self) -> Option<ScalarInt> {
match &**self.valtree {
ValTreeKind::Leaf(s) => Some(*s),
ValTreeKind::Branch(_) => None,
}
}
/// Attempts to convert to a `ValTreeKind::Leaf` value.
pub fn try_to_scalar(&self) -> Option<Scalar> {
self.try_to_leaf().map(Scalar::Int)
}
/// Attempts to convert to a `ValTreeKind::Branch` value.
pub fn try_to_branch(self) -> Option<&'tcx [ty::Const<'tcx>]> {
match &**self.valtree {
ValTreeKind::Branch(branch) => Some(&**branch),
ValTreeKind::Leaf(_) => None,
}
}
}
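
The panicking/fallible accessor pairs introduced here (`to_leaf`/`try_to_leaf`, `to_branch`/`try_to_branch`) follow a simple shape that can be shown standalone. This sketch uses a plain enum with `u128` leaves rather than rustc's interned `ValTree`/`ScalarInt` types:

```rust
// A minimal standalone model of the leaf/branch valtree shape. Integers,
// `bool`, and `char` are leaves; aggregates are branches over their fields.
#[derive(Debug, Clone, PartialEq)]
enum ValTree {
    Leaf(u128),
    Branch(Vec<ValTree>),
}

impl ValTree {
    // Panicking accessor: used when the constant's type already
    // guarantees the shape.
    fn to_leaf(&self) -> u128 {
        self.try_to_leaf().expect("expected leaf, got a branch")
    }

    // Fallible accessor: returns `None` instead of panicking.
    fn try_to_leaf(&self) -> Option<u128> {
        match self {
            ValTree::Leaf(v) => Some(*v),
            ValTree::Branch(_) => None,
        }
    }

    fn to_branch(&self) -> &[ValTree] {
        self.try_to_branch().expect("expected branch, got a leaf")
    }

    fn try_to_branch(&self) -> Option<&[ValTree]> {
        match self {
            ValTree::Branch(children) => Some(children),
            ValTree::Leaf(_) => None,
        }
    }
}
```

In the compiler the branch children are now `ty::Const`s rather than raw valtrees, but the accessor shape is the same.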


@ -165,6 +165,7 @@ impl<'tcx> Interner for TyCtxt<'tcx> {
type ValueConst = ty::Value<'tcx>;
type ExprConst = ty::Expr<'tcx>;
type ValTree = ty::ValTree<'tcx>;
type ScalarInt = ty::ScalarInt;
type Region = Region<'tcx>;
type EarlyParamRegion = ty::EarlyParamRegion;
@ -954,7 +955,7 @@ pub struct CtxtInterners<'tcx> {
fields: InternedSet<'tcx, List<FieldIdx>>,
local_def_ids: InternedSet<'tcx, List<LocalDefId>>,
captures: InternedSet<'tcx, List<&'tcx ty::CapturedPlace<'tcx>>>,
valtree: InternedSet<'tcx, ty::ValTreeKind<'tcx>>,
valtree: InternedSet<'tcx, ty::ValTreeKind<TyCtxt<'tcx>>>,
patterns: InternedSet<'tcx, List<ty::Pattern<'tcx>>>,
outlives: InternedSet<'tcx, List<ty::ArgOutlivesPredicate<'tcx>>>,
}
@ -2777,7 +2778,7 @@ macro_rules! direct_interners {
// crate only, and have a corresponding `mk_` function.
direct_interners! {
region: pub(crate) intern_region(RegionKind<'tcx>): Region -> Region<'tcx>,
valtree: pub(crate) intern_valtree(ValTreeKind<'tcx>): ValTree -> ValTree<'tcx>,
valtree: pub(crate) intern_valtree(ValTreeKind<TyCtxt<'tcx>>): ValTree -> ValTree<'tcx>,
pat: pub mk_pat(PatternKind<'tcx>): Pattern -> Pattern<'tcx>,
const_allocation: pub mk_const_alloc(Allocation): ConstAllocation -> ConstAllocation<'tcx>,
layout: pub mk_layout(LayoutData<FieldIdx, VariantIdx>): Layout -> Layout<'tcx>,


@ -77,7 +77,7 @@ pub use self::closure::{
};
pub use self::consts::{
AnonConstKind, AtomicOrdering, Const, ConstInt, ConstKind, ConstToValTreeResult, Expr,
ExprKind, ScalarInt, SimdAlign, UnevaluatedConst, ValTree, ValTreeKind, Value,
ExprKind, ScalarInt, SimdAlign, UnevaluatedConst, ValTree, ValTreeKindExt, Value,
};
pub use self::context::{
CtxtInterners, CurrentGcx, Feed, FreeRegionInfo, GlobalCtxt, Lift, TyCtxt, TyCtxtFeed, tls,


@ -72,7 +72,7 @@ impl<'tcx> IrPrint<PatternKind<'tcx>> for TyCtxt<'tcx> {
write!(f, "{start}")?;
if let Some(c) = end.try_to_value() {
let end = c.valtree.unwrap_leaf();
let end = c.to_leaf();
let size = end.size();
let max = match c.ty.kind() {
ty::Int(_) => {


@ -256,8 +256,8 @@ TrivialTypeTraversalImpls! {
crate::ty::AssocItem,
crate::ty::AssocKind,
crate::ty::BoundRegion,
crate::ty::ScalarInt,
crate::ty::UserTypeAnnotationIndex,
crate::ty::ValTree<'tcx>,
crate::ty::abstract_const::NotConstEvaluatable,
crate::ty::adjustment::AutoBorrowMutability,
crate::ty::adjustment::PointerCoercion,
@ -697,6 +697,37 @@ impl<'tcx> TypeSuperVisitable<TyCtxt<'tcx>> for ty::Const<'tcx> {
}
}
impl<'tcx> TypeVisitable<TyCtxt<'tcx>> for ty::ValTree<'tcx> {
fn visit_with<V: TypeVisitor<TyCtxt<'tcx>>>(&self, visitor: &mut V) -> V::Result {
let inner: &ty::ValTreeKind<TyCtxt<'tcx>> = &*self;
inner.visit_with(visitor)
}
}
impl<'tcx> TypeFoldable<TyCtxt<'tcx>> for ty::ValTree<'tcx> {
fn try_fold_with<F: FallibleTypeFolder<TyCtxt<'tcx>>>(
self,
folder: &mut F,
) -> Result<Self, F::Error> {
let inner: &ty::ValTreeKind<TyCtxt<'tcx>> = &*self;
let new_inner = inner.clone().try_fold_with(folder)?;
if inner == &new_inner {
Ok(self)
} else {
let valtree = folder.cx().intern_valtree(new_inner);
Ok(valtree)
}
}
fn fold_with<F: TypeFolder<TyCtxt<'tcx>>>(self, folder: &mut F) -> Self {
let inner: &ty::ValTreeKind<TyCtxt<'tcx>> = &*self;
let new_inner = inner.clone().fold_with(folder);
if inner == &new_inner { self } else { folder.cx().intern_valtree(new_inner) }
}
}
impl<'tcx> TypeVisitable<TyCtxt<'tcx>> for rustc_span::ErrorGuaranteed {
fn visit_with<V: TypeVisitor<TyCtxt<'tcx>>>(&self, visitor: &mut V) -> V::Result {
visitor.visit_error(*self)


@ -157,7 +157,7 @@ impl<'a, 'tcx> ParseCtxt<'a, 'tcx> {
});
}
};
values.push(value.valtree.unwrap_leaf().to_bits_unchecked());
values.push(value.to_leaf().to_bits_unchecked());
targets.push(self.parse_block(arm.body)?);
}


@ -2935,7 +2935,8 @@ impl<'a, 'tcx> Builder<'a, 'tcx> {
bug!("malformed valtree for an enum")
};
let ValTreeKind::Leaf(actual_variant_idx) = ***actual_variant_idx else {
let ValTreeKind::Leaf(actual_variant_idx) = *actual_variant_idx.to_value().valtree
else {
bug!("malformed valtree for an enum")
};
@ -2943,7 +2944,7 @@ impl<'a, 'tcx> Builder<'a, 'tcx> {
}
Constructor::IntRange(int_range) => {
let size = pat.ty().primitive_size(self.tcx);
let actual_int = valtree.unwrap_leaf().to_bits(size);
let actual_int = valtree.to_leaf().to_bits(size);
let actual_int = if pat.ty().is_signed() {
MaybeInfiniteInt::new_finite_int(actual_int, size.bits())
} else {
@ -2951,33 +2952,33 @@ impl<'a, 'tcx> Builder<'a, 'tcx> {
};
IntRange::from_singleton(actual_int).is_subrange(int_range)
}
Constructor::Bool(pattern_value) => match valtree.unwrap_leaf().try_to_bool() {
Constructor::Bool(pattern_value) => match valtree.to_leaf().try_to_bool() {
Ok(actual_value) => *pattern_value == actual_value,
Err(()) => bug!("bool value with invalid bits"),
},
Constructor::F16Range(l, h, end) => {
let actual = valtree.unwrap_leaf().to_f16();
let actual = valtree.to_leaf().to_f16();
match end {
RangeEnd::Included => (*l..=*h).contains(&actual),
RangeEnd::Excluded => (*l..*h).contains(&actual),
}
}
Constructor::F32Range(l, h, end) => {
let actual = valtree.unwrap_leaf().to_f32();
let actual = valtree.to_leaf().to_f32();
match end {
RangeEnd::Included => (*l..=*h).contains(&actual),
RangeEnd::Excluded => (*l..*h).contains(&actual),
}
}
Constructor::F64Range(l, h, end) => {
let actual = valtree.unwrap_leaf().to_f64();
let actual = valtree.to_leaf().to_f64();
match end {
RangeEnd::Included => (*l..=*h).contains(&actual),
RangeEnd::Excluded => (*l..*h).contains(&actual),
}
}
Constructor::F128Range(l, h, end) => {
let actual = valtree.unwrap_leaf().to_f128();
let actual = valtree.to_leaf().to_f128();
match end {
RangeEnd::Included => (*l..=*h).contains(&actual),
RangeEnd::Excluded => (*l..*h).contains(&actual),


@ -116,7 +116,7 @@ impl<'a, 'tcx> Builder<'a, 'tcx> {
let switch_targets = SwitchTargets::new(
target_blocks.iter().filter_map(|(&branch, &block)| {
if let TestBranch::Constant(value) = branch {
let bits = value.valtree.unwrap_leaf().to_bits_unchecked();
let bits = value.to_leaf().to_bits_unchecked();
Some((bits, block))
} else {
None


@ -897,7 +897,14 @@ impl<'a, 'tcx> Builder<'a, 'tcx> {
self.tcx,
ValTree::from_branches(
self.tcx,
[ValTree::from_scalar_int(self.tcx, variant_index.as_u32().into())],
[ty::Const::new_value(
self.tcx,
ValTree::from_scalar_int(
self.tcx,
variant_index.as_u32().into(),
),
self.tcx.types.u32,
)],
),
self.thir[value].ty,
),


@ -63,7 +63,7 @@ pub(crate) fn lit_to_const<'tcx>(
// A CStr is a newtype around a byte slice, so we create the inner slice here.
// We need a branch for each "level" of the data structure.
let bytes = ty::ValTree::from_raw_bytes(tcx, byte_sym.as_byte_str());
ty::ValTree::from_branches(tcx, [bytes])
ty::ValTree::from_branches(tcx, [ty::Const::new_value(tcx, bytes, *inner_ty)])
}
(ast::LitKind::Int(n, _), ty::Uint(ui)) if !neg => {
let scalar_int = trunc(n.get(), *ui);


@ -239,14 +239,14 @@ impl<'tcx> ConstToPat<'tcx> {
return self.mk_err(tcx.dcx().create_err(err), ty);
}
ty::Adt(adt_def, args) if adt_def.is_enum() => {
let (&variant_index, fields) = cv.unwrap_branch().split_first().unwrap();
let variant_index = VariantIdx::from_u32(variant_index.unwrap_leaf().to_u32());
let (&variant_index, fields) = cv.to_branch().split_first().unwrap();
let variant_index = VariantIdx::from_u32(variant_index.to_leaf().to_u32());
PatKind::Variant {
adt_def: *adt_def,
args,
variant_index,
subpatterns: self.field_pats(
fields.iter().copied().zip(
fields.iter().map(|ct| ct.to_value().valtree).zip(
adt_def.variants()[variant_index]
.fields
.iter()
@ -258,28 +258,32 @@ impl<'tcx> ConstToPat<'tcx> {
ty::Adt(def, args) => {
assert!(!def.is_union()); // Valtree construction would never succeed for unions.
PatKind::Leaf {
subpatterns: self.field_pats(cv.unwrap_branch().iter().copied().zip(
def.non_enum_variant().fields.iter().map(|field| field.ty(tcx, args)),
)),
subpatterns: self.field_pats(
cv.to_branch().iter().map(|ct| ct.to_value().valtree).zip(
def.non_enum_variant().fields.iter().map(|field| field.ty(tcx, args)),
),
),
}
}
ty::Tuple(fields) => PatKind::Leaf {
subpatterns: self.field_pats(cv.unwrap_branch().iter().copied().zip(fields.iter())),
subpatterns: self.field_pats(
cv.to_branch().iter().map(|ct| ct.to_value().valtree).zip(fields.iter()),
),
},
ty::Slice(elem_ty) => PatKind::Slice {
prefix: cv
.unwrap_branch()
.to_branch()
.iter()
.map(|val| *self.valtree_to_pat(*val, *elem_ty))
.map(|val| *self.valtree_to_pat(val.to_value().valtree, *elem_ty))
.collect(),
slice: None,
suffix: Box::new([]),
},
ty::Array(elem_ty, _) => PatKind::Array {
prefix: cv
.unwrap_branch()
.to_branch()
.iter()
.map(|val| *self.valtree_to_pat(*val, *elem_ty))
.map(|val| *self.valtree_to_pat(val.to_value().valtree, *elem_ty))
.collect(),
slice: None,
suffix: Box::new([]),
@ -312,7 +316,7 @@ impl<'tcx> ConstToPat<'tcx> {
}
},
ty::Float(flt) => {
let v = cv.unwrap_leaf();
let v = cv.to_leaf();
let is_nan = match flt {
ty::FloatTy::F16 => v.to_f16().is_nan(),
ty::FloatTy::F32 => v.to_f32().is_nan(),


@ -440,7 +440,7 @@ impl<'p, 'tcx: 'p> RustcPatCtxt<'p, 'tcx> {
match bdy {
PatRangeBoundary::NegInfinity => MaybeInfiniteInt::NegInfinity,
PatRangeBoundary::Finite(value) => {
let bits = value.try_to_scalar_int().unwrap().to_bits_unchecked();
let bits = value.to_leaf().to_bits_unchecked();
match *ty.kind() {
ty::Int(ity) => {
let size = Integer::from_int_ty(&self.tcx, ity).size().bits();
@ -540,7 +540,7 @@ impl<'p, 'tcx: 'p> RustcPatCtxt<'p, 'tcx> {
}
ty::Char | ty::Int(_) | ty::Uint(_) => {
ctor = {
let bits = value.valtree.unwrap_leaf().to_bits_unchecked();
let bits = value.to_leaf().to_bits_unchecked();
let x = match *ty.kind() {
ty::Int(ity) => {
let size = Integer::from_int_ty(&cx.tcx, ity).size().bits();
@ -555,7 +555,7 @@ impl<'p, 'tcx: 'p> RustcPatCtxt<'p, 'tcx> {
}
ty::Float(ty::FloatTy::F16) => {
use rustc_apfloat::Float;
let bits = value.valtree.unwrap_leaf().to_u16();
let bits = value.to_leaf().to_u16();
let value = rustc_apfloat::ieee::Half::from_bits(bits.into());
ctor = F16Range(value, value, RangeEnd::Included);
fields = vec![];
@ -563,7 +563,7 @@ impl<'p, 'tcx: 'p> RustcPatCtxt<'p, 'tcx> {
}
ty::Float(ty::FloatTy::F32) => {
use rustc_apfloat::Float;
let bits = value.valtree.unwrap_leaf().to_u32();
let bits = value.to_leaf().to_u32();
let value = rustc_apfloat::ieee::Single::from_bits(bits.into());
ctor = F32Range(value, value, RangeEnd::Included);
fields = vec![];
@ -571,7 +571,7 @@ impl<'p, 'tcx: 'p> RustcPatCtxt<'p, 'tcx> {
}
ty::Float(ty::FloatTy::F64) => {
use rustc_apfloat::Float;
let bits = value.valtree.unwrap_leaf().to_u64();
let bits = value.to_leaf().to_u64();
let value = rustc_apfloat::ieee::Double::from_bits(bits.into());
ctor = F64Range(value, value, RangeEnd::Included);
fields = vec![];
@ -579,7 +579,7 @@ impl<'p, 'tcx: 'p> RustcPatCtxt<'p, 'tcx> {
}
ty::Float(ty::FloatTy::F128) => {
use rustc_apfloat::Float;
let bits = value.valtree.unwrap_leaf().to_u128();
let bits = value.to_leaf().to_u128();
let value = rustc_apfloat::ieee::Quad::from_bits(bits);
ctor = F128Range(value, value, RangeEnd::Included);
fields = vec![];
@ -623,12 +623,8 @@ impl<'p, 'tcx: 'p> RustcPatCtxt<'p, 'tcx> {
}
ty::Float(fty) => {
use rustc_apfloat::Float;
let lo = lo
.as_finite()
.map(|c| c.try_to_scalar_int().unwrap().to_bits_unchecked());
let hi = hi
.as_finite()
.map(|c| c.try_to_scalar_int().unwrap().to_bits_unchecked());
let lo = lo.as_finite().map(|c| c.to_leaf().to_bits_unchecked());
let hi = hi.as_finite().map(|c| c.to_leaf().to_bits_unchecked());
match fty {
ty::FloatTy::F16 => {
use rustc_apfloat::ieee::Half;


@ -11,9 +11,9 @@ use rustc_hir::def_id::LocalDefId;
use rustc_middle::span_bug;
use rustc_span::hygiene::LocalExpnId;
use rustc_span::{Span, Symbol, sym};
use tracing::debug;
use tracing::{debug, instrument};
use crate::{ImplTraitContext, InvocationParent, Resolver};
use crate::{ConstArgContext, ImplTraitContext, InvocationParent, Resolver};
pub(crate) fn collect_definitions(
resolver: &mut Resolver<'_, '_>,
@ -21,6 +21,7 @@ pub(crate) fn collect_definitions(
expansion: LocalExpnId,
) {
let invocation_parent = resolver.invocation_parents[&expansion];
debug!("new fragment to visit with invocation_parent: {invocation_parent:?}");
let mut visitor = DefCollector { resolver, expansion, invocation_parent };
fragment.visit_with(&mut visitor);
}
@ -74,6 +75,12 @@ impl<'a, 'ra, 'tcx> DefCollector<'a, 'ra, 'tcx> {
self.invocation_parent.impl_trait_context = orig_itc;
}
fn with_const_arg<F: FnOnce(&mut Self)>(&mut self, ctxt: ConstArgContext, f: F) {
let orig = mem::replace(&mut self.invocation_parent.const_arg_context, ctxt);
f(self);
self.invocation_parent.const_arg_context = orig;
}
fn collect_field(&mut self, field: &'a FieldDef, index: Option<usize>) {
let index = |this: &Self| {
index.unwrap_or_else(|| {
@ -93,7 +100,10 @@ impl<'a, 'ra, 'tcx> DefCollector<'a, 'ra, 'tcx> {
}
}
#[instrument(level = "debug", skip(self))]
fn visit_macro_invoc(&mut self, id: NodeId) {
debug!(?self.invocation_parent);
let id = id.placeholder_to_expn_id();
let old_parent = self.resolver.invocation_parents.insert(id, self.invocation_parent);
assert!(old_parent.is_none(), "parent `LocalDefId` is reset for an invocation");
@ -360,36 +370,77 @@ impl<'a, 'ra, 'tcx> visit::Visitor<'a> for DefCollector<'a, 'ra, 'tcx> {
// `MgcaDisambiguation::Direct` is set even when MGCA is disabled, so to
// avoid affecting stable code we have to feature-gate skipping the
// creation of anon consts.
if let MgcaDisambiguation::Direct = constant.mgca_disambiguation
&& self.resolver.tcx.features().min_generic_const_args()
{
visit::walk_anon_const(self, constant);
return;
if !self.resolver.tcx.features().min_generic_const_args() {
let parent =
self.create_def(constant.id, None, DefKind::AnonConst, constant.value.span);
return self.with_parent(parent, |this| visit::walk_anon_const(this, constant));
}
let parent = self.create_def(constant.id, None, DefKind::AnonConst, constant.value.span);
self.with_parent(parent, |this| visit::walk_anon_const(this, constant));
match constant.mgca_disambiguation {
MgcaDisambiguation::Direct => self.with_const_arg(ConstArgContext::Direct, |this| {
visit::walk_anon_const(this, constant);
}),
MgcaDisambiguation::AnonConst => {
self.with_const_arg(ConstArgContext::NonDirect, |this| {
let parent =
this.create_def(constant.id, None, DefKind::AnonConst, constant.value.span);
this.with_parent(parent, |this| visit::walk_anon_const(this, constant));
})
}
};
}
#[instrument(level = "debug", skip(self))]
fn visit_expr(&mut self, expr: &'a Expr) {
let parent_def = match expr.kind {
debug!(?self.invocation_parent);
let parent_def = match &expr.kind {
ExprKind::MacCall(..) => return self.visit_macro_invoc(expr.id),
ExprKind::Closure(..) | ExprKind::Gen(..) => {
self.create_def(expr.id, None, DefKind::Closure, expr.span)
}
ExprKind::ConstBlock(ref constant) => {
for attr in &expr.attrs {
visit::walk_attribute(self, attr);
}
let def =
self.create_def(constant.id, None, DefKind::InlineConst, constant.value.span);
self.with_parent(def, |this| visit::walk_anon_const(this, constant));
return;
ExprKind::ConstBlock(constant) => {
// Under `min_generic_const_args` a `const { }` block sometimes
// corresponds to an anon const rather than an inline const.
let def_kind = match self.invocation_parent.const_arg_context {
ConstArgContext::Direct => DefKind::AnonConst,
ConstArgContext::NonDirect => DefKind::InlineConst,
};
return self.with_const_arg(ConstArgContext::NonDirect, |this| {
for attr in &expr.attrs {
visit::walk_attribute(this, attr);
}
let def = this.create_def(constant.id, None, def_kind, constant.value.span);
this.with_parent(def, |this| visit::walk_anon_const(this, constant));
});
}
// Avoid overwriting `const_arg_context` as we may want to treat const blocks
// as being anon consts if we are inside a const argument.
ExprKind::Struct(_) => return visit::walk_expr(self, expr),
// FIXME(mgca): we may want to handle block labels in some manner
ExprKind::Block(block, _) if let [stmt] = block.stmts.as_slice() => match stmt.kind {
// FIXME(mgca): this probably means that macro calls that expand to
// semi'd const blocks are handled differently from writing out a
// semi'd const block directly.
StmtKind::Expr(..) | StmtKind::MacCall(..) => return visit::walk_expr(self, expr),
// Fallback to normal behaviour
StmtKind::Let(..) | StmtKind::Item(..) | StmtKind::Semi(..) | StmtKind::Empty => {
self.invocation_parent.parent_def
}
},
_ => self.invocation_parent.parent_def,
};
self.with_parent(parent_def, |this| visit::walk_expr(this, expr))
self.with_const_arg(ConstArgContext::NonDirect, |this| {
// Note that in some cases the `parent_def` here may be the existing
// parent, making this `with_parent` call a no-op.
this.with_parent(parent_def, |this| visit::walk_expr(this, expr))
})
}
fn visit_ty(&mut self, ty: &'a Ty) {


@ -4879,12 +4879,7 @@ impl<'a, 'ast, 'ra, 'tcx> LateResolutionVisitor<'a, 'ast, 'ra, 'tcx> {
constant, anon_const_kind
);
let is_trivial_const_arg = if self.r.tcx.features().min_generic_const_args() {
matches!(constant.mgca_disambiguation, MgcaDisambiguation::Direct)
} else {
constant.value.is_potential_trivial_const_arg()
};
let is_trivial_const_arg = constant.value.is_potential_trivial_const_arg();
self.resolve_anon_const_manual(is_trivial_const_arg, anon_const_kind, |this| {
this.resolve_expr(&constant.value, None)
})
@ -4914,7 +4909,10 @@ impl<'a, 'ast, 'ra, 'tcx> LateResolutionVisitor<'a, 'ast, 'ra, 'tcx> {
AnonConstKind::FieldDefaultValue => ConstantHasGenerics::Yes,
AnonConstKind::InlineConst => ConstantHasGenerics::Yes,
AnonConstKind::ConstArg(_) => {
if self.r.tcx.features().generic_const_exprs() || is_trivial_const_arg {
if self.r.tcx.features().generic_const_exprs()
|| self.r.tcx.features().min_generic_const_args()
|| is_trivial_const_arg
{
ConstantHasGenerics::Yes
} else {
ConstantHasGenerics::No(NoConstantGenericsReason::NonTrivialConstArg)

View file

@ -187,6 +187,7 @@ struct InvocationParent {
parent_def: LocalDefId,
impl_trait_context: ImplTraitContext,
in_attr: bool,
const_arg_context: ConstArgContext,
}
impl InvocationParent {
@ -194,6 +195,7 @@ impl InvocationParent {
parent_def: CRATE_DEF_ID,
impl_trait_context: ImplTraitContext::Existential,
in_attr: false,
const_arg_context: ConstArgContext::NonDirect,
};
}
@ -204,6 +206,13 @@ enum ImplTraitContext {
InBinding,
}
#[derive(Copy, Clone, Debug)]
enum ConstArgContext {
Direct,
/// Either inside of an `AnonConst` or not inside a const argument at all.
NonDirect,
}
/// Used for tracking import use types which will be used for redundant import checking.
///
/// ### Used::Scope Example

View file

@ -293,7 +293,7 @@ impl<'tcx> Printer<'tcx> for LegacySymbolMangler<'tcx> {
ty::ConstKind::Value(cv) if cv.ty.is_integral() => {
// The `pretty_print_const` formatting depends on -Zverbose-internals
// flag, so we cannot reuse it here.
let scalar = cv.valtree.unwrap_leaf();
let scalar = cv.to_leaf();
let signed = matches!(cv.ty.kind(), ty::Int(_));
write!(
self,

View file

@ -129,10 +129,7 @@ mod rustc {
use rustc_middle::ty::ScalarInt;
use rustc_span::sym;
let Some(cv) = ct.try_to_value() else {
return None;
};
let cv = ct.try_to_value()?;
let adt_def = cv.ty.ty_adt_def()?;
if !tcx.is_lang_item(adt_def.did(), LangItem::TransmuteOpts) {
@ -149,7 +146,7 @@ mod rustc {
}
let variant = adt_def.non_enum_variant();
let fields = cv.valtree.unwrap_branch();
let fields = cv.to_branch();
let get_field = |name| {
let (field_idx, _) = variant
@ -158,7 +155,7 @@ mod rustc {
.enumerate()
.find(|(_, field_def)| name == field_def.name)
.unwrap_or_else(|| panic!("There were no fields named `{name}`."));
fields[field_idx].unwrap_leaf() == ScalarInt::TRUE
fields[field_idx].to_leaf() == ScalarInt::TRUE
};
Some(Self {

View file

@ -25,15 +25,14 @@ fn destructure_const<'tcx>(
let ty::ConstKind::Value(cv) = const_.kind() else {
bug!("cannot destructure constant {:?}", const_)
};
let branches = cv.valtree.unwrap_branch();
let branches = cv.to_branch();
let (fields, variant) = match cv.ty.kind() {
ty::Array(inner_ty, _) | ty::Slice(inner_ty) => {
// construct the consts for the elements of the array/slice
let field_consts = branches
.iter()
.map(|b| ty::Const::new_value(tcx, *b, *inner_ty))
.map(|b| ty::Const::new_value(tcx, b.to_value().valtree, *inner_ty))
.collect::<Vec<_>>();
debug!(?field_consts);
@ -43,7 +42,7 @@ fn destructure_const<'tcx>(
ty::Adt(def, args) => {
let (variant_idx, branches) = if def.is_enum() {
let (head, rest) = branches.split_first().unwrap();
(VariantIdx::from_u32(head.unwrap_leaf().to_u32()), rest)
(VariantIdx::from_u32(head.to_leaf().to_u32()), rest)
} else {
(FIRST_VARIANT, branches)
};
@ -52,7 +51,8 @@ fn destructure_const<'tcx>(
for (field, field_valtree) in iter::zip(fields, branches) {
let field_ty = field.ty(tcx, args);
let field_const = ty::Const::new_value(tcx, *field_valtree, field_ty);
let field_const =
ty::Const::new_value(tcx, field_valtree.to_value().valtree, field_ty);
field_consts.push(field_const);
}
debug!(?field_consts);
@ -61,7 +61,9 @@ fn destructure_const<'tcx>(
}
ty::Tuple(elem_tys) => {
let fields = iter::zip(*elem_tys, branches)
.map(|(elem_ty, elem_valtree)| ty::Const::new_value(tcx, *elem_valtree, elem_ty))
.map(|(elem_ty, elem_valtree)| {
ty::Const::new_value(tcx, elem_valtree.to_value().valtree, elem_ty)
})
.collect::<Vec<_>>();
(fields, None)

View file

@ -127,3 +127,76 @@ impl<CTX> HashStable<CTX> for InferConst {
}
}
}
/// This data structure is used to represent the value of constants used in the type system.
///
/// We explicitly choose a different data structure from the way values are processed within
/// CTFE, as in the type system equal values (according to their `PartialEq`) must also have
/// equal representation (`==` on the rustc data structure, e.g. `ValTree`) and vice versa.
/// Since CTFE uses `AllocId` to represent pointers, it often happens that two different
/// `AllocId`s point to equal values. So we may end up with different representations for
/// two constants whose value is `&42`. Furthermore any kind of struct that has padding will
/// have arbitrary values within that padding, even if the values of the struct are the same.
///
/// `ValTree` does not have this problem with representation, as it only contains integers or
/// lists of (nested) `ty::Const`s (which may indirectly contain more `ValTree`s).
#[derive_where(Clone, Debug, Hash, Eq, PartialEq; I: Interner)]
#[derive(TypeVisitable_Generic, TypeFoldable_Generic)]
#[cfg_attr(
feature = "nightly",
derive(Decodable_NoContext, Encodable_NoContext, HashStable_NoContext)
)]
pub enum ValTreeKind<I: Interner> {
/// Integers, `bool`, and `char` are represented as scalars.
/// See the `ScalarInt` documentation for how `ScalarInt` guarantees that equal values
/// of these types have the same representation.
Leaf(I::ScalarInt),
/// The fields of any kind of aggregate. Structs, tuples and arrays are represented by
/// listing their fields' values in order.
///
/// Enums are represented by storing their variant index as a u32 field, followed by all
/// the fields of the variant.
///
/// ZST types are represented as an empty slice.
// FIXME(mgca): Use a `List` here instead of a boxed slice
Branch(Box<[I::Const]>),
}
impl<I: Interner> ValTreeKind<I> {
/// Converts to a `ValTreeKind::Leaf` value, panicking
/// if this valtree is some other kind.
#[inline]
pub fn to_leaf(&self) -> I::ScalarInt {
match self {
ValTreeKind::Leaf(s) => *s,
ValTreeKind::Branch(..) => panic!("expected leaf, got {:?}", self),
}
}
/// Converts to a `ValTreeKind::Branch` value, panicking
/// if this valtree is some other kind.
#[inline]
pub fn to_branch(&self) -> &[I::Const] {
match self {
ValTreeKind::Branch(branch) => &**branch,
ValTreeKind::Leaf(..) => panic!("expected branch, got {:?}", self),
}
}
/// Attempts to convert to a `ValTreeKind::Leaf` value.
pub fn try_to_leaf(&self) -> Option<I::ScalarInt> {
match self {
ValTreeKind::Leaf(s) => Some(*s),
ValTreeKind::Branch(_) => None,
}
}
/// Attempts to convert to a `ValTreeKind::Branch` value.
pub fn try_to_branch(&self) -> Option<&[I::Const]> {
match self {
ValTreeKind::Branch(branch) => Some(&**branch),
ValTreeKind::Leaf(_) => None,
}
}
}
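The valtree shape documented above can be illustrated with a minimal standalone sketch (this is simplified illustrative code, not rustc's actual `ValTreeKind`, which is generic over an interner): scalars are leaves, aggregates are branches, and an enum value stores its variant index as a leading leaf followed by the variant's fields.

```rust
// Simplified stand-in for the valtree representation described above.
#[derive(Clone, Debug, PartialEq, Eq)]
enum ValTree {
    // Integers, `bool`, `char` become scalar leaves.
    Leaf(u128),
    // Structs, tuples, arrays, and enums become branches; `Vec` provides
    // the indirection the recursive type needs.
    Branch(Vec<ValTree>),
}

fn main() {
    // `Some(3u32)` as a valtree: variant index 1 (`Some`), then the field value.
    let some_3 = ValTree::Branch(vec![ValTree::Leaf(1), ValTree::Leaf(3)]);
    // `None`: just the variant index, since there are no fields.
    let none = ValTree::Branch(vec![ValTree::Leaf(0)]);
    assert_ne!(some_3, none);
    // Unlike CTFE allocations, equal values have equal representations.
    assert_eq!(
        some_3,
        ValTree::Branch(vec![ValTree::Leaf(1), ValTree::Leaf(3)])
    );
}
```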

View file

@ -477,7 +477,17 @@ impl<I: Interner> FlagComputation<I> {
ty::ConstKind::Placeholder(_) => {
self.add_flags(TypeFlags::HAS_CT_PLACEHOLDER);
}
ty::ConstKind::Value(cv) => self.add_ty(cv.ty()),
ty::ConstKind::Value(cv) => {
self.add_ty(cv.ty());
match cv.valtree().kind() {
ty::ValTreeKind::Leaf(_) => (),
ty::ValTreeKind::Branch(cts) => {
for ct in cts {
self.add_const(*ct);
}
}
}
}
ty::ConstKind::Expr(e) => self.add_args(e.args().as_slice()),
ty::ConstKind::Error(_) => self.add_flags(TypeFlags::HAS_ERROR),
}

View file

@ -292,6 +292,12 @@ pub trait ValueConst<I: Interner<ValueConst = Self>>: Copy + Debug + Hash + Eq {
fn valtree(self) -> I::ValTree;
}
// FIXME(mgca): This trait can be removed once we're not using a `Box` in `Branch`
pub trait ValTree<I: Interner<ValTree = Self>>: Copy + Debug + Hash + Eq {
// This isn't `IntoKind` because then we can't return a reference
fn kind(&self) -> &ty::ValTreeKind<I>;
}
pub trait ExprConst<I: Interner<ExprConst = Self>>: Copy + Debug + Hash + Eq + Relate<I> {
fn args(self) -> I::GenericArgs;
}

View file

@ -153,7 +153,8 @@ pub trait Interner:
type PlaceholderConst: PlaceholderConst<Self>;
type ValueConst: ValueConst<Self>;
type ExprConst: ExprConst<Self>;
type ValTree: Copy + Debug + Hash + Eq;
type ValTree: ValTree<Self>;
type ScalarInt: Copy + Debug + Hash + Eq;
// Kinds of regions
type Region: Region<Self>;

View file

@ -582,13 +582,27 @@ pub fn structurally_relate_consts<I: Interner, R: TypeRelation<I>>(
}
(ty::ConstKind::Placeholder(p1), ty::ConstKind::Placeholder(p2)) => p1 == p2,
(ty::ConstKind::Value(a_val), ty::ConstKind::Value(b_val)) => {
a_val.valtree() == b_val.valtree()
match (a_val.valtree().kind(), b_val.valtree().kind()) {
(ty::ValTreeKind::Leaf(scalar_a), ty::ValTreeKind::Leaf(scalar_b)) => {
scalar_a == scalar_b
}
(ty::ValTreeKind::Branch(branches_a), ty::ValTreeKind::Branch(branches_b))
if branches_a.len() == branches_b.len() =>
{
branches_a
.into_iter()
.zip(branches_b)
.all(|(a, b)| relation.relate(*a, *b).is_ok())
}
_ => false,
}
}
// While this is slightly incorrect, it shouldn't matter for `min_const_generics`
// and is the better alternative to waiting until `generic_const_exprs` can
// be stabilized.
(ty::ConstKind::Unevaluated(au), ty::ConstKind::Unevaluated(bu)) if au.def == bu.def => {
// FIXME(mgca): remove this
if cfg!(debug_assertions) {
let a_ty = cx.type_of(au.def.into()).instantiate(cx, au.args);
let b_ty = cx.type_of(bu.def.into()).instantiate(cx, bu.args);
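The new structural relation over valtree values can be sketched as a small standalone recursion (again simplified illustrative code, not rustc's `structurally_relate_consts`, which defers to the type relation for nested consts): leaves compare by scalar equality, branches of equal length relate element-wise, and any other combination fails to relate.

```rust
// Simplified stand-in for the valtree representation.
#[derive(Clone, Debug, PartialEq, Eq)]
enum ValTree {
    Leaf(u128),
    Branch(Vec<ValTree>),
}

// Mirrors the `ConstKind::Value` arm added above: leaf/leaf compares
// scalars, branch/branch of equal length recurses, everything else fails.
fn structurally_relate(a: &ValTree, b: &ValTree) -> bool {
    match (a, b) {
        (ValTree::Leaf(x), ValTree::Leaf(y)) => x == y,
        (ValTree::Branch(xs), ValTree::Branch(ys)) if xs.len() == ys.len() => {
            xs.iter().zip(ys).all(|(x, y)| structurally_relate(x, y))
        }
        _ => false,
    }
}

fn main() {
    let a = ValTree::Branch(vec![ValTree::Leaf(1), ValTree::Leaf(3)]);
    let b = ValTree::Branch(vec![ValTree::Leaf(1), ValTree::Leaf(3)]);
    assert!(structurally_relate(&a, &b));
    // A leaf never relates to a branch, and lengths must match.
    assert!(!structurally_relate(&a, &ValTree::Leaf(3)));
    assert!(!structurally_relate(&a, &ValTree::Branch(vec![ValTree::Leaf(1)])));
}
```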

View file

@ -319,6 +319,10 @@ pub(crate) fn clean_const<'tcx>(constant: &hir::ConstArg<'tcx>) -> ConstantKind
hir::ConstArgKind::Path(qpath) => {
ConstantKind::Path { path: qpath_to_string(qpath).into() }
}
hir::ConstArgKind::Struct(..) => {
// FIXME(mgca): proper printing :3
ConstantKind::Path { path: "/* STRUCT EXPR */".to_string().into() }
}
hir::ConstArgKind::Anon(anon) => ConstantKind::Anonymous { body: anon.body },
hir::ConstArgKind::Infer(..) | hir::ConstArgKind::Error(..) => ConstantKind::Infer,
}
@ -1800,7 +1804,7 @@ pub(crate) fn clean_ty<'tcx>(ty: &hir::Ty<'tcx>, cx: &mut DocContext<'tcx>) -> T
let ct = cx.tcx.normalize_erasing_regions(typing_env, ct);
print_const(cx, ct)
}
hir::ConstArgKind::Path(..) => {
hir::ConstArgKind::Struct(..) | hir::ConstArgKind::Path(..) => {
let ct = lower_const_arg_for_rustdoc(cx.tcx, const_arg, FeedConstTy::No);
print_const(cx, ct)
}

View file

@ -357,7 +357,7 @@ pub(crate) fn print_const(cx: &DocContext<'_>, n: ty::Const<'_>) -> String {
}
// array lengths are obviously usize
ty::ConstKind::Value(cv) if *cv.ty.kind() == ty::Uint(ty::UintTy::Usize) => {
cv.valtree.unwrap_leaf().to_string()
cv.to_leaf().to_string()
}
_ => n.to_string(),
}

View file

@ -319,6 +319,7 @@ impl<'a, 'tcx> PrintVisitor<'a, 'tcx> {
chain!(self, "let ConstArgKind::Anon({anon_const}) = {const_arg}.kind");
self.body(field!(anon_const.body));
},
ConstArgKind::Struct(..) => chain!(self, "let ConstArgKind::Struct(..) = {const_arg}.kind"),
ConstArgKind::Infer(..) => chain!(self, "let ConstArgKind::Infer(..) = {const_arg}.kind"),
ConstArgKind::Error(..) => chain!(self, "let ConstArgKind::Error(..) = {const_arg}.kind"),
}

View file

@ -1139,7 +1139,7 @@ pub fn const_item_rhs_to_expr<'tcx>(tcx: TyCtxt<'tcx>, ct_rhs: ConstItemRhs<'tcx
ConstItemRhs::Body(body_id) => Some(tcx.hir_body(body_id).value),
ConstItemRhs::TypeConst(const_arg) => match const_arg.kind {
ConstArgKind::Anon(anon) => Some(tcx.hir_body(anon.body).value),
ConstArgKind::Path(_) | ConstArgKind::Error(..) | ConstArgKind::Infer(..) => None,
ConstArgKind::Struct(..) | ConstArgKind::Path(_) | ConstArgKind::Error(..) | ConstArgKind::Infer(..) => None,
},
}
}

View file

@ -477,11 +477,18 @@ impl HirEqInterExpr<'_, '_, '_> {
(ConstArgKind::Path(l_p), ConstArgKind::Path(r_p)) => self.eq_qpath(l_p, r_p),
(ConstArgKind::Anon(l_an), ConstArgKind::Anon(r_an)) => self.eq_body(l_an.body, r_an.body),
(ConstArgKind::Infer(..), ConstArgKind::Infer(..)) => true,
(ConstArgKind::Struct(path_a, inits_a), ConstArgKind::Struct(path_b, inits_b)) => {
self.eq_qpath(path_a, path_b)
&& inits_a.iter().zip(*inits_b).all(|(init_a, init_b)| {
self.eq_const_arg(init_a.expr, init_b.expr)
})
}
// Use explicit match for now since ConstArg is undergoing flux.
(ConstArgKind::Path(..), ConstArgKind::Anon(..))
| (ConstArgKind::Anon(..), ConstArgKind::Path(..))
| (ConstArgKind::Infer(..) | ConstArgKind::Error(..), _)
| (_, ConstArgKind::Infer(..) | ConstArgKind::Error(..)) => false,
(ConstArgKind::Path(..), _)
| (ConstArgKind::Anon(..), _)
| (ConstArgKind::Infer(..), _)
| (ConstArgKind::Struct(..), _)
| (ConstArgKind::Error(..), _) => false,
}
}
@ -1332,6 +1339,12 @@ impl<'a, 'tcx> SpanlessHash<'a, 'tcx> {
match &const_arg.kind {
ConstArgKind::Path(path) => self.hash_qpath(path),
ConstArgKind::Anon(anon) => self.hash_body(anon.body),
ConstArgKind::Struct(path, inits) => {
self.hash_qpath(path);
for init in *inits {
self.hash_const_arg(init.expr);
}
}
ConstArgKind::Infer(..) | ConstArgKind::Error(..) => {},
}
}

View file

@ -31,7 +31,7 @@ pub trait EvalContextExt<'tcx>: crate::MiriInterpCxExt<'tcx> {
let get_ord_at = |i: usize| {
let ordering = generic_args.const_at(i).to_value();
ordering.valtree.unwrap_branch()[0].unwrap_leaf().to_atomic_ordering()
ordering.to_branch()[0].to_value().to_leaf().to_atomic_ordering()
};
fn read_ord(ord: AtomicOrdering) -> AtomicReadOrd {

View file

@ -1,6 +0,0 @@
//@ known-bug: #127962
#![feature(generic_const_exprs, const_arg_path)]
fn zero_init<const usize: usize>() -> Substs1<{ (N) }> {
Substs1([0; { (usize) }])
}

View file

@ -1,11 +0,0 @@
//@ known-bug: #137888
#![feature(generic_const_exprs)]
macro_rules! empty {
() => ();
}
fn bar<const N: i32>() -> [(); {
empty! {};
N
}] {
}
fn main() {}

View file

@ -1,5 +0,0 @@
//@ known-bug: #140275
#![feature(generic_const_exprs)]
trait T{}
trait V{}
impl<const N: i32> T for [i32; N::<&mut V>] {}

View file

@ -0,0 +1,38 @@
#![feature(min_generic_const_args, adt_const_params)]
#![expect(incomplete_features)]
#[derive(Eq, PartialEq, std::marker::ConstParamTy)]
enum Option<T> {
Some(T),
None,
}
use Option::Some;
fn foo<const N: Option<u32>>() {}
trait Trait {
#[type_const]
const ASSOC: usize;
}
fn bar<T: Trait, const N: u32>() {
// the initializer of `_0` is a `N` which is a legal const argument
// so this is ok.
foo::<{ Some::<u32> { 0: N } }>();
// this is allowed as mgca supports uses of assoc consts in the
// type system, i.e. `<T as Trait>::ASSOC` is a legal const argument
foo::<{ Some::<u32> { 0: <T as Trait>::ASSOC } }>();
// this on the other hand is not allowed as `N + 1` is not a legal
// const argument
foo::<{ Some::<u32> { 0: N + 1 } }>();
//~^ ERROR: complex const arguments must be placed inside of a `const` block
// this also is not allowed as generic parameters cannot be used
// in anon const const args
foo::<{ Some::<u32> { 0: const { N + 1 } } }>();
//~^ ERROR: generic parameters may not be used in const operations
}
fn main() {}

View file

@ -0,0 +1,14 @@
error: complex const arguments must be placed inside of a `const` block
--> $DIR/adt_expr_arg_simple.rs:29:30
|
LL | foo::<{ Some::<u32> { 0: N + 1 } }>();
| ^^^^^
error: generic parameters may not be used in const operations
--> $DIR/adt_expr_arg_simple.rs:34:38
|
LL | foo::<{ Some::<u32> { 0: const { N + 1 } } }>();
| ^
error: aborting due to 2 previous errors

View file

@ -0,0 +1,26 @@
#![feature(min_generic_const_args, adt_const_params)]
#![expect(incomplete_features)]
#![crate_type = "lib"]
// Miscellaneous assortment of invalid cases of directly represented
// `ConstArgKind::Struct`s under mgca.
#[derive(Eq, PartialEq, std::marker::ConstParamTy)]
struct Foo<T> { field: T }
fn NonStruct() {}
fn accepts<const N: Foo<u8>>() {}
fn bar() {
accepts::<{ Foo::<u8> { }}>();
//~^ ERROR: struct expression with missing field initialiser for `field`
accepts::<{ Foo::<u8> { field: const { 1 }, field: const { 2} }}>();
//~^ ERROR: struct expression with multiple initialisers for `field`
accepts::<{ Fooo::<u8> { field: const { 1 } }}>();
//~^ ERROR: cannot find struct, variant or union type `Fooo` in this scope
//~| ERROR: struct expression with invalid base path
accepts::<{ NonStruct { }}>();
//~^ ERROR: expected struct, variant or union type, found function `NonStruct`
//~| ERROR: struct expression with invalid base path
}

View file

@ -0,0 +1,49 @@
error[E0422]: cannot find struct, variant or union type `Fooo` in this scope
--> $DIR/adt_expr_erroneuous_inits.rs:20:17
|
LL | struct Foo<T> { field: T }
| ------------- similarly named struct `Foo` defined here
...
LL | accepts::<{ Fooo::<u8> { field: const { 1 } }}>();
| ^^^^
|
help: a struct with a similar name exists
|
LL - accepts::<{ Fooo::<u8> { field: const { 1 } }}>();
LL + accepts::<{ Foo::<u8> { field: const { 1 } }}>();
|
error[E0574]: expected struct, variant or union type, found function `NonStruct`
--> $DIR/adt_expr_erroneuous_inits.rs:23:17
|
LL | accepts::<{ NonStruct { }}>();
| ^^^^^^^^^ not a struct, variant or union type
error: struct expression with missing field initialiser for `field`
--> $DIR/adt_expr_erroneuous_inits.rs:16:17
|
LL | accepts::<{ Foo::<u8> { }}>();
| ^^^^^^^^^
error: struct expression with multiple initialisers for `field`
--> $DIR/adt_expr_erroneuous_inits.rs:18:49
|
LL | accepts::<{ Foo::<u8> { field: const { 1 }, field: const { 2} }}>();
| ^^^^^^^^^^^^^^^^^
error: struct expression with invalid base path
--> $DIR/adt_expr_erroneuous_inits.rs:20:17
|
LL | accepts::<{ Fooo::<u8> { field: const { 1 } }}>();
| ^^^^^^^^^^
error: struct expression with invalid base path
--> $DIR/adt_expr_erroneuous_inits.rs:23:17
|
LL | accepts::<{ NonStruct { }}>();
| ^^^^^^^^^
error: aborting due to 6 previous errors
Some errors have detailed explanations: E0422, E0574.
For more information about an error, try `rustc --explain E0422`.

View file

@ -0,0 +1,17 @@
//@ check-pass
// FIXME(mgca): This should error
#![feature(min_generic_const_args, adt_const_params)]
#![expect(incomplete_features)]
#[derive(Eq, PartialEq, std::marker::ConstParamTy)]
struct Foo<T> { field: T }
fn accepts<const N: Foo<u8>>() {}
fn bar<const N: bool>() {
// `N` is not of type `u8` but we don't actually check this anywhere yet
accepts::<{ Foo::<u8> { field: N }}>();
}
fn main() {}

View file

@ -0,0 +1,45 @@
//@ check-pass
#![feature(
generic_const_items,
min_generic_const_args,
adt_const_params,
generic_const_parameter_types,
unsized_const_params,
)]
#![expect(incomplete_features)]
use std::marker::{PhantomData, ConstParamTy, ConstParamTy_};
#[derive(PartialEq, Eq, ConstParamTy)]
struct Foo<T> {
field: T,
}
#[type_const]
const WRAP<T: ConstParamTy_, const N: T>: Foo<T> = { Foo::<T> {
field: N,
} };
fn main() {
// What we're trying to accomplish here is winding up with an equality relation
// between two `ty::Const` that looks something like:
//
// ```
// Foo<u8> { field: const { 1 + 2 } }
// eq
// Foo<u8> { field: ?x }
// ```
//
// Note that the `field: _` here means a const argument `_`, not a wildcard pattern.
// This tests that we are able to infer `?x=3` even though the first `ty::Const`
// may be a fully evaluated constant, and the latter is not fully evaluatable due
// to inference variables.
let _: PC<_, { WRAP::<u8, const { 1 + 1 }> }>
= PC::<_, { Foo::<u8> { field: _ }}>;
}
// "PhantomConst" helper equivalent to "PhantomData" used for testing equalities
// of arbitrarily typed const arguments.
struct PC<T: ConstParamTy_, const N: T> { _0: PhantomData<T> }
const PC<T: ConstParamTy_, const N: T>: PC<T, N> = PC { _0: PhantomData::<T> };

View file

@ -0,0 +1,10 @@
#![feature(associated_const_equality, generic_const_items, min_generic_const_args)]
#![expect(incomplete_features)]
// library crates exercise weirder code paths around
// DefIds which were created for const args.
#![crate_type = "lib"]
struct Foo<const N: usize>;
type Alias<const N: usize> = Foo<const { N }>;
//~^ ERROR: generic parameters may not be used in const operations

View file

@ -0,0 +1,8 @@
error: generic parameters may not be used in const operations
--> $DIR/explicit_anon_consts-2.rs:9:42
|
LL | type Alias<const N: usize> = Foo<const { N }>;
| ^
error: aborting due to 1 previous error

View file

@ -0,0 +1,10 @@
#![feature(associated_const_equality, generic_const_items, min_generic_const_args)]
#![expect(incomplete_features)]
// library crates exercise weirder code paths around
// DefIds which were created for const args.
#![crate_type = "lib"]
struct Foo<const N: usize>;
type Alias<const N: usize> = [(); const { N }];
//~^ ERROR: generic parameters may not be used in const operations

View file

@ -0,0 +1,8 @@
error: generic parameters may not be used in const operations
--> $DIR/explicit_anon_consts-3.rs:9:43
|
LL | type Alias<const N: usize> = [(); const { N }];
| ^
error: aborting due to 1 previous error

View file

@ -0,0 +1,9 @@
#![feature(associated_const_equality, generic_const_items, min_generic_const_args)]
#![expect(incomplete_features)]
// library crates exercise weirder code paths around
// DefIds which were created for const args.
#![crate_type = "lib"]
#[type_const]
const ITEM3<const N: usize>: usize = const { N };
//~^ ERROR: generic parameters may not be used in const operations

View file

@ -0,0 +1,8 @@
error: generic parameters may not be used in const operations
--> $DIR/explicit_anon_consts-4.rs:8:46
|
LL | const ITEM3<const N: usize>: usize = const { N };
| ^
error: aborting due to 1 previous error

View file

@ -0,0 +1,16 @@
#![feature(associated_const_equality, generic_const_items, min_generic_const_args)]
#![expect(incomplete_features)]
// library crates exercise weirder code paths around
// DefIds which were created for const args.
#![crate_type = "lib"]
trait Trait {
#[type_const]
const ASSOC: usize;
}
fn ace_bounds<
const N: usize,
T: Trait<ASSOC = const { N }>,
//~^ ERROR: generic parameters may not be used in const operations
>() {}

View file

@ -0,0 +1,8 @@
error: generic parameters may not be used in const operations
--> $DIR/explicit_anon_consts-5.rs:14:30
|
LL | T: Trait<ASSOC = const { N }>,
| ^
error: aborting due to 1 previous error

View file

@ -0,0 +1,8 @@
#![feature(associated_const_equality, generic_const_items, min_generic_const_args)]
#![expect(incomplete_features)]
// library crates exercise weirder code paths around
// DefIds which were created for const args.
#![crate_type = "lib"]
struct Default3<const N: usize, const M: usize = const { N }>;
//~^ ERROR: generic parameters may not be used in const operations

View file

@ -0,0 +1,8 @@
error: generic parameters may not be used in const operations
--> $DIR/explicit_anon_consts-6.rs:7:58
|
LL | struct Default3<const N: usize, const M: usize = const { N }>;
| ^
error: aborting due to 1 previous error

View file

@ -4,20 +4,22 @@
// DefIds which were created for const args.
#![crate_type = "lib"]
// FIXME(mgca): merge the split out parts of this test back in
struct Foo<const N: usize>;
type Adt1<const N: usize> = Foo<N>;
type Adt2<const N: usize> = Foo<{ N }>;
type Adt3<const N: usize> = Foo<const { N }>;
//~^ ERROR: generic parameters may not be used in const operations
// explicit_anon_consts-2.rs
// type Adt3<const N: usize> = Foo<const { N }>;
type Adt4<const N: usize> = Foo<{ 1 + 1 }>;
//~^ ERROR: complex const arguments must be placed inside of a `const` block
type Adt5<const N: usize> = Foo<const { 1 + 1 }>;
type Arr<const N: usize> = [(); N];
type Arr2<const N: usize> = [(); { N }];
type Arr3<const N: usize> = [(); const { N }];
//~^ ERROR: generic parameters may not be used in const operations
// explicit_anon_consts-3.rs
// type Arr3<const N: usize> = [(); const { N }];
type Arr4<const N: usize> = [(); 1 + 1];
//~^ ERROR: complex const arguments must be placed inside of a `const` block
type Arr5<const N: usize> = [(); const { 1 + 1 }];
@ -36,9 +38,9 @@ fn repeats<const N: usize>() {
const ITEM1<const N: usize>: usize = N;
#[type_const]
const ITEM2<const N: usize>: usize = { N };
#[type_const]
const ITEM3<const N: usize>: usize = const { N };
//~^ ERROR: generic parameters may not be used in const operations
// explicit_anon_consts-4.rs
// #[type_const]
// const ITEM3<const N: usize>: usize = const { N };
#[type_const]
const ITEM4<const N: usize>: usize = { 1 + 1 };
//~^ ERROR: complex const arguments must be placed inside of a `const` block
@ -55,8 +57,8 @@ fn ace_bounds<
// We skip the T1 case because it doesn't resolve
// T1: Trait<ASSOC = N>,
T2: Trait<ASSOC = { N }>,
T3: Trait<ASSOC = const { N }>,
//~^ ERROR: generic parameters may not be used in const operations
// explicit_anon_consts-5.rs
// T3: Trait<ASSOC = const { N }>,
T4: Trait<ASSOC = { 1 + 1 }>,
//~^ ERROR: complex const arguments must be placed inside of a `const` block
T5: Trait<ASSOC = const { 1 + 1 }>,
@ -64,8 +66,8 @@ fn ace_bounds<
struct Default1<const N: usize, const M: usize = N>;
struct Default2<const N: usize, const M: usize = { N }>;
struct Default3<const N: usize, const M: usize = const { N }>;
//~^ ERROR: generic parameters may not be used in const operations
// explicit_anon_consts-6.rs
// struct Default3<const N: usize, const M: usize = const { N }>;
struct Default4<const N: usize, const M: usize = { 1 + 1 }>;
//~^ ERROR: complex const arguments must be placed inside of a `const` block
struct Default5<const N: usize, const M: usize = const { 1 + 1}>;

View file

@ -1,92 +1,44 @@
error: generic parameters may not be used in const operations
--> $DIR/explicit_anon_consts.rs:11:41
|
LL | type Adt3<const N: usize> = Foo<const { N }>;
| ^ cannot perform const operation using `N`
|
= help: const parameters may only be used as standalone arguments here, i.e. `N`
= help: add `#![feature(generic_const_exprs)]` to allow generic const expressions
error: generic parameters may not be used in const operations
--> $DIR/explicit_anon_consts.rs:19:42
|
LL | type Arr3<const N: usize> = [(); const { N }];
| ^ cannot perform const operation using `N`
|
= help: const parameters may only be used as standalone arguments here, i.e. `N`
= help: add `#![feature(generic_const_exprs)]` to allow generic const expressions
error: generic parameters may not be used in const operations
--> $DIR/explicit_anon_consts.rs:28:27
|
LL | let _3 = [(); const { N }];
| ^ cannot perform const operation using `N`
|
= help: const parameters may only be used as standalone arguments here, i.e. `N`
= help: add `#![feature(generic_const_exprs)]` to allow generic const expressions
error: generic parameters may not be used in const operations
--> $DIR/explicit_anon_consts.rs:40:46
|
LL | const ITEM3<const N: usize>: usize = const { N };
| ^ cannot perform const operation using `N`
|
= help: const parameters may only be used as standalone arguments here, i.e. `N`
= help: add `#![feature(generic_const_exprs)]` to allow generic const expressions
error: generic parameters may not be used in const operations
--> $DIR/explicit_anon_consts.rs:58:31
|
LL | T3: Trait<ASSOC = const { N }>,
| ^ cannot perform const operation using `N`
|
= help: const parameters may only be used as standalone arguments here, i.e. `N`
= help: add `#![feature(generic_const_exprs)]` to allow generic const expressions
error: generic parameters may not be used in const operations
--> $DIR/explicit_anon_consts.rs:67:58
|
LL | struct Default3<const N: usize, const M: usize = const { N }>;
| ^ cannot perform const operation using `N`
|
= help: const parameters may only be used as standalone arguments here, i.e. `N`
= help: add `#![feature(generic_const_exprs)]` to allow generic const expressions
error: complex const arguments must be placed inside of a `const` block
--> $DIR/explicit_anon_consts.rs:13:33
--> $DIR/explicit_anon_consts.rs:15:33
|
LL | type Adt4<const N: usize> = Foo<{ 1 + 1 }>;
| ^^^^^^^^^
error: complex const arguments must be placed inside of a `const` block
--> $DIR/explicit_anon_consts.rs:21:34
--> $DIR/explicit_anon_consts.rs:23:34
|
LL | type Arr4<const N: usize> = [(); 1 + 1];
| ^^^^^
error: complex const arguments must be placed inside of a `const` block
--> $DIR/explicit_anon_consts.rs:30:19
--> $DIR/explicit_anon_consts.rs:32:19
|
LL | let _4 = [(); 1 + 1];
| ^^^^^
error: complex const arguments must be placed inside of a `const` block
--> $DIR/explicit_anon_consts.rs:43:38
--> $DIR/explicit_anon_consts.rs:45:38
|
LL | const ITEM4<const N: usize>: usize = { 1 + 1 };
| ^^^^^^^^^
error: complex const arguments must be placed inside of a `const` block
--> $DIR/explicit_anon_consts.rs:60:23
--> $DIR/explicit_anon_consts.rs:62:23
|
LL | T4: Trait<ASSOC = { 1 + 1 }>,
| ^^^^^^^^^
error: complex const arguments must be placed inside of a `const` block
--> $DIR/explicit_anon_consts.rs:69:50
--> $DIR/explicit_anon_consts.rs:71:50
|
LL | struct Default4<const N: usize, const M: usize = { 1 + 1 }>;
| ^^^^^^^^^
error: aborting due to 12 previous errors
error: generic parameters may not be used in const operations
--> $DIR/explicit_anon_consts.rs:30:27
|
LL | let _3 = [(); const { N }];
| ^
error: aborting due to 7 previous errors

View file

@ -1,9 +1,10 @@
//@ check-pass
// We still allow literals to implicitly be anon consts, regardless
// of whether a const block is placed around them or not.
//
// However, we don't allow this for const arguments in field init positions,
// as that is simply harder to implement.
#![feature(min_generic_const_args, associated_const_equality)]
#![feature(min_generic_const_args, adt_const_params, associated_const_equality)]
#![expect(incomplete_features)]
trait Trait {
@ -19,4 +20,15 @@ type ArrLen = [(); 1];
struct Foo<const N: isize>;
type NormalArg = (Foo<1>, Foo<-1>);
#[derive(Eq, PartialEq, std::marker::ConstParamTy)]
struct ADT { field: u8 }
fn struct_expr() {
fn takes_n<const N: ADT>() {}
takes_n::<{ ADT { field: 1 } }>();
//~^ ERROR: complex const arguments must be placed inside of a `const` block
takes_n::<{ ADT { field: const { 1 } } }>();
}
fn main() {}

View file

@ -0,0 +1,8 @@
error: complex const arguments must be placed inside of a `const` block
--> $DIR/explicit_anon_consts_literals_hack.rs:29:30
|
LL | takes_n::<{ ADT { field: 1 } }>();
| ^
error: aborting due to 1 previous error

View file

@ -0,0 +1,31 @@
//@ check-pass
// Test that the def collector creates `AnonConst`s rather than `InlineConst`s even
// when the const block is obscured via macros.
#![feature(min_generic_const_args, adt_const_params)]
#![expect(incomplete_features)]
macro_rules! const_block {
($e:expr) => { const {
$e
} }
}
macro_rules! foo_expr {
($e:expr) => { Foo {
field: $e,
} }
}
use std::marker::ConstParamTy;
#[derive(PartialEq, Eq, ConstParamTy)]
struct Foo { field: u32 }
fn foo<const N: Foo>() {}
fn main() {
foo::<{ Foo { field: const_block!{ 1 + 1 }} }>();
foo::<{ foo_expr! { const_block! { 1 + 1 }} }>();
}

View file

@ -8,27 +8,4 @@ const FREE1<T>: usize = const { std::mem::size_of::<T>() };
const FREE2<const I: usize>: usize = const { I + 1 };
//~^ ERROR generic parameters may not be used in const operations
pub trait Tr<const X: usize> {
#[type_const]
const N1<T>: usize;
#[type_const]
const N2<const I: usize>: usize;
#[type_const]
const N3: usize;
}
pub struct S;
impl<const X: usize> Tr<X> for S {
#[type_const]
const N1<T>: usize = const { std::mem::size_of::<T>() };
//~^ ERROR generic parameters may not be used in const operations
#[type_const]
const N2<const I: usize>: usize = const { I + 1 };
//~^ ERROR generic parameters may not be used in const operations
#[type_const]
const N3: usize = const { 2 & X };
//~^ ERROR generic parameters may not be used in const operations
}
fn main() {}

View file

@ -2,46 +2,13 @@ error: generic parameters may not be used in const operations
--> $DIR/type_const-on-generic-expr.rs:5:53
|
LL | const FREE1<T>: usize = const { std::mem::size_of::<T>() };
| ^ cannot perform const operation using `T`
|
= note: type parameters may not be used in const expressions
= help: add `#![feature(generic_const_exprs)]` to allow generic const expressions
| ^
error: generic parameters may not be used in const operations
--> $DIR/type_const-on-generic-expr.rs:8:46
|
LL | const FREE2<const I: usize>: usize = const { I + 1 };
| ^ cannot perform const operation using `I`
|
= help: const parameters may only be used as standalone arguments here, i.e. `I`
= help: add `#![feature(generic_const_exprs)]` to allow generic const expressions
| ^
error: generic parameters may not be used in const operations
--> $DIR/type_const-on-generic-expr.rs:24:54
|
LL | const N1<T>: usize = const { std::mem::size_of::<T>() };
| ^ cannot perform const operation using `T`
|
= note: type parameters may not be used in const expressions
= help: add `#![feature(generic_const_exprs)]` to allow generic const expressions
error: generic parameters may not be used in const operations
--> $DIR/type_const-on-generic-expr.rs:27:47
|
LL | const N2<const I: usize>: usize = const { I + 1 };
| ^ cannot perform const operation using `I`
|
= help: const parameters may only be used as standalone arguments here, i.e. `I`
= help: add `#![feature(generic_const_exprs)]` to allow generic const expressions
error: generic parameters may not be used in const operations
--> $DIR/type_const-on-generic-expr.rs:30:35
|
LL | const N3: usize = const { 2 & X };
| ^ cannot perform const operation using `X`
|
= help: const parameters may only be used as standalone arguments here, i.e. `X`
= help: add `#![feature(generic_const_exprs)]` to allow generic const expressions
error: aborting due to 5 previous errors
error: aborting due to 2 previous errors


@@ -0,0 +1,27 @@
#![expect(incomplete_features)]
#![feature(min_generic_const_args, generic_const_items)]
pub trait Tr<const X: usize> {
    #[type_const]
    const N1<T>: usize;
    #[type_const]
    const N2<const I: usize>: usize;
    #[type_const]
    const N3: usize;
}
pub struct S;
impl<const X: usize> Tr<X> for S {
    #[type_const]
    const N1<T>: usize = const { std::mem::size_of::<T>() };
    //~^ ERROR generic parameters may not be used in const operations
    #[type_const]
    const N2<const I: usize>: usize = const { I + 1 };
    //~^ ERROR generic parameters may not be used in const operations
    #[type_const]
    const N3: usize = const { 2 & X };
    //~^ ERROR generic parameters may not be used in const operations
}
fn main() {}


@@ -0,0 +1,20 @@
error: generic parameters may not be used in const operations
--> $DIR/type_const-on-generic_expr-2.rs:17:54
|
LL | const N1<T>: usize = const { std::mem::size_of::<T>() };
| ^
error: generic parameters may not be used in const operations
--> $DIR/type_const-on-generic_expr-2.rs:20:47
|
LL | const N2<const I: usize>: usize = const { I + 1 };
| ^
error: generic parameters may not be used in const operations
--> $DIR/type_const-on-generic_expr-2.rs:23:35
|
LL | const N3: usize = const { 2 & X };
| ^
error: aborting due to 3 previous errors


@@ -12,7 +12,7 @@ LL | struct ExplicitlyPadded(Box<ExplicitlyPadded>);
error[E0391]: cycle detected when computing layout of `should_pad_explicitly_packed_field::ExplicitlyPadded`
|
= note: ...which immediately requires computing layout of `should_pad_explicitly_packed_field::ExplicitlyPadded` again
= note: cycle used when evaluating trait selection obligation `(): core::mem::transmutability::TransmuteFrom<should_pad_explicitly_packed_field::ExplicitlyPadded, ValTree(Branch([Leaf(0x00), Leaf(0x00), Leaf(0x00), Leaf(0x00)]): core::mem::transmutability::Assume)>`
= note: cycle used when evaluating trait selection obligation `(): core::mem::transmutability::TransmuteFrom<should_pad_explicitly_packed_field::ExplicitlyPadded, ValTree(Branch([ValTree(Leaf(0x00): bool), ValTree(Leaf(0x00): bool), ValTree(Leaf(0x00): bool), ValTree(Leaf(0x00): bool)]): core::mem::transmutability::Assume)>`
= note: see https://rustc-dev-guide.rust-lang.org/overview.html#queries and https://rustc-dev-guide.rust-lang.org/query.html for more information
error: aborting due to 2 previous errors


@@ -1,15 +0,0 @@
//@known-bug: #127972
//@ failure-status: 101
//@ normalize-stderr: "note: .*\n\n" -> ""
//@ normalize-stderr: "thread 'rustc'.*panicked.*\n" -> ""
//@ normalize-stderr: "(error: internal compiler error: [^:]+):\d+:\d+: " -> "$1:LL:CC: "
//@ normalize-stderr: "/rustc(?:-dev)?/[a-z0-9.]+/" -> ""
//@ rustc-env:RUST_BACKTRACE=0
#![feature(pattern_types, pattern_type_macro, generic_const_exprs)]
#![allow(internal_features)]
type Pat<const START: u32, const END: u32> =
    std::pat::pattern_type!(u32 is START::<(), i32, 2>..=END::<_, Assoc = ()>);
fn main() {}


@@ -1,21 +0,0 @@
warning: the feature `generic_const_exprs` is incomplete and may not be safe to use and/or cause compiler crashes
--> $DIR/bad_const_generics_args_on_const_param.rs:9:47
|
LL | #![feature(pattern_types, pattern_type_macro, generic_const_exprs)]
| ^^^^^^^^^^^^^^^^^^^
|
= note: see issue #76560 <https://github.com/rust-lang/rust/issues/76560> for more information
error: internal compiler error: compiler/rustc_hir_analysis/src/hir_ty_lowering/mod.rs:LL:CC: try_lower_anon_const_lit: received const param which shouldn't be possible
--> $DIR/bad_const_generics_args_on_const_param.rs:13:36
|
LL | std::pat::pattern_type!(u32 is START::<(), i32, 2>..=END::<_, Assoc = ()>);
| ^^^^^^^^^^^^^^^^^^^
Box<dyn Any>
query stack during panic:
#0 [type_of] expanding type alias `Pat`
#1 [check_well_formed] checking that `Pat` is well-formed
... and 2 other queries... use `env RUST_BACKTRACE=1` to see the full query stack
error: aborting due to 1 previous error; 1 warning emitted