Add Apple WatchOS compile targets

Hello, I would like to add the following target triples for Apple WatchOS as Tier 3 platforms:

- `armv7k-apple-watchos`
- `arm64_32-apple-watchos`
- `x86_64-apple-watchos-sim`

There are some prerequisite pull requests:

- https://github.com/rust-lang/compiler-builtins/pull/456 (merged)
- https://github.com/alexcrichton/cc-rs/pull/662 (pending)
- https://github.com/rust-lang/libc/pull/2717 (merged)

There will be a subsequent PR with standard library changes for WatchOS. The previous compiler and library changes were in a single PR (https://github.com/rust-lang/rust/pull/94736), which is now closed in favour of separate PRs.

Many thanks!
Vlad.

### Tier 3 Target Requirements

Adds support for Apple WatchOS compile targets. Below are details on how these targets meet the requirements for tier 3:

> A tier 3 target must have a designated developer or developers (the "target maintainers") on record to be CCed when issues arise regarding the target. (The mechanism to track and CC such developers may evolve over time.)

`@deg4uss3r` has volunteered to be the target maintainer. I am also happy to help if a second maintainer is required.

> Targets must use naming consistent with any existing targets; for instance, a target for the same CPU or OS as an existing Rust target should use the same name for that CPU or OS. Targets should normally use the same names and naming conventions as used elsewhere in the broader ecosystem beyond Rust (such as in other toolchains), unless they have a very good reason to diverge. Changing the name of a target can be highly disruptive, especially once the target reaches a higher tier, so getting the name right is important even for a tier 3 target.

These targets use the same naming as the corresponding LLVM targets, and the same convention as other Apple targets.

> Target names should not introduce undue confusion or ambiguity unless absolutely necessary to maintain ecosystem compatibility. For example, if the name of the target makes people extremely likely to form incorrect beliefs about what it targets, the name should be changed or augmented to disambiguate it.

I don't believe there is any ambiguity here.

> Tier 3 targets may have unusual requirements to build or use, but must not create legal issues or impose onerous legal terms for the Rust project or for Rust developers or users.

I don't see any legal issues here.

> The target must not introduce license incompatibilities.
>
> Anything added to the Rust repository must be under the standard Rust license (MIT OR Apache-2.0).
>
> The target must not cause the Rust tools or libraries built for any other host (even when supporting cross-compilation to the target) to depend on any new dependency less permissive than the Rust licensing policy. This applies whether the dependency is a Rust crate that would require adding new license exceptions (as specified by the tidy tool in the rust-lang/rust repository), or whether the dependency is a native library or binary. In other words, the introduction of the target must not cause a user installing or running a version of Rust or the Rust tools to be subject to any new license requirements.
>
> If the target supports building host tools (such as rustc or cargo), those host tools must not depend on proprietary (non-FOSS) libraries, other than ordinary runtime libraries supplied by the platform and commonly used by other binaries built for the target. For instance, rustc built for the target may depend on a common proprietary C runtime library or console output library, but must not depend on a proprietary code generation library or code optimization library. Rust's license permits such combinations, but the Rust project has no interest in maintaining such combinations within the scope of Rust itself, even at tier 3.
>
> Targets should not require proprietary (non-FOSS) components to link a functional binary or library.
>
> "onerous" here is an intentionally subjective term. At a minimum, "onerous" legal/licensing terms include but are not limited to: non-disclosure requirements, non-compete requirements, contributor license agreements (CLAs) or equivalent, "non-commercial"/"research-only"/etc terms, requirements conditional on the employer or employment of any particular Rust developers, revocable terms, any requirements that create liability for the Rust project or its developers or users, or any requirements that adversely affect the livelihood or prospects of the Rust project or its developers or users.

I see no issues with any of the above.

> Neither this policy nor any decisions made regarding targets shall create any binding agreement or estoppel by any party. If any member of an approving Rust team serves as one of the maintainers of a target, or has any legal or employment requirement (explicit or implicit) that might affect their decisions regarding a target, they must recuse themselves from any approval decisions regarding the target's tier status, though they may otherwise participate in discussions.
>
> This requirement does not prevent part or all of this policy from being cited in an explicit contract or work agreement (e.g. to implement or maintain support for a target). This requirement exists to ensure that a developer or team responsible for reviewing and approving a target does not face any legal threats or obligations that would prevent them from freely exercising their judgment in such approval, even if such judgment involves subjective matters or goes beyond the letter of these requirements.

Only relevant to those making approval decisions.

> Tier 3 targets should attempt to implement as much of the standard libraries as possible and appropriate (core for most targets, alloc for targets that can support dynamic memory allocation, std for targets with an operating system or equivalent layer of system-provided functionality), but may leave some code unimplemented (either unavailable or stubbed out as appropriate), whether because the target makes it impossible to implement or challenging to implement. The authors of pull requests are not obligated to avoid calling any portions of the standard library on the basis of a tier 3 target not implementing those portions.

`core` and `alloc` can be used. `std` support will be added in a subsequent PR.

> The target must provide documentation for the Rust community explaining how to build for the target, using cross-compilation if possible. If the target supports running tests (even if they do not pass), the documentation must explain how to run tests for the target, using emulation if possible or dedicated hardware if necessary.

Use the `--target=<target>` option to cross-compile, just like any other target. Tests can be run using the WatchOS simulator (see https://developer.apple.com/documentation/xcode/running-your-app-in-the-simulator-or-on-a-device).

> Tier 3 targets must not impose burden on the authors of pull requests, or other developers in the community, to maintain the target. In particular, do not post comments (automated or manual) on a PR that derail or suggest a block on the PR based on a tier 3 target. Do not send automated messages or notifications (via any medium, including via `@`) to a PR author or others involved with a PR regarding a tier 3 target, unless they have opted into such messages.
>
> Backlinks such as those generated by the issue/PR tracker when linking to an issue or PR are not considered a violation of this policy, within reason. However, such messages (even on a separate repository) must not generate notifications to anyone involved with a PR who has not requested such notifications.

I don't foresee this being a problem.

> Patches adding or updating tier 3 targets must not break any existing tier 2 or tier 1 target, and must not knowingly break another tier 3 target without approval of either the compiler team or the maintainers of the other tier 3 target.
>
> In particular, this may come up when working on closely related targets, such as variations of the same architecture with different features. Avoid introducing unconditional uses of features that another variation of the target may not have; use conditional compilation or runtime detection, as appropriate, to let each target run code supported by that target.

No other targets should be affected by the pull request.
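To make the cross-compilation mentioned above concrete, here is a hedged sketch of building for one of the new targets. It assumes a nightly toolchain and the Apple SDKs from Xcode; because tier 3 targets ship no prebuilt standard library, the unstable `-Z build-std` Cargo flag is used to build `core` and `alloc` from source:

```sh
# Sketch only: requires nightly Rust and the WatchOS SDK installed via Xcode.
cargo +nightly build -Z build-std=core,alloc --target armv7k-apple-watchos
```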
# rustbuild - Bootstrapping Rust
This is an in-progress README which is targeted at helping to explain how Rust is bootstrapped and in general some of the technical details of the build system.
## Using rustbuild

The rustbuild build system has a primary entry point, a top level `x.py` script:

```sh
$ python ./x.py build
```

Note that if you're on Unix you should be able to execute the script directly:

```sh
$ ./x.py build
```
The script accepts commands, flags, and arguments to determine what to do:

- `build` - a general purpose command for compiling code. Alone, `build` will
  bootstrap the entire compiler, and otherwise arguments passed indicate what to
  build. For example:

  ```sh
  # build the whole compiler
  ./x.py build --stage 2

  # build the stage1 compiler
  ./x.py build

  # build stage0 libstd
  ./x.py build --stage 0 library/std

  # build a particular crate in stage0
  ./x.py build --stage 0 library/test
  ```

  If files are dirty that would normally be rebuilt from stage 0, that can be
  overridden using `--keep-stage 0`. Using `--keep-stage n` will skip all steps
  that belong to stage n or earlier:

  ```sh
  # build stage 1, keeping old build products for stage 0
  ./x.py build --keep-stage 0
  ```

- `test` - a command for executing unit tests. Like the `build` command this
  will execute the entire test suite by default, and otherwise it can be used
  to select which test suite is run:

  ```sh
  # run all unit tests
  ./x.py test

  # execute tool tests
  ./x.py test tidy

  # execute the UI test suite
  ./x.py test src/test/ui

  # execute only some tests in the UI test suite
  ./x.py test src/test/ui --test-args substring-of-test-name

  # execute tests in the standard library in stage0
  ./x.py test --stage 0 library/std

  # execute tests in the core and standard library in stage0,
  # without running doc tests (thus avoid depending on building the compiler)
  ./x.py test --stage 0 --no-doc library/core library/std

  # execute all doc tests
  ./x.py test src/doc
  ```

- `doc` - a command for building documentation. Like above, this can take
  arguments for what to document.
## Configuring rustbuild
There are currently two methods for configuring the rustbuild build system.
First, rustbuild offers a TOML-based configuration system with a `config.toml`
file. An example of this configuration can be found at `config.toml.example`,
and the configuration file can also be passed as `--config path/to/config.toml`
if the build system is being invoked manually (via the python script).

Next, the `./configure` options serialized in `config.mk` will be parsed and
read. That is, if any `./configure` options are passed, they'll be handled
naturally. `./configure` should almost never be used for local installations,
and is primarily useful for CI. Prefer to customize behavior using
`config.toml`.
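As a minimal illustration (the two keys shown are real options, but the values are arbitrary; `config.toml.example` documents the full set), a hand-written `config.toml` might look like:

```toml
# Illustrative config.toml fragment; see config.toml.example for all options.
[build]
# Whether to also build extended tools beyond the compiler itself.
extended = false

[rust]
# Use incremental compilation when building the compiler.
incremental = true
```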
Finally, rustbuild makes use of the `cc-rs` crate, which has its own method of configuring C compilers and C flags via environment variables.
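For example, here is a sketch of overriding the C compiler through the environment (`CC` and `CFLAGS` are the generic variables cc-rs reads; target-scoped variants also exist):

```shell
# cc-rs picks up these variables when compiling C code during the build.
export CC=clang
export CFLAGS="-O2 -g"
# ./x.py build   # C compilations would now use: clang -O2 -g
echo "C compiler: $CC, flags: $CFLAGS"
```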
## Build stages

The rustbuild build system goes through a few phases to actually build the
compiler. What actually happens when you invoke rustbuild is:

- The entry point script, `x.py`, is run. This script is responsible for
  downloading the stage0 compiler/Cargo binaries, and it then compiles the
  build system itself (this folder). Finally, it then invokes the actual
  `bootstrap` binary build system.
- In Rust, `bootstrap` will slurp up all configuration, perform a number of
  sanity checks (e.g. that compilers exist), and then start building the
  stage0 artifacts.
- The stage0 `cargo` downloaded earlier is used to build the standard library
  and the compiler, and these binaries are then copied to the `stage1`
  directory. That compiler is then used to generate the stage1 artifacts which
  are then copied to the stage2 directory, and then finally the stage2
  artifacts are generated using that compiler.
The goal of each stage is to (a) leverage Cargo as much as possible and failing that (b) leverage Rust as much as possible!
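The stage ordering described above can be pictured with a small schematic loop (an illustration of the ordering only, not real build code):

```shell
# Each stage's compiler builds the next stage's std and rustc.
compiler="stage0 (downloaded)"
for n in 1 2; do
  echo "building stage$n std + rustc with the $compiler compiler"
  compiler="stage$n"
done
```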
## Incremental builds

You can configure rustbuild to use incremental compilation with the
`--incremental` flag:

```sh
$ ./x.py build --incremental
```

The `--incremental` flag will store incremental compilation artifacts in
`build/<host>/stage0-incremental`. Note that we only use incremental
compilation for the stage0 -> stage1 compilation -- this is because the stage1
compiler is changing, and we don't try to cache and reuse incremental
artifacts across different versions of the compiler.

You can always drop the `--incremental` flag to build as normal (but you will
still be using the local nightly as your bootstrap).
## Directory Layout
This build system houses all output under the `build` directory, which looks
like this:

```
# Root folder of all output. Everything is scoped underneath here
build/

  # Location where the stage0 compiler downloads are all cached. This
  # directory only contains the tarballs themselves as they're extracted
  # elsewhere.
  cache/
    2015-12-19/
    2016-01-15/
    2016-01-21/
    ...

  # Output directory for building this build system itself. The stage0
  # cargo/rustc are used to build the build system into this location.
  bootstrap/
    debug/
    release/

  # Output of the dist-related steps like dist-std, dist-rustc, and dist-docs
  dist/

  # Temporary directory used for various input/output as part of various stages
  tmp/

  # Each remaining directory is scoped by the "host" triple of compilation at
  # hand.
  x86_64-unknown-linux-gnu/

    # The build artifacts for the `compiler-rt` library for the target this
    # folder is under. The exact layout here will likely depend on the
    # platform, and this is also built with CMake so the build system is also
    # likely different.
    compiler-rt/
      build/

    # Output folder for LLVM if it is compiled for this target
    llvm/

      # build folder (e.g. the platform-specific build system). Like with
      # compiler-rt this is compiled with CMake
      build/

      # Installation of LLVM. Note that we run the equivalent of 'make install'
      # for LLVM to setup these folders.
      bin/
      lib/
      include/
      share/
      ...

    # Output folder for all documentation of this target. This is what's
    # filled in whenever the `doc` step is run.
    doc/

    # Output for all compiletest-based test suites
    test/
      ui/
      debuginfo/
      ...

    # Location where the stage0 Cargo and Rust compiler are unpacked. This
    # directory is purely an extracted and overlaid tarball of these two (done
    # by the bootstrap python script). In theory the build system does not
    # modify anything under this directory afterwards.
    stage0/

    # These build directories are the cargo output directories for builds of
    # the standard library and compiler, respectively. Internally these may
    # also have other target directories, which represent artifacts being
    # compiled from the host to the specified target.
    #
    # Essentially, each of these directories is filled in by one `cargo`
    # invocation. The build system instruments calling Cargo in the right
    # order with the right variables to ensure these are filled in correctly.
    stageN-std/
    stageN-test/
    stageN-rustc/
    stageN-tools/

    # This is a special case of the above directories, **not** filled in via
    # Cargo but rather the build system itself. The stage0 compiler already
    # has a set of target libraries for its own host triple (in its own
    # sysroot) inside of stage0/. When we run the stage0 compiler to bootstrap
    # more things, however, we don't want to use any of these libraries (as
    # those are the ones that we're building). So essentially, when the stage1
    # compiler is being compiled (e.g. after libstd has been built), *this* is
    # used as the sysroot for the stage0 compiler being run.
    #
    # Basically this directory is just a temporary artifact used to configure
    # the stage0 compiler to ensure that the libstd we just built is used to
    # compile the stage1 compiler.
    stage0-sysroot/lib/

    # These output directories are intended to be standalone working
    # implementations of the compiler (corresponding to each stage). The build
    # system will link (using hard links) output from stageN-{std,rustc} into
    # each of these directories.
    #
    # In theory there is no extra build output in these directories.
    stage1/
    stage2/
    stage3/
```
## Cargo projects

The current build is unfortunately not quite as simple as `cargo build` in a
directory, but rather the compiler is split into three different Cargo
projects:

- `library/std` - the standard library
- `library/test` - testing support, depends on libstd
- `compiler/rustc` - the actual compiler itself

Each "project" has a corresponding `Cargo.lock` file with all dependencies,
and this means that building the compiler involves running Cargo three times.
The structure here serves two goals:

1. Facilitating dependencies coming from crates.io. These dependencies don't
   depend on `std`, so libstd is a separate project compiled ahead of time
   before the actual compiler builds.
2. Splitting "host artifacts" from "target artifacts". That is, when building
   code for an arbitrary target you don't need the entire compiler, but you'll
   end up needing libraries like libtest that depend on std but also want to
   use crates.io dependencies. Hence, libtest is split out as its own project
   that is sequenced after `std` but before `rustc`. This project is built for
   all targets.
There is some loss in build parallelism here because libtest can be compiled in parallel with a number of rustc artifacts, but in theory the loss isn't too bad!
## Build tools

We've actually got quite a few tools that we use in the compiler's build
system and for testing. To organize these, each tool is a project in
`src/tools` with a corresponding `Cargo.toml`. All tools are compiled with
Cargo (currently having independent `Cargo.lock` files) and do not currently
explicitly depend on the compiler or standard library. Compiling each tool is
sequenced after the appropriate libstd/libtest/librustc compile above.
## Extending rustbuild

So you'd like to add a feature to the rustbuild build system or just fix a
bug. Great! One of the major motivational factors for moving away from make is
that Rust is in theory much easier to read, modify, and write. If you find
anything excessively confusing, please open an issue about it and we'll try to
get it documented or simplified pronto.
First up, you'll probably want to read over the documentation above as that'll give you a high level overview of what rustbuild is doing. You also probably want to play around a bit yourself by just getting it up and running before you dive too much into the actual build system itself.
After that, each module in rustbuild should have enough documentation to keep you up and running. Some general areas that you may be interested in modifying are:
- Adding a new build tool? Take a look at `bootstrap/tool.rs` for examples of
  other tools.
- Adding a new compiler crate? Look no further! Adding crates can be done by
  adding a new directory with `Cargo.toml` followed by configuring all
  `Cargo.toml` files accordingly.
- Adding a new dependency from crates.io? This should just work inside the
  compiler artifacts stage (everything other than libtest and libstd).
- Adding a new configuration option? You'll want to modify
  `bootstrap/flags.rs` for command line flags and then `bootstrap/config.rs`
  to copy the flags to the `Config` struct.
- Adding a sanity check? Take a look at `bootstrap/sanity.rs`.
If you make a major change, please remember to:

- Update `VERSION` in `src/bootstrap/main.rs`.
- Update `changelog-seen = N` in `config.toml.example`.
- Add an entry in `src/bootstrap/CHANGELOG.md`.

A 'major change' includes

- a new option, or
- a change in the default options.
Changes that do not affect contributors to the compiler or users building
rustc from source don't need an update to `VERSION`.
If you have any questions feel free to reach out on the #t-infra channel in
the Rust Zulip server or ask on internals.rust-lang.org. When
you encounter bugs, please file issues on the rust-lang/rust issue tracker.