convert doc-attributes to doc-comments using ./src/etc/sugarise-doc-comments.py (and manually tweaking) - for issue #2498

Gareth Daniel Smith 2012-07-04 22:53:12 +01:00 committed by Brian Anderson
parent bfa43ca301
commit be0141666d
123 changed files with 4981 additions and 5044 deletions

@ -18,10 +18,10 @@ import std::map::hashmap;
enum pp_mode {ppm_normal, ppm_expanded, ppm_typed, ppm_identified,
ppm_expanded_identified }
#[doc = "
The name used for source code that doesn't originate in a file
(e.g. source from stdin or a string)
"]
/**
* The name used for source code that doesn't originate in a file
* (e.g. source from stdin or a string)
*/
fn anon_src() -> str { "<anon>" }
fn source_name(input: input) -> str {
@ -88,9 +88,9 @@ fn parse_cfgspecs(cfgspecs: ~[str]) -> ast::crate_cfg {
}
enum input {
#[doc = "Load source from file"]
/// Load source from file
file_input(str),
#[doc = "The string is the source"]
/// The string is the source
str_input(str)
}

@ -167,7 +167,7 @@ impl session for session {
fn fast_resolve() -> bool { self.debugging_opt(fast_resolve) }
}
#[doc = "Some reasonable defaults"]
/// Some reasonable defaults
fn basic_options() -> @options {
@{
crate_type: session::lib_crate,

@ -1,8 +1,4 @@
#[doc = "
Validates all used crates and extern libraries and loads their metadata
"];
//! Validates all used crates and extern libraries and loads their metadata
import syntax::diagnostic::span_handler;
import syntax::{ast, ast_util};
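
For reference, the rewrite applied throughout this commit maps the attribute forms onto Rust's sugared doc comments: `#[doc = "..."]` on an item becomes `///` (or a `/** ... */` block), and the inner, module-level form `#[doc = "..."];` becomes `//!` (or `/*! ... */`). A minimal sketch in modern Rust of where each form attaches; the item names merely echo the `enum input` hunk above and are not from the commit:

    //! Module-level documentation: the sugared form that this commit
    //! writes as `//!` or `/*! ... */` in place of the inner attribute.

    /// Line doc comment for the item that follows (replaces an outer
    /// `#[doc = "..."]` attribute).
    pub fn anon_src() -> &'static str {
        "<anon>"
    }

    /** Block doc comment; equivalent to the `///` form above. */
    pub enum Input {
        /// Load source from a file path.
        FileInput(String),
        /// The string itself is the source.
        StrInput(String),
    }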

@ -82,7 +82,7 @@ fn resolve_path(cstore: cstore::cstore, cnum: ast::crate_num,
ret result;
}
#[doc="Iterates over all the paths in the given crate."]
/// Iterates over all the paths in the given crate.
fn each_path(cstore: cstore::cstore, cnum: ast::crate_num,
f: fn(decoder::path_entry) -> bool) {
let crate_data = cstore::get_crate_data(cstore, cnum);

@ -414,7 +414,7 @@ class path_entry {
}
}
#[doc="Iterates over all the paths in the given crate."]
/// Iterates over all the paths in the given crate.
fn each_path(cdata: cmd, f: fn(path_entry) -> bool) {
let root = ebml::doc(cdata.data);
let items = ebml::get_doc(root, tag_items);

@ -1,8 +1,4 @@
#[doc = "
Finds crate binaries and loads their metadata
"];
//! Finds crate binaries and loads their metadata
import syntax::diagnostic::span_handler;
import syntax::{ast, attr};

@ -1,152 +1,150 @@
#[doc = "
# Borrow check
This pass is in charge of enforcing *memory safety* and *purity*. As
memory safety is by far the more complex topic, I'll focus on that in
this description, but purity will be covered later on. In the context
of Rust, memory safety means three basic things:
- no writes to immutable memory;
- all pointers point to non-freed memory;
- all pointers point to memory of the same type as the pointer.
The last point might seem confusing: after all, for the most part,
this condition is guaranteed by the type check. However, there are
two cases where the type check effectively delegates to borrow check.
The first case has to do with enums. If there is a pointer to the
interior of an enum, and the enum is in a mutable location (such as a
local variable or field declared to be mutable), it is possible that
the user will overwrite the enum with a new value of a different
variant, and thus effectively change the type of the memory that the
pointer is pointing at.
The second case has to do with mutability. Basically, the type
checker has only a limited understanding of mutability. It will allow
(for example) the user to get an immutable pointer with the address of
a mutable local variable. It will also allow a `@mut T` or `~mut T`
pointer to be borrowed as a `&r.T` pointer. These seeming oversights
are in fact intentional; they allow the user to temporarily treat a
mutable value as immutable. It is up to the borrow check to guarantee
that the value in question is not in fact mutated during the lifetime
`r` of the reference.
# Summary of the safety check
In order to enforce mutability, the borrow check has three tricks up
its sleeve.
First, data which is uniquely tied to the current stack frame (that'll
be defined shortly) is tracked very precisely. This means that, for
example, if an immutable pointer to a mutable local variable is
created, the borrowck will simply check for assignments to that
particular local variable: no other memory is affected.
Second, if the data is not uniquely tied to the stack frame, it may
still be possible to ensure its validity by rooting garbage collected
pointers at runtime. For example, if there is a mutable local
variable `x` of type `@T`, and its contents are borrowed with an
expression like `&*x`, then the value of `x` will be rooted (today,
that means its ref count will be temporarily increased) for the lifetime
of the reference that is created. This means that the pointer remains
valid even if `x` is reassigned.
Finally, if neither of these two solutions are applicable, then we
require that all operations within the scope of the reference be
*pure*. A pure operation is effectively one that does not write to
any aliasable memory. This means that it is still possible to write
to local variables or other data that is uniquely tied to the stack
frame (there's that term again; formal definition still pending) but
not to data reached via a `&T` or `@T` pointer. Such writes could
possibly have the side-effect of causing the data which must remain
valid to be overwritten.
# Possible future directions
There are numerous ways that the `borrowck` could be strengthened, but
these are the two most likely:
- flow-sensitivity: we do not currently consider flow at all but only
block-scoping. This means that innocent code like the following is
rejected:
let mut x: int;
...
x = 5;
let y: &int = &x; // immutable ptr created
...
The reason is that the scope of the pointer `y` is the entire
enclosing block, and the assignment `x = 5` occurs within that
block. The analysis is not smart enough to see that `x = 5` always
happens before the immutable pointer is created. This is relatively
easy to fix and will surely be fixed at some point.
- finer-grained purity checks: currently, our fallback for
guaranteeing random references into mutable, aliasable memory is to
require *total purity*. This is rather strong. We could use local
type-based alias analysis to distinguish writes that could not
possibly invalidate the references which must be guaranteed. This
would only work within the function boundaries; function calls would
still require total purity. This seems less likely to be
implemented in the short term as it would make the code
significantly more complex; there is currently no code to analyze
the types and determine the possible impacts of a write.
# Terminology
A **loan** is .
# How the code works
The borrow check code is divided into several major modules, each of
which is documented in its own file.
The `gather_loans` and `check_loans` are the two major passes of the
analysis. The `gather_loans` pass runs over the IR once to determine
what memory must remain valid and for how long. Its name is a bit of
a misnomer; it does in fact gather up the set of loans which are
granted, but it also determines when @T pointers must be rooted and
for which scopes purity must be required.
The `check_loans` pass walks the IR and examines the loans and purity
requirements computed in `gather_loans`. It checks to ensure that (a)
the conditions of all loans are honored; (b) no contradictory loans
were granted (for example, loaning out the same memory as mutable and
immutable simultaneously); and (c) any purity requirements are
honored.
The remaining modules are helper modules used by `gather_loans` and
`check_loans`:
- `categorization` has the job of analyzing an expression to determine
what kind of memory is used in evaluating it (for example, where
dereferences occur and what kind of pointer is dereferenced; whether
the memory is mutable; etc)
- `loan` determines when data uniquely tied to the stack frame can be
loaned out.
- `preserve` determines what actions (if any) must be taken to preserve
aliasable data. This is the code which decides when to root
an @T pointer or to require purity.
# Maps that are created
Borrowck results in two maps.
- `root_map`: identifies those expressions or patterns whose result
needs to be rooted. Conceptually the root_map maps from an
expression or pattern node to a `node_id` identifying the scope for
which the expression must be rooted (this `node_id` should identify
a block or call). The actual key to the map is not an expression id,
however, but a `root_map_key`, which combines an expression id with a
deref count and is used to cope with auto-deref.
- `mutbl_map`: identifies those local variables which are modified or
moved. This is used by trans to guarantee that such variables are
given a memory location and not used as immediates.
"];
/*!
* # Borrow check
*
 * This pass is in charge of enforcing *memory safety* and *purity*. As
* memory safety is by far the more complex topic, I'll focus on that in
* this description, but purity will be covered later on. In the context
* of Rust, memory safety means three basic things:
*
* - no writes to immutable memory;
* - all pointers point to non-freed memory;
* - all pointers point to memory of the same type as the pointer.
*
* The last point might seem confusing: after all, for the most part,
* this condition is guaranteed by the type check. However, there are
* two cases where the type check effectively delegates to borrow check.
*
* The first case has to do with enums. If there is a pointer to the
* interior of an enum, and the enum is in a mutable location (such as a
* local variable or field declared to be mutable), it is possible that
* the user will overwrite the enum with a new value of a different
* variant, and thus effectively change the type of the memory that the
* pointer is pointing at.
*
* The second case has to do with mutability. Basically, the type
* checker has only a limited understanding of mutability. It will allow
* (for example) the user to get an immutable pointer with the address of
* a mutable local variable. It will also allow a `@mut T` or `~mut T`
* pointer to be borrowed as a `&r.T` pointer. These seeming oversights
* are in fact intentional; they allow the user to temporarily treat a
* mutable value as immutable. It is up to the borrow check to guarantee
* that the value in question is not in fact mutated during the lifetime
* `r` of the reference.
*
* # Summary of the safety check
*
* In order to enforce mutability, the borrow check has three tricks up
* its sleeve.
*
* First, data which is uniquely tied to the current stack frame (that'll
* be defined shortly) is tracked very precisely. This means that, for
* example, if an immutable pointer to a mutable local variable is
* created, the borrowck will simply check for assignments to that
* particular local variable: no other memory is affected.
*
* Second, if the data is not uniquely tied to the stack frame, it may
* still be possible to ensure its validity by rooting garbage collected
* pointers at runtime. For example, if there is a mutable local
* variable `x` of type `@T`, and its contents are borrowed with an
* expression like `&*x`, then the value of `x` will be rooted (today,
 * that means its ref count will be temporarily increased) for the lifetime
* of the reference that is created. This means that the pointer remains
* valid even if `x` is reassigned.
*
* Finally, if neither of these two solutions are applicable, then we
* require that all operations within the scope of the reference be
* *pure*. A pure operation is effectively one that does not write to
* any aliasable memory. This means that it is still possible to write
* to local variables or other data that is uniquely tied to the stack
* frame (there's that term again; formal definition still pending) but
* not to data reached via a `&T` or `@T` pointer. Such writes could
* possibly have the side-effect of causing the data which must remain
* valid to be overwritten.
*
* # Possible future directions
*
* There are numerous ways that the `borrowck` could be strengthened, but
* these are the two most likely:
*
* - flow-sensitivity: we do not currently consider flow at all but only
* block-scoping. This means that innocent code like the following is
* rejected:
*
* let mut x: int;
* ...
* x = 5;
* let y: &int = &x; // immutable ptr created
* ...
*
* The reason is that the scope of the pointer `y` is the entire
* enclosing block, and the assignment `x = 5` occurs within that
* block. The analysis is not smart enough to see that `x = 5` always
* happens before the immutable pointer is created. This is relatively
* easy to fix and will surely be fixed at some point.
*
* - finer-grained purity checks: currently, our fallback for
* guaranteeing random references into mutable, aliasable memory is to
* require *total purity*. This is rather strong. We could use local
* type-based alias analysis to distinguish writes that could not
 * possibly invalidate the references which must be guaranteed. This
* would only work within the function boundaries; function calls would
* still require total purity. This seems less likely to be
* implemented in the short term as it would make the code
* significantly more complex; there is currently no code to analyze
* the types and determine the possible impacts of a write.
*
* # Terminology
*
* A **loan** is .
*
* # How the code works
*
* The borrow check code is divided into several major modules, each of
* which is documented in its own file.
*
* The `gather_loans` and `check_loans` are the two major passes of the
* analysis. The `gather_loans` pass runs over the IR once to determine
* what memory must remain valid and for how long. Its name is a bit of
* a misnomer; it does in fact gather up the set of loans which are
* granted, but it also determines when @T pointers must be rooted and
* for which scopes purity must be required.
*
* The `check_loans` pass walks the IR and examines the loans and purity
* requirements computed in `gather_loans`. It checks to ensure that (a)
* the conditions of all loans are honored; (b) no contradictory loans
* were granted (for example, loaning out the same memory as mutable and
* immutable simultaneously); and (c) any purity requirements are
* honored.
*
* The remaining modules are helper modules used by `gather_loans` and
* `check_loans`:
*
* - `categorization` has the job of analyzing an expression to determine
* what kind of memory is used in evaluating it (for example, where
* dereferences occur and what kind of pointer is dereferenced; whether
* the memory is mutable; etc)
* - `loan` determines when data uniquely tied to the stack frame can be
* loaned out.
* - `preserve` determines what actions (if any) must be taken to preserve
* aliasable data. This is the code which decides when to root
* an @T pointer or to require purity.
*
* # Maps that are created
*
* Borrowck results in two maps.
*
* - `root_map`: identifies those expressions or patterns whose result
* needs to be rooted. Conceptually the root_map maps from an
* expression or pattern node to a `node_id` identifying the scope for
* which the expression must be rooted (this `node_id` should identify
* a block or call). The actual key to the map is not an expression id,
* however, but a `root_map_key`, which combines an expression id with a
* deref count and is used to cope with auto-deref.
*
* - `mutbl_map`: identifies those local variables which are modified or
* moved. This is used by trans to guarantee that such variables are
* given a memory location and not used as immediates.
*/
import syntax::ast;
import syntax::ast::{mutability, m_mutbl, m_imm, m_const};
@ -304,7 +302,7 @@ fn save_and_restore<T:copy,U>(&save_and_restore_t: T, f: fn() -> U) -> U {
ret u;
}
#[doc = "Creates and returns a new root_map"]
/// Creates and returns a new root_map
fn root_map() -> root_map {
ret hashmap(root_map_key_hash, root_map_key_eq);
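
The borrow-check overview above says the `root_map` is keyed not by an expression id alone but by a `root_map_key` that also carries a deref count, and that each entry names the scope for which the value must stay rooted. A rough modern-Rust sketch of that shape; the type and field names are illustrative, not the actual rustc definitions:

    use std::collections::HashMap;

    /// Stand-in for rustc's AST node ids.
    type NodeId = u32;

    /// Key into the root map: the borrowed expression plus how many
    /// automatic derefs were applied (this is what copes with auto-deref).
    #[derive(PartialEq, Eq, Hash, Clone, Copy)]
    struct RootMapKey {
        expr_id: NodeId,
        deref_count: u32,
    }

    /// Maps each key to the id of the block or call for whose duration
    /// the managed box reached through that expression must stay rooted.
    type RootMap = HashMap<RootMapKey, NodeId>;

    fn new_root_map() -> RootMap {
        HashMap::new()
    }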

@ -1,41 +1,40 @@
#[doc = "
# Categorization
The job of the categorization module is to analyze an expression to
determine what kind of memory is used in evaluating it (for example,
where dereferences occur and what kind of pointer is dereferenced;
whether the memory is mutable; etc)
Categorization effectively transforms all of our expressions into
expressions of the following forms (the actual enum has many more
possibilities, naturally, but they are all variants of these base
forms):
E = rvalue // some computed rvalue
| x // address of a local variable, arg, or upvar
| *E // deref of a ptr
| E.comp // access to an interior component
Imagine a routine ToAddr(Expr) that evaluates an expression and returns an
address where the result is to be found. If Expr is an lvalue, then this
is the address of the lvalue. If Expr is an rvalue, this is the address of
some temporary spot in memory where the result is stored.
Now, cat_expr() classifies the expression Expr and the address A=ToAddr(Expr)
as follows:
- cat: what kind of expression was this? This is a subset of the
full expression forms which only includes those that we care about
for the purpose of the analysis.
- mutbl: mutability of the address A
- ty: the type of data found at the address A
The resulting categorization tree differs somewhat from the expressions
themselves. For example, auto-derefs are explicit. Also, an index a[b] is
decomposed into two operations: a dereference to reach the array data and
then an index to jump forward to the relevant item.
"];
/*!
* # Categorization
*
* The job of the categorization module is to analyze an expression to
* determine what kind of memory is used in evaluating it (for example,
* where dereferences occur and what kind of pointer is dereferenced;
* whether the memory is mutable; etc)
*
* Categorization effectively transforms all of our expressions into
* expressions of the following forms (the actual enum has many more
* possibilities, naturally, but they are all variants of these base
* forms):
*
* E = rvalue // some computed rvalue
* | x // address of a local variable, arg, or upvar
* | *E // deref of a ptr
* | E.comp // access to an interior component
*
* Imagine a routine ToAddr(Expr) that evaluates an expression and returns an
* address where the result is to be found. If Expr is an lvalue, then this
* is the address of the lvalue. If Expr is an rvalue, this is the address of
* some temporary spot in memory where the result is stored.
*
 * Now, cat_expr() classifies the expression Expr and the address A=ToAddr(Expr)
* as follows:
*
* - cat: what kind of expression was this? This is a subset of the
* full expression forms which only includes those that we care about
* for the purpose of the analysis.
* - mutbl: mutability of the address A
* - ty: the type of data found at the address A
*
* The resulting categorization tree differs somewhat from the expressions
* themselves. For example, auto-derefs are explicit. Also, an index a[b] is
 * decomposed into two operations: a dereference to reach the array data and
* then an index to jump forward to the relevant item.
*/
export public_methods;
export opt_deref_kind;
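
The categorization doc above reduces every expression to a handful of base forms and records, for each, the mutability and type of the addressed memory. A compact modern-Rust sketch of that category tree; the names and the string stand-ins are invented for illustration:

    /// Mutability of the memory at the computed address.
    #[derive(Clone, Copy, Debug)]
    enum Mutability { Immutable, Mutable }

    /// The base forms from the doc: rvalue, local, deref, interior component.
    /// (The real enum has many more variants, all refinements of these.)
    #[derive(Debug)]
    enum Categorization {
        Rvalue,                       // some computed rvalue
        Local(String),                // address of a local, arg, or upvar
        Deref(Box<Cmt>),              // *E: deref of a pointer
        Interior(Box<Cmt>, String),   // E.comp: an interior component
    }

    /// What a cat_expr-style classifier would produce for one expression.
    #[derive(Debug)]
    struct Cmt {
        cat: Categorization, // which base form the expression takes
        mutbl: Mutability,   // mutability of the address A
        ty: String,          // stand-in for the type of data found at A
    }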

@ -17,27 +17,26 @@ export get_warning_level, get_warning_settings_level;
export check_crate, build_settings_crate, mk_warning_settings;
export warning_settings;
#[doc="
A 'lint' check is a kind of miscellaneous constraint that a user _might_ want
to enforce, but might reasonably want to permit as well, on a module-by-module
basis. They contrast with static constraints enforced by other phases of the
compiler, which are generally required to hold in order to compile the program
at all.
We also build up a table containing information about lint settings, in order
to allow other passes to take advantage of the warning attribute
infrastructure. To save space, the table is keyed by the id of /items/, not of
every expression. When an item has the default settings, the entry will be
omitted. If we start allowing warn attributes on expressions, we will start
having entries for expressions that do not share their enclosing item's
settings.
This module, then, exports two passes: one that populates the warning settings
table in the session and is run early in the compile process, and one that
does a variety of lint checks, and is run late in the compile process.
"]
/**
* A 'lint' check is a kind of miscellaneous constraint that a user _might_
* want to enforce, but might reasonably want to permit as well, on a
* module-by-module basis. They contrast with static constraints enforced by
* other phases of the compiler, which are generally required to hold in order
* to compile the program at all.
*
* We also build up a table containing information about lint settings, in
* order to allow other passes to take advantage of the warning attribute
* infrastructure. To save space, the table is keyed by the id of /items/, not
* of every expression. When an item has the default settings, the entry will
* be omitted. If we start allowing warn attributes on expressions, we will
 * start having entries for expressions that do not share their enclosing
 * item's settings.
 *
 * This module, then, exports two passes: one that populates the warning
* settings table in the session and is run early in the compile process, and
* one that does a variety of lint checks, and is run late in the compile
* process.
*/
enum lint {
ctypes,
@ -203,11 +202,11 @@ impl methods for ctxt {
self.sess.span_lint_level(level, span, msg);
}
#[doc="
Merge the warnings specified by any `warn(...)` attributes into the
current lint context, call the provided function, then reset the
warnings in effect to their previous state.
"]
/**
* Merge the warnings specified by any `warn(...)` attributes into the
* current lint context, call the provided function, then reset the
* warnings in effect to their previous state.
*/
fn with_warn_attrs(attrs: ~[ast::attribute], f: fn(ctxt)) {
let mut new_ctxt = self;
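
`with_warn_attrs` is described as merging attribute-supplied warning levels into the current context, running the callback, and then resetting the warnings to their previous state. A small modern-Rust sketch of that save/override/restore shape; the types here are invented and are not the rustc lint API:

    use std::collections::HashMap;

    type LintId = u32;

    #[derive(Clone, Copy, PartialEq, Debug)]
    enum Level { Allow, Warn, Deny }

    struct Ctxt {
        levels: HashMap<LintId, Level>, // the lint levels currently in effect
    }

    impl Ctxt {
        /// Apply the overrides, run `f` against the merged settings, then
        /// reset the warnings in effect to their previous state.
        fn with_overrides(&mut self, overrides: &[(LintId, Level)], f: impl FnOnce(&Ctxt)) {
            let saved = self.levels.clone();
            for &(lint, level) in overrides {
                self.levels.insert(lint, level);
            }
            f(self);
            self.levels = saved;
        }
    }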

@ -1,106 +1,104 @@
#[doc = "
A classic liveness analysis based on dataflow over the AST. Computes,
for each local variable in a function, whether that variable is live
at a given point. Program execution points are identified by their
id.
# Basic idea
The basic model is that each local variable is assigned an index. We
represent sets of local variables using a vector indexed by this
index. The value in the vector is either 0, indicating the variable
is dead, or the id of an expression that uses the variable.
We conceptually walk over the AST in reverse execution order. If we
find a use of a variable, we add it to the set of live variables. If
we find an assignment to a variable, we remove it from the set of live
variables. When we have to merge two flows, we take the union of
those two flows---if the variable is live on both paths, we simply
pick one id. In the event of loops, we continue doing this until a
fixed point is reached.
## Checking initialization
At the function entry point, all variables must be dead. If this is
not the case, we can report an error using the id found in the set of
live variables, which identifies a use of the variable which is not
dominated by an assignment.
## Checking moves
After each explicit move, the variable must be dead.
## Computing last uses
Any use of the variable where the variable is dead afterwards is a
last use.
# Extension to handle constructors
Each field is assigned an index just as with local variables. A use of
`self` is considered a use of all fields. A use of `self.f` is just a use
of `f`.
# Implementation details
The actual implementation contains two (nested) walks over the AST.
The outer walk has the job of building up the ir_maps instance for the
enclosing function. On the way down the tree, it identifies those AST
nodes and variable IDs that will be needed for the liveness analysis
and assigns them contiguous IDs. The liveness id for an AST node is
called a `live_node` (it's a newtype'd uint) and the id for a variable
is called a `variable` (another newtype'd uint).
On the way back up the tree, as we are about to exit from a function
declaration we allocate a `liveness` instance. Now that we know
precisely how many nodes and variables we need, we can allocate all
the various arrays that we will need to precisely the right size. We then
perform the actual propagation on the `liveness` instance.
This propagation is encoded in the various `propagate_through_*()`
methods. It effectively does a reverse walk of the AST; whenever we
reach a loop node, we iterate until a fixed point is reached.
## The `users` struct
At each live node `N`, we track three pieces of information for each
variable `V` (these are encapsulated in the `users` struct):
- `reader`: the `live_node` ID of some node which will read the value
that `V` holds on entry to `N`. Formally: a node `M` such
that there exists a path `P` from `N` to `M` where `P` does not
write `V`. If the `reader` is `invalid_node()`, then the current
value will never be read (the variable is dead, essentially).
- `writer`: the `live_node` ID of some node which will write the
variable `V` and which is reachable from `N`. Formally: a node `M`
such that there exists a path `P` from `N` to `M` and `M` writes
`V`. If the `writer` is `invalid_node()`, then there is no writer
of `V` that follows `N`.
- `used`: a boolean value indicating whether `V` is *used*. We
distinguish a *read* from a *use* in that a *use* is some read that
is not just used to generate a new value. For example, `x += 1` is
a read but not a use. This is used to generate better warnings.
## Special Variables
We generate various special variables for various, well, special purposes.
These are described in the `specials` struct:
- `exit_ln`: a live node that is generated to represent every 'exit' from the
function, whether it be by explicit return, fail, or other means.
- `fallthrough_ln`: a live node that represents a fallthrough
- `no_ret_var`: a synthetic variable that is only 'read' from the
fallthrough node. This allows us to detect functions where we fail
to return explicitly.
- `self_var`: a variable representing 'self'
"];
/*!
* A classic liveness analysis based on dataflow over the AST. Computes,
* for each local variable in a function, whether that variable is live
* at a given point. Program execution points are identified by their
* id.
*
* # Basic idea
*
* The basic model is that each local variable is assigned an index. We
* represent sets of local variables using a vector indexed by this
* index. The value in the vector is either 0, indicating the variable
* is dead, or the id of an expression that uses the variable.
*
* We conceptually walk over the AST in reverse execution order. If we
* find a use of a variable, we add it to the set of live variables. If
* we find an assignment to a variable, we remove it from the set of live
* variables. When we have to merge two flows, we take the union of
* those two flows---if the variable is live on both paths, we simply
* pick one id. In the event of loops, we continue doing this until a
* fixed point is reached.
*
* ## Checking initialization
*
* At the function entry point, all variables must be dead. If this is
* not the case, we can report an error using the id found in the set of
* live variables, which identifies a use of the variable which is not
* dominated by an assignment.
*
* ## Checking moves
*
* After each explicit move, the variable must be dead.
*
* ## Computing last uses
*
* Any use of the variable where the variable is dead afterwards is a
* last use.
*
* # Extension to handle constructors
*
* Each field is assigned an index just as with local variables. A use of
* `self` is considered a use of all fields. A use of `self.f` is just a use
* of `f`.
*
* # Implementation details
*
* The actual implementation contains two (nested) walks over the AST.
* The outer walk has the job of building up the ir_maps instance for the
* enclosing function. On the way down the tree, it identifies those AST
* nodes and variable IDs that will be needed for the liveness analysis
* and assigns them contiguous IDs. The liveness id for an AST node is
* called a `live_node` (it's a newtype'd uint) and the id for a variable
* is called a `variable` (another newtype'd uint).
*
* On the way back up the tree, as we are about to exit from a function
* declaration we allocate a `liveness` instance. Now that we know
* precisely how many nodes and variables we need, we can allocate all
* the various arrays that we will need to precisely the right size. We then
* perform the actual propagation on the `liveness` instance.
*
* This propagation is encoded in the various `propagate_through_*()`
* methods. It effectively does a reverse walk of the AST; whenever we
* reach a loop node, we iterate until a fixed point is reached.
*
* ## The `users` struct
*
* At each live node `N`, we track three pieces of information for each
* variable `V` (these are encapsulated in the `users` struct):
*
* - `reader`: the `live_node` ID of some node which will read the value
* that `V` holds on entry to `N`. Formally: a node `M` such
* that there exists a path `P` from `N` to `M` where `P` does not
* write `V`. If the `reader` is `invalid_node()`, then the current
* value will never be read (the variable is dead, essentially).
*
* - `writer`: the `live_node` ID of some node which will write the
* variable `V` and which is reachable from `N`. Formally: a node `M`
* such that there exists a path `P` from `N` to `M` and `M` writes
* `V`. If the `writer` is `invalid_node()`, then there is no writer
* of `V` that follows `N`.
*
* - `used`: a boolean value indicating whether `V` is *used*. We
* distinguish a *read* from a *use* in that a *use* is some read that
* is not just used to generate a new value. For example, `x += 1` is
* a read but not a use. This is used to generate better warnings.
*
* ## Special Variables
*
* We generate various special variables for various, well, special purposes.
* These are described in the `specials` struct:
*
* - `exit_ln`: a live node that is generated to represent every 'exit' from
* the function, whether it be by explicit return, fail, or other means.
*
* - `fallthrough_ln`: a live node that represents a fallthrough
*
 * - `no_ret_var`: a synthetic variable that is only 'read' from the
* fallthrough node. This allows us to detect functions where we fail
* to return explicitly.
*
* - `self_var`: a variable representing 'self'
*/
import dvec::{dvec, extensions};
import std::map::{hashmap, int_hash, str_hash, box_str_hash};
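
The liveness doc tracks, for each live node and variable, a `reader`, a `writer`, and a `used` flag, and merges two control-flow paths by taking their union, picking either id when both are live. A small modern-Rust sketch of those three facts and the merge step; the names are illustrative:

    /// A liveness node id; a sentinel plays the role of `invalid_node()`.
    type LiveNode = u32;
    const INVALID_NODE: LiveNode = u32::MAX;

    /// The three per-variable facts tracked at each live node.
    #[derive(Clone, Copy)]
    struct Users {
        reader: LiveNode, // some later node that reads the current value
        writer: LiveNode, // some later node that writes the variable
        used: bool,       // genuinely *used*; `x += 1` reads but does not use
    }

    /// Merging two flows: live on either path means live here, and when
    /// both paths are live we simply pick one id, as the doc describes.
    fn merge(a: Users, b: Users) -> Users {
        Users {
            reader: if a.reader != INVALID_NODE { a.reader } else { b.reader },
            writer: if a.writer != INVALID_NODE { a.writer } else { b.writer },
            used: a.used || b.used,
        }
    }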

@ -109,13 +109,13 @@ enum ModuleDef {
ModuleDef(@Module), // Defines a module.
}
#[doc="Contains data for specific types of import directives."]
/// Contains data for specific types of import directives.
enum ImportDirectiveSubclass {
SingleImport(Atom /* target */, Atom /* source */),
GlobImport
}
#[doc="The context that we thread through while building the reduced graph."]
/// The context that we thread through while building the reduced graph.
enum ReducedGraphParent {
ModuleReducedGraphParent(@Module)
}
@ -235,15 +235,15 @@ class AtomTable {
}
}
#[doc="Creates a hash table of atoms."]
/// Creates a hash table of atoms.
fn atom_hashmap<V:copy>() -> hashmap<Atom,V> {
ret hashmap::<Atom,V>(|a| a, |a, b| a == b);
}
#[doc="
One local scope. In Rust, local scopes can only contain value bindings.
Therefore, we don't have to worry about the other namespaces here.
"]
/**
* One local scope. In Rust, local scopes can only contain value bindings.
* Therefore, we don't have to worry about the other namespaces here.
*/
class Rib {
let bindings: hashmap<Atom,def_like>;
let kind: RibKind;
@ -254,7 +254,7 @@ class Rib {
}
}
#[doc="One import directive."]
/// One import directive.
class ImportDirective {
let module_path: @dvec<Atom>;
let subclass: @ImportDirectiveSubclass;
@ -265,7 +265,7 @@ class ImportDirective {
}
}
#[doc="The item that an import resolves to."]
/// The item that an import resolves to.
class Target {
let target_module: @Module;
let bindings: @NameBindings;
@ -313,14 +313,14 @@ class ImportResolution {
}
}
#[doc="The link from a module up to its nearest parent node."]
/// The link from a module up to its nearest parent node.
enum ParentLink {
NoParentLink,
ModuleParentLink(@Module, Atom),
BlockParentLink(@Module, node_id)
}
#[doc="One node in the tree of modules."]
/// One node in the tree of modules.
class Module {
let parent_link: ParentLink;
let mut def_id: option<def_id>;
@ -398,10 +398,10 @@ pure fn is_none<T>(x: option<T>) -> bool {
}
}
#[doc="
Records the definitions (at most one for each namespace) that a name is
bound to.
"]
/**
* Records the definitions (at most one for each namespace) that a name is
* bound to.
*/
class NameBindings {
let mut module_def: ModuleDef; //< Meaning in the module namespace.
let mut type_def: option<def>; //< Meaning in the type namespace.
@ -415,7 +415,7 @@ class NameBindings {
self.impl_defs = ~[];
}
#[doc="Creates a new module in this set of name bindings."]
/// Creates a new module in this set of name bindings.
fn define_module(parent_link: ParentLink, def_id: option<def_id>) {
if self.module_def == NoModuleDef {
let module = @Module(parent_link, def_id);
@ -423,22 +423,22 @@ class NameBindings {
}
}
#[doc="Records a type definition."]
/// Records a type definition.
fn define_type(def: def) {
self.type_def = some(def);
}
#[doc="Records a value definition."]
/// Records a value definition.
fn define_value(def: def) {
self.value_def = some(def);
}
#[doc="Records an impl definition."]
/// Records an impl definition.
fn define_impl(implementation: @Impl) {
self.impl_defs += ~[implementation];
}
#[doc="Returns the module node if applicable."]
/// Returns the module node if applicable.
fn get_module_if_available() -> option<@Module> {
alt self.module_def {
NoModuleDef { ret none; }
@ -446,10 +446,10 @@ class NameBindings {
}
}
#[doc="
Returns the module node. Fails if this node does not have a module
definition.
"]
/**
* Returns the module node. Fails if this node does not have a module
* definition.
*/
fn get_module() -> @Module {
alt self.module_def {
NoModuleDef {
@ -508,7 +508,7 @@ class NameBindings {
}
}
#[doc="Interns the names of the primitive types."]
/// Interns the names of the primitive types.
class PrimitiveTypeTable {
let primitive_types: hashmap<Atom,prim_ty>;
@ -539,7 +539,7 @@ class PrimitiveTypeTable {
}
}
#[doc="The main resolver class."]
/// The main resolver class.
class Resolver {
let session: session;
let ast_map: ASTMap;
@ -611,7 +611,7 @@ class Resolver {
self.export_map = int_hash();
}
#[doc="The main name resolution procedure."]
/// The main name resolution procedure.
fn resolve(this: @Resolver) {
self.build_reduced_graph(this);
self.resolve_imports();
@ -627,7 +627,7 @@ class Resolver {
// any imports resolved.
//
#[doc="Constructs the reduced graph for the entire crate."]
/// Constructs the reduced graph for the entire crate.
fn build_reduced_graph(this: @Resolver) {
let initial_parent =
ModuleReducedGraphParent((*self.graph_root).get_module());
@ -654,7 +654,7 @@ class Resolver {
}));
}
#[doc="Returns the current module tracked by the reduced graph parent."]
/// Returns the current module tracked by the reduced graph parent.
fn get_module_from_parent(reduced_graph_parent: ReducedGraphParent)
-> @Module {
alt reduced_graph_parent {
@ -664,16 +664,16 @@ class Resolver {
}
}
#[doc="
Adds a new child item to the module definition of the parent node and
returns its corresponding name bindings as well as the current parent.
Or, if we're inside a block, creates (or reuses) an anonymous module
corresponding to the innermost block ID and returns the name bindings
as well as the newly-created parent.
If this node does not have a module definition and we are not inside
a block, fails.
"]
/**
* Adds a new child item to the module definition of the parent node and
* returns its corresponding name bindings as well as the current parent.
* Or, if we're inside a block, creates (or reuses) an anonymous module
* corresponding to the innermost block ID and returns the name bindings
* as well as the newly-created parent.
*
* If this node does not have a module definition and we are not inside
* a block, fails.
*/
fn add_child(name: Atom,
reduced_graph_parent: ReducedGraphParent)
-> (@NameBindings, ReducedGraphParent) {
@ -742,7 +742,7 @@ class Resolver {
}
}
#[doc="Constructs the reduced graph for one item."]
/// Constructs the reduced graph for one item.
fn build_reduced_graph_for_item(item: @item,
parent: ReducedGraphParent,
&&visitor: vt<ReducedGraphParent>) {
@ -874,10 +874,10 @@ class Resolver {
}
}
#[doc="
Constructs the reduced graph for one variant. Variants exist in the
type namespace.
"]
/**
* Constructs the reduced graph for one variant. Variants exist in the
* type namespace.
*/
fn build_reduced_graph_for_variant(variant: variant,
item_id: def_id,
parent: ReducedGraphParent,
@ -890,10 +890,10 @@ class Resolver {
local_def(variant.node.id)));
}
#[doc="
Constructs the reduced graph for one 'view item'. View items consist
of imports and use directives.
"]
/**
* Constructs the reduced graph for one 'view item'. View items consist
* of imports and use directives.
*/
fn build_reduced_graph_for_view_item(view_item: @view_item,
parent: ReducedGraphParent,
&&_visitor: vt<ReducedGraphParent>) {
@ -1045,7 +1045,7 @@ class Resolver {
}
}
#[doc="Constructs the reduced graph for one foreign item."]
/// Constructs the reduced graph for one foreign item.
fn build_reduced_graph_for_foreign_item(foreign_item: @foreign_item,
parent: ReducedGraphParent,
&&visitor:
@ -1095,10 +1095,10 @@ class Resolver {
visit_block(block, new_parent, visitor);
}
#[doc="
Builds the reduced graph rooted at the 'use' directive for an external
crate.
"]
/**
* Builds the reduced graph rooted at the 'use' directive for an external
* crate.
*/
fn build_reduced_graph_for_external_crate(root: @Module) {
// Create all the items reachable by paths.
for each_path(self.session.cstore, get(root.def_id).crate)
@ -1285,7 +1285,7 @@ class Resolver {
}
}
#[doc="Creates and adds an import directive to the given module."]
/// Creates and adds an import directive to the given module.
fn build_import_directive(module: @Module,
module_path: @dvec<Atom>,
subclass: @ImportDirectiveSubclass) {
@ -1328,10 +1328,10 @@ class Resolver {
// remain or unsuccessfully when no forward progress in resolving imports
// is made.
#[doc="
Resolves all imports for the crate. This method performs the fixed-
point iteration.
"]
/**
* Resolves all imports for the crate. This method performs the fixed-
* point iteration.
*/
fn resolve_imports() {
let mut i = 0u;
let mut prev_unresolved_imports = 0u;
@ -1358,10 +1358,10 @@ class Resolver {
}
}
#[doc="
Attempts to resolve imports for the given module and all of its
submodules.
"]
/**
* Attempts to resolve imports for the given module and all of its
* submodules.
*/
fn resolve_imports_for_module_subtree(module: @Module) {
#debug("(resolving imports for module subtree) resolving %s",
self.module_to_str(module));
@ -1383,7 +1383,7 @@ class Resolver {
}
}
#[doc="Attempts to resolve imports for the given module only."]
/// Attempts to resolve imports for the given module only.
fn resolve_imports_for_module(module: @Module) {
if (*module).all_imports_resolved() {
#debug("(resolving imports for module) all imports resolved for \
@ -1416,13 +1416,13 @@ class Resolver {
}
}
#[doc="
Attempts to resolve the given import. The return value indicates
failure if we're certain the name does not exist, indeterminate if we
don't know whether the name exists at the moment due to other
currently-unresolved imports, or success if we know the name exists.
If successful, the resolved bindings are written into the module.
"]
/**
* Attempts to resolve the given import. The return value indicates
* failure if we're certain the name does not exist, indeterminate if we
* don't know whether the name exists at the moment due to other
* currently-unresolved imports, or success if we know the name exists.
* If successful, the resolved bindings are written into the module.
*/
fn resolve_import_for_module(module: @Module,
import_directive: @ImportDirective)
-> ResolveResult<()> {
@ -1721,11 +1721,11 @@ class Resolver {
ret Success(());
}
#[doc="
Resolves a glob import. Note that this function cannot fail; it either
succeeds or bails out (as importing * from an empty module or a module
that exports nothing is valid).
"]
/**
* Resolves a glob import. Note that this function cannot fail; it either
* succeeds or bails out (as importing * from an empty module or a module
* that exports nothing is valid).
*/
fn resolve_glob_import(module: @Module, containing_module: @Module)
-> ResolveResult<()> {
@ -1927,10 +1927,10 @@ class Resolver {
ret Success(search_module);
}
#[doc="
Attempts to resolve the module part of an import directive rooted at
the given module.
"]
/**
* Attempts to resolve the module part of an import directive rooted at
* the given module.
*/
fn resolve_module_path_for_import(module: @Module,
module_path: @dvec<Atom>,
xray: XrayFlag)
@ -2093,11 +2093,11 @@ class Resolver {
module.exported_names.contains_key(name);
}
#[doc="
Attempts to resolve the supplied name in the given module for the
given namespace. If successful, returns the target corresponding to
the name.
"]
/**
* Attempts to resolve the supplied name in the given module for the
* given namespace. If successful, returns the target corresponding to
* the name.
*/
fn resolve_name_in_module(module: @Module,
name: Atom,
namespace: Namespace,
@ -2168,11 +2168,11 @@ class Resolver {
ret Failed;
}
#[doc="
Resolves a one-level renaming import of the kind `import foo = bar;`
This needs special handling, as, unlike all of the other imports, it
needs to look in the scope chain for modules and non-modules alike.
"]
/**
* Resolves a one-level renaming import of the kind `import foo = bar;`
* This needs special handling, as, unlike all of the other imports, it
* needs to look in the scope chain for modules and non-modules alike.
*/
fn resolve_one_level_renaming_import(module: @Module,
import_directive: @ImportDirective)
-> ResolveResult<()> {
@ -3496,10 +3496,10 @@ class Resolver {
}
}
#[doc="
If `check_ribs` is true, checks the local definitions first; i.e.
doesn't skip straight to the containing module.
"]
/**
* If `check_ribs` is true, checks the local definitions first; i.e.
* doesn't skip straight to the containing module.
*/
fn resolve_path(path: @path, namespace: Namespace, check_ribs: bool,
visitor: ResolveVisitor)
-> option<def> {
@ -3859,7 +3859,7 @@ class Resolver {
// hit.
//
#[doc="A somewhat inefficient routine to print out the name of a module."]
/// A somewhat inefficient routine to print out the name of a module.
fn module_to_str(module: @Module) -> str {
let atoms = dvec();
let mut current_module = module;
@ -3977,7 +3977,7 @@ class Resolver {
}
}
#[doc="Entry point to crate resolution."]
/// Entry point to crate resolution.
fn resolve_crate(session: session, ast_map: ASTMap, crate: @crate)
-> { def_map: DefMap, exp_map: ExportMap, impl_map: ImplMap } {
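
`resolve_imports` above is described as a fixed-point iteration: keep making passes over the unresolved imports until either all are resolved or a pass makes no forward progress. A stripped-down modern-Rust sketch of that loop; the `try_resolve` callback is a stand-in for the real per-import resolution:

    /// Repeatedly try to resolve imports until done or stuck.
    fn resolve_all(
        mut unresolved: Vec<String>,
        try_resolve: impl Fn(&str) -> bool,
    ) -> Result<(), Vec<String>> {
        loop {
            if unresolved.is_empty() {
                return Ok(()); // fixed point: everything resolved
            }
            let before = unresolved.len();
            unresolved.retain(|import| !try_resolve(import));
            if unresolved.len() == before {
                // No forward progress this pass: report what is left.
                return Err(unresolved);
            }
        }
    }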

@ -2735,14 +2735,14 @@ fn trans_lval(cx: block, e: @ast::expr) -> lval_result {
}
}
#[doc = "
Get the type of a box in the default address space.
Shared box pointers live in address space 1 so the GC strategy can find them.
Before taking a pointer to the inside of a box it should be cast into address
space 0. Otherwise the resulting (non-box) pointer will be in the wrong
address space and thus be the wrong type.
"]
/**
* Get the type of a box in the default address space.
*
* Shared box pointers live in address space 1 so the GC strategy can find
* them. Before taking a pointer to the inside of a box it should be cast into
* address space 0. Otherwise the resulting (non-box) pointer will be in the
* wrong address space and thus be the wrong type.
*/
fn non_gc_box_cast(cx: block, val: ValueRef) -> ValueRef {
#debug("non_gc_box_cast");
add_comment(cx, "non_gc_box_cast");

@ -2996,9 +2996,7 @@ fn ty_params_to_tys(tcx: ty::ctxt, tps: ~[ast::ty_param]) -> ~[t] {
})
}
#[doc = "
Returns an equivalent type with all the typedefs and self regions removed.
"]
/// Returns an equivalent type with all the typedefs and self regions removed.
fn normalize_ty(cx: ctxt, t: t) -> t {
alt cx.normalized_cache.find(t) {
some(t) { ret t; }

@ -1,48 +1,46 @@
#[doc = "
Conversion from AST representation of types to the ty.rs
representation. The main routine here is `ast_ty_to_ty()`: each use
is parameterized by an instance of `ast_conv` and a `region_scope`.
The parameterization of `ast_ty_to_ty()` is because it behaves
somewhat differently during the collect and check phases, particularly
with respect to looking up the types of top-level items. In the
collect phase, the crate context is used as the `ast_conv` instance;
in this phase, the `get_item_ty()` function triggers a recursive call
to `ty_of_item()` (note that `ast_ty_to_ty()` will detect recursive
types and report an error). In the check phase, when the @fn_ctxt is
used as the `ast_conv`, `get_item_ty()` just looks up the item type in
`tcx.tcache`.
The `region_scope` interface controls how region references are
handled. It has two methods which are used to resolve anonymous
region references (e.g., `&T`) and named region references (e.g.,
`&a.T`). There are numerous region scopes that can be used, but most
commonly you want either `empty_rscope`, which permits only the static
region, or `type_rscope`, which permits the self region if the type in
question is parameterized by a region.
Unlike the `ast_conv` iface, the region scope can change as we descend
the type. This is to accommodate the fact that (a) fn types are binding
scopes and (b) the default region may change. To understand case (a),
consider something like:
type foo = { x: &a.int, y: fn(&a.int) }
The type of `x` is an error because there is no region `a` in scope.
In the type of `y`, however, region `a` is considered a bound region
as it does not already appear in scope.
Case (b) says that if you have a type:
type foo/& = ...;
type bar = fn(&foo, &a.foo)
The fully expanded version of type bar is:
type bar = fn(&foo/&, &a.foo/&a)
Note that the self region for the `foo` defaulted to `&` in the first
case but `&a` in the second. Basically, defaults that appear inside
an rptr (`&r.T`) use the region `r` that appears in the rptr.
"];
/*!
* Conversion from AST representation of types to the ty.rs
* representation. The main routine here is `ast_ty_to_ty()`: each use
* is parameterized by an instance of `ast_conv` and a `region_scope`.
*
* The parameterization of `ast_ty_to_ty()` is because it behaves
* somewhat differently during the collect and check phases, particularly
* with respect to looking up the types of top-level items. In the
* collect phase, the crate context is used as the `ast_conv` instance;
* in this phase, the `get_item_ty()` function triggers a recursive call
* to `ty_of_item()` (note that `ast_ty_to_ty()` will detect recursive
* types and report an error). In the check phase, when the @fn_ctxt is
* used as the `ast_conv`, `get_item_ty()` just looks up the item type in
* `tcx.tcache`.
*
* The `region_scope` interface controls how region references are
* handled. It has two methods which are used to resolve anonymous
* region references (e.g., `&T`) and named region references (e.g.,
* `&a.T`). There are numerous region scopes that can be used, but most
* commonly you want either `empty_rscope`, which permits only the static
* region, or `type_rscope`, which permits the self region if the type in
* question is parameterized by a region.
*
* Unlike the `ast_conv` iface, the region scope can change as we descend
* the type. This is to accommodate the fact that (a) fn types are binding
* scopes and (b) the default region may change. To understand case (a),
* consider something like:
*
* type foo = { x: &a.int, y: fn(&a.int) }
*
* The type of `x` is an error because there is no region `a` in scope.
* In the type of `y`, however, region `a` is considered a bound region
* as it does not already appear in scope.
*
* Case (b) says that if you have a type:
* type foo/& = ...;
* type bar = fn(&foo, &a.foo)
* The fully expanded version of type bar is:
* type bar = fn(&foo/&, &a.foo/&a)
* Note that the self region for the `foo` defaulted to `&` in the first
* case but `&a` in the second. Basically, defaults that appear inside
* an rptr (`&r.T`) use the region `r` that appears in the rptr.
*/
import check::fn_ctxt;
import rscope::{anon_rscope, binding_rscope, empty_rscope, in_anon_rscope};

@ -152,18 +152,18 @@ fn ensure_iface_methods(ccx: @crate_ctxt, id: ast::node_id) {
}
}
#[doc = "
Checks that a method from an impl/class conforms to the signature of
the same method as declared in the iface.
# Parameters
- impl_m: the method in the impl
- impl_tps: the type params declared on the impl itself (not the method!)
- if_m: the method in the iface
- if_substs: the substitutions used on the type of the iface
- self_ty: the self type of the impl
"]
/**
* Checks that a method from an impl/class conforms to the signature of
* the same method as declared in the iface.
*
* # Parameters
*
* - impl_m: the method in the impl
* - impl_tps: the type params declared on the impl itself (not the method!)
* - if_m: the method in the iface
* - if_substs: the substitutions used on the type of the iface
* - self_ty: the self type of the impl
*/
fn compare_impl_method(tcx: ty::ctxt, sp: span,
impl_m: ty::method, impl_tps: uint,
if_m: ty::method, if_substs: ty::substs,
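
The check described above is what forces an impl method's signature to agree with the declaration in the iface (today's trait). A trivial illustration of the rule in modern Rust; the trait and type names are invented:

    trait Greet {
        fn greet(&self, name: &str) -> String;
    }

    struct Robot;

    impl Greet for Robot {
        // The type parameters, argument types, and return type must all
        // conform to the trait declaration; changing the return type to
        // `usize` here, for example, is rejected by exactly this check.
        fn greet(&self, name: &str) -> String {
            format!("beep boop, {name}")
        }
    }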

@ -546,7 +546,7 @@ fn rollback_to<V:copy vid, T:copy>(
}
impl transaction_methods for infer_ctxt {
#[doc = "Execute `f` and commit the bindings if successful"]
/// Execute `f` and commit the bindings if successful
fn commit<T,E>(f: fn() -> result<T,E>) -> result<T,E> {
assert self.tvb.bindings.len() == 0u;
@ -562,7 +562,7 @@ impl transaction_methods for infer_ctxt {
ret r;
}
#[doc = "Execute `f`, unroll bindings on failure"]
/// Execute `f`, unroll bindings on failure
fn try<T,E>(f: fn() -> result<T,E>) -> result<T,E> {
let tvbl = self.tvb.bindings.len();
@ -580,7 +580,7 @@ impl transaction_methods for infer_ctxt {
ret r;
}
#[doc = "Execute `f` then unroll any bindings it creates"]
/// Execute `f` then unroll any bindings it creates
fn probe<T,E>(f: fn() -> result<T,E>) -> result<T,E> {
assert self.tvb.bindings.len() == 0u;
assert self.rb.bindings.len() == 0u;
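
`commit`, `try`, and `probe` differ only in what happens to the bindings recorded while the closure runs: keep them on success, unroll them on failure, or always unroll them. A toy modern-Rust sketch of that snapshot/rollback discipline; `try_commit` stands in for `try`, which is a keyword today, and none of these names are the rustc API:

    /// A toy log of inference bindings with snapshot/rollback.
    struct Bindings {
        log: Vec<(u32, i64)>, // (variable, value) pairs, in binding order
    }

    impl Bindings {
        /// Like `try`: keep the bindings on success, unroll them on failure.
        fn try_commit<T, E>(&mut self, f: impl FnOnce(&mut Self) -> Result<T, E>) -> Result<T, E> {
            let mark = self.log.len();
            let result = f(self);
            if result.is_err() {
                self.log.truncate(mark); // roll back to the snapshot
            }
            result
        }

        /// Like `probe`: run `f`, then unroll whatever it bound either way.
        fn probe<T, E>(&mut self, f: impl FnOnce(&mut Self) -> Result<T, E>) -> Result<T, E> {
            let mark = self.log.len();
            let result = f(self);
            self.log.truncate(mark);
            result
        }
    }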