AnalysisActions
AnalysisActions.anon_target
def AnalysisActions.anon_target(
rule: def(**kwargs: typing.Any) -> None,
attrs: dict[str, typing.Any],
) -> anon_target
An anonymous target is defined by the hash of its attributes, rather than its name. During analysis, rules can define and access the providers of anonymous targets before producing their own providers. Two distinct rules might ask for the same anonymous target, sharing the work it performs.
For more details see https://buck2.build/docs/rule_authors/anon_targets/
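A minimal sketch of the pattern, assuming a hypothetical anonymous rule `_my_anon_rule` and hypothetical attribute names:

def _impl(ctx: AnalysisContext) -> list[Provider]:
    # Two rules requesting the same (rule, attrs) pair share this analysis.
    anon = ctx.actions.anon_target(_my_anon_rule, {
        "src": ctx.attrs.src,  # hypothetical attribute
    })

    def _finish(providers) -> list[Provider]:
        # Build this rule's providers from the anonymous target's providers.
        return [DefaultInfo(default_output = providers[DefaultInfo].default_outputs[0])]

    # Returning the mapped promise defers provider production until the
    # anonymous target has been analyzed.
    return anon.promise.map(_finish)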
AnalysisActions.anon_targets
def AnalysisActions.anon_targets(
rules: list[(def(**kwargs: typing.Any) -> None, dict[str, typing.Any])] | tuple[(def(**kwargs: typing.Any) -> None, dict[str, typing.Any]), ...],
) -> anon_targets
Generate a series of anonymous targets.
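For example (hypothetical anonymous rules `_rule_a` and `_rule_b`; each entry pairs a rule with its attrs):

anon = ctx.actions.anon_targets([
    (_rule_a, {"src": ctx.attrs.src}),
    (_rule_b, {"src": ctx.attrs.src}),
])
# anon.promise joins the individual promises and resolves once every
# anonymous target has been analyzed.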
AnalysisActions.artifact_tag
def AnalysisActions.artifact_tag(
) -> artifact_tag
Allocate a new input tag. Used with the `dep_files` argument to `run`.
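A sketch of the usual dep-file pattern, assuming a compiler that emits a Makefile-style dep file (the command line and attribute names are hypothetical):

def _impl(ctx):
    headers_tag = ctx.actions.artifact_tag()
    dep_file = ctx.actions.declare_output("out.d")
    out = ctx.actions.declare_output("out.o")
    ctx.actions.run(
        cmd_args([
            "cc", "-c", ctx.attrs.src,
            "-o", out.as_output(),
            # Tag the dep file so Buck2 can discover which inputs were used.
            "-MD", "-MF", headers_tag.tag_artifacts(dep_file.as_output()),
        ]),
        category = "compile",
        dep_files = {"headers": headers_tag},
    )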
AnalysisActions.assert_short_path
def AnalysisActions.assert_short_path(
artifact: artifact,
short_path: str,
) -> artifact
Generate a promise artifact that has a short path accessible on it. The short path's correctness will be asserted during analysis.

TODO: we would prefer the API to be `ctx.actions.anon_target(xxx).artifact("foo", short_path=yyy)`, but we cannot support this until we can get access to the `AnalysisContext` without passing it into this method.
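For example (hypothetical short path; `p` is a promise artifact obtained from an anonymous target):

checked = ctx.actions.assert_short_path(p, short_path = "lib/libfoo.a")
# `checked` can be used where the short path is needed; analysis fails if
# the promise artifact's actual short path does not match.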
AnalysisActions.cas_artifact
def AnalysisActions.cas_artifact(
output: artifact | output_artifact | str,
digest: str,
use_case: str,
/,
*,
expires_after_timestamp: int,
is_executable: bool = False,
is_tree: bool = False,
is_directory: bool = False,
) -> artifact
Downloads a CAS artifact to an output.

- `digest`: must look like `SHA1:SIZE`
- `use_case`: your RE use case
- `expires_after_timestamp`: must be a UNIX timestamp. Your digest's TTL must exceed this timestamp. Your build will break once the digest expires, so make sure the expiry is long enough (preferably, in years).
- `is_executable`: indicates the resulting file should be marked with executable permissions
- `is_tree`: digest must point to a blob of type `RE.Tree`
- `is_directory`: digest must point to a blob of type `RE.Directory`
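A sketch with a placeholder digest and a hypothetical use case (the timestamp is an arbitrary far-future UNIX time):

model = ctx.actions.cas_artifact(
    "model.bin",
    "0000000000000000000000000000000000000000:4096",  # placeholder SHA1:SIZE
    "my-re-use-case",  # hypothetical RE use case
    expires_after_timestamp = 2000000000,
)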
AnalysisActions.copied_dir
def AnalysisActions.copied_dir(
output: artifact | output_artifact | str,
srcs: dict[str, artifact],
/,
) -> artifact
Returns an `artifact` which is a directory containing copied files. The `srcs` must be a dictionary of path (as string, relative to the result directory) to the bound `artifact`, which will be laid out in the directory.
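For example, assuming `tool` and `config` are bound artifacts already available to the rule:

out = ctx.actions.copied_dir("dist", {
    "bin/tool": tool,
    "etc/config.json": config,
})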
AnalysisActions.copy_dir
def AnalysisActions.copy_dir(
dest: artifact | output_artifact | str,
src: artifact,
/,
) -> artifact
Make a copy of a directory.
AnalysisActions.copy_file
def AnalysisActions.copy_file(
dest: artifact | output_artifact | str,
src: artifact,
/,
) -> artifact
Copies the source `artifact` to the destination (which can be a string representing a filename or an output `artifact`) and returns the output `artifact`. The copy works for files or directories.
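For example, assuming `ctx.attrs.src` is a source artifact:

renamed = ctx.actions.copy_file("renamed.txt", ctx.attrs.src)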
AnalysisActions.declare_output
def AnalysisActions.declare_output(
prefix: str,
filename: str = ...,
/,
*,
dir: bool = False,
) -> artifact
Returns an unbound `artifact`, representing where a file will go, which must be bound before analysis terminates. The usual way of binding an artifact is with `ctx.actions.run`. As an example:
my_output = ctx.actions.declare_output("output.o")
ctx.actions.run(["gcc", "-c", my_source, "-o", my_output.as_output()], category = "compile")
This snippet declares an output with the filename `output.o` (it will be located in the output directory for this target). Note the use of `as_output` to tag this artifact as being an output in the action. After binding the artifact you can subsequently use `my_output` as either an input for subsequent actions, or as the result in a provider.

Artifacts from a single target may not have the same name, so if you then want a second artifact also named `output.o` you need to supply a prefix, e.g. `ctx.actions.declare_output("directory", "output.o")`. The artifact will still report having the name `output.o`, but will be located at `directory/output.o`.
The `dir` argument should be set to `True` if the binding will be a directory.
AnalysisActions.digest_config
def AnalysisActions.digest_config() -> digest_config
Obtain this daemon's digest configuration. This allows rules to discover which digests the daemon can work with, e.g. which downloads it may be able to defer because their digests conform to its RE backend's expected digest format.
AnalysisActions.download_file
def AnalysisActions.download_file(
output: artifact | output_artifact | str,
url: str,
/,
*,
vpnless_url: None | str = None,
sha1: None | str = None,
sha256: None | str = None,
is_executable: bool = False,
is_deferrable: bool = False,
) -> artifact
Downloads a URL to an output (filename as string or output artifact). The file at the URL must have the given `sha1`, or the command will fail.

- `is_executable` (optional): indicates whether the resulting file should be marked with executable permissions
- `vpnless_url` (optional, Meta-internal): a URL from which this resource can be downloaded off VPN; this has the same restrictions as `url` above
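For example (hypothetical URL, placeholder digest):

archive = ctx.actions.download_file(
    "toolchain.tar.gz",
    "https://example.com/toolchain.tar.gz",
    sha1 = "0000000000000000000000000000000000000000",
)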
AnalysisActions.dynamic_output
def AnalysisActions.dynamic_output(
*,
dynamic: list[artifact] | tuple[artifact, ...],
inputs: list[artifact] | tuple[artifact, ...] = ...,
outputs: list[output_artifact] | tuple[output_artifact, ...],
f: typing.Callable[[typing.Any, dict[artifact, artifact_value], dict[artifact, artifact]], None],
) -> None
`dynamic_output` allows a rule to use information that was not available when the rule was first run at analysis time. Examples include things like Distributed ThinLTO (where the index file is created by another action) or OCaml builds (where the dependencies are created by `ocamldep`).
The arguments are:

- `dynamic`: a list of artifacts whose values will be available in the function. These will be built before the function is run.
- `inputs`: this parameter is ignored.
- `outputs`: a list of unbound artifacts (created with `declare_output`) which will be bound by the function.
- The function argument is given 3 arguments:
  - `ctx` (context): the same as that passed to the initial rule analysis.
  - `artifacts`: indexing with one of the artifacts from `dynamic` (example usage: `artifacts[artifact_from_dynamic]`) gives an artifact value containing the methods `read_string`, `read_lines`, and `read_json` to obtain the values from the disk in various formats. Anything too complex should be piped through a Python script for transformation to JSON.
  - `outputs`: indexing with one of the artifacts from the `dynamic_output`'s `outputs` (example usage: `outputs[artifact_from_dynamic_output_outputs]`) gives an unbound artifact. The function must use its `outputs` argument to bind output artifacts, rather than reusing artifacts from the `outputs` passed into `dynamic_output` directly.
- The function must call `ctx.actions` (probably `ctx.actions.run`) to bind all outputs. It can examine the values of the dynamic variables and depends on the inputs.
- The function will usually be a `def`, as a `lambda` in Starlark does not allow statements, making it quite underpowered. For full details see https://buck2.build/docs/rule_authors/dynamic_dependencies/.
Besides dynamic dependencies, there is a second use case for `dynamic_output`: say that you have some output artifact, and that the analysis to produce the action that outputs that artifact is expensive, i.e. takes a lot of CPU time; you would like to skip that work in builds that do not actually use that artifact.

This can be accomplished by putting the analysis for that artifact behind a `dynamic_output` with an empty `dynamic` list. The `dynamic_output`'s function will not be run unless one of the actions it outputs is actually requested as part of the build.
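A minimal sketch of the first use case: an earlier action produces a list whose contents are unknown at analysis time, and the dynamic function reads it to decide what to write (names are illustrative):

def _impl(ctx):
    # Built by an ordinary action; contents unknown until build time.
    deps_list = ctx.actions.declare_output("deps.txt")
    ctx.actions.run(
        cmd_args(["sh", "-c", 'printf "b\\na\\n" > "$1"', "--", deps_list.as_output()]),
        category = "deps",
    )

    out = ctx.actions.declare_output("out.txt")

    def f(ctx, artifacts, outputs):
        # The dynamic artifact has been built, so its value is readable here.
        lines = artifacts[deps_list].read_lines()
        # Outputs must be bound through the `outputs` dict, not `out` directly.
        ctx.actions.write(outputs[out], "\n".join(sorted(lines)))

    ctx.actions.dynamic_output(
        dynamic = [deps_list],
        inputs = [],
        outputs = [out.as_output()],
        f = f,
    )
    return [DefaultInfo(default_output = out)]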
AnalysisActions.dynamic_output_new
def AnalysisActions.dynamic_output_new(
dynamic_actions: DynamicAction,
/,
) -> DynamicValue
New version of `dynamic_output`.

This is work in progress, and will eventually replace the old `dynamic_output`.
AnalysisActions.run
def AnalysisActions.run(
arguments: CellPath | artifact | cell_root | cmd_args | label | output_artifact | project_root | resolved_macro | str | tagged_command_line | target_label | transitive_set_args_projection | write_json_cli_args | list | RunInfo,
/,
*,
category: str,
identifier: None | str = None,
env: dict[str, CellPath | artifact | cell_root | cmd_args | label | output_artifact | project_root | resolved_macro | str | tagged_command_line | target_label | transitive_set_args_projection | write_json_cli_args | RunInfo] = ...,
local_only: bool = False,
prefer_local: bool = False,
prefer_remote: bool = False,
low_pass_filter: bool = True,
always_print_stderr: bool = False,
weight: int = ...,
weight_percentage: int = ...,
dep_files: dict[str, artifact_tag] = ...,
metadata_env_var: str = ...,
metadata_path: str = ...,
no_outputs_cleanup: bool = False,
allow_cache_upload: bool = False,
allow_dep_file_cache_upload: bool = False,
force_full_hybrid_if_capable: bool = False,
exe: RunInfo | WorkerRunInfo = ...,
unique_input_inodes: bool = False,
error_handler: typing.Callable = ...,
remote_execution_dependencies: list[dict[str, str]] = [],
) -> None
Run a command to produce one or more artifacts.
- `arguments`: must be of type `cmd_args`, or a type convertible to such (such as a list of strings and artifacts). See below for a detailed description of artifact arguments.
- `env`: environment variables to set when the command is executed.
- `category`: together with `identifier`, identifies the action in Buck2's event stream; the pair must be unique for a given target.
- `weight`: notes how heavy the command is; it will typically be set to a higher value to indicate that fewer such commands should be run in parallel (if running locally).
- `no_outputs_cleanup`: if this flag is set, Buck2 won't clean the outputs of a previous build that might be present on disk; in that case, the command from `arguments` should be responsible for the cleanup (useful, for example, when an action supports incremental mode and its outputs are based on the result of a previous build).
- `metadata_env_var` and `metadata_path` should be used together: both set or both unset.
  - `metadata_path`: defines a path relative to the result directory for a file with action metadata, which will be created right before the command is run.
  - Metadata contains the path relative to the Buck2 project root and a hash digest for every action input (this excludes symlinks as they could be resolved by a user script if needed). The resolved path relative to the Buck2 project for the metadata file will be passed to the command from `arguments` via the environment variable whose name is set by `metadata_env_var`.
  - Both `metadata_env_var` and `metadata_path` are useful when making actions behave in an incremental manner (for details, see Incremental Actions).
- The `prefer_local`, `prefer_remote` and `local_only` options allow selecting where the action should run if the executor selected for this target is a hybrid executor.
  - All these options disable concurrent execution: the action will run on the preferred platform first (concurrent execution only happens with a "full" hybrid executor).
  - Execution may be retried on the "non-preferred" platform if it fails due to a transient error, except for `local_only`, which does not allow this.
  - If the executor selected is a remote-only executor and you use `local_only`, that's an error. The other options will not raise errors.
  - Setting more than one of these options is an error.
  - These flags behave the same way as the equivalent `--prefer-remote`, `--prefer-local` and `--local-only` CLI flags. The CLI flags take precedence.
  - The `force_full_hybrid_if_capable` option overrides the `use_limited_hybrid` setting. The options listed above take precedence if set.
- `remote_execution_dependencies`: a list of dependencies which is passed to Remote Execution. Each dependency is a dictionary with the following keys:
  - `smc_tier`: name of the SMC tier to call by the RE Scheduler.
  - `id`: name of the dependency.
When actions execute, they'll do so from the root of the repository. As they execute, actions have exclusive access to their output directory.
Actions also get exclusive access to a "scratch" path that is exposed via the environment variable `BUCK_SCRATCH_PATH`. This path is expressed as a path relative to the working directory (i.e. relative to the project). This path is guaranteed to exist when the action executes.

When actions run locally, the scratch path is also used as the `TMPDIR`.
Input and output artifacts
A run action consumes an arbitrary number of input artifacts and produces at least one output artifact.

Both input and output artifacts can be passed in:

- the positional `arguments` parameter
- the `env` dict

Input artifacts must already be bound prior to this call, meaning these artifacts must be either:

- source artifacts
- coming from dependencies
- declared locally and bound to another action (passed to `.as_output()`) before this `run()` call
- or created already bound with some simple action like `write()`

Output artifacts must be declared locally (within the same analysis), and must not be already bound. Output artifacts become "bound" after this call.
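A minimal example that binds a declared output:

def _impl(ctx):
    out = ctx.actions.declare_output("out.txt")
    ctx.actions.run(
        cmd_args(["sh", "-c", 'echo hello > "$1"', "--", out.as_output()]),
        category = "demo",
    )
    return [DefaultInfo(default_output = out)]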
AnalysisActions.symlink_file
def AnalysisActions.symlink_file(
dest: artifact | output_artifact | str,
src: artifact,
/,
) -> artifact
Creates a symlink to the source `artifact` at the destination (which can be a string representing a filename or an output `artifact`) and returns the output `artifact`. The symlink works for files or directories.
AnalysisActions.symlinked_dir
def AnalysisActions.symlinked_dir(
output: artifact | output_artifact | str,
srcs: dict[str, artifact],
/,
) -> artifact
Returns an `artifact` that is a directory containing symlinks. The `srcs` must be a dictionary of path (as string, relative to the result directory) to bound `artifact`, which will be laid out in the directory.
AnalysisActions.tset
def AnalysisActions.tset(
definition: transitive_set_definition,
/,
*,
value = ...,
children: typing.Iterable = ...,
) -> transitive_set
Creates a new transitive set. For details, see https://buck2.build/docs/rule_authors/transitive_sets/.
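A sketch, assuming a transitive set definition and a hypothetical provider `MyInfo` whose `files` field carries one:

MyFileSet = transitive_set()

def _impl(ctx):
    files = ctx.actions.tset(
        MyFileSet,
        value = ctx.attrs.src,
        children = [dep[MyInfo].files for dep in ctx.attrs.deps],
    )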
AnalysisActions.write
def AnalysisActions.write(
output: artifact | output_artifact | str,
content: CellPath | artifact | cell_root | cmd_args | label | output_artifact | project_root | resolved_macro | str | tagged_command_line | target_label | transitive_set_args_projection | write_json_cli_args | list | RunInfo,
/,
*,
is_executable: bool = False,
allow_args: bool = False,
with_inputs: bool = False,
absolute: bool = False,
) -> artifact | (artifact, list[artifact])
Returns an `artifact` whose contents are `content`.

- `is_executable` (optional): indicates whether the resulting file should be marked with executable permissions
- `allow_args` (optional): must be set to `True` if you want to write parameter arguments to the file (in particular, macros that write to file)
  - If it is true, the result will be a pair of the `artifact` containing `content` and a list of artifact values that were written by macros, which should be used in hidden fields or similar
- `absolute` (optional): if set, this action will produce absolute paths in its output when rendering artifact paths. You generally shouldn't use this if you plan to use this action as the input for anything else, as this would effectively result in losing all shared caching.

The content is often a string, but can be any `ArgLike` value. This is occasionally useful for generating scripts to run as part of another action. `cmd_args` in the content are newline separated unless another delimiter is explicitly specified.
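For example, writing a small executable script (each list element becomes a line):

script = ctx.actions.write(
    "run.sh",
    ["#!/bin/sh", "echo hello"],
    is_executable = True,
)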
AnalysisActions.write_json
def AnalysisActions.write_json(
output: artifact | output_artifact | str,
content: CellPath | None | artifact | bool | cell_root | cmd_args | enum | int | label | output_artifact | project_root | record | resolved_macro | str | tagged_command_line | tagged_value | target_label | transitive_set_args_projection | transitive_set_json_projection | write_json_cli_args | list | tuple | dict | struct(..) | RunInfo | provider,
/,
*,
with_inputs: bool = False,
pretty: bool = False,
absolute: bool = False,
) -> artifact | write_json_cli_args
Returns an `artifact` whose contents are `content` written as a JSON value.

- `output`: can be a string, or an existing artifact created with `declare_output`
- `content`: must be composed of the basic json types (boolean, number, string, list/tuple, dictionary) plus artifacts and command lines
  - An artifact will be written as a string containing the path
  - A command line will be written as a list of strings, unless `joined=True` is set, in which case it will be a string
- If you pass `with_inputs = True`, you'll get back a `cmd_args` that expands to the JSON file but carries all the underlying inputs as dependencies (so you don't have to use, for example, `hidden` for them to be added to an action that already receives the JSON file)
- `pretty` (optional): write formatted JSON (defaults to `False`)
- `absolute` (optional): if set, this action will produce absolute paths in its output when rendering artifact paths. You generally shouldn't use this if you plan to use this action as the input for anything else, as this would effectively result in losing all shared caching. (defaults to `False`)
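For example, a sketch that writes a manifest and keeps the referenced inputs attached:

manifest = ctx.actions.write_json(
    "manifest.json",
    {"name": ctx.label.name, "srcs": ctx.attrs.srcs},
    with_inputs = True,
    pretty = True,
)
# `manifest` can be passed directly to a run() action; the artifacts referenced
# in the JSON travel with it as inputs.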