globals

ActionErrorCtx

ActionErrorCtx: type

ActionErrorLocation

ActionErrorLocation: type

ActionSubError

ActionSubError: type

AnalysisActions

AnalysisActions: type

AnalysisContext

AnalysisContext: type

AnonTarget

AnonTarget: type

AnonTargets

AnonTargets: type

Artifact

Artifact: type

ArtifactTag

ArtifactTag: type

ArtifactValue

ArtifactValue: type

Attr

Attr: type

BxlActions

BxlActions: type

BxlBuildResult

BxlBuildResult: type

BxlContext

BxlContext: type

BxlFilesystem

BxlFilesystem: type

CellPath

CellPath: type

CellRoot

CellRoot: type

CommandExecutorConfig

def CommandExecutorConfig(
    *,
    local_enabled: bool,
    remote_enabled: bool,
    remote_cache_enabled: None | bool = None,
    remote_dep_file_cache_enabled: bool = False,
    remote_execution_properties = None,
    remote_execution_action_key = None,
    remote_execution_max_input_files_mebibytes: None | int = None,
    remote_execution_queue_time_threshold_s: None | int = None,
    remote_execution_use_case = None,
    use_limited_hybrid: bool = False,
    allow_limited_hybrid_fallbacks: bool = False,
    allow_hybrid_fallbacks_on_failure: bool = False,
    use_windows_path_separators: bool = False,
    use_persistent_workers: bool = False,
    allow_cache_uploads: bool = False,
    max_cache_upload_mebibytes: None | int = None,
    experimental_low_pass_filter: bool = False,
    remote_output_paths: None | str = None,
    remote_execution_dependencies: list[dict[str, str]] = []
) -> command_executor_config

Contains configurations for how actions should be executed

.type attribute

Produces "command_executor_config"

Details

  • local_enabled : Whether to use local execution for this execution platform. If both remote_enabled and local_enabled are True, we will use the hybrid executor
  • remote_enabled: Whether to use remote execution for this execution platform
  • remote_cache_enabled: Whether to query RE caches
  • remote_execution_properties: Properties for remote execution for this platform
  • remote_execution_action_key: A component to inject into the action key. This should typically be used to inject variability into the action key so that it's different across e.g. build modes (RE uses the action key for things like expected memory utilization)
  • remote_execution_max_input_files_mebibytes: The maximum input file size (in mebibytes) that remote execution can support
  • remote_execution_queue_time_threshold_s: The maximum time in seconds we are willing to wait in the RE queue for remote execution to start running our action
  • remote_execution_use_case: The use case to use when communicating with RE
  • use_limited_hybrid: Whether to use the limited hybrid executor
  • allow_limited_hybrid_fallbacks: Whether to allow fallbacks
  • allow_hybrid_fallbacks_on_failure: Whether to allow fallbacks when the result is failure (i.e. the command failed on the primary, but the infra worked)
  • use_windows_path_separators: Whether to use Windows path separators in command line arguments
  • use_persistent_workers: Whether to use persistent workers for local execution if they are available
  • allow_cache_uploads: Whether to upload local actions to the RE cache
  • max_cache_upload_mebibytes: Maximum size to upload in cache uploads
  • experimental_low_pass_filter: Whether to use the experimental low pass filter
  • remote_output_paths: How to express output paths to RE
  • remote_execution_dependencies: Dependencies for remote execution for this platform
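
For illustration only, a minimal local-only configuration might be constructed like this, following the signature above (the variable name is hypothetical; host_info() is the global documented further down this page):

local_only_config = CommandExecutorConfig(
    local_enabled = True,
    remote_enabled = False,
    use_windows_path_separators = host_info().os.is_windows,
    allow_cache_uploads = False,
)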

ConfigurationInfo

def ConfigurationInfo(
    *,
    constraints: dict[target_label, ConstraintValueInfo],
    values: dict[str, str]
) -> ConfigurationInfo

Provider that signals that a rule contains configuration info. This is used both as part of defining configurations (platform(), constraint_value()) and defining whether a target "matches" a configuration or not (config_setting(), constraint_value())

.type attribute

Produces "ConfigurationInfo"

Details

Provides a number of fields that can be accessed:

  • constraints: dict[target_label, ConstraintValueInfo] - field

  • values: dict[str, str] - field


ConfiguredProvidersLabel

ConfiguredProvidersLabel: type

ConfiguredTargetLabel

ConfiguredTargetLabel: type

ConstraintSettingInfo

def ConstraintSettingInfo(*, label: target_label) -> ConstraintSettingInfo

Provider that signals that a target can be used as a constraint key. This is the only provider returned by a constraint_setting() target.

.type attribute

Produces "ConstraintSettingInfo"

Details

Provides a number of fields that can be accessed:

  • label: target_label - field

ConstraintValueInfo

def ConstraintValueInfo(
    *,
    setting: ConstraintSettingInfo,
    label: target_label
) -> ConstraintValueInfo

Provider that signals that a target can be used as a constraint value. This is the only provider returned by a constraint_value() target.

.type attribute

Produces "ConstraintValueInfo"

Details

Provides a number of fields that can be accessed:

  • setting: ConstraintSettingInfo - field

  • label: target_label - field
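
As a hedged sketch of how these two providers fit together in a rule implementation (the rule and attribute names are hypothetical, and ctx.label.raw_target() is assumed to yield the required target_label):

def _my_constraint_value_impl(ctx):
    # The setting attribute points at a constraint_setting()-style target.
    setting = ctx.attrs.setting[ConstraintSettingInfo]
    return [
        DefaultInfo(),
        ConstraintValueInfo(setting = setting, label = ctx.label.raw_target()),
    ]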


DefaultInfo

def DefaultInfo(
    default_output = None,
    default_outputs = None,
    other_outputs = [],
    sub_targets: dict[str, typing.Any] = {}
) -> DefaultInfo

A provider that all rules' implementations must return

.type attribute

Produces "DefaultInfo"

Details

In many simple cases, this can be inferred for the user.

Example of a rule's implementation function and how these fields are used by the framework:

# //foo_binary.bzl
def impl(ctx):
    ctx.actions.run([ctx.attrs._cc[RunInfo], "-o", ctx.attrs.out.as_output()] + ctx.attrs.srcs)
    ctx.actions.run([
        ctx.attrs._strip[RunInfo],
        "--binary",
        ctx.attrs.out,
        "--stripped-out",
        ctx.attrs.stripped.as_output(),
        "--debug-symbols-out",
        ctx.attrs.debug_info.as_output(),
    ])
    return [
        DefaultInfo(
            sub_targets = {
                "stripped": [
                    DefaultInfo(default_outputs = [ctx.attrs.stripped, ctx.attrs.debug_info]),
                ],
            },
            default_output = ctx.attrs.out,
        ),
    ]

foo_binary = rule(
    impl = impl,
    attrs = {
        "srcs": attrs.list(attrs.source()),
        "out": attrs.output(),
        "stripped": attrs.output(),
        "debug_info": attrs.output(),
        "_cc": attrs.dep(default = "//tools:cc", providers = [RunInfo]),
        "_strip": attrs.dep(default = "//tools:strip", providers = [RunInfo]),
    },
)

def foo_binary_wrapper(name, srcs):
    foo_binary(
        name = name,
        srcs = srcs,
        out = name,
        stripped = name + ".stripped",
        debug_info = name + ".debug_info",
    )

# //subdir/BUCK
load("//:foo_binary.bzl", "foo_binary_wrapper")

genrule(name = "gen_stuff", ...., default_outs = ["foo.cpp"])

# ":gen_stuff" pulls the default_outputs for //subdir:gen_stuff
foo_binary_wrapper(name = "foo", srcs = glob(["*.cpp"]) + [":gen_stuff"])

# Builds just 'foo' binary. The strip command is never invoked.
$ buck build //subdir:foo

# builds the 'foo' binary, because it is needed by the 'strip' command. Ensures that
# both the stripped binary and the debug symbols are built.
$ buck build //subdir:foo[stripped]

Provides a number of fields that can be accessed:

  • sub_targets: dict[str, provider_collection] - A mapping of names to ProviderCollections. The keys are used when resolving the ProviderName portion of a ProvidersLabel in order to access the providers for a subtarget, such as when doing buck2 build cell//foo:bar[baz]. Just like any ProviderCollection, this collection must include at least a DefaultInfo provider. The subtargets can have their own subtargets as well, which can be accessed by chaining them, e.g.: buck2 build cell//foo:bar[baz][qux].

  • default_outputs: list[artifact] - A list of Artifacts that are built by default if this rule is requested explicitly, or depended on as a "source".

  • other_outputs: list[artifact] - A list of ArtifactTraversable. The underlying Artifacts they define will be built by default if this rule is requested, but not when it's depended on as a "source". ArtifactTraversable can be an Artifact (which yields itself), or cmd_args, which expand to all their inputs.


Dependency

Dependency: type

ExecutionPlatformInfo

def ExecutionPlatformInfo(
    *,
    label,
    configuration,
    executor_config
) -> ExecutionPlatformInfo

Provider that signals that a target represents an execution platform.

Provides a number of fields that can be accessed:

  • label: target_label - label of the defining rule, used in informative messages

  • configuration: ConfigurationInfo - The configuration of the execution platform

  • executor_config: command_executor_config - The executor config


ExecutionPlatformRegistrationInfo

def ExecutionPlatformRegistrationInfo(
    *,
    platforms,
    fallback = _
) -> ExecutionPlatformRegistrationInfo

Provider that gives the list of all execution platforms available for this build.

Provides a number of fields that can be accessed:

  • platforms: list[ExecutionPlatformInfo] - field

  • fallback: typing.Any - field
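
A hedged sketch of a rule returning the registration provider, assembled from the constructors documented above (the rule name is illustrative, and ctx.label.raw_target() is assumed to yield the required target_label):

def _execution_platforms_impl(ctx):
    local_platform = ExecutionPlatformInfo(
        label = ctx.label.raw_target(),
        configuration = ConfigurationInfo(constraints = {}, values = {}),
        executor_config = CommandExecutorConfig(local_enabled = True, remote_enabled = False),
    )
    return [
        DefaultInfo(),
        ExecutionPlatformRegistrationInfo(platforms = [local_platform]),
    ]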


ExternalRunnerTestInfo

def ExternalRunnerTestInfo(
    type,
    command = None,
    env = None,
    labels = None,
    contacts = None,
    use_project_relative_paths = None,
    run_from_project_root = None,
    default_executor = None,
    executor_overrides = None,
    local_resources = None
) -> ExternalRunnerTestInfo

Provider that signals that a rule can be tested using an external runner. This is the Buck1-compatible API for tests.

.type attribute

Produces "ExternalRunnerTestInfo"

Details

Provides a number of fields that can be accessed:

  • test_type: str - A Starlark value representing the type of this test.

  • command: list[typing.Any] - A Starlark value representing the command for this test. The external test runner is what gives meaning to this command. This is of type list[str | ArgLike].

  • env: dict[str, typing.Any] - A Starlark value representing the environment for this test. Here again, the external test runner is what gives this meaning. This is of type dict[str, ArgLike].

  • labels: list[str] - A starlark value representing the labels for this test.

  • contacts: list[str] - A starlark value representing the contacts for this test. This is largely expected to be an oncall, though it's not validated in any way.

  • use_project_relative_paths: bool - Whether this test should use relative paths

  • run_from_project_root: bool - Whether this test should run from the project root, as opposed to the cell root. Defaults to True.

  • default_executor: command_executor_config - Default executor to use to run tests. If none is passed we will default to the execution platform.

  • executor_overrides: dict[str, command_executor_config] - Executors that Tpx can use to override the default executor.

  • local_resources: dict[str, None | label] - Mapping from a local resource type to a target with a corresponding provider. Required types are passed from test runner. If the value for a corresponding type is omitted it means local resource should be ignored when executing tests even if those are passed as required from test runner.
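
For illustration, a test rule might return the provider like this (the attribute names, labels, and test type are hypothetical):

def _my_test_impl(ctx):
    return [
        DefaultInfo(),
        ExternalRunnerTestInfo(
            type = "custom",
            command = [ctx.attrs.runner[RunInfo], "--tests", ctx.attrs.src],
            env = {"TEST_LOG_LEVEL": "debug"},
            labels = ["unit"],
            contacts = ["my_team"],
        ),
    ]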


InstallInfo

def InstallInfo(installer: label, files: dict[str, typing.Any]) -> InstallInfo

A provider that can be constructed and have its fields accessed. Returned by rules.

Provides a number of fields that can be accessed:

  • installer: label - field

  • files: dict[str, artifact] - field


Label

Label: type

LocalResourceInfo

def LocalResourceInfo(
    *,
    setup,
    resource_env_vars,
    setup_timeout_seconds = None
) -> LocalResourceInfo

A provider that can be constructed and have its fields accessed. Returned by rules.

.type attribute

Produces "LocalResourceInfo"

Details

Provides a number of fields that can be accessed:

  • setup: cmd_args - Command to run to initialize a local resource. Running this command writes a JSON to stdout. This JSON represents a pool of local resources which are ready to be used. An example JSON would be: { "pid": 42, "resources": [ {"socket_address": "foo:1"}, {"socket_address": "bar:2"} ] }. Here "pid" maps to the PID of a process which should be sent SIGTERM to release the pool of resources when they are no longer needed, and "resources" maps to the pool of resources. When a local resource from this particular pool is needed for an execution command, a single entry will be reserved from the pool (for example {"socket_address": "bar:2"}), and an environment variable, whose name is resolved using the mapping in the resource_env_vars field and the "socket_address" key, will be added to the execution command.

  • resource_env_vars: dict[str, str] - Mapping from environment variable (appended to an execution command which is dependent on this local resource) to keys in setup command JSON output.

  • setup_timeout_seconds: None | float - Timeout in seconds for setup command.
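
A hedged sketch of constructing this provider, assuming a hypothetical broker attribute pointing to a RunInfo-providing setup binary whose JSON output contains a "socket_address" key:

def _my_resource_broker_impl(ctx):
    return [
        DefaultInfo(),
        LocalResourceInfo(
            setup = cmd_args(ctx.attrs.broker[RunInfo]),
            resource_env_vars = {"SOCKET_ADDRESS": "socket_address"},
            setup_timeout_seconds = 30,
        ),
    ]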


OutputArtifact

OutputArtifact: type

PlatformInfo

def PlatformInfo(
    *,
    label: str,
    configuration: ConfigurationInfo
) -> PlatformInfo

A provider that can be constructed and have its fields accessed. Returned by rules.

.type attribute

Produces "PlatformInfo"

Details

Provides a number of fields that can be accessed:

  • label: str - field

  • configuration: ConfigurationInfo - field
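
A hedged sketch of a platform-style rule assembling PlatformInfo from constraint values provided by its deps (the attribute names are hypothetical, and ctx.label.raw_target() is assumed to be available):

def _my_platform_impl(ctx):
    constraints = {}
    for dep in ctx.attrs.constraint_values:
        value = dep[ConstraintValueInfo]
        constraints[value.setting.label] = value
    return [
        DefaultInfo(),
        PlatformInfo(
            label = str(ctx.label.raw_target()),
            configuration = ConfigurationInfo(constraints = constraints, values = {}),
        ),
    ]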


Promise

Promise: type

Provider

Provider: type

ProviderCollection

ProviderCollection: type

ProvidersLabel

ProvidersLabel: type

ResolvedStringWithMacros

ResolvedStringWithMacros: type

RunInfo

def RunInfo(args = []) -> RunInfo

Provider that signals that a rule is runnable

.type attribute

Produces "RunInfo"

Details

Provides a number of fields that can be accessed:

  • args: cmd_args - The command to run, stored as CommandLine
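
A minimal sketch of a runnable rule (the main attribute is hypothetical and assumed to be a source artifact):

def _my_tool_impl(ctx):
    return [
        DefaultInfo(default_output = ctx.attrs.main),
        RunInfo(args = cmd_args(ctx.attrs.main)),
    ]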

Select

Select: type

TargetLabel

TargetLabel: type

TemplatePlaceholderInfo

def TemplatePlaceholderInfo(
    unkeyed_variables = {},
    keyed_variables = {}
) -> TemplatePlaceholderInfo

A provider that is used for expansions in string attribute templates

.type attribute

Produces "TemplatePlaceholderInfo"

Details

String attribute templates allow two types of user-defined placeholders, "unkeyed placeholders" like $(CXX) or $(aapt) and "keyed placeholders" that include a target key like $(cxxppflags //some:target). The expansion of each of these types is based on the TemplatePlaceholderInfo providers.

"keyed placeholders" are used for the form $(<key> <target>) or $(<key> <target> <arg>). In both cases the lookup will expect a TemplatePlaceholderInfo in the providers of <target>. It will then lookup <key> in the keyed_variables (call this the value). There are then four valid possibilities:

  1. no-arg placeholder, an arg-like value: resolve to value
  2. no-arg placeholder, a dictionary value: resolve to value["DEFAULT"]
  3. arg placeholder, a non-dictionary value: this is an error
  4. arg placeholder, a dictionary value: resolve to value[<arg>]

"unkeyed placeholders" are resolved by matching to any of the deps of the target. $(CXX) will resolve to the "CXX" value in any dep's TemplateProviderInfo.unkeyed_variables

Fields:

  • unkeyed_variables: A mapping of names to arg-like values. These are used for "unkeyed placeholder" expansion.
  • keyed_variables: A mapping of names to arg-like values or dictionary of string to arg-like values. These are used for "keyed placeholder" expansion.

Provides a number of fields that can be accessed:

  • unkeyed_variables: dict[str, typing.Any] - field

  • keyed_variables: dict[str, typing.Any] - field
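
A hedged sketch of a toolchain-style target providing both kinds of variables; a string attribute on a dependent target could then contain "$(CXX)" or "$(cxxppflags //toolchains:cxx)". The rule, attribute, and variable names are illustrative:

def _cxx_placeholders_impl(ctx):
    return [
        DefaultInfo(),
        TemplatePlaceholderInfo(
            unkeyed_variables = {"CXX": ctx.attrs.compiler[RunInfo]},
            keyed_variables = {
                "cxxppflags": {"DEFAULT": cmd_args("-DNDEBUG")},
            },
        ),
    ]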


TransitiveSet

TransitiveSet: type

TransitiveSetArgsProjection

TransitiveSetArgsProjection: type

TransitiveSetArgsProjectionIterator

TransitiveSetArgsProjectionIterator: type

TransitiveSetDefinition

TransitiveSetDefinition: type

TransitiveSetIterator

TransitiveSetIterator: type

TransitiveSetJsonProjection

TransitiveSetJsonProjection: type

WorkerInfo

def WorkerInfo(exe = [], *, concurrency: None | int = None) -> WorkerInfo

Provider that signals that a rule is a worker tool

.type attribute

Produces "WorkerInfo"

Details

Provides a number of fields that can be accessed:

  • exe: cmd_args - field

  • concurrency: None | int - field


WorkerRunInfo

def WorkerRunInfo(*, worker: WorkerInfo, exe = []) -> WorkerRunInfo

Provider that signals that a rule can run using a worker

.type attribute

Produces "WorkerRunInfo"

Details

Provides a number of fields that can be accessed:

  • worker: WorkerInfo - field

  • exe: cmd_args - field


__buck2_builtins__

__buck2_builtins__: struct(..)

__internal__

__internal__: struct(..)

anon_rule

def anon_rule(
    *,
    impl: typing.Callable[[typing.Any], list[typing.Any]],
    attrs: dict[str, attribute],
    doc: str = "",
    artifact_promise_mappings: dict[str, typing.Callable[[typing.Any], list[typing.Any]]]
) -> "function"

Define an anon rule, similar to how a normal rule is defined, except with an extra artifact_promise_mappings field. This is a dict where the keys are the string name of the artifact, and the values are the callable functions that produce the artifact. This is only intended to be used with anon targets.


attrs

attrs: attrs

bxl

bxl: struct(..)

bxl_main

def bxl_main(
    *,
    impl: typing.Callable,
    cli_args: dict[str, bxl_cli_args],
    doc: str = ""
)

cli_args

cli_args: struct(..)

cmd_args

def cmd_args(
    *args: artifact | cell_root | cmd_args | label | label_relative_path | output_artifact | project_root | resolved_macro | str | tagged_command_line | target_label | transitive_set_args_projection | write_json_cli_args | list[typing.Any] | RunInfo,
    hidden = _,
    delimiter: str = _,
    format: str = _,
    prepend: str = _,
    quote: str = _,
    ignore_artifacts: bool = False,
    absolute_prefix: str = _,
    absolute_suffix: str = _,
    parent: int = 0,
    relative_to: artifact | cell_root | project_root | (artifact | cell_root | project_root, int) = _,
    replace_regex: list[(buck_regex | str, str)] | (buck_regex | str, str) = _
) -> cmd_args

The cmd_args type is created by this function and is consumed by ctx.actions.run. The type is a mutable collection of strings and artifact values. In general, command lines, artifacts, strings, RunInfo and lists thereof can be added to or used to construct a cmd_args value.

.type attribute

Produces "cmd_args"

Details

The arguments are:

  • *args - a list of things to add to the command line, each of which must be coercible to a command line. Further items can be added with cmd.add.
  • format - a string that provides a format to apply to the argument. for example, cmd_args(x, format="--args={}") would prepend --args= before x, or if x was a list, before each element in x.
  • delimiter - added between arguments to join them together. For example, cmd_args(["--args=",x], delimiter="") would produce a single argument to the underlying tool.
  • prepend - added as a separate argument before each argument.
  • quote - indicates whether quoting is to be applied to each argument. The only current valid value is "shell".
  • ignore_artifacts - if True, artifact paths are used, but the artifacts are not pulled.
  • hidden - artifacts not present on the command line, but added as dependencies.
  • absolute_prefix and absolute_suffix - added to the start and end of each artifact.
  • parent - for all the artifacts use their parent directory.
  • relative_to - make all artifact paths relative to a given location.
  • replace_regex - replaces arguments with a regular expression.
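
Putting a few of these together, a hedged sketch (the attribute names are hypothetical):

# Each source becomes its own "--input=<path>" argument.
inputs = cmd_args(ctx.attrs.srcs, format = "--input={}")
# Joined into a single "-Wl,<path>" argument.
link_script = cmd_args(["-Wl,", ctx.attrs.linker_script], delimiter = "")
tool = cmd_args(ctx.attrs.compiler[RunInfo], inputs, link_script)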

ignore_artifacts

ignore_artifacts=True makes the cmd_args have no declared dependencies. This allows you to reference the path of an artifact without introducing a dependency on it.

As an example of where this can be useful, consider passing a dependency that is only accessed at runtime, but whose path must be baked into the binary:

resources = cmd_args(resource_file, format = "-DFOO={}", ignore_artifacts = True)
ctx.actions.run(cmd_args("gcc", "-c", source_file, resources), category = "compile")

Note that ignore_artifacts sets all artifacts referenced by this cmd_args to be ignored, including those added afterwards, so generally create a special cmd_args and scope it quite tightly.

If you actually do use the inputs referenced by this command, you will either error out due to missing dependencies (if running actions remotely) or have untracked dependencies that will fail to rebuild when they should.

hidden

Things to add to the command line which do not show up but are added as dependencies. The values can be anything normally permissible to pass to add.

Typically used if the command you are running implicitly depends on files that are not passed on the command line, e.g. headers in the case of a C compilation.
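
For example (a sketch; the attribute layout is hypothetical), headers that the compiler reads via include paths but that never appear as arguments:

compile = cmd_args(
    ctx.attrs.compiler[RunInfo],
    "-c",
    ctx.attrs.src,
    hidden = ctx.attrs.headers,
)
ctx.actions.run(compile, category = "compile")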

absolute_prefix and absolute_suffix

Adds a prefix to the start or a suffix to the end of every artifact path.

Prefix is often used if you have a $ROOT variable in a shell script and want to use it to make files absolute.

Suffix is often used in conjunction with absolute_prefix to wrap artifacts in function calls.

cmd_args(script, absolute_prefix = "$ROOT/")
cmd_args(script, absolute_prefix = "call", absolute_suffix = ")")

parent

For all the artifacts, use their parent directory.

Typically used when the file name is passed one way, and the directory another, e.g. cmd_args(artifact, format="-L{}", parent=1).

relative_to=dir or relative_to=(dir, parent)

Make all artifact paths relative to a given location. Typically used when the command you are running changes directory.

By default, the paths are relative to the artifacts themselves (equivalent to parent = 0). Use parent to make the paths relative to an ancestor directory. For example, parent = 1 would make all paths relative to the containing directories of any artifacts in the cmd_args.

dir = symlinked_dir(...)
script = [
    cmd_args(dir, format = "cd {}", relative_to=dir),
]

replace_regex

Replaces all parts matching pattern regular expression (or regular expressions) in each argument with replacement strings.
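
For example (a sketch; the attribute is hypothetical), stripping an extension from every argument:

# "foo/bar.class" becomes "foo/bar"
class_names = cmd_args(ctx.attrs.class_files, replace_regex = ("\\.class$", ""))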


ctarget_set

def ctarget_set() -> target_set

Creates an empty target set for configured nodes.

Sample usage:

def _impl_ctarget_set(ctx):
    targets = ctarget_set()
    ctx.output.print(type(targets))
    ctx.output.print(len(targets))

dedupe

def dedupe(val, /)

Remove duplicates in a list. Uses identity of value (pointer equality), rather than value equality. In many cases you should use a transitive set instead.


fail_no_stacktrace

def fail_no_stacktrace(*args) -> None

file_set

def file_set() -> file_set

Creates an empty file set for configured nodes.

Sample usage:

def _impl_file_set(ctx):
    files = file_set()
    ctx.output.print(type(files))
    ctx.output.print(len(files))

get_base_path

def get_base_path() -> str

get_base_path() can only be called in buildfiles (e.g. BUCK files) or PACKAGE files, and returns the name of the package. E.g. inside foo//bar/baz/BUCK the output will be bar/baz. E.g. inside foo//bar/PACKAGE the output will be bar.

This function is identical to package_name.


get_cell_name

def get_cell_name() -> str

get_cell_name() can be called from either a BUCK file or a .bzl file, and returns the name of the cell where the BUCK file that started the call lives.

For example, inside foo//bar/baz/BUCK the output will be foo. If that BUCK file does a load("hello//world.bzl", "something") then the result in that .bzl file will also be foo.


get_path_without_materialization

def get_path_without_materialization(
    this: artifact,
    ctx: bxl_ctx,
    /,
    *,
    abs: bool = False
) -> str

The output path of an artifact-like (source, build, declared). Takes an optional boolean to print the absolute or relative path. Note that this method returns an artifact path without asking for the artifact to be materialized (i.e. it may not actually exist on the disk yet).

This is a risky function to call because you may accidentally pass this path to further BXL actions that expect the artifact to be materialized. If this happens, the BXL script will error out. If you want the path without materialization for other uses that don’t involve passing them into further actions, then it’s safe.

Sample usage:

def _impl_get_path_without_materialization(ctx):
    owner = ctx.cquery().owner("cell//path/to/file")[0]
    artifact = owner.get_source("cell//path/to/file", ctx)
    source_artifact_project_rel_path = get_path_without_materialization(artifact, ctx)
    ctx.output.print(source_artifact_project_rel_path)  # Note this artifact is NOT ensured or materialized

get_paths_without_materialization

def get_paths_without_materialization(
    cmd_line: artifact | cell_root | cmd_args | label | label_relative_path | output_artifact | project_root | resolved_macro | str | tagged_command_line | target_label | transitive_set_args_projection | write_json_cli_args | RunInfo,
    ctx: bxl_ctx,
    /,
    *,
    abs: bool = False
)

The output paths of the inputs of a cmd_args(). The output paths are returned as a list. Takes an optional boolean to print the absolute or relative path. Note that this method returns artifact paths without asking for the artifacts to be materialized (i.e. they may not actually exist on disk yet).

This is a risky function to call because you may accidentally pass this path to further BXL actions that expect the artifact to be materialized. If this happens, the BXL script will error out. If you want the path without materialization for other uses that don’t involve passing them into further actions, then it’s safe.

Sample usage:

def _impl_get_paths_without_materialization(ctx):
    node = ctx.configured_targets("root//bin:the_binary")
    providers = ctx.analysis(node).providers()
    path = get_paths_without_materialization(providers[RunInfo], abs=True)  # Note this artifact is NOT ensured or materialized
    ctx.output.print(path)

glob

def glob(
    include: list[str] | tuple[str, ...],
    *,
    exclude: list[str] | tuple[str, ...] = []
) -> list[str]

The glob() function specifies a set of files using patterns. Only available from BUCK files.

A typical glob call looks like:

glob(["foo/**/*.h"])

This call will match all header files in the foo directory, recursively.

You can also pass a named exclude parameter to remove files matching a pattern:

glob(["foo/**/*.h"], exclude = ["**/config.h"])

This call will remove all config.h files from the initial match.

The glob() call is evaluated against the list of files owned by this BUCK file. A file is owned by whichever BUCK file is closest above it - so given foo/BUCK and foo/bar/BUCK the file foo/file.txt would be owned by foo/BUCK (and available from its glob results) but the file foo/bar/file.txt would be owned by foo/bar/BUCK and not appear in the glob result of foo/BUCK, even if you write glob(["bar/file.txt"]). As a consequence of this rule, glob(["../foo.txt"]) will always return an empty list of files.

Currently glob is evaluated case-insensitively on all file systems, but we expect that to become case-sensitive in the near future.


host_info

def host_info() -> struct(..)

The host_info() function is used to get the current OS and processor architecture on the host. The structure returned is laid out thusly:

struct(
    os=struct(
        is_linux=True|False,
        is_macos=True|False,
        is_windows=True|False,
        is_freebsd=True|False,
        is_unknown=True|False,
    ),
    arch=struct(
        is_aarch64=True|False,
        is_arm=True|False,
        is_armeb=True|False,
        is_i386=True|False,
        is_mips=True|False,
        is_mips64=True|False,
        is_mipsel=True|False,
        is_mipsel64=True|False,
        is_powerpc=True|False,
        is_ppc64=True|False,
        is_x86_64=True|False,
        is_unknown=True|False,
    ),
)
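
For example, a BUCK file or macro might pick a tool based on the host (the target labels here are made up):

_os = host_info().os
_toolchain = "//tools:clang" if _os.is_linux or _os.is_macos else "//tools:msvc"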

implicit_package_symbol

def implicit_package_symbol(name: str, default = _)

load_symbols

def load_symbols(symbols: dict[str, typing.Any]) -> None

Used in a .bzl file to set exported symbols. In most cases just defining the symbol as a top-level binding is sufficient, but sometimes the names might be programmatically generated.

It is undefined behaviour if you try to use any of the symbols exported here later in the same module, or if they overlap with existing definitions. This function should be used rarely.


new_test_action_error_ctx

def new_test_action_error_ctx(
    *,
    stderr: str = "",
    stdout: str = ""
) -> ActionErrorCtx

Global function to create a new ActionErrorCtx for testing a Starlark action error handler via bxl_test.


now

def now() -> instant

Creates an Instant at the current time.

Sample usage:

def _impl_elapsed_millis(ctx):
    instant = now()
    time_a = instant.elapsed_millis()
    # do something that takes a long time
    time_b = instant.elapsed_millis()

    ctx.output.print(time_a)
    ctx.output.print(time_b)

This function is only accessible through Bxl.


oncall

def oncall(name: str, /) -> None

Called in a BUCK file to declare the oncall contact details for all the targets defined. Must be called at most once, before any targets have been declared. Errors if called from a .bzl file.


package

def package(
    *,
    inherit: bool = False,
    visibility: list[str] | tuple[str, ...] = [],
    within_view: list[str] | tuple[str, ...] = []
) -> None

package_name

def package_name() -> str

package_name() can only be called in buildfiles (e.g. BUCK files) or PACKAGE files, and returns the name of the package. E.g. inside foo//bar/baz/BUCK the output will be bar/baz. E.g. inside foo//bar/PACKAGE the output will be bar.


plugins

plugins: plugins

provider

def provider(
    *,
    doc: str = "",
    fields: list[str] | tuple[str, ...] | dict[str, typing.Any]
) -> provider_callable

Create a "provider" type that can be returned from rule implementations. Used to pass information from a rule to the things that depend on it. Typically named with an Info suffix.

GroovyLibraryInfo = provider(fields = [
    "objects",  # a list of artifacts
    "options",  # a string containing compiler options
])

Given a dependency you can obtain the provider with my_dep[GroovyLibraryInfo] which returns either None or a value of type GroovyLibraryInfo.

For providers that accumulate upwards a transitive set is often a good choice.
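
A hedged sketch of producing and consuming such a provider (rule and attribute names are illustrative):

def _groovy_library_impl(ctx):
    return [
        DefaultInfo(),
        GroovyLibraryInfo(objects = ctx.attrs.srcs, options = ctx.attrs.options),
    ]

def _groovy_binary_impl(ctx):
    dep_objects = []
    for dep in ctx.attrs.deps:
        info = dep[GroovyLibraryInfo]  # None if the dep does not provide it
        if info != None:
            dep_objects.extend(info.objects)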


provider_field

def provider_field(
    ty,
    /,
    *,
    default = _
) -> ProviderField

Create a field definition object which can be passed to provider type constructor.
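
A hedged sketch combining this with the dict form of provider(fields = ...) documented above (the provider and field names are hypothetical):

LinkerInfo = provider(fields = {
    "flags": provider_field(typing.Any, default = []),
    "linker": provider_field(typing.Any, default = None),
})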


read_config

def read_config(section: str, key: str, default = _)

Read a configuration from the nearest enclosing .buckconfig of the BUCK file that started evaluation of this code.

As an example, if you have a .buckconfig of:

[package_options]
compile = super_fast

Then you would get the following results:

read_config("package_options", "compile") == "super_fast"
read_config("package_options", "linker") == None
read_config("package_options", "linker", "a_default") == "a_default"

In general the use of .buckconfig is discouraged in favour of select, but it can still be useful.


read_package_value

def read_package_value(key: str, /)

Read value specified in the PACKAGE file.

Returns None if value is not set.


read_parent_package_value

def read_parent_package_value(key: str, /)

Read a package value defined in a parent PACKAGE file.

This function can only be called in a Package context.

Returns None if value is not set.


read_root_config

def read_root_config(
    section: str,
    key: str,
    default: None | str = None,
    /
) -> None | str

Like read_config but the project root .buckconfig is always consulted, regardless of the cell of the originating BUCK file.


regex

def regex(
    regex: str,
    /,
    *,
    fancy: bool = False
) -> buck_regex

.type attribute

Produces "buck_regex"


regex_match

def regex_match(regex: str, str: str, /) -> bool

Test if a regular expression matches a string. Fails if the regular expression is malformed.

As an example:

regex_match("^[a-z]*$", "hello") == True
regex_match("^[a-z]*$", "1234") == False

repository_name

def repository_name() -> str

Like get_cell_name() but prepends a leading @ for compatibility with Buck1. You should call get_cell_name() instead, and if you really want the @, prepend it yourself.


rule

def rule(
    *,
    impl: typing.Callable[[typing.Any], list[typing.Any]],
    attrs: dict[str, attribute],
    cfg = _,
    doc: str = "",
    is_configuration_rule: bool = False,
    is_toolchain_rule: bool = False,
    uses_plugins: list[typing.Any] | tuple = []
) -> "function"

Define a rule. As a simple example:

def _my_rule(ctx: AnalysisContext) -> list[Provider]:
    output = ctx.actions.write("hello.txt", ctx.attrs.contents, is_executable = ctx.attrs.exe)
    return [DefaultInfo(default_outputs = [output])]

MyRule = rule(impl = _my_rule, attrs = {
    "contents": attrs.string(),
    "exe": attrs.option(attrs.bool(), default = False),
})

rule_exists

def rule_exists(name: str) -> bool

Check whether a target with the given name has already been defined; returns True if it has.

Note that this function checks for the existence of a target rather than a rule. In general use of this function is discouraged, as it makes definitions of rules not compose.
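
For example, a macro might use it defensively (my_rule stands in for whatever rule or macro is in scope):

def maybe_define(name, **kwargs):
    if not rule_exists(name):
        my_rule(name = name, **kwargs)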


select

def select(d, /) -> selector

select_equal_internal

def select_equal_internal(left, right, /) -> bool

Tests that two selects are equal to each other. For testing use only.


select_map

def select_map(d, func, /)

Maps a selector.

Each value within a selector map and on each side of an addition will be passed to the mapping function. The returned selector will have the same structure as this one.

Ex:

def increment_items(a):
    return [v + 1 for v in a]

select_map([1, 2] + select({"c": [2]}), increment_items) == [2, 3] + select({"c": [3]})

select_test

def select_test(d, func, /) -> bool

Test values in the select expression using the given function.

Returns True, if any value in the select passes, else False.

Ex:

select_test([1] + select({"c": [1]}), lambda a: len(a) > 1) == False
select_test([1, 2] + select({"c": [1]}), lambda a: len(a) > 1) == True
select_test([1] + select({"c": [1, 2]}), lambda a: len(a) > 1) == True

set_cfg_constructor

def set_cfg_constructor(
    *,
    stage0,
    stage1,
    key: str
) -> None

Register global cfg constructor.

This function can only be called from the repository root PACKAGE file.

Parameters:

  • stage0: The first cfg constructor that will be invoked before configuration rules are analyzed.
  • stage1: The second cfg constructor that will be invoked after configuration rules are analyzed.
  • key: The key for cfg modifiers on PACKAGE values and metadata.


set_starlark_peak_allocated_byte_limit

def set_starlark_peak_allocated_byte_limit(value: int, /) -> None

Set the limit on peak Starlark-allocated bytes during evaluation of the build context. Errors if the limit has already been set.


sha256

def sha256(val: str, /) -> str

Computes a sha256 digest for a string. Returns the hex representation of the digest.

sha256("Buck2 is the best build system") == "bb99a3f19ecba6c4d2c7cd321b63b669684c713881baae21a6b1d759b3ec6ac9"

soft_error

def soft_error(
    category: str,
    message: str,
    /,
    *,
    quiet: bool = _,
    stack: bool = _
) -> None

Produce an error that will become a hard error at some point in the future, but for now is a warning which is logged to the server. In the open source version of Buck2 this function always results in an error.

Called passing a stable key (must be snake_case and start with starlark_, used for consistent reporting) and an arbitrary message (used for debugging).

As an example:

soft_error(
    "starlark_rule_is_too_long",
    "Length of property exceeds 100 characters in " + repr(ctx.label),
)

transition

def transition(
    *,
    impl: typing.Callable,
    refs: dict[str, str],
    attrs: list[str] | tuple[str, ...] = _,
    split: bool = False
) -> transition

transitive_set

def transitive_set(
    *,
    args_projections: dict[str, typing.Any] = _,
    json_projections: dict[str, typing.Any] = _,
    reductions: dict[str, typing.Any] = _
) -> transitive_set_definition
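
A hedged sketch of defining and using a transitive set, assuming the AnalysisActions tset API (ctx.actions.tset and project_as_args) and a hypothetical MyLinkInfo provider that carries the set:

LinkArgsTSet = transitive_set(
    args_projections = {
        "args": lambda value: cmd_args(value),
    },
)

def _impl(ctx):
    children = [dep[MyLinkInfo].tset for dep in ctx.attrs.deps]  # MyLinkInfo is hypothetical
    my_tset = ctx.actions.tset(LinkArgsTSet, value = ctx.attrs.src, children = children)
    link_args = my_tset.project_as_args("args")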

utarget_set

def utarget_set() -> target_set

Creates an empty target set for unconfigured nodes.

Sample usage:

def _impl_utarget_set(ctx):
    targets = utarget_set()
    ctx.output.print(type(targets))
    ctx.output.print(len(targets))

warning

def warning(x: str, /) -> None

Print a warning. The line will be decorated with the timestamp and other details, including the word WARN (colored, if the console supports it).

If you are not writing a warning, use print instead. Be aware that printing lots of output (warnings or not) can cause all information to be ignored by the user.


write_package_value

def write_package_value(
    key: str,
    value,
    /,
    *,
    overwrite: bool = False
) -> None

Set a value to be accessible in nested PACKAGE files.

If any parent PACKAGE value has already set the same key, it will raise an error unless you pass overwrite = True, in which case it will replace the parent value.
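
A hedged sketch of how a parent and child PACKAGE file might interact with read_package_value (the key is illustrative; keys are typically namespaced with a dot):

# PACKAGE
write_package_value("myproject.build_mode", "opt")

# dir/PACKAGE -- replaces the inherited value
write_package_value("myproject.build_mode", "dev", overwrite = True)

# A BUCK file under dir/ then reads:
mode = read_package_value("myproject.build_mode")  # "dev"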