Pad input in case the desired padding type requires it.
VALID and SAME padding types are directly supported by TensorFlow
convolution ops, so they don't require us to pad the input ourselves, at
least when the same method is used for all dimensions.
Other padding types (FULL, CAUSAL, REVERSE_CAUSAL) ... |
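For the CAUSAL case specifically, the input must be left-padded along time so each output depends only on current and past samples. A minimal sketch of the required padding amount (the helper name is ours, not Sonnet's):

```python
def causal_pad_amount(kernel_size, rate=1):
    """Left-padding needed for a causal (optionally dilated) convolution.

    Each output at time t may only see inputs at times <= t, so the
    receptive field reaching (kernel_size - 1) * rate steps into the
    past must be covered by zeros on the left.
    """
    return (kernel_size - 1) * rate
```

For example, a kernel of size 3 with no dilation needs 2 leading zeros; dilation multiplies that amount.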
Apply a convolution operation on `inputs` using variable `w`.
Args:
inputs: A Tensor of shape `data_format` and of type `tf.float16`,
`tf.bfloat16` or `tf.float32`.
w: A weight matrix of the same type as `inputs`.
Returns:
outputs: The result of the convolution operation on `inputs... |
Applies the passed-in mask to the convolution matrix.
Returns:
w: A copy of the convolution matrix that has had the mask applied.
Raises:
base.IncompatibleShapeError: If the mask shape has more dimensions than
the weight matrix.
base.IncompatibleShapeError: If the mask and the weig... |
Returns the number of output channels.
def output_channels(self):
"""Returns the number of output channels."""
if callable(self._output_channels):
self._output_channels = self._output_channels()
# Channel must be integer.
self._output_channels = int(self._output_channels)
return self._output_... |
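Stripped of the Sonnet module machinery, the lazy-resolution pattern above can be sketched in plain Python (the class name is illustrative):

```python
class LazyChannels:
    """Defers resolving output_channels until first access (sketch)."""

    def __init__(self, output_channels):
        # output_channels may be an int or a zero-argument callable.
        self._output_channels = output_channels

    @property
    def output_channels(self):
        if callable(self._output_channels):
            self._output_channels = self._output_channels()
        # Channel count must be an integer.
        self._output_channels = int(self._output_channels)
        return self._output_channels
```

Passing a callable lets the channel count be decided as late as first use, e.g. when it depends on another module that is not yet connected.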
Returns the padding algorithm used, if this is the same for all dims.
Use `.paddings` if you want a tuple with the padding algorithm used for each
dimension.
Returns:
The padding algorithm used, if this is the same for all dimensions.
Raises:
ValueError: If different padding algorithms ar... |
Returns a cloned `_ConvND` module.
Args:
name: Optional string assigning name of cloned module. The default name
is constructed by appending "_clone" to `self.module_name`.
Returns:
A copy of the current class.
def clone(self, name=None):
"""Returns a cloned `_ConvND` module.
Arg... |
Connects the _ConvNDTranspose module into the graph.
If this is not the first time the module has been connected to the graph,
the input Tensor provided here must have the same final N dimensions, in
order for the existing variables to be the correct size for the
multiplication. The batch size may diff... |
Calculate the output shape for `inputs` after a deconvolution.
Args:
inputs: A Tensor of shape `data_format` and of type `tf.float16`,
`tf.bfloat16` or `tf.float32`.
Returns:
output_shape: A tensor of shape (`batch_size`, `conv_output_shape`).
def _infer_all_output_dims(self, inputs):
... |
Recover output tensor shape value to enable shape inference.
The batch size of `inputs` isn't preserved by the convolution op. Calculate
what the proper output shape will be for `outputs`.
Args:
inputs: A Tensor of shape `data_format` and of type `tf.float16`,
`tf.bfloat16` or `tf.float32`... |
Returns the output shape.
def output_shape(self):
"""Returns the output shape."""
if self._output_shape is None:
self._ensure_is_connected()
if callable(self._output_shape):
self._output_shape = tuple(self._output_shape())
return self._output_shape |
Returns matching `Conv1D` module.
Args:
name: Optional string assigning name of transpose module. The default name
is constructed by appending "_transpose" to `self.name`.
Returns:
`Conv1D` module.
def transpose(self, name=None):
"""Returns matching `Conv1D` module.
Args:
n... |
Returns matching `Conv2DTranspose` module.
Args:
name: Optional string assigning name of transpose module. The default name
is constructed by appending "_transpose" to `self.name`.
Returns:
`Conv2DTranspose` module.
Raises:
base.NotSupportedError: If `rate` in any dimension > 1.
... |
Returns matching `Conv2D` module.
Args:
name: Optional string assigning name of transpose module. The default name
is constructed by appending "_transpose" to `self.name`.
Returns:
`Conv2D` module.
def transpose(self, name=None):
"""Returns matching `Conv2D` module.
Args:
... |
Construct the convolution weight matrix.
Figures out the shape of the weight matrix, initializes it, and returns it.
Args:
inputs: A Tensor of shape `data_format` and of type `tf.float16`,
`tf.bfloat16` or `tf.float32`.
Returns:
w: A weight matrix of the same type as `inputs` and of s... |
Apply a depthwise_conv2d operation on `inputs` using variable `w`.
Args:
inputs: A Tensor of shape `data_format` and of type `tf.float16`,
`tf.bfloat16` or `tf.float32`.
w: A weight matrix of the same type as `inputs`.
Returns:
outputs: The result of the convolution operation on `i... |
Connects the module into the graph, with input Tensor `inputs`.
Args:
inputs: A 4D Tensor of shape:
[batch_size, input_height, input_width, input_channels]
and of type `tf.float16`, `tf.bfloat16` or `tf.float32`.
Returns:
A tuple of two 4D Tensors, each with the same dtype as `... |
Apply a `separable_conv2d` operation on `inputs` using `w`.
Args:
inputs: A Tensor of shape `data_format` and of type `tf.float16`,
`tf.bfloat16` or `tf.float32`.
w: A tuple of weight matrices of the same type as `inputs`, the first
being the depthwise weight matrix, and the second be... |
Apply a `separable_conv2d` operation on `inputs` using `w`.
Args:
inputs: A Tensor of shape `data_format` and of type `tf.float16`,
`tf.bfloat16` or `tf.float32`.
w: A tuple of weight matrices of the same type as `inputs`, the first
being the depthwise weight matrix, and the second be... |
Connects the Sequential module into the graph.
Args:
*args: A tuple of inputs, to be unpacked as the arguments to the first
layer.
Returns:
The output value of the last layer.
def _build(self, *args):
"""Connects the Sequential module into the graph.
Args:
*args: A tuple ... |
Provide a warning that get_variables on Sequential always returns ().
def get_variables(self, *args, **kwargs):
"""Provide a warning that get_variables on Sequential always returns ()."""
tf.logging.warning(
"Calling Sequential.get_variables, which will always return an empty "
"tuple. get_vari... |
Creates a custom getter that applies specified named arguments.
Args:
**kwargs: Overriding arguments for the custom getter to use in preference
to the named arguments it's called with.
Returns:
Custom getter.
def override_args(**kwargs):
"""Creates a custom getter that applies specified named argume... |
Creates a custom getter that applies specified named arguments.
The returned custom getter treats the specified named arguments as revised
defaults, and does not override any non-`None` argument values supplied by
the original get_variable call (or by a nested scope's custom getter).
Args:
**kwargs: Overr... |
Builds the TensorFlow variables of the migrated checkpoint.
Args:
checkpoint_reader: A `tf.train.NewCheckpointReader` of the checkpoint to
be read from.
name_value_fn: A function taking two arguments, `name` and `value`, which
returns the new name and value pair for a variable of that name.... |
Serializes a `tf.SparseTensor` into `nested_proto`.
Args:
sparse_tensor: An instance of `tf.SparseTensor`.
nested_proto: A `module_pb2.NestedData` instance to be filled from
`sparse_tensor`.
process_leafs: A function to be applied to the leaf values of the nested
structure.
already_proces... |
Deserializes a `tf.SparseTensor` from `sparse_tensor_proto`.
Args:
sparse_tensor_proto: A proto representing a `tf.SparseTensor`.
process_leafs: A function to be applied to the leaf values of the nested
structure.
Returns:
An instance of `tf.SparseTensor`.
def _from_proto_sparse_tensor(sparse_t... |
Serializes `nested_value` into `nested_proto`.
Args:
nested_value: A nested Python value.
nested_proto: A `module_pb2.NestedData` instance to be filled from the value
in `nested_value`.
process_leafs: A function to be applied to the leaf values of the nested
structure.
already_processed: ... |
Serializes `module_info`.
Args:
module_info: An instance of `ModuleInfo`.
export_scope: Optional `string`. Name scope to remove.
Returns:
An instance of `module_pb2.SonnetModule`.
def _module_info_to_proto(module_info, export_scope=None):
"""Serializes `module_into`.
Args:
module_info: An in... |
Deserializes `nested_proto`.
Args:
nested_proto: An instance of `module_pb2.NestedData`.
process_leafs: A function to be applied to the leaf values of the nested
structure.
Returns:
An instance of `string`, `tuple`, `dict` or `namedtuple`.
Raises:
base_errors.ModuleInfoError: If the probo... |
Deserializes `module_info_def` proto.
Args:
module_info_def: An instance of `module_pb2.SonnetModule`.
import_scope: Optional `string`. Name scope to use.
Returns:
An instance of `ModuleInfo`.
Raises:
base_errors.ModuleInfoError: If the protobuf is of the wrong type or
if some of its fiel... |
Deserializes the `module_info_def` proto without raising exceptions.
Args:
module_info_def: An instance of `module_pb2.SonnetModule`.
import_scope: Optional `string`. Name scope to use.
Returns:
An instance of `ModuleInfo`.
def _module_info_from_proto_safe(module_info_def, import_scope=None):
"""De... |
Returns a tf.train.CheckpointSaverHook for autosaving checkpoints.
def _configure_saver(checkpoint_dir, checkpoint_interval):
"""Returns a tf.train.CheckpointSaverHook for autosaving checkpoints."""
saver = tf.train.Saver()
return tf.train.CheckpointSaverHook(
checkpoint_dir=checkpoint_dir,
save_step... |
Constructs the computation graph.
def build_graph(lstm_depth=3, batch_size=32, num_embedding=32, num_hidden=128,
truncation_length=64, sample_length=1000, max_grad_norm=5,
initial_learning_rate=0.1, reduce_learning_rate_multiplier=0.1,
optimizer_epsilon=0.01):
"""Const... |
Trains a deep LSTM model on the Tiny Shakespeare dataset.
def train(num_training_iterations, report_interval,
reduce_learning_rate_interval):
"""Trains a deep LSTM model on the Tiny Shakespeare dataset."""
# Build the computation graph.
graph_tensors, dataset_train = build_graph(
lstm_depth=FLAG... |
Builds the deep LSTM model sub-graph.
Args:
one_hot_input_sequence: A Tensor with the input sequence encoded as a
one-hot representation. Its dimensions should be `[truncation_length,
batch_size, output_size]`.
Returns:
Tuple of the Tensor of output logits for the batch, with dimen... |
Builds sub-graph to generate a string, sampled from the model.
Args:
initial_logits: Starting logits to sample from.
initial_state: Starting state for the RNN core.
sequence_length: Number of characters to sample.
Returns:
A Tensor of characters, with dimensions `[sequence_length, batc... |
Connects the module to some inputs.
Args:
inputs: Tensor, final dimension must be equal to embedding_dim. All other
leading dimensions will be flattened and treated as a large batch.
is_training: boolean, whether this connection is to training data.
Returns:
dict containing the follo... |
Connects the module to some inputs.
Args:
inputs: Tensor, final dimension must be equal to embedding_dim. All other
leading dimensions will be flattened and treated as a large batch.
is_training: boolean, whether this connection is to training data. When
this is set to False, the intern... |
Add two arbitrarily nested `Tensors`.
def _nested_add(nested_a, nested_b):
"""Add two arbitrarily nested `Tensors`."""
return nest.map(lambda a, b: a + b, nested_a, nested_b) |
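`nest.map` here is TensorFlow's structure-mapping utility. A simplified pure-Python equivalent for list/tuple nests (dicts and namedtuples omitted) illustrates the idea:

```python
def nested_map(fn, *structures):
    """Apply fn elementwise across parallel nested list/tuple structures."""
    first = structures[0]
    if isinstance(first, (list, tuple)):
        return type(first)(
            nested_map(fn, *children) for children in zip(*structures)
        )
    return fn(*structures)


def nested_add(nested_a, nested_b):
    """Add two arbitrarily nested structures of numbers, leaf by leaf."""
    return nested_map(lambda a, b: a + b, nested_a, nested_b)
```

Both arguments must share the same structure; only the leaves are added.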
Multiply `Tensors` in arbitrarily nested `Tensor` `nested_a` with `p`.
def _nested_unary_mul(nested_a, p):
"""Multiply `Tensors` in arbitrarily nested `Tensor` `nested_a` with `p`."""
def mul_with_broadcast(tensor):
ndims = tensor.shape.ndims
if ndims != 2:
p_reshaped = tf.reshape(p, [-1] + [1] * (nd... |
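The reshape target is just `[-1]` followed by enough singleton dimensions for `p` to broadcast against a rank-`ndims` tensor; as a framework-free sketch:

```python
def broadcast_shape(ndims):
    """Target shape letting a per-batch scalar p broadcast over a
    rank-ndims tensor: the batch axis stays, every other axis is 1."""
    return [-1] + [1] * (ndims - 1)
```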
The `cond` of the `tf.while_loop`.
def _cond(self, unused_x, unused_cumul_out, unused_prev_state,
unused_cumul_state, cumul_halting, unused_iteration,
unused_remainder):
"""The `cond` of the `tf.while_loop`."""
return tf.reduce_any(cumul_halting < 1) |
The `body` of `tf.while_loop`.
def _body(self, x, cumul_out, prev_state, cumul_state,
cumul_halting, iteration, remainder, halting_linear, x_ones):
"""The `body` of `tf.while_loop`."""
# Increase iteration count only for those elements that are still running.
all_ones = tf.constant(1, shape=(se... |
Connects the core to the graph.
Args:
x: Input `Tensor` of shape `(batch_size, input_size)`.
prev_state: Previous state. This could be a `Tensor`, or a tuple of
`Tensor`s.
Returns:
The tuple `(output, state)` for this core.
Raises:
ValueError: if the `Tensor` `x` does no... |
Custom getter to restore all variables with `snt.restore_initializer`.
Args:
filename: The filename of the checkpoint.
name_fn: A function which can map the name of the variable requested. This
allows restoring variables with values having different names in the
checkpoint.
collection: Only s... |
Calculate a reasonable embedding size for a vocabulary.
Rule of thumb is 6 * 4th root of vocab_size.
Args:
vocab_size: Size of the input vocabulary.
Returns:
The embedding size to use.
Raises:
ValueError: if `vocab_size` is invalid.
def _embedding_dim(vocab_size):
"""Calculate a reasonable embe... |
Lookup embeddings.
Looks up an embedding vector for each value in `ids`. All ids must be within
[0, vocab_size), else an `InvalidArgumentError` is raised at runtime.
Args:
ids: Tensor of dtype int64.
Returns:
Tensor of tf.shape(ids) + [embedding_dim] and dtype float32.
def _build(self, i... |
Generates n-dimensional homogeneous coordinates for a given grid definition.
`source_shape` and `output_shape` are used to define the size of the source
and output signal domains, as opposed to the shape of the respective
Tensors. For example, for an image of size `width=W` and `height=H`,
`{source,output}_shap... |
Creates all the matrices needed to compute the output warped grids.
def _create_features(self, constraints):
"""Creates all the matrices needed to compute the output warped grids."""
affine_warp_constraints = constraints
if not isinstance(affine_warp_constraints, AffineWarpConstraints):
affine_warp_c... |
Assembles the module network and adds it to the graph.
The internal computation graph is assembled according to the set of
constraints provided at construction time.
Args:
inputs: Tensor containing a batch of transformation parameters.
Returns:
A batch of warped grids.
Raises:
... |
Returns a `sonnet` module to compute inverse affine transforms.
The function first assembles a network that given the constraints of the
current AffineGridWarper and a set of input parameters, retrieves the
coefficients of the corresponding inverse affine transform, then feeds its
output into a... |
Computes a boolean mask from the user-defined constraints.
def _calc_mask(self):
"""Computes a boolean mask from the user-defined constraints."""
mask = []
for row in self._constraints:
mask.append(tuple(x is None for x in row))
return tuple(mask) |
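Module-free restatement with an example: entries left as `None` are unconstrained, so they map to `True` in the mask:

```python
def calc_mask(constraints):
    """True where a constraint entry is unset (None), row by row."""
    return tuple(tuple(x is None for x in row) for row in constraints)
```

The resulting mask marks which affine parameters must be supplied by the network rather than fixed by the constraints.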
Combines two constraints, raising an error if they are not compatible.
def _combine(self, x, y):
"""Combines two constraints, raising an error if they are not compatible."""
if x is None or y is None:
return x or y
if x != y:
raise ValueError('Incompatible set of constraints provided.')
ret... |
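Note that `return x or y` would mishandle a legitimate constraint value of `0` (which is falsy); a sketch with an explicit `None` check:

```python
def combine(x, y):
    """Merge two scalar constraints; None defers to the other value."""
    if x is None or y is None:
        return x if x is not None else y
    if x != y:
        raise ValueError('Incompatible set of constraints provided.')
    return x
```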
Combines two sets of constraints into a coherent single set.
def combine_with(self, additional_constraints):
"""Combines two sets of constraints into a coherent single set."""
x = additional_constraints
if not isinstance(additional_constraints, AffineWarpConstraints):
x = AffineWarpConstraints(additi... |
Build magic (and sparsely documented) shapes_and_slices spec string.
def _partition_spec(self, shape, partition_info):
"""Build magic (and sparsely documented) shapes_and_slices spec string."""
if partition_info is None:
return '' # Empty string indicates a non-partitioned tensor.
ssi = tf.Variable.... |
Bake an ``ansible-playbook`` command so it's ready to execute and
returns ``None``.
:return: None
def bake(self):
"""
Bake an ``ansible-playbook`` command so it's ready to execute and
returns ``None``.
:return: None
"""
# Pass a directory as inventory t... |
Executes ``ansible-playbook`` and returns a string.
:return: str
def execute(self):
"""
Executes ``ansible-playbook`` and returns a string.
:return: str
"""
if self._ansible_command is None:
self.bake()
try:
self._config.driver.sanity_c... |
Log in to one instance.
def login(ctx, host, scenario_name): # pragma: no cover
""" Log in to one instance. """
args = ctx.obj.get('args')
subcommand = base._get_subcommand(__name__)
command_args = {
'subcommand': subcommand,
'host': host,
}
s = scenarios.Scenarios(
ba... |
Execute the actions necessary to perform a `molecule login` and
returns None.
:return: None
def execute(self):
"""
Execute the actions necessary to perform a `molecule login` and
returns None.
:return: None
"""
c = self._config
if ((not c.state.... |
Execute scenario sequences based on parsed command-line arguments.
This is useful for subcommands that run scenario sequences, which
excludes subcommands such as ``list``, ``login``, and ``matrix``.
``args`` and ``command_args`` are combined using :func:`get_configs`
to generate the scenario(s) config... |
Execute each command in the given scenario's configured sequence.
:param scenario: The scenario to execute.
:returns: None
def execute_scenario(scenario):
"""
Execute each command in the given scenario's configured sequence.
:param scenario: The scenario to execute.
:returns: None
"""
... |
Glob the current directory for Molecule config files, instantiate config
objects, and returns a list.
:param args: A dict of options, arguments and commands from the CLI.
:param command_args: A dict of options passed to the subcommand from
the CLI.
:param ansible_args: An optional tuple of argumen... |
Verify a Molecule config was found and returns None.
:param configs: A list containing absolute paths to Molecule config files.
:return: None
def _verify_configs(configs):
"""
Verify a Molecule config was found and returns None.
:param configs: A list containing absolute paths to Molecule config ... |
Bake a ``gilt`` command so it's ready to execute and returns None.
:return: None
def bake(self):
"""
Bake a ``gilt`` command so it's ready to execute and returns None.
:return: None
"""
self._sh_command = getattr(sh, self.command)
self._sh_command = self._sh_co... |
Execute the actions necessary to cleanup the instances and returns
None.
:return: None
def execute(self):
"""
Execute the actions necessary to cleanup the instances and returns
None.
:return: None
"""
self.print_info()
if not self._config.provi... |
Bake an `ansible-lint` command so it's ready to execute and returns
None.
:return: None
def bake(self):
"""
Bake an `ansible-lint` command so it's ready to execute and returns
None.
:return: None
"""
options = self.options
default_exclude_list =... |
Print ``Ansible`` and ``Molecule`` environment variables and returns None.
:param env: A dict containing the shell's environment as collected by
``os.environ``.
:return: None
def print_environment_vars(env):
"""
Print ``Ansible`` and ``Molecule`` environment variables and returns None.
:param... |
Execute the given command and return the ``sh`` response object.
:param cmd: A ``sh.Command`` object to execute.
:param debug: An optional bool to toggle debug output.
:return: ``sh`` object
def run_command(cmd, debug=False):
"""
Execute the given command and return the ``sh`` response object.
:param cmd: A ``sh.Command`` object to e... |
Writes a file with the given filename and content and returns None.
:param filename: A string containing the target filename.
:param content: A string containing the data to be written.
:return: None
def write_file(filename, content):
"""
Writes a file with the given filename and content and retur... |
Prepend an informational header on files managed by Molecule and returns
None.
:param filename: A string containing the target filename.
:return: None
def file_prepender(filename):
"""
Prepend an informational header on files managed by Molecule and returns
None.
:param filename: A string... |
Dump the provided data to a YAML document and returns a string.
:param data: The data to serialize into a YAML document.
:return: str
def safe_dump(data):
"""
Dump the provided data to a YAML document and returns a string.
:param data: A string containing an absolute path to the fi... |
Parse the provided string and return a dict.
:param string: A string to be parsed.
:return: dict
def safe_load(string):
"""
Parse the provided string and return a dict.
:param string: A string to be parsed.
:return: dict
"""
try:
return yaml.safe_load(string) or {}
except yaml.s... |
Merges the values of B into A and returns a mutated dict A.
::
dict a
b:
- c: 0
- c: 2
d:
e: "aaa"
f: 3
dict b
a: 1
b:
- c: 3
d:
e: "bbb"
Will give an object such as::
{'a': 1... |
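Molecule's implementation mutates ``A`` in place; a non-mutating sketch with the same semantics (nested dicts merge recursively, while lists and scalars from B replace those in A) reproduces the example above:

```python
def merge_dicts(a, b):
    """Return a new dict: b's values win; nested dicts merge recursively."""
    result = dict(a)
    for key, value in b.items():
        if isinstance(result.get(key), dict) and isinstance(value, dict):
            # Both sides are dicts: merge their contents key by key.
            result[key] = merge_dicts(result[key], value)
        else:
            # Lists and scalars from b simply replace the value in a.
            result[key] = value
    return result
```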
Ensure value uniqueness.
The rule's arguments are validated against this schema:
{'type': 'boolean'}
def _validate_unique(self, unique, field, value):
"""Ensure value uniqueness.
The rule's arguments are validated against this schema:
{'type': 'boolean'}
"""
if... |
Readonly but with a custom error.
The rule's arguments are validated against this schema:
{'type': 'boolean'}
def _validate_disallowed(self, disallowed, field, value):
""" Readonly but with a custom error.
The rule's arguments are validated against this schema:
{'type': 'boole... |
Readonly but with a custom error.
The rule's arguments are validated against this schema:
{'type': 'boolean'}
def _validate_molecule_env_var(self, molecule_env_var, field, value):
""" Readonly but with a custom error.
The rule's arguments are validated against this schema:
{'t... |
Execute the actions necessary to perform a `molecule idempotence` and
returns None.
:return: None
def execute(self):
"""
Execute the actions necessary to perform a `molecule idempotence` and
returns None.
:return: None
"""
self.print_info()
if n... |
Parses the provisioning output for ``changed`` status and returns a bool.
:param output: A string containing the output of the ansible run.
:return: bool
def _is_idempotent(self, output):
"""
Parses the provisioning output for ``changed`` status and returns a bool.
:param output: A st... |
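Ansible's play recap reports per-host counters such as ``ok=3 changed=0 unreachable=0 failed=0``; a sketch of the check (the exact pattern Molecule matches may differ):

```python
import re


def is_idempotent(output):
    """A converge run is idempotent if no recap reports changed >= 1."""
    return re.search(r'changed=[1-9][0-9]*', output) is None
```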
Parses the output to identify the non-idempotent tasks.
:param (str) output: A string containing the output of the ansible run.
:return: A list containing the names of the non-idempotent tasks.
def _non_idempotent_tasks(self, output):
"""
Parses the output to identify the non idempoten... |
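Assuming the usual ansible stdout layout of ``TASK [name]`` headers followed by per-host status lines (an assumption about the output format, not Molecule's exact parser), a sketch:

```python
def non_idempotent_tasks(output):
    """Collect names of tasks whose status line reports 'changed:'."""
    tasks = []
    current_task = None
    for line in output.splitlines():
        line = line.strip()
        if line.startswith('TASK ['):
            # e.g. "TASK [Create a file] ****" -> "Create a file"
            current_task = line.split('[', 1)[1].split(']', 1)[0]
        elif line.startswith('changed:') and current_task is not None:
            tasks.append(current_task)
    return tasks
```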
Initialize a new scenario for use with Molecule.
def scenario(ctx, dependency_name, driver_name, lint_name, provisioner_name,
role_name, scenario_name, verifier_name): # pragma: no cover
""" Initialize a new scenario for use with Molecule. """
command_args = {
'dependency_name': dependenc... |
Execute the actions necessary to perform a `molecule init scenario` and
returns None.
:return: None
def execute(self):
"""
Execute the actions necessary to perform a `molecule init scenario` and
returns None.
:return: None
"""
scenario_name = self._comm... |
Implement Docker driver sanity checks.
def sanity_checks(self):
"""Implement Docker driver sanity checks."""
if self._config.state.sanity_checked:
return
log.info("Sanity checks: '{}'".format(self._name))
HAS_DOCKER_PY = None
try:
from ansible.module_u... |
Bake a `flake8` command so it's ready to execute and returns None.
:return: None
def bake(self):
"""
Bake a `flake8` command so it's ready to execute and returns None.
:return: None
"""
self._flake8_command = sh.flake8.bake(
self.options,
self._... |
Prepare the system for using ``ansible-galaxy`` and returns None.
:return: None
def _setup(self):
"""
Prepare the system for using ``ansible-galaxy`` and returns None.
:return: None
"""
role_directory = os.path.join(self._config.scenario.directory,
... |
Use the provisioner to configure instances (dependency, create, prepare
converge).
def converge(ctx, scenario_name, ansible_args): # pragma: no cover
"""
Use the provisioner to configure instances (dependency, create, prepare
converge).
"""
args = ctx.obj.get('args')
subcommand = base._ge... |
Execute the actions necessary to perform a `molecule converge` and
returns None.
:return: None
def execute(self):
"""
Execute the actions necessary to perform a `molecule converge` and
returns None.
:return: None
"""
self.print_info()
self._conf... |
Return a list containing all scenario objects.
:return: list
def all(self):
"""
Return a list containing all scenario objects.
:return: list
"""
if self._scenario_name:
scenarios = self._filter_for_scenario()
self._verify()
return s... |
Verify the specified scenario was found and returns None.
:return: None
def _verify(self):
"""
Verify the specified scenario was found and returns None.
:return: None
"""
scenario_names = [c.scenario.name for c in self._configs]
if self._scenario_name not in sc... |
Find the scenario matching the provided scenario name and returns a
list.
:return: list
def _filter_for_scenario(self):
"""
Find the scenario matching the provided scenario name and returns a
list.
:return: list
"""
return [
c.scenario for c... |
Build a matrix of scenarios with sequence to include and returns a
dict.
{
scenario_1: {
'subcommand': [
'action-1',
'action-2',
],
},
scenario_2: {
'subcommand': [
... |
Build a logger with the given name and returns the logger.
:param name: The name for the logger. This is usually the module
name, ``__name__``.
:return: logger object
def get_logger(name=None):
"""
Build a logger with the given name and returns the logger.
:param name: The name f... |
Lint the role.
def lint(ctx, scenario_name): # pragma: no cover
""" Lint the role. """
args = ctx.obj.get('args')
subcommand = base._get_subcommand(__name__)
command_args = {
'subcommand': subcommand,
}
base.execute_cmdline_scenarios(scenario_name, args, command_args) |
Execute the actions necessary to perform a `molecule lint` and
returns None.
:return: None
def execute(self):
"""
Execute the actions necessary to perform a `molecule lint` and
returns None.
:return: None
"""
self.print_info()
linters = [
... |
Create an inventory structure and returns a dict.
.. code-block:: yaml
ungrouped:
vars:
foo: bar
hosts:
instance-1:
instance-2:
children:
$child_group_name:
hosts:
... |
Executes `ansible-playbook` against the cleanup playbook and returns
None.
:return: None
def cleanup(self):
"""
Executes `ansible-playbook` against the cleanup playbook and returns
None.
:return: None
"""
pb = self._get_ansible_playbook(self.playbooks.c... |
Executes ``ansible-playbook`` against the converge playbook unless
specified otherwise and returns a string.
:param playbook: An optional string containing an absolute path to a
playbook.
:param kwargs: An optional keyword arguments.
:return: str
def converge(self, playbook=No... |
Executes ``ansible-playbook`` against the destroy playbook and returns
None.
:return: None
def destroy(self):
"""
Executes ``ansible-playbook`` against the destroy playbook and returns
None.
:return: None
"""
pb = self._get_ansible_playbook(self.playboo... |
Executes ``ansible-playbook`` against the side_effect playbook and
returns None.
:return: None
def side_effect(self):
"""
Executes ``ansible-playbook`` against the side_effect playbook and
returns None.
:return: None
"""
pb = self._get_ansible_playbook(... |
Executes ``ansible-playbook`` against the create playbook and returns
None.
:return: None
def create(self):
"""
Executes ``ansible-playbook`` against the create playbook and returns
None.
:return: None
"""
pb = self._get_ansible_playbook(self.playbooks.... |
Executes ``ansible-playbook`` against the prepare playbook and returns
None.
:return: None
def prepare(self):
"""
Executes ``ansible-playbook`` against the prepare playbook and returns
None.
:return: None
"""
pb = self._get_ansible_playbook(self.playboo... |
Executes ``ansible-playbook`` against the converge playbook with the
``--syntax-check`` flag and returns None.
:return: None
def syntax(self):
"""
Executes ``ansible-playbook`` against the converge playbook with the
``--syntax-check`` flag and returns None.
:return: None... |
Executes ``ansible-playbook`` against the verify playbook and returns
None.
:return: None
def verify(self):
"""
Executes ``ansible-playbook`` against the verify playbook and returns
None.
:return: None
"""
pb = self._get_ansible_playbook(self.playbooks.... |