Coerce *s* to `str`.
To keep compatibility with older versions of six (see Issue 4169), we copy
this function from six == 1.12.0.
TODO(yuhguo): remove this function when six >= 1.12.0.
For Python 2:
- `unicode` -> encoded to `str`
- `str` -> `str`
For Python 3:
- `str` -> `str`
- `bytes` ->... |
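A minimal sketch of the copied helper, following the six == 1.12.0 ensure_str semantics described above (the version check and error handling here are illustrative):

import sys

def ensure_str(s, encoding="utf-8", errors="strict"):
    """Coerce *s* to `str` (sketch of six.ensure_str)."""
    if sys.version_info[0] == 2:
        text_type, binary_type = unicode, str  # noqa: F821
    else:
        text_type, binary_type = str, bytes
    if not isinstance(s, (text_type, binary_type)):
        raise TypeError("not expecting type '%s'" % type(s))
    if sys.version_info[0] == 2 and isinstance(s, text_type):
        # Python 2: `unicode` -> encoded to `str`
        s = s.encode(encoding, errors)
    elif sys.version_info[0] == 3 and isinstance(s, binary_type):
        # Python 3: `bytes` -> decoded to `str`
        s = s.decode(encoding, errors)
    return s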
Get the device IDs in the CUDA_VISIBLE_DEVICES environment variable.
Returns:
if CUDA_VISIBLE_DEVICES is set, this returns a list of integers with
the IDs of the GPUs. If it is not set, this returns None.
def get_cuda_visible_devices():
"""Get the device IDs in the CUDA_VISIBLE_DEVICES env... |
Determine a task's resource requirements.
Args:
default_num_cpus: The default number of CPUs required by this function
or actor method.
default_num_gpus: The default number of GPUs required by this function
or actor method.
default_resources: The default custom resou... |
Setup default logging for ray.
def setup_logger(logging_level, logging_format):
"""Setup default logging for ray."""
logger = logging.getLogger("ray")
if type(logging_level) is str:
logging_level = logging.getLevelName(logging_level.upper())
logger.setLevel(logging_level)
global _default_ha... |
Run vmstat and get a particular statistic.
Args:
stat: The statistic that we are interested in retrieving.
Returns:
The parsed output.
def vmstat(stat):
"""Run vmstat and get a particular statistic.
Args:
stat: The statistic that we are interested in retrieving.
Returns:... |
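One way the truncated body could look, assuming a Linux host where `vmstat -s` is available and each output line is "<value> <description>":

import subprocess

def vmstat(stat):
    """Sketch: run `vmstat -s` and return the integer for the given stat."""
    out = subprocess.check_output(["vmstat", "-s"]).decode("ascii")
    for line in out.split("\n"):
        line = line.strip()
        if stat in line:
            return int(line.split(" ")[0])
    raise ValueError("Statistic {!r} not found in vmstat output.".format(stat))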
Run a sysctl command and parse the output.
Args:
command: A sysctl command with an argument, for example,
["sysctl", "hw.memsize"].
Returns:
The parsed output.
def sysctl(command):
"""Run a sysctl command and parse the output.
Args:
command: A sysctl command with ... |
Return the total amount of system memory in bytes.
Returns:
The total amount of system memory in bytes.
def get_system_memory():
"""Return the total amount of system memory in bytes.
Returns:
The total amount of system memory in bytes.
"""
# Try to accurately figure out the memory... |
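A sketch of the "accurate" path hinted at in the truncated comment: prefer a cgroup memory limit (for containers) and fall back to physical memory. The cgroup path and the "unlimited" sentinel check are assumptions:

import os

def get_system_memory():
    """Sketch: total usable memory in bytes."""
    # Inside a container, the cgroup limit is the real ceiling.
    limit_file = "/sys/fs/cgroup/memory/memory.limit_in_bytes"
    if os.path.exists(limit_file):
        with open(limit_file) as f:
            container_limit = int(f.read())
        # Absurdly large values mean "unlimited"; ignore them.
        if container_limit < 2 ** 60:
            return container_limit
    # Fall back to the physical memory of the machine (POSIX only).
    return os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES")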
Get the size of the shared memory file system.
Returns:
The size of the shared memory file system in bytes.
def get_shared_memory_bytes():
"""Get the size of the shared memory file system.
Returns:
The size of the shared memory file system in bytes.
"""
# Make sure this is only ca... |
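A minimal sketch, assuming a Linux host where the shared memory file system is mounted at /dev/shm:

import os

def get_shared_memory_bytes():
    """Sketch: size of /dev/shm in bytes via statvfs."""
    shm_stats = os.statvfs("/dev/shm")
    return shm_stats.f_bsize * shm_stats.f_bavail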
Send a warning message if the pickled object is too large.
Args:
pickled: the pickled object.
name: name of the pickled object.
obj_type: type of the pickled object, can be 'function',
'remote function', 'actor', or 'object'.
worker: the worker used to send warning messa... |
Create a thread-safe proxy which locks every method call
for the given client.
Args:
client: the client object to be guarded.
lock: the lock object that will be used to lock client's methods.
If None, a new lock will be used.
Returns:
A thread-safe proxy for the given c... |
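A sketch of the locking-proxy idea; the factory name and the use of `__getattr__` here are illustrative, not the actual implementation:

import threading

def make_thread_safe_proxy(client, lock=None):
    """Sketch: wrap every callable attribute of `client` in a lock."""
    lock = lock or threading.RLock()

    class _Proxy(object):
        def __getattr__(self, name):
            attr = getattr(client, name)
            if not callable(attr):
                return attr

            def locked_call(*args, **kwargs):
                with lock:
                    return attr(*args, **kwargs)

            return locked_call

    return _Proxy()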
Attempt to create a directory that is globally readable/writable.
Args:
directory_path: The path of the directory to create.
def try_to_create_directory(directory_path):
"""Attempt to create a directory that is globally readable/writable.
Args:
directory_path: The path of the directory to... |
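A hedged sketch of the body: tolerate concurrent creation and widen permissions on a best-effort basis:

import errno
import os

def try_to_create_directory(directory_path):
    """Sketch: create the directory and make it world readable/writable."""
    try:
        os.makedirs(directory_path)
    except OSError as e:
        if e.errno != errno.EEXIST:
            raise
    try:
        # Best effort; may fail if another user owns the directory.
        os.chmod(directory_path, 0o0777)
    except OSError:
        pass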
This function produces a distributed array from a subset of the blocks in
`a`. The result and `a` will have the same number of dimensions. For
example,
subblocks(a, [0, 1], [2, 4])
will produce a DistArray whose objectids are
[[a.objectids[0, 2], a.objectids[0, 4]],
[a.objectids... |
Assemble an array from a distributed array of object IDs.
def assemble(self):
"""Assemble an array from a distributed array of object IDs."""
first_block = ray.get(self.objectids[(0, ) * self.ndim])
dtype = first_block.dtype
result = np.zeros(self.shape, dtype=dtype)
for index i... |
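One plausible completion of the truncated loop above; `num_blocks` and the block-bound helpers (`compute_block_lower`/`compute_block_upper`) are assumed to exist on DistArray:

    for index in np.ndindex(*self.num_blocks):
        lower = DistArray.compute_block_lower(index, self.shape)
        upper = DistArray.compute_block_upper(index, self.shape)
        block_slices = tuple(slice(l, u) for (l, u) in zip(lower, upper))
        result[block_slices] = ray.get(self.objectids[index])
    return result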
Computes action log-probs from policy logits and actions.
In the notation used throughout documentation and comments, T refers to the
time dimension ranging from 0 to T-1. B refers to the batch size and
ACTION_SPACE refers to the list of numbers each representing a number of
actions.
Args:
policy_logits... |
multi_from_logits wrapper used only for tests
def from_logits(behaviour_policy_logits,
target_policy_logits,
actions,
discounts,
rewards,
values,
bootstrap_value,
clip_rho_threshold=1.0,
clip... |
r"""V-trace for softmax policies.
Calculates V-trace actor critic targets for softmax polices as described in
"IMPALA: Scalable Distributed Deep-RL with
Importance Weighted Actor-Learner Architectures"
by Espeholt, Soyer, Munos et al.
Target policy refers to the policy we are interested in improving and
... |
r"""V-trace from log importance weights.
Calculates V-trace actor critic targets as described in
"IMPALA: Scalable Distributed Deep-RL with
Importance Weighted Actor-Learner Architectures"
by Espeholt, Soyer, Munos et al.
In the notation used throughout documentation and comments, T refers to the
time di... |
With the selected log_probs for multi-discrete actions of behaviour
and target policies we compute the log_rhos for calculating the vtrace.
def get_log_rhos(target_action_log_probs, behaviour_action_log_probs):
"""With the selected log_probs for multi-discrete actions of behaviour
and target policies we co... |
weight_variable generates a weight variable of a given shape.
def weight_variable(shape):
"""weight_variable generates a weight variable of a given shape."""
initial = tf.truncated_normal(shape, stddev=0.1)
return tf.Variable(initial) |
bias_variable generates a bias variable of a given shape.
def bias_variable(shape):
"""bias_variable generates a bias variable of a given shape."""
initial = tf.constant(0.1, shape=shape)
return tf.Variable(initial) |
Prints output of given dataframe to fit into terminal.
Returns:
table (pd.DataFrame): Final outputted dataframe.
dropped_cols (list): Columns dropped due to terminal size.
empty_cols (list): Empty columns (dropped on default).
def print_format_output(dataframe):
"""Prints output of giv... |
Lists trials in the directory subtree starting at the given path.
Args:
experiment_path (str): Directory where trials are located.
Corresponds to Experiment.local_dir/Experiment.name.
sort (str): Key to sort by.
output (str): Name of file where output is saved.
filter_op... |
Lists experiments in the directory subtree.
Args:
project_path (str): Directory where experiments are located.
Corresponds to Experiment.local_dir.
sort (str): Key to sort by.
output (str): Name of file where output is saved.
filter_op (str): Filter operation in the form... |
Opens a txt file at the given path where user can add and save notes.
Args:
path (str): Directory where note will be saved.
filename (str): Name of note. Defaults to "note.txt"
def add_note(path, filename="note.txt"):
"""Opens a txt file at the given path where user can add and save notes.
... |
REST API to query the job info with the given job_id.
The url pattern should be like this:
curl http://<server>:<port>/query_job?job_id=<job_id>
The response may be:
{
"running_trials": 0,
"start_time": "2018-07-19 20:49:40",
"current_round": 1,
"failed_trials": 0,
... |
REST API to query the trial info with the given trial_id.
The url pattern should be like this:
curl http://<server>:<port>/query_trial?trial_id=<trial_id>
The response may be:
{
"app_url": "None",
"trial_status": "TERMINATED",
"params": {'a': 1, 'b': 2},
"job_id": "a... |
Callback for early stopping.
This stopping rule stops a running trial if the trial's best objective
value by step `t` is strictly worse than the median of the running
averages of all completed trials' objectives reported up to step `t`.
def on_trial_result(self, trial_runner, trial, result):
... |
Marks trial as completed if it is paused and has previously run.
def on_trial_remove(self, trial_runner, trial):
    """Marks trial as completed if it is paused and has previously run."""
if trial.status is Trial.PAUSED and trial in self._results:
self._completed_trials.add(trial) |
Build a Job instance from a json string.
def from_json(cls, json_info):
"""Build a Job instance from a json string."""
if json_info is None:
return None
return JobRecord(
job_id=json_info["job_id"],
name=json_info["job_name"],
user=json_info["user... |
Build a Trial instance from a json string.
def from_json(cls, json_info):
"""Build a Trial instance from a json string."""
if json_info is None:
return None
return TrialRecord(
trial_id=json_info["trial_id"],
job_id=json_info["job_id"],
trial_stat... |
Build a Result instance from a json string.
def from_json(cls, json_info):
"""Build a Result instance from a json string."""
if json_info is None:
return None
return ResultRecord(
trial_id=json_info["trial_id"],
timesteps_total=json_info["timesteps_total"],
... |
Given a rollout, compute its value targets and the advantage.
Args:
rollout (SampleBatch): SampleBatch of a single trajectory
last_r (float): Value estimation for last observation
gamma (float): Discount factor.
lambda_ (float): Parameter for GAE
use_gae (bool): Using Genera... |
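A self-contained sketch of the GAE path this docstring describes, using plain arrays instead of a SampleBatch (field and function names are illustrative):

import numpy as np
import scipy.signal

def discount(x, gamma):
    """Reverse discounted cumulative sum: y[t] = sum_k gamma**k * x[t+k]."""
    return scipy.signal.lfilter([1], [1, -gamma], x[::-1], axis=0)[::-1]

def gae_advantages(rewards, values, last_r, gamma=0.9, lambda_=1.0):
    """Sketch: GAE advantages and value targets for one trajectory."""
    vpred_t = np.concatenate([values, np.array([last_r])])
    # TD residuals: delta[t] = r[t] + gamma * V[t+1] - V[t]
    delta_t = rewards + gamma * vpred_t[1:] - vpred_t[:-1]
    advantages = discount(delta_t, gamma * lambda_)
    value_targets = advantages + values
    return advantages, value_targets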
Handle an xray heartbeat batch message from Redis.
def xray_heartbeat_batch_handler(self, unused_channel, data):
"""Handle an xray heartbeat batch message from Redis."""
gcs_entries = ray.gcs_utils.GcsTableEntry.GetRootAsGcsTableEntry(
data, 0)
heartbeat_data = gcs_entries.Entries(... |
Remove this driver's object/task entries from redis.
Removes control-state entries of all tasks and task return
objects belonging to the driver.
Args:
driver_id: The driver id.
def _xray_clean_up_entries_for_driver(self, driver_id):
"""Remove this driver's object/task entr... |
Handle a notification that a driver has been removed.
Args:
unused_channel: The message channel.
data: The message data.
def xray_driver_removed_handler(self, unused_channel, data):
"""Handle a notification that a driver has been removed.
Args:
unused_chann... |
Process all messages ready in the subscription channels.
This reads messages from the subscription channels and calls the
appropriate handlers until there are no messages left.
Args:
max_messages: The maximum number of messages to process before
returning.
def proc... |
Experimental: issue a flush request to the GCS.
The purpose of this feature is to control GCS memory usage.
To activate this feature, Ray must be compiled with the flag
RAY_USE_NEW_GCS set, and Ray must be started at run time with the flag
as well.
def _maybe_flush_gcs(self):
... |
Run the monitor.
This function loops forever, checking for messages about dead database
clients and cleaning up state accordingly.
def run(self):
"""Run the monitor.
This function loops forever, checking for messages about dead database
clients and cleaning up state accordingl... |
View for the home page.
def index(request):
"""View for the home page."""
recent_jobs = JobRecord.objects.order_by("-start_time")[0:100]
recent_trials = TrialRecord.objects.order_by("-start_time")[0:500]
total_num = len(recent_trials)
running_num = sum(t.trial_status == Trial.RUNNING for t in rece... |
View for a single job.
def job(request):
"""View for a single job."""
job_id = request.GET.get("job_id")
recent_jobs = JobRecord.objects.order_by("-start_time")[0:100]
recent_trials = TrialRecord.objects \
.filter(job_id=job_id) \
.order_by("-start_time")
trial_records = []
for ... |
View for a single trial.
def trial(request):
"""View for a single trial."""
job_id = request.GET.get("job_id")
trial_id = request.GET.get("trial_id")
recent_trials = TrialRecord.objects \
.filter(job_id=job_id) \
.order_by("-start_time")
recent_results = ResultRecord.objects \
... |
Get job information for current job.
def get_job_info(current_job):
"""Get job information for current job."""
trials = TrialRecord.objects.filter(job_id=current_job.job_id)
total_num = len(trials)
running_num = sum(t.trial_status == Trial.RUNNING for t in trials)
success_num = sum(t.trial_status =... |
Get trial information for the current trial.
def get_trial_info(current_trial):
    """Get trial information for the current trial."""
if current_trial.end_time and ("_" in current_trial.end_time):
# end time is parsed from result.json and the format
# is like: yyyy-mm-dd_hh-MM-ss, which will be converted
... |
Get winner trial of a job.
def get_winner(trials):
"""Get winner trial of a job."""
winner = {}
# TODO: sort_key should be customized here
sort_key = "accuracy"
if trials and len(trials) > 0:
first_metrics = get_trial_info(trials[0])["metrics"]
if first_metrics and not first_metrics... |
Returns a base argument parser for the ray.tune tool.
Args:
parser_creator: A constructor for the parser class.
kwargs: Non-positional args to be passed into the
parser class constructor.
def make_parser(parser_creator=None, **kwargs):
"""Returns a base argument parser for the ray.... |
Converts configuration to a command line argument format.
def to_argv(config):
"""Converts configuration to a command line argument format."""
argv = []
for k, v in config.items():
if "-" in k:
raise ValueError("Use '_' instead of '-' in `{}`".format(k))
if v is None:
... |
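A hedged sketch of the whole conversion; the handling of None and booleans here (skip None, bare flags for True) is an assumption about the truncated body:

def to_argv(config):
    """Sketch: {"lr": 0.01, "use_gpu": True} -> ["--lr", "0.01", "--use-gpu"]."""
    argv = []
    for k, v in config.items():
        if "-" in k:
            raise ValueError("Use '_' instead of '-' in `{}`".format(k))
        if v is None:
            continue
        flag = "--{}".format(k.replace("_", "-"))
        if isinstance(v, bool):
            if v:
                argv.append(flag)
        else:
            argv.extend([flag, str(v)])
    return argv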
Creates a Trial object from parsing the spec.
Arguments:
    spec (dict): A resolved experiment specification. The args here
        should correspond to the command line flags
        in ray.tune.config_parser.
    output_path (str): A specific output path within the local_dir.
... |
Poll for compute zone operation until finished.
def wait_for_compute_zone_operation(compute, project_name, operation, zone):
"""Poll for compute zone operation until finished."""
logger.info("wait_for_compute_zone_operation: "
"Waiting for operation {} to finish...".format(
... |
Return the task id associated to the generic source of the signal.
Args:
source: source of the signal, it can be either an object id returned
by a task, a task id, or an actor handle.
Returns:
    - If source is an object id, return the id of the task which created the object.
- If source i... |
Send signal.
The signal has a unique identifier that is computed from (1) the id
of the actor or task sending this signal (i.e., the actor or task calling
this function), and (2) an index that is incremented every time this
source sends a signal. This index starts from 1.
Args:
signal: Sig... |
Get all outstanding signals from sources.
A source can be either (1) an object ID returned by the task (we want
to receive signals from), or (2) an actor handle.
When invoked by the same entity E (where E can be an actor, task or
driver), for each source S in sources, this function returns all signals... |
Reset the worker state associated with any signals that this worker
has received so far.
If the worker calls receive() on a source next, it will get all the
signals generated by that source starting with index = 1.
def reset():
"""
Reset the worker state associated with any signals that this worke... |
Returns True if this is the "first" call for a given key.
Various logging settings can adjust the definition of "first".
Example:
>>> if log_once("some_key"):
... logger.info("Some verbose logging statement")
def log_once(key):
"""Returns True if this is the "first" call for a given k... |
Get a single or a collection of remote objects from the object store.
This method is identical to `ray.get` except it adds support for tuples,
ndarrays and dictionaries.
Args:
object_ids: Object ID of the object to get, a list, tuple, ndarray of
object IDs to get or a dict of {key: obj... |
Return a list of IDs that are ready and a list of IDs that are not.
This method is identical to `ray.wait` except it adds support for tuples
and ndarrays.
Args:
object_ids (List[ObjectID], Tuple(ObjectID), np.array(ObjectID)):
List like of object IDs for objects that may or may not be ... |
User notification for deprecated parameter.
Arguments:
deprecated (str): Deprecated parameter.
replacement (str): Replacement parameter to use instead.
    soft (bool): If True, only log a warning instead of raising an error.
def _raise_deprecation_note(deprecated, replacement, soft=False):
"""User notification for deprecated para... |
Produces a list of Experiment objects.
Converts input from dict, single experiment, or list of
experiments to list of experiments. If input is None,
will return an empty list.
Arguments:
experiments (Experiment | list | dict): Experiments to run.
Returns:
List of experiments.
def... |
Generates an Experiment object from JSON.
Args:
name (str): Name of Experiment.
spec (dict): JSON configuration of experiment.
def from_json(cls, name, spec):
"""Generates an Experiment object from JSON.
Args:
name (str): Name of Experiment.
spe... |
Registers Trainable or Function at runtime.
Assumes already registered if run_object is a string. Does not
register lambdas because they could be part of variant generation.
Also, does not inspect interface of given run_object.
Arguments:
run_object (str|function|class): Tr... |
Perform a QR decomposition of a tall-skinny matrix.
Args:
a: A distributed matrix with shape MxN (suppose K = min(M, N)).
Returns:
A tuple of q (a DistArray) and r (a numpy array) satisfying the
following.
- If q_full = ray.get(DistArray, q).assemble(), then
... |
Perform a modified LU decomposition of a matrix.
This takes a matrix q with orthonormal columns, returns l, u, s such that
q - s = l * u.
Args:
q: A two dimensional orthonormal matrix q.
Returns:
A tuple of a lower triangular matrix l, an upper triangular matrix u,
and a a... |
Provides a natural representation of a string for nice sorting.
def _naturalize(string):
    """Provides a natural representation of a string for nice sorting."""
splits = re.split("([0-9]+)", string)
return [int(text) if text.isdigit() else text.lower() for text in splits] |
Returns path to most recently modified checkpoint.
def _find_newest_ckpt(ckpt_dir):
"""Returns path to most recently modified checkpoint."""
full_paths = [
os.path.join(ckpt_dir, fname) for fname in os.listdir(ckpt_dir)
if fname.startswith("experiment_state") and fname.endswith(".json")
]
... |
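One plausible way to finish the truncated body above: pick the matching file with the latest modification time (the actual selection rule may differ, e.g. lexicographic order on timestamped names):

    if not full_paths:
        return None
    return max(full_paths, key=os.path.getmtime)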
Saves execution state to `self._metadata_checkpoint_dir`.
Overwrites the current session checkpoint, which starts when self
is instantiated.
def checkpoint(self):
"""Saves execution state to `self._metadata_checkpoint_dir`.
Overwrites the current session checkpoint, which starts when ... |
Restores all checkpointed trials from previous run.
Requires user to manually re-register their objects. Also stops
all ongoing trials.
Args:
metadata_checkpoint_dir (str): Path to metadata checkpoints.
search_alg (SearchAlgorithm): Search Algorithm. Defaults to
... |
Returns whether all trials have finished running.
def is_finished(self):
"""Returns whether all trials have finished running."""
if self._total_time > self._global_time_limit:
logger.warning("Exceeded global time limit {} / {}".format(
self._total_time, self._global_time_li... |
Runs one step of the trial event loop.
Callers should typically run this method repeatedly in a loop. They
may inspect or modify the runner's state in between calls to step().
def step(self):
"""Runs one step of the trial event loop.
Callers should typically run this method repeatedly... |
Adds a new trial to this TrialRunner.
Trials may be added at any time.
Args:
trial (Trial): Trial to queue.
def add_trial(self, trial):
"""Adds a new trial to this TrialRunner.
Trials may be added at any time.
Args:
trial (Trial): Trial to queue.
... |
Returns a human readable message for printing to the console.
def debug_string(self, max_debug=MAX_DEBUG_TRIALS):
"""Returns a human readable message for printing to the console."""
messages = self._debug_messages()
states = collections.defaultdict(set)
limit_per_state = collections.Cou... |
Replenishes queue.
Blocks if all trials queued have finished, but search algorithm is
still not finished.
def _get_next_trial(self):
"""Replenishes queue.
Blocks if all trials queued have finished, but search algorithm is
still not finished.
"""
trials_done = a... |
Checkpoints trial based off trial.last_result.
def _checkpoint_trial_if_needed(self, trial):
"""Checkpoints trial based off trial.last_result."""
if trial.should_checkpoint():
# Save trial runtime if possible
if hasattr(trial, "runner") and trial.runner:
self.tri... |
Tries to recover trial.
Notifies SearchAlgorithm and Scheduler if failure to recover.
Args:
trial (Trial): Trial to recover.
error_msg (str): Error message from prior to invoking this method.
def _try_recover(self, trial, error_msg):
"""Tries to recover trial.
... |
Notifies the TrialScheduler and requeues the trial.
This does not notify the SearchAlgorithm because the function
evaluation is still in progress.
def _requeue_trial(self, trial):
"""Notification to TrialScheduler and requeue trial.
This does not notify the SearchAlgorithm because the f... |
Adds next trials to queue if possible.
Note that the timeout is currently unexposed to the user.
Args:
blocking (bool): Blocks until either a trial is available
or is_finished (timeout or search algorithm finishes).
timeout (int): Seconds before blocking times o... |
Stops trial.
Trials may be stopped at any time. If trial is in state PENDING
or PAUSED, calls `on_trial_remove` for scheduler and
    `on_trial_complete(..., early_terminated=True)` for search_alg.
Otherwise waits for result for the trial and calls
`on_trial_complete` for scheduler ... |
Helper function for running examples
def run_func(func, *args, **kwargs):
"""Helper function for running examples"""
ray.init()
func = ray.remote(func)
# NOTE: kwargs not allowed for now
result = ray.get(func.remote(*args))
# Inspect the stack to get calling example
caller = inspect.stac... |
Cython simple class
def example6():
"""Cython simple class"""
ray.init()
cls = ray.remote(cyth.simple_class)
a1 = cls.remote()
a2 = cls.remote()
result1 = ray.get(a1.increment.remote())
result2 = ray.get(a2.increment.remote())
print(result1, result2) |
Cython with blas. NOTE: requires scipy
def example8():
"""Cython with blas. NOTE: requires scipy"""
# See cython_blas.pyx for argument documentation
mat = np.array([[[2.0, 2.0], [2.0, 2.0]], [[2.0, 2.0], [2.0, 2.0]]],
dtype=np.float32)
result = np.zeros((2, 2), np.float32, order="C"... |
Rewrites the given trajectory fragments to encode n-step rewards.
reward[i] = (
reward[i] * gamma**0 +
reward[i+1] * gamma**1 +
... +
reward[i+n_step-1] * gamma**(n_step-1))
The ith new_obs is also adjusted to point to the (i+n_step-1)'th new obs.
At the end of the traject... |
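A sketch of the rewrite described by the formula above, operating in place on parallel trajectory arrays (argument names are illustrative; mid-trajectory terminations are assumed not to occur):

def adjust_nstep(n_step, gamma, rewards, new_obs, dones):
    """Sketch: fold the next n_step - 1 discounted rewards into reward[i]."""
    assert not any(dones[:-1]), "Unexpected done in middle of trajectory"
    traj_length = len(rewards)
    for i in range(traj_length):
        for j in range(1, n_step):
            if i + j < traj_length:
                rewards[i] += gamma ** j * rewards[i + j]
                new_obs[i] = new_obs[i + j]
                dones[i] = dones[i + j]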
Same as tf.reduce_mean() but ignores -inf values.
def _reduce_mean_ignore_inf(x, axis):
"""Same as tf.reduce_mean() but ignores -inf values."""
mask = tf.not_equal(x, tf.float32.min)
x_zeroed = tf.where(mask, x, tf.zeros_like(x))
return (tf.reduce_sum(x_zeroed, axis) / tf.reduce_sum(
tf.cast(ma... |
Reference: https://en.wikipedia.org/wiki/Huber_loss
def _huber_loss(x, delta=1.0):
"""Reference: https://en.wikipedia.org/wiki/Huber_loss"""
return tf.where(
tf.abs(x) < delta,
tf.square(x) * 0.5, delta * (tf.abs(x) - 0.5 * delta)) |
Minimize `objective` using `optimizer` w.r.t. variables in
`var_list` while ensuring the norm of the gradients for each
variable is clipped to `clip_val`.
def _minimize_and_clip(optimizer, objective, var_list, clip_val=10):
    """Minimize `objective` using `optimizer` w.r.t. variables in
    `var_list` while e... |
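A hedged sketch with TF1-style optimizers: compute gradients, clip each by norm, then apply (whether the real helper applies the gradients or returns the clipped list is not shown above):

import tensorflow as tf

def _minimize_and_clip(optimizer, objective, var_list, clip_val=10):
    """Sketch: minimize `objective` with per-variable gradient norm clipping."""
    gradients = optimizer.compute_gradients(objective, var_list=var_list)
    for i, (grad, var) in enumerate(gradients):
        if grad is not None:
            gradients[i] = (tf.clip_by_norm(grad, clip_val), var)
    return optimizer.apply_gradients(gradients)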
Get variables inside a scope
The scope can be specified as a string
Parameters
----------
scope: str or VariableScope
scope in which the variables reside.
trainable_only: bool
whether or not to return only the variables that were marked as
trainable.
Returns
-------
v... |
a common dense layer: y = w^{T}x + b
a noisy layer: y = (w + \epsilon_w*\sigma_w)^{T}x +
(b+\epsilon_b*\sigma_b)
where \epsilon are random variables sampled from factorized normal
distributions and \sigma are trainable variables which are expected to
vanish along the training... |
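A sketch of a factorized noisy dense layer implementing the formula above in TF1 style; the initialization scale and the sign-sqrt noise transform follow the NoisyNet paper, and the names here are illustrative:

import numpy as np
import tensorflow as tf

def noisy_dense(x, out_size, sigma0=0.5, name="noisy"):
    """Sketch: y = (w + eps_w * sigma_w)^T x + (b + eps_b * sigma_b)."""
    in_size = int(x.shape[1])
    with tf.variable_scope(name):
        w = tf.get_variable("w", [in_size, out_size])
        b = tf.get_variable("b", [out_size],
                            initializer=tf.zeros_initializer())
        sigma_w = tf.get_variable(
            "sigma_w", [in_size, out_size],
            initializer=tf.constant_initializer(sigma0 / np.sqrt(in_size)))
        sigma_b = tf.get_variable(
            "sigma_b", [out_size],
            initializer=tf.constant_initializer(sigma0 / np.sqrt(in_size)))

        def f(e):
            # Factorized Gaussian noise transform from the NoisyNet paper.
            return tf.sign(e) * tf.sqrt(tf.abs(e))

        eps_in = f(tf.random_normal([in_size, 1]))
        eps_out = f(tf.random_normal([1, out_size]))
        eps_w = tf.matmul(eps_in, eps_out)
        eps_b = tf.reshape(eps_out, [out_size])
        return tf.matmul(x, w + sigma_w * eps_w) + b + sigma_b * eps_b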
Returns a custom getter that this class's methods must be called under.
All methods of this class must be called under a variable scope that was
passed this custom getter. Example:
```python
network = ConvNetBuilder(...)
with tf.variable_scope("cg", custom_getter=network.get_custom_getter()):
netwo... |
Context that constructs the cnn in the auxiliary arm.
def switch_to_aux_top_layer(self):
    """Context that constructs the cnn in the auxiliary arm."""
if self.aux_top_layer is None:
raise RuntimeError("Empty auxiliary top layer in the network.")
saved_top_layer = self.top_layer
saved_to... |
Construct a conv2d layer on top of cnn.
def conv(self,
num_out_channels,
k_height,
k_width,
d_height=1,
d_width=1,
mode="SAME",
input_layer=None,
num_channels_in=None,
use_batch_norm=None,
... |
Construct a pooling layer.
def _pool(self, pool_name, pool_function, k_height, k_width, d_height,
d_width, mode, input_layer, num_channels_in):
"""Construct a pooling layer."""
if input_layer is None:
input_layer = self.top_layer
else:
self.top_size = num_c... |
Construct a max pooling layer.
def mpool(self,
k_height,
k_width,
d_height=2,
d_width=2,
mode="VALID",
input_layer=None,
num_channels_in=None):
"""Construct a max pooling layer."""
return self._pool("mpool... |
Construct an average pooling layer.
def apool(self,
k_height,
k_width,
d_height=2,
d_width=2,
mode="VALID",
input_layer=None,
num_channels_in=None):
"""Construct an average pooling layer."""
return self._p... |
Batch normalization on `input_layer` without tf.layers.
def _batch_norm_without_layers(self, input_layer, decay, use_scale,
epsilon):
"""Batch normalization on `input_layer` without tf.layers."""
shape = input_layer.shape
num_channels = shape[3] if self.data_f... |
Adds a Batch Normalization layer.
def batch_norm(self,
input_layer=None,
decay=0.999,
scale=False,
epsilon=0.001):
"""Adds a Batch Normalization layer."""
if input_layer is None:
input_layer = self.top_layer
... |
Adds a local response normalization layer.
def lrn(self, depth_radius, bias, alpha, beta):
"""Adds a local response normalization layer."""
name = "lrn" + str(self.counts["lrn"])
self.counts["lrn"] += 1
self.top_layer = tf.nn.lrn(
self.top_layer, depth_radius, bias, alpha, b... |
Fetch the value of a binary key.
def _internal_kv_get(key):
"""Fetch the value of a binary key."""
worker = ray.worker.get_global_worker()
if worker.mode == ray.worker.LOCAL_MODE:
return _local.get(key)
return worker.redis_client.hget(key, "value") |
Globally associates a value with a given binary key.
This only has an effect if the key does not already have a value.
Returns:
already_exists (bool): whether the value already exists.
def _internal_kv_put(key, value, overwrite=False):
"""Globally associates a value with a given binary key.
... |
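A hedged sketch mirroring the `_internal_kv_get` helper above; the LOCAL_MODE dictionary and the use of Redis HSETNX for the no-overwrite case are assumptions:

def _internal_kv_put(key, value, overwrite=False):
    """Sketch: store `value` under `key`; return whether the key existed."""
    worker = ray.worker.get_global_worker()
    if worker.mode == ray.worker.LOCAL_MODE:
        exists = key in _local
        if not exists or overwrite:
            _local[key] = value
        return exists
    if overwrite:
        updated = worker.redis_client.hset(key, "value", value)
    else:
        updated = worker.redis_client.hsetnx(key, "value", value)
    return updated == 0  # already exists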
Deferred init so that we can pass in previously created workers.
def init(self, aggregators):
"""Deferred init so that we can pass in previously created workers."""
assert len(aggregators) == self.num_aggregation_workers, aggregators
if len(self.remote_evaluators) < self.num_aggregation_worker... |
Free a list of IDs from object stores.
This function is a low-level API which should be used in restricted
scenarios.
If local_only is false, the request will be sent to all object stores.
This method will not return any value to indicate whether the deletion is
successful or not. This function i... |
Start the collector worker thread.
If running in standalone mode, the current thread will wait
until the collector thread ends.
def run(self):
"""Start the collector worker thread.
If running in standalone mode, the current thread will wait
until the collector thread ends.
... |
Initialize logger settings.
def init_logger(cls, log_level):
"""Initialize logger settings."""
logger = logging.getLogger("AutoMLBoard")
handler = logging.StreamHandler()
formatter = logging.Formatter("[%(levelname)s %(asctime)s] "
"%(filename)s: %(... |
Run the main event loop for collector thread.
In each round the collector traverses the results log directory
and reloads trial information from the status files.
def run(self):
"""Run the main event loop for collector thread.
        In each round the collector traverses the results log directo... |