def flat_map(self, flatmap_fn):
    """Applies a flatmap operator to the stream.

    Attributes:
        flatmap_fn (function): The user-defined logic of the flatmap
            (e.g. split()).
    """
    ...

def key_by(self, key_selector):
    """Applies a key_by operator to the stream.

    Attributes:
        key_selector (int|function): The index of the key attribute
            (assuming tuple records) or a function that extracts the key.
    """
    ...

def reduce(self, reduce_fn):
    """Applies a rolling reduce operator to the stream.

    Attributes:
        reduce_fn (function): The user-defined reduce logic.
    """
    ...

def sum(self, attribute_selector, state_keeper=None):
    """Applies a rolling sum operator to the stream.

    Attributes:
        attribute_selector (int): The index of the attribute to sum
            (assuming tuple records).
    """
    ...

def time_window(self, window_width_ms):
    """Applies a system time window to the stream.

    Attributes:
        window_width_ms (int): The length of the window in ms.
    """
    ...

def filter(self, filter_fn):
    """Applies a filter to the stream.

    Attributes:
        filter_fn (function): The user-defined filter function.
    """
    op = Operator(
        ...

def inspect(self, inspect_logic):
    """Inspects the content of the stream.

    Attributes:
        inspect_logic (function): The user-defined inspect function.
    """
    ...

def sink(self):
    """Closes the stream with a sink operator."""
    op = Operator(
        _generate_uuid(),
        OpType.Sink,
        "Sink",
        num_instances=self.env.config.parallelism)
    return self.__register(op)

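Chained together, these operators define a dataflow that ends in a sink. A
minimal word-count style sketch, assuming a hypothetical streaming
`Environment` whose `read_text_file` and `execute` methods are illustrative
names rather than confirmed API:

env = Environment()
stream = (env.read_text_file("input.txt")
          .flat_map(lambda line: [(word, 1) for word in line.split()])
          .key_by(0)          # key on the word (tuple index 0)
          .time_window(1000)  # 1s system-time windows
          .sum(1)             # rolling sum of the counts at index 1
          .sink())            # close the stream with a sink operator
env.execute()
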
def close_all_files(self):
    """Close all open files (so that we can open more)."""
    while len(self.open_file_infos) > 0:
        file_info = self.open_file_infos.pop(0)
        file_info.file_handle.close()
        file_info.file_handle = None
        ...

def update_log_filenames(self):
    """Update the list of log files to monitor."""
    log_filenames = os.listdir(self.logs_dir)
    for log_filename in log_filenames:
        full_path = os.path.join(self.logs_dir, log_filename)
        if full_path not in ...

def open_closed_files(self):
    """Open some closed files if they may have new lines.

    Opening more files may require us to close some of the already open
    files.
    """
    ...

def check_log_files_and_publish_updates(self):
    """Get any changes to the log files and push updates to Redis.

    Returns:
        True if anything was published and false otherwise.
    """
    ...

def run(self):
    """Run the log monitor.

    This will query Redis once every second to check if there are new log
    files to monitor. It will also store those log files in Redis.
    """
    ...

def add_configurations(self, experiments):
    """Chains generator given experiment specifications.

    Arguments:
        experiments (Experiment | list | dict): Experiments to run.
    """
    ...

def next_trials(self):
    """Provides a batch of Trial objects to be queued into the TrialRunner.

    A batch ends when self._trial_generator returns None.

    Returns:
        trials (list): Returns a list of trials.
    """
    ...

def _generate_trials(self, experiment_spec, output_path=""):
    """Generates trials with configurations from `_suggest`.

    Creates a trial_id that is passed into `_suggest`.

    Yields:
        Trial objects constructed according to `spec`
    """
    ...

Generates variants from a spec (dict) with unresolved values.

There are two types of unresolved values:

    Grid search: These define a grid search over values. For example, the
    following grid search values in a spec will produce six distinct
    variants in combination:

        "activation":...

def resolve_nested_dict(nested_dict):
    """Flattens a nested dict by joining keys into tuple of paths.

    Can then be passed into `format_vars`.
    """
    res = {}
    for k, v in nested_dict.items():
        ...

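The loop body is truncated above; a plausible completion consistent with the
stated contract (recursively flatten nested keys into tuple paths) is:

def resolve_nested_dict(nested_dict):
    res = {}
    for k, v in nested_dict.items():
        if isinstance(v, dict):
            # Prefix each flattened sub-path with the current key.
            for k_, v_ in resolve_nested_dict(v).items():
                res[(k,) + k_] = v_
        else:
            res[(k,)] = v
    return res

# resolve_nested_dict({"a": {"b": 1}, "c": 2}) -> {("a", "b"): 1, ("c",): 2}
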
def run_board(args):
    """Run main entry for AutoMLBoard.

    Args:
        args: args parsed from command line
    """
    init_config(args)
    # backend service, should import after django settings initialized
    from b...

def init_config(args):
    """Initialize configs of the service.

    Do the following things:
    1. automl board settings
    2. database settings
    3. django settings
    """
    ...

Get the IDs of the GPUs that are available to the worker.
If the CUDA_VISIBLE_DEVICES environment variable was set when the worker
started up, then the IDs returned by this method will be a subset of the
IDs in CUDA_VISIBLE_DEVICES. If not, the IDs will fall in the range
[0, NUM_GPUS - 1], where NUM_GP...

def error_info():
    """Return information about failed tasks."""
    worker = global_worker
    worker.check_connected()
    return (global_state.error_messages(driver_id=worker.task_driver_id) +
            global_state.error_messages(driver_id=DriverID.nil()))

def _initialize_serialization(driver_id, worker=global_worker):
    """Initialize the serialization library.

    This defines a custom serializer for object IDs and also tells ray to
    serialize several exception classes that we define for error handling.
    """
    ...

Connect to an existing Ray cluster or start one and connect to it.
This method handles two cases. Either a Ray cluster already exists and we
just attach this driver to it, or we start all of the processes associated
with a Ray cluster and attach to the newly started cluster.
To start Ray and all of th...

Disconnect the worker, and terminate processes started by ray.init().
This will automatically run at the end when a Python process that uses Ray
exits. It is ok to run this twice in a row. The primary use case for this
function is to cleanup state between tests.
Note that this will clear any remote fu...

def print_logs(redis_client, threads_stopped):
    """Prints log messages from workers on all of the nodes.

    Args:
        redis_client: A client to the primary Redis shard.
        threads_stopped (threading.Event): A threading event used to signal to
            the thread that it should exit.
    """
    ...

Prints messages received in the given output queue.

This checks periodically if any un-raised errors occurred in the background.

Args:
    task_error_queue (queue.Queue): A queue used to receive errors from the
        thread that listens to Redis.
    threads_stopped (threading.Event): A threading ...

Listen to error messages in the background on the driver.
This runs in a separate thread on the driver and pushes (error, time)
tuples to the output queue.
Args:
worker: The worker class that this thread belongs to.
task_error_queue (queue.Queue): A queue used to communicate with the
...

Connect this worker to the raylet, to Plasma, and to Redis.
Args:
node (ray.node.Node): The node to connect.
mode: The mode of the worker. One of SCRIPT_MODE, WORKER_MODE, and
LOCAL_MODE.
log_to_driver (bool): If true, then output from all of the worker
processes on ...

def disconnect():
    """Disconnect this worker from the raylet and object store."""
    # Reset the list of cached remote functions and actors so that if more
    # remote functions or actors are defined and then connect is called again,
    # the remote functi...

Attempt to produce a deterministic class ID for a given class.
The goal here is for the class ID to be the same when this is run on
different worker processes. Pickling, loading, and pickling again seems to
produce more consistent results than simply pickling. This is a bit crazy
and could cause proble...

Enable serialization and deserialization for a particular class.
This method runs the register_class function defined below on every worker,
which will enable ray to properly serialize and deserialize objects of
this class.
Args:
cls (type): The class that ray should use this custom serializer...

Get a remote object or a list of remote objects from the object store.
This method blocks until the object corresponding to the object ID is
available in the local object store. If this object is not in the local
object store, it will be shipped from an object store that has it (once the
object has bee...

def put(value):
    """Store an object in the object store.

    Args:
        value: The Python object to be stored.

    Returns:
        The object ID assigned to this value.
    """
    ...

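The two calls round-trip as follows (assuming `ray.init()` has already been
called):

import ray

obj_id = ray.put([1, 2, 3])  # store the value, receive an ObjectID
value = ray.get(obj_id)      # blocks until the object is available locally
assert value == [1, 2, 3]
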
Return a list of IDs that are ready and a list of IDs that are not.
.. warning::
The **timeout** argument used to be in **milliseconds** (up through
``ray==0.6.1``) and now it is in **seconds**.
If timeout is set, the function returns either when the requested number of
IDs are ready or w...

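A short usage sketch, assuming a release after 0.6.1 so that `timeout` is in
seconds, as the warning above states:

import ray

@ray.remote
def f(i):
    return i

ids = [f.remote(i) for i in range(4)]
# Return once at least 2 results are ready, or after 1.0 second.
ready, not_ready = ray.wait(ids, num_returns=2, timeout=1.0)
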
Define a remote function or an actor class.

This can be used with no arguments to define a remote function or actor as
follows:

.. code-block:: python

    @ray.remote
    def f():
        return 1

    @ray.remote
    class Foo(object):
        def method(self):
            re...

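Invocation then goes through `.remote(...)`, which returns object IDs
immediately instead of blocking; continuing the docstring's example:

result_id = f.remote()           # schedules f and returns an ObjectID
assert ray.get(result_id) == 1

foo = Foo.remote()               # instantiates the actor
method_id = foo.method.remote()  # invokes the actor method asynchronously
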
A thread-local that contains the following attributes.
current_task_id: For the main thread, this field is the ID of this
worker's current running task; for other threads, this field is a
fake random ID.
task_index: The number of tasks that have been submitted from the
...

def get_serialization_context(self, driver_id):
    """Get the SerializationContext of the driver that this worker is processing.

    Args:
        driver_id: The ID of the driver that indicates which driver to get
            the serialization context for.

    Returns:
        The serialization context of the given driver.
    """
    ...

Store an object and attempt to register its class if needed.
Args:
object_id: The ID of the object to store.
value: The value to put in the object store.
depth: The maximum number of classes to recursively register.
Raises:
Exception: An exception is rai...

Put value in the local object store with object id objectid.
This assumes that the value for objectid has not yet been placed in the
local object store.
Args:
object_id (object_id.ObjectID): The object ID of the value to be
put.
value: The value to put i...

Get the value or values in the object store associated with the IDs.
Return the values from the local object store for object_ids. This will
block until all the values for object_ids have been written to the
local object store.
Args:
object_ids (List[object_id.ObjectID]): A...

Submit a remote task to the scheduler.
Tell the scheduler to schedule the execution of the function with
function_descriptor with arguments args. Retrieve object IDs for the
outputs of the function from the scheduler and immediately return them.
Args:
function_descriptor: T...

Run arbitrary code on all of the workers.
This function will first be run on the driver, and then it will be
exported to all of the workers to be run. It will also be run on any
new workers that register later. If ray.init has not been called yet,
then cache the function and export it l...

Retrieve the arguments for the remote function.
This retrieves the values for the arguments to the remote function that
were passed in as object IDs. Arguments that were passed by value are
not changed. This is called by the worker that is executing the remote
function.
Args:
...

Store the outputs of a remote function in the local object store.
This stores the values that were returned by a remote function in the
local object store. If any of the return values are object IDs, then
these object IDs are aliased with the object IDs that the scheduler
assigned for t...

Execute a task assigned to this worker.
This method deserializes a task from the scheduler, and attempts to
execute the task. If the task succeeds, the outputs are stored in the
local object store. If the task throws an exception, RayTaskError
objects are stored in the object store to r...

def _wait_for_and_process_task(self, task):
    """Wait for a task to be ready and process the task.

    Args:
        task: The task to execute.
    """
    function_descriptor = FunctionDescri...

def _get_next_task_from_raylet(self):
    """Get the next task from the raylet.

    Returns:
        A task from the raylet.
    """
    with profiling.profile("worker_idle"):
        task = self.raylet_cl...

def main_loop(self):
    """The main loop a worker runs to receive and execute tasks."""

    def exit(signum, frame):
        shutdown()
        sys.exit(0)

    signal.signal(signal.SIGTERM, exit)
    while True:
        task = se...

def flatten(weights, start=0, stop=2):
    """This method reshapes all values in a dictionary.

    The indices from start to stop will be flattened into a single index.

    Args:
        weights: A dictionary mapping keys to numpy arrays.
        start: The starting index.
        stop: The ending index.
    """
    ...

def address_info(self):
    """Get a dictionary of addresses."""
    return {
        "node_ip_address": self._node_ip_address,
        "redis_address": self._redis_address,
        "object_store_address": self._plasma_store_socket_name,
        "raylet_socket_nam...

def create_redis_client(self):
    """Create a redis client."""
    return ray.services.create_redis_client(
        self._redis_address, self._ray_params.redis_password)

Return an incremental temporary file name. The file is not created.

Args:
    suffix (str): The suffix of the temp file.
    prefix (str): The prefix of the temp file.
    directory_name (str): The base directory of the temp file.

Returns:
    The file name as a string. If ...

Generate partially randomized filenames for log files.
Args:
name (str): descriptive string for this log file.
redirect_output (bool): True if files should be generated for
logging stdout and stderr and false if stdout and stderr
should not be redirected....

def _prepare_socket_file(self, socket_path):
    """Prepare the socket file for raylet and plasma.

    This method helps to prepare a socket file.
    1. Make the directory if the directory does not exist.
    2. If the socket file exists, raise exception.

    Args:
        socket_path (string): the socket file to prepare.
    """
    ...

def start_redis(self):
    """Start the Redis servers."""
    assert self._redis_address is None
    redis_log_files = [self.new_log_files("redis")]
    for i in range(self._ray_params.num_redis_shards):
        redis_log_files.append(self.new_log_files("redis-shard_" + str...

def start_log_monitor(self):
    """Start the log monitor."""
    stdout_file, stderr_file = self.new_log_files("log_monitor")
    process_info = ray.services.start_log_monitor(
        self.redis_address,
        self._logs_dir,
        stdout_file=stdout_file,
        ...

def start_reporter(self):
    """Start the reporter."""
    stdout_file, stderr_file = self.new_log_files("reporter", True)
    process_info = ray.services.start_reporter(
        self.redis_address,
        stdout_file=stdout_file,
        stderr_file=stderr_file,
        ...

def start_dashboard(self):
    """Start the dashboard."""
    stdout_file, stderr_file = self.new_log_files("dashboard", True)
    self._webui_url, process_info = ray.services.start_dashboard(
        self.redis_address,
        self._temp_dir,
        stdout_file=stdout_fi...

def start_plasma_store(self):
    """Start the plasma store."""
    stdout_file, stderr_file = self.new_log_files("plasma_store")
    process_info = ray.services.start_plasma_store(
        stdout_file=stdout_file,
        stderr_file=stderr_file,
        object_store_me...

def start_raylet(self, use_valgrind=False, use_profiler=False):
    """Start the raylet.

    Args:
        use_valgrind (bool): True if we should start the process in
            valgrind.
        use_profiler (bool): True if we should start the process in the
            valgrind profiler.
    """
    ...

def new_worker_redirected_log_file(self, worker_id):
    """Create new log files for a worker to redirect its output."""
    worker_stdout_file, worker_stderr_file = (self.new_log_files(
        "worker-" + ray.utils.binary_to_hex(worker_id), ...

def start_monitor(self):
    """Start the monitor."""
    stdout_file, stderr_file = self.new_log_files("monitor")
    process_info = ray.services.start_monitor(
        self._redis_address,
        stdout_file=stdout_file,
        stderr_file=stderr_file,
        autosca...

def start_raylet_monitor(self):
    """Start the raylet monitor."""
    stdout_file, stderr_file = self.new_log_files("raylet_monitor")
    process_info = ray.services.start_raylet_monitor(
        self._redis_address,
        stdout_file=stdout_file,
        stderr_fi...

def start_head_processes(self):
    """Start head processes on the node."""
    logger.info(
        "Process STDOUT and STDERR is being redirected to {}.".format(
            self._logs_dir))
    assert self._redis_address is None
    # If this is the head nod...

def start_ray_processes(self):
    """Start all of the processes on the node."""
    logger.info(
        "Process STDOUT and STDERR is being redirected to {}.".format(
            self._logs_dir))
    self.start_plasma_store()
    self.start_raylet()
    ...

Kill a process of a given type.
If the process type is PROCESS_TYPE_REDIS_SERVER, then we will kill all
of the Redis servers.
If the process was started in valgrind, then we will raise an exception
if the process has a non-zero exit code.
Args:
process_type: The ty...

def kill_redis(self, check_alive=True):
    """Kill the Redis servers.

    Args:
        check_alive (bool): Raise an exception if any of the processes
            were already dead.
    """
    ...

def kill_plasma_store(self, check_alive=True):
    """Kill the plasma store.

    Args:
        check_alive (bool): Raise an exception if the process was already
            dead.
    """
    ...

def kill_raylet(self, check_alive=True):
    """Kill the raylet.

    Args:
        check_alive (bool): Raise an exception if the process was already
            dead.
    """
    ...

def kill_log_monitor(self, check_alive=True):
    """Kill the log monitor.

    Args:
        check_alive (bool): Raise an exception if the process was already
            dead.
    """
    ...

def kill_reporter(self, check_alive=True):
    """Kill the reporter.

    Args:
        check_alive (bool): Raise an exception if the process was already
            dead.
    """
    ...

def kill_dashboard(self, check_alive=True):
    """Kill the dashboard.

    Args:
        check_alive (bool): Raise an exception if the process was already
            dead.
    """
    ...

def kill_monitor(self, check_alive=True):
    """Kill the monitor.

    Args:
        check_alive (bool): Raise an exception if the process was already
            dead.
    """
    ...

def kill_raylet_monitor(self, check_alive=True):
    """Kill the raylet monitor.

    Args:
        check_alive (bool): Raise an exception if the process was already
            dead.
    """
    ...

def kill_all_processes(self, check_alive=True):
    """Kill all of the processes.

    Note that this is slower than necessary because it calls kill, wait,
    kill, wait, ... instead of kill, kill, ..., wait, wait, ...

    Args:
        check_alive (bool): Raise an exception if any of the processes were
            already dead.
    """
    ...

def live_processes(self):
    """Return a list of the live processes.

    Returns:
        A list of the live processes.
    """
    result = []
    for process_type, process_infos in self.all_proc...

def create_shared_noise(count):
    """Create a large array of noise to be shared by all workers."""
    seed = 123
    noise = np.random.RandomState(seed).randn(count).astype(np.float32)
    return noise

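The point of the shared array is that it goes into the object store once and
every worker reads the same copy; a sketch (the remote function and slicing
scheme are illustrative):

import ray

noise = create_shared_noise(10**7)
noise_id = ray.put(noise)  # stored once in the object store

@ray.remote
def perturbation(noise, index, dim):
    # Passing noise_id below hands each task the shared array from plasma.
    return noise[index:index + dim]  # each worker takes its own slice

chunk = ray.get(perturbation.remote(noise_id, 0, 100))
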
def get_model_config(model_name, dataset):
    """Map model name to model network configuration."""
    model_map = _get_model_map(dataset.name)
    if model_name not in model_map:
        raise ValueError("Invalid model name \"%s\" for dataset \"%s\"" %
                         ...

def register_model(model_name, dataset_name, model_func):
    """Register a new model that can be obtained with `get_model_config`."""
    model_map = _get_model_map(dataset_name)
    if model_name in model_map:
        raise ValueError("Model \"%s\" i...

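Together these form a simple name-to-constructor registry; a hypothetical
usage, where `my_model` and the `cifar10_dataset` object (with a `.name`
attribute) are stand-ins:

def my_model(*args, **kwargs):
    ...  # build and return the model configuration

register_model("my_model", "cifar10", my_model)
config = get_model_config("my_model", cifar10_dataset)
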
Do a rollout.
If add_noise is True, the rollout will take noisy actions with
noise drawn from that stream. Otherwise, no action noise will be added.
Parameters
----------
policy: tf object
policy from which to draw actions
env: GymEnv
environment from which to draw rewards, don...

def next_trials(self):
    """Provides Trial objects to be queued into the TrialRunner.

    Returns:
        trials (list): Returns a list of trials.
    """
    trials ...

def _generate_trials(self, unresolved_spec, output_path=""):
    """Generates Trial objects with the variant generation process.

    Uses a fixed point iteration to resolve variants. All trials
    should be able to be generated at once.

    See also: `ray.tune.suggest.variant_generator`.

    Yields:
        Trial object
    """
    ...

Returns result of applying `self.operation` to a contiguous subsequence
of the array:

    self.operation(
        arr[start], operation(arr[start+1], operation(... arr[end])))

Parameters
----------
start: int
    beginning of the subsequence
end: int
    ...

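For intuition, a sketch with a sum tree; the constructor and the half-open
range convention follow the common baselines-style segment tree and are
assumptions here:

import operator

tree = SegmentTree(capacity=4, operation=operator.add, neutral_element=0.0)
for i, v in enumerate([1.0, 2.0, 3.0, 4.0]):
    tree[i] = v

tree.reduce(0, 4)  # 10.0: folds the operation over the whole array
tree.reduce(1, 3)  # 5.0: 2.0 + 3.0
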
def set_flushing_policy(flushing_policy):
    """Serialize this policy for Monitor to pick up."""
    if "RAY_USE_NEW_GCS" not in os.environ:
        raise Exception(
            "set_flushing_policy() is only available when environment "
            "variable RAY_USE_NEW_...

def get_ssh_key():
    """Returns the ssh key used to connect to the cluster workers.

    If the env var TUNE_CLUSTER_SSH_KEY is provided, then this key
    will be used for syncing across different nodes.
    """
    ...

def on_trial_complete(self,
                      trial_id,
                      result=None,
                      ...):
    """Passes the result to HyperOpt unless early terminated or errored.

    The result is internally negated when interacting with HyperOpt
    so that HyperOpt can "maximize" this value, as it minimizes by default.
    """
    ...

def plasma_prefetch(object_id):
    """Tells plasma to prefetch the given object_id."""
    local_sched_client = ray.worker.global_worker.raylet_client
    ray_obj_id = ray.ObjectID(object_id)
    local_sched_client.fetch_or_reconstruct([ray_obj_id], True)

def plasma_get(object_id):
    """Get an object directly from plasma without going through object table.

    Precondition: plasma_prefetch(object_id) has been called before.
    """
    ...

def enable_writes(self):
    """Restores the state of the batched queue for writing."""
    self.write_buffer = []
    self.flush_lock = threading.RLock()
    self.flush_thread = FlushThread(self.max_batch_time,
                                    ...

def _wait_for_reader(self):
    """Checks for backpressure by the downstream reader."""
    if self.max_size <= 0:  # Unlimited queue
        return
    if self.write_item_offset - self.cached_remote_offset <= self.max_size:
        return  # Hasn't...

def collect_samples(agents, sample_batch_size, num_envs_per_worker,
                    train_batch_size):
    """Collects at least train_batch_size samples, never discarding any."""
    num_timesteps_so_far = 0
    trajectories = []
    agent_dict = {...

def collect_samples_straggler_mitigation(agents, train_batch_size):
    """Collects at least train_batch_size samples.

    This is the legacy behavior as of 0.6, and launches extra sample tasks to
    potentially improve performance but can result in many wasted samples.
    """
    ...

Improve the formatting of an exception thrown by a remote function.
This method takes a traceback from an exception and makes it nicer by
removing a few uninformative lines and adding some space to indent the
remaining lines nicely.
Args:
exception_message (str): A message generated by traceba...

Push an error message to the driver to be printed in the background.
Args:
worker: The worker to use.
error_type (str): The type of the error.
message (str): The message that will be printed in the background
on the driver.
driver_id: The ID of the driver to push the err...

Push an error message to the driver to be printed in the background.
Normally the push_error_to_driver function should be used. However, in some
instances, the raylet client is not available, e.g., because the
error happens in Python before the driver or worker has connected to the
backend processes.
...

def is_cython(obj):
    """Check if an object is a Cython function or method"""
    # TODO(suo): We could split these into two functions, one for Cython
    # functions and another for Cython methods.
    # TODO(suo): There doesn't appear to be a Cython function 'type...

def is_function_or_method(obj):
    """Check if an object is a function or method.

    Args:
        obj: The Python object in question.

    Returns:
        True if the object is a function or method.
    """
    ...

Generate a random string to use as an ID.
Note that users may seed numpy, which could cause this function to generate
duplicate IDs. Therefore, we need to seed numpy ourselves, but we can't
interfere with the state of the user's random number generator, so we
extract the state of the random number gene...

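The described save/seed/restore dance might look like the following sketch
(the 20-byte ID length is an assumption):

import numpy as np

def _random_string():
    # Save the user's RNG state, reseed from OS entropy, draw the ID,
    # then restore the state so user code sees an untouched generator.
    state = np.random.get_state()
    np.random.seed(None)
    random_id = np.random.bytes(20)
    np.random.set_state(state)
    return random_id
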
Make this unicode in Python 3, otherwise leave it as bytes.
Args:
byte_str: The byte string to decode.
allow_none: If true, then we will allow byte_str to be None in which
case we will return an empty string. TODO(rkn): Remove this flag.
This is only here to simplify upgradi...

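A sketch of the described contract (the "ascii" codec choice is an
assumption):

def decode(byte_str, allow_none=False):
    if byte_str is None and allow_none:
        return ""
    if not isinstance(byte_str, bytes):
        raise ValueError("The argument must be a bytes object.")
    return byte_str.decode("ascii")
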