Reads disk to obtain the health pills for a run at a specific step.
This could be much slower than the alternative path of just returning all
health pills sampled by the event multiplexer. It could take tens of minutes
to complete this call for large graphs with big step values (in the
thousands).
... |
Creates health pills out of data in an event.
Creates health pills out of the event and adds them to the mapping.
Args:
node_name_set: A set of node names that are relevant.
mapping: The mapping from node name to HealthPillEvents.
This object may be destructively modified.
target_s... |
Creates a HealthPillEvent containing various properties of a health pill.
Args:
wall_time: The wall time in seconds.
step: The session run step of the event.
device_name: The name of the node's device.
output_slot: The numeric output slot.
node_name: The name of the node (without the ... |
A (wrapped) werkzeug handler for serving the numerics alert report.
Accepts GET requests and responds with an array of JSON-ified
NumericsAlertReportRow.
Each JSON-ified NumericsAlertReportRow object has the following format:
{
'device_name': string,
'tensor_name': string,
'first_t... |
Convert a `TensorBoardInfo` to string form to be stored on disk.
The format returned by this function is opaque and should only be
interpreted by `_info_from_string`.
Args:
info: A valid `TensorBoardInfo` object.
Raises:
ValueError: If any field on `info` is not of the correct type.
Returns:
A... |
Parse a `TensorBoardInfo` object from its string representation.
Args:
info_string: A string representation of a `TensorBoardInfo`, as
produced by a previous call to `_info_to_string`.
Returns:
A `TensorBoardInfo` value.
Raises:
ValueError: If the provided string is not valid JSON, or if it d... |
Compute a `TensorBoardInfo.cache_key` field.
The format returned by this function is opaque. Clients may only
inspect it by comparing it for equality with other results from this
function.
Args:
working_directory: The directory from which TensorBoard was launched
and relative to which paths like `--... |
Get path to directory in which to store info files.
The directory returned by this function is "owned" by this module. If
the contents of the directory are modified other than via the public
functions of this module, subsequent behavior is undefined.
The directory will be created if it does not exist.
def _g... |
Write TensorBoardInfo to the current process's info file.
This should be called by `main` once the server is ready. When the
server shuts down, `remove_info_file` should be called.
Args:
tensorboard_info: A valid `TensorBoardInfo` object.
Raises:
ValueError: If any field on `info` is not of the corre... |
Remove the current process's TensorBoardInfo file, if it exists.
If the file does not exist, no action is taken and no error is raised.
def remove_info_file():
"""Remove the current process's TensorBoardInfo file, if it exists.
If the file does not exist, no action is taken and no error is raised.
"""
try:... |
Return TensorBoardInfo values for running TensorBoard processes.
This function may not provide a perfect snapshot of the set of running
processes. Its result set may be incomplete if the user has cleaned
their /tmp/ directory while TensorBoard processes are running. It may
contain extraneous entries if TensorB... |
Start a new TensorBoard instance, or reuse a compatible one.
If the cache key determined by the provided arguments and the current
working directory (see `cache_key`) matches the cache key of a running
TensorBoard process (see `get_all`), that process will be reused.
Otherwise, a new TensorBoard process will ... |
Find a running TensorBoard instance compatible with the cache key.
Returns:
A `TensorBoardInfo` object, or `None` if none matches the cache key.
def _find_matching_instance(cache_key):
"""Find a running TensorBoard instance compatible with the cache key.
Returns:
A `TensorBoardInfo` object, or `None` i... |
Read the given file, if it exists.
Args:
filename: A path to a file.
Returns:
A string containing the file contents, or `None` if the file does
not exist.
def _maybe_read_file(filename):
"""Read the given file, if it exists.
Args:
filename: A path to a file.
Returns:
A string containi... |
Processes raw trace data and returns the UI data.
def process_raw_trace(raw_trace):
"""Processes raw trace data and returns the UI data."""
trace = trace_events_pb2.Trace()
trace.ParseFromString(raw_trace)
return ''.join(trace_events_json.TraceEventsJsonStream(trace)) |
Whether this plugin is active and has any profile data to show.
Detecting profile data is expensive, so this process runs asynchronously
and the value reported by this method is the cached value and may be stale.
Returns:
Whether any run has profile data.
def is_active(self):
"""Whether this pl... |
Helper that maps a frontend run name to a profile "run" directory.
The frontend run name consists of the TensorBoard run name (aka the relative
path from the logdir root to the directory containing the data) path-joined
to the Profile plugin's "run" concept (which is a subdirectory of the
plugins/profi... |
Generator for pairs of "run name" and a list of tools for that run.
The "run name" here is a "frontend run name" - see _run_dir() for the
definition of a "frontend run name" and how it maps to a directory of
profile data for a specific profile "run". The profile plugin concept of
"run" is different fro... |
Returns available hosts for the run and tool in the log directory.
In the plugin log directory, each directory contains profile data for a
single run (identified by the directory name), and files in the run
directory contains data for different tools and hosts. The file that
contains profile for a spec... |
Retrieves and processes the tool data for a run and a host.
Args:
request: XMLHttpRequest
Returns:
A string that can be served to the frontend tool or None if tool,
run or host is invalid.
def data_impl(self, request):
"""Retrieves and processes the tool data for a run and a host.
... |
Run a temperature simulation.
This will simulate an object at temperature `initial_temperature`
sitting at rest in a large room at temperature `ambient_temperature`.
The object has some intrinsic `heat_coefficient`, which indicates
how much thermal conductivity it has: for instance, metals have high
thermal ... |
Run simulations on a reasonable set of parameters.
Arguments:
logdir: the directory into which to store all the runs' data
verbose: if true, print out each run's name as it begins
def run_all(logdir, verbose=False):
"""Run simulations on a reasonable set of parameters.
Arguments:
logdir: the direct... |
Makes Python object appropriate for JSON serialization.
- Replaces instances of Infinity/-Infinity/NaN with strings.
- Turns byte strings into unicode strings.
- Turns sets into sorted lists.
- Turns tuples into lists.
Args:
obj: Python data structure.
encoding: Charset used to decode byte strings.
... |
Create a legacy text summary op.
Text data summarized via this plugin will be visible in the Text Dashboard
in TensorBoard. The standard TensorBoard Text Dashboard will render markdown
in the strings, and will automatically organize 1D and 2D tensors into tables.
If a tensor with more than 2 dimensions is prov... |
Create a legacy text summary protobuf.
Arguments:
name: A name for the generated node. Will also serve as a series name in
TensorBoard.
data: A Python bytestring (of type bytes), or Unicode string. Or a numpy
data array of those types.
display_name: Optional name for this summary in TensorBoa... |
Return the string message associated with TensorBoard purges.
def _GetPurgeMessage(most_recent_step, most_recent_wall_time, event_step,
event_wall_time, num_expired_scalars, num_expired_histos,
num_expired_comp_histos, num_expired_images,
num_expired_audio... |
Create an event generator for file or directory at given path string.
def _GeneratorFromPath(path):
"""Create an event generator for file or directory at given path string."""
if not path:
raise ValueError('path must be a valid string')
if io_wrapper.IsTensorFlowEventsFile(path):
return event_file_loader... |
Convert the string file_version in event.proto into a float.
Args:
file_version: String file_version from event.proto
Returns:
Version number as a float.
def _ParseFileVersion(file_version):
"""Convert the string file_version in event.proto into a float.
Args:
file_version: String file_version f... |
Loads all events added since the last call to `Reload`.
If `Reload` was never called, loads all events in the file.
Returns:
The `EventAccumulator`.
def Reload(self):
"""Loads all events added since the last call to `Reload`.
If `Reload` was never called, loads all events in the file.
Ret... |
Return the contents of a given plugin asset.
Args:
plugin_name: The string name of a plugin.
asset_name: The string name of an asset.
Returns:
The string contents of the plugin asset.
Raises:
KeyError: If the asset is not available.
def RetrievePluginAsset(self, plugin_name, asse... |
Returns the timestamp in seconds of the first event.
If the first event has been loaded (either by this method or by `Reload`),
this returns immediately. Otherwise, it will load in the first event. Note
that this means that calling `Reload` will cause this to block until
`Reload` has finished.
Retu... |
Returns a dict mapping tags to content specific to that plugin.
Args:
plugin_name: The name of the plugin for which to fetch plugin-specific
content.
Raises:
KeyError: if the plugin name is not found.
Returns:
A dict mapping tags to plugin-specific content (which are always stri... |
Return all tags found in the value stream.
Returns:
A `{tagType: ['list', 'of', 'tags']}` dictionary.
def Tags(self):
"""Return all tags found in the value stream.
Returns:
A `{tagType: ['list', 'of', 'tags']}` dictionary.
"""
return {
IMAGES: self.images.Keys(),
AUDIO... |
Return the graph definition, if there is one.
If the graph is stored directly, return that. If no graph is stored
directly but a metagraph is stored containing a graph, return that.
Raises:
ValueError: If there is no graph for this run.
Returns:
The `graph_def` proto.
def Graph(self):
... |
Return the metagraph definition, if there is one.
Raises:
ValueError: If there is no metagraph for this run.
Returns:
The `meta_graph_def` proto.
def MetaGraph(self):
"""Return the metagraph definition, if there is one.
Raises:
ValueError: If there is no metagraph for this run.
... |
Given a tag, return the associated session.run() metadata.
Args:
tag: A string tag associated with the event.
Raises:
ValueError: If the tag is not found.
Returns:
The metadata in form of `RunMetadata` proto.
def RunMetadata(self, tag):
"""Given a tag, return the associated session... |
Maybe purge orphaned data due to a TensorFlow crash.
When TensorFlow crashes at step T+O and restarts at step T, any events
written after step T are now "orphaned" and will be at best misleading if
they are included in TensorBoard.
This logic attempts to determine if there is orphaned data, and purge ... |
Check and discard expired events using SessionLog.START.
Check for a SessionLog.START event and purge all previously seen events
with larger steps, because they are out of date. Because of supervisor
threading, it is possible that this logic will cause the first few event
messages to be discarded since... |
Check for out-of-order event.step and discard expired events for tags.
Check if the event is out of order relative to the global most recent step.
If it is, purge outdated summaries for tags that the event contains.
Args:
event: The event to use as reference. If the event is out-of-order, all
... |
Processes a proto histogram by adding it to accumulated state.
def _ProcessHistogram(self, tag, wall_time, step, histo):
"""Processes a proto histogram by adding it to accumulated state."""
histo = self._ConvertHistogramProtoToTuple(histo)
histo_ev = HistogramEvent(wall_time, step, histo)
self.histogra... |
Callback for _ProcessHistogram.
def _CompressHistogram(self, histo_ev):
"""Callback for _ProcessHistogram."""
return CompressedHistogramEvent(
histo_ev.wall_time,
histo_ev.step,
compressor.compress_histogram_proto(
histo_ev.histogram_value, self._compression_bps)) |
Processes an image by adding it to accumulated state.
def _ProcessImage(self, tag, wall_time, step, image):
"""Processes an image by adding it to accumulated state."""
event = ImageEvent(wall_time=wall_time,
step=step,
encoded_image_string=image.encoded_image_strin... |
Processes an audio value by adding it to accumulated state.
def _ProcessAudio(self, tag, wall_time, step, audio):
"""Processes an audio value by adding it to accumulated state."""
event = AudioEvent(wall_time=wall_time,
step=step,
encoded_audio_string=audio.encoded_audio_string,... |
Processes a simple value by adding it to accumulated state.
def _ProcessScalar(self, tag, wall_time, step, scalar):
"""Processes a simple value by adding it to accumulated state."""
sv = ScalarEvent(wall_time=wall_time, step=step, value=scalar)
self.scalars.AddItem(tag, sv) |
Purge all events that have occurred after the given event.step.
If by_tags is True, purge all events that occurred after the given
event.step, but only for the tags that the event has. Non-sequential
event.steps suggest that a TensorFlow restart occurred, and we discard
the out-of-order events to displ... |
Loads all new events from disk as raw serialized proto bytestrings.
Calling Load multiple times in a row will not 'drop' events as long as the
return value is not iterated over.
Yields:
All event proto bytestrings in the file that have not been yielded yet.
def Load(self):
"""Loads all new even... |
Loads all new events from disk.
Calling Load multiple times in a row will not 'drop' events as long as the
return value is not iterated over.
Yields:
All events in the file that have not been yielded yet.
def Load(self):
"""Loads all new events from disk.
Calling Load multiple times in a r... |
input: unscaled sections.
returns: sections scaled to [0, 255]
def scale_sections(sections, scaling_scope):
'''
input: unscaled sections.
returns: sections scaled to [0, 255]
'''
new_sections = []
if scaling_scope == 'layer':
for section in sections:
new_sections.append(scale_image_for_display... |
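The per-layer scaling that `scale_image_for_display` performs can be sketched as a linear map onto [0, 255]. The function name and the handling of constant sections here are assumptions for illustration:

```python
def scale_for_display(section):
    # Hypothetical sketch: linearly rescale one section to [0, 255].
    lo = min(section)
    hi = max(section)
    if hi == lo:
        # A constant section carries no contrast; render it as black.
        return [0 for _ in section]
    return [int(255 * (v - lo) / (hi - lo)) for v in section]
```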
Records the summary values based on an updated message from the debugger.
Logs an error message if writing the event to disk fails.
Args:
event: The Event proto to be processed.
def on_value_event(self, event):
"""Records the summary values based on an updated message from the debugger.
Logs a... |
Parses the session_run_index value from the event proto.
Args:
event: The event with metadata that contains the session_run_index.
Returns:
The int session_run_index value. Or
constants.SENTINEL_FOR_UNDETERMINED_STEP if it could not be determined.
def _parse_session_run_index(self, event):
... |
Creates fixed size histogram by adding compression to accumulated state.
This routine transforms a histogram at a particular step by interpolating its
variable number of buckets to represent their cumulative weight at a constant
number of compression points. This significantly reduces the size of the
histogram... |
Creates fixed size histogram by adding compression to accumulated state.
This routine transforms a histogram at a particular step by linearly
interpolating its variable number of buckets to represent their cumulative
weight at a constant number of compression points. This significantly reduces
the size of the ... |
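The interpolation idea described above can be sketched as follows: accumulate bucket weights, then for each compression point (a basis point in [0, 10000]) linearly interpolate the value at which the cumulative weight reaches that fraction of the total. The names and edge handling are illustrative assumptions, not TensorBoard's actual compressor API:

```python
def compress_histogram(limits, counts, bps_points):
    # limits[i] is the upper edge of bucket i; counts[i] is its weight.
    cum, running = [], 0.0
    for c in counts:
        running += c
        cum.append(running)
    total = running
    values = []
    for bps in bps_points:
        target = total * bps / 10000.0
        for i, c in enumerate(cum):
            if c >= target:
                left_cum = cum[i - 1] if i > 0 else 0.0
                left_lim = limits[i - 1] if i > 0 else limits[0]
                if c == left_cum:  # empty bucket: snap to its edge
                    values.append(limits[i])
                else:
                    frac = (target - left_cum) / (c - left_cum)
                    values.append(left_lim + frac * (limits[i] - left_lim))
                break
    return values
```

Whatever the bucket count of the input, the output always has `len(bps_points)` entries, which is what keeps the compressed histograms a fixed size.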
Affinely map from [x0, x1] onto [y0, y1].
def _lerp(x, x0, x1, y0, y1):
"""Affinely map from [x0, x1] onto [y0, y1]."""
return y0 + (x - x0) * float(y1 - y0) / (x1 - x0) |
The images plugin is active iff any run has at least one relevant tag.
def is_active(self):
"""The images plugin is active iff any run has at least one relevant tag."""
if self._db_connection_provider:
# The plugin is active if one relevant tag can be found in the database.
db = self._db_connection... |
Given a tag and list of runs, serve a list of metadata for images.
Note that the images themselves are not sent; instead, we respond with URLs
to the images. The frontend should treat these URLs as opaque and should not
try to parse information about them or generate them itself, as the format
may chan... |
Builds a JSON-serializable object with information about images.
Args:
run: The name of the run.
tag: The name of the tag the images all belong to.
sample: The zero-indexed sample of the image for which to retrieve
information. For instance, setting `sample` to `2` will fetch
info... |
Returns the actual image bytes for a given image.
Args:
run: The name of the run the image belongs to.
tag: The name of the tag the images belongs to.
index: The index of the image in the current reservoir.
sample: The zero-indexed sample of the image to retrieve (for example,
setti... |
Serves an individual image.
def _serve_individual_image(self, request):
"""Serves an individual image."""
run = request.args.get('run')
tag = request.args.get('tag')
index = int(request.args.get('index'))
sample = int(request.args.get('sample', 0))
data = self._get_individual_image(run, tag, in... |
Generate a PR curve with precision and recall evenly weighted.
Arguments:
logdir: The directory into which to store all the runs' data.
steps: The number of steps to run for.
run_name: The name of the run.
thresholds: The number of thresholds to use for PR curves.
mask_every_other_prediction: Whe... |
Generate PR curve summaries.
Arguments:
logdir: The directory into which to store all the runs' data.
steps: The number of steps to run for.
verbose: Whether to print the names of runs into stdout during execution.
thresholds: The number of thresholds to use for PR curves.
def run_all(logdir, steps,... |
Write an image summary.
Arguments:
name: A name for this summary. The summary tag used for TensorBoard will
be this name prefixed by any active name scopes.
data: A `Tensor` representing pixel data with shape `[k, h, w, c]`,
where `k` is the number of images, `h` and `w` are the height and
... |
Sets the examples to be displayed in WIT.
Args:
examples: List of example protos.
Returns:
self, in order to enable method chaining.
def set_examples(self, examples):
"""Sets the examples to be displayed in WIT.
Args:
examples: List of example protos.
Returns:
self, in ... |
Sets the model for inference as a TF Estimator.
Instead of using TF Serving to host a model for WIT to query, WIT can
directly use a TF Estimator object as the model to query. In order to
accomplish this, a feature_spec must also be provided to parse the
example protos for input into the estimator.
... |
Sets a second model for inference as a TF Estimator.
If you wish to compare the results of two models in WIT, use this method
to setup the details of the second model.
Instead of using TF Serving to host a model for WIT to query, WIT can
directly use a TF Estimator object as the model to query. In ord... |
Sets a custom function for inference.
Instead of using TF Serving to host a model for WIT to query, WIT can
directly use a custom function as the model to query. In this case, the
provided function should accept example protos and return:
- For classification: A 2D list of numbers. The first dimensio... |
Sets a second custom function for inference.
If you wish to compare the results of two models in WIT, use this method
to setup the details of the second model.
Instead of using TF Serving to host a model for WIT to query, WIT can
directly use a custom function as the model to query. In this case, the
... |
Reshape a rank 4 array to be rank 2, where each column of block_width is
a filter, and each row of block height is an input channel. For example:
[[[[ 11, 21, 31, 41],
[ 51, 61, 71, 81],
[ 91, 101, 111, 121]],
[[ 12, 22, 32, 42],
[ 52, 62, 72, 82],
[ 92, 102, 112, ... |
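One common way to flatten a rank-4 convolution kernel into such a 2-D grid is a transpose followed by a reshape. The axis order below, assuming weights of shape `(height, width, in_channels, out_channels)`, is an assumption chosen to put one input channel per row block and one filter per column block; it is not necessarily the plugin's exact layout:

```python
import numpy as np

def conv_to_grid(weights):
    # Hypothetical sketch: (h, w, cin, cout) -> (cin * h, cout * w),
    # so each row block is an input channel and each column block a filter.
    h, w, cin, cout = weights.shape
    return weights.transpose(2, 0, 3, 1).reshape(cin * h, cout * w)
```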
Reshapes arrays of ranks not in {1, 2, 4}
def _reshape_irregular_array(self, array, section_height, image_width):
'''Reshapes arrays of ranks not in {1, 2, 4}
'''
section_area = section_height * image_width
flattened_array = np.ravel(array)
if not self.config['show_all']:
flattened_array = f... |
input: unprocessed numpy arrays.
returns: columns of the size that they will appear in the image, not scaled
for display. That needs to wait until after variance is computed.
def _arrays_to_sections(self, arrays):
'''
input: unprocessed numpy arrays.
returns: columns of the size that they ... |
Computes the variance of corresponding sections over time.
Returns:
a list of np arrays.
def _sections_to_variance_sections(self, sections_over_time):
'''Computes the variance of corresponding sections over time.
Returns:
a list of np arrays.
'''
variance_sections = []
for i in r... |
Clears the deque if certain parts of the config have changed.
def _maybe_clear_deque(self):
'''Clears the deque if certain parts of the config have changed.'''
for config_item in ['values', 'mode', 'show_all']:
if self.config[config_item] != self.old_config[config_item]:
self.sections_over_time.... |
Decorator to define a function that lazily loads the module 'name'.
This can be used to defer importing troublesome dependencies - e.g. ones that
are large and infrequently used, or that cause a dependency cycle -
until they are actually used.
Args:
name: the fully-qualified name of the module; typically ... |
Memoizing decorator for f, which must have exactly 1 hashable argument.
def _memoize(f):
"""Memoizing decorator for f, which must have exactly 1 hashable argument."""
nothing = object() # Unique "no value" sentinel object.
cache = {}
# Use a reentrant lock so that if f references the resulting wrapper we die
... |
Provide the root module of a TF-like API for use within TensorBoard.
By default this is equivalent to `import tensorflow as tf`, but it can be used
in combination with //tensorboard/compat:tensorflow (to fall back to a stub TF
API implementation if the real one is not available) or with
//tensorboard/compat:no... |
Provide the root module of a TF-2.0 API for use within TensorBoard.
Returns:
The root module of a TF-2.0 API, if available.
Raises:
ImportError: if a TF-2.0 API is not available.
def tf2():
"""Provide the root module of a TF-2.0 API for use within TensorBoard.
Returns:
The root module of a TF-2.... |
Provide pywrap_tensorflow access in TensorBoard.
pywrap_tensorflow cannot be accessed from tf.python.pywrap_tensorflow
and needs to be imported using
`from tensorflow.python import pywrap_tensorflow`. Therefore, we provide
a separate accessor function for it here.
NOTE: pywrap_tensorflow is not part of Tens... |
Returns a summary proto buffer holding this experiment.
def create_experiment_summary():
"""Returns a summary proto buffer holding this experiment."""
# Convert TEMPERATURE_LIST to google.protobuf.ListValue
temperature_list = struct_pb2.ListValue()
temperature_list.extend(TEMPERATURE_LIST)
materials = struc... |
Runs a temperature simulation.
This will simulate an object at temperature `initial_temperature`
sitting at rest in a large room at temperature `ambient_temperature`.
The object has some intrinsic `heat_coefficient`, which indicates
how much thermal conductivity it has: for instance, metals have high
thermal... |
Run simulations on a reasonable set of parameters.
Arguments:
logdir: the directory into which to store all the runs' data
verbose: if true, print out each run's name as it begins.
def run_all(logdir, verbose=False):
"""Run simulations on a reasonable set of parameters.
Arguments:
logdir: the direc... |
Return the registered filesystem for the given file.
def get_filesystem(filename):
"""Return the registered filesystem for the given file."""
filename = compat.as_str_any(filename)
prefix = ""
index = filename.find("://")
if index >= 0:
prefix = filename[:index]
fs = _REGISTERED_FILESYS... |
Recursive directory tree generator for directories.
Args:
top: string, a Directory name
topdown: bool, Traverse pre order if True, post order if False.
onerror: optional handler for errors. Should be a function, it will be
called with the error as argument. Rethrowing the error aborts the... |
Reads contents of a file to a string.
Args:
filename: string, a path
binary_mode: bool, read as binary if True, otherwise text
size: int, number of bytes or characters to read, otherwise
read all the contents of the file from the offset
offset: in... |
Returns a list of files that match the given pattern(s).
def glob(self, filename):
"""Returns a list of files that match the given pattern(s)."""
if isinstance(filename, six.string_types):
return [
# Convert the filenames to string from bytes.
compat.as_str_a... |
Returns a list of entries contained within a directory.
def listdir(self, dirname):
"""Returns a list of entries contained within a directory."""
if not self.isdir(dirname):
raise errors.NotFoundError(None, None, "Could not find directory")
entries = os.listdir(compat.as_str_any(di... |
Returns file statistics for a given path.
def stat(self, filename):
"""Returns file statistics for a given path."""
# NOTE: Size of the file is given by .st_size as returned from
# os.stat(), but we convert to .length
try:
len = os.stat(compat.as_bytes(filename)).st_size
... |
Split an S3-prefixed URL into bucket and path.
def bucket_and_path(self, url):
"""Split an S3-prefixed URL into bucket and path."""
url = compat.as_str_any(url)
if url.startswith("s3://"):
url = url[len("s3://"):]
idx = url.index("/")
bucket = url[:idx]
path ... |
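Completing the logic visible in the snippet, a module-level sketch (without the `compat.as_str_any` coercion, and assuming the URL contains a `/` after the bucket) might read:

```python
def bucket_and_path(url):
    """Split an S3-prefixed URL into (bucket, path)."""
    if url.startswith("s3://"):
        url = url[len("s3://"):]
    idx = url.index("/")
    bucket = url[:idx]
    path = url[idx + 1:]
    return bucket, path
```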
Determines whether a path exists or not.
def exists(self, filename):
"""Determines whether a path exists or not."""
client = boto3.client("s3")
bucket, path = self.bucket_and_path(filename)
r = client.list_objects(Bucket=bucket, Prefix=path, Delimiter="/")
if r.get("Contents") o... |
Reads contents of a file to a string.
Args:
filename: string, a path
binary_mode: bool, read as binary if True, otherwise text
size: int, number of bytes or characters to read, otherwise
read all the contents of the file from the offset
offset: in... |
Returns a list of files that match the given pattern(s).
def glob(self, filename):
"""Returns a list of files that match the given pattern(s)."""
# Only support prefix with * at the end and no ? in the string
star_i = filename.find('*')
quest_i = filename.find('?')
if quest_i >=... |
Returns whether the path is a directory or not.
def isdir(self, dirname):
"""Returns whether the path is a directory or not."""
client = boto3.client("s3")
bucket, path = self.bucket_and_path(dirname)
if not path.endswith("/"):
path += "/" # This will now only retrieve subd... |
Returns a list of entries contained within a directory.
def listdir(self, dirname):
"""Returns a list of entries contained within a directory."""
client = boto3.client("s3")
bucket, path = self.bucket_and_path(dirname)
p = client.get_paginator("list_objects")
if not path.endswit... |
Returns file statistics for a given path.
def stat(self, filename):
"""Returns file statistics for a given path."""
# NOTE: Size of the file is given by ContentLength from S3,
# but we convert to .length
client = boto3.client("s3")
bucket, path = self.bucket_and_path(filename)
... |
Determine the most specific context that we're in.
Returns:
_CONTEXT_COLAB: If in Colab with an IPython notebook context.
_CONTEXT_IPYTHON: If not in Colab, but we are in an IPython notebook
context (e.g., from running `jupyter notebook` at the command
line).
_CONTEXT_NONE: Otherwise (e.g., b... |
Launch and display a TensorBoard instance as if at the command line.
Args:
args_string: Command-line arguments to TensorBoard, to be
interpreted by `shlex.split`: e.g., "--logdir ./logs --port 0".
Shell metacharacters are not supported: e.g., "--logdir 2>&1" will
point the logdir at the literal... |
Format the elapsed time for the given TensorBoardInfo.
Args:
info: A TensorBoardInfo value.
Returns:
A human-readable string describing the time since the server
described by `info` started: e.g., "2 days, 0:48:58".
def _time_delta_from_info(info):
"""Format the elapsed time for the given TensorBoa... |
Display a TensorBoard instance already running on this machine.
Args:
port: The port on which the TensorBoard server is listening, as an
`int`, or `None` to automatically select the most recently
launched TensorBoard.
height: The height of the frame into which to render the TensorBoard
UI, ... |
Internal version of `display`.
Args:
port: As with `display`.
height: As with `display`.
print_message: True to print which TensorBoard instance was selected
for display (if applicable), or False otherwise.
display_handle: If not None, an IPython display handle into which to
render Tensor... |
Display a TensorBoard instance in a Colab output frame.
The Colab VM is not directly exposed to the network, so the Colab
runtime provides a service worker tunnel to proxy requests from the
end user's browser through to servers running on the Colab VM: the
output frame may issue requests to https://localhost:<... |
Print a listing of known running TensorBoard instances.
TensorBoard instances that were killed uncleanly (e.g., with SIGKILL
or SIGQUIT) may appear in this list even if they are no longer
running. Conversely, this list may be missing some entries if your
operating system's temporary directory has been cleared ... |
Check the path name to see if it is probably a TF Events file.
Args:
path: A file path to check if it is an event file.
Raises:
ValueError: If the path is an empty string.
Returns:
Whether `path` is formatted like a TensorFlow events file.
def IsTensorFlowEventsFile(path):
"""Check the path name to see i... |