Adds operations that perform JPEG decoding and resizing to the graph. Args: module_spec: The hub.ModuleSpec for the image module being used. Returns: Tensors for the node to feed JPEG data into, and the output of the preprocessing steps. def add_jpeg_decoding(module_spec): """Adds operations tha...
Exports model for serving. Args: module_spec: The hub.ModuleSpec for the image module being used. class_count: The number of classes. saved_model_dir: Directory in which to save exported model and variables. def export_model(module_spec, class_count, saved_model_dir): """Exports model for serving. ...
Converts logging_level into TensorFlow logging verbosity value Args: logging_level: String value representing logging level: 'DEBUG', 'INFO', 'WARN', 'ERROR', 'FATAL' def logging_level_verbosity(logging_verbosity): """Converts logging_level into TensorFlow logging verbosity value Args: logging_leve...
Returns the module's attached ImageModuleInfo message, or None. def get_image_module_info(module_or_spec, required=False): """Returns the module's attached ImageModuleInfo message, or None.""" return module_or_spec.get_attached_message( IMAGE_MODULE_INFO_KEY, ImageModuleInfo, required=required)
Returns expected [height, width] dimensions of an image input. Args: module_or_spec: a Module or ModuleSpec that accepts image inputs. signature: a string with the key of the signature in question. If None, the default signature is used. input_name: a string with the input name for images. If None,...
Returns expected num_channels dimensions of an image input. This is for advanced users only who expect to handle modules with image inputs that might not have the 3 usual RGB channels. Args: module_or_spec: a Module or ModuleSpec that accepts image inputs. signature: a string with the key of the signatu...
Returns a ParsedTensorInfo instance from a TensorInfo proto. def _parse_tensor_info_proto(tensor_info): """Returns a ParsedTensorInfo instance from a TensorInfo proto.""" encoding = tensor_info.WhichOneof("encoding") dtype = tf.DType(tensor_info.dtype) shape = tf.TensorShape(tensor_info.tensor_shape) if enco...
Returns whether x is a SparseTensor or a parsed sparse tensor info. def _is_sparse(x): """Returns whether x is a SparseTensor or a parsed sparse tensor info.""" return ( isinstance(x, (tf.SparseTensor, tf_v1.SparseTensorValue)) or (hasattr(x, "is_sparse") and x.is_sparse))
Converts `value` into a tensor that can be fed into `tensor_info`. Args: value: A value to convert into Tensor or SparseTensor. target: An object returned by `parse_tensor_info_map`. error_prefix: A string to prefix on raised TypeErrors. Raises: TypeError: If it fails to convert. Returns: ...
Converts dict `values` into tensors that are compatible with `targets`. Args: values: A dict of objects to convert with the same keys as `targets`. targets: A dict returned by `parse_tensor_info_map`. Returns: A map with the same keys as `values` but values converted into Tensor/SparseTensors that can b...
Builds a map to feed tensors in `protomap` using `inputs`. Args: protomap: A proto map<string,TensorInfo>. inputs: A map with same keys as `protomap` of Tensors and SparseTensors. Returns: A map from nodes referred to by TensorInfo protos to corresponding input tensors. Raises: ValueError: if a...
Builds a map of tensors from `protomap` using `get_tensor_by_name`. Args: protomap: A proto map<string,TensorInfo>. get_tensor_by_name: A lambda that receives a tensor name and returns a Tensor instance. Returns: A map from string to Tensor or SparseTensor instances built from `protomap` and...
Whether two signature inputs/outputs match in dtype, shape and sparsity. Args: map_a: A proto map<string,TensorInfo>. map_b: A proto map<string,TensorInfo>. Returns: A boolean whether `map_a` and `map_b` tensors have the same dtype, shape and sparsity. def tensor_info_proto_maps_match(map_a, map_...
Parses a line of a text embedding file. Args: line: (str) One line of the text embedding file. Returns: A token string and its embedding vector in floats. def parse_line(line): """Parses a line of a text embedding file. Args: line: (str) One line of the text embedding file. Returns: A tok...
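The parsing convention above can be sketched in plain Python. This is a minimal, hypothetical reimplementation, assuming each file line is a whitespace-separated token followed by its float vector components (as the truncated docstring suggests):

```python
# Hypothetical sketch of parse_line: split a whitespace-separated
# embedding-file line into a token and its float vector.
def parse_line(line):
    """Parses one line of a text embedding file into (token, [floats])."""
    columns = line.strip().split()  # token, then vector components
    token, vector = columns[0], [float(x) for x in columns[1:]]
    return token, vector

print(parse_line("cat 0.1 0.2 0.3"))  # ('cat', [0.1, 0.2, 0.3])
```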
Loads a text embedding into memory as a numpy matrix. Args: file_path: Path to the text embedding file. parse_line_fn: callback function to parse each file line. Returns: A tuple of (list of vocabulary tokens, numpy matrix of embedding vectors). Raises: ValueError: if the data in the sstable is...
Makes a module spec to simply perform token to embedding lookups. Input of this module is a 1-D list of string tokens. For T tokens input and an M dimensional embedding table, the lookup result is a [T, M] shaped Tensor. Args: vocabulary_file: Text file where each line is a key in the vocabulary. vocab_...
Exports a TF-Hub module that performs embedding lookups. Args: export_path: Location to export the module. vocabulary: List of the N tokens in the vocabulary. embeddings: Numpy array of shape [N+K,M]; the first N rows are the M dimensional embeddings for the respective tokens and the next K ro...
Adds zero vectors for oov buckets if num_oov_buckets > 0. Since we are assigning zero vectors, adding more than one oov bucket is only meaningful if we perform fine-tuning. Args: embeddings: Embeddings to extend. num_oov_buckets: Number of OOV buckets in the extended embedding. def maybe_append_oov_vec...
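The OOV-extension step above can be illustrated without TensorFlow or NumPy. This sketch represents the embedding matrix as a list of row lists purely for self-containment (the real function operates on a numpy matrix):

```python
# Hypothetical sketch: extend an embedding matrix (here, a list of rows)
# with num_oov_buckets all-zero rows for out-of-vocabulary tokens.
def maybe_append_oov_vectors(embeddings, num_oov_buckets):
    if num_oov_buckets <= 0:
        return embeddings
    dim = len(embeddings[0])  # embedding dimensionality M
    return embeddings + [[0.0] * dim for _ in range(num_oov_buckets)]

extended = maybe_append_oov_vectors([[1.0, 2.0], [3.0, 4.0]], num_oov_buckets=1)
print(extended)  # third row is the all-zero OOV bucket
```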
Experimental: Create a ModuleSpec out of a SavedModel. Define a ModuleSpec from a SavedModel. Note that this is not guaranteed to work in all cases and it assumes the SavedModel has followed some conventions: - The serialized SaverDef can be ignored and instead can be reconstructed. - The init op and main op ...
Register a Module to be exported under `export_name`. This function registers `module` to be exported by `LatestModuleExporter` under a subdirectory named `export_name`. Note that `export_name` must be unique for each module exported from the current graph. It only controls the export subdirectory name and i...
Returns a session constructed using `estimator` and `serving_input_fn`. The Estimator API does not provide an API to construct a graph and session, making it necessary for this function to replicate how an estimator builds a graph. This code is based on `Estimator.export_savedmodel` (another function that h...
Creates a ModuleSpec from a function that builds the module's graph. The `module_fn` is called on a new graph (not the current one) to build the graph of the module and define its signatures via `hub.add_signature()`. Example: ```python # Define a text embedding module. def my_text_module_fn(): text_i...
Adds a signature to the module definition. NOTE: This must be called within a `module_fn` that is defining a Module. Args: name: Signature name as a string. If omitted, it is interpreted as 'default' and is the signature used when `Module.__call__` `signature` is not specified. inputs: A dict ...
Adds an attached message to the module definition. NOTE: This must be called within a `module_fn` that is defining a Module. See ModuleSpec.get_attached_message() for an introduction to attached messages and the API for module consumers. To define a new type of attached message: * Select a reasonably de...
Returns set of registered stateful ops that do not expect inputs. This list is used to identify the ops to be included in the state-graph and that are subsequently fed into the apply-graphs. Returns: A set of strings. def list_registered_stateful_ops_without_inputs(): """Returns set of registered statefu...
Returns a map from tensor names to tensors that hold the state. def get_state_map(meta_graph, state_ops, unsupported_state_ops, get_tensor_by_name): """Returns a map from tensor names to tensors that hold the state.""" state_map = {} for node in meta_graph.graph_def.node: if node.op in stat...
Replaces state ops with non state Placeholder ops for the apply graph. def replace_apply_state(meta_graph, state_ops, feed_map): """Replaces state ops with non state Placeholder ops for the apply graph.""" for node in meta_graph.graph_def.node: keys_to_purge = [] tensor_name = node.name + ":0" # Verify...
Given a tensor name as node_name:output_number, returns both parts. def _split_tensor_name(tensor_name): """Given a tensor name as node_name:output_number, returns both parts.""" result = re.match(r"(.*):(\d+)$", tensor_name) if not result: raise ValueError( "Unexpected format for tensor name. Expect...
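The regex shown in the truncated snippet is runnable on its own. A self-contained sketch of the split, returning the output number as an int for convenience (an assumption; the original may keep it as a string):

```python
import re

# Split "node_name:output_number" into its two parts, as in the snippet above.
def split_tensor_name(tensor_name):
    result = re.match(r"(.*):(\d+)$", tensor_name)
    if not result:
        raise ValueError(
            "Unexpected format for tensor name %r; expected node_name:output_number"
            % tensor_name)
    return result.group(1), int(result.group(2))

print(split_tensor_name("module/dense/BiasAdd:0"))
```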
Matches a variable to individual parts. Args: variable_key: String identifier of the variable in the module scope. variable: Variable tensor. Returns: partitioned: Whether the variable is partitioned. name: Name of the variable up to the partitioning. offset: Offset of the variable into the fu...
Builds a proper variable map if it contains PartitionedVariables. Args: var_node_map: A map to tf.Variables. PartitionedVariables show up in this map as N entries with keys "<var_name>/part_n". Returns: A map to tf.Variables or to list of tf.Variables for each PartitionedVariables in `var_node_m...
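The regrouping of `"<var_name>/part_n"` entries can be sketched with plain dicts and strings standing in for tf.Variables. The `/part_N` suffix convention is taken from the docstring above; the helper name is illustrative:

```python
import re

# Hypothetical sketch: regroup "name/part_N" entries into per-variable lists,
# ordered by partition offset; non-partitioned entries pass through unchanged.
def recover_partitioned_variable_map(var_node_map):
    result = {}
    for key, var in sorted(var_node_map.items()):
        m = re.match(r"(.*)/part_(\d+)$", key)
        if m:
            name, offset = m.group(1), int(m.group(2))
            result.setdefault(name, []).append((offset, var))
        else:
            result[key] = var
    # Order each partition list by offset and drop the offsets.
    for name, value in result.items():
        if isinstance(value, list):
            result[name] = [v for _, v in sorted(value)]
    return result

print(recover_partitioned_variable_map(
    {"w/part_0": "w0", "w/part_1": "w1", "b": "b0"}))
```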
Checks that tag list contains each set of tags only once. def check_unique_tags(tag_list): """Checks that tag list contains each set of tags only once.""" frozen_tags_seen = set() for tags in tag_list: frozen_tags = frozenset(tags) if frozen_tags in frozen_tags_seen: raise ValueError("Tags %r used ...
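The frozenset-based uniqueness check above is self-contained enough to run directly; note that order within a tag set does not matter, so `["gpu", "train"]` and `["train", "gpu"]` count as the same set:

```python
# Duplicate-tag-set check: each set of tags may appear only once.
def check_unique_tags(tag_list):
    frozen_tags_seen = set()
    for tags in tag_list:
        frozen_tags = frozenset(tags)
        if frozen_tags in frozen_tags_seen:
            raise ValueError("Tags %r used repeatedly" % sorted(tags))
        frozen_tags_seen.add(frozen_tags)

check_unique_tags([["train"], ["train", "gpu"]])  # distinct sets: no error
```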
Checks that SavedModelHandler only uses supported collections. def check_collections_are_supported(saved_model_handler, supported): """Checks that SavedModelHandler only uses supported collections.""" for meta_graph in saved_model_handler.meta_graphs: used_collection_keys = set(meta_graph.collection_def.keys()...
Register graph ops absent in op_def_registry, if present in c++ registry. Args: graph_ops: set with graph op names to register. Raises: RuntimeError: if `graph_ops` contains ops that are not in either python or c++ registry. def register_ops_if_needed(graph_ops): """Register graph ops absent in o...
Fixes colocation attributes after import according to input_map. This function is meant to be called after importing a GraphDef, in order to rewrite colocate_with constraints analogously to how inputs to ops are rewritten by input_map during import. It also updates devices accordingly. The nodes in the given imp...
Returns a dict mapping from pre-import to post-import colocation attrs. Args: input_map: as for fix_colocation_after_import. absolute_import_scope: as for fix_colocation_after_import. Returns: A dict that maps bytes `"loc:@" + absolute_import_scope + "/foo"` to _ConsistentValues set to the lists o...
Rewrites colocation constraints in the current default graph. Nodes in `absolute_import_scope` get their "_class" attr lists rewritten according to `colocation_attr_map`: each entry that matches a key gets replaced by the associated values (with deduplication). The node's device is updated accordingly. Args...
Returns error message for colocation of state ops, or None if ok. def find_state_op_colocation_error(graph, reported_tags=None): """Returns error message for colocation of state ops, or None if ok.""" state_op_types = list_registered_stateful_ops_without_inputs() state_op_map = {op.name: op for op in graph.get_o...
Returns error message for colocation of signature inputs, or None if ok. def find_signature_input_colocation_error(signature_name, inputs): """Returns error message for colocation of signature inputs, or None if ok.""" for input_name, tensor in inputs.items(): expected_colocation_groups = [tf.compat.as_bytes("...
Returns error message for module inputs from ops with multiple outputs. def find_signature_inputs_from_multivalued_ops(inputs): """Returns error message for module inputs from ops with multiple outputs.""" dense_inputs = [] # List of (str, Tensor), with SparseTensors decomposed. for name, tensor in sorted(input...
Internal. Args: path: string where to export the module to. variables_saver: an unary-function that writes the module variables checkpoint on the given path. def _export(self, path, variables_saver): """Internal. Args: path: string where to export the module to. variables_...
Creates the graph nodes that hold the state of the Module. Args: name: name scope to create the state graph in. Returns: A tuple consisting of: variables_tensor_map: a map from tensor names in the original graph def to the created Variables objects. state_map: a map from ...
See `ModuleImpl.create_apply_graph`. def create_apply_graph(self, signature, input_tensors, name): """See `ModuleImpl.create_apply_graph`.""" signature_def = self._meta_graph.signature_def.get(signature) meta_graph = meta_graph_pb2.MetaGraphDef() meta_graph.CopyFrom(self._meta_graph) apply_graph = ...
See `Module.export`. def export(self, path, session): """See `Module.export`.""" def variables_saver(variables_path): if self._saver: self._saver.save( session, variables_path, write_meta_graph=False, write_state=False) self._spec._export(path, variables_s...
Receives a value for the object and some context on its source. def Set(self, value, context=None): """Receives a value for the object and some context on its source.""" if self.has_error: return if self.value is None: self.value = value self._context["old_value"] = value self._context.up...
Gets consistent value or raises ValueError with formatted contexts. def GetConsistentValueOrRaise(self, error_format, context=None): """Gets consistent value or raises ValueError with formatted contexts.""" if self.has_error: full_context = dict(self._context) if context: full_context.update(contex...
Returns the directory where to cache the module. def _module_dir(handle): """Returns the directory where to cache the module.""" cache_dir = resolver.tfhub_cache_dir(use_temp=True) return resolver.create_local_module_dir( cache_dir, hashlib.sha1(handle.encode("utf8")).hexdigest())
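The cache sub-directory naming scheme in `_module_dir` is easy to reproduce: hashing the handle string gives a fixed-length, filesystem-safe directory name for any module URL. A minimal sketch of just the hashing step:

```python
import hashlib

# Derive a cache sub-directory name from a module handle, as in _module_dir.
def module_cache_key(handle):
    return hashlib.sha1(handle.encode("utf8")).hexdigest()

key = module_cache_key("https://tfhub.dev/google/nnlm-en-dim128/1")
print(key)  # 40-character hex digest, stable for the same handle
```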
Returns the path for storing variables checkpoints. def get_variables_path(export_dir): """Returns the path for storing variables checkpoints.""" return os.path.join( tf.compat.as_bytes(export_dir), tf.compat.as_bytes(tf_v1.saved_model.constants.VARIABLES_DIRECTORY), tf.compat.as_bytes(tf_v1.save...
tensor_name must have format node_name:output_number. Returns node_name. def _get_node_name_from_tensor(tensor_name): """tensor_name must have format node_name:output_number. Returns node_name.""" result = re.match(r"([^:]*):\d+$", tensor_name) if not result: raise ValueError( "Unexpected format for ...
Adds a signature to current graph. Args: key: Signature key as a string. inputs: Signature inputs as a map from string to Tensor or SparseTensor. outputs: Signature outputs as a map from string to Tensor or SparseTensor. (Recall that a Variable is not a Tensor, but Variable.value() is.) Raises: ...
Exports signatures from current graph into a MetaGraphDef. def _export_signatures(meta_graph): """Exports signatures from current graph into a MetaGraphDef.""" named_signatures = tf_v1.get_collection(_SIGNATURE_COLLECTION) if not named_signatures: raise ValueError("No signatures present. Please call hub.add_...
Adds a ModuleAttachment to the current graph. Args: key: A string with the unique key of the attachment. the_bytes: A bytes object with the serialized attachment. def attach_bytes(key, the_bytes): """Adds a ModuleAttachment to the current graph. Args: key: A string with the unique key of the attach...
Exports ModuleAttachments from the current tf.Graph into `meta_graph`. def _export_module_attachments(meta_graph): """Exports ModuleAttachments from the current tf.Graph into `meta_graph`.""" added_attachments = tf_v1.get_collection(_ATTACHMENT_COLLECTION_INTERNAL) if not added_attachments: return # Don't touch...
Returns the dict of ModuleAttachments stored in `meta_graph`. Args: meta_graph: A MetaGraphDef, as built by SavedModelHandler.add_graph_copy() from some graph. Returns: A dict, containing the `(key, bytes)` items passed to `attach_bytes()` when the graph had been built. Raises: ValueError...
Raises TypeError if `node_def` does not match the expectations. def _check_asset_node_def(node_def): """Raises TypeError if `node_def` does not match the expectations.""" if node_def.op != "Const": raise TypeError("Asset node must be of type constant.") if tf.as_dtype(node_def.attr["dtype"].type) != tf.strin...
Merges the ASSETS_KEY collection into the GraphDefs in saved_model_proto. Removes the ASSETS_KEY collection from the GraphDefs in the SavedModel and modifies nodes with the assets filenames to point to the assets in `path`. After this transformation, the SavedModel GraphDefs can be used without feeding asset t...
Creates an ASSETS_KEY collection in the GraphDefs in saved_model_proto. Adds an ASSETS_KEY collection to the GraphDefs in the SavedModel and returns a map from original asset filename to filename when exporting the SavedModel to `export_path`. This is roughly the inverse operation of `_merge_assets_key_collec...
Reads the savedmodel.pb file containing `SavedModel`. def _parse_saved_model(path): """Reads the savedmodel.pb file containing `SavedModel`.""" # Based on tensorflow/python/saved_model/loader.py implementation. path_to_pb = _get_saved_model_proto_path(path) file_content = tf_v1.gfile.Open(path_to_pb, "rb").rea...
Creates a SavedModelHandler from a SavedModel in `path`. def load(path): """Creates a SavedModelHandler from a SavedModel in `path`.""" proto = _parse_saved_model(path) _merge_assets_key_collection(proto, path) handler = SavedModelHandler() handler._proto = proto # pylint: disable=protected-access return ...
Adds a copy of Graph with the specified set of tags. def add_graph_copy(self, graph, tags=None): """Adds a copy of Graph with the specified set of tags.""" with graph.as_default(): # Remove default attrs so that Modules created by a tensorflow version # with ops that have new attrs that are left to...
Returns a copy of a MetaGraph with the identical set of tags. def get_meta_graph_copy(self, tags=None): """Returns a copy of a MetaGraph with the identical set of tags.""" meta_graph = self.get_meta_graph(tags) copy = tf_v1.MetaGraphDef() copy.CopyFrom(meta_graph) return copy
Returns a list of set of tags. def get_tags(self): """Returns a list of set of tags.""" return sorted([frozenset(meta_graph.meta_info_def.tags) for meta_graph in self.meta_graphs])
Exports to SavedModel directory. Args: path: path where to export the SavedModel to. variables_saver: lambda that receives a directory path where to export checkpoints of variables. def export(self, path, variables_saver=None): """Exports to SavedModel directory. Args: path: pat...
Returns the matching MetaGraphDef or raises KeyError. def get_meta_graph(self, tags=None): """Returns the matching MetaGraphDef or raises KeyError.""" matches = [meta_graph for meta_graph in self.meta_graphs if set(meta_graph.meta_info_def.tags) == set(tags or [])] if not matc...
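The tag-matching rule above is exact set equality, not subset matching. A sketch with plain dicts standing in for MetaGraphDef protos (the dict shape here is purely illustrative):

```python
# Hypothetical sketch: a meta graph matches only when its tag set
# equals the query set exactly.
def get_meta_graph(meta_graphs, tags=None):
    matches = [mg for mg in meta_graphs if set(mg["tags"]) == set(tags or [])]
    if not matches:
        raise KeyError("No meta graph found with tags %r" % sorted(tags or []))
    return matches[0]

graphs = [{"tags": [], "name": "default"}, {"tags": ["train"], "name": "train"}]
print(get_meta_graph(graphs, ["train"])["name"])
```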
Calls add_weight() to register but not create an existing weight. def _add_existing_weight(self, weight, trainable=None): """Calls add_weight() to register but not create an existing weight.""" if trainable is None: trainable = weight.trainable self.add_weight(name=weight.name, shape=weight.shape, dtype=we...
Helper function to ModuleSpec.export(). def export_module_spec(spec, path, checkpoint_path, name_transform_fn): """Helper function to ModuleSpec.export().""" with tf.Graph().as_default(): m = Module(spec) assign_map = { name_transform_fn(name): value for name, value in m.variable_map.items() } ...
Returns a fresh variable/name scope for a module's state. In order to import a module into a given scope without major complications, we require the scope to be empty. This function deals with deciding an unused scope where to define the module state. This is non-trivial in cases where name_scope and variable_s...
Converts inputs to a dict of inputs and checks extra/missing args. Args: inputs: inputs fed to Module.__call__(). tensor_info_map: A map from string to `tensor_info.ParsedTensorInfo` describing the signature inputs. Returns: A dict of values with the same keys as tensor_info_map. Raises: ...
Converts from inputs into dict of input tensors. This handles: - putting inputs into a dict, per _prepare_dict_inputs(), - converting all input values into tensors compatible with the expected input tensor (dtype, shape). - checking sparse/non-sparse tensor types. Args: inputs: inputs fed to Mo...
Context manager that yields a function to directly evaluate a Module. This creates a separate graph, in which all of the signatures of the module are instantiated. Then, it creates a session and initializes the module variables. Finally, it returns a function which can be used to evaluate the module signatures...
Loads a module from a handle. Currently this method only works with TensorFlow 2.x and can only load modules created by calling tensorflow.saved_model.save(). The method works in both eager and graph modes. Depending on the type of handle used, the call may involve downloading a TensorFlow Hub module to a l...
Describes the inputs required by a signature. Args: signature: A string with the signature to get inputs information for. If None, the default signature is used if defined. Returns: The result of ModuleSpec.get_input_info_dict() for the given signature, and the graph variant selected...
Describes the outputs provided by a signature. Args: signature: A string with the signature to get outputs information for. If None, the default signature is used if defined. Returns: The result of ModuleSpec.get_output_info_dict() for the given signature, and the graph variant select...
Calls ModuleSpec.get_attached_message(); see there for more. def get_attached_message(self, key, message_type, required=False): """Calls ModuleSpec.get_attached_message(); see there for more.""" return self._spec.get_attached_message(key, message_type, tags=self._tags...
Exports the module with the variables from the session in `path`. Note that it is the module definition in the ModuleSpec used to create this module that gets exported. The session is only used to provide the value of variables. Args: path: path where to export the module to. session: sess...
Returns the list of all tf.Variables created by module instantiation. def variables(self): """Returns the list of all tf.Variables created by module instantiation.""" result = [] for _, value in sorted(self.variable_map.items()): if isinstance(value, list): result.extend(value) else: ...
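The flattening logic above handles the fact that `variable_map` values are either a single variable or a list of partitions of one partitioned variable. A sketch with strings standing in for tf.Variables:

```python
# Hypothetical sketch: flatten a variable_map whose values may be a single
# variable or a list of partitions, sorted by key for determinism.
def flatten_variable_map(variable_map):
    result = []
    for _, value in sorted(variable_map.items()):
        if isinstance(value, list):
            result.extend(value)  # partitions of one partitioned variable
        else:
            result.append(value)
    return result

print(flatten_variable_map({"w": ["w_part0", "w_part1"], "b": "b_var"}))
```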
Uses a Module to construct a dense representation from a text feature. This feature column can be used on an input feature whose values are strings of arbitrary size. The result of this feature column is the result of passing its `input` through the module `m` instantiated from `module_spec`, as per `result...
Raises ValueError if `module_spec` is not a text-embedding module. Args: module_spec: A `ModuleSpec` to test. Raises: ValueError: if `module_spec` default signature is not compatible with Tensor(string, shape=(?,)) -> Tensor(float32, shape=(?,K)). def _check_module_is_text_embedding(module_spec): "...
Uses a Module to get a dense 1-D representation from the pixels of images. This feature column can be used on images, represented as float32 tensors of RGB pixel data in the range [0,1]. This can be read from a numeric_column() if the tf.Example input data happens to have decoded images, all with the same shap...
Raises ValueError if `module_spec` is not usable as image embedding. Args: module_spec: A `_ModuleSpec` to test. Raises: ValueError: if `module_spec` default signature is not compatible with mapping an "images" input to a Tensor(float32, shape=(_,K)). def _check_module_is_image_embedding(module_sp...
Returns string. Used for variable_scope and naming. def name(self): """Returns string. Used for variable_scope and naming.""" if not hasattr(self, "_name"): self._name = "{}_hub_module_embedding".format(self.key) return self._name
Returns a `Tensor`. def _get_dense_tensor(self, inputs, weight_collections=None, trainable=None): """Returns a `Tensor`.""" del weight_collections text_batch = tf.reshape(inputs.get(self), shape=[-1]) m = module.Module(self.module_spec, trainable=self.trainable and trainable) return m(text_batch)
Returns a `tf.Example` parsing spec as dict. def _parse_example_spec(self): """Returns a `tf.Example` parsing spec as dict.""" height, width = image_util.get_expected_image_size(self.module_spec) input_shape = [height, width, 3] return {self.key: tf_v1.FixedLenFeature(input_shape, tf.float32)}
Returns a `Tensor` to represent this feature in the input_layer(). def _get_dense_tensor(self, inputs, weight_collections=None, trainable=None): """Returns a `Tensor` to represent this feature in the input_layer().""" del weight_collections, trainable # Unused. m = module.Module(self.module_spec, trainabl...
Returns cache directory. Returns cache directory from either TFHUB_CACHE_DIR environment variable or --tfhub_cache_dir or default, if set. Args: default_cache_dir: Default cache location to use if neither TFHUB_CACHE_DIR environment variable nor --tfhub_cache_dir are ...
Creates and returns the name of the directory where to cache a module. def create_local_module_dir(cache_dir, module_name): """Creates and returns the name of the directory where to cache a module.""" tf_v1.gfile.MakeDirs(cache_dir) return os.path.join(cache_dir, module_name)
Merge a relative tar file to a destination (which can be "gs://..."). def _merge_relative_path(dst_path, rel_path): """Merge a relative tar file to a destination (which can be "gs://...").""" # Convert rel_path to be relative and normalize it to remove ".", "..", "//", # which are valid directories in filesystems...
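The normalization described in the comment above can be sketched with `posixpath`. This is a hypothetical reimplementation of the safety check, not the original code: normalize the archive member path, then reject anything that would escape upward out of the destination:

```python
import posixpath

# Hypothetical sketch: safely join an archive member path under dst_path,
# rejecting paths that normalize to an upward escape ("..").
def merge_relative_path(dst_path, rel_path):
    cleaned = posixpath.normpath(rel_path.lstrip("/"))
    if cleaned.startswith(".."):
        raise ValueError("Relative path %r is invalid." % rel_path)
    if cleaned == ".":
        return dst_path
    return posixpath.join(dst_path, cleaned)

print(merge_relative_path("/cache/mod", "./assets//vocab.txt"))
```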
Writes a descriptor file about the directory containing a module. Args: handle: Module name/handle. module_dir: Directory where a module was downloaded. def _write_module_descriptor_file(handle, module_dir): """Writes a descriptor file about the directory containing a module. Args: handle: Module n...
Returns total size (in bytes) of the given 'directory'. def _dir_size(directory): """Returns total size (in bytes) of the given 'directory'.""" size = 0 for elem in tf_v1.gfile.ListDirectory(directory): elem_full_path = os.path.join(directory, elem) stat = tf_v1.gfile.Stat(elem_full_path) size += _di...
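The recursive ListDirectory/Stat walk above maps directly onto the plain `os` module. A self-contained sketch with a throwaway demo directory (file names are illustrative):

```python
import os
import tempfile

# Recursive directory-size walk, mirroring the pattern in _dir_size.
def dir_size(directory):
    """Returns total size in bytes of files under `directory`, recursively."""
    size = 0
    for elem in os.listdir(directory):
        path = os.path.join(directory, elem)
        if os.path.isdir(path):
            size += dir_size(path)  # recurse into sub-directories
        else:
            size += os.path.getsize(path)
    return size

# Demo on a throwaway directory holding 10 + 5 bytes of content.
demo = tempfile.mkdtemp()
with open(os.path.join(demo, "a.bin"), "wb") as f:
    f.write(b"x" * 10)
os.mkdir(os.path.join(demo, "sub"))
with open(os.path.join(demo, "sub", "b.bin"), "wb") as f:
    f.write(b"y" * 5)
print(dir_size(demo))  # 15
```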
Returns the size of the temp dir pointed to by the given lock file. def _locked_tmp_dir_size(lock_filename): """Returns the size of the temp dir pointed to by the given lock file.""" task_uid = _task_uid_from_lock_file(lock_filename) try: return _dir_size( _temp_download_dir(_module_dir(lock_filename...
Waits for the lock file to disappear. The lock file was created by another process that is performing a download into its own temporary directory. The name of this temp directory is sha1(<module>).<uuid>.tmp where <uuid> comes from the lock file. Args: handle: The location from where a module is being dow...
Returns the path to a Module directory for a given TF-Hub Module handle. Args: handle: (string) Location of a TF-Hub Module. download_fn: Callback function that actually performs download. The callback receives two arguments, handle and the location of a temporary directory ...
Prints a message about download progress either to the console or TF log. Args: msg: Message to print. flush: Indicates whether to flush the output (only used in interactive mode). def _print_download_progress_msg(self, msg, flush=False): """Prints a message about download progress ei...
Logs progress information about ongoing module download. Args: bytes_downloaded: Number of bytes downloaded. def _log_progress(self, bytes_downloaded): """Logs progress information about ongoing module download. Args: bytes_downloaded: Number of bytes downloaded. """ self._total_bytes...
Extracts 'tarinfo' from 'tgz' and writes to 'dst_path'. def _extract_file(self, tgz, tarinfo, dst_path, buffer_size=10<<20): """Extracts 'tarinfo' from 'tgz' and writes to 'dst_path'.""" src = tgz.extractfile(tarinfo) dst = tf_v1.gfile.GFile(dst_path, "wb") while 1: buf = src.read(buffer_size) ...
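The extraction loop above is a standard chunked stream copy. Stripped of the tarfile/GFile specifics, it can be demonstrated with in-memory streams (the 10<<20 default, i.e. 10 MiB, matches the snippet):

```python
import io

# Chunked copy loop as in _extract_file: read fixed-size buffers from a
# source stream and write them to a destination until the source is empty.
def copy_stream(src, dst, buffer_size=10 << 20):
    while True:
        buf = src.read(buffer_size)
        if not buf:
            break
        dst.write(buf)

src = io.BytesIO(b"abc" * 1000)
dst = io.BytesIO()
copy_stream(src, dst, buffer_size=1024)
print(len(dst.getvalue()))  # 3000
```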
Streams the content for the 'fileobj' and stores the result in dst_path. Args: fileobj: File handle pointing to .tar/.tar.gz content. dst_path: Absolute path where to store uncompressed data from 'fileobj'. Raises: ValueError: Unknown object encountered inside the TAR file. def download_and...
Prepends name scope to a name. def prepend_name_scope(name, import_scope): """Prepends name scope to a name.""" # Based on tensorflow/python/framework/ops.py implementation. if import_scope: try: str_to_replace = r"([\^]|loc:@|^)(.*)" return re.sub(str_to_replace, r"\1" + import_scope + r"/\2", ...
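The regex in the snippet above runs as-is. It keeps a leading `^` (control-dependency marker) or `loc:@` (colocation marker) prefix and inserts the scope after it, so `^init_op` becomes `^module/init_op`:

```python
import re

# Prepend an import scope to a name, preserving "^"/"loc:@" prefixes,
# as in the prepend_name_scope snippet above.
def prepend_name_scope(name, import_scope):
    if not import_scope:
        return name
    return re.sub(r"([\^]|loc:@|^)(.*)", r"\1" + import_scope + r"/\2", name)

print(prepend_name_scope("^init_op", "module"))  # ^module/init_op
```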
In-place prefixes shared_name attributes of nodes. def prefix_shared_name_attributes(meta_graph, absolute_import_scope): """In-place prefixes shared_name attributes of nodes.""" shared_name_attr = "shared_name" for node in meta_graph.graph_def.node: shared_name_value = node.attr.get(shared_name_attr, None) ...
Function to propagate backwards in the graph and mark nodes as used. Traverses recursively through the graph from the end tensor, through the op that generates the tensor, and then to the input tensors that feed the op. Nodes encountered are stored in used_node_names. Args: output_tensor: A Tensor which w...
Function to prune unused ops given a signature def. This function does a graph traversal from all outputs defined in the signature_def to collect all used nodes. Then, any nodes which are unused can be discarded. This is useful for graphs which are executed eagerly or on TPUs. Args: meta_grap...