Builds the statistics part of the graph when using moving variance. Args: input_batch: Input batch Tensor. use_batch_stats: Boolean to indicate if batch statistics should be calculated, otherwise moving averages are returned. stat_dtype: TensorFlow datatype to use for the moving mean and ...
Builds the moving average update ops when using moving variance. Args: mean: The mean value to update with. variance: The variance value to update with. is_training: Boolean Tensor to indicate if we're currently in training mode. Returns: Tuple of `(update_mean_op, update_varia...
Creates a fused batch normalization op.

def _fused_batch_norm_op(self, input_batch, mean, variance, use_batch_stats):
  """Creates a fused batch normalization op."""
  # Store the original shape of the mean and variance.
  mean_shape = mean.get_shape()
  variance_shape = variance.get_shape()
  # The fused ba...
Connects the BatchNormV2 module into the graph. Args: input_batch: A Tensor of the same dimension as `len(data_format)`. is_training: A boolean to indicate if the module should be connected in training mode, meaning the moving averages are updated. Can be a Tensor. test_local_stats: A boo...
Construct a Tensor whose values are the index along a dimension. Construct a Tensor that counts the distance along a single dimension. This is useful, for example, when constructing an identity matrix:

>>> x = _range_along_dimension(0, [2, 2]).eval()
>>> x
array([[0, 0],
       [1, 1]], dtype=int3...
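A minimal sketch of how such a helper can be built from `tf.range`, `tf.reshape` and `tf.tile`; the function name and the reshaping strategy here are illustrative, not necessarily the library's actual implementation:

import tensorflow as tf

def range_along_dimension(range_dim, shape):
  # Lay a [0, ..., shape[range_dim] - 1] ramp along `range_dim`,
  # then tile it across every other dimension.
  rank = len(shape)
  indices = tf.range(shape[range_dim])
  broadcast_shape = [shape[i] if i == range_dim else 1 for i in range(rank)]
  tile_multiples = [1 if i == range_dim else shape[i] for i in range(rank)]
  return tf.tile(tf.reshape(indices, broadcast_shape), tile_multiples)

# range_along_dimension(0, [2, 2]) evaluates to [[0, 0], [1, 1]].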
An initializer for constructing identity convolution kernels. Constructs a convolution kernel such that applying it is the same as an identity operation on the input. Formally, the kernel has entry [i, j, in, out] = 1 if in equals out and i and j are the middle of the kernel and 0 otherwise. Args: shape...
Build an initializer for constructing near-identity convolution kernels. Construct a convolution kernel where in_channels and out_channels are multiples of base_num_channels, but need not be equal. This initializer is essentially the same as identity_kernel_initializer, except that magnitude is "spread out" ac...
Build dilation module. Args: images: Tensor of shape [batch_size, height, width, depth] and dtype float32. Represents a set of images with an arbitrary depth. Note that when using the default initializer, depth must equal num_output_classes. Returns: Tensor of shape [batch_...
Create a dilated convolution layer. Args: output_channels: int. Number of output channels for each pixel. dilation_rate: int. Represents how many pixels each stride offset will move. A value of 1 indicates a standard convolution. apply_relu: bool. If True, a ReLU non-linearity is added. ...
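A hedged sketch of such a factory on top of `snt.Conv2D`, whose `rate` argument implements dilated convolution; the kernel size of 3 is an assumption, not necessarily the example's actual default:

import sonnet as snt
import tensorflow as tf

def dilated_conv_layer(output_channels, dilation_rate, apply_relu, name):
  # rate=1 is a standard convolution; rate > 1 dilates the kernel.
  def build(inputs):
    outputs = snt.Conv2D(output_channels=output_channels,
                         kernel_shape=3,  # assumed kernel size
                         rate=dilation_rate,
                         name=name)(inputs)
    return tf.nn.relu(outputs) if apply_relu else outputs
  return build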
Wraps the function and prints a warn-once (per `extra_message`) warning.

def with_deprecation_warning(fn, extra_message=''):
  """Wraps the function and prints a warn-once (per `extra_message`) warning."""
  def new_fn(*args, **kwargs):
    if extra_message not in _DONE_WARN:
      tf.logging.warning(
          'Sonne...
A custom build method to wrap into a sonnet Module.

def custom_build(inputs, is_training, keep_prob):
  """A custom build method to wrap into a sonnet Module."""
  outputs = snt.Conv2D(output_channels=32, kernel_shape=4, stride=2)(inputs)
  outputs = snt.BatchNorm()(outputs, is_training=is_training)
  outputs = tf.nn....
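The point of writing such a build function is that `snt.Module` can wrap it into a module with variable sharing; a usage sketch (placeholder shapes are illustrative):

import tensorflow as tf
import sonnet as snt

images = tf.placeholder(tf.float32, [None, 28, 28, 1])
model = snt.Module(custom_build, name='simple_net')
train_out = model(images, is_training=True, keep_prob=tf.constant(0.5))
# Reconnecting the module reuses the same Conv2D/BatchNorm variables.
test_out = model(images, is_training=False, keep_prob=tf.constant(1.0))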
Returns a thread local stack uniquified by the given name.

def _get_or_create_stack(name):
  """Returns a thread local stack uniquified by the given name."""
  stack = getattr(_LOCAL_STACKS, name, None)
  if stack is None:
    stack = []
    setattr(_LOCAL_STACKS, name, stack)
  return stack
A pre-canned builder for diagonal gaussian posterior distributions. Given a true `getter` function and arguments forwarded from `tf.get_variable`, return a distribution object for a diagonal posterior over a variable of the requisite shape. Args: getter: The `getter` passed to a `custom_getter`. Please se...
A pre-canned builder for fixed gaussian prior distributions. Given a true `getter` function and arguments forwarded from `tf.get_variable`, return a distribution object for a scalar-valued fixed gaussian prior which will be broadcast over a variable of the requisite shape. Args: getter: The `getter` passe...
A pre-canned builder for adaptive scalar gaussian prior distributions. Given a true `getter` function and arguments forwarded from `tf.get_variable`, return a distribution object for a scalar-valued adaptive gaussian prior which will be broadcast over a variable of the requisite shape. This prior's parameters ...
A pre-canned builder for a ubiquitous stochastic KL estimator.

def stochastic_kl_builder(posterior, prior, sample):
  """A pre-canned builder for a ubiquitous stochastic KL estimator."""
  return tf.subtract(
      tf.reduce_sum(posterior.log_prob(sample)),
      tf.reduce_sum(prior.log_prob(sample)))
A pre-canned builder for the analytic kl divergence.

def analytic_kl_builder(posterior, prior, sample):
  """A pre-canned builder for the analytic kl divergence."""
  del sample
  return tf.reduce_sum(tfp.distributions.kl_divergence(posterior, prior))
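A usage sketch contrasting the two builders on simple `tfp` distributions (the locs and scales are illustrative):

import tensorflow as tf
import tensorflow_probability as tfp

posterior = tfp.distributions.Normal(loc=0.0, scale=1.0)
prior = tfp.distributions.Normal(loc=0.0, scale=0.1)
sample = posterior.sample()
kl_estimate = stochastic_kl_builder(posterior, prior, sample)  # Monte Carlo
kl_exact = analytic_kl_builder(posterior, prior, sample)       # closed form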
Creates a custom getter which does Bayes by Backprop. Please see `tf.get_variable` for general documentation on custom getters. All arguments are optional. If nothing is configured, then a diagonal gaussian posterior will be used, and a fixed N(0, 0.01) prior will be used. Please see the default `posterior_bui...
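A usage sketch, assuming the getter factory and builder names shown below are what the Bayes by Backprop custom getter module exposes (the surrounding entries name these builders); shapes are illustrative:

import tensorflow as tf
import sonnet as snt
from sonnet.python.custom_getters import bayes_by_backprop as bbb

getter = bbb.bayes_by_backprop_getter(
    posterior_builder=bbb.diagonal_gaussian_posterior_builder,
    prior_builder=bbb.fixed_gaussian_prior_builder,
    kl_builder=bbb.stochastic_kl_builder,
    sampling_mode_tensor=tf.constant(bbb.EstimatorModes.sample))
inputs = tf.placeholder(tf.float32, [None, 64])
with tf.variable_scope('net', custom_getter=getter):
  # Every variable created here is replaced by a sampled posterior estimate.
  logits = snt.Linear(output_size=10)(inputs)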
Create tensor representing estimate of posterior. Args: posterior_dist: An instance of `tfp.distributions.Distribution`. The variational posterior from which to produce an estimate of the variable in question. posterior_estimate_mode: A `Tensor` of dtype `tf.string`, which determines ...
Get the total cost for all (or a subset of) the stochastic variables. Args: name: A name for the tensor representing the total kl cost. filter_by_name_substring: A string used to filter which variables count toward the total KL cost. By default, this argument is `None`, and all variables trained ...
The shape of the output matrix.

def output_shape(self):
  """The shape of the output matrix."""
  return (self._block_shape[0] * self._block_rows,
          self._block_shape[1] * self._block_rows)
Number of blocks with zeros from the left in block row `r`.

def _left_zero_blocks(self, r):
  """Number of blocks with zeros from the left in block row `r`."""
  if not self._include_off_diagonal:
    return r
  elif not self._upper:
    return 0
  elif self._include_diagonal:
    return r
  else:
    ...
Number of blocks with zeros from the right in block row `r`.

def _right_zero_blocks(self, r):
  """Number of blocks with zeros from the right in block row `r`."""
  if not self._include_off_diagonal:
    return self._block_rows - r - 1
  elif self._upper:
    return 0
  elif self._include_diagonal:
    r...
Number of content blocks in block row `r`.

def _content_blocks(self, r):
  """Number of content blocks in block row `r`."""
  return (self._block_rows -
          self._left_zero_blocks(r) -
          self._right_zero_blocks(r))
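The three helpers above partition each block row, so for every row the zero and content counts must sum to the total number of block columns; a quick sanity check one could write (`matrix` stands in for a hypothetical instance of the block-triangular module):

for r in range(matrix._block_rows):
  assert (matrix._left_zero_blocks(r)
          + matrix._content_blocks(r)
          + matrix._right_zero_blocks(r)) == matrix._block_rows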
Construct the data, model, loss and optimizer then train. def build_and_train(iterations, log_stride, test=False): """Construct the data, model, loss and optimizer then train.""" # Test mode settings. batch_size = 2 if test else FLAGS.batch_size num_mems = 2 if test else FLAGS.num_mems num_heads = 1 if test...
Dynamic unroll across input objects. Args: inputs: tensor (batch x num_objects x feature). Objects to sort. Returns: Tensor (batch x num_objects); logits indicating the reference objects. def _build(self, inputs): """Dynamic unroll across input objects. Args: inputs: tensor (batch ...
Create an op that clips gradients using a Defun. The tensorflow Defun decorator creates an op and tensorflow caches these ops automatically according to `func_name`. Using a Defun decorator twice with the same `func_name` does not create a new op, instead the cached op is used. This method produces a new op th...
Clips respective gradients of a given tensor. Acts as identity for the forward pass, but clips gradient tensor element-wise by value during the backward pass. Any gradient values less than `clip_value_min` or greater than `clip_values_max` are set to the respective limit values. Args: net: A `tf.Tensor`...
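A usage sketch with `snt.clip_gradient`; the forward value is unchanged, only the backward signal is clipped:

import tensorflow as tf
import sonnet as snt

x = tf.placeholder(tf.float32, [None, 10])
y = snt.clip_gradient(x, clip_value_min=-1.0, clip_value_max=1.0)
loss = tf.reduce_sum(tf.square(y))
# Forward: y == x. Backward: each element of d(loss)/dx lies in [-1, 1].
grad_x = tf.gradients(loss, x)[0]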
Load PTB raw data from data directory "data_path". Reads PTB text files, converts strings to integer ids, and performs mini-batching of the inputs. The PTB dataset comes from Tomas Mikolov's webpage: http://www.fit.vutbr.cz/~imikolov/rnnlm/simple-examples.tgz Args: data_path: string path to the direct...
Checks that the normalization constructor is callable. def _check_and_assign_normalization_members(self, normalization_ctor, normalization_kwargs): """Checks that the normalization constructor is callable.""" if isinstance(normalization_ctor, six.string_types): ...
Sets up normalization, checking old and new flags. def _parse_normalization_kwargs(self, use_batch_norm, batch_norm_config, normalization_ctor, normalization_kwargs): """Sets up normalization, checking old and new flags.""" if use_batch_norm is not None: # Delete this wh...
Instantiates all the convolutional modules used in the network. def _instantiate_layers(self): """Instantiates all the convolutional modules used in the network.""" # Here we are entering the module's variable scope to name our submodules # correctly (not to create variables). As such it's safe to not che...
Assembles the `ConvNet2D` and connects it to the graph. Args: inputs: A 4D Tensor of shape `[batch_size, input_height, input_width, input_channels]`. **normalization_build_kwargs: kwargs passed to the normalization module at _build time. Returns: A 4D Tensor of shape `[batch_...
Returns transposed version of this network. Args: transpose_constructor: A method that creates an instance of the transposed network type. The method must accept the same kwargs as this methods with the exception of the `transpose_constructor` argument. name: Optional string specifying ...
Returns transposed version of this network. Args: name: Optional string specifying the name of the transposed module. The default name is constructed by appending "_transpose" to `self.module_name`. output_channels: Optional iterable of numbers of output channels. kernel_shapes: O...
Loads the data or reads it from cache. def _get_raw_data(subset): """Loads the data or reads it from cache.""" raw_data = _LOADED.get(subset) if raw_data is not None: return raw_data, _LOADED["vocab"] else: train_data, valid_data, test_data, vocab = ptb_reader.ptb_raw_data( FLAGS.data_path) ...
A builder for the gaussian scale-mixture prior of Fortunato et al. Please see https://arxiv.org/abs/1704.02798, section 7.1 Args: getter: The `getter` passed to a `custom_getter`. Please see the documentation for `tf.get_variable`. name: The `name` argument passed to `tf.get_variable`. *args: Po...
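A hedged sketch of the scale-mixture distribution itself using `tfp`; the mixing weight and standard deviations below are placeholders, not the values from the paper:

import tensorflow_probability as tfp

def scale_mixture_prior(pi=0.25, sigma1=1.0, sigma2=1e-3):
  # pi * N(0, sigma1^2) + (1 - pi) * N(0, sigma2^2)
  return tfp.distributions.Mixture(
      cat=tfp.distributions.Categorical(probs=[pi, 1.0 - pi]),
      components=[tfp.distributions.Normal(loc=0.0, scale=sigma1),
                  tfp.distributions.Normal(loc=0.0, scale=sigma2)])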
A builder for a particular diagonal gaussian posterior. Args: getter: The `getter` passed to a `custom_getter`. Please see the documentation for `tf.get_variable`. name: The `name` argument passed to `tf.get_variable`. *args: Positional arguments forwarded by `tf.get_variable`. **kwargs: Keywor...
Construct the modules used in the graph. def build_modules(is_training, vocab_size): """Construct the modules used in the graph.""" # Construct the custom getter which implements Bayes by Backprop. if is_training: estimator_mode = tf.constant(bbb.EstimatorModes.sample) else: estimator_mode = tf.consta...
This is the core model logic. Unrolls a Bayesian RNN over the given sequence. Args: data_ops: A `sequence_data.SequenceDataOps` namedtuple. embed_layer: A `snt.Embed` instance. rnn_core: A `snt.RNNCore` instance. output_linear: A `snt.Linear` instance. name_prefix: A string to use to prefix lo...
Compute the log loss given predictions and targets. def build_loss(model_logits, sparse_targets): """Compute the log loss given predictions and targets.""" time_major_shape = [FLAGS.unroll_steps, FLAGS.batch_size] flat_batch_shape = [FLAGS.unroll_steps * FLAGS.batch_size, -1] xent = tf.nn.sparse_softmax_cross_...
Run a network on the PTB training set, checkpointing the weights. def train(logdir): """Run a network on the PTB training set, checkpointing the weights.""" ptb_train = PTB( name="ptb_train", subset="train", seq_len=FLAGS.unroll_steps, batch_size=FLAGS.batch_size) # Connect to training ...
Throw helpful error if required dependencies not available.

def _ensure_dependency_available_at_version(package_name, min_version):
  """Throw helpful error if required dependencies not available."""
  try:
    pkg = importlib.import_module(package_name)
  except ImportError:
    pip_name = package_name.replace('_', ...
Create an op that scales gradients using a Defun. The tensorflow Defun decorator creates an op and tensorflow caches these ops automatically according to `func_name`. Using a Defun decorator twice with the same `func_name` does not create a new op, instead the cached op is used. This method produces a new op ...
Scales gradients for the backwards pass. This might be used to, for example, allow one part of a model to learn at a lower rate than the rest. WARNING: Think carefully about how your optimizer works. If, for example, you use rmsprop, the gradient is always rescaled (with some additional epsilon) towards uni...
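A usage sketch with `snt.scale_gradient`; like gradient clipping above, the forward pass is the identity:

import tensorflow as tf
import sonnet as snt

shared = tf.placeholder(tf.float32, [None, 32])
# This branch learns at a tenth of the effective rate of the rest.
slow_branch = snt.scale_gradient(shared, 0.1)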
Dynamic unroll across input objects. Args: inputs: tensor (input_sequence_length x batch x feature_size). Encoder sequence. targets: tensor (output_sequence_length x batch x feature_size). Decoder sequence. input_sequence_length: tensor (batch). Size of each batched input ...
LSTM with recurrent dropout. Args: hidden_size: the LSTM hidden size. keep_prob: the probability to keep an entry when applying dropout. **kwargs: Extra keyword arguments to pass to the LSTM. Returns: A tuple (train_lstm, test_lstm) where train_lstm is an LSTM with recurrent dropout enabled to...
LSTM with recurrent dropout. Args: hidden_size: the LSTM hidden size. keep_prob_c: the probability to use the new value of the cell state rather than freezing it. keep_prob_h: the probability to use the new value of the hidden state rather than freezing it. **kwargs: Extra keyword argumen...
Highway core with recurrent dropout. Args: hidden_size: (int) Hidden size dimensionality. num_layers: (int) Number of highway layers. keep_prob: the probability to keep an entry when applying dropout. **kwargs: Extra keyword arguments to pass to the highway core. Returns: A tuple (train_core, ...
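A usage sketch for the LSTM variant, assuming it is exposed as `snt.lstm_with_recurrent_dropout`; the train core samples its dropout masks as part of the initial state, so that state should come from the core itself:

import tensorflow as tf
import sonnet as snt

train_lstm, test_lstm = snt.lstm_with_recurrent_dropout(
    hidden_size=128, keep_prob=0.5)
inputs = tf.placeholder(tf.float32, [16, 20, 64])  # batch x time x feature
init_state = train_lstm.initial_state(batch_size=16)
train_out, _ = tf.nn.dynamic_rnn(train_lstm, inputs,
                                 initial_state=init_state)
# At evaluation time, connect `test_lstm` (shared weights, no dropout).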
Returns the keys the dictionary of variable initializers may contain. The set of all possible initializer keys are: w_gates: weight for gates b_gates: bias of gates w_f_diag: weight for prev_cell -> forget gate peephole w_i_diag: weight for prev_cell -> input gate peephole w_o_diag:...
Connects the LSTM module into the graph. If this is not the first time the module has been connected to the graph, the Tensors provided as inputs and state must have the same final dimension, in order for the existing variables to be the correct size for their corresponding multiplications. The batch s...
Initialize the variables used for the gates. def _create_gate_variables(self, input_shape, dtype): """Initialize the variables used for the gates.""" if len(input_shape) != 2: raise ValueError( "Rank of shape must be {} not: {}".format(2, len(input_shape))) equiv_input_size = self._hidden_...
Initialize the variables used for the peephole connections. def _create_peephole_variables(self, dtype): """Initialize the variables used for the peephole connections.""" self._w_f_diag = tf.get_variable( self.W_F_DIAG, shape=[self._hidden_size], dtype=dtype, initializer=self._i...
Tuple of `tf.TensorShape`s indicating the size of state tensors.

def state_size(self):
  """Tuple of `tf.TensorShape`s indicating the size of state tensors."""
  return LSTMState(tf.TensorShape([self._hidden_state_size]),
                   tf.TensorShape([self._hidden_size]))
Builds the default start state tensor of zeros. def initial_state(self, batch_size, dtype=tf.float32, trainable=False, trainable_initializers=None, trainable_regularizers=None, name=None): """Builds the default start state tensor of zeros.""" core_initial_state = self._c...
Builds the default start state tensor of zeros. def initial_state(self, batch_size, dtype=tf.float32, trainable=False, trainable_initializers=None, trainable_regularizers=None, name=None): """Builds the default start state tensor of zeros.""" return self._core.initial_st...
Wraps this RNNCore with the additional control input to the `BatchNorm`s.

Example usage:

  lstm = snt.BatchNormLSTM(4)
  is_training = tf.placeholder(tf.bool)
  rnn_input = ...
  my_rnn = rnn.rnn(lstm.with_batch_norm_control(is_training), rnn_input)

Args: is_training: Boolean that indic...
Initialize the variables used for the `BatchNorm`s (if any). def _create_batch_norm_variables(self, dtype): """Initialize the variables used for the `BatchNorm`s (if any).""" # The paper recommends a value of 0.1 for good gradient flow through the # tanh nonlinearity (although doesn't say whether this is f...
Initialize the variables used for the gates. def _create_gate_variables(self, input_shape, dtype): """Initialize the variables used for the gates.""" if len(input_shape) != 2: raise ValueError( "Rank of shape must be {} not: {}".format(2, len(input_shape))) input_size = input_shape.dims[1]....
Builds the default start state tensor of zeros. Args: batch_size: An int, float or scalar Tensor representing the batch size. dtype: The data type to use for the state. trainable: Boolean that indicates whether to learn the initial state. trainable_initializers: An optional pair of initiali...
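A usage sketch of a learnable start state; with `trainable=True` the zeros become variables trained alongside the rest of the model:

import tensorflow as tf
import sonnet as snt

lstm = snt.LSTM(hidden_size=64)
inputs = tf.placeholder(tf.float32, [16, 10, 32])  # batch x time x feature
init_state = lstm.initial_state(batch_size=16, trainable=True)
outputs, final_state = tf.nn.dynamic_rnn(lstm, inputs,
                                         initial_state=init_state)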
Tuple of `tf.TensorShape`s indicating the size of state tensors.

def state_size(self):
  """Tuple of `tf.TensorShape`s indicating the size of state tensors."""
  if self._max_unique_stats == 1:
    return (tf.TensorShape([self._hidden_size]),
            tf.TensorShape([self._hidden_size]))
  else:
    ret...
Returns new convolution. Args: use_bias: Use bias in convolutions. If False, clean_dict removes bias entries from initializers, partitioners and regularizers passed to the constructor of the convolution. def _new_convolution(self, use_bias): """Returns new convolution. Args: u...
Tuple of `tf.TensorShape`s indicating the size of state tensors.

def state_size(self):
  """Tuple of `tf.TensorShape`s indicating the size of state tensors."""
  hidden_size = tf.TensorShape(
      self._input_shape[:-1] + (self._output_channels,))
  return (hidden_size, hidden_size)
Connects the GRU module into the graph. If this is not the first time the module has been connected to the graph, the Tensors provided as inputs and state must have the same final dimension, in order for the existing variables to be the correct size for their corresponding multiplications. The batch si...
Returns the keys the dictionary of variable initializers may contain. The set of all possible initializer keys are: wt: weight for input -> T gate wh: weight for input -> H gate wtL: weight for prev state -> T gate for layer L (indexed from 0) whL: weight for prev state -> H gate for layer ...
Connects the highway core module into the graph. Args: inputs: Tensor of size `[batch_size, input_size]`. prev_state: Tensor of size `[batch_size, hidden_size]`. Returns: A tuple (output, next_state) where `output` is a Tensor of size `[batch_size, hidden_size]` and `next_state` is a T...
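The recurrence this core implements can be summarized in a few lines; a minimal sketch of one highway layer's update (weight names are illustrative):

import tensorflow as tf

def highway_step(inputs, prev_state, w_h, u_h, b_h, w_t, u_t, b_t):
  h = tf.tanh(tf.matmul(inputs, w_h) + tf.matmul(prev_state, u_h) + b_h)
  t = tf.sigmoid(tf.matmul(inputs, w_t) + tf.matmul(prev_state, u_t) + b_t)
  # Transform gate t interpolates between the candidate h and the old state.
  return t * h + (1.0 - t) * prev_state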
Obtains the flattened output sizes of a list of cores. Args: cores: list of cores to get the shapes from. Returns: List of lists that, for each core, contains the list of its output dimensions. def _get_flat_core_sizes(cores): """Obtains the flattened output sizes of a list of cores. ...
Converts Tensor nest to a TensorShape nest, removing batch dimension.

def _get_shape_without_batch_dimension(tensor_nest):
  """Converts Tensor nest to a TensorShape nest, removing batch dimension."""
  def _strip_batch_and_convert_to_shape(tensor):
    return tensor.get_shape()[1:]
  return nest.map_structure(_strip_batch_and_convert_to_shape, tensor_nest)
Connects the VanillaRNN module into the graph. If this is not the first time the module has been connected to the graph, the Tensors provided as input_ and state must have the same final dimension, in order for the existing variables to be the correct size for their corresponding multiplications. The b...
Checks the output_sizes of the cores of the DeepRNN module. Raises: ValueError: if the outputs of the cores cannot be concatenated along their first dimension. def _check_cores_output_sizes(self): """Checks the output_sizes of the cores of the DeepRNN module. Raises: ValueError: if th...
Connects the DeepRNN module into the graph. If this is not the first time the module has been connected to the graph, the Tensors provided as input_ and state must have the same final dimension, in order for the existing variables to be the correct size for their corresponding multiplications. The batc...
Builds the default start state for a DeepRNN. Args: batch_size: An int, float or scalar Tensor representing the batch size. dtype: The data type to use for the state. trainable: Boolean that indicates whether to learn the initial state. trainable_initializers: An initializer function or nes...
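A usage sketch; `snt.DeepRNN` stacks the cores and `initial_state` nests the per-core start states:

import tensorflow as tf
import sonnet as snt

deep_rnn = snt.DeepRNN([snt.LSTM(64), snt.LSTM(64)])
inputs = tf.placeholder(tf.float32, [20, 16, 32])  # time x batch x feature
init_state = deep_rnn.initial_state(batch_size=16)
output_seq, final_state = tf.nn.dynamic_rnn(
    deep_rnn, inputs, initial_state=init_state, time_major=True)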
Connects the ModelRNN module into the graph. If this is not the first time the module has been connected to the graph, the Tensors provided as input_ and state must have the same final dimension, in order for the existing variables to be the correct size for their corresponding multiplications. The bat...
Connects the BidirectionalRNN module into the graph. Args: input_sequence: tensor (time, batch, [feature_1, ..]). It must be time_major. state: tuple of states for the forward and backward cores. Returns: A dict with forward/backward states and output sequences: "outputs":{...
Builds the default start state for a BidirectionalRNN. The Bidirectional RNN flattens the states of its forward and backward cores and concatenates them. Args: batch_size: An int, float or scalar Tensor representing the batch size. dtype: The data type to use for the state. trainable: B...
Instantiates all the linear modules used in the network. Layers are instantiated in the constructor, as opposed to the build function, because MLP implements the Transposable interface, and the transpose function can be called before the module is actually connected to the graph and build is called. ...
Assembles the `MLP` and connects it to the graph. Args: inputs: A 2D Tensor of size `[batch_size, input_size]`. is_training: A bool or tf.Bool Tensor. Indicates whether we are currently training. Defaults to `True`. dropout_keep_prob: The probability that each element is kept when ...
Returns a tuple of all output sizes of all the layers.

def output_sizes(self):
  """Returns a tuple of all output sizes of all the layers."""
  return tuple([l() if callable(l) else l for l in self._output_sizes])
Returns transposed `MLP`. Args: name: Optional string specifying the name of the transposed module. The default name is constructed by appending "_transpose" to `self.module_name`. activate_final: Optional boolean determining if the activation and batch normalization, if turned ...
Creates a new MLP with the same structure. Args: name: Optional string specifying the name of the new module. The default name is constructed by appending "_clone" to the original name. Returns: A cloned `MLP` module. def clone(self, name=None): """Creates a new MLP with the same stru...
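A usage sketch of the MLP lifecycle described above; `transpose` reverses the layer sizes so the result maps back to the input dimensionality:

import tensorflow as tf
import sonnet as snt

inputs = tf.placeholder(tf.float32, [None, 784])
encoder = snt.nets.MLP(output_sizes=[256, 64])
code = encoder(inputs)
decoder = encoder.transpose()  # output sizes become [256, 784]
reconstruction = decoder(code)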
Calculates the minimum size of the input layer. Given a set of convolutional layers, calculate the minimum value of the `input_height` and `input_width`, i.e. such that the output has size 1x1. Assumes snt.VALID padding. Args: conv_layers: List of tuples `(output_channels, (kernel_size, stride),...
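Under VALID padding a layer computes out = (in - kernel) // stride + 1, so the minimal input yielding a 1x1 output is found by inverting each layer from the back: in = (out - 1) * stride + kernel. A standalone sketch (the layer specs are illustrative):

def min_input_size(layers):
  # layers: (kernel_size, stride) pairs applied in order, VALID padding.
  size = 1
  for kernel, stride in reversed(layers):
    size = (size - 1) * stride + kernel
  return size

# Two 3x3 stride-2 convolutions need at least a 7x7 input:
assert min_input_size([(3, 2), (3, 2)]) == 7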
Connects the AlexNet module into the graph. The is_training flag only controls the batch norm settings; if `False`, it does not disable dropout by overriding any input `keep_prob`. To avoid any confusion this may cause, if `is_training=False` and `keep_prob` would cause dropout to be applied, an error ...
Generate one input sequence and output label. Each sequence of objects has a feature that consists of the feature vector for that object plus the encoding for its ID, the reference vector ID and the n-th value relative ID for a total feature size of: `num_objects` * 3 + `num_features` Args: ...
Assembles a batch of input tensors and output labels. Args: batch_size: int. number of sequence batches. num_objects: int. number of objects in the sequence. num_features: int. feature size of each object. Returns: 1. np.ndarray (`batch_size`, `num_objects`, (`num_...
Returns set of nth-farthest input tensors and labels. Returns: 1. tf.Tensor (`batch_size`, `num_objects`, (`num_features` + 3 * `num_objects`)). 2. tf.Tensor (`batch_size`). Output object reference label. def get_batch(self): """Returns set of nth-farthest input tensors and la...
Returns default (maximal) output shape for a transpose convolution. In general, there are multiple possible output shapes that a transpose convolution with a given `input_shape` can map to. This function returns the output shape which evenly divides the stride to produce the input shape in a forward convolutio...
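A sketch of the rule described here: a forward SAME convolution computes out = ceil(in / stride), so the transpose default that evenly divides the stride is in * stride; for VALID, out = ceil((in - kernel + 1) / stride), which inverts to (in - 1) * stride + kernel. Function and argument names are illustrative:

def default_transpose_output_size(input_size, stride, kernel, padding):
  if padding == 'SAME':
    return input_size * stride
  elif padding == 'VALID':
    return (input_size - 1) * stride + kernel
  raise ValueError('Unsupported padding: {}'.format(padding))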
Converts a dimension to a tuple of dimensions of a given size. This is used to allow shorthand notation for various configuration parameters. A user can provide either, for example, `2` or `[2, 2]` as a kernel shape, and this function returns `(2, 2)` in both cases. Passing `[1, 2]` will return `(1, 2)`. Ar...
Expands x if necessary into a `n`-D kernel shape and reports errors. def _fill_and_verify_parameter_shape(x, n, parameter_label): """Expands x if necessary into a `n`-D kernel shape and reports errors.""" try: return _fill_shape(x, n) except TypeError as e: raise base.IncompatibleShapeError("Invalid " + ...
Verifies that the provided padding is supported and expands to size n. Args: padding: One of ALLOWED_PADDINGS, or an iterable of them. n: An integer, the size of the desired output list. Returns: If `padding` is one of ALLOWED_PADDINGS, a tuple of size `n` containing `n` copies of `padding`. I...
Whether to use SAME or VALID for the underlying convolution op. Args: padding: A tuple of members of ALLOWED_PADDINGS, e.g. as returned from `_fill_and_verify_padding`. Returns: One of CONV_OP_ALLOWED_PADDINGS, the padding method to use for the underlying convolution op. Raises: ValueErro...
Expands the provided stride to size n and pads it with 1s. def _fill_and_one_pad_stride(stride, n, data_format=DATA_FORMAT_NHWC): """Expands the provided stride to size n and pads it with 1s.""" if isinstance(stride, numbers.Integral) or ( isinstance(stride, collections.Iterable) and len(stride) <= n): i...
Verifies `inputs` is semantically correct. Args: inputs: An input tensor provided by the user. channel_index: The index of the channel dimension. data_format: The format of the data in `inputs`. Raises: base.IncompatibleShapeError: If the shape of `inputs` doesn't match `data_format`. ba...
Returns a default initializer for the weights of a convolutional module.

def create_weight_initializer(fan_in_shape, dtype=tf.float32):
  """Returns a default initializer for the weights of a convolutional module."""
  stddev = 1 / math.sqrt(np.prod(fan_in_shape))
  return tf.truncated_normal_initializer(stddev=stddev, dtype=dtype)
Returns the index of the channel dimension. Args: data_format: A string of characters corresponding to Tensor dimensionality. Returns: channel_index: An integer indicating the channel dimension. Raises: ValueError: If no channel dimension was found. def _find_channel_index(data_format): """Retur...
Initialize and apply a bias to the outputs. Figures out the shape of the bias vector, initializes it, and applies it. Args: inputs: A Tensor of shape `data_format`. outputs: A Tensor of shape `data_format`. channel_index: The index of the channel dimension in `inputs`. data_format: Format of `input...
Connects the _ConvND module into the graph, with input Tensor `inputs`. If this is not the first time the module has been connected to the graph, the input Tensor provided here must have the same number of channels, in order for the existing variables to be the correct size for the multiplication; the ...