Local and global 1d self attention. def local_global_attention(x, self_attention_bias, hparams, q_padding="LEFT", kv_padding="LEFT"): """Local and global 1d self attention.""" with tf.variable_scope("self_lo...
Full self-attention layer. def full_self_attention(x, self_attention_bias, hparams, q_padding="LEFT", kv_padding="LEFT"): """Full self-attention layer.""" x, x_shape, is_4d = maybe_reshape_4d_to_3d(x) if self_attentio...
Encoder-decoder attention over 1d inputs. def encdec_attention_1d(x, encoder_output, encoder_decoder_attention_bias, hparams): """Encoder-decoder attention over 1d inputs.""" x, x_shape, is_4d = maybe_reshape_4d_to_3d(x) encoder_output, _, _ = maybe_reshape_4d_to_3d(encoder_...
Multi layer transformer. def transformer_decoder_layers(inputs, encoder_output, num_layers, hparams, self_attention_bias=None, encoder_decoder_attention_bias=None, ...
Multi layer transformer encoder. def transformer_encoder_layers(inputs, num_layers, hparams, attention_type=AttentionType.GLOBAL, self_attention_bias=None, q_paddin...
ffn layer transformer. def ffn_layer(x, hparams, losses=None): """ffn layer transformer.""" with tf.variable_scope("ffn"): if hparams.ffn_layer == "none": return x if hparams.ffn_layer == "conv_hidden_relu": y = common_layers.dense_relu_dense( x, hparams.filter_size, ...
Creates masked self attention bias. Args: x: A tensor of shape [batch, length, depth] Returns: self_attention_bias: A tensor of shape [length, length, 1] def get_self_attention_bias(x): """Creates masked self attention bias. Args: x: A tensor of shape [batch, length, depth] Returns: self_...
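A masked self-attention bias is just a matrix with large negative values above the diagonal, so the softmax gives future positions roughly zero weight. A minimal NumPy sketch of that construction (the real function derives the length from x and reshapes to the documented [length, length, 1]):

import numpy as np

length = 4
# 0 on and below the diagonal (position i may attend to j <= i),
# -1e9 strictly above it (future positions are masked out).
bias = np.triu(np.full((length, length), -1e9), k=1)
print(bias)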
Postprocessing after decoding. Args: x: Tensor of shape [batch, ...], where ... can be any rank such that the number of elements in x is batch * rows * cols * hparams.hidden_size. rows: Integer representing number of rows in a 2-D data point. cols: Integer representing number of columns in a 2-D da...
Prepare encoder for images. def prepare_encoder(inputs, hparams, attention_type="local_1d"): """Prepare encoder for images.""" x = prepare_image(inputs, hparams, name="enc_channels") # Add position signals. x = add_pos_signals(x, hparams, "enc_pos") x_shape = common_layers.shape_list(x) if attention_type =...
Prepare decoder for images. def prepare_decoder(targets, hparams): """Prepare decoder for images.""" targets_shape = common_layers.shape_list(targets) channels = hparams.num_channels curr_infer_length = None # during training, images are [batch, IMG_LEN, IMG_LEN, 3]. # At inference, they are [batch, curr_...
Creates output from decoder output and vars. Args: decoder_output: Tensor of shape [batch, ...], where ... can be any rank such that the number of elements is batch * rows * cols * hparams.hidden_size. rows: Integer representing number of rows in a 2-D data point. cols: Integer representing number ...
Get separate embedding for each of the channels. def get_channel_embeddings(io_depth, targets, hidden_size, name="channel"): """Get separate embedding for each of the channels.""" targets_split = tf.split(targets, io_depth, axis=3) rgb_embedding_var = tf.get_variable("rgb_target_emb_%s" % name, ...
Step the batch of environments. The results of the step can be accessed from the variables defined below. Args: action: Tensor holding the batch of actions to apply. Returns: Operation. def simulate(self, action): """Step the batch of environments. The results of the step can be acc...
Reset the batch of environments. Args: indices: The batch indices of the environments to reset; defaults to all. Returns: Batch tensor of the new observations. def _reset_non_empty(self, indices): """Reset the batch of environments. Args: indices: The batch indices of the environme...
Decide whether to include a revision. If the number of revisions is large, we exclude some revisions to avoid a quadratic blowup in runtime, since the article is likely also large. We make the ratio between consecutive included revision numbers approximately equal to "factor". Args: revision_num: an i...
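A deterministic sketch of the thinning rule described above: keep revision numbers that land near successive powers of factor, so the ratio between consecutive kept numbers is approximately factor. The helper below is hypothetical; the actual implementation may randomize the choice.

import math

def include_revision(revision_num, factor=1.1):
    # Keep revisions whose number rounds to an integer power of factor;
    # small numbers are all kept, large numbers are thinned geometrically.
    if revision_num <= 0:
        return True
    k = round(math.log(revision_num) / math.log(factor))
    return int(round(factor ** k)) == revision_num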
Read wikipedia pages from a history dump. Since some pages can be terabytes in size (with all the revisions), we limit page size to max_page_size bytes. Args: my_file: an open file object. max_page_size: an integer Yields: strings def file_page_generator(my_file, max_page_size=2**28): """Read ...
Extract the title from a page. Args: page: a string Returns: a string def get_title(page): """Extract the title from a page. Args: page: a string Returns: a string """ start_pos = page.find("<title>") end_pos = page.find("</title>") assert start_pos != -1 assert end_pos != -1 st...
Extract the id from a page. Args: page: a string Returns: an integer def get_id(page): """Extract the id from a page. Args: page: a string Returns: an integer """ start_pos = page.find("<id>") end_pos = page.find("</id>") assert start_pos != -1 assert end_pos != -1 start_pos += ...
Extract the revisions of a page. Args: page: a string Returns: a list of strings def get_revisions(page): """Extract the revisions of a page. Args: page: a string Returns: a list of strings """ start_string = " <revision>\n" end_string = " </revision>\n" ret = [] current_pos...
Create a dictionary with title, id, and list of revisions. The dictionary contains: "title": a string "id": an integer "revisions": a list of strings Args: raw_page: a string Returns: a dictionary, or None in the case of an error. def parse_page(raw_page): """Create a dictionary with title, id...
Copy a file to a directory if it is not already there. Returns the target filepath. Args: source_filepath: a string target_directory: a string Returns: a string def maybe_copy_file_to_directory(source_filepath, target_directory): """Copy a file to a directory if it is not already there. Retur...
Generate pages from a list of .7z encoded history dumps. Args: corpus_files: a list of strings tmp_dir: a string max_page_size_exp: an integer Yields: strings def corpus_page_generator(corpus_files, tmp_dir, max_page_size_exp): """Generate pages from a list of .7z encoded history dumps. Args...
Extract the text from a revision. Args: revision: a string strip: a boolean Returns: a string def get_text(revision, strip=True): """Extract the text from a revision. Args: revision: a string strip: a boolean Returns: a string """ # text start tag looks like "<text ..otherstuf...
Remove everything in curly braces. Curly braces may be nested, so we keep track of depth. Args: text: a string Returns: a string def _remove_curly_braces(text): """Remove everything in curly braces. Curly braces may be nested, so we keep track of depth. Args: text: a string Returns: a...
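A minimal sketch of the depth-tracking idea, assuming single-character braces (MediaWiki templates actually use doubled braces, which the real function may handle differently):

def remove_curly_braces(text):
    out, depth = [], 0
    for ch in text:
        if ch == "{":
            depth += 1          # entering a (possibly nested) template
        elif ch == "}":
            depth = max(depth - 1, 0)
        elif depth == 0:
            out.append(ch)      # keep only text outside all braces
    return "".join(out)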
Remove double brackets, but leave the viewable text. Args: text: a string Returns: a string def _remove_double_brackets(text): """Remove double brackets, but leave the viewable text. Args: text: a string Returns: a string """ def replacement_fn(s): if ":" in s: # this is prob...
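The docstring's replacement_fn suggests the following shape: links whose target contains ":" (files, categories, interwiki) are dropped entirely, and [[target|display]] keeps only the display text. A hedged regex sketch:

import re

def remove_double_brackets(text):
    def replacement_fn(match):
        s = match.group(1)
        if ":" in s:             # e.g. [[File:...]] - probably not prose
            return ""
        return s.split("|")[-1]  # keep the viewable part of the link
    return re.sub(r"\[\[([^\[\]]*)\]\]", replacement_fn, text)

print(remove_double_brackets("see [[physics|the physics article]]"))
# -> "see the physics article"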
Remove lines that do not start with a letter or a quote. From inspecting the data, this seems to leave in most prose and remove most weird stuff. Args: text: a string Returns: a string def _remove_boring_lines(text): """Remove lines that do not start with a letter or a quote. From inspecting the...
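The filter itself is a one-liner; a sketch of the stated rule (keep lines whose first character is a letter or a quote):

def remove_boring_lines(text):
    kept = [line for line in text.split("\n")
            if line and (line[0].isalpha() or line[0] in "\"'")]
    return "\n".join(kept)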
Get or generate the vocabulary. Args: data_dir: a string tmp_dir: a string data_prefix: a string max_page_size_exp: an integer approx_vocab_size: an integer strip: a boolean Returns: a TextEncoder def get_or_generate_vocabulary(data_dir, tmp_dir, ...
Get encoder from vocab file. If vocab is not found in output dir, it will be copied there by copy_vocab_to_output_dir to clarify the vocab used to generate the data. Args: vocab_filepath: path to vocab, either local or cns Returns: A SubwordTextEncoder vocabulary object. None if the output_parallel_t...
Filter out examples that exceed max_edit_ratio between source and target. Args: source_target_input: a list of [source, target] pairs max_equal_to_diff_ratio: cutoff for ratio of equal chars / diff chars between source and target Returns: source_target_output: filtered subset of [source, ...
Artificially add spelling errors and infill markers. This function should be applied to the inputs of a correction model. The artificial errors are particularly useful to train a network to correct spelling when the training data does not contain many natural errors. Also replaces some substrings with an "...
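One way to corrupt clean text for this purpose is random character drops and duplications; the sketch below is hypothetical and only covers the spelling-error half (the real corruption scheme, including the infill-marker substitution, differs in detail):

import random

def add_artificial_errors(text, error_rate=0.03, seed=0):
    # Hypothetical corruption: drop or duplicate characters at random.
    rng = random.Random(seed)
    out = []
    for ch in text:
        r = rng.random()
        if r < error_rate:
            continue              # drop the character
        out.append(ch)
        if r > 1 - error_rate:
            out.append(ch)        # duplicate the character
    return "".join(out)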
Compute diffs between two sequences. This function is similar in functionality and spirit to difflib.SequenceMatcher.get_opcodes, but it seems to run faster. if a_start, a_end, b_start, b_end are specified, then we compute diffs of the segments a[a_start:a_end] and b[b_start:b_end]. Returned indices are re...
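Since the entry says the function mirrors difflib.SequenceMatcher.get_opcodes, the standard-library call shows the interface being replicated (an opcode tag plus index ranges into both sequences):

import difflib

a, b = "abcdef", "abXdef"
for tag, a0, a1, b0, b1 in difflib.SequenceMatcher(None, a, b).get_opcodes():
    print(tag, repr(a[a0:a1]), "->", repr(b[b0:b1]))
# equal 'ab' -> 'ab', replace 'c' -> 'X', equal 'def' -> 'def'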
Load variables from checkpoint. New model variables have the following name format: new_model_scope/old_model_scope/xxx/xxx:0 To map names to variables, strip the new_model_scope, then match the old_model_scope and remove the suffix :0. def begin(self): """Load variables from ...
Creates a TimeStep with both rewards and actions as optional. def create_time_step(cls, observation=None, done=False, raw_reward=None, processed_reward=None, action=None): """Creates a TimeStep with b...
Complete attention layer with preprocessing. def attention(targets_shifted, inputs_encoded, norm_fn, hparams, bias=None): """Complete attention layer with preprocessing.""" separabilities = [hparams.separability, hparams.separability] if hparams.separability < 0: separabilities = [hparams.separability - 1, h...
A stack of separable convolution blocks with residual connections. def multi_conv_res(x, padding, name, layers, hparams, mask=None, source=None): """A stack of separable convolution blocks with residual connections.""" with tf.variable_scope(name): padding_bias = None if mask is not None: padding_bia...
Experimental rank loss, thanks to kkurach@ for the code. def rank_loss(sentence_emb, image_emb, margin=0.2): """Experimental rank loss, thanks to kkurach@ for the code.""" with tf.name_scope("rank_loss"): # Normalize first as this is assumed in cosine similarity later. sentence_emb = tf.nn.l2_normalize(sen...
Loss telling to be more similar to your own targets than to others. def similarity_cost(inputs_encoded, targets_encoded): """Loss telling to be more similar to your own targets than to others.""" # This is a first very simple version: handle variable-length by padding # to same length and putting everything into...
Middle part of slicenet, connecting encoder and decoder. def slicenet_middle(inputs_encoded, targets, target_space_emb, mask, hparams): """Middle part of slicenet, connecting encoder and decoder.""" def norm_fn(x, name): with tf.variable_scope(name, default_name="norm"): return common_layers.apply_norm(...
Input embeddings -> is_padding. def embedding_to_padding(emb): """Input embeddings -> is_padding.""" emb_sum = tf.reduce_sum(tf.abs(emb), axis=-1, keep_dims=True) return tf.to_float(tf.equal(emb_sum, 0.0))
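The function above is complete as shown; a quick usage check in TF 1.x (an all-zero embedding row is treated as padding and maps to 1.0):

import tensorflow as tf  # TF 1.x, matching the code above

emb = tf.constant([[[0.5, -0.2], [0.0, 0.0]]])  # [batch=1, length=2, depth=2]
is_padding = embedding_to_padding(emb)          # -> [[[0.], [1.]]]
with tf.Session() as sess:
    print(sess.run(is_padding))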
The slicenet model, main step used for training. def slicenet_internal(inputs, targets, target_space, hparams, run_decoder=True): """The slicenet model, main step used for training.""" with tf.variable_scope("slicenet"): # Project to hidden size if necessary if inputs.get_shape().as_list()[-1] != hparams.h...
Set of hyperparameters. def slicenet_params1(): """Set of hyperparameters.""" hparams = common_hparams.basic_params1() hparams.batch_size = 1024 hparams.hidden_size = 768 hparams.dropout = 0.5 hparams.symbol_dropout = 0.2 hparams.label_smoothing = 0.1 hparams.clip_grad_norm = 2.0 hparams.num_hidden_l...
Version with Noam's decay scheme. def slicenet_params1_noam(): """Version with Noam's decay scheme.""" hparams = slicenet_params1() hparams.learning_rate_decay_scheme = "noam" hparams.learning_rate = 1.0 hparams.learning_rate_warmup_steps = 4000 hparams.initializer = "uniform_unit_scaling" hparams.optimi...
Version for fast local runs. def slicenet_params1_tiny(): """Version for fast local runs.""" hparams = slicenet_params1() hparams.attention_type = "simple" hparams.separability = 0 hparams.hidden_size = 128 hparams.num_hidden_layers = 2 hparams.batch_size = 512 hparams.learning_rate_warmup_steps = 200 ...
Small range of hyperparameters. def slicenet_range1(ranged_hparams): """Small range of hyperparameters.""" rhp = ranged_hparams rhp.set_float("clip_grad_norm", 1.0, 10.0, scale=rhp.LOG_SCALE) rhp.set_float("learning_rate", 0.02, 1.0, scale=rhp.LOG_SCALE) rhp.set_float("optimizer_adam_beta2", 0.995, 0.998) ...
Converts a space-separated string of tokens to lists of ids. Also store temporary vocabulary IDs for source OOV tokens. OOVs are represented by their temporary OOV number. E.g., if the vocabulary size is 50k and the source has 3 OOVs, then these temporary OOV numbers will be 50000, 50001, 50002. A...
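A hypothetical sketch of the temporary-OOV numbering described above; vocab and vocab_size are assumed inputs, and the real function works on space-separated strings rather than token lists:

def encode_source_with_oovs(tokens, vocab, vocab_size):
    ids, source_oov_tokens = [], []
    for tok in tokens:
        if tok in vocab:
            ids.append(vocab[tok])
        else:
            if tok not in source_oov_tokens:
                source_oov_tokens.append(tok)
            # OOVs get vocab_size, vocab_size + 1, ... in order of appearance
            ids.append(vocab_size + source_oov_tokens.index(tok))
    return ids, source_oov_tokens

print(encode_source_with_oovs("a b c".split(), {"a": 0}, 50000))
# ([0, 50000, 50001], ['b', 'c'])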
Converts a space-separated string of tokens to lists of ids. Also store a version of extended vocabulary IDs. For target OOVs that are in the source, encode them using the temporary vocab IDs. For target OOVs not in the source, encode them as <UNK>. Args: target: target string source_oov...
Decode ids back to tokens, handling temporary OOV IDs. Args: ids: vocab ids. May include source temporary OOV IDs starting from vocab_size. source_oov_id_to_token: a list of source OOV tokens, in the same order as they appear in the source. Returns: decoded to...
Computes new shape with the smallest side equal to `smallest_side`. Computes new shape with the smallest side equal to `smallest_side` while preserving the original aspect ratio. Args: height: an int32 scalar tensor indicating the current height. width: an int32 scalar tensor indicating the current ...
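The arithmetic is easiest to see in plain Python (the real function does the same with int32 scalar tensors):

def smallest_size_at_least(height, width, smallest_side):
    scale = smallest_side / min(height, width)  # enlarge or shrink ratio
    return int(round(height * scale)), int(round(width * scale))

print(smallest_size_at_least(480, 640, 256))  # (256, 341)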
Resize images preserving the original aspect ratio. Args: image: A 3-D image `Tensor`. smallest_side: A python integer or scalar `Tensor` indicating the size of the smallest side after resize. Returns: resized_image: A 3-D tensor containing the resized image. def _aspect_preserving_resize(image, ...
Distort the color of a Tensor image. Each color distortion is non-commutative and thus ordering of the color ops matters. Ideally we would randomly permute the ordering of the color ops. Rather than adding that level of complication, we select a distinct ordering of color ops for each preprocessing thread. ...
Computes func(x, sel), with sel sampled from [0...num_cases-1]. Args: x: input Tensor. func: Python function to apply. num_cases: Python int32, number of cases to sample sel from. Returns: The result of func(x, sel), where func receives the value of the selector as a python integer, but sel is...
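A sketch of the dispatch pattern in a TF 1.x graph; each branch closes over a python-int selector, matching the docstring. The original may use lower-level control-flow ops; tf.case is used here for brevity.

import tensorflow as tf  # TF 1.x

def apply_with_random_selector(x, func, num_cases):
    sel = tf.random_uniform([], maxval=num_cases, dtype=tf.int32)
    # Build one branch per case; func sees the selector as a python int.
    pred_fn_pairs = {tf.equal(sel, c): (lambda c=c: func(x, c))
                     for c in range(num_cases)}
    return tf.case(pred_fn_pairs, exclusive=True)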
Subtracts the given means from each image channel. For example: means = [123.68, 116.779, 103.939] image = _mean_image_subtraction(image, means) Note that the rank of `image` must be known. Args: image: a tensor of size [height, width, C]. means: a C-vector of values to subtract from each chann...
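A sketch of per-channel mean subtraction in TF 1.x, following the docstring's example values; it assumes the channel rank is known, as the docstring requires:

import tensorflow as tf  # TF 1.x

def mean_image_subtraction(image, means):
    num_channels = image.get_shape().as_list()[-1]  # rank must be known
    channels = tf.split(image, num_channels, axis=2)
    for i in range(num_channels):
        channels[i] -= means[i]
    return tf.concat(channels, axis=2)

# e.g. mean_image_subtraction(image, [123.68, 116.779, 103.939])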
vqa v2 preprocess image. def vqa_v2_preprocess_image( image, height, width, mode, resize_side=512, distort=True, image_model_fn="resnet_v1_152", ): """vqa v2 preprocess image.""" image = tf.image.convert_image_dtype(image, dtype=tf.float32) assert resize_side > 0 if resize_side: ...
Prepare one shard of the model for the encoder. Args: inputs: a Tensor. target_space: a Tensor. hparams: run hyperparameters features: optionally pass the entire features dictionary as well. This is needed now for "packed" datasets. Returns: encoder_input: a Tensor, bottom of encoder sta...
A stack of transformer layers. Args: encoder_input: a Tensor encoder_self_attention_bias: bias Tensor for self-attention (see common_attention.attention_bias()) hparams: hyperparameters for model name: a string nonpadding: optional Tensor with shape [batch_size, encoder_length] indic...
Feed-forward layer in the transformer. Args: x: a Tensor of shape [batch_size, length, hparams.hidden_size] hparams: hyperparameters for model pad_remover: an expert_utils.PadRemover object tracking the padding positions. If provided, when using convolutional settings, the padding is removed ...
Transformer on languagemodel_lm1b32k_packed. 50M Params. def lmx_base(): """Transformer on languagemodel_lm1b32k_packed. 50M Params.""" hparams = transformer.transformer_tpu() # sharing is counterproductive when underparameterized hparams.shared_embedding_and_softmax_weights = False # we judge by log-ppl, ...
HParams for training languagemodel_lm1b32k_packed. 880M Params. def lmx_h3k_f12k(): """HParams for training languagemodel_lm1b32k_packed. 880M Params.""" hparams = lmx_base() hparams.hidden_size = 3072 hparams.filter_size = 12288 hparams.batch_size = 2048 hparams.weight_dtype = "bfloat16" return hparam...
HParams for training languagemodel_lm1b32k_packed. 1470M Params. def lmx_h4k_f16k(): """HParams for training languagemodel_lm1b32k_packed. 1470M Params.""" hparams = lmx_base() hparams.hidden_size = 4096 hparams.filter_size = 16384 hparams.batch_size = 1024 hparams.weight_dtype = "bfloat16" return hpar...
Language model using relative attention. def lmx_relative(): """Language model using relative attention.""" hparams = lmx_base() hparams.self_attention_type = "dot_product_relative_v2" hparams.activation_dtype = "float32" hparams.weight_dtype = "float32" return hparams
Transformer with mixture of experts. 890M Params. def lmx_moe_h1k_f4k_x32(): """Transformer with mixture of experts. 890M Params.""" hparams = lmx_h1k_f4k() hparams.ffn_layer = "local_moe_tpu" hparams.moe_num_experts = 32 hparams.weight_dtype = "bfloat16" hparams.batch_size = 8192 return hparams
Transformer with mixture of experts. 890M Params. def lmx_moe_h1k_f8k_x16(): """Transformer with mixture of experts. 890M Params.""" hparams = lmx_h1k_f4k() hparams.filter_size = 8192 hparams.ffn_layer = "local_moe_tpu" hparams.moe_num_experts = 16 hparams.weight_dtype = "bfloat16" hparams.batch_size =...
HParams for training languagemodel_lm1b32k_packed. 880M Params. def lmx_h1k_f64k(): """HParams for training languagemodel_lm1b32k_packed. 880M Params.""" hparams = lmx_base() hparams.hidden_size = 1024 hparams.filter_size = 65536 hparams.batch_size = 2048 return hparams
Uncertainty reward based on logits. def compute_uncertainty_reward(logits, predictions): """Uncertainty reward based on logits.""" # TODO(rsepassi): Add support for L1/L2 loss models. Current code only # works for softmax models. vocab_size = logits.shape[-1] assert vocab_size > 1 log_probs = common_layers...
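One plausible reading of an "uncertainty reward" for a softmax model is the entropy of the predictive distribution; a hypothetical NumPy sketch (the real function may use a different statistic):

import numpy as np

def uncertainty_reward(logits):
    logits = logits - logits.max(axis=-1, keepdims=True)  # stabilize
    probs = np.exp(logits)
    probs /= probs.sum(axis=-1, keepdims=True)
    return -(probs * np.log(probs + 1e-9)).sum(axis=-1)   # entropy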
Set the random seed from flag everywhere. def set_random_seed(): """Set the random seed from flag everywhere.""" tf.set_random_seed(FLAGS.random_seed) random.seed(FLAGS.random_seed) np.random.seed(FLAGS.random_seed)
Generate data for a problem in _SUPPORTED_PROBLEM_GENERATORS. def generate_data_for_problem(problem): """Generate data for a problem in _SUPPORTED_PROBLEM_GENERATORS.""" training_gen, dev_gen, test_gen = _SUPPORTED_PROBLEM_GENERATORS[problem] num_train_shards = FLAGS.num_shards or 10 tf.logging.info("Generati...
Generate data for `EnvProblem`s. def generate_data_for_env_problem(problem_name): """Generate data for `EnvProblem`s.""" assert FLAGS.env_problem_max_env_steps > 0, ("--env_problem_max_env_steps " "should be greater than zero") assert FLAGS.env_problem_batch_size > ...
Generate data for a registered problem. def generate_data_for_registered_problem(problem_name): """Generate data for a registered problem.""" tf.logging.info("Generating data for %s.", problem_name) if FLAGS.num_shards: raise ValueError("--num_shards should not be set for registered Problem.") problem = re...
Traverses directory collecting input and target files. Args: directory: base path to extracted audio and transcripts. Returns: list of (media_base, media_filepath, label) tuples def _collect_data(directory): """Traverses directory collecting input and target files. Args: directory: base path to extr...
Checks if the filename exists under the path. def _file_exists(path, filename): """Checks if the filename exists under the path.""" return os.path.isfile(os.path.join(path, filename))
Checks if the filename is relative, not absolute. def _is_relative(path, filename): """Checks if the filename is relative, not absolute.""" return os.path.abspath(os.path.join(path, filename)).startswith(path)
Define ppo step. def define_ppo_step(data_points, hparams, action_space, lr): """Define ppo step.""" observation, action, discounted_reward, norm_advantage, old_pdf = data_points obs_shape = common_layers.shape_list(observation) observation = tf.reshape( observation, [obs_shape[0] * obs_shape[1]] + obs_...
PPO epoch. def define_ppo_epoch(memory, hparams, action_space, batch_size): """PPO epoch.""" observation, reward, done, action, old_pdf, value = memory # This is to avoid propagating gradients through simulated environment. observation = tf.stop_gradient(observation) action = tf.stop_gradient(action) rewa...
Generalized advantage estimator. Returns: GAE estimator. It will be one element shorter than the input; this is because to compute GAE for [0, ..., N-1] one needs V for [1, ..., N]. def calculate_generalized_advantage_estimator( reward, value, done, gae_gamma, gae_lambda): # pylint: disable=g-doc-args...
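The recursion underlying GAE, as a NumPy sketch consistent with the docstring's length note (the output is one element shorter because A_t needs V_{t+1}): A_t = delta_t + gamma*lam*(1 - done_t)*A_{t+1}, with delta_t = r_t + gamma*V_{t+1}*(1 - done_t) - V_t.

import numpy as np

def gae(reward, value, done, gamma, lam):
    n = len(reward) - 1           # output is one element shorter
    adv = np.zeros(n)
    running = 0.0
    for t in reversed(range(n)):
        nonterminal = 1.0 - float(done[t])
        delta = reward[t] + gamma * value[t + 1] * nonterminal - value[t]
        running = delta + gamma * lam * nonterminal * running
        adv[t] = running
    return adv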
Returns a reading spec of a gym space. NOTE: Only implemented currently for Box and Discrete. Args: gym_space: instance of gym.spaces whose spec we want. Returns: Reading spec for that space. Raises: NotImplementedError: For spaces whose reading spec we haven't implemented. def gym_space_spec(g...
Number of elements that can be represented by the space. Makes the most sense for Discrete or Box type with integral dtype, ex: number of actions in an action space. Args: gym_space: The gym space. Returns: np.int64 number of observations that can be represented by this space, or returns None whe...
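A sketch matching the docstring's cases, assuming the standard gym spaces API (Discrete exposes .n; Box exposes .low/.high/.dtype):

import numpy as np
from gym.spaces import Box, Discrete

def cardinality(gym_space):
    if isinstance(gym_space, Discrete):
        return np.int64(gym_space.n)
    if isinstance(gym_space, Box) and np.issubdtype(gym_space.dtype, np.integer):
        # integral Box: count points in the inclusive [low, high] grid
        return np.int64(np.prod(gym_space.high - gym_space.low + 1))
    return None  # not meaningfully countable (e.g. float Box)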
RMSE but will argmax if last dim is not 1. def image_rmse(predictions, labels, weights_fn=common_layers.weights_all): """RMSE but will argmax if last dim is not 1.""" if common_layers.shape_list(predictions)[-1] == 1: predictions = tf.squeeze(predictions, axis=[-1]) else: predictions = tf.argmax(predicti...
Computes mean(abs(preds-target)). def abs_error(predictions, labels, weights_fn=None): """Computes mean(abs(preds-target)).""" del weights_fn # Unused targets = tf.squeeze(labels, axis=[2, 3]) batch_abs_error = tf.abs(predictions - targets) den = tf.ones(tf.shape(batch_abs_error), dtype=tf.float32) return...
Explained variance, also known as R^2. def padded_variance_explained(predictions, labels, weights_fn=common_layers.weights_all): """Explained variance, also known as R^2.""" predictions, labels = common_layers.pad_with_zeros(predictions, labels) targets...
Percentage of times that top-k predictions match labels on non-0s. def padded_accuracy_topk(predictions, labels, k, weights_fn=common_layers.weights_nonzero): """Percentage of times that top-k predictions match labels on non-0s.""" with...
Sequence accuracy for L1/L2 losses: round down the predictions to ints. def rounding_sequence_accuracy(predictions, labels, weights_fn=common_layers.weights_nonzero): """Sequence accuracy for L1/L2 losses: round down the predictions to ints.""" outputs ...
Percentage of times that predictions match labels everywhere (non-0). def padded_sequence_accuracy(predictions, labels, weights_fn=common_layers.weights_nonzero): """Percentage of times that predictions match labels everywhere (non-0).""" # If the last ...
Average edit distance, ignoring padding 0s. The score returned is the edit distance divided by the total length of reference truth and the weight returned is the total length of the truth. Args: predictions: Tensor of shape [`batch_size`, `length`, 1, `num_classes`] and type tf.float32 representing ...
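The score is Levenshtein distance over reference length; a plain-Python sketch of that metric for a single pair (the TF version batches this and ignores padding 0s):

def edit_distance_score(pred, truth):
    m, n = len(pred), len(truth)
    dp = list(range(n + 1))                 # distance from "" to truth[:j]
    for i in range(1, m + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, n + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,          # delete pred[i-1]
                        dp[j - 1] + 1,      # insert truth[j-1]
                        prev + (pred[i - 1] != truth[j - 1]))
            prev = cur
    return dp[n] / n, n                     # (score, weight = truth length)

print(edit_distance_score([1, 2, 3], [1, 3]))  # (0.5, 2)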
Average log-perplexity excluding padding 0s. No smoothing. def padded_neg_log_perplexity(predictions, labels, weights_fn=common_layers.weights_nonzero): """Average log-perplexity excluding padding 0s. No smoothing.""" num, den = common_layers.padded_cross_e...
Average log-perplexity with custom targets_mask. def padded_neg_log_perplexity_with_masking( predictions, labels, features, weights_fn=None): """Average log-perplexity with custom targets_mask.""" del weights_fn if "targets_mask" not in features: raise ValueError("masked_neg_log_perplexity re...
Average log-perplexity excluding padding 0s. No smoothing. def dmol_neg_log_perplexity(predictions, labels, weights_fn=None): """Average log-perplexity excluding padding 0s. No smoothing.""" del weights_fn # Unused num, den = common_layers.dml_loss( ...
Rounding accuracy for L1/L2 losses: round down the predictions to ints. def rounding_accuracy(predictions, labels, weights_fn=common_layers.weights_nonzero): """Rounding accuracy for L1/L2 losses: round down the predictions to ints.""" outputs = tf.squeeze(tf.to_int32(pr...
Percentage of times that predictions match labels on non-0s. def padded_accuracy(predictions, labels, weights_fn=common_layers.weights_nonzero): """Percentage of times that predictions match labels on non-0s.""" # If the last dimension is 1 then we're using L1/L2 loss. ...
Used to evaluate the VQA accuracy. Let n be the times that predictions appear in labels, then final score is min(n/k, 1). Refer to https://arxiv.org/pdf/1505.00468.pdf. Args: predictions: A tensor with shape [batch_size, 1, 1, 1, vocab_size]. labels: A tensor with shape [batch_size, length, 1, 1]. ...
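The scoring rule from the docstring in miniature, where n counts how many reference answers match the prediction (k and the answer lists are illustrative):

def vqa_accuracy(prediction, label_answers, k=3):
    n = label_answers.count(prediction)
    return min(n / k, 1.0)

print(vqa_accuracy("cat", ["cat", "dog", "cat"]))  # 2/3 ~ 0.667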
Precision of set predictions. Args: predictions : A Tensor of scores of shape [batch, nlabels]. labels: A Tensor of int32s giving true set elements, of shape [batch, seq_length]. weights_fn: A function to weight the elements. Returns: hits: A Tensor of shape [batch, nlabels]. weights: A ...
Reshapes predictions and passes it to tensorboard. Args: predictions : The predicted image (logits). targets : The ground truth. hparams: model hparams. Returns: summary_proto: containing the summary images. weights: A Tensor of zeros of the same shape as predictions. def image_summary(predic...
Calculate softmax cross entropy given one-hot labels and logits. Args: logits: Tensor of size [batch-size, o=1, p=1, num-classes] labels: Tensor of size [batch-size, o=1, p=1, num-classes] weights_fn: Function that takes in labels and weighs examples (unused) Returns: cross-entropy (scalar), weight...
Calculate accuracy for a set, given one-hot labels and logits. Args: logits: Tensor of size [batch-size, o=1, p=1, num-classes] labels: Tensor of size [batch-size, o=1, p=1, num-classes] weights_fn: Function that takes in labels and weighs examples (unused) Returns: accuracy (scalar), weights def ...
Calculate recall for a set, given one-hot labels and logits. Predictions are converted to one-hot, as predictions[example][arg-max(example)] = 1 Args: logits: Tensor of size [batch-size, o=1, p=1, num-classes] labels: Tensor of size [batch-size, o=1, p=1, num-classes] weights_fn: Function that takes...
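The conversion the docstring states, predictions[example][arg-max(example)] = 1, in two NumPy lines:

import numpy as np

logits = np.array([[0.2, 1.5, -0.3]])
one_hot_pred = np.zeros_like(logits)
one_hot_pred[np.arange(len(logits)), logits.argmax(-1)] = 1  # [[0., 1., 0.]]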
Calculate sigmoid cross entropy for one-hot labels and logits. Args: logits: Tensor of size [batch-size, o=1, p=1, num-classes] labels: Tensor of size [batch-size, o=1, p=1, num-classes] weights_fn: Function that takes in labels and weighs examples (unused) Returns: cross_entropy (scalar), weights ...
Calculate ROC AUC. Requires binary classes. Args: logits: Tensor of size [batch_size, 1, 1, num_classes] labels: Tensor of size [batch_size, 1, 1, num_classes] weights_fn: Function that takes in labels and weighs examples (unused) Returns: ROC AUC (scalar), weights def roc_auc(logits, labels, w...
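For reference, the quantity being computed, via scikit-learn on toy binary data (sklearn here is only for illustration; the TF metric presumably computes something equivalent in-graph):

from sklearn.metrics import roc_auc_score

y_true = [1, 0, 1, 1]            # binary labels, as the docstring requires
y_score = [0.9, 0.3, 0.6, 0.2]   # model scores for the positive class
print(roc_auc_score(y_true, y_score))
# 0.666...: 2 of 3 positive/negative pairs are ranked correctly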
Creates the evaluation metrics for the model. Args: problems: List of Problem instances. model_hparams: a set of hparams. Returns: dict<metric name, metric function>. The metric functions have signature (Tensor predictions, features) -> (metric Tensor, update op), where features is a dict with...
See create_eager_metrics. def create_eager_metrics_for_problem(problem, model_hparams): """See create_eager_metrics.""" metric_fns = problem.eval_metric_fns(model_hparams) problem_hparams = problem.get_hparams(model_hparams) target_modality = problem_hparams.modality["targets"] weights_fn = model_hparams.wei...
Create metrics accumulators and averager for Eager mode. Args: metric_names: list<str> from Metrics enum weights_fn: function that takes labels and returns a weights mask. Defaults to weights of all 1, i.e. common_layers.weights_all. Use common_layers.weights_nonzero if labels have 0-padding. ...