Create metrics accumulators and averager for Eager mode. Args: metric_fns: dict<metric name, metric function> weights_fn: function that takes labels and returns a weights mask. Defaults to weights of all 1, i.e. common_layers.weights_all. Use common_layers.weights_nonzero if labels have 0-padding...
Calculate word error rate. Args: raw_predictions: The raw predictions. labels: The actual labels. lookup: A tf.constant mapping indices to output tokens. weights_fn: Weighting function. Returns: The word error rate. def word_error_rate(raw_predictions, labels, ...
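For intuition, a plain-Python reference for word error rate (a minimal sketch, not the graph-mode implementation above; the helper name is illustrative). It computes the word-level edit distance normalized by the reference length:

def word_error_rate_reference(reference, hypothesis):
  # Levenshtein distance over words, divided by the reference length.
  ref, hyp = reference.split(), hypothesis.split()
  dist = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
  for i in range(len(ref) + 1):
    dist[i][0] = i
  for j in range(len(hyp) + 1):
    dist[0][j] = j
  for i in range(1, len(ref) + 1):
    for j in range(1, len(hyp) + 1):
      substitution = dist[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
      dist[i][j] = min(substitution, dist[i - 1][j] + 1, dist[i][j - 1] + 1)
  return dist[len(ref)][len(hyp)] / max(len(ref), 1)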
Calculate the Pearson correlation coefficient. Args: predictions: The raw predictions. labels: The actual labels. weights_fn: Weighting function. Returns: The Pearson correlation coefficient. def pearson_correlation_coefficient(predictions, labels, weights_fn=None): """Calculate the Pearson correlation...
Prepare one shard of the model for the decoder. Args: targets: a Tensor. hparams: run hyperparameters Returns: decoder_input: a Tensor, bottom of decoder stack decoder_self_attention_bias: a Tensor, containing large negative values to implement masked attention and possibly biases for diagonal...
A stack of attention_lm layers. Args: decoder_input: a Tensor decoder_self_attention_bias: bias Tensor for self-attention (see common_attention.attention_bias()) hparams: hyperparameters for model name: a string Returns: y: a Tensor def attention_lm_decoder(decoder_input, ...
Set of hyperparameters. def attention_lm_base(): """Set of hyperparameters.""" hparams = common_hparams.basic_params1() hparams.hidden_size = 1024 hparams.batch_size = 8192 hparams.max_length = 256 hparams.dropout = 0.0 hparams.clip_grad_norm = 0. # i.e. no gradient clipping hparams.optimizer_adam_eps...
Cheap model. On lm1b_32k: 45M params, 2 steps/sec on [GeForce GTX TITAN X]. Returns: an hparams object. def attention_lm_small(): """Cheap model. On lm1b_32k: 45M params, 2 steps/sec on [GeForce GTX TITAN X]. Returns: an hparams object. """ hparams = attention_lm_base() hp...
Version to use for seq2seq. def attention_lm_translation(): """Version to use for seq2seq.""" hparams = attention_lm_base() hparams.layer_preprocess_sequence = "n" hparams.layer_postprocess_sequence = "da" hparams.learning_rate = 0.4 hparams.prepend_mode = "prepend_inputs_masked_attention" hparams.max_le...
Extracts all n-grams up to a given maximum order from an input segment. Args: segment: text segment from which n-grams will be extracted. max_order: maximum length in tokens of the n-grams returned by this method. Returns: The Counter containing all n-grams up to max_order in segment with...
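A minimal sketch of this extraction, assuming `segment` is already a list of tokens (helper name illustrative):

import collections

def get_ngrams_counter(segment, max_order):
  # Count every n-gram of length 1..max_order, keyed by token tuple.
  ngram_counts = collections.Counter()
  for order in range(1, max_order + 1):
    for i in range(len(segment) - order + 1):
      ngram_counts[tuple(segment[i:i + order])] += 1
  return ngram_counts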
BLEU score computation between labels and predictions. An approximate BLEU scoring method since we do not glue word pieces or decode the ids and tokenize the output. By default, we use an n-gram order of 4 and apply the brevity penalty. Also, this does not use beam search. Args: predictions: tensor, model predicti...
r"""Tokenize a string following the official BLEU implementation. See https://github.com/moses-smt/mosesdecoder/" "blob/master/scripts/generic/mteval-v14.pl#L954-L983 In our case, the input string is expected to be just one line and no HTML entities de-escaping is needed. So we just tokenize on punc...
Compute BLEU for two files (reference and hypothesis translation). def bleu_wrapper(ref_filename, hyp_filename, case_sensitive=False): """Compute BLEU for two files (reference and hypothesis translation).""" ref_lines = text_encoder.native_to_unicode( tf.gfile.Open(ref_filename, "r").read()).split("\n") hy...
Glob twice, first time possibly catching `NotFoundError`. tf.gfile.Glob may crash with ``` tensorflow.python.framework.errors_impl.NotFoundError: xy/model.ckpt-1130761_temp_9cb4cb0b0f5f4382b5ea947aadfb7a40; No such file or directory ``` Standard glob.glob does not have this bug, but does not handle mul...
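A minimal sketch of the retry described above, assuming TF1-style tf.gfile (helper name illustrative):

def try_twice_tf_glob(pattern):
  # The first Glob can race with a checkpoint temp file being renamed or
  # deleted; a second attempt usually succeeds.
  try:
    return tf.gfile.Glob(pattern)
  except tf.errors.NotFoundError:
    return tf.gfile.Glob(pattern)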
Return list of StepFiles sorted by step from files at path_prefix. def _read_stepfiles_list(path_prefix, path_suffix=".index", min_steps=0): """Return list of StepFiles sorted by step from files at path_prefix.""" stepfiles = [] for filename in _try_twice_tf_glob(path_prefix + "*-[0-9]*" + path_suffix): base...
Continuously yield new files with steps in filename as they appear. This is useful for checkpoint files or other files whose names differ just in an integer marking the number of steps and match the wildcard path_prefix + "*-[0-9]*" + path_suffix. Unlike `tf.contrib.training.checkpoints_iterator`, this implem...
Extract the VQA V2 annotation files to directory unless it's there. def _get_vqa_v2_annotations(directory, annotation_url, annotation_filename="vqa_v2.tar.gz"): """Extract the VQA V2 annotation files to directory unless it's there.""" annotation_file = genera...
Extract the VQA V2 image data set to directory unless it's there. def _get_vqa_v2_image_raw_dataset(directory, image_root_url, image_urls): """Extract the VQA V2 image data set to directory unless it's there.""" for url in image_urls: filename = os.path.basename(url) download_url = os.path.join(image_root_...
Extract the VQA V2 feature data set to directory unless it's there. def _get_vqa_v2_image_feature_dataset( directory, feature_url, feature_filename="mscoco_feat.tar.gz"): """Extract the VQA V2 feature data set to directory unless it's there.""" feature_file = generator_utils.maybe_download_from_drive( di...
Helper function for raising a value error for bad assignment. def _parse_fail(name, var_type, value, values): """Helper function for raising a value error for bad assignment.""" raise ValueError( 'Could not parse hparam \'%s\' of type \'%s\' with value \'%s\' in %s' % (name, var_type.__name__, value, v...
Update results_dictionary with a scalar value. Used to update the results_dictionary to be returned by parse_values when encountering a clause with a scalar RHS (e.g. "s=5" or "arr[0]=5".) Mutates results_dictionary. Args: name: Name of variable in assignment ("s" or "arr"). parse_fn: Function for p...
Update results_dictionary from a list of values. Used to update results_dictionary to be returned by parse_values when encountering a clause with a list RHS (e.g. "arr=[1,2,3]".) Mutates results_dictionary. Args: name: Name of variable in assignment ("arr"). parse_fn: Function for parsing individual...
Cast hparam to the provided type, if compatible. Args: name: Name of the hparam to be cast. param_type: The type of the hparam. value: The value to be cast, if compatible. Returns: The result of casting `value` to `param_type`. Raises: ValueError: If the type of `value` is not compatible wi...
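A simplified sketch of the compatibility rules such a cast typically enforces (the actual checks are more thorough; names are illustrative): bools must match exactly, and floats are not silently narrowed to ints.

def cast_if_compatible(name, param_type, value):
  # Reject silent bool<->number coercion and lossy float->int narrowing.
  incompatible = ValueError(
      "Could not cast hparam '%s' of type '%s' from value %r"
      % (name, param_type.__name__, value))
  if issubclass(param_type, bool) != isinstance(value, bool):
    raise incompatible
  if issubclass(param_type, int) and isinstance(value, float):
    raise incompatible
  return param_type(value)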
Parses hyperparameter values from a string into a python map. `values` is a string containing comma-separated `name=value` pairs. For each pair, the value of the hyperparameter named `name` is set to `value`. If a hyperparameter name appears multiple times in `values`, a ValueError is raised (e.g. 'a=1,a=2'...
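For example, assuming an HParams object constructed with these three hyperparameters:

hparams = HParams(learning_rate=0.1, num_layers=2, activation="relu")
hparams.parse("learning_rate=0.3,num_layers=4")
assert hparams.learning_rate == 0.3
assert hparams.num_layers == 4
# hparams.parse("learning_rate=0.3,learning_rate=0.4") would raise ValueError,
# since the name appears twice in the string.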
Adds {name, value} pair to hyperparameters. Args: name: Name of the hyperparameter. value: Value of the hyperparameter. Can be one of the following types: int, float, string, int list, float list, or string list. Raises: ValueError: if one of the arguments is invalid. def add_hparam...
Set the value of an existing hyperparameter. This function verifies that the type of the value matches the type of the existing hyperparameter. Args: name: Name of the hyperparameter. value: New value of the hyperparameter. Raises: KeyError: If the hyperparameter doesn't exist. ...
Removes the hyperparameter with key 'name'. Does nothing if it isn't present. Args: name: Name of the hyperparameter. def del_hparam(self, name): """Removes the hyperparameter with key 'name'. Does nothing if it isn't present. Args: name: Name of the hyperparameter. """ if h...
Override existing hyperparameter values, parsing new values from a string. See parse_values for more detail on the allowed format for values. Args: values: String. Comma separated list of `name=value` pairs where 'value' must follow the syntax described above. Returns: The `HParams` ...
Override existing hyperparameter values, parsing new values from a dictionary. Args: values_dict: Dictionary of name:value pairs. Returns: The `HParams` instance. Raises: KeyError: If a hyperparameter in `values_dict` doesn't exist. ValueError: If `values_dict` cannot be parsed. ...
Serializes the hyperparameters into JSON. Args: indent: If a non-negative integer, JSON array elements and object members will be pretty-printed with that indent level. An indent level of 0, or negative, will only insert newlines. `None` (the default) selects the most compact represen...
Override existing hyperparameter values, parsing new values from a json object. Args: values_json: String containing a json object of name:value pairs. Returns: The `HParams` instance. Raises: KeyError: If a hyperparameter in `values_json` doesn't exist. ValueError: If `values_jso...
Return the hyperparameter values as a Python dictionary. Returns: A dictionary with hyperparameter names as keys. The values are the hyperparameter values. def values(self): """Return the hyperparameter values as a Python dictionary. Returns: A dictionary with hyperparameter names as k...
Returns the value of `key` if it exists, else `default`. def get(self, key, default=None): """Returns the value of `key` if it exists, else `default`.""" if key in self._hparam_types: # Ensure that default is compatible with the parameter type. if default is not None: param_type, is_param_l...
Returns the field name given parameter type and is_list. Args: param_type: Data type of the hparam. is_list: Whether this is a list. Returns: A string representation of the field name. Raises: ValueError: If parameter type is not recognized. def _get_kind_name(param_type, is_list...
Returns the visualizations for query. Args: query: The query to process. Returns: A dictionary of results with processing and graph visualizations. def process(self, query): """Returns the visualizations for query. Args: query: The query to process. Returns: A dictionary...
Default output directory. def _default_output_dir(): """Default output directory.""" try: dataset_name = gin.query_parameter("inputs.dataset_name") except ValueError: dataset_name = "random" dir_name = "{model_name}_{dataset_name}_{timestamp}".format( model_name=gin.query_parameter("train.model")...
Set up gin configuration. def _setup_gin(): """Set up gin configuration.""" # Imports for configurables # pylint: disable=g-import-not-at-top,unused-import,g-bad-import-order,reimported,unused-variable from tensor2tensor.trax import models as _trax_models from tensor2tensor.trax import optimizers as _trax_opt ...
Return train and evaluation datasets, feature info and supervised keys. Args: dataset_name: a string, the name of the dataset; if it starts with "v1_" then we'll search T2T Problem registry for it, otherwise we assume it is a dataset from TFDS and load it from there. data_dir: directory where the...
Create an info-like tuple for feature given some shapes and vocab size. def _make_info(shape_list, num_classes): """Create an info-like tuple for feature given some shapes and vocab size.""" feature_info = collections.namedtuple("FeatureInfo", ["shape", "num_classes"]) cur_shape = list(shape_list[0]) # We need...
Select a subset of features from the example dict. def _select_features(example, feature_list=None): """Select a subset of features from the example dict.""" feature_list = feature_list or ["inputs", "targets"] return {f: example[f] for f in feature_list}
Return train and evaluation datasets, feature info and supervised keys. def _train_and_eval_dataset_v1(problem_name, data_dir): """Return train and evaluation datasets, feature info and supervised keys.""" problem = problems.problem(problem_name) train_dataset = problem.dataset(tf.estimator.ModeKeys.TRAIN, data_...
Batching function. def batch_fn(dataset, training, shapes, target_names, batch_size=32, eval_batch_size=32, bucket_batch_length=32, bucket_max_length=256, bucket_min_length=8, bucket_length_step=1.1, buckets=None): """Batching function.""" del target_names # If bucketing is...
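One way the bucket boundaries implied by bucket_min_length, bucket_max_length, and bucket_length_step can be derived, sketched under the assumption that boundaries grow geometrically (helper name illustrative):

def bucket_boundaries(min_length=8, max_length=256, length_step=1.1):
  # Boundaries grow by ~10% per step, so long sequences share buckets while
  # short ones are grouped tightly; the caller can then assign a per-bucket
  # batch size that keeps tokens-per-batch roughly constant.
  boundaries = []
  x = min_length
  while x <= max_length:
    boundaries.append(x)
    x = max(x + 1, int(x * length_step))
  return boundaries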
Shuffle and batch the given dataset. def shuffle_and_batch_data(dataset, target_names, features_info, training): """Shuffle and batch the given dataset.""" def append_targets(example): """Append targets to the example dictionary. Needed for Keras.""" if len(target_names) == 1: return (example, exampl...
Compile the model in Keras. def optimize_fn(model, optimizer=None, learning_rate_schedule=None, loss=None, metrics=None): """Compile the model in Keras.""" learning_rate_schedule = learning_rate_schedule or T2TLearningRateSchedule() if optimizer: ...
Train the given model on the given dataset. Args: data_dir: Directory where the data is located. output_dir: Directory where to put the logs and checkpoints. model_class: The model class to train. dataset: The name of the dataset to train on. input_names: List of strings with the names of the fea...
Main function to train the given model on the given dataset. Args: model_name: The name of the model to train. dataset_name: The name of the dataset to train on. data_dir: Directory where the data is located. output_dir: Directory where to put the logs and checkpoints. config_file: the gin config...
Decode from estimator. Interactive, from file, or from dataset. def decode(estimator, hparams, decode_hp): """Decode from estimator. Interactive, from file, or from dataset.""" if FLAGS.decode_interactive: if estimator.config.use_tpu: raise ValueError("TPU can only decode from dataset.") decoding.dec...
Score each line in a file and return the scores. def score_file(filename): """Score each line in a file and return the scores.""" # Prepare model. hparams = create_hparams() encoders = registry.problem(FLAGS.problem).feature_encoders(FLAGS.data_dir) has_inputs = "inputs" in encoders # Prepare features for...
Put time dimension on channels in an embedded video. def time_to_channels(embedded_video): """Put time dimension on channels in an embedded video.""" video_shape = common_layers.shape_list(embedded_video) if len(video_shape) != 5: raise ValueError("Assuming videos given as tensors in the format " ...
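A plausible completion, assuming videos arrive as [batch, time, height, width, depth] tensors:

def time_to_channels_sketch(embedded_video):
  # [batch, time, height, width, depth] -> [batch, height, width, time * depth]
  b, t, h, w, d = common_layers.shape_list(embedded_video)
  transposed = tf.transpose(embedded_video, [0, 2, 3, 1, 4])
  return tf.reshape(transposed, [b, h, w, t * d])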
Basic autoencoder model. def autoencoder_basic(): """Basic autoencoder model.""" hparams = common_hparams.basic_params1() hparams.optimizer = "adam" hparams.learning_rate_constant = 0.0002 hparams.learning_rate_warmup_steps = 500 hparams.learning_rate_schedule = "constant * linear_warmup" hparams.label_s...
Autoregressive autoencoder model. def autoencoder_autoregressive(): """Autoregressive autoencoder model.""" hparams = autoencoder_basic() hparams.add_hparam("autoregressive_forget_base", False) hparams.add_hparam("autoregressive_mode", "none") hparams.add_hparam("autoregressive_decode_steps", 0) hparams.ad...
Residual autoencoder model. def autoencoder_residual(): """Residual autoencoder model.""" hparams = autoencoder_autoregressive() hparams.optimizer = "Adafactor" hparams.clip_grad_norm = 1.0 hparams.learning_rate_constant = 0.5 hparams.learning_rate_warmup_steps = 500 hparams.learning_rate_schedule = "con...
Residual autoencoder model for text. def autoencoder_residual_text(): """Residual autoencoder model for text.""" hparams = autoencoder_residual() hparams.bottleneck_bits = 32 hparams.batch_size = 1024 hparams.hidden_size = 64 hparams.max_hidden_size = 512 hparams.bottleneck_noise = 0.0 hparams.bottom =...
Basic discrete autoencoder model. def autoencoder_basic_discrete(): """Basic discrete autoencoder model.""" hparams = autoencoder_autoregressive() hparams.num_hidden_layers = 5 hparams.hidden_size = 64 hparams.bottleneck_bits = 1024 hparams.bottleneck_noise = 0.1 hparams.add_hparam("discretize_warmup_steps", 16000) retu...
Residual discrete autoencoder model. def autoencoder_residual_discrete(): """Residual discrete autoencoder model.""" hparams = autoencoder_residual() hparams.bottleneck_bits = 1024 hparams.bottleneck_noise = 0.05 hparams.add_hparam("discretize_warmup_steps", 16000) hparams.add_hparam("bottleneck_kind", "ta...
Residual discrete autoencoder model, big version. def autoencoder_residual_discrete_big(): """Residual discrete autoencoder model, big version.""" hparams = autoencoder_residual_discrete() hparams.hidden_size = 128 hparams.max_hidden_size = 4096 hparams.bottleneck_noise = 0.1 hparams.residual_dropout = 0.4...
Ordered discrete autoencoder model. def autoencoder_ordered_discrete(): """Ordered discrete autoencoder model.""" hparams = autoencoder_residual_discrete() hparams.bottleneck_noise = 0.05 # Use 0.8 for ordered. hparams.gan_loss_factor = 0.05 hparams.add_hparam("unordered", True) return hparams
Ordered discrete autoencoder model. def autoencoder_ordered_discrete_image64(): """Ordered discrete autoencoder model.""" hparams = autoencoder_ordered_discrete() hparams.batch_size = 32 hparams.num_hidden_layers = 6 hparams.bottleneck_warmup_steps *= 2 hparams.gan_codes_warmup_steps *= 2 return hparams
Ordered discrete autoencoder model for text. def autoencoder_ordered_text(): """Ordered discrete autoencoder model for text.""" hparams = autoencoder_ordered_discrete() hparams.bottleneck_bits = 1024 hparams.bottleneck_shared_bits = 1024-64 hparams.bottleneck_shared_bits_start_warmup = 75000 hparams.bottle...
Ordered discrete autoencoder model for text, small version. def autoencoder_ordered_text_small(): """Ordered discrete autoencoder model for text, small version.""" hparams = autoencoder_ordered_text() hparams.bottleneck_bits = 32 hparams.num_hidden_layers = 3 hparams.hidden_size = 64 hparams.max_hidden_siz...
Discrete autoencoder model for compressing pong frames. def autoencoder_discrete_pong(): """Discrete autoencoder model for compressing pong frames.""" hparams = autoencoder_ordered_discrete() hparams.num_hidden_layers = 3 hparams.bottleneck_bits = 24 hparams.batch_size = 2 hparams.gan_loss_factor = 0.01 ...
Discrete autoencoder model for compressing pong frames for testing. def autoencoder_discrete_tiny(): """Discrete autoencoder model for compressing pong frames for testing.""" hparams = autoencoder_ordered_discrete() hparams.num_hidden_layers = 2 hparams.bottleneck_bits = 24 hparams.batch_size = 2 hparams.g...
Discrete autoencoder model for compressing cifar. def autoencoder_discrete_cifar(): """Discrete autoencoder model for compressing cifar.""" hparams = autoencoder_ordered_discrete() hparams.bottleneck_noise = 0.0 hparams.bottleneck_bits = 90 hparams.num_hidden_layers = 2 hparams.hidden_size = 256 hparams....
Tuning grid of the main autoencoder params. def autoencoder_range(rhp): """Tuning grid of the main autoencoder params.""" rhp.set_float("dropout", 0.01, 0.3) rhp.set_float("gan_loss_factor", 0.01, 0.1) rhp.set_float("bottleneck_l2_factor", 0.001, 0.1, scale=rhp.LOG_SCALE) rhp.set_discrete("bottleneck_warmup_...
A stack of self-attention layers. def image_encoder(image_feat, hparams, name="image_encoder", save_weights_to=None, make_image_summary=True): """A stack of self-attention layers.""" x = image_feat with tf.variable_scope(name): for laye...
Question encoder: runs an LSTM encoder and takes the last output as the encoding. def question_encoder(question, hparams, name="encoder"): """Question encoder: runs an LSTM encoder and takes the last output as the encoding.""" with tf.variable_scope(name, "encoder", values=[question]): question = common_layers.flatten4d3d(questio...
Attention on image feature with question as query. def attn(image_feat, query, hparams, name="attn"): """Attention on image feature with question as query.""" with tf.variable_scope(name, "attn", values=[image_feat, query]): attn_dim = hparams.attn_dim num_glimps = hparams.num_glimps num_channels = com...
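A simplified single-glimpse sketch of this attention (the code above supports num_glimps glimpses; the projection choices here are illustrative):

def single_glimpse_attn(image_feat, query, attn_dim):
  # image_feat: [batch, num_regions, channels]; query: [batch, channels].
  proj_image = tf.layers.dense(image_feat, attn_dim)                # [b, n, a]
  proj_query = tf.expand_dims(tf.layers.dense(query, attn_dim), 1)  # [b, 1, a]
  scores = tf.layers.dense(tf.nn.relu(proj_image + proj_query), 1)  # [b, n, 1]
  weights = tf.nn.softmax(scores, axis=1)
  return tf.reduce_sum(weights * image_feat, axis=1)                # [b, channels]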
Multi-layer perceptron with dropout and ReLU activation. def mlp(feature, hparams, name="mlp"): """Multi-layer perceptron with dropout and ReLU activation.""" with tf.variable_scope(name, "mlp", values=[feature]): num_mlp_layers = hparams.num_mlp_layers mlp_dim = hparams.mlp_dim for _ in range(num_mlp_...
VQA attention baseline hparams. def vqa_attention_base(): """VQA attention baseline hparams.""" hparams = common_hparams.basic_params1() hparams.batch_size = 128 hparams.use_fixed_batch_size = True hparams.optimizer = "adam" hparams.optimizer_adam_beta1 = 0.9 hparams.optimizer_adam_beta2 = 0.999 hpara...
Small range of hyperparameters. def vqa_attention_base_range(rhp): """Small range of hyperparameters.""" # After starting from base, set intervals for some parameters. rhp.set_float("learning_rate", 0.1, 1.0, scale=rhp.LOG_SCALE) rhp.set_float("clip_grad_norm", 0.1, 10, scale=rhp.LOG_SCALE) rhp.set_discrete(...
Append (step, value) pair to history for the given mode and metric. def append(self, mode, metric, step, value): """Append (step, value) pair to history for the given mode and metric.""" if mode not in self._values: self._values[mode] = collections.defaultdict(list) self._values[mode][metric].append(...
Get the history for the given metric and mode. def get(self, mode, metric): """Get the history for the given metric and mode.""" if mode not in self._values: logging.info("Metric %s not found for mode %s", metric, mode) return [] return list(self._values[mode][metric])
Metrics available for a given mode. def metrics_for_mode(self, mode): """Metrics available for a given mode.""" if mode not in self._values: logging.info("Mode %s not found", mode) return [] return sorted(list(self._values[mode].keys()))
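Putting the three methods together, a hypothetical usage sketch (assuming append stores (step, value) tuples, as the docstrings suggest):

history = History()  # construction of the history object is assumed
history.append("train", "loss", step=100, value=2.31)
history.append("train", "loss", step=200, value=1.87)
history.get("train", "loss")       # [(100, 2.31), (200, 1.87)]
history.metrics_for_mode("train")  # ["loss"]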
Performs a batch normalization followed by a ReLU. Args: inputs: `Tensor` of shape `[batch, channels, ...]`. is_training: `bool` for whether the model is training. relu: `bool` if False, omits the ReLU operation. init_zero: `bool` if True, initializes scale parameter of batch normalization wi...
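A sketch of such a batch-norm-then-ReLU helper, assuming TF1-style tf.layers and channels-first data; the momentum and epsilon values are illustrative:

def batch_norm_relu_sketch(inputs, is_training, relu=True, init_zero=False):
  # Zero-initializing gamma makes a residual branch start as the identity.
  gamma_init = tf.zeros_initializer() if init_zero else tf.ones_initializer()
  inputs = tf.layers.batch_normalization(
      inputs, axis=1, momentum=0.9, epsilon=1e-5, center=True, scale=True,
      training=is_training, gamma_initializer=gamma_init, fused=True)
  if relu:
    inputs = tf.nn.relu(inputs)
  return inputs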
Strided 2-D convolution with explicit padding. The padding is consistent and is based only on `kernel_size`, not on the dimensions of `inputs` (as opposed to using `tf.layers.conv2d` alone). Args: inputs: `Tensor` of size `[batch, channels, height_in, width_in]`. filters: `int` number of filters in the ...
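The explicit padding described here depends only on kernel_size; a sketch of the standard ResNet-style fixed padding (helper name illustrative):

def fixed_padding_sketch(inputs, kernel_size, data_format="channels_first"):
  # Pad so that the output size depends solely on the stride, not on how
  # 'SAME' padding would round for this particular input size.
  pad_total = kernel_size - 1
  pad_beg = pad_total // 2
  pad_end = pad_total - pad_beg
  if data_format == "channels_first":
    return tf.pad(inputs, [[0, 0], [0, 0],
                           [pad_beg, pad_end], [pad_beg, pad_end]])
  return tf.pad(inputs, [[0, 0], [pad_beg, pad_end],
                         [pad_beg, pad_end], [0, 0]])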
Standard building block for residual networks with BN before convolutions. Args: inputs: `Tensor` of size `[batch, channels, height, width]`. filters: `int` number of filters for the first two convolutions. Note that the third and final convolution will use 4 times as many filters. is_training: `...
Bottleneck block variant for residual networks with BN after convolutions. Args: inputs: `Tensor` of size `[batch, channels, height, width]`. filters: `int` number of filters for the first two convolutions. Note that the third and final convolution will use 4 times as many filters. is_training: `...
Creates one layer of blocks for the ResNet model. Args: inputs: `Tensor` of size `[batch, channels, height, width]`. filters: `int` number of filters for the first convolution of the layer. block_fn: `function` for the block to use within the model blocks: `int` number of blocks contained in the laye...
Resnet model. Args: inputs: `Tensor` images. block_fn: `function` for the block to use within the model. Either `residual_block` or `bottleneck_block`. layer_blocks: list of 3 or 4 `int`s denoting the number of blocks to include in each of the 3 or 4 block groups. Each group consists of blo...
Set of hyperparameters. def resnet_imagenet_34_td_weight_05_05(): """Set of hyperparameters.""" hp = resnet_imagenet_34() hp.use_td = "weight" hp.targeting_rate = 0.5 hp.keep_prob = 0.5 return hp
Set of hyperparameters. def resnet_imagenet_34_td_unit_05_05(): """Set of hyperparameters.""" hp = resnet_imagenet_34() hp.use_td = "unit" hp.targeting_rate = 0.5 hp.keep_prob = 0.5 return hp
Set of hyperparameters. def resnet_imagenet_34_td_unit_no_drop(): """Set of hyperparameters.""" hp = resnet_imagenet_34() hp.use_td = "unit" hp.targeting_rate = 0.0 hp.keep_prob = 1.0 return hp
Set of hyperparameters. def resnet_cifar_15(): """Set of hyperparameters.""" hp = resnet_base() hp.block_fn = "residual" hp.is_cifar = True hp.layer_sizes = [2, 2, 2] hp.filter_sizes = [16, 32, 64, 128] return hp
Returns the length of the Longest Common Subsequence between two seqs. Source: http://www.algorithmist.com/index.php/Longest_Common_Subsequence Args: x: sequence of words y: sequence of words Returns: integer: Length of LCS between x and y def _len_lcs(x, y): """Returns the length of the Longest ...
Computes the length of the LCS between two seqs. The implementation below uses a dynamic programming algorithm and runs in O(nm) time where n = len(x) and m = len(y). Source: http://www.algorithmist.com/index.php/Longest_Common_Subsequence Args: x: collection of words y: collection of words Returns: ...
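A minimal sketch of that O(nm) dynamic program (helper name illustrative):

def lcs_length(x, y):
  # table[i][j] holds the LCS length of x[:i] and y[:j].
  n, m = len(x), len(y)
  table = [[0] * (m + 1) for _ in range(n + 1)]
  for i in range(1, n + 1):
    for j in range(1, m + 1):
      if x[i - 1] == y[j - 1]:
        table[i][j] = table[i - 1][j - 1] + 1
      else:
        table[i][j] = max(table[i - 1][j], table[i][j - 1])
  return table[n][m]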
Computes ROUGE-L (sentence level) of two collections of sentences. Source: https://www.microsoft.com/en-us/research/publication/ rouge-a-package-for-automatic-evaluation-of-summaries/ Calculated according to: R_lcs = LCS(X,Y)/m P_lcs = LCS(X,Y)/n F_lcs = ((1 + beta^2)*R_lcs*P_lcs) / (R_lcs + (beta^2) * P_...
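A scalar sketch of the F-score from the formulas above, where m and n are the reference and candidate lengths; the beta value is illustrative (beta controls how much recall is weighted over precision):

def rouge_l_f_sketch(lcs, m, n, beta=1.2):
  # R_lcs = LCS/m, P_lcs = LCS/n, F_lcs per the docstring above.
  if m == 0 or n == 0:
    return 0.0
  r_lcs = lcs / m
  p_lcs = lcs / n
  denom = r_lcs + (beta ** 2) * p_lcs
  return ((1 + beta ** 2) * r_lcs * p_lcs) / denom if denom > 0 else 0.0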
ROUGE scores computation between labels and predictions. This is an approximate ROUGE scoring method since we do not glue word pieces or decode the ids and tokenize the output. Args: predictions: tensor, model predictions labels: tensor, gold output. Returns: rouge_l_fscore: approx rouge-l f1 sco...
Calculates n-grams. Args: n: which n-grams to calculate text: An array of tokens Returns: A set of n-grams def _get_ngrams(n, text): """Calculates n-grams. Args: n: which n-grams to calculate text: An array of tokens Returns: A set of n-grams """ ngram_set = set() text_lengt...
ROUGE-2 F1 score computation between labels and predictions. This is an approximate ROUGE scoring method since we do not glue word pieces or decode the ids and tokenize the output. Args: predictions: tensor, model predictions labels: tensor, gold output. Returns: rouge2_fscore: approx rouge-2 f1 ...
Normalize the examples from different tasks so they can be merged. This function is specific to NLP tasks and normalizes them so that in the end the example only has "targets" and "task_id". For tasks that originally have inputs, this is done by appending task_id to the inputs and prepending targets, so normal...
Convert a list of examples to a dataset containing mixed examples. Given a list of `n` dataset examples, flatten them by converting each element into a dataset and concatenating them into a single dataset. Args: *args: A list containing one example each from `n` different datasets. Returns: flat...
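A minimal sketch of this flattening with tf.data (helper name illustrative):

def flatten_zip_sketch(*args):
  # Turn each example into a one-element dataset, then chain them together.
  flattened = tf.data.Dataset.from_tensors(args[0])
  for example in args[1:]:
    flattened = flattened.concatenate(tf.data.Dataset.from_tensors(example))
  return flattened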
Multiproblem loss function. def aggregate_task_losses(hparams, problem_hparams, logits, feature_name, feature): """Multiproblem loss function.""" # If no reweighting, we want the default loss to mimic the LM lo...
LM loss for multiproblems. def aggregate_task_lm_losses(hparams, problem_hparams, logits, feature_name, feature): """LM loss for multiproblems.""" summaries = [] vocab_size = problem_hparams.vocab_...
Normalize the examples from different tasks so they can be merged. def normalize_example(self, task, example, encoder, hparams, is_infer): """Normalize the examples from different tasks so they can be merged.""" # Here we use the default function for NLP tasks that makes everything # a part of "targets" fe...
Generate task_ids for each problem. These ids correspond to the index of the task in the task_list. Args: encoder_vocab_size: the size of the vocab which is used to compute the index offset. def update_task_ids(self, encoder_vocab_size): """Generate task_ids for each problem. These ids...
Compute the maximum number of classes any subtask has. This is useful for modifying the size of the softmax to include the output labels for the classification tasks. Currently, labels from different tasks are overloaded. Returns: num: Highest number of output classes in any text classification ...
Called prior to self-attention, to incorporate memory items. Args: segment: an integer Tensor with shape [batch] query_antecedent: a Tensor with shape [batch, length_q, channels] memory_antecedent: must be None. Attention normally allows this to be a Tensor with shape [batch, length_m, ch...
Called prior to self-attention, to incorporate memory items. Args: segment: an integer Tensor with shape [batch] query_antecedent: a Tensor with shape [batch, length_q, channels] memory_antecedent: must be None. Attention normally allows this to be a Tensor with shape [batch, length_m, ch...
Called after self-attention. The memory can be updated here. Args: token: Data returned by pre_attention, which can be used to carry over state related to the current memory operation. x: a Tensor of data after self-attention and feed-forward Returns: a (possibly modified) version of ...
Compute the safe norm. def _norm(self, x): """Compute the safe norm.""" return tf.sqrt(tf.reduce_sum(tf.square(x), keepdims=True, axis=-1) + 1e-7)
Address the memory based on content similarity. Args: x: a tensor in the shape of [batch_size, length, depth]. Returns: the logits for each memory entry [batch_size, length, memory_size]. def _address_content(self, x): """Address the memory based on content similarity. Args: x: a te...
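A sketch of cosine-similarity addressing consistent with the _norm helper above (the sharpening factor and function name are illustrative):

def address_content_sketch(query, mem_keys, safe_norm, sharpening=1.0):
  # query: [batch, length, depth]; mem_keys: [batch, memory_size, depth].
  # Returns logits of shape [batch, length, memory_size].
  dot = tf.matmul(query, mem_keys, transpose_b=True)
  norms = tf.matmul(safe_norm(query), safe_norm(mem_keys), transpose_b=True)
  return sharpening * dot / (norms + 1e-7)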