Implementing attention that runs inside each expert.
Args:
x: A tensor of shape [batch, depth]. Contains representations from
different positions, which are lexicographically ordered.
batch_coordinate: A tensor of shape [batch, 1] containing the batch
coordinate of each element in x. This is neede... |
Attention using a mixture of experts.
Positions sent to the same expert can attend to each other.
The mixture of experts is "local" in that it is replicated on each
datashard.
local_moe flattens all batches to avoid problems with padding (e.g. all
padding going to the same expert, self attention ... |
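Since local_moe flattens all batches before dispatch, positions from different sequences can share an expert; the batch_coordinate above is what lets a bias keep them from attending to each other. A minimal sketch of such a bias (function name assumed):

import tensorflow.compat.v1 as tf

def cross_sequence_attention_bias(bc_q, bc_k):
  """Additive bias that blocks attention across different source sequences.

  bc_q: [length_q, 1] batch coordinate of each query position.
  bc_k: [length_k, 1] batch coordinate of each key position.
  Returns: [length_q, length_k] bias (0 within a sequence, -1e9 across).
  """
  same_sequence = tf.equal(bc_q, tf.transpose(bc_k))  # broadcast to a grid
  return -1e9 * (1.0 - tf.cast(same_sequence, tf.float32))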
Perform dot product on a subset of the sequence.
A mask can be added to the attention to prevent sequences from attending
to each other and to prevent attention to the future.
Args:
q (tf.Tensor): Queries of shape [length_expert_q, depth_k]
k (tf.Tensor): Keys of shape [length_expert_k, depth_k]
v (tf.Tensor)... |
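A plain-NumPy sketch of this per-expert dot product (shapes follow the Args above; the softmax-with-additive-mask formulation is an assumption based on the truncated signature):

import numpy as np

def expert_dot_product(q, k, v, mask=None):
  """Dot-product attention over one expert's subset of positions.

  q: [length_expert_q, depth_k]; k: [length_expert_k, depth_k];
  v: [length_expert_k, depth_v]; mask: optional additive bias of shape
  [length_expert_q, length_expert_k] (e.g. -1e9 to block cross-sequence
  or future positions).
  """
  logits = q @ k.T
  if mask is not None:
    logits = logits + mask
  weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
  weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
  return weights @ v  # [length_expert_q, depth_v]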
Perform dot product attention on a single sequence, for a single head.
This function dispatches q, k and v and loops over the buckets to compute
the attention dot product on each subsequence.
Args:
q (tf.Tensor): [length_q, depth_q]
k (tf.Tensor): [length_k, depth_q]
v (tf.Tensor): [length_k, depth_v... |
Construct the graph with either tf.map_fn or a python for loop.
This function is mainly for benchmarking purposes.
tf.map_fn is dynamic but is much slower than creating a static graph with a
for loop. However, having a for loop makes the graph much longer to build
and can consume too much RAM on distributed s... |
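A hedged sketch of the two construction strategies being benchmarked (names are placeholders):

import tensorflow.compat.v1 as tf

def map_ids(x, fn, use_map_fn=True):
  """Apply fn to each element of x, dynamically or with an unrolled loop.

  tf.map_fn keeps the graph small but runs slower; unrolling with a Python
  for loop is faster at run time but makes the graph bigger and slower to
  build.
  """
  if use_map_fn:
    return tf.map_fn(fn, x)
  # Static unrolling: requires a statically known leading dimension.
  return tf.stack([fn(e) for e in tf.unstack(x)])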
Sparse multihead self attention.
Perform an approximation of the full multihead attention by dispatching
the tokens using their keys/values. Thus the attention matrices are only
computed each time on a subset of the tokens.
Notes:
* The function doesn't perform scaling here (multihead_attention does
the /sq... |
Perform dot product attention on a single sequence, for a single head.
This function dispatches q, k and v and loops over the buckets to compute
the attention dot product on each subsequence.
Args:
q (tf.Tensor): [batch*heads, length_q, depth_q]
k (tf.Tensor): [batch*heads, length_k, depth_q]
v (tf.T... |
Sparse multihead self attention.
Perform an approximation of the full multihead attention by dispatching
the tokens using their keys/values. Thus the attention matrices are only
computed each time on a subset of the tokens.
Notes:
* The function doesn't perform scaling here (multihead_attention does
the /sq... |
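A toy single-head version of the idea, in NumPy: tokens are assigned to buckets and only attend within their bucket (the bucketing function itself is assumed, e.g. the hyperplane LSH shown further down):

import numpy as np

def bucketed_self_attention(q, k, v, bucket_ids, nb_buckets):
  """Approximate full attention: tokens attend only within their bucket.

  q, k: [length, depth_k]; v: [length, depth_v];
  bucket_ids: [length] int assignment of every token to a bucket.
  Scaling by 1/sqrt(depth) is assumed to happen upstream, per the note.
  """
  out = np.zeros((q.shape[0], v.shape[1]))
  for b in range(nb_buckets):
    idx = np.where(bucket_ids == b)[0]
    if idx.size == 0:
      continue  # empty bucket
    logits = q[idx] @ k[idx].T
    weights = np.exp(logits - logits.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    out[idx] = weights @ v[idx]
  return out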
Increase the length and change the dimensionality.
Expand/project each position of dim depth of the input into
factor tokens of dim out_depth
Args:
x (tf.Tensor): shape [batch_size, length, depth]
factor (int): Multiplicative factor for each token.
out_depth (int): Output depth (if None, keep depth... |
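One way to realize this expansion, sketched under the assumption that a dense projection plus reshape is acceptable (the library may use a transposed convolution instead):

import tensorflow.compat.v1 as tf

def expand_positions(x, factor, out_depth=None):
  """Expand each position into `factor` positions of dim `out_depth`.

  x: [batch_size, length, depth] -> [batch_size, length * factor, out_depth]
  """
  out_depth = out_depth or x.get_shape().as_list()[-1]
  shape = tf.shape(x)
  x = tf.layers.dense(x, factor * out_depth, name="expand_proj")  # project
  return tf.reshape(x, [shape[0], shape[1] * factor, out_depth])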
Decrease the length and change the dimensionality.
Merge/restore/compress factor positions of dim depth of the input into
a single position of dim out_depth.
This is basically just a strided convolution without overlap
between strides. The original length has to be divisible by factor.
Args:
x (tf.T... |
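A sketch of the compression as the strided convolution the docstring describes (helper name assumed):

import tensorflow.compat.v1 as tf

def compress_positions(x, factor, out_depth=None):
  """Merge every `factor` consecutive positions into one of dim `out_depth`.

  x: [batch_size, length, depth] -> [batch_size, length // factor, out_depth]
  length must be divisible by factor; kernel_size == strides means the
  convolution windows never overlap.
  """
  out_depth = out_depth or x.get_shape().as_list()[-1]
  return tf.layers.conv1d(
      x, filters=out_depth, kernel_size=factor, strides=factor,
      padding="valid", name="compress_conv")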
Reduce the length dimension using self attention.
Args:
x (tf.Tensor): float32 of shape [batch, length, depth]
block_length (int): Block length for local attention (Compression factor)
multihead_params (dict): parameters for multihead attention
Returns:
tf.Tensor: Compressed tensor of shape [batch... |
Reduce the length dimension by compressing with conv.
Args:
x (tf.Tensor): float32 of shape [batch, length, depth]
memory_antecedent (tf.Tensor): Unsupported for now
bias (tf.Tensor): Ignored
factor (int): compression factor for the memory sequence
multihead_params (dict): parameters for multihea... |
Scaled dot-product attention. One head. One spatial dimension.
Args:
q: a Tensor with shape [batch, length_q, depth_k]
k: a Tensor with shape [batch, length_kv, depth_k]
v: a Tensor with shape [batch, length_kv, depth_v]
bias: optional Tensor broadcastable to [batch, length_q, length_kv]
name: an... |
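A self-contained version of this one-head attention (the default name scope is an assumption):

import tensorflow.compat.v1 as tf

def scaled_dot_product_attention(q, k, v, bias=None, name=None):
  """Scaled dot-product attention. One head. One spatial dimension.

  q: [batch, length_q, depth_k]; k: [batch, length_kv, depth_k];
  v: [batch, length_kv, depth_v];
  bias: optional, broadcastable to [batch, length_q, length_kv].
  """
  with tf.name_scope(name, default_name="scaled_dot_product_attention"):
    depth_k = tf.cast(tf.shape(k)[-1], tf.float32)
    logits = tf.matmul(q, k, transpose_b=True) / tf.sqrt(depth_k)
    if bias is not None:
      logits += bias
    weights = tf.nn.softmax(logits, name="attention_weights")
    return tf.matmul(weights, v)  # [batch, length_q, depth_v]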
Multihead scaled-dot-product self-attention.
Includes layer norm.
Returns multihead-self-attention(layer_norm(x))
Computes one attention head at a time to avoid exhausting memory.
If forget=True, then forget all forwards activations and recompute on
the backwards pass.
Args:
x: a Tensor with shape ... |
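A hedged sketch of the head-at-a-time strategy, omitting the layer norm and the forget/recompute machinery described above (all names assumed):

import tensorflow.compat.v1 as tf

def heads_one_at_a_time(x, num_heads, head_size, bias=None):
  """Sum per-head outputs sequentially instead of materializing all heads.

  x: [batch, length, hidden]; only one head's attention matrix lives in
  memory at any point.
  """
  hidden = x.get_shape().as_list()[-1]
  y = 0.0
  for h in range(num_heads):
    with tf.variable_scope("head_%d" % h):
      q = tf.layers.dense(x, head_size, use_bias=False, name="q")
      k = tf.layers.dense(x, head_size, use_bias=False, name="k")
      v = tf.layers.dense(x, head_size, use_bias=False, name="v")
      logits = tf.matmul(q, k, transpose_b=True) * head_size ** -0.5
      if bias is not None:
        logits += bias
      o = tf.matmul(tf.nn.softmax(logits), v)
      y += tf.layers.dense(o, hidden, use_bias=False, name="o")
  return y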
Convert a group index to its bit representation.
def _idx_to_bits(self, i):
"""Convert a group index to its bit representation."""
bits = bin(i)[2:].zfill(self.nb_hyperplanes)  # Pad the bit string with 0s
return [-1.0 if b == "0" else 1.0 for b in bits] |
Return the bucket id of the given tensor.
Args:
x (tf.Tensor): float32 of shape [length, depth]
Returns:
tf.Tensor: One-hot vector int64 of shape [heads, length, nb_buckets]
containing the id of the bucket
def get_gates(self, x):
"""Return the bucket id of the given tensor.
Args:... |
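A NumPy sketch of how such bucketing can work, matching the MSB-first bit convention of _idx_to_bits (hyperplanes would be a learned or random [depth, nb_hyperplanes] matrix; single head for simplicity):

import numpy as np

def hyperplane_bucket_ids(x, hyperplanes):
  """Assign each position to a bucket with random-hyperplane LSH.

  x: [length, depth]; hyperplanes: [depth, nb_hyperplanes].
  Returns: [length] int64 bucket ids in [0, 2**nb_hyperplanes).
  """
  signs = (x @ hyperplanes) > 0                  # [length, nb_hyperplanes]
  powers = 2 ** np.arange(signs.shape[1])[::-1]  # MSB-first bit weights
  return signs.astype(np.int64) @ powers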
The image encoder for the VAN.
Similar architecture to Ruben's paper
(http://proceedings.mlr.press/v70/villegas17a/villegas17a.pdf).
Args:
x: The image to encode.
first_depth: The depth of the first layer. Depth is increased in subsequent
layers.
reuse: To reuse in variable scope or not.
h... |
The higher level structure encoder for the VAN.
The high level structure is a vector instead of an image.
Args:
x: The higher level structure to encode.
first_depth: The depth of the first layer. Depth is increased in subsequent
layers.
reuse: To reuse in variable scope or not.
Returns:
T... |
The VAN decoder.
Args:
x: The analogy information to decode.
skip_connections: The encoder layers which can be used as skip connections.
output_shape: The shape of the desired output image.
first_depth: The depth of the first layer of the van image encoder.
hparams: The python hparams.
Returns... |
Implements the deep analogy computation.
def analogy_computation_2d(f_first_enc,
f_first_frame,
f_current_enc,
first_depth):
"""Implements the deep analogy computation."""
with tf.variable_scope('analogy_computation'):
frame_enc_... |
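The body is truncated, but the deep analogy idea in the referenced paper is to transfer the change between encodings onto the first frame's features. A speculative sketch, assuming the feature maps share spatial dimensions:

import tensorflow.compat.v1 as tf

def analogy_sketch(f_first_enc, f_first_frame, f_current_enc, first_depth):
  """Combine the first frame's features with the encoding difference."""
  with tf.variable_scope("analogy_computation"):
    frame_enc_diff = f_current_enc - f_first_enc  # how the scene changed
    combined = tf.concat([f_first_frame, frame_enc_diff], axis=-1)
    return tf.layers.conv2d(
        combined, first_depth * 4, 3, padding="same", activation=tf.nn.relu)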
Implements a VAN.
Args:
first_enc: The first encoding.
first_frame: The first ground truth frame.
current_enc: The encoding of the frame to generate.
gt_image: The ground truth image, only used for regularization.
reuse: To reuse in variable scope or not.
scope_prefix: The prefix before the s... |
VGG network to use as encoder without the top few layers.
Can be pretrained.
Args:
x: The image to encode. In the range 0 to 1.
enc_final_size: The desired size of the encoding.
reuse: To reuse in variable scope or not.
scope_prefix: The prefix before the scope name.
hparams: The python hparam... |
LSTM predictor network.
def predictor(enc_flat,
action,
lstm_states,
pred_depth,
reuse=False,
scope_prefix='',
hparams=None):
"""LSTM predictor network."""
with tf.variable_scope(scope_prefix + 'predict', reuse=reuse):
enc_fin... |
Constructs the tensorflow graph of the hierarchical model.
def construct_model(images,
actions=None,
context_frames=2,
hparams=None,
is_training=True):
"""Constructs the tensorflow graph of the hierarchical model."""
pred_depth = 20
... |
Image quality metric based on maximal signal power vs. power of the noise.
Args:
true: the ground truth image.
pred: the predicted image.
Returns:
peak signal to noise ratio (PSNR)
def peak_signal_to_noise_ratio(true, pred):
"""Image quality metric based on maximal signal power vs. power of the nois... |
L2 distance between tensors true and pred.
Args:
true: the ground truth image.
pred: the predicted image.
Returns:
mean squared error between ground truth and predicted image.
def mean_squared_error(true, pred):
"""L2 distance between tensors true and pred.
Args:
true: the ground truth image.... |
L1 distance between tensors true and pred.
def l1_error(true, pred):
"""L1 distance between tensors true and pred."""
return tf.reduce_sum(tf.abs(true - pred)) / tf.to_float(tf.size(pred)) |
Calculates loss and psnr for predictions over multiple timesteps.
def calc_loss_psnr(gen_images, images, name, hparams=None, use_l1_loss=False):
"""Calculates loss and psnr for predictions over multiple timesteps."""
del hparams
with tf.name_scope(name):
loss, error, psnr_all = 0.0, 0.0, 0.0
for _, x, gx... |
SV2P model hparams.
def next_frame_sv2p():
"""SV2P model hparams."""
hparams = basic_stochastic.next_frame_basic_stochastic()
hparams.optimizer = "true_adam"
hparams.learning_rate_schedule = "constant"
hparams.learning_rate_constant = 1e-3
hparams.video_num_input_frames = 1
hparams.video_num_target_frame... |
SV2P discrete model hparams.
def next_frame_sv2p_discrete():
"""SV2P discrete model hparams."""
hparams = next_frame_sv2p()
hparams.action_injection = "multiplicative"
hparams.small_mode = True
hparams.add_hparam("bottleneck_bits", 128)
hparams.add_hparam("bottleneck_noise", 0.02)
hparams.add_hparam("dis... |
SV2P model for atari.
def next_frame_sv2p_atari():
"""SV2P model for atari."""
hparams = next_frame_sv2p()
hparams.video_num_input_frames = 4
hparams.video_num_target_frames = 4
hparams.action_injection = "multiplicative"
hparams.num_iterations_1st_stage = 12000
hparams.num_iterations_2nd_stage = 12000
... |
SV2P model for atari with softmax.
def next_frame_sv2p_atari_softmax():
"""SV2P model for atari with softmax."""
hparams = next_frame_sv2p_atari()
hparams.bottom = {}
hparams.loss = {}
hparams.top = {}
hparams.internal_loss = True
return hparams |
Tiny SV2P model.
def next_frame_sv2p_tiny():
"""Tiny SV2P model."""
hparams = next_frame_sv2p_atari_softmax()
hparams.batch_size = 2
hparams.tiny_mode = True
hparams.num_masks = 1
hparams.video_modality_loss_cutoff = 0.4
hparams.video_num_input_frames = 4
hparams.video_num_target_frames = 4
return hp... |
SV2P model with additional cutoff in L2 loss for environments like pong.
def next_frame_sv2p_cutoff():
"""SV2P model with additional cutoff in L2 loss for environments like pong."""
hparams = next_frame_sv2p()
hparams.video_modality_loss_cutoff = 0.4
hparams.video_num_input_frames = 4
hparams.video_num_targe... |
Download and extract MSCOCO datasets to directory unless it is there.
def _get_mscoco(directory):
"""Download and extract MSCOCO datasets to directory unless it is there."""
for url in _MSCOCO_URLS:
filename = os.path.basename(url)
download_url = os.path.join(_MSCOCO_ROOT_URL, url)
path = generator_uti... |
Image generator for MSCOCO captioning problem with token-wise captions.
Args:
data_dir: path to the data directory.
tmp_dir: path to temporary storage directory.
training: a Boolean; if true, we use the train set, otherwise the test set.
how_many: how many images and labels to generate.
start_fro... |
Convert FLAGS to list of args suitable for passing on cmd line.
def flags_as_args():
"""Convert FLAGS to list of args suitable for passing on cmd line."""
if hasattr(FLAGS, "flag_values_dict"):
args_dict = FLAGS.flag_values_dict()
else:
args_dict = dict(FLAGS.__dict__["__flags"])
del args_dict["cloud_m... |
Returns master_type for trainingInput.
def get_default_master_type(num_gpus=1):
"""Returns master_type for trainingInput."""
gpus_to_master_map = {
0: "standard",
1: "standard_p100",
4: "complex_model_m_p100",
8: "complex_model_l_gpu",
}
if num_gpus not in gpus_to_master_map:
raise ... |
Construct jobSpec for ML Engine job.
def configure_job():
"""Construct jobSpec for ML Engine job."""
# See documentation:
# https://cloud.google.com/ml-engine/reference/rest/v1/projects.jobs#traininginput
training_input = {
"pythonModule": "tensor2tensor.bin.t2t_trainer",
"args": flags_as_args(),
... |
Launch job on ML Engine.
def launch_job(job_spec):
"""Launch job on ML Engine."""
project_id = "projects/{}".format(
text_encoder.native_to_unicode(default_project()))
credentials = GoogleCredentials.get_application_default()
cloudml = discovery.build("ml", "v1", credentials=credentials,
... |
Tar and gzip src_dir and copy to GCS target_dir.
def _tar_and_copy(src_dir, target_dir):
"""Tar and gzip src_dir and copy to GCS target_dir."""
src_dir = src_dir.rstrip("/")
target_dir = target_dir.rstrip("/")
tmp_dir = tempfile.gettempdir().rstrip("/")
src_base = os.path.basename(src_dir)
shell_run(
... |
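A sketch of the shell commands such a helper ends up running (command layout is an assumption; the real code goes through shell_run):

import os

def tar_and_copy_commands(src_dir, target_dir, tmp_dir):
  """Build shell commands that tar+gzip src_dir and copy it to GCS."""
  src_base = os.path.basename(src_dir)
  tar_path = os.path.join(tmp_dir, "%s.tar.gz" % src_base)
  return [
      "tar -C %s -czf %s %s" % (os.path.dirname(src_dir), tar_path, src_base),
      "gsutil cp %s %s/" % (tar_path, target_dir),
  ]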
Tar Tensor2Tensor and cp to train_dir.
def tar_and_copy_t2t(train_dir):
"""Tar Tensor2Tensor and cp to train_dir."""
tf.logging.info("Tarring and pushing local Tensor2Tensor package.")
output = text_encoder.native_to_unicode(shell_output(
"pip show tensor2tensor")).split("\n")
assert output[1].startswit... |
Package, tar, and copy usr_dir to GCS train_dir.
def tar_and_copy_usr_dir(usr_dir, train_dir):
"""Package, tar, and copy usr_dir to GCS train_dir."""
tf.logging.info("Tarring and pushing t2t_usr_dir.")
usr_dir = os.path.abspath(os.path.expanduser(usr_dir))
# Copy usr dir to a temp location
top_dir = os.path.... |
Validates flags are set to acceptable values for CloudML Engine runs.
def validate_flags():
"""Validates flags are set to acceptable values for CloudML Engine runs."""
assert not job_dir()
assert FLAGS.output_dir.startswith("gs://")
assert FLAGS.data_dir.startswith("gs://")
assert FLAGS.worker_replicas <= 1
... |
Launch t2t_trainer on Cloud ML Engine.
def launch():
"""Launch t2t_trainer on Cloud ML Engine."""
validate_flags()
job_spec = configure_job()
job_name = job_spec["jobId"]
tf.logging.info("Launching job %s with ML Engine spec:\n%s", job_name,
pprint.pformat(job_spec))
assert confirm()
tr... |
Decorator for Layers, overriding add_weight for trainable initializers.
def add_weight(cls):
"""Decorator for Layers, overriding add_weight for trainable initializers."""
@functools.wraps(cls.add_weight)
def _add_weight(self,
name=None,
shape=None,
dtype=None... |
Get the KL multiplier, either dynamically or schedule based.
If hparams.latent_loss_multiplier_dynamic is set to true, then beta
is adjusted to keep the KL under hparams.latent_loss_multiplier_epsilon.
To do so, beta is updated at each iteration
by taking steps of size hparams.late... |
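A sketch of such a dynamic multiplier (the step-size hparam name is truncated above, so step_size is a placeholder; beta is a tf.Variable):

import tensorflow.compat.v1 as tf

def update_beta(beta, kl_loss, epsilon, step_size):
  """Nudge beta up when the KL exceeds epsilon, down otherwise."""
  direction = tf.where(kl_loss > epsilon, 1.0, -1.0)
  return tf.assign(beta, tf.maximum(beta + step_size * direction, 0.0))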
Get KL loss for all the predicted Gaussians.
def get_kl_loss(self, means, log_vars, means_p=None, log_vars_p=None):
"""Get KL loss for all the predicted Gaussians."""
kl_loss = 0.0
if means_p is None:
means_p = tf.unstack(tf.zeros_like(means))
if log_vars_p is None:
log_vars_p = tf.unstack(... |
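For reference, the standard KL between two diagonal Gaussians that such a loss sums over (written from the formula, not from the truncated body):

import tensorflow.compat.v1 as tf

def gaussian_kl(mean, log_var, mean_p=0.0, log_var_p=0.0):
  """KL(N(mean, exp(log_var)) || N(mean_p, exp(log_var_p))), summed."""
  return 0.5 * tf.reduce_sum(
      log_var_p - log_var
      + (tf.exp(log_var) + tf.square(mean - mean_p)) / tf.exp(log_var_p)
      - 1.0)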
Create the latent tower.
def construct_latent_tower(self, images, time_axis):
"""Create the latent tower."""
# No latent in the first phase
first_phase = tf.less(
self.get_iteration_num(), self.hparams.num_iterations_1st_stage)
# use all frames by default but this allows more
# predicted f... |
Encode transformer inputs.
Args:
encoder_function: the encoder function
inputs: Transformer inputs [batch_size, input_length, 1, hidden_dim] which
will be flattened along the two spatial dimensions.
target_space: scalar, target space ID.
hparams: hyperparameters for model.
attention_weights... |
Decode Transformer outputs from encoder representation.
Args:
decoder_function: the decoder function
decoder_input: inputs to bottom of the model. [batch_size, decoder_length,
hidden_dim]
encoder_output: Encoder representation. [batch_size, input_length,
hidden_dim]
encoder_decoder_attent... |
Create the initial cache for Transformer fast decoding.
def _init_transformer_cache(cache, hparams, batch_size, attention_init_length,
encoder_output, encoder_decoder_attention_bias,
scope_prefix):
"""Create the initial cache for Transformer fast decoding."""
... |
Given encoder output and a symbols to logits function, does fast decoding.
Implements both greedy and beam search decoding for TPU; beam search is used
iff beam_size > 1, otherwise beam-search-related arguments are ignored.
Args:
encoder_output: A tensor, output from encoder.
encoder_decoder_attention_bias... |
Given encoder output and a symbols to logits function, does fast decoding.
Implements both greedy and beam search decoding; beam search is used iff
beam_size > 1, otherwise beam-search-related arguments are ignored.
Args:
encoder_output: Output from encoder.
encoder_decoder_attention_bias: a bias tensor fo... |
Prepare one shard of the model for the decoder.
Args:
targets: a Tensor.
hparams: run hyperparameters
features: optionally pass the entire features dictionary as well. This is
needed now for "packed" datasets.
Returns:
decoder_input: a Tensor, bottom of decoder stack
decoder_self_attenti... |
A stack of transformer layers.
Args:
decoder_input: a Tensor
encoder_output: a Tensor
decoder_self_attention_bias: bias Tensor for self-attention (see
common_attention.attention_bias())
encoder_decoder_attention_bias: bias Tensor for encoder-decoder attention
(see common_attention.attenti... |
Set of hyperparameters.
def transformer_base_v1():
"""Set of hyperparameters."""
hparams = common_hparams.basic_params1()
hparams.norm_type = "layer"
hparams.hidden_size = 512
hparams.batch_size = 4096
hparams.max_length = 256
hparams.clip_grad_norm = 0. # i.e. no gradient clipping
hparams.optimizer_a... |
Set of hyperparameters.
def transformer_base_v2():
"""Set of hyperparameters."""
hparams = transformer_base_v1()
hparams.layer_preprocess_sequence = "n"
hparams.layer_postprocess_sequence = "da"
hparams.layer_prepostprocess_dropout = 0.1
hparams.attention_dropout = 0.1
hparams.relu_dropout = 0.1
hparam... |
Set of hyperparameters for lm1b packed following tpu params.
def transformer_base_vq_ada_32ex_packed():
"""Set of hyperparameters for lm1b packed following tpu params."""
hparams = transformer_base_v2()
expert_utils.update_hparams_for_vq_gating(hparams)
hparams.moe_num_experts = 32
hparams.gating_type = "vq"... |
Set of hyperparameters.
def transformer_base_vq1_16_nb1_packed_nda_b01_scales():
"""Set of hyperparameters."""
hparams = transformer_base_vq_ada_32ex_packed()
hparams.use_scales = int(True)
hparams.moe_num_experts = 16
hparams.moe_k = 1
hparams.beta = 0.1
hparams.layer_preprocess_sequence = "n"
hparams... |
Set of hyperparameters.
def transformer_base_vq1_16_nb1_packed_dan_b01_scales():
"""Set of hyperparameters."""
hparams = transformer_base_vq_ada_32ex_packed()
hparams.use_scales = int(True)
hparams.moe_num_experts = 16
hparams.moe_k = 1
hparams.beta = 0.1
hparams.ema = False
return hparams |
Set of hyperparameters.
def transformer_base_vq1_16_nb1_packed_nda_b01_scales_dialog():
"""Set of hyperparameters."""
hparams = transformer_base_vq1_16_nb1_packed_nda_b01_scales()
hparams.batch_size = 2048
hparams.max_length = 1024
hparams.filter_size = 3072
return hparams |
Set of hyperparameters.
def transformer_ada_lmpackedbase_dialog():
"""Set of hyperparameters."""
hparams = transformer_base_vq_ada_32ex_packed()
hparams.max_length = 1024
hparams.ffn_layer = "dense_relu_dense"
hparams.batch_size = 4096
return hparams |
Base parameters for Transformer model.
def transformer_base_v3():
"""Base parameters for Transformer model."""
# Update parameters here, then occasionally cut a versioned set, e.g.
# transformer_base_v2.
hparams = transformer_base_v2()
hparams.optimizer_adam_beta2 = 0.997
# New way of specifying learning r... |
HParams for transformer big model on WMT.
def transformer_big():
"""HParams for transformer big model on WMT."""
hparams = transformer_base()
hparams.hidden_size = 1024
hparams.filter_size = 4096
# Reduce batch size to 2048 from 4096 to be able to train the model on a GPU
# with 12 GB memory. For example, ... |
Hparams for transformer on LM for pretraining/finetuning/mixing.
def transformer_tall():
"""Hparams for transformer on LM for pretraining/finetuning/mixing."""
hparams = transformer_base()
hparams.batch_size = 2048
hparams.hidden_size = 768
hparams.filter_size = 3072
hparams.num_hidden_layers = 12
hparam... |
Tied means fine-tune CNN/DM summarization as LM.
def transformer_tall_finetune_tied():
"""Tied means fine-tune CNN/DM summarization as LM."""
hparams = transformer_tall()
hparams.multiproblem_max_input_length = 750
hparams.multiproblem_max_target_length = 100
hparams.multiproblem_schedule_max_examples = 0
... |
Fine-tune CNN/DM with a unidirectional encoder and decoder.
def transformer_tall_finetune_uniencdec():
"""Fine-tune CNN/DM with a unidirectional encoder and decoder."""
hparams = transformer_tall()
hparams.max_input_seq_length = 750
hparams.max_target_seq_length = 100
hparams.optimizer = "true_adam"
hparam... |
Train CNN/DM with a unidirectional encoder and decoder.
def transformer_tall_train_uniencdec():
"""Train CNN/DM with a unidirectional encoder and decoder."""
hparams = transformer_tall()
hparams.max_input_seq_length = 750
hparams.max_target_seq_length = 100
hparams.optimizer = "true_adam"
hparams.learning_... |
Hparams for transformer on LM for finetuning on text class problems.
def transformer_tall_finetune_textclass():
"""Hparams for transformer on LM for finetuning on text class problems."""
hparams = transformer_tall()
hparams.learning_rate_constant = 6.25e-5
hparams.learning_rate_schedule = ("linear_warmup*const... |
Hparams for transformer on LM pretraining (with 64k vocab).
def transformer_tall_pretrain_lm():
"""Hparams for transformer on LM pretraining (with 64k vocab)."""
hparams = transformer_tall()
hparams.learning_rate_constant = 2e-4
hparams.learning_rate_schedule = ("linear_warmup*constant*cosdecay")
hparams.opt... |
Hparams for transformer on LM pretraining (with 64k vocab) on TPU.
def transformer_tall_pretrain_lm_tpu_adafactor():
"""Hparams for transformer on LM pretraining (with 64k vocab) on TPU."""
hparams = transformer_tall_pretrain_lm()
update_hparams_for_tpu(hparams)
hparams.max_length = 1024
# For multi-problem ... |
Hparams for transformer on LM pretraining on TPU, large model.
def transformer_tall_pretrain_lm_tpu_adafactor_large():
"""Hparams for transformer on LM pretraining on TPU, large model."""
hparams = transformer_tall_pretrain_lm_tpu_adafactor()
hparams.hidden_size = 1024
hparams.num_heads = 16
hparams.filter_s... |
Hparams for transformer on LM pretraining on TPU with AdamW.
def transformer_tall_pretrain_lm_tpu():
"""Hparams for transformer on LM pretraining on TPU with AdamW."""
hparams = transformer_tall_pretrain_lm_tpu_adafactor()
# Optimizer gets reset in update_hparams_for_tpu so we set it again here.
hparams.learni... |
HParams for transformer base model for single GPU.
def transformer_base_single_gpu():
"""HParams for transformer base model for single GPU."""
hparams = transformer_base()
hparams.batch_size = 1024
hparams.learning_rate_schedule = "constant*linear_warmup*rsqrt_decay"
hparams.learning_rate_constant = 0.1
hp... |
HParams for parsing on WSJ only.
def transformer_parsing_base():
"""HParams for parsing on WSJ only."""
hparams = transformer_base()
hparams.attention_dropout = 0.2
hparams.layer_prepostprocess_dropout = 0.2
hparams.max_length = 512
hparams.learning_rate_warmup_steps = 16000
hparams.hidden_size = 1024
... |
HParams for parsing on WSJ semi-supervised.
def transformer_parsing_big():
"""HParams for parsing on WSJ semi-supervised."""
hparams = transformer_big()
hparams.max_length = 512
hparams.shared_source_target_embedding = False
hparams.learning_rate_warmup_steps = 4000
hparams.layer_prepostprocess_dropout = 0... |
Small range of hyperparameters.
def transformer_base_range(rhp):
"""Small range of hyperparameters."""
# After starting from base, set intervals for some parameters.
rhp.set_float("learning_rate", 0.3, 3.0, scale=rhp.LOG_SCALE)
rhp.set_discrete("learning_rate_warmup_steps",
[1000, 2000, 4000... |
Use relative position embeddings instead of absolute position encodings.
def transformer_relative():
"""Use relative position embeddings instead of absolute position encodings."""
hparams = transformer_base()
hparams.pos = None
hparams.self_attention_type = "dot_product_relative"
hparams.max_relative_positio... |
HParams for Transformer model on TPU for MLPerf on TPU 2x2.
def transformer_mlperf_tpu():
"""HParams for Transformer model on TPU for MLPerf on TPU 2x2."""
hparams = transformer_base_v3()
hparams.mlperf_mode = True
hparams.symbol_modality_num_shards = 1
hparams.max_length = 256 # ignored when using "_packed... |
Change hparams to be compatible with TPU training.
def update_hparams_for_tpu(hparams):
"""Change hparams to be compatible with TPU training."""
# Adafactor uses less memory than Adam.
# switch to Adafactor with its recommended learning rate scheme.
hparams.optimizer = "Adafactor"
hparams.learning_rate_sche... |
Small range of hyperparameters.
def transformer_tpu_range(rhp):
"""Small range of hyperparameters."""
# After starting from base, set intervals for some parameters.
rhp.set_float("learning_rate", 0.3, 3.0, scale=rhp.LOG_SCALE)
rhp.set_discrete("learning_rate_warmup_steps",
[1000, 2000, 4000,... |
No dropout, label smoothing, max_length.
def transformer_clean():
"""No dropout, label smoothing, max_length."""
hparams = transformer_base_v2()
hparams.label_smoothing = 0.0
hparams.layer_prepostprocess_dropout = 0.0
hparams.attention_dropout = 0.0
hparams.relu_dropout = 0.0
hparams.max_length = 0
ret... |
HParams for training languagemodel_lm1b8k on tpu. 92M Params.
def transformer_lm_tpu_0():
"""HParams for training languagemodel_lm1b8k on tpu. 92M Params."""
hparams = transformer_clean_big()
update_hparams_for_tpu(hparams)
hparams.num_heads = 4 # Heads are expensive on TPUs.
hparams.batch_size = 4096
h... |
HParams for training ASR model on LibriSpeech V1.
def transformer_librispeech_v1():
"""HParams for training ASR model on LibriSpeech V1."""
hparams = transformer_base()
hparams.num_heads = 4
hparams.filter_size = 1024
hparams.hidden_size = 256
hparams.num_encoder_layers = 5
hparams.num_decoder_layers = ... |
HParams for training ASR model on LibriSpeech V2.
def transformer_librispeech_v2():
"""HParams for training ASR model on LibriSpeech V2."""
hparams = transformer_base()
hparams.max_length = 1240000
hparams.max_input_seq_length = 1550
hparams.max_target_seq_length = 350
hparams.batch_size = 16
hparams.nu... |
HParams for training ASR model on Librispeech on TPU v1.
def transformer_librispeech_tpu_v1():
"""HParams for training ASR model on Librispeech on TPU v1."""
hparams = transformer_librispeech_v1()
update_hparams_for_tpu(hparams)
hparams.batch_size = 16
librispeech.set_librispeech_length_hparams(hparams)
r... |
HParams for training ASR model on Librispeech on TPU v2.
def transformer_librispeech_tpu_v2():
"""HParams for training ASR model on Librispeech on TPU v2."""
hparams = transformer_librispeech_v2()
update_hparams_for_tpu(hparams)
hparams.batch_size = 16
librispeech.set_librispeech_length_hparams(hparams)
r... |
Hparams for machine translation with ~1.1B parameters.
def transformer_tpu_1b():
"""Hparams for machine translation with ~1.1B parameters."""
hparams = transformer_tpu()
hparams.hidden_size = 2048
hparams.filter_size = 8192
hparams.num_hidden_layers = 8
# smaller batch size to avoid OOM
hparams.batch_siz... |
HParams for training languagemodel_wikitext103_l4k.
def transformer_wikitext103_l4k_v0():
"""HParams for training languagemodel_wikitext103_l4k."""
hparams = transformer_big()
# Adafactor uses less memory than Adam.
# switch to Adafactor with its recommended learning rate scheme.
hparams.optimizer = "Adafac... |
HParams for training languagemodel_wikitext103_l4k with memory.
def transformer_wikitext103_l4k_memory_v0():
"""HParams for training languagemodel_wikitext103_l4k with memory."""
hparams = transformer_wikitext103_l4k_v0()
hparams.split_targets_chunk_length = 64
hparams.split_targets_max_chunks = 64
hparams.... |
HParams for training languagemodel_wikitext103_l16k with memory.
def transformer_wikitext103_l16k_memory_v0():
"""HParams for training languagemodel_wikitext103_l16k with memory."""
hparams = transformer_wikitext103_l4k_memory_v0()
hparams.max_length = 16384
hparams.split_targets_chunk_length = 64
hparams.s... |
HParams for training image_cifar10_plain_gen_flat_rev with memory.
def transformer_cifar10_memory_v0():
"""HParams for training image_cifar10_plain_gen_flat_rev with memory."""
hparams = transformer_wikitext103_l4k_memory_v0()
hparams.num_hidden_layers = 6
hparams.max_length = 32 * 32 * 3
hparams.split_tar... |
HParams for training image_imagenet64_gen_flat_rev with memory.
def transformer_imagenet64_memory_v0():
"""HParams for training image_imagenet64_gen_flat_rev with memory."""
hparams = transformer_cifar10_memory_v0()
hparams.max_length = 64 * 64 * 3
hparams.split_targets_chunk_length = 64 * 3
hparams.split_t... |
Reshape input from 4D to 3D if necessary.
def maybe_reshape_4d_to_3d(x):
"""Reshape input from 4D to 3D if necessary."""
x_shape = common_layers.shape_list(x)
is_4d = False
if len(x_shape) == 4:
x = tf.reshape(x, [x_shape[0], x_shape[1]*x_shape[2], x_shape[3]])
is_4d = True
return x, x_shape, is_4d |
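The extra return values let the caller undo the reshape afterwards; a usage sketch in which process_3d stands for any hypothetical layer operating on [batch, length, depth]:

x, x_shape, is_4d = maybe_reshape_4d_to_3d(x)
y = process_3d(x)               # e.g. a 1d attention layer
if is_4d:
  y = tf.reshape(y, x_shape)    # restore [batch, height, width, depth]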
Local 2d self-attention layer.
def local_attention_2d(x, hparams, attention_type="local_attention_2d"):
"""Local 2d self-attention layer."""
# self-attention
with tf.variable_scope("local_2d_self_att"):
y = common_attention.multihead_attention_2d(
x,
None,
hparams.attention_key_chan... |
Local within block self attention.
def local_within_block_attention(x,
self_attention_bias,
hparams,
attention_type="local_within_block_mask_right",
q_padding="VALID",
... |
Local 1d self attention.
def local_attention_1d(x,
hparams,
attention_type="local_unmasked",
q_padding="VALID",
kv_padding="VALID"):
"""Local 1d self attention."""
# self-attention
x, x_shape, is_4d = maybe_reshape_4d_to_... |
Dilated attention with a masking strategy.
def get_dilated_1d_attention_mask(
num_heads, block_size,
num_blocks, memory_size, gap_size,
name="dilated_mask"):
"""Dilated attention with a masking strategy."""
mask = np.ones((num_heads, block_size, 2*block_size), bool)  # np.bool was removed in NumPy 1.24
# now going over every row to ... |
Dilated 1d self attention.
def dilated_attention_1d(x,
hparams,
attention_type="masked_dilated_1d",
q_padding="VALID",
kv_padding="VALID",
gap_size=2):
"""Dilated 1d self attention."""
# sel... |