Use this method to change the description of a supergroup or a channel. The bot must be an administrator in the chat for this to work and must have the appropriate admin rights. Returns True on success. :param chat_id: Int or Str: Unique identifier for the target chat or username of the target c...
Use this method to pin a message in a supergroup. The bot must be an administrator in the chat for this to work and must have the appropriate admin rights. Returns True on success. :param chat_id: Int or Str: Unique identifier for the target chat or username of the target channel (in...
Convenience function for `send_message(message.chat.id, text, reply_to_message_id=message.message_id, **kwargs)` def reply_to(self, message, text, **kwargs): """ Convenience function for `send_message(message.chat.id, text, reply_to_message_id=message.message_id, **kwargs)` """ return s...
Use this method to send answers to an inline query. On success, True is returned. No more than 50 results per query are allowed. :param inline_query_id: Unique identifier for the answered query :param results: Array of results for the inline query :param cache_time: The maximum amount of...
Use this method to send answers to callback queries sent from inline keyboards. The answer will be displayed to the user as a notification at the top of the chat screen or as an alert. :param callback_query_id: :param text: :param show_alert: :return: def answer_callback_query(s...
Use this method to get a sticker set. On success, a StickerSet object is returned. :param name: :return: def get_sticker_set(self, name): """ Use this method to get a sticker set. On success, a StickerSet object is returned. :param name: :return: """ resu...
Use this method to upload a .png file with a sticker for later use in createNewStickerSet and addStickerToSet methods (can be used multiple times). Returns the uploaded File on success. :param user_id: :param png_sticker: :return: def upload_sticker_file(self, user_id, png_sticker): ...
Use this method to create a new sticker set owned by a user. The bot will be able to edit the created sticker set. Returns True on success. :param user_id: :param name: :param title: :param png_sticker: :param emojis: :param contains_masks: :param mask_posit...
Use this method to add a new sticker to a set created by the bot. Returns True on success. :param user_id: :param name: :param png_sticker: :param emojis: :param mask_position: :return: def add_sticker_to_set(self, user_id, name, png_sticker, emojis, mask_position=None):...
Use this method to move a sticker in a set created by the bot to a specific position. Returns True on success. :param sticker: :param position: :return: def set_sticker_position_in_set(self, sticker, position): """ Use this method to move a sticker in a set created by the bot t...
Registers a callback function to be notified when a reply to `message` arrives. Warning: If `callback` is a lambda function, saving reply handlers will not work. :param message: The message for which we are awaiting a reply. :param callback: The callback function to be called when a ...
Registers a callback function to be notified when a reply to `message` arrives. Warning: If `callback` is a lambda function, saving reply handlers will not work. :param message_id: The id of the message for which we are awaiting a reply. :param callback: The callback function to be call...
Registers a callback function to be notified when a new message arrives after `message`. Warning: If `callback` is a lambda function, saving next step handlers will not work. :param message: The message after which we want to handle the next message in the same chat. :param callback: The call...
Registers a callback function to be notified when a new message arrives after `message`. Warning: If `callback` is a lambda function, saving next step handlers will not work. :param chat_id: The chat in which we want to handle the next message. :param callback: The callback function which ...
Clears all callback functions registered by register_next_step_handler(). :param message: The message for which we want to clear next step handlers in the same chat. def clear_step_handler(self, message): """ Clears all callback functions registered by register_next_step_handler(). ...
Clears all callback functions registered by register_next_step_handler(). :param chat_id: The chat for which we want to clear next step handlers def clear_step_handler_by_chat_id(self, chat_id): """ Clears all callback functions registered by register_next_step_handler(). :param chat_...
Clears all callback functions registered by register_for_reply() and register_for_reply_by_message_id(). :param message_id: The message id for which we want to clear reply handlers def clear_reply_handlers_by_message_id(self, message_id): """ Clears all callback functions registered by registe...
Message handler decorator. This decorator can be used to decorate functions that must handle certain types of messages. All message handlers are tested in the order they were added. Example: bot = TeleBot('TOKEN') # Handles all messages which text matches regexp. @bot....
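The decorator-based handler registration described above can be sketched with a toy registry. `MiniBot` and its methods here are hypothetical stand-ins, not the real TeleBot API; only the ordered-dispatch behavior follows the docstring.

```python
import re

class MiniBot:
    """Toy sketch of decorator-based message handlers (hypothetical)."""

    def __init__(self):
        self._handlers = []

    def message_handler(self, regexp=None):
        # Returns a decorator that registers `fn` with an optional regexp filter.
        def decorator(fn):
            self._handlers.append((regexp, fn))
            return fn
        return decorator

    def dispatch(self, text):
        # Handlers are tested in the order they were added, as the docstring says.
        for pattern, fn in self._handlers:
            if pattern is None or re.search(pattern, text):
                return fn(text)
        return None
```

A handler added first always wins when several patterns match, which is why registration order matters.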
Generates code samples. Args: max_length: int. max literal length. max_nest: int. max nesting level. ops: CodeOp. set of allowable operations. Returns: 1. (str) output value. 2. (str) Code operation. def generate_code(max_length, max_nest, ops): """Generates code samples. Args: max_...
Defines tokens. Args: max_value: the maximum numeric range for the token. Returns: list of string tokens in vocabulary. def get_tokens(max_value): """Defines tokens. Args: max_value: the maximum numeric range for the token. Returns: list of string tokens in vocabulary. """ vocab = [st...
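A minimal sketch of the vocabulary builder described above: one string token per integer up to `max_value`. The listing is truncated, so the additional variable-name tokens (lowercase letters here) are an assumption.

```python
def get_tokens(max_value):
    """Builds a toy vocabulary of string tokens (sketch; letter tokens are
    an assumption, since the original listing is truncated)."""
    # One string token per integer value the generated programs may produce.
    vocab = [str(i) for i in range(max_value)]
    # Lowercase letters serve as variable-name tokens.
    vocab += [chr(c) for c in range(ord("a"), ord("z") + 1)]
    return vocab
```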
Factory method for LearnToExecute Dataset module. Args: batch_size: (int). The number of elements in a mini-batch. max_length: (int). Maximum character length. max_nesting: (int). Maximum level of statement nesting. token_by_char: (bool). Tokenize by character or words? mode: (string). Either 'tr...
Samples up to maximum difficulty. def fetch(self): """Samples up to maximum difficulty.""" length = np.random.randint(1, self._max_length + 1) nesting = np.random.randint(1, self._max_nesting + 1) return length, nesting
Increments level difficulty (length and nesting) by 1 until maximum. def update(self, loss, force=False): """Increments level difficulty (length and nesting) by 1 until maximum.""" do_update = super(CombineCurriculum, self).update(loss, force) if not do_update: return False if self._curr_length ...
Samples up to current difficulty. def fetch(self): """Samples up to current difficulty.""" length = np.random.randint(1, self._curr_length + 1) nesting = np.random.randint(1, self._curr_nesting + 1) return length, nesting
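The curriculum sampling above can be mirrored in plain Python. Note the off-by-one convention: `np.random.randint(1, n + 1)` excludes its upper bound, while the stdlib `random.randint(1, n)` used in this sketch includes it, so both sample from 1..n inclusive.

```python
import random

def fetch(curr_length, curr_nesting):
    """Samples a (length, nesting) pair up to the current difficulty
    (a stdlib sketch of the curriculum `fetch` shown above)."""
    length = random.randint(1, curr_length)    # inclusive upper bound
    nesting = random.randint(1, curr_nesting)  # inclusive upper bound
    return length, nesting
```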
Generates batched data in flat numpy arrays. Raises: ValueError: When too many generate calls are required. def generate_flat_data(self): """Generates batched data in flat numpy arrays. Raises: ValueError: When too many generate calls are required. """ # Construct the string statement...
Produces the list of integer indices corresponding to a token list. Args: char_input: The character string to be tokenized. max_len: Truncation length. by_char: If true each character is a token - otherwise alpha-numeric groupings are tokens. Returns: A padded list of st...
Returns an operations list based on the specified task index. Args: task_type: indicates the task type used. Returns: List of the eligible ops. def get_task_ops(task_type=TaskType.ALG_CTRL): """Returns an operations list based on the specified task index. Args: task_type: indicates...
Generator function for batchifying data for learning to execute. Yields: tuple: 1. one-hot input tensor, representing programmatic input 2. one-hot target tensor, the evaluation result. 3. one-hot decoder target, start symbol added for sequence decoding. 4. batch size tensor c...
Returns a human-readable version of a one-hot encoding of words. Args: data: (numpy.ndarray S x B x OH). One-hot encoding of words. S is sequence length, B is batch size, OH is one hot dimensionality. label_batch_entries: (bool). Whether to add numerical label before each batch elem...
Infers the data format for the fused batch norm. It uses the axis option to infer this information. Specifically, the axis value (0, 1, 2) corresponds to data format NHWC and the axis value (0, 2, 3) to data format NCHW. Args: input_batch: A Tensor of arbitrary dimension. Returns: A s...
Creates a fused batch normalization op. def _fused_batch_norm_op(self, input_batch, mean, variance, use_batch_stats): """Creates a fused batch normalization op.""" # Store the original shape of the mean and variance. mean_shape = mean.get_shape() variance_shape = variance.get_shape() # The fused ba...
Creates a batch normalization op. It uses the tf.nn.batch_normalization op by default and the tf.nn.fused_batch_norm op to support fused batch normalization. Args: input_batch: An input Tensor of arbitrary dimension. mean: A mean tensor, of the same dtype as `input_batch`. variance: A var...
Sets up optional scale and offset factors. def _build_scale_offset(self, dtype): """Sets up optional scale and offset factors.""" # tf.nn.fused_batch_norm accepts float16 batch data, but not scale/offset. if self._fused and dtype == tf.float16: dtype = tf.float32 # The fused batch norm operatio...
Connects the BatchNorm module into the graph. Args: input_batch: A Tensor of arbitrary dimension. By default, the final dimension is not reduced over when computing the minibatch statistics. is_training: A boolean to indicate if the module should be connected in training mode, meaning t...
Produces the list of integer indices corresponding to a token list. def tokenize(self, token_list): """Produces the list of integer indices corresponding to a token list.""" return [ self._vocab_dict.get(token, self._vocab_dict[self.UNK]) for token in token_list ]
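The token-to-index lookup with an UNK fallback can be sketched as follows; the class wrapper and vocabulary construction are assumptions, only the `get`-with-default lookup comes from the listing above.

```python
class Tokenizer:
    """Minimal sketch of token-to-index lookup with an UNK fallback."""

    UNK = "_unk_"

    def __init__(self, vocab):
        # Map each known token to its integer index; reserve one index for UNK.
        self._vocab_dict = {token: i for i, token in enumerate(vocab)}
        self._vocab_dict.setdefault(self.UNK, len(self._vocab_dict))

    def tokenize(self, token_list):
        # Unknown tokens fall back to the UNK index instead of raising KeyError.
        return [self._vocab_dict.get(token, self._vocab_dict[self.UNK])
                for token in token_list]
```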
Returns a batch of sequences. Returns: obs: np.int32 array of size [Time, Batch] target: np.int32 array of size [Time, Batch] def _get_batch(self): """Returns a batch of sequences. Returns: obs: np.int32 array of size [Time, Batch] target: np.int32 array of size [Time, Batch] ...
Returns a tuple containing observation and target one-hot tensors. def _build(self): """Returns a tuple containing observation and target one-hot tensors.""" q = tf.FIFOQueue( self._queue_capacity, [self._dtype, self._dtype], shapes=[[self._num_steps, self._batch_size, self._vocab_size]]*2) ...
Returns cost. Args: logits: model output. target: target. Returns: Cross-entropy loss for a sequence of logits. The loss will be averaged across time steps if time_average_cost was enabled at construction time. def cost(self, logits, target): """Returns cost. Args: logi...
Returns a human-readable version of a one-hot encoding of words. Args: data: A tuple with (obs, target). `obs` is a numpy array with one-hot encoding of words. label_batch_entries: bool. Whether to add numerical label before each batch element in the output string. indices: Li...
Gets training and testing dataset iterators. Args: name: String. Name of dataset, either 'mnist' or 'cifar10'. train_batch_size: Integer. Batch size for training. test_batch_size: Integer. Batch size for testing. Returns: Dict containing: train_iterator: A tf.data.Iterator, over training dat...
Returns the name of the variable scope indicated by the given value. Args: value: String, variable scope, or object with `variable_scope` attribute (e.g., Sonnet module). Returns: The name (a string) of the corresponding variable scope. Raises: ValueError: If `value` does not identify a variabl...
Returns a tuple of `tf.Variable`s in a scope for a given collection. Args: scope: `tf.VariableScope` or string to retrieve variables from. collection: Collection to restrict query to. By default this is `tf.GraphKeys.TRAINABLE_VARIABLES`, which doesn't include non-trainable variables such as mov...
Returns tuple of `tf.Variable`s declared inside an `snt.Module`. Note that this operates by searching the variable scope a module contains, and so does not know about any modules which were constructed elsewhere but used inside this module. Args: module: `snt.Module` instance to query the scope of. co...
Checks if all items in the dictionary and in subdictionaries are callables. Args: dictionary: Dictionary of callables or other dictionaries with callables. object_name: The name of the object that is expected in the dictionary. E.g. 'Initializer', 'Partitioner' or 'Regularizer'. The first letter ...
Raises a TypeError iff `maybe_dictlike` is not a dictlike object. def _assert_is_dictlike(maybe_dictlike, valid_keys): """Raises a TypeError iff `maybe_dictlike` is not a dictlike object.""" # This covers a common mistake when people use incorrect dictionary nesting # for initializers / partitioners etc. The pre...
Checks the given initializers. This checks that `initializers` is a dictionary that only contains keys in `keys`, and furthermore the entries in `initializers` are functions or further dictionaries (the latter used, for example, in passing initializers to modules inside modules) that must satisfy the same cons...
Checks the given partitioners. This checks that `partitioners` is a dictionary that only contains keys in `keys`, and furthermore the entries in `partitioners` are functions or further dictionaries (the latter used, for example, in passing partitioners to modules inside modules) that must satisfy the same cons...
Checks the given regularizers. This checks that `regularizers` is a dictionary that only contains keys in `keys`, and furthermore the entries in `regularizers` are functions or further dictionaries (the latter used, for example, in passing regularizers to modules inside modules) that must satisfy the same cons...
Checks that `prefix_name` is a proper scope prefix of `scope_name`. def _is_scope_prefix(scope_name, prefix_name): """Checks that `prefix_name` is a proper scope prefix of `scope_name`.""" if not prefix_name: return True if not scope_name.endswith("/"): scope_name += "/" if not prefix_name.endswith(...
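The scope-prefix check above hinges on normalizing both names to end with "/", so that "net" never spuriously matches "network". A standalone sketch (module-level function rather than the private helper):

```python
def is_scope_prefix(scope_name, prefix_name):
    """Checks that `prefix_name` is a proper scope prefix of `scope_name`."""
    if not prefix_name:
        return True
    # Normalize both names to end with "/" so "net" cannot match "network".
    if not scope_name.endswith("/"):
        scope_name += "/"
    if not prefix_name.endswith("/"):
        prefix_name += "/"
    return scope_name.startswith(prefix_name)
```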
Separates the sliced (partitioned) and unsliced variables in var_list. Args: var_list: a list of variables. Returns: A list of unsliced variables in var_list, and a dict mapping names to parts for the sliced variables in var_list. def _get_sliced_variables(var_list): """Separates the sliced (partit...
Creates a custom getter that matches requests to a dict of custom getters. Custom getters are callables which implement the [custom getter API] (https://www.tensorflow.org/versions/r1.0/api_docs/python/tf/get_variable). The returned custom getter dispatches calls based on pattern matching the name of the requ...
Builds map of `tf.Variable`s in scope or module with normalized names. The names of the variables are normalized to remove the scope prefix. Args: scope_or_module: Scope or module to build map from. collection: Collection to restrict query to. By default this is `tf.GraphKeys.GLOBAL_VARIABLES`, wh...
Builds a `tf.train.Saver` for the scope or module, with normalized names. The names of the variables are normalized to remove the scope prefix. This allows the same variables to be restored into another similar scope or module using a complementary `tf.train.Saver` object. Args: scope: Scope or module. Va...
Yields an iterator over (string, variable) pairs in the variable map. In general, variable maps map variable names to either a `tf.Variable`, or list of `tf.Variable`s (in case of sliced variables). Args: variable_map: dict, variable map over which to iterate. Yields: (string, tf.Variable) pairs. de...
Returns a dict mapping variables to the collections they appear in. def _get_vars_to_collections(variables): """Returns a dict mapping variables to the collections they appear in.""" var_to_collections = collections.defaultdict(lambda: []) if isinstance(variables, dict): variables = list(v for _, v in variab...
Returns the device with an annotation specifying `ResourceVariable`. "legacy" means a normal tf.Variable while "resource" means a ResourceVariable. For example: `(legacy)` `(resource)` `/job:learner/task:0/device:CPU:* (legacy)` `/job:learner/task:0/device:CPU:* (resource)` Args: var: The Tensorflo...
Takes a collection of variables and formats it as a table. def format_variables(variables, join_lines=True): """Takes a collection of variables and formats it as a table.""" rows = [] rows.append(("Variable", "Shape", "Type", "Collections", "Device")) var_to_collections = _get_vars_to_collections(variables) ...
Takes a key-to-variable map and formats it as a table. def format_variable_map(variable_map, join_lines=True): """Takes a key-to-variable map and formats it as a table.""" rows = [] rows.append(("Key", "Variable", "Shape", "Type", "Collections", "Device")) var_to_collections = _get_vars_to_collections(variable...
Logs variable information. This function logs the name, shape, type, collections, and device for either all variables or a given iterable of variables. In the "Device" columns, the nature of the variable (legacy or resource (for ResourceVariables)) is also specified in parenthesis. Args: variables: iter...
Returns human readable string of how much memory `num_bytes` fills. def _num_bytes_to_human_readable(num_bytes): """Returns human readable string of how much memory `num_bytes` fills.""" if num_bytes < (2 ** 10): return "%d B" % num_bytes elif num_bytes < (2 ** 20): return "%.3f KB" % (float(num_bytes) /...
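Completing the truncated helper above: the visible branches threshold on powers of 1024, so the remaining MB/GB branches are a straightforward (assumed) continuation of the same pattern.

```python
def num_bytes_to_human_readable(num_bytes):
    """Returns a human-readable string for a byte count (sketch; the MB/GB
    branches extend the visible B/KB pattern)."""
    if num_bytes < 2 ** 10:
        return "%d B" % num_bytes
    elif num_bytes < 2 ** 20:
        return "%.3f KB" % (num_bytes / 2 ** 10)
    elif num_bytes < 2 ** 30:
        return "%.3f MB" % (num_bytes / 2 ** 20)
    else:
        return "%.3f GB" % (num_bytes / 2 ** 30)
```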
Logs a summary of variable information. This function groups Variables by dtype and prints out the number of Variables and the total number of scalar values for each datatype, as well as the total memory consumed. For Variables of type tf.string, the memory usage cannot be accurately calculated from the Gra...
Returns a dict mapping dtypes to number of variables and scalars. Args: variables: iterable of `tf.Variable`s, or None. If None is passed, then all global and local variables in the current graph are used. Returns: A dict mapping tf.dtype keys to a dict containing the keys 'num_scalars' and 'n...
Wraps an arbitrary method so it does variable sharing. This decorator creates variables the first time it calls `method`, and reuses them for subsequent calls. The object that calls `method` provides a `tf.VariableScope`, either as a `variable_scope` attribute or as the return value of an `_enter_variable_scop...
Returns a module name for a callable or `None` if no name can be found. def name_for_callable(func): """Returns a module name for a callable or `None` if no name can be found.""" if isinstance(func, functools.partial): return name_for_callable(func.func) try: name = func.__name__ except AttributeError...
Returns a CamelCase string as a snake_case string. def to_snake_case(camel_case): """Returns a CamelCase string as a snake_case string.""" if not re.match(r"^[A-Za-z_]\w*$", camel_case): raise ValueError( "Input string %s is not a valid Python identifier." % camel_case) # Add underscore at word star...
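A runnable sketch of the CamelCase-to-snake_case conversion described above. The identifier check is copied from the visible listing; the two substitution regexes (underscore before each word start, then lowercase) are assumptions since the listing is truncated.

```python
import re

def to_snake_case(camel_case):
    """Returns a CamelCase string as a snake_case string (sketch; the
    substitution regexes are assumed, only the validity check is verbatim)."""
    if not re.match(r"^[A-Za-z_]\w*$", camel_case):
        raise ValueError(
            "Input string %s is not a valid Python identifier." % camel_case)
    # Underscore before an uppercase letter that follows a lowercase/digit.
    underscored = re.sub(r"(?<=[a-z0-9])([A-Z])", r"_\1", camel_case)
    # Underscore before the last capital of an acronym run, e.g. HTTPServer.
    underscored = re.sub(r"(?<=[A-Z])([A-Z][a-z])", r"_\1", underscored)
    return underscored.lower()
```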
Calls `callback(var)` for all newly created variables. Callback should not modify the variable passed in. Use cases that require variables to be modified should use `variable_creator_scope` directly and sit within the variable creator stack. >>> variables = [] >>> with notify_about_variables(variables.appen...
Recursively gets attributes inside `module` as specified by `path`. def _recursive_getattr(module, path): """Recursively gets attributes inside `module` as specified by `path`.""" if "." not in path: return getattr(module, path) else: first, rest = path.split(".", 1) return _recursive_getattr(getattr...
Returns a callable which corresponds to the constructor string. Various modules (eg, ConvNet2D) take constructor arguments which are callables, indicating a submodule to build. These can be passed as actual constructors, eg `snt.LayerNorm`, however that makes the config for that module not trivially serializab...
Determines whether the provided callable supports all the kwargs. This is useful when you have a module that might or might not support a kwarg such as `is_training`. Rather than calling the module and catching the error, risking the potential modification of underlying state, this function introspects the mod...
Removes any kwargs not supported by `module_or_fn` from `all_kwargs_dict`. A new dict is returned with shallow copies of keys & values from `all_kwargs_dict`, as long as the key is accepted by module_or_fn. The returned dict can then be used to connect `module_or_fn` (along with some other inputs, ie non-keyword...
Merge the first dimensions of a tensor. Args: array_or_tensor: Tensor to have its first dimensions merged. Can also be an array or numerical value, which will be converted to a tensor for batch application, if needed. n_dims: Number of dimensions to merge. Returns: Either the input val...
Split the first dimension of a tensor. Args: tensor: Tensor to have its first dimension split. inputs: Original reference input to look the dimensions of. n_dims: Number of dimensions to split. Returns: The input tensor, with its first dimension split. def split_leading_dim(tensor, inputs, n_dims...
Returns a default initializer for weights of a linear module. def create_linear_initializer(input_size, dtype=tf.float32): """Returns a default initializer for weights of a linear module.""" stddev = 1 / math.sqrt(input_size) return tf.truncated_normal_initializer(stddev=stddev, dtype=dtype)
Calculate `bias_shape` based on the `input_shape` and `bias_dims`. Args: input_shape: Shape of the input being passed into the module. The leading dimension is the minibatch size. bias_dims: The dimensions that bias should be applied over. The remaining dimensions will get broadcasted over. ...
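The `bias_shape` computation above can be sketched as plain shape arithmetic: dimensions listed in `bias_dims` keep their size, all others become 1 so the bias broadcasts over them. The function name, the `None` default, and the minibatch-dimension check are assumptions based on the truncated description.

```python
def calculate_bias_shape(input_shape, bias_dims):
    """Sketch: keep the size of each dimension in `bias_dims`, broadcast
    (size 1) over the rest. Dimension 0 is the minibatch and is always
    broadcast over."""
    if bias_dims is None:
        # Assumed default: apply bias over every non-batch dimension.
        return tuple(input_shape[1:])
    bias_shape = [1] * (len(input_shape) - 1)
    for dim in bias_dims:
        if dim == 0:
            raise ValueError("Cannot apply bias across the minibatch dimension.")
        bias_shape[dim - 1] = input_shape[dim]
    return tuple(bias_shape)
```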
Connects the Linear module into the graph, with input Tensor `inputs`. If this is not the first time the module has been connected to the graph, the Tensor provided here must have the same final dimension, in order for the existing variables to be the correct size for the multiplication. The batch size...
Returns the module output size. def output_size(self): """Returns the module output size.""" if callable(self._output_size): self._output_size = self._output_size() return self._output_size
Returns a cloned `Linear` module. Args: name: Optional string assigning name of cloned module. The default name is constructed by appending "_clone" to `self.module_name`. Returns: Cloned `Linear` module. def clone(self, name=None): """Returns a cloned `Linear` module. Args: ...
Connects the module into the graph. If this is not the first time the module has been connected to the graph, the Tensors provided here must have the same final dimensions as when called the first time, in order for the existing variables to be the correct size for the multiplication. The batch size ma...
Connects the Add module into the graph, with input Tensor `inputs`. Args: inputs: A Tensor of size `[batch_size, input_size1, ...]`. multiplier: A scalar or Tensor which the bias term is multiplied by before adding it to `inputs`. Anything which works in the expression `bias * multiplie...
Returns transposed `AddBias` module. Args: name: Optional string assigning name of transpose module. The default name is constructed by appending "_transpose" to `self.module_name`. Returns: Transposed `AddBias` module. def transpose(self, name=None): """Returns transposed `AddBias`...
Replaces the -1 wildcard in the output shape vector. This function infers the correct output shape given the input dimensions. Args: dimensions: List of input non-batch dimensions. Returns: Tuple of non-batch output dimensions. def _infer_shape(self, dimensions): """Replaces the -1 wildc...
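The -1 wildcard inference above reduces to one division: the unknown dimension is the total input element count divided by the product of the known output dimensions. A standalone sketch (the real module reads `output_shape` from the instance):

```python
def infer_shape(output_shape, dimensions):
    """Replaces a single -1 wildcard in `output_shape` so the element count
    matches `dimensions`, the input's non-batch dimensions (sketch)."""
    if -1 not in output_shape:
        return tuple(output_shape)
    # Total number of elements in the input's non-batch dimensions.
    n = 1
    for d in dimensions:
        n *= d
    # Product of the known (non-wildcard) output dimensions.
    known = 1
    for d in output_shape:
        if d != -1:
            known *= d
    if n % known != 0:
        raise ValueError("Output shape is incompatible with input dimensions.")
    return tuple(n // known if d == -1 else d for d in output_shape)
```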
Connects the module into the graph, with input Tensor `inputs`. Args: inputs: A Tensor of shape [b_1, b_2, ..., b_preserve_dims, b_preserve_dims+1, ...]. Returns: A Tensor of shape [b_1, b_2, ..., b_preserve_dims, b_reshape_1, b_reshape_2, ...
Returns transpose batch reshape. def transpose(self, name=None): """Returns transpose batch reshape.""" if name is None: name = self.module_name + "_transpose" return BatchReshape(shape=lambda: self.input_shape, preserve_dims=self._preserve_dims, name=n...
Connects the TrainableTensor module into the graph. Returns: A Tensor of shape as determined in the constructor. def _build(self): """Connects the TrainableTensor module into the graph. Returns: A Tensor of shape as determined in the constructor. """ if "w" not in self._initializers: ...
Connects the BatchApply module into the graph. Args: *args: a Tensor or a nested list or dictionary of Tensors. The input tensors will have their first dimensions merged, then an op or a module will be called on the input. The first dimension of the output tensor(s) will be spli...
Connects the SliceByDim module into the graph. Args: inputs: `Tensor` to slice. Its rank must be greater than the maximum dimension specified in `dims` (plus one as python is 0 indexed). Returns: The sliced tensor. Raises: ValueError: If `inputs` tensor has insufficient rank. ...
Connects the `TileByDim` module into the graph. Args: inputs: `Tensor` to tile. Returns: The tiled tensor. def _build(self, inputs): """Connects the `TileByDim` module into the graph. Args: inputs: `Tensor` to tile. Returns: The tiled tensor. """ shape_inputs = i...
Connects the MergeDims module into the graph. Args: inputs: Tensor or a nested list of Tensors to merge. Its rank must be greater than or equal to `start` + `size`. Returns: The merged Tensor or a nested list of merged Tensors. Raises: ValueError: If any of the `inputs` tensor...
Creates the initial memory. We should ensure each row of the memory is initialized to be unique, so initialize the matrix to be the identity. We then pad or truncate as necessary so that init_state is of size (batch_size, self._mem_slots, self._mem_size). Args: batch_size: The size of the ba...
Perform multi-head attention from 'Attention is All You Need'. Implementation of the attention mechanism from https://arxiv.org/abs/1706.03762. Args: memory: Memory tensor to perform attention on. Returns: new_memory: New memory tensor. def _multihead_attention(self, memory): """Perf...
Create input and forget gates for this step using `inputs` and `memory`. Args: inputs: Tensor input. memory: The current state of memory. Returns: input_gate: A LSTM-like insert gate. forget_gate: A LSTM-like forget gate. def _create_gates(self, inputs, memory): """Create input an...
Perform multiheaded attention over `memory`. Args: memory: Current relational memory. Returns: The attended-over memory. def _attend_over_memory(self, memory): """Perform multiheaded attention over `memory`. Args: memory: Current relational memory. Returns: The attended-...
Adds relational memory to the TensorFlow graph. Args: inputs: Tensor input. memory: Memory output from the previous time step. treat_input_as_matrix: Optional, whether to treat `input` as a sequence of matrices. Defaults to False, in which case the input is flattened into a vector...
Creates a basic MNIST model using Sonnet, then trains and evaluates it. def train_and_eval(train_batch_size, test_batch_size, num_hidden, learning_rate, num_train_steps, report_every, test_every): """Creates a basic MNIST model using Sonnet, then trains and evaluates it.""" data_dict = dataset_...
Returns an initial (maybe learnable) state. This function does not create any variable scopes, and it should be called from a Sonnet module. This function also makes sure that all the rows of its `state` argument have the same value. Args: state: initial value of the initial state. It should be a tensor o...
Creates an initial state consisting of trainable variables. The trainable variables are created with the same shapes as the elements of `state_size` and are tiled to produce an initial state. Args: batch_size: An int, or scalar int32 Tensor representing the batch size. state_size: A `TensorShape` or nes...
Returns a decorator to copy documentation from the given function. Docstring is copied, including *args and **kwargs documentation. Args: fn_with_doc_to_copy: Function whose docstring, including *args and **kwargs documentation, is to be copied. Returns: Decorated version of `wrapper_init` with d...
Connects the module to the graph. Returns: The learnable state, which has the same type, structure and shape as the `initial_state` passed to the constructor. def _build(self): """Connects the module to the graph. Returns: The learnable state, which has the same type, structure and sh...
Connects the LayerNorm module into the graph. Args: inputs: a Tensor of dimensionality >= 2. Returns: normalized: layer normalized outputs with same shape as inputs. Raises: base.NotSupportedError: If `inputs` has less than 2 dimensions. def _build(self, inputs): """Connects the La...
Perform a differentiable read. Args: memory: [batch_size, memory_size, memory_word_size]-shaped Tensor of dtype float32. This represents, for each example and memory slot, a single embedding to attend over. query: [batch_size, query_word_size]-shaped Tensor of dtype float32. Rep...