Draw top3 proposals for each gt. Args: proposals: NPx4 proposal_scores: NP gt_boxes: NG def draw_proposal_recall(img, proposals, proposal_scores, gt_boxes): """ Draw top3 proposals for each gt. Args: proposals: NPx4 proposal_scores: NP gt_boxes: NG ""...
Args: boxes: kx4 scores: kxC def draw_predictions(img, boxes, scores): """ Args: boxes: kx4 scores: kxC """ if len(boxes) == 0: return img labels = scores.argmax(axis=1) scores = scores.max(axis=1) tags = ["{},{:.2f}".format(cfg.DATA.CLASS_NAMES[lb], ...
Args: results: [DetectionResult] def draw_final_outputs(img, results): """ Args: results: [DetectionResult] """ if len(results) == 0: return img # Display in largest to smallest order to reduce occlusion boxes = np.asarray([r.box for r in results]) areas = np_area(b...
Overlay a mask on top of the image. Args: im: a 3-channel uint8 image in BGR mask: a binary 1-channel image of the same size color: if None, will choose automatically def draw_mask(im, mask, alpha=0.5, color=None): """ Overlay a mask on top of the image. Args: im: a 3-...
Run DataFlow and send data to a ZMQ socket addr. It will serialize and send each datapoint to this address with a PUSH socket. This function never returns. Args: df (DataFlow): Will infinitely loop over the DataFlow. addr: a ZMQ socket endpoint. hwm (int): ZMQ high-water mark (buffe...
Convert a DataFlow to a :class:`multiprocessing.Queue`. The DataFlow will only be reset in the spawned process. Args: df (DataFlow): the DataFlow to dump. size (int): size of the queue nr_consumer (int): number of consumers of the queue. The producer will add this many of ``D...
:returns: the current 3-channel image def _grab_raw_image(self): """ :returns: the current 3-channel image """ m = self.ale.getScreenRGB() return m.reshape((self.height, self.width, 3))
:returns: a gray-scale (h, w) uint8 image def _current_state(self): """ :returns: a gray-scale (h, w) uint8 image """ ret = self._grab_raw_image() # max-pooled over the last screen ret = np.maximum(ret, self.last_raw_screen) if self.viz: if isinstance...
Args: boxes: nx4, xyxy window: [h, w] def clip_boxes(boxes, window, name=None): """ Args: boxes: nx4, xyxy window: [h, w] """ boxes = tf.maximum(boxes, 0.0) m = tf.tile(tf.reverse(window, [0]), [2]) # (4,) boxes = tf.minimum(boxes, tf.cast(m, tf.float32), name...
Args: box_predictions: (..., 4), logits anchors: (..., 4), floatbox. Must have the same shape Returns: box_decoded: (..., 4), float32. With the same shape. def decode_bbox_target(box_predictions, anchors): """ Args: box_predictions: (..., 4), logits anchors: (..., 4...
Args: boxes: (..., 4), float32 anchors: (..., 4), float32 Returns: box_encoded: (..., 4), float32 with the same shape. def encode_bbox_target(boxes, anchors): """ Args: boxes: (..., 4), float32 anchors: (..., 4), float32 Returns: box_encoded: (..., 4), ...
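The two entries above are the standard box-delta parameterization used in Faster R-CNN style detectors: a box is expressed relative to an anchor as center offsets plus log-scale width/height factors. A minimal numpy sketch of the idea (the real functions operate on TF tensors; the function names are reused here only for illustration):

```python
import numpy as np

def encode_bbox_target(boxes, anchors):
    """Encode xyxy boxes relative to xyxy anchors as (dx, dy, dw, dh) deltas."""
    def to_cwh(b):
        wh = b[..., 2:] - b[..., :2]          # width, height
        center = b[..., :2] + wh * 0.5        # center x, y
        return center, wh
    bc, bwh = to_cwh(boxes)
    ac, awh = to_cwh(anchors)
    txy = (bc - ac) / awh                     # center offset, normalized by anchor size
    twh = np.log(bwh / awh)                   # log scale factor
    return np.concatenate([txy, twh], axis=-1)

def decode_bbox_target(deltas, anchors):
    """Invert encode_bbox_target: apply deltas to anchors, return xyxy boxes."""
    awh = anchors[..., 2:] - anchors[..., :2]
    ac = anchors[..., :2] + awh * 0.5
    center = deltas[..., :2] * awh + ac
    wh = np.exp(deltas[..., 2:]) * awh
    return np.concatenate([center - wh * 0.5, center + wh * 0.5], axis=-1)
```

Encoding then decoding against the same anchors is a round trip, which is a quick sanity check for any implementation of this pair.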
Aligned version of tf.image.crop_and_resize, following our definition of floating point boxes. Args: image: NCHW boxes: nx4, x1y1x2y2 box_ind: (n,) crop_size (int): Returns: n,C,size,size def crop_and_resize(image, boxes, box_ind, crop_size, pad_border=True): """ ...
Args: featuremap: 1xCxHxW boxes: Nx4 floatbox resolution: output spatial resolution Returns: NxCx res x res def roi_align(featuremap, boxes, resolution): """ Args: featuremap: 1xCxHxW boxes: Nx4 floatbox resolution: output spatial resolution Ret...
Slice anchors to the spatial size of this featuremap. def narrow_to(self, featuremap): """ Slice anchors to the spatial size of this featuremap. """ shape2d = tf.shape(featuremap)[2:] # h,w slice3d = tf.concat([shape2d, [-1]], axis=0) slice4d = tf.concat([shape2d, [-1, ...
img: bgr, [0,255] heatmap: [0,1] def colorize(img, heatmap): """ img: bgr, [0,255] heatmap: [0,1] """ heatmap = viz.intensity_to_rgb(heatmap, cmap='jet')[:, :, ::-1] return img * 0.5 + heatmap * 0.5
The correct center is shape*0.5-0.5. This can be verified by: SHAPE = 7 arr = np.random.rand(SHAPE, SHAPE) orig = arr c = SHAPE * 0.5 - 0.5 c = (c, c) for k in range(4): mat = cv2.getRotationMatrix2D(c, 90, 1) arr = cv2.warpAffine(arr, mat, arr.sh...
Get largest rectangle after rotation. http://stackoverflow.com/questions/16702966/rotate-image-and-crop-out-black-borders def largest_rotated_rect(w, h, angle): """ Get largest rectangle after rotation. http://stackoverflow.com/questions/16702966/rotate-image-and-crop-out-black-borders ...
Apply a mapping on certain argument before calling the original function. Args: maps (dict): {argument_name: map_func} def map_arg(**maps): """ Apply a mapping on certain argument before calling the original function. Args: maps (dict): {argument_name: map_func} """ def deco(f...
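A decorator like this can be sketched with `inspect.signature`, which binds positional and keyword arguments to parameter names so the map can be applied regardless of how the argument was passed. This is an illustration of the behavior described, not the library's exact implementation:

```python
import functools
import inspect

def map_arg(**maps):
    """Apply map_func to the named arguments before calling the original function."""
    def deco(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            # bind both positional and keyword args to their parameter names
            bound = inspect.signature(f).bind(*args, **kwargs)
            for name, map_func in maps.items():
                if name in bound.arguments:
                    bound.arguments[name] = map_func(bound.arguments[name])
            return f(*bound.args, **bound.kwargs)
        return wrapper
    return deco
```

For example, `@map_arg(x=int)` would coerce a string argument `x` to an integer before the wrapped function runs.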
Like memoized, but keep one cache per default graph. def graph_memoized(func): """ Like memoized, but keep one cache per default graph. """ # TODO it keeps the graph alive from ..compat import tfv1 GRAPH_ARG_NAME = '__IMPOSSIBLE_NAME_FOR_YOU__' @memoized def func_with_graph_arg(*args,...
A decorator. It performs memoization ignoring the arguments used to call the function. def memoized_ignoreargs(func): """ A decorator. It performs memoization ignoring the arguments used to call the function. """ def wrapper(*args, **kwargs): if func not in _MEMOIZED_NOARGS: ...
Ensure a 2D shape. Args: a: an int or tuple/list of length 2 Returns: list: of length 2. if ``a`` is an int, return ``[a, a]``. def shape2d(a): """ Ensure a 2D shape. Args: a: an int or tuple/list of length 2 Returns: list: of length 2. if ``a`` is an int, return...
Ensure a 4D shape, to use with 4D symbolic functions. Args: a: an int or tuple/list of length 2 Returns: list: of length 4. if ``a`` is an int, return ``[1, a, a, 1]`` or ``[1, 1, a, a]`` depending on data_format. def shape4d(a, data_format='NHWC'): """ Ensure a 4D shape, to...
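The pair of shape helpers above can be sketched directly from their docstrings; `shape4d` pads the 2D shape with batch and channel dimensions in the position dictated by `data_format`. A plain-Python illustration (not guaranteed to match the library's exact error handling):

```python
def shape2d(a):
    """Normalize an int or a length-2 tuple/list into a list of length 2."""
    if isinstance(a, int):
        return [a, a]
    if isinstance(a, (list, tuple)):
        assert len(a) == 2, a
        return list(a)
    raise TypeError("Illegal shape: {}".format(a))

def shape4d(a, data_format='NHWC'):
    """Expand to a 4D shape for use with 4D symbolic functions."""
    s2d = shape2d(a)
    if data_format in ('NHWC', 'channels_last'):
        return [1] + s2d + [1]     # batch, h, w, channel
    return [1, 1] + s2d            # batch, channel, h, w
```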
Decorate a method or property of a class, so that this method can only be called once for every instance. Calling it more than once will result in exception. def call_only_once(func): """ Decorate a method or property of a class, so that this method can only be called once for every instance. C...
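A per-instance "called once" guard can be implemented by recording the method name on the object itself, so different instances do not interfere. A minimal sketch of the described behavior (the attribute name `_call_only_once_cache` is an illustrative choice, not necessarily the library's):

```python
import functools

def call_only_once(func):
    """Allow a method to be called only once per instance; a second call raises."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        called = getattr(self, '_call_only_once_cache', set())
        assert func.__name__ not in called, \
            "Method {} can only be called once per object!".format(func.__name__)
        called.add(func.__name__)
        self._call_only_once_cache = called
        return func(self, *args, **kwargs)
    return wrapper
```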
A decorator that performs memoization on methods. It stores the cache on the object instance itself. def memoized_method(func): """ A decorator that performs memoization on methods. It stores the cache on the object instance itself. """ @functools.wraps(func) def wrapper(*args, **kwargs): ...
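Storing the cache on the instance (rather than in a module-level dict keyed by the object) means the cache is garbage-collected together with the object. A sketch of that idea, with an illustrative attribute name:

```python
import functools

def memoized_method(func):
    """Memoize a method, keeping the cache on the object instance itself."""
    @functools.wraps(func)
    def wrapper(self, *args, **kwargs):
        cache = self.__dict__.setdefault('_memoized_cache', {})
        # key by method name and arguments; kwargs sorted for a stable key
        key = (func.__name__, args, tuple(sorted(kwargs.items())))
        if key not in cache:
            cache[key] = func(self, *args, **kwargs)
        return cache[key]
    return wrapper
```

Note this simple key requires hashable arguments, a common limitation of memoization decorators.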
A decorator which automatically reuses the current variable scope if the function has been called with the same variable scope before. Example: .. code-block:: python @auto_reuse_variable_scope def myfunc(x): return tf.layers.conv2d(x, 128, 3) myfunc(x1) # will inher...
Args: name_scope(str): the default scope to use. If None, will use the name of the function. Returns: A decorator which makes the function run under a name scope. The name scope is obtained by the following: 1. The 'name_scope' keyword argument when the decorated function is called....
Returns: A decorator which makes the function happen under a variable scope, which is named by the function itself. Example: .. code-block:: python @under_variable_scope() def mid_level(x): with argscope(Conv2D, kernel_shape=3, nl=BNReLU): x = Conv2...
Return a context which either opens and caches a new name scope, or reenters an existing one. Args: top_level(bool): if True, the name scope will always be top-level. It will not be nested under any existing name scope of the caller. def cached_name_scope(name, top_level=True): """
Args: grad_list: list of list of tuples, shape is Ngpu x Nvar x 2 def _check_grad_list(grad_list): """ Args: grad_list: list of list of tuples, shape is Ngpu x Nvar x 2 """ nvars = [len(k) for k in grad_list] def basename(x): return re.sub('t...
Run `func` on all GPUs (towers) and return the results. Args: towers (list[int]): a list of GPU id. func: a lambda to be called inside each tower devices: a list of devices to be used. By default will use '/gpu:{tower}' use_vs (list[bool]): list of use_vs to pass...
Reduce the gradients, apply them with the optimizer, and set self.grads to a list of (g, v), containing the averaged gradients. Args: grad_list ([[(grad, var), ...], ...]): #GPU lists to be reduced. Each is the gradients computed on each GPU. get_opt_fn (-> tf.train.Optimizer): ...
Call the function `tower_fn` under :class:`TowerContext` for each tower. Returns: a list, contains the return values of `tower_fn` on each tower. def call_for_each_tower(self, tower_fn): """ Call the function `tower_fn` under :class:`TowerContext` for each tower. Returns: ...
Reduce the gradients, apply them with the optimizer, and set self.grads to #GPU number of lists of (g, v), containing the all-reduced gradients on each device. Args: grad_list ([[(grad, var), ...], ...]): #GPU lists to be reduced. Each is the gradients computed on each GPU. get_...
Copy values of variables on GPU 0 to other GPUs. def get_post_init_ops(): """ Copy values of variables on GPU 0 to other GPUs. """ # literally all variables, because it's better to sync optimizer-internal variables as well all_vars = tf.global_variables() + tf.local_variables() ...
Call the function `tower_fn` under :class:`TowerContext` for each tower. Returns: a list, contains the return values of `tower_fn` on each tower. def call_for_each_tower(self, tower_fn): """ Call the function `tower_fn` under :class:`TowerContext` for each tower. Returns: ...
Args: grad_list ([[(grad, var), ...], ...]): #GPU lists to be reduced. Each is the gradients computed on each GPU. get_opt_fn (-> tf.train.Optimizer): callable which returns an optimizer Returns: tf.Operation: the training op def build(self, grad_list, get_opt_fn): ...
Humanize a time delta given in seconds. Args: sec (float): time difference in seconds. Must be positive. Returns: str - time difference as a readable string Example: .. code-block:: python print(humanize_time_delta(1)) # 1 second print(h...
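The conversion is a greedy decomposition into days, hours, minutes and seconds, keeping only the nonzero units. A simplified sketch (the library's actual output format may differ in details such as fractional seconds):

```python
def humanize_time_delta(sec):
    """Render a positive duration in seconds as a readable string."""
    assert sec >= 0, sec
    units = [('day', 86400), ('hour', 3600), ('minute', 60), ('second', 1)]
    parts = []
    for name, length in units:
        n = int(sec // length)
        sec -= n * length
        if n > 0:
            parts.append('{} {}{}'.format(n, name, 's' if n > 1 else ''))
    return ' '.join(parts) if parts else '0 seconds'
```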
Args: name(str), val(str): Returns: a context where the environment variable ``name`` being set to ``val``. It will be set back after the context exits. def change_env(name, val): """ Args: name(str), val(str): Returns: a context where the environment variable ...
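Such a context manager saves the old value on entry and restores it (or deletes the variable if it did not exist) on exit. A self-contained sketch of the described behavior:

```python
import contextlib
import os

@contextlib.contextmanager
def change_env(name, val):
    """Temporarily set environment variable `name` to `val`; restore it on exit."""
    oldval = os.environ.get(name, None)
    os.environ[name] = val
    try:
        yield
    finally:
        # restore the previous state even if the body raised
        if oldval is None:
            del os.environ[name]
        else:
            os.environ[name] = oldval
```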
Get a good RNG seeded with time, pid and the object. Args: obj: some object to use to generate random seed. Returns: np.random.RandomState: the RNG. def get_rng(obj=None): """ Get a good RNG seeded with time, pid and the object. Args: obj: some object to use to generate ra...
Each call site of this function is guaranteed to return True the first time and False afterwards. Returns: bool: whether this is the first time this function gets called from this line of code. Example: .. code-block:: python if execute_only_once(): # ...
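Per-call-site tracking can be done by inspecting the caller's frame and remembering its (filename, line number) pair. A sketch of that mechanism, with an illustrative module-level set as the history:

```python
import inspect

_EXECUTE_HISTORY = set()

def execute_only_once():
    """Return True the first time it is reached from a given call site, False after."""
    f = inspect.currentframe().f_back        # the caller's frame
    ident = (f.f_code.co_filename, f.f_lineno)
    if ident in _EXECUTE_HISTORY:
        return False
    _EXECUTE_HISTORY.add(ident)
    return True
```

This is typically used to emit a warning only once even when the surrounding code runs in a loop.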
Return default arguments to be used with tqdm. Args: kwargs: extra arguments to be used. Returns: dict: def get_tqdm_kwargs(**kwargs): """ Return default arguments to be used with tqdm. Args: kwargs: extra arguments to be used. Returns: dict: """ defaul...
Similar to `from ctypes.util import find_library`, but try to return full path if possible. def find_library_full_path(name): """ Similar to `from ctypes.util import find_library`, but try to return full path if possible. """ from ctypes.util import find_library if os.name == "posix" and s...
Args: df (DataFlow): the DataFlow to serialize. path (str): output path. Either a directory or an lmdb file. write_frequency (int): the frequency to write back data to disk. def save(df, path, write_frequency=5000): """ Args: df (DataFlow): the DataFlow t...
Note: If you found deserialization being the bottleneck, you can use :class:`LMDBData` as the reader and run deserialization as a mapper in parallel. def load(path, shuffle=True): """ Note: If you found deserialization being the bottleneck, you can use :class:`LMDBDa...
Args: df (DataFlow): the DataFlow to serialize. path (str): output npz file. def save(df, path): """ Args: df (DataFlow): the DataFlow to serialize. path (str): output npz file. """ buffer = [] size = _reset_df_and_get_size(df) ...
Args: df (DataFlow): the DataFlow to serialize. path (str): output tfrecord file. def save(df, path): """ Args: df (DataFlow): the DataFlow to serialize. path (str): output tfrecord file. """ if os.environ.get('TENSORPACK_COMPATIBLE_SERIAL...
Args: size (int): total number of records. If not provided, the returned dataflow will have no `__len__()`. It's needed because this metadata is not stored in the TFRecord file. def load(path, size=None): """ Args: size (int): total number of records. If not prov...
Args: df (DataFlow): the DataFlow to serialize. path (str): output hdf5 file. data_paths (list[str]): list of h5 paths. It should have the same length as each datapoint, and each path should correspond to one component of the datapoint. def save(df, p...
Args: trainer (SingleCostTrainer): get_model (input1, input2, ... -> tf.keras.Model): A function which takes tensors, builds and returns a Keras model. It will be part of the tower function. input (InputSource): optimizer (tf.train.Optimizer): loss, metric...
Args: optimizer (tf.train.Optimizer): loss, metrics: string or list of strings def compile(self, optimizer, loss, metrics=None): """ Args: optimizer (tf.train.Optimizer): loss, metrics: string or list of strings """ if isinstance(loss, six...
Args: validation_data (DataFlow or InputSource): to be used for inference. The inference callback is added as the first in the callback list. If you need to use it in a different order, please write it in the callback list manually. kwargs: same arguments as :meth...
Return the three quantization functions fw, fa, fg, for weights, activations and gradients respectively def get_dorefa(bitW, bitA, bitG): """ Return the three quantization functions fw, fa, fg, for weights, activations and gradients respectively """ def quantize(x, k): n = float(2 ** k - 1) ...
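The core primitive in DoReFa-style quantization is rounding a value in [0, 1] to one of 2**k - 1 uniform levels. A numpy sketch of just that step (the real `fw`/`fa`/`fg` additionally wrap this with straight-through gradient estimators in TF):

```python
import numpy as np

def quantize(x, k):
    """Quantize x in [0, 1] to k bits: round to the nearest of 2**k - 1 levels."""
    n = float(2 ** k - 1)
    return np.round(x * n) / n
```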
Implements Trained Ternary Quantization: https://arxiv.org/abs/1612.01064 Code modified from the authors' at: https://github.com/czhu95/ternarynet/blob/master/examples/Ternary-Net/ternary.py def ternarize(x, thresh=0.05): """ Implements Trained Ternary Quantization: https://arxiv.org/abs/161...
Args: img (np.ndarray): an image (expect BGR) to show. lclick_cb, rclick_cb: a callback ``func(img, x, y)`` for left/right click event. kwargs: can be {key_cb_a: callback_img, key_cb_b: callback_img}, to specify a callback ``func(img)`` for keypress. Some existing keypress event...
Stacked patches into grid, to produce visualizations like the following: .. image:: https://github.com/tensorpack/tensorpack/raw/master/examples/GAN/demo/BEGAN-CelebA-samples.jpg Args: patch_list(list[ndarray] or ndarray): NHW or NHWC images in [0,255]. nr_row(int), nr_col(int): rows and cols ...
Similar to :func:`stack_patches` but with a generator interface. It takes a much-longer list and yields stacked results one by one. For example, if ``patch_list`` contains 1000 images and ``nr_row==nr_col==10``, this generator yields 10 stacked images. Args: nr_row(int), nr_col(int): rows and c...
Dump or visualize images of a :class:`DataFlow`. Args: df (DataFlow): the DataFlow. index (int): the index of the image component. batched (bool): whether the component contains batched images (NHW or NHWC) or not (HW or HWC). number (int): how many datapoint to take fro...
Convert a 1-channel matrix of intensities to an RGB image employing a colormap. This function requires matplotlib. See `matplotlib colormaps <http://matplotlib.org/examples/color/colormaps_reference.html>`_ for a list of available colormaps. Args: intensity (np.ndarray): array of intensities suc...
Draw text on an image. Args: pos (tuple): x, y; the position of the text text (str): font_scale (float): color (tuple): a 3-tuple BGR color in [0, 255] def draw_text(img, pos, text, color, font_scale=0.4): """ Draw text on an image. Args: pos (tuple): x, y; the...
Args: im (np.ndarray): a BGR image in range [0,255]. It will not be modified. boxes (np.ndarray): a numpy array of shape Nx4 where each row is [x1, y1, x2, y2]. labels: (list[str] or None) color: a 3-tuple BGR color (in range [0, 255]) Returns: np.ndarray: a new image. def ...
A wrapper around ``tf.concat`` to cooperate with :class:`LinearWrap`. Args: x (tf.Tensor): input tensor (list[tf.Tensor]): a tensor or list of tensors to concatenate with x. x will be at the beginning dim (int): the dimension along which to concatenate Returns: tf.T...
Args: points: (nx4)x2 Returns: nx4 boxes (x1y1x2y2) def point8_to_box(points): """ Args: points: (nx4)x2 Returns: nx4 boxes (x1y1x2y2) """ p = points.reshape((-1, 4, 2)) minxy = p.min(axis=1) # nx2 maxxy = p.max(axis=1) # nx2 return np.concatenate...
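The docstring and visible body above fully determine this helper: group the flat point list into quadruples of corners, then take the per-box min and max. A numpy sketch:

```python
import numpy as np

def point8_to_box(points):
    """Convert (n*4)x2 corner points into n x1y1x2y2 boxes via per-box min/max."""
    p = points.reshape((-1, 4, 2))
    minxy = p.min(axis=1)   # nx2
    maxxy = p.max(axis=1)   # nx2
    return np.concatenate((minxy, maxxy), axis=1)
```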
Convert polygons to binary masks. Args: polys: a list of nx2 float array. Each array contains many (x, y) coordinates. Returns: a binary matrix of (height, width) def segmentation_to_mask(polys, height, width): """ Convert polygons to binary masks. Args: polys: a list of ...
Args: boxes: (...)x4, float shape: h, w def clip_boxes(boxes, shape): """ Args: boxes: (...)x4, float shape: h, w """ orig_shape = boxes.shape boxes = boxes.reshape([-1, 4]) h, w = shape boxes[:, [0, 1]] = np.maximum(boxes[:, [0, 1]], 0) boxes[:, 2] = np....
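The truncated body above clamps x1/y1 from below at 0 and x2/y2 from above at the image width/height, operating in place on a flattened view. A completed numpy sketch consistent with the visible lines (the last two assignments are filled in from the docstring, not the source):

```python
import numpy as np

def clip_boxes(boxes, shape):
    """Clip (...)x4 float xyxy boxes to lie inside an image of size (h, w)."""
    orig_shape = boxes.shape
    boxes = boxes.reshape([-1, 4])
    h, w = shape
    boxes[:, [0, 1]] = np.maximum(boxes[:, [0, 1]], 0)   # x1, y1 >= 0
    boxes[:, 2] = np.minimum(boxes[:, 2], w)             # x2 <= w
    boxes[:, 3] = np.minimum(boxes[:, 3], h)             # y2 <= h
    return boxes.reshape(orig_shape)
```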
Args: boxes: (nx4), float shape: (h, w) Returns: indices: (k, ) selection: (kx4) def filter_boxes_inside_shape(boxes, shape): """ Args: boxes: (nx4), float shape: (h, w) Returns: indices: (k, ) selection: (kx4) """ assert boxes.n...
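A box is "inside" the shape when x1/y1 are nonnegative and x2/y2 do not exceed the image width/height. A numpy sketch returning both the surviving indices and the selected rows, as the docstring describes:

```python
import numpy as np

def filter_boxes_inside_shape(boxes, shape):
    """Return indices and the subset of nx4 xyxy boxes fully inside an (h, w) image."""
    assert boxes.ndim == 2 and boxes.shape[1] == 4, boxes.shape
    h, w = shape
    indices = np.where(
        (boxes[:, 0] >= 0) & (boxes[:, 1] >= 0) &
        (boxes[:, 2] <= w) & (boxes[:, 3] <= h))[0]
    return indices, boxes[indices, :]
```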
Same as `tf.layers.MaxPooling2D`. Default strides is equal to pool_size. def MaxPooling( inputs, pool_size, strides=None, padding='valid', data_format='channels_last'): """ Same as `tf.layers.MaxPooling2D`. Default strides is equal to pool_size. """ if strides is...
Same as `tf.layers.AveragePooling2D`. Default strides is equal to pool_size. def AvgPooling( inputs, pool_size, strides=None, padding='valid', data_format='channels_last'): """ Same as `tf.layers.AveragePooling2D`. Default strides is equal to pool_size. """ if st...
Global average pooling as in the paper `Network In Network <http://arxiv.org/abs/1312.4400>`_. Args: x (tf.Tensor): a 4D tensor. Returns: tf.Tensor: a NC tensor named ``output``. def GlobalAvgPooling(x, data_format='channels_last'): """ Global average pooling as in the paper `Netw...
Unpool the input with a fixed matrix to perform kronecker product with. Args: x (tf.Tensor): a 4D image tensor shape: int or (h, w) tuple unpool_mat: a tf.Tensor or np.ndarray 2D matrix with size=shape. If is None, will use a matrix with 1 at top-left corner. Returns: ...
Args: varname(str): a variable name in the graph varname_prefix(str): an optional prefix that may need to be removed in varname savename_prefix(str): an optional prefix to append to all savename Returns: str: the name used to save the variable def get_savename_from_varname( ...
Dump value of all TRAINABLE + MODEL variables to a dict, and save as npz format (loadable by :func:`sessinit.get_model_loader`). Args: path(str): the file name to save the parameters. Must end with npz. def dump_session_params(path): """ Dump value of all TRAINABLE + MODEL variables to a dict...
Save variables in dic to path. Args: dic: {name: value} path: save as npz if the name ends with '.npz', otherwise save as a checkpoint. def save_chkpt_vars(dic, path): """ Save variables in dic to path. Args: dic: {name: value} path: save as npz if the name ends with '...
Work around TF problems in checkpoint path handling. Args: model_path: a user-input path Returns: str: the argument that can be passed to NewCheckpointReader def get_checkpoint_path(model_path): """ Work around TF problems in checkpoint path handling. Args: model_path: a u...
Load all variables from a checkpoint to a dict. Args: model_path(str): path to a checkpoint. Returns: dict: a name:value dict def load_chkpt_vars(model_path): """ Load all variables from a checkpoint to a dict. Args: model_path(str): path to a checkpoint. Returns: ...
**Guess** if this variable is only used in training. Only used internally to avoid excessive logging. Do not use it. def is_training_name(name): """ **Guess** if this variable is only used in training. Only used internally to avoid excessive logging. Do not use it. """ # TODO: maybe simply check ...
Returns a relaxed (possibly reshaped/upcast-ed) version of value, to be loaded to the given variable. Args: value (ndarray): an numpy array to be loaded to var var (tf.Variable): Returns: ndarray: a possibly reshaped or casted version of value def relaxed_v...
Args: prms(dict): dict of {variable name: value} Any name in prms must be in the graph and in vars_to_update. def update(self, prms): """ Args: prms(dict): dict of {variable name: value} Any name in prms must be in the graph and in vars_to_update....
Args: server (tf.train.Server): Returns: tf.train.SessionCreator def get_distributed_session_creator(server): """ Args: server (tf.train.Server): Returns: tf.train.SessionCreator """ server_def = server.server_def is_chief = (server_def.job_name == 'worker')...
Returns: int: #available GPUs in CUDA_VISIBLE_DEVICES, or in the system. def get_num_gpu(): """ Returns: int: #available GPUs in CUDA_VISIBLE_DEVICES, or in the system. """ def warn_return(ret, message): try: import tensorflow as tf except ImportError: ...
Put a `tf.Summary`. def put_summary(self, summary): """ Put a `tf.Summary`. """ if isinstance(summary, six.binary_type): summary = tf.Summary.FromString(summary) assert isinstance(summary, tf.Summary), type(summary) # TODO other types for val in summ...
Put a scalar. def put_scalar(self, name, val): """ Put a scalar. """ if isinstance(val, np.floating): val = float(val) if isinstance(val, np.integer): val = int(val) self._dispatch(lambda m: m.process_scalar(name, val)) s = create_scalar_s...
Put an image. Args: name (str): val (np.ndarray): 2D, 3D (HWC) or 4D (NHWC) numpy array of images in range [0,255]. If channel is 3, assumed to be RGB. def put_image(self, name, val): """ Put an image. Args: name (str): v...
Put an :class:`tf.Event`. `step` and `wall_time` fields of :class:`tf.Event` will be filled automatically. Args: evt (tf.Event): def put_event(self, evt): """ Put an :class:`tf.Event`. `step` and `wall_time` fields of :class:`tf.Event` will be filled automatically. ...
Look for an existing json under :meth:`logger.get_logger_dir()` named "stats.json", and return the loaded list of statistics if found. Returns None otherwise. def load_existing_json(): """ Look for an existing json under :meth:`logger.get_logger_dir()` named "stats.json", and return the...
Add stats to json and dump to disk. Note that this method is idempotent. def _trigger(self): """ Add stats to json and dump to disk. Note that this method is idempotent. """ if len(self._stat_now): self._stat_now['epoch_num'] = self.epoch_num self...
Args: img: bxhxwxc coords: bxh2xw2x2. each coordinate is (y, x) integer. Out of boundary coordinates will be clipped. Return: bxh2xw2xc image def sample(img, coords): """ Args: img: bxhxwxc coords: bxh2xw2x2. each coordinate is (y, x) integer. ...
Sample the images using the given coordinates, by bilinear interpolation. This was described in the paper: `Spatial Transformer Networks <http://arxiv.org/abs/1506.02025>`_. This is equivalent to `torch.nn.functional.grid_sample`, up to some non-trivial coordinate transformation. This implementati...
Enable trace for calls to any function. def enable_call_trace(): """ Enable trace for calls to any function. """ def tracer(frame, event, arg): if event == 'call': co = frame.f_code func_name = co.co_name if func_name == 'write' or func_name == 'print': ...
Apply a set of default rules to make a fast :class:`InputSource`. Args: input_source_or_dataflow(InputSource | DataFlow): trainer (Trainer): Returns: InputSource def apply_default_prefetch(input_source_or_dataflow, trainer): """ Apply a set of default rules to make a fast :cla...
Train with a :class:`TrainConfig` and a :class:`Trainer`, to present the simple and old training interface. It basically does the following 3 things (and you can easily do them by yourself if you need more control): 1. Setup the input with automatic prefetching heuristics, from `config.data` or `con...
Delegate property to self.loop def _get_property(name): """ Delegate property to self.loop """ ret = property( lambda self: getattr(self.loop, name)) if six.PY3: # __doc__ is readonly in Py2 try: ret.__doc__ = getattr(TrainLoop, name).__doc__ except Attribute...
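The delegation pattern above builds a read-only property that forwards attribute access to `self.loop`, so the trainer exposes loop state (e.g. `epoch_num`) without duplicating it. A stripped-down sketch with a hypothetical stand-in `TrainLoop` (the original additionally copies `__doc__` on Py3, which is omitted here):

```python
class TrainLoop(object):
    """A minimal stand-in holding loop state."""
    def __init__(self):
        self.epoch_num = 0

def _get_property(name):
    """Build a read-only property that forwards attribute access to self.loop."""
    return property(lambda self: getattr(self.loop, name))

class Trainer(object):
    def __init__(self):
        self.loop = TrainLoop()
    # the trainer mirrors the loop's attribute under the same name
    epoch_num = _get_property('epoch_num')
```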
Configure the loop given the settings. def config(self, steps_per_epoch, starting_epoch, max_epoch): """ Configure the loop given the settings. """ self.starting_epoch = int(starting_epoch) self.max_epoch = int(max_epoch) self.steps_per_epoch = int(steps_per_epoch) ...
Register callbacks to the trainer. It can only be called before :meth:`Trainer.train()`. Args: cb (Callback or [Callback]): a callback or a list of callbacks Returns: succeed or not def _register_callback(self, cb): """ Register callbacks to the trainer...
Defines what to do in one iteration. The default is: ``self.hooked_sess.run(self.train_op)``. The behavior of each iteration can be changed by either setting ``trainer.train_op``, or overriding this method. def run_step(self): """ Defines what to do in one iteration. The defaul...
Setup callbacks and monitors. Must be called after the main graph is built. Args: callbacks ([Callback]): monitors ([MonitorBase]): def setup_callbacks(self, callbacks, monitors): """ Setup callbacks and monitors. Must be called after the main graph is built. A...
Create the session and set `self.sess`. Call `self.initialize_hooks()`. Finalize the graph. It must be called after callbacks are set up. Args: session_creator (tf.train.SessionCreator): session_init (sessinit.SessionInit): def initialize(self, session_creator, ...
Create SessionRunHooks for all callbacks, and hook it onto `self.sess` to create `self.hooked_sess`. A new trainer may override this method to create multiple groups of hooks, which can be useful when the training is not done by a single `train_op`. def initialize_hooks(self): """ Crea...
Run the main training loop. Args: steps_per_epoch, starting_epoch, max_epoch (int): def main_loop(self, steps_per_epoch, starting_epoch, max_epoch): """ Run the main training loop. Args: steps_per_epoch, starting_epoch, max_epoch (int): """ with...
Implemented by three lines: .. code-block:: python self.setup_callbacks(callbacks, monitors) self.initialize(session_creator, session_init) self.main_loop(steps_per_epoch, starting_epoch, max_epoch) You can call those methods by yourself to have better control on d...
Same as :meth:`train()`, except: 1. Add `extra_callbacks` to callbacks. The default value for `extra_callbacks` is :meth:`DEFAULT_CALLBACKS()`. 2. Default value for `monitors` is :meth:`DEFAULT_MONITORS()`. 3. Provide default values for every option except `steps_per_epoch`. def tra...