Returns ctype arrays for keys and values (converted to strings) in a dictionary def _ctype_dict(param_dict): """ Returns ctype arrays for keys and values (converted to strings) in a dictionary """ assert(isinstance(param_dict, dict)), \ "unexpected type for param_dict: " + str(type(param_dict)) ...
A wrapper for the user-defined handle. def _updater_wrapper(updater): """A wrapper for the user-defined handle.""" def updater_handle(key, lhs_handle, rhs_handle, _): """ ctypes function """ lhs = _ndarray_cls(NDArrayHandle(lhs_handle)) rhs = _ndarray_cls(NDArrayHandle(rhs_handle)) ...
Creates a new KVStore. For single machine training, there are two commonly used types: ``local``: Copies all gradients to CPU memory and updates weights there. ``device``: Aggregates gradients and updates weights on GPUs. With this setting, the KVStore also attempts to use GPU peer-to-peer communicat...
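A minimal doctest-style sketch of creating a store (assumes a standard mxnet installation; the 'local' type needs no cluster setup):
>>> import mxnet as mx
>>> kv = mx.kv.create('local')   # single-machine store; gradients aggregated on CPU
>>> print(kv.type)
local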
Initializes a single or a sequence of key-value pairs into the store. For each key, one must `init` it before calling `push` or `pull`. When multiple workers invoke `init` for the same key, only the value supplied by worker with rank `0` is used. This function returns after data has bee...
Pushes a single or a sequence of key-value pairs into the store. This function returns immediately after adding an operator to the engine. The actual operation is executed asynchronously. If there are consecutive pushes to the same key, there is no guarantee on the serialization of pushes. ...
Pulls a single value or a sequence of values from the store. This function returns immediately after adding an operator to the engine. Subsequent attempts to read from the `out` variable will be blocked until the pull operation completes. `pull` is executed asynchronously after all pre...
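A short end-to-end sketch of init/push/pull on a local store (the key and shape are illustrative):
>>> import mxnet as mx
>>> kv = mx.kv.create('local')
>>> shape = (2, 3)
>>> kv.init(3, mx.nd.ones(shape))        # a key must be initialized once before use
>>> kv.push(3, mx.nd.ones(shape) * 8)    # push is asynchronous
>>> out = mx.nd.zeros(shape)
>>> kv.pull(3, out=out)                  # reading `out` blocks until the pull finishes
>>> print(out.asnumpy().max())
8.0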
Pulls a single RowSparseNDArray value or a sequence of RowSparseNDArray values from the store with specified row_ids. When there is only one row_id, KVStoreRowSparsePull is invoked just once and the result is broadcast to all the rest of the outputs. `row_sparse_pull` is executed asynchronously...
Specifies the type of low-bit quantization for gradient compression and additional arguments depending on the type of compression being used. 2bit Gradient Compression takes a positive float `threshold`. The technique works by thresholding values such that positive values in the gradient...
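A hedged sketch of enabling 2bit compression; distributed stores such as 'dist_sync' are the intended target and require launching the job via a cluster launcher, so this is illustrative only:
>>> kv = mx.kv.create('dist_sync')   # assumes a cluster launched via tools/launch.py
>>> kv.set_gradient_compression({'type': '2bit', 'threshold': 0.5})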
Registers an optimizer with the kvstore. When using a single machine, this function updates the local optimizer. If using multiple machines and this operation is invoked from a worker node, it will serialize the optimizer with pickle and send it to all servers. The function returns aft...
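For example, registering a plain SGD optimizer on a single machine (a sketch; `kv` is a store created as above):
>>> import mxnet as mx
>>> kv = mx.kv.create('local')
>>> kv.set_optimizer(mx.optimizer.SGD(learning_rate=0.1))   # updates now happen inside the store on push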
Returns the type of this kvstore. Returns ------- type : str the string type def type(self): """ Returns the type of this kvstore. Returns ------- type : str the string type """ kv_type = ctypes.c_char_p() check_c...
Returns the rank of this worker node. Returns ------- rank : int The rank of this node, which is in range [0, num_workers()) def rank(self): """ Returns the rank of this worker node. Returns ------- rank : int The rank of this node, whic...
Returns the number of worker nodes. Returns ------- size : int The number of worker nodes. def num_workers(self): """Returns the number of worker nodes. Returns ------- size : int The number of worker nodes. """ size = ctyp...
Saves the optimizer (updater) state to a file. This is often used when checkpointing the model during training. Parameters ---------- fname : str Path to the output states file. dump_optimizer : bool, default False Whether to also save the optimizer itsel...
Loads the optimizer (updater) state from the file. Parameters ---------- fname : str Path to input states file. def load_optimizer_states(self, fname): """Loads the optimizer (updater) state from the file. Parameters ---------- fname : str ...
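A checkpointing sketch (the file name is illustrative; an optimizer/updater must be set before states can be saved):
>>> kv.set_optimizer(mx.optimizer.SGD(learning_rate=0.1))
>>> kv.save_optimizer_states('model.states')   # hypothetical path
>>> kv.load_optimizer_states('model.states')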
Sets a push updater into the store. This function only changes the local store. When running on multiple machines one must use `set_optimizer`. Parameters ---------- updater : function The updater function. Examples -------- >>> def update(k...
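Completing the truncated example as a hedged sketch (assuming the store exposes `_set_updater` as in the snippet above): the custom merge rule runs inside the store on every push.
>>> import mxnet as mx
>>> kv = mx.kv.create('local')
>>> def update(key, input, stored):
...     stored += input * 2          # custom rule: add twice the pushed value
>>> kv._set_updater(update)
>>> kv.init(3, mx.nd.ones((2, 3)))
>>> kv.push(3, mx.nd.ones((2, 3)))
>>> out = mx.nd.zeros((2, 3))
>>> kv.pull(3, out=out)
>>> print(out.asnumpy().max())       # 1 (init) + 1 * 2 (update) = 3
3.0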
Sends a command to all server nodes. Sending command to a server node will cause that server node to invoke ``KVStoreServer.controller`` to execute the command. This function returns after the command has been executed on all server nodes. Parameters ---------- ...
Add a module to the chain. Parameters ---------- module : BaseModule The new module to add. kwargs : ``**keywords`` All the keyword arguments are saved as meta information for the added module. The currently known meta includes - `take_la...
Gets current parameters. Returns ------- (arg_params, aux_params) A pair of dictionaries each mapping parameter names to NDArray values. This is a merged dictionary of all the parameters in the modules. def get_params(self): """Gets current parameters. ...
Initializes parameters. Parameters ---------- initializer : Initializer arg_params : dict Default ``None``. Existing parameters. This has higher priority than `initializer`. aux_params : dict Default ``None``. Existing auxiliary states. This h...
Binds the symbols to construct executors. This is necessary before one can perform computation with the module. Parameters ---------- data_shapes : list of (str, tuple) Typically is `data_iter.provide_data`. label_shapes : list of (str, tuple) Typically i...
Installs and initializes optimizers. Parameters ---------- kvstore : str or KVStore Default `'local'`. optimizer : str or Optimizer Default `'sgd'` optimizer_params : dict Default ``(('learning_rate', 0.01),)``. The default value is not a dict...
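For instance, on a bound module with initialized parameters (hedged; `mod` is assumed to be such a Module):
>>> mod.init_optimizer(kvstore='local', optimizer='sgd',
...                    optimizer_params=(('learning_rate', 0.01),))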
Forward computation. Parameters ---------- data_batch : DataBatch is_train : bool Default is ``None``, in which case `is_train` is taken as ``self.for_training``. def forward(self, data_batch, is_train=None): """Forward computation. Parameters ------...
Backward computation. def backward(self, out_grads=None): """Backward computation.""" assert self.binded and self.params_initialized for i_layer, module in reversed(list(enumerate(self._modules))): module.backward(out_grads=out_grads) if i_layer == ...
Updates parameters according to installed optimizer and the gradient computed in the previous forward-backward cycle. def update(self): """Updates parameters according to installed optimizer and the gradient computed in the previous forward-backward cycle. """ assert self.binded...
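Together, forward/backward/update form one training step; a hedged sketch with an assumed bound module `mod` and a `data_batch` drawn from a data iterator:
>>> mod.forward(data_batch, is_train=True)   # compute outputs
>>> mod.backward()                           # compute gradients
>>> mod.update()                             # apply the installed optimizer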
Gets outputs from a previous forward computation. Parameters ---------- merge_multi_context : bool Default is ``True``. In the case when data-parallelism is used, the outputs will be collected from multiple devices. A ``True`` value indicates that we should me...
Gets the gradients with respect to the inputs of the module. Parameters ---------- merge_multi_context : bool Default is ``True``. In the case when data-parallelism is used, the outputs will be collected from multiple devices. A ``True`` value indicates that we ...
Evaluates and accumulates evaluation metric on outputs of the last forward computation. Parameters ---------- eval_metric : EvalMetric labels : list of NDArray Typically ``data_batch.label``. def update_metric(self, eval_metric, labels, pre_sliced=False): """Evaluat...
Installs monitor on all executors. def install_monitor(self, mon): """Installs monitor on all executors.""" assert self.binded for module in self._modules: module.install_monitor(mon)
Generate the iterator of the mnist dataset def get_iterator(data_shape, use_caffe_data): """Generate the iterator of the mnist dataset""" def get_iterator_impl_mnist(args, kv): """return train and val iterators for mnist""" # download data get_mnist_ubyte() flat = False if len(data_shap...
The function is used to run predictions on the audio files in the directory `prediction_dir`. Parameters ---------- net: The model that has been trained. prediction_dir: string, default ./Test The directory that contains the audio files on which predictions are to be made def predict(p...
Thread loop for generating data Parameters ---------- proc_id: int Process id alive: multiprocessing.Value variable for signaling whether process should continue or not queue: multiprocessing.Queue queue for passing data back fn: funct...
Start processes if not already started def _init_proc(self): """Start processes if not already started""" if not self.proc: self.proc = [ mp.Process(target=self._proc_loop, args=(i, self.alive, self.queue, self.fn)) for i in range(self.num_proc) ]...
Resets the generator by stopping all processes def reset(self): """Resets the generator by stopping all processes""" self.alive.value = False qsize = 0 try: while True: self.queue.get(timeout=0.1) qsize += 1 except QEmptyExcept: ...
Create a base class with a metaclass. def with_metaclass(meta, *bases): """Create a base class with a metaclass.""" # This requires a bit of explanation: the basic idea is to make a dummy # metaclass for one level of class instantiation that replaces itself with # the actual metaclass. class metacl...
Load library by searching possible paths. def _load_lib(): """Load library by searching possible paths.""" lib_path = libinfo.find_lib_path() lib = ctypes.CDLL(lib_path[0], ctypes.RTLD_LOCAL) # declare the return type of MXGetLastError so error messages come back as C strings lib.MXGetLastError.restype = ctypes.c_char_p return lib
Create ctypes array from a Python array. Parameters ---------- ctype : ctypes data type Data type of the array we want to convert to, such as mx_float. values : tuple or list Data content. Returns ------- out : ctypes array Created ctypes array. Examples -...
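A quick doctest sketch of c_array (mirrors the API documented above; assumes mxnet is importable):
>>> import mxnet as mx
>>> x = mx.base.c_array(mx.base.mx_float, [1, 2, 3])
>>> print(len(x), x[1])
3 2.0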
Create ctypes const void ** from a list of MXNet objects with handles. Parameters ---------- objs : list of NDArray/Symbol. MXNet objects. Returns ------- (ctypes.c_void_p * len(objs)) A void ** pointer that can be passed to C API. def c_handle_array(objs): """Create ctype...
Convert a ctypes pointer to a numpy array. The resulting NumPy array shares the memory with the pointer. Parameters ---------- cptr : ctypes.POINTER(mx_float) pointer to the memory region shape : tuple Shape of target `NDArray`. Returns ------- out : numpy_array ...
Build argument docs in python style. arg_names : list of str Argument names. arg_types : list of str Argument type information. arg_descs : list of str Argument description information. remove_dup : boolean, optional Whether to remove duplication or not. Returns ...
Append the definition position to each function contained in module. Examples -------- # Put the following code at the end of a file add_fileline_to_docstring(__name__) def add_fileline_to_docstring(module, incursive=True): """Append the definition position to each function contained in module. ...
Registers op functions created by `make_op_func` under `root_namespace.module_name.[submodule_name]`, where `submodule_name` is one of `_OP_SUBMODULE_NAME_LIST`. Parameters ---------- root_namespace : str Top level module name, `mxnet` in the current case. module_name : str Sec...
Generate op functions created by `op_code_gen_func` and write to the source file of `root_namespace.module_name.[submodule_name]`, where `submodule_name` is one of `_OP_SUBMODULE_NAME_LIST`. Parameters ---------- root_namespace : str Top level module name, `mxnet` in the current case. ...
Turns on/off NumPy compatibility. NumPy-compatibility is turned off by default in the backend. Parameters ---------- active : bool Indicates whether to turn on/off NumPy compatibility. Returns ------- A bool value indicating the previous state of NumPy compatibility. def set_np_compat...
Checks whether the NumPy compatibility is currently turned on. NumPy-compatibility is turned off by default in the backend. Returns ------- A bool value indicating whether the NumPy compatibility is currently on. def is_np_compat(): """ Checks whether the NumPy compatibility is currently turne...
Wraps a function with an activated NumPy-compatibility scope. This ensures that the function executes with NumPy-compatible semantics, such as zero-dim and zero-size tensors. Example:: import mxnet as mx @mx.use_np_compat def scalar_one(): return mx...
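Completing the truncated Example as a hedged sketch (mirroring the docstring; under the decorator, a zero-dim tensor can be created instead of raising):
>>> import mxnet as mx
>>> @mx.use_np_compat
... def scalar_one():
...     return mx.nd.ones(())     # zero-dim shape is only legal inside an np-compat scope
>>> print(scalar_one().shape)
()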
computes the root relative squared error (condensed using standard deviation formula) def rse(label, pred): """computes the root relative squared error (condensed using standard deviation formula)""" numerator = np.sqrt(np.mean(np.square(label - pred), axis=None)) denominator = np.std(label, axis=None)...
computes the relative absolute error (condensed using standard deviation formula) def rae(label, pred): """computes the relative absolute error (condensed using standard deviation formula)""" numerator = np.mean(np.abs(label - pred), axis=None) denominator = np.mean(np.abs(label - np.mean(label, axis=None)...
computes the empirical correlation coefficient def corr(label, pred): """computes the empirical correlation coefficient""" numerator1 = label - np.mean(label, axis=0) numerator2 = pred - np.mean(pred, axis=0) numerator = np.mean(numerator1 * numerator2, axis=0) denominator = np.std(label, axis=0)...
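A toy numeric check of `rse` with hypothetical arrays (plain numpy, mirroring the formula above):
>>> import numpy as np
>>> label = np.array([1.0, 2.0, 3.0])
>>> pred = np.array([1.0, 2.0, 4.0])
>>> numerator = np.sqrt(np.mean(np.square(label - pred)))   # RMSE, ~0.577
>>> denominator = np.std(label)                             # ~0.816
>>> round(numerator / denominator, 3)
0.707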
:return: mxnet metric object def get_custom_metrics(): """ :return: mxnet metric object """ _rse = mx.metric.create(rse) _rae = mx.metric.create(rae) _corr = mx.metric.create(corr) return mx.metric.create([_rae, _rse, _corr])
Get input size def _get_input(proto): """Get input size """ layer = caffe_parser.get_layers(proto) if len(proto.input_dim) > 0: input_dim = proto.input_dim elif len(proto.input_shape) > 0: input_dim = proto.input_shape[0].dim elif layer[0].type == "Input": input_dim = la...
Convert convolution layer parameter from Caffe to MXNet def _convert_conv_param(param): """ Convert convolution layer parameter from Caffe to MXNet """ param_string = "num_filter=%d" % param.num_output pad_w = 0 pad_h = 0 if isinstance(param.pad, int): pad = param.pad param...
Convert the pooling layer parameter def _convert_pooling_param(param): """Convert the pooling layer parameter """ param_string = "pooling_convention='full', " if param.global_pooling: param_string += "global_pool=True, kernel=(1,1)" else: param_string += "pad=(%d,%d), kernel=(%d,%d)...
Parse Caffe prototxt into symbol string def _parse_proto(prototxt_fname): """Parse Caffe prototxt into symbol string """ proto = caffe_parser.read_prototxt(prototxt_fname) # process data layer input_name, input_dim, layers = _get_input(proto) # only support single input, so always use `data` a...
Convert caffe model definition into Symbol Parameters ---------- prototxt_fname : str Filename of the prototxt file Returns ------- Symbol Converted Symbol tuple Input shape def convert_symbol(prototxt_fname): """Convert caffe model definition into Symbol ...
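Typical usage of the converter (hedged; 'deploy.prototxt' is a placeholder path):
>>> sym, input_dim = convert_symbol('deploy.prototxt')   # returns (Symbol, input shape) per the docstring
>>> print(sym.list_arguments()[:3], input_dim)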
Complete an episode's worth of training for each environment. def train_episode(agent, envs, preprocessors, t_max, render): """Complete an episode's worth of training for each environment.""" num_envs = len(envs) # Buffers to hold trajectories, e.g. `env_xs[i]` will hold the observations # for environ...
parses the trained .caffemodel file file_path: /path/to/trained-model.caffemodel returns: layers def parse_caffemodel(file_path): """ parses the trained .caffemodel file file_path: /path/to/trained-model.caffemodel returns: layers """ f = open(file_path, 'rb') contents = f.read() ...
For a given audio clip, calculate the log of its Fourier Transform Params: audio_clip(str): Path to the audio clip def featurize(self, audio_clip, overwrite=False, save_feature_as_csvfile=False): """ For a given audio clip, calculate the log of its Fourier Transform Params: ...
Read metadata from the description file (possibly takes long, depending on the filesize) Params: desc_file (str): Path to a JSON-line file that contains labels and paths to the audio files partition (str): One of 'train', 'validation' or 'test' ma...
Featurize a minibatch of audio, zero pad them and return a dictionary Params: audio_paths (list(str)): List of paths to audio files texts (list(str)): List of texts corresponding to the audio files Returns: dict: See below for contents def prepare_minibatch(self, aud...
Estimate the mean and std of the features from the training set Params: k_samples (int): Use this number of samples for estimation def sample_normalize(self, k_samples=1000, overwrite=False): """ Estimate the mean and std of the features from the training set Params: k_s...
GRU Cell symbol Reference: * Chung, Junyoung, et al. "Empirical evaluation of gated recurrent neural networks on sequence modeling." arXiv preprint arXiv:1412.3555 (2014). def gru(num_hidden, indata, prev_state, param, seqidx, layeridx, dropout=0., is_batchnorm=False, gamma=None, beta=None, name=None):...
save image def save_image(data, epoch, image_size, batch_size, output_dir, padding=2): """ save image """ data = data.asnumpy().transpose((0, 2, 3, 1)) datanp = np.clip( (data - np.min(data))*(255.0/(np.max(data) - np.min(data))), 0, 255).astype(np.uint8) x_dim = min(8, batch_size) y_dim = ...
Traverses the root directory that contains images and generates an image list iterator. Parameters ---------- root: string recursive: bool exts: string Returns ------- image iterator that contains all the images under the specified path def list_image(root, recursive, exts): """T...
Helper function to write the image list into the file. The format is as below, integer_image_index \t float_label_index \t path_to_image Note that the blank between the number and the tab is only used for readability. Parameters ---------- path_out: string image_list: list def write_list(path_out, imag...
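For example, writing one tab-separated .lst entry (the path and values are illustrative):
>>> with open('sample.lst', 'w') as fout:            # hypothetical output path
...     _ = fout.write('%d\t%f\t%s\n' % (0, 2.0, 'dog/img_0001.jpg'))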
Generates .lst file. Parameters ---------- args: object that contains all the arguments def make_list(args): """Generates .lst file. Parameters ---------- args: object that contains all the arguments """ image_list = list_image(args.root, args.recursive, args.exts) image_list = ...
Reads the .lst file and generates corresponding iterator. Parameters ---------- path_in: string Returns ------- item iterator that contains information in .lst file def read_list(path_in): """Reads the .lst file and generates corresponding iterator. Parameters ---------- path_in...
Reads, preprocesses, packs the image and puts it back in the output queue. Parameters ---------- args: object i: int item: list q_out: queue def image_encode(args, i, item, q_out): """Reads, preprocesses, packs the image and puts it back in the output queue. Parameters ---------- args: ob...
Function that will be spawned to fetch the image from the input queue and put it back to the output queue. Parameters ---------- args: object q_in: queue q_out: queue def read_worker(args, q_in, q_out): """Function that will be spawned to fetch the image from the input queue and put it back...
Function that will be spawned to fetch the processed image from the output queue and write to the .rec file. Parameters ---------- q_out: queue fname: string working_dir: string def write_worker(q_out, fname, working_dir): """Function that will be spawned to fetch the processed image from the o...
Defines all arguments. Returns ------- args object that contains all the params def parse_args(): """Defines all arguments. Returns ------- args object that contains all the params """ parser = argparse.ArgumentParser( formatter_class=argparse.ArgumentDefaultsHelpFormatter, ...
Crop and normalize an image nd array. def transform(data, target_wd, target_ht, is_train, box): """Crop and normalize an image nd array.""" if box is not None: x, y, w, h = box data = data[y:min(y+h, data.shape[0]), x:min(x+w, data.shape[1])] # Resize to target_wd * target_ht. data =...
Return training and testing iterator for the CUB200-2011 dataset. def cub200_iterator(data_path, batch_k, batch_size, data_shape): """Return training and testing iterator for the CUB200-2011 dataset.""" return (CUB200Iter(data_path, batch_k, batch_size, data_shape, is_train=True), CUB200Iter(data_p...
Load and transform an image. def get_image(self, img, is_train): """Load and transform an image.""" img_arr = mx.image.imread(img) img_arr = transform(img_arr, 256, 256, is_train, self.boxes[img]) return img_arr
Sample a training batch (data and label). def sample_train_batch(self): """Sample a training batch (data and label).""" batch = [] labels = [] num_groups = self.batch_size // self.batch_k # For CUB200, we use the first 100 classes for training. sampled_classes = np.rand...
Return a batch. def next(self): """Return a batch.""" if self.is_train: data, labels = self.sample_train_batch() else: if self.test_count * self.batch_size < len(self.test_image_files): data, labels = self.get_test_batch() self.test_count ...
Load mnist dataset def load_mnist(training_num=50000): """Load mnist dataset""" data_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), 'mnist.npz') if not os.path.isfile(data_path): from six.moves import urllib origin = ( 'https://github.com/sxjscience/mxnet/raw...
Check the library for compile-time features. The list of features is maintained in libinfo.h and libinfo.cc Returns ------- list List of :class:`.Feature` objects def feature_list(): """ Check the library for compile-time features. The list of features is maintained in libinfo.h and libi...
Check for a particular feature by name Parameters ---------- feature_name: str The name of a valid feature as string, for example 'CUDA' Returns ------- Boolean True if the feature is enabled, False if it is disabled; a RuntimeError is raised if the feature is not known...
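A hedged sketch of querying features at runtime (API as exposed by mxnet.runtime in recent versions; the result depends on how the library was built):
>>> import mxnet as mx
>>> feats = mx.runtime.Features()            # collects the compile-time feature map
>>> print(feats.is_enabled('CUDA'))          # True only on a CUDA-enabled build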
make a directory to store all caches Returns: --------- cache path def cache_path(self): """ make a directory to store all caches Returns: --------- cache path """ cache_path = os.path.join(os.path.dirname(__file__), '..', 'cache...
find out which indexes correspond to given image set (train or val) Parameters: ---------- shuffle : boolean whether to shuffle the image list Returns: ---------- entire list of images specified in the setting def _load_image_set_index(self, shuffle): ...
given image index, find out full path Parameters: ---------- index: int index of a specific image Returns: ---------- full path of this image def image_path_from_index(self, index): """ given image index, find out full path Parameter...
given image index, find out annotation path Parameters: ---------- index: int index of a specific image Returns: ---------- full path of annotation file def _label_path_from_index(self, index): """ given image index, find out annotation path...
preprocess all ground-truths Returns: ---------- labels packed in [num_images x max_num_objects x 5] tensor def _load_image_labels(self): """ preprocess all ground-truths Returns: ---------- labels packed in [num_images x max_num_objects x 5] tensor ...
top level evaluations Parameters: ---------- detections: list result list, each entry is a matrix of detections Returns: ---------- None def evaluate_detections(self, detections): """ top level evaluations Parameters: -----...
this is a template VOCdevkit/results/VOC2007/Main/<comp_id>_det_test_aeroplane.txt Returns: ---------- a string template def get_result_file_template(self): """ this is a template VOCdevkit/results/VOC2007/Main/<comp_id>_det_test_aeroplane.txt Retur...
write results files in pascal devkit path Parameters: ---------- all_boxes: list boxes to be processed [bbox, confidence] Returns: ---------- None def write_pascal_results(self, all_boxes): """ write results files in pascal devkit path ...
python evaluation wrapper Returns: ---------- None def do_python_eval(self): """ python evaluation wrapper Returns: ---------- None """ annopath = os.path.join(self.data_path, 'Annotations', '{:s}.xml') imageset_file = os.path.jo...
get image size info Returns: ---------- tuple of (height, width) def _get_imsize(self, im_name): """ get image size info Returns: ---------- tuple of (height, width) """ img = cv2.imread(im_name) return (img.shape[0], img.shape[1])
parser : argparse.ArgumentParser return a parser with the arguments required by fit added def add_fit_args(parser): """ parser : argparse.ArgumentParser return a parser with the arguments required by fit added """ train = parser.add_argument_group('Training', 'model training') train.add_argument('--network',...
train a model args : argparse returns network : the symbol definition of the neural network data_loader : function that returns the train and val data iterators def fit(args, network, data_loader, **kwargs): """ train a model args : argparse returns network : the symbol definition of the ne...
Helper function to create multiple random crop augmenters. Parameters ---------- min_object_covered : float or list of float, default=0.1 The cropped area of the image must contain at least this fraction of any bounding box supplied. The value of this parameter should be non-negative. ...
Create augmenters for detection. Parameters ---------- data_shape : tuple of int Shape for output data resize : int Resize shorter edge if larger than 0 at the beginning rand_crop : float [0, 1], probability to apply random cropping rand_pad : float [0, 1], probab...
Override default. def dumps(self): """Override default.""" return [self.__class__.__name__.lower(), [x.dumps() for x in self.aug_list]]
Calculate areas for multiple labels def _calculate_areas(self, label): """Calculate areas for multiple labels""" heights = np.maximum(0, label[:, 3] - label[:, 1]) widths = np.maximum(0, label[:, 2] - label[:, 0]) return heights * widths
Calculate intersect areas, normalized. def _intersect(self, label, xmin, ymin, xmax, ymax): """Calculate intersect areas, normalized.""" left = np.maximum(label[:, 0], xmin) right = np.minimum(label[:, 2], xmax) top = np.maximum(label[:, 1], ymin) bot = np.minimum(label[:, 3], y...
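A standalone toy illustration of the normalized intersection computed above (hypothetical boxes in [xmin, ymin, xmax, ymax] order, all coordinates in [0, 1]):
>>> import numpy as np
>>> label = np.array([[0.1, 0.1, 0.5, 0.5]])       # one ground-truth box
>>> xmin, ymin, xmax, ymax = 0.2, 0.2, 0.6, 0.6    # candidate crop region
>>> left = np.maximum(label[:, 0], xmin)
>>> right = np.minimum(label[:, 2], xmax)
>>> top = np.maximum(label[:, 1], ymin)
>>> bot = np.minimum(label[:, 3], ymax)
>>> print(np.maximum(0, right - left) * np.maximum(0, bot - top))
[0.09]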
Check if constraints are satisfied def _check_satisfy_constraints(self, label, xmin, ymin, xmax, ymax, width, height): """Check if constraints are satisfied""" if (xmax - xmin) * (ymax - ymin) < 2: return False # only 1 pixel x1 = float(xmin) / width y1 = float(ymin) / height...
Convert labels according to crop box def _update_labels(self, label, crop_box, height, width): """Convert labels according to crop box""" xmin = float(crop_box[0]) / width ymin = float(crop_box[1]) / height w = float(crop_box[2]) / width h = float(crop_box[3]) / height o...
Propose cropping areas def _random_crop_proposal(self, label, height, width): """Propose cropping areas""" from math import sqrt if not self.enabled or height <= 0 or width <= 0: return () min_area = self.area_range[0] * height * width max_area = self.area_range[1] ...
Update label according to padding region def _update_labels(self, label, pad_box, height, width): """Update label according to padding region""" out = label.copy() out[:, (1, 3)] = (out[:, (1, 3)] * width + pad_box[0]) / pad_box[2] out[:, (2, 4)] = (out[:, (2, 4)] * height + pad_box[1])...
Generate random padding region def _random_pad_proposal(self, label, height, width): """Generate random padding region""" from math import sqrt if not self.enabled or height <= 0 or width <= 0: return () min_area = self.area_range[0] * height * width max_area = self....