Parameterized ReLU as in the paper `Delving Deep into Rectifiers: Surpassing
Human-Level Performance on ImageNet Classification
<http://arxiv.org/abs/1502.01852>`_.
Args:
x (tf.Tensor): input
init (float): initial value for the learnable slope.
name (str): name of the output.
V... |
A shorthand of BatchNormalization + ReLU.
def BNReLU(x, name=None):
"""
A shorthand of BatchNormalization + ReLU.
"""
x = BatchNorm('bn', x)
x = tf.nn.relu(x, name=name)
return x |
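The BNReLU shorthand above can be illustrated in plain numpy. This is a hypothetical inference-mode sketch (batch statistics, unit gamma, zero beta), not tensorpack's actual BatchNorm layer:

```python
import numpy as np

def bn_relu(x, eps=1e-5):
    """Normalize x over the batch axis, then apply ReLU.
    Illustrative only: gamma=1, beta=0, statistics from the batch itself."""
    mean = x.mean(axis=0, keepdims=True)
    var = x.var(axis=0, keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return np.maximum(x_hat, 0.0)

x = np.array([[1.0, -2.0], [3.0, 2.0]])
y = bn_relu(x)
print(y.shape)  # (2, 2)
```

After normalization every output is zero-mean per feature, so ReLU keeps roughly half of the activations.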
More code that reproduces the paper can be found at https://github.com/ppwwyyxx/GroupNorm-reproduce/.
def GroupNorm(x, group=32, gamma_initializer=tf.constant_initializer(1.)):
"""
More code that reproduces the paper can be found at https://github.com/ppwwyyxx/GroupNorm-reproduce/.
"""
shape = x.get_sh... |
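The truncated GroupNorm above can be sketched in numpy. This is an assumed NHWC layout for readability (tensorpack's actual implementation may differ), with scalar gamma/beta instead of learned per-channel parameters:

```python
import numpy as np

def group_norm(x, group=4, eps=1e-5, gamma=1.0, beta=0.0):
    """Normalize an NHWC tensor within channel groups (Group Normalization)."""
    n, h, w, c = x.shape
    assert c % group == 0, "channels must be divisible by the group count"
    x = x.reshape(n, h, w, group, c // group)
    # statistics are computed per-sample, per-group, over H, W and the group's channels
    mean = x.mean(axis=(1, 2, 4), keepdims=True)
    var = x.var(axis=(1, 2, 4), keepdims=True)
    x = (x - mean) / np.sqrt(var + eps)
    return gamma * x.reshape(n, h, w, c) + beta

x = np.random.RandomState(0).randn(2, 5, 5, 8)
y = group_norm(x, group=4)
print(y.shape)  # (2, 5, 5, 8)
```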
Args:
freeze (bool): whether to freeze all the variables under the scope
def backbone_scope(freeze):
"""
Args:
freeze (bool): whether to freeze all the variables under the scope
"""
def nonlin(x):
x = get_norm()(x)
return tf.nn.relu(x)
with argscope([Conv2D, MaxPool... |
Extract the images into a 4D uint8 numpy array [index, y, x, depth].
def extract_images(filename):
"""Extract the images into a 4D uint8 numpy array [index, y, x, depth]."""
with gzip.open(filename) as bytestream:
magic = _read32(bytestream)
if magic != 2051:
raise ValueError(
... |
Extract the labels into a 1D uint8 numpy array [index].
def extract_labels(filename):
"""Extract the labels into a 1D uint8 numpy array [index]."""
with gzip.open(filename) as bytestream:
magic = _read32(bytestream)
if magic != 2049:
raise ValueError(
'Invalid magic ... |
When a dependency of a class is not available, create a dummy class which throws ImportError when used.
Args:
klass (str): name of the class.
dependency (str): name of the dependency.
Returns:
class: a class object
def create_dummy_class(klass, dependency):
"""
When a dependen... |
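The idea behind `create_dummy_class` can be sketched as follows: return a stand-in class that raises ImportError (naming the missing dependency) the moment it is instantiated. The message wording here is illustrative, not tensorpack's exact text:

```python
def create_dummy_class(klass, dependency):
    """Return a placeholder class that raises ImportError on instantiation,
    mentioning which dependency was missing."""
    class _Dummy(object):
        def __init__(self, *args, **kwargs):
            raise ImportError(
                "Cannot import '{}', therefore '{}' is not available.".format(
                    dependency, klass))
    _Dummy.__name__ = klass
    return _Dummy

Foo = create_dummy_class('Foo', 'some_missing_lib')
try:
    Foo()
except ImportError as e:
    print(e)  # mentions some_missing_lib
```

This lets a package expose all of its symbols at import time while deferring the failure until the unavailable feature is actually used.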
When a dependency of a function is not available, create a dummy function which throws ImportError when used.
Args:
func (str): name of the function.
dependency (str or list[str]): name(s) of the dependency.
Returns:
function: a function object
def create_dummy_func(func, dependency):... |
Log deprecation warning.
Args:
name (str): name of the deprecated item.
text (str, optional): information about the deprecation.
eos (str, optional): end of service date such as "YYYY-MM-DD".
def log_deprecated(name="", text="", eos=""):
"""
Log deprecation warning.
Args:
... |
Args:
text, eos: same as :func:`log_deprecated`.
Returns:
a decorator which deprecates the function.
Example:
.. code-block:: python
@deprecated("Explanation of what to do instead.", "2017-11-4")
def foo(...):
pass
def deprecated(text="", eos="... |
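A minimal sketch of such a `deprecated` decorator, using the stdlib `warnings` module (the message format is an assumption, not tensorpack's exact log output):

```python
import functools
import warnings

def deprecated(text="", eos=""):
    """Decorator that emits a DeprecationWarning each time the wrapped
    function is called; `eos` is an end-of-service date like 'YYYY-MM-DD'."""
    def wrapper(func):
        @functools.wraps(func)
        def inner(*args, **kwargs):
            msg = "{} is deprecated. {}".format(func.__name__, text)
            if eos:
                msg += " It will be removed after {}.".format(eos)
            warnings.warn(msg, DeprecationWarning, stacklevel=2)
            return func(*args, **kwargs)
        return inner
    return wrapper

@deprecated("Explanation of what to do instead.", "2017-11-04")
def foo():
    return 42
```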
Clear the queue, then call dataflow.__iter__() again and fill into the queue.
def refill_queue(self):
"""
Clear the queue, then call dataflow.__iter__() again and fill into the queue.
"""
self.thread.pause() # pause enqueue
opt = tfv1.RunOptions()
opt.timeout_in_ms ... |
Create a hook-only callback which maintains an EMA of the queue size.
Also tf.summary.scalar the EMA.
def _create_ema_callback(self):
"""
        Create a hook-only callback which maintains an EMA of the queue size.
Also tf.summary.scalar the EMA.
"""
with self.cached_name_scope():
... |
shapes except for the batch dimension
def _setup(self, inputs):
logger.info("Setting up the queue for CPU prefetching ...")
self.input_placehdrs = [build_or_reuse_placeholder(v) for v in inputs]
assert len(self.input_placehdrs) > 0, \
"BatchQueueInput has to be used with some input ... |
Wrap a dataflow to tf.data.Dataset.
This function will also reset the dataflow.
If the dataflow itself is finite, the returned dataset is also finite.
Therefore, if used for training, you'll need to add `.repeat()` on the returned
dataset.
Args:
df (DataFlow): a dat... |
Returns:
OnlinePredictor: the nth predictor on the nth tower.
def get_predictor(self, n):
"""
Returns:
OnlinePredictor: the nth predictor on the nth tower.
"""
l = len(self.predictors)
if n >= l:
logger.warn("n > #towers, will assign predictor... |
Compute pairwise intersection areas between boxes.
Args:
boxes1: a numpy array with shape [N, 4] holding N boxes
boxes2: a numpy array with shape [M, 4] holding M boxes
Returns:
a numpy array with shape [N*M] representing pairwise intersection area
def intersection(boxes1, boxes2):
"""Compute pairw... |
Computes pairwise intersection-over-union between box collections.
Args:
boxes1: a numpy array with shape [N, 4] holding N boxes.
boxes2: a numpy array with shape [M, 4] holding M boxes.
Returns:
a numpy array with shape [N, M] representing pairwise iou scores.
def iou(boxes1, boxes2):
"""Computes ... |
Computes pairwise intersection-over-area between box collections.
Intersection-over-area (ioa) between two boxes box1 and box2 is defined as
their intersection area over box2's area. Note that ioa is not symmetric,
that is, IOA(box1, box2) != IOA(box2, box1).
Args:
boxes1: a numpy array with shape [N, 4] ... |
Download the data from Marlin's website, unless it's already here.
def maybe_download(url, work_directory):
"""Download the data from Marlin's website, unless it's already here."""
filename = url.split("/")[-1]
filepath = os.path.join(work_directory, filename)
if not os.path.exists(filepath):
l... |
Returns:
dict: {cls_number: synset_id}
def get_synset_1000(self):
"""
Returns:
dict: {cls_number: synset_id}
"""
fname = os.path.join(self.dir, 'synsets.txt')
assert os.path.isfile(fname)
lines = [x.strip() for x in open(fname).readlines()]
... |
Args:
name (str): 'train' or 'val' or 'test'
dir_structure (str): same as in :meth:`ILSVRC12.__init__()`.
Returns:
list: list of (image filename, label)
def get_image_list(self, name, dir_structure='original'):
"""
Args:
name (str): 'train' or 'va... |
Args:
size (tuple): image size in (h, w). Defaults to (256, 256).
Returns:
np.ndarray: per-pixel mean of shape (h, w, 3 (BGR)) in range [0, 255].
def get_per_pixel_mean(self, size=None):
"""
Args:
size (tuple): image size in (h, w). Defaults to (256, 256).
... |
Return the directory structure of "dir".
Args:
dir(str): something like '/path/to/imagenet/val'
Returns:
either 'train' or 'original'
def guess_dir_structure(dir):
"""
Return the directory structure of "dir".
Args:
dir(str): something like ... |
Args:
json_file (str): path to the results json file in coco format
Returns:
dict: the evaluation metrics
def print_coco_metrics(self, json_file):
"""
Args:
json_file (str): path to the results json file in coco format
Returns:
dict: the e... |
Args:
add_gt: whether to add ground truth bounding box annotations to the dicts
add_mask: whether to also add ground truth mask
Returns:
a list of dict, each has keys including:
'image_id', 'file_name',
and (if add_gt is True) 'boxes', 'class'... |
Change relative filename to absolute file name.
def _use_absolute_file_name(self, img):
"""
        Change relative filename to absolute file name.
"""
img['file_name'] = os.path.join(
self._imgdir, img['file_name'])
assert os.path.isfile(img['file_name']), img['file_name'... |
Add 'boxes', 'class', 'is_crowd' of this image to the dict, used by detection.
If add_mask is True, also add 'segmentation' in coco poly format.
def _add_detection_gt(self, img, add_mask):
"""
Add 'boxes', 'class', 'is_crowd' of this image to the dict, used by detection.
If add_mask is ... |
Loads and merges several instance files together.
Returns the same format as :meth:`COCODetection.load`.
def load_many(basedir, names, add_gt=True, add_mask=False):
"""
    Loads and merges several instance files together.
Returns the same format as :meth:`COCODetection.load`.
"""
... |
Args:
names (list[str]): name of the training datasets, e.g. ['train2014', 'valminusminival2014']
Returns:
roidbs (list[dict]):
Produce "roidbs" as a list of dict, each dict corresponds to one image with k>=0 instances.
and the following keys are expected for training:... |
Args:
name (str): name of one inference dataset, e.g. 'minival2014'
Returns:
roidbs (list[dict]):
Each dict corresponds to one image to run inference on. The
following keys in the dict are expected:
file_name (str): full path to the image
... |
Args:
results (list[dict]): the inference results as dicts.
Each dict corresponds to one __instance__. It contains the following keys:
image_id (str): the id that matches `load_inference_roidbs`.
category_id (int): the category prediction, in range [1, #categ... |
Surround a context with a timer.
Args:
msg(str): the log to print.
log_start(bool): whether to print also at the beginning.
Example:
.. code-block:: python
with timed_operation('Good Stuff'):
time.sleep(1)
Will print:
.. code-block:: pytho... |
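The `timed_operation` context shown above can be sketched with `contextlib.contextmanager`; the exact log format here is an assumption:

```python
import time
from contextlib import contextmanager

@contextmanager
def timed_operation(msg, log_start=False):
    """Print the wall-clock time spent inside the enclosed block."""
    if log_start:
        print("Start {} ...".format(msg))
    start = time.time()
    yield
    print("{} finished, time:{:.4f}sec.".format(msg, time.time() - start))

with timed_operation('Good Stuff'):
    time.sleep(0.1)
```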
A context which adds the time spent inside to TotalTimer.
def total_timer(msg):
""" A context which add the time spent inside to TotalTimer. """
start = timer()
yield
t = timer() - start
_TOTAL_TIMER_DATA[msg].feed(t) |
Print the content of the TotalTimer, if it's not empty. This function will automatically get
called when program exits.
def print_total_timer():
"""
Print the content of the TotalTimer, if it's not empty. This function will automatically get
called when program exits.
"""
if len(_TOTAL_TIMER_DA... |
Will reset state of each augmentor
def reset_state(self):
""" Will reset state of each augmentor """
super(AugmentorList, self).reset_state()
for a in self.augmentors:
a.reset_state() |
Make sure processes terminate when the main process exits.
Args:
proc (multiprocessing.Process or list)
def ensure_proc_terminate(proc):
"""
    Make sure processes terminate when the main process exits.
Args:
proc (multiprocessing.Process or list)
"""
if isinstance(proc, list):
for... |
Set the "death signal" of the current process, so that
the current process will be cleaned with guarantee
in case the parent dies accidentally.
def enable_death_signal(_warn=True):
"""
Set the "death signal" of the current process, so that
the current process will be cleaned with guarantee
in c... |
Returns:
If called in main thread, returns a context where ``SIGINT`` is ignored, and yield True.
Otherwise yield False.
def mask_sigint():
"""
Returns:
If called in main thread, returns a context where ``SIGINT`` is ignored, and yield True.
Otherwise yield False.
"""
if... |
Start process(es) with SIGINT ignored.
Args:
proc: (mp.Process or list)
Note:
The signal mask is only applied when called from main thread.
def start_proc_mask_signal(proc):
"""
Start process(es) with SIGINT ignored.
Args:
proc: (mp.Process or list)
Note:
The... |
Execute a command with timeout, and return STDOUT and STDERR
Args:
cmd(str): the command to execute.
timeout(float): timeout in seconds.
Returns:
output(bytes), retcode(int). If timeout, retcode is -1.
def subproc_call(cmd, timeout=None):
"""
Execute a command with timeout, an... |
Put obj to queue, but will give up when the thread is stopped
def queue_put_stoppable(self, q, obj):
""" Put obj to queue, but will give up when the thread is stopped"""
while not self.stopped():
try:
q.put(obj, timeout=5)
break
except queue.Full:... |
Take obj from queue, but will give up when the thread is stopped
def queue_get_stoppable(self, q):
""" Take obj from queue, but will give up when the thread is stopped"""
while not self.stopped():
try:
return q.get(timeout=5)
except queue.Empty:
p... |
Args:
    rank(int): rank of the element. All elements must have different ranks.
val: an object
def put(self, rank, val):
"""
Args:
            rank(int): rank of the element. All elements must have different ranks.
val: an object
"""
idx = bisect.bisect(s... |
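The truncated `put` above keeps elements sorted by rank with `bisect`. A minimal container sketch around that method (the class shape is an assumption; only the bisect-insert logic comes from the snippet):

```python
import bisect

class OrderedContainer(object):
    """Store (rank, value) pairs so that values stay sorted by rank."""
    def __init__(self):
        self.ranks = []
        self.data = []

    def put(self, rank, val):
        # All elements must have different ranks.
        idx = bisect.bisect(self.ranks, rank)
        self.ranks.insert(idx, rank)
        self.data.insert(idx, val)

c = OrderedContainer()
c.put(3, 'c')
c.put(1, 'a')
c.put(2, 'b')
print(c.data)  # ['a', 'b', 'c']
```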
Visualize the weights in convolution filters.
Args:
filters: tensor containing the weights [H,W,Cin,Cout]
name: label for tensorboard
Returns:
        image of all weights
def visualize_conv_weights(filters, name):
"""Visualize use weights in convolution filters.
Args:
filters... |
Visualize activations for convolution layers.
Remarks:
This tries to place all activations into a square.
Args:
activation: tensor with the activation [B,H,W,C]
name: label for tensorboard
Returns:
image of almost all activations
def visualize_conv_activations(activation,... |
Make the static shape of a tensor less specific.
If you want to feed to a tensor, the shape of the feed value must match
the tensor's static shape. This function creates a placeholder which
defaults to x if not fed, but has a less specific static shape than x.
See also `tensorflow#5680 <https://github.... |
Estimate H(x|s) ~= -E_{x \sim P(x|s)}[\log Q(x|s)], where x are samples, and Q is parameterized by vec.
def entropy_from_samples(samples, vec):
"""
Estimate H(x|s) ~= -E_{x \sim P(x|s)}[\log Q(x|s)], where x are samples, and Q is parameterized by vec.
"""
samples_cat = tf.argmax(samples[:, :NUM_CLASS],... |
OpenAI's official code actually models the "uniform" latent code as
a Gaussian distribution, but obtains the samples from a uniform distribution.
def sample_prior(batch_size):
cat, _ = get_distributions(DIST_PRIOR_PARAM[:NUM_CLASS], DIST_PRIOR_PARAM[NUM_CLASS:])
sample_cat = tf.one_hot(cat.sample(batch_size),... |
Mutual information between x (i.e. zc in this case) and some
information s (the generated samples in this case):
I(x;s) = H(x) - H(x|s)
= H(x) + E[\log P(x|s)]
The distribution from which zc is sampled, in this case, is set to a fixed prior already.
... |
see "Dynamic Filter Networks" (NIPS 2016)
by Bert De Brabandere*, Xu Jia*, Tinne Tuytelaars and Luc Van Gool
Remarks:
This is the convolution version of a dynamic filter.
Args:
inputs : unfiltered input [b, h, w, 1] only grayscale images.
filters : learned filters of [b, k, k, ... |
Estimate filters for convolution layers
Args:
theta: angle of filter
kernel_shape: size of each filter
Returns:
learned filter as [B, k, k, 1]
def _parameter_net(self, theta, kernel_shape=9):
"""Estimate filters for convolution layers
Args:
... |
Implements a steerable Gaussian filter.
This function can be used to evaluate the first
directional derivative of an image, using the
method outlined in
W. T. Freeman and E. H. Adelson, "The Design
and Use of Steerable Filters", IEEE PAMI, 1991.
It evaluates th... |
Assign `self.g_vars` to the parameters under scope `g_scope`,
and same with `self.d_vars`.
def collect_variables(self, g_scope='gen', d_scope='discrim'):
"""
Assign `self.g_vars` to the parameters under scope `g_scope`,
and same with `self.d_vars`.
"""
self.g_vars = tf.g... |
Build standard GAN loss and set `self.g_loss` and `self.d_loss`.
D and G play two-player minimax game with value function V(G,D)
min_G max _D V(D, G) = IE_{x ~ p_data} [log D(x)] + IE_{z ~ p_fake} [log (1 - D(G(z)))]
Args:
logits_real (tf.Tensor): discrim logits from real sample... |
We need to set tower_func because it's a TowerTrainer,
and only TowerTrainer supports automatic graph creation for inference during training.
If we don't care about inference during training, using tower_func is
not needed. Just calling model.build_graph directly is OK.
def _build_gan_trainer(... |
After applying this decorator:
1. data_format becomes tf.layers style
2. nl becomes activation
3. initializers are renamed
4. positional args are transformed to corresponding kwargs, according to args_names
5. kwargs are mapped to tf.layers names if needed, by name_mapping
def convert_to_tflayer_ar... |
Args:
mapping(dict): an old -> new mapping for variable basename. e.g. {'kernel': 'W'}
Returns:
A context where the variables are renamed.
def rename_get_variable(mapping):
"""
Args:
mapping(dict): an old -> new mapping for variable basename. e.g. {'kernel': 'W'}
Returns:
... |
Apply a regularizer on trainable variables matching the regex, and print
the matched variables (only print once in multi-tower training).
In replicated mode, it will only regularize variables within the current tower.
If called under a TowerContext with `is_training==False`, this function returns a zero co... |
Get the cost from the regularizers in ``tf.GraphKeys.REGULARIZATION_LOSSES``.
If in replicated mode, will only regularize variables created within the current tower.
Args:
name (str): the name of the returned tensor
Returns:
tf.Tensor: a scalar, the total regularization cost.
def regulari... |
Same as `tf.layers.dropout`.
However, for historical reasons, the first positional argument is
interpreted as keep_prob rather than drop_prob.
Explicitly use `rate=` keyword arguments to ensure things are consistent.
def Dropout(x, *args, **kwargs):
"""
Same as `tf.layers.dropout`.
However, for... |
Return a proper background image of background_shape, given img.
Args:
background_shape (tuple): a shape (h, w)
img: an image
Returns:
a background image
def fill(self, background_shape, img):
"""
Return a proper background image of background_shape,... |
Apply a function on the wrapped tensor.
Returns:
LinearWrap: ``LinearWrap(func(self.tensor(), *args, **kwargs))``.
def apply(self, func, *args, **kwargs):
"""
Apply a function on the wrapped tensor.
Returns:
LinearWrap: ``LinearWrap(func(self.tensor(), *args, *... |
Apply a function on the wrapped tensor. The tensor
will be the second argument of func.
This is because many symbolic functions
        (such as tensorpack's layers) take 'scope' as the first argument.
Returns:
LinearWrap: ``LinearWrap(func(args[0], self.tensor(), *args[1:], **kwa... |
Returns:
A context where the gradient of :meth:`tf.nn.relu` is replaced by
guided back-propagation, as described in the paper:
`Striving for Simplicity: The All Convolutional Net
<https://arxiv.org/abs/1412.6806>`_
def guided_relu():
"""
Returns:
A context where the grad... |
Produce a saliency map as described in the paper:
`Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps
<https://arxiv.org/abs/1312.6034>`_.
The saliency map is the gradient of the max element in output w.r.t input.
Returns:
tf.Tensor: the saliency map. ... |
A wrapper around `tf.layers.Conv2D`.
Some differences to maintain backward-compatibility:
1. Default kernel initializer is variance_scaling_initializer(2.0).
2. Default padding is 'same'.
3. Support 'split' argument to do group conv. Note that this is not efficient.
Variable Names:
* ``W``: w... |
A wrapper around `tf.layers.Conv2DTranspose`.
Some differences to maintain backward-compatibility:
1. Default kernel initializer is variance_scaling_initializer(2.0).
2. Default padding is 'same'
Variable Names:
* ``W``: weights
* ``b``: bias
def Conv2DTranspose(
inputs,
filt... |
Will set up the assign operator for that variable.
def setup_graph(self):
""" Will setup the assign operator for that variable. """
all_vars = tfv1.global_variables() + tfv1.local_variables()
for v in all_vars:
if v.name == self.var_name:
self.var = v
... |
Returns:
The value to assign to the variable.
Note:
Subclasses will implement the abstract method
:meth:`_get_value_to_set`, which should return a new value to
set, or return None to do nothing.
def get_value_to_set(self):
"""
Returns:
... |
Using schedule, compute the value to be set at a given point.
def _get_value_to_set_at_point(self, point):
"""
Using schedule, compute the value to be set at a given point.
"""
laste, lastv = None, None
for e, v in self.schedule:
if e == point:
return... |
This function should build the model which takes the input variables
and return cost at the end
def build_graph(self, image, label):
"""This function should build the model which takes the input variables
and return cost at the end"""
# In tensorflow, inputs to convolution function are... |
Convert a caffe parameter name to a tensorflow parameter name as
defined in the above model
def name_conversion(caffe_layer_name):
""" Convert a caffe parameter name to a tensorflow parameter name as
defined in the above model """
# beginning & end mapping
NAME_MAP = {'bn_conv1/beta': 'conv... |
Args:
custom_getter: the same as in :func:`tf.get_variable`
Returns:
The current variable scope with a custom_getter.
def custom_getter_scope(custom_getter):
"""
Args:
custom_getter: the same as in :func:`tf.get_variable`
Returns:
The current variable scope with a cust... |
Use fn to map the output of any variable getter.
Args:
fn (tf.Variable -> tf.Tensor)
Returns:
The current variable scope with a custom_getter that maps
all the variables by fn.
Example:
.. code-block:: python
with varreplace.remap_variables(lambda var: quantiz... |
Return a context to freeze variables,
by wrapping ``tf.get_variable`` with a custom getter.
It works by either applying ``tf.stop_gradient`` on the variables,
or by keeping them out of the ``TRAINABLE_VARIABLES`` collection, or
both.
Example:
.. code-block:: python
with varrepl... |
Load a caffe model. You must be able to ``import caffe`` to use this
function.
Args:
model_desc (str): path to caffe model description file (.prototxt).
model_file (str): path to caffe model parameter file (.caffemodel).
Returns:
dict: the parameters.
def load_caffe(model_desc, mode... |
Get caffe protobuf.
Returns:
The imported caffe protobuf module.
def get_caffe_pb():
"""
Get caffe protobuf.
Returns:
The imported caffe protobuf module.
"""
dir = get_dataset_path('caffe')
caffe_pb_file = os.path.join(dir, 'caffe_pb2.py')
if not os.path.isfile(caffe_pb_... |
Run some sanity checks, and populate some configs from others
def finalize_configs(is_training):
"""
Run some sanity checks, and populate some configs from others
"""
_C.freeze(False) # populate new keys now
_C.DATA.NUM_CLASS = _C.DATA.NUM_CATEGORY + 1 # +1 background
_C.DATA.BASEDIR = os.pat... |
Convert to a nested dict.
def to_dict(self):
"""Convert to a nested dict. """
return {k: v.to_dict() if isinstance(v, AttrDict) else v
for k, v in self.__dict__.items() if not k.startswith('_')} |
Update from command line args.
def update_args(self, args):
"""Update from command line args. """
for cfg in args:
keys, v = cfg.split('=', maxsplit=1)
keylist = keys.split('.')
dic = self
for i, k in enumerate(keylist[:-1]):
assert k in ... |
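The truncated `update_args` above walks a dotted key down a nested config and overwrites the leaf. A minimal sketch of that behavior on an assumed `AttrDict` stand-in (the string value is cast to the type of the existing value, which is an assumption about the elided tail of the snippet):

```python
class AttrDict(object):
    """Minimal nested attribute dict with update_args: 'A.B=3' sets cfg.A.B."""
    def update_args(self, args):
        for cfg in args:
            keys, v = cfg.split('=', maxsplit=1)
            keylist = keys.split('.')
            dic = self
            for k in keylist[:-1]:
                assert k in dic.__dict__, "Unknown config key: {}".format(keys)
                dic = getattr(dic, k)
            key = keylist[-1]
            old = getattr(dic, key)
            # cast the command-line string to the type of the current value
            setattr(dic, key, type(old)(v))

cfg = AttrDict()
cfg.TRAIN = AttrDict()
cfg.TRAIN.LR = 0.1
cfg.update_args(['TRAIN.LR=0.01'])
print(cfg.TRAIN.LR)  # 0.01
```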
Get a corresponding model loader by looking at the file name.
Returns:
SessInit: either a :class:`DictRestore` (if name ends with 'npy/npz') or
:class:`SaverRestore` (otherwise).
def get_model_loader(filename):
"""
Get a corresponding model loader by looking at the file name.
Returns:... |
return a set of strings
def _read_checkpoint_vars(model_path):
""" return a set of strings """
reader = tf.train.NewCheckpointReader(model_path)
reader = CheckpointReaderAdapter(reader) # use an adapter to standardize the name
ckpt_vars = reader.get_variable_to_shape_map().keys()
... |
Args:
layers (list or layer): layer or list of layers to apply the arguments.
Returns:
a context where all appearance of these layer will by default have the
arguments specified by kwargs.
Example:
.. code-block:: python
with argscope(Conv2D, kernel_shape=3, nl=tf.... |
Decorator for function to support argscope
Example:
.. code-block:: python
from mylib import myfunc
myfunc = enable_argscope_for_function(myfunc)
Args:
func: A function mapping one or multiple tensors to one or multiple
tensors.
log_shape (bool): S... |
Overwrite all functions of a given module to support argscope.
Note that this function monkey-patches the module and therefore could
have unexpected consequences.
It has been only tested to work well with ``tf.layers`` module.
Example:
.. code-block:: python
import tensorflow as t... |
Generate tensor for TensorBoard (casting, clipping)
Args:
name: name for visualization operation
*imgs: multiple tensors as list
scale_func: scale input tensors to fit range [0, 255]
Example:
visualize_tensors('viz1', [img1])
visualize_tensors('viz2', [img1, img2, img3]... |
img: an RGB image of shape (s, 2s, 3).
:return: [input, output]
def split_input(img):
"""
img: an RGB image of shape (s, 2s, 3).
:return: [input, output]
"""
# split the image into left + right pairs
s = img.shape[0]
assert img.shape[1] == 2 * s
input, output = img[:, :s, :], img[:,... |
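Completing the truncated slicing line, `split_input` amounts to cutting the (s, 2s, 3) image down the middle:

```python
import numpy as np

def split_input(img):
    """Split an RGB image of shape (s, 2s, 3) into [input, output] halves."""
    s = img.shape[0]
    assert img.shape[1] == 2 * s
    return [img[:, :s, :], img[:, s:, :]]

img = np.zeros((4, 8, 3))
img[:, 4:, :] = 1.0        # mark the right (output) half
inp, out = split_input(img)
print(inp.shape, out.shape)  # (4, 4, 3) (4, 4, 3)
```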
return a (b, 1) logits
def discriminator(self, inputs, outputs):
""" return a (b, 1) logits"""
l = tf.concat([inputs, outputs], 3)
with argscope(Conv2D, kernel_size=4, strides=2, activation=BNLReLU):
l = (LinearWrap(l)
.Conv2D('conv0', NF, activation=tf.nn.leaky_rel... |
A simple print Op that might be easier to use than :meth:`tf.Print`.
Use it like: ``x = print_stat(x, message='This is x')``.
def print_stat(x, message=None):
""" A simple print Op that might be easier to use than :meth:`tf.Print`.
Use it like: ``x = print_stat(x, message='This is x')``.
"""
... |
Returns:
root mean square of tensor x.
def rms(x, name=None):
"""
Returns:
root mean square of tensor x.
"""
if name is None:
name = x.op.name + '/rms'
with tfv1.name_scope(None): # name already contains the scope
return tf.sqrt(tf.reduce_mean(tf.square(x))... |
`Peak Signal to Noise Ratio <https://en.wikipedia.org/wiki/Peak_signal-to-noise_ratio>`_.
.. math::
PSNR = 20 \cdot \log_{10}(MAX_p) - 10 \cdot \log_{10}(MSE)
Args:
prediction: a :class:`tf.Tensor` representing the prediction signal.
ground_truth: another :class:`tf.Tensor` with the s... |
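The PSNR formula above translates directly to numpy (a sketch on arrays rather than tf.Tensors, assuming a peak value `maxp` of 255):

```python
import numpy as np

def psnr(prediction, ground_truth, maxp=255.0):
    """PSNR = 20*log10(MAX_p) - 10*log10(MSE), per the formula above."""
    mse = np.mean((prediction - ground_truth) ** 2)
    return 20.0 * np.log10(maxp) - 10.0 * np.log10(mse)

a = np.full((4, 4), 100.0)
b = np.full((4, 4), 110.0)   # constant error of 10, so MSE = 100
print(psnr(a, b))  # 20*log10(255) - 20, about 28.13
```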
Args:
anchor: coordinate of the center
def get_gaussian_weight(self, anchor):
"""
Args:
anchor: coordinate of the center
"""
ret = np.zeros(self.shape, dtype='float32')
y, x = np.mgrid[:self.shape[0], :self.shape[1]]
y = y.astype('float32') / ret... |
Pad tensor in H, W
Remarks:
        TensorFlow uses "ceil(input_spatial_shape[i] / strides[i])" rather than the explicit
        padding that Caffe and PyTorch use. Hence, we need to pad here beforehand.
Args:
x (tf.tensor): incoming tensor
p (int, optional): padding for H, W
Returns:
t... |
Correlation Cost Volume computation.
This is a fallback Python-only implementation, specialized just for FlowNet2.
It takes a lot of memory and is slow.
If you know to compile a custom op yourself, it's better to use the cuda implementation here:
https://github.com/PatWie/tensorflow-recipes/tree/maste... |
Resize input tensor with unknown input-shape by a factor
Args:
x (tf.Tensor): tensor NCHW
factor (int, optional): resize factor for H, W
Note:
Differences here against Caffe have huge impacts on the
quality of the predictions.
Returns:
tf.Tensor: resized tensor NCHW... |
Architecture in Table 4 of FlowNet 2.0.
Args:
x: NCHW tensor, where C=11 is the concatenation of 7 items of [3, 2, 2, 1, 1, 1, 1] channels.
def flownet2_fusion(self, x):
"""
Architecture in Table 4 of FlowNet 2.0.
Args:
x: NCHW tensor, where C=11 is the concate... |
Architecture in Table 3 of FlowNet 2.0.
Args:
x: concatenation of two inputs, of shape [1, 2xC, H, W]
def flownet2_sd(self, x):
"""
Architecture in Table 3 of FlowNet 2.0.
Args:
x: concatenation of two inputs, of shape [1, 2xC, H, W]
"""
with ar... |
Architecture of FlowNetSimple in Figure 2 of FlowNet 1.0.
Args:
x: 2CHW if standalone==True, else NCHW where C=12 is a concatenation
of 5 tensors of [3, 3, 3, 2, 1] channels.
standalone: If True, this model is used to predict flow from two inputs.
If Fals... |
Architecture of FlowNetCorr in Figure 2 of FlowNet 1.0.
Args:
x: 2CHW.
def graph_structure(self, x1x2):
"""
Architecture of FlowNetCorr in Figure 2 of FlowNet 1.0.
Args:
x: 2CHW.
"""
with argscope([tf.layers.conv2d], activation=lambda x: tf.nn.lea... |
Will not modify img
def draw_annotation(img, boxes, klass, is_crowd=None):
"""Will not modify img"""
labels = []
assert len(boxes) == len(klass)
if is_crowd is not None:
assert len(boxes) == len(is_crowd)
for cls, crd in zip(klass, is_crowd):
clsname = cfg.DATA.CLASS_NAMES[c... |