Columns: prompts (string, lengths 87–212) · description (string, lengths 0–6.76k)
Given the following machine learning model name: Domain-Symmetric Network, provide a description of the model
**Domain-Symmetric Network**, or **SymmNet**, is an algorithm for unsupervised multi-class domain adaptation. It features an adversarial strategy of domain confusion and discrimination.
Given the following machine learning model name: Embedded Dot Product Affinity, provide a description of the model
**Embedded Dot Product Affinity** is a type of affinity or self-similarity function between two points $\mathbf{x}\_{i}$ and $\mathbf{x}\_{j}$ that uses a dot product function in an embedding space: $$ f\left(\mathbf{x}\_{i}, \mathbf{x}\_{j}\right) = \theta\left(\mathbf{x}\_{i}\right)^{T}\phi\left(\mathbf{x}\_{j}\ri...
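As a minimal numpy sketch of the idea (the linear maps `W_theta` and `W_phi` standing in for the embeddings $\theta$ and $\phi$ are illustrative, not from the original method):

```python
import numpy as np

# Hypothetical linear embeddings theta(x) = W_theta @ x, phi(x) = W_phi @ x;
# the affinity is a dot product taken in the embedded space.
rng = np.random.default_rng(0)
d, e = 8, 4
W_theta = rng.standard_normal((e, d))
W_phi = rng.standard_normal((e, d))

def affinity(x_i, x_j):
    """f(x_i, x_j) = theta(x_i)^T phi(x_j)."""
    return (W_theta @ x_i) @ (W_phi @ x_j)

x_i = rng.standard_normal(d)
x_j = rng.standard_normal(d)
f_ij = affinity(x_i, x_j)
```

Equivalently, the affinity can be read as a bilinear form $x\_{i}^{T} W\_{\theta}^{T} W\_{\phi} x\_{j}$.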
Given the following machine learning model name: DropConnect, provide a description of the model
**DropConnect** generalizes [Dropout](https://paperswithcode.com/method/dropout) by randomly dropping the weights rather than the activations with probability $1-p$. DropConnect is similar to Dropout as it introduces dynamic sparsity within the model, but differs in that the sparsity is on the weights $W$, rather than ...
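The weight-masking step can be sketched in a few lines of numpy (function and parameter names are illustrative; `p` is the keep probability, matching the description's drop probability $1-p$):

```python
import numpy as np

def dropconnect(W, p, rng):
    """Zero each weight independently with probability 1 - p.

    Unlike Dropout, the Bernoulli mask is applied to the weight
    matrix W rather than to the activations.
    """
    mask = rng.random(W.shape) < p   # keep each weight with probability p
    return W * mask

rng = np.random.default_rng(0)
W = np.ones((4, 4))
W_sparse = dropconnect(W, p=0.5, rng=rng)   # roughly half the weights zeroed
```

At test time, as with Dropout, the stochastic mask is replaced by an expectation (or an approximation of it).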
Given the following machine learning model name: bilayer convolutional neural network, provide a description of the model
Given the following machine learning model name: Label Quality Model, provide a description of the model
**Label Quality Model** is an intermediate supervised task aimed at predicting the clean labels from noisy labels by leveraging rater features and a paired subset for supervision. The LQM technique assumes the existence of rater features and a subset of training data with both noisy and clean labels, which we call pair...
Given the following machine learning model name: Visual-Linguistic BERT, provide a description of the model
VL-BERT is pre-trained on a large-scale image-captions dataset together with text-only corpus. The input to the model are either words from the input sentences or regions-of-interest (RoI) from input images. It can be fine-tuned to fit most visual-linguistic downstream tasks. Its backbone is a multi-layer bidirectional...
Given the following machine learning model name: WaveGrad UBlock, provide a description of the model
The **WaveGrad UBlock** is used for upsampling in [WaveGrad](https://paperswithcode.com/method/wavegrad). Neural audio generation models often use large receptive fields. The dilation factors of the four convolutional layers are 1, 2, 1, 2 for the first two UBlocks and 1, 2, 4, 8 for the rest. Orthogonal initialization is used.
Given the following machine learning model name: Spherical Graph Convolutional Network, provide a description of the model
Given the following machine learning model name: Hierarchical Entity Graph Convolutional Network, provide a description of the model
**HEGCN**, or **Hierarchical Entity Graph Convolutional Network**, is a model for multi-hop relation extraction across documents. Documents in a document chain are encoded using a bi-directional long short-term memory ([BiLSTM](https://paperswithcode.com/method/bilstm)) layer. On top of the BiLSTM layer, two graph convo...
Given the following machine learning model name: HiFi-GAN, provide a description of the model
**HiFi-GAN** is a generative adversarial network for speech synthesis. HiFi-GAN consists of one generator and two discriminators: multi-scale and multi-period discriminators. The generator and discriminators are trained adversarially, along with two additional losses for improving training stability and model performan...
Given the following machine learning model name: Iterative Pseudo-Labeling, provide a description of the model
**Iterative Pseudo-Labeling** (IPL) is a semi-supervised algorithm for speech recognition which efficiently performs multiple iterations of pseudo-labeling on unlabeled data as the acoustic model evolves. In particular, IPL fine tunes an existing model at each iteration using both labeled data and a subset of unlabeled...
Given the following machine learning model name: Guided Language to Image Diffusion for Generation and Editing, provide a description of the model
GLIDE is a generative model based on text-guided diffusion models for more photorealistic image generation. Guided diffusion is applied to text-conditional image synthesis and the model is able to handle free-form prompts. The diffusion model uses a text encoder to condition on natural language descriptions. The model ...
Given the following machine learning model name: Self-Supervised Deep Supervision, provide a description of the model
The method exploits the finding that high correlation of segmentation performance among the U-Net's decoder layers -- each with a discriminative layer attached -- tends to coincide with higher segmentation performance in the final segmentation map. By introducing an "Inter-layer Divergence Loss", based on the Kullback-Leibler divergence, t...
Given the following machine learning model name: Residual Connection, provide a description of the model
**Residual Connections** are a type of skip-connection that learn residual functions with reference to the layer inputs, instead of learning unreferenced functions. Formally, denoting the desired underlying mapping as $\mathcal{H}({x})$, we let the stacked nonlinear layers fit another mapping of $\mathcal{F}({x}):=...
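The residual formulation is compactly summarized by $y = \mathcal{F}(x) + x$; a minimal sketch (the toy function `F` is illustrative):

```python
import numpy as np

def residual_block(x, F):
    """y = F(x) + x: the stacked layers fit the residual F(x) = H(x) - x."""
    return F(x) + x

# Toy example: if the desired mapping H is near the identity, F only has
# to model the small correction, which is easier to optimize.
x = np.array([1.0, 2.0, 3.0])
y = residual_block(x, lambda v: 0.1 * v)   # F(x) = 0.1 x, so y = 1.1 x
```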
Given the following machine learning model name: Invertible Rescaling Network, provide a description of the model
An **Invertible Rescaling Network (IRN)** is a network for image rescaling. According to the Nyquist-Shannon sampling theorem, high-frequency contents are lost during downscaling. Ideally, we hope to keep all lost information to perfectly recover the original HR image, but storing or transferring the high-frequency in...
Given the following machine learning model name: Axial Attention, provide a description of the model
**Axial Attention** is a simple generalization of self-attention that naturally aligns with the multiple dimensions of the tensors in both the encoding and the decoding settings. It was first proposed in [CCNet](https://paperswithcode.com/method/ccnet) [1] named as criss-cross attention, which harvests the contextual i...
Given the following machine learning model name: gMLP, provide a description of the model
**gMLP** is an [MLP](https://paperswithcode.com/methods/category/feedforward-networks)-based alternative to [Transformers](https://paperswithcode.com/methods/category/vision-transformer) without [self-attention](https://paperswithcode.com/method/scaled), which simply consists of channel projections and spatial projecti...
Given the following machine learning model name: YOLOv4, provide a description of the model
**YOLOv4** is a one-stage object detection model that improves on [YOLOv3](https://paperswithcode.com/method/yolov3) with several bags of tricks and modules introduced in the literature. The components section below details the tricks and modules used.
Given the following machine learning model name: Deflation, provide a description of the model
**Deflation** is a video-to-image operation to transform a video network into a network that can ingest a single image. In the two types of video networks considered in the original paper, this deflation corresponds to the following operations: for [3D convolutional based networks](https://paperswithcode.com/method/3d-...
Given the following machine learning model name: Graph Representation with Global structure, provide a description of the model
Given the following machine learning model name: Random Resized Crop, provide a description of the model
**RandomResizedCrop** is a type of image data augmentation where a crop of random size (relative to the original size) and random aspect ratio (relative to the original aspect ratio) is made. This crop is finally resized to the given size. Image Credit: [Apache MXNet](https://mxnet.apache.org/versions/1.5.0/tutorials/gluon/data_augmentatio...
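The crop-parameter sampling can be sketched as follows (a hedged sketch: the default `scale`/`ratio` ranges and the rejection-sampling loop follow common library conventions, not a single canonical definition):

```python
import numpy as np

def sample_crop(height, width, rng, scale=(0.08, 1.0), ratio=(3/4, 4/3)):
    """Sample (top, left, h, w) for a crop of random area fraction and
    random aspect ratio, falling back to the full image if no feasible
    crop is found."""
    area = height * width
    for _ in range(10):                       # rejection-sample a feasible crop
        target_area = area * rng.uniform(*scale)
        aspect = np.exp(rng.uniform(np.log(ratio[0]), np.log(ratio[1])))
        w = int(round(np.sqrt(target_area * aspect)))
        h = int(round(np.sqrt(target_area / aspect)))
        if 0 < w <= width and 0 < h <= height:
            top = rng.integers(0, height - h + 1)
            left = rng.integers(0, width - w + 1)
            return top, left, h, w
    return 0, 0, height, width                # fall back to the full image

rng = np.random.default_rng(0)
top, left, h, w = sample_crop(224, 224, rng)  # then resize the crop to the target size
```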
Given the following machine learning model name: E-Branchformer, provide a description of the model
**E-Branchformer**: Branchformer with Enhanced Merging for Speech Recognition.
Given the following machine learning model name: Support Vector Machine, provide a description of the model
A **Support Vector Machine**, or **SVM**, is a non-parametric supervised learning model. For non-linear classification and regression, they utilise the kernel trick to map inputs to high-dimensional feature spaces. SVMs construct a hyper-plane or set of hyper-planes in a high or infinite dimensional space, which can be...
Given the following machine learning model name: Attentional Liquid Warping Block, provide a description of the model
**Attentional Liquid Warping Block**, or **AttLWB**, is a module for human image synthesis GANs that propagates the source information - such as texture, style, color and face identity - in both image and feature spaces to the synthesized reference. It firstly learns similarities of the global features among all multip...
Given the following machine learning model name: DeLighT, provide a description of the model
**DeLighT** is a [transformer](https://paperswithcode.com/method/transformer) architecture that delivers parameter efficiency improvements by (1) within each Transformer block using [DExTra](https://paperswithcode.com/method/dextra), a deep and light-weight transformation, allowing for the use of [single-headed attenti...
Given the following machine learning model name: Differentiable Digital Signal Processing, provide a description of the model
Given the following machine learning model name: Discriminative Regularization, provide a description of the model
**Discriminative Regularization** is a regularization technique for [variational autoencoders](https://paperswithcode.com/methods/category/likelihood-based-generative-models) that uses representations from discriminative classifiers to augment the [VAE](https://paperswithcode.com/method/vae) objective function (the low...
Given the following machine learning model name: Iterative Latent Variable Refinement, provide a description of the model
**Iterative Latent Variable Refinement**, or **ILVR**, is a method to guide the generative process in denoising diffusion probabilistic models (DDPMs) to generate high-quality images based on a given reference image. ILVR conditions the generation process in well-performing unconditional DDPM. Each transition in the ge...
Given the following machine learning model name: Multimodal Fuzzy Fusion Framework, provide a description of the model
A BCI motor-imagery (MI) signal classification framework using fuzzy integrals. Paper: Ko, L. W., Lu, Y. C., Bustince, H., Chang, Y. C., Chang, Y., Fernandez, J., ... & Lin, C. T. (2019). Multimodal fuzzy fusion for enhancing the motor-imagery-based brain computer interface. IEEE Computational Intelligence Magazine, 14(1), 96-106.
Given the following machine learning model name: Content-Conditioned Style Encoder, provide a description of the model
The **Content-Conditioned Style Encoder**, or **COCO**, is a style encoder used for image-to-image translation in the [COCO-FUNIT](https://paperswithcode.com/method/coco-funit#) architecture. Unlike the style encoder in [FUNIT](https://arxiv.org/abs/1905.01723), COCO takes both content and style image as input. With t...
Given the following machine learning model name: Introspective Adversarial Network, provide a description of the model
The **Introspective Adversarial Network (IAN)** is a hybridization of [GANs](https://paperswithcode.com/method/gan) and [VAEs](https://paperswithcode.com/method/vae) that leverages the power of the adversarial objective while maintaining the VAE’s efficient inference mechanism. It uses the discriminator of the GAN, $D$...
Given the following machine learning model name: Class activation guide, provide a description of the model
Class activation guide is a module which uses weak localization information from the instrument activation maps to guide the verb and target recognition. Image source: [Nwoye et al.](https://arxiv.org/pdf/2007.05405v1.pdf)
Given the following machine learning model name: Pattern-Exploiting Training, provide a description of the model
**Pattern-Exploiting Training** is a semi-supervised training procedure that reformulates input examples as cloze-style phrases to help language models understand a given task. These phrases are then used to assign soft labels to a large set of unlabeled examples. Finally, standard supervised training is performed on t...
Given the following machine learning model name: Spiking Neural Networks, provide a description of the model
**Spiking Neural Networks** (**SNNs**) are a class of artificial neural networks inspired by the structure and functioning of the brain's neural networks. Unlike traditional artificial neural networks that operate based on continuous firing rates, SNNs simulate the behavior of individual neurons through discrete spike...
Given the following machine learning model name: ERNIE-GEN, provide a description of the model
**ERNIE-GEN** is a multi-flow sequence to sequence pre-training and fine-tuning framework which bridges the discrepancy between training and inference with an infilling generation mechanism and a noise-aware generation method. To make generation closer to human writing patterns, this framework introduces a span-by-span...
Given the following machine learning model name: PixelCNN, provide a description of the model
A **PixelCNN** is a generative model that uses autoregressive connections to model images pixel by pixel, decomposing the joint image distribution as a product of conditionals. PixelCNNs are much faster to train than [PixelRNNs](https://paperswithcode.com/method/pixelrnn) because convolutions are inherently easier to p...
Given the following machine learning model name: SAGAN Self-Attention Module, provide a description of the model
The **SAGAN Self-Attention Module** is a self-attention module used in the [Self-Attention GAN](https://paperswithcode.com/method/sagan) architecture for image synthesis. In the module, image features from the previous hidden layer $\textbf{x} \in \mathbb{R}^{C \times N}$ are first transformed into two feature spaces $...
Given the following machine learning model name: TabNet, provide a description of the model
**TabNet** is a deep tabular data learning architecture that uses sequential attention to choose which features to reason from at each decision step. The TabNet encoder is composed of a feature transformer, an attentive transformer and feature masking. A split block divides the processed representation to be used b...
Given the following machine learning model name: K3M, provide a description of the model
**K3M** is a multi-modal pretraining method for e-commerce product data that introduces knowledge modality to correct the noise and supplement the missing of image and text modalities. The modal-encoding layer extracts the features of each modality. The modal-interaction layer is capable of effectively modeling the int...
Given the following machine learning model name: Layer-Sequential Unit-Variance Initialization, provide a description of the model
**Layer-Sequential Unit-Variance Initialization** (**LSUV**) is a simple method for weight initialization for deep net learning. The initialization strategy involves the following two steps: 1) First, pre-initialize weights of each [convolution](https://paperswithcode.com/method/convolution) or inner-product layer wi...
Given the following machine learning model name: Kernel Inducing Points, provide a description of the model
**Kernel Inducing Points**, or **KIP**, is a meta-learning algorithm for learning datasets that can mitigate the challenges which occur for naturally occurring datasets without a significant sacrifice in performance. KIP uses kernel-ridge regression to learn $\epsilon$-approximate datasets. It can be regarded as an ad...
Given the following machine learning model name: SegFormer, provide a description of the model
**SegFormer** is a [Transformer](https://paperswithcode.com/methods/category/transformers)-based framework for semantic segmentation that unifies Transformers with lightweight [multilayer perceptron](https://paperswithcode.com/method/feedforward-network) (MLP) decoders. SegFormer has two appealing features: 1) SegForme...
Given the following machine learning model name: Hierarchical Network Dissection, provide a description of the model
**Hierarchical Network Dissection** is a pipeline for interpreting the internal representation of face-centric inference models. Using a probabilistic formulation, Hierarchical Network Dissection pairs units of the model with concepts in a "Face Dictionary" (a collection of facial concepts with corresponding sample ima...
Given the following machine learning model name: Balanced Selection, provide a description of the model
Given the following machine learning model name: SCARLET-NAS, provide a description of the model
**SCARLET-NAS** is a type of [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) that utilises a learnable stabilizer to calibrate feature deviation, named the Equivariant Learnable Stabilizer (ELS). Previous one-shot approaches can be limited by fixed-depth search spaces. With SC...
Given the following machine learning model name: XLNet, provide a description of the model
**XLNet** is an autoregressive [Transformer](https://paperswithcode.com/method/transformer) that leverages the best of both autoregressive language modeling and autoencoding while attempting to avoid their limitations. Instead of using a fixed forward or backward factorization order as in conventional autoregressive mo...
Given the following machine learning model name: AmoebaNet, provide a description of the model
**AmoebaNet** is a convolutional neural network found through regularized evolution architecture search. The search space is NASNet, which specifies a space of image classifiers with a fixed outer structure: a feed-forward stack of [Inception-like modules](https://paperswithcode.com/method/inception-module) called cell...
Given the following machine learning model name: CenterMask, provide a description of the model
**CenterMask** is an anchor-free instance segmentation method that adds a novel [spatial attention-guided mask](https://paperswithcode.com/method/spatial-attention-guided-mask) (SAG-Mask) branch to anchor-free one stage object detector ([FCOS](https://paperswithcode.com/method/fcos)) in the same vein with [Mask R-CNN](...
Given the following machine learning model name: LiteSeg, provide a description of the model
**LiteSeg** is a lightweight architecture for semantic segmentation that uses a deeper version of Atrous [Spatial Pyramid Pooling](https://paperswithcode.com/method/spatial-pyramid-pooling) module ([ASPP](https://paperswithcode.com/method/aspp)) and applies short and long residual connections, and [depthwise separable ...
Given the following machine learning model name: Weight Tying, provide a description of the model
**Weight Tying** improves the performance of language models by tying (sharing) the weights of the embedding and [softmax](https://paperswithcode.com/method/softmax) layers. This method also massively reduces the total number of parameters in the language models that it is applied to. Language models are typically ...
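The parameter sharing can be sketched in numpy: a single matrix `E` serves as both the embedding lookup and the pre-softmax output projection (variable names are illustrative):

```python
import numpy as np

# One shared matrix E of shape (vocab, d_model): the input embedding is a
# row lookup E[token], and the tied output projection is logits = h @ E.T,
# so no separate softmax weight matrix is stored.
rng = np.random.default_rng(0)
vocab, d_model = 100, 16
E = rng.standard_normal((vocab, d_model))

token = 7
h = E[token]          # input embedding lookup (stand-in for a hidden state)
logits = h @ E.T      # tied output projection over the vocabulary
```

Tying removes the `vocab × d_model` output matrix, which often dominates a language model's parameter count.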
Given the following machine learning model name: Spectrally Normalised GAN, provide a description of the model
**SNGAN**, or **Spectrally Normalised GAN**, is a type of generative adversarial network that uses [spectral normalization](https://paperswithcode.com/method/spectral-normalization), a type of [weight normalization](https://paperswithcode.com/method/weight-normalization), to stabilise the training of the discriminator.
Given the following machine learning model name: Lecun's Tanh, provide a description of the model
**LeCun's tanh** is an activation function of the form $f\left(x\right) = 1.7159\tanh\left(\frac{2}{3}x\right)$. The constants were chosen to keep the variance of the output close to 1.
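The function is a one-liner; note that the constants also give $f(\pm 1) \approx \pm 1$:

```python
import numpy as np

def lecun_tanh(x):
    """f(x) = 1.7159 * tanh(2x/3); scaled so that unit-variance inputs
    give outputs with variance close to 1, and f(1) is approximately 1."""
    return 1.7159 * np.tanh(2.0 / 3.0 * x)
```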
Given the following machine learning model name: Inception-ResNet-v2-B, provide a description of the model
**Inception-ResNet-v2-B** is an image model block for a 17 x 17 grid used in the [Inception-ResNet-v2](https://paperswithcode.com/method/inception-resnet-v2) architecture. It largely follows the idea of Inception modules - and grouped convolutions - but also includes residual connections.
Given the following machine learning model name: Maxout, provide a description of the model
The **Maxout Unit** is a generalization of the [ReLU](https://paperswithcode.com/method/relu) and the [leaky ReLU](https://paperswithcode.com/method/leaky-relu) functions. It is a piecewise linear function that returns the maximum of the inputs, designed to be used in conjunction with [dropout](https://paperswithcode.c...
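A numpy sketch of a maxout unit with $k$ linear pieces (the shape conventions here are illustrative); choosing the pieces as $\pm I$ recovers $|x|$, and $[I, 0]$ recovers the ReLU, which is the sense in which maxout generalizes these functions:

```python
import numpy as np

def maxout(x, W, b):
    """Maxout unit: h(x) = max_j (W_j x + b_j) over k linear pieces.

    W has shape (k, d_out, d_in), b has shape (k, d_out).
    """
    z = np.einsum('kij,j->ki', W, x) + b   # (k, d_out) pre-activations
    return z.max(axis=0)                   # elementwise max across pieces

# With k = 2, W = [I, -I], b = 0, the unit computes |x|.
d = 3
W = np.stack([np.eye(d), -np.eye(d)])
b = np.zeros((2, d))
x = np.array([-1.0, 0.5, -2.0])
y = maxout(x, W, b)
```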
Given the following machine learning model name: Decomposition-Integration Class Activation Map, provide a description of the model
DecomCAM decomposes intermediate activation maps into orthogonal features using singular value decomposition and generates saliency maps by integrating them.
Given the following machine learning model name: Simple Visual Language Model, provide a description of the model
SimVLM is a minimalist pretraining framework to reduce training complexity by exploiting large-scale weak supervision. It is trained end-to-end with a single prefix language modeling (PrefixLM) objective. PrefixLM enables bidirectional attention within the prefix sequence, and thus it is applicable for both decoder-onl...
Given the following machine learning model name: Spatio-temporal stability analysis, provide a description of the model
Spatio-temporal feature extraction that measures stability. The proposed method is based on a compression algorithm named Run-Length Encoding. The workflow of the method is presented below.
Given the following machine learning model name: VDO-SLAM, provide a description of the model
**VDO-SLAM** is a feature-based stereo/RGB-D dynamic SLAM system that leverages image-based semantic information to simultaneously localise the robot, map the static and dynamic structure, and track motions of rigid objects in the scene. Input images are first pre-processed to generate instance-level object segmentatio...
Given the following machine learning model name: PointAugment, provide a description of the model
**PointAugment** is an auto-augmentation framework that automatically optimizes and augments point cloud samples to enrich the data diversity when we train a classification network. Different from existing auto-augmentation methods for 2D images, PointAugment is sample-aware and takes an adversarial learning strategy...
Given the following machine learning model name: CReLU, provide a description of the model
**CReLU**, or **Concatenated Rectified Linear Units**, is a type of activation function which preserves both positive and negative phase information while enforcing non-saturated non-linearity. We compute by concatenating the layer output $h$ as: $$ \left[\text{ReLU}\left(h\right), \text{ReLU}\left(-h\right)\right] ...
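The concatenation above is a one-liner in numpy; note the output dimension doubles:

```python
import numpy as np

def crelu(h):
    """CReLU: concatenate the positive and negative phases,
    [ReLU(h), ReLU(-h)], doubling the feature dimension."""
    return np.concatenate([np.maximum(h, 0), np.maximum(-h, 0)], axis=-1)

h = np.array([1.0, -2.0, 0.5])
out = crelu(h)   # first half: positive phase, second half: negative phase
```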
Given the following machine learning model name: Swin Transformer, provide a description of the model
The **Swin Transformer** is a type of [Vision Transformer](https://paperswithcode.com/method/vision-transformer). It builds hierarchical feature maps by merging image patches (shown in gray) in deeper layers and has linear computation complexity to input image size due to computation of self-attention only within each ...
Given the following machine learning model name: CP with N3 Regularizer, provide a description of the model
**CP with N3 Regularizer** is a tensor factorization approach to knowledge base completion that combines the Canonical Polyadic (CP) decomposition with a weighted nuclear 3-norm (N3) regularizer.
Given the following machine learning model name: TuckER, provide a description of the model
**TuckER** is a linear model for link prediction in knowledge graphs based on the Tucker decomposition of the binary tensor of knowledge graph triples.
Given the following machine learning model name: Ternary Weight Splitting, provide a description of the model
**Ternary Weight Splitting** is a ternarization approach used in [BinaryBERT](https://www.paperswithcode.com/method/binarybert) that exploits the flatness of ternary loss landscape as the optimization proxy of the binary model. We first train the half-sized ternary BERT to convergence, and then split both the latent fu...
Given the following machine learning model name: Momentum Contrast, provide a description of the model
**MoCo**, or **Momentum Contrast**, is a self-supervised learning algorithm with a contrastive loss. Contrastive loss methods can be thought of as building dynamic dictionaries. The "keys" (tokens) in the dictionary are sampled from data (e.g., images or patches) and are represented by an encoder network. Unsupervi...
Given the following machine learning model name: Additive Attention, provide a description of the model
**Additive Attention**, also known as **Bahdanau Attention**, uses a one-hidden-layer feed-forward network to calculate the attention alignment score: $$f\_{att}\left(\textbf{h}\_{i}, \textbf{s}\_{j}\right) = v\_{a}^{T}\tanh\left(\textbf{W}\_{a}\left[\textbf{h}\_{i};\textbf{s}\_{j}\right]\right)$$ where $\textbf{v}...
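A minimal numpy sketch of the alignment score above, followed by the usual softmax normalization (shapes and variable names here are illustrative, not from a specific implementation):

```python
import numpy as np

def additive_attention_weights(H, s, W_a, v_a):
    """Bahdanau-style alignment: score_i = v_a^T tanh(W_a [h_i; s]),
    normalized with a softmax over the encoder positions.

    H: (n, d_h) encoder states, s: (d_s,) decoder state,
    W_a: (d_a, d_h + d_s), v_a: (d_a,).
    """
    n = H.shape[0]
    concat = np.concatenate([H, np.tile(s, (n, 1))], axis=1)  # [h_i; s] rows
    scores = np.tanh(concat @ W_a.T) @ v_a                    # (n,) alignment scores
    weights = np.exp(scores - scores.max())                   # stable softmax
    return weights / weights.sum()

rng = np.random.default_rng(0)
n, d_h, d_s, d_a = 5, 4, 4, 8
H = rng.standard_normal((n, d_h))
s = rng.standard_normal(d_s)
W_a = rng.standard_normal((d_a, d_h + d_s))
v_a = rng.standard_normal(d_a)
alpha = additive_attention_weights(H, s, W_a, v_a)
```

The context vector is then the weighted sum `alpha @ H`.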
Given the following machine learning model name: Adaptive Training Sample Selection, provide a description of the model
**Adaptive Training Sample Selection**, or **ATSS**, is a method to automatically select positive and negative samples according to statistical characteristics of objects. It bridges the gap between anchor-based and anchor-free detectors. For each ground-truth box $g$ on the image, we first find out its candidate po...
Given the following machine learning model name: MushroomRL, provide a description of the model
**MushroomRL** is an open-source Python library developed to simplify the process of implementing and running Reinforcement Learning (RL) experiments. The architecture of MushroomRL is built in such a way that every component of an RL problem is already provided, and most of the time users can only focus on the impleme...
Given the following machine learning model name: Data-efficient Image Transformer, provide a description of the model
A **Data-Efficient Image Transformer** is a type of [Vision Transformer](https://paperswithcode.com/method/vision-transformer) for image classification tasks. The model is trained using a teacher-student strategy specific to transformers. It relies on a distillation token ensuring that the student learns from the teach...
Given the following machine learning model name: VERtex Similarity Embeddings, provide a description of the model
VERtex Similarity Embeddings (VERSE) is a simple, versatile, and memory-efficient method that derives graph embeddings explicitly calibrated to preserve the distributions of a selected vertex-to-vertex similarity measure. VERSE learns such embeddings by training a single-layer neural network. Source: [Tsitsulin et a...
Given the following machine learning model name: Relative Position Encodings, provide a description of the model
**Relative Position Encodings** are a type of position embeddings for [Transformer-based models](https://paperswithcode.com/methods/category/transformers) that attempts to exploit pairwise, relative positional information. Relative positional information is supplied to the model on two levels: values and keys. This bec...
Given the following machine learning model name: Enhanced Seq2Seq Autoencoder via Contrastive Learning, provide a description of the model
**ESACL**, or **Enhanced Seq2Seq Autoencoder via Contrastive Learning**, is a denoising sequence-to-sequence (seq2seq) autoencoder via contrastive learning for abstractive text summarization. The model adopts a standard [Transformer](https://paperswithcode.com/method/transformer)-based architecture with a multilayer bi...
Given the following machine learning model name: BytePS, provide a description of the model
**BytePS** is a distributed training method for deep neural networks. BytePS handles cases with varying number of CPU machines and makes traditional all-reduce and PS as two special cases of its framework. To further accelerate DNN training, BytePS proposes Summation Service and splits a DNN optimizer into two parts: g...
Given the following machine learning model name: Neural Radiance Field, provide a description of the model
NeRF represents a scene with a learned, continuous volumetric radiance field $F_\theta$ defined over a bounded 3D volume. In a NeRF, $F_\theta$ is a multilayer perceptron (MLP) that takes as input a 3D position $\mathbf{x} = (x, y, z)$ and unit-norm viewing direction $\mathbf{d} = (d_{x}, d_{y}, d_{z})$, and produces as output a density $\sigma$ a...
Given the following machine learning model name: TrOCR, provide a description of the model
**TrOCR** is an end-to-end [Transformer](https://paperswithcode.com/methods/category/transformers)-based OCR model for text recognition with pre-trained CV and NLP models. It leverages the [Transformer](https://paperswithcode.com/method/transformer) architecture for both image understanding and wordpiece-level text gen...
Given the following machine learning model name: Reduction-A, provide a description of the model
**Reduction-A** is an image model block used in the [Inception-v4](https://paperswithcode.com/method/inception-v4) architecture.
Given the following machine learning model name: Nesterov Accelerated Gradient, provide a description of the model
**Nesterov Accelerated Gradient** is a momentum-based [SGD](https://paperswithcode.com/method/sgd) optimizer that "looks ahead" to where the parameters will be to calculate the gradient **ex post** rather than **ex ante**: $$ v\_{t} = \gamma{v}\_{t-1} + \eta\nabla\_{\theta}J\left(\theta-\gamma{v\_{t-1}}\right) $$ $...
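A numpy sketch of one NAG step, applied to a toy quadratic (the function and variable names are illustrative): the gradient is evaluated at the look-ahead point $\theta - \gamma v\_{t-1}$ rather than at $\theta$ itself.

```python
import numpy as np

def nag_step(theta, v, grad_fn, lr=0.1, momentum=0.9):
    """One Nesterov step: gradient at the look-ahead point, then the
    velocity and parameter updates v_t = gamma v_{t-1} + eta grad,
    theta <- theta - v_t."""
    lookahead = theta - momentum * v
    v_new = momentum * v + lr * grad_fn(lookahead)
    return theta - v_new, v_new

# Minimizing J(theta) = 0.5 * ||theta||^2, whose gradient is theta.
theta = np.array([1.0, -2.0])
v = np.zeros(2)
for _ in range(100):
    theta, v = nag_step(theta, v, grad_fn=lambda t: t)
# theta converges toward the minimizer at the origin
```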
Given the following machine learning model name: Contextual Residual Aggregation, provide a description of the model
**Contextual Residual Aggregation**, or **CRA**, is a module for image inpainting. It can produce high-frequency residuals for missing contents by weighted aggregating residuals from contextual patches, thus only requiring a low-resolution prediction from the network. Specifically, it involves a neural network to predi...
Given the following machine learning model name: AdamW, provide a description of the model
**AdamW** is a stochastic optimization method that modifies the typical implementation of weight decay in [Adam](https://paperswithcode.com/method/adam), by decoupling [weight decay](https://paperswithcode.com/method/weight-decay) from the gradient update. To see this, $L\_{2}$ regularization in Adam is usually impleme...
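A minimal sketch of one AdamW step (hyperparameter defaults are common conventions, not prescribed by the description): the weight-decay term multiplies the parameters directly and bypasses the gradient moments, which is the decoupling.

```python
import numpy as np

def adamw_step(theta, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999,
               eps=1e-8, weight_decay=0.01):
    """One AdamW step: standard Adam moment estimates, plus weight decay
    applied directly to the parameters (decoupled from the gradient)."""
    m = b1 * m + (1 - b1) * grad
    v = b2 * v + (1 - b2) * grad**2
    m_hat = m / (1 - b1**t)                  # bias-corrected first moment
    v_hat = v / (1 - b2**t)                  # bias-corrected second moment
    theta = theta - lr * (m_hat / (np.sqrt(v_hat) + eps)
                          + weight_decay * theta)   # decoupled decay term
    return theta, m, v

theta = np.array([1.0, -1.0])
m = np.zeros(2)
v = np.zeros(2)
theta, m, v = adamw_step(theta, grad=np.array([0.5, -0.5]), m=m, v=v, t=1)
```

In plain Adam with $L\_{2}$ regularization, the decay term would instead be folded into `grad` and thus rescaled by the adaptive denominator.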
Given the following machine learning model name: ReasonBERT, provide a description of the model
**ReasonBERT** is a pre-training method that augments language models with the ability to reason over long-range relations and multiple, possibly hybrid, contexts. It utilizes distant supervision to automatically connect multiple pieces of text and tables to create pre-training examples that require long-range reasonin...
Given the following machine learning model name: Locality Sensitive Hashing Attention, provide a description of the model
**LSH Attention**, or **Locality Sensitive Hashing Attention**, is a replacement for [dot-product attention](https://paperswithcode.com/method/scaled) with one that uses locality-sensitive hashing, changing its complexity from $O\left(L^2\right)$ to $O\left(L\log L\right)$, where $L$ is the length of the sequence. LSH refers to a family of f...
Given the following machine learning model name: Circular Smooth Label, provide a description of the model
**Circular Smooth Label** (CSL) is a classification-based rotation detection technique for arbitrary-oriented object detection. It is used for circularly distributed angle classification and addresses the periodicity of the angle and increases the error tolerance to adjacent angles.
Given the following machine learning model name: RevSilo, provide a description of the model
**RevSilo** is an invertible multi-input multi-output coupling module. In RevBiFPN it is used as an invertible bidirectional multi-scale feature pyramid fusion module.
Given the following machine learning model name: Weights Reset, provide a description of the model
**Weights Reset** is an implicit regularization procedure that periodically resets a randomly selected portion of layer weights during training, according to predefined probability distributions. To delineate the procedure, a straightforward formulation is introduced. Assume $\mathcal{B}(p)$ as a...
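The core operation can be sketched as follows: each weight is independently selected with probability $p$ and redrawn from the layer's initialization distribution (a Gaussian here, which is an assumption, as is the `init_std` parameter).

```python
import numpy as np

def weights_reset(W, p, rng, init_std=0.05):
    """Reset each weight independently with probability p by redrawing
    it from the (assumed Gaussian) initialization distribution."""
    mask = rng.random(W.shape) < p                  # Bernoulli(p) per weight
    fresh = rng.normal(0.0, init_std, size=W.shape)
    return np.where(mask, fresh, W)

rng = np.random.default_rng(0)
W = np.ones((100, 100))
W_new = weights_reset(W, p=0.1, rng=rng)
```

In training this would be invoked periodically (e.g. every $k$ steps) on the selected layers, leaving the remaining weights untouched.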
Given the following machine learning model name: Class-MLP, provide a description of the model
**Class-MLP** is an alternative to [average pooling](https://paperswithcode.com/method/average-pooling), which is an adaptation of the class-attention token introduced in [CaiT](https://paperswithcode.com/method/cait). In CaiT, this consists of two layers that have the same structure as the [transformer](https://papers...
Given the following machine learning model name: LightAutoML, provide a description of the model
**LightAutoML** is an AutoML solution targeted for financial services companies. A typical LightAutoML pipeline scheme is presented in the Figure, each pipeline containing: - Reader: object that receives raw data and task as input, calculates some useful metadata, performs initial data cleaning and decides about dat...
Given the following machine learning model name: Wizard: Unsupervised goats tracking algorithm, provide a description of the model
Computer vision is an interesting tool for animal behavior monitoring, mainly because it limits animal handling and can record various traits with a single sensor. Previous studies have shown this technique to be suitable for various species and behaviors. However, it remains challenging to collect indi...
Given the following machine learning model name: utterance level permutation invariant training, provide a description of the model
Given the following machine learning model name: Zero-padded Shortcut Connection, provide a description of the model
A **Zero-padded Shortcut Connection** is a type of [residual connection](https://paperswithcode.com/method/residual-connection) used in the [PyramidNet](https://paperswithcode.com/method/pyramidnet) architecture. For PyramidNets, identity mapping alone cannot be used for a shortcut because the feature map dimension dif...
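The shortcut can be sketched directly: when the residual branch widens the feature map, the identity input is passed through unchanged and the extra channels are filled with zeros (a minimal sketch assuming NCHW layout).

```python
import numpy as np

def zero_padded_shortcut(x, out_channels):
    """Identity shortcut that zero-pads extra channels when the residual
    branch increases the channel count (NCHW layout assumed)."""
    n, c, h, w = x.shape
    if out_channels == c:
        return x
    pad = np.zeros((n, out_channels - c, h, w), dtype=x.dtype)
    return np.concatenate([x, pad], axis=1)

x = np.ones((1, 16, 8, 8))
y = zero_padded_shortcut(x, 24)
```

Unlike a projection (1x1 convolution) shortcut, this adds no parameters, which matters in PyramidNet where *every* unit changes dimension.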
Given the following machine learning model name: Squeeze-and-Excitation Block, provide a description of the model
The **Squeeze-and-Excitation Block** is an architectural unit designed to improve the representational power of a network by enabling it to perform dynamic channel-wise feature recalibration. The process is: - The block has a convolutional block as an input. - Each channel is "squeezed" into a single numeric value ...
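The squeeze and excitation steps above can be sketched in NumPy (a minimal sketch: the two fully connected layers with a reduction ratio `r`, ReLU, and sigmoid follow the standard SE design; weight shapes are illustrative).

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(x, W1, W2):
    """Squeeze: global average pool per channel -> (N, C).
    Excitation: bottleneck FC -> ReLU -> FC -> sigmoid -> channel scales.
    Recalibration: rescale each channel of x by its learned scale."""
    s = x.mean(axis=(2, 3))                       # squeeze
    e = sigmoid(np.maximum(s @ W1, 0.0) @ W2)     # excitation, in (0, 1)
    return x * e[:, :, None, None]                # channel recalibration

rng = np.random.default_rng(0)
C, r = 8, 2
x = rng.standard_normal((1, C, 4, 4))
W1 = rng.standard_normal((C, C // r)) * 0.1
W2 = rng.standard_normal((C // r, C)) * 0.1
y = se_block(x, W1, W2)
```

Because the sigmoid keeps every scale in (0, 1), the block can only attenuate channels; informative channels are simply attenuated less.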
Given the following machine learning model name: G3D, provide a description of the model
**G3D** is a unified spatial-temporal graph convolutional operator that directly models cross-spacetime joint dependencies. It leverages dense cross-spacetime edges as skip connections for direct information propagation across the 3D spatial-temporal graph.
Given the following machine learning model name: Random Grayscale, provide a description of the model
**Random Grayscale** is an image data augmentation that converts an image to grayscale with probability $p$.
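A minimal sketch of the augmentation (assuming an HWC RGB float image and the common ITU-R 601 luminance weights):

```python
import numpy as np

def random_grayscale(img, p, rng):
    """With probability p, replace an HWC RGB image by its luminance
    repeated across the three channels; otherwise return it unchanged."""
    if rng.random() >= p:
        return img
    gray = img @ np.array([0.299, 0.587, 0.114])   # luminance
    return np.repeat(gray[..., None], 3, axis=-1)  # keep 3-channel shape

rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))
out = random_grayscale(img, p=1.0, rng=rng)
```

Keeping three identical channels (rather than a single-channel output) lets the augmented image pass through an unmodified RGB pipeline.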
Given the following machine learning model name: Spatial Broadcast Decoder, provide a description of the model
Spatial Broadcast Decoder is an architecture that aims to improve disentangling, reconstruction accuracy, and generalization to held-out regions in data space. It provides a particularly dramatic benefit when applied to datasets with small objects. Source: [Watters et al.](https://arxiv.org/pdf/1901.07017v2.pdf) ...
Given the following machine learning model name: Bottleneck Transformer, provide a description of the model
The **Bottleneck Transformer (BoTNet)** is an image classification model that incorporates self-attention for multiple computer vision tasks including image classification, object detection and instance segmentation. By just replacing the spatial convolutions with global self-attention in the final three bottleneck bl...
Given the following machine learning model name: Activation Regularization, provide a description of the model
**Activation Regularization (AR)**, or $L\_{2}$ activation regularization, is regularization performed on activations as opposed to weights. It is usually used in conjunction with [RNNs](https://paperswithcode.com/methods/category/recurrent-neural-networks). It is defined as: $$\alpha{L}\_{2}\left(m\circ{h\_{t}}\rig...
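The penalty can be sketched as follows (a minimal sketch: $L_2$ is taken here as the Euclidean norm of the dropout-masked activations, and the inverted-dropout scaling is an implementation assumption).

```python
import numpy as np

def activation_regularization(h, alpha=2.0, dropout_p=0.5, rng=None):
    """AR penalty: alpha * ||m o h_t||_2, where m is the dropout mask
    applied to the RNN output h_t (inverted-dropout scaling assumed)."""
    rng = rng or np.random.default_rng(0)
    m = (rng.random(h.shape) >= dropout_p) / (1 - dropout_p)
    return alpha * np.sqrt(np.sum((m * h) ** 2))

h = np.ones((4, 8))                 # stand-in for an RNN output h_t
loss = activation_regularization(h)
```

The resulting scalar is simply added to the task loss, discouraging large activations rather than large weights.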
Given the following machine learning model name: Pyramidal Bottleneck Residual Unit, provide a description of the model
A **Pyramidal Bottleneck Residual Unit** is a type of residual unit where the number of channels gradually increases as a function of the depth at which the layer occurs, which is similar to a pyramid structure of which the shape gradually widens from the top downwards. It also consists of a bottleneck using 1x1 convol...
Given the following machine learning model name: Twins-SVT, provide a description of the model
**Twins-SVT** is a type of [vision transformer](https://paperswithcode.com/methods/category/vision-transformer) which utilizes a [spatially separable attention mechanism](https://paperswithcode.com/method/spatially-separable-self-attention) (SSAM) which is composed of two types of attention operations—(i) locally-group...
Given the following machine learning model name: Collapsing Linear Unit, provide a description of the model
CoLU is an activation function similar to Swish and Mish in properties. It is defined as: $$f(x)=\frac{x}{1-xe^{-(x+e^x)}}$$ It is smooth, continuously differentiable, unbounded above, bounded below, non-saturating, and non-monotonic. Based on experiments comparing CoLU with other activation functions, it is obser...
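A minimal sketch, assuming the definition $f(x)=x/(1-xe^{-(x+e^x)})$:

```python
import numpy as np

def colu(x):
    """Collapsing Linear Unit: x / (1 - x * exp(-(x + exp(x)))).
    Approaches the identity for large positive x; bounded below."""
    return x / (1.0 - x * np.exp(-(x + np.exp(x))))

y = colu(np.array([-2.0, 0.0, 10.0]))
```

For large positive inputs the exponential term vanishes and `colu(x)` collapses to `x`, matching the non-saturating, identity-like behavior described above.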
Given the following machine learning model name: Focal Loss, provide a description of the model
A **Focal Loss** function addresses class imbalance during training in tasks like object detection. Focal loss applies a modulating term to the cross entropy loss in order to focus learning on hard misclassified examples. It is a dynamically scaled cross entropy loss, where the scaling factor decays to zero as confiden...
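The scaling behavior can be sketched for the binary case (a minimal sketch with the standard `alpha`/`gamma` parameters): the $(1-p_t)^{\gamma}$ modulating factor drives the loss of well-classified examples toward zero.

```python
import numpy as np

def focal_loss(p, y, gamma=2.0, alpha=0.25):
    """Binary focal loss: -alpha_t * (1 - p_t)^gamma * log(p_t).
    p is the predicted probability of the positive class; y in {0, 1}."""
    p_t = np.where(y == 1, p, 1 - p)              # prob of the true class
    alpha_t = np.where(y == 1, alpha, 1 - alpha)  # class-balance weight
    return -alpha_t * (1 - p_t) ** gamma * np.log(p_t)

easy = focal_loss(np.array([0.95]), np.array([1]))  # confident, correct
hard = focal_loss(np.array([0.20]), np.array([1]))  # misclassified
```

With `gamma = 0` this reduces to (alpha-weighted) cross entropy; increasing `gamma` widens the gap between the `hard` and `easy` losses.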
Given the following machine learning model name: Dimension-wise Convolution, provide a description of the model
A **Dimension-wise Convolution**, or **DimConv**, is a type of [convolution](https://paperswithcode.com/method/convolution) that can encode depth-wise, width-wise, and height-wise information independently. To achieve this, DimConv extends depthwise convolutions to all dimensions of the input tensor $X \in \mathbb{R}^{...