Columns: prompts (string, lengths 87–212) · description (string, lengths 0–6.76k)
Given the following machine learning model name: LeViT Attention Block, provide a description of the model
**LeViT Attention Block** is a module used for [attention](https://paperswithcode.com/methods/category/attention-mechanisms) in the [LeViT](https://paperswithcode.com/method/levit) architecture. Its main feature is providing positional information within each attention block, i.e. where we explicitly inject relative po...
Given the following machine learning model name: SqueezeNeXt Block, provide a description of the model
A **SqueezeNeXt Block** is a two-stage bottleneck module used in the [SqueezeNeXt](https://paperswithcode.com/method/squeezenext) architecture to reduce the number of input channels to the 3 × 3 [convolution](https://paperswithcode.com/method/convolution). We decompose with separable convolutions to further reduce the ...
Given the following machine learning model name: Double Deep Q-Learning, provide a description of the model
Given the following machine learning model name: Lower Bound on Transmission Using Non-Linear Bounding Function in Single Image Dehazing, provide a description of the model
Given the following machine learning model name: Self-regularizing Boundary Time and Amplitude Warping, provide a description of the model
Given the following machine learning model name: Exponential Linear Squashing Activation, provide a description of the model
The **Exponential Linear Squashing Activation Function**, or **ELiSH**, is an activation function used for neural networks. It shares common properties with [Swish](https://paperswithcode.com/method/swish), being made up of an [ELU](https://paperswithcode.com/method/elu) and a [Sigmoid](https://paperswithcode.com/metho...
Given the following machine learning model name: Split Attention, provide a description of the model
A **Split Attention** block enables attention across feature-map groups. As in [ResNeXt blocks](https://paperswithcode.com/method/resnext-block), the feature can be divided into several groups, and the number of feature-map groups is given by a cardinality hyperparameter $K$. The resulting feature-map groups are called...
Given the following machine learning model name: Dense Synthesized Attention, provide a description of the model
**Dense Synthesized Attention**, introduced with the [Synthesizer](https://paperswithcode.com/method/synthesizer) architecture, is a type of synthetic attention mechanism that replaces the notion of [query-key-values](https://paperswithcode.com/method/scaled) in the self-attention module and directly synthesizes the al...
Given the following machine learning model name: CS-GAN, provide a description of the model
**CS-GAN** is a type of generative adversarial network that uses a form of deep compressed sensing, and [latent optimisation](https://paperswithcode.com/method/latent-optimisation), to improve the quality of generated samples.
Given the following machine learning model name: Root-of-Mean-Squared Pooling, provide a description of the model
**RMS Pooling** is a pooling operation that calculates the root mean square of patches of a feature map, and uses it to create a downsampled (pooled) feature map. It is usually used after a convolutional layer. $$ z_{j} = \sqrt{\frac{1}{M}\sum^{M}_{i=1}u_{ij}^{2}} $$
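A minimal NumPy sketch of this operation over non-overlapping patches (the function name and patch handling are illustrative, not from a specific implementation):

```python
import numpy as np

def rms_pool(x, pool_size=2):
    """RMS pooling over non-overlapping pool_size x pool_size patches:
    each patch of M = pool_size**2 activations u_i is reduced to
    sqrt(mean(u_i**2)), per the formula above."""
    h_out = x.shape[0] // pool_size
    w_out = x.shape[1] // pool_size
    blocks = x[:h_out * pool_size, :w_out * pool_size].reshape(
        h_out, pool_size, w_out, pool_size)
    return np.sqrt((blocks ** 2).mean(axis=(1, 3)))

feature_map = np.array([[1.0, 1.0, 2.0, 2.0],
                        [1.0, 1.0, 2.0, 2.0],
                        [3.0, 3.0, 0.0, 0.0],
                        [3.0, 3.0, 0.0, 0.0]])
pooled = rms_pool(feature_map)
print(pooled)  # each uniform 2x2 patch pools to its own absolute value
```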
Given the following machine learning model name: Fast Attention Via Positive Orthogonal Random Features, provide a description of the model
**FAVOR+**, or **Fast Attention Via Positive Orthogonal Random Features**, is an efficient attention mechanism used in the [Performer](https://paperswithcode.com/method/performer) architecture which leverages approaches such as kernel methods and random features approximation for approximating [softmax](https://papersw...
Given the following machine learning model name: Blink Communication, provide a description of the model
**Blink** is a communication library for inter-GPU parameter exchange that achieves near-optimal link utilization. To handle topology heterogeneity from hardware generations or partial allocations from cluster schedulers, Blink dynamically generates optimal communication primitives for a given topology. Blink probes th...
Given the following machine learning model name: ZeRO, provide a description of the model
**Zero Redundancy Optimizer (ZeRO)** is a sharded data parallel method for distributed training. ZeRO-DP removes the memory state redundancies across data-parallel processes by partitioning the model states instead of replicating them, and it retains the compute/communication efficiency by retaining the computational gr...
Given the following machine learning model name: SOHO, provide a description of the model
**SOHO** (“See Out of tHe bOx”) is a vision-language model that takes a whole image as input and learns vision-language representation in an end-to-end manner. SOHO does not require bounding box annotations, which enables inference 10 times faster than region-based approaches. Text embeddings are used to extract textual embedding features. A trainab...
Given the following machine learning model name: Multiple Random Window Discriminator, provide a description of the model
**Multiple Random Window Discriminator** is a discriminator used for the [GAN-TTS](https://paperswithcode.com/method/gan-tts) text-to-speech architecture. These discriminators operate on randomly sub-sampled fragments of the real or generated samples. The ensemble allows for the evaluation of audio in different complem...
Given the following machine learning model name: Linear Warmup, provide a description of the model
**Linear Warmup** is a learning rate schedule where we linearly increase the learning rate from a low rate to a constant rate thereafter. This reduces volatility in the early stages of training. Image Credit: [Chengwei Zhang](https://www.dlology.com/about-me/)
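The schedule can be sketched in a few lines (parameter names here are illustrative):

```python
def linear_warmup(step, warmup_steps, base_lr):
    """Linearly ramp the learning rate up to base_lr over warmup_steps
    optimizer steps, then hold it constant."""
    if step < warmup_steps:
        return base_lr * (step + 1) / warmup_steps
    return base_lr

lrs = [linear_warmup(s, warmup_steps=4, base_lr=0.1) for s in range(6)]
print(lrs)  # ramps 0.025 -> 0.1 over 4 steps, then stays at 0.1
```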
Given the following machine learning model name: GPT, provide a description of the model
**GPT** is a [Transformer](https://paperswithcode.com/method/transformer)-based architecture and training procedure for natural language processing tasks. Training follows a two-stage procedure. First, a language modeling objective is used on the unlabeled data to learn the initial parameters of a neural network model...
Given the following machine learning model name: TrIVD-GAN, provide a description of the model
**TrIVD-GAN**, or **Transformation-based & TrIple Video Discriminator GAN**, is a type of generative adversarial network for video generation that builds upon [DVD-GAN](https://paperswithcode.com/method/dvd-gan). Improvements include a novel transformation-based recurrent unit (the TSRU) that makes the generator more e...
Given the following machine learning model name: TSDAE, provide a description of the model
**TSDAE** is an unsupervised sentence embedding method. During training, TSDAE encodes corrupted sentences into fixed-sized vectors and requires the decoder to reconstruct the original sentences from this sentence embedding. For good reconstruction quality, the semantics must be captured well in the sentence embedding ...
Given the following machine learning model name: Adversarial Solarization, provide a description of the model
Given the following machine learning model name: ConvNeXt, provide a description of the model
Given the following machine learning model name: Neural Oblivious Decision Ensembles, provide a description of the model
**Neural Oblivious Decision Ensembles (NODE)** is a tabular data architecture that consists of differentiable oblivious decision trees (ODT) that are trained end-to-end by backpropagation. The core building block is a Neural Oblivious Decision Ensemble (NODE) layer. The layer is composed of $m$ differentiable obli...
Given the following machine learning model name: Pathology Language and Image Pre-Training, provide a description of the model
Pathology Language and Image Pre-Training (PLIP) is a vision-and-language foundation model created by fine-tuning CLIP on pathology images.
Given the following machine learning model name: StyleMapGAN, provide a description of the model
**StyleMapGAN** is a generative adversarial network for real-time image editing. The intermediate latent space has spatial dimensions, and a spatially variant modulation replaces [AdaIN](https://paperswithcode.com/method/adaptive-instance-normalization). It aims to make the embedding through an encoder more accurate th...
Given the following machine learning model name: FFB6D, provide a description of the model
**FFB6D** is a full flow bidirectional fusion network for 6D pose estimation of known objects from a single RGBD image. Unlike previous works that extract the RGB and point cloud features independently and fuse them in the final stage, FFB6D builds bidirectional fusion modules as communication bridges in the full flow ...
Given the following machine learning model name: Discriminative and Generative Network, provide a description of the model
Given the following machine learning model name: Distributed Shampoo, provide a description of the model
**Distributed Shampoo** is a scalable second-order optimization algorithm for deep learning. Optimization in machine learning, both theoretical and applied, is presently dominated by first-order gradient methods such as stochastic gradient descent. Second-order optimization methods, which involve second derivatives and/or second-order statisti...
Given the following machine learning model name: PointNet, provide a description of the model
**PointNet** provides a unified architecture for applications ranging from object classification, part segmentation, to scene semantic parsing. It directly takes point clouds as input and outputs either class labels for the entire input or per point segment/part labels for each point of the input. Source: [Qi et al....
Given the following machine learning model name: CondConv, provide a description of the model
**CondConv**, or **Conditionally Parameterized Convolutions**, are a type of [convolution](https://paperswithcode.com/method/convolution) which learn specialized convolutional kernels for each example. In particular, we parameterize the convolutional kernels in a CondConv layer as a linear combination of $n$ experts $(...
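The kernel combination can be sketched as follows; note this is an assumption-laden sketch in which the routing weights are passed in directly, whereas in the paper they come from a learned function of the input:

```python
import numpy as np

def condconv_kernel(experts, routing_weights):
    """Per-example CondConv kernel: a linear combination
    sum_i alpha_i * W_i of n expert kernels. The routing weights alpha
    are example-dependent; here they are supplied as an argument."""
    return np.tensordot(routing_weights, experts, axes=1)

experts = np.stack([np.eye(3), np.ones((3, 3))])  # n = 2 expert 3x3 kernels
alpha = np.array([0.5, 0.5])                      # example-dependent routing
kernel = condconv_kernel(experts, alpha)
print(kernel)  # 0.5 * identity + 0.5 * all-ones
```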
Given the following machine learning model name: Transposed convolution, provide a description of the model
Given the following machine learning model name: RetinaNet, provide a description of the model
**RetinaNet** is a one-stage object detection model that utilizes a [focal loss](https://paperswithcode.com/method/focal-loss) function to address class imbalance during training. Focal loss applies a modulating term to the cross entropy loss in order to focus learning on hard negative examples. RetinaNet is a single, ...
Given the following machine learning model name: Glow-TTS, provide a description of the model
**Glow-TTS** is a flow-based generative model for parallel TTS that does not require any external aligner. By combining the properties of flows and dynamic programming, the proposed model searches for the most probable monotonic alignment between text and the latent representation of speech. The model is directly trai...
Given the following machine learning model name: YOLOv2, provide a description of the model
**YOLOv2**, or [**YOLO9000**](https://www.youtube.com/watch?v=QsDDXSmGJZA), is a single-stage real-time object detection model. It improves upon [YOLOv1](https://paperswithcode.com/method/yolov1) in several ways, including the use of [Darknet-19](https://paperswithcode.com/method/darknet-19) as a backbone, [batch norma...
Given the following machine learning model name: ELECTRA, provide a description of the model
**ELECTRA** is a [transformer](https://paperswithcode.com/method/transformer) with a new pre-training approach which trains two transformer models: the generator and the discriminator. The generator replaces tokens in the sequence - trained as a masked language model - and the discriminator (the ELECTRA contribution) a...
Given the following machine learning model name: FCPose, provide a description of the model
**FCPose** is a fully convolutional multi-person [pose estimation framework](https://paperswithcode.com/methods/category/pose-estimation-models) using dynamic instance-aware convolutions. Different from existing methods, which often require ROI (Region of Interest) operations and/or grouping post-processing, FCPose eli...
Given the following machine learning model name: Lambda Layer, provide a description of the model
**Lambda layers** are a building block for modeling long-range dependencies in data. They consist of long-range interactions between a query and a structured set of context elements at a reduced memory cost. Lambda layers transform each available context into a linear function, termed a lambda, which is then directly a...
Given the following machine learning model name: Huber loss, provide a description of the model
The Huber loss function describes the penalty incurred by an estimation procedure $f$. Huber (1964) defines the loss function piecewise: $$ L_{\delta}(a) = \begin{cases} \frac{1}{2}a^{2} & \text{for } |a| \leq \delta, \\ \delta \cdot \left( |a| - \frac{1}{2}\delta \right) & \text{otherwise.} \end{cases} $$
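The piecewise definition translates directly into NumPy (a sketch, with the standard default $\delta = 1$):

```python
import numpy as np

def huber_loss(a, delta=1.0):
    """Huber penalty for residuals a: quadratic for |a| <= delta, linear
    (slope delta) beyond it, matching the piecewise definition above."""
    a = np.asarray(a, dtype=float)
    quadratic = 0.5 * a ** 2
    linear = delta * (np.abs(a) - 0.5 * delta)
    return np.where(np.abs(a) <= delta, quadratic, linear)

losses = huber_loss([0.5, 1.0, 3.0])
print(losses)  # quadratic, boundary, and linear regimes respectively
```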
Given the following machine learning model name: DropBlock, provide a description of the model
**DropBlock** is a structured form of [dropout](https://paperswithcode.com/method/dropout) directed at regularizing convolutional networks. In DropBlock, units in a contiguous region of a feature map are dropped together. As DropBlock discards features in a correlated area, the networks must look elsewhere for evidenc...
Given the following machine learning model name: Co-Scale Conv-attentional Image Transformer, provide a description of the model
**Co-Scale Conv-Attentional Image Transformer** (CoaT) is a [Transformer](https://paperswithcode.com/method/transformer)-based image classifier equipped with co-scale and conv-attentional mechanisms. First, the co-scale mechanism maintains the integrity of Transformers' encoder branches at individual scales, while allo...
Given the following machine learning model name: Self-training Guided Prototypical Cross-domain Self-supervised learning, provide a description of the model
Model to adapt: We use Ultra Fast Structure-aware Deep Lane Detection (UFLD) as baseline and strictly adopt its training scheme and hyperparameters. UFLD treats lane detection as a row-based classification problem and utilizes the row anchors defined by TuSimple. Unsupervised Domain Adaptation with SGPCS: SGPCS bu...
Given the following machine learning model name: Modular Interactive VOS, provide a description of the model
**MiVOS** is a video object segmentation model which decouples interaction-to-mask and mask propagation. By decoupling interaction from propagation, MiVOS is versatile and not limited by the type of interactions. It uses three modules: Interaction-to-Mask, Propagation and Difference-Aware Fusion. Trained separately, th...
Given the following machine learning model name: Dice Loss, provide a description of the model
**Dice Loss** is a loss function based on the Dice coefficient, widely used for segmentation tasks: \begin{equation} DiceLoss\left( y, \overline{p} \right) = 1 - \dfrac{\left( 2y\overline{p} + 1 \right)} {\left( y+\overline{p} + 1 \right)} \end{equation}
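The formula above, with its +1 smoothing term, can be sketched for a single ground-truth label $y$ and predicted probability $\overline{p}$:

```python
def dice_loss(y, p, smooth=1.0):
    """Smoothed Dice loss for a ground-truth label y and a predicted
    probability p, following the formula above (smooth = 1)."""
    return 1.0 - (2.0 * y * p + smooth) / (y + p + smooth)

print(dice_loss(1.0, 1.0))  # 0.0 for a perfect prediction
print(dice_loss(1.0, 0.0))  # 0.5
```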
Given the following machine learning model name: Feature Intertwiner, provide a description of the model
**Feature Intertwiner** is an object detection module that leverages the features from a more reliable set to help guide the feature learning of another less reliable set. The mutual learning process helps two sets to have closer distance within the cluster in each class. The intertwiner is applied on the object detect...
Given the following machine learning model name: Convolution-enhanced image Transformer, provide a description of the model
**Convolution-enhanced image Transformer** (**CeiT**) combines the advantages of CNNs in extracting low-level features, strengthening locality, and the advantages of Transformers in establishing long-range dependencies. Three modifications are made to the original Transformer: 1) instead of the straightforward tokeniza...
Given the following machine learning model name: Mixed Depthwise Convolution, provide a description of the model
**MixConv**, or **Mixed Depthwise Convolution**, is a type of [depthwise convolution](https://paperswithcode.com/method/depthwise-convolution) that naturally mixes up multiple kernel sizes in a single [convolution](https://paperswithcode.com/method/convolution). It is based on the insight that depthwise convolution app...
Given the following machine learning model name: 3D Dynamic Scene Graph, provide a description of the model
**3D Dynamic Scene Graph**, or **DSG**, is a representation that captures metric and semantic aspects of a dynamic environment. A DSG is a layered graph where nodes represent spatial concepts at different levels of abstraction, and edges represent spatio-temporal relations among nodes.
Given the following machine learning model name: T-Fixup, provide a description of the model
**T-Fixup** is an [initialization](https://paperswithcode.com/methods/category/initialization) method for [Transformers](https://paperswithcode.com/methods/category/transformers) that aims to remove the need for [layer normalization](https://paperswithcode.com/method/layer-normalization) and [warmup](https://paperswith...
Given the following machine learning model name: Re-Attention Module, provide a description of the model
The **Re-Attention Module** is an attention layer used in the [DeepViT](https://paperswithcode.com/method/deepvit) architecture which mixes the attention map with a learnable matrix before multiplying with the values. The motivation is to re-generate the attention maps to increase their diversity at different layers wi...
Given the following machine learning model name: CSPDenseNet, provide a description of the model
**CSPDenseNet** is a convolutional neural network and object detection backbone where we apply the Cross Stage Partial Network (CSPNet) approach to [DenseNet](https://paperswithcode.com/method/densenet). The CSPNet partitions the feature map of the base layer into two parts and then merges them through a cross-stage hi...
Given the following machine learning model name: Vision-and-Language Transformer, provide a description of the model
ViLT is a minimal vision-and-language pre-training transformer model where the processing of visual inputs is simplified to the same convolution-free manner in which text inputs are processed. The model-specific components of ViLT require less computation than the transformer component for multimodal interactions. The...
Given the following machine learning model name: Hybrid Task Cascade, provide a description of the model
**Hybrid Task Cascade**, or **HTC**, is a framework for cascading in instance segmentation. It differs from [Cascade Mask R-CNN](https://paperswithcode.com/method/cascade-mask-r-cnn) in two important aspects: (1) instead of performing cascaded refinement on the two tasks of detection and segmentation separately, it in...
Given the following machine learning model name: Dense Block, provide a description of the model
A **Dense Block** is a module used in convolutional neural networks that connects *all layers* (with matching feature-map sizes) directly with each other. It was originally proposed as part of the [DenseNet](https://paperswithcode.com/method/densenet) architecture. To preserve the feed-forward nature, each layer obtain...
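The dense connectivity pattern can be sketched with plain arrays; here each "layer" is a stand-in callable emitting a fixed number of channels (the growth rate), which is an assumption for illustration, not the DenseNet conv-BN-ReLU layer:

```python
import numpy as np

def dense_block(x, layers):
    """Dense connectivity: each layer receives the concatenation of all
    preceding feature maps (channel axis 0 of a (C, H, W) tensor) and its
    output is appended for the layers that follow."""
    features = [x]
    for layer in layers:
        features.append(layer(np.concatenate(features, axis=0)))
    return np.concatenate(features, axis=0)

# Toy "layer": ignores its input's values and emits 2 constant channels,
# standing in for a conv layer with growth rate k = 2.
growth = lambda f: np.ones((2,) + f.shape[1:])
x = np.zeros((4, 8, 8))
y = dense_block(x, [growth, growth, growth])
print(y.shape)  # (10, 8, 8): 4 input channels + 3 layers x growth rate 2
```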
Given the following machine learning model name: Graph Neural Network, provide a description of the model
Given the following machine learning model name: Switch Transformer, provide a description of the model
**Switch Transformer** is a sparsely-activated expert [Transformer](https://paperswithcode.com/methods/category/transformers) model that aims to simplify and improve over Mixture of Experts. Through distillation of sparse pre-trained and specialized fine-tuned models into small dense models, it reduces the model size b...
Given the following machine learning model name: Pixel-BERT, provide a description of the model
Pixel-BERT is a pre-trained model trained to align image pixels with text. The end-to-end framework includes a CNN-based visual encoder and cross-modal transformers for visual and language embedding learning. This model has three parts: one fully convolutional neural network that takes pixels of an image as input, one...
Given the following machine learning model name: CR-NET, provide a description of the model
**CR-NET** is a YOLO-based model proposed for license plate character detection and recognition.
Given the following machine learning model name: Aligning Latent and Image Spaces, provide a description of the model
An infinite image generator which is based on a patch-wise, periodically equivariant generator.
Given the following machine learning model name: Temporal Pyramid Network, provide a description of the model
**Temporal Pyramid Network**, or **TPN**, is a pyramid level module for action recognition at the feature-level, which can be flexibly integrated into 2D or 3D backbone networks in a plug-and-play manner. The source of features and the fusion of features form a feature hierarchy for the backbone so that it can capture ...
Given the following machine learning model name: DiCENet, provide a description of the model
**DiCENet** is a convolutional neural network architecture that utilizes dimensional convolutions (and dimension-wise fusion). The dimension-wise convolutions apply light-weight convolutional filtering across each dimension of the input tensor while dimension-wise fusion efficiently combines these dimension-wise repres...
Given the following machine learning model name: SimAdapter, provide a description of the model
**SimAdapter** is a module for explicitly learning knowledge from adapters. SimAdapter aims to learn the similarities between the source and target languages during fine-tuning using the adapters, and the similarity is based on an [attention mechanism](https://paperswithcode.com/methods/category/attention-mechanisms-1)...
Given the following machine learning model name: Time-aware Large Kernel Convolution, provide a description of the model
A **Time-aware Large Kernel (TaLK) convolution** is a type of temporal [convolution](https://paperswithcode.com/method/convolution) that learns the kernel size of a summation kernel for each time-step instead of learning the kernel weights as in a typical convolution operation. For each time-step, a function is respons...
Given the following machine learning model name: KnowPrompt, provide a description of the model
**KnowPrompt** is a prompt-tuning approach for relational understanding. It injects entity and relation knowledge into prompt construction with learnable virtual template words as well as answer words, and synergistically optimizes their representation with knowledge constraints. To be specific, TYPED MARKER is utilized ...
Given the following machine learning model name: StyleGAN2, provide a description of the model
**StyleGAN2** is a generative adversarial network that builds on [StyleGAN](https://paperswithcode.com/method/stylegan) with several improvements. First, [adaptive instance normalization](https://paperswithcode.com/method/adaptive-instance-normalization) is redesigned and replaced with a normalization technique called ...
Given the following machine learning model name: Soft Split and Soft Composition, provide a description of the model
**Soft Split and Soft Composition** are video frame based operations used in the [FuseFormer](https://paperswithcode.com/method/fuseformer) architecture, specifically the [FuseFormer blocks](https://paperswithcode.com/method/fuseformer-block). We softly split each frame into overlapped patches and then softly composite...
Given the following machine learning model name: Conditional Relation Network, provide a description of the model
**Conditional Relation Network**, or **CRN**, is a building block to construct more sophisticated structures for representation and reasoning over video. CRN takes as input an array of tensorial objects and a conditioning feature, and computes an array of encoded output objects. Model building becomes a simple exercise...
Given the following machine learning model name: Libra R-CNN, provide a description of the model
**Libra R-CNN** is an object detection model that seeks to achieve a balanced training procedure. The authors' motivation is that training in past detectors has suffered from imbalance at three levels: sample level, feature level, and objective level. To mitigate th...
Given the following machine learning model name: Gated Channel Transformation, provide a description of the model
**Gated Channel Transformation (GCT)** first collects global information by computing the $\ell_2$-norm of each channel. Next, a learnable vector $\alpha$ is applied to scale the feature. Then a competition mechanism is adopted by channel normalization to interact between channels.
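The three steps can be sketched as follows; exact constants (the epsilon terms, the $\sqrt{C}$ factor, the additive tanh gate) are assumptions based on the paper's equations:

```python
import numpy as np

def gct(x, alpha, gamma, beta, eps=1e-5):
    """Sketch of Gated Channel Transformation for x of shape (C, H, W):
    per-channel l2-norm embedding scaled by alpha, channel normalization
    for cross-channel competition, then a tanh gate with gamma, beta."""
    c = x.shape[0]
    s = alpha * np.sqrt((x ** 2).sum(axis=(1, 2)) + eps)     # embedding (C,)
    s_norm = np.sqrt(c) * s / np.sqrt((s ** 2).sum() + eps)  # channel norm
    gate = 1.0 + np.tanh(gamma * s_norm + beta)              # gating (C,)
    return x * gate[:, None, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8, 8))
# With gamma = beta = 0 the gate is 1 + tanh(0) = 1, i.e. the identity.
y = gct(x, alpha=np.ones(4), gamma=np.zeros(4), beta=np.zeros(4))
print(np.allclose(y, x))  # True
```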
Given the following machine learning model name: Local Response Normalization, provide a description of the model
**Local Response Normalization** is a normalization layer that implements the idea of lateral inhibition. Lateral inhibition is a concept in neurobiology that refers to the phenomenon of an excited neuron inhibiting its neighbours: this leads to a peak in the form of a local maximum, creating contrast in that area and ...
Given the following machine learning model name: Multi Loss ( BCE Loss + Focal Loss ) + Dice Loss, provide a description of the model
Our proposed loss function is a combination of BCE Loss, Focal Loss, and Dice Loss. Each of them contributes individually to improving performance; further details of the loss functions are given below. (1) BCE Loss calculates probabilities and compares each actual class output with the predicted probabilities, which ca...
Given the following machine learning model name: Spatial Attention-Guided Mask, provide a description of the model
**A Spatial Attention-Guided Mask** is a module for [instance segmentation](https://paperswithcode.com/task/instance-segmentation) that predicts a segmentation mask on each detected box with a spatial attention map that helps to focus on informative pixels and suppress noise. The goal is to guide the mask head for spot...
Given the following machine learning model name: Hydra, provide a description of the model
**Hydra** is a multi-headed neural network for model distillation with a shared body network. The shared body network learns a joint feature representation that enables each head to capture the predictive behavior of each ensemble member. Existing distillation methods often train a distillation network to imitate the ...
Given the following machine learning model name: Natural Gradient Descent, provide a description of the model
**Natural Gradient Descent** is an approximate second-order optimisation method. It has an interpretation as optimizing over a Riemannian manifold using an intrinsic distance metric, which implies the updates are invariant to transformations such as whitening. By using the positive semi-definite (PSD) Gauss-Newton matr...
Given the following machine learning model name: Monte-Carlo Tree Search, provide a description of the model
**Monte-Carlo Tree Search** is a planning algorithm that accumulates value estimates obtained from Monte Carlo simulations in order to successively direct simulations towards more highly-rewarded trajectories. We execute MCTS after encountering each new state to select an agent's action for that state: it is executed a...
Given the following machine learning model name: Convolution, provide a description of the model
A **convolution** is a type of matrix operation, consisting of a kernel, a small matrix of weights, that slides over input data performing element-wise multiplication with the part of the input it is on, then summing the results into an output. Intuitively, a convolution allows for weight sharing - reducing the numb...
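The slide-multiply-sum operation can be sketched naively in NumPy (a "valid"-mode sketch without padding or strides; deep learning frameworks actually implement cross-correlation, as here):

```python
import numpy as np

def conv2d(x, kernel):
    """Naive 'valid' 2D convolution as used in deep learning (i.e.
    cross-correlation): slide the kernel over x, multiply elementwise,
    and sum each window into one output value."""
    kh, kw = kernel.shape
    h_out = x.shape[0] - kh + 1
    w_out = x.shape[1] - kw + 1
    out = np.empty((h_out, w_out))
    for i in range(h_out):
        for j in range(w_out):
            out[i, j] = (x[i:i + kh, j:j + kw] * kernel).sum()
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((2, 2)) / 4.0  # shared weights: a 2x2 averaging kernel
out = conv2d(image, kernel)
print(out)  # each entry is the mean of a 2x2 window of the input
```

The same small kernel is reused at every spatial position, which is the weight sharing the description refers to.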
Given the following machine learning model name: Dilated convolution with learnable spacings, provide a description of the model
Dilated convolution with learnable spacings (DCLS) is a type of convolution that allows the spacings between the non-zero elements of the kernel to be learned during training. This makes it possible to increase the receptive field of the convolution without increasing the number of parameters, which can improve the per...
Given the following machine learning model name: Affordance Correspondence, provide a description of the model
Method for one-shot visual search of object parts / one-shot semantic part correspondence. Given a single reference image of an object with annotated affordance regions, it segments semantically corresponding parts within a target scene. AffCorrs is used to find corresponding affordances both for intra- and inter-class...
Given the following machine learning model name: MUSIQ, provide a description of the model
**MUSIQ**, or **Multi-scale Image Quality Transformer**, is a [Transformer](https://paperswithcode.com/method/transformer)-based model for multi-scale image quality assessment. It processes native resolution images with varying sizes and aspect ratios. In MUSIQ, we construct a multi-scale image representation as input,...
Given the following machine learning model name: GPT-NeoX, provide a description of the model
**GPT-NeoX** is an autoregressive transformer decoder model whose architecture largely follows that of GPT-3, with a few notable deviations. The model has 20 billion parameters with 44 layers, a hidden dimension size of 6144, and 64 heads. The main difference with GPT-3 is the change in tokenizer, the addition of Rotar...
Given the following machine learning model name: Soft Actor Critic, provide a description of the model
**Soft Actor Critic**, or **SAC**, is an off-policy actor-critic deep RL algorithm based on the maximum entropy reinforcement learning framework. In this framework, the actor aims to maximize expected reward while also maximizing entropy. That is, to succeed at the task while acting as randomly as possible. Prior deep ...
Given the following machine learning model name: Blender, provide a description of the model
**Blender** is a proposal-based instance mask generation module which incorporates rich instance-level information with accurate dense pixel features. A single [convolution](https://paperswithcode.com/method/convolution) layer is added on top of the detection towers to produce attention masks along with each bounding b...
Given the following machine learning model name: Sparse R-CNN, provide a description of the model
**Sparse R-CNN** is a purely sparse method for object detection in images, without object positional candidates enumerated on all (dense) image grids, nor object queries interacting with global (dense) image features. As shown in the Figure, object candidates are given with a fixed small set of learnable bounding boxe...
Given the following machine learning model name: AutoEncoder, provide a description of the model
An **Autoencoder** is a bottleneck architecture that turns a high-dimensional input into a latent low-dimensional code (encoder), and then performs a reconstruction of the input with this latent code (the decoder). Image: [Michael Massi](https://en.wikipedia.org/wiki/Autoencoder#/media/File:Autoencoder_schema.png)
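The bottleneck structure can be sketched with linear maps; weights here are random stand-ins, and training (minimizing reconstruction error) is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

# A minimal linear autoencoder sketch: encode 8-d inputs to a 2-d code,
# then decode back to 8 dimensions.
W_enc = rng.standard_normal((2, 8))  # encoder: 8 -> 2 (the bottleneck)
W_dec = rng.standard_normal((8, 2))  # decoder: 2 -> 8

x = rng.standard_normal(8)
code = W_enc @ x       # latent low-dimensional code
x_hat = W_dec @ code   # reconstruction of the input
print(code.shape, x_hat.shape)  # (2,) (8,)
```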
Given the following machine learning model name: GridMask, provide a description of the model
**GridMask** is a data augmentation method that randomly removes some pixels of an input image. Unlike other methods, the region that the algorithm removes is neither a continuous region nor random pixels in dropout. Instead, the algorithm removes a region with disconnected pixel sets, as shown in the Figure. We exp...
Given the following machine learning model name: Weight Normalization, provide a description of the model
**Weight Normalization** is a normalization method for training neural networks. It is inspired by [batch normalization](https://paperswithcode.com/method/batch-normalization), but it is a deterministic method that does not share batch normalization's property of adding noise to the gradients. It reparameterizes each $...
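The reparameterization at the heart of the method writes each weight vector as $w = \frac{g}{\|v\|} v$, so the scalar $g$ carries the norm and the vector $v$ carries only the direction. A minimal numpy sketch (the example values are illustrative):

```python
import numpy as np

def weight_norm(v, g):
    """Weight Normalization reparameterization: w = g * v / ||v||.
    The norm of w is exactly g, decoupled from the direction v."""
    return g * v / np.linalg.norm(v)

v = np.array([3.0, 4.0])  # direction parameter (||v|| = 5)
g = 2.0                   # learned scale parameter
w = weight_norm(v, g)     # effective weight used in the forward pass
```

During training, gradients are taken with respect to `g` and `v` rather than `w` directly, which is what improves the conditioning of the optimization.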
Given the following machine learning model name: RoIWarp, provide a description of the model
**Region of Interest Warping**, or **RoIWarp**, is a form of [RoIPool](https://paperswithcode.com/method/roi-pooling) that is differentiable with respect to the box position. In practice, this takes the form of a RoIWarp layer followed by a standard [Max Pooling](https://paperswithcode.com/method/max-pooling) layer. Th...
Given the following machine learning model name: Polyak Averaging, provide a description of the model
**Polyak Averaging** is an optimization technique that sets final parameters to an average of (recent) parameters visited in the optimization trajectory. Specifically, if over $t$ iterations we have parameters $\theta\_{1}, \theta\_{2}, \dots, \theta\_{t}$, then Polyak Averaging suggests setting $$ \theta\_t =\frac{1}...
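The averaging rule is a one-liner; a common practical variant in deep learning replaces the uniform mean with an exponential moving average (EMA) so that recent iterates dominate. Both are sketched below (the toy iterates are illustrative):

```python
import numpy as np

def polyak_average(thetas):
    """Uniform Polyak average: mean of all parameter iterates so far."""
    return np.mean(thetas, axis=0)

def ema_update(avg, theta, decay=0.9):
    """Exponential-moving-average variant, common in deep learning."""
    return decay * avg + (1.0 - decay) * theta

iterates = [np.array([1.0]), np.array([2.0]), np.array([3.0])]
theta_bar = polyak_average(iterates)  # uniform average of the trajectory
```

The averaged parameters are used only for evaluation; optimization itself continues from the latest (unaveraged) iterate.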
Given the following machine learning model name: Transformer Decoder, provide a description of the model
[Transformer](https://paperswithcode.com/method/transformer)-Decoder is a modification to Transformer-Encoder-Decoder for long sequences that drops the encoder module, combines the input and output sequences into a single "sentence" and is trained as a standard language model. It is used in [GPT](https://paperswithcod...
Given the following machine learning model name: DiCE Unit, provide a description of the model
A **DiCE Unit** is an image model block that is built using dimension-wise convolutions and dimension-wise fusion. The dimension-wise convolutions apply light-weight convolutional filtering across each dimension of the input tensor while dimension-wise fusion efficiently combines these dimension-wise representations; ...
Given the following machine learning model name: Gradient Sign Dropout, provide a description of the model
**GradDrop**, or **Gradient Sign Dropout**, is a probabilistic masking procedure which samples gradients at an activation layer based on their level of consistency. It is applied as a layer in any standard network forward pass, usually on the final layer before the prediction head to save on compute overhead and maximi...
Given the following machine learning model name: PSFR-GAN, provide a description of the model
**PSFR-GAN** is a semantic-aware style transformation framework for face restoration. Given a pair consisting of an LQ face image and its corresponding parsing map, we first generate a multi-scale pyramid of the inputs, and then progressively modulate different scale features from coarse-to-fine in a semantic-aware style transfer wa...
Given the following machine learning model name: Attentive Normalization, provide a description of the model
**Attentive Normalization** generalizes the common affine transformation component in the vanilla feature normalization. Instead of learning a single affine transformation, AN learns a mixture of affine transformations and utilizes their weighted-sum as the final affine transformation applied to re-calibrate features i...
Given the following machine learning model name: Pyramid Vision Transformer, provide a description of the model
**PVT**, or **Pyramid Vision Transformer**, is a type of [vision transformer](https://paperswithcode.com/methods/category/vision-transformer) that utilizes a pyramid structure to make it an effective backbone for dense prediction tasks. Specifically, it allows for more fine-grained inputs (4 x 4 pixels per patch) to be ...
Given the following machine learning model name: Differential Diffusion, provide a description of the model
**Differential Diffusion** is an enhancement of image-to-image diffusion models that adds the ability to control the amount of change applied to each image fragment via a change map.
Given the following machine learning model name: Deformable Position-Sensitive RoI Pooling, provide a description of the model
**Deformable Position-Sensitive RoI Pooling** is similar to PS RoI Pooling but it adds an offset to each bin position in the regular bin partition. Offset learning follows the “fully convolutional” spirit. In the top branch, a convolutional layer generates the full spatial resolution offset fields. For each RoI (also f...
Given the following machine learning model name: AdaDelta, provide a description of the model
**AdaDelta** is a stochastic optimization technique that provides a per-dimension learning rate for [SGD](https://paperswithcode.com/method/sgd). It is an extension of [Adagrad](https://paperswithcode.com/method/adagrad) that seeks to reduce its aggressive, monotonically decreasing learning rate. Instead of accu...
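A minimal sketch of the update rule, assuming the standard formulation: decaying averages of squared gradients and squared updates are accumulated, and their ratio scales each step, so no global learning rate is needed (the toy objective and hyperparameters are illustrative):

```python
import numpy as np

def adadelta(grad_fn, x0, steps=500, rho=0.95, eps=1e-6):
    """Minimal AdaDelta sketch: per-dimension step sizes from running
    averages of squared gradients (eg2) and squared updates (ed2)."""
    x = np.asarray(x0, dtype=float)
    eg2 = np.zeros_like(x)  # E[g^2], decaying average of squared gradients
    ed2 = np.zeros_like(x)  # E[dx^2], decaying average of squared updates
    for _ in range(steps):
        g = grad_fn(x)
        eg2 = rho * eg2 + (1 - rho) * g ** 2
        dx = -np.sqrt(ed2 + eps) / np.sqrt(eg2 + eps) * g
        ed2 = rho * ed2 + (1 - rho) * dx ** 2
        x = x + dx
    return x

# Minimise f(x) = x^2 (gradient 2x) starting from x = 3
x_min = adadelta(lambda x: 2 * x, [3.0])
```

Note there is no learning-rate argument: the `sqrt(ed2)/sqrt(eg2)` ratio plays that role and adapts per dimension over time.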
Given the following machine learning model name: MinCut Pooling, provide a description of the model
MinCutPool is a trainable pooling operator for graphs that learns to map nodes into clusters. The method is trained to approximate the minimum K-cut of the graph to ensure that the clusters are balanced, while also jointly optimizing the objective of the task at hand.
Given the following machine learning model name: Gaussian Gated Linear Network, provide a description of the model
**Gaussian Gated Linear Network**, or **G-GLN**, is a multi-variate extension to the recently proposed [GLN](https://paperswithcode.com/method/gln) family of deep neural networks by reformulating the GLN neuron as a gated product of Gaussians. This Gaussian Gated Linear Network (G-GLN) formulation exploits the fact tha...
Given the following machine learning model name: SepFormer, provide a description of the model
**SepFormer** is a [Transformer](https://paperswithcode.com/methods/category/transformers)-based neural network for speech separation. The SepFormer learns short and long-term dependencies with a multi-scale approach that employs transformers. It is mainly composed of multi-head attention and feed-forward layers. A dual-...
Given the following machine learning model name: Context Optimization, provide a description of the model
**CoOp**, or **Context Optimization**, is an automated prompt engineering method that avoids manual prompt tuning by modeling context words with continuous vectors that are end-to-end learned from data. The context could be shared among all classes or designed to be class-specific. During training, we simply minimize t...
Given the following machine learning model name: Residual gating mechanism to compose adverb-action representations, provide a description of the model