| prompts | description |
|---|---|
Given the following machine learning model name: A Framework for Leader Identification in Coordinated Activity, provide a description of the model | An agreement of a group to follow a common purpose is manifested by its coalescence into coordinated behavior. The process of initiating this behavior and the period of decision-making by the group members necessarily precede the coordinated behavior. Given time series of group members’ behavior, the goal is to find... |
Given the following machine learning model name: Area Under the ROC Curve for Clustering, provide a description of the model | The area under the receiver operating characteristics (ROC) Curve, referred to as AUC, is a well-known performance measure in the supervised learning domain. Due to its compelling features, it has been employed in a number of studies to evaluate and compare the performance of different classifiers. In this work, we exp... |
Given the following machine learning model name: Siamese Multi-depth Transformer-based Hierarchical Encoder, provide a description of the model | **SMITH**, or **Siamese Multi-depth Transformer-based Hierarchical Encoder**, is a [Transformer](https://paperswithcode.com/methods/category/transformers)-based model for document representation learning and matching. It contains several design choices to adapt [self-attention models](https://paperswithcode.com/methods... |
Given the following machine learning model name: DIoU-NMS, provide a description of the model | **DIoU-NMS** is a type of non-maximum suppression that uses Distance-IoU (DIoU) rather than regular IoU as the suppression criterion, so that the overlap area and the distance between the central points of two bounding boxes are simultaneously considered when suppressing redundant boxes.
In original NMS, the IoU metric is used to suppress the redunda... |
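The DIoU score described above can be sketched in a few lines: it is plain IoU minus a penalty given by the squared centre distance over the squared diagonal of the smallest enclosing box. This is a minimal illustrative sketch (boxes as `(x1, y1, x2, y2)` tuples), not the paper's reference implementation.

```python
def diou(box_a, box_b):
    """Distance-IoU between two axis-aligned boxes (x1, y1, x2, y2).

    DIoU-NMS uses this score in place of plain IoU when deciding
    whether one box suppresses another. Illustrative sketch only.
    """
    # Intersection area
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    iou = inter / (area_a + area_b - inter)
    # Squared distance between the two box centres
    cax, cay = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    cbx, cby = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    center_dist = (cax - cbx) ** 2 + (cay - cby) ** 2
    # Squared diagonal of the smallest box enclosing both
    ex1, ey1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    ex2, ey2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    diag = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return iou - center_dist / diag
```

Identical boxes score 1.0, while disjoint boxes score below 0, so far-apart boxes are less likely to be suppressed than under plain IoU.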
Given the following machine learning model name: Dual Softmax Loss, provide a description of the model | **Dual Softmax Loss** is a loss function based on symmetric cross-entropy loss used in the [CAMoE](https://paperswithcode.com/method/camoe) video-text retrieval model. The similarity of each text and video with all other videos or texts is calculated, and it should be maximal for the ground-truth pair. For DSL, a pr... |
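The symmetric cross-entropy over a similarity matrix can be sketched as follows. This is a minimal NumPy sketch: the `dual_softmax_loss` name, the exact form of the prior (a softmax over the opposite axis multiplied into the similarities), and the averaging are assumptions for illustration, not CAMoE's reference implementation.

```python
import numpy as np

def softmax(x, axis):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dual_softmax_loss(sim):
    """Sketch of a dual (symmetric) softmax loss over an N x N
    text-video similarity matrix whose diagonal holds the
    ground-truth pairs. Illustrative only.
    """
    n = sim.shape[0]
    # Revise similarities with a prior from the transposed-axis softmax
    revised = sim * softmax(sim, axis=0)
    p_t2v = softmax(revised, axis=1)  # each text over all videos
    p_v2t = softmax(revised, axis=0)  # each video over all texts
    idx = np.arange(n)
    # Cross-entropy against the diagonal in both directions
    loss_t2v = -np.log(p_t2v[idx, idx]).mean()
    loss_v2t = -np.log(p_v2t[idx, idx]).mean()
    return (loss_t2v + loss_v2t) / 2
```

A strongly diagonal similarity matrix yields a near-zero loss, while a flat matrix yields roughly log N.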
Given the following machine learning model name: ComplEx with N3 Regularizer, provide a description of the model | ComplEx model trained with a nuclear norm regularizer |
Given the following machine learning model name: Graph Recurrent Imputation Network, provide a description of the model | |
Given the following machine learning model name: Local Relation Layer, provide a description of the model | A **Local Relation Layer** is an image feature extractor that is an alternative to a [convolution](https://paperswithcode.com/method/convolution) operator. The intuition is that aggregation in convolution is basically a pattern matching process that applies fixed filters, which can be inefficient at modeling visual ele... |
Given the following machine learning model name: XGrad-CAM, provide a description of the model | **XGrad-CAM**, or **Axiom-based Grad-CAM**, is a class-discriminative visualization method that is able to highlight the regions belonging to the objects of interest. Two axiomatic properties are introduced in the derivation of XGrad-CAM: Sensitivity and Conservation. In particular, the proposed XGrad-CAM is still a linear... |
Given the following machine learning model name: AutoTinyBERT, provide a description of the model | **AutoTinyBERT** is an efficient [BERT](https://paperswithcode.com/method/bert) variant found through neural architecture search. Specifically, one-shot learning is used to obtain a big Super Pretrained Language Model (SuperPLM), where the objectives of pre-training or task-agnostic BERT distillation are used. Then,... |
Given the following machine learning model name: Hunger Games Search, provide a description of the model | **Hunger Games Search (HGS)** is a general-purpose population-based optimization technique with a simple structure, special stability features, and very competitive performance, able to solve both constrained and unconstrained problems more effectively. HGS is designed according to the hunger-driven act... |
Given the following machine learning model name: Sequence to Sequence, provide a description of the model | **Seq2Seq**, or **Sequence To Sequence**, is a model used in sequence prediction tasks, such as language modelling and machine translation. The idea is to use one [LSTM](https://paperswithcode.com/method/lstm), the *encoder*, to read the input sequence one timestep at a time, to obtain a large fixed dimensional vector ... |
Given the following machine learning model name: Bort, provide a description of the model | **Bort** is a parametric architectural variant of the [BERT](https://paperswithcode.com/method/bert) architecture. It extracts an optimal subset of architectural parameters for the BERT architecture through a [neural architecture search](https://paperswithcode.com/method/neural-architecture-search) approach; in particu... |
Given the following machine learning model name: Electric, provide a description of the model | **Electric** is an energy-based cloze model for representation learning over text. Like BERT, it is a conditional generative model of tokens given their contexts. However, Electric does not use masking or output a full distribution over tokens that could occur in a context. Instead, it assigns a scalar energy score to ... |
Given the following machine learning model name: OPT-IML, provide a description of the model | **OPT-IML** is a version of OPT fine-tuned on a large collection of 1500+ NLP tasks divided into various task categories. |
Given the following machine learning model name: Low-resolution input, provide a description of the model | |
Given the following machine learning model name: Partition Filter Network, provide a description of the model | **Partition Filter Network** is a framework designed specifically for joint entity and relation extraction. The framework consists of three components: partition filter encoder, NER unit and RE unit. In task units, we use table-filling for word pair prediction. Orange, yellow and green represent NER-related, shared an... |
Given the following machine learning model name: Constrained Pairwise k-Means, provide a description of the model | COP-KMeans is a modified version of the popular k-means algorithm that supports pairwise constraints.
Original paper : Constrained K-means Clustering with Background Knowledge, Wagstaff et al. 2001 |
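The pairwise constraints in COP-KMeans come in two kinds: must-link pairs that have to end up in the same cluster and cannot-link pairs that must not. The assignment check can be sketched as below; the function and argument names are illustrative, following the spirit of Wagstaff et al. (2001) rather than any reference implementation.

```python
def violates_constraints(point, cluster, assignment, must_link, cannot_link):
    """Return True if assigning `point` to `cluster` breaks a constraint.

    `assignment` maps already-assigned points to their clusters;
    `must_link` / `cannot_link` are lists of point-index pairs.
    COP-KMeans skips any cluster for which this check fails.
    """
    for a, b in must_link:
        other = b if a == point else (a if b == point else None)
        # A must-link partner already placed elsewhere blocks this cluster
        if other is not None and other in assignment and assignment[other] != cluster:
            return True
    for a, b in cannot_link:
        other = b if a == point else (a if b == point else None)
        # A cannot-link partner already in this cluster blocks it too
        if other is not None and assignment.get(other) == cluster:
            return True
    return False
```

If every cluster violates some constraint for a point, the algorithm fails for that constraint set, which is the behaviour described in the original paper.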
Given the following machine learning model name: Grab, provide a description of the model | **Grab** is a sensor processing system for cashier-free shopping. Grab needs to accurately identify and track customers, and associate each shopper with items he or she retrieves from shelves. To do this, it uses a keypoint-based pose tracker as a building block for identification and tracking, develops robust feature-... |
Given the following machine learning model name: DetNASNet, provide a description of the model | **DetNASNet** is a convolutional neural network designed to be an object detection backbone and discovered through [DetNAS](https://paperswithcode.com/method/detnas) architecture search. It uses [ShuffleNet V2](https://paperswithcode.com/method/shufflenet-v2) blocks as its basic building block. |
Given the following machine learning model name: Extended Transformer Construction, provide a description of the model | **Extended Transformer Construction**, or **ETC**, is an extension of the [Transformer](https://paperswithcode.com/method/transformer) architecture with a new attention mechanism that extends the original in two main ways: (1) it allows scaling up the input length from 512 to several thousands; and (2) it can ingest... |
Given the following machine learning model name: Content-based Attention, provide a description of the model | **Content-based attention** is an attention mechanism based on cosine similarity:
$$f_{att}\left(\textbf{h}_{i}, \textbf{s}_{j}\right) = \cos\left[\textbf{h}_{i};\textbf{s}_{j}\right]$$
It was utilised in [Neural Turing Machines](https://paperswithcode.com/method/neural-turing-machine) as part of the Addressi... |
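The cosine-similarity scoring above, followed by the softmax normalisation used in NTM-style content addressing, can be sketched in NumPy. This is a minimal sketch; the function name and the softmax step (which the NTM addressing mechanism applies after the cosine score) are illustrative.

```python
import numpy as np

def content_attention(keys, query):
    """Content-based attention weights over `keys`.

    `keys` is an (N, d) matrix of vectors h_i and `query` a (d,)
    vector s_j; scores are cosine similarities, normalised with a
    softmax as in NTM content addressing. Minimal sketch.
    """
    sims = keys @ query / (
        np.linalg.norm(keys, axis=1) * np.linalg.norm(query) + 1e-8
    )
    e = np.exp(sims - sims.max())  # stable softmax
    return e / e.sum()
```

The key most aligned with the query receives the largest weight, and the weights sum to one.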
Given the following machine learning model name: SpreadsheetCoder, provide a description of the model | **SpreadsheetCoder** is a neural network architecture for spreadsheet formula prediction. It is a [BERT](https://paperswithcode.com/method/bert)-based model architecture to represent the tabular context in both row-based and column-based formats. A [BERT](https://paperswithcode.com/method/bert) encoder computes an embe... |
Given the following machine learning model name: R(2+1)D, provide a description of the model | A **R(2+1)D** convolutional neural network is a network for action recognition that employs [R(2+1)D](https://paperswithcode.com/method/2-1-d-convolution) convolutions in a [ResNet](https://paperswithcode.com/method/resnet) inspired architecture. The use of these convolutions over regular [3D Convolutions](https://pape... |
Given the following machine learning model name: Weight Demodulation, provide a description of the model | **Weight Demodulation** is an alternative to [adaptive instance normalization](https://paperswithcode.com/method/adaptive-instance-normalization) for use in generative adversarial networks; it was introduced in [StyleGAN2](https://paperswithcode.com/method/stylegan2). The purpose of [instance normalization](h... |
Given the following machine learning model name: PanGu-$α$, provide a description of the model | **PanGu-$α$** is an autoregressive language model (ALM) with up to 200 billion parameters pretrained on a large corpus of text, mostly in Chinese language. The architecture of PanGu-$α$ is based on Transformer, which has been extensively used as the backbone of a variety of pretrained language models such as [BERT](htt... |
Given the following machine learning model name: Adapter, provide a description of the model | |
Given the following machine learning model name: ExtremeNet, provide a description of the model | **ExtremeNet** is a bottom-up object detection framework that detects four extreme points (top-most, left-most, bottom-most, right-most) of an object. It uses a keypoint estimation framework to find extreme points, by predicting four multi-peak heatmaps for each object category. In addition, it uses one [heatmap](htt... |
Given the following machine learning model name: BigGAN-deep, provide a description of the model | **BigGAN-deep** is a deeper version (4x) of [BigGAN](https://paperswithcode.com/method/biggan). The main difference is a slightly differently designed [residual block](https://paperswithcode.com/method/residual-block). Here the $z$ vector is concatenated with the conditional vector without splitting it into chunks. I... |
Given the following machine learning model name: Legendre Memory Unit, provide a description of the model | The Legendre Memory Unit (LMU) is mathematically derived to orthogonalize
its continuous-time history – doing so by solving d coupled ordinary differential
equations (ODEs), whose phase space linearly maps onto sliding windows of
time via the Legendre polynomials up to degree d-1. It is optimal for compressing temp... |
Given the following machine learning model name: Deep Ensembles, provide a description of the model | |
Given the following machine learning model name: Smooth Step, provide a description of the model | |
Given the following machine learning model name: CSPResNeXt Block, provide a description of the model | **CSPResNeXt Block** is an extended [ResNext Block](https://paperswithcode.com/method/resnext-block) where we partition the feature map of the base layer into two parts and then merge them through a cross-stage hierarchy. The use of a split and merge strategy allows for more gradient flow through the network. |
Given the following machine learning model name: FBNet Block, provide a description of the model | **FBNet Block** is an image model block used in the [FBNet](https://paperswithcode.com/method/fbnet) architectures discovered through [DNAS](https://paperswithcode.com/method/dnas) [neural architecture search](https://paperswithcode.com/method/neural-architecture-search). The basic building blocks employed are [depthwi... |
Given the following machine learning model name: Domain Adaptative Neighborhood Clustering via Entropy Optimization, provide a description of the model | **Domain Adaptive Neighborhood Clustering via Entropy Optimization (DANCE)** is a self-supervised clustering method that harnesses the cluster structure of the target domain using self-supervision. This is done with a neighborhood clustering technique that self-supervises feature learning in the target. At the same tim... |
Given the following machine learning model name: Local Contrast Normalization, provide a description of the model | **Local Contrast Normalization** is a type of normalization that performs local subtraction and division normalizations, enforcing a sort of local competition between adjacent features in a feature map, and between features at the same spatial location in different feature maps. |
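The subtract-then-divide step described above can be sketched over the channel axis of a feature map. This is a simplified sketch: it normalises only across channels at each spatial location, whereas the full method also pools over a Gaussian spatial neighbourhood, which is omitted here for brevity.

```python
import numpy as np

def local_contrast_normalization(fmap, eps=1e-5):
    """Contrast-normalise a (C, H, W) feature map per location.

    Subtracts the cross-channel mean, then divides by the
    cross-channel standard deviation, enforcing local competition
    between features at the same spatial location. Simplified sketch.
    """
    mean = fmap.mean(axis=0, keepdims=True)
    centered = fmap - mean
    std = np.sqrt((centered ** 2).mean(axis=0, keepdims=True))
    # Floor the divisor so flat regions are not blown up by noise
    return centered / np.maximum(std, eps)
```

After normalisation, each spatial location has zero mean across channels, so only relative feature activations survive.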
Given the following machine learning model name: Pansharpening Network, provide a description of the model | We propose a deep network architecture for the pansharpening problem called PanNet. We incorporate domain-specific knowledge to design our PanNet architecture by focusing on the two aims of the pan-sharpening problem: spectral and spatial preservation. For spectral preservation, we add up-sampled multispectral images t... |
Given the following machine learning model name: Online Hard Example Mining, provide a description of the model | Some object detection datasets contain an overwhelming number of easy examples and a small number of hard examples. Automatic selection of these hard examples can make training more
effective and efficient. **OHEM**, or **Online Hard Example Mining**, is a bootstrapping technique that modifies [SGD](https://paperswith... |
Given the following machine learning model name: Stochastic Steady-state Embedding, provide a description of the model | Stochastic Steady-state Embedding (SSE) is an algorithm that can learn many steady-state algorithms over graphs. Different from graph neural network family models, SSE is trained stochastically which only requires 1-hop information, but can capture fixed point relationships efficiently and effectively.
Description a... |
Given the following machine learning model name: Hybrid-deconvolution, provide a description of the model | A resnet-like architecture with deconvolution feature normalization (Ye et al. 2020, ICLR) layers in the first few layers for sparse low-level feature identification, and batch normalization layers in the later layers. |
Given the following machine learning model name: Channel-wise Cross Attention, provide a description of the model | **Channel-wise Cross Attention** is a module for semantic segmentation used in the [UCTransNet](https://paperswithcode.com/method/uctransnet) architecture. It is used to fuse features of inconsistent semantics between the Channel [Transformer](https://paperswithcode.com/method/transformer) and [U-Net](https://paperswit... |
Given the following machine learning model name: RotNet, provide a description of the model | **RotNet** is a self-supervision approach that relies on predicting image rotations as the pretext task
in order to learn image representations. |
Given the following machine learning model name: Weighted Recurrent Quality Enhancement, provide a description of the model | **Weighted Recurrent Quality Enhancement**, or **WRQE**, is a recurrent quality enhancement network for video compression that takes both compressed frames and the bit stream as inputs. In the recurrent cell of WRQE, the memory and update signal are weighted by quality features to reasonably leverage multi-frame inform... |
Given the following machine learning model name: Encoder-Attender-Aggregator, provide a description of the model | EncAttAgg introduced two attenders to tackle two problems: 1) We introduce a mutual attender layer to efficiently obtain the entity-pair-specific mention representations. 2) We introduce an integration attender to weight mention pairs of a target entity pair. |
Given the following machine learning model name: Wavelet-integrated Identity Preserving Adversarial Network for face super-resolution, provide a description of the model | # WIPA: Wavelet-integrated, Identity Preserving, Adversarial network for Face Super-resolution
PyTorch implementation of WIPA: Super-resolution of very low-resolution face images with a **W**avelet Integrated, **I**dentity **P**reserving, **A**dversarial Network.
# Paper:
[Super-resolution of very low-resolution fa... |
Given the following machine learning model name: Dynamic Convolution, provide a description of the model | **DynamicConv** is a type of [convolution](https://paperswithcode.com/method/convolution) for sequential modelling where it has kernels that vary over time as a learned function of the individual time steps. It builds upon [LightConv](https://paperswithcode.com/method/lightconv) and takes the same form but uses a time-... |
Given the following machine learning model name: Reliability Balancing, provide a description of the model | |
Given the following machine learning model name: Gated Positional Self-Attention, provide a description of the model | **Gated Positional Self-Attention (GPSA)** is a self-attention module for vision transformers, used in the [ConViT](https://paperswithcode.com/method/convit) architecture, that can be initialized as a convolutional layer -- helping a ViT learn inductive biases about locality. |
Given the following machine learning model name: Fast Feedforward Networks, provide a description of the model | A log-time alternative to feedforward layers outperforming both the vanilla feedforward and mixture-of-experts approaches. |
Given the following machine learning model name: Self-Adjusting Smooth L1 Loss, provide a description of the model | **Self-Adjusting Smooth L1 Loss** is a loss function used in object detection that was introduced with [RetinaMask](https://paperswithcode.com/method/retinamask). This is an improved version of Smooth L1. For Smooth L1 loss we have:
$$ f(x) = 0.5 \frac{x^{2}}{\beta} \text{ if } |x| < \beta $$
$$ f(x) = |x| -0.5\b... |
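The baseline Smooth L1 defined by the two cases above (quadratic below $\beta$, linear above) can be sketched directly; the self-adjusting variant then sets $\beta$ from running statistics of the regression errors rather than fixing it. Minimal sketch with an assumed default of `beta=1.0`.

```python
def smooth_l1(x, beta=1.0):
    """Smooth L1 loss on a scalar residual x.

    Quadratic (0.5 * x^2 / beta) when |x| < beta, linear
    (|x| - 0.5 * beta) otherwise; the two branches meet at |x| = beta,
    so the loss and its gradient are continuous there.
    """
    ax = abs(x)
    if ax < beta:
        return 0.5 * ax * ax / beta
    return ax - 0.5 * beta
```

At the transition point both branches equal `0.5 * beta`, which is what makes the loss smooth.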
Given the following machine learning model name: MixNet, provide a description of the model | **MixNet** is a type of convolutional neural network discovered via AutoML that utilises MixConvs instead of regular depthwise convolutions. |
Given the following machine learning model name: GPU-Efficient Network, provide a description of the model | **GENets**, or **GPU-Efficient Networks**, are a family of efficient models found through [neural architecture search](https://paperswithcode.com/methods/category/neural-architecture-search). The search occurs over several types of convolutional block, which include [depth-wise convolutions](https://paperswithcode.com/... |
Given the following machine learning model name: Dense Contrastive Learning, provide a description of the model | **Dense Contrastive Learning** is a self-supervised learning method for dense prediction tasks. It implements self-supervised learning by optimizing a pairwise contrastive (dis)similarity loss at the pixel level between two views of input images. In contrast to the regular contrastive loss, the contrastive loss is comput... |
Given the following machine learning model name: Graph Attention Network, provide a description of the model | A **Graph Attention Network (GAT)** is a neural network architecture that operates on graph-structured data, leveraging masked self-attentional layers to address the shortcomings of prior methods based on graph convolutions or their approximations. By stacking layers in which nodes are able to attend over their neighbo... |
Given the following machine learning model name: Nonlinear Activation Free Network, provide a description of the model | |
Given the following machine learning model name: FastMoE, provide a description of the model | **FastMoE** is a distributed MoE training system based on PyTorch with common accelerators. The system provides a hierarchical interface for both flexible model design and adaptation to different applications, such as [Transformer-XL](https://paperswithcode.com/method/transformer-xl) and Megatron-LM. |