| conference | title | abstract | decision |
|---|---|---|---|
ICLR.cc/2023/Conference | SYNC: Efficient Neural Code Search Through Structurally Guided Hard Negative Curricula | Efficient code snippet search using natural language queries can be a great productivity tool for developers (beginners and professionals alike). Recently neural code search has been popular, where a neural method is used to embed both the query (NL) and the code snippet (PL) into a common representation space; which i... | Withdrawn |
ICLR.cc/2022/Conference | Achieving Small-Batch Accuracy with Large-Batch Scalability via Adaptive Learning Rate Adjustment | We consider synchronous data-parallel neural network training with fixed large batch sizes. While the large batch size provides a high degree of parallelism, it likely degrades the generalization performance due to the low gradient noise scale. We propose a two-phase adaptive learning rate adjustment framework that tac... | Withdrawn |
ICLR.cc/2021/Conference | Conditioning Trick for Training Stable GANs | In this paper we propose a conditioning trick, called difference departure from normality, applied on the generator network in response to instability issues during GAN training. We force the generator to get closer to the departure from normality function of real samples computed in the spectral domain of Schur decomp... | Withdrawn |
ICLR.cc/2023/Conference | CORE-PERIPHERY PRINCIPLE GUIDED REDESIGN OF SELF-ATTENTION IN TRANSFORMERS | Designing more efficient, reliable, and explainable neural network architectures is a crucial topic in the artificial intelligence (AI) field. Numerous efforts have been devoted to exploring the best structures, or structural signatures, of well-performing artificial neural networks (ANN). Previous studies, by post-hoc... | Withdrawn |
ICLR.cc/2019/Conference | Object detection deep learning networks for Optical Character Recognition | In this article, we show how we applied a simple approach coming from deep learning networks for object detection to the task of optical character recognition in order to build image features tailored for documents. In contrast to scene text reading in natural images using networks pretrained on ImageNet, our document ... | Reject |
ICLR.cc/2022/Conference | StARformer: Transformer with State-Action-Reward Representations | Reinforcement Learning (RL) can be considered as a sequence modeling task, i.e., given a sequence of past state-action-reward experiences, a model autoregressively predicts a sequence of future actions. Recently, Transformers have been successfully adopted to model this problem. In this work, we propose State-Action-... | Withdrawn |
ICLR.cc/2021/Conference | PIVEN: A Deep Neural Network for Prediction Intervals with Specific Value Prediction | Improving the robustness of neural nets in regression tasks is key to their application in multiple domains. Deep learning-based approaches aim to achieve this goal either by improving their prediction of specific values (i.e., point prediction), or by producing prediction intervals (PIs) that quantify uncertainty. We ... | Reject |
ICLR.cc/2022/Conference | On Hard Episodes in Meta-Learning | Existing meta-learners primarily focus on improving the average task accuracy across multiple episodes. Different episodes, however, may vary in hardness and quality leading to a wide gap in the meta-learner's performance across episodes. Understanding this issue is particularly critical in industrial few-shot settings... | Reject |
ICLR.cc/2021/Conference | Neural ODE Processes | Neural Ordinary Differential Equations (NODEs) use a neural network to model the instantaneous rate of change in the state of a system. However, despite their apparent suitability for dynamics-governed time-series, NODEs present a few disadvantages. First, they are unable to adapt to incoming data-points, a fundamental... | Accept (Poster) |
ICLR.cc/2023/Conference | Unsupervised Threshold Learning with "$L$"-trend Prior For Visual Anomaly Detection | This paper considers unsupervised threshold learning, a practical yet under-researched module of anomaly detection (AD) for image data. AD comprises two separate modules: score generation and threshold learning. Most existing studies are more curious about the first part. It is often assumed that if the scoring module ... | Withdrawn |
ICLR.cc/2018/Conference | GraphVAE: Towards Generation of Small Graphs Using Variational Autoencoders | Deep learning on graphs has become a popular research topic with many applications. However, past work has concentrated on learning graph embedding tasks only, which is in contrast with advances in generative models for images and text. Is it possible to transfer this progress to the domain of graphs? We propose to sid... | Reject |
ICLR.cc/2019/Conference | Doubly Sparse: Sparse Mixture of Sparse Experts for Efficient Softmax Inference | Computations for the softmax function in neural network models are expensive when the number of output classes is large. This can become a significant issue in both training and inference for such models. In this paper, we present Doubly Sparse Softmax (DS-Softmax), a Sparse Mixture of Sparse Experts, to improv... | Reject |
ICLR.cc/2023/Conference | When does Bias Transfer in Transfer Learning? | Using transfer learning to adapt a pre-trained "source model" to a downstream "target task" can dramatically increase performance with seemingly no downside. In this work, we demonstrate that there can exist a downside after all: bias transfer, or the tendency for biases of the source model to persist even after adapti... | Withdrawn |
ICLR.cc/2023/Conference | Deep Leakage from Model in Federated Learning | Distributed machine learning has been widely used in recent years to tackle large and complex dataset problems. Therewith, the security of distributed learning has also drawn increasing attention from both academia and industry. In this context, federated learning (FL) was developed as a “secure” distributed learning b... | Withdrawn |
ICLR.cc/2018/Conference | Now I Remember! Episodic Memory For Reinforcement Learning | Humans rely on episodic memory constantly, in remembering the name of someone they met 10 minutes ago, the plot of a movie as it unfolds, or where they parked the car. Endowing reinforcement learning agents with episodic memory is a key step on the path toward replicating human-like general intelligence. We analyze why... | Reject |
ICLR.cc/2021/Conference | Probabilistic Multimodal Representation Learning | Learning multimodal representations is a requirement for many tasks such as image--caption retrieval. Previous work on this problem has only focused on finding good vector representations without any explicit measure of uncertainty. In this work, we argue and demonstrate that learning multimodal representations as prob... | Withdrawn |
ICLR.cc/2020/Conference | Factorized Multimodal Transformer for Multimodal Sequential Learning | The complex world around us is inherently multimodal and sequential (continuous). Information is scattered across different modalities and requires multiple continuous sensors to be captured. As machine learning leaps towards better generalization to real world, multimodal sequential learning becomes a fundamental rese... | Withdrawn |
ICLR.cc/2021/Conference | Everybody's Talkin': Let Me Talk as You Want | We present a method to edit a target portrait footage by taking a sequence of audio as input to synthesize a photo-realistic video. This method is unique because it is highly dynamic. It does not assume a person-specific rendering network yet is capable of translating one source audio into one randomly chosen video output ... | Withdrawn |
ICLR.cc/2022/Conference | Learning Universal User Representations via Self-Supervised Lifelong Behaviors Modeling | Universal user representation is an important research topic in industry, and is widely used in diverse downstream user analysis tasks, such as user profiling and user preference prediction. With the rapid development of Internet service platforms, extremely long user behavior sequences have been accumulated. However, ... | Reject |
ICLR.cc/2021/Conference | Improving Local Effectiveness for Global Robustness Training | Despite its increasing popularity, deep neural networks are easily fooled. To alleviate this deficiency, researchers are actively developing new training strategies, which encourage models that are robust to small input perturbations. Several successful robust training methods have been proposed. However, many of them ... | Reject |
ICLR.cc/2021/Conference | Data augmentation for deep learning based accelerated MRI reconstruction | Deep neural networks have emerged as very successful tools for image restoration and reconstruction tasks. These networks are often trained end-to-end to directly reconstruct an image from a noisy or corrupted measurement of that image. To achieve state-of-the-art performance, training on large and diverse sets of imag... | Reject |
ICLR.cc/2021/Conference | GAN2GAN: Generative Noise Learning for Blind Denoising with Single Noisy Images | We tackle a challenging blind image denoising problem, in which only single distinct noisy images are available for training a denoiser, and no information about noise is known, except for it being zero-mean, additive, and independent of the clean image. In such a setting, which often occurs in practice, it is not poss... | Accept (Poster) |
ICLR.cc/2020/Conference | Weighted Empirical Risk Minimization: Transfer Learning based on Importance Sampling | We consider statistical learning problems, when the distribution $P'$ of the training observations $Z'_1,\; \ldots,\; Z'_n$ differs from the distribution $P$ involved in the risk one seeks to minimize (referred to as the \textit{test distribution}) but is still defined on the same measurable space as $P$ and dominates ... | Reject |
ICLR.cc/2018/Conference | Quadrature-based features for kernel approximation | We consider the problem of improving kernel approximation via feature maps. These maps arise as Monte Carlo approximation to integral representations of kernel functions and scale up kernel methods for larger datasets. We propose to use more efficient numerical integration technique to obtain better estimates of the in... | Reject |
ICLR.cc/2023/Conference | AIA: learn to design greedy algorithm for NP-complete problems using neural networks | Algorithm design is an art that heavily requires intuition and expertise of the human designers as well as insights into the problems under consideration. In particular, the design of greedy-selection rules, the core of greedy algorithms, is usually a great challenge to designer: it is relatively easy to understand a g... | Reject |
ICLR.cc/2022/Conference | Implicit Bias of MSE Gradient Optimization in Underparameterized Neural Networks | We study the dynamics of a neural network in function space when optimizing the mean squared error via gradient flow. We show that in the underparameterized regime the network learns eigenfunctions of an integral operator $T_K$ determined by the Neural Tangent Kernel at rates corresponding to their eigenvalues. For e... | Accept (Poster) |
ICLR.cc/2023/Conference | Label-free Concept Bottleneck Models | Concept bottleneck models (CBM) are a popular way of creating more interpretable neural networks by having hidden layer neurons correspond to human-understandable concepts. However, existing CBMs and their variants have two crucial limitations: first, they need to collect labeled data for each of the predefined concep... | Accept: poster |
ICLR.cc/2021/Conference | Not All Memories are Created Equal: Learning to Expire | Attention mechanisms have shown promising results in sequence modeling tasks that require long-term memory. Recent work has investigated mechanisms to reduce the computational cost of preserving and storing the memories. However, not all content in the past is equally important to remember. We propose Expire-Span, a me... | Reject |
ICLR.cc/2023/Conference | Multi Task Learning of Different Class Label Representations for Stronger Models | We find that the way in which class labels are represented can have a powerful effect on how well models trained on them learn. In classification, the standard way of representing class labels is as one-hot vectors. We present a new way of representing class labels called Binary Labels, where each class label is a larg... | Reject |
ICLR.cc/2022/Conference | Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value | Explaining deep convolutional neural networks has been recently drawing increasing attention since it helps to understand the networks' internal operations and why they make certain decisions. Saliency maps, which emphasize salient regions largely connected to the network's decision-making, are one of the most common w... | Withdrawn |
ICLR.cc/2018/Conference | CausalGAN: Learning Causal Implicit Generative Models with Adversarial Training | We introduce causal implicit generative models (CiGMs): models that allow sampling from not only the true observational but also the true interventional distributions. We show that adversarial training can be used to learn a CiGM, if the generator architecture is structured based on a given causal graph. We consider th... | Accept (Poster) |
ICLR.cc/2021/Conference | Transferring Inductive Biases through Knowledge Distillation | Having the right inductive biases can be crucial in many tasks or scenarios where data or computing resources are a limiting factor, or where training data is not perfectly representative of the conditions at test time. However, defining, designing, and efficiently adapting inductive biases is not necessarily straightf... | Reject |
ICLR.cc/2020/Conference | Task-agnostic Continual Learning via Growing Long-Term Memory Networks | As our experience shows, humans can learn and deploy a myriad of different skills to tackle the situations they encounter daily. Neural networks, in contrast, have a fixed memory capacity that prevents them from learning more than a few sets of skills before starting to forget them. In this work, we make a step to bri... | Withdrawn |
ICLR.cc/2020/Conference | SCALOR: Generative World Models with Scalable Object Representations | Scalability in terms of object density in a scene is a primary challenge in unsupervised sequential object-oriented representation learning. Most of the previous models have been shown to work only on scenes with a few objects. In this paper, we propose SCALOR, a probabilistic generative world model for learning SCALab... | Accept (Poster) |
ICLR.cc/2020/Conference | Improved Training Techniques for Online Neural Machine Translation | Neural sequence-to-sequence models are at the basis of state-of-the-art solutions for sequential prediction problems such as machine translation and speech recognition. The models typically assume that the entire input is available when starting target generation. In some applications, however, it is desirable to start... | Reject |
ICLR.cc/2022/Conference | Coordination Among Neural Modules Through a Shared Global Workspace | Deep learning has seen a movement away from representing examples with a monolithic hidden state towards a richly structured state. For example, Transformers segment by position, and object-centric architectures decompose images into entities. In all these architectures, interactions between different elements are mod... | Accept (Oral) |
ICLR.cc/2022/Conference | MemREIN: Rein the Domain Shift for Cross-Domain Few-Shot Learning | Few-shot learning aims to enable models to generalize to new categories (query instances) with only limited labeled samples (support instances) from each category. The metric-based mechanism is a promising direction which compares feature embeddings via different metrics. However, it always fails to generalize to unseen domain... | Withdrawn |
ICLR.cc/2023/Conference | Normalizing Flows for Interventional Density Estimation | Existing machine learning methods for causal inference usually estimate quantities expressed via the mean of potential outcomes (e.g., average treatment effect). However, such quantities do not capture the full information about the distribution of potential outcomes. In this work, we estimate the density of potential ... | Reject |
ICLR.cc/2022/Conference | Ancestral protein sequence reconstruction using a tree-structured Ornstein-Uhlenbeck variational autoencoder | We introduce a deep generative model for representation learning of biological sequences that, unlike existing models, explicitly represents the evolutionary process. The model makes use of a tree-structured Ornstein-Uhlenbeck process, obtained from a given phylogenetic tree, as an informative prior for a variational a... | Accept (Poster) |
ICLR.cc/2019/Conference | An Efficient and Margin-Approaching Zero-Confidence Adversarial Attack | There are two major paradigms of white-box adversarial attacks that attempt to impose input perturbations. The first paradigm, called the fix-perturbation attack, crafts adversarial samples within a given perturbation level. The second paradigm, called the zero-confidence attack, finds the smallest perturbation neede... | Reject |
ICLR.cc/2020/Conference | Multi-agent Reinforcement Learning for Networked System Control | This paper considers multi-agent reinforcement learning (MARL) in networked system control. Specifically, each agent learns a decentralized control policy based on local observations and messages from connected neighbors. We formulate such a networked MARL (NMARL) problem as a spatiotemporal Markov decision process and... | Accept (Poster) |
ICLR.cc/2023/Conference | MixPro: Data Augmentation with MaskMix and Progressive Attention Labeling for Vision Transformer | The recently proposed data augmentation TransMix employs attention labels to help visual transformers (ViT) achieve better robustness and performance. However, TransMix is deficient in two aspects: 1) The image cropping method of TransMix may not be suitable for vision transformer. 2) At the early stage of training, th... | Accept: poster |
ICLR.cc/2019/Conference | SGD Converges to Global Minimum in Deep Learning via Star-convex Path | Stochastic gradient descent (SGD) has been found to be surprisingly effective in training a variety of deep neural networks. However, there is still a lack of understanding on how and why SGD can train these complex networks towards a global minimum. In this study, we establish the convergence of SGD to a global minimu... | Accept (Poster) |
ICLR.cc/2023/Conference | Hierarchies of Reward Machines | Reward machines (RMs) are a recent formalism for representing the reward function of a reinforcement learning task through a finite-state machine whose edges encode landmarks of the task using high-level events. The structure of RMs enables the decomposition of a task into simpler and independently solvable subtasks th... | Reject |
ICLR.cc/2022/Conference | An Improved Composite Functional Gradient Learning by Wasserstein Regularization for Generative adversarial networks | Generative adversarial networks (GANs) are usually trained by a minimax game which is notoriously and empirically known to be unstable. Recently, a totally new methodology called Composite Functional Gradient Learning (CFG) provides an alternative theoretical foundation for training GANs more stably by employing a s... | Withdrawn |
ICLR.cc/2023/Conference | LSTM-BASED-AUTO-BI-LSTM for Remaining Useful Life (RUL) Prediction: the first round of test results | The Remaining Useful Life (RUL) is one of the most critical indicators to detect a component’s failure before it effectively occurs. It can be predicted by historical data or direct data extraction by adopting model-based, data-driven, or hybrid methodologies. Data-driven methods have mainly used Machine Learning (ML) ... | Reject |
ICLR.cc/2022/Conference | Information-theoretic stochastic contrastive conditional GAN: InfoSCC-GAN | Conditional generation is a subclass of generative problems when the output of generation is conditioned by a class attributes’ information. In this paper, we present a new stochastic contrastive conditional generative adversarial network (InfoSCC-GAN) with explorable latent space. The InfoSCC-GAN architecture is based... | Reject |
ICLR.cc/2020/Conference | Cross-Dimensional Self-Attention for Multivariate, Geo-tagged Time Series Imputation | Many real-world applications involve multivariate, geo-tagged time series data: at each location, multiple sensors record corresponding measurements. For example, air quality monitoring system records PM2.5, CO, etc. The resulting time-series data often has missing values due to device outages or communication errors. ... | Reject |
ICLR.cc/2020/Conference | Neural Network Branching for Neural Network Verification | Formal verification of neural networks is essential for their deployment in safety-critical areas. Many available formal verification methods have been shown to be instances of a unified Branch and Bound (BaB) formulation. We propose a novel framework for designing an effective branching strategy for BaB. Specifically,... | Accept (Talk) |
ICLR.cc/2023/Conference | Tailoring Language Generation Models under Total Variation Distance | The standard paradigm of neural language generation adopts maximum likelihood estimation (MLE) as the optimizing method. From a distributional view, MLE in fact minimizes the Kullback-Leibler divergence (KLD) between the distribution of the real data and that of the model. However, this approach forces the model to dis... | Accept: notable-top-5% |
ICLR.cc/2020/Conference | On the implicit minimization of alternative loss functions when training deep networks | Understanding the implicit bias of optimization algorithms is important in order to improve generalization of neural networks. One approach to try to exploit such understanding would be to then make the bias explicit in the loss function. Conversely, an interesting approach to gain more insights into the implicit bias... | Reject |
ICLR.cc/2022/Conference | Connecting Graph Convolution and Graph PCA | Graph convolution operator of the GCN model is originally motivated from a localized first-order approximation of spectral graph convolutions. This work stands on a different view; establishing a mathematical connection between graph convolution and graph-regularized PCA (GPCA). Based on this connection, the GCN archit... | Reject |
ICLR.cc/2022/Conference | Towards Unknown-aware Deep Q-Learning | Deep reinforcement learning (RL) has achieved remarkable success in known environments where the agents are trained, yet the agents do not necessarily know what they don’t know. In particular, RL agents deployed in the open world are naturally subject to environmental shifts and encounter unknown out-of-distribution (O... | Withdrawn |
ICLR.cc/2022/Conference | A Variance Principle Explains why Dropout Finds Flatter Minima | Although dropout has achieved great success in deep learning, little is known about how it helps the training find a good generalization solution in the high-dimensional parameter space. In this work, we show that the training with dropout finds the neural network with a flatter minimum compared with standard gradient ... | Reject |
ICLR.cc/2021/Conference | Set Prediction without Imposing Structure as Conditional Density Estimation | Set prediction is about learning to predict a collection of unordered variables with unknown interrelations. Training such models with set losses imposes the structure of a metric space over sets. We focus on stochastic and underdefined cases, where an incorrectly chosen loss function leads to implausible predictions. ... | Accept (Poster) |
ICLR.cc/2018/Conference | Ensemble Robustness and Generalization of Stochastic Deep Learning Algorithms | The question why deep learning algorithms generalize so well has attracted increasing research interest. However, most of the well-established approaches, such as hypothesis capacity, stability or sparseness, have not provided complete explanations (Zhang et al., 2016; Kawaguchi et al., 2017). In this work, we focus on... | Invite to Workshop Track |
ICLR.cc/2023/Conference | PatchBlender: A Motion Prior for Video Transformers | Transformers have become one of the dominant architectures in the field of computer vision. However, there are yet several challenges when applying such architectures to video data. Most notably, these models struggle to model the temporal patterns of video data effectively. Directly targeting this issue, we introduce ... | Reject |
ICLR.cc/2023/Conference | FunkNN: Neural Interpolation for Functional Generation | Can we build continuous generative models which generalize across scales, can be evaluated at any coordinate, admit calculation of exact derivatives, and are conceptually simple? Existing MLP-based architectures generate worse samples than the grid-based generators with favorable convolutional inductive biases. Models ... | Accept: poster |
ICLR.cc/2021/Conference | Deformable DETR: Deformable Transformers for End-to-End Object Detection | DETR has been recently proposed to eliminate the need for many hand-designed components in object detection while demonstrating good performance. However, it suffers from slow convergence and limited feature spatial resolution, due to the limitation of Transformer attention modules in processing image feature maps. To ... | Accept (Oral) |
ICLR.cc/2020/Conference | DropGrad: Gradient Dropout Regularization for Meta-Learning | With the growing attention on learning-to-learn new tasks using only a few examples, meta-learning has been widely used in numerous problems such as few-shot classification, reinforcement learning, and domain generalization. However, meta-learning models are prone to overfitting when there are no sufficient training ta... | Withdrawn |
ICLR.cc/2020/Conference | Making Efficient Use of Demonstrations to Solve Hard Exploration Problems | This paper introduces R2D3, an agent that makes efficient use of demonstrations to solve hard exploration problems in partially observable environments with highly variable initial conditions. We also introduce a suite of eight tasks that combine these three properties, and show that R2D3 can solve several of the tasks... | Accept (Poster) |
ICLR.cc/2023/Conference | Deja Vu: Continual Model Generalization for Unseen Domains | In real-world applications, deep learning models often run in non-stationary environments where the target data distribution continually shifts over time. There have been numerous domain adaptation (DA) methods in both online and offline modes to improve cross-domain adaptation ability. However, these DA methods typica... | Accept: poster |
ICLR.cc/2023/Conference | Policy Expansion for Bridging Offline-to-Online Reinforcement Learning | Pre-training with offline data and online fine-tuning using reinforcement learning is a promising strategy for learning control policies by leveraging the best of both worlds in terms of sample efficiency and performance. One natural approach is to initialize the policy for online learning with the one trained offline.... | Accept: poster |
ICLR.cc/2023/Conference | Dynamic-Aware GANs: Time-Series Generation with Handy Self-Supervision | This paper presents Dynamic-Aware GAN (DAGAN) as a data-efficient self-supervised paradigm for time-series data generation. To support sequential generation with sufficient clues of temporal dynamics, we explicitly model the transition dynamics within the data sequence through differencing, thus refining the vanilla se... | Withdrawn |
ICLR.cc/2021/Conference | Dual-Tree Wavelet Packet CNNs for Image Classification | In this paper, we target an important issue of deep convolutional neural networks (CNNs) — the lack of a mathematical understanding of their properties. We present an explicit formalism that is motivated by the similarities between trained CNN kernels and oriented Gabor filters for addressing this problem. The core ide... | Reject |
ICLR.cc/2020/Conference | A Novel Analysis Framework of Lower Complexity Bounds for Finite-Sum Optimization | This paper studies the lower bound complexity for the optimization problem whose objective function is the average of $n$ individual smooth convex functions. We consider the algorithm which gets access to gradient and proximal oracle for each individual component. For the strongly-convex case, we prove such an algorith... | Reject |
ICLR.cc/2018/Conference | Model-based imitation learning from state trajectories | Imitation learning from demonstrations usually relies on learning a policy from trajectories of optimal states and actions. However, in real life expert demonstrations, often the action information is missing and only state trajectories are available. We present a model-based imitation learning method that can learn en... | Reject |
ICLR.cc/2023/Conference | Statistical Theory of Differentially Private Marginal-based Data Synthesis Algorithms | Marginal-based methods achieve promising performance in the synthetic data competition hosted by the National Institute of Standards and Technology (NIST). To deal with high-dimensional data, the distribution of synthetic data is represented by a probabilistic graphical model (e.g., a Bayesian network), while the raw... | Accept: poster |
ICLR.cc/2019/Conference | Automatic generation of object shapes with desired functionalities | 3D objects (artefacts) are made to fulfill functions. Designing an object often starts with defining a list of functionalities that it should provide, also known as functional requirements. Today, the design of 3D object models is still a slow and largely artisanal activity, with few Computer-Aided Design (CAD) tools e... | Reject |
ICLR.cc/2018/Conference | Beyond Finite Layer Neural Networks: Bridging Deep Architectures and Numerical Differential Equations | Deep neural networks have become the state-of-the-art models in numerous machine learning tasks. However, general guidance to network architecture design is still missing. In our work, we bridge deep neural network design with numerical differential equations. We show that many effective networks, such as ResNet, PolyN... | Invite to Workshop Track |
ICLR.cc/2021/Conference | Empirical Studies on the Convergence of Feature Spaces in Deep Learning | While deep learning is effective to learn features/representations from data, the distributions of samples in feature spaces learned by various architectures for different training tasks (e.g., latent layers of AEs and feature vectors in CNN classifiers) have not been well-studied or compared. We hypothesize that the f... | Reject |
ICLR.cc/2022/Conference | The Close Relationship Between Contrastive Learning and Meta-Learning | Contrastive learning has recently taken off as a paradigm for learning from unlabeled data. In this paper, we discuss the close relationship between contrastive learning and meta-learning under a certain task distribution. We complement this observation by showing that established meta-learning methods, such as Prototy... | Accept (Poster) |
ICLR.cc/2020/Conference | Generative Restricted Kernel Machines | We introduce a novel framework for generative models based on Restricted Kernel Machines (RKMs) with multi-view generation and uncorrelated feature learning capabilities, called Gen-RKM. To incorporate multi-view generation, this mechanism uses a shared representation of data from various views. The mechanism is flexib... | Reject |
ICLR.cc/2023/Conference | Understanding Hindsight Goal Relabeling Requires Rethinking Divergence Minimization | Hindsight goal relabeling has become a foundational technique for multi-goal reinforcement learning (RL). The idea is quite simple: any arbitrary trajectory can be seen as an expert demonstration for reaching the trajectory's end state. Intuitively, this procedure trains a goal-conditioned policy to imitate a sub-optim... | Reject |
ICLR.cc/2022/Conference | Learning the Representation of Behavior Styles with Imitation Learning | Imitation learning is one of the methods for reproducing expert demonstrations adaptively by learning a mapping between observations and actions. However, behavior styles such as motion trajectory and driving habit depend largely on the dataset of human maneuvers, and settle down to an average behavior style in most im... | Reject |
ICLR.cc/2023/Conference | Fuzzy Alignments in Directed Acyclic Graph for Non-Autoregressive Machine Translation | Non-autoregressive translation (NAT) reduces the decoding latency but suffers from performance degradation due to the multi-modality problem. Recently, the structure of directed acyclic graph has achieved great success in NAT, which tackles the multi-modality problem by introducing dependency between vertices. However,... | Accept: poster |
ICLR.cc/2022/Conference | Incorporating User-Item Similarity in Hybrid Neighborhood-based Recommendation System | Modern hybrid recommendation systems require a sufficient amount of data. However, several internet privacy issues make users skeptical about sharing their personal information with online service providers. This work introduces various novel methods utilizing the baseline estimate to learn user interests from their in... | Withdrawn |
ICLR.cc/2022/Conference | No Parameters Left Behind: Sensitivity Guided Adaptive Learning Rate for Training Large Transformer Models | Recent research has shown the existence of significant redundancy in large Transformer models. One can prune the redundant parameters without significantly sacrificing the generalization performance. However, we question whether the redundant parameters could have contributed more if they were properly trained. To answ... | Accept (Poster) |
ICLR.cc/2023/Conference | Self-Paced Learning Enhanced Physics-informed Neural Networks for Solving Partial Differential Equations | There is a hit discussion on solving partial differential equation by neural network. The famous PINN (physics-informed neural networks) has drawn worldwide attention since it was put forward. Despite its success in solving nonlinear partial differential equation, the difficulty in converging and the inefficiency in tr... | Reject |
ICLR.cc/2022/Conference | Policy improvement by planning with Gumbel | AlphaZero is a powerful reinforcement learning algorithm based on approximate policy iteration and tree search. However, AlphaZero can fail to improve its policy network, if not visiting all actions at the root of a search tree. To address this issue, we propose a policy improvement algorithm based on sampling actions ... | Accept (Spotlight) |
ICLR.cc/2021/Conference | Improved knowledge distillation by utilizing backward pass knowledge in neural networks | Knowledge distillation (KD) is one of the prominent techniques for model compression. In this method, the knowledge of a large network (teacher) is distilled into a model (student) with usually significantly fewer parameters. KD tries to better-match the output of the student model to that of the teacher model based on... | Withdrawn
ICLR.cc/2019/Conference | Transformer-XL: Language Modeling with Longer-Term Dependency | We propose a novel neural architecture, Transformer-XL, for modeling longer-term dependency. To address the limitation of fixed-length contexts, we introduce a notion of recurrence by reusing the representations from the history. Empirically, we show state-of-the-art (SoTA) results on both word-level and character-leve... | Reject |
ICLR.cc/2022/Conference | Automatic Concept Extraction for Concept Bottleneck-based Video Classification | Recent efforts in interpretable deep learning models have shown that concept-based explanation methods achieve competitive accuracy with standard end-to-end models and enable reasoning and intervention about extracted high-level visual concepts from images, e.g., identifying the wing color and beak length for bird-spec... | Reject |
ICLR.cc/2021/Conference | CNN Based Analysis of the Luria’s Alternating Series Test for Parkinson’s Disease Diagnostics | Deep-learning based image classification is applied in this studies to the Luria's alternating series tests to support diagnostics of the Parkinson's disease. Luria's alternating series tests belong to the family of fine-motor drawing tests and been used in neurology and psychiatry for nearly a century. Introduction of... | Withdrawn |
ICLR.cc/2022/Conference | A Neural Tangent Kernel Perspective of Infinite Tree Ensembles | In practical situations, the tree ensemble is one of the most popular models along with neural networks. A soft tree is a variant of a decision tree. Instead of using a greedy method for searching splitting rules, the soft tree is trained using a gradient method in which the entire splitting operation is formulated in ... | Accept (Poster) |
ICLR.cc/2023/Conference | Revisiting Robustness in Graph Machine Learning | Many works show that node-level predictions of Graph Neural Networks (GNNs) are unrobust to small, often termed adversarial, changes to the graph structure. However, because manual inspection of a graph is difficult, it is unclear if the studied perturbations always preserve a core assumption of adversarial examples: t... | Accept: poster |
ICLR.cc/2020/Conference | VILD: Variational Imitation Learning with Diverse-quality Demonstrations | The goal of imitation learning (IL) is to learn a good policy from high-quality demonstrations. However, the quality of demonstrations in reality can be diverse, since it is easier and cheaper to collect demonstrations from a mix of experts and amateurs. IL in such situations can be challenging, especially when the lev... | Reject |
ICLR.cc/2018/Conference | Compact Encoding of Words for Efficient Character-level Convolutional Neural Networks Text Classification | This paper puts forward a new text to tensor representation that relies on information compression techniques to assign shorter codes to the most frequently used characters. This representation is language-independent with no need of pretraining and produces an encoding with no information loss. It provides an adequate... | Reject |
ICLR.cc/2020/Conference | Model-based Saliency for the Detection of Adversarial Examples | Adversarial perturbations cause a shift in the salient features of an image, which may result in a misclassification. We demonstrate that gradient-based saliency approaches are unable to capture this shift, and develop a new defense which detects adversarial examples based on learnt saliency models instead. We study tw... | Reject |
ICLR.cc/2021/Conference | Signal Coding and Reconstruction using Spike Trains | In many animal sensory pathways, the transformation from external stimuli to spike trains is essentially deterministic. In this context, a new mathematical framework for coding and reconstruction, based on a biologically plausible model of the spiking neuron, is presented. The framework considers encoding of a signal t... | Reject |
ICLR.cc/2021/Conference | Using MMD GANs to correct physics models and improve Bayesian parameter estimation | Bayesian parameter estimation methods are robust techniques for quantifying properties of physical systems which cannot be observed directly. In estimating such parameters, one first requires a physics model of the phenomenon to be studied. Often, such a model follows a series of assumptions to make parameter inference... | Withdrawn |
ICLR.cc/2021/Conference | Fidelity-based Deep Adiabatic Scheduling | Adiabatic quantum computation is a form of computation that acts by slowly interpolating a quantum system between an easy to prepare initial state and a final state that represents a solution to a given computational problem. The choice of the interpolation schedule is critical to the performance: if at a certain time ... | Accept (Spotlight) |
ICLR.cc/2019/Conference | Context Mover's Distance & Barycenters: Optimal transport of contexts for building representations | We propose a unified framework for building unsupervised representations of entities and their compositions, by viewing each entity as a histogram (or distribution) over its contexts. This enables us to take advantage of optimal transport and construct representations that effectively harness the geometry of the underl... | Reject |
ICLR.cc/2023/Conference | Multi-Level Contrastive Learning for Dense Prediction Task | In this work, we present Multi-Level Contrastive Learning for Dense Prediction Task (MCL), an efficient self-supervised method to learn region-level feature representation for dense prediction tasks. This approach is motivated by the three key factors in detection: localization, scale consistency and recognition. Consi... | Withdrawn |
ICLR.cc/2018/Conference | HyperNetworks with statistical filtering for defending adversarial examples | Deep learning algorithms have been known to be vulnerable to adversarial perturbations in various tasks such as image classification. This problem was addressed by employing several defense methods for detection and rejection of particular types of attacks. However, training and manipulating networks according to parti... | Withdrawn |
ICLR.cc/2022/Conference | Learning to Extend Molecular Scaffolds with Structural Motifs | Recent advancements in deep learning-based modeling of molecules promise to accelerate in silico drug discovery. A plethora of generative models is available, building molecules either atom-by-atom and bond-by-bond or fragment-by-fragment. However, many drug discovery projects require a fixed scaffold to be present in ... | Accept (Poster) |
ICLR.cc/2023/Conference | Generalizable Multi-Relational Graph Representation Learning: A Message Intervention Approach | With the edges associated with labels and directions, the so-called multi-relational graph possesses powerful expressiveness, which is beneficial to many applications. However, as the heterogeneity brought by the higher cardinality of edges and relations climbs up, more trivial relations are taken into account for the ... | Withdrawn |
ICLR.cc/2021/Conference | Learning advanced mathematical computations from examples | Using transformers over large generated datasets, we train models to learn mathematical properties of differential systems, such as local stability, behavior at infinity and controllability. We achieve near perfect prediction of qualitative characteristics, and good approximations of numerical features of the system. T... | Accept (Poster) |
ICLR.cc/2021/Conference | Bayesian Online Meta-Learning | Neural networks are known to suffer from catastrophic forgetting when trained on sequential datasets. While there have been numerous attempts to solve this problem for large-scale supervised classification, little has been done to overcome catastrophic forgetting for few-shot classification problems. Few-shot meta-lear... | Reject |
ICLR.cc/2022/Conference | Towards Structured Dynamic Sparse Pre-Training of BERT | Identifying algorithms for computational efficient unsupervised training of large language models is an important and active area of research. In this work, we develop and study a straightforward, dynamic always-sparse pre-training approach for BERT language modeling, which leverages periodic compression steps based o... | Reject