Dataset schema (flattened viewer output; min/max are column values for int64 columns and string lengths for stringlengths columns; "k"-suffixed numbers are the viewer's abbreviations):

column         dtype           min     max
Unnamed: 0.1   int64           0       41k
Unnamed: 0     int64           0       41k
author         stringlengths   9       1.39k
id             stringlengths   11      18
summary        stringlengths   25      3.66k
title          stringlengths   4       258
year           int64           1.99k   2.02k
arxiv_url      stringlengths   32      39
info           stringlengths   523     3.18k
embeddings     stringlengths   16.9k   17.1k
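Dataset-viewer dumps render int64 values with thousands separators (a year appears as "2,017", the min/max above as "41k"-style abbreviations). A minimal sketch for normalizing the comma-separated rendering back to a Python int (the function name is mine, not part of the dataset):

```python
def parse_rendered_int(s: str) -> int:
    """Turn a viewer-rendered integer like '2,017' back into an int."""
    return int(s.replace(",", ""))

parse_rendered_int("2,017")  # → 2017
```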
Unnamed: 0.1: 200
Unnamed: 0: 200
author: ['Chengxi Ye', 'Yezhou Yang', 'Cornelia Fermuller', 'Yiannis Aloimonos']
id: 1708.00631v1
summary: We explain that the difficulties of training deep neural networks come from a syndrome of three consistency issues. This paper describes our efforts in their analysis and treatment. The first issue is the training speed inconsistency in different layers. We propose to address it with an intuitive, simple-to-implement, ...
title: On the Importance of Consistency in Training Deep Neural Networks
year: 2017
arxiv_url: http://arxiv.org/pdf/1708.00631v1
info: Title Importance Consistency Training Deep Neural Networks Summary explain difficulty training deep neural network come syndrome three consistency issue paper describes effort analysis treatment first issue training speed inconsistency different layer propose address intuitive simpletoimplement low footprint secondorde...
embeddings: [0.0011005856795236468, 0.0342542938888073, -0.0016036214074119925, 0.037525322288274765, 0.017938777804374695, -0.04064643010497093, 0.04413624480366707, -0.01656109280884266, -0.01651647314429283, 0.01686471328139305, -0.02132304757833481, -0.029987270012497902, 0.030414137989282608, 0.04707800969481468, -0.003312838...
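The embeddings field stores each vector as a stringified Python list of floats (truncated above). A minimal parsing sketch, assuming the full field is a valid list literal; the function name and the short sample string are illustrative, not from the dataset:

```python
import ast

def parse_embedding(field: str) -> list[float]:
    """Parse a stringified float list as stored in the embeddings field."""
    values = ast.literal_eval(field)  # safe literal parsing, no eval()
    return [float(x) for x in values]

sample = "[0.0011005856795236468, 0.0342542938888073, -0.0016036214074119925]"
vec = parse_embedding(sample)  # a list of three floats
```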
Unnamed: 0.1: 201
Unnamed: 0: 201
author: ['Mario Amrehn', 'Sven Gaube', 'Mathias Unberath', 'Frank Schebesch', 'Tim Horz', 'Maddalena Strumia', 'Stefan Steidl', 'Markus Kowarschik', 'Andreas Maier']
id: 1709.03450v1
summary: For complex segmentation tasks, fully automatic systems are inherently limited in their achievable accuracy for extracting relevant objects. Especially in cases where only few data sets need to be processed for a highly accurate result, semi-automatic segmentation techniques exhibit a clear benefit for the user. One ar...
title: UI-Net: Interactive Artificial Neural Networks for Iterative Image Segmentation Based on a User Model
year: 2017
arxiv_url: http://arxiv.org/pdf/1709.03450v1
info: Title UINet Interactive Artificial Neural Networks Iterative Image Segmentation Based User Model Summary complex segmentation task fully automatic system inherently limited achievable accuracy extracting relevant object Especially case data set need processed highly accurate result semiautomatic segmentation technique ...
embeddings: [-0.011895904317498207, -0.016781369224190712, 0.003017455106601119, 0.028427323326468468, -0.04797236993908882, -0.002457182854413986, 0.04309859871864319, 0.0015853755176067352, -0.03841213136911392, 0.05920279026031494, -0.023175353184342384, -0.01464101392775774, -0.007778484839946032, 0.029599057510495186, -0.0185...
Unnamed: 0.1: 202
Unnamed: 0: 202
author: ['Altaf H. Khan']
id: 1712.05695v1
summary: Most of the weights in a Lightweight Neural Network have a value of zero, while the remaining ones are either +1 or -1. These universal approximators require approximately 1.1 bits/weight of storage, posses a quick forward pass and achieve classification accuracies similar to conventional continuous-weight networks. Th...
title: Lightweight Neural Networks
year: 2017
arxiv_url: http://arxiv.org/pdf/1712.05695v1
info: Title Lightweight Neural Networks Summary weight Lightweight Neural Network value zero remaining one either 1 1 universal approximators require approximately 11 bitsweight storage posse quick forward pas achieve classification accuracy similar conventional continuousweight network training regimen focus error reduction...
embeddings: [-0.006239911075681448, 0.05687817186117172, -0.03179732337594032, 0.02979792095720768, 0.020963774994015694, 0.005468358751386404, 0.07468367367982864, 0.052600882947444916, 0.012772627174854279, 0.03470863029360771, 0.02001088112592697, 0.02225230261683464, 0.024142583832144737, 0.02948911115527153, 0.019600821658968...
Unnamed: 0.1: 203
Unnamed: 0: 203
author: ['Nathaniel Thomas', 'Tess Smidt', 'Steven Kearnes', 'Lusann Yang', 'Li Li', 'Kai Kohlhoff', 'Patrick Riley']
id: 1802.08219v2
summary: We introduce tensor field networks, which are locally equivariant to 3D rotations, translations, and permutations of points at every layer. 3D rotation equivariance removes the need for data augmentation to identify features in arbitrary orientations. Our network uses filters built from spherical harmonics; due to the ...
title: Tensor Field Networks: Rotation- and Translation-Equivariant Neural Networks for 3D Point Clouds
year: 2018
arxiv_url: http://arxiv.org/pdf/1802.08219v2
info: Title Tensor Field Networks Rotation TranslationEquivariant Neural Networks 3D Point Clouds Summary introduce tensor field network locally equivariant 3D rotation translation permutation point every layer 3D rotation equivariance remove need data augmentation identify feature arbitrary orientation network us filter bui...
embeddings: [-0.001138135907240212, 0.023233946412801743, 0.017776718363165855, 0.006724439561367035, -0.0014553532237187028, -0.0018856192473322153, 0.04718993976712227, -0.01338037196546793, -0.03774923458695412, 0.0066506462171673775, -0.013602137565612793, 0.02383286878466606, 0.002643703017383814, 0.029266733676195145, 0.0519...
Unnamed: 0.1: 204
Unnamed: 0: 204
author: ['Çağlar Gülçehre', 'Yoshua Bengio']
id: 1301.4083v6
summary: We explore the effect of introducing prior information into the intermediate level of neural networks for a learning task on which all the state-of-the-art machine learning algorithms tested failed to learn. We motivate our work from the hypothesis that humans learn such intermediate concepts from other individuals via...
title: Knowledge Matters: Importance of Prior Information for Optimization
year: 2013
arxiv_url: http://arxiv.org/pdf/1301.4083v6
info: Title Knowledge Matters Importance Prior Information Optimization Summary explore effect introducing prior information intermediate level neural network learning task stateoftheart machine learning algorithm tested failed learn motivate work hypothesis human learn intermediate concept individual via form supervision gu...
embeddings: [0.014172586612403393, 0.009406535886228085, -0.023537883535027504, 0.02201433666050434, -0.001226184656843543, -0.014183464460074902, 0.04456879198551178, -0.0022789237555116415, -0.04931991919875145, 0.0088122533634305, 0.004124582279473543, 0.0367431603372097, 0.00808875821530819, 0.038832150399684906, 0.01729504764...
Unnamed: 0.1: 205
Unnamed: 0: 205
author: ['Kishore Konda', 'Roland Memisevic', 'David Krueger']
id: 1402.3337v5
summary: Regularized training of an autoencoder typically results in hidden unit biases that take on large negative values. We show that negative biases are a natural result of using a hidden layer whose responsibility is to both represent the input data and act as a selection mechanism that ensures sparsity of the representati...
title: Zero-bias autoencoders and the benefits of co-adapting features
year: 2014
arxiv_url: http://arxiv.org/pdf/1402.3337v5
info: Title Zerobias autoencoders benefit coadapting feature Summary Regularized training autoencoder typically result hidden unit bias take large negative value show negative bias natural result using hidden layer whose responsibility represent input data act selection mechanism ensures sparsity representation show negative...
embeddings: [-0.03741852939128876, 0.048517659306526184, -0.026960469782352448, 0.02670970931649208, 0.013095607049763203, 0.008037182502448559, 0.09301012754440308, -0.00367228826507926, -0.032445278018713, -0.014952633529901505, -0.021101243793964386, 0.03696267679333687, 0.009924480691552162, 0.08767807483673096, 0.040551718324...
Unnamed: 0.1: 206
Unnamed: 0: 206
author: ['Bodo Rueckauer', 'Iulia-Alexandra Lungu', 'Yuhuang Hu', 'Michael Pfeiffer']
id: 1612.04052v1
summary: Deep convolutional neural networks (CNNs) have shown great potential for numerous real-world machine learning applications, but performing inference in large CNNs in real-time remains a challenge. We have previously demonstrated that traditional CNNs can be converted into deep spiking neural networks (SNNs), which exhi...
title: Theory and Tools for the Conversion of Analog to Spiking Convolutional Neural Networks
year: 2016
arxiv_url: http://arxiv.org/pdf/1612.04052v1
info: Title Theory Tools Conversion Analog Spiking Convolutional Neural Networks Summary Deep convolutional neural network CNNs shown great potential numerous realworld machine learning application performing inference large CNNs realtime remains challenge previously demonstrated traditional CNNs converted deep spiking neura...
embeddings: [-0.047626595944166183, -0.008519168011844158, -0.010641636326909065, 0.08292718231678009, 0.028569580987095833, -0.0191702451556921, 0.03178006783127785, -0.02696431241929531, -0.02704492025077343, -0.024147793650627136, -0.06486588716506958, 0.07105593383312225, -0.038517773151397705, 0.12279143184423447, 0.015830203...
Unnamed: 0.1: 207
Unnamed: 0: 207
author: ['Xun Huang', 'Yixuan Li', 'Omid Poursaeed', 'John Hopcroft', 'Serge Belongie']
id: 1612.04357v4
summary: In this paper, we propose a novel generative model named Stacked Generative Adversarial Networks (SGAN), which is trained to invert the hierarchical representations of a bottom-up discriminative network. Our model consists of a top-down stack of GANs, each learned to generate lower-level representations conditioned on ...
title: Stacked Generative Adversarial Networks
year: 2016
arxiv_url: http://arxiv.org/pdf/1612.04357v4
info: Title Stacked Generative Adversarial Networks Summary paper propose novel generative model named Stacked Generative Adversarial Networks SGAN trained invert hierarchical representation bottomup discriminative network model consists topdown stack GANs learned generate lowerlevel representation conditioned higherlevel re...
embeddings: [-0.026028765365481377, 0.08551148325204849, -0.008710755966603756, 0.03697074577212334, 0.01247104536741972, 0.01192604098469019, 0.03986614570021629, -0.004626457113772631, -0.027626855298876762, 0.012442037463188171, -0.04514726251363754, 0.003601840464398265, -0.02068101428449154, -0.004156286362558603, 0.069662697...
Unnamed: 0.1: 208
Unnamed: 0: 208
author: ['David Warde-Farley', 'Andrew Rabinovich', 'Dragomir Anguelov']
id: 1412.6563v2
summary: We study the problem of large scale, multi-label visual recognition with a large number of possible classes. We propose a method for augmenting a trained neural network classifier with auxiliary capacity in a manner designed to significantly improve upon an already well-performing model, while minimally impacting its c...
title: Self-informed neural network structure learning
year: 2014
arxiv_url: http://arxiv.org/pdf/1412.6563v2
info: Title Selfinformed neural network structure learning Summary study problem large scale multilabel visual recognition large number possible class propose method augmenting trained neural network classifier auxiliary capacity manner designed significantly improve upon already wellperforming model minimally impacting comp...
embeddings: [0.00026361903292126954, 0.018375858664512634, -0.0031296100933104753, 0.06841722130775452, 0.018402837216854095, -0.0008843803079798818, 0.05183177813887596, 0.012944181449711323, 0.01937745325267315, -0.02353578247129917, -0.06006372347474098, 0.030863041058182716, -0.04926510155200958, 0.015997005626559258, 0.026619...
Unnamed: 0.1: 209
Unnamed: 0: 209
author: ['Forest Agostinelli', 'Matthew Hoffman', 'Peter Sadowski', 'Pierre Baldi']
id: 1412.6830v3
summary: Artificial neural networks typically have a fixed, non-linear activation function at each neuron. We have designed a novel form of piecewise linear activation function that is learned independently for each neuron using gradient descent. With this adaptive activation function, we are able to improve upon deep neural ne...
title: Learning Activation Functions to Improve Deep Neural Networks
year: 2014
arxiv_url: http://arxiv.org/pdf/1412.6830v3
info: Title Learning Activation Functions Improve Deep Neural Networks Summary Artificial neural network typically fixed nonlinear activation function neuron designed novel form piecewise linear activation function learned independently neuron using gradient descent adaptive activation function able improve upon deep neural ...
embeddings: [-0.021282952278852463, 0.04590875282883644, -0.005355850327759981, 0.05935748293995857, 0.04437795281410217, -0.0030491596553474665, 0.044773463159799576, 0.004834912717342377, 0.023217419162392616, -0.012477260082960129, -0.025427183136343956, 0.021910693496465683, -0.029006775468587875, 0.10003077983856201, -0.00213...
Unnamed: 0.1: 210
Unnamed: 0: 210
author: ['Antti Rasmus', 'Tapani Raiko', 'Harri Valpola']
id: 1412.7210v4
summary: Suitable lateral connections between encoder and decoder are shown to allow higher layers of a denoising autoencoder (dAE) to focus on invariant representations. In regular autoencoders, detailed information needs to be carried through the highest layers but lateral connections from encoder to decoder relieve this pres...
title: Denoising autoencoder with modulated lateral connections learns invariant representations of natural images
year: 2014
arxiv_url: http://arxiv.org/pdf/1412.7210v4
info: Title Denoising autoencoder modulated lateral connection learns invariant representation natural image Summary Suitable lateral connection encoder decoder shown allow higher layer denoising autoencoder dAE focus invariant representation regular autoencoders detailed information need carried highest layer lateral connec...
embeddings: [-0.039212666451931, 0.05988015606999397, -0.007838346995413303, 0.056840647011995316, 0.028272325173020363, -0.026427801698446274, 0.03760572895407677, -0.01976924017071724, -0.054584842175245285, -0.03468039631843567, -0.006670383736491203, 0.026429450139403343, 0.02657223306596279, 0.0709182396531105, 0.022801244631...
Unnamed: 0.1: 211
Unnamed: 0: 211
author: ['Ankit B. Patel', 'Tan Nguyen', 'Richard G. Baraniuk']
id: 1504.00641v1
summary: A grand challenge in machine learning is the development of computational algorithms that match or outperform humans in perceptual inference tasks that are complicated by nuisance variation. For instance, visual object recognition involves the unknown object position, orientation, and scale in object recognition while ...
title: A Probabilistic Theory of Deep Learning
year: 2015
arxiv_url: http://arxiv.org/pdf/1504.00641v1
info: Title Probabilistic Theory Deep Learning Summary grand challenge machine learning development computational algorithm match outperform human perceptual inference task complicated nuisance variation instance visual object recognition involves unknown object position orientation scale object recognition speech recognitio...
embeddings: [0.002162026474252343, 0.07437891513109207, -0.009023087099194527, 0.010961651802062988, 0.002599551109597087, -0.02297329716384411, 0.01750197820365429, -0.0030190821271389723, -0.08056651055812836, 0.012798402458429337, 0.020903363823890686, -0.02697143144905567, 0.03993939235806465, 0.08382690697908401, 0.0273410584...
Unnamed: 0.1: 212
Unnamed: 0: 212
author: ['Rein Houthooft', 'Filip De Turck']
id: 1508.00451v4
summary: Tackling pattern recognition problems in areas such as computer vision, bioinformatics, speech or text recognition is often done best by taking into account task-specific statistical relations between output variables. In structured prediction, this internal structure is used to predict multiple outputs simultaneously,...
title: Integrated Inference and Learning of Neural Factors in Structural Support Vector Machines
year: 2015
arxiv_url: http://arxiv.org/pdf/1508.00451v4
info: Title Integrated Inference Learning Neural Factors Structural Support Vector Machines Summary Tackling pattern recognition problem area computer vision bioinformatics speech text recognition often done best taking account taskspecific statistical relation output variable structured prediction internal structure used pr...
embeddings: [0.020924311131238937, -0.03649092838168144, 0.01964947022497654, 0.0556345209479332, 0.016335025429725647, -0.016980592161417007, 0.026135742664337158, 0.028232399374246597, -0.0035930343437939882, -0.027327626943588257, -0.0528823547065258, -0.0008015804341994226, 0.036273837089538574, 0.052221789956092834, 0.0007711...
Unnamed: 0.1: 213
Unnamed: 0: 213
author: ['Patrick W. Gallagher', 'Shuai Tang', 'Zhuowen Tu']
id: 1511.07125v1
summary: Top-down information plays a central role in human perception, but plays relatively little role in many current state-of-the-art deep networks, such as Convolutional Neural Networks (CNNs). This work seeks to explore a path by which top-down information can have a direct impact within current deep networks. We explore ...
title: What Happened to My Dog in That Network: Unraveling Top-down Generators in Convolutional Neural Networks
year: 2015
arxiv_url: http://arxiv.org/pdf/1511.07125v1
info: Title Happened Dog Network Unraveling Topdown Generators Convolutional Neural Networks Summary Topdown information play central role human perception play relatively little role many current stateoftheart deep network Convolutional Neural Networks CNNs work seek explore path topdown information direct impact within cur...
embeddings: [0.030013343319296837, 0.030576882883906364, -0.03516053408384323, 0.025222107768058777, 0.00966761913150549, 0.00573092931881547, 0.07170960307121277, 0.004153324291110039, -0.08175349980592728, -0.008145200088620186, -0.00856097973883152, 0.07306088507175446, 0.005758746527135372, 0.0571817010641098, 0.06772801280021...
Unnamed: 0.1: 214
Unnamed: 0: 214
author: ['Adrien Gaidon', 'Qiao Wang', 'Yohann Cabon', 'Eleonora Vig']
id: 1605.06457v1
summary: Modern computer vision algorithms typically require expensive data acquisition and accurate manual labeling. In this work, we instead leverage the recent progress in computer graphics to generate fully labeled, dynamic, and photo-realistic proxy virtual worlds. We propose an efficient real-to-virtual world cloning meth...
title: Virtual Worlds as Proxy for Multi-Object Tracking Analysis
year: 2016
arxiv_url: http://arxiv.org/pdf/1605.06457v1
info: Title Virtual Worlds Proxy MultiObject Tracking Analysis Summary Modern computer vision algorithm typically require expensive data acquisition accurate manual labeling work instead leverage recent progress computer graphic generate fully labeled dynamic photorealistic proxy virtual world propose efficient realtovirtual...
embeddings: [-0.015725428238511086, 0.02745204232633114, 0.021197417750954628, 0.052757520228624344, -0.005117174703627825, -0.014248081482946873, 0.05115975812077522, -0.015235783532261848, -0.019489137455821037, 0.016809707507491112, 0.03321458399295807, -0.01713690720498562, -0.0009854338131844997, 0.05634380877017975, 0.051976...
Unnamed: 0.1: 215
Unnamed: 0: 215
author: ['Jianwen Xie', 'Song-Chun Zhu', 'Ying Nian Wu']
id: 1606.00972v2
summary: Video sequences contain rich dynamic patterns, such as dynamic texture patterns that exhibit stationarity in the temporal domain, and action patterns that are non-stationary in either spatial or temporal domain. We show that a spatial-temporal generative ConvNet can be used to model and synthesize dynamic patterns. The...
title: Synthesizing Dynamic Patterns by Spatial-Temporal Generative ConvNet
year: 2016
arxiv_url: http://arxiv.org/pdf/1606.00972v2
info: Title Synthesizing Dynamic Patterns SpatialTemporal Generative ConvNet Summary Video sequence contain rich dynamic pattern dynamic texture pattern exhibit stationarity temporal domain action pattern nonstationary either spatial temporal domain show spatialtemporal generative ConvNet used model synthesize dynamic patter...
embeddings: [-0.009735777042806149, 0.033621497452259064, -0.019019458442926407, 0.02915627881884575, -0.018329793587327003, -0.009356766007840633, 0.018577085807919502, -7.509900024160743e-05, -0.11104785650968552, -0.01958169788122177, 0.046330247074365616, -0.02671165019273758, -0.0072829932905733585, 0.08923191577196121, 0.025...
Unnamed: 0.1: 216
Unnamed: 0: 216
author: ['Mohammad Javad Shafiee', 'Akshaya Mishra', 'Alexander Wong']
id: 1606.04393v3
summary: Taking inspiration from biological evolution, we explore the idea of "Can deep neural networks evolve naturally over successive generations into highly efficient deep neural networks?" by introducing the notion of synthesizing new highly efficient, yet powerful deep neural networks over successive generations via an ev...
title: Deep Learning with Darwin: Evolutionary Synthesis of Deep Neural Networks
year: 2016
arxiv_url: http://arxiv.org/pdf/1606.04393v3
info: Title Deep Learning Darwin Evolutionary Synthesis Deep Neural Networks Summary Taking inspiration biological evolution explore idea deep neural network evolve naturally successive generation highly efficient deep neural network introducing notion synthesizing new highly efficient yet powerful deep neural network succes...
embeddings: [-0.023068277165293694, 0.04342826083302498, -0.05288401246070862, -0.008777724578976631, 0.010167833417654037, 0.0057924832217395306, 0.022848917171359062, -0.00561692425981164, -0.04466056451201439, 0.048386696726083755, -0.019895227625966072, 0.03785804286599159, -0.008577093482017517, 0.04098708555102348, 0.0276984...
Unnamed: 0.1: 217
Unnamed: 0: 217
author: ['Tian Han', 'Yang Lu', 'Song-Chun Zhu', 'Ying Nian Wu']
id: 1606.08571v4
summary: This paper proposes an alternating back-propagation algorithm for learning the generator network model. The model is a non-linear generalization of factor analysis. In this model, the mapping from the continuous latent factors to the observed signal is parametrized by a convolutional neural network. The alternating bac...
title: Alternating Back-Propagation for Generator Network
year: 2016
arxiv_url: http://arxiv.org/pdf/1606.08571v4
info: Title Alternating BackPropagation Generator Network Summary paper proposes alternating backpropagation algorithm learning generator network model model nonlinear generalization factor analysis model mapping continuous latent factor observed signal parametrized convolutional neural network alternating backpropagation al...
embeddings: [-0.01800454966723919, 0.04055950045585632, -0.0014842418022453785, 0.009536906145513058, -0.0030878777615725994, -0.026116665452718735, -0.016958091408014297, 0.00174429127946496, -0.07229884713888168, 0.016639191657304764, -0.015200141817331314, 0.00015803637506905943, -0.020448176190257072, 0.05180133134126663, 0.08...
Unnamed: 0.1: 218
Unnamed: 0: 218
author: ['Ilija Ilievski', 'Jiashi Feng']
id: 1608.00218v1
summary: Recently, several optimization methods have been successfully applied to the hyperparameter optimization of deep neural networks (DNNs). The methods work by modeling the joint distribution of hyperparameter values and corresponding error. Those methods become less practical when applied to modern DNNs whose training ma...
title: Hyperparameter Transfer Learning through Surrogate Alignment for Efficient Deep Neural Network Training
year: 2016
arxiv_url: http://arxiv.org/pdf/1608.00218v1
info: Title Hyperparameter Transfer Learning Surrogate Alignment Efficient Deep Neural Network Training Summary Recently several optimization method successfully applied hyperparameter optimization deep neural network DNNs method work modeling joint distribution hyperparameter value corresponding error method become le pract...
embeddings: [-0.023147162050008774, 0.0573740117251873, -0.00635602418333292, 0.004228698089718819, 0.026486214250326157, -0.012647579424083233, 0.038649626076221466, -0.004795538727194071, -0.03584451228380203, 0.009933757595717907, -0.06142236664891243, 0.045735571533441544, 0.011374056339263916, 0.02665100060403347, -2.98173436...
Unnamed: 0.1: 219
Unnamed: 0: 219
author: ['Hao Wang', 'Dit-Yan Yeung']
id: 1608.06884v2
summary: While perception tasks such as visual object recognition and text understanding play an important role in human intelligence, the subsequent tasks that involve inference, reasoning and planning require an even higher level of intelligence. The past few years have seen major advances in many perception tasks using deep ...
title: Towards Bayesian Deep Learning: A Framework and Some Existing Methods
year: 2016
arxiv_url: http://arxiv.org/pdf/1608.06884v2
info: Title Towards Bayesian Deep Learning Framework Existing Methods Summary perception task visual object recognition text understanding play important role human intelligence subsequent task involve inference reasoning planning require even higher level intelligence past year seen major advance many perception task using ...
embeddings: [0.007974247448146343, 0.036423783749341965, 0.013382895849645138, 0.04231448471546173, -0.04866444692015648, 0.03920255973935127, 0.04171886667609215, 0.03536500409245491, -0.019592175260186195, -0.03266230970621109, 0.0006695681368000805, -0.012984338216483593, 0.03467118740081787, 0.08165556192398071, -0.01498672831...
Unnamed: 0.1: 220
Unnamed: 0: 220
author: ['Mason McGill', 'Pietro Perona']
id: 1703.06217v2
summary: We propose and systematically evaluate three strategies for training dynamically-routed artificial neural networks: graphs of learned transformations through which different input signals may take different paths. Though some approaches have advantages over others, the resulting networks are often qualitatively similar...
title: Deciding How to Decide: Dynamic Routing in Artificial Neural Networks
year: 2017
arxiv_url: http://arxiv.org/pdf/1703.06217v2
info: Title Deciding Decide Dynamic Routing Artificial Neural Networks Summary propose systematically evaluate three strategy training dynamicallyrouted artificial neural network graph learned transformation different input signal may take different path Though approach advantage others resulting network often qualitatively ...
embeddings: [0.006757006980478764, -0.015304154716432095, -0.039194539189338684, -0.02484578639268875, -0.05993489548563957, -0.046978265047073364, 0.016341058537364006, -0.04097360372543335, -0.03397383168339729, -0.011854507029056549, 0.023937376216053963, 0.025907054543495178, 0.015508369542658329, 0.050955552607774734, 0.04291...
Unnamed: 0.1: 221
Unnamed: 0: 221
author: ['Hongyang Gao', 'Hao Yuan', 'Zhengyang Wang', 'Shuiwang Ji']
id: 1705.06820v4
summary: Deconvolutional layers have been widely used in a variety of deep models for up-sampling, including encoder-decoder networks for semantic segmentation and deep generative models for unsupervised learning. One of the key limitations of deconvolutional operations is that they result in the so-called checkerboard problem....
title: Pixel Deconvolutional Networks
year: 2017
arxiv_url: http://arxiv.org/pdf/1705.06820v4
info: Title Pixel Deconvolutional Networks Summary Deconvolutional layer widely used variety deep model upsampling including encoderdecoder network semantic segmentation deep generative model unsupervised learning One key limitation deconvolutional operation result socalled checkerboard problem caused fact direct relationshi...
embeddings: [-0.015216377563774586, 0.05134040117263794, 0.012215763330459595, 0.08516170084476471, -0.01683502085506916, -0.018124151974916458, 0.04815717786550522, 0.004856148734688759, -0.057793211191892624, 0.047311946749687195, 0.03756553307175636, 0.08593137562274933, 0.004859064240008593, 0.028719542548060417, -0.0064530260...
Unnamed: 0.1: 222
Unnamed: 0: 222
author: ['Stanislav Fort']
id: 1708.02735v1
summary: We propose a novel architecture for $k$-shot classification on the Omniglot dataset. Building on prototypical networks, we extend their architecture to what we call Gaussian prototypical networks. Prototypical networks learn a map between images and embedding vectors, and use their clustering for classification. In our...
title: Gaussian Prototypical Networks for Few-Shot Learning on Omniglot
year: 2017
arxiv_url: http://arxiv.org/pdf/1708.02735v1
info: Title Gaussian Prototypical Networks FewShot Learning Omniglot Summary propose novel architecture kshot classification Omniglot dataset Building prototypical network extend architecture call Gaussian prototypical network Prototypical network learn map image embedding vector use clustering classification model part enco...
embeddings: [-0.03872193396091461, 0.011896251700818539, -0.024651411920785904, 0.04198518395423889, 0.008748532272875309, -0.0013317536795511842, 0.05101098492741585, 0.01363440416753292, -0.016359755769371986, 0.015828387811779976, -0.0021112554240971804, 0.02167893573641777, 0.012797352857887745, 0.057533640414476395, 0.0296211...
Unnamed: 0.1: 223
Unnamed: 0: 223
author: ['Leslie N. Smith', 'Nicholay Topin']
id: 1708.07120v2
summary: In this paper, we show a phenomenon, which we named "super-convergence", where residual networks can be trained using an order of magnitude fewer iterations than is used with standard training methods. The existence of super-convergence is relevant to understanding why deep networks generalize well. One of the key elem...
title: Super-Convergence: Very Fast Training of Residual Networks Using Large Learning Rates
year: 2017
arxiv_url: http://arxiv.org/pdf/1708.07120v2
info: Title SuperConvergence Fast Training Residual Networks Using Large Learning Rates Summary paper show phenomenon named superconvergence residual network trained using order magnitude fewer iteration used standard training method existence superconvergence relevant understanding deep network generalize well One key eleme...
embeddings: [0.004243234172463417, 0.01020099688321352, 0.0030279129277914762, 0.05753330886363983, 0.005873274523764849, 0.014268917962908745, -0.003872480010613799, 0.014631052501499653, -0.010071934200823307, 0.021705912426114082, 0.0007730689249001443, 0.019592784345149994, -0.04117584973573685, 0.009131697937846184, -0.002717...
Unnamed: 0.1: 224
Unnamed: 0: 224
author: ['Boris Flach', 'Alexander Shekhovtsov', 'Ondrej Fikar']
id: 1709.08524v1
summary: Learning, taking into account full distribution of the data, referred to as generative, is not feasible with deep neural networks (DNNs) because they model only the conditional distribution of the outputs given the inputs. Current solutions are either based on joint probability models facing difficult estimation proble...
title: Generative learning for deep networks
year: 2017
arxiv_url: http://arxiv.org/pdf/1709.08524v1
info: Title Generative learning deep network Summary Learning taking account full distribution data referred generative feasible deep neural network DNNs model conditional distribution output given input Current solution either based joint probability model facing difficult estimation problem learn two separate network mappi...
embeddings: [-0.013764153234660625, 0.06498578935861588, -0.03143179789185524, 0.03569284453988075, 0.021829647943377495, -0.028896350413560867, 0.060644734650850296, -0.018914001062512398, -0.024976620450615883, 0.03555653989315033, 0.02034589648246765, 0.012766139581799507, -0.009108430705964565, 0.05768049135804176, 0.034216392...
Unnamed: 0.1: 225
Unnamed: 0: 225
author: ['Hanxiao Liu', 'Karen Simonyan', 'Oriol Vinyals', 'Chrisantha Fernando', 'Koray Kavukcuoglu']
id: 1711.00436v2
summary: We explore efficient neural architecture search methods and show that a simple yet powerful evolutionary algorithm can discover new architectures with excellent performance. Our approach combines a novel hierarchical genetic representation scheme that imitates the modularized design pattern commonly adopted by human ex...
title: Hierarchical Representations for Efficient Architecture Search
year: 2017
arxiv_url: http://arxiv.org/pdf/1711.00436v2
info: Title Hierarchical Representations Efficient Architecture Search Summary explore efficient neural architecture search method show simple yet powerful evolutionary algorithm discover new architecture excellent performance approach combine novel hierarchical genetic representation scheme imitates modularized design patte...
embeddings: [0.01517266221344471, 0.0689282938838005, -0.05876625329256058, 0.060064010322093964, -0.0023158686235547066, 0.006358189973980188, 0.02171948365867138, 0.008171390742063522, -0.0018332767067477107, 0.005735564511269331, -0.04318080097436905, -0.0018145412905141711, 0.005118620116263628, 0.02851330302655697, 0.01081409...
Unnamed: 0.1: 226
Unnamed: 0: 226
author: ['Antreas Antoniou', 'Amos Storkey', 'Harrison Edwards']
id: 1711.04340v3
summary: Effective training of neural networks requires much data. In the low-data regime, parameters are underdetermined, and learnt networks generalise poorly. Data Augmentation alleviates this by using existing data more effectively. However standard data augmentation produces only limited plausible alternative data. Given t...
title: Data Augmentation Generative Adversarial Networks
year: 2017
arxiv_url: http://arxiv.org/pdf/1711.04340v3
info: Title Data Augmentation Generative Adversarial Networks Summary Effective training neural network requires much data lowdata regime parameter underdetermined learnt network generalise poorly Data Augmentation alleviates using existing data effectively However standard data augmentation produce limited plausible alterna...
embeddings: [0.007798369042575359, 0.1127941906452179, -0.008351664058864117, 0.01784515753388405, 0.036803483963012695, -0.015771817415952682, 0.04350386559963226, -0.01760702207684517, -0.021776631474494934, 0.010825569741427898, -0.015450343489646912, 0.040628865361213684, -0.019412493333220482, 0.06117817386984825, 0.075924500...
227
227
['Dror Sholomon', 'Eli David', 'Nathan S. Netanyahu']
1711.08762v1
This paper introduces the first deep neural network-based estimation metric for the jigsaw puzzle problem. Given two puzzle piece edges, the neural network predicts whether or not they should be adjacent in the correct assembly of the puzzle, using nothing but the pixels of each piece. The proposed metric exhibits an e...
DNN-Buddies: A Deep Neural Network-Based Estimation Metric for the Jigsaw Puzzle Problem
2017
http://arxiv.org/pdf/1711.08762v1
Title DNNBuddies Deep Neural NetworkBased Estimation Metric Jigsaw Puzzle Problem Summary paper introduces first deep neural networkbased estimation metric jigsaw puzzle problem Given two puzzle piece edge neural network predicts whether adjacent correct assembly puzzle using nothing pixel piece proposed metric exhibit...
[-0.005605415441095829, 0.06549376994371414, -0.022269506007432938, 0.0740916058421135, -0.05972287803888321, -0.02298656292259693, 0.0314687080681324, -0.01741977035999298, -0.07045616209506989, 0.03507894277572632, 0.03322844207286835, 0.02667774073779583, 0.012714301235973835, 0.06369805335998535, 0.0132814226672053...
228
228
['Eli David', 'Nathan S. Netanyahu']
1711.08763v1
In this paper we describe the problem of painter classification, and propose a novel approach based on deep convolutional autoencoder neural networks. While previous approaches relied on image processing and manual feature extraction from paintings, our approach operates on the raw pixel level, without any preprocessin...
DeepPainter: Painter Classification Using Deep Convolutional Autoencoders
2017
http://arxiv.org/pdf/1711.08763v1
Title DeepPainter Painter Classification Using Deep Convolutional Autoencoders Summary paper describe problem painter classification propose novel approach based deep convolutional autoencoder neural network previous approach relied image processing manual feature extraction painting approach operates raw pixel level w...
[0.006494715344160795, 0.06416041404008865, -0.014872116968035698, 0.0733407512307167, 0.014861884526908398, -0.002644152147695422, 0.036904990673065186, -0.017610760405659676, -0.013382405042648315, 0.0001574601628817618, -0.01190575398504734, -0.010635052807629108, 0.007070810999721289, 0.012400463223457336, -0.00093...
229
229
['Ido Cohen', 'Eli David', 'Nathan S. Netanyahu', 'Noa Liscovitch', 'Gal Chechik']
1711.09663v1
This paper presents a novel deep learning-based method for learning a functional representation of mammalian neural images. The method uses a deep convolutional denoising autoencoder (CDAE) for generating an invariant, compact representation of in situ hybridization (ISH) images. While most existing methods for bio-ima...
DeepBrain: Functional Representation of Neural In-Situ Hybridization Images for Gene Ontology Classification Using Deep Convolutional Autoencoders
2017
http://arxiv.org/pdf/1711.09663v1
Title DeepBrain Functional Representation Neural InSitu Hybridization Images Gene Ontology Classification Using Deep Convolutional Autoencoders Summary paper present novel deep learningbased method learning functional representation mammalian neural image method us deep convolutional denoising autoencoder CDAE generati...
[-0.03399789705872536, 0.010101009160280228, -0.017971951514482498, 0.019560527056455612, 0.02207835391163826, 0.023933110758662224, 0.04776512458920479, 0.05417383462190628, 0.009116754867136478, 0.037622109055519104, -0.024749767035245895, -0.01833522878587246, 0.022730596363544464, 0.09499692916870117, 0.02136012539...
230
230
['Omid Poursaeed', 'Isay Katsman', 'Bicheng Gao', 'Serge Belongie']
1712.02328v1
In this paper, we propose novel generative models for creating adversarial examples, slightly perturbed images resembling natural images but maliciously crafted to fool pre-trained models. We present trainable deep neural networks for transforming images to adversarial perturbations. Our proposed models can produce ima...
Generative Adversarial Perturbations
2017
http://arxiv.org/pdf/1712.02328v1
Title Generative Adversarial Perturbations Summary paper propose novel generative model creating adversarial example slightly perturbed image resembling natural image maliciously crafted fool pretrained model present trainable deep neural network transforming image adversarial perturbation proposed model produce imagea...
[0.008410746231675148, 0.05454821512103081, -0.03014814667403698, 0.049116428941488266, -0.024745067581534386, -0.013837946578860283, 0.023260800167918205, -0.024695586413145065, -0.037630245089530945, -0.0020487233996391296, -0.014552706852555275, 0.057706501334905624, -0.012826532125473022, 0.026701834052801132, 0.06...
231
231
['Logan Engstrom', 'Brandon Tran', 'Dimitris Tsipras', 'Ludwig Schmidt', 'Aleksander Madry']
1712.02779v3
We show that simple transformations, namely translations and rotations alone, are sufficient to fool neural network-based vision models on a significant fraction of inputs. This is in sharp contrast to previous work that relied on more complicated optimization approaches that are unlikely to appear outside of a truly a...
A Rotation and a Translation Suffice: Fooling CNNs with Simple Transformations
2017
http://arxiv.org/pdf/1712.02779v3
Title Rotation Translation Suffice Fooling CNNs Simple Transformations Summary show simple transformation namely translation rotation alone sufficient fool neural networkbased vision model significant fraction input sharp contrast previous work relied complicated optimization approach unlikely appear outside truly adve...
[0.017787711694836617, 0.006879817694425583, -0.024337947368621826, 0.025793012231588364, -0.02040625736117363, 0.012200893834233284, 0.06180109456181526, 0.015304679051041603, -0.06062167510390282, -0.036799050867557526, -0.0036219065077602863, 0.06119444593787193, 0.0380178801715374, 0.016379501670598984, 0.055030435...
232
232
['Boyang Deng', 'Junjie Yan', 'Dahua Lin']
1712.03351v1
The quest for performant networks has been a significant force that drives the advancements of deep learning in recent years. While rewarding, improving network design has never been an easy journey. The large design space combined with the tremendous cost required for network training poses a major obstacle to this en...
Peephole: Predicting Network Performance Before Training
2017
http://arxiv.org/pdf/1712.03351v1
Title Peephole Predicting Network Performance Training Summary quest performant network significant force drive advancement deep learning recent year rewarding improving network design never easy journey large design space combined tremendous cost required network training pose major obstacle endeavor work propose new ...
[-0.03332606703042984, 0.054034698754549026, -0.0035848221741616726, 0.03292200341820717, 0.03933834657073021, -0.04411165416240692, 0.021326975896954536, 0.0023866964038461447, -0.033627405762672424, -0.038252171128988266, -0.02949833869934082, 0.009029288776218891, 0.017469746991991997, 0.0657353550195694, 0.05195369...
233
233
['Abien Fred Agarap']
1712.03541v1
Convolutional neural networks (CNNs) are similar to "ordinary" neural networks in the sense that they are made up of hidden layers consisting of neurons with "learnable" parameters. These neurons receive inputs, perform a dot product, and then follow it with a non-linearity. The whole network expresses the mapping be...
An Architecture Combining Convolutional Neural Network (CNN) and Support Vector Machine (SVM) for Image Classification
2017
http://arxiv.org/pdf/1712.03541v1
Title Architecture Combining Convolutional Neural Network CNN Support Vector Machine SVM Image Classification Summary Convolutional neural network CNNs similar ordinary neural network sense made hidden layer consisting neuron learnable parameter neuron receive input performs dot product follows nonlinearity whole netwo...
[0.04178604111075401, 0.0010077209444716573, -0.018232231959700584, 0.07323478162288666, 6.074137854739092e-05, -0.007431807462126017, 0.059503935277462006, -0.007657527457922697, -0.03318963572382927, -0.013424609787762165, -0.03081115521490574, 0.043146539479494095, 0.007727096788585186, 0.06916404515504837, 0.053730...
234
234
['Ekaba Bisong']
1712.08314v2
Artificial Neural Networks are a particular class of learning systems modeled after biological neural functions with an interesting penchant for Hebbian learning, that is "neurons that wire together, fire together". However, unlike their natural counterparts, artificial neural networks have a close and stringent couplin...
Benchmarking Decoupled Neural Interfaces with Synthetic Gradients
2017
http://arxiv.org/pdf/1712.08314v2
Title Benchmarking Decoupled Neural Interfaces Synthetic Gradients Summary Artificial Neural Networks particular class learning system modeled biological neural function interesting penchant Hebbian learning neuron wire together fire together However unlike natural counterpart artificial neural network close stringent c...
[-0.03697482869029045, 0.05663694441318512, -0.02497054450213909, 0.029748650267720222, -0.012712767347693443, -0.03279449790716171, 0.08903144299983978, -0.0162457674741745, 0.020958641543984413, 0.002326270332559943, -0.049318134784698486, 0.04366159811615944, 0.03583143278956413, 0.025675173848867416, 0.015638167038...
235
235
['Amin Fehri', 'Santiago Velasco-Forero', 'Fernand Meyer']
1802.07008v1
Image segmentation is the process of partitioning an image into a set of meaningful regions according to some criteria. Hierarchical segmentation has emerged as a major trend in this regard as it favors the emergence of important regions at different scales. On the other hand, many methods allow us to have prior inform...
Segmentation hiérarchique faiblement supervisée
2018
http://arxiv.org/pdf/1802.07008v1
Title Segmentation hiérarchique faiblement supervisée Summary Image segmentation process partitioning image set meaningful region according criterion Hierarchical segmentation emerged major trend regard favor emergence important region different scale hand many method allow u prior information position structure intere...
[-0.006895685568451881, -0.011677310802042484, -0.013283872045576572, 0.05941111221909523, -0.05897188186645508, 0.004648934584110975, 0.019284600391983986, 0.018355773761868477, 0.00807526521384716, 0.01541792880743742, 0.008606784977018833, 0.02819819375872612, 0.04107086732983589, 0.0046898671425879, -0.020383853465...
236
236
['Mark D. McDonnell']
1802.08530v1
For fast and energy-efficient deployment of trained deep neural networks on resource-constrained embedded hardware, each learned weight parameter should ideally be represented and stored using a single bit. Error-rates usually increase when this requirement is imposed. Here, we report large improvements in error rates ...
Training wide residual networks for deployment using a single bit for each weight
2018
http://arxiv.org/pdf/1802.08530v1
Title Training wide residual network deployment using single bit weight Summary fast energyefficient deployment trained deep neural network resourceconstrained embedded hardware learned weight parameter ideally represented stored using single bit Errorrates usually increase requirement imposed report large improvement ...
[-0.007621029857546091, 0.013809921219944954, -0.005200904794037342, 0.04447505995631218, 0.028674913570284843, -0.02496200241148472, 0.0607050396502018, -0.001097885426133871, -0.025637412443757057, 0.030242057517170906, -0.005632867105305195, 0.027116065844893456, -0.018702542409300804, 0.03824552521109581, 0.0272639...
237
237
['Abien Fred Agarap']
1803.08375v1
We introduce the use of rectified linear units (ReLU) as the classification function in a deep neural network (DNN). Conventionally, ReLU is used as an activation function in DNNs, with Softmax function as their classification function. However, there have been several studies on using a classification function other t...
Deep Learning using Rectified Linear Units (ReLU)
2018
http://arxiv.org/pdf/1803.08375v1
Title Deep Learning using Rectified Linear Units ReLU Summary introduce use rectified linear unit ReLU classification function deep neural network DNN Conventionally ReLU used activation function DNNs Softmax function classification function However several study using classification function Softmax study addition acc...
[-0.017851971089839935, -0.02014479786157608, -0.007213265169411898, 0.04017939046025276, 0.026483051478862762, -0.00466179521754384, 0.06296517699956894, -0.010629304684698582, -0.008163356222212315, -0.0031884177587926388, 0.005745685659348965, -0.008078278973698616, -0.009909066371619701, 0.07852396368980408, 0.0115...
238
238
['Djork-Arné Clevert', 'Andreas Mayr', 'Thomas Unterthiner', 'Sepp Hochreiter']
1502.06464v2
We propose rectified factor networks (RFNs) to efficiently construct very sparse, non-linear, high-dimensional representations of the input. RFN models identify rare and small events in the input, have a low interference between code units, have a small reconstruction error, and explain the data covariance structure. R...
Rectified Factor Networks
2015
http://arxiv.org/pdf/1502.06464v2
Title Rectified Factor Networks Summary propose rectified factor network RFNs efficiently construct sparse nonlinear highdimensional representation input RFN model identify rare small event input low interference code unit small reconstruction error explain data covariance structure RFN learning generalized alternating...
[-0.046232495456933975, 0.013117842376232147, -0.02872827835381031, 0.03222096711397171, 0.050006262958049774, 0.0023814458400011063, 0.029582221060991287, 0.04177924618124962, -0.0456358976662159, 0.03354101628065109, 0.00545511906966567, 0.015595695935189724, 0.017880761995911598, 0.12493440508842468, -0.022241465747...
239
239
['Qi Wang', 'Joseph JaJa']
1312.1909v1
Motivated by an important insight from neural science, we propose a new framework for understanding the success of the recently proposed "maxout" networks. The framework is based on encoding information on sparse pathways and recognizing the correct pathway at inference time. Elaborating further on this insight, we pro...
From Maxout to Channel-Out: Encoding Information on Sparse Pathways
2013
http://arxiv.org/pdf/1312.1909v1
Title Maxout ChannelOut Encoding Information Sparse Pathways Summary Motivated important insight neural science propose new framework understanding success recently proposed maxout network framework based encoding information sparse pathway recognizing correct pathway inference time Elaborating insight propose novel de...
[-0.003104336326941848, 0.007492174860090017, 0.005086570046842098, 0.03221803531050682, 0.019866492599248886, -0.014069817960262299, 0.01960849016904831, 0.008129559457302094, -0.0948764905333519, 0.01597270369529724, -0.014996765181422234, 0.024849604815244675, -0.02412574365735054, 0.0745856985449791, 0.008558634668...
240
240
['Takashi Shinozaki', 'Yasushi Naruse']
1312.5845v7
We propose a novel learning method for multilayered neural networks which uses feedforward supervisory signal and associates classification of a new input with that of pre-trained input. The proposed method effectively uses rich input information in the earlier layer for robust learning and revising internal representat...
Competitive Learning with Feedforward Supervisory Signal for Pre-trained Multilayered Networks
2013
http://arxiv.org/pdf/1312.5845v7
Title Competitive Learning Feedforward Supervisory Signal Pretrained Multilayered Networks Summary propose novel learning method multilayered neural network us feedforward supervisory signal associate classification new input pretrained input proposed method effectively us rich input information earlier layer robust le...
[-0.04009803384542465, 0.009164806455373764, -0.011097167618572712, -0.0011576570104807615, 0.014742846600711346, 0.006361402105540037, 0.035141341388225555, -0.01559604611247778, -0.006365400273352861, -0.01850414089858532, -0.03009050153195858, 0.004498535301536322, -0.019216449931263924, 0.01816696859896183, 0.01619...
241
241
['Chen-Yu Lee', 'Saining Xie', 'Patrick Gallagher', 'Zhengyou Zhang', 'Zhuowen Tu']
1409.5185v2
Our proposed deeply-supervised nets (DSN) method simultaneously minimizes classification error while making the learning process of hidden layers direct and transparent. We make an attempt to boost the classification performance by studying a new formulation in deep networks. Three aspects in convolutional neural netwo...
Deeply-Supervised Nets
2014
http://arxiv.org/pdf/1409.5185v2
Title DeeplySupervised Nets Summary proposed deeplysupervised net DSN method simultaneously minimizes classification error making learning process hidden layer direct transparent make attempt boost classification performance studying new formulation deep network Three aspect convolutional neural network CNN style archi...
[-0.009686863049864769, 0.0631462037563324, -0.02084679901599884, 0.06011141091585159, 0.02666478045284748, -0.025617189705371857, 0.03203669562935829, -0.014180340804159641, -0.014078167267143726, 0.010133367963135242, -0.021181421354413033, 0.033930275589227676, -0.015635931864380836, 0.07227247953414917, -0.01105375...
242
242
['Behnam Neyshabur', 'Ruslan Salakhutdinov', 'Nathan Srebro']
1506.02617v1
We revisit the choice of SGD for training deep neural networks by reconsidering the appropriate geometry in which to optimize the weights. We argue for a geometry invariant to rescaling of weights that does not affect the output of the network, and suggest Path-SGD, which is an approximate steepest descent method with ...
Path-SGD: Path-Normalized Optimization in Deep Neural Networks
2015
http://arxiv.org/pdf/1506.02617v1
Title PathSGD PathNormalized Optimization Deep Neural Networks Summary revisit choice SGD training deep neural network reconsidering appropriate geometry optimize weight argue geometry invariant rescaling weight affect output network suggest PathSGD approximate steepest descent method respect pathwise regularizer relat...
[-0.02088894322514534, 0.03726065903902054, -0.01929231360554695, 0.04762108996510506, 0.017126496881246567, -0.037848442792892456, 0.01982071064412594, -0.011975996196269989, -0.031421370804309845, 0.03843139484524727, 0.02079521119594574, 0.01623847894370556, -0.02488737367093563, 0.015691833570599556, 0.030969014391...
243
243
['Alan Mosca', 'George D. Magoulas']
1509.04612v2
The Resilient Propagation (Rprop) algorithm has been very popular for backpropagation training of multilayer feed-forward neural networks in various applications. The standard Rprop however encounters difficulties in the context of deep neural networks as typically happens with gradient-based learning algorithms. In th...
Adapting Resilient Propagation for Deep Learning
2015
http://arxiv.org/pdf/1509.04612v2
Title Adapting Resilient Propagation Deep Learning Summary Resilient Propagation Rprop algorithm popular backpropagation training multilayer feedforward neural network various application standard Rprop however encounter difficulty context deep neural network typically happens gradientbased learning algorithm paper pro...
[-0.016168491914868355, 0.019840942695736885, -0.029285850003361702, 0.01610058918595314, 5.419364242698066e-05, -0.03379685431718826, -0.0014425793197005987, -0.01248942967504263, -0.06541429460048676, 0.00048020516987890005, 0.03383493795990944, 0.019835278391838074, 0.02872784622013569, 0.0385395772755146, 0.0108206...
244
244
['Nastaran Mohammadian Rad', 'Andrea Bizzego', 'Seyed Mostafa Kia', 'Giuseppe Jurman', 'Paola Venuti', 'Cesare Furlanello']
1511.01865v3
Autism Spectrum Disorders (ASDs) are often associated with specific atypical postural or motor behaviors, of which Stereotypical Motor Movements (SMMs) have a specific visibility. While the identification and the quantification of SMM patterns remain complex, its automation would provide support to accurate tuning of t...
Convolutional Neural Network for Stereotypical Motor Movement Detection in Autism
2015
http://arxiv.org/pdf/1511.01865v3
Title Convolutional Neural Network Stereotypical Motor Movement Detection Autism Summary Autism Spectrum Disorders ASDs often associated specific atypical postural motor behavior Stereotypical Motor Movements SMMs specific visibility identification quantification SMM pattern remain complex automation would provide supp...
[-0.0202474445104599, 0.0006815855740569532, -0.04242274537682533, 0.05192091315984726, 0.06507916748523712, 0.019127126783132553, 0.02270711585879326, -0.017676804214715958, -0.054764408618211746, -0.029056217521429062, 0.018322328105568886, -0.011970349587500095, 0.025815501809120178, 0.06343507766723633, -0.00378999...
245
245
['Sasha Targ', 'Diogo Almeida', 'Kevin Lyman']
1603.08029v1
Residual networks (ResNets) have recently achieved state-of-the-art on challenging computer vision tasks. We introduce Resnet in Resnet (RiR): a deep dual-stream architecture that generalizes ResNets and standard CNNs and is easily implemented with no computational overhead. RiR consistently improves performance over R...
Resnet in Resnet: Generalizing Residual Architectures
2016
http://arxiv.org/pdf/1603.08029v1
Title Resnet Resnet Generalizing Residual Architectures Summary Residual network ResNets recently achieved stateoftheart challenging computer vision task introduce Resnet Resnet RiR deep dualstream architecture generalizes ResNets standard CNNs easily implemented computational overhead RiR consistently improves perform...
[-0.023636987432837486, 0.03913691267371178, -0.018584342673420906, 0.06586231291294098, -0.008258814923465252, 0.0023474704939872026, 0.003197903512045741, -0.003969330340623856, -0.05937030166387558, 0.02191881462931633, 0.022512001916766167, 0.0013895828742533922, 0.0011546164751052856, 0.02555547095835209, 0.003185...
246
246
['Mohammad Javad Shafiee', 'Alexander Wong']
1609.01360v2
There has been significant recent interest towards achieving highly efficient deep neural network architectures. A promising paradigm for achieving this is the concept of evolutionary deep intelligence, which attempts to mimic biological evolution processes to synthesize highly-efficient deep neural networks over succe...
Evolutionary Synthesis of Deep Neural Networks via Synaptic Cluster-driven Genetic Encoding
2016
http://arxiv.org/pdf/1609.01360v2
Title Evolutionary Synthesis Deep Neural Networks via Synaptic Clusterdriven Genetic Encoding Summary significant recent interest towards achieving highly efficient deep neural network architecture promising paradigm achieving concept evolutionary deep intelligence attempt mimic biological evolution process synthesize ...
[-0.04819837585091591, 0.023699864745140076, -0.05558415502309799, 0.017693307250738144, 0.007717323023825884, 0.010449408553540707, 0.0286715030670166, 0.011082939803600311, -0.016567587852478027, 0.03959182649850845, -0.029942557215690613, 0.009146512486040592, -0.020443174988031387, 0.03525388240814209, 0.0398170538...
247
247
['Andrew Brock', 'Theodore Lim', 'J. M. Ritchie', 'Nick Weston']
1609.07093v3
The increasingly photorealistic sample quality of generative image models suggests their feasibility in applications beyond image generation. We present the Neural Photo Editor, an interface that leverages the power of generative neural networks to make large, semantically coherent changes to existing images. To tackle...
Neural Photo Editing with Introspective Adversarial Networks
2016
http://arxiv.org/pdf/1609.07093v3
Title Neural Photo Editing Introspective Adversarial Networks Summary increasingly photorealistic sample quality generative image model suggests feasibility application beyond image generation present Neural Photo Editor interface leverage power generative neural network make large semantically coherent change existing...
[0.016065798699855804, 0.09007365256547928, 0.015057975426316261, 0.026203274726867676, 0.00218421733006835, -0.05233510583639145, 0.018378846347332, -0.02055145613849163, -0.07738371193408966, 0.00926798302680254, 0.03939371556043625, -0.024948840960860252, 0.01631634496152401, 0.01621488854289055, 0.07300959527492523...
248
248
['Tolga Bolukbasi', 'Joseph Wang', 'Ofer Dekel', 'Venkatesh Saligrama']
1702.07811v2
We present an approach to adaptively utilize deep neural networks in order to reduce the evaluation time on new examples without loss of accuracy. Rather than attempting to redesign or approximate existing networks, we propose two schemes that adaptively utilize networks. We first pose an adaptive network evaluation sc...
Adaptive Neural Networks for Efficient Inference
2017
http://arxiv.org/pdf/1702.07811v2
Title Adaptive Neural Networks Efficient Inference Summary present approach adaptively utilize deep neural network order reduce evaluation time new example without loss accuracy Rather attempting redesign approximate existing network propose two scheme adaptively utilize network first pose adaptive network evaluation s...
[-0.010655594058334827, 0.07083895057439804, -0.02189578115940094, 0.040964525192976, 0.016899876296520233, 0.003200060222297907, 0.08283265680074692, 0.020197181031107903, 0.009742124937474728, -0.023342343047261238, -0.02673979662358761, 0.03044191189110279, 0.019908875226974487, 0.0055004204623401165, -0.00521785300...
249
249
['Zhengyang Wang', 'Hao Yuan', 'Shuiwang Ji']
1705.06821v1
The key idea of variational auto-encoders (VAEs) resembles that of traditional auto-encoder models in which spatial information is supposed to be explicitly encoded in the latent space. However, the latent variables in VAEs are vectors, which are commonly interpreted as multiple feature maps of size 1x1. Such represent...
Spatial Variational Auto-Encoding via Matrix-Variate Normal Distributions
2017
http://arxiv.org/pdf/1705.06821v1
Title Spatial Variational AutoEncoding via MatrixVariate Normal Distributions Summary key idea variational autoencoders VAEs resembles traditional autoencoder model spatial information supposed explicitly encoded latent space However latent variable VAEs vector commonly interpreted multiple feature map size 1x1 represe...
[-0.02362840436398983, 0.03753163293004036, -0.016020672395825386, 0.003944622352719307, -0.008386004716157913, 0.0237281396985054, 0.019690774381160736, -0.05245620012283325, -0.06290556490421295, 0.030429089441895485, -0.014068732969462872, 0.04389841482043266, 0.030763179063796997, 0.09631757438182831, 0.05527981743...
250
250
['Jun Li', 'Yongjun Chen', 'Lei Cai', 'Ian Davidson', 'Shuiwang Ji']
1705.08881v2
The key idea of current deep learning methods for dense prediction is to apply a model on a regular patch centered on each pixel to make pixel-wise predictions. These methods are limited in the sense that the patches are determined by network architecture instead of learned from data. In this work, we propose the dense...
Dense Transformer Networks
2017
http://arxiv.org/pdf/1705.08881v2
Title Dense Transformer Networks Summary key idea current deep learning method dense prediction apply model regular patch centered pixel make pixelwise prediction method limited sense patch determined network architecture instead learned data work propose dense transformer network learn shape size patch data dense tran...
[-0.019880279898643494, 0.033221058547496796, 0.0034778385888785124, 0.018752625212073326, 0.01973993517458439, -0.04496455937623978, 0.022053755819797516, -0.01864602044224739, -0.06360097229480743, 0.0651354119181633, 0.0012996755540370941, 0.031202536076307297, 0.02019202895462513, 0.08069653064012527, 0.01318674813...
251
251
['Saikat Chatterjee', 'Alireza M. Javid', 'Mostafa Sadeghi', 'Partha P. Mitra', 'Mikael Skoglund']
1710.08177v1
We develop an algorithm for systematic design of a large artificial neural network using a progression property. We find that some non-linear functions, such as the rectifier linear unit and its derivatives, hold the property. The systematic design addresses the choice of network size and regularization of parameters. ...
Progressive Learning for Systematic Design of Large Neural Networks
2017
http://arxiv.org/pdf/1710.08177v1
Title Progressive Learning Systematic Design Large Neural Networks Summary develop algorithm systematic design large artificial neural network using progression property find nonlinear function rectifier linear unit derivative hold property systematic design address choice network size regularization parameter number n...
[-0.007100499235093594, 0.07661841809749603, -0.014514010399580002, -0.022471878677606583, 0.006745586637407541, -0.035336531698703766, 0.032387640327215195, 0.004059568513184786, -0.03514965623617172, 0.020913951098918915, 0.02031024731695652, 0.04010375961661339, 0.03836868703365326, 0.037399277091026306, 0.014155317...
252
252
['Shibani Santurkar', 'Ludwig Schmidt', 'Aleksander Mądry']
1711.00970v3
A fundamental, and still largely unanswered, question in the context of Generative Adversarial Networks (GANs) is whether GANs are actually able to capture the key characteristics of the datasets they are trained on. The current approaches to examining this issue require significant human supervision, such as visual in...
A Classification-Based Perspective on GAN Distributions
2017
http://arxiv.org/pdf/1711.00970v3
Title ClassificationBased Perspective GAN Distributions Summary fundamental still largely unanswered question context Generative Adversarial Networks GANs whether GANs actually able capture key characteristic datasets trained current approach examining issue require significant human supervision visual inspection sampl...
[0.0021763378754258156, 0.08285621553659439, -0.018773389980196953, 0.022689875215291977, 0.017496628686785698, -0.018213102594017982, 0.02479293756186962, -0.008787611499428749, -0.05918903276324272, 0.026047758758068085, -0.026144003495573997, -0.00343594909645617, -0.003952491097152233, 0.014672715216875076, 0.07179...
253
253
['Ethan Perez', 'Harm de Vries', 'Florian Strub', 'Vincent Dumoulin', 'Aaron Courville']
1707.03017v5
Achieving artificial visual reasoning - the ability to answer image-related questions which require a multi-step, high-level process - is an important step towards artificial general intelligence. This multi-modal task requires learning a question-dependent, structured reasoning process over images from language. Stand...
Learning Visual Reasoning Without Strong Priors
2017
http://arxiv.org/pdf/1707.03017v5
Title Learning Visual Reasoning Without Strong Priors Summary Achieving artificial visual reasoning ability answer imagerelated question require multistep highlevel process important step towards artificial general intelligence multimodal task requires learning questiondependent structured reasoning process image langu...
[0.007242904976010323, 0.037822578102350235, -0.0016457148594781756, 0.03268323093652725, -0.01724429428577423, 0.04981609061360359, 0.03430554270744324, 0.0010658128885552287, 0.019291743636131287, -0.017107194289565086, -0.0006256354390643537, 0.02180442027747631, -0.00687773572281003, 0.06499188393354416, 0.02795973...
254
254
['Jieyu Zhao', 'Tianlu Wang', 'Mark Yatskar', 'Vicente Ordonez', 'Kai-Wei Chang']
1707.09457v1
Language is increasingly being used to define rich visual recognition problems with supporting image collections sourced from the web. Structured prediction models are used in these tasks to take advantage of correlations between co-occurring labels and visual input but risk inadvertently encoding social biases found i...
Men Also Like Shopping: Reducing Gender Bias Amplification using Corpus-level Constraints
2017
http://arxiv.org/pdf/1707.09457v1
Title Men Also Like Shopping Reducing Gender Bias Amplification using Corpuslevel Constraints Summary Language increasingly used define rich visual recognition problem supporting image collection sourced web Structured prediction model used task take advantage correlation cooccurring label visual input risk inadvertent...
[0.06009983643889427, 0.07471739500761032, -0.027074970304965973, -0.010839974507689476, 0.028910618275403976, 0.056949615478515625, 0.07157381623983383, -0.00790663156658411, 0.015195939689874649, -0.09310241043567657, 0.0076153394766151905, -0.01762606017291546, 0.028808385133743286, 0.02023264765739441, 0.0086589902...
255
255
['Guillem Collell', 'Luc Van Gool', 'Marie-Francine Moens']
1711.06821v2
Spatial understanding is a fundamental problem with wide-reaching real-world applications. The representation of spatial knowledge is often modeled with spatial templates, i.e., regions of acceptability of two objects under an explicit spatial relationship (e.g., "on", "below", etc.). In contrast with prior work that r...
Acquiring Common Sense Spatial Knowledge through Implicit Spatial Templates
2017
http://arxiv.org/pdf/1711.06821v2
Title Acquiring Common Sense Spatial Knowledge Implicit Spatial Templates Summary Spatial understanding fundamental problem widereaching realworld application representation spatial knowledge often modeled spatial template ie region acceptability two object explicit spatial relationship eg etc contrast prior work restr...
[0.0390472486615181, 0.05449872091412544, -0.01978192664682865, 0.0235537551343441, -0.015112494118511677, 0.026040440425276756, 0.02543070539832115, -0.03768523409962654, -0.0028930793050676584, -0.043474871665239334, -0.021396949887275696, 0.044973064213991165, 0.0420638769865036, 0.05469481647014618, 0.0506098046898...
256
256
['Ethan Perez', 'Florian Strub', 'Harm de Vries', 'Vincent Dumoulin', 'Aaron Courville']
1709.07871v2
We introduce a general-purpose conditioning method for neural networks called FiLM: Feature-wise Linear Modulation. FiLM layers influence neural network computation via a simple, feature-wise affine transformation based on conditioning information. We show that FiLM layers are highly effective for visual reasoning - an...
FiLM: Visual Reasoning with a General Conditioning Layer
2017
http://arxiv.org/pdf/1709.07871v2
Title FiLM Visual Reasoning General Conditioning Layer Summary introduce generalpurpose conditioning method neural network called FiLM Featurewise Linear Modulation FiLM layer influence neural network computation via simple featurewise affine transformation based conditioning information show FiLM layer highly effectiv...
[-0.020262116566300392, 0.026702584698796272, -0.0076394337229430676, 0.020431069657206535, 0.0009277117205783725, -0.03338836506009102, 0.07610216736793518, -0.00010991925955750048, -0.07093799859285355, 0.004698887001723051, 0.0132906474173069, -0.0012317874934524298, 0.02308536134660244, 0.07724828273057938, -0.0047...
257
257
['Ivan Titov', 'Ehsan Khoddam']
1412.2812v1
We introduce a new approach to unsupervised estimation of feature-rich semantic role labeling models. Our model consists of two components: (1) an encoding component: a semantic role labeling model which predicts roles given a rich set of syntactic and lexical features; (2) a reconstruction component: a tensor factoriz...
Unsupervised Induction of Semantic Roles within a Reconstruction-Error Minimization Framework
2014
http://arxiv.org/pdf/1412.2812v1
Title Unsupervised Induction Semantic Roles within ReconstructionError Minimization Framework Summary introduce new approach unsupervised estimation featurerich semantic role labeling model model consists two component 1 encoding component semantic role labeling model predicts role given rich set syntactic lexical feat...
[0.04606027156114578, 0.030625415965914726, -0.01615452580153942, 0.07971205562353134, -0.030639758333563805, 0.01852208562195301, -0.02940760925412178, -0.02171035297214985, -0.03535516560077667, -0.07344570010900497, -0.006927483715116978, -0.0360574871301651, -0.00732341967523098, 0.021620342507958412, -0.0053366902...
258
258
['Tolga Bolukbasi', 'Kai-Wei Chang', 'James Zou', 'Venkatesh Saligrama', 'Adam Kalai']
1607.06520v1
The blind application of machine learning runs the risk of amplifying biases present in data. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors which has been used in many machine learning and natural language processing tasks. We show that even word embeddings traine...
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
2016
http://arxiv.org/pdf/1607.06520v1
Title Man Computer Programmer Woman Homemaker Debiasing Word Embeddings Summary blind application machine learning run risk amplifying bias present data danger facing u word embedding popular framework represent text data vector used many machine learning natural language processing task show even word embeddings train...
[0.044939227402210236, 0.07646720111370087, -0.02984785847365856, 0.019330870360136032, 0.0043589952401816845, 0.020647117868065834, 0.034630268812179565, -0.02673092857003212, 0.02662590891122818, -0.046371929347515106, 0.04187776893377304, 0.0023317565210163593, 0.06847120821475983, 0.012187430635094643, 0.0187245048...
259
259
['Adji B. Dieng', 'Chong Wang', 'Jianfeng Gao', 'John Paisley']
1611.01702v2
In this paper, we propose TopicRNN, a recurrent neural network (RNN)-based language model designed to directly capture the global semantic meaning relating words in a document via latent topics. Because of their sequential nature, RNNs are good at capturing the local structure of a word sequence - both semantic and syn...
TopicRNN: A Recurrent Neural Network with Long-Range Semantic Dependency
2016
http://arxiv.org/pdf/1611.01702v2
Title TopicRNN Recurrent Neural Network LongRange Semantic Dependency Summary paper propose TopicRNN recurrent neural network RNNbased language model designed directly capture global semantic meaning relating word document via latent topic sequential nature RNNs good capturing local structure word sequence semantic syn...
[0.039330221712589264, 0.03902608901262283, 0.005862101446837187, 0.07468004524707794, -0.04729980230331421, -0.017305465415120125, -0.018174050375819206, -0.0029418659396469593, -0.039891425520181656, -0.05232084542512894, -0.0013909466797485948, -0.00482860067859292, 0.02484303154051304, 0.03141802176833153, -0.02759...
260
260
['Liwen Zhang', 'John Winn', 'Ryota Tomioka']
1611.02266v2
We propose the Gaussian attention model for content-based neural memory access. With the proposed attention model, a neural network has the additional degree of freedom to control the focus of its attention from a laser sharp attention to a broad attention. It is applicable whenever we can assume that the distance in t...
Gaussian Attention Model and Its Application to Knowledge Base Embedding and Question Answering
2016
http://arxiv.org/pdf/1611.02266v2
Title Gaussian Attention Model Application Knowledge Base Embedding Question Answering Summary propose Gaussian attention model contentbased neural memory access proposed attention model neural network additional degree freedom control focus attention laser sharp attention broad attention applicable whenever assume dis...
[0.04573822021484375, 0.021794361993670464, -8.420786798524205e-06, 0.05959530919790268, 0.01120956614613533, 0.0024439350236207247, -0.02130891941487789, -0.012029629200696945, -0.016816748306155205, -0.056387417018413544, 0.042499374598264694, 0.006019329186528921, -0.014441725797951221, 0.04128998890519142, 0.034159...
261
261
['Yacine Jernite', 'Edouard Grave', 'Armand Joulin', 'Tomas Mikolov']
1611.06188v2
Recurrent neural networks (RNNs) have been used extensively and with increasing success to model various types of sequential data. Much of this progress has been achieved through devising recurrent units and architectures with the flexibility to capture complex statistics in the data, such as long range dependency or l...
Variable Computation in Recurrent Neural Networks
2016
http://arxiv.org/pdf/1611.06188v2
Title Variable Computation Recurrent Neural Networks Summary Recurrent neural network RNNs used extensively increasing success model various type sequential data Much progress achieved devising recurrent unit architecture flexibility capture complex statistic data long range dependency localized attention phenomenon Ho...
[0.024442000314593315, 0.02478458732366562, 0.0005417542415671051, 0.04417232424020767, -0.0052021867595613, -0.016433410346508026, 0.03130257874727249, -0.02177743799984455, -0.04024537280201912, -0.01409372128546238, 0.008027873933315277, -0.03966596722602844, 0.045676253736019135, 0.061685752123594284, 0.01568374037...
262
262
['Mostafa Dehghani', 'Aliaksei Severyn', 'Sascha Rothe', 'Jaap Kamps']
1711.11383v1
In this paper, we propose a method for training neural networks when we have a large set of data with weak labels and a small amount of data with true labels. In our proposed model, we train two neural networks: a target network, the learner and a confidence network, the meta-learner. The target network is optimized to...
Learning to Learn from Weak Supervision by Full Supervision
2017
http://arxiv.org/pdf/1711.11383v1
Title Learning Learn Weak Supervision Full Supervision Summary paper propose method training neural network large set data weak label small amount data true label proposed model train two neural network target network learner confidence network metalearner target network optimized perform given task trained using large...
[0.016199544072151184, 0.008898280560970306, -0.0032587472815066576, 0.010760168544948101, 0.01855023205280304, -0.011139057576656342, 0.03655419871211052, -0.025033380836248398, 0.0042014648206532, 0.010688869282603264, -0.06303218752145767, 0.0256331916898489, -0.014091835357248783, 0.023681145161390305, 0.0366879180...
263
263
['Garrett B. Goh', 'Nathan O. Hodas', 'Charles Siegel', 'Abhinav Vishnu']
1712.02034v2
Chemical databases store information in text representations, and the SMILES format is a universal standard used in many cheminformatics software. Encoded in each SMILES string is structural information that can be used to predict complex chemical properties. In this work, we develop SMILES2vec, a deep RNN that automat...
SMILES2Vec: An Interpretable General-Purpose Deep Neural Network for Predicting Chemical Properties
2017
http://arxiv.org/pdf/1712.02034v2
Title SMILES2Vec Interpretable GeneralPurpose Deep Neural Network Predicting Chemical Properties Summary Chemical database store information text representation SMILES format universal standard used many cheminformatics software Encoded SMILES string structural information used predict complex chemical property work de...
[-0.0056990343146026134, 0.05438602343201637, -0.007343196775764227, 0.008709163405001163, 0.026958413422107697, -0.030311936512589455, 0.012894507497549057, -0.0001822587801143527, 0.09065743535757065, 0.030944425612688065, 0.026158733293414116, 0.02760491706430912, -0.022660505026578903, 0.0733989030122757, 0.0333320...
264
264
['Gellért Weisz', 'Paweł Budzianowski', 'Pei-Hao Su', 'Milica Gašić']
1802.03753v1
In spoken dialogue systems, we aim to deploy artificial intelligence to build automated dialogue agents that can converse with humans. A part of this effort is the policy optimisation task, which attempts to find a policy describing how to respond to humans, in the form of a function taking the current state of the dia...
Sample Efficient Deep Reinforcement Learning for Dialogue Systems with Large Action Spaces
2018
http://arxiv.org/pdf/1802.03753v1
Title Sample Efficient Deep Reinforcement Learning Dialogue Systems Large Action Spaces Summary spoken dialogue system aim deploy artificial intelligence build automated dialogue agent converse human part effort policy optimisation task attempt find policy describing respond human form function taking current state dia...
[0.05258924514055252, 0.025957388803362846, -0.007112140301615, 0.03299039974808693, 0.010600725188851357, 0.018305521458387375, 0.01092015765607357, -0.007585285231471062, 0.00495365634560585, -0.03693164512515068, -0.041278038173913956, -0.018762703984975815, -0.02209930121898651, 0.08973554521799088, 0.0083509301766...
265
265
['M. Andrecut']
1802.09914v1
In this paper we explore the "vector semantics" problem from the perspective of "almost orthogonal" property of high-dimensional random vectors. We show that this intriguing property can be used to "memorize" random vectors by simply adding them, and we provide an efficient probabilistic solution to the set membership ...
High-Dimensional Vector Semantics
2018
http://arxiv.org/pdf/1802.09914v1
Title HighDimensional Vector Semantics Summary paper explore vector semantics problem perspective almost orthogonal property highdimensional random vector show intriguing property used memorize random vector simply adding provide efficient probabilistic solution set membership problem Also discus several application wo...
[0.013657531701028347, 0.033309757709503174, 0.00046756744268350303, 0.05040276050567627, -0.02798558957874775, 0.0247584767639637, 0.0012990129180252552, 0.016715247184038162, -0.04514794796705246, -0.06318659335374832, 0.025398975238204002, 0.00190466339699924, -0.007011611945927143, -0.006851550191640854, 0.00626556...
266
266
['Ashutosh Modi', 'Ivan Titov']
1312.5198v4
Induction of common sense knowledge about prototypical sequences of events has recently received much attention. Instead of inducing this knowledge in the form of graphs, as in much of the previous work, in our method, distributed representations of event realizations are computed based on distributed representations o...
Learning Semantic Script Knowledge with Event Embeddings
2013
http://arxiv.org/pdf/1312.5198v4
Title Learning Semantic Script Knowledge Event Embeddings Summary Induction common sense knowledge prototypical sequence event recently received much attention Instead inducing knowledge form graph much previous work method distributed representation event realization computed based distributed representation predicate...
[0.014707188121974468, -0.023915736004710197, 0.013688293285667896, 0.07964979857206345, -0.02585529536008835, 0.004732577595859766, -0.030672112479805946, 0.012466762214899063, 0.07346168160438538, -0.06675880402326584, 0.052225515246391296, 0.0369463786482811, 0.009310407564043999, 0.10182614624500275, 0.020551547408...
267
267
['Andrew S. Lan', 'Divyanshu Vats', 'Andrew E. Waters', 'Richard G. Baraniuk']
1501.04346v1
While computer and communication technologies have provided effective means to scale up many aspects of education, the submission and grading of assessments such as homework assignments and tests remains a weak link. In this paper, we study the problem of automatically grading the kinds of open response mathematical qu...
Mathematical Language Processing: Automatic Grading and Feedback for Open Response Mathematical Questions
2015
http://arxiv.org/pdf/1501.04346v1
Title Mathematical Language Processing Automatic Grading Feedback Open Response Mathematical Questions Summary computer communication technology provided effective mean scale many aspect education submission grading assessment homework assignment test remains weak link paper study problem automatically grading kind ope...
[0.0008942689746618271, -0.012581785209476948, -0.051359623670578, 0.012524507008492947, 0.013330218382179737, 0.012910638935863972, 0.03785356134176254, 0.01906597800552845, 0.007265756372362375, -0.06543463468551636, -0.006016953848302364, 0.05648420751094818, 0.012943807989358902, 0.07430504262447357, -0.00021407756...
268
268
['Tadahiro Taniguchi', 'Ryo Nakashima', 'Shogo Nagasaka']
1506.06646v2
Human infants can discover words directly from unsegmented speech signals without any explicitly labeled data. In this paper, we develop a novel machine learning method called nonparametric Bayesian double articulation analyzer (NPB-DAA) that can directly acquire language and acoustic models from observed continuous sp...
Nonparametric Bayesian Double Articulation Analyzer for Direct Language Acquisition from Continuous Speech Signals
2015
http://arxiv.org/pdf/1506.06646v2
Title Nonparametric Bayesian Double Articulation Analyzer Direct Language Acquisition Continuous Speech Signals Summary Human infant discover word directly unsegmented speech signal without explicitly labeled data paper develop novel machine learning method called nonparametric Bayesian double articulation analyzer NPB...
[-0.021494179964065552, 0.0935753658413887, -0.003326444188132882, -0.005437909159809351, -0.004162050783634186, 0.05506052076816559, 0.05543316528201103, 0.009482024237513542, -0.03410143777728081, -0.019834361970424652, -0.029655329883098602, -0.017773011699318886, 0.10199090093374252, 0.019770469516515732, 0.0108447...
269
269
['Zhiting Hu', 'Xuezhe Ma', 'Zhengzhong Liu', 'Eduard Hovy', 'Eric Xing']
1603.06318v4
Combining deep neural networks with structured logic rules is desirable to harness flexibility and reduce uninterpretability of the neural models. We propose a general framework capable of enhancing various types of neural networks (e.g., CNNs and RNNs) with declarative first-order logic rules. Specifically, we develop...
Harnessing Deep Neural Networks with Logic Rules
2016
http://arxiv.org/pdf/1603.06318v4
Title Harnessing Deep Neural Networks Logic Rules Summary Combining deep neural network structured logic rule desirable harness flexibility reduce uninterpretability neural model propose general framework capable enhancing various type neural network eg CNNs RNNs declarative firstorder logic rule Specifically develop i...
[0.033003341406583786, 0.07190505415201187, 0.010013608261942863, 0.056163910776376724, -0.04210459440946579, -0.0017273036064580083, -0.03845634311437607, 0.023358173668384552, 0.025619158521294594, -0.03538328781723976, -0.00015137832087930292, 0.009299815632402897, -0.017805354669690132, 0.055462513118982315, -0.041...
270
270
['Zhiting Hu', 'Zichao Yang', 'Xiaodan Liang', 'Ruslan Salakhutdinov', 'Eric P. Xing']
1703.00955v3
Generic generation and manipulation of text is challenging and has limited success compared to recent deep generative modeling in visual domain. This paper aims at generating plausible natural language sentences, whose attributes are dynamically controlled by learning disentangled latent representations with designated...
Toward Controlled Generation of Text
2017
http://arxiv.org/pdf/1703.00955v3
Title Toward Controlled Generation Text Summary Generic generation manipulation text challenging limited success compared recent deep generative modeling visual domain paper aim generating plausible natural language sentence whose attribute dynamically controlled learning disentangled latent representation designated s...
[0.041591014713048935, 0.049242641776800156, -0.02687974087893963, 0.017289403825998306, -0.020910058170557022, 0.0009405180462636054, 0.013665645383298397, -0.03590340167284012, -0.02819523774087429, -0.01796075887978077, 0.03623415157198906, -0.027817292138934135, -0.002657985780388117, 0.08267983794212341, 0.0315427...
271
271
['Lianhui Qin', 'Zhisong Zhang', 'Hai Zhao', 'Zhiting Hu', 'Eric P. Xing']
1704.00217v1
Implicit discourse relation classification is of great challenge due to the lack of connectives as strong linguistic cues, which motivates the use of annotated implicit connectives to improve the recognition. We propose a feature imitation framework in which an implicit relation network is driven to learn from another ...
Adversarial Connective-exploiting Networks for Implicit Discourse Relation Classification
2017
http://arxiv.org/pdf/1704.00217v1
Title Adversarial Connectiveexploiting Networks Implicit Discourse Relation Classification Summary Implicit discourse relation classification great challenge due lack connective strong linguistic cue motivates use annotated implicit connective improve recognition propose feature imitation framework implicit relation ne...
[0.05799024552106857, 0.04870559275150299, 0.001366019481793046, 0.07373739033937454, 0.00565086305141449, 0.004908737726509571, 0.01874467544257641, 0.010964848101139069, 0.016286183148622513, -0.06822407990694046, -0.02528931386768818, 0.026924440637230873, 0.01248467992991209, 0.008275067433714867, -0.02954594418406...
272
272
['Maxim Rabinovich', 'Mitchell Stern', 'Dan Klein']
1704.07535v1
Tasks like code generation and semantic parsing require mapping unstructured (or partially structured) inputs to well-formed, executable outputs. We introduce abstract syntax networks, a modeling framework for these problems. The outputs are represented as abstract syntax trees (ASTs) and constructed by a decoder with ...
Abstract Syntax Networks for Code Generation and Semantic Parsing
2017
http://arxiv.org/pdf/1704.07535v1
Title Abstract Syntax Networks Code Generation Semantic Parsing Summary Tasks like code generation semantic parsing require mapping unstructured partially structured input wellformed executable output introduce abstract syntax network modeling framework problem output represented abstract syntax tree ASTs constructed d...
[0.0036666709929704666, 0.03017038106918335, -0.044676270335912704, 0.04405936226248741, -0.030508369207382202, 0.012179131619632244, -0.027734987437725067, -0.0032433553133159876, -0.0009128220262937248, -0.03829493373632431, -0.005977168213576078, 0.0694354772567749, 0.005467509850859642, 0.0688248798251152, 0.004465...
273
273
['Ben Athiwaratkun', 'Andrew Gordon Wilson']
1704.08424v1
Word embeddings provide point representations of words containing useful semantic information. We introduce multimodal word distributions formed from Gaussian mixtures, for multiple word meanings, entailment, and rich uncertainty information. To learn these distributions, we propose an energy-based max-margin objective...
Multimodal Word Distributions
2017
http://arxiv.org/pdf/1704.08424v1
Title Multimodal Word Distributions Summary Word embeddings provide point representation word containing useful semantic information introduce multimodal word distribution formed Gaussian mixture multiple word meaning entailment rich uncertainty information learn distribution propose energybased maxmargin objective sho...
[0.014139062725007534, 0.06031809747219086, -0.00011749863915611058, 0.06753607839345932, -0.00906281266361475, 0.006855569314211607, -0.04722219705581665, -0.0014130757190287113, -0.03933229297399521, -0.053519390523433685, -0.0067403726279735565, -0.009325833059847355, 0.03750859573483467, 0.04316749796271324, 0.0529...
274
274
['Brent Harrison', 'Upol Ehsan', 'Mark O. Riedl']
1707.08616v2
In this work we present a technique to use natural language to help reinforcement learning generalize to unseen environments. This technique uses neural machine translation, specifically the use of encoder-decoder networks, to learn associations between natural language behavior descriptions and state-action informatio...
Guiding Reinforcement Learning Exploration Using Natural Language
2017
http://arxiv.org/pdf/1707.08616v2
Title Guiding Reinforcement Learning Exploration Using Natural Language Summary work present technique use natural language help reinforcement learning generalize unseen environment technique us neural machine translation specifically use encoderdecoder network learn association natural language behavior description st...
[0.06082983687520027, 0.014656160026788712, -0.011895433068275452, 0.02202833816409111, 0.005228497087955475, -0.004309078212827444, -0.026899201795458794, 0.007668856997042894, -0.026616493239998817, -0.03102383390069008, -0.04038817435503006, 0.020689476281404495, 0.0008628653013147414, 0.06435170024633408, 0.0113265...
275
275
['Mo Yu', 'Xiaoxiao Guo', 'Jinfeng Yi', 'Shiyu Chang', 'Saloni Potdar', 'Gerald Tesauro', 'Haoyu Wang', 'Bowen Zhou']
1708.07918v1
We investigate task clustering for deep-learning based multi-task and few-shot learning in a many-task setting. We propose a new method to measure task similarities with cross-task transfer performance matrix for the deep learning scenario. Although this matrix provides us critical information regarding similarity betw...
Robust Task Clustering for Deep Many-Task Learning
2017
http://arxiv.org/pdf/1708.07918v1
Title Robust Task Clustering Deep ManyTask Learning Summary investigate task clustering deeplearning based multitask fewshot learning manytask setting propose new method measure task similarity crosstask transfer performance matrix deep learning scenario Although matrix provides u critical information regarding similar...
[0.001757272519171238, -0.024962512776255608, -0.03047266975045204, 0.02561763860285282, -0.007608948275446892, 0.012431913986802101, 0.026778310537338257, -0.007478294428437948, 0.05867559835314751, -0.025957508012652397, -0.06346814334392548, -0.019597521051764488, -0.03317791223526001, 0.009456802159547806, 0.005325...
276
276
['Gino Brunner', 'Yuyi Wang', 'Roger Wattenhofer', 'Michael Weigelt']
1801.06024v1
We train multi-task autoencoders on linguistic tasks and analyze the learned hidden sentence representations. The representations change significantly when translation and part-of-speech decoders are added. The more decoders a model employs, the better it clusters sentences according to their syntactic similarity, as t...
Natural Language Multitasking: Analyzing and Improving Syntactic Saliency of Hidden Representations
2018
http://arxiv.org/pdf/1801.06024v1
Title Natural Language Multitasking Analyzing Improving Syntactic Saliency Hidden Representations Summary train multitask autoencoders linguistic task analyze learned hidden sentence representation representation change significantly translation partofspeech decoder added decoder model employ better cluster sentence ac...
[0.04328468441963196, -0.0018625339725986123, -0.042040757834911346, 0.04902634024620056, -0.023598480969667435, 0.047014687210321426, 0.014851320534944534, -0.03033214807510376, 0.029052147641777992, -0.08073584735393524, -0.05591224506497383, 0.0034719135146588087, -0.025080954656004906, 0.036438219249248505, 0.03869...
277
277
['Minghai Chen', 'Sen Wang', 'Paul Pu Liang', 'Tadas Baltrušaitis', 'Amir Zadeh', 'Louis-Philippe Morency']
1802.00924v1
With the increasing popularity of video sharing websites such as YouTube and Facebook, multimodal sentiment analysis has received increasing attention from the scientific community. Contrary to previous works in multimodal sentiment analysis which focus on holistic information in speech segments such as bag of words re...
Multimodal Sentiment Analysis with Word-Level Fusion and Reinforcement Learning
2018
http://arxiv.org/pdf/1802.00924v1
Title Multimodal Sentiment Analysis WordLevel Fusion Reinforcement Learning Summary increasing popularity video sharing website YouTube Facebook multimodal sentiment analysis received increasing attention scientific community Contrary previous work multimodal sentiment analysis focus holistic information speech segment...
[0.023880409076809883, 0.06874074786901474, 0.0020064199343323708, 0.023797376081347466, -0.007718656212091446, -0.020262226462364197, 0.002905611414462328, -0.02502617985010147, -0.028708036988973618, -0.04761388897895813, -0.03299633041024208, -0.03164663165807724, 0.022039808332920074, 0.0894528478384018, 0.02580103...
278
278
['Ed Collins', 'Isabelle Augenstein', 'Sebastian Riedel']
1706.03946v1
Automatic summarisation is a popular approach to reduce a document to its main arguments. Recent research in the area has focused on neural approaches to summarisation, which can be very data-hungry. However, few large datasets exist and none for the traditionally popular domain of scientific publications, which opens ...
A Supervised Approach to Extractive Summarisation of Scientific Papers
2017
http://arxiv.org/pdf/1706.03946v1
Title Supervised Approach Extractive Summarisation Scientific Papers Summary Automatic summarisation popular approach reduce document main argument Recent research area focused neural approach summarisation datahungry However large datasets exist none traditionally popular domain scientific publication open challenging...
[0.05546897277235985, 0.036916639655828476, 0.012769097462296486, 0.038231298327445984, -0.03908950835466385, 0.005155544728040695, -0.010568211786448956, -0.020524710416793823, -0.009354954585433006, -0.04994983226060867, 0.004804883152246475, 0.02872115932404995, 0.02255595661699772, 0.021443190053105354, 0.009356690...
279
279
['Jacob Devlin', 'Hao Cheng', 'Hao Fang', 'Saurabh Gupta', 'Li Deng', 'Xiaodong He', 'Geoffrey Zweig', 'Margaret Mitchell']
1505.01809v3
Two recent approaches have achieved state-of-the-art results in image captioning. The first uses a pipelined process where a set of candidate words is generated by a convolutional neural network (CNN) trained on images, and then a maximum entropy (ME) language model is used to arrange these words into a coherent senten...
Language Models for Image Captioning: The Quirks and What Works
2015
http://arxiv.org/pdf/1505.01809v3
Title Language Models Image Captioning Quirks Works Summary Two recent approach achieved stateoftheart result image captioning first us pipelined process set candidate word generated convolutional neural network CNN trained image maximum entropy language model used arrange word coherent sentence second us penultimate a...
[0.07051143050193787, 0.06679592281579971, -0.003020326839759946, 0.06557819247245789, -0.020720146596431732, 0.008877604268491268, 0.01993243210017681, -0.010685762390494347, -0.011696897447109222, -0.034674759954214096, -0.002281500492244959, -0.023624075576663017, 0.03912442922592163, 0.045678701251745224, 0.0103616...
280
280
['Mengye Ren', 'Ryan Kiros', 'Richard Zemel']
1505.02074v4
This work aims to address the problem of image-based question-answering (QA) with new models and datasets. In our work, we propose to use neural networks and visual semantic embeddings, without intermediate stages such as object detection and image segmentation, to predict answers to simple questions about images. Our ...
Exploring Models and Data for Image Question Answering
2015
http://arxiv.org/pdf/1505.02074v4
Title Exploring Models Data Image Question Answering Summary work aim address problem imagebased questionanswering QA new model datasets work propose use neural network visual semantic embeddings without intermediate stage object detection image segmentation predict answer simple question image model performs 18 time b...
[0.04875091835856438, 0.021046742796897888, -0.011336416006088257, 0.07555010914802551, 0.008658998645842075, 0.026764843612909317, -0.008618188090622425, 0.01862267404794693, -0.006723479367792606, -0.019414395093917847, 0.023782672360539436, 0.028598297387361526, -0.047281406819820404, 0.038023315370082855, 0.0242913...
281
281
['Yash Goyal', 'Tejas Khot', 'Douglas Summers-Stay', 'Dhruv Batra', 'Devi Parikh']
1612.00837v3
Problems at the intersection of vision and language are of significant importance both as challenging research questions and for the rich set of applications they enable. However, inherent structure in our world and bias in our language tend to be a simpler signal for learning than visual modalities, resulting in model...
Making the V in VQA Matter: Elevating the Role of Image Understanding in Visual Question Answering
2016
http://arxiv.org/pdf/1612.00837v3
Title Making V VQA Matter Elevating Role Image Understanding Visual Question Answering Summary Problems intersection vision language significant importance challenging research question rich set application enable However inherent structure world bias language tend simpler signal learning visual modality resulting mode...
[0.03509814292192459, 0.05680400878190994, -0.034543465822935104, 0.030327793210744858, -0.030562257394194603, 0.017595887184143066, 0.025898054242134094, -0.017597490921616554, -0.03156403824687004, -0.01465595606714487, 0.026172084733843803, 0.03404659032821655, 0.010982564650475979, 0.09113752841949463, 0.0225955285...
282
282
['Mateusz Malinowski', 'Mario Fritz']
1410.0210v4
We propose a method for automatically answering questions about images by bringing together recent advances from natural language processing and computer vision. We combine discrete reasoning with uncertain predictions by a multi-world approach that represents uncertainty about the perceived world in a bayesian framewo...
A Multi-World Approach to Question Answering about Real-World Scenes based on Uncertain Input
2014
http://arxiv.org/pdf/1410.0210v4
Title MultiWorld Approach Question Answering RealWorld Scenes based Uncertain Input Summary propose method automatically answering question image bringing together recent advance natural language processing computer vision combine discrete reasoning uncertain prediction multiworld approach represents uncertainty percei...
[0.05978573113679886, 0.04490907862782478, 0.010872947052121162, 0.043480806052684784, -0.014554851688444614, -0.009198376908898354, -0.0010114817414432764, 0.014343124814331532, 0.01916860230267048, -0.05219611898064613, 0.025891276076436043, -0.031279269605875015, 0.009066601283848286, 0.06506037712097168, 0.02125419...
283
283
['Mateusz Malinowski', 'Mario Fritz']
1501.03302v2
Progress in language and image understanding by machines has sparkled the interest of the research community in more open-ended, holistic tasks, and refueled an old AI dream of building intelligent machines. We discuss a few prominent challenges that characterize such holistic tasks and argue for "question answering ab...
Hard to Cheat: A Turing Test based on Answering Questions about Images
2015
http://arxiv.org/pdf/1501.03302v2
Title Hard Cheat Turing Test based Answering Questions Images Summary Progress language image understanding machine sparkled interest research community openended holistic task refueled old AI dream building intelligent machine discus prominent challenge characterize holistic task argue question answering image particu...
[0.03784540668129921, 0.00692099379375577, -0.03702068701386452, 0.00965532474219799, -0.02058044821023941, -0.002799171954393387, 0.016656283289194107, 0.013406004756689072, -0.00946202501654625, -0.016794269904494286, -0.003985284361988306, 0.05363457649946213, -0.005288075655698776, 0.08110769838094711, 0.0064461608...
284
284
['Aishwarya Agrawal', 'Dhruv Batra', 'Devi Parikh']
1606.07356v2
Recently, a number of deep-learning based models have been proposed for the task of Visual Question Answering (VQA). The performance of most models is clustered around 60-70%. In this paper we propose systematic methods to analyze the behavior of these models as a first step towards recognizing their strengths and weak...
Analyzing the Behavior of Visual Question Answering Models
2016
http://arxiv.org/pdf/1606.07356v2
Title Analyzing Behavior Visual Question Answering Models Summary Recently number deeplearning based model proposed task Visual Question Answering VQA performance model clustered around 6070 paper propose systematic method analyze behavior model first step towards recognizing strength weakness identifying fruitful dire...
[0.07380562275648117, 0.0181230790913105, -0.0345357283949852, 0.05666319280862808, 0.006163184065371752, 0.025825493037700653, -0.007961368188261986, 0.0340099073946476, -0.014782741665840149, -0.021929217502474785, 0.012460102327167988, -0.015488985925912857, -0.008923979476094246, 0.042523205280303955, 0.05546228215...
285
285
['Harsh Agrawal', 'Arjun Chandrasekaran', 'Dhruv Batra', 'Devi Parikh', 'Mohit Bansal']
1606.07493v5
Temporal common sense has applications in AI tasks such as QA, multi-document summarization, and human-AI communication. We propose the task of sequencing -- given a jumbled set of aligned image-caption pairs that belong to a story, the task is to sort them such that the output sequence forms a coherent story. We prese...
Sort Story: Sorting Jumbled Images and Captions into Stories
2016
http://arxiv.org/pdf/1606.07493v5
Title Sort Story Sorting Jumbled Images Captions Stories Summary Temporal common sense application AI task QA multidocument summarization humanAI communication propose task sequencing given jumbled set aligned imagecaption pair belong story task sort output sequence form coherent story present multiple approach via una...
[0.025620480999350548, 0.0587407611310482, -0.011837073601782322, 0.02782978117465973, -0.0032393806613981724, 0.0478534922003746, 0.00614558719098568, 0.0035160628613084555, -0.00953300204128027, -0.03974258154630661, 0.051692184060811996, -0.025561995804309845, 0.058960385620594025, 0.10357807576656342, -0.0180671922...
286
286
['Ashkan Mokarian', 'Mateusz Malinowski', 'Mario Fritz']
1608.02717v1
We present Mean Box Pooling, a novel visual representation that pools over CNN representations of a large number, highly overlapping object proposals. We show that such representation together with nCCA, a successful multimodal embedding technique, achieves state-of-the-art performance on the Visual Madlibs task. Moreo...
Mean Box Pooling: A Rich Image Representation and Output Embedding for the Visual Madlibs Task
2016
http://arxiv.org/pdf/1608.02717v1
Title Mean Box Pooling Rich Image Representation Output Embedding Visual Madlibs Task Summary present Mean Box Pooling novel visual representation pool CNN representation large number highly overlapping object proposal show representation together nCCA successful multimodal embedding technique achieves stateoftheart pe...
[-0.029553566128015518, -0.008898820728063583, -0.0002850211749318987, 0.07411832362413406, 0.008108250796794891, 0.0261395126581192, 0.04738902673125267, -0.003120200941339135, -0.006630031857639551, -0.025454875081777573, -0.011117050424218178, 0.026201127097010612, -0.03696085512638092, 0.006584473419934511, 0.04462...
287
287
['Yuval Atzmon', 'Jonathan Berant', 'Vahid Kezami', 'Amir Globerson', 'Gal Chechik']
1608.07639v1
Recurrent neural networks have recently been used for learning to describe images using natural language. However, it has been observed that these models generalize poorly to scenes that were not observed during training, possibly depending too strongly on the statistics of the text in the training data. Here we propos...
Learning to generalize to new compositions in image understanding
2016
http://arxiv.org/pdf/1608.07639v1
Title Learning generalize new composition image understanding Summary Recurrent neural network recently used learning describe image using natural language However observed model generalize poorly scene observed training possibly depending strongly statistic text training data propose describe image using short structu...
[0.03118283301591873, 0.03637874871492386, 0.0151326023042202, 0.06296464055776596, -0.012003778479993343, 0.008699464611709118, 0.001413799123838544, 0.003256863448768854, -0.03573073446750641, -0.05352947860956192, -0.0009998248424381018, -0.030531948432326317, 0.026165850460529327, 0.07841352373361588, 0.02739800326...
288
288
['C. Lawrence Zitnick', 'Aishwarya Agrawal', 'Stanislaw Antol', 'Margaret Mitchell', 'Dhruv Batra', 'Devi Parikh']
1608.08716v1
As machines have become more intelligent, there has been a renewed interest in methods for measuring their intelligence. A common approach is to propose tasks for which a human excels, but one which machines find difficult. However, an ideal task should also be easy to evaluate and not be easily gameable. We begin with...
Measuring Machine Intelligence Through Visual Question Answering
2016
http://arxiv.org/pdf/1608.08716v1
Title Measuring Machine Intelligence Visual Question Answering Summary machine become intelligent renewed interest method measuring intelligence common approach propose task human excels one machine find difficult However ideal task also easy evaluate easily gameable begin case study exploring recently popular task ima...
[0.03843073919415474, 0.020461248233914375, -0.04631388559937477, 0.03540738672018051, -0.014663448557257652, 0.004288766533136368, 0.01788918487727642, 0.031028596684336662, 0.023628931492567062, 0.0022359101567417383, 0.016585979610681534, 0.018988551571965218, -0.004398867953568697, 0.06177018955349922, 0.0302237439...
289
289
['Yash Goyal', 'Akrit Mohapatra', 'Devi Parikh', 'Dhruv Batra']
1608.08974v2
Deep neural networks have shown striking progress and obtained state-of-the-art results in many AI research fields in the recent years. However, it is often unsatisfying to not know why they predict what they do. In this paper, we address the problem of interpreting Visual Question Answering (VQA) models. Specifically,...
Towards Transparent AI Systems: Interpreting Visual Question Answering Models
2016
http://arxiv.org/pdf/1608.08974v2
Title Towards Transparent AI Systems Interpreting Visual Question Answering Models Summary Deep neural network shown striking progress obtained stateoftheart result many AI research field recent year However often unsatisfying know predict paper address problem interpreting Visual Question Answering VQA model Specifica...
[0.03860742226243019, 0.013975169509649277, -0.03445684537291527, 0.07084664702415466, 0.0009075008565559983, -0.0041730678640306, -0.010166934691369534, 0.024501441046595573, -0.01296352781355381, -0.028722422197461128, 0.011036978103220463, 0.0017562838038429618, 0.003150589531287551, 0.0756709948182106, 0.0370296724...
290
290
['Abhishek Das', 'Satwik Kottur', 'Khushi Gupta', 'Avi Singh', 'Deshraj Yadav', 'José M. F. Moura', 'Devi Parikh', 'Dhruv Batra']
1611.08669v5
We introduce the task of Visual Dialog, which requires an AI agent to hold a meaningful dialog with humans in natural, conversational language about visual content. Specifically, given an image, a dialog history, and a question about the image, the agent has to ground the question in image, infer context from history, ...
Visual Dialog
2016
http://arxiv.org/pdf/1611.08669v5
Title Visual Dialog Summary introduce task Visual Dialog requires AI agent hold meaningful dialog human natural conversational language visual content Specifically given image dialog history question image agent ground question image infer context history answer question accurately Visual Dialog disentangled enough spe...
[0.0699768140912056, 0.07259771972894669, -0.01006864383816719, 0.03940192237496376, -0.013206160627305508, 0.022668281570076942, 0.014168682508170605, 0.013024978339672089, 0.03724847361445427, -0.034664757549762726, -0.01568761095404625, 0.004153894260525703, -0.008535664528608322, 0.1040937602519989, -0.004536944907...
291
291
['Abhinav Thanda', 'Shankar M Venkatesan']
1701.02477v1
Multi-task learning (MTL) involves the simultaneous training of two or more related tasks over shared representations. In this work, we apply MTL to audio-visual automatic speech recognition(AV-ASR). Our primary task is to learn a mapping between audio-visual fused features and frame labels obtained from acoustic GMM/H...
Multi-task Learning Of Deep Neural Networks For Audio Visual Automatic Speech Recognition
2017
http://arxiv.org/pdf/1701.02477v1
Title Multitask Learning Deep Neural Networks Audio Visual Automatic Speech Recognition Summary Multitask learning MTL involves simultaneous training two related task shared representation work apply MTL audiovisual automatic speech recognitionAVASR primary task learn mapping audiovisual fused feature frame label obtai...
[-0.03141818195581436, -0.0007812197436578572, -0.01270153746008873, 0.00644897622987628, 0.014467687346041203, 0.014353024773299694, 0.055697157979011536, -0.027531344443559647, -0.03224589303135872, -0.014968904666602612, -0.11036159843206406, -0.02765566296875477, 0.021632378920912743, 0.051266107708215714, 0.031522...
292
292
['Abhishek Das', 'Satwik Kottur', 'José M. F. Moura', 'Stefan Lee', 'Dhruv Batra']
1703.06585v2
We introduce the first goal-driven training for visual question answering and dialog agents. Specifically, we pose a cooperative 'image guessing' game between two agents -- Qbot and Abot -- who communicate in natural language dialog so that Qbot can select an unseen image from a lineup of images. We use deep reinforcem...
Learning Cooperative Visual Dialog Agents with Deep Reinforcement Learning
2017
http://arxiv.org/pdf/1703.06585v2
Title Learning Cooperative Visual Dialog Agents Deep Reinforcement Learning Summary introduce first goaldriven training visual question answering dialog agent Specifically pose cooperative image guessing game two agent Qbot Abot communicate natural language dialog Qbot select unseen image lineup image use deep reinforc...
[0.062454722821712494, 0.03200775757431984, -0.008456087671220303, 0.0404348187148571, -0.02340024895966053, 0.014208480715751648, 0.028155341744422913, -0.009489491581916809, 0.003380730515345931, -0.038527462631464005, -0.03748941421508789, 0.007519615348428488, -0.032869283109903336, 0.11186755448579788, -0.00782845...
293
293
['Wei-Lun Chao', 'Hexiang Hu', 'Fei Sha']
1704.07121v1
Visual question answering (QA) has attracted a lot of attention lately, seen essentially as a form of (visual) Turing test that artificial intelligence should strive to achieve. In this paper, we study a crucial component of this task: how can we design good datasets for the task? We focus on the design of multiple-cho...
Being Negative but Constructively: Lessons Learnt from Creating Better Visual Question Answering Datasets
2017
http://arxiv.org/pdf/1704.07121v1
Title Negative Constructively Lessons Learnt Creating Better Visual Question Answering Datasets Summary Visual question answering QA attracted lot attention lately seen essentially form visual Turing test artificial intelligence strive achieve paper study crucial component task design good datasets task focus design mu...
[0.05095381662249565, 0.022796915844082832, -0.03355911746621132, 0.041267335414886475, 0.0018429163610562682, 0.02651730179786682, -0.004608791787177324, 0.025015152990818024, -0.004312430042773485, 0.012326233088970184, -0.006936265155673027, 0.030817486345767975, -0.015086743980646133, 0.024741847068071365, 0.036706...
294
294
['Aishwarya Agrawal', 'Aniruddha Kembhavi', 'Dhruv Batra', 'Devi Parikh']
1704.08243v1
Visual Question Answering (VQA) has received a lot of attention over the past couple of years. A number of deep learning models have been proposed for this task. However, it has been shown that these models are heavily driven by superficial correlations in the training data and lack compositionality -- the ability to a...
C-VQA: A Compositional Split of the Visual Question Answering (VQA) v1.0 Dataset
2017
http://arxiv.org/pdf/1704.08243v1
Title CVQA Compositional Split Visual Question Answering VQA v10 Dataset Summary Visual Question Answering VQA received lot attention past couple year number deep learning model proposed task However shown model heavily driven superficial correlation training data lack compositionality ability answer question unseen co...
[0.07020939886569977, 0.033546529710292816, -0.020312199369072914, 0.04286683723330498, 0.0024580531753599644, 0.03000464476644993, 0.012141934596002102, -0.0004130439192522317, -0.03183414787054062, -0.0023541348055005074, 0.011007928289473057, -0.03912682458758354, -0.007061128504574299, 0.049503810703754425, 0.04201...
295
295
['Alexander Kuhnle', 'Ann Copestake']
1706.01322v1
We discuss problems with the standard approaches to evaluation for tasks like visual question answering, and argue that artificial data can be used to address these as a complement to current practice. We demonstrate that with the help of existing 'deep' linguistic processing technology we are able to create challengin...
Deep learning evaluation using deep linguistic processing
2017
http://arxiv.org/pdf/1706.01322v1
Title Deep learning evaluation using deep linguistic processing Summary discus problem standard approach evaluation task like visual question answering argue artificial data used address complement current practice demonstrate help existing deep linguistic processing technology able create challenging abstract datasets...
[0.04702436551451683, 0.007313648238778114, -0.003427966730669141, 0.06499689072370529, -0.037067756056785583, 0.0241558700799942, 0.03907600790262222, -0.012462311424314976, -0.007517700549215078, -0.03356023132801056, -0.012995629571378231, -0.03819983825087547, 0.007236033212393522, 0.04533670097589493, 0.0185821466...
296
296
['Xu Sun', 'Xuancheng Ren', 'Shuming Ma', 'Houfeng Wang']
1706.06197v4
We propose a simple yet effective technique for neural network learning. The forward propagation is computed as usual. In back propagation, only a small subset of the full gradient is computed to update the model parameters. The gradient vectors are sparsified in such a way that only the top-$k$ elements (in terms of m...
meProp: Sparsified Back Propagation for Accelerated Deep Learning with Reduced Overfitting
2017
http://arxiv.org/pdf/1706.06197v4
Title meProp Sparsified Back Propagation Accelerated Deep Learning Reduced Overfitting Summary propose simple yet effective technique neural network learning forward propagation computed usual back propagation small subset full gradient computed update model parameter gradient vector sparsified way topk element term ma...
[-0.034714024513959885, 0.04026816785335541, -0.029872296378016472, 0.025489047169685364, 0.05376351252198219, -0.028138553723692894, 0.012318890541791916, 0.019436581060290337, -0.0021193919237703085, 0.017988935112953186, 0.012787655927240849, 0.007434160448610783, 0.004233662039041519, 0.05216285213828087, 0.0209126...
297
297
['Suranjana Samanta', 'Sameep Mehta']
1707.02812v1
Adversarial samples are strategically modified samples, which are crafted with the purpose of fooling a classifier at hand. An attacker introduces specially crafted adversarial samples to a deployed classifier, which are being mis-classified by the classifier. However, the samples are perceived to be drawn from entirel...
Towards Crafting Text Adversarial Samples
2017
http://arxiv.org/pdf/1707.02812v1
Title Towards Crafting Text Adversarial Samples Summary Adversarial sample strategically modified sample crafted purpose fooling classifier hand attacker introduces specially crafted adversarial sample deployed classifier misclassified classifier However sample perceived drawn entirely different class thus becomes hard...
[0.08556494861841202, 0.06461843848228455, -0.014933346770703793, 0.049989040940999985, -0.04509786143898964, -0.01245724968612194, 0.026351122185587883, 0.02328794077038765, 0.009678495116531849, -0.08863590657711029, 0.025523070245981216, 0.02739601396024227, 0.015645092353224754, 0.04214196279644966, 0.0312421750277...
298
298
['Ramakanth Pasunuru', 'Mohit Bansal']
1708.02300v1
Sequence-to-sequence models have shown promising improvements on the temporal task of video captioning, but they optimize word-level cross-entropy loss during training. First, using policy gradient and mixed-loss methods for reinforcement learning, we directly optimize sentence-level task-based metrics (as rewards), ac...
Reinforced Video Captioning with Entailment Rewards
2017
http://arxiv.org/pdf/1708.02300v1
Title Reinforced Video Captioning Entailment Rewards Summary Sequencetosequence model shown promising improvement temporal task video captioning optimize wordlevel crossentropy loss training First using policy gradient mixedloss method reinforcement learning directly optimize sentencelevel taskbased metric reward achie...
[0.05354555323719978, 0.03832118585705757, 0.002761641051620245, 0.029181038960814476, -0.017482832074165344, 0.015165441669523716, -0.03006918355822563, -0.005761315114796162, -0.0414213165640831, -0.05080592632293701, -0.027974097058176994, -0.016205307096242905, 0.025298845022916794, 0.0386817567050457, -0.007077659...
299
299
['Licheng Yu', 'Mohit Bansal', 'Tamara L. Berg']
1708.02977v1
We address the problem of end-to-end visual storytelling. Given a photo album, our model first selects the most representative (summary) photos, and then composes a natural language story for the album. For this task, we make use of the Visual Storytelling dataset and a model composed of three hierarchically-attentive ...
Hierarchically-Attentive RNN for Album Summarization and Storytelling
2017
http://arxiv.org/pdf/1708.02977v1
Title HierarchicallyAttentive RNN Album Summarization Storytelling Summary address problem endtoend visual storytelling Given photo album model first selects representative summary photo composes natural language story album task make use Visual Storytelling dataset model composed three hierarchicallyattentive Recurren...
[0.042152877897024155, 0.08644861727952957, 0.003656974760815501, 0.05060679465532303, -0.015786796808242798, 0.03198721632361412, 0.027615293860435486, -0.021865271031856537, -0.04505283758044243, -0.022298071533441544, -0.0163447093218565, -0.017181573435664177, 0.02027195505797863, 0.06858467310667038, 0.01531388424...