Each record in the dataset has the following fields. `stringlengths` entries report the minimum and maximum string length observed across the split; `stringclasses` reports the number of distinct values.

| Field | Type | Length (min–max) or classes |
|---|---|---|
| paperID | stringlengths | 36–36 |
| pwc_id | stringlengths | 8–47 |
| arxiv_id | stringlengths | 6–16 |
| nips_id | float64 | n/a |
| url_abs | stringlengths | 18–329 |
| url_pdf | stringlengths | 18–742 |
| title | stringlengths | 8–325 |
| abstract | stringlengths | 1–7.27k |
| authors | stringlengths | 2–7.06k |
| published | stringlengths | 10–10 |
| conference | stringlengths | 12–47 |
| conference_url_abs | stringlengths | 16–198 |
| conference_url_pdf | stringlengths | 27–199 |
| proceeding | stringlengths | 6–47 |
| taskID | stringlengths | 7–1.44k |
| areaID | stringclasses | 688 values |
| embedding | stringlengths | 9.26k–12.5k |
| umap_embedding | stringlengths | 29–44 |
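Several of these columns are string-serialized rather than natively typed: `authors`, `taskID`, and `areaID` appear as Python list literals, `embedding` as a space-separated numpy-style vector dump, and `umap_embedding` as a comma-separated coordinate pair. Below is a minimal decoding sketch in Python, assuming the formats visible in the sample rows that follow; `parse_record` is an illustrative helper, not part of the dataset itself.

```python
import ast
import numpy as np

def parse_record(row: dict) -> dict:
    """Decode the string-serialized columns of one dataset row.

    Assumes the serialization visible in the preview rows below. Note that
    the viewer truncates long values (abstract, embedding), so this only
    works on the full stored strings, not on the preview text.
    """
    out = dict(row)
    # "[ 2.48318046e-01  3.91110957e-01 ...]" -> float vector
    out["embedding"] = np.array(row["embedding"].strip("[] \n").split(),
                                dtype=float)
    # "[11.703781127929688, 8.851287841796875]" -> (x, y) UMAP coordinates
    out["umap_embedding"] = np.array(ast.literal_eval(row["umap_embedding"]))
    # "['Philip S. Yu', 'Yao Wan', ...]" -> list[str]
    for col in ("authors", "taskID", "areaID"):
        if row.get(col) is not None:
            out[col] = ast.literal_eval(row[col])
    return out
```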
Example rows from the preview (long values are truncated by the viewer):

Record 1:
- paperID: cb29abcf-8c05-4830-8fef-5c786d1b1633
- pwc_id: attend-memorize-and-generate-towards-faithful-1
- arxiv_id: 2203.00732
- nips_id: null
- url_abs: https://arxiv.org/abs/2203.00732v1
- url_pdf: https://arxiv.org/pdf/2203.00732v1.pdf
- title: Attend, Memorize and Generate: Towards Faithful Table-to-Text Generation in Few Shots
- abstract: Few-shot table-to-text generation is a task of composing fluent and faithful sentences to convey table content using limited data. Despite many efforts having been made towards generating impressive fluent sentences by fine-tuning powerful pre-trained language models, the faithfulness of generated content still needs t...
- authors: ['Philip S. Yu', 'Yao Wan', 'Ye Liu', 'Wenting Zhao']
- published: 2022-03-01
- conference: attend-memorize-and-generate-towards-faithful
- conference_url_abs: https://aclanthology.org/2021.findings-emnlp.347
- conference_url_pdf: https://aclanthology.org/2021.findings-emnlp.347.pdf
- proceeding: findings-emnlp-2021-11
- taskID: ['table-to-text-generation']
- areaID: ['natural-language-processing']
- embedding: [ 2.48318046e-01 3.91110957e-01 -1.18735723e-01 ... ] (truncated)
- umap_embedding: [11.703781127929688, 8.851287841796875]
Record 2:
- paperID: 66d27375-2389-4888-9192-258c555567bb
- pwc_id: weighted-anisotropic-isotropic-total
- arxiv_id: 2307.00439
- nips_id: null
- url_abs: https://arxiv.org/abs/2307.00439v1
- url_pdf: https://arxiv.org/pdf/2307.00439v1.pdf
- title: Weighted Anisotropic-Isotropic Total Variation for Poisson Denoising
- abstract: Poisson noise commonly occurs in images captured by photon-limited imaging systems such as in astronomy and medicine. As the distribution of Poisson noise depends on the pixel intensity value, noise levels vary from pixels to pixels. Hence, denoising a Poisson-corrupted image while preserving important details can be c...
- authors: ['Jack Xin', 'Fredrick Park', 'Yifei Lou', 'Kevin Bui']
- published: 2023-07-01
- conference: null
- conference_url_abs: null
- conference_url_pdf: null
- proceeding: null
- taskID: ['astronomy']
- areaID: ['miscellaneous']
- embedding: [ 4.72609639e-01 -5.39900661e-01 3.25475872e-01 ... ] (truncated)
- umap_embedding: [11.629626274108887, -2.5683696269989014]
Record 3:
- paperID: bf18f094-75b8-49a3-a859-79e2b51ece54
- pwc_id: constructing-colloquial-dataset-for-persian
- arxiv_id: 2306.12679
- nips_id: null
- url_abs: https://arxiv.org/abs/2306.12679v1
- url_pdf: https://arxiv.org/pdf/2306.12679v1.pdf
- title: Constructing Colloquial Dataset for Persian Sentiment Analysis of Social Microblogs
- abstract: Introduction: Microblogging websites have massed rich data sources for sentiment analysis and opinion mining. In this regard, sentiment classification has frequently proven inefficient because microblog posts typically lack syntactically consistent terms and representatives since users on these social networks do not l...
- authors: ['Zeinab Rajabi', 'Farzaneh Rahmani', 'Leyla Rabiei', 'Mojtaba Mazoochi']
- published: 2023-06-22
- conference: null
- conference_url_abs: null
- conference_url_pdf: null
- proceeding: null
- taskID: ['word-embeddings', 'sentiment-analysis', 'opinion-mining', 'persian-sentiment-anlysis']
- areaID: ['methodology', 'natural-language-processing', 'natural-language-processing', 'natural-language-processing']
- embedding: [-6.57622278e-01 -1.94170043e-01 -1.49049228e-02 ... ] (truncated)
- umap_embedding: [11.200174331665039, 6.942314624786377]
Record 4:
- paperID: ed129eb1-3528-4e9e-a385-bd189043bbb0
- pwc_id: self-supervised-sparse-to-dense-motion
- arxiv_id: 2008.07872
- nips_id: null
- url_abs: https://arxiv.org/abs/2008.07872v1
- url_pdf: https://arxiv.org/pdf/2008.07872v1.pdf
- title: Self-supervised Sparse to Dense Motion Segmentation
- abstract: Observable motion in videos can give rise to the definition of objects moving with respect to the scene. The task of segmenting such moving objects is referred to as motion segmentation and is usually tackled either by aggregating motion information in long, sparse point trajectories, or by directly producing per frame...
- authors: ['Margret Keuper', 'Kalun Ho', 'Peter Ochs', 'Amirhossein Kardoost']
- published: 2020-08-18
- conference: null
- conference_url_abs: null
- conference_url_pdf: null
- proceeding: null
- taskID: ['motion-segmentation']
- areaID: ['computer-vision']
- embedding: [ 4.56552863e-01 7.98271075e-02 -2.72133380e-01 ... ] (truncated)
- umap_embedding: [9.140424728393555, -0.25388726592063904]
Record 5:
- paperID: 176dabcc-9aa8-44f7-b4b9-be82ed58c7c1
- pwc_id: visual-sentiment-prediction-with-deep
- arxiv_id: 1411.5731
- nips_id: null
- url_abs: http://arxiv.org/abs/1411.5731v1
- url_pdf: http://arxiv.org/pdf/1411.5731v1.pdf
- title: Visual Sentiment Prediction with Deep Convolutional Neural Networks
- abstract: Images have become one of the most popular types of media through which users convey their emotions within online social networks. Although vast amount of research is devoted to sentiment analysis of textual data, there has been very limited work that focuses on analyzing sentiment of image data. In this work, we propo...
- authors: ['Li-Jia Li', 'Kuang-Chih Lee', 'Suleyman Cetintas', 'Can Xu']
- published: 2014-11-21
- conference: null
- conference_url_abs: null
- conference_url_pdf: null
- proceeding: null
- taskID: ['visual-sentiment-prediction']
- areaID: ['computer-vision']
- embedding: [ 1.15398735e-01 -1.94553539e-01 -2.85752833e-01 ... ] (truncated)
- umap_embedding: [10.9176664352417, 2.693315267562866]
Record 6:
- paperID: 19b1de35-291e-4378-95cc-9fb77a431a90
- pwc_id: parallelizing-legendre-memory-unit-training
- arxiv_id: 2102.11417
- nips_id: null
- url_abs: https://arxiv.org/abs/2102.11417v2
- url_pdf: https://arxiv.org/pdf/2102.11417v2.pdf
- title: Parallelizing Legendre Memory Unit Training
- abstract: Recently, a new recurrent neural network (RNN) named the Legendre Memory Unit (LMU) was proposed and shown to achieve state-of-the-art performance on several benchmark datasets. Here we leverage the linear time-invariant (LTI) memory component of the LMU to construct a simplified variant that can be parallelized during...
- authors: ['Chris Eliasmith', 'Narsimha Chilkuri']
- published: 2021-02-22
- conference: parallelizing-legendre-memory-unit-training-1
- conference_url_abs: https://arxiv.org/abs/2102.11417
- conference_url_pdf: https://arxiv.org/pdf/2102.11417.pdf
- proceeding: null
- taskID: ['sequential-image-classification']
- areaID: ['computer-vision']
- embedding: [ 5.45695312e-02 -7.23649487e-02 -2.27879882e-01 ... ] (truncated)
- umap_embedding: [10.803276062011719, 6.806011199951172]
Record 7:
- paperID: adecbd42-75ff-4f6c-a9b1-7f6083914c26
- pwc_id: a-comprehensive-evaluation-on-multi-channel
- arxiv_id: 2202.10286
- nips_id: null
- url_abs: https://arxiv.org/abs/2202.10286v1
- url_pdf: https://arxiv.org/pdf/2202.10286v1.pdf
- title: A Comprehensive Evaluation on Multi-channel Biometric Face Presentation Attack Detection
- abstract: The vulnerability against presentation attacks is a crucial problem undermining the wide-deployment of face recognition systems. Though presentation attack detection (PAD) systems try to address this problem, the lack of generalization and robustness continues to be a major concern. Several works have shown that using ...
- authors: ['Sebastien Marcel', 'David Geissbuhler', 'Anjith George']
- published: 2022-02-21
- conference: null
- conference_url_abs: null
- conference_url_pdf: null
- proceeding: null
- taskID: ['face-presentation-attack-detection']
- areaID: ['computer-vision']
- embedding: [ 3.95066291e-01 -3.70497495e-01 -7.29599819e-02 ... ] (truncated)
- umap_embedding: [13.073018074035645, 1.097410798072815]
Record 8:
- paperID: af7a4476-939d-4120-af30-9c1f6cafaf3b
- pwc_id: disclip-open-vocabulary-referring-expression
- arxiv_id: 2305.19108
- nips_id: null
- url_abs: https://arxiv.org/abs/2305.19108v1
- url_pdf: https://arxiv.org/pdf/2305.19108v1.pdf
- title: DisCLIP: Open-Vocabulary Referring Expression Generation
- abstract: Referring Expressions Generation (REG) aims to produce textual descriptions that unambiguously identifies specific objects within a visual scene. Traditionally, this has been achieved through supervised learning methods, which perform well on specific data distributions but often struggle to generalize to new images an...
- authors: ['Gal Chechik', 'Ethan Fetaya', 'Aviv Shamsian', 'Eitan Shaar', 'Lior Bracha']
- published: 2023-05-30
- conference: null
- conference_url_abs: null
- conference_url_pdf: null
- proceeding: null
- taskID: ['referring-expression-generation', 'referring-expression']
- areaID: ['computer-vision', 'computer-vision']
- embedding: [ 4.60099041e-01 2.47159317e-01 2.82816049e-02 ... ] (truncated)
- umap_embedding: [10.80776309967041, 1.378513216972351]
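Since `umap_embedding` appears to be a 2-D projection of `embedding`, a quick sanity check is to scatter the corpus by each paper's primary area. A minimal sketch, assuming a pandas DataFrame `df` whose rows were decoded with the illustrative `parse_record` above; `plot_umap` is likewise a hypothetical helper, not part of the dataset.

```python
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

def plot_umap(df: pd.DataFrame) -> None:
    """Scatter the UMAP coordinates, colored by each paper's first areaID."""
    xy = np.stack(df["umap_embedding"].to_list())          # shape (n, 2)
    primary = df["areaID"].map(lambda a: a[0] if a else "unknown")
    for area in sorted(primary.unique()):
        pts = xy[(primary == area).to_numpy()]
        plt.scatter(pts[:, 0], pts[:, 1], s=4, label=area)
    plt.xlabel("UMAP 1")
    plt.ylabel("UMAP 2")
    plt.legend(markerscale=3, fontsize="small")
    plt.show()
```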