Dataset Viewer
Auto-converted to Parquet.
| Column            | Type   | Value range          |
|-------------------|--------|----------------------|
| paper_id          | uint32 | 0 – 3.26k            |
| title             | string | lengths 15 – 150     |
| paper_url         | string | lengths 42 – 42      |
| authors           | list   | lengths 1 – 21       |
| type              | string | 3 classes            |
| abstract          | string | lengths 393 – 2.58k  |
| keywords          | string | lengths 5 – 409      |
| TL;DR             | string | lengths 7 – 250      |
| submission_number | int64  | 1 – 16.4k            |
| arxiv_id          | string | lengths 10 – 10      |
| embedding         | list   | lengths 768 – 768    |
| github            | string | lengths 26 – 123     |
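
For programmatic access, the preview below corresponds to the usual `datasets` loading pattern. This is a minimal sketch, assuming the data is hosted on the Hugging Face Hub; the repository id and split name are placeholders, since neither appears in this preview.

```python
# Minimal loading sketch. "username/dataset-name" is a placeholder repo id
# and "train" is an assumed split name; substitute the values shown on the
# actual dataset page.
from datasets import load_dataset

ds = load_dataset("username/dataset-name", split="train")

row = ds[0]
print(row["title"])           # paper title
print(row["type"])            # one of 3 string classes, e.g. "Oral"
print(len(row["embedding"]))  # fixed-length 768 vector
```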

paper_id: 0
title: Implicit Regularization for Tubal Tensor Factorizations via Gradient Descent
paper_url: https://openreview.net/forum?id=2GmXJnyNM4
authors: [ "Santhosh Karnik", "Anna Veselovska", "Mark Iwen", "Felix Krahmer" ]
type: Oral
abstract: We provide a rigorous analysis of implicit regularization in an overparametrized tensor factorization problem beyond the lazy training regime. For matrix factorization problems, this phenomenon has been studied in a number of works. A particular challenge has been to design universal initialization strategies which pro...
keywords: overparameterization, implicit regularization, tensor factorization
TL;DR: We provide a rigorous analysis of implicit regularization in an overparametrized tensor factorization problem beyond the lazy training regime.
submission_number: 16,047
arxiv_id: 2410.16247
embedding: [ -0.019181201234459877, -0.038270384073257446, 0.01611342281103134, 0.024747205898165703, 0.02661350928246975, 0.048398762941360474, 0.011871577240526676, 0.00596786430105567, -0.016043782234191895, -0.05787587910890579, -0.01238834485411644, -0.0017976615345105529, -0.05827592313289642, 0....
github: https://github.com/AnnaVeselovskaUA/tubal-tensor-implicit-reg-GD

paper_id: 1
title: Algorithm Development in Neural Networks: Insights from the Streaming Parity Task
paper_url: https://openreview.net/forum?id=3go0lhfxd0
authors: [ "Loek van Rossem", "Andrew M Saxe" ]
type: Oral
abstract: Even when massively overparameterized, deep neural networks show a remarkable ability to generalize. Research on this phenomenon has focused on generalization within distribution, via smooth interpolation. Yet in some settings neural networks also learn to extrapolate to data far beyond the bounds of the original train...
keywords: Out-of-distribution generalization, Algorithm discovery, Deep learning theory, Mechanistic Interpretability
TL;DR: We explain in a simple setting how out-of-distribution generalization can occur.
submission_number: 16,013
arxiv_id: 2507.09897
embedding: [ -0.0326198972761631, -0.014057177118957043, -0.01832646317780018, 0.03963460400700569, 0.026646170765161514, 0.017422083765268326, 0.04410151392221451, 0.03895360231399536, -0.05122928321361542, -0.02704167366027832, -0.0036526948679238558, -0.021394891664385796, -0.0678320974111557, -0.00...
github: null

paper_id: 2
title: Orthogonal Subspace Decomposition for Generalizable AI-Generated Image Detection
paper_url: https://openreview.net/forum?id=GFpjO8S8Po
authors: [ "Zhiyuan Yan", "Jiangming Wang", "Peng Jin", "Ke-Yue Zhang", "Chengchun Liu", "Shen Chen", "Taiping Yao", "Shouhong Ding", "Baoyuan Wu", "Li Yuan" ]
type: Oral
abstract: Detecting AI-generated images (AIGIs), such as natural images or face images, has become increasingly important yet challenging. In this paper, we start from a new perspective to excavate the reason behind the failure generalization in AIGI detection, named the asymmetry phenomenon, where a naively trained detector ten...
keywords: AI-Generated Image Detection, Face Forgery Detection, Deepfake Detection, Media Forensics
TL;DR: We introduce a novel approach via orthogonal subspace decomposition for generalizing AI-generated images detection.
submission_number: 15,222
arxiv_id: 2411.15633
embedding: [ -0.00003362178904353641, -0.037818778306245804, 0.04007361829280853, 0.027026934549212456, 0.0204058475792408, 0.015415623784065247, 0.03986102715134621, -0.005431922152638435, -0.005959442351013422, -0.05781417712569237, -0.03166253864765167, 0.014723767526447773, -0.0881330817937851, 0.0...
github: https://github.com/YZY-stack/Effort-AIGI-Detection

paper_id: 3
title: Accelerating LLM Inference with Lossless Speculative Decoding Algorithms for Heterogeneous Vocabularies
paper_url: https://openreview.net/forum?id=vQubr1uBUw
authors: [ "Nadav Timor", "Jonathan Mamou", "Daniel Korat", "Moshe Berchansky", "Gaurav Jain", "Oren Pereg", "Moshe Wasserblat", "David Harel" ]
type: Oral
abstract: Accelerating the inference of large language models (LLMs) is a critical challenge in generative AI. Speculative decoding (SD) methods offer substantial efficiency gains by generating multiple tokens using a single target forward pass. However, existing SD approaches require the drafter and target models to share the s...
keywords: Speculative Decoding, Large Language Models, Vocabulary Alignment, Heterogeneous Vocabularies, Efficient Inference, Inference Acceleration, Rejection Sampling, Tokenization, Transformer Architectures, Text Generation, Open Source.
TL;DR: null
submission_number: 15,148
arxiv_id: 2502.05202
embedding: [ 0.0034581993240863085, -0.026665760204195976, -0.014417539350688457, 0.04200471565127373, 0.043362341821193695, 0.04652798920869827, 0.00632978230714798, 0.015310577116906643, -0.005913200788199902, -0.010434215888381004, -0.010758771561086178, 0.040639977902173996, -0.05736065283417702, 0...
github: https://github.com/keyboardAnt/hf-bench

paper_id: 4
title: LLM-SRBench: A New Benchmark for Scientific Equation Discovery with Large Language Models
paper_url: https://openreview.net/forum?id=SyQPiZJVWY
authors: [ "Parshin Shojaee", "Ngoc-Hieu Nguyen", "Kazem Meidani", "Amir Barati Farimani", "Khoa D Doan", "Chandan K. Reddy" ]
type: Oral
abstract: Scientific equation discovery is a fundamental task in the history of scientific progress, enabling the derivation of laws governing natural phenomena. Recently, Large Language Models (LLMs) have gained interest for this task due to their potential to leverage embedded scientific knowledge for hypothesis generation. Ho...
keywords: Benchmark, Scientific Discovery, Large Language Models, Symbolic Regression
TL;DR: We present LLM-SRBench, the first comprehensive benchmark for evaluating scientific equation discovery with LLMs, designed to rigorously assess discovery capabilities beyond memorization
submission_number: 14,812
arxiv_id: null
embedding: [ -0.042210932821035385, 0.004859969485551119, 0.0037580966018140316, 0.02409457601606846, 0.03766670823097229, 0.01762591302394867, 0.012549391016364098, -0.00429905578494072, -0.02228093333542347, 0.005307744722813368, 0.030796995386481285, 0.0336964875459671, -0.04636078327894211, 0.00487...
github: https://github.com/deep-symbolic-mathematics/llm-srbench

paper_id: 5
title: ConceptAttention: Diffusion Transformers Learn Highly Interpretable Features
paper_url: https://openreview.net/forum?id=Rc7y9HFC34
authors: [ "Alec Helbling", "Tuna Han Salih Meral", "Benjamin Hoover", "Pinar Yanardag", "Duen Horng Chau" ]
type: Oral
abstract: Do the rich representations of multi-modal diffusion transformers (DiTs) exhibit unique properties that enhance their interpretability? We introduce ConceptAttention, a novel method that leverages the expressive power of DiT attention layers to generate high-quality saliency maps that precisely locate textual concepts ...
keywords: diffusion, interpretability, transformers, representation learning, mechanistic interpretability
TL;DR: We introduce a method for interpreting the representations of diffusion transformers by producing saliency maps of textual concepts.
submission_number: 14,767
arxiv_id: 2502.04320
embedding: [ 0.011045873165130615, -0.03143558278679848, -0.0036811591126024723, 0.06845836341381073, 0.019581660628318787, 0.02596374601125717, 0.029545053839683533, 0.008918728679418564, -0.002027298789471388, -0.027645176276564598, -0.04489611089229584, -0.02071050927042961, -0.02666446939110756, 0....
github: https://github.com/helblazer811/ConceptAttention

paper_id: 6
title: Emergence in non-neural models: grokking modular arithmetic via average gradient outer product
paper_url: https://openreview.net/forum?id=36hVB7DEB0
authors: [ "Neil Rohit Mallinar", "Daniel Beaglehole", "Libin Zhu", "Adityanarayanan Radhakrishnan", "Parthe Pandit", "Mikhail Belkin" ]
type: Oral
abstract: Neural networks trained to solve modular arithmetic tasks exhibit grokking, a phenomenon where the test accuracy starts improving long after the model achieves 100% training accuracy in the training process. It is often taken as an example of "emergence", where model ability manifests sharply through a phase transition...
keywords: Theory of deep learning, grokking, modular arithmetic, feature learning, kernel methods, average gradient outer product (AGOP), emergence
TL;DR: We show that "emergence" in the task of grokking modular arithmetic occurs in feature learning kernels using the Average Gradient Outer Product (AGOP) and that the features take the form of block-circulant features.
submission_number: 14,743
arxiv_id: 2407.20199
embedding: [ -0.021082431077957153, -0.012389615178108215, 0.004305555485188961, 0.022336209192872047, 0.05508352071046829, 0.025000594556331635, 0.02121897228062153, 0.004627522546797991, -0.06073524057865143, -0.01705952361226082, 0.010001549497246742, 0.017953528091311455, -0.057553987950086594, 0.0...
github: https://github.com/nmallinar/rfm-grokking

paper_id: 7
title: Hierarchical Refinement: Optimal Transport to Infinity and Beyond
paper_url: https://openreview.net/forum?id=EBNgREMoVD
authors: [ "Peter Halmos", "Julian Gold", "Xinhao Liu", "Benjamin Raphael" ]
type: Oral
abstract: Optimal transport (OT) has enjoyed great success in machine learning as a principled way to align datasets via a least-cost correspondence, driven in large part by the runtime efficiency of the Sinkhorn algorithm (Cuturi, 2013). However, Sinkhorn has quadratic space complexity in the number of points, limiting scalabil...
keywords: Optimal transport, low-rank, linear complexity, sparse, full-rank
TL;DR: Linear-complexity optimal transport, using low-rank optimal transport to progressively refine the solution to a Monge map.
submission_number: 14,649
arxiv_id: 2503.03025
embedding: [ -0.027334941551089287, -0.01004324946552515, 0.011843880638480186, 0.034024015069007874, 0.029421843588352203, 0.02161589451134205, 0.016120072454214096, -0.018252994865179062, -0.00789616722613573, -0.0722673311829567, -0.008323721587657928, -0.020554102957248688, -0.07072562724351883, 0....
github: https://github.com/raphael-group/HiRef

paper_id: 8
title: Train for the Worst, Plan for the Best: Understanding Token Ordering in Masked Diffusions
paper_url: https://openreview.net/forum?id=DjJmre5IkP
authors: [ "Jaeyeon Kim", "Kulin Shah", "Vasilis Kontonis", "Sham M. Kakade", "Sitan Chen" ]
type: Oral
abstract: In recent years, masked diffusion models (MDMs) have emerged as a promising alternative approach for generative modeling over discrete domains. Compared to autoregressive models (ARMs), MDMs trade off complexity at training time with flexibility at inference time. At training time, they must learn to solve an exponenti...
keywords: Discrete Diffusion models, Masked Diffusion Models, Diffusion Models, Learning Theory, Inference-Time Strategy
TL;DR: null
submission_number: 14,095
arxiv_id: 2502.06768
embedding: [ -0.03017411381006241, -0.015793848782777786, -0.03172555938363075, 0.06325984746217728, 0.05338553711771965, 0.03736850991845131, 0.02657005749642849, -0.011509117670357227, -0.01983768306672573, -0.034276317805051804, 0.0005438215448521078, 0.0016898621106520295, -0.03530674800276756, 0.0...
github: null

paper_id: 9
title: Statistical Test for Feature Selection Pipelines by Selective Inference
paper_url: https://openreview.net/forum?id=4EYwwVuhtG
authors: [ "Tomohiro Shiraishi", "Tatsuya Matsukawa", "Shuichi Nishino", "Ichiro Takeuchi" ]
type: Oral
abstract: A data analysis pipeline is a structured sequence of steps that transforms raw data into meaningful insights by integrating various analysis algorithms. In this paper, we propose a novel statistical test to assess the significance of data analysis pipelines. Our approach enables the systematic development of valid stat...
keywords: Data Analysis Pipeline, AutoML, Statistical Test, Selective Inference, Missing Value Imputation, Outlier Detection, Feature Selection
TL;DR: We introduce a statistical test for data analysis pipeline in feature selection problems, which allows for the systematic development of valid statistical tests applicable to any pipeline configuration composed of a set of predefined components.
submission_number: 13,925
arxiv_id: 2406.18902
embedding: [ 0.009413988329470158, -0.01155618391931057, -0.02200639247894287, 0.030097780749201775, 0.067814402282238, 0.045050036162137985, 0.06003964692354202, -0.025692462921142578, -0.0019961593206971884, -0.038121238350868225, -0.011850292794406414, 0.0272664912045002, -0.06766638159751892, 0.000...
github: https://github.com/Takeuchi-Lab-SI-Group/si4pipeline
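
Because every row carries a fixed-length 768-dimensional `embedding`, one natural use of this dataset is nearest-neighbor search over papers. Below is a minimal cosine-similarity sketch; it assumes all embeddings come from the same (unspecified) encoder and reuses the `ds` object from the loading example above.

```python
# Cosine-similarity retrieval over the `embedding` column; a sketch only,
# assuming the vectors are mutually comparable.
import numpy as np

emb = np.asarray(ds["embedding"], dtype=np.float32)  # shape (n_rows, 768)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)    # unit-normalize rows

query = emb[0]                                       # paper_id 0 as the query
scores = emb @ query                                 # cosine similarities
top = np.argsort(-scores)[:5]                        # five most similar papers
for i in top:
    print(f"{scores[i]:.3f}  {ds[int(i)]['title']}")
```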

End of preview.
README.md exists but its content is empty.
Downloads last month: 10