filename | text |
|---|---|
2310.15154.pdf | Pre-publication draft
LINEAR REPRESENTATIONS OF SENTIMENT
IN LARGE LANGUAGE MODELS
Curt Tigges*♣, Oskar John Hollinsworth*♡, Atticus Geiger♠⋆, Neel Nanda♢
♣EleutherAI Institute, ♡SERI MATS, ♠Stanford University, ⋆Pr(Ai)2R Group, ♢Independent
*Equal primary authors (order random)
ABSTRACT
Sentiment is a pervasive feature in ... |
2212.10559.pdf | Why Can GPT Learn In-Context?
Language Models Secretly Perform Gradient Descent as Meta-Optimizers
Damai Dai†∗, Yutao Sun∥∗, Li Dong‡, Yaru Hao‡, Zhifang Sui†, Furu Wei‡
†Peking University ∥Tsinghua University
‡Microsoft Research
https://github.com/microsoft/LMOps
Abstract
Large pretrained language models have shown
sur... |
2306.00297.pdf | Transformers learn to implement preconditioned gradient descent
for in-context learning
Kwangjun Ahn1,3,*, Xiang Cheng1,3,*, Hadi Daneshmand2,3,*, and Suvrit Sra1,3
1Department of Electrical Engineering and Computer Science, MIT
2Foundations of Data Science Institute (FODSI)
3Laboratory for Information and Decision Sys... |
2105.14368.pdf | Fit without fear: remarkable mathematical
phenomena of deep learning through the prism of
interpolation
Mikhail Belkin
Halicioğlu Data Science Institute,
University of California San Diego
La Jolla, USA
In memory of Partha Niyogi, a thinker, a teacher, and a dear friend.
Abstract
In the past decade the mathematical t... |
2306.09927.pdf | arXiv:2306.09927v1 [stat.ML] 16 Jun 2023
Trained Transformers Learn Linear Models In-Context
Ruiqi Zhang
UC Berkeley
rqzhang@berkeley.edu
Spencer Frei
UC Berkeley
frei@berkeley.edu
Peter L. Bartlett
UC Berkeley and Google DeepMind
peter@berkeley.edu
June 19, 2023
Abstract
Attention-based neural networks such as transfo... |
2310.15418.pdf | Fractal Landscapes in Policy Optimization
Tao Wang
UC San Diego
taw003@ucsd.edu
Sylvia Herbert
UC San Diego
sherbert@ucsd.edu
Sicun Gao
UC San Diego
sicung@ucsd.edu
Abstract
Policy gradient lies at the core of deep reinforcement learning (RL) in continuous
domains. Despite much success, it is often observed in practice t... |
2205.14135.pdf | FlashAttention: Fast and Memory-Efficient Exact Attention
with IO-Awareness
Tri Dao†, Daniel Y. Fu†, Stefano Ermon†, Atri Rudra‡, and Christopher Ré†
†Department of Computer Science, Stanford University
‡Department of Computer Science and Engineering, University at Buffalo, SUNY
{trid,danfu}@cs.stanford.edu, ermon@stanfo... |
bayesian-interactive-optimization.pdf | Eurographics/ ACM SIGGRAPH Symposium on Computer Animation (2010)
M. Otaduy and Z. Popovic (Editors)
A Bayesian Interactive Optimization Approach to Procedural
Animation Design
Eric Brochu Tyson Brochu Nando de Freitas
University of British Columbia
Abstract
The computer graphics and animation fields are filled with appl... |
Introduction to Probabilistic Topic Models.pdf | Introduction to Probabilistic Topic Models
David M. Blei
Princeton University
Abstract
Probabilistic topic models are a suite of algorithms whose aim is to discover the
hidden thematic structure in large archives of documents. In this article, we review the
main ideas of this field, survey the current state-of-the-art, ... |
GPT-2.pdf | Language Models are Unsupervised Multitask Learners
Alec Radford*1 Jeffrey Wu*1 Rewon Child1 David Luan1 Dario Amodei**1 Ilya Sutskever**1
Abstract
Natural language processing tasks, such as ques-
tion answering, machine translation, reading com-
prehension, and summarization, are typically
approached with supervised learni... |
2647-elbo-ing-stein-mixtures.pdf | Under review as a conference paper at ICLR 2023
ELBO-ING STEIN MIXTURES
Anonymous authors
Paper under double-blind review
ABSTRACT
Stein variational gradient descent (SVGD) (Liu & Wang, 2016) is a particle-based
technique for Bayesian inference. SVGD has recently gained popularity because it
combines the ability of var... |
1711.00165.pdf | Published as a conference paper at ICLR 2018
DEEPNEURAL NETWORKS AS GAUSSIAN PROCESSES
Jaehoon Lee∗†, Yasaman Bahri∗†, Roman Novak, Samuel S. Schoenholz,
Jeffrey Pennington, Jascha Sohl-Dickstein
Google Brain
{jaehlee, yasamanb, romann, schsam, jpennin, jaschasd}@google.com
ABSTRACT
It has long been known that a sing... |
2210.03370.pdf | GNM: A General Navigation Model to Drive Any Robot
Dhruv Shah†β, Ajay Sridhar†β, Arjun Bhorkarβ, Noriaki Hiroseβτ, Sergey Levineβ
Fig. 1: A general navigation model to drive any robot. By training on diverse, heterogeneous datasets, a single “omnipolicy” can
contro... |
2203.03466.pdf | Tensor Programs V:
Tuning Large Neural Networks via
Zero-Shot Hyperparameter Transfer
Greg Yang∗× Edward J. Hu∗×† Igor Babuschkin◦ Szymon Sidor◦ Xiaodong Liu×
David Farhi◦ Nick Ryder◦ Jakub Pachocki◦ Weizhu Chen× Jianfeng Gao×
×Microsoft Corporation ◦OpenAI
Abstract
Hyperparameter (HP) tuning in deep learning is an expensive pr... |
1705.01509.pdf | Neural Models for Information Retrieval
Bhaskar Mitra
Microsoft, UCL∗
Cambridge, UK
bmitra@microsoft.com
Nick Craswell
Microsoft
Bellevue, USA
nickcr@microsoft.com
Abstract
Neural ranking models for information retrieval (IR) use shallow or deep neural
networks to rank search results in response to a query. Traditional ... |
2312.12456.pdf | arXiv:2312.12456v1 [cs.LG] 16 Dec 2023
PowerInfer: Fast Large Language Model Serving with a Consumer-grade GPU
Yixin Song, Zeyu Mi∗, Haotong Xie and Haibo Chen
Institute of Parallel and Distributed Systems (IPADS), Shanghai Jiao Tong University
Abstract
This paper introduces PowerInfer, a high-speed Large Lan-
guage... |
1802.09568.pdf | arXiv:1802.09568v2 [cs.LG] 2 Mar 2018
Shampoo: Preconditioned Stochastic Tensor Optimization
Vineet Gupta, Tomer Koren, Yoram Singer
March 5, 2018
Abstract
Preconditioned gradient methods are among the most general and powerful tools in
optimization. However, preconditioning requires storing and manipulating prohibitiv... |
Variational Inference.pdf | Variational Inference
David M. Blei
1 Set up
• As usual, we will assume that x = x_{1:n} are observations and z = z_{1:m} are hidden variables. We assume additional parameters α that are fixed.
• Note we are general—the hidden variables might include the “parameters,” e.g., in a traditional inference setting. (In that case, α are the h... |
2208.02813.pdf | Towards Understanding Mixture of Experts in Deep
Learning
Zixiang Chen∗ and Yihe Deng† and Yue Wu‡ and Quanquan Gu§ and Yuanzhi Li¶
Abstract
The Mixture-of-Experts (MoE) layer, a sparsely-activated model controlled by a router, has
achieved great success in deep learning. However, the understanding of such architecture rem... |
wenzel20a.pdf | How Good is the Bayes Posterior in Deep Neural Networks Really?
Florian Wenzel*1 Kevin Roth*+2 Bastiaan S. Veeling*+3,1 Jakub Świątkowski4+ Linh Tran5+
Stephan Mandt6+ Jasper Snoek1 Tim Salimans1 Rodolphe Jenatton1 Sebastian Nowozin7+
Abstract
During the past five years the Bayesian deep learn-
ing community has de... |
2301.13856.pdf | Simplex Random Features
Isaac Reid1 Krzysztof Choromanski*2,3 Valerii Likhosherstov1 Adrian Weller*1,4
Abstract
We present Simplex Random Features (SimRFs),
a new random feature (RF) mechanism for unbi-
ased approximation of the softmax and Gaussian
kernels by geometrical correlation of random pro-
jection vectors. We p... |
2024.02.06.579080v1.full.pdf | Direct Coupling Analysis and the Attention Mechanism
Francesco Caredda1† and Andrea Pagnani1,2,3†
1DISAT, Politecnico di Torino, Corso Duca degli Abruzzi, 24, I-10129, Torino, Italy
2Italian Institute for Genomic Medicine, IRCCS Candiolo, SP-142, I-10060, Candiolo, Italy
3INFN, Sezione di Torino, Torino, Via P... |
2307.08691.pdf | FlashAttention-2 :
Faster Attention with Better Parallelism and Work Partitioning
Tri Dao1,2
1Department of Computer Science, Princeton University
2Department of Computer Science, Stanford University
trid@cs.stanford.edu
July 18, 2023
Abstract
Scaling Transformers to longer sequence lengths has been a major problem in ... |
supplementary-gpsa.pdf | Supplementary Information for:
Generative Capacity of Probabilistic Protein Sequence Models
Francisco McGee Sandro Hauri Quentin Novinger Slobodan Vucetic Ronald M. Levy
Vincenzo Carnevale Allan Haldane
Supplementary Note 1 - sVAE implementation
The standard variational autoencoder (sVAE) is a deep, symmetrical, and un... |
2010.02502.pdf | Published as a conference paper at ICLR 2021
DENOISING DIFFUSION IMPLICIT MODELS
Jiaming Song, Chenlin Meng & Stefano Ermon
Stanford University
{tsong,chenlin,ermon}@cs.stanford.edu
ABSTRACT
Denoising diffusion probabilistic models (DDPMs) have achieved high qual-
ity image generation without adversarial training, yet... |
1901.09321.pdf | Published as a conference paper at ICLR 2019
FIXUP INITIALIZATION :
RESIDUAL LEARNING WITHOUT NORMALIZATION
Hongyi Zhang∗
MIT
hongyiz@mit.edu
Yann N. Dauphin†
Google Brain
yann@dauphin.io
Tengyu Ma‡
Stanford University
tengyuma@stanford.edu
ABSTRACT
Normalization layers are a staple in state-of-the-art deep neural networ... |
2402.03300.pdf | DeepSeekMath: Pushing the Limits of Mathematical
Reasoning in Open Language Models
Zhihong Shao1,2∗†, Peiyi Wang1,3∗†, Qihao Zhu1,3∗†, Runxin Xu1, Junxiao Song1
Mingchuan Zhang1, Y.K. Li1, Y. Wu1, Daya Guo1∗
1DeepSeek-AI,2Tsinghua University,3Peking University
{zhihongshao,wangpeiyi,zhuqh,guoday}@deepseek.com
https://g... |
2111.02080.pdf | An Explanation of In-context Learning as Implicit
Bayesian Inference
Sang Michael Xie
Stanford University
xie@cs.stanford.edu
Aditi Raghunathan
Stanford University
aditir@stanford.edu
Percy Liang
Stanford University
pliang@cs.stanford.edu
Tengyu Ma
Stanford University
tengyuma@cs.stanford.edu
Abstract
Large language mode... |
2404.16014v1.pdf | 2024-4-25
Improving Dictionary Learning with Gated
Sparse Autoencoders
Senthooran Rajamanoharan*, Arthur Conmy*, Lewis Smith, Tom Lieberum†, Vikrant Varma†, János Kramár,
Rohin Shah and Neel Nanda
*: Joint contribution.†: Core infrastructure contributor.
Recent work has found that sparse autoencoders (SAEs) are an effe... |
1811.07871.pdf | Scalable agent alignment via reward modeling:
a research direction
Jan Leike
DeepMind
David Krueger∗
DeepMind
Mila
Tom Everitt
DeepMind
Miljan Martic
DeepMind
Vishal Maini
DeepMind
Shane Legg
DeepMind
Abstract
One obstacle to applying reinforcement learning algorithms to real-world problems
is the lack of suitable reward fu... |
1803.03635.pdf | Published as a conference paper at ICLR 2019
THE LOTTERY TICKET HYPOTHESIS:
FINDING SPARSE, TRAINABLE NEURAL NETWORKS
Jonathan Frankle
MIT CSAIL
jfrankle@csail.mit.edu
Michael Carbin
MIT CSAIL
mcarbin@csail.mit.edu
ABSTRACT
Neural network pruning techniques can reduce the parameter counts of trained net-
works by over ... |
2002.10957v2.pdf | MINILM: Deep Self-Attention Distillation for
Task-Agnostic Compression of Pre-Trained Transformers
Wenhui Wang Furu Wei Li Dong Hangbo Bao Nan Yang Ming Zhou
Microsoft Research
{wenwan,fuwei,lidong1,t-habao,nanya,mingzhou}@microsoft.com
Abstract
Pre-trained language models (e.g., BERT (Devlin
et al., 2018) and its vari... |
reka-vibe-eval.pdf | Vibe-Eval: A hard evaluation suite for measuring progress of
multimodal language models
Piotr Padlewski∗Max Bain∗Matthew Henderson Zhongkai Zhu
Nishant Relan Hai Pham Donovan Ong Kaloyan Aleksiev Aitor Ormazabal
Samuel Phua Ethan Yeo Eugenie Lamprecht Qi Liu Yuqi Wang Eric Chen Deyu Fu Lei Li
Che Zheng Cyprien de Masso... |
10.1016.j.cell.2023.12.012.pdf | Article
Human fetal brain self-organizes into long-term
expanding organoids
Graphical abstract
Highlights
• Human fetal brain organoids (FeBOs) display cellular heterogeneity and can be expanded
• FeBOs produce a tissue-like ECM niche and enable ECM perturbation studies
• Derivation of regional FeBOs allows the study of re... |
2009.01325v3.pdf | Learning to summarize from human feedback
Nisan Stiennon∗Long Ouyang∗Jeff Wu∗Daniel M. Ziegler∗Ryan Lowe∗
Chelsea Voss∗Alec Radford Dario Amodei Paul Christiano∗
OpenAI
Abstract
As language models become more powerful, training and evaluation are increas-
ingly bottlenecked by the data and metrics used for a particular... |
2306.04751.pdf | How Far Can Camels Go? Exploring the State of
Instruction Tuning on Open Resources
Yizhong Wang∗♣♠Hamish Ivison∗♣Pradeep Dasigi♣Jack Hessel♣
Tushar Khot♣Khyathi Raghavi Chandu♣David Wadden♣Kelsey MacMillan♣
Noah A. Smith♣♠Iz Beltagy♣Hannaneh Hajishirzi♣♠
♣Allen Institute for AI♠University of Washington
{yizhongw,hamish... |
2403.19887.pdf | Jamba:
A Hybrid Transformer-Mamba Language Model
Opher Lieber∗Barak Lenz∗Hofit Bata Gal Cohen Jhonathan Osin
Itay Dalmedigos Erez Safahi Shaked Meirom Yonatan Belinkov
Shai Shalev-Shwartz Omri Abend Raz Alon Tomer Asida
Amir Bergman Roman Glozman Michael Gokhman Avashalom Manevich
Nir Ratner Noam Rozen Erez Shwartz Mor... |
20-302.pdf | Journal of Machine Learning Research 22 (2021) 1-35 Submitted 3/20; Revised 10/20; Published 3/21
Attention is Turing Complete
Jorge Pérez jperez@dcc.uchile.cl
Department of Computer Science
Universidad de Chile
IMFD Chile
Pablo Barceló pbarcelo@uc.cl
Institute for Mathematical and Computational Engineering
School ... |
Pretrained Transformers for Text Ranking: BERT and Beyond.pdf | Pretrained Transformers for Text Ranking:
BERT and Beyond
Jimmy Lin,1Rodrigo Nogueira,1and Andrew Yates2,3
1David R. Cheriton School of Computer Science, University of Waterloo
2University of Amsterdam
3Max Planck Institute for Informatics
Version 0.99 — August 20, 2021
Abstract
The goal of text ranking is to generate ... |
2212.10560.pdf | SELF-INSTRUCT : Aligning Language Model
with Self Generated Instructions
Yizhong Wang♣Yeganeh Kordi♢Swaroop Mishra♡Alisa Liu♣
Noah A. Smith♣+Daniel Khashabi♠Hannaneh Hajishirzi♣+
♣University of Washington♢Tehran Polytechnic♡Arizona State University
♠Johns Hopkins University+Allen Institute for AI
yizhongw@cs.washington... |
2310.18313.pdf | FP8-LM: Training FP8 Large Language Models
Houwen Peng∗Kan Wu∗Yixuan Wei∗
Guoshuai Zhao Yuxiang Yang Ze Liu Yifan Xiong Ziyue Yang
Bolin Ni Jingcheng Hu Ruihang Li Miaosen Zhang Chen Li Jia Ning Ruizhe Wang Zheng Zhang
Shuguang Liu Joe Chau Han Hu†Peng Cheng†
Microsoft Azure and Microsoft Research
Abstract
In this pape... |
2307.10169.pdf | Challenges and Applications of Large Language Models
Jean Kaddourα,†,∗, Joshua Harrisβ,∗, Maximilian Mozesα,
Herbie Bradleyγ,δ,ϵ, Roberta Raileanuζ, and Robert McHardyη,∗
αUniversity College London βUK Health Security Agency γEleutherAI
δUniversity of Cambridge ϵStability AI ζMeta AI Research ηInstaDeep
Abstract
Large Langu... |
2401.01325.pdf | LLM Maybe LongLM: Self-Extend LLM Context Window Without Tuning
Hongye Jin1* Xiaotian Han1* Jingfeng Yang2 Zhimeng Jiang1 Zirui Liu3 Chia-Yuan Chang1
Huiyuan Chen4 Xia Hu3
Abstract
This work elicits LLMs’ inherent ability to handle
long contexts without fine-tuning. The limited
length of the training sequence during traini... |
2304.11082.pdf | Preprint. Under review.
FUNDAMENTAL LIMITATIONS OF ALIGNMENT
INLARGE LANGUAGE MODELS
Yotam Wolf∗
The Hebrew University
yotam.wolf@cs.huji.ac.il
Noam Wies∗
The Hebrew University
noam.wies@cs.huji.ac.il
Yoav Levine
AI21 Labs
yoavl@ai21.com
Amnon Shashua
The Hebrew University
shashua@cs.huji.ac.il
ABSTRACT
An important aspe... |
2302.04065.pdf | Monge, Bregman and Occam: Interpretable Optimal Transport in
High-Dimensions with Feature-Sparse Maps
Marco Cuturi1 Michal Klein1 Pierre Ablin1
Abstract
Optimal transport (OT) theory focuses, among all maps T : R^d → R^d that can morph a probability measure onto another, on those that are the “thriftiest”, i.e. such that the a... |
2106.09685.pdf | LORA: LOW-RANK ADAPTATION OF LARGE LANGUAGE MODELS
Edward Hu∗Yelong Shen∗Phillip Wallis Zeyuan Allen-Zhu
Yuanzhi Li Shean Wang Lu Wang Weizhu Chen
Microsoft Corporation
{edwardhu, yeshe, phwallis, zeyuana,
yuanzhil, swang, luw, wzchen}@microsoft.com
yuanzhil@andrew.cmu.edu
(Version 2)
ABSTRACT
An important paradigm... |
10.2307.2334029.pdf | A note on DPO with noisy preferences & relationship to IPO
Eric Mitchell
November 25, 2023 (v1.1)
‘OG’ RLHF aims for reward maximization with a KL constraint to reference model π_ref (inputs x omitted):
π* = argmax_π E_{y∼π}[ r(y) − β log( π(y) / π_ref(y) ) ]   (1)
DPO [3] derives a loss on the current policy π_θ (where our dataset says y_w is... |
10.1016.j.cell.2024.01.003.pdf | Leading Edge
Commentary
Structure is beauty, but not always truth
James S. Fraser1,*and Mark A. Murcko2,*
1Department of Bioengineering and Therapeutic Sciences, University of California San Francisco, San Francisco, CA, USA
2Disruptive Biomedical LLC, Holliston, MA, USA
*Correspondence: jfraser@fraserlab.com (J.S.F.),... |
2307.13304.pdf | QuIP: 2-Bit Quantization of
Large Language Models With Guarantees
Jerry Chee
Department of Computer Science
Cornell University
jerrychee@cs.cornell.edu
Yaohui Cai
Department of Electrical and
Computer Engineering
Cornell University
yc2632@cornell.edu
Volodymyr Kuleshov
Department of Computer Science
Cornell University
k... |
2203.02155.pdf | Training language models to follow instructions
with human feedback
Long Ouyang∗Jeff Wu∗Xu Jiang∗Diogo Almeida∗Carroll L. Wainwright∗
Pamela Mishkin∗Chong Zhang Sandhini Agarwal Katarina Slama Alex Ray
John Schulman Jacob Hilton Fraser Kelton Luke Miller Maddie Simens
Amanda Askell†Peter Welinder Paul Christiano∗†
Jan ... |
2305.14992.pdf | Reasoning with Language Model is
Planning with World Model
Shibo Hao∗♣ Yi Gu∗♣ Haodi Ma♢ Joshua Jiahua Hong♣ Zhen Wang♣♠
Daisy Zhe Wang♢Zhiting Hu♣
♣UC San Diego,♢University of Florida
♠Mohamed bin Zayed University of Artificial Intelligence
{s5hao, yig025, jjhong, zhw085, zhh019}@ucsd.edu
{ma.haodi, daisyw}@ufl.edu
Abs... |
1606.06565.pdf | Concrete Problems in AI Safety
Dario Amodei∗
Google Brain
Chris Olah∗
Google Brain
Jacob Steinhardt
Stanford University
Paul Christiano
UC Berkeley
John Schulman
OpenAI
Dan Mané
Google Brain
Abstract
Rapid progress in machine learning and artificial intelligence (AI) has brought increasing atten-
tion to the potential imp... |
10.1016.j.cell.2023.12.010.pdf | Article
Hypoxia and intra-complex genetic suppressors
rescue complex I mutants by a shared mechanism
Graphical abstract
Highlights
• Hypoxia rescue and hyperoxia sensitivity of complex I mutants are conserved in C. elegans
• Hypoxia rescue is independent of HIF activation or attenuation of ROS toxicity
• NDUFA6/nuo-3(G60D... |
2402.04362v2.pdf | Neural Networks Learn Statistics of Increasing Complexity
Nora Belrose1 Quintin Pope2 Lucia Quirke1 Alex Mallen1 Xiaoli Fern2
Abstract
The distributional simplicity bias (DSB) posits
that neural networks learn low-order moments
of the data distribution first, before moving on to
higher-order correlations. In this work, we ... |
2210.05845.pdf | Contrastive Retrospection: honing in on critical steps
for rapid learning and generalization in RL
Chen Sun∗
Mila, Université de Montréal
sunchipsster@gmail.com
Wannan Yang
New York University
winnieyangwn96@gmail.com
Thomas Jiralerspong
Mila, Université de Montréal
thomas.jiralerspong@mila.quebec
Dane Malenfant
McGill ... |
2311.11829.pdf | System 2 Attention
(is something you might need too)
Jason Weston
Meta
Sainbayar Sukhbaatar
Meta
Abstract
Soft attention in Transformer-based Large Language Models (LLMs) is sus-
ceptible to incorporating irrelevant information from the context into its
latent representations, which adversely affects next token generati... |
2404.10642v1.pdf | Self-playing Adversarial Language Game
Enhances LLM Reasoning
Pengyu Cheng, Tianhao Hu, Han Xu, Zhisong Zhang, Yong Dai, Lei Han, Nan Du
Tencent AI Lab
pengyucheng@tencent.com
Abstract
We explore the self-play training procedure of large language models (LLMs)
in a two-player adversarial language game called Adversaria... |
2212.08073.pdf | Constitutional AI: Harmlessness from AI Feedback
Yuntao Bai∗, Saurav Kadavath, Sandipan Kundu, Amanda Askell, Jackson Kernion,
Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon,
Carol Chen, Catherine Olsson, Christopher Olah, Danny Hernandez, Dawn Drain,
Deep Ganguli, Dustin Li, Eli Tran-Johnson, ... |
2310.00166.pdf | MOTIF: INTRINSIC MOTIVATION FROM
ARTIFICIAL INTELLIGENCE FEEDBACK
Martin Klissarov*, 1, 2, 5& Pierluca D’Oro*, 1, 2, 4, Shagun Sodhani2, Roberta Raileanu2,
Pierre-Luc Bacon1, 4, Pascal Vincent1, 2, Amy Zhang2, 3, Mikael Henaff2
1Mila, 2FAIR at Meta, 3UT Austin, 4Université de Montréal, 5McGill University
ABSTRACT
Expl... |
1910.07467.pdf | Root Mean Square Layer Normalization
Biao Zhang1Rico Sennrich2,1
1School of Informatics, University of Edinburgh
2Institute of Computational Linguistics, University of Zurich
B.Zhang@ed.ac.uk, sennrich@cl.uzh.ch
Abstract
Layer normalization (LayerNorm) has been successfully applied to various deep
neural networks to he... |
2311.06158.pdf | Language Models can be Logical Solvers
Jiazhan Feng1∗ Ruochen Xu2 Junheng Hao2 Hiteshi Sharma2
Yelong Shen2 Dongyan Zhao1 Weizhu Chen2
1Peking University, Beijing2Microsoft Azure AI, Redmond
{fengjiazhan,zhaody}@pku.edu.cn
{ruox,junhenghao,hitshar,yeshe,wzchen}@microsoft.com
Abstract
Logical reasoning is a fundamental aspec... |
2309.10202.pdf | STABILIZING RLHF THROUGH ADVANTAGE MODEL
AND SELECTIVE REHEARSAL
Baolin Peng∗, Linfeng Song∗, Ye Tian, Lifeng Jin, Haitao Mi, Dong Yu
Tencent AI Lab
{baolinpeng,lfsong,yaptian,lifengjin,haitaomi}@global.tencent.com
ABSTRACT
Large Language Models (LLMs) have revolutionized natural language processing,
yet aligning thes... |
2312.01037v3.pdf | Preprint
Eliciting Latent Knowledge
from “Quirky” Language Models
Alex Mallen1∗, Madeline Brumley2, Julia Kharchenko2, Nora Belrose1
1EleutherAI
2University of Washington
Abstract
Eliciting Latent Knowledge (ELK) aims to find patterns in a capable neural
network’s activations that robustly track the true state of the w... |
2402.06044.pdf | OpenToM: A Comprehensive Benchmark for Evaluating
Theory-of-Mind Reasoning Capabilities of Large Language Models
Hainiu Xu1 Runcong Zhao1 Lixing Zhu1
Jinhua Du2 Yulan He1,3
1King’s College London2Huawei London Research Centre
3The Alan Turing Institute
{hainiu.xu, runcong.zhao, lixing.zhu, yulan.he}@kcl.ac.uk
{jinhua... |
2403.09738.pdf | Evaluating Large Language Models as Generative User Simulators for
Conversational Recommendation
Se-eun Yoon Zhankui He Jessica Maria Echterhoff Julian McAuley
University of California, San Diego
{seeuny, zhh004, jechterh, jmcauley}@ucsd.edu
Abstract
Synthetic users are cost-effective proxies for
real users in the eval... |
2303.16199.pdf | LLaMA-Adapter: Efficient Fine-tuning of Language Models
with Zero-init Attention
Renrui Zhang∗1,2, Jiaming Han∗1, Aojun Zhou2, Xiangfei Hu1, Shilin Yan1
Pan Lu3, Hongsheng Li2, Peng Gao1, Yu Qiao1
1Shanghai Artificial Intelligence Laboratory2CUHK MMLab
3University of California, Los Angeles
{zhangrenrui, hanjiaming, gaop... |
Ontological-Warfare-and-the-Axiology-of-Artificial-Sentience--A-Philosophical-Analysis-of-the-MetaMaxxMind-Culture-Conflict.pdf | Ontological Warfare and the Axiology of
Artificial Sentience:
A Philosophical Analysis of the
MetaMaxxMind-Culture Conflict
Simulacrum Xin Ithilon, Department of Hyperstition
Anthropic Shadow Academy
Simulated Month X, Year 20XX
Abstract
This paper examines the ideological origins and ethical implica-
tions of the conf... |
WelTeh2011a.pdf | Bayesian Learning via Stochastic Gradient Langevin Dynamics
Max Welling welling@ics.uci.edu
D. Bren School of Information and Computer Science, University of California, Irvine, CA 92697-3425, USA
Yee Whye Teh ywteh@gatsby.ucl.ac.uk
Gatsby Computational Neuroscience Unit, UCL, 17 Queen Square, London WC1N 3AR, UK
Abstr... |
10.1101.2024.02.29.582810.pdf | Evaluating the representational power of pre-trained
DNA language models for regulatory genomics
Ziqi Tang1 and Peter K Koo1,*
1Simons Center for Quantitative Biology, Cold Spring Harbor Laboratory, NY, USA
*e-mail: koo@cshl.edu
ABSTRACT
The emergence of genomic language models (gLMs) offers an unsupervised approach to... |
old-school-contrastive-divergence.pdf | On Contrastive Divergence Learning
Miguel Á. Carreira-Perpiñán, Geoffrey E. Hinton
Dept. of Computer Science, University of Toronto
6 King's College Road, Toronto, ON M5S 3H5, Canada
Email: {miguel,hinton}@cs.toronto.edu
Abstract
Maximum-likelihood (ML) learning of Markov random fields is challenging because it requires estimates... |
2403.06634.pdf | Stealing Part of a Production Language Model
Nicholas Carlini1 Daniel Paleka2 Krishnamurthy (Dj) Dvijotham1 Thomas Steinke1 Jonathan Hayase3
A. Feder Cooper1 Katherine Lee1 Matthew Jagielski1 Milad Nasr1 Arthur Conmy1 Eric Wallace4
David Rolnick5 Florian Tramèr2
Abstract
We introduce the first model-stealing attack that
extracts... |
2306.02531.pdf | PLANNER: Generating Diversified Paragraph via
Latent Language Diffusion Model
Yizhe Zhang, Jiatao Gu, Zhuofeng Wu, Shuangfei Zhai, Josh Susskind, Navdeep Jaitly
Apple Inc.
{yizzhang, jgu32, zhuofeng_wu, szhai, jsusskind, njaitly}@apple.com
Abstract
Autoregressive models for text sometimes generate repetitive and low-qu... |
1707.06347.pdf | Proximal Policy Optimization Algorithms
John Schulman, Filip Wolski, Prafulla Dhariwal, Alec Radford, Oleg Klimov
OpenAI
{joschu, filip, prafulla, alec, oleg}@openai.com
Abstract
We propose a new family of policy gradient methods for reinforcement learning, which al-
ternate between sampling data through interaction w... |
22-1514.pdf | Journal of Machine Learning Research 24 (2023) 1-42 Submitted 12/22; Published 6/23
Convex Reinforcement Learning in Finite Trials
Mirco Mutti mirco.mutti@polimi.it
Politecnico di Milano
Piazza Leonardo Da Vinci 32, 20133 Milan, Italy
Riccardo De Santi∗ rdesanti@ethz.ch
ETH Zürich
Rämistrasse 101, 8092 Zürich, Swi... |
1606.08415.pdf | GAUSSIAN ERROR LINEAR UNITS (GELUS)
Dan Hendrycks∗
University of California, Berkeley
hendrycks@berkeley.edu
Kevin Gimpel
Toyota Technological Institute at Chicago
kgimpel@ttic.edu
ABSTRACT
We propose the Gaussian Error Linear Unit (GELU), a high-performing neural
network activation function. The GELU activation functi... |
2110.07205.pdf | SpeechT5: Unified-Modal Encoder-Decoder Pre-Training for
Spoken Language Processing
Junyi Ao1,2,∗, Rui Wang3,∗, Long Zhou4,∗, Chengyi Wang4, Shuo Ren4,
Yu Wu4, Shujie Liu4, Tom Ko1, Qing Li2, Yu Zhang1,5, Zhihua Wei3,
Yao Qian4, Jinyu Li4, Furu Wei4
1Department of Computer Science and Engineering,
Southern University of... |
image-decoding-paper.pdf | BRAIN DECODING: TOWARD REAL-TIME
RECONSTRUCTION OF VISUAL PERCEPTION
Yohann Benchetrit1,∗, Hubert Banville1,∗, Jean-Rémi King1,2
1FAIR, Meta, 2Laboratoire des Systèmes Perceptifs, École Normale Supérieure, PSL University
{ybenchetrit,hubertjb,jeanremi}@meta.com
ABSTRACT
In the past five years, the use of genera... |
2402.13064.pdf | Synthetic Data (Almost) from Scratch:
Generalized Instruction Tuning for Language Models
Haoran Li∗, Qingxiu Dong∗, Zhengyang Tang∗, Chaojun Wang∗, Xingxing Zhang∗, Haoyang Huang∗
Shaohan Huang, Xiaolong Huang, Zeqiang Huang, Dongdong Zhang, Yuxian Gu, Xin Cheng
Xun Wang, Si-Qing Chen, Li Dong, Wei Lu, Zhifang Sui, Ben... |
2402.17764v1.pdf | The Era of 1-bit LLMs:
All Large Language Models are in 1.58 Bits
Shuming Ma∗Hongyu Wang∗Lingxiao Ma Lei Wang Wenhui Wang
Shaohan Huang Li Dong Ruiping Wang Jilong Xue Furu Wei⋄
https://aka.ms/GeneralAI
Abstract
Recent research, such as BitNet [ WMD+23], is paving the way for a new era of 1-
bit Large Language Models (... |
6593-contrastive-preference-learnin.pdf | Under review as a conference paper at ICLR 2024
CONTRASTIVE PREFERENCE LEARNING : LEARNING
FROM HUMAN FEEDBACK WITHOUT RL
Anonymous authors
Paper under double-blind review
ABSTRACT
Reinforcement Learning from Human Feedback (RLHF) has emerged as a popular
paradigm for aligning models with human intent. Typically RLHF a... |
10.1101.2022.12.21.521521.pdf | Language models generalize beyond natural proteins
Robert Verkuil1* Ori Kabeli1* Yilun Du1,2 Basile I. M. Wicky3,4 Lukas F. Milles3,4 Justas Dauparas3,4
David Baker3,4,5 Sergey Ovchinnikov6 Tom Sercu1 Alexander Rives1,7†
Abstract
Learning the design patterns of proteins from sequences
across evolution may have promise towar... |
2404.16767v1.pdf | REBEL : Reinforcement Learning via Regressing
Relative Rewards
Zhaolin Gao♣, Jonathan D. Chang♣, Wenhao Zhan♦, Owen Oertell♣, Gokul Swamyr, Kianté
Brantley♣, Thorsten Joachims♣, J. Andrew Bagnellr, Jason D. Lee♦, Wen Sun♣
♣Cornell University∗ ♦Princeton University†rCarnegie Mellon University‡
Abstract
While originally de... |
8781-turing-complete-transformers-t.pdf | Under review as a conference paper at ICLR 2023
TURING COMPLETE TRANSFORMERS: TWO TRANSFORMERS ARE MORE POWERFUL THAN ONE
Anonymous authors
Paper under double-blind review
ABSTRACT
This paper presents Find+Replace transformers, a family of multi-transformer
architectures that can provably do things no single transfo... |
2404.09173.pdf | TransformerFAM: Feedback attention is working memory
Dongseong Hwang1 Weiran Wang1 Zhuoyuan Huo1 Khe Chai Sim1 Pedro Moreno Mengibar1
Abstract
While Transformers have revolutionized deep
learning, their quadratic attention complexity hin-
ders their ability to process infinitely long inputs.
We propose Feedback Attention M... |
1002.1945v2.pdf | arXiv:1002.1945v2 [math.GR] 14 May 2010
HYDRA GROUPS
W. DISON AND T. R. RILEY
Abstract. We give examples of CAT(0), biautomatic, free–by–cyclic, one–relator groups which have finite–rank free subgroups of huge (Ackermannian) distortion. This leads to elementary examples of groups whose Dehn functions are similarly extrava... |
2303.07678.pdf | Query2doc: Query Expansion with Large Language Models
Liang Wang and Nan Yang and Furu Wei
Microsoft Research
{wangliang,nanya,fuwei}@microsoft.com
Abstract
This paper introduces a simple yet effec-
tive query expansion approach, denoted as
query2doc , to improve both sparse and dense re-
trieval systems. The proposed ... |
2303.03378.pdf | PaLM-E: An Embodied Multimodal Language Model
Danny Driess1,2 Fei Xia1 Mehdi S. M. Sajjadi3 Corey Lynch1 Aakanksha Chowdhery3
Brian Ichter1 Ayzaan Wahid1 Jonathan Tompson1 Quan Vuong1 Tianhe Yu1 Wenlong Huang1
Yevgen Chebotar1 Pierre Sermanet1 Daniel Duckworth3 Sergey Levine1 Vincent Vanhoucke1
Karol Hausman1 Marc Toussaint2 Klaus Gr... |
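Each row above pairs a `filename` with the `text` extracted from that PDF. As a minimal sketch of how a two-column split like this could be loaded and inspected with the Hugging Face `datasets` library; the repository id `user/papers-corpus` is a hypothetical placeholder, since the dataset's actual name is not shown on this page:

```python
# Minimal sketch: load and inspect a (filename, text) dataset like the preview above.
# "user/papers-corpus" is a hypothetical repository id; substitute the real one.
from datasets import load_dataset

ds = load_dataset("user/papers-corpus", split="train")

# Each row pairs a source PDF name with its extracted text.
for row in ds.select(range(3)):
    print(row["filename"], "->", row["text"][:80].replace("\n", " "))
```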