2305.13048.pdf
RWKV: Reinventing RNNs for the Transformer Era Bo Peng1∗ Eric Alcaide2,3,4∗ Quentin Anthony2,5∗ Alon Albalak2,6 Samuel Arcadinho2,7 Huanqi Cao8 Xin Cheng9 Michael Chung10 Matteo Grella11 Kranthi Kiran GV12 Xuzheng He2 Haowen Hou13 Przemysław Kazienko14 Jan Kocoń14 Jiaming Kong15 Bartłomiej Koptyra14 Hayden Lau2 Krishna Sri Ipsit M...
2023.12.07.570727v1.full.pdf
ProteinGym: Large-Scale Benchmarks for Protein Design and Fitness Prediction Pascal Notin†∗ Computer Science, University of Oxford Aaron W. Kollasch† Systems Biology, Harvard Medical School Daniel Ritter† Systems Biology, Harvard Medical School Lood van Niekerk† Systems Biology, Harvard Medical School Steffanie Paul Syste...
1511.06349.pdf
Generating Sentences from a Continuous Space Samuel R. Bowman∗ NLP Group and Dept. of Linguistics Stanford University sbowman@stanford.edu Luke Vilnis∗ CICS University of Massachusetts Amherst luke@cs.umass.edu Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz & Samy Bengio Google Brain {vinyals, adai, rafalj, bengio}@goo...
2402.16819.pdf
Nemotron-4 15B Technical Report Jupinder Parmar* Shrimai Prabhumoye∗ Joseph Jennings∗ Mostofa Patwary∗ Sandeep Subramanian† Dan Su Chen Zhu Deepak Narayanan Aastha Jhunjhunwala Ayush Dattagupta Vibhu Jawa Jiwei Liu Ameya Mahabaleshwarkar Osvald Nitski Annika Brundyn James Maki Miguel Martinez Jiaxuan You John Kamalu Patric...
2203.05482.pdf
Model soups: averaging weights of multiple fine-tuned models improves accuracy without increasing inference time Mitchell Wortsman1 Gabriel Ilharco1 Samir Yitzhak Gadre2 Rebecca Roelofs3 Raphael Gontijo-Lopes3 Ari S. Morcos4 Hongseok Namkoong2 Ali Farhadi1 Yair Carmon*5 Simon Kornblith*3 Ludwig Schmidt*1 Abstract The conventi...
noise-contrastive-estimation.pdf
Journal of Machine Learning Research 13 (2012) 307-361 Submitted 12/10; Revised 11/11; Published 2/12 Noise-Contrastive Estimation of Unnormalized Statistical Models, with Applications to Natural Image Statistics Michael U. Gutmann MICHAEL.GUTMANN@HELSINKI.FI Aapo Hyvärinen AAPO.HYVARINEN@HELSINKI.FI Department of Computer S...
1611.03530.pdf
UNDERSTANDING DEEP LEARNING REQUIRES RE-THINKING GENERALIZATION Chiyuan Zhang∗ Massachusetts Institute of Technology chiyuan@mit.edu Samy Bengio Google Brain bengio@google.com Moritz Hardt Google Brain mrtz@google.com Benjamin Recht† University of California, Berkeley brecht@berkeley.edu Oriol Vinyals Goo...
2310.03214.pdf
Preprint FRESHLLMS: REFRESHING LARGE LANGUAGE MODELS WITH SEARCH ENGINE AUGMENTATION Tu Vu1 Mohit Iyyer2 Xuezhi Wang1 Noah Constant1 Jerry Wei1 Jason Wei3∗ Chris Tar1 Yun-Hsuan Sung1 Denny Zhou1 Quoc Le1 Thang Luong1 Google1 University of Massachusetts Amherst2 OpenAI3 freshllms@google.com ABSTRACT Most large language models (...
2111.02080v6.pdf
An Explanation of In-context Learning as Implicit Bayesian Inference Sang Michael Xie Stanford University xie@cs.stanford.edu Aditi Raghunathan Stanford University aditir@stanford.edu Percy Liang Stanford University pliang@cs.stanford.edu Tengyu Ma Stanford University tengyuma@cs.stanford.edu Abstract Large language mode...
2110.04374.pdf
A Few More Examples May Be Worth Billions of Parameters Yuval Kirstain♠ Patrick Lewis†‡ Sebastian Riedel†‡ Omer Levy♠‡ ♠Tel-Aviv University †University College London ‡Facebook AI Research {yuval.kirstain,levyomer}@cs.tau.ac.il, {patrick.lewis,s.riedel}@cs.ucl.ac.uk Abstract We investigate the dynamics of increasing the...
10.7554.eLife.50524.001.pdf
*For correspondence: ronlevy@temple.edu Competing interests: The authors declare that no competing interests exist. Funding: See page 20 Received: 25 July 2019 Accepted: 09 September 2019 Published: 08 October 2019 Reviewing editor: Patricia J Wittkopp, University of Michigan, United States Copyright Biswas et al. This...
1608.03983v5.pdf
Published as a conference paper at ICLR 2017 SGDR: STOCHASTIC GRADIENT DESCENT WITH WARM RESTARTS Ilya Loshchilov & Frank Hutter University of Freiburg Freiburg, Germany, {ilya,fh}@cs.uni-freiburg.de ABSTRACT Restart techniques are common in gradient-free optimization to deal with multimodal functions. Partial warm ...
2402.05120.pdf
More Agents Is All You Need Junyou Li*1 Qin Zhang*1 Yangbin Yu1 Qiang Fu1 Deheng Ye1 Abstract We find that, simply via a sampling-and-voting method, the performance of large language models (LLMs) scales with the number of agents instantiated. Also, this method is orthogonal to existing complicated methods to further...
2305.18290.pdf
Direct Preference Optimization: Your Language Model is Secretly a Reward Model Rafael Rafailov∗† Archit Sharma∗† Eric Mitchell∗† Stefano Ermon†‡ Christopher D. Manning† Chelsea Finn† †Stanford University ‡CZ Biohub {rafailov,architsh,eric.mitchell}@cs.stanford.edu Abstract While large-scale unsupervised language models (LMs...
2111.12763.pdf
Sparse is Enough in Scaling Transformers Sebastian Jaszczur∗ University of Warsaw Aakanksha Chowdhery Google Research Afroz Mohiuddin Google Research Łukasz Kaiser∗ OpenAI Wojciech Gajewski Google Research Henryk Michalewski Google Research Jonni Kanerva Google Research Abstract Large Transformer models yield impressive res...
10.1126.science.aay8015.pdf
STRUCTURAL BIOLOGY Structural basis for strand-transfer inhibitor binding to HIV intasomes Dario Oliveira Passos1*, Min Li2*, Ilona K. Jóźwik1, Xue Zhi Zhao3, Diogo Santos-Martins4, Renbin Yang2, Steven J. Smith3, Youngmin Jeon1, Stefano Forli4, Stephen H. Hughes3, Terrence R. Burke Jr.3, Robert Craigie2, Dmitry Lyum...
2404.01413v2.pdf
Is Model Collapse Inevitable? Breaking the Curse of Recursion by Accumulating Real and Synthetic Data Matthias Gerstgrasser∗†, Rylan Schaeffer∗, Apratim Dey∗, Rafael Rafailov∗, Dhruv Pai Stanford University {mgerst,rschaef,apd1995,rafailov,dhruvpai}@stanford.edu Henry Sleight‡, John Hughes‡, Tomasz Korbak‡, Rajashree ...
10.1038.s42004-024-01098-2.pdf
ARTICLE Evolution shapes interaction patterns for epistasis and specific protein binding in a two-component signaling system Zhiqiang Yan1 & Jin Wang2✉ The elegant design of protein sequence/structure/function relationships arises from the interaction patterns between amino acid positions. A central question is how evol...
2305.10626.pdf
Language Models Meet World Models: Embodied Experiences Enhance Language Models Jiannan Xiang∗♠, Tianhua Tao∗♣, Yi Gu♠, Tianmin Shu♢△, Zirui Wang♠, Zichao Yang♡, Zhiting Hu♠ ♠UC San Diego, ♣UIUC, ♢MIT, △JHU, ♡CMU Abstract While large language models (LMs) have shown remarkable capabilities across numerous tasks, they often...
2306.14892.pdf
Supervised Pretraining Can Learn In-Context Reinforcement Learning Jonathan N. Lee∗1Annie Xie∗1Aldo Pacchiano2Yash Chandak1 Chelsea Finn1Ofir Nachum3Emma Brunskill1 1Stanford University,2Microsoft Research,3Google DeepMind Abstract Large transformer models trained on diverse datasets have shown a remarkable ability to ...
2405.03651v1.pdf
Published as a conference paper at ICLR 2024 ADAPTIVE RETRIEVAL AND SCALABLE INDEXING FOR k-NN SEARCH WITH CROSS-ENCODERS Nishant Yadav1∗, Nicholas Monath2, Manzil Zaheer2, Rob Fergus2, Andrew McCallum1 1University of Massachusetts Amherst, 2Google DeepMind ABSTRACT Cross-encoder (CE) models which compute similarity b...
1907.05600.pdf
Generative Modeling by Estimating Gradients of the Data Distribution Yang Song Stanford University yangsong@cs.stanford.edu Stefano Ermon Stanford University ermon@cs.stanford.edu Abstract We introduce a new generative model where samples are produced via Langevin dynamics using gradients of the data distribution estima...
2301.13196.pdf
Looped Transformers as Programmable Computers Angeliki Giannouw*, Shashank Rajputw∗, Jy-yong Sohnw, Kangwook Leew, Jason D. Leep, Dimitris Papailiopoulosw pPrinceton University wUniversity of Wisconsin-Madison January 31, 2023 Abstract We present a framework for using transformer networks as universal computers by prog...
1109.2146.pdf
Journal of Artificial Intelligence Research 24 (2005) 1-48 Submitted 11/04; published 07/05 CIXL2: A Crossover Operator for Evolutionary Algorithms Based on Population Features Domingo Ortiz-Boyer dortiz@uco.es César Hervás-Martínez chervas@uco.es Nicolás García-Pedrajas npedrajas@uco.es Department of Computi...
1804.00746v4.pdf
The Simple Essence of Automatic Differentiation Extended version∗ Conal Elliott Target conal@conal.net March, 2018 Abstract Automatic differentiation (AD) in reverse mode (RAD) is a central component of deep learning and other uses of large-scale optimization. Commonly used RAD algorithms such as backpropagation, however...
2302.08582.pdf
Pretraining Language Models with Human Preferences Tomasz Korbak1 2 3 Kejian Shi2 Angelica Chen2 Rasika Bhalerao4 Christopher L. Buckley1 Jason Phang2 Samuel R. Bowman2 5 Ethan Perez2 3 5 Abstract Language models (LMs) are pretrained to imitate internet text, including content that would violate human preferences if genera...
10.1101.2024.02.06.579080.pdf
Direct Coupling Analysis and the Attention Mechanism Francesco Caredda1† and Andrea Pagnani1,2,3† 1DISAT, Politecnico di Torino, Corso Duca degli Abruzzi, 24, I-10129, Torino, Italy 2Italian Institute for Genomic Medicine, IRCCS Candiolo, SP-142, I-10060, Candiolo, Italy 3INFN, Sezione di Torino, Torino, Via P...
IN-Tetramer-manuscript-merged-with-figures-bioRxiv.pdf
Oligomeric HIV-1 Integrase Structures Reveal Functional Plasticity for Intasome Assembly and RNA Binding Tao Jing1‡, Zelin Shan1‡, Tung Dinh4, Avik Biswas1, Sooin Jang5,6, Juliet Greenwood5, Min Li7, Zeyuan Zhang1, Gennavieve Gray1, Hye Jeong Shin1, Bo Zhou1, Dario Passos1, Sriram Aiyer1, Zhen Li5, Robert Craigie7...
2402.07871.pdf
SCALING LAWS FOR FINE-GRAINED MIXTURE OF EXPERTS Jakub Krajewski∗ University of Warsaw IDEAS NCBR Jan Ludziejewski∗ University of Warsaw IDEAS NCBR Kamil Adamczewski IDEAS NCBR Maciej Pióro IPPT PAN IDEAS NCBR Michał Krutul University of Warsaw IDEAS NCBR Szymon Antoniak University of Warsaw IDEAS NCBR Kamil Ciebiera Univ...
2105.14111.pdf
Goal Misgeneralization in Deep Reinforcement Learning Lauro Langosco*1 Jack Koch* Lee Sharkey*2 Jacob Pfau3 Laurent Orseau4 David Krueger1 Abstract We study goal misgeneralization, a type of out-of-distribution generalization failure in reinforcement learning (RL). Goal misgeneralization occurs when an RL agent reta...
2402.09727.pdf
2024-02-14 A Human-Inspired Reading Agent with Gist Memory of Very Long Contexts Kuang-Huei Lee1, Xinyun Chen1, Hiroki Furuta1, John Canny1and Ian Fischer2 1Google DeepMind,2Google Research Correspond to: {leekh, iansf}@google.com; Author contributions are stated in Appendix J. Website: read-agent.github.io Current Lar...
2402.09900.pdf
Revisiting Recurrent Reinforcement Learning with Memory Monoids Steven Morad1 Chris Lu2 Ryan Kortvelesy1 Stephan Liwicki3 Jakob Foerster2 Amanda Prorok1 Abstract Memory models such as Recurrent Neural Networks (RNNs) and Transformers address Partially Observable Markov Decision Processes (POMDPs) by mapping trajectories...
2305.11841.pdf
How Does Generative Retrieval Scale to Millions of Passages? Ronak Pradeep∗†§, Kai Hui∗, Jai Gupta, Adam D. Lelkes, Honglei Zhuang Jimmy Lin§, Donald Metzler, Vinh Q. Tran∗ Google Research, §University of Waterloo rpradeep@uwaterloo.ca, {kaihuibj,vqtran}@google.com Abstract Popularized by the Differentiable Search Ind...
10.1016.j.bpj.2017.10.028.pdf
Article Coevolutionary Landscape of Kinase Family Proteins: Sequence Probabilities and Functional Motifs Allan Haldane,1 William F. Flynn,1,2 Peng He,1 and Ronald M. Levy1,* 1Center for Biophysics and Computational Biology, Department of Chemistry, and Institute for Computational Molecular Science, Temple University, Phila...
2024.04.22.590591v1.full.pdf
Design of highly functional genome editors by modeling the universe of CRISPR-Cas sequences Jeffrey A. Ruffolo1,*, Stephen Nayfach1,*, Joseph Gallagher1,*, Aadyot Bhatnagar1,*, Joel Beazer1, Riffat Hussain1, Jordan Russ1, Jennifer Yip1, Emily Hill1, Martin Pacesa1,2, Alexander J. Meeske1,3, Peter Cameron1, and Ali Mada...
2208.11970.pdf
Understanding Diffusion Models: A Unified Perspective Calvin Luo Google Research, Brain Team calvinluo@google.com August 26, 2022 Contents Introduction: Generative Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1 Background: ELBO, VAE, and Hierarchical VAE . . . . . . . . . . . . . . . . . . ....
2303.06296.pdf
STABILIZING TRANSFORMER TRAINING BY PREVENTING ATTENTION ENTROPY COLLAPSE A PREPRINT Shuangfei Zhai∗, Tatiana Likhomanenko∗, Etai Littwin∗, Dan Busbridge∗, Jason Ramapuram∗, Yizhe Zhang, Jiatao Gu, Josh Susskind Apple {szhai,antares,elittwin,dbusbridge,jramapuram,yizzhang,jgu32,jsusskind}@apple.com March 14, 2023 ABST...
2005.12320.pdf
SCAN: Learning to Classify Images without Labels Wouter Van Gansbeke1⋆ Simon Vandenhende1⋆ Stamatios Georgoulis2 Marc Proesmans1 Luc Van Gool1,2 1KU Leuven/ESAT-PSI 2ETH Zurich/CVL, TRACE Abstract. Can we automatically group images into semantically meaningful clusters when ground-truth annotations are absent? The task o...
2212.05339.pdf
Elixir: Train a Large Language Model on a Small GPU Cluster Haichen Huang HPC-AI Technology Inc. hhc@hpcaitech.com Jiarui Fang∗ HPC-AI Technology Inc. fangjr@hpcaitech.com Hongxin Liu HPC-AI Technology Inc. liuhongxin@hpcaitech.com Shenggui Li HPC-AI Technology Inc. lisg@hpcaitech.com Yang You† National University of Sing...
2310.12442.pdf
Efficient Long-Range Transformers: You Need to Attend More, but Not Necessarily at Every Layer Qingru Zhang†∗, Dhananjay Ram⋄, Cole Hawkins⋄, Sheng Zha⋄, Tuo Zhao† †Georgia Institute of Technology⋄Amazon Web Service {qingru.zhang,tourzhao}@gatech.edu {radhna,colehawk,zhasheng}@amazon.com Abstract Pretrained transformer...
1703.03400.pdf
Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks Chelsea Finn1 Pieter Abbeel1 2 Sergey Levine1 Abstract We propose an algorithm for meta-learning that is model-agnostic, in the sense that it is compatible with any model trained with gradient descent and applicable to a variety of different learning p...
Evolutionary-Principles-in-Self-Referential-Learning.pdf
Evolutionary Principles in Self-Referential Learning (Diploma Thesis) Jürgen Schmidhuber Technische Universität München May 14, 1987
2211.03540.pdf
Measuring Progress on Scalable Oversight for Large Language Models Samuel R. Bowman∗, Jeeyoon Hyun, Ethan Perez, Edwin Chen,† Craig Pettit,† Scott Heiner,† Kamilė Lukošiūtė,‡ Amanda Askell, Andy Jones, Anna Chen, Anna Goldie, Azalia Mirhoseini, Cameron McKinnon, Christopher Olah, Daniela Amodei, Dario Amodei, Dawn Dr...
2310.05869.pdf
HyperAttention: Long-context Attention in Near-Linear Time Insu Han Yale University insu.han@yale.edu Rajesh Jayaram Google Research rkjayaram@google.com Amin Karbasi Yale University, Google Research amin.karbasi@yale.edu Vahab Mirrokni Google Research mirrokni@google.com David P. Woodruff CMU, Google Research dwoodruf@cs...
2310.13548.pdf
TOWARDS UNDERSTANDING SYCOPHANCY IN LANGUAGE MODELS Mrinank Sharma∗, Meg Tong∗, Tomasz Korbak, David Duvenaud Amanda Askell, Samuel R. Bowman, Newton Cheng, Esin Durmus, Zac Hatfield-Dodds, Scott R. Johnston, Shauna Kravec, Timothy Maxwell, Sam McCandlish, Kamal Ndousse, Oliver Rausch, Nicholas Schiefer, Da Yan, Mirand...
2005.11401.pdf
Retrieval-Augmented Generation for Knowledge-Intensive NLP Tasks Patrick Lewis†‡, Ethan Perez⋆, Aleksandra Piktus†, Fabio Petroni†, Vladimir Karpukhin†, Naman Goyal†, Heinrich Küttler†, Mike Lewis†, Wen-tau Yih†, Tim Rocktäschel†‡, Sebastian Riedel†‡, Douwe Kiela† †Facebook AI Research;‡University College London;⋆New Y...
2304.07313.pdf
M2T: Masking Transformers Twice for Faster Decoding Fabian Mentzer Google Research mentzer@google.com Eirikur Agustsson Google Research eirikur@google.com Michael Tschannen Google Research tschannen@google.com Abstract We show how bidirectional transformers trained for masked token prediction can be applied to neural ima...
2303.15343v4.pdf
Sigmoid Loss for Language Image Pre-Training Xiaohua Zhai⋆ Basil Mustafa Alexander Kolesnikov Lucas Beyer⋆ Google DeepMind, Zürich, Switzerland {xzhai, basilm, akolesnikov, lbeyer}@google.com Abstract We propose a simple pairwise Sigmoid loss for Language-Image Pre-training (SigLIP). Unlike standard contrastive learn...
2404.11018v1.pdf
2024-4-18 Many-Shot In-Context Learning Rishabh Agarwal*, Avi Singh*, Lei M. Zhang†, Bernd Bohnet†, Stephanie Chan†, Ankesh Anand, Zaheer Abbas, Azade Nova, John D. Co-Reyes, Eric Chu, Feryal Behbahani, Aleksandra Faust and Hugo Larochelle *Contributed equally, †Core contribution Large language models (LLMs) excel at fe...
2305.09836.pdf
Revisiting the Minimalist Approach to Offline Reinforcement Learning Denis Tarasov Vladislav Kurenkov Alexander Nikulin Sergey Kolesnikov Tinkoff {den.tarasov, v.kurenkov, a.p.nikulin, s.s.kolesnikov}@tinkoff.ai Abstract Recent years have witnessed significant advancements in offline reinforcement learning (RL), result...
2002.05202.pdf
arXiv:2002.05202v1 [cs.LG] 12 Feb 2020 GLU Variants Improve Transformer Noam Shazeer Google noam@google.com February 14, 2020 Abstract Gated Linear Units [Dauphin et al., 2016] consist of the component-wise product of two linear projections, one of which is first passed through a sigmoid function. Variations on GLU...
978-3-642-41822-8-15.pdf
Auto-encoder Based Data Clustering Chunfeng Song1, Feng Liu2, Yongzhen Huang1, Liang Wang1, and Tieniu Tan1 1National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences, Beijing, 100190, China 2School of Automation, Southeast University, Nanjing, 210096, China Abstr...
sutskever10a.pdf
On the Convergence Properties of Contrastive Divergence Ilya Sutskever Tijmen Tieleman University of Toronto University of Toronto Abstract Contrastive Divergence (CD) is a popular method for estimating the parameters of Markov Random Fields (MRFs) by rapidly approximating an intractable term in the gradi...
2306.02572.pdf
Les Houches Summer School Lecture Notes 2022 Preprint Introduction to Latent Variable Energy-Based Models: A Path Towards Autonomous Machine Intelligence Anna Dawid1,2and Yann LeCun3,4⋆ 1ICFO - Institut de Ciències Fotòniques, The Barcelona Institute of Science and Technology, Av. Carl Friedrich Gauss 3, 08860 Castelld...
2306.16922.pdf
THE EXPRESSIVE LEAKY MEMORY NEURON: AN EFFICIENT AND EXPRESSIVE PHENOMENOLOGICAL NEURON MODEL CAN SOLVE LONG-HORIZON TASKS Aaron Spieler1,2, Nasim Rahaman3,2, Georg Martius1,2, Bernhard Schölkopf2, and Anna Levina1,4 1University of Tübingen 2Max Planck Institute for Intelligent Systems, Tübingen 3Mila, Quebec AI Institu...
2004.04906.pdf
Dense Passage Retrieval for Open-Domain Question Answering Vladimir Karpukhin∗, Barlas Oğuz∗, Sewon Min†, Patrick Lewis, Ledell Wu, Sergey Edunov, Danqi Chen‡, Wen-tau Yih Facebook AI †University of Washington ‡Princeton University {vladk, barlaso, plewis, ledell, edunov, scottyih}@fb.com sewon@cs.washington.edu danqi...
427986745-768441298640104-1604906292521363076-n.pdf
Revisiting Feature Prediction for Learning Visual Representations from Video Adrien Bardes1,2,3,Quentin Garrido1,4,Jean Ponce3,5,6,Xinlei Chen1,Michael Rabbat1,Yann LeCun1,5,6, Mahmoud Assran1,†,Nicolas Ballas1,† 1FAIR at Meta,2Inria,3École normale supérieure, CNRS, PSL Research University,4Univ. Gustave Eiffel, CNRS, ...
2206.02326.pdf
arXiv:2206.02326v1 [cs.LG] 6 Jun 2022 Asymptotic Instance-Optimal Algorithms for Interactive Decision Making Kefan Dong Stanford University kefandong@stanford.edu Tengyu Ma Stanford University tengyuma@stanford.edu June 7, 2022 Abstract Past research on interactive decision making problems (bandits, reinforcement lea...
2205.05131.pdf
UL2: Unifying Language Learning Paradigms Yi Tay∗Mostafa Dehghani∗ Vinh Q. Tran♯Xavier Garcia♯Jason Wei♯Xuezhi Wang♯Hyung Won Chung♯ Siamak Shakeri♯Dara Bahri♭Tal Schuster♭Huaixiu Steven Zheng△ Denny Zhou△Neil Houlsby△Donald Metzler△ Google Brain Abstract Existing pre-trained models are generally geared towards a parti...
2304.01373.pdf
Pythia : A Suite for Analyzing Large Language Models Across Training and Scaling Stella Biderman* 1 2Hailey Schoelkopf* 1 3Quentin Anthony1Herbie Bradley1 4Kyle O’Brien1 Eric Hallahan1Mohammad Aflah Khan5Shivanshu Purohit6 1USVSN Sai Prashanth1Edward Raff2 Aviya Skowron1Lintang Sutawika1 7Oskar van der Wal8 Abstract Ho...
2306.14846.pdf
ViNT: A Foundation Model for Visual Navigation Dhruv Shah†, Ajay Sridhar†, Nitish Dashora†, Kyle Stachowicz, Kevin Black, Noriaki Hirose, Sergey Levine UC Berkeley Abstract: General-purpose pre-trained models (“foundation models”) have enabled practitioners to produce generalizable solutions for individual machine le...
2207.08286.pdf
An Overview of Distant Supervision for Relation Extraction with a Focus on Denoising and Pre-training Methods William P Hogan Department of Computer Science & Engineering University of California, San Diego Abstract Relation Extraction (RE) is a foundational task of natural language processing. RE seeks to transform ra...
2210.10760.pdf
Scaling Laws for Reward Model Overoptimization Leo Gao OpenAIJohn Schulman OpenAIJacob Hilton OpenAI Abstract In reinforcement learning from human feedback, it is common to optimize against a reward model trained to predict human preferences. Because the reward model is an imperfect proxy, optimizing its value too much...
10.1038.s41588-023-01649-8.pdf
Nature Genetics | Volume 56 | March 2024 | 483–492 nature genetics https://doi.org/10.1038/s41588-023-01649-8 Article In vitro reconstitution of chromatin domains shows a role for nucleosome positioning in 3D genome organization Elisa Oberbeckmann1,4, Kimberly Quililan2,3,4, Patrick Cramer1 & A. Marieke Oud...
2306.01708.pdf
Resolving Interference When Merging Models Prateek Yadav1 Derek Tam1 Leshem Choshen2 Colin Raffel1 Mohit Bansal1 1University of North Carolina at Chapel Hill 2IBM Research leshem.choshen@il.ibm.com {praty,dtredsox,craffel,mbansal}@cs.unc.edu Abstract Transfer learning – i.e., further fine-tuning a pre-trained model on a do...
2312.10003.pdf
REST MEETS REACT: SELF-IMPROVEMENT FOR MULTI-STEP REASONING LLM AGENT Renat Aksitov†1, Sobhan Miryoosefi†1, Zonglin Li†1, Daliang Li†1, Sheila Babayan†2, Kavya Kopparapu†2, Zachary Fisher1, Ruiqi Guo1, Sushant Prakash1, Pranesh Srinivasan3, Manzil Zaheer2, Felix Yu1, and Sanjiv Kumar1 1Google Research, 2Google DeepMin...
2310.11564.pdf
PERSONALIZED SOUPS: PERSONALIZED LARGE LANGUAGE MODEL ALIGNMENT VIA POST-HOC PARAMETER MERGING Joel Jang1,2 Seungone Kim3 Bill Yuchen Lin2 Yizhong Wang1 Jack Hessel2 Luke Zettlemoyer1 Hannaneh Hajishirzi1,2 Yejin Choi1,2 Prithviraj Ammanabrolu4 1University of Washington 2Allen Institute for AI 3KAIST AI 4UC San Diego joeljang@c...
2401.05300.pdf
I am a Strange Dataset: Metalinguistic Tests for Language Models Tristan Thrush§, Jared Moore§, Miguel Monares†‡, Christopher Potts§, Douwe Kiela§¶ §Stanford University; †UC San Diego; ‡Playtest AI; ¶Contextual AI tthrush@stanford.edu Abstract Statements involving metalinguistic self- reference (“This paper has six sec...
2306.00238.pdf
Bytes Are All You Need: Transformers Operating Directly On File Bytes Maxwell Horton, Sachin Mehta, Ali Farhadi, Mohammad Rastegari Apple Abstract Modern deep learning approaches usually transform inputs into a modality-specific form. For example, the most common deep learning approach to image classification involve...
2305.12387.pdf
Optimal Time Complexities of Parallel Stochastic Optimization Methods Under a Fixed Computation Model Alexander Tyurin KAUST Saudi Arabia alexandertiurin@gmail.com Peter Richtárik KAUST Saudi Arabia richtarik@gmail.com Abstract Parallelization is a popular strategy for improving the performance of iterative algorithms...
2104.08821.pdf
SimCSE: Simple Contrastive Learning of Sentence Embeddings Tianyu Gao†∗ Xingcheng Yao‡∗ Danqi Chen† †Department of Computer Science, Princeton University ‡Institute for Interdisciplinary Information Sciences, Tsinghua University {tianyug,danqic}@cs.princeton.edu yxc18@mails.tsinghua.edu.cn Abstract This paper presents Si...
1912.02292.pdf
DEEP DOUBLE DESCENT: WHERE BIGGER MODELS AND MORE DATA HURT Preetum Nakkiran∗ Harvard University Gal Kaplun† Harvard University Yamini Bansal† Harvard University Tristan Yang Harvard University Boaz Barak Harvard University Ilya Sutskever OpenAI ABSTRACT We show that a variety of modern deep learning tasks exhibit a “doubl...
2401.10241.pdf
ZERO BUBBLE PIPELINE PARALLELISM Penghui Qi∗, Xinyi Wan∗, Guangxing Huang & Min Lin Sea AI Lab {qiph,wanxy,huanggx,linmin}@sea.com ABSTRACT Pipeline parallelism is one of the key components for large-scale distributed training, yet its efficiency suffers from pipeline bubbles which were deemed inevitable. In this wo...
2310.01352v4.pdf
Published as a conference paper at ICLR 2024 RA-DIT: RETRIEVAL-AUGMENTED DUAL INSTRUCTION TUNING Xi Victoria Lin∗ Xilun Chen∗ Mingda Chen∗ Weijia Shi Maria Lomeli Rich James Pedro Rodriguez Jacob Kahn Gergely Szilvasy Mike Lewis Luke Zettlemoyer Scott Yih FAIR at Meta {victorialin,xilun,mingdachen,scottyih}@meta.co...
2304.09871.pdf
A Theory on Adam Instability in Large-Scale Machine Learning Igor Molybog∗, Peter Albert, Moya Chen, Zachary DeVito, David Esiobu, Naman Goyal, Punit Singh Koura, Sharan Narang, Andrew Poulton, Ruan Silva, Binh Tang, Diana Liskovich, Puxin Xu, Yuchen Zhang, Melanie Kambadur, Stephen Roller, Susan Zhang Meta AI April 26...
2024.02.27.582234v2.full.pdf
Sequence modeling and design from molecular to genome scale with Evo Eric Nguyen∗,1,2, Michael Poli∗,3, Matthew G. Durrant∗,2, Armin W. Thomas1, Brian Kang1, Jeremy Sullivan2, Madelena Y. Ng1, Ashley Lewis1, Aman Patel1, Aaron Lou1, Stefano Ermon1,4, Stephen A. Baccus1, Tina Hernandez-Boussard1, Christopher Ré1, Patric...
2203.12644.pdf
Linearizing Transformer with Key-Value Memory Yizhe Zhang∗ Meta AI yizhezhang@fb.com Deng Cai∗ The Chinese University of Hong Kong thisisjcykcd@gmail.com Abstract Efficient transformer variants with linear time complexity have been developed to mitigate the quadratic computational overhead of the vanilla transformer. Amo...
1909.05215.pdf
Published as a conference paper at ICLR 2020 RECONSTRUCTING CONTINUOUS DISTRIBUTIONS OF 3D PROTEIN STRUCTURE FROM CRYO-EM IMAGES Ellen D. Zhong MIT zhonge@mit.edu Tristan Bepler MIT tbepler@mit.edu Joseph H. Davis∗ MIT jhdavis@mit.edu Bonnie Berger∗ MIT bab@mit.edu ABSTRACT Cryo-electron microscopy (cryo-EM) is a powerfu...
2104.08663v2.pdf
BEIR: A Heterogenous Benchmark for Zero-shot Evaluation of Information Retrieval Models Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, Iryna Gurevych Ubiquitous Knowledge Processing Lab (UKP-TUDA) Department of Computer Science, Technische Universität Darmstadt www.ukp.tu-darmstadt.de Abstract Neural...
2402.08609.pdf
2024-2-14 Mixtures of Experts Unlock Parameter Scaling for Deep RL Johan Obando-Ceron*,1,2,3, Ghada Sokar*,1, Timon Willi*,4, Clare Lyle1, Jesse Farebrother1,2,5, Jakob Foerster4, Gintare Karolina Dziugaite1,2,5, Doina Precup1,2,5and Pablo Samuel Castro1,2,3 *Equal contributions,1Google DeepMind,2Mila - Québec AI Insti...
2306.17563.pdf
arXiv:2306.17563v1 [cs.IR] 30 Jun 2023 Preprint LARGE LANGUAGE MODELS ARE EFFECTIVE TEXT RANKERS WITH PAIRWISE RANKING PROMPTING Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, Michael Bendersky Google Research {zhenqin,jagerman,kaihuibj,...
2310.07096.pdf
Sparse Universal Transformer Shawn Tan1* tanjings@mila.quebec Yikang Shen2* yikang.shen@ibm.com Zhenfang Chen2 zfchen@ibm.com Aaron Courville1 courvila@iro.umontreal.ca Chuang Gan2 chuangg@ibm.com 1Mila, University of Montreal 2MIT-IBM Watson AI Lab Abstract The Universal Transformer (UT) is a variant of the Transformer ...
1502.05767.pdf
arXiv:1502.05767v4 [cs.SC] 5 Feb 2018 Automatic Differentiation in Machine Learning: a Survey Atılım Güneş Baydin gunes@robots.ox.ac.uk Department of Engineering Science University of Oxford Oxford OX1 3PJ, United Kingdom Barak A. Pearlmutter barak@pearlmutter.net Department of Computer Science National University ...
1611.03852v3.pdf
arXiv:1611.03852v3 [cs.LG] 25 Nov 2016 A Connection Between Generative Adversarial Networks, Inverse Reinforcement Learning, and Energy-Based Models Chelsea Finn∗, Paul Christiano∗, Pieter Abbeel, Sergey Levine University of California, Berkeley {cbfinn,paulfchristiano,pabbeel,svlevine}@eecs.berkeley.edu Abstract Generative adversa...
1601.00670.pdf
Variational Inference: A Review for Statisticians David M. Blei Department of Computer Science and Statistics Columbia University Alp Kucukelbir Department of Computer Science Columbia University Jon D. McAuliffe Department of Statistics University of California, Berkeley May 11, 2018 Abstract One of the core problems ...
2310.17722.pdf
LARGE LANGUAGE MODELS AS GENERALIZABLE POLICIES FOR EMBODIED TASKS Andrew Szot, Max Schwarzer, Harsh Agrawal, Bogdan Mazoure, Walter Talbott Katherine Metcalf, Natalie Mackraz, Devon Hjelm, Alexander Toshev Apple ABSTRACT We show that large language models (LLMs) can be adapted to be generalizable policies for embodied...
RFeynman-plentySpace.pdf
Plenty of Room at the Bottom Richard P. Feynman (Dated: Dec. 1959) This is the transcript of a talk presented by Richard P. Feynman to the American Physical Society in Pasadena on December 1959, which explores the immense possibilities afforded by miniaturization. I imagine experimental physicists must often look with e...
NIPS-2017-deep-reinforcement-learning-from-human-preferences-Paper.pdf
Deep Reinforcement Learning from Human Preferences Paul F Christiano OpenAI paul@openai.com Jan Leike DeepMind leike@google.com Tom B Brown Google Brain∗ tombbrown@google.com Miljan Martic DeepMind miljanm@google.com Shane Legg DeepMind legg@google.com Dario Amodei OpenAI damodei@openai.com Abstract For sophisticated reinf...
2403.20327.pdf
Gecko: Versatile Text Embeddings Distilled from Large Language Models Jinhyuk Lee*, Zhuyun Dai*, Xiaoqi Ren*, Blair Chen, Daniel Cer, Jeremy R. Cole, Kai Hui, Michael Boratko, Rajvi Kapadia, Wen Ding, Yi Luan, Sai Meher Karthik Duddu, Gustavo Hernandez Abrego, Weiqiang Shi, Nithi Gupta, Aditya Kusupati, Prateek Jain, S...
1509.02971.pdf
Published as a conference paper at ICLR 2016 CONTINUOUS CONTROL WITH DEEP REINFORCEMENT LEARNING Timothy P. Lillicrap∗, Jonathan J. Hunt∗, Alexander Pritzel, Nicolas Heess, Tom Erez, Yuval Tassa, David Silver & Daan Wierstra Google Deepmind London, UK {countzero, jjhunt, apritzel, heess, etom, tassa, davidsilver, wiers...
10.1101.2024.03.07.584001.pdf
Protein language models are biased by unequal sequence sampling across the tree of life Frances Ding frances@berkeley.edu Department of Electrical Engineering and Computer Sciences University of California, Berkeley Jacob Steinhardt jsteinhardt@berkeley.edu Departments of Statistics and Electrical Engineering and Compu...
2305.14314.pdf
QLORA: Efficient Finetuning of Quantized LLMs Tim Dettmers∗ Artidoro Pagnoni∗ Ari Holtzman Luke Zettlemoyer University of Washington {dettmers,artidoro,ahai,lsz}@cs.washington.edu Abstract We present QLORA, an efficient finetuning approach that reduces memory usage enough to finetune a 65B parameter model on a single ...
2312.16682.pdf
Some things are more CRINGE than others: Preference Optimization with the Pairwise Cringe Loss Jing Xu1 Andrew Lee1 Sainbayar Sukhbaatar1 Jason Weston1 Abstract Practitioners commonly align large language models using pairwise preferences, i.e., given labels of the type response A is preferred to response B for a given ...
5175-reward-design-with-language-mo.pdf
Published as a conference paper at ICLR 2023 REWARD DESIGN WITH LANGUAGE MODELS Minae Kwon, Sang Michael Xie, Kalesha Bullard†, Dorsa Sadigh Stanford University, DeepMind† {minae,xie,dorsa}@cs.stanford.edu, ksbullard@deepmind.com† ABSTRACT Reward design in reinforcement learning (RL) is challenging since specifying h...
2005.14165.pdf
Language Models are Few-Shot Learners Tom B. Brown∗ Benjamin Mann∗ Nick Ryder∗ Melanie Subbiah∗ Jared Kaplan† Prafulla Dhariwal Arvind Neelakantan Pranav Shyam Girish Sastry Amanda Askell Sandhini Agarwal Ariel Herbert-Voss Gretchen Krueger Tom Henighan Rewon Child Aditya Ramesh Daniel M. Ziegler Jeffrey Wu Clemens Winter ...
2304.14767.pdf
Dissecting Recall of Factual Associations in Auto-Regressive Language Models Mor Geva1 Jasmijn Bastings1 Katja Filippova1 Amir Globerson2,3 1Google DeepMind 2Tel Aviv University 3Google Research {pipek, bastings, katjaf, amirg}@google.com Abstract Transformer-based language models (LMs) are known to capture factual knowledg...
2307.00524.pdf
Large Language Models Enable Few-Shot Clustering Vijay Viswanathan1, Kiril Gashteovski2, Carolin Lawrence2, Tongshuang Wu1, Graham Neubig1,3 1Carnegie Mellon University, 2NEC Laboratories Europe, 3Inspired Cognition Abstract Unlike traditional unsupervised clustering, semi-supervised clustering allows users to provide...
deep-boltzmann-machines.pdf
Deep Boltzmann Machines Ruslan Salakhutdinov Department of Computer Science University of Toronto rsalakhu@cs.toronto.edu Geoffrey Hinton Department of Computer Science University of Toronto hinton@cs.toronto.edu Abstract We present a new learning algorithm for Boltzmann machines that contain many layers of hidden variables. Da...
2305.16264.pdf
Scaling Data-Constrained Language Models Niklas Muennighoff1 Alexander M. Rush1 Boaz Barak2 Teven Le Scao1 Aleksandra Piktus1 Nouamane Tazi1 Sampo Pyysalo3 Thomas Wolf1 Colin Raffel1 1Hugging Face 2Harvard University 3University of Turku n.muennighoff@gmail.com Abstract The current trend of scaling language models involves incr...
Estimation-of-Entropy-and-Mutual-Information.pdf
ARTICLE Communicated by Jonathan Victor Estimation of Entropy and Mutual Information Liam Paninski liam@cns.nyu.edu Center for Neural Science, New York University, New York, NY 10003, U.S.A. We present some new results on the nonparametric estimation of entropy and mutual information. First, we use an exact local expan...