| filename | text |
|---|---|
2402.11960v1.pdf | DB-LLM: Accurate Dual-Binarization for Efficient LLMs
Hong Chen1*, Chengtao Lv1*, Liang Ding2, Haotong Qin1, Xiabin Zhou4,
Yifu Ding1, Xuebo Liu3, Min Zhang3, Jinyang Guo1, Xianglong Liu1†, Dacheng Tao2
1Beihang University, 2The University of Sydney,
3Harbin Institute of Technology, Shenzhen, 4Jiangsu University
{18373205, ... |
2210.13382.pdf | Published as a conference paper at ICLR 2023
EMERGENT WORLD REPRESENTATIONS: EXPLORING A
SEQUENCE MODEL TRAINED ON A SYNTHETIC TASK
Kenneth Li∗ (Harvard University), Aspen K. Hopkins (Massachusetts Institute of Technology),
David Bau (Northeastern University), Fernanda Viégas (Harvard University),
Hanspeter Pfister (Harvard Universi... |
1809.04281.pdf | MUSIC TRANSFORMER:
GENERATING MUSIC WITH LONG-TERM STRUCTURE
Cheng-Zhi Anna Huang∗, Ashish Vaswani, Jakob Uszkoreit, Noam Shazeer
Ian Simon Curtis Hawthorne Andrew M. Dai Matthew D. Hoffman
Monica Dinculescu Douglas Eck
Google Brain
ABSTRACT
Music relies heavily on repetition to build structure and meaning. Self-referenc... |
NeurIPS-2022-training-language-models-to-follow-instructions-with-human-feedback-Paper-Conference.pdf | Training language models to follow instructions
with human feedback
Long Ouyang∗, Jeff Wu∗, Xu Jiang∗, Diogo Almeida∗, Carroll L. Wainwright∗,
Pamela Mishkin∗, Chong Zhang, Sandhini Agarwal, Katarina Slama, Alex Ray,
John Schulman, Jacob Hilton, Fraser Kelton, Luke Miller, Maddie Simens,
Amanda Askell†, Peter Welinder, Paul Christiano∗†
Jan ... |
2305.12132.pdf | Can Public Large Language Models Help Private Cross-device
Federated Learning?
Boxin Wang3∗, Yibo Jacky Zhang4, Yuan Cao2, Bo Li3, H. Brendan McMahan1,
Sewoong Oh1, Zheng Xu1, Manzil Zaheer2
1Google Research,2Google Deepmind,3UIUC,4Stanford
Abstract
We study (differentially) private federated
learning (FL) of language ... |
2201.02867v3.pdf | Deep Generative Modeling for Volume
Reconstruction in Cryo-Electron Microscopy
Claire Donnat1+, Axel Levy2,3, Frédéric Poitevin3, Ellen Zhong4, and Nina Miolane5*+
1University of Chicago, Department of Statistics, Chicago, Illinois, USA
2Stanford University, Department of Electrical Engineering, Stanford, CA, USA
3LC... |
2304.02034.pdf | Effective Theory of Transformers at Initialization
Emily Dinan∗, Sho Yaida†, and Susan Zhang‡
Meta AI
Meta Platforms, Inc.§
We perform an effective-theory analysis of forward–backward signal propagation in wide
and deep Transformers, i.e., residual neural networks with multi-head self-attention blocks
and multilayer percep... |
2307.12950.pdf | RLCD: REINFORCEMENT LEARNING FROM CONTRAST
DISTILLATION FOR LANGUAGE MODEL ALIGNMENT
Kevin Yang1,2, Dan Klein1, Asli Celikyilmaz2, Nanyun Peng3, Yuandong Tian2
1UC Berkeley,2Meta AI,3UCLA
{yangk,klein}@berkeley.edu,{aslic,yuandong}@meta.com,violetpeng@cs.ucla.edu
ABSTRACT
We propose Reinforcement Learning from Contrast Distil... |
2206.14486.pdf | Beyond neural scaling laws:
beating power law scaling via data pruning
Ben Sorscher∗1, Robert Geirhos∗2, Shashank Shekhar3,
Surya Ganguli1,3§, Ari S. Morcos3§
∗equal contribution
1Department of Applied Physics, Stanford University
2University of Tübingen
3Meta AI (FAIR)
§Joint senior authors
Abstract
Widely observed neural ... |
2305.16381.pdf | DPOK: Reinforcement Learning for
Fine-tuning Text-to-Image Diffusion Models
Ying Fan˚,1,2, Olivia Watkins3, Yuqing Du3, Hao Liu3, Moonkyung Ryu1, Craig Boutilier1,
Pieter Abbeel3, Mohammad Ghavamzadeh1, Kangwook Lee2, Kimin Lee˚,1
˚Equal technical contribution
1Google Research2University of Wisconsin-Madison3UC Berkeley
A... |
10.1016.j.cell.2024.01.036.pdf | Article
Structure of the plant plastid-encoded RNA
polymerase
Graphical abstract
Highlights
• Structure of the chloroplast transcription complex
• Fifteen nuclear-encoded subunits encase the plastid-encoded polymerase
• Subunits PAP1 and PAP2 interact with the DNA and the mRNA, respectively
• Structure-guided insights into... |
99_on_recovering_higher_order_int.pdf | ON RECOVERING HIGHER-ORDER INTERACTIONS
FROM PROTEIN LANGUAGE MODELS
Darin Tsui & Amirali Aghazadeh
School of Electrical and Computer Engineering
Georgia Institute of Technology
Atlanta, GA 30332, USA
{darint,amiralia }@gatech.edu
ABSTRACT
Protein language models leverage evolutionary information to perform state-of-
t... |
langegabelriedmiller2011chapter.pdf | Batch Reinforcement Learning
Sascha Lange, Thomas Gabel, and Martin Riedmiller
Abstract Batch reinforcement learning is a subfield of dynamic programming-based
reinforcement learning. Originally defined as the task of learning the best possible
policy from a fixed set of a priori-known transition samples, the (batch) algo... |
2210.15097.pdf | Contrastive Decoding: Open-ended Text Generation as Optimization
Xiang Lisa Li1, Ari Holtzman2, Daniel Fried3, Percy Liang1, Jason Eisner4,
Tatsunori Hashimoto1, Luke Zettlemoyer2,5, Mike Lewis5
Stanford University1, University of Washington2, Carnegie Mellon University3,
Johns Hopkins University4, FAIR5
xlisali@stanfo... |
3639-the-effects-of-reward-misspeci.pdf | THE EFFECTS OF REWARD MISSPECIFICATION:
MAPPING AND MITIGATING MISALIGNED MODELS
Alexander Pan (Caltech), Kush Bhatia (UC Berkeley), Jacob Steinhardt (UC Berkeley)
ABSTRACT
Reward hacking—where RL agents exploit gaps in misspecified reward
functions—has been widely observed, but not yet systematically studied. To un-
derstand ho... |
2401.12187.pdf | WARM: On the Benefits of Weight Averaged
Reward Models
Alexandre Ramé, Nino Vieillard, Léonard Hussenot, Robert Dadashi, Geoffrey Cideron, Olivier Bachem, Johan Ferret
Google DeepMind
Aligning large language models (LLMs) with human preferences through reinforcement learning (RLHF)
can lead to reward hacking, where LLM... |
2305.16183.pdf | Passive learning of active causal strategies in agents
and language models
Andrew K. Lampinen (Google DeepMind, London, UK) lampinen@deepmind.com
Stephanie C. Y. Chan (Google DeepMind, London, UK) scychan@deepmind.com
Ishita Dasgupta (Google DeepMind, London, UK) idg@deepmind.com
Andrew J. Nam (Stanford University, Stanford, CA) ajh... |
2001.08361.pdf | Scaling Laws for Neural Language Models
Jared Kaplan∗ (Johns Hopkins University, OpenAI) jaredk@jhu.edu
Sam McCandlish∗ (OpenAI) sam@openai.com
Tom Henighan (OpenAI) henighan@openai.com
Tom B. Brown (OpenAI) tom@openai.com
Benjamin Chess (OpenAI) bchess@openai.com
Rewon Child (OpenAI) rewon@openai.com
Scott Gray (OpenAI) scott@openai.co... |
10.1038.s41467-023-38539-w.pdf | Article https://doi.org/10.1038/s41467-023-38539-w
A method for restoring signals and revealing
individual macromolecule states in cryo-ET, REST
Haonan Zhang1,2,3, Yan Li1,3, Yanan Liu1,2, Dongyu Li1,2, Lin Wang1, Kai Song1,
Keyan Bao1 & Ping Zhu1,2
Cryo-electron tomography (cryo-ET) is widely used to explore the... |
1801.10198.pdf | Published as a conference paper at ICLR 2018
GENERATING WIKIPEDIA BY SUMMARIZING LONG
SEQUENCES
Peter J. Liu∗, Mohammad Saleh∗,
Etienne Pot†, Ben Goodrich, Ryan Sepassi, Łukasz Kaiser, Noam Shazeer
Google Brain
Mountain View, CA
{peterjliu,msaleh,epot,bgoodrich,rsepassi,lukaszkaiser,noam }@google.com
ABSTRACT
We show t... |
10.1126.science.abo7201.pdf | RESEARCH ARTICLE SUMMARY
CORONAVIRUS
Open science discovery of potent noncovalent
SARS-CoV-2 main protease inhibitors
Melissa L. Boby †, Daren Fearon †, Matteo Ferla †, Mihajlo Filep †, Lizbé Koekemoer †,
Matthew C. Robinson †, The COVID Moonshot Consortium, John D. Chodera *, Alpha A. Lee *,
Nir London *, Annette von... |
2306.16410.pdf | Towards Language Models That Can See:
Computer Vision Through the LENS
of Natural Language
William Berrios†Gautam Mittal†§Tristan Thrush†§
Douwe Kiela†§Amanpreet Singh†
†Contextual AI;§Stanford University
Abstract
We propose LENS, a modular approach for tackling computer vision problems by leveraging
the power of la... |
2005.00341.pdf | Jukebox: A Generative Model for Music
Prafulla Dhariwal*1, Heewoo Jun*1, Christine Payne*1, Jong Wook Kim1, Alec Radford1, Ilya Sutskever1
Abstract
We introduce Jukebox, a model that generates
music with singing in the raw audio domain. We
tackle the long context of raw audio using a
multi-scale VQ-VAE to compress it to dis... |
1905.01969v4.pdf | Published as a conference paper at ICLR 2020
Poly-encoders: architectures and pre-training
strategies for fast and accurate multi-sentence scoring
Samuel Humeau∗, Kurt Shuster∗, Marie-Anne Lachaux, Jason Weston
Facebook AI Research
{samuelhumeau,kshuster,malachaux,jase }@fb.com
Abstract
The use of deep pre-trained tr... |
2401.18079.pdf | KVQuant: Towards 10 Million Context Length LLM Inference
with KV Cache Quantization
Coleman Hooper (UC Berkeley) chooper@berkeley.edu
Sehoon Kim (UC Berkeley) sehoonkim@berkeley.edu
Hiva Mohammadzadeh (UC Berkeley) hiva@berkeley.edu
Michael W. Mahoney (ICSI, LBNL, UC Berkeley) mmahoney@stat.berkeley.edu
Yakun Sophia Shao (UC Berkeley) ysshao@b... |
2305.15717.pdf | The False Promise of Imitating Proprietary LLMs
Arnav Gudibande∗ (UC Berkeley) arnavg@berkeley.edu
Eric Wallace∗ (UC Berkeley) ericwallace@berkeley.edu
Charlie Snell∗ (UC Berkeley) csnell22@berkeley.edu
Xinyang Geng (UC Berkeley) young.geng@berkeley.edu
Hao Liu (UC Berkeley) hao.liu@berkeley.edu
Pieter Abbeel (UC Berkeley) pabbeel@ber... |
2306.02707.pdf | Orca: Progressive Learning from Complex
Explanation Traces of GPT-4
Subhabrata Mukherjee∗†, Arindam Mitra∗
Ganesh Jawahar, Sahaj Agarwal, Hamid Palangi, Ahmed Awadallah
Microsoft Research
Abstract
Recent research has focused on enhancing the capability of smaller models
through imitation learning, drawing on the output... |
109_how_well_do_generative_protein.pdf | HOW WELL DO GENERATIVE PROTEIN MODELS GENERATE ?
Han Spinner (Department of Systems Biology, Harvard Medical School)
Aaron W. Kollasch (Department of Systems Biology, Harvard Medical School)
Debora S. Marks (Department of Systems Biology, Harvard Medical School)
ABSTRACT
Protein design relies critically on the generation of plaus... |
Pursuing-structural-biology-in-China-cell.pdf | Leading Edge
Conversations
Pursuing structural biology in China
In November 2023, structural biologists from different countries and different disciplines gathered at the Cell
Symposium: Structural biology from the nanoscale to cellular mesoscale to discuss recent breakthroughs,including structures of proteins and macr... |
HyvO00-icatut.pdf | Independent Component Analysis: A Tutorial. Aapo Hyvärinen and Erkki Oja, Helsinki University of Technology, Laboratory of Computer and Information Science, P.O. Box 5400, FIN-02015 Espoo, Finland. aapo.hyvarinen@hut.fi, erkki.oja@hut.fi, http://www.cis.hut.fi/projects/ica/. A vers... |
2211.06738.pdf | arXiv:2211.06738v1 [cs.AI] 12 Nov 2022 Formalizing the presumption of independence
Paul Christiano, Eric Neyman, Mark Xu
Alignment Research Center
Abstract
Mathematical proof aims to deliver confident conclusions, but a very similar process of
deduction can be used to make uncertain estimates that are open to revisio... |
1805.00899.pdf | AI safety via debate
Geoffrey Irving∗Paul Christiano
OpenAIDario Amodei
Abstract
To make AI systems broadly useful for challenging real-world tasks, we need them to learn
complexhumangoalsandpreferences. Oneapproachtospecifyingcomplexgoalsaskshumansto
judge during training which agent behaviors are safe and useful, but ... |
2401.10020.pdf | Self-Rewarding Language Models
Weizhe Yuan1,2, Richard Yuanzhe Pang1,2, Kyunghyun Cho2,
Xian Li1, Sainbayar Sukhbaatar1, Jing Xu1, Jason Weston1,2
1Meta, 2NYU
Abstract
We posit that to achieve superhuman agents, future models require super-
human feedback in order to provide an adequate training signal. Current
approaches commonly ... |
2401.12192.pdf | Text Embedding Inversion Attacks on Multilingual Language Models
Yiyi Chen Heather Lent Johannes Bjerva
Department of Computer Science, Aalborg University, Denmark
{yiyic, hcle, jbjerva}@cs.aau.dk
Abstract
Representing textual information as real-
numbered embeddings has become the norm in
NLP. Moreover, with the rise ... |
2211.07793.pdf | EXTREME GENERATIVE IMAGE COMPRESSION BY LEARNING
TEXT EMBEDDING FROM DIFFUSION MODELS
A P REPRINT
Zhihong Pan, Xin Zhou, Hao Tian
Baidu Research (USA)
ABSTRACT
Transferring large amount of high resolution images over limited bandwidth is an important but very
challenging task. Compressing images using extremely low bit... |
gu-dissertation-augmented.pdf | MODELING SEQUENCES WITH STRUCTURED STATE SPACES
A DISSERTATION
SUBMITTED TO THE DEPARTMENT OF COMPUTER SCIENCE
AND THE COMMITTEE ON GRADUATE STUDIES
OF STANFORD UNIVERSITY
IN PARTIAL FULFILLMENT OF THE REQUIREMENTS
FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY
Albert Gu
June 2023 |
2108.05540.pdf | Unsupervised Corpus Aware Language Model Pre-training
for Dense Passage Retrieval
Luyu Gao and Jamie Callan
Language Technologies Institute
Carnegie Mellon University
{luyug, callan}@cs.cmu.edu
Abstract
Recent research demonstrates the effective-
ness of using fine-tuned language mod-
els (LM) for dense retrieval. Howev... |
1501.05014.pdf | Experimental Simulation of Closed Timelike Curves
Martin Ringbauer1,2∗, Matthew A. Broome1,2, Casey R. Myers1, Andrew G. White1,2and Timothy C. Ralph2
1Centre for Engineered Quantum Systems,2Centre for Quantum Computer and Communication Technology,
School of Mathematics and Physics, University of Queensland, Brisbane, ... |
2310.18168.pdf | PERSONAS AS A WAY TO MODEL TRUTHFULNESS IN
LANGUAGE MODELS
Nitish Joshi1∗, Javier Rando2∗, Abulhair Saparov1, Najoung Kim3, He He1
1New York University, 2ETH Zurich, 3Boston University
{nitish}@nyu.edu {jrando}@ethz.ch
ABSTRACT
Large Language Models (LLMs) are trained on vast amounts of text from the
internet, which contains both ... |
1712.03346.pdf | Variational auto-encoding of protein sequences
Sam Sinai∗ (Harvard University) samsinai@g.harvard.edu
Eric Kelsic†‡ (Harvard Medical School) eric kelsic@hms.harvard.edu
George M. Church§†‡ (Harvard Medical School) church labadmin@hms.harvard.edu
Martin A. Nowak∗‡¶ (Harvard University) martin nowak@harvard.edu
Abstract
Proteins a... |
2309.16797.pdf | PROMPTBREEDER:
SELF-REFERENTIAL SELF-IMPROVEMENT
VIA PROMPT EVOLUTION
Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, Tim Rocktäschel
Google DeepMind
{chrisantha,dylski,henrykm,osindero,rocktaschel }@google.com
ABSTRACT
Popular prompt strategies like Chain-of-Thought Prompting can dramatically... |
2404.12253v1.pdf | Toward Self-Improvement of LLMs via Imagination,
Searching, and Criticizing
Ye Tian∗, Baolin Peng∗, Linfeng Song∗, Lifeng Jin, Dian Yu, Haitao Mi†, Dong Yu
Tencent AI Lab, Bellevue, WA
{yaptian,baolinpeng,lfsong,lifengjin,yudian,haitaomi}@global.tencent.com
Abstract
Despite the impressive capabilities of Large Language... |
2005.10242.pdf | Understanding Contrastive Representation Learning through
Alignment and Uniformity on the Hypersphere
Tongzhou Wang1, Phillip Isola1
Abstract
Contrastive representation learning has been out-
standingly successful in practice. In this work,
we identify two key properties related to the con-
trastive loss: (1) alignment (... |
Improving-Memory-Search-through-Model-Based-Cue-Selection.pdf | IMPROVING MEMORY SEARCH
Improving Memory Search
through Model-Based Cue Selection
Charlotte A. Cornell1, Kenneth A. Norman2, Thomas L. Griffiths2,3, and Qiong Zhang1,4
1Psychology Department, Rutgers University–New Brunswick
2Psychology Department, Princeton University
3Computer Science Department, Princeton Univer... |
tr00-004.pdf | Training Products of Experts by Minimizing Contrastive Divergence. GCNU TR 2000-004. Geoffrey E. Hinton. Gatsby Computa... |
2212.04356.pdf | Robust Speech Recognition via Large-Scale Weak Supervision
Alec Radford* 1Jong Wook Kim* 1Tao Xu1Greg Brockman1Christine McLeavey1Ilya Sutskever1
Abstract
We study the capabilities of speech processing
systems trained simply to predict large amounts of
transcripts of audio on the internet. When scaled
to 680,000 hours ... |
Rombach-High-Resolution-Image-Synthesis-With-Latent-Diffusion-Models-CVPR-2022-paper.pdf | High-Resolution Image Synthesis with Latent Diffusion Models
Robin Rombach1∗, Andreas Blattmann1∗, Dominik Lorenz1, Patrick Esser,
Björn Ommer1
1Ludwig Maximilian University of Munich & IWR, Heidelberg University, Germany
Runway ML
https://github.com/CompVis/latent-diffusion
Abstract
By decomposing the image formation proc... |
2402.09668.pdf | How to Train Data-Efficient LLMs
Noveen Sachdeva1,2, Benjamin Coleman1, Wang-Cheng Kang1, Jianmo Ni1, Lichan Hong1, Ed H. Chi1,
James Caverlee1,3, Julian McAuley2, Derek Zhiyuan Cheng1
Abstract
The training of large language models (LLMs) is
expensive. In this paper, we study data-efficient
approaches for pre-training LLMs, i.e., tec... |
mapreduce.pdf | MapReduce: Simplified Data Processing on Large Clusters
Jeffrey Dean and Sanjay Ghemawat
jeff@google.com, sanjay@google.com
Google, Inc.
Abstract
MapReduce is a programming model and an associated implementation for processing and generating large
data sets. Users specify a map function that processes a
key/value pair to generate a s... |
2311.00208.pdf | Transformers as Recognizers of Formal Languages:
A Survey on Expressivity
Lena Strobl (Umeå University) lena.strobl@umu.se
William Merrill (New York University) willm@nyu.edu
Gail Weiss (EPFL) gail.weiss@epfl.ch
David Chiang (University of Notre Dame) dchiang@nd.edu
Dana Angluin (Yale University) dana.angluin@yale.edu
Abstract
As t... |
2402.04833.pdf | Long Is More for Alignment:
A Simple but Tough-to-Beat Baseline for Instruction Fine-Tuning
Hao Zhao1, Maksym Andriushchenko1, Francesco Croce1, Nicolas Flammarion1
Abstract
There is a consensus that instruction fine-tuning
of LLMs requires high-quality data, but what
are they? LIMA (NeurIPS 2023) and AlpaGa-
sus (ICLR 2024)... |
1801.05134.pdf | Understanding the Disharmony between Dropout and Batch Normalization by
Variance Shift
Xiang Li1Shuo Chen1Xiaolin Hu2Jian Yang1
Abstract
This paper first answers the question “why do
the two most powerful techniques Dropout and
Batch Normalization (BN) often lead to a worse
performance when they are combined together?”
... |
2305.13301.pdf | TRAINING DIFFUSION MODELS
WITH REINFORCEMENT LEARNING
Kevin Black∗1, Michael Janner∗1, Yilun Du2, Ilya Kostrikov1, Sergey Levine1
1University of California, Berkeley, 2Massachusetts Institute of Technology
{kvablack, janner, kostrikov, sergey.levine}@berkeley.edu yilundu@mit.edu
ABSTRACT
Diffusion models are a class of flexible ... |
2306.04488.pdf | Rewarded soups: towards Pareto-optimal alignment
by interpolating weights fine-tuned on diverse rewards
Alexandre Rame1∗, Guillaume Couairon1,2†, Mustafa Shukor1†,
Corentin Dancette1†,Jean-Baptiste Gaya1,2†,Laure Soulier1,Matthieu Cord1,3
1Sorbonne Université, CNRS, ISIR, Paris, France2Meta AI3Valeo.ai
Abstract
Foundat... |
2210.03057.pdf | LANGUAGE MODELS ARE
MULTILINGUAL CHAIN-OF-THOUGHT REASONERS
Freda Shi1,2,∗, Mirac Suzgun1,3,∗, Markus Freitag1, Xuezhi Wang1,
Suraj Srivats4, Soroush Vosoughi4, Hyung Won Chung1, Yi Tay1,
Sebastian Ruder1, Denny Zhou1, Dipanjan Das1, Jason Wei1
1Google Research, 2Toyota Technological Institute at Chicago
3Stanford University, 4Dartmouth Coll... |
2306.17806.pdf | Stay on topic with Classifier-Free Guidance
Guillaume V. Sanchez* (Hexaglobe, EleutherAI) gsanchez@hexaglobe.com
Honglu Fan* (University of Geneva, EleutherAI) honglu.fan@unige.ch
Alexander Spangher* (Information Sciences Institute, University of Southern California) spangher@usc.edu
Elad Levi (Sightful) eladlevico@gmail.com
Pawan ...
2310.10638v5.pdf | Published as a conference paper at ICLR 2024
IN-CONTEXT PRETRAINING: LANGUAGE MODELING
BEYOND DOCUMENT BOUNDARIES
Weijia Shi1,2, Sewon Min1,2, Maria Lomeli1, Chunting Zhou1,
Margaret Li1,2, Gergely Szilvasy1, Rich James1, Xi Victoria Lin1,
Noah A. Smith2,3, Luke Zettlemoyer1,2, Scott Yih1, Mike Lewis1
1Meta AI, 2University of Washington, 3Al... |
2305.15348.pdf | READ: Recurrent Adaptation of Large Transformers
Sid Wang John Nguyen Ke Li Carole-Jean Wu
Meta AI
{yuwang2020,ngjhn,kli26,carolejeanwu}@meta.com
Abstract
Fine-tuning large-scale Transformers has led to the explosion of many AI applica-
tions across Natural Language Processing and Computer Vision tasks. However,
fine-t... |
2309.10668.pdf | Language Modeling Is Compression
Grégoire Delétang*1, Anian Ruoss*1, Paul-Ambroise Duquenne2, Elliot Catt1, Tim Genewein1, Christopher
Mattern1, Jordi Grau-Moya1, Li Kevin Wenliang1, Matthew Aitchison1, Laurent Orseau1, Marcus Hutter1and
Joel Veness1
*Equal contributions,1Google DeepMind,2Meta AI & Inria
It has long be... |
2404.16710v1.pdf | LayerSkip: Enabling Early Exit Inference and
Self-Speculative Decoding
Mostafa Elhoushi1,†,∗, Akshat Shrivastava1,†,∗, Diana Liskovich2,†, Bram Wasti2, Basil Hosmer1,
Liangzhen Lai3, Anas Mahmoud4, Bilge Acun1, Saurabh Agrawal6, Ahmed Roman7, Ahmed A Aly3,
Beidi Chen1,5, Carole-Jean Wu1
1FAIR at Meta, 2GenAI at Meta, 3Reality Labs ...
2212.14024.pdf | DEMONSTRATE–SEARCH–PREDICT:
Composing retrieval and language models for knowledge-intensive NLP
Omar Khattab1, Keshav Santhanam1, Xiang Lisa Li1, David Hall1,
Percy Liang1, Christopher Potts1, Matei Zaharia1
Abstract
Retrieval-augmented in-context learning has
emerged as a powerful approach for addressing
knowledge-intensive t... |
L08_expressivity.pdf | Expressive Variational Autoencoders
John Thickstun
The Gaussian VAE parameterizes the prior r(z), conditional likelihood p(x|z), and posterior
approximation q(z|x) with Gaussian distributions. The inexpressivity of these Gaussian
models can make it difficult to capture the distribution p(x); complaints about the “b... |
2311.11944v1.pdf | FINANCEBENCH: A New Benchmark for Financial Question Answering
Pranab Islam1∗, Anand Kannappan1, Douwe Kiela2,3,
Rebecca Qian1, Nino Scherrer1, Bertie Vidgen1
1Patronus AI, 2Contextual AI, 3Stanford University
Abstract
FINANCEBENCH is a first-of-its-kind test suite
for evaluating the performance of LLMs on
open book financial qu... |
2403.09636.pdf | Dynamic Memory Compression: Retrofitting LLMs for Accelerated Inference
Piotr Nawrot*Q V, Adrian Łańcucki*Q K, Marcin ChochowskiQ, David TarjanQ, Edoardo M. PontiV
QNVIDIA, KUniversity of Wrocław, VUniversity of Edinburgh
Abstract
Transformers have emerged as the backbone of
large language models (LLMs). However, genera-
tion re... |
1610.03518v1.pdf | Transfer from Simulation to Real World through
Learning Deep Inverse Dynamics Model
Paul Christiano, Zain Shah, Igor Mordatch, Jonas Schneider,
Trevor Blackwell, Joshua Tobin, Pieter Abbeel, and Wojciech Zaremba
OpenAI, San Francisco, CA, USA
Abstract — Developing control policies in simulation is often
more practical ... |
2302.03764.pdf | Sketchy: Memory-efficient Adaptive Regularization with Frequent Directions
Vladimir Feinberg1, Xinyi Chen1,2, Y. Jennifer Sun2, Rohan Anil1, Elad Hazan1,2
Abstract
Adaptive regularization methods that exploit more
than the diagonal entries exhibit state of the art
performance for many tasks, but can be pro-
hibitive in terms of... |
1608.04471.pdf | Stein Variational Gradient Descent: A General
Purpose Bayesian Inference Algorithm
Qiang Liu Dilin Wang
Department of Computer Science
Dartmouth College
Hanover, NH 03755
{qiang.liu, dilin.wang.gr}@dartmouth.edu
Abstract
We propose a general purpose variational inference algorithm that forms a natural
counterpart of gr... |
1812.11118.pdf | Reconciling modern machine learning practice
and the bias-variance trade-off
Mikhail Belkin(a), Daniel Hsu(b), Siyuan Ma(a), and Soumik Mandal(a)
(a) The Ohio State University, Columbus, OH
(b) Columbia University, New York, NY
September 12, 2019
Abstract
Breakthroughs in machine learning are rapidly changing science and society, yet... |
2002.05616.pdf | Learning the Stein Discrepancy
for Training and Evaluating Energy-Based Models without Sampling
Will Grathwohl1, Kuan-Chieh Wang1, Jörn-Henrik Jacobsen1, David Duvenaud1, Richard Zemel1
Abstract
We present a new method for evaluating and train-
ing unnormalized density models. Our approach
only requires access to the gradient... |
2304.14802.pdf | ResiDual: Transformer with Dual Residual
Connections
Shufang Xie‡†, Huishuai Zhang†, Junliang Guo†, Xu Tan†∗, Jiang Bian†,
Hany Hassan Awadalla†, Arul Menezes†, Tao Qin†, Rui Yan‡∗
†Microsoft Research†Microsoft Azure Translation
‡Gaoling School of Artificial Intelligence, Renmin University of China
{shufangxie,ruiyan}@ruc.e... |
2403.07816.pdf | Branch-Train-MiX:
Mixing Expert LLMs into a Mixture-of-Experts LLM
Sainbayar Sukhbaatar, Olga Golovneva, Vasu Sharma, Hu Xu, Xi Victoria Lin, Baptiste Rozière, Jacob
Kahn, Daniel Li, Wen-tau Yih, Jason Weston, Xian Li
FAIR at Meta
We investigate efficient methods for training Large Language Models (LLMs) to possess capabi... |
2209.15634.pdf | A General Framework for Sample-Efficient Function
Approximation in Reinforcement Learning
Zixiang Chen‡∗, Chris Junchi Li⋄∗, Angela Yuan‡∗, Quanquan Gu‡, Michael I. Jordan⋄,†
Department of Computer Sciences, University of California, Los Angeles‡
Department of Electrical Engineering and Computer Sciences, University of Californi... |
2205.13147.pdf | Matryoshka Representation Learning
Aditya Kusupati∗†⋄, Gantavya Bhatt∗†, Aniket Rege∗†,
Matthew Wallingford†, Aditya Sinha⋄, Vivek Ramanujan†, William Howard-Snyder†,
Kaifeng Chen⋄, Sham Kakade‡, Prateek Jain⋄and Ali Farhadi†
†University of Washington,⋄Google Research,‡Harvard University
{kusupati,ali}@cs.washington.ed... |
2307.15043.pdf | Universal and Transferable Adversarial Attacks
on Aligned Language Models
Andy Zou1,2, Zifan Wang2, Nicholas Carlini3, Milad Nasr3,
J. Zico Kolter1,4, Matt Fredrikson1
1Carnegie Mellon University,2Center for AI Safety,
3Google DeepMind,4Bosch Center for AI
Abstract
Because “out-of-the-box” large language models are cap... |
2207.10551.pdf | Scaling Laws vs Model Architectures :
How does Inductive Bias Influence Scaling?
Yi Tay∗, Mostafa Dehghani∗, Samira Abnar, Hyung Won Chung,
William Fedus, Jinfeng Rao, Sharan Narang, Vinh Q. Tran,
Dani Yogatama†, Donald Metzler
Google Research & DeepMind†
{yitay,dehghani}@google.com
Abstract
There has been a lot of interest in the...
2212.14024v2.pdf | DEMONSTRATE–SEARCH–PREDICT:
Composing retrieval and language models for knowledge-intensive NLP
Omar Khattab1, Keshav Santhanam1, Xiang Lisa Li1, David Hall1,
Percy Liang1, Christopher Potts1, Matei Zaharia1
Abstract
Retrieval-augmented in-context learning has
emerged as a powerful approach for addressing
knowledge-intensive t... |
2302.12441.pdf | MUX-PLMs: Data Multiplexing for High-throughput Language Models
Vishvak Murahari1, Ameet Deshpande1, Carlos E. Jimenez1,
Izhak Shafran2, Mingqiu Wang2, Yuan Cao2, Karthik Narasimhan1
1Princeton University, 2Google Brain
murahari@cs.princeton.edu
Abstract
The widespread adoption of large language
models such as ChatGPT and Bard has ... |
10.1038.s41467-021-25756-4.pdf | ARTICLE
Efficient generative modeling of protein sequences
using simple autoregressive models
Jeanne Trinquier1,2, Guido Uguzzoni3,4, Andrea Pagnani3,4,5, Francesco Zamponi2& Martin Weigt1✉
Generative models emerge as promising candidates for novel sequence-data driven
approaches to protein design, and for the extractio... |
2306.03078.pdf | SpQR: A Sparse-Quantized Representation for
Near-Lossless LLM Weight Compression
Tim Dettmers∗† (University of Washington), Ruslan Svirschevski∗ (HSE University & Yandex),
Vage Egiazarian∗ (HSE University & Yandex), Denis Kuznedelev∗ (Yandex & Skoltech),
Elias Frantar (IST Austria), Saleh Ashkboos (ETH Zurich), Alexander Borzunov (HSE Univer... |
1706.03741.pdf | Deep Reinforcement Learning
from Human Preferences
Paul F Christiano (OpenAI) paul@openai.com
Jan Leike (DeepMind) leike@google.com
Tom B Brown nottombrown@gmail.com
Miljan Martic (DeepMind) miljanm@google.com
Shane Legg (DeepMind) legg@google.com
Dario Amodei (OpenAI) damodei@openai.com
Abstract
For sophisticated reinforcement lear... |
karakida19a.pdf | Universal Statistics of Fisher Information in Deep Neural Networks:
Mean Field Approach
Ryo Karakida Shotaro Akaho Shun-ichi Amari
AIST, Japan AIST, Japan RIKEN CBS, Japan
Abstract
The Fisher information matrix (FIM) is a
fundamental quantity to represent the char-
acteristics of a stochastic model, including
deep neur... |
2310.06816.pdf | Text Embeddings Reveal (Almost) As Much As Text
John X. Morris, Volodymyr Kuleshov, Vitaly Shmatikov, Alexander M. Rush
Department of Computer Science
Cornell University
Abstract
How much private information do text em-
beddings reveal about the original text? We
investigate the problem of embedding inver-
sion, recons... |
1908.10084v1.pdf | Sentence-BERT: Sentence Embeddings using Siamese BERT-Networks
Nils Reimers and Iryna Gurevych
Ubiquitous Knowledge Processing Lab (UKP-TUDA)
Department of Computer Science, Technische Universit ¨at Darmstadt
www.ukp.tu-darmstadt.de
Abstract
BERT (Devlin et al., 2018) and RoBERTa (Liu
et al., 2019) has set a new state-... |
2402.00854.pdf | SymbolicAI: A framework for logic-based approaches
combining generative models and solvers
Marius-Constantin Dinu∗†, Claudiu Leoveanu-Condrei‡, Markus Holzleitner†,
Werner Zellinger§, Sepp Hochreiter†
Abstract
We introduce SymbolicAI , a versatile and modular framework employing a logic-based approach to
concept learning and... |
1907.10786.pdf | Interpreting the Latent Space of GANs for Semantic Face Editing
Yujun Shen1, Jinjin Gu2, Xiaoou Tang1, Bolei Zhou1
1The Chinese University of Hong Kong2The Chinese University of Hong Kong, Shenzhen
{sy116, xtang, bzhou }@ie.cuhk.edu.hk, jinjingu@link.cuhk.edu.cn
Original Pose Age Gender Eyeglasses
Figure 1: Manipulatin... |
2107.13163.pdf | arXiv:2107.13163v3 [cs.LG] 30 Mar 2023Statistically Meaningful Approximation: a
Case Study on Approximating Turing Machines with Transform ers
Colin Wei Yining Chen Tengyu Ma
Department of Computer Science
Stanford University
{colinwei,cynnjjs,tengyuma}@cs.stanford.edu
March 31, 2023
Abstract
A common lens to theoret... |
1906.08237.pdf | XLNet: Generalized Autoregressive Pretraining
for Language Understanding
Zhilin Yang∗1, Zihang Dai∗1,2, Yiming Yang1, Jaime Carbonell1,
Ruslan Salakhutdinov1, Quoc V. Le2
1Carnegie Mellon University,2Google AI Brain Team
{zhiliny,dzihang,yiming,jgc,rsalakhu}@cs.cmu.edu, qvl@google.com
Abstract
With the capability of mo... |
2206.05895.pdf | Latent Diffusion Energy-Based Model for Interpretable Text Modeling
Peiyu Yu1,2, Sirui Xie1, Xiaojian Ma1,2, Baoxiong Jia1,2, Bo Pang3,
Ruiqi Gao4, Yixin Zhu5,6, Song-Chun Zhu1,2,5,6,7,8, Ying Nian Wu7
Abstract
Latent space Energy-Based Models ( EBM s), also
known as energy-based priors, have drawn grow-
ing interests in generative m... |
2209.13325.pdf | Outlier Suppression: Pushing the Limit of Low-bit
Transformer Language Models
Xiuying Wei1, 2, Yunchen Zhang2, 4, Xiangguo Zhang2, Ruihao Gong1, 2,
Shanghang Zhang3, Qi Zhang2, Fengwei Yu2, Xianglong Liu1∗
1State Key Lab of Software Development Environment, Beihang University
2SenseTime Research,3Peking University
4Uni... |
2308.05660v1.pdf | Thermodynamic Linear Algebra
Maxwell Aifer, Kaelan Donatella, Max Hunter Gordon,
Thomas Ahle, Daniel Simpson, Gavin Crooks, Patrick J. Coles
Normal Computing Corporation, New York, New York, USA
Linear algebraic primitives are at the core of many modern algorithms in engineering, science, and
machine learning. Hence, a... |
2312.17227.pdf | Gradient-based Planning with World Models
Jyothir S V1∗, Siddhartha Jalagam1∗, Yann LeCun1,2, Vlad Sobal1,2
1New York University2Meta AI
{jyothir, scj9994, us441}@nyu.edu
yann@cs.nyu.edu
Abstract
The enduring challenge in the field of artificial intelligence has been the control of
systems to achieve desired behaviours. Wh... |
10.1038.s41564-023-01584-8.pdf | Nature Microbiology
https://doi.org/10.1038/s41564-023-01584-8
Analysis
Large language models improve annotation
of prokaryotic viral proteins
Zachary N. Flamholz 1, Steven J. Biller 2 & Libusha Kelly 1,3
Viral genomes are poorly annotated in metagenomic samples, representing
an obstacle ... |
2202.03286.pdf | Red Teaming Language Models with Language Models
WARNING: This paper contains model outputs which are offensive in nature.
Ethan Perez1,2, Saffron Huang1, Francis Song1, Trevor Cai1, Roman Ring1,
John Aslanides1, Amelia Glaese1, Nat McAleese1, Geoffrey Irving1
1DeepMind,2New York University
perez@nyu.edu
Abstract
Language Models (LMs... |
2401.14196.pdf | DeepSeek-Coder: When the Large Language Model Meets
Programming - The Rise of Code Intelligence
Daya Guo*1, Qihao Zhu∗1,2, Dejian Yang1, Zhenda Xie1, Kai Dong1, Wentao Zhang1
Guanting Chen1, Xiao Bi1, Y. Wu1, Y.K. Li1, Fuli Luo1, Yingfei Xiong2, Wenfeng Liang1
1DeepSeek-AI
2Key Lab of HCST (PKU), MOE; SCS, Peking Unive... |
dubey2022pursuit.pdf | RESEARCH ARTICLE
The pursuit of happiness: A reinforcement
learning perspective on habituation and
comparisons
Rachit Dubey1*, Thomas L. Griffiths2, Peter Dayan3,4
1Department of Computer Science, Princeton University, Princeton, New Jersey, United States of America,
2Department of Psychology, Princeton University, Pri... |
2312.11671v2.pdf | Evaluating Language-Model Agents on Realistic
Autonomous Tasks
Megan Kinniment Lucas Jun Koba Sato Haoxing Du Brian Goodrich Max Hasin
Lawrence Chan Luke Harold Miles Tao R. Lin Hjalmar Wijk Joel Burget
Aaron Ho Elizabeth Barnes∗Paul Christiano†
METR (Formerly ARC Evals)
Abstract
In this report, we explore the ability ... |
2310.11589.pdf | ELICITING HUMAN PREFERENCES WITH
LANGUAGE MODELS
Belinda Z. Li∗ (MIT CSAIL) bzl@mit.edu
Alex Tamkin∗ (Anthropic†) atamkin@cs.stanford.edu
Noah Goodman (Stanford) ndg@stanford.edu
Jacob Andreas (MIT CSAIL) jda@mit.edu
ABSTRACT
Language models (LMs) can be directed to perform target tasks by using labeled
examples or natural langua... |
2403.20222v1.pdf | Shallow Cross-Encoders
for Low-Latency Retrieval
Aleksandr V. Petrov, Sean MacAvaney, and Craig Macdonald
University of Glasgow, Glasgow, UK
a.petrov.1@research.gla.ac.uk
{sean.macavaney;craig.macdonald }@glasgow.ac.uk
Abstract. Transformer-based Cross-Encoders achieve state-of-the-art effectiveness in text retrieval.
H... |
2309.10400v3.pdf | Published as a conference paper at ICLR 2024
POSE: EFFICIENT CONTEXT WINDOW EXTENSION OF
LLMS VIA POSITIONAL SKIP-WISE TRAINING
Dawei Zhu∗♡♠, Nan Yang♢, Liang Wang♢, Yifan Song♡♠, Wenhao Wu♡♠,
Furu Wei♢, Sujian Li♡♠
♡School of Computer Science, Peking University
♠National Key Laboratory for Multimedia Information Processing, Pe... |
2202.04728.pdf | Predicting Human Similarity Judgments Using Large Language Models
Raja Marjieh1,*, Ilia Sucholutsky2,*, Theodore R. Sumers2,
Nori Jacoby3, Thomas L. Griffiths1,2
1Department of Psychology, Princeton University
2Department of Computer Science, Princeton University
3Computational Auditory Perception Group, Max Planck Inst... |