The retrieval corpus follows this schema (as shown in the dataset viewer preview):

| column | type |
|---|---|
| `corpusid` | int64 |
| `title` | string |
| `abstract` | string |
| `citations` | sequence of int64 (outgoing citation paper IDs) |
| `full_paper` | string |
This dataset contains the query set and retrieval corpus for our paper LitSearch: A Retrieval Benchmark for Scientific Literature Search. We introduce LitSearch, a retrieval benchmark comprising 597 realistic literature search queries about recent ML and NLP papers. LitSearch is constructed using a combination of (1) questions generated by GPT-4 based on paragraphs containing inline citations from research papers and (2) questions about recently published papers, manually written by their authors. All LitSearch questions were manually examined or edited by experts to ensure high quality.
This dataset contains three configurations:
- `query`: 597 queries, accompanied by gold paper IDs, specificity and quality annotations, and metadata about the source of each query.
- `corpus_clean`: 64,183 documents, with extracted titles, abstracts, and outgoing citation paper IDs.
- `corpus_s2orc`: the same 64,183 documents expressed in the Semantic Scholar Open Research Corpus (S2ORC) schema, along with all available metadata.

Each configuration has a single `full` split.
You can load the configurations as follows:
```python
from datasets import load_dataset

query_data = load_dataset("princeton-nlp/LitSearch", "query", split="full")
corpus_clean_data = load_dataset("princeton-nlp/LitSearch", "corpus_clean", split="full")
corpus_s2orc_data = load_dataset("princeton-nlp/LitSearch", "corpus_s2orc", split="full")
```
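Once loaded, the corpus records can be indexed by `corpusid` for fast lookup, for example to resolve a document's outgoing `citations` to titles. A minimal sketch below uses toy records that mirror the `corpus_clean` schema; the IDs, titles, and abstracts are placeholders for illustration, not real LitSearch entries:

```python
# Toy records mirroring the corpus_clean schema (placeholder values).
corpus = [
    {"corpusid": 1, "title": "Paper A", "abstract": "...", "citations": [2]},
    {"corpusid": 2, "title": "Paper B", "abstract": "...", "citations": []},
]

# Index the corpus by corpusid for O(1) lookup.
by_id = {doc["corpusid"]: doc for doc in corpus}

# Resolve the outgoing citations of document 1 to paper titles.
cited = [by_id[c]["title"] for c in by_id[1]["citations"]]
# → ["Paper B"]
```

The same pattern applies to the real dataset: build `by_id` from `corpus_clean_data`, then use the `citations` lists to traverse the citation graph within the corpus.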