Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown.
The dataset generation failed because of a cast error
Error code: DatasetGenerationCastError
Exception: DatasetGenerationCastError
Message: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 9 new columns ({'paper_id', 'related_work_scores', 'article_scores', 'related_work', 'citations', 'extract_rate', 'overall_score', 'introduction', 'origin_citations'}) and 1 missing columns ({'doc_id'}).
This happened while the json dataset builder was generating data using
hf://datasets/BFTree/RWGBench/gold100_papers.json (at revision 89d1688dfe908d89de9337ad4c4ca4abbeba2e8e), [/tmp/hf-datasets-cache/medium/datasets/77667334117168-config-parquet-and-info-BFTree-RWGBench-ec65d61f/hub/datasets--BFTree--RWGBench/snapshots/89d1688dfe908d89de9337ad4c4ca4abbeba2e8e/corpus.json (origin=hf://datasets/BFTree/RWGBench@89d1688dfe908d89de9337ad4c4ca4abbeba2e8e/corpus.json), /tmp/hf-datasets-cache/medium/datasets/77667334117168-config-parquet-and-info-BFTree-RWGBench-ec65d61f/hub/datasets--BFTree--RWGBench/snapshots/89d1688dfe908d89de9337ad4c4ca4abbeba2e8e/gold100_papers.json (origin=hf://datasets/BFTree/RWGBench@89d1688dfe908d89de9337ad4c4ca4abbeba2e8e/gold100_papers.json), /tmp/hf-datasets-cache/medium/datasets/77667334117168-config-parquet-and-info-BFTree-RWGBench-ec65d61f/hub/datasets--BFTree--RWGBench/snapshots/89d1688dfe908d89de9337ad4c4ca4abbeba2e8e/papers.json (origin=hf://datasets/BFTree/RWGBench@89d1688dfe908d89de9337ad4c4ca4abbeba2e8e/papers.json)]
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
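The second remedy, separate configurations, is declared in the dataset's README.md YAML front matter on the Hub. A minimal sketch, assuming each JSON file should be its own configuration (the config names are illustrative; only the file names come from the error above, and files whose schemas actually match could share one config):

```yaml
configs:
  - config_name: corpus
    data_files: "corpus.json"
  - config_name: papers
    data_files: "papers.json"
  - config_name: gold100_papers
    data_files: "gold100_papers.json"
```

With this in place the builder no longer tries to cast all three files to a single shared schema, so the `doc_id` vs. `paper_id` mismatch never arises.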
Traceback:
Traceback (most recent call last):
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1887, in _prepare_split_single
    writer.write_table(table)
  File "/usr/local/lib/python3.12/site-packages/datasets/arrow_writer.py", line 675, in write_table
    pa_table = table_cast(pa_table, self._schema)
               ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2272, in table_cast
    return cast_table_to_schema(table, schema)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/table.py", line 2218, in cast_table_to_schema
    raise CastError(
datasets.table.CastError: Couldn't cast
paper_id: int64
title: string
abstract: string
introduction: string
related_work: string
origin_citations: list<item: string>
child 0, item: string
citations: list<item: int64>
child 0, item: int64
extract_rate: double
article_scores: struct<Content_Quality: int64, Final_Score: double, Publication_Potential: int64, raw_text: string>
child 0, Content_Quality: int64
child 1, Final_Score: double
child 2, Publication_Potential: int64
child 3, raw_text: string
related_work_scores: struct<Citation_Quality: int64, Content_Coherence: int64, Final_Score: double, Synthesis_Analysis: i (... 23 chars omitted)
child 0, Citation_Quality: int64
child 1, Content_Coherence: int64
child 2, Final_Score: double
child 3, Synthesis_Analysis: int64
child 4, raw_text: string
overall_score: double
-- schema metadata --
pandas: '{"index_columns": [], "column_indexes": [], "columns": [{"name":' + 1499
to
{'doc_id': Value('int64'), 'title': Value('string'), 'abstract': Value('string')}
because column names don't match
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1347, in compute_config_parquet_and_info_response
    parquet_operations = convert_to_parquet(builder)
                         ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 980, in convert_to_parquet
    builder.download_and_prepare(
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 884, in download_and_prepare
    self._download_and_prepare(
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 947, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1736, in _prepare_split
    for job_id, done, content in self._prepare_split_single(
                                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/usr/local/lib/python3.12/site-packages/datasets/builder.py", line 1889, in _prepare_split_single
    raise DatasetGenerationCastError.from_cast_error(
datasets.exceptions.DatasetGenerationCastError: An error occurred while generating the dataset
All the data files must have the same columns, but at some point there are 9 new columns ({'paper_id', 'related_work_scores', 'article_scores', 'related_work', 'citations', 'extract_rate', 'overall_score', 'introduction', 'origin_citations'}) and 1 missing columns ({'doc_id'}).
This happened while the json dataset builder was generating data using
hf://datasets/BFTree/RWGBench/gold100_papers.json (at revision 89d1688dfe908d89de9337ad4c4ca4abbeba2e8e), [/tmp/hf-datasets-cache/medium/datasets/77667334117168-config-parquet-and-info-BFTree-RWGBench-ec65d61f/hub/datasets--BFTree--RWGBench/snapshots/89d1688dfe908d89de9337ad4c4ca4abbeba2e8e/corpus.json (origin=hf://datasets/BFTree/RWGBench@89d1688dfe908d89de9337ad4c4ca4abbeba2e8e/corpus.json), /tmp/hf-datasets-cache/medium/datasets/77667334117168-config-parquet-and-info-BFTree-RWGBench-ec65d61f/hub/datasets--BFTree--RWGBench/snapshots/89d1688dfe908d89de9337ad4c4ca4abbeba2e8e/gold100_papers.json (origin=hf://datasets/BFTree/RWGBench@89d1688dfe908d89de9337ad4c4ca4abbeba2e8e/gold100_papers.json), /tmp/hf-datasets-cache/medium/datasets/77667334117168-config-parquet-and-info-BFTree-RWGBench-ec65d61f/hub/datasets--BFTree--RWGBench/snapshots/89d1688dfe908d89de9337ad4c4ca4abbeba2e8e/papers.json (origin=hf://datasets/BFTree/RWGBench@89d1688dfe908d89de9337ad4c4ca4abbeba2e8e/papers.json)]
Please either edit the data files to have matching columns, or separate them into different configurations (see docs at https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations)
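The "9 new columns / 1 missing column" figures in the message fall out of a plain key-set comparison between a row of `corpus.json` and a row of `gold100_papers.json`. A stdlib-only sketch, where the two inline rows are hypothetical stand-ins with the field names taken from the error above:

```python
# Hypothetical one-record stand-ins: corpus.json rows carry `doc_id`,
# while gold100_papers.json rows carry `paper_id` plus score/citation fields.
corpus_row = {"doc_id": 0, "title": "t", "abstract": "a"}
gold_row = {
    "paper_id": 0, "title": "t", "abstract": "a", "introduction": "i",
    "related_work": "r", "origin_citations": [], "citations": [],
    "extract_rate": 1.0, "article_scores": {}, "related_work_scores": {},
    "overall_score": 4.5,
}

def column_diff(reference: dict, other: dict):
    """Return (new_columns, missing_columns) of `other` relative to `reference`."""
    ref, oth = set(reference), set(other)
    return sorted(oth - ref), sorted(ref - oth)

new_cols, missing_cols = column_diff(corpus_row, gold_row)
print(len(new_cols), "new columns:", new_cols)        # → 9 new columns
print(len(missing_cols), "missing columns:", missing_cols)  # → 1 missing column: doc_id
```

Running the same comparison over every file before pushing to the Hub is a cheap way to catch this class of error locally, since the viewer only reports it after the Parquet conversion job has already failed.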
doc_id (int64) | title (string) | abstract (string) |
---|---|---|
0 | Multiscale Snapshots Visual Analysis of Temporal Summaries in Dynamic Graphs | The overview-driven visual analysis of large-scale dynamic graphs poses a major challenge. We propose Multiscale Snapshots, a visual analytics approach to analyze temporal summaries of dynamic graphs at multiple temporal scales. First, we recursively generate temporal summaries to abstract overlapping sequences of grap... |
1 | Dynamic Information Retrieval Theoretical Framework and Application | Theoretical frameworks like the Probability Ranking Principle and its more recent Interactive Information Retrieval variant have guided the development of ranking and retrieval algorithms for decades, yet they are not capable of helping us model problems in Dynamic Information Retrieval which exhibit the following thre... |
2 | aNMM Ranking Short Answer Texts with Attention-Based Neural Matching Model | As an alternative to question answering methods based on feature engineering, deep learning approaches such as convolutional neural networks (CNNs) and Long Short-Term Memory Models (LSTMs) have recently been proposed for semantic matching of questions and answers. To achieve good results, however, these models have be... |
3 | A Quantum Many-body Wave Function Inspired Language Modeling Approach | The recently proposed quantum language model (QLM) aimed at a principled approach to modeling term dependency by applying the quantum probability theory. The latest development for a more effective QLM has adopted word embeddings as a kind of global dependency information and integrated the quantum-inspired idea in a n... |
4 | Baseline Needs More Love On Simple Word-Embedding-Based Models and Associated Pooling Mechanisms | Many deep learning architectures have been proposed to model the compositionality in text sequences, requiring a substantial number of parameters and expensive computations. However, there has not been a rigorous evaluation regarding the added value of sophisticated compositional functions. In this paper, we conduct a ... |
5 | TANDA Transfer and Adapt Pre-Trained Transformer Models for Answer Sentence Selection | We propose TANDA, an effective technique for fine-tuning pre-trained Transformer models for natural language tasks. Specifically, we first transfer a pre-trained model into a model for a general task by fine-tuning it with a large and high-quality dataset. We then perform a second fine-tuning step to adapt the transfer... |
6 | Investigating Order Effects in Multidimensional Relevance Judgment using Query Logs | There is a growing body of research which has investigated relevance judgment in IR being influenced by multiple factors or dimensions. At the same time, the Order Effects in sequential decision making have been quantitatively detected and studied in Mathematical Psychology. Combining the two phenomena, there have been... |
7 | BERT Pre-training of Deep Bidirectional Transformers for Language Understanding | We introduce a new language representation model called BERT, which stands for Bidirectional Encoder Representations from Transformers. Unlike recent language representation models, BERT is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right contex... |
8 | Investigating non-classical correlations between decision fused multi-modal documents | Correlation has been widely used to facilitate various information retrieval methods such as query expansion, relevance feedback, document clustering, and multi-modal fusion. Especially, correlation and independence are important issues when fusing different modalities that influence a multi-modal information retrieval... |
9 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | Transfer learning, where a model is first pre-trained on a data-rich task before being fine-tuned on a downstream task, has emerged as a powerful technique in natural language processing (NLP). The effectiveness of transfer learning has given rise to a diversity of approaches, methodology, and practice. In this paper, ... |
10 | Quantum-inspired Complex Word Embedding | A challenging task for word embeddings is to capture the emergent meaning or polarity of a combination of individual words. For example, existing approaches in word embeddings will assign high probabilities to the words "Penguin" and "Fly" if they frequently co-occur, but it fails to capture the fact that they occur in... |
11 | XLNet Generalized Autoregressive Pretraining for Language Understanding | With the capability of modeling bidirectional contexts, denoising autoencoding based pretraining like BERT achieves better performance than pretraining approaches based on autoregressive language modeling. However, relying on corrupting the input with masks, BERT neglects dependency between the masked positions and suf... |
12 | Semantic Hilbert Space for Text Representation Learning | Capturing the meaning of sentences has long been a challenging task. Current models tend to apply linear combinations of word features to conduct semantic composition for bigger-granularity units e.g. phrases, sentences, and documents. However, the semantic linearity does not always hold in human language. For instance... |
13 | CNM An Interpretable Complex-valued Network for Matching | This paper seeks to model human language by the mathematical framework of quantum physics. With the well-designed mathematical formulations in quantum physics, this framework unifies different linguistic units in a single complex-valued vector space, e.g. words as particles in quantum states and sentences as mixed syst... |
14 | Modeling Multidimensional User Relevance in IR using Vector Spaces | It has been shown that relevance judgment of documents is influenced by multiple factors beyond topicality. Some multidimensional user relevance models (MURM) proposed in literature have investigated the impact of different dimensions of relevance on user judgment. Our hypothesis is that a user might give more importan... |
15 | Modelling Dynamic Interactions between Relevance Dimensions | Relevance is an underlying concept in the field of Information Science and Retrieval. It is a cognitive notion consisting of several different criteria or dimensions. Theoretical models of relevance allude to interdependence between these dimensions, where their interaction and fusion leads to the final inference of re... |
16 | An Introduction to Mechanized Reasoning | Mechanized reasoning uses computers to verify proofs and to help discover new theorems. Computer scientists have applied mechanized reasoning to economic problems but -- to date -- this work has not yet been properly presented in economics journals. We introduce mechanized reasoning to economists in three ways. First, ... |
17 | Applying a Formal Method in Industry a 25-Year Trajectory | Industrial applications involving formal methods are still exceptions to the general rule. Lack of understanding, employees without proper education, difficulty to integrate existing development cycles, no explicit requirement from the market, etc. are explanations often heard for not being more formal. Hence the feedb... |
18 | The x86isa Books Features, Usage, and Future Plans | The x86isa library, incorporated in the ACL2 community books project, provides a formal model of the x86 instruction-set architecture and supports reasoning about x86 machine-code programs. However, analyzing x86 programs can be daunting -- even for those familiar with program verification, in part due to the complexit... |
19 | Automatically Proving Mathematical Theorems with Evolutionary Algorithms and Proof Assistants | Mathematical theorems are human knowledge able to be accumulated in the form of symbolic representation, and proving theorems has been considered intelligent behavior. Based on the BHK interpretation and the Curry-Howard isomorphism, proof assistants, software capable of interacting with human for constructing formal p... |
20 | Reinforcement Learning of Theorem Proving | We introduce a theorem proving algorithm that uses practically no domain heuristics for guiding its connection-style proof search. Instead, it runs many Monte-Carlo simulations guided by reinforcement learning from previous proof attempts. We produce several versions of the prover, parameterized by different learning a... |
21 | Revisiting Spatial-Temporal Similarity A Deep Learning Framework for Traffic Prediction | Traffic prediction has drawn increasing attention in AI research field due to the increasing availability of large-scale traffic data and its importance in the real world. For example, an accurate taxi demand prediction can assist taxi companies in pre-allocating taxis. The key challenge of traffic prediction lies in h... |
22 | Deep Multi-View Spatial-Temporal Network for Taxi Demand Prediction | Taxi demand prediction is an important building block to enabling intelligent transportation systems in a smart city. An accurate prediction model can help the city pre-allocate resources to meet travel demand and to reduce empty taxis on streets which waste energy and worsen the traffic congestion. With the increasing... |
23 | Diffusion Convolutional Recurrent Neural Network Data-Driven Traffic Forecasting | Spatiotemporal forecasting has various applications in neuroscience, climate and transportation domain. Traffic forecasting is one canonical example of such learning task. The task is challenging due to (1) complex spatial dependency on road networks, (2) non-linear temporal dynamics with changing road conditions and (... |
24 | Deep Spatio-Temporal Residual Networks for Citywide Crowd Flows Prediction | Forecasting the flow of crowds is of great importance to traffic management and public safety, yet a very challenging task affected by many complex factors, such as inter-region traffic, events and weather. In this paper, we propose a deep-learning-based approach, called ST-ResNet, to collectively forecast the in-flow ... |
25 | Spatio-Temporal Graph Convolutional Networks A Deep Learning Framework for Traffic Forecasting | Timely accurate traffic forecast is crucial for urban traffic control and guidance. Due to the high nonlinearity and complexity of traffic flow, traditional methods cannot satisfy the requirements of mid-and-long term prediction tasks and often neglect spatial and temporal dependencies. In this paper, we propose a nove... |
26 | STG2Seq Spatial-temporal Graph to Sequence Model for Multi-step Passenger Demand Forecasting | Multi-step passenger demand forecasting is a crucial task in on-demand vehicle sharing services. However, predicting passenger demand over multiple time horizons is generally challenging due to the nonlinear and dynamic spatial-temporal dependencies. In this work, we propose to model multi-step citywide passenger deman... |
27 | GaAN Gated Attention Networks for Learning on Large and Spatiotemporal Graphs | We propose a new network architecture, Gated Attention Networks (GaAN), for learning on graphs. Unlike the traditional multi-head attention mechanism, which equally consumes all attention heads, GaAN uses a convolutional sub-network to control each attention head's importance. We demonstrate the effectiveness of GaAN o... |
28 | T-GCN A Temporal Graph ConvolutionalNetwork for Traffic Prediction | Accurate and real-time traffic forecasting plays an important role in the Intelligent Traffic System and is of great significance for urban traffic planning, traffic management, and traffic control. However, traffic forecasting has always been considered an open scientific issue, owing to the constraints of urban road ... |
29 | Connecting the Dots Multivariate Time Series Forecasting with Graph Neural Networks | Modeling multivariate time series has long been a subject that has attracted researchers from a diverse range of fields including economics, finance, and traffic. A basic assumption behind multivariate time series forecasting is that its variables depend on one another but, upon looking closely, it is fair to say that ... |
30 | Multi-Range Attentive Bicomponent Graph Convolutional Network for Traffic Forecasting | Traffic forecasting is of great importance to transportation management and public safety, and very challenging due to the complicated spatial-temporal dependency and essential uncertainty brought about by the road network and traffic conditions. Latest studies mainly focus on modeling the spatial dependency by utilizi... |
31 | Temporal Pattern Attention for Multivariate Time Series Forecasting | Forecasting multivariate time series data, such as prediction of electricity consumption, solar power production, and polyphonic piano pieces, has numerous valuable applications. However, complex and non-linear interdependencies between time steps and series complicate the task. To obtain accurate prediction, it is cru... |
32 | GMAN A Graph Multi-Attention Network for Traffic Prediction | Long-term traffic prediction is highly challenging due to the complexity of traffic systems and the constantly changing nature of many impacting factors. In this paper, we focus on the spatio-temporal factors, and propose a graph multi-attention network (GMAN) to predict traffic conditions for time steps ahead at diffe... |
33 | Adaptive Graph Convolutional Recurrent Network for Traffic Forecasting | Modeling complex spatial and temporal correlations in the correlated time series data is indispensable for understanding the traffic dynamics and predicting the future status of an evolving traffic system. Recent works focus on designing complicated graph neural network architectures to capture shared patterns with the... |
34 | Modeling Long- and Short-Term Temporal Patterns with Deep Neural Networks | Multivariate time series forecasting is an important machine learning problem across many domains, including predictions of solar plant energy output, electricity consumption, and traffic jam situation. Temporal data arise in these real-world applications often involves a mixture of long-term and short-term patterns, f... |
35 | Enhancing the Locality and Breaking the Memory Bottleneck of Transformer on Time Series Forecasting | Time series forecasting is an important problem across many domains, including predictions of solar plant energy output, electricity consumption, and traffic jam situation. In this paper, we propose to tackle such forecasting problem with Transformer [1]. Although impressed by its performance in our preliminary study, ... |
36 | Bike Flow Prediction with Multi-Graph Convolutional Networks | One fundamental issue in managing bike sharing systems is the bike flow prediction. Due to the hardness of predicting the flow for a single station, recent research works often predict the bike flow at cluster-level. While such studies gain satisfactory prediction accuracy, they cannot directly guide some fine-grained ... |
37 | Multi-Scale Context Aggregation by Dilated Convolutions | State-of-the-art models for semantic segmentation are based on adaptations of convolutional networks that had originally been designed for image classification. However, dense prediction and image classification are structurally different. In this work, we develop a new convolutional network module that is specifically... |
38 | Attention Is All You Need | The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on at... |
39 | Empirical Evaluation of Gated Recurrent Neural Networks on Sequence Modeling | In this paper we compare different types of recurrent units in recurrent neural networks (RNNs). Especially, we focus on more sophisticated units that implement a gating mechanism, such as a long short-term memory (LSTM) unit and a recently proposed gated recurrent unit (GRU). We evaluate these recurrent units on the t... |
40 | Convolutional LSTM Network A Machine Learning Approach for Precipitation Nowcasting | The goal of precipitation nowcasting is to predict the future rainfall intensity in a local region over a relatively short period of time. Very few previous studies have examined this crucial and challenging weather forecasting problem from the machine learning perspective. In this paper, we formulate precipitation now... |
41 | PyTorch An Imperative Style, High-Performance Deep Learning Library | Deep learning frameworks have often focused on either usability or speed, but not both. PyTorch is a machine learning library that shows that these two goals are in fact compatible: it provides an imperative and Pythonic programming style that supports code as a model, makes debugging easy and is consistent with other ... |
42 | CONFIT Toward Faithful Dialogue Summarization with Linguistically-Informed Contrastive Fine-tuning | Factual inconsistencies in generated summaries severely limit the practical applications of abstractive dialogue summarization. Although significant progress has been achieved by using pre-trained models, substantial amounts of hallucinated content are found during the human evaluation. Pre-trained models are most comm... |
43 | A Hierarchical Network for Abstractive Meeting Summarization with Cross-Domain Pretraining | With the abundance of automatic meeting transcripts, meeting summarization is of great interest to both participants and other parties. Traditional methods of summarizing meetings depend on complex multi-step pipelines that make joint optimization intractable. Meanwhile, there are a handful of deep neural models for te... |
44 | Coreference-Aware Dialogue Summarization | Summarizing conversations via neural approaches has been gaining research traction lately, yet it is still challenging to obtain practical solutions. Examples of such challenges include unstructured information exchange in dialogues, informal interactions between speakers, and dynamic role changes of speakers as the di... |
45 | Dialogue Discourse-Aware Graph Model and Data Augmentation for Meeting Summarization | Meeting summarization is a challenging task due to its dynamic interaction nature among multiple speakers and lack of sufficient training data. Existing methods view the meeting as a linear sequence of utterances while ignoring the diverse relations between each utterance. Besides, the limited labeled data further hind... |
46 | TODSum Task-Oriented Dialogue Summarization with State Tracking | Previous dialogue summarization datasets mainly focus on open-domain chitchat dialogues, while summarization datasets for the broadly used task-oriented dialogue haven't been explored yet. Automatically summarizing such task-oriented dialogues can help a business collect and review needs to improve the service. Besides... |
47 | DialogLM Pre-trained Model for Long Dialogue Understanding and Summarization | Dialogue is an essential part of human communication and cooperation. Existing research mainly focuses on short dialogue scenarios in a one-on-one fashion. However, multi-person interactions in the real world, such as meetings or interviews, are frequently over a few thousand words. There is still a lack of correspondi... |
48 | How Domain Terminology Affects Meeting Summarization Performance | Meetings are essential to modern organizations. Numerous meetings are held and recorded daily, more than can ever be comprehended. A meeting summarization system that identifies salient utterances from the transcripts to automatically generate meeting minutes can help. It empowers users to rapidly search and sift throu... |
49 | Controllable Abstractive Dialogue Summarization with Sketch Supervision | In this paper, we aim to improve abstractive dialogue summarization quality and, at the same time, enable granularity control. Our model has two primary components and stages: 1) a two-stage generation strategy that generates a preliminary summary sketch serving as the basis for the final summary. This summary sketch p... |
50 | Unified Language Model Pre-training for Natural Language Understanding and Generation | This paper presents a new Unified pre-trained Language Model (UniLM) that can be fine-tuned for both natural language understanding and generation tasks. The model is pre-trained using three types of language modeling tasks: unidirectional, bidirectional, and sequence-to-sequence prediction. The unified modeling is ach... |
51 | Investigating Crowdsourcing Protocols for Evaluating the Factual Consistency of Summaries | Current pre-trained models applied to summarization are prone to factual inconsistencies which either misrepresent the source text or introduce extraneous information. Thus, comparing the factual consistency of summaries is necessary as we develop improved models. However, the optimal human evaluation setup for factual... |
52 | A Survey on Cross-Lingual Summarization | Cross-lingual summarization is the task of generating a summary in one language (e.g., English) for the given document(s) in a different language (e.g., Chinese). Under the globalization background, this task has attracted increasing attention of the computational linguistics community. Nevertheless, there still remain... |
53 | Empirical Analysis of Overfitting and Mode Drop in GAN Training | We examine two key questions in GAN training, namely overfitting and mode drop, from an empirical perspective. We show that when stochasticity is removed from the training procedure, GANs can overfit and exhibit almost no mode drop. Our results shed light on important characteristics of the GAN training procedure. They... |
54 | f-GAN Training Generative Neural Samplers using Variational Divergence Minimization | Generative neural samplers are probabilistic models that implement sampling using feedforward neural networks: they take a random input vector and produce a sample from a probability distribution defined by the network weights. These models are expressive and allow efficient computation of samples and derivatives, but ... |
55 | Self-Supervised GANs via Auxiliary Rotation Loss | Conditional GANs are at the forefront of natural image synthesis. The main drawback of such models is the necessity for labeled data. In this work we exploit two popular unsupervised learning techniques, adversarial training and self-supervision, and take a step towards bridging the gap between conditional and uncondit... |
56 | Large Scale GAN Training for High Fidelity Natural Image Synthesis | Despite recent progress in generative image modeling, successfully generating high-resolution, diverse samples from complex datasets such as ImageNet remains an elusive goal. To this end, we train Generative Adversarial Networks at the largest scale yet attempted, and study the instabilities specific to such scale. We ... |
57 | Layer Normalization | Training state-of-the-art, deep neural networks is computationally expensive. One way to reduce the training time is to normalize the activities of the neurons. A recently introduced technique called batch normalization uses the distribution of the summed input to a neuron over a mini-batch of training cases to compute... |
58 | Generative Image Inpainting with Contextual Attention | Recent deep learning based approaches have shown promising results for the challenging task of inpainting large missing regions in an image. These methods can generate visually plausible image structures and textures, but often create distorted structures or blurry textures inconsistent with surrounding areas. This is ... |
RWGBench is a benchmark for evaluating related work generation (RWG) through the lens of scholarly positioning and citation decision-making, rather than surface-level text similarity. It includes a large-scale paper collection of 40,108 computer science papers with full text and citation lists, a retrieval corpus of 1,091,394 papers, a curated 100-paper test set, and a fully automated evaluation framework.
Dataset
| File | # Entries | Description |
|---|---|---|
| `papers.json` | 40,108 | CS papers (arXiv 2020–2025) with full text and citation lists |
| `corpus.json` | 1,091,394 | Retrieval corpus — title + abstract per paper |
| `gold100_papers.json` | 100 | Quality-filtered test set with gold related work sections |
`corpus.json` — retrieval candidates

```json
{
  "doc_id": 1589,
  "title": "LoRA: Low-Rank Adaptation of Large Language Models",
  "abstract": "We propose a low-rank adaptation method that freezes pretrained model weights..."
}
```
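Corpus entries are keyed by `doc_id`, so a dictionary index gives constant-time lookup when resolving citations. A minimal sketch (the sample record is inlined to mirror the schema above; in practice you would `json.load` the real `corpus.json`, whose exact serialization — single array vs. JSON Lines — should be checked first):

```python
import json

# Illustrative stand-in for corpus.json: a JSON array of entries
# matching the schema shown above.
sample = json.loads("""[
  {"doc_id": 1589,
   "title": "LoRA: Low-Rank Adaptation of Large Language Models",
   "abstract": "We propose a low-rank adaptation method..."}
]""")

# Index entries by doc_id for O(1) lookup.
by_doc_id = {entry["doc_id"]: entry for entry in sample}
print(by_doc_id[1589]["title"])
```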
`gold100_papers.json` — test papers

```json
{
  "paper_id": 9745,
  "title": "EfficientDM: Efficient Quantization-Aware Fine-Tuning of Low-Bit Diffusion Models",
  "abstract": "...",
  "introduction": "...",
  "related_work": "Model quantization. Quantization is a widely employed technique...",
  "citations": [191955, 118706, 517176, 264652, 1589, 2253, ...],
  "overall_score": 90.6
}
```
`citations` is a list of `doc_id`s from `corpus.json`. `overall_score` is a GLM-4 quality rating (0–100).
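Since `citations` reference `corpus.json` by `doc_id`, a test paper's citation list can be resolved against a corpus lookup table. A hedged sketch with inlined stand-in data (the title for id 2253 and the unresolved id 999999 are hypothetical, used only to show the miss case):

```python
# Stand-in for a doc_id -> entry map built from corpus.json.
# The entry for 2253 and the id 999999 below are hypothetical.
corpus_index = {
    1589: {"title": "LoRA: Low-Rank Adaptation of Large Language Models"},
    2253: {"title": "(hypothetical corpus title)"},
}
# Stand-in for one record from gold100_papers.json.
gold_paper = {"paper_id": 9745, "citations": [1589, 2253, 999999]}

# Split citations into resolved titles and ids missing from the index.
resolved = [corpus_index[c]["title"]
            for c in gold_paper["citations"] if c in corpus_index]
missing = [c for c in gold_paper["citations"] if c not in corpus_index]
print(f"resolved {len(resolved)} citations; unresolved ids: {missing}")
```

Checking for unresolved ids is worthwhile in practice, since a retrieval corpus this large may still not cover every citation a paper makes.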
Data Collection & Ethics
- Source: Public arXiv metadata (2020–2025), respecting arXiv's terms of use
- Copyright: Only metadata (titles, abstracts) and author-written content; no full text
- Ethical Use: Intended for non-commercial research only