

We collect a 2.5B-token training dataset spanning multiple domains for long-context continual pre-training. The composition of this dataset is as follows (partially inspired by Long-Data-Collection):

Domain         Proportion  Source
Book           40%         Redpajama-Book
Arxiv          20%         Redpajama-Arxiv
General        20%         Redpajama
Code           10%         LCC-Python
QA             5%          Natural Questions
Summarization  5%          BookSum
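Drawing training documents according to these domain weights can be sketched as follows. This is a minimal illustration only; the `MIXTURE` dictionary and the `sample_domain` helper are our own naming, not part of any released code:

```python
import random

# Domain mixture from the table above (share of training tokens per domain).
MIXTURE = {
    "book": 0.40,
    "arxiv": 0.20,
    "general": 0.20,
    "code": 0.10,
    "qa": 0.05,
    "summarization": 0.05,
}

def sample_domain(rng=random):
    """Draw a domain with probability equal to its token share."""
    domains = list(MIXTURE)
    weights = [MIXTURE[d] for d in domains]
    return rng.choices(domains, weights=weights, k=1)[0]
```

In an actual pipeline one would then pull the next document from the chosen domain's source (e.g. Redpajama-Book for "book") until the target token budget is reached.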

We have also curated a test dataset of 250 million tokens with the same domain composition. Candidate documents were selected so that their average n-gram similarity (for n = 2, 3, 4) with the training set stays below 10%. This threshold excludes all QA and Summarization data, leaving a test corpus whose tokens are distributed across Book, Arxiv, General, and Code in a 4:2:2:1 ratio.
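The similarity criterion can be read as follows: for each n, compute the fraction of a candidate document's n-grams that appear anywhere in the training set, then average over n. A minimal sketch under that interpretation (the function names are ours, and the actual filtering pipeline may differ):

```python
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token sequence, as tuples."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def build_train_index(train_docs, ns=(2, 3, 4)):
    """One Counter of n-grams per order n, pooled over all training documents."""
    index = {n: Counter() for n in ns}
    for doc in train_docs:
        for n in ns:
            index[n].update(ngrams(doc, n))
    return index

def avg_ngram_similarity(candidate, index, ns=(2, 3, 4)):
    """Mean over n of the fraction of the candidate's n-grams seen in training."""
    fractions = []
    for n in ns:
        grams = ngrams(candidate, n)
        if not grams:
            continue
        fractions.append(sum(1 for g in grams if g in index[n]) / len(grams))
    return sum(fractions) / len(fractions) if fractions else 0.0

def passes_filter(candidate, index, threshold=0.10):
    """Keep a candidate only if its overlap with the training set is below threshold."""
    return avg_ngram_similarity(candidate, index) < threshold
```

A document copied verbatim from the training set scores 1.0 and is rejected; a document sharing no bigrams, trigrams, or 4-grams with it scores 0.0 and is kept.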


The dataset is published as DAMO-NLP-SG/LongCorpus-2.5B.