Dataset Preview
The full dataset viewer is not available; only a preview of the rows is shown, along with the error that prevented it from being generated.
The dataset generation failed
Error code:   DatasetGenerationError
Exception:    CastError
Message:      Couldn't cast
directory: string
identifier: string
...1: int64
creator: string
language: string
title: string
publication_date: int64
lang: string
real_lang: string
n: int64
rights: string
file: string
word_count: int64
text: string
-- schema metadata --
pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 1844
to
{'identifier': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'publication_date': Value(dtype='int64', id=None), 'word_count': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)}
because column names don't match
Traceback:    Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1492, in compute_config_parquet_and_info_response
                  fill_builder_info(builder, hf_endpoint=hf_endpoint, hf_token=hf_token, validate=validate)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 683, in fill_builder_info
                  ) = retry_validate_get_features_num_examples_size_and_compression_ratio(
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 602, in retry_validate_get_features_num_examples_size_and_compression_ratio
                  validate(pf)
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 640, in validate
                  raise TooBigRowGroupsError(
              worker.job_runners.config.parquet_and_info.TooBigRowGroupsError: Parquet file has too big row groups. First row group has 850328990 which exceeds the limit of 300000000
              
              During handling of the above exception, another exception occurred:
              
              Traceback (most recent call last):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1995, in _prepare_split_single
                  for _, table in generator:
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 797, in wrapped
                  for item in generator(*args, **kwargs):
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 97, in _generate_tables
                  yield f"{file_idx}_{batch_idx}", self._cast_table(pa_table)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/packaged_modules/parquet/parquet.py", line 75, in _cast_table
                  pa_table = table_cast(pa_table, self.info.features.arrow_schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2302, in table_cast
                  return cast_table_to_schema(table, schema)
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/table.py", line 2256, in cast_table_to_schema
                  raise CastError(
              datasets.table.CastError: Couldn't cast
              directory: string
              identifier: string
              ...1: int64
              creator: string
              language: string
              title: string
              publication_date: int64
              lang: string
              real_lang: string
              n: int64
              rights: string
              file: string
              word_count: int64
              text: string
              -- schema metadata --
              pandas: '{"index_columns": [{"kind": "range", "name": null, "start": 0, "' + 1844
              to
              {'identifier': Value(dtype='string', id=None), 'title': Value(dtype='string', id=None), 'publication_date': Value(dtype='int64', id=None), 'word_count': Value(dtype='int64', id=None), 'text': Value(dtype='string', id=None)}
              because column names don't match
              
              The above exception was the direct cause of the following exception:
              
              Traceback (most recent call last):
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1505, in compute_config_parquet_and_info_response
                  parquet_operations, partial, estimated_dataset_info = stream_convert_to_parquet(
                File "/src/services/worker/src/worker/job_runners/config/parquet_and_info.py", line 1099, in stream_convert_to_parquet
                  builder._prepare_split(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 1882, in _prepare_split
                  for job_id, done, content in self._prepare_split_single(
                File "/src/services/worker/.venv/lib/python3.9/site-packages/datasets/builder.py", line 2038, in _prepare_split_single
                  raise DatasetGenerationError("An error occurred while generating the dataset") from e
              datasets.exceptions.DatasetGenerationError: An error occurred while generating the dataset

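Both failures reported above stem from how the parquet shards were written: the first row group of the offending shard weighs in at 850328990 (against a viewer limit of 300000000), and some shards carry extra columns (directory, creator, language, lang, real_lang, n, rights, file…) that are absent from the five declared features, which is what triggers the CastError. Below is a minimal sketch, assuming pyarrow and using placeholder file names, of how a shard could be rewritten to satisfy both constraints; it is an illustration, not the maintainers' tooling.

```python
# Illustrative sketch only: rewrite one shard so that it keeps just the five
# declared columns and uses smaller row groups, addressing both viewer errors.
# "shard_in.parquet" and "shard_out.parquet" are placeholder file names.
import pyarrow.parquet as pq

DECLARED_COLUMNS = ["identifier", "title", "publication_date", "word_count", "text"]

table = pq.read_table("shard_in.parquet", columns=DECLARED_COLUMNS)

# With full book texts in the "text" column, a few hundred rows per row group
# keeps each group comfortably below the viewer's size limit.
pq.write_table(table, "shard_out.parquet", row_group_size=200)
```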

identifier (string) | title (string) | publication_date (int64) | word_count (int64) | text (string)
cihm_08193 | Baptism versus rantism [microform] : baptism as a New Testament ordinance proved to be a covering of the person with water, and rantism, sprinkling -not a New Testament ordinance : a reply to the misstatements and fallacies of Rev. W.A. McKay, B.A. | 1880 | 18,428 | ,n<4u IMAGE EVALUATION TEST TARGET (MT-3) 1.0 I.I 2.0 1.8 1.25 1.4 1 ^ 1 1.6 ^ 6" ► Va ^ /a VI ^. el ^^ .v^ <p '/a y //a Photographic Sciences Corporation s. ip i\ o ^^ ^\ fc 6^ <> ^<h'- 23 WEST MAIN ST...
cihm_58382 | "Teachers and teaching [microform] : an address delivered before the Teachers' Association for the S(...TRUNCATED) | 1877 | 17,623 | "IMAGE EVALUATION \nTEST TARGET (MT-3) \n\n\n:-••/•. \n\n\n«9- \n\n\n1.0 \n\n\nI.I \n\n\n1(...TRUNCATED)
histoiredesducs02gugoog | Histoire des ducs de Bourgogne de la maison de Valois, 1364-1477 | 1839 | 89,831 | "Google \n\n\n\nThis is a digital copy of a book that was preserved for generations on library shelv(...TRUNCATED)
diarydrthomasca00cartgoog | The Diary of Dr. Thomas Cartwright, Bishop of Chester: Commencing at the Time of His Elevation ... | 1843 | 44,249 | "Google \n\n\n\nThis is a digital copy of a book that was preserved for generations on library shelv(...TRUNCATED)
dermatologischem20leipuoft | Dermatologische Monatsschrift | 1882 | 249,585 | "i. - ■ ^ \n\nMONATSHEFTE vorian \n\n\nFOR \n\nPRAKTISCHE DEMATOLOGIE. \n\nUNTER M(...TRUNCATED)
monthlyreposito15unkngoog | The monthly repository of theology and general literature | 1806 | 628,984 | "This is a digital copy of a book that was preserved for generations on library shelv(...TRUNCATED)
bengalasafieldm01wyligoog | Bengal as a Field of Missions | 1854 | 199,912 | "Google \n\n\n\nThis is a digital copy of a book that was preserved for generations on library shelv(...TRUNCATED)
oeuvresducardina03retzuoft | "Oeuvres du cardinal de Retz. Nouv. éd., rev. sur les plus anciennes impressions et les autographes(...TRUNCATED) | 1870 | 208,488 | "TO\\\\0H'\\0 \n\n\nLES \n\nGRANDS ECRIYAINS \n\nDE LA FRANCE \n\nNOUVELLES EDITIONS \n\nrL(...TRUNCATED)
cihm_07746 | "The Hunt and Douglas process for extracting copper from its ores [microform] : with an appendix inc(...TRUNCATED) | 1876 | 16,784 | "IMAGE EVALUATION \nTEST TARGET (MT-3) \n\n\n/. \n\n\n'k \n\n\n>\"a \n\n\n%° \n\n\nf ^ \n\n\n(...TRUNCATED)
memoirschiefinc00trimgoog | "Memoirs of the chief incidents of the public life of Sir George Thomas Staunton, Bart., hon. D.C.L.(...TRUNCATED) | 1856 | 44,292 | "\nThis is a digital copy of a book that was preserved for generations on library shelves before it (...TRUNCATED)
End of preview.


English Public Domain Books (English)

English-Public Domain-Books, or English-PD, is a large collection aiming to aggregate a significant part of English-language monographs in the public domain. It has been designed to avoid duplicating existing open collections.

Dataset summary

The collection contains 78,566,669,909 words (736,214 titles) recovered from multiple sources, especially the Internet Archive. Each parquet file contains the full text of 1,000 books selected at random.
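The shards can also be inspected without the dataset viewer. The sketch below, which assumes the huggingface_hub and pandas libraries (with a parquet engine such as pyarrow), downloads a single shard from the PleIAs/English-PD repository and loads the five columns shown in the preview above; the shard is picked programmatically because individual file names are not listed here.

```python
# Minimal sketch: download one parquet shard from the Hub and load the
# columns shown in the dataset preview.
from huggingface_hub import hf_hub_download, list_repo_files
import pandas as pd

repo_id = "PleIAs/English-PD"

# Pick the first parquet file in the repository rather than hard-coding a name.
parquet_files = [f for f in list_repo_files(repo_id, repo_type="dataset") if f.endswith(".parquet")]
path = hf_hub_download(repo_id=repo_id, filename=parquet_files[0], repo_type="dataset")

df = pd.read_parquet(path, columns=["identifier", "title", "publication_date", "word_count", "text"])
print(df[["identifier", "title", "publication_date", "word_count"]].head())
```

Reading shards one by one with an explicit column list also side-steps the schema mismatch reported in the viewer error above, since no single schema has to be cast across all files.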

Curation method

The composition of the dataset adheres to the criteria for public domain works in the EU and, consequently, in all Berne Convention countries for EU authors: any publication whose author has been dead for more than 70 years. Additionally, the consolidation of public domain status for cultural heritage in the EU rests on the 2019 Copyright Directive (art. 14).

As of June 2024, to limit the burden of rights verification, we have retained only titles published before 1884.

The corpus will be expanded at a later stage to encompass late 19th-century and early 20th-century publications, once their public domain status has been verified.
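As a quick sanity check of this cutoff, the publication_date column (an integer year, as in the preview above) can be inspected on any downloaded shard. A minimal sketch, with a placeholder file name standing in for a shard obtained as in the loading example above:

```python
# Minimal sketch: measure how much of one shard falls before the 1884 cutoff.
# "english_pd_shard.parquet" is a placeholder for a locally downloaded shard.
import pandas as pd

df = pd.read_parquet("english_pd_shard.parquet", columns=["identifier", "publication_date"])
share_pre_1884 = (df["publication_date"] < 1884).mean()
print(f"{share_pre_1884:.1%} of titles in this shard were published before 1884")
```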

Uses

The collection aims to expand the availability of open works for the training of Large Language Models. The text can be used for model training and republished without restriction for reproducibility purposes.

The rationale for creating this collection is manifold:

  • Scientific: The closed nature of training corpora is a major barrier to AI research; large language models face a real reproducibility crisis.
  • Legal: With the adoption of the AI Act and its copyright-compliance obligations for pretraining corpora, the European AI ecosystem will have to change its provenance practices.
  • Cultural: The linguistic diversity of the European Union is currently underrepresented in available training corpora. Unlike web archives, open heritage, administrative, and scientific texts are often of high quality: they are long, multilingual, carefully edited publications.
  • Economic: Today, value capture is concentrated among players whose financial resources are already considerable, allowing them to collect or purchase data at a high price. Making a royalty-free corpus available to as many people as possible frees up innovation and minimizes economic dependence on dominant actors.

License

The entire collection is in the public domain in all regions. This means that the economic (patrimonial) rights of every individual or collective rights holder have expired.

There has been a debate for years in Europe over the definition of the public domain and the possibility of restricting its use. Since 2019, the EU Copyright Directive states that "Member States shall provide that, when the term of protection of a work of visual art has expired, any material resulting from an act of reproduction of that work is not subject to copyright or related rights, unless the material resulting from that act of reproduction is original in the sense that it is the author's own intellectual creation." (art. 14)

Future work

This dataset is not a one-time work but will continue to evolve significantly in three directions:

  • Expansion of the dataset to late 19th- and early 20th-century works, and further enhancement with currently unexploited collections from European heritage data repositories.
  • Correction of computer-generated errors in the text. All the texts have been transcribed automatically through the use of Optical Character Recognition (OCR) software. The original files have been digitized over a long time period (since the mid-2000s), and some documents should be re-digitized or re-processed with current tools. Future versions will strive either to re-OCRize the original text or to use experimental LLM models for partial OCR correction.
  • Enhancement of the structure and editorial presentation of the original text. Some parts of the original documents are likely unwanted for large-scale analysis or model training (running headers, page numbers…); a rough, user-side filtering heuristic is sketched after this list. Additionally, advanced document structures like tables or multi-column layouts are unlikely to be well formatted.
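The following is a purely illustrative sketch of the kind of line-level filtering a user might apply in the meantime; the heuristics (bare page numbers, short all-caps running headers) are hypothetical and are not part of the dataset's processing pipeline.

```python
# Illustrative heuristics only (not the dataset's pipeline): drop lines that
# look like page furniture, such as bare page numbers or short all-caps
# running headers, before using the text downstream.
import re

def strip_page_furniture(text: str) -> str:
    kept = []
    for line in text.splitlines():
        stripped = line.strip()
        if stripped:
            # bare arabic or roman-numeral page numbers
            if re.fullmatch(r"\d{1,4}|[ivxlc]{1,6}", stripped, flags=re.IGNORECASE):
                continue
            # short all-caps lines are often running headers
            if len(stripped) < 40 and stripped.isupper():
                continue
        kept.append(line)
    return "\n".join(kept)
```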

Acknowledgements

The corpus was stored and processed with the generous support of Scaleway. It was built up with the support and concerted efforts of the state start-up LANGU:IA (start-up d’Etat), supported by the French Ministry of Culture and DINUM, as part of the prefiguration of the service offering of the Alliance for Language technologies EDIC (ALT-EDIC).

Corpus collection was greatly facilitated by the insights, cooperation, and support of the open-science LLM community (Occiglot, Eleuther AI, OpenLLM France, Allen AI).
