🍃 MINT-1T:
Scaling Open-Source Multimodal Data by 10x:
A Multimodal Dataset with One Trillion Tokens

🍃 MINT-1T is an open-source Multimodal INTerleaved dataset with 1 trillion text tokens and 3.4 billion images, a 10x scale-up from existing open-source datasets. Additionally, we include previously untapped sources such as PDFs and ArXiv papers. 🍃 MINT-1T is designed to facilitate research in multimodal pretraining. 🍃 MINT-1T was created by a team from the University of Washington in collaboration with Salesforce Research and other academic institutions, including Stanford University, the University of Texas at Austin, and the University of California, Berkeley.

You are currently viewing a subset of the PDF portion of 🍃 MINT-1T associated with CommonCrawl dump CC-2023-23. For other PDF, HTML, and ArXiv subsets, refer to the 🍃 MINT-1T collection.
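Each document record in this subset stores an `images` list in which `null` marks a text segment and names like `page_0_image_85` mark image positions, alongside an `image_metadata` list with per-image page, size, and cross-reference (`xref`) fields. A minimal sketch of pairing the two (the record below is a hypothetical example in the same shape; it is not taken verbatim from the dataset):

```python
# Sketch: pairing interleaved image slots in a MINT-1T PDF document
# record with their metadata entries. The record here is a hypothetical
# example mirroring the shape of the dataset's JSON field.
record = {
    "image_metadata": [
        {"height": 1234, "page": 0, "width": 3111, "xref": 85},
    ],
    # None marks a text segment; strings name image positions.
    "images": ["page_0_image_85", None],
}

def image_slots(record):
    """Yield (position, image_name, metadata) for each image slot."""
    # Index metadata by its "page_{page}_image_{xref}" name.
    by_name = {
        f"page_{m['page']}_image_{m['xref']}": m
        for m in record["image_metadata"]
    }
    for pos, name in enumerate(record["images"]):
        if name is not None:
            yield pos, name, by_name.get(name)

slots = list(image_slots(record))
```

This keeps the interleaving order intact while letting you look up each image's dimensions and source page.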


Updates

9/19/24

We have removed roughly 10% of the PDF samples as there was a mismatch between the frames in the TIFF images and the document metadata.

8/8/24

We have become aware that the image hashes in the PDF subset of MINT-1T do not match the images in the documents. We want to emphasize that the images for each document are correct, and only the image hashes in the documents' metadata are mislabeled.

Dataset Details

Dataset Sources

Uses

Direct Use

🍃 MINT-1T is designed to facilitate research in multimodal pretraining. The dataset can be used to train multimodal models that reason about interleaved text and image sequences, such as Idefics2, XGen-MM, and Chameleon.

Out-of-Scope Use

🍃 MINT-1T was built to make research into large multimodal models more accessible. Training models that ingest or generate personally identifying information (such as images of people's faces and other sensitive content), as well as any military application, are inappropriate use cases of 🍃 MINT-1T.

Dataset Creation

Curation Rationale

🍃 MINT-1T was created to address a significant gap in the open-source domain by providing a large-scale multimodal interleaved dataset for pre-training large multimodal models. This dataset aims to be a valuable resource for the research community, facilitating open science in multimodal pretraining.

Source Data

The dataset is a comprehensive collection of multimodal documents from various sources:

  • HTML documents: Filtered from CommonCrawl WARC dumps spanning from 2017 to 2024
  • PDF documents: Extracted from CommonCrawl WAT dumps covering 2023 to 2024
  • ArXiv documents: A subset of papers from the ArXiv repository

In total, 🍃 MINT-1T contains 1056.8 million documents, broken down as follows:

  • 1029.4 million HTML documents
  • 26.8 million PDF documents
  • 0.6 million ArXiv documents

Data Collection and Processing

The data collection and processing involved several steps:

  1. Document Extraction:

    • HTML documents were parsed from CommonCrawl WARC files
    • PDF documents were extracted from CommonCrawl WAT files
    • ArXiv papers were directly sourced from ArXiv S3 buckets
  2. Filtering Process:

    • Applied text quality filters to ensure content relevance and readability
    • Removed duplicate content at both paragraph and document levels
    • Filtered out undesirable content based on predefined criteria
    • Verified image availability and quality for HTML documents
    • Limited PDF size to 50MB and 50 pages to manage dataset size and quality
  3. Image Processing:

    • Used NSFW image detection to remove pornographic or otherwise undesirable images
    • Removed images smaller than 150 pixels or larger than 20,000 pixels
    • Adjusted aspect ratio thresholds for HTML (2:1) and PDF (3:1) to preserve scientific figures
  4. Text Processing:

    • Used fasttext for language identification, focusing on English content
    • Masked personally identifiable information such as email addresses and IP addresses
    • Applied paragraph and document-level deduplication using Bloom filters
  5. PDF Specific Processing:

    • Used PyMuPDF for parsing PDFs and extracting reading order
    • Clustered text blocks based on columns and ordered from top left to bottom right
  6. ArXiv Specific Processing:

    • Used TexSoup to parse LaTeX source code and interleave images with text
    • Cleaned up LaTeX code by removing imports, bibliography, tables, and citation tags

Various open-source tools were utilized in this process, including fasttext, PyMuPDF, and DCLM and bff for deduplication and content filtering.
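The Bloom-filter deduplication step above can be sketched with a toy filter (a simplified stand-in for the bff tool actually used; the bit-array size, hash count, and hash function here are illustrative, not the production values):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter for paragraph-level deduplication.
    Sizes and hash count are illustrative, not those used by bff."""
    def __init__(self, num_bits=1 << 20, num_hashes=5):
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.bits = bytearray(num_bits // 8)

    def _positions(self, text):
        # Derive k bit positions by salting the hash with an index.
        for i in range(self.num_hashes):
            digest = hashlib.sha256(f"{i}:{text}".encode()).digest()
            yield int.from_bytes(digest[:8], "big") % self.num_bits

    def add(self, text):
        for p in self._positions(text):
            self.bits[p // 8] |= 1 << (p % 8)

    def __contains__(self, text):
        return all(self.bits[p // 8] & (1 << (p % 8))
                   for p in self._positions(text))

def dedupe_paragraphs(paragraphs, bloom):
    """Keep a paragraph only if it has (probably) not been seen before."""
    kept = []
    for para in paragraphs:
        if para not in bloom:
            bloom.add(para)
            kept.append(para)
    return kept

bloom = BloomFilter()
kept = dedupe_paragraphs(["Welcome!", "Unique text.", "Welcome!"], bloom)
```

Because membership tests can produce false positives (but never false negatives), a Bloom filter may occasionally drop a novel paragraph, a trade-off accepted for its constant memory footprint at dataset scale.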

Personal and Sensitive Information

Despite sourcing from public web data, significant efforts were made to minimize the inclusion of personal and sensitive information:

  • Email addresses and IP addresses were masked to protect privacy
  • An NSFW image classifier was applied to remove inappropriate visual content
  • URLs containing substrings associated with undesirable or sensitive content were filtered out

However, users should be aware that as the data originates from the public web, it may still contain some sensitive or personal information. The dataset creators acknowledge this limitation and advise users to exercise caution and potentially apply additional filtering based on their specific use cases.
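The masking described above can be sketched with simple regular expressions (the patterns and placeholder tokens here are illustrative; the pipeline's actual rules may differ):

```python
import re

# Illustrative patterns; the dataset's actual masking rules may differ.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
IPV4_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")

def mask_pii(text):
    """Replace email addresses and IPv4 addresses with placeholder tokens."""
    text = EMAIL_RE.sub("<EMAIL>", text)
    text = IPV4_RE.sub("<IP>", text)
    return text

masked = mask_pii("Contact jane.doe@example.com from 192.168.0.1")
```

Substituting fixed placeholder tokens (rather than deleting the spans) preserves sentence structure for downstream language modeling.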

Bias, Risks, and Limitations

Several potential biases, risks, and limitations have been identified:

  1. Data Bias: As the dataset is sourced from web crawls, it may inherit biases present in online content.

  2. Content Risks: Despite extensive filtering, there's a possibility that some offensive, insensitive, or inappropriate content may remain in the dataset.

  3. Image Availability: The dataset relies on external image URLs, which may become unavailable over time due to link rot, potentially affecting the dataset's long-term usability.

  4. PDF Parsing Limitations: The current method for extracting reading order from PDFs may not always accurately capture the intended flow, especially for documents with complex layouts.

  5. Potential Legal and Ethical Concerns: While efforts were made to respect robots.txt files and remove sensitive information, there may still be content that individuals did not explicitly consent to include.

Recommendations

Given these considerations, the following recommendations are provided:

  1. Additional Filtering: Users are strongly encouraged to apply additional filtering based on their specific use case and ethical considerations.

  2. Inappropriate Use Cases: The dataset is not recommended for applications involving the processing or generation of personally identifying information, nor for military applications.

  3. Legal Compliance: Users should independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.

  4. Bias Awareness: Researchers and developers should be cognizant of potential biases in the dataset and consider their impact on model training and outputs.

License

We release 🍃 MINT-1T under a CC-BY-4.0 license, designating it primarily as a research artifact. While the dataset is freely available, users are responsible for ensuring its legal use in commercial settings. Users must independently verify compliance with applicable laws before employing MINT-1T for commercial purposes.

Citation

@article{awadalla2024mint1t,
      title={MINT-1T: Scaling Open-Source Multimodal Data by 10x: A Multimodal Dataset with One Trillion Tokens}, 
      author={Anas Awadalla and Le Xue and Oscar Lo and Manli Shu and Hannah Lee and Etash Kumar Guha and Matt Jordan and Sheng Shen and Mohamed Awadalla and Silvio Savarese and Caiming Xiong and Ran Xu and Yejin Choi and Ludwig Schmidt},
      year={2024}
}