---
dataset_info:
  features:
    - name: Source
      dtype: string
    - name: Date
      dtype: int64
    - name: Text
      dtype: string
    - name: Token_count
      dtype: int64
  splits:
    - name: train
      num_bytes: 8122744210
      num_examples: 6366648
  download_size: 3707767805
  dataset_size: 8122744210
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
pretty_name: Project_CodeNet
size_categories:
  - 1M<n<10M
task_categories:
  - text-generation
language:
  - code
license: other
---

# Project_CodeNet

## Overview

This dataset is constructed from the Project CodeNet corpus, which consists of competitive programming submissions collected from online judges.

We extract a large-scale code corpus designed for pretraining language models, with a focus on:

- clean, executable code
- temporal metadata (submission time)
- minimal preprocessing to preserve the original distribution

## Dataset Statistics

- Total samples: ~6.37M
- Total tokens: ~3.06B
- Average tokens per sample: 480.44

### Token Length Distribution

- P50: 162 tokens
- P90: 679 tokens
- P95: 1035 tokens
- P99: 2702 tokens
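Percentiles like these can be recomputed from the `Token_count` column. A minimal stdlib sketch using the nearest-rank method (the token counts below are illustrative, not taken from the dataset):

```python
def percentile(values, p):
    """Nearest-rank percentile of a list of token counts."""
    ordered = sorted(values)
    # Index of the value covering p percent of the sorted list.
    k = max(0, min(len(ordered) - 1, round(p / 100 * (len(ordered) - 1))))
    return ordered[k]

# Illustrative token counts; in practice, iterate the dataset's Token_count field.
token_counts = [120, 162, 200, 480, 679, 1035, 2702, 90, 150, 300]

for p in (50, 90, 95, 99):
    print(f"P{p}: {percentile(token_counts, p)} tokens")
```

Exact values depend on the interpolation method; libraries such as NumPy default to linear interpolation, so small discrepancies against the published numbers are expected.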

## Construction

### Source

Submissions are drawn from the original Project CodeNet corpus released by IBM.

### Filtering Rules

We apply the following steps:

1. **Keep only Accepted submissions**
   - Removes incorrect or incomplete code.
2. **Deduplication at the metadata level**
   - For each `(problem_id, user_id, language)`, keep the last accepted submission.
   - This approximates the user's final solution.
3. **No content-based deduplication**
   - Similar solutions across users are preserved.
   - Reflects the real-world submission distribution.
4. **No balancing**
   - Language and temporal distributions are kept as-is.
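Steps 1 and 2 can be sketched as follows. The record dicts and field names here are illustrative, not the actual CodeNet schema; the key idea is that a later accepted submission overwrites an earlier one for the same `(problem_id, user_id, language)` key:

```python
def dedup_last_accepted(submissions):
    """Keep the last accepted submission per (problem_id, user_id, language).

    Assumes `submissions` is ordered by submission time, so a later
    entry overwrites an earlier one for the same key.
    """
    latest = {}
    for sub in submissions:
        if sub["status"] != "Accepted":
            continue  # step 1: drop non-accepted submissions
        key = (sub["problem_id"], sub["user_id"], sub["language"])
        latest[key] = sub  # step 2: later submissions overwrite earlier ones
    return list(latest.values())

subs = [
    {"problem_id": "p1", "user_id": "u1", "language": "C++", "status": "Accepted", "code": "v1"},
    {"problem_id": "p1", "user_id": "u1", "language": "C++", "status": "Accepted", "code": "v2"},
    {"problem_id": "p1", "user_id": "u2", "language": "Python", "status": "Wrong Answer", "code": "x"},
]
print(dedup_last_accepted(subs))  # only u1's last accepted version survives
```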

## Fields

Each sample contains:

| Field       | Description                            |
|-------------|----------------------------------------|
| Source      | Dataset name (`Project_CodeNet`)       |
| Date        | Submission year                        |
| Text        | Source code                            |
| Token_count | Token count computed using tiktoken    |

## Tokenization

- Tokenizer: tiktoken
- Encoding: `cl100k_base`

## Distribution Characteristics

### Language Distribution

The dataset is highly skewed toward C++:

- C++ dominates (~60%)
- Python is the second largest (~23%)
- Other languages form a long tail

### Temporal Distribution

The dataset is heavily concentrated in recent years:

- The majority of samples are from 2019–2020
- Reflects real submission activity in CodeNet
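The temporal skew can be checked by tallying the `Date` field. A sketch over illustrative records (in practice, iterate the dataset's `Date` column):

```python
from collections import Counter

# Illustrative records; real ones come from the dataset's Date column.
records = [{"Date": 2019}, {"Date": 2020}, {"Date": 2020}, {"Date": 2015}]

by_year = Counter(r["Date"] for r in records)
for year, n in sorted(by_year.items()):
    print(f"{year}: {n} samples ({n / len(records):.0%})")
```

The same tally over the `Source` or a language field would reproduce the language distribution above.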

## Important Notes

- This dataset preserves the original submission distribution of CodeNet.
- It is not balanced across languages or time.
- It is primarily composed of competitive programming code, which may differ from production software code.
- Some near-duplicate solutions exist due to similar problem-solving strategies.

## Intended Use

- Pretraining code language models
- Studying the temporal evolution of programming patterns
- Benchmarking under real-world distribution settings

## Limitations

- Not representative of general software engineering code
- Strong bias toward:
  - competitive programming tasks
  - algorithmic problem solving
- Language and temporal imbalance

## License

Please refer to the original Project CodeNet dataset for licensing details.


## Citation

If you use this dataset, please cite Project CodeNet:

```bibtex
@article{puri2021project,
  title={Project CodeNet: A Large-Scale AI for Code Dataset for Learning a Diversity of Coding Tasks},
  author={Puri, Ruchir and others},
  year={2021}
}
```