---
license: cc-by-4.0
language:
- en
- code
tags:
- solidity
- smart-contracts
- blockchain
- ethereum
- code
- continued-pretraining
- quality-filtered
size_categories:
- 10K<n<100K
task_categories:
- text-generation
- fill-mask
pretty_name: Solidity CPT Top-10% Quality-Filtered Corpus
configs:
- config_name: default
data_files:
- split: train
path: top10.parquet
---

# Solidity CPT Top-10% Quality-Filtered Corpus
A curated, deduplicated corpus of 23,471 modern Solidity source files (~86M tokens) intended for continued pretraining (CPT) of code LLMs on smart-contract code.

It is the top-10% slice (by composite quality score) of a larger raw corpus that combined:

- ASSERT-KTH/DISL: 514k unique deployed Solidity files, deduped at file level
- 30 hand-picked GitHub blue-chip protocols (OpenZeppelin, Uniswap v2/v3/v4, Aave v3, Compound, Morpho, EigenLayer, Pendle, Solady, Seaport, LayerZero, ENS, Optimism, Arbitrum, Polygon L1, etc.)

After scoring all 234,877 raw rows on a 0–115 quality scale (see Quality filter below) and keeping the top 10%, 23,487 source files remained. Cross-source SHA-256 deduplication of whitespace-normalised text then removed 16 duplicates, leaving the 23,471 unique files published here.
## At a glance

| Statistic | Value |
|---|---|
| Rows | 23,471 |
| Total characters | 342,858,470 |
| Approx. tokens (Qwen3.6 tokenizer, ~4 chars/tok) | ~86M |
| Average chars per row | 14,607 |
| Source: `disl` (Etherscan-verified) | 20,975 (89%) |
| Source: `github/<protocol>` (audited blue-chips) | 2,496 (11%) |
| File format | parquet (snappy) + jsonl |
## Schema

Each row is one Solidity source file:

| Field | Type | Description |
|---|---|---|
| `text` | string | Raw Solidity source code |
| `source` | string | `disl` (Etherscan-verified contract) or `github/<repo-slug>` |
| `address` | string | Ethereum mainnet address (DISL only) |
| `name` | string | Contract name |
| `compiler` | string | Solc version, e.g. `v0.8.20+commit.a1b2c3d4` (DISL only) |
| `license` | string | License from Etherscan (often empty) |
| `path` | string | Path inside the GitHub repo (GitHub rows only) |
| `n_chars` | int | Length of `text` in characters |
## Quality filter (composite 0–115 score)

Every raw row was scored independently on each signal; rows scoring ≥ 55 (the top-10% threshold) were kept.

| Signal | Points |
|---|---|
| Pragma 0.8.20+ (modern compiler) | +30 |
| Pragma 0.8.13–19 | +20 |
| Pragma 0.8.x (any) | +10 |
| Has SPDX license header | +10 |
| Has `@notice` / `@dev` NatSpec | +10 |
| Has `@param` / `@return` NatSpec | +10 |
| Comment density 5–30% | +10 |
| Uses custom errors (`error X();`) | +5 |
| Uses `unchecked { ... }` blocks | +5 |
| No SafeMath import (avoids old patterns) | +5 |
| Size 500–8,000 chars | +10 |
| Source: GitHub blue-chip | +20 |
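The scoring can be approximated with simple pattern checks. Below is a minimal, illustrative re-implementation: the weights come from the table above, but the exact regexes, pragma parsing, and comment-density definition used in the original pipeline are not published and are assumptions here.

```python
import re

def quality_score(row: dict) -> int:
    """Illustrative composite 0-115 score; weights follow the table above,
    the individual heuristics are approximations."""
    text = row["text"]
    score = 0

    # Pragma tier: only the highest matching tier counts.
    m = re.search(r"pragma\s+solidity\s*[\^>=<~\s]*0\.8\.(\d+)", text)
    if m:
        minor = int(m.group(1))
        score += 30 if minor >= 20 else 20 if minor >= 13 else 10

    if "SPDX-License-Identifier" in text:
        score += 10
    if "@notice" in text or "@dev" in text:
        score += 10
    if "@param" in text or "@return" in text:
        score += 10

    # Comment density measured here as the share of comment lines (assumption).
    lines = text.splitlines()
    comments = sum(1 for l in lines if l.lstrip().startswith(("//", "/*", "*")))
    if 0.05 <= comments / max(len(lines), 1) <= 0.30:
        score += 10

    if re.search(r"\berror\s+\w+\s*\(", text):
        score += 5
    if re.search(r"\bunchecked\s*{", text):
        score += 5
    if "SafeMath" not in text:
        score += 5
    if 500 <= len(text) <= 8000:
        score += 10
    if row["source"].startswith("github/"):
        score += 20

    return score
```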
Pre-scoring filters (applied to all sources before scoring; a minimal sketch follows the list):

- Pragma must be 0.8.x (0.4–0.7 dropped as incompatible with `forge-std/Test.sol >=0.8.13`)
- Not flagged as `proxy=True` in DISL (drops EIP-1167/1967 minimal proxies)
- Has at least one `contract X` declaration (drops interface-only / library-only files)
- 200 ≤ length ≤ 50,000 chars (drops stubs and mega-flatteners)
- For GitHub: vendored `lib/openzeppelin-contracts`, `lib/forge-std`, `out/`, `cache/`, and `typechain-types/` stripped
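Expressed as code, the pre-scoring gate looks roughly like this. The `proxy` flag comes from the raw DISL metadata and is not part of the published schema, and the regexes are illustrative:

```python
import re

def passes_prefilter(row: dict) -> bool:
    """Approximate pre-scoring gate; exact patterns in the original pipeline may differ."""
    text = row["text"]
    if not re.search(r"pragma\s+solidity\s*[\^>=<~\s]*0\.8\.", text):
        return False                        # only 0.8.x pragmas survive
    if row.get("proxy") is True:            # DISL-only flag (raw metadata, not in this schema)
        return False
    if not re.search(r"\bcontract\s+\w+", text):
        return False                        # interface-/library-only files dropped
    return 200 <= len(text) <= 50_000       # drop stubs and mega-flatteners
```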
Post-quality dedup: SHA-256 of the whitespace-collapsed text. Caught 16 cross-source duplicates.
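A sketch of that dedup step, assuming `rows` holds the scored, pre-filtered records; the exact whitespace normalisation used upstream is not published:

```python
import hashlib
import re

def dedup_key(text: str) -> str:
    # Collapse all whitespace so files that differ only in formatting hash identically.
    return hashlib.sha256(re.sub(r"\s+", " ", text).strip().encode("utf-8")).hexdigest()

seen, unique_rows = set(), []
for row in rows:  # `rows`: scored, pre-filtered corpus (assumed variable)
    key = dedup_key(row["text"])
    if key not in seen:
        seen.add(key)
        unique_rows.append(row)
```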
## Score distribution of the top-10% slice

The 23,471 surviving rows have scores in [55, 115]. The mode falls in the 55–69 band (typical "modern, idiomatic, audited" Solidity). The long tail (90+) skews heavily toward GitHub blue-chips and DISL contracts that cite their license and use NatSpec extensively.
## Suggested use

This corpus is sized for adapter-based CPT (LoRA / DoRA) on a 7B–30B+ code LM. Per Biderman et al. 2024 ("LoRA Learns Less and Forgets Less"), useful adapter-CPT plateaus around 200M–1B tokens; this 86M-token corpus sits at the lower end of that range and works well as a fast first-iteration validation slice.
For larger adapter-CPT runs, the same pipeline produces:
- `top30` tier (~70k rows, ~229M tokens)
- `top60` tier (~141k rows, ~458M tokens)
(Not yet published. Open an issue if you'd like them released.)
## Loading

### `datasets` library (recommended)

```python
from datasets import load_dataset

ds = load_dataset("samscrack/solidity-cpt-top10-quality", split="train")
print(ds)                    # ~23,471 rows
print(ds[0]["text"][:500])
```
### Direct parquet

```python
import pyarrow.parquet as pq

table = pq.read_table("top10.parquet")
df = table.to_pandas()
print(df.head())
```
### Streaming (for low-RAM environments)

```python
from datasets import load_dataset

ds = load_dataset(
    "samscrack/solidity-cpt-top10-quality",
    split="train",
    streaming=True,
)
for row in ds.take(3):
    print(row["source"], row["name"], row["compiler"])
```
## Suggested CPT recipe (Qwen3.6-27B reference)

This corpus was built for the `Qwopus3.6-27B-solidity-cpt-stageA` pipeline. The recipe used (a configuration sketch follows the list):

- LoRA r=64, α=64, target modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`, `out_proj`
- Megatron-style packing into 8192-token sequences with EOS separators
- LR 5e-5 (cosine), bf16 + 4-bit base (QLoRA), 1 epoch
- Tokenized with the Qwen3.6 tokenizer (vocab 152k)
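A minimal sketch of that setup with Hugging Face `peft`, plus a naive packing helper; the LoRA hyperparameters mirror the list above, while the packing function is an illustrative stand-in rather than the exact Megatron-style packer used in the original run:

```python
from peft import LoraConfig

# LoRA hyperparameters from the recipe above.
lora_config = LoraConfig(
    r=64,
    lora_alpha=64,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj", "out_proj",
    ],
    task_type="CAUSAL_LM",
)

# Naive sequence packing: concatenate tokenized files with EOS separators,
# then slice into fixed 8192-token training sequences.
def pack(texts, tokenizer, seq_len=8192):
    ids = []
    for t in texts:
        ids.extend(tokenizer(t, add_special_tokens=False)["input_ids"])
        ids.append(tokenizer.eos_token_id)
    return [ids[i:i + seq_len] for i in range(0, len(ids) - seq_len + 1, seq_len)]
```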
A Stage-A CPT over a 1500-sequence (~12M token) subset of this corpus showed loss drop from a warmup peak of 0.67 to a stable 0.36–0.41 plateau in ~40 steps — clear evidence the data is high-signal for adapter-CPT.
## License & legal notes

This dataset is released under CC BY 4.0, inheriting the most restrictive upstream license (DISL is CC BY 4.0).

Per-row provenance varies:

- DISL rows: derived from Etherscan public-verified contracts. The dataset (and DISL itself) are CC BY 4.0; the underlying contract source has whatever license the deployer chose, which is often empty / "unspecified" / all-rights-reserved by default. Suitable for research and adapter-style CPT, not necessarily safe for redistribution-as-source.
- GitHub blue-chip rows: each repo has its own SPDX header. Most are MIT or Apache-2.0; some Uniswap pieces are BUSL/GPL. Inspect the `path` field and the upstream repo if license matters for your use case.

If you need stronger commercial-use guarantees, restrict to rows where:

- `source` starts with `github/`, AND
- `text` contains an explicit MIT or Apache-2.0 SPDX line near the top.
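A simple way to apply that restriction with the `datasets` library; the SPDX check here is a substring heuristic over the top of each file, not a full license audit:

```python
from datasets import load_dataset

ds = load_dataset("samscrack/solidity-cpt-top10-quality", split="train")

def permissive_github(row):
    # Keep GitHub blue-chip rows whose header declares MIT or Apache-2.0.
    head = row["text"][:400]  # SPDX line is conventionally near the top
    return row["source"].startswith("github/") and (
        "SPDX-License-Identifier: MIT" in head
        or "SPDX-License-Identifier: Apache-2.0" in head
    )

permissive = ds.filter(permissive_github)
print(len(permissive), "rows with permissive SPDX headers")
```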
## Citation

```bibtex
@misc{solidity_cpt_top10_2026,
  title  = {Solidity CPT Top-10\% Quality-Filtered Corpus},
  author = {samscrack},
  year   = {2026},
  url    = {https://huggingface.co/datasets/samscrack/solidity-cpt-top10-quality}
}
```

If you use this data, please also cite the upstream sources:

```bibtex
@misc{disl_2024,
  title  = {DISL: Fueling Research with A Large Dataset of Solidity Smart Contracts},
  author = {Storhaug, Andreas and others},
  year   = {2024},
  url    = {https://arxiv.org/abs/2403.16861}
}
```
## Acknowledgements

- ASSERT-KTH for releasing DISL
- The OpenZeppelin, Uniswap, Aave, Compound, Morpho, EigenLayer, Pendle, Solady, Seaport, LayerZero, ENS, Optimism, Arbitrum, and Polygon teams for keeping their protocol code public and auditable
- The Foundry team for forge-std (used as the `>=0.8.13` floor for pragma filtering)
## Changelog

- 2026-05-03: initial release; 23,471 rows, ~86M tokens, top 10% by composite quality score