---
license: apache-2.0
---
# NPset
A normalized Python dataset for training small language models on code logic without the overhead of raw code syntax.
## Why
Feeding raw Python into small LMs is expensive — code-specific tokens and syntax overhead consume capacity that could go toward reasoning. NPset normalizes Python source via AST → TinyDSL (pseudocode), giving the model the logical structure of code in a much more compact, readable form.
Small models already carry semantic understanding of concepts like iteration, conditions, data flow, and function composition from pretraining on natural language. Raw code forces the model to bridge that understanding through unfamiliar syntax — brackets, colons, indentation rules, and language-specific idioms it may have seen rarely. TinyDSL closes that gap by expressing the same logic in a form that maps directly onto the model's existing semantic representations, letting it reason about what code does rather than spending capacity parsing what it looks like.
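The AST → TinyDSL step described above can be sketched with Python's standard `ast` module. The actual TinyDSL grammar is not specified in this card, so the output dialect below (`define … taking …`, `for each … in …`, `set … to …`) is a hypothetical illustration of the idea, not the dataset's real normalizer:

```python
import ast

def normalize(source: str) -> str:
    """Hypothetical sketch: parse Python, walk the AST, and emit
    compact English-like pseudocode instead of raw syntax. The real
    TinyDSL grammar used by NPset may differ."""
    tree = ast.parse(source)
    lines = []

    def emit(node, depth=0):
        pad = "  " * depth
        if isinstance(node, ast.FunctionDef):
            args = ", ".join(a.arg for a in node.args.args)
            lines.append(f"{pad}define {node.name} taking {args}")
            for child in node.body:
                emit(child, depth + 1)
        elif isinstance(node, ast.For):
            lines.append(f"{pad}for each {ast.unparse(node.target)} in {ast.unparse(node.iter)}")
            for child in node.body:
                emit(child, depth + 1)
        elif isinstance(node, ast.If):
            lines.append(f"{pad}if {ast.unparse(node.test)}")
            for child in node.body:
                emit(child, depth + 1)
        elif isinstance(node, ast.Return):
            val = ast.unparse(node.value) if node.value else ""
            lines.append(f"{pad}return {val}".rstrip())
        elif isinstance(node, ast.Assign):
            targets = ", ".join(ast.unparse(t) for t in node.targets)
            lines.append(f"{pad}set {targets} to {ast.unparse(node.value)}")
        else:
            # Fall back to Python's own round-trip for unhandled nodes.
            lines.append(f"{pad}{ast.unparse(node)}")

    for node in tree.body:
        emit(node)
    return "\n".join(lines)

print(normalize("""
def total(items):
    s = 0
    for x in items:
        s = s + x
    return s
"""))
# define total taking items
#   set s to 0
#   for each x in items
#     set s to s + x
#   return s
```

Even this toy version shows the point: brackets, colons, and indentation rules disappear, while the control flow and data flow survive intact.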
## Format
Parquet, shuffled. Each row:
| Field | Type | Description |
|---|---|---|
| `code` | string | TinyDSL-normalized Python |
| `original_language` | string | Always Python |
| `source` | string | Origin dataset identifier |
## Sources
| Source | Dataset | Notes |
|---|---|---|
| `nomic_cornstack_python_v1` | nomic-ai/cornstack-python-v1 | Real GitHub files, max 5M rows |
| `zaydzuhri_stack_edu_python` | zaydzuhri/stack-edu-python | `license_type=no_license` only, max 10M rows |
| `jtatman_500k` | jtatman/python-code-dataset-500k | |
| `iamtarun_python_18k_alpaca` | iamtarun/python_code_instructions_18k_alpaca | |
| `flytech_python_25k` | flytech/python-codes-25k | |
| `dbands_pythonMath` | dbands/pythonMath | |
| `greatdarklord_python_dataset` | greatdarklord/python_dataset | |