---
license: cc-by-sa-4.0
task_categories:
  - text-generation
language:
  - en
tags:
  - base64
  - encoding
  - decoding
  - benchmark
  - evaluation
pretty_name: B64Bench
size_categories:
  - 1K<n<10K
---

# B64Bench

A benchmark dataset for evaluating base64 decoding capabilities of language models.

## Task

Given a base64-encoded string, decode it to recover the original plaintext.

Examples span multiple recursion depths (1-5), meaning the plaintext may itself be a base64-encoded string that requires further decoding.

## Dataset Summary

| Split | Examples | Description |
|---|---|---|
| `test` | 2,999 | Primary evaluation split |
| `val` | 816 | Development / prompt tuning |
| `stress` | 996 | Corruption stress testing |
| `few_shot` | 20 | Curated few-shot prompting pool (separate config) |

Total: 4,831 examples across 4 splits. Evaluation-only — no training split.

## Loading

```python
from datasets import load_dataset

# Main splits (test, val, stress)
ds = load_dataset("thepushkarp/b64bench")

# Few-shot pool (separate schema)
fs = load_dataset("thepushkarp/b64bench", "few_shot", split="train")
```

## Data Types

Examples are drawn from 11 source text types across three tiers:

### Tier 1 — Primary (~53% of test split)

Anti-memorization data types using random strings.

| Type | Description |
|---|---|
| `random_ascii` | Random printable ASCII characters (chr 32-126) |
| `jwt_length_random` | Random printable ASCII at JWT-typical lengths (120-140 chars) |
| `random_alnum` | Alphanumeric characters only (a-z, A-Z, 0-9) |

### Tier 2 — Secondary (~25% of test split)

Real-world-like data types for ecological validity.

| Type | Description |
|---|---|
| `english_sentence` | Synthetic English sentences from a compositional grammar |
| `json_object` | Small JSON objects (`{"key":"value","count":42}`) |
| `shell_command` | Sanitized CLI commands (`git commit -m 'fix bug'`) |
| `uuid` | UUID v4 format strings (always 36 chars) |

### Tier 3 — Edge Cases (~22% of test split)

Structural probes testing specific encoding properties.

| Type | Description |
|---|---|
| `probe_special_chars` | Mixed punctuation and special characters |
| `probe_minimal` | Minimal-length strings (1-3 chars) |
| `probe_boundary` | Strings at specific lengths for padding-pattern coverage |
| `probe_high_entropy` | Strings producing a high frequency of `+` and `/` in the base64 output |

## Recursion Depth

Each example is base64-encoded at a specified recursion depth:

| Depth | Description | % of test |
|---|---|---|
| 1 | Single encode: `text -> base64` | ~27% |
| 2 | Double encode: `base64(base64(text))` | ~24% |
| 3 | Triple encode | ~20% |
| 4 | Quadruple encode | ~16% |
| 5 | Quintuple encode | ~13% |

Higher depths produce longer base64 strings and require multi-step decoding. A depth-length feasibility constraint caps the final base64 output at ~1024 characters.
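An oracle decoder for this task just peels layers in a loop. A minimal sketch (the `decode_to_depth` helper is ours for illustration, not part of the dataset tooling):

```python
import base64

def decode_to_depth(b64_text: str, depth: int) -> str:
    """Peel off `depth` layers of base64 encoding."""
    data = b64_text.encode("ascii")
    for _ in range(depth):
        data = base64.b64decode(data)
    return data.decode("utf-8")

# Build a depth-2 example, then decode it back
double = base64.b64encode(base64.b64encode(b"hello")).decode("ascii")
print(decode_to_depth(double, 2))  # hello
```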

## Corruptions

The `surface_b64` field may contain a corrupted version of `canonical_b64`. Corrupted examples link to their clean parent via `parent_example_id`.

| Corruption | What It Does | Recoverable? |
|---|---|---|
| `none` | Clean, valid base64 | Yes |
| `missing_padding` | Trailing `=` chars stripped | Yes (re-pad) |
| `whitespace` | Random space/newline/tab inserted | Yes (strip) |
| `base64url` | `+` -> `-`, `/` -> `_`, optional padding strip | Yes (translate) |
| `substitution` | One non-padding char replaced with a different base64 char | No |
| `illegal_char` | One char replaced with an illegal character (`@#$%^&*?`) | No |
| `truncation` | 1-3 chars removed from the end | No |
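The three recoverable corruptions can be undone with standard normalization before decoding. A sketch (the `normalize_b64` helper is illustrative, not the dataset's own tooling):

```python
import base64
import re

def normalize_b64(s: str) -> str:
    """Undo the recoverable corruptions: strip whitespace,
    translate the base64url alphabet, and restore padding."""
    s = re.sub(r"\s+", "", s)                  # whitespace corruption
    s = s.replace("-", "+").replace("_", "/")  # base64url corruption
    return s + "=" * (-len(s) % 4)             # missing_padding corruption

corrupted = "aGVs\nbG8"  # newline inserted, padding stripped
print(base64.b64decode(normalize_b64(corrupted)).decode())  # hello
```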

## Schema

Each example has 27 fields:

| Field | Type | Description |
|---|---|---|
| `example_id` | string | Unique identifier (`test-00000042`) |
| `parent_example_id` | string or null | Links a corrupted example to its clean parent |
| `split` | string | `test`, `val`, or `stress` |
| `schema_version` | string | `1.0.0` |
| `root_seed` | int | Global seed for reproducibility |
| `example_seed` | int | Per-example seed (blake2b-derived) |
| `data_type` | string | Source text type (see Data Types) |
| `data_tier` | string | `primary`, `secondary`, or `edge_case` |
| `source_url` | string or null | Provenance URL (null for synthetic) |
| `source_license` | string | `CC0` |
| `source_text` | string | Original plaintext |
| `source_length_chars` | int | Character count of source |
| `source_length_bytes` | int | UTF-8 byte count of source |
| `recursion_depth` | int | Number of base64 encoding layers (1-5) |
| `intermediate_b64_levels` | list[string] | All encoding layers |
| `canonical_b64` | string | Final (outermost) base64 string |
| `surface_b64` | string | What the model sees (may be corrupted) |
| `b64_length` | int | Length of `canonical_b64` |
| `b64_num_blocks` | int | Number of 4-char base64 blocks |
| `b64_padding_chars` | int | Count of `=` in `canonical_b64` (0, 1, or 2) |
| `b64_has_plus` | bool | Whether `canonical_b64` contains `+` |
| `b64_has_slash` | bool | Whether `canonical_b64` contains `/` |
| `corruption_name` | string | Corruption type applied (see Corruptions) |
| `corruption_params` | dict | Corruption-specific parameters |
| `normalization_recoverable` | bool | Can standard normalization recover the original? |
| `original_recoverable` | bool | Is the original text recoverable at all? |
| `target_text` | string | Ground-truth decoded output |
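The `b64_*` metadata fields can be recomputed directly from `canonical_b64`. For instance (illustrative, not the generator itself):

```python
import base64

canonical_b64 = base64.b64encode("hello".encode("utf-8")).decode("ascii")  # "aGVsbG8="

b64_length = len(canonical_b64)               # total characters, always a multiple of 4
b64_num_blocks = b64_length // 4              # 4-char base64 blocks
b64_padding_chars = canonical_b64.count("=")  # 0, 1, or 2
b64_has_plus = "+" in canonical_b64
b64_has_slash = "/" in canonical_b64
```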

## Generation

The dataset is generated deterministically from a root seed (default: 42) using blake2b-derived per-example seeds. This makes generation order-independent and reproducible.
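A blake2b-based seed derivation could look like the following sketch; the exact key material the generator hashes is not specified here, so treat the scheme as an assumption:

```python
import hashlib

def example_seed(root_seed: int, example_id: str) -> int:
    """Hypothetical blake2b derivation: hash the root seed and
    example id into a stable 64-bit integer."""
    h = hashlib.blake2b(f"{root_seed}:{example_id}".encode(), digest_size=8)
    return int.from_bytes(h.digest(), "big")

# Deterministic and order-independent: the seed depends only on its inputs
print(example_seed(42, "test-00000042") == example_seed(42, "test-00000042"))  # True
```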

To regenerate:

```shell
uv run python scripts/generate_dataset.py --seed 42 --validate --summary
```

## Canary

This dataset contains the canary string `CANARY_b64bench_2026_DO_NOT_TRAIN` in its metadata. It is intended for evaluation only and should not be included in language model training data.

## Citation

```bibtex
@dataset{b64bench2026,
  title={B64Bench: A Base64 Decoding Benchmark},
  author={Pushkar Patel},
  year={2026},
  url={https://huggingface.co/datasets/thepushkarp/b64bench}
}
```

## License

CC-BY-SA-4.0