---
language:
  - en
pretty_name: CLIFT
size_categories:
  - 5K<n<10K
task_categories:
  - text-generation
tags:
  - benchmark
  - evaluation
  - synthetic
  - reasoning
  - in-context-learning
  - transfer-learning
  - structured-output
---

# CLIFT: Contextual Learning across Inference, Format, and Transfer

A structured benchmark of 5,160 synthetic instances that stress-test whether models learn latent rules from context.



## At a glance

| | |
|---|---|
| **Instances** | 5,160 |
| **Design** | Full factorial over task × format × application × difficulty, with 10 i.i.d. draws per cell |
| **Seed** | 42 |
| **Language** | English prompts and instructions |
| **Modality** | Text (completion-style prompt → string target) |

## Load in Python

```python
from datasets import load_dataset

ds = load_dataset("longarmd/CLIFT", split="train")
# Each row: prompt, target, task, format, application, difficulty, latent_structure, ...
```

If the Hub loader is misconfigured, download the JSONL from the Files tab and stream it with `json.loads` per line; the schema matches the fields table below.
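The fallback can be sketched as a small line-by-line reader. A minimal sketch — the filename `clift.jsonl` is an assumption; check the Files tab for the actual name:

```python
import json

def iter_jsonl(path):
    """Yield one dict per non-empty line of a JSONL file."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                yield json.loads(line)

# Hypothetical usage (filename is an assumption):
# for row in iter_jsonl("clift.jsonl"):
#     print(row["task"], row["difficulty"])
```

Streaming one line at a time keeps memory flat even if the file grows across snapshots.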


## Why CLIFT?

Modern LMs are often evaluated on fixed formats or single-shot skills. CLIFT targets a narrower but critical capability: contextual learning—inferring a hidden structure from the prompt (examples, traces, or specs), then answering under controlled variation along three axes:

  1. **Inference**: What must be learned (lookup rules, algorithms, spatial transforms, small dynamical systems, …).
  2. **Format**: How that knowledge is presented (demonstrations, natural language, execution traces, formal specs).
  3. **Transfer / application**: How the model must use it (forward prediction, inverse reasoning, articulation, OOD probes, planning, structural probes; task-dependent subsets).

Together, these axes yield a dense evaluation matrix suited for diagnostics, ablations, and comparing training or prompting strategies—not a single leaderboard score in isolation.


## Task families

Instances are grouped into four families spanning 10 canonical tasks:

| Family | Tasks |
|---|---|
| Functional mappings | `lookup_table`, `arithmetic_rule`, `conditional_rule` |
| Algorithmic | `insertion_sort`, `max_subarray`, `binary_search`, `naive_string_matcher` |
| Spatial | `spatial_translation` |
| Dynamic structures | `affine_dynamics_2d`, `register_machine_2d` |

**Formats** (all tasks use this set where applicable): `demonstration`, `natural_language`, `trace`, `formal_spec`.

**Difficulty**: integer levels 1, 2, 3 (structure complexity scales with level).

**Application** varies by task (e.g. the affine and register tasks use dedicated OOD-suffixed probes). The shipped matrix matches the open-source generator defaults in `clift.common`.
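Selecting one cell of the task × format × difficulty matrix is a simple filter once rows are loaded as dicts. A minimal sketch on toy rows mimicking the schema — the helper name `slice_rows` and the toy values are illustrative assumptions, not part of the dataset:

```python
from collections import Counter

def slice_rows(rows, **criteria):
    """Return rows matching every given field=value criterion (illustrative helper)."""
    return [r for r in rows if all(r.get(k) == v for k, v in criteria.items())]

# Toy rows mimicking the schema (values are assumptions for illustration):
rows = [
    {"task": "lookup_table", "format": "trace", "difficulty": 2},
    {"task": "lookup_table", "format": "demonstration", "difficulty": 2},
    {"task": "binary_search", "format": "trace", "difficulty": 1},
]
cell = slice_rows(rows, task="lookup_table", difficulty=2)
print(Counter(r["format"] for r in cell))
```

With the real dataset the same criteria can be passed to `ds.filter` from the `datasets` library.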


## Dataset structure

Data are distributed as JSONL: one JSON object per line, Hugging Face–friendly and line-diffable. A companion manifest.json (in the source repo) records generator kwargs, expected row count, and a SHA-256 over the canonical payload for integrity checks.

### Fields (per instance)

| Field | Type | Description |
|---|---|---|
| `instance_id` | int | Stable index within the snapshot |
| `task` | string | One of the 10 canonical task names |
| `format` | string | Presentation format |
| `application` | string | Probe / application axis |
| `difficulty` | int | 1–3 |
| `prompt` | string | Model input (completion-style) |
| `target` | string | Reference answer (exact match is the primary check) |
| `latent_structure` | object | Gold structure for analysis & tooling (not shown to the model) |
| `instruct` | bool | Whether instruct/chat-style export was used |
| `messages` | array (optional) | OpenAI-style chat turns, when enabled at export |
| `metadata` | object (optional) | Extra fields when present |

> **Note:** `latent_structure` is intentionally included for research and scoring pipelines. Treat it as held-out supervision for training: do not condition generation on it unless your experimental design explicitly allows it.
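Since exact match on `target` is the primary check, scoring reduces to string equality over predictions. A minimal sketch — the whitespace stripping is an assumption (a mild lenience), not something the card specifies:

```python
def exact_match(pred, target):
    """Primary check: string equality. Stripping outer whitespace is an assumption."""
    return pred.strip() == target.strip()

def score(preds, rows):
    """Fraction of predictions that exactly match the reference target."""
    hits = sum(exact_match(p, r["target"]) for p, r in zip(preds, rows))
    return hits / len(rows)
```

Because the design is a full factorial, per-cell accuracies (grouping by task, format, application, and difficulty) are usually more informative than this single aggregate.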