Schema

The manifest has 17 columns. experiment_id, run_id, artifact_type, visualizer_type, artifact_group, and parent_artifact are null in every row, and size_bytes is -1 throughout; the remaining columns are strings:

  • dataset_name (7 distinct values)
  • script_name (7 distinct values)
  • model (3 distinct values)
  • hyperparameters — JSON object (6 distinct values)
  • input_datasets — JSON list of source artifacts (3 distinct values)
  • description (7 distinct values)
  • tags — JSON list (7 distinct values)
  • custom_metadata — JSON object (7 distinct values)
  • created — ISO-8601 timestamp (7 distinct values)
  • updated — ISO-8601 timestamp (7 distinct values)
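
The hyperparameters, input_datasets, tags, and custom_metadata columns carry JSON payloads serialized as strings. A minimal sketch of decoding a row into native Python objects — that these four columns always parse as JSON is an assumption based on the values shown in the rows below:

import json

from datasets import load_dataset

# Columns stored as strings but carrying JSON payloads (assumed from the
# example rows; adjust if the schema changes).
JSON_COLUMNS = ["hyperparameters", "input_datasets", "tags", "custom_metadata"]

manifest = load_dataset("latkes/RACA-PROJECT-MANIFEST", split="train")

def decode_row(row):
    """Return a copy of a manifest row with its JSON-string fields parsed."""
    decoded = dict(row)
    for col in JSON_COLUMNS:
        if decoded.get(col):  # guard against null or empty fields
            decoded[col] = json.loads(decoded[col])
    return decoded

row = decode_row(manifest[0])
print(row["dataset_name"], row["tags"])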
Rows

inside-out-replication-canary-v1
  script_name: run_canary.py
  model: meta-llama/Meta-Llama-3-8B-Instruct
  hyperparameters: {"n_questions": 5, "n_samples": 50, "temperature": 1.0, "judge_model": "Qwen/Qwen2.5-14B-Instruct"}
  input_datasets: []
  description: Canary run: 5 questions per relation, 50 samples, Llama-3-8B. Full pipeline E2E test.
  tags: ["inside-out-replication", "canary", "factual-knowledge"]
  custom_metadata: {"experiment_name": "inside-out-replication", "job_id": "mll:26166", "cluster": "mll", "artifact_status": "final", "canary": true}
  created: 2026-04-09T03:48:09.928223+00:00
  updated: 2026-04-09T03:48:09.928223+00:00
inside-out-replication-results-v1
  script_name: full pipeline (02-09)
  model: Llama-3-8B, Mistral-7B-v0.3, Gemma-2-9B
  hyperparameters: {"n_samples": 1000, "temperature": 1.0, "judge_model": "Qwen/Qwen2.5-14B-Instruct", "max_tokens": 64}
  input_datasets: []
  description: Full Inside-Out replication: 3 models × 4 relations × 450 test questions × 1000 samples. Includes P(a|q), P_norm, P(True) V0/V1/V2, and probe scores.
  tags: ["inside-out-replication", "factual-knowledge", "hidden-states", "probe"]
  custom_metadata: {"experiment_name": "inside-out-replication", "cluster": "mll", "artifact_status": "final", "canary": false}
  created: 2026-04-10T14:01:35.430354+00:00
  updated: 2026-04-10T14:01:35.430354+00:00
self-consistency-correction-exp0-sanity-canary
  script_name: exp0_sanity.py
  model: Qwen/Qwen2.5-7B-Instruct, meta-llama/Llama-3.1-8B-Instruct
  hyperparameters: {"dtype": "bfloat16", "prompt_template": "Q: {question} A: ", "validator_suffix": ". Is this correct? Yes or No? Answer:"}
  input_datasets: []
  description: Exp 0 sanity check (paper §7.1): raw and PMI-corrected generator scores vs validator scores on 8 hand-crafted cases, 2 local models.
  tags: ["self-consistency-correction", "canary", "exp0", "sanity-check"]
  custom_metadata: {"experiment_name": "self-consistency-correction", "job_id": "local:dgx-spark:exp0", "cluster": "local-dgx-spark", "artifact_status": "final", "canary": true}
  created: 2026-04-13T05:04:47.094818+00:00
  updated: 2026-04-13T05:04:47.094818+00:00
self-consistency-correction-exp5-diagnostic
  script_name: exp5_diagnostic.py
  model: Qwen/Qwen2.5-7B-Instruct, meta-llama/Llama-3.1-8B-Instruct
  hyperparameters: {"dtype": "bfloat16", "prompt_template": "Q: {question} A: ", "validator_suffix": ". Is this correct? Yes or No? Answer:"}
  input_datasets: []
  description: Exp 5 Liechtenstein vs UK diagnostic (paper §7.6): raw, PMI, and exact-cluster generator scores vs validator scores on 6 hand-crafted cases with known equivalence classes.
  tags: ["self-consistency-correction", "exp5", "diagnostic", "cluster-correction"]
  custom_metadata: {"experiment_name": "self-consistency-correction", "job_id": "local:dgx-spark:exp5", "cluster": "local-dgx-spark", "artifact_status": "final", "canary": false}
  created: 2026-04-13T05:04:54.127250+00:00
  updated: 2026-04-13T05:04:54.127250+00:00
self-consistency-correction-exp23-correlation-discrimination
  script_name: exp23_combined.py
  model: Qwen/Qwen2.5-7B-Instruct, meta-llama/Llama-3.1-8B-Instruct
  hyperparameters: {"dtype": "bfloat16", "num_cases": 20, "validator_suffix": ". Is this correct? Yes or No? Answer:"}
  input_datasets: []
  description: Experiments 2 & 3 combined (paper §7.3, §7.4): raw, PMI, negative-prompt, verbalization-prompt, and exact-cluster scores on 20 hand-crafted paraphrase-equivalence cases, 2 local models. Measures Spearman(score, V') and AUROC(score, is_correct_class).
  tags: ["self-consistency-correction", "exp2", "exp3", "correlation", "discrimination", "auroc"]
  custom_metadata: {"experiment_name": "self-consistency-correction", "job_id": "local:dgx-spark:exp23", "cluster": "local-dgx-spark", "artifact_status": "final", "canary": false}
  created: 2026-04-13T05:13:48.317056+00:00
  updated: 2026-04-13T05:13:48.317056+00:00
self-consistency-correction-exp1-assumptions
  script_name: exp1_assumptions.py
  model: Qwen/Qwen2.5-7B-Instruct, meta-llama/Llama-3.1-8B-Instruct
  hyperparameters: {"num_cases": 20}
  input_datasets: ["latkes/self-consistency-correction-exp23-correlation-discrimination"]
  description: Experiment 1 (paper §7.2): empirical tests of Assumption 2 (V' constant across paraphrases) and Assumption 3 (cluster score equals validator log-odds). Derived from the exp23 artifact, no new inference.
  tags: ["self-consistency-correction", "exp1", "assumption-tests"]
  custom_metadata: {"experiment_name": "self-consistency-correction", "job_id": "local:dgx-spark:exp1", "cluster": "local-dgx-spark", "artifact_status": "final", "canary": false}
  created: 2026-04-13T05:13:55.348889+00:00
  updated: 2026-04-13T05:13:55.348889+00:00
self-consistency-correction-ambigqa
  script_name: exp_ambigqa.py
  model: Qwen/Qwen2.5-7B-Instruct, meta-llama/Llama-3.1-8B-Instruct
  hyperparameters: {"dtype": "bfloat16", "num_cases": 10, "validator_suffix": ". Is this correct? Yes or No? Answer:"}
  input_datasets: ["juand-r/rankalign#longform:data/ambigqa/with_negatives"]
  description: 5-score self-consistency-correction comparison on 10 hand-clustered AmbigQA questions from the rankalign longform branch.
  tags: ["self-consistency-correction", "ambigqa", "rankalign", "generator-validator-gap", "self-consistency"]
  custom_metadata: {"experiment_name": "self-consistency-correction", "job_id": "local:dgx-spark:ambigqa", "cluster": "local-dgx-spark", "artifact_status": "final", "canary": false}
  created: 2026-04-13T14:50:30.312276+00:00
  updated: 2026-04-13T14:50:30.312276+00:00
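
Provenance: an entry's input_datasets field records the artifacts it was derived from; for example, the exp1-assumptions row above points at the exp23 artifact and required no new inference. A minimal sketch of walking that chain, assuming in-manifest references take the form "latkes/<dataset_name>" (external pins such as the rankalign reference are skipped):

import json

from datasets import load_dataset

manifest = load_dataset("latkes/RACA-PROJECT-MANIFEST", split="train")
by_name = {row["dataset_name"]: row for row in manifest}

def lineage(name):
    """Follow input_datasets references back to root artifacts."""
    chain = [name]
    for ref in json.loads(by_name[name]["input_datasets"] or "[]"):
        local = ref.split("/", 1)[-1]  # strip the "latkes/" org prefix
        if local in by_name:           # external pins (repo#branch:path) are skipped
            chain += lineage(local)
    return chain

print(lineage("self-consistency-correction-exp1-assumptions"))
# ['self-consistency-correction-exp1-assumptions',
#  'self-consistency-correction-exp23-correlation-discrimination']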

RACA-PROJECT-MANIFEST

Central registry of all datasets in the latkes organization.

  • Total Datasets Tracked: 7
  • Last Updated: 2026-04-13T14:50:31.814273+00:00

Usage

from datasets import load_dataset

manifest = load_dataset("latkes/RACA-PROJECT-MANIFEST", split="train")
print(f"Tracking {len(manifest)} datasets")