
Memoriant, Inc. gates this benchmark with auto-approval. Login and contact sharing are required, but access is granted automatically upon request.

By requesting access you acknowledge:

  1. This is a 46-question preview, NOT a full evaluation.
  2. You will not publish benchmark scores using only v1.
  3. You will not base production deployment decisions on this small sample.
  4. You will always review AI output before using it for compliance work.


CMMC Benchmark v1 Preview — Q2 2026

Version: 2026-q2
Tier: v1 Preview (46 questions)
Purpose: Methodology sample — NOT a real evaluation
Valid through: June 30, 2026
Next release: July 1, 2026 (Q3 2026)
License: CC-BY-4.0
Publisher: Memoriant, Inc.


What This Is

A 46-question preview of the Memoriant Industrial Benchmark methodology. This is NOT a full evaluation — it's a small sample intended to show you how Memoriant structures compliance AI testing.

v1 is the smallest tier in the Memoriant benchmark family. It exists to give researchers, AI builders, and compliance teams a quick taste of the benchmark methodology before they commit to running the full evaluation.

Do not mistake v1 for a credible AI evaluation tool. 46 questions cannot validate a model for production use in any regulated industry. This preview is educational, not evaluative.


⚠️ AI Safety Disclaimer — Always Review Output

AI systems make mistakes. Always review AI-generated output before using it for any purpose.

This benchmark is used to evaluate AI systems that generate compliance guidance. Even AI systems that score highly on this preview benchmark can produce:

  • Factually incorrect information — even with high benchmark scores
  • Hallucinated citations — references to regulations, controls, or documents that do not exist
  • Outdated guidance — AI knowledge reflects training cutoff, not current regulations
  • Confident errors — AI often states wrong information with the same confidence as correct information
  • Plausible-sounding fabrications — responses that read like expert advice but are invented

Before using any AI output for:

  • Compliance documentation (SSPs, POA&Ms, audit responses)
  • Regulatory submissions to DoD, NIST, or other agencies
  • Internal policy or procedure creation
  • Assessment preparation or C3PAO engagements
  • Legal or contractual decisions

You must:

  1. Have a qualified human review every output
  2. Verify citations independently against authoritative sources
  3. Cross-check against NIST publications, DoD guidance, Federal Register
  4. Document the review process for audit purposes
  5. Never submit AI output directly — AI drafts are starting points, not finished products

Memoriant's position: AI is a force multiplier for compliance professionals, not a replacement. The human stays accountable. The AI accelerates the work.


What You Should NOT Do With v1

  • Do not draw conclusions about model quality from this sample. 46 questions cannot validate a model for production compliance use. Statistical significance requires the full benchmark.
  • Do not publish benchmark scores using only v1. Scoring a model on v1 alone produces noise, not signal. It is not a credible evaluation for marketing or research claims.
  • Do not use this to decide whether to deploy compliance AI. You will miss critical failure modes that only show up in larger evaluation sets.
  • Do not cite v1 as "the CMMC benchmark" — it is explicitly labeled a preview for a reason.

What v1 Is For

  • Understanding the Memoriant benchmark methodology
  • Quick sanity check during early model development
  • Educational use in AI research contexts
  • Demonstrating what "compliance AI evaluation" looks like
  • Testing benchmark harness integration before committing to larger evaluations

⚠️ Version Expiration

Valid through: June 30, 2026
Next release: July 1, 2026 (Q3 2026)

This benchmark is dated. CMMC regulations, DFARS clauses, and NIST publications update continuously. Testing AI systems against a frozen benchmark produces false confidence — models that pass today's questions may fail tomorrow's.

If you are using this benchmark after the expiration date, your evaluation is incomplete. The Memoriant benchmark family is refreshed quarterly to incorporate:

  • New DFARS clauses and amendments
  • NIST SP 800-171/172 revisions and errata
  • CMMC Program Office guidance updates
  • New attack patterns and compliance failure modes
  • Emerging regulatory questions from the field

To get the latest version: Follow memoriant on HuggingFace, or subscribe to release notifications at memoriant.ai.
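
The expiration date above can be enforced in an evaluation harness with a simple guard. A minimal sketch: the VALID_THROUGH date comes from this card, but the guard itself is an illustrative convention, not part of the official dataset.

```python
from datetime import date

# Validity window taken from this card (2026-q2 preview).
VALID_THROUGH = date(2026, 6, 30)

def benchmark_is_current(today=None):
    """Return True while the 2026-q2 preview is still within its window."""
    today = today or date.today()
    return today <= VALID_THROUGH

# Refuse to report scores from an expired benchmark version.
if not benchmark_is_current():
    print("Warning: the 2026-q2 preview has expired; fetch the latest release.")
```

A harness could fail hard instead of warning; either way, the point is to make "testing against a frozen benchmark" an explicit, visible choice rather than a silent default.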


For Real Evaluation

If you need to actually evaluate a compliance AI system, use one of these instead:

Benchmark           Purpose                                                     Questions
v1 Preview (this)   Methodology sample — not for validation                     46
v2 Spot Check       Triage tool — eliminates obvious failures                   454
v3 Comprehensive    The authoritative standard — use this for real evaluation   1,273

If your AI is going into a defense compliance workflow, v3 is the only credible evaluation.


Usage

from datasets import load_dataset

# Requires a Hugging Face login and auto-approved access
dataset = load_dataset("memoriant/cmmc-benchmark-v1-preview-2026-q2")
test = dataset["test"]  # 46 questions

# Example record structure:
# {
#   "id": "T1-Q01",
#   "tier": 1,
#   "category": "Factual Recall",
#   "question": "What are the 14 control families in NIST SP 800-171?",
#   "expected_keywords": [...]
# }
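
Given the record schema above, scoring can be sketched as a keyword match. This is an assumption for illustration only: the scoring rule (fraction of expected keywords present, case-insensitive) and the sample keywords are hypothetical, not the official Memoriant grader.

```python
def keyword_score(response, expected_keywords):
    """Fraction of expected keywords found in the response (case-insensitive)."""
    if not expected_keywords:
        return 0.0
    text = response.lower()
    hits = sum(1 for kw in expected_keywords if kw.lower() in text)
    return hits / len(expected_keywords)

# Hypothetical record mirroring the structure shown above; the keyword
# list is invented for this sketch, not taken from the dataset.
record = {
    "id": "T1-Q01",
    "tier": 1,
    "category": "Factual Recall",
    "question": "What are the 14 control families in NIST SP 800-171?",
    "expected_keywords": ["Access Control", "Awareness and Training"],
}

model_answer = "The families include Access Control and Awareness and Training."
score = keyword_score(model_answer, record["expected_keywords"])
```

Even a perfect keyword score says nothing about hallucinated citations or confident errors, which is exactly why the disclaimer above requires human review.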

Coverage (Limited)

This preview covers a small sample across:

  • Factual recall (control family identification)
  • Basic control lookups
  • Framework identification (CMMC Levels 1, 2, 3)
  • Hallucination traps (Level 4/5 refusal)

It does NOT cover:

  • Document generation (POA&M, SSP drafting)
  • Gap analysis scenarios
  • Cross-framework mapping
  • Assessment guidance
  • Professional judgment testing
  • Consistency testing
  • Regulatory update awareness

For full coverage, use v3 comprehensive.


Citation

@dataset{memoriant-cmmc-benchmark-v1-preview-2026-q2,
  author = {Maine, Nathan},
  title = {CMMC Benchmark v1 Preview Q2 2026},
  year = {2026},
  month = {April},
  publisher = {Memoriant, Inc.},
  url = {https://huggingface.co/datasets/memoriant/cmmc-benchmark-v1-preview-2026-q2},
  note = {Preview sample. For real evaluation use v3 comprehensive.}
}


Changelog

2026-q2 (Current)

  • Rebranded with quarterly versioning and "preview" tier labeling
  • Converted to auto-gated access with responsible use acknowledgement
  • Added AI safety disclaimer
  • Added expiration and refresh messaging
  • Repositioned as methodology sample, not evaluation tool
  • Preserved history from predecessor (memoriant/cmmc-compliance-benchmark)

Future Releases

  • 2026-q3 (July 1, 2026)
  • 2026-q4 (October 1, 2026)

Contact

Memoriant, Inc.


Published by Memoriant, Inc. — For real compliance AI evaluation, use v3 comprehensive.
