---
license: apache-2.0
language:
- en
tags:
- rag
- code-retrieval
- verified-ai
- constitutional-halt
- bft-consensus
- aevion
size_categories:
- 10K<n<100K
task_categories:
- question-answering
- document-retrieval
---

# Aevion Codebase RAG Benchmark

A **verified structured-retrieval benchmark** extracted from a real Python codebase (968 source files, 21,149 chunks), with cryptographically signed partition proofs.

## What's in this dataset

| File | Description |
|------|-------------|
| `codebase_corpus.jsonl` | 21,149 Python code chunks with 6-field structural metadata |
| `codebase_queries.jsonl` | 300 enterprise query-decomposition pairs |
| `partition_proofs.jsonl` | XGML-signed proof bundles per query |
| `benchmark_results.csv` | Precision/recall/F1 per retrieval method (60 eval queries) |
| `benchmark_summary.json` | Aggregate metrics and auto-tuning parameters |
| `tuning_summary.json` | Grid-search results across 10K synthetic docs |

## Corpus Schema

Each chunk in `codebase_corpus.jsonl` has:

```json
{
  "doc_id": "chunk_000000",
  "text": "module.ClassName (path/to/file.py:20)",
  "layer": "core",
  "module": "verification",
  "function_type": "class",
  "keyword": "hash",
  "complexity": "simple",
  "has_docstring": "yes",
  "source_path": "core/python/...",
  "source_line": 20
}
```
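Records in this schema can be filtered with the standard library alone. A minimal sketch of structural filtering (the two inline records are illustrative, not drawn from the corpus):

```python
import json

# Two illustrative records following the corpus schema above
# (real records live in codebase_corpus.jsonl).
jsonl = """\
{"doc_id": "chunk_000000", "text": "verification.Hasher (core/hash.py:20)", "layer": "core", "module": "verification", "function_type": "class", "keyword": "hash", "complexity": "simple", "has_docstring": "yes", "source_path": "core/hash.py", "source_line": 20}
{"doc_id": "chunk_000001", "text": "api.serve (api/app.py:5)", "layer": "api", "module": "api", "function_type": "function", "keyword": "serve", "complexity": "simple", "has_docstring": "no", "source_path": "api/app.py", "source_line": 5}
"""

chunks = [json.loads(line) for line in jsonl.splitlines() if line.strip()]

# Structural filtering: restrict to documented classes in the core layer.
hits = [c for c in chunks
        if c["layer"] == "core"
        and c["function_type"] == "class"
        and c["has_docstring"] == "yes"]

print([c["doc_id"] for c in hits])  # -> ['chunk_000000']
```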

## Benchmark Results (60 eval queries)

| Method | Precision | Recall | F1 | Exact Match |
|--------|-----------|--------|----|-------------|
| naive | 0.516 | 0.657 | 0.425 | 11.7% |
| instructed | 1.000 | 0.385 | 0.463 | 23.3% |
| verified_structural | 1.000 | 0.385 | 0.463 | 23.3% |
| verified_consensus | 1.000 | 0.437 | 0.503 | 31.7% |
| **verified_structural_ensemble** | **1.000** | **0.459** | **0.527** | **33.3%** |

Key finding: structural + ensemble retrieval achieves **100% precision** (zero irrelevant chunks) vs. 51.6% for naive keyword search.

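As a reference for reading the table, per-query precision, recall, and F1 over retrieved vs. relevant chunk sets can be computed as below. How the 60 per-query scores are aggregated is an assumption here (macro-averaging is one common choice; the exact protocol lives in `benchmark_summary.json`):

```python
def precision_recall_f1(retrieved: set, relevant: set) -> tuple:
    """Standard set-based retrieval metrics for one query."""
    tp = len(retrieved & relevant)
    precision = tp / len(retrieved) if retrieved else 0.0
    recall = tp / len(relevant) if relevant else 0.0
    # F1 is the harmonic mean of precision and recall.
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# A query where every retrieved chunk is relevant but one is missed:
p, r, f1 = precision_recall_f1({"chunk_0"}, {"chunk_0", "chunk_1"})
print(p, r, round(f1, 3))  # -> 1.0 0.5 0.667
```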
## Method

1. **AST extraction**: Python files parsed with the `ast` module into class/function/method chunks
2. **6-field structural metadata**: layer, module, function_type, keyword, complexity, has_docstring
3. **Constitutional Halt labeling**: VarianceHaltMonitor (σ > 2.5× threshold) as an automatic quality gate
4. **XGML proof bundles**: Ed25519-signed proof chain on every partition plan
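Step 1 can be sketched with the standard `ast` module. This is a minimal illustration of the chunking idea, not the dataset's actual extractor; the chunk text format and field names merely mirror the schema above:

```python
import ast

SOURCE = '''
class Hasher:
    """Hashes things."""
    def digest(self, data):
        return hash(data)

def serve():
    pass
'''

def extract_chunks(source: str, path: str):
    """Yield one schema-like chunk per class/function definition."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.ClassDef, ast.FunctionDef, ast.AsyncFunctionDef)):
            yield {
                "text": f"{node.name} ({path}:{node.lineno})",
                "function_type": "class" if isinstance(node, ast.ClassDef) else "function",
                "has_docstring": "yes" if ast.get_docstring(node) else "no",
                "source_line": node.lineno,
            }

chunks = sorted(extract_chunks(SOURCE, "core/hash.py"),
                key=lambda c: c["source_line"])
print([c["text"] for c in chunks])
# -> ['Hasher (core/hash.py:2)', 'digest (core/hash.py:4)', 'serve (core/hash.py:7)']
```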

## Related

- [Aevion Verifiable AI](https://github.com/aevionai/aevion-verifiable-ai): source codebase
- Patent US 63/896,282: Variance Halt + Constitutional AI halts

## License

Apache 2.0. Free to use for research and commercial applications.