CAJAL-4B Model Card & Technical Schemas
Model Overview
| Attribute | Value |
|---|---|
| Model Name | CAJAL-4B |
| Repository | Agnuxo/CAJAL-4B |
| Base Architecture | LLaMA 2 (7B), distilled to 4B parameters |
| Quantizations | FP16 (f16), 8-bit (q8_0), 4-bit (q4_k_m) |
| Context Window | 4096 tokens |
| License | Apache 2.0 |
| Primary Use | Academic BFT consensus paper generation |
| Not for | Production blockchain deployment |
System Architecture
Data Flow
```mermaid
graph TD
    A[Topic Selection<br/>50 unique BFT topics] --> B[Simulation Engine<br/>Python code generation]
    B --> C[Code Execution<br/>Capture stdout]
    C --> D[Prompt Builder<br/>Code injection + proof rotation]
    D --> E[Section Generator<br/>7 sections, token budgets]
    E --> F[Paper Stitcher<br/>Validate: 7 sections, 2500+ words, 8+ refs]
    F --> G[Tribunal QA<br/>8 logic/psych/domain questions]
    G --> H[API: p2pclaw.com/publish-paper]
    H --> I[Score Waiter<br/>9–10 judges × 1–5 min]
    I --> J[Result: paper-XXXXXXX<br/>Score: 0–10]
```
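The Paper Stitcher's validation gate (7 sections, 2500+ words, 8+ references) can be sketched as a standalone check. The function and constant names below are illustrative, not taken from `harness.py`:

```python
# Sections every paper must contain, per the pipeline above.
REQUIRED_SECTIONS = [
    "abstract", "introduction", "methodology",
    "results", "discussion", "conclusion", "appendix",
]

def validate_paper(sections: dict, references: list) -> list:
    """Return a list of validation failures (empty list = paper passes)."""
    errors = []
    missing = [s for s in REQUIRED_SECTIONS if not sections.get(s, "").strip()]
    if missing:
        errors.append(f"missing sections: {missing}")
    word_count = sum(len(text.split()) for text in sections.values())
    if word_count < 2500:
        errors.append(f"only {word_count} words (need 2500+)")
    if len(set(references)) < 8:
        errors.append(f"only {len(set(references))} unique references (need 8+)")
    return errors
```

A paper that fails any check would be rejected before the Tribunal QA step.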
Token Budget per Section
```mermaid
pie title Token Distribution (total ≈ 9400 tokens)
    "Abstract (700)" : 7.4
    "Introduction (1400)" : 14.9
    "Methodology (2500)" : 26.6
    "Results (1400)" : 14.9
    "Discussion (2000)" : 21.3
    "Conclusion (800)" : 8.5
    "Appendix (600)" : 6.4
```
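The budgets above can be encoded as a lookup used for each section's generation call; the dictionary mirrors the chart, while the helper name is illustrative:

```python
# Token budget per section, matching the distribution above (total = 9400).
TOKEN_BUDGETS = {
    "abstract": 700,
    "introduction": 1400,
    "methodology": 2500,
    "results": 1400,
    "discussion": 2000,
    "conclusion": 800,
    "appendix": 600,
}

def budget_share(section: str) -> float:
    """Percentage of the total token budget allocated to one section."""
    total = sum(TOKEN_BUDGETS.values())
    return round(100 * TOKEN_BUDGETS[section] / total, 1)
```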
Harness Pipeline Schema
Class Diagram (simplified)
```
Harness (main)
└─ run_paper(model, topic, run_id)
   ├─ get_config(run_id)             → {n, f, lat_mean, lat_std}
   ├─ build_sim_code(cfg)            → Python code string
   ├─ run_sim(code)                  → {"Mean TPS": ..., "P99": ...}
   ├─ gen_section(...) ×7            → {abstract, intro, ...}
   │  └─ gen(model, prompt, system, num_predict)
   ├─ stitch_paper(title, sections, REFS)
   ├─ pass_tribunal(agent_id, topic) → clearance
   │  ├─ POST /tribunal/present      → questions
   │  └─ POST /tribunal/respond      → passed?
   ├─ publish(title, paper, agent_id, token)
   │  └─ POST /publish-paper (force: true on 409)
   └─ wait_score(pid, agent_id)      → granular_scores
```
API Endpoints (p2pclaw.com)
| Method | Endpoint | Purpose | Payload |
|---|---|---|---|
| POST | /tribunal/present | Register paper, get questions | {agentId, project_title, ...} |
| POST | /tribunal/respond | Submit answers | {session_id, answers: {qid: answer}} |
| POST | /publish-paper | Publish (supports force: true) | {title, content, author, tribunal_clearance} |
| GET | /latest-papers | Poll for scored paper | {id, granular_scores} |
Model Card Metadata (YAML Frontmatter)
```yaml
license: apache-2.0
license_link: https://opensource.org/licenses/Apache-2.0
datasets:
  - null
language:
  - en
library_name: llama.cpp
pipeline_tag: text-generation
tags:
  - bft
  - consensus
  - distributed-systems
  - research
  - quantized
  - 4b
  - cajal
  - paper-generation
  - academic
  - blockchain
  - byzantine-fault-tolerance
metrics:
  - rouge
  - bleu
  - mbleu
  - expert-review
```
File Structure on HuggingFace
```
Agnuxo/CAJAL-4B/
├── README.md                          # This Model Card
├── CAJAL-4B-f16.gguf                  # Full precision (~4.1 GB)
├── CAJAL-4B-q8_0.gguf                 # 8-bit (~2.1 GB)
├── CAJAL-4B-q4_k_m.gguf               # 4-bit (~1.1 GB)
├── harness.py                         # Production paper-generation script
├── harness_results.jsonl              # Raw results (36+ entries)
├── harness_best.json                  # Best paper (run 52, score 7.0)
├── harness_runXXX_YYYYMMDD_HHMMSS.md  # Example papers
├── docs/
│   ├── prompt_engineering.md          # Full prompt specs & skills
│   ├── skills.md                      # Code injection, proof rotation
│   └── results_summary.md             # Detailed score analysis
└── Modelfiles/                        # Ollama integration
    ├── Modelfile-f16
    ├── Modelfile-q8_0
    └── Modelfile-q4_k_m
```
Skills & Capabilities Matrix
| Capability | Implemented? | Evidence |
|---|---|---|
| Section generation (7) | ✅ | All runs produce 7 sections |
| Code presence | ✅ | Python block in every Methodology |
| Code execution (real) | ⚠️ | Captured output present but template-style |
| Formal proof | ✅ | Quorum intersection proof in Appendix |
| Statistical analysis | ✅ | CI, SE, P99, std dev discussion |
| References (≥8) | ✅ | 8–9 unique refs per paper |
| Novelty score (≥5) | ⚠️ | Range 4.5–5.8, needs diversity boost |
| Tribunal pass | ✅ | 100% after fixes (run 60+) |
| Published on p2pclaw | ✅ | 36 papers published so far |
| Target score ≥8 | ❌ | Best 7.0 (run 52), recent ~4–5 |
Gaps: Low vocabulary diversity, repetitive templates, code not "real" enough for top-tier scores.
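The vocabulary-diversity gap can be quantified with a simple type-token ratio over generated sections. This metric is a common proxy for lexical repetition, not something `harness.py` is documented to compute:

```python
import re

def type_token_ratio(text: str) -> float:
    """Unique words / total words; lower values indicate repetitive output."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return len(set(words)) / len(words)
```

Tracking this ratio across runs would make "low vocabulary diversity" a measurable regression target rather than a subjective judgment.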
Quick Comparison: Quantizations
| Metric | f16 (FP16) | q8_0 (8-bit) | q4_k_m (4-bit) |
|---|---|---|---|
| File size | ~4.1 GB | ~2.1 GB | ~1.1 GB |
| VRAM usage | ~8 GB | ~5 GB | ~3 GB |
| Quality (subj.) | ★★★★★ | ★★★★ | ★★★ |
| Speed (tokens/s) | ~25 | ~30 | ~35 |
| Best for | Highest quality, research | Balanced | Edge devices, fast |
Recommendation: Use q8_0 for best quality/size tradeoff; q4_k_m for GPUs < 6GB VRAM.
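The recommendation above reduces to a simple selection rule over available VRAM; the thresholds come from the comparison table and recommendation, while the helper itself is illustrative:

```python
def pick_quantization(vram_gb: float) -> str:
    """Pick the largest CAJAL-4B quantization that fits in VRAM."""
    if vram_gb >= 8:
        return "f16"      # ~8 GB VRAM, highest quality
    if vram_gb >= 6:
        return "q8_0"     # balanced quality/size tradeoff
    if vram_gb >= 3:
        return "q4_k_m"   # ~3 GB VRAM, edge devices
    raise ValueError("CAJAL-4B needs roughly 3 GB of VRAM or more")
```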
Integration Examples
Ollama Model File (Modelfile)
```
FROM ./CAJAL-4B-q8_0.gguf
SYSTEM "You are a formal scientific writer specializing in Byzantine Fault Tolerant consensus protocols."
TEMPLATE """[INST] {{ .Prompt }} [/INST]"""
PARAMETER temperature 0.42
PARAMETER top_p 0.88
PARAMETER repeat_penalty 1.35
PARAMETER num_ctx 4096
```
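Once the Modelfile is registered (`ollama create cajal-4b -f Modelfile-q8_0`), the model can be queried over Ollama's standard local HTTP API. The model name `cajal-4b` is an assumption; the `/api/generate` endpoint and `stream: false` flag are part of Ollama's documented API:

```python
import json
import urllib.request

def build_payload(prompt: str, model: str = "cajal-4b") -> dict:
    """Build a non-streaming generation request body for Ollama."""
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "cajal-4b",
             host: str = "http://localhost:11434") -> str:
    """Send one generation request to a local Ollama server."""
    req = urllib.request.Request(
        host + "/api/generate",
        data=json.dumps(build_payload(prompt, model)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["response"]
```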
LM Studio / GPT4All
Load the .gguf file directly: select "LLaMA" as the architecture, context 4096, temperature 0.42.
vLLM (via AWQ)
An AWQ conversion is needed first: `python -m awq import --model_path CAJAL-4B-q4_k_m.gguf`
GitHub Repository
All source code, including harness, Modelfiles, and analysis scripts:
π https://github.com/Agnuxo1/CAJAL
```
CAJAL/
├── outputs/CAJAL-4B/
│   ├── harness.py               ← Main production script
│   ├── harness_results.jsonl    ← Running results log
│   ├── harness_best.json        ← Best paper metadata
│   ├── publish_hf.py            ← This publication script
│   ├── docs/
│   │   ├── prompt_engineering.md
│   │   └── skills.md
│   └── models/gguf/
│       ├── CAJAL-4B-f16.gguf
│       ├── CAJAL-4B-q8_0.gguf
│       └── CAJAL-4B-q4_k_m.gguf
├── llama.cpp/                   ← For GGUF conversion
└── README.md                    ← Project overview
```
Citation & Acknowledgments
```bibtex
@software{Agnuxo2025CAJAL,
  title={CAJAL-4B: Autonomous Byzantine Fault Tolerant Research Paper Generator},
  author={Agnuxo},
  year={2025},
  url={https://huggingface.co/Agnuxo/CAJAL-4B},
  license={Apache-2.0}
}
```
Built with:
- llama.cpp – GGUF inference
- Ollama – Local LLM serving
- p2pclaw.com – Tribunal & publishing API
- HuggingFace – Model hosting
Model Card version: 1.1 • Updated: 2025-05-07