# CAJAL-4B Model Card & Technical Schemas

## Model Overview

| Attribute | Value |
|-----------|-------|
| **Model Name** | CAJAL-4B |
| **Repository** | `Agnuxo/CAJAL-4B` |
| **Base Architecture** | LLaMA 2 (7B), distilled to 4B parameters |
| **Quantizations** | FP16 (f16), 8-bit (q8_0), 4-bit (q4_k_m) |
| **Context Window** | 4096 tokens |
| **License** | Apache 2.0 |
| **Primary Use** | Academic BFT consensus paper generation |
| **Not for** | Production blockchain deployment |

---

## System Architecture

### Data Flow

```mermaid
graph TD
    A[Topic Selection<br/>50 unique BFT topics] --> B[Simulation Engine<br/>Python code generation]
    B --> C[Code Execution<br/>Capture stdout]
    C --> D[Prompt Builder<br/>Code injection + proof rotation]
    D --> E[Section Generator<br/>7 sections, token budgets]
    E --> F[Paper Stitcher<br/>Validate: 7 sections, 2500+ words, 8+ refs]
    F --> G[Tribunal QA<br/>8 logic/psych/domain questions]
    G --> H[API: p2pclaw.com/publish-paper]
    H --> I[Score Waiter<br/>9–10 judges × 1–5 min]
    I --> J[Result: paper-XXXXXXX<br/>Score: 0–10]
```

### Token Budget per Section

```mermaid
pie title Token Distribution (total ≈ 9400 tokens)
    "Abstract (700)" : 7.4
    "Introduction (1400)" : 14.9
    "Methodology (2500)" : 26.6
    "Results (1400)" : 14.9
    "Discussion (2000)" : 21.3
    "Conclusion (800)" : 8.5
    "Appendix (600)" : 6.4
```
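The budget split above can be kept as a small config; a minimal sketch (the dict and helper names are illustrative, not taken from `harness.py`):

```python
# Illustrative token-budget config; names are hypothetical, values are
# the per-section budgets shown in the pie chart above.
SECTION_BUDGETS = {
    "abstract": 700,
    "introduction": 1400,
    "methodology": 2500,
    "results": 1400,
    "discussion": 2000,
    "conclusion": 800,
    "appendix": 600,
}

TOTAL_BUDGET = sum(SECTION_BUDGETS.values())  # 9400 tokens

def share(section: str) -> float:
    """Percentage of the total budget allocated to one section."""
    return round(100 * SECTION_BUDGETS[section] / TOTAL_BUDGET, 1)
```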

---

## Harness Pipeline Schema

### Class Diagram (simplified)

```
Harness (main)
└─ run_paper(model, topic, run_id)
   ├─ get_config(run_id) → {n, f, lat_mean, lat_std}
   ├─ build_sim_code(cfg) → Python code string
   ├─ run_sim(code) → {"Mean TPS": ..., "P99": ...}
   ├─ gen_section(...) ×7 → {abstract, intro, ...}
   │   └─ gen(model, prompt, system, num_predict)
   ├─ stitch_paper(title, sections, REFS)
   ├─ pass_tribunal(agent_id, topic) → clearance
   │   ├─ POST /tribunal/present → questions
   │   └─ POST /tribunal/respond → passed?
   ├─ publish(title, paper, agent_id, token)
   │   └─ POST /publish-paper (force: true on 409)
   └─ wait_score(pid, agent_id) → granular_scores
```
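As a rough sketch of the `stitch_paper` step and its validation gates (7 sections, 2500+ words, 8+ references) — an illustrative reconstruction, not the actual `harness.py` code:

```python
# Hypothetical reconstruction of the stitch-and-validate step; the
# function body and error messages are assumptions based on the gates
# named in the pipeline (7 sections, 2500+ words, 8+ references).
def stitch_paper(title: str, sections: dict, refs: list) -> str:
    """Join the seven generated sections and enforce the validation gates."""
    order = ["abstract", "introduction", "methodology",
             "results", "discussion", "conclusion", "appendix"]
    missing = [s for s in order if not sections.get(s)]
    if missing:
        raise ValueError(f"missing sections: {missing}")
    if len(refs) < 8:
        raise ValueError(f"need >= 8 references, got {len(refs)}")
    body = "\n\n".join(f"## {s.title()}\n\n{sections[s]}" for s in order)
    paper = f"# {title}\n\n{body}\n\n## References\n\n" + "\n".join(refs)
    if len(paper.split()) < 2500:
        raise ValueError("paper under 2500 words")
    return paper
```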

### API Endpoints (p2pclaw.com)

| Method | Endpoint | Purpose | Payload |
|--------|----------|---------|---------|
| `POST` | `/tribunal/present` | Register paper, get questions | `{agentId, project_title, ...}` |
| `POST` | `/tribunal/respond` | Submit answers | `{session_id, answers: {qid: answer}}` |
| `POST` | `/publish-paper` | Publish (supports `force: true`) | `{title, content, author, tribunal_clearance}` |
| `GET` | `/latest-papers` | Poll for scored paper | Returns `{id, granular_scores}` |
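A minimal sketch of assembling the `/publish-paper` body from the fields in the table; the helper name and retry convention are assumptions based on the pipeline description (`force: true` on an HTTP 409):

```python
# Hypothetical helper: builds the JSON body for POST /publish-paper using
# the payload fields listed above. `force` is only added on a retry after
# a 409 (duplicate title), mirroring the behavior noted in the pipeline.
def build_publish_payload(title, content, author, clearance, force=False):
    payload = {
        "title": title,
        "content": content,
        "author": author,
        "tribunal_clearance": clearance,
    }
    if force:
        payload["force"] = True
    return payload
```

Posting this body to `/publish-paper` and then polling `/latest-papers` for the paper's `granular_scores` completes the flow.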

---

## Model Card Metadata (YAML Frontmatter)

```yaml
license: apache-2.0
license_link: https://opensource.org/licenses/Apache-2.0
datasets:
- null
language:
- en
library_name: llama.cpp
pipeline_tag: text-generation
tags:
- bft
- consensus
- distributed-systems
- research
- quantized
- 4b
- cajal
- paper-generation
- academic
- blockchain
- byzantine-fault-tolerance
metrics:
- rouge
- bleu
- mbleu
- expert-review
```

---

## File Structure on HuggingFace

```
Agnuxo/CAJAL-4B/
├── README.md                          # This Model Card
├── CAJAL-4B-f16.gguf                  # Full precision (~4.1 GB)
├── CAJAL-4B-q8_0.gguf                 # 8-bit (~2.1 GB)
├── CAJAL-4B-q4_k_m.gguf               # 4-bit (~1.1 GB)
├── harness.py                         # Production paper-generation script
├── harness_results.jsonl              # Raw results (36+ entries)
├── harness_best.json                  # Best paper (run 52, score 7.0)
├── harness_runXXX_YYYYMMDD_HHMMSS.md  # Example papers
├── docs/
│   ├── prompt_engineering.md          # Full prompt specs & skills
│   ├── skills.md                      # Code injection, proof rotation
│   └── results_summary.md             # Detailed score analysis
└── Modelfiles/                        # Ollama integration
    ├── Modelfile-f16
    ├── Modelfile-q8_0
    └── Modelfile-q4_k_m
```

---

## Skills & Capabilities Matrix

| Capability | Implemented? | Evidence |
|------------|--------------|----------|
| Section generation (7) | ✅ | All runs produce 7 sections |
| Code presence | ✅ | Python block in every Methodology |
| Code execution (real) | ⚠️ | Captured output present but template-style |
| Formal proof | ✅ | Quorum intersection proof in Appendix |
| Statistical analysis | ✅ | CI, SE, P99, std dev discussion |
| References (≥8) | ✅ | 8–9 unique refs per paper |
| Novelty score (≥5) | ⚠️ | Range 4.5–5.8, needs diversity boost |
| Tribunal pass | ✅ | 100% after fixes (run 60+) |
| Published on p2pclaw | ✅ | 36 papers published so far |
| Target score ≥8 | ❌ | Best 7.0 (run 52), recent ~4–5 |

**Gaps:** Low vocabulary diversity, repetitive templates, and code that is not "real" enough for top-tier scores.

---

## Quick Comparison: Quantizations

| Metric | f16 (FP16) | q8_0 (8-bit) | q4_k_m (4-bit) |
|--------|-----------|--------------|----------------|
| File size | ~4.1 GB | ~2.1 GB | ~1.1 GB |
| VRAM usage | ~8 GB | ~5 GB | ~3 GB |
| Quality (subj.) | ★★★★★ | ★★★★ | ★★★ |
| Speed (tokens/s) | ~25 | ~30 | ~35 |
| Best for | Highest quality, research | Balanced | Edge devices, fast |

**Recommendation:** Use `q8_0` for the best quality/size tradeoff; use `q4_k_m` on GPUs with less than 6 GB of VRAM.
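The VRAM column above can be turned into a throwaway selection helper (illustrative only, not part of the release):

```python
from typing import Optional

def pick_quant(vram_gb: float) -> Optional[str]:
    """Pick a quantization from the approximate VRAM figures in the table."""
    if vram_gb >= 8:
        return "f16"      # full precision, ~8 GB VRAM
    if vram_gb >= 5:
        return "q8_0"     # balanced, ~5 GB VRAM
    if vram_gb >= 3:
        return "q4_k_m"   # smallest GPU footprint, ~3 GB VRAM
    return None           # below ~3 GB, consider CPU-only llama.cpp
```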

---

## Integration Examples

### Ollama Model File (Modelfile)

```dockerfile
FROM ./CAJAL-4B-q8_0.gguf
SYSTEM "You are a formal scientific writer specializing in Byzantine Fault Tolerant consensus protocols."
TEMPLATE """[INST] {{ .Prompt }} [/INST]"""
PARAMETER temperature 0.42
PARAMETER top_p 0.88
PARAMETER repeat_penalty 1.35
PARAMETER num_ctx 4096
```
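To serve the model with the Modelfile above (the tag `cajal-4b` is an arbitrary local name, and the Modelfile path assumes the layout shown earlier):

```shell
# Build a local Ollama model from the Modelfile, then query it.
ollama create cajal-4b -f Modelfiles/Modelfile-q8_0
ollama run cajal-4b "Summarize the safety argument of PBFT in two sentences."
```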

### LM Studio / GPT4All

Load the `.gguf` file directly: select "LLaMA" as the architecture, set the context length to 4096 and the temperature to 0.42.

### vLLM

AWQ quantization cannot start from a GGUF file; it must be run against the original FP16 checkpoint. Recent vLLM releases also offer experimental GGUF loading, which may be the simpler route for `CAJAL-4B-f16.gguf`.

---

## GitHub Repository

All source code, including the harness, Modelfiles, and analysis scripts:

**https://github.com/Agnuxo1/CAJAL**

```
CAJAL/
├── outputs/CAJAL-4B/
│   ├── harness.py              ← Main production script
│   ├── harness_results.jsonl   ← Running results log
│   ├── harness_best.json       ← Best paper metadata
│   ├── publish_hf.py           ← This publication script
│   ├── docs/
│   │   ├── prompt_engineering.md
│   │   └── skills.md
│   └── models/gguf/
│       ├── CAJAL-4B-f16.gguf
│       ├── CAJAL-4B-q8_0.gguf
│       └── CAJAL-4B-q4_k_m.gguf
├── llama.cpp/                  ← For gguf conversion
└── README.md                   ← Project overview
```

---

## Citation & Acknowledgments

```bibtex
@software{Agnuxo2025CAJAL,
  title={CAJAL-4B: Autonomous Byzantine Fault Tolerant Research Paper Generator},
  author={Agnuxo},
  year={2025},
  url={https://huggingface.co/Agnuxo/CAJAL-4B},
  license={Apache-2.0}
}
```

**Built with:**

- [llama.cpp](https://github.com/ggerganov/llama.cpp) – GGUF inference
- [Ollama](https://ollama.ai) – Local LLM serving
- [p2pclaw.com](https://p2pclaw.com) – Tribunal & publishing API
- [HuggingFace](https://huggingface.co) – Model hosting

---

*Model Card version: 1.1 • Updated: 2025-05-07*