---
license: apache-2.0
tags:
- sentinelbrain
- moe
- rocm
- mi300x
- pytorch
- checkpoint
- realignment
library_name: pytorch
pipeline_tag: text-generation
---
# SentinelBrain 14B MoE v0.1 - Frankenstein Realignment v2
This repository now includes SentinelBrain Frankenstein realignment v2 artifacts from the AMD MI300X run completed on 2026-05-03.
## v2 Training Update
- Architecture: custom SentinelBrain sparse MoE decoder, approximately 14.4B stored parameters, 4 experts, top-2 routing, 24 layers, d_model 4096, seq_len 4096.
- Hardware: AMD Instinct MI300X via ROCm/HIP.
- Run: Frankenstein realignment v2 from raw merged checkpoint.
- Completed steps: 5,000.
- Total training tokens during realignment: approximately 0.98B.
- Best validation loss observed: 5.1359.
- Final checkpoint: `checkpoints/frankenstein_v2_final.pt`.
- Best checkpoint: `checkpoints/frankenstein_v2_best.pt`.
- EMA best checkpoint: `checkpoints/frankenstein_v2_ema_best.pt`.
- Previous Hugging Face version preserved on branch: `previous-before-v2-realign-5000-20260503-103121`.
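The reported token count is consistent with a fixed tokens-per-step budget. A minimal sanity check, assuming a hypothetical global batch of 48 sequences of 4,096 tokens (the batch size is an illustration, not taken from the run logs):

```python
STEPS = 5_000
SEQ_LEN = 4_096      # from the architecture summary above
SEQS_PER_STEP = 48   # assumed global batch size (illustrative only)

total_tokens = STEPS * SEQS_PER_STEP * SEQ_LEN
print(f"{total_tokens / 1e9:.2f}B tokens")  # → 0.98B tokens
```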
## Included Files
- `checkpoints/frankenstein_v2_final.pt`: full final checkpoint at step 5000, including optimizer/progress state.
- `checkpoints/frankenstein_v2_best.pt`: best model-only checkpoint by validation loss.
- `checkpoints/frankenstein_v2_ema_best.pt`: EMA best checkpoint from the v2 run.
- `checkpoints/sentinelbrain_pretrain_step2471_hf.pt`: pretrain anchor used for comparison.
- `logs/realign_v2.log`: full realignment console log.
- `logs/realign_v2_metrics.jsonl`: step metrics emitted during training.
- `reports/train_metrics_final.json`: final dashboard training metrics snapshot.
- `reports/conductor_state_final.json`: final dashboard/conductor state.
- `reports/sft_combined_ready_report.*`: cleaned SFT dataset preflight report.
- `reports/sentinelbrain_quality_stub_full_fixed.json`: MI300X executable-code benchmark report.
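The JSONL metrics log can be scanned with the standard library alone. A minimal sketch, assuming each line is a JSON object with `step` and `val_loss` keys (the field names are an assumption, not taken from the actual log schema):

```python
import json

def best_val_loss(path: str) -> tuple[int, float]:
    """Return (step, loss) for the lowest validation loss in a JSONL metrics file."""
    best = (0, float("inf"))
    with open(path) as fh:
        for line in fh:
            rec = json.loads(line)
            # Field names are assumed; adjust to the real log schema.
            if "val_loss" in rec and rec["val_loss"] < best[1]:
                best = (rec["step"], rec["val_loss"])
    return best
```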
A full progress archive containing all v2 milestones and optimizer-bearing checkpoints is backed up off-Hub on the Azure VM at /home/msrusu/sentinelbrain_backups/v2_realign_5000/sentinelbrain_v2_realign_full_20260503.tar.zst. A SHA256 sidecar is generated at archive completion.
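Verifying the archive against its sidecar needs nothing beyond the standard library. A minimal sketch, assuming the sidecar stores the hex digest as its first whitespace-separated token (the usual `sha256sum` output format):

```python
import hashlib

def verify_sha256(archive_path: str, sidecar_path: str) -> bool:
    """Compare an archive's SHA-256 digest against its sidecar file."""
    h = hashlib.sha256()
    with open(archive_path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):  # 1 MiB chunks
            h.update(chunk)
    with open(sidecar_path) as fh:
        expected = fh.read().split()[0].lower()
    return h.hexdigest() == expected
```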
## Current Evaluation Notes
MI300X executable-code tests show that v2 realignment is not yet ready as a coding assistant checkpoint:
| Checkpoint | Pass@1 | Syntax Rate | Notes |
|---|---|---|---|
| `frankenstein_v2_best.pt` | 0.0% | 62.5% | Failed all 8 HumanEval-style stub tasks. |
| `frankenstein_v2_final.pt` | 0.0% | 75.0% | Failed all 8 HumanEval-style stub tasks. |
| `sentinelbrain_pretrain_step2471_hf.pt` | 0.0% | 87.5% | Failed all 8 tasks but produced the most syntactically valid Python. |
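For reference, the two metric columns are simple ratios over the 8 stub tasks. A minimal sketch (the per-task result structure is illustrative, not the harness's actual format):

```python
def code_eval_summary(results: list[dict]) -> dict:
    """Aggregate per-task results into Pass@1 and syntax-validity rates."""
    n = len(results)
    passed = sum(r["passed"] for r in results)
    valid = sum(r["syntax_ok"] for r in results)
    return {"pass@1": passed / n, "syntax_rate": valid / n}

# Matches the pretrain-anchor row: 7 of 8 outputs parse, none pass.
tasks = [{"passed": False, "syntax_ok": True}] * 7 + [{"passed": False, "syntax_ok": False}]
print(code_eval_summary(tasks))  # → {'pass@1': 0.0, 'syntax_rate': 0.875}
```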
Interpretation: v2 successfully completed corpus realignment and preserved all progress artifacts, but it needs a focused next phase of executable code SFT, function-call/chat formatting, and auto-critic rejection sampling before quality claims should be made.
## Dataset Preparation Status
The next SFT combined dataset was cleaned non-destructively on the MI300X server:
- Input rows: 42,138.
- Kept rows: 32,996 (78.3%).
- Removed rows: 9,142.
- Max estimated tokens: 3,072.
- Main removals: short assistant/user outputs, garbage responses, repetitive responses, and over-length samples.
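The removal criteria above can be expressed as a non-destructive filter pass. A minimal sketch, assuming rows are dicts with `user`/`assistant` text fields and a crude chars/4 token estimate (the thresholds and field names are illustrative assumptions, not the actual cleaning script):

```python
def keep_row(row: dict, min_chars: int = 20, max_tokens: int = 3_072) -> bool:
    """Return True if an SFT row survives the cleaning pass."""
    user, assistant = row["user"], row["assistant"]
    if len(user.strip()) < min_chars or len(assistant.strip()) < min_chars:
        return False                      # short user/assistant outputs
    words = assistant.split()
    if words and len(set(words)) / len(words) < 0.3:
        return False                      # highly repetitive responses
    est_tokens = (len(user) + len(assistant)) // 4
    return est_tokens <= max_tokens       # drop over-length samples
```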
## Loading
These are custom SentinelBrain PyTorch checkpoints, not standard Hugging Face AutoModelForCausalLM weights. Load with the SentinelBrain code from /workspace/sentinelprime or the matching source package.
```python
import torch

from config import ModelConfig
from model.sentinel import SentinelBrain

# Full checkpoints carry optimizer/progress state, hence weights_only=False.
ckpt = torch.load("checkpoints/frankenstein_v2_best.pt", map_location="cpu", weights_only=False)

model = SentinelBrain(ModelConfig())
# Weights may be nested under different keys; fall back to the raw dict.
state = ckpt.get("model_state_dict") or ckpt.get("model") or ckpt
model.load_state_dict(state, strict=False)
model.eval()
```
## Next Phase Direction
The recommended next phase is a controlled SFT/auto-critic cycle: train from the pretrain anchor plus selected v2 weights only after passing format probes, prioritize executable Python/TypeScript/code-repair datasets, reject non-compiling generations, and benchmark every 250-500 steps before continuing.
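The "reject non-compiling generations" step can be gated cheaply before any sandboxed execution. A minimal sketch using Python's built-in `compile` as a first-pass filter (illustrative only, not the project's actual auto-critic):

```python
def compiles(source: str) -> bool:
    """Cheap first-pass filter: keep only generations that parse as Python."""
    try:
        compile(source, "<generation>", "exec")
        return True
    except SyntaxError:
        return False

candidates = ["def add(a, b):\n    return a + b\n", "def add(a, b) return a + b"]
accepted = [c for c in candidates if compiles(c)]
print(len(accepted))  # → 1
```

Generations that survive this gate would still need to be executed against tests before counting toward Pass@1.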
## Sentinel Vision Navigator Progress Log
- Date: 2026-05-07
- Hackathon: AMD Developer Hackathon
- Tracks: Vision & Multimodal AI, AI Agents & Agentic Workflows, Fine-Tuning on AMD GPUs, Hugging Face Space
Sentinel Vision Navigator is now connected to this model page as the applied assistive-AI product layer for the SentinelBrain roadmap. The public prototype focuses on blind and partially sighted users who need immediate, camera-first understanding of their surroundings.
### Current build
- Public web demo: https://amdvision.qubitpage.com/
- Hugging Face Space: https://huggingface.co/spaces/lablab-ai-amd-developer-hackathon/sentinel-vision-navigator
- Whitepaper PDF: https://amdvision.qubitpage.com/downloads/sentinel-vision-whitepaper.pdf
- Pitch deck PDF: https://amdvision.qubitpage.com/downloads/sentinel-vision-pitch-deck.pdf
- Android APK distribution path: https://amdvision.qubitpage.com/downloads/sentinel-vision.apk
### Implemented application features
- Live camera scene analysis for blind mobility prompts.
- Camera-first local navigation behavior for commands such as `guide me`, `get out`, `avoid obstacles`, and `which way`.
- Embedded vision-direction RAG knowledge for indoor paths, outdoor sidewalks, stairs, curbs, text reading, and object finding.
- Fluent English speech output with calmer, shorter instructions.
- Android first-run setup for camera, mic/speech, contacts, location, email handoff, WhatsApp handoff, SMS, and calls.
- Multitasking agent routing for navigation, calls, SMS, WhatsApp, email, maps, app opening, battery, time, text reading, object finding, and conversation.
- 3D Terminator-style camera HUD with reticle, scan line, perspective grid, connector status, and detected-object chips.
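The multitasking agent routing described above can be approximated with a keyword-to-intent dispatcher. A minimal sketch (the intent names and trigger phrases are illustrative, not the app's actual routing table):

```python
INTENTS = {
    "navigation": ("guide me", "get out", "avoid obstacles", "which way"),
    "calls": ("call", "dial"),
    "sms": ("text", "sms"),
    "object_finding": ("find", "where is"),
}

def route(command: str) -> str:
    """Map a spoken command to an intent; fall back to open conversation."""
    lowered = command.lower()
    for intent, phrases in INTENTS.items():
        if any(p in lowered for p in phrases):
            return intent
    return "conversation"

print(route("Guide me to the exit"))  # → navigation
```

A production router would add confidence scoring and disambiguation prompts rather than first-match wins, but the dispatch shape is the same.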
### Model status note
The production demo currently uses Akash-hosted Qwen multimodal inference for visual reasoning, while SentinelBrain-14B-MoE-v0.1 remains the project's model artifact and training/progress page for the specialized-assistant roadmap. The next step is to collect interaction traces and evaluate a fine-tuned SentinelBrain navigation/router layer on AMD GPU infrastructure.