---
title: Sentinel Vision Navigator
emoji: 👁️
colorFrom: green
colorTo: blue
sdk: static
pinned: false
license: apache-2.0
short_description: AI camera guidance for blind users.
---
# Sentinel Vision Navigator
Sentinel Vision Navigator is a web + Android assistive AI prototype for blind and partially sighted people.
It uses live camera frames, voice commands, Akash-hosted Qwen multimodal inference, and a small vision-navigation RAG layer to provide short, calm, physical guidance such as:
- "Stop. Chair directly ahead, one step away."
- "Doorway ahead, slightly left. Take two small steps forward."
- "I cannot see enough. Hold still and slowly pan left, then right."
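The request-assembly side of this pipeline can be sketched as below. This is an illustrative sketch only: it assumes an OpenAI-compatible multimodal chat format for the Akash-hosted Qwen endpoint, and the function names, system prompt wording, and message shape are assumptions, not the project's actual API.

```python
# Hypothetical sketch: assemble one multimodal guidance request from
# retrieved navigation hints, a base64 camera frame, and a voice command.
# Message shape assumes an OpenAI-compatible vision chat API.

SYSTEM_PROMPT = (
    "You are a walking guide for a blind user. Reply with at most two short, "
    "calm sentences of physical guidance. If the scene is unclear, tell the "
    "user to stop and slowly pan the camera."
)

def build_messages(rag_snippets: list[str], frame_b64: str, user_utterance: str) -> list[dict]:
    """Combine the safety system prompt, RAG navigation hints,
    the current camera frame, and the user's spoken command."""
    context = "\n".join(f"- {s}" for s in rag_snippets)
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {
            "role": "user",
            "content": [
                {
                    "type": "text",
                    "text": f"Navigation hints:\n{context}\n\nUser said: {user_utterance}",
                },
                {
                    "type": "image_url",
                    "image_url": {"url": f"data:image/jpeg;base64,{frame_b64}"},
                },
            ],
        },
    ]
```

Keeping the system prompt fixed and injecting only hints and the frame per request keeps each call small, which matters when streaming live camera frames.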
## Demo
- Web app: https://amdvision.qubitpage.com/
- Android APK: https://amdvision.qubitpage.com/downloads/sentinel-vision.apk
- Whitepaper: https://amdvision.qubitpage.com/downloads/sentinel-vision-whitepaper.pdf
- Pitch deck: https://amdvision.qubitpage.com/downloads/sentinel-vision-pitch-deck.pdf
- Model/progress page: https://huggingface.co/lablab-ai-amd-developer-hackathon/SentinelBrain-14B-MoE-v0.1
## Hackathon Fit
- Primary track: Vision & Multimodal AI
- Secondary track: AI Agents & Agentic Workflows
- Supporting track: Fine-tuning / AMD MI300X progress via SentinelBrain model artifacts
- Hugging Face category: Space deployment for public testing
## Safety
This is an assistive prototype, not a medical device, and not a replacement for a cane, guide dog, trained mobility support, or human assistance. It is designed to be conservative: when uncertain, it asks the user to stop and pan slowly rather than inventing a route.
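The conservative fallback described above can be sketched as a small gate on the model's reply. The confidence score, threshold, and word-count limit here are assumptions for illustration, not values from the project.

```python
# Minimal sketch of the conservative fallback: prefer the model's guidance
# only when it is confident and short; otherwise ask the user to stop and pan.
# The confidence threshold and length cap are illustrative assumptions.

FALLBACK = "I cannot see enough. Hold still and slowly pan left, then right."

def choose_guidance(model_reply: str, confidence: float, threshold: float = 0.6) -> str:
    """Return the model's guidance, or the stop-and-pan fallback when the
    reply is empty, overly long, or below the confidence threshold."""
    reply = model_reply.strip()
    if confidence < threshold or not reply or len(reply.split()) > 30:
        return FALLBACK
    return reply
```

Gating on both confidence and length means a rambling or hedged model answer never reaches the user as navigation advice.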