Recent Activity
Replied to spillai's post about 8 hours ago:

mm-ctx: fast, multimodal context for agents.

LLM-based agents handle text incredibly well, but struggle to interpret images, videos, or PDFs with visual content. mm-ctx gives your CLI agent multimodal skills.
Try it interactively in Spaces: https://huggingface.co/spaces/vlm-run/mm-ctx
Readme: https://vlm-run.github.io/mm/
PyPI: https://pypi.org/project/mm-ctx
SKILL.md: https://github.com/vlm-run/skills/blob/main/skills/mm-cli-skill/SKILL.md
mm-ctx is meant to feel familiar: the UNIX tools we already love (find/cat/grep/wc), rebuilt for file types LLMs can't read natively and designed to work with agents via the CLI.
- mm grep "invoice #1234" ~/Downloads searches across PDFs and returns line-numbered matches
- mm cat <document>.pdf returns a metadata description of the file
- mm cat <photo>.jpg returns a caption of the photo
- mm cat <video>.mp4 returns a caption of the video
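As a rough illustration of the line-numbered output `mm grep` is described as returning, here is a hedged sketch. This is not the actual implementation (the real tool first extracts text from PDFs and other binary formats with a multimodal model, which this toy version skips); the function name and output shape are assumptions for illustration only:

```python
def grep_lines(text: str, pattern: str) -> list[str]:
    """Return case-insensitive, line-numbered matches.

    Illustrative only: mimics the "line-numbered matches" output
    shape described for `mm grep`, on already-extracted text.
    """
    matches = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        if pattern.lower() in line.lower():
            matches.append(f"{lineno}: {line.strip()}")
    return matches


# Pretend this text came out of a PDF extraction step.
extracted = "ACME Corp\nInvoice #1234\nTotal: $99.00"
print(grep_lines(extracted, "invoice #1234"))  # one match, on line 2
```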
A few things we obsessed over:
- ⚡ Speed: Rust core for the hot paths
- Local-first, BYO model: works with any OpenAI-compatible endpoint (Ollama, vLLM/SGLang, LM Studio) and any multimodal LLM (Gemma4, Qwen3.5, GLM-4.6V)
- Composable: stdin + structured outputs
- Drops into any agent via mm-cli-skills: Claude Code, Codex, Gemini CLI, OpenClaw
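"BYO model" here means the tool speaks the OpenAI chat-completions wire format, so any compatible server can sit behind it. A minimal sketch of what such a request payload looks like for image captioning, under stated assumptions: the function name and the `local-vlm` model name are placeholders, the message shape follows the standard OpenAI vision format, and the payload is only built, not sent:

```python
import base64
import json


def caption_request(image_bytes: bytes, model: str = "local-vlm") -> dict:
    """Build an OpenAI-compatible chat-completions payload for captioning.

    Swap `model` for whatever your Ollama / vLLM / SGLang / LM Studio
    server actually serves; the data-URL image encoding is the standard
    OpenAI vision message format.
    """
    b64 = base64.b64encode(image_bytes).decode("ascii")
    return {
        "model": model,
        "messages": [
            {
                "role": "user",
                "content": [
                    {"type": "text", "text": "Caption this image."},
                    {
                        "type": "image_url",
                        "image_url": {"url": f"data:image/jpeg;base64,{b64}"},
                    },
                ],
            }
        ],
    }


# Stand-in bytes; in practice you would read a real JPEG from disk.
payload = caption_request(b"\xff\xd8\xff")
print(json.dumps(payload)[:80])
```

POSTing this JSON to the server's `/v1/chat/completions` route is all a local-first setup needs, which is why no vendor SDK or API key is required.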
We'd love to hear your feedback, especially on the CLI and which file types and workflows you'd like to see next.