---
license: cc-by-nc-4.0
library_name: transformers
tags:
- mobile-o
- multimodal
- unified-model
- vision-language
- text-to-image
- image-understanding
- on-device
- mobile
pipeline_tag: text-to-image
datasets:
- Amshaker/Mobile-O-Post-Train
- Amshaker/Mobile-O-SFT
- Amshaker/Mobile-O-Pre-Train
base_model:
- Efficient-Large-Model/Sana_600M_512px_diffusers
- apple/FastVLM-0.5B
---

# Mobile-O-0.5B

**Unified Multimodal Understanding and Generation on Mobile Device**


## 📌 Overview

Mobile-O-0.5B is a compact unified vision–language–diffusion model that performs both **multimodal understanding** (VQA, OCR, reasoning) and **image generation** within a single architecture, designed for mobile and edge deployment.

| Spec | Detail |
|------|--------|
| **Total Parameters** | 1.6B |
| **Image Resolution** | 512×512 |
| **Image Generation** | ~3 seconds on iPhone |
| **Visual Understanding** | ~0.4 seconds on iPhone |
| **Memory Footprint** | < 2 GB |

## 🎯 Supported Tasks

| Task | Input → Output |
|------|----------------|
| 💬 Conversational AI | Text → Text |
| 👁️ Image Understanding | Image + Text → Text |
| 🖼️ Image Generation | Text → Image |
| ✏️ Image Editing | Image + Text → Image |

## 🚀 Quick Start

### Download

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="Amshaker/Mobile-O-0.5B",
    repo_type="model",
    local_dir="checkpoints",
    allow_patterns=["final_merged_model_23620/*"],
)
```

### Image Understanding

```bash
python infer_und.py \
    --model_path checkpoints/final_merged_model_23620/ \
    --image_path assets/cute_cat.png \
    --prompt "What is in the image?"
```

### Image Generation

```bash
python infer_gen.py \
    --model_path checkpoints/final_merged_model_23620/ \
    --prompt "A vibrant tropical rainforest scene with a scarlet macaw perched on a moss-covered branch"
```

### Image Editing

```bash
python infer_edit.py \
    --model_path checkpoints/final_merged_model_23620/ \
    --image_path assets/cute_cat.png \
    --prompt "Make the cat wear a hat"
```

## 🏗️ Architecture

Mobile-O consists of three main components:

- **Vision-Language Model (VLM):** [FastVLM-0.5B](https://github.com/apple/ml-fastvlm) — FastViT vision encoder + Qwen2-0.5B language backbone
- **Diffusion Decoder:** [SANA-600M-512](https://github.com/NVlabs/Sana) — lightweight linear DiT with VAE for 512×512 generation
- **Mobile Conditioning Projector (MCP):** ~2.4M-parameter connector using layerwise feature fusion with temperature-scaled weights, depthwise-separable 1D convolutions, and efficient channel attention

## 🏋️ Training

Mobile-O is trained in three stages:

1. **Pre-training** — cross-modal alignment on [4M text–image pairs](https://huggingface.co/datasets/Amshaker/Mobile-O-Pre-Train)
2. **SFT** — supervised fine-tuning on [~105K curated pairs](https://huggingface.co/datasets/Amshaker/Mobile-O-SFT)
3. **Post-training** — unified multimodal training on [~105K quadruplets](https://huggingface.co/datasets/Amshaker/Mobile-O-Post-Train)

## 🔗 Related Resources

| Resource | Link |
|----------|------|
| 🤗 Mobile-O-1.5B | [Model](https://huggingface.co/Amshaker/Mobile-O-1.5B) |
| 🤗 Mobile-O-0.5B-iOS | [iOS Components](https://huggingface.co/Amshaker/Mobile-O-0.5B-iOS) |
| 📱 iOS App Source Code | [Mobile-O-App](https://github.com/Amshaker/Mobile-O/tree/main/Mobile-O-App) |

## 📄 Citation

```bibtex
@article{shaker2026mobileo,
  title={Mobile-O: Unified Multimodal Understanding and Generation on Mobile Device},
  author={Shaker, Abdelrahman and Heakl, Ahmed and Muhammad, Jaseel and Thawkar, Ritesh and Thawakar, Omkar and Li, Senmao and Cholakkal, Hisham and Reid, Ian and Xing, Eric P. and Khan, Salman and Khan, Fahad Shahbaz},
  journal={arXiv preprint arXiv:2602.20161},
  year={2026}
}
```

## ⚖️ License

Released under [CC BY-NC 4.0](https://creativecommons.org/licenses/by-nc/4.0/). For research purposes only.
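## 🧪 Appendix: MCP Fusion Sketch

The layerwise feature fusion with temperature-scaled weights described in the Architecture section can be illustrated in a few lines: hidden states from several VLM layers are combined with softmax weights whose sharpness is controlled by a temperature. This is a minimal NumPy sketch of the general technique; the function name, shapes, and example logits are hypothetical, not the released implementation (which also learns the fusion logits and further processes the fused features with depthwise-separable 1D convolutions and channel attention).

```python
import numpy as np

def temperature_scaled_fusion(layer_feats, logits, tau=1.0):
    """Fuse per-layer features using softmax weights scaled by temperature tau.

    layer_feats: list of L arrays, each (seq_len, dim) -- hidden states from L layers
    logits: (L,) fusion logits (learned in the real model; fixed here for illustration)
    tau: temperature -- lower tau sharpens the layer weighting, higher tau flattens it
    """
    logits = np.asarray(logits, dtype=np.float64)
    scaled = logits / tau
    scaled -= scaled.max()                      # subtract max for numerical stability
    w = np.exp(scaled) / np.exp(scaled).sum()   # softmax over layers
    fused = sum(wi * f for wi, f in zip(w, layer_feats))
    return fused, w

# toy check: 3 layers, 4 tokens, 8-dim features
rng = np.random.default_rng(0)
feats = [rng.standard_normal((4, 8)) for _ in range(3)]
fused, w = temperature_scaled_fusion(feats, logits=[0.2, 1.0, -0.5], tau=0.5)
```

At low temperature the fusion concentrates on the single most useful layer; at high temperature it approaches a uniform average, which is why the temperature is a convenient knob for such a small connector.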