---
library_name: transformers
license: apache-2.0
license_link: https://huggingface.co/Qwen/Qwen3.5-2B/blob/main/LICENSE
pipeline_tag: image-text-to-text
base_model:
- Qwen/Qwen3.5-2B-Base
---

# Vedika 3.5 flash

<img width="400px" src="https://i.ibb.co/ZR5f1rxF/Blue-and-Black-Minimalist-Brand-Logo-20260423-171504-0000.png">

> [!Note]
> This repository contains the model weights and configuration files for the post-trained model in the Hugging Face Transformers format.
>
> These artifacts are compatible with Hugging Face Transformers, vLLM, SGLang, KTransformers, and similar inference frameworks.
>
> Given its parameter scale, the intended use cases are prototyping, task-specific fine-tuning, and other research and development purposes.
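
As a quick start, the sketch below runs multimodal inference through the Transformers `image-text-to-text` pipeline (matching the `pipeline_tag` above). The repository id `Vedika/Vedika-3.5-flash` and the image URL are illustrative placeholders, not confirmed names; substitute the actual repository.

```python
# Minimal inference sketch with the Transformers pipeline API.
# NOTE: the model id and image URL are illustrative placeholders.
from transformers import pipeline

pipe = pipeline(
    "image-text-to-text",             # matches the pipeline_tag above
    model="Vedika/Vedika-3.5-flash",  # placeholder repository id
    torch_dtype="auto",
    device_map="auto",
)

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://example.com/sample.jpg"},
            {"type": "text", "text": "Describe this image in one sentence."},
        ],
    }
]

outputs = pipe(text=messages, max_new_tokens=128)
# With chat-style input, `generated_text` holds the whole conversation;
# the final turn is the model's reply.
print(outputs[0]["generated_text"][-1]["content"])
```

For serving, vLLM and SGLang can host the same checkpoint behind their OpenAI-compatible APIs.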
Over recent months, we have intensified our focus on developing foundation models that deliver exceptional utility and performance. Vedika 3.5 flash represents a significant leap forward, integrating breakthroughs in multimodal learning, architectural efficiency, reinforcement learning scale, and global accessibility to empower developers and enterprises with unprecedented capability and efficiency.

## Vedika 3.5 flash Highlights

Vedika 3.5 flash features the following enhancements:

- **Unified Vision-Language Foundation**: Early fusion training on multimodal tokens achieves cross-generational parity with text-only Vedika models and outperforms previous-generation models across reasoning, coding, agent, and visual-understanding benchmarks.

- **Efficient Hybrid Architecture**: Gated Delta Networks combined with sparse Mixture-of-Experts deliver high-throughput inference with minimal latency and cost overhead; a simplified sketch of the delta-rule update appears after this list.

- **Scalable RL Generalization**: Reinforcement learning scaled across million-agent environments with progressively more complex task distributions, yielding robust real-world adaptability.

- **Global Linguistic Coverage**: Expanded support to 201 languages and dialects, enabling inclusive, worldwide deployment with nuanced cultural and regional understanding.

- **Next-Generation Training Infrastructure**: Near-100% multimodal training efficiency relative to text-only training, together with asynchronous RL frameworks that support massive-scale agent scaffolds and environment orchestration.
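
To make the hybrid-architecture highlight concrete: Gated DeltaNet layers maintain a fast-weight state that is decayed by a gate and updated with a rank-1 delta-rule write, as described in the Gated Delta Networks literature. The following is a simplified single-head NumPy sketch of that recurrence, not the model's actual implementation; all names are illustrative.

```python
# Simplified single-head gated delta rule (illustrative, not the model code):
#   S_t = alpha_t * (S_{t-1} - beta_t * (S_{t-1} k_t) k_t^T) + beta_t * v_t k_t^T
#   o_t = S_t q_t
import numpy as np

def gated_deltanet_step(S, q, k, v, alpha, beta):
    """One recurrent step. S: (d_v, d_k) fast-weight state; q, k: (d_k,); v: (d_v,).
    alpha in (0, 1) is the decay gate; beta in (0, 1) is the write strength."""
    # Erase the value currently associated with k, decay the state,
    # then write the new key/value association as a rank-1 update.
    S = alpha * (S - beta * np.outer(S @ k, k)) + beta * np.outer(v, k)
    return S, S @ q  # updated state and per-token output

# Toy rollout using the DeltaNet head dimension from the overview below (128).
d_k = d_v = 128
rng = np.random.default_rng(0)
S = np.zeros((d_v, d_k))
for _ in range(4):
    q, k, v = rng.standard_normal(d_k), rng.standard_normal(d_k), rng.standard_normal(d_v)
    k /= np.linalg.norm(k)  # the delta rule assumes (near-)unit-norm keys
    S, o = gated_deltanet_step(S, q, k, v, alpha=0.95, beta=0.5)
```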
## Model Overview

- Type: Causal Language Model with Vision Encoder
- Training Stage: Pre-training & Post-training
- Language Model:
  - Number of Parameters: 2B
  - Hidden Dimension: 2048
  - Token Embedding: 248,320 (padded)
  - Number of Layers: 24
  - Hidden Layout: 6 × (3 × (Gated DeltaNet → FFN) → 1 × (Gated Attention → FFN))
  - Gated DeltaNet:
    - Number of Linear Attention Heads: 16 for V and 16 for QK
    - Head Dimension: 128
  - Gated Attention:
    - Number of Attention Heads: 8 for Q and 2 for KV
    - Head Dimension: 256
    - Rotary Position Embedding Dimension: 64
  - Feed-Forward Network:
    - Intermediate Dimension: 6144
  - LM Output: 248,320 (tied to token embedding)
  - MTP (Multi-Token Prediction): trained with multi-step prediction
- Context Length: 262,144 tokens natively
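
For readers cross-checking the numbers above, here is a small plain-Python sketch (the block labels are descriptive, not configuration keys) that expands the hidden layout into the full 24-layer schedule and derives the grouped-query attention ratio:

```python
# Expand "6 × (3 × (Gated DeltaNet → FFN) → 1 × (Gated Attention → FFN))"
# into a per-layer schedule; labels are descriptive, not config keys.
REPEATS = 6              # outer repetitions of the macro-block
DELTANET_PER_BLOCK = 3   # Gated DeltaNet -> FFN sublayers per macro-block
ATTENTION_PER_BLOCK = 1  # Gated Attention -> FFN sublayers per macro-block

schedule = []
for _ in range(REPEATS):
    schedule += ["gated_deltanet+ffn"] * DELTANET_PER_BLOCK
    schedule += ["gated_attention+ffn"] * ATTENTION_PER_BLOCK

assert len(schedule) == 24  # matches "Number of Layers: 24"

# Gated Attention uses grouped-query attention: 8 query heads share 2 KV
# heads, so each KV head serves 8 // 2 = 4 query heads.
q_heads, kv_heads = 8, 2
assert q_heads % kv_heads == 0
print(f"{len(schedule)} layers; GQA group size = {q_heads // kv_heads}")
```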