Does this mean that the Qwen model passes tokens back into the context? And at the retrieval level, are those tokens re-encoded into the MALM model's hashes, which are then used to retrieve information from the context as MALM tokens, with the result converted back into Qwen's tokens afterwards?
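To make the question concrete, here is a minimal sketch of the flow I am asking about. The model names are just placeholders ("Qwen/Qwen2-7B" for the generator, "bert-base-uncased" standing in for the MALM retriever), and the hashing/retrieval step is replaced by a toy token-overlap score, so this is only an illustration of the hypothesized pipeline, not the actual implementation:

```python
from transformers import AutoTokenizer

# Placeholder tokenizers: the generator's (Qwen) and a stand-in for the retriever's (MALM).
qwen_tok = AutoTokenizer.from_pretrained("Qwen/Qwen2-7B")
malm_tok = AutoTokenizer.from_pretrained("bert-base-uncased")

def retrieve_and_reencode(query_ids, context_chunks):
    # 1. Decode the generator's (Qwen) token ids back into text.
    query_text = qwen_tok.decode(query_ids, skip_special_tokens=True)

    # 2. Re-encode the query in the retriever's token space ("MALM tokens").
    query_set = set(malm_tok(query_text, add_special_tokens=False).input_ids)

    # 3. Toy retrieval: pick the context chunk with the largest token overlap.
    #    (The real model would presumably use its hashes / learned representations here.)
    def overlap(chunk):
        return len(query_set & set(malm_tok(chunk, add_special_tokens=False).input_ids))
    best_chunk = max(context_chunks, key=overlap)

    # 4. Convert the retrieved text back into Qwen token ids for the generator.
    return qwen_tok(best_chunk, add_special_tokens=False).input_ids
```

Is that round trip (Qwen tokens → retriever tokens → retrieval → Qwen tokens) roughly what happens, or does the retrieval operate directly on Qwen's token ids?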