---
license: mit
language:
- en
- zh
library_name: peft
pipeline_tag: text-generation
base_model: Qwen/Qwen3-235B-A22B-Instruct-2507
base_model_relation: adapter
tags:
- macaron
- a2ui
- a2ui-v0.8
- lora
- peft
- dynamic-ui
- structured-generation
- json-generation
- grpo
- qwen3
---

# Macaron A2UI Grande

> This repository contains the LoRA adapter weights for **Macaron A2UI Grande**.

Macaron A2UI Grande is a LoRA adapter trained to generate valid **A2UI v0.8** cards from user context. It is designed for dynamic UI generation in personal-agent scenarios, where a model converts conversation context, product state, and available actions into one structured UI card.

This release corresponds to **Macaron A2UI Grande**.

## Highlights

- **A2UI v0.8 card generation**: generates structured UI cards that can be consumed by an A2UI-compatible renderer.
- **LoRA adapter release**: lightweight adapter weights for continued training, inspection, and adaptation.
- **Context-aware UI generation**: takes user intent, conversation context, product state, and available actions as input.
- **GRPO post-training**: this release is produced with GRPO on top of a Qwen3-235B A2UI initialization.
- **Validation-first design**: outputs should be checked by the provided A2UI v0.8 validator before rendering.

## Model Overview

| Field | Value |
| --- | --- |
| Model family | Macaron A2UI |
| Variant | Grande |
| Release name | `Macaron A2UI Grande` |
| Release type | LoRA adapter |
| Foundation checkpoint | `Qwen/Qwen3-235B-A22B-Instruct-2507` |
| Target protocol | A2UI v0.8 |
| Output format | JSON object with `text_response` and `a2ui` fields |
| Training method | GRPO with LoRA |
| Library | PEFT / Transformers |
| Recommended dtype | bfloat16 |
| Tokenizer | Same as foundation checkpoint |

### Adapter Details

| Field | Value |
| --- | --- |
| LoRA rank | `16` |
| LoRA alpha | `32` |
| LoRA dropout | `0.0` |
| Target modules | `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj` |
| LM head adapted | No |
| Training max response | `4096` tokens |

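As a rough illustration, the hyperparameters in the table above map onto the field names that `peft` stores in `adapter_config.json`. The dict below is a sketch for reference only; the `adapter_config.json` shipped in this repository is authoritative.

```python
# Sketch of the adapter hyperparameters from the table above, using the
# field names peft uses in adapter_config.json. Illustrative only; the
# shipped adapter_config.json in this repository is authoritative.
adapter_config = {
    "r": 16,             # LoRA rank
    "lora_alpha": 32,    # LoRA updates are scaled by lora_alpha / r
    "lora_dropout": 0.0,
    "target_modules": [
        "q_proj", "k_proj", "v_proj", "o_proj",  # attention projections
        "gate_proj", "up_proj", "down_proj",     # MLP projections
    ],
}

# With alpha = 32 and r = 16, the effective LoRA scaling factor is 2.0.
scaling = adapter_config["lora_alpha"] / adapter_config["r"]
```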

## Model Variants

| Variant | Release Name | Foundation Checkpoint | Release Type |
| --- | --- | --- | --- |
| Tall | Macaron A2UI Tall | `Qwen/Qwen3-30B-A3B-Instruct-2507` | LoRA adapter |
| Grande | Macaron A2UI Grande | `Qwen/Qwen3-235B-A22B-Instruct-2507` | LoRA adapter |
| Venti | Macaron A2UI Venti | `GLM 5.1` | LoRA adapter |

You are currently viewing the **Grande** release.


## Quickstart

This repository contains adapter weights only. Load the corresponding foundation checkpoint first, then attach this adapter with PEFT.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

base_model_id = "Qwen/Qwen3-235B-A22B-Instruct-2507"
adapter_id = "mindlab-research/Macaron-A2UI-Grande"

tokenizer = AutoTokenizer.from_pretrained(
    base_model_id,
    trust_remote_code=True,
)

base_model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    torch_dtype=torch.bfloat16,
    device_map="auto",
    trust_remote_code=True,
)

model = PeftModel.from_pretrained(base_model, adapter_id)
model.eval()

messages = [
    {
        "role": "system",
        "content": "You are an A2UI v0.8 card generation model. Output exactly one valid A2UI JSON card.",
    },
    {
        "role": "user",
        "content": "<USER_CONTEXT_JSON>",
    },
]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)

inputs = tokenizer(text, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=2048,
    do_sample=False,
)

# Decode only the newly generated tokens, skipping the prompt.
response = tokenizer.decode(
    outputs[0][inputs.input_ids.shape[-1]:],
    skip_special_tokens=True,
)

print(response)
```
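After generation, the raw string still needs to be parsed and shape-checked before it reaches a renderer. A minimal post-processing sketch follows; the `parse_card` helper is illustrative and not part of this repository.

```python
import json

def parse_card(raw: str) -> dict:
    """Parse a generated card and enforce the top-level output contract."""
    card = json.loads(raw)  # the model is trained to emit bare JSON, no fences
    if not isinstance(card, dict):
        raise ValueError("expected a JSON object at the top level")
    missing = {"text_response", "a2ui"} - card.keys()
    if missing:
        raise ValueError(f"missing required fields: {sorted(missing)}")
    return card

# Example with a hand-written response (not real model output):
card = parse_card('{"text_response": "Here is your card.", "a2ui": []}')
```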

## Output Contract

Macaron A2UI Grande is trained to output:

* valid JSON;
* a top-level object of the form `{"text_response": "...", "a2ui": [...]}`;
* no Markdown code fences;
* no extra explanation outside the JSON object;
* only A2UI actions and components supported by the calling product surface.

The `a2ui` field is expected to contain A2UI v0.8 messages such as `beginRendering`, `surfaceUpdate`, `dataModelUpdate`, or `deleteSurface`.

The model targets A2UI v0.8. Compatibility with later protocol revisions is not guaranteed without additional validation or fine-tuning.
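As a further sketch of the validation-first workflow, the check below verifies that every entry of `a2ui` carries one of the message types listed above. It assumes each message is an object with a single top-level key naming its type, which is an illustrative encoding only; the provided A2UI v0.8 validator, not this snippet, is the source of truth for the actual schema.

```python
# Known A2UI v0.8 message types, per the contract above.
KNOWN_MESSAGES = {"beginRendering", "surfaceUpdate", "dataModelUpdate", "deleteSurface"}

def check_message_types(card: dict) -> list:
    """Return the message types found in card["a2ui"], rejecting unknown ones."""
    types = []
    for msg in card.get("a2ui", []):
        # Assumed encoding: one top-level key per message, naming its type.
        if not isinstance(msg, dict) or len(msg) != 1:
            raise ValueError(f"unexpected message shape: {msg!r}")
        (msg_type,) = msg.keys()
        if msg_type not in KNOWN_MESSAGES:
            raise ValueError(f"unknown A2UI v0.8 message type: {msg_type}")
        types.append(msg_type)
    return types
```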

## Evaluation

We evaluate Macaron A2UI on internal A2UI v0.8 card-generation benchmarks and product-aligned task suites.

Public benchmark numbers and reproduction details are being standardized and will be added in a future revision of this model card. For now, this repository should be treated primarily as an adapter release: evaluation methodology, task definitions, and comparable public results are still being consolidated.


## Limitations

Macaron A2UI Grande is specialized for A2UI generation and is not intended as a general-purpose chat model.

Known limitations:

* may generate valid JSON that is still semantically weak;
* may hallucinate actions if the action space is underspecified;
* may fail on A2UI versions other than v0.8;
* requires external validation before production rendering;
* should not be used for irreversible or safety-critical UI actions without user confirmation.


## License

The adapter weights are released under MIT.

This adapter is trained on top of `Qwen/Qwen3-235B-A22B-Instruct-2507`. Users are responsible for complying with both:

1. the adapter license;
2. the license of the corresponding foundation checkpoint.


## Citation

```bibtex
@misc{kong2026macaron_a2ui,
  author       = {Fancy Kong and Congjie Zheng and Murphy Zhuang and Rio Yang and Sueky Zhang and Hao Fu and Gene Jin and Andrew Chen and Pony Ma and {Mind Lab}},
  title        = {Macaron-A2UI: A Model for Generative UI in Personal Agent},
  year         = {2026},
  howpublished = {Mind Lab: A Lab for Experiential Intelligence},
  note         = {https://macaron.im/mindlab/research/macaron-a2ui-generative-ui-personal-agent}
}
```

## Contact

contact@mindlab.ltd