---
library_name: transformers
pipeline_tag: image-text-to-text
inference: true
widget:
- text: Hello!
  example_title: Hello world
  group: Python
---

This tiny model is for debugging. It is randomly initialized with the config adapted from [google/gemma-3-27b-it](https://huggingface.co/google/gemma-3-27b-it).
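As a quick sanity check, here is a minimal sketch (assuming the checkpoint is available under the `tiny-random/gemma-3` id used in the examples below) that loads the model and prints its parameter count to confirm it is debugging-sized:

```python
import torch
from transformers import Gemma3ForConditionalGeneration

# Load the tiny checkpoint and count its parameters.
model = Gemma3ForConditionalGeneration.from_pretrained(
    "tiny-random/gemma-3", torch_dtype=torch.bfloat16
)
print(f"{sum(p.numel() for p in model.parameters()):,} parameters")
```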

### Example usage:

```python
from transformers import pipeline

model_id = "tiny-random/gemma-3"
pipe = pipeline(
    "image-text-to-text", model=model_id, device="cuda",
    trust_remote_code=True, max_new_tokens=3,
)
messages = [
    {
        "role": "system",
        "content": [{"type": "text", "text": "You are a helpful assistant."}]
    },
    {
        "role": "user",
        "content": [
            {"type": "image", "url": "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/p-blog/candy.JPG"},
            {"type": "text", "text": "What animal is on the candy?"}
        ]
    }
]
# max_new_tokens passed at call time overrides the default set when the pipeline was built.
output = pipe(text=messages, max_new_tokens=5)
print(output)
```
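If you prefer calling the model without the pipeline wrapper, a rough sketch of the usual processor-plus-`generate` flow looks like this (it reuses the `messages` list from the example above; exact keyword arguments may vary with your `transformers` version):

```python
import torch
from transformers import AutoProcessor, Gemma3ForConditionalGeneration

model_id = "tiny-random/gemma-3"
model = Gemma3ForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16
).to("cuda").eval()
processor = AutoProcessor.from_pretrained(model_id)

# `messages` is the same chat-format list defined in the pipeline example above.
inputs = processor.apply_chat_template(
    messages, add_generation_prompt=True, tokenize=True,
    return_dict=True, return_tensors="pt",
).to(model.device, dtype=torch.bfloat16)

with torch.inference_mode():
    generated = model.generate(**inputs, max_new_tokens=5)

# Decode only the newly generated tokens, skipping the prompt.
prompt_len = inputs["input_ids"].shape[-1]
print(processor.decode(generated[0][prompt_len:], skip_special_tokens=True))
```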

### Code to create this repo:

```python
import torch

from transformers import (
    AutoConfig,
    AutoModelForCausalLM,
    AutoProcessor,
    AutoTokenizer,
    Gemma3ForConditionalGeneration,
    GenerationConfig,
    pipeline,
    set_seed,
)

source_model_id = "google/gemma-3-27b-it"
save_folder = "/tmp/tiny-random/gemma-3"

# Reuse the original processor (tokenizer + image processor) unchanged.
processor = AutoProcessor.from_pretrained(
    source_model_id, trust_remote_code=True,
)
processor.save_pretrained(save_folder)

# Start from the original config, then shrink the text and vision towers.
config = AutoConfig.from_pretrained(
    source_model_id, trust_remote_code=True,
)
config.text_config.hidden_size = 32
config.text_config.intermediate_size = 128
config.text_config.head_dim = 32
config.text_config.num_attention_heads = 1
config.text_config.num_key_value_heads = 1
config.text_config.num_hidden_layers = 2
config.text_config.sliding_window_pattern = 2
config.vision_config.hidden_size = 32
config.vision_config.num_hidden_layers = 2
config.vision_config.num_attention_heads = 1
config.vision_config.intermediate_size = 128
model = Gemma3ForConditionalGeneration(
    config,
).to(torch.bfloat16)
# Sanity check: print which decoder layers use sliding-window attention
# (with sliding_window_pattern = 2 they alternate with full-attention layers).
for layer in model.language_model.model.layers:
    print(layer.is_sliding)
model.generation_config = GenerationConfig.from_pretrained(
    source_model_id, trust_remote_code=True,
)
# Randomly re-initialize every weight with a fixed seed for reproducibility.
set_seed(42)
with torch.no_grad():
    for name, p in sorted(model.named_parameters()):
        torch.nn.init.normal_(p, 0, 0.5)
        print(name, p.shape)
model.save_pretrained(save_folder)
```
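As a minimal follow-up check (assuming the script above has just been run and `/tmp/tiny-random/gemma-3` still exists), you can verify that the shrunken config and randomly initialized weights round-trip through `save_pretrained`:

```python
from transformers import AutoConfig, AutoProcessor, Gemma3ForConditionalGeneration

save_folder = "/tmp/tiny-random/gemma-3"

# The reloaded config should carry the shrunken dimensions set above.
reloaded_config = AutoConfig.from_pretrained(save_folder)
assert reloaded_config.text_config.hidden_size == 32
assert reloaded_config.text_config.num_hidden_layers == 2
assert reloaded_config.vision_config.num_hidden_layers == 2

# The saved weights and processor should load back without errors.
reloaded_model = Gemma3ForConditionalGeneration.from_pretrained(save_folder)
reloaded_processor = AutoProcessor.from_pretrained(save_folder)
print(sum(p.numel() for p in reloaded_model.parameters()), "parameters")
```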