Add paper and code links, and library metadata

#2 opened by nielsr (HF Staff)
Files changed (1)
  1. README.md +6 -3
README.md CHANGED
@@ -1,15 +1,18 @@
 ---
-license: apache-2.0
 base_model:
 - Zyphra/ZAYA1-base
+license: apache-2.0
 pipeline_tag: image-text-to-text
+library_name: transformers
 ---
 
 # ZAYA1-VL-8B
 
 ZAYA1-VL-8B is a vision-language model (VLM) built upon Zyphra's ZAYA1-8B LLM. It has state-of-the-art performance among VLMs for its size and inference efficiency.
 
-Learn more about our vision-language models in our [announcement blog post](https://www.zyphra.com/post/zaya1-vl-8b) and our [accompanying technical report](http://www.zyphra.com/zaya1-vl-8b-technical-report)
+- **Paper:** [ZAYA1-VL-8B Technical Report](https://huggingface.co/papers/2605.08560)
+- **Code:** [GitHub (zaya1-vl branch)](https://github.com/Zyphra/transformers/tree/zaya1-vl)
+- **Blog:** [Announcement blog post](https://www.zyphra.com/post/zaya1-vl-8b)
 
 ZAYA1-VL-8B is open-sourced under the Apache 2.0 license.
 
@@ -106,4 +109,4 @@ inputs = {key: value.to(device) for key, value in inputs.items()}
 
 outputs = model.generate(**inputs, max_new_tokens=100)
 print(processor.tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:]))
-```
+```
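The `print` line in the second hunk decodes only the newly generated tokens: `generate()` returns the prompt followed by the completion, so slicing at `inputs["input_ids"].shape[-1]` drops the prompt before decoding. A minimal sketch of that slicing with toy token-id lists (the id values here are placeholders, not real vocabulary ids):

```python
# Stand-in for the tokenized prompt ("input_ids" row).
prompt_ids = [101, 2054, 2003]

# Stand-in for one row of generate() output: prompt ids + new tokens.
generated = prompt_ids + [1037, 3231]

# Keep only the tokens produced after the prompt, as the README's
# decode call does with outputs[0][inputs["input_ids"].shape[-1]:].
new_tokens = generated[len(prompt_ids):]
print(new_tokens)  # [1037, 3231]
```

Without this slice, the decoded string would echo the prompt back before the model's answer.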