Improve model card for AtomThink-LLaVA1.5-7B
#1
by nielsr HF Staff - opened
This PR significantly enhances the model card for AtomThink-LLaVA1.5-7B by:
- Adding `pipeline_tag: image-text-to-text` to improve discoverability for multimodal reasoning tasks.
- Specifying `library_name: transformers`, as the model is compatible with the Hugging Face Transformers library.
- Updating the `license` to `MIT`, aligning with the official GitHub repository's license file.
- Including descriptive `tags` such as `multimodal`, `mathematical-reasoning`, `llava`, and `slow-thinking`.
- Adding direct links to the official Hugging Face paper page and the GitHub repository.
- Expanding the model description with the paper abstract and key features from the GitHub README.
- Embedding relevant diagrams from the GitHub repository for better visual understanding.
- Providing a clear and concise sample usage code snippet for quick inference with the `transformers` library.
- Retaining and reformatting the citation information.
- Adding an Acknowledgement section from the GitHub README.
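For context, the sample usage added to the card follows the standard `transformers` LLaVA-1.5 pattern. The sketch below illustrates that pattern only; the checkpoint id, image path, and the `generate_answer` helper are placeholders, not the card's exact code:

```python
def build_prompt(question: str) -> str:
    """LLaVA-1.5 conversation format: the <image> placeholder precedes the user turn."""
    return f"USER: <image>\n{question} ASSISTANT:"


def generate_answer(model_id: str, image_path: str, question: str) -> str:
    """Load the checkpoint and run one round of inference (sketch).

    model_id is a placeholder -- substitute the actual Hub repo id for
    AtomThink-LLaVA1.5-7B. Heavy imports are kept local so build_prompt
    stays dependency-free.
    """
    from PIL import Image
    from transformers import AutoProcessor, LlavaForConditionalGeneration

    model = LlavaForConditionalGeneration.from_pretrained(model_id, device_map="auto")
    processor = AutoProcessor.from_pretrained(model_id)

    inputs = processor(
        images=Image.open(image_path),
        text=build_prompt(question),
        return_tensors="pt",
    ).to(model.device)
    # Slow-thinking models produce long step-by-step chains, so leave headroom.
    output_ids = model.generate(**inputs, max_new_tokens=512)
    return processor.decode(output_ids[0], skip_special_tokens=True)
```

A larger `max_new_tokens` budget than usual is worth noting in the card, since atomic step-by-step reasoning tends to generate much longer outputs than direct answering.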
These changes will make the model more discoverable, understandable, and easier to use for researchers and developers on the Hugging Face Hub.