Improve model card for AtomThink-LLaVA1.5-7B

#1 opened by nielsr (HF Staff)

This PR significantly enhances the model card for AtomThink-LLaVA1.5-7B by:

  • Adding `pipeline_tag: image-text-to-text` to improve discoverability for multimodal reasoning tasks.
  • Specifying `library_name: transformers`, as the model is compatible with the Hugging Face Transformers library.
  • Updating the license to MIT, aligning with the official GitHub repository's license file.
  • Including descriptive tags such as `multimodal`, `mathematical-reasoning`, `llava`, and `slow-thinking`.
  • Adding direct links to the official Hugging Face paper page and the GitHub repository.
  • Expanding the model description with the paper abstract and key features from the GitHub README.
  • Embedding relevant diagrams from the GitHub repository for better visual understanding.
  • Providing a clear and concise sample usage code snippet for quick inference using the `transformers` library.
  • Retaining and formatting the citation information.
  • Adding an Acknowledgement section from the GitHub README.
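
Taken together, the metadata changes above would land in the model card's YAML front matter roughly as follows (a sketch reconstructed from this list; the exact field values and tag order should be checked against the merged card):

```yaml
pipeline_tag: image-text-to-text
library_name: transformers
license: mit
tags:
  - multimodal
  - mathematical-reasoning
  - llava
  - slow-thinking
```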

These changes will make the model more discoverable, understandable, and easier to use for researchers and developers on the Hugging Face Hub.
