inclusionAI/LLaDA2.0-Uni

Tags: Any-to-Any · Transformers · Diffusers · Safetensors · English · llada2_moe · feature-extraction · multimodal · image-generation · image-understanding · image-editing · diffusion · Mixture of Experts · text-to-image · custom_code

Instructions for using inclusionAI/LLaDA2.0-Uni with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • Transformers

    How to use inclusionAI/LLaDA2.0-Uni with Transformers:

    # Load the model directly; trust_remote_code=True runs the repository's
    # custom modeling code, and dtype="auto" keeps the checkpoint's precision.
    from transformers import AutoModel
    model = AutoModel.from_pretrained(
        "inclusionAI/LLaDA2.0-Uni", trust_remote_code=True, dtype="auto"
    )
  • Notebooks
  • Google Colab
  • Kaggle
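Before committing to a multi-gigabyte download, it can help to inspect what the repository actually contains (for example, whether the GGUF files requested in the discussions below exist yet). A minimal sketch using the `huggingface_hub` client, assuming the package is installed and the Hub is reachable; it fetches only repository metadata, not the model weights:

```python
# List the files in the model repository without downloading any weights.
# Assumes `huggingface_hub` is installed and network access to the Hub.
from huggingface_hub import HfApi

api = HfApi()
files = api.list_repo_files("inclusionAI/LLaDA2.0-Uni")

# Every Transformers-loadable repo ships a config.json describing the model.
print("config.json present:", "config.json" in files)
print("safetensors shards:", [f for f in files if f.endswith(".safetensors")])
```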
Community discussions

  • GGUF versions — #7, opened 6 days ago by maroo87 (1 reply)
  • add more examples to readme — #6, opened 11 days ago by evewashere
  • ComfyUI support — #5, opened 12 days ago by reverentelusarca (2 replies, ➕ 5)
  • Using a single CPU kernel for some important things seems the performance bottleneck — #3, opened 13 days ago by rpeinl
  • Impression — #1, opened 14 days ago by NickupAI