Hugging Face
Juanxi Tian
Juanxi
26 followers · 10 following
https://tianshijing.github.io
JuanxiTian
tianshijing
juanxi-tian
AI & ML interests
Efficient AI & Gen AI
Recent Activity
replied to their post · 1 day ago
📢 Awesome Multimodal Modeling

We introduce Awesome Multimodal Modeling, a curated repository tracing the architectural evolution of multimodal intelligence, from foundational fusion to native omni-models.

🔹 Taxonomy & Evolution:
- Traditional Multimodal Learning – foundational work on representation, fusion, and alignment.
- Multimodal LLMs (MLLMs) – architectures connecting vision encoders to LLMs for understanding.
- Unified Multimodal Models (UMMs) – models unifying understanding and generation via diffusion, autoregressive, or hybrid paradigms.
- Native Multimodal Models (NMMs) – models trained from scratch on all modalities; contrasts early vs. late fusion under scaling laws.

💡 Key Distinction: UMMs unify tasks via generation heads; NMMs enforce interleaving through joint pre-training.

🔗 Explore & Contribute: https://github.com/OpenEnvision-Lab/Awesome-Multimodal-Modeling
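The early- vs. late-fusion contrast that the taxonomy draws can be sketched in a few lines of plain Python. This is a toy illustration only: the dimensions, weights, and feature values are all made up, and real models would use learned, nonlinear networks rather than fixed linear maps.

```python
# Toy sketch of early vs. late fusion (illustrative values, not a real model).

def matvec(W, x):
    """Multiply a matrix (list of rows) by a vector."""
    return [sum(w * v for w, v in zip(row, x)) for row in W]

def vecadd(a, b):
    return [x + y for x, y in zip(a, b)]

# Hypothetical per-modality features (stand-ins for encoder outputs).
image_feat = [0.5, -1.0, 2.0, 0.1]   # e.g. from a vision encoder (dim 4)
text_feat = [1.5, 0.3, -0.7]         # e.g. from a text encoder (dim 3)

# Early fusion: concatenate raw features so one joint projection sees
# all modalities at once (the flavor of natively multimodal pre-training).
W_joint = [[0.1] * 7, [0.2] * 7]     # 2 x (4 + 3)
early = matvec(W_joint, image_feat + text_feat)

# Late fusion: project each modality separately, then merge the outputs
# (the flavor of bolting a vision encoder onto an existing LLM).
W_img = [[0.1] * 4, [0.2] * 4]
W_txt = [[0.3] * 3, [0.4] * 3]
late = vecadd(matvec(W_img, image_feat), matvec(W_txt, text_feat))

print(early, late)  # both yield a 2-dim fused representation
```

With linear maps the two recipes differ only in how parameters are shared; the distinction becomes substantive once nonlinear layers sit between projection and combination, which is where the early/late choice interacts with scaling behavior.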
reacted to their post with 👍 · 1 day ago
reacted to their post with 😎 · 1 day ago
Organizations
Juanxi's datasets (2)
Juanxi/PhyVideo (Viewer) · Updated Oct 18, 2025 · 6 · 20
Juanxi/Phys101 (Viewer) · Updated Oct 18, 2025 · 103 · 59