- Z-Image: An Efficient Image Generation Foundation Model with Single-Stream Diffusion Transformer
  Paper • 2511.22699 • Published • 245
- A Survey on Diffusion Language Models
  Paper • 2508.10875 • Published • 34
- Scalable Diffusion Models with Transformers
  Paper • 2212.09748 • Published • 17
- Scaling Rectified Flow Transformers for High-Resolution Image Synthesis
  Paper • 2403.03206 • Published • 71
Collections including paper arxiv:2309.05463
- Attention Is All You Need
  Paper • 1706.03762 • Published • 120
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 20
- GQA: Training Generalized Multi-Query Transformer Models from Multi-Head Checkpoints
  Paper • 2305.13245 • Published • 6
- Llama 2: Open Foundation and Fine-Tuned Chat Models
  Paper • 2307.09288 • Published • 251
- Attention Is All You Need
  Paper • 1706.03762 • Published • 120
- Language Models are Few-Shot Learners
  Paper • 2005.14165 • Published • 20
- LLaMA: Open and Efficient Foundation Language Models
  Paper • 2302.13971 • Published • 23
- Llama 2: Open Foundation and Fine-Tuned Chat Models
  Paper • 2307.09288 • Published • 251
- Textbooks Are All You Need
  Paper • 2306.11644 • Published • 154
- Textbooks Are All You Need II: phi-1.5 technical report
  Paper • 2309.05463 • Published • 90
- TinyStories: How Small Can Language Models Be and Still Speak Coherent English?
  Paper • 2305.07759 • Published • 45
- Scaling Synthetic Data Creation with 1,000,000,000 Personas
  Paper • 2406.20094 • Published • 107