

Daily Papers

by AK and the research community

Apr 17

Immiscible Diffusion: Accelerating Diffusion Training with Noise Assignment

In this paper, we point out that a suboptimal noise-data mapping leads to slow training of diffusion models. During diffusion training, current methods diffuse each image across the entire noise space, resulting in a mixture of all images at every point in the noise space. We emphasize that this random mixture of noise-data mappings complicates the optimization of the denoising function in diffusion models. Drawing inspiration from the immiscibility phenomenon in physics, we propose Immiscible Diffusion, a simple and effective method to improve this random mixture of noise-data mappings. In physics, miscibility varies with intermolecular forces; immiscibility means that the mixed molecular sources remain distinguishable. Inspired by this, we propose an assignment-then-diffusion training strategy. Specifically, before diffusing image data into noise, we assign a target noise to each image by minimizing the total image-noise pair distance within a mini-batch. The assignment acts analogously to an external force that separates the diffuse-able areas of images, thus mitigating the inherent difficulties in diffusion training. Our approach is remarkably simple, requiring only one line of code to restrict the diffuse-able area for each image while preserving the Gaussian distribution of the noise. This ensures that each image is projected only to nearby noise. To address the high complexity of the assignment algorithm, we employ a quantized assignment method that reduces the computational overhead to a negligible level. Experiments demonstrate that our method achieves up to 3x faster training for consistency models and DDIM on the CIFAR dataset, and up to 1.3x faster training for consistency models on the CelebA dataset. In addition, we conduct a thorough analysis of Immiscible Diffusion, which sheds light on how it improves diffusion training speed while also improving fidelity.
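The assignment-then-diffusion strategy described above — pairing each image in a mini-batch with a nearby noise sample by minimizing the total image-noise pair distance — can be sketched with a batch-level linear assignment. This is an illustrative sketch, not the authors' code: the use of `scipy.optimize.linear_sum_assignment` and L2 distances on flattened tensors are assumptions. Because the assignment only permutes the noise samples already drawn for the batch, the Gaussian marginal of the noise is preserved.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def immiscible_noise_assignment(images, noise):
    """Reassign sampled Gaussian noise to images within a mini-batch by
    minimizing the total image-noise pair distance (linear assignment).
    Only the pairing changes, not the noise samples themselves, so the
    batch's Gaussian noise distribution is preserved."""
    b = images.shape[0]
    # Pairwise L2 distances between flattened images and noise samples.
    cost = np.linalg.norm(
        images.reshape(b, -1)[:, None, :] - noise.reshape(b, -1)[None, :, :],
        axis=-1,
    )
    row, col = linear_sum_assignment(cost)  # optimal permutation of noise
    return noise[col]                        # noise assigned to image i

rng = np.random.default_rng(0)
imgs = rng.normal(size=(8, 3, 4, 4))   # toy mini-batch of "images"
eps = rng.normal(size=(8, 3, 4, 4))    # sampled Gaussian noise
assigned = immiscible_noise_assignment(imgs, eps)
```

The assigned noise is then used as the diffusion target in the usual training loop, which is why the method amounts to roughly one extra line before the forward diffusion step.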

  • 6 authors
·
Jun 18, 2024

Improved Immiscible Diffusion: Accelerate Diffusion Training by Reducing Its Miscibility

The substantial training cost of diffusion models hinders their deployment. Immiscible Diffusion recently showed that reducing diffusion trajectory mixing in the noise space via linear assignment accelerates training by simplifying denoising. To extend immiscible diffusion beyond the inefficient linear assignment under high batch sizes and high dimensions, we refine this concept to a broader miscibility reduction at any layer and by any implementation. Specifically, we empirically demonstrate the bijective nature of the denoising process with respect to immiscible diffusion, ensuring its preservation of generative diversity. Moreover, we provide a thorough analysis and show step by step how immiscibility eases denoising and improves efficiency. Extending beyond linear assignment, we propose a family of implementations including K-nearest neighbor (KNN) noise selection and image scaling to reduce miscibility, achieving more than 4x faster training across diverse models and tasks including unconditional/conditional generation, image editing, and robotics planning. Furthermore, our analysis of immiscibility offers a novel perspective on how optimal transport (OT) enhances diffusion training. By identifying trajectory miscibility as a fundamental bottleneck, we believe this work establishes a potentially new direction for future research into high-efficiency diffusion training. The code is available at https://github.com/yhli123/Immiscible-Diffusion.
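The KNN noise selection mentioned above can be sketched as a cheap per-image alternative to batch-level linear assignment: draw several Gaussian noise candidates for each image and keep the nearest one. The candidate count `k` and the L2 metric below are illustrative assumptions, not the paper's exact configuration; selection runs in O(b·k) per batch instead of the cubic cost of exact assignment.

```python
import numpy as np

def knn_noise_selection(images, k=4, rng=None):
    """For each image, draw k i.i.d. Gaussian noise candidates and keep the
    candidate nearest in L2 distance, reducing how much diffusion
    trajectories mix in noise space. Illustrative sketch: candidate count
    and metric are assumptions, not the paper's exact settings."""
    rng = rng or np.random.default_rng()
    b = images.shape[0]
    flat = images.reshape(b, -1)
    cand = rng.normal(size=(b, k, flat.shape[1]))         # k candidates per image
    dists = np.linalg.norm(cand - flat[:, None, :], axis=-1)
    pick = dists.argmin(axis=1)                           # nearest candidate index
    return cand[np.arange(b), pick].reshape(images.shape)
```

With more candidates, each image diffuses toward noise that sits closer to it, which is the miscibility-reduction effect the abstract describes; unlike the permutation-based assignment, this does slightly bias the per-image noise toward the data.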

  • 6 authors
·
May 24, 2025

Experimental and Computational Analysis of the Hydrodynamics of Droplet Generation in a Cylindrical Microfluidic Device

This study investigates the hydrodynamics of droplet formation in a T-shaped cylindrical microfluidic device using micro-PIV experiments and CFD simulations. Devices of 150 μm internal diameter were fabricated from PDMS via a cost-effective embedded templating method. Flow visualization was conducted using immiscible silicone oil and deionized water, forming water-in-oil droplets. A mathematical model coupling the Navier-Stokes and conservative level-set equations was solved using the finite element method. Detailed flow fields (velocity, pressure, and phase distribution) were obtained over a wide range of flow-rate ratios (Qr, 0.1-10) and capillary numbers (Ca, 0.001-0.1) to characterize droplet formation mechanisms. Phase evolution revealed distinct breakup stages (lag, filling, necking, and pinch-off) and multiple regimes (squeezing, dripping, sausage flow, and parallel flow with tip streaming). A regime map delineating droplet and non-droplet regions was developed. Droplet size, curvature, and internal flow profiles exhibited a strong dependence on Ca and Qr. Scaling analysis showed a linear dependence of droplet size on Qr in the squeezing regime, with curvature nearly independent of Qr. In contrast, both size and curvature followed power-law dependences on Ca and Qr in the dripping regime. Velocity fields inside droplets were laminar and parabolic in the core. Fully developed plug-like profiles appeared in the squeezing regime, whereas the front and rear regions remained developing in the dripping regime. Correlations for droplet length, curvature, and film thickness, including a novel thin-film model incorporating visco-inertial and capillary effects, enable predictive design within the studied range. These findings advance the fundamental understanding of confined droplet dynamics and provide quantitative guidelines for optimizing droplet-based microfluidic systems.
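The regime-dependent scalings noted above (droplet size linear in Qr and nearly Ca-independent in the squeezing regime; power-law in both Ca and Qr in the dripping regime) can be sketched as a simple predictive correlation. The prefactors `alpha`, `beta` and exponents `m`, `n` below are hypothetical placeholders chosen only to illustrate the functional forms — they are not the paper's fitted values.

```python
def droplet_length(regime, Qr, Ca, w=150e-6, alpha=1.0, beta=2.5, m=-0.2, n=0.3):
    """Droplet length from regime-dependent scaling laws, scaled by the
    channel internal diameter w (150 micrometers in the study).
    squeezing: L/w = 1 + alpha * Qr        (linear in Qr, Ca-independent)
    dripping:  L/w = beta * Ca**m * Qr**n  (power law in Ca and Qr)
    All coefficients and exponents here are illustrative placeholders."""
    if regime == "squeezing":
        return (1.0 + alpha * Qr) * w
    if regime == "dripping":
        return beta * (Ca ** m) * (Qr ** n) * w
    raise ValueError(f"unknown regime: {regime}")
```

A negative exponent on Ca in the dripping branch encodes the usual trend that droplets shrink as viscous shear (higher Ca) increasingly dominates interfacial tension during pinch-off.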

  • 3 authors
·
Mar 3