---
license: other
license_name: fair-ai-public-license-1.0-sd
license_link: https://freedevproject.org/faipl-1.0-sd/
base_model:
- CabalResearch/NoobAI-Flux2VAE-RectifiedFlow-0.3
library_name: diffusers
---
![cover2](https://cdn-uploads.huggingface.co/production/uploads/633b43d29fe04b13f46c8988/WjXXUrUqm_uSoYo5eagXl.webp)

---

## Model Description

Mugen is a continuation of our SDXL-to-Flux 2 VAE conversion, renamed to signify a substantial divergence from the original NoobAI models. It has been trained for 7 additional epochs, totaling under **$8,000** for a full latent space conversion, while preserving and improving on the model's anime knowledge. In this iteration we paid particular attention to characters, and developed an in-house approach for benchmarking their performance, described below.

Overall, the model performs particularly well with textures and patterns that were previously simply impossible with the SDXL VAE.

We prioritized keeping our training as standard-friendly as possible, so the local community can easily train on it like a new base model, which it practically is.

We provide 4 models:
- Mugen: The base model.
- Mugen - Aesthetic: The base model lightly tuned on a limited dataset for better-quality output.
- Mugen - Aesthetic - Anzhc/Selph: Further tunes on opinionated dataset selections.
---

- **Developed by:** Cabal Research (Bluvoll, Anzhc)
- **Funded by:** Community
- **License:** [fair-ai-public-license-1.0-sd](https://freedevproject.org/faipl-1.0-sd/)
- **Resumed from:** [NoobAI Flux2 VAE v0.3](https://huggingface.co/CabalResearch/NoobAI-Flux2VAE-RectifiedFlow)

---

### Character Knowledge Benchmark

![series_dashboard](https://cdn-uploads.huggingface.co/production/uploads/633b43d29fe04b13f46c8988/trBLD3zcXDsMA1Iwuc4fj.png)
![unified_characters_standalone](https://cdn-uploads.huggingface.co/production/uploads/633b43d29fe04b13f46c8988/9RIxkWpmT9j_z2uGOWIXa.png)

This benchmark measures character similarity across 1815 characters in this iteration. For convenience, we gathered a few major categories: gacha games and VTubers. We use a reference (non-generated) set of images and measure character features against AI-generated data; this is the similarity score. Scoring is done with our custom in-house character-discrimination model, trained on ~1.2M images.

Results are sorted indiscriminately by score, so treat them as a general character knowledge index, not as a ranking of specific characters. The same point on different graphs may or may not correspond to the same character.

Due to compute constraints, we selected only a single model to compare against: the not-yet-released latest version of the [Chenkin](https://huggingface.co/ChenkinNoob/ChenkinNoob-XL-V0.3-BETA) model, currently the most trained SDXL-based anime model. Future benchmark iterations might include different architectures, more models, and more characters.

## Bias and Limitations

General data biases from Danbooru may apply.

Flux 2 VAE seems to have an overall brown bias, which can be alleviated by adding `sepia` or `brown theme` to the negative prompt.
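The similarity score from the benchmark above can be sketched as follows. This is a minimal illustration only: it assumes embeddings produced by some character-discrimination model (the actual in-house model and its exact metric are not published), and compares reference and generated sets by the cosine similarity of their mean embeddings.

```python
import numpy as np

def character_similarity(ref_embeddings: np.ndarray, gen_embeddings: np.ndarray) -> float:
    """Score how closely generated images match reference images of a character.

    ref_embeddings / gen_embeddings: (N, D) feature vectors from a
    character-discrimination model (hypothetical stand-in for the in-house one).
    Returns the cosine similarity between the two sets' centroid embeddings.
    """
    # L2-normalize each embedding so every image contributes equally
    ref = ref_embeddings / np.linalg.norm(ref_embeddings, axis=1, keepdims=True)
    gen = gen_embeddings / np.linalg.norm(gen_embeddings, axis=1, keepdims=True)
    # Compare the average reference feature against the average generated feature
    ref_centroid = ref.mean(axis=0)
    gen_centroid = gen.mean(axis=0)
    return float(
        ref_centroid @ gen_centroid
        / (np.linalg.norm(ref_centroid) * np.linalg.norm(gen_centroid))
    )
```

Scores near 1.0 mean the generated images cluster tightly around the reference identity; sorting characters by this value produces an index like the graphs above.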
## Model Output Examples

![image-wall-2048x9941.784038936581](https://cdn-uploads.huggingface.co/production/uploads/633b43d29fe04b13f46c8988/FXQzA7AWIuZnPnOTfBu2V.webp)

You can download most of these images [here](https://huggingface.co/CabalResearch/Mugen/tree/main/images) for reference.

---

# Recommendations

### Characters

While the benchmark tests characters purely by their own trigger with no helper tags, we advise adding the series/game tag for better adherence. Characters that initially appear not to work may start working once appearance tags are added.

### Inference

#### Comfy

![image](https://cdn-uploads.huggingface.co/production/uploads/633b43d29fe04b13f46c8988/668lIifTlZ8sYzWlzLVc0.png)

[**BASIC WORKFLOW**](https://huggingface.co/CabalResearch/Mugen/blob/main/Mugen%20Basic.json)

We provide a node and hope it will eventually be adopted natively in the main repo: **https://github.com/Anzhc/SDXL-Flux2VAE-ComfyUI-Node**

Just install it and it will patch the model config; no node changes are required. [SwarmUI](https://github.com/mcmonkeyprojects/SwarmUI) also requires only the node to be installed.

Inference works the same as usual, but with the addition of an SD3 sampling node, as this model is flow-based.

Recommended Parameters:
**Sampler**: Euler A, Euler, DPM++ SDE, etc.
**Steps**: 20-28
**CFG**: 4-7
**Shift**: 8-12
**Schedule**: Normal/Simple/SGM Uniform
**Positive Quality Tags**: `masterpiece, best quality`
**Negative Tags**: `worst quality, normal quality, bad anatomy, sepia`
**Alternative Extended Negative**: `(worst quality:1.1), normal quality, (bad anatomy:1.1), (blurry:1.1), watermark, sepia, (adversarial noise:1.1), jpeg artifacts`
(Some of our testers noted that they prefer the longer negative.)

#### A1111 WebUI

Recommended WebUI: [ReForge](https://github.com/Panchovix/stable-diffusion-webui-reForge) - it has native support for flow models, and we've PR'd native support for the Flux2VAE-based SDXL modification.
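For intuition about the Shift parameter recommended above: SD3-style samplers rescale the flow-matching timestep so the sampler spends more steps at high noise levels. A minimal sketch, assuming the time-shift convention used by ComfyUI's ModelSamplingSD3 node (frontends may implement it slightly differently):

```python
def shifted_sigma(sigma: float, shift: float = 8.0) -> float:
    """Apply an SD3-style time shift to a raw timestep sigma in [0, 1].

    Higher shift values bias the schedule toward high-noise steps,
    which flow-based models like this one generally need.
    """
    return shift * sigma / (1.0 + (shift - 1.0) * sigma)
```

For example, with shift = 8 the midpoint sigma = 0.5 is remapped to about 0.89, so most of the 20-28 recommended steps land in the high-noise regime.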
**How to use in ReForge**:

![image](https://cdn-uploads.huggingface.co/production/uploads/633b43d29fe04b13f46c8988/5H8vNl7ZzUh8F_4japzV9.png)

Support for RF in ReForge is implemented through a built-in extension:

![image](https://cdn-uploads.huggingface.co/production/uploads/633b43d29fe04b13f46c8988/Oo1_7dD2Lv7BRqeV_pUsg.png)

**IMPORTANT**
If you use previews, set your preview method to this:

![image](https://cdn-uploads.huggingface.co/production/uploads/634cf025fe861cc73a2e7dd0/siEBfX41E6D_DUEqaXMgs.png)

Flux2VAE does not currently have an appropriate high-quality preview method; please use the Approx Cheap option, which shows a simple PCA projection (ReForge).

Recommended Parameters:
**Sampler**: Euler A Comfy RF, Euler A2, Euler, DPM++ SDE Comfy, etc.
**ALL VARIANTS MUST BE RF OR COMFY, IF AVAILABLE. In ComfyUI routing is automatic, but not in the WebUI.**
**Steps**: 20-28
**CFG**: 4-7 (or 7-15, if it appears to be weak/bugged)
**Shift**: 8-12
**Schedule**: Normal/Simple/SGM Uniform
**Positive Quality Tags**: `masterpiece, best quality`
**Negative Tags**: `worst quality, normal quality, bad anatomy, sepia`
**Alternative Extended Negative**: `(worst quality:1.1), normal quality, (bad anatomy:1.1), (blurry:1.1), watermark, sepia, (adversarial noise:1.1), jpeg artifacts`
(Some of our testers noted that they prefer the longer negative.)

**ADETAILER FIX FOR RF**:

By default, Adetailer discards the Advanced Model Sampling extension, which breaks RF. You need to add AMS to this part of the settings:

![image](https://cdn-uploads.huggingface.co/production/uploads/633b43d29fe04b13f46c8988/RQMtfm5Xi3V7oNsqXoZJN.png)

Add `advanced_model_sampling_script,advanced_model_sampling_script_backported` there.
If that does not work, go into the Adetailer extension, find args.py, open it, and change `_builtin_script` like this:

![image](https://cdn-uploads.huggingface.co/production/uploads/633b43d29fe04b13f46c8988/rmnS-i_kciJzTZmeR-mGP.png)

Here is the snippet for easy copying:

```
_builtin_script = (
    "advanced_model_sampling_script",
    "advanced_model_sampling_script_backported",
    "hypertile_script",
    "soft_inpainting",
)
```

Or use my fork of Adetailer: https://github.com/Anzhc/aadetailer-reforge

---

### LoRA Training

You can reference a config with all parameters directly: [Download](https://huggingface.co/CabalResearch/Mugen/blob/main/Training%20Config%20Example.zip)

![image](https://cdn-uploads.huggingface.co/production/uploads/633b43d29fe04b13f46c8988/_-ysLTJZE4FGNpoTW2b_m.png)

### Hardware

The model was trained on a cloud 8xH100 node.

### Software

Custom fork of [SD-Scripts](https://github.com/bluvoll/sd-scripts) (maintained by Bluvoll)

## Acknowledgements

### Sponsors

**To a special supporter who single-handedly sponsored the whole run and preferred to stay anonymous**

### Testers
- ComradeAnanas
- Daruda
- Drac
- itterative
- kagame
- Remix
- Ryusho
- edf
- Epic
- Ly
- Panchovix
- Rakosz
- Sab
- Silvelter
- Talan
- Void
- Why ping

---

# Support

If you wish to support our continuous effort of making waifus 0.2% better, you can do so here:

### **https://ko-fi.com/bluvoll** (Blu, donate here to support training)
https://ko-fi.com/anzhc (Anzhc, non-training, just survival)

![image](https://cdn-uploads.huggingface.co/production/uploads/633b43d29fe04b13f46c8988/qpcOFLCI-B7QBADEYwMhA.png)

BTC: `37fLcfxX5ewhJXnb3T9Qzu9jiSLjVtoUJX`
ETH: `0xfdF54655796bf2F5bf75192AeB562F8656c1C39E`

Send a DM to Blu if you want to donate on another network.