# Huggy Style v1 - FLUX DreamBooth LoRA

A LoRA adapter for FLUX.1-dev trained with DreamBooth to generate Huggy, the Hugging Face mascot character.
## Character Description

Huggy is a yellow circular character with:

- Round body (no arms, legs, or feet)
- Two floating hands
- Orange outlines (no black outlines)
- Clean, flat vector-art style with edge shadows
- Expressive face with various emotions
## Trigger Word

Use `huggy_style_v1` in your prompts to activate the character.
## Usage

```python
import torch
from diffusers import FluxPipeline

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

# Load the LoRA adapter
pipe.load_lora_weights("Chunte/huggy-style-v1-lora")

image = pipe(
    prompt="a huggy_style_v1 mascot wearing a pirate hat, waving, happy",
    num_inference_steps=28,
    guidance_scale=3.5,
    width=768,
    height=768,
    generator=torch.Generator("cpu").manual_seed(42),
).images[0]
image.save("huggy.png")
```
## Prompt Tips

- Always include `huggy_style_v1` as the trigger word
- Describe what varies: costumes, poses, expressions, props
- Don't describe the character's base appearance (yellow, circular, etc.); the LoRA already knows this
- Example: `a huggy_style_v1 mascot wearing a santa hat, holding a gift, smiling`
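The tips above can be sketched as a small prompt-builder helper (hypothetical, not part of this repo): the trigger word is fixed, and only the varying details are appended.

```python
# Hypothetical helper following the prompt tips: the trigger word is always
# present, and only what varies (costume, props, pose, expression) is described.
TRIGGER = "huggy_style_v1"

def build_prompt(costume=None, pose=None, expression=None, props=None):
    """Compose a prompt from the trigger word plus only the varying details."""
    parts = [f"a {TRIGGER} mascot"]
    if costume:
        parts.append(f"wearing {costume}")
    if props:
        parts.append(f"holding {props}")
    if pose:
        parts.append(pose)
    if expression:
        parts.append(expression)
    return ", ".join(parts)

print(build_prompt(costume="a santa hat", props="a gift", expression="smiling"))
# → a huggy_style_v1 mascot, wearing a santa hat, holding a gift, smiling
```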
## Checkpoints

Multiple checkpoints are available if the final weights are overfitting:

| Checkpoint | Use Case |
|---|---|
| checkpoint-500 | Early training: more creative, less accurate character |
| checkpoint-1000 | Moderate: good balance for some use cases |
| checkpoint-1500 | Strong character identity with good generalization |
| final (default) | Strongest character identity (2000 steps) |

Load a specific checkpoint:

```python
pipe.load_lora_weights("Chunte/huggy-style-v1-lora", subfolder="checkpoint-1000")
```
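To pick a checkpoint, it can help to render the same seeded prompt once per checkpoint and compare the results. The helper below is a hypothetical sketch (not part of this repo): it assumes a `pipe` set up as in the Usage section, and uses `unload_lora_weights()`, the diffusers call that removes the current adapter before the next one is loaded.

```python
REPO = "Chunte/huggy-style-v1-lora"
CHECKPOINTS = ["checkpoint-500", "checkpoint-1000", "checkpoint-1500"]

def output_name(checkpoint):
    # One distinct filename per checkpoint makes side-by-side comparison easy.
    return f"huggy_{checkpoint}.png"

def compare_checkpoints(pipe, prompt, seed=42):
    """Render the same seeded prompt with each checkpoint for comparison."""
    import torch  # deferred; the pipeline setup in Usage already requires it

    for ckpt in CHECKPOINTS:
        pipe.unload_lora_weights()                    # drop the previous adapter
        pipe.load_lora_weights(REPO, subfolder=ckpt)  # load this checkpoint
        image = pipe(
            prompt=prompt,
            num_inference_steps=28,
            guidance_scale=3.5,
            generator=torch.Generator("cpu").manual_seed(seed),  # fixed seed
        ).images[0]
        image.save(output_name(ckpt))
```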
## Training Details

| Parameter | Value |
|---|---|
| Base model | FLUX.1-dev |
| Method | DreamBooth LoRA |
| Training script | `train_dreambooth_lora_flux.py` (diffusers v0.37.0) |
| Dataset | 72 hand-captioned images (1024x1024, white background) |
| Resolution | 768 |
| LoRA rank | 32 |
| Learning rate | 1e-4 (constant scheduler) |
| Warmup steps | 100 |
| Training steps | 2000 |
| Batch size | 1 (gradient accumulation: 4, effective batch: 4) |
| Mixed precision | bf16 |
| Guidance scale | 1 (recommended for FLUX training) |
| Gradient checkpointing | Enabled |
| Hardware | NVIDIA L40S (48GB VRAM) |
| Final loss | 0.021 |
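
The hyperparameters above roughly correspond to a launch command like the following. This is a sketch, not the exact command used: the dataset path and instance prompt are placeholders, and flag names follow the diffusers `train_dreambooth_lora_flux.py` example script.

```shell
# Sketch of a matching launch command; data dir and instance prompt are placeholders.
accelerate launch train_dreambooth_lora_flux.py \
  --pretrained_model_name_or_path="black-forest-labs/FLUX.1-dev" \
  --instance_data_dir="./huggy-dataset" \
  --instance_prompt="a huggy_style_v1 mascot" \
  --output_dir="huggy-style-v1-lora" \
  --mixed_precision="bf16" \
  --resolution=768 \
  --train_batch_size=1 \
  --gradient_accumulation_steps=4 \
  --rank=32 \
  --learning_rate=1e-4 \
  --lr_scheduler="constant" \
  --lr_warmup_steps=100 \
  --max_train_steps=2000 \
  --guidance_scale=1 \
  --gradient_checkpointing \
  --checkpointing_steps=500
```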
## Sample Images
## License

This LoRA adapter inherits the FLUX.1-dev Non-Commercial License.