Instructions for using zeyuren2002/EvalMDE with libraries, inference providers, notebooks, and local apps.
- Libraries
  - Diffusers

How to use zeyuren2002/EvalMDE with Diffusers:

```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "zeyuren2002/EvalMDE",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
  - Google Colab
  - Kaggle
Training command (commit 7f921f4):

```shell
accelerate launch --config_file configs/accelerate_config.yaml scripts/train.py \
  --dataset_base_path "/mnt/nfs/workspace/syq/dataset/Hypersim/processed_depth,/mnt/nfs/workspace/syq/dataset/vkitti2" \
  --dataset_metadata_path "./data_split/hypersim_depth/filename_list_train_filtered2.txt,./data_split/vkitti_depth/vkitti_train.txt" \
  --data_file_keys "kontext_images,image" \
  --model_paths "./FLUX.1-Kontext-dev" \
  --learning_rate "1e-5" \
  --num_epochs "8" \
  --remove_prefix_in_ckpt "pipe.dit." \
  --trainable_models "dit" \
  --extra_inputs "kontext_images" \
  --use_gradient_checkpointing \
  --default_caption "Transform to depth map while maintaining original composition" \
  --batch_size "4" \
  --output_path "ckpts/kontext/bs64_sqrt_cons" \
  --eval_file_list "./data_split/nyu_depth/labeled/filename_list_test.txt" \
  --multi_res_noise \
  --save_steps "200" \
  --eval_steps "50" \
  --with_mask \
  --depth_normalization sqrt \
  --dataset_num_workers "16" \
  --extra_loss "cycle_consistency_depth_estimation" \
  --adamw8bit \
  --using_sqrt
  # --deterministic_flow
  # --extra_loss_start_epoch 0 \
  # --using_sqrt \
  # --resume \
```
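The `--depth_normalization sqrt` and `--using_sqrt` flags suggest that ground-truth depths are compressed with a square root before being mapped to the model's input range. The training code itself is not shown here, so the following is only a minimal sketch of what such a normalization could look like; the function name `normalize_depth_sqrt` and the `[-1, 1]` output range are assumptions for illustration, not the repository's actual implementation.

```python
import numpy as np

def normalize_depth_sqrt(depth: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Hypothetical sqrt depth normalization: compress metric depth with a
    square root, then min-max rescale to [-1, 1] (an assumed input range)."""
    d = np.sqrt(np.clip(depth, 0.0, None))      # sqrt compresses far depths
    d_min, d_max = d.min(), d.max()
    d = (d - d_min) / max(d_max - d_min, eps)   # min-max scale to [0, 1]
    return d * 2.0 - 1.0                        # shift to [-1, 1]

# Example: a tiny "depth map" in meters; ordering is preserved, but the
# gap between 10 m and 80 m is compressed relative to a linear scaling.
depth = np.array([[0.5, 2.0], [10.0, 80.0]], dtype=np.float32)
norm = normalize_depth_sqrt(depth)
```

A sqrt (rather than linear) mapping spends more of the output range on near depths, which is where indoor datasets such as Hypersim and NYU concentrate their detail.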