How to use zeyuren2002/EvalMDE with Diffusers:

pip install -U diffusers transformers accelerate

import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices.
pipe = DiffusionPipeline.from_pretrained(
    "zeyuren2002/EvalMDE", dtype=torch.bfloat16, device_map="cuda"
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
export CUDA=0
export BASE_TEST_DATA_DIR="datasets/eval/"
export CHECKPOINT_DIR="jingheya/lotus-depth-d-v2-0-disparity"
export OUTPUT_DIR="output/Depth_D_Eval"
export TASK_NAME="depth"
export MODE="regression"
CUDA_VISIBLE_DEVICES=$CUDA python eval.py \
--pretrained_model_name_or_path=$CHECKPOINT_DIR \
--prediction_type="sample" \
--seed=42 \
--half_precision \
--base_test_data_dir=$BASE_TEST_DATA_DIR \
--task_name=$TASK_NAME \
--mode=$MODE \
--output_dir=$OUTPUT_DIR \
--disparity
# The default `processing_res` is set in the configuration file of each dataset.
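The `--disparity` flag above indicates that the checkpoint predicts disparity (inverse depth) rather than metric depth. As a hedged sketch of what evaluating such affine-invariant predictions generally involves (not the repository's exact eval.py code), the prediction is typically aligned to the ground truth with a least-squares scale and shift over valid pixels before computing a metric such as AbsRel:

```python
import numpy as np

def align_scale_shift(pred, gt, mask):
    # Solve min over (s, t) of || s * pred + t - gt ||^2 on valid pixels.
    p, g = pred[mask], gt[mask]
    A = np.stack([p, np.ones_like(p)], axis=1)
    (s, t), *_ = np.linalg.lstsq(A, g, rcond=None)
    return s * pred + t

def abs_rel(pred, gt, mask):
    # Mean absolute relative error over valid pixels.
    return np.mean(np.abs(pred[mask] - gt[mask]) / gt[mask])

# Toy example: the prediction is an affine transform of the ground truth,
# so alignment should recover it almost exactly.
gt = np.array([[1.0, 2.0], [3.0, 4.0]])
pred = 0.5 * gt + 0.1
mask = gt > 0

aligned = align_scale_shift(pred, gt, mask)
print(abs_rel(pred, gt, mask))     # large error before alignment
print(abs_rel(aligned, gt, mask))  # near zero after alignment
```

The mask matters because real ground-truth depth maps contain invalid pixels (zeros or sensor holes) that must be excluded from both the alignment and the metric.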