Instructions for using zeyuren2002/EvalMDE with libraries, inference providers, notebooks, and local apps.
How to use zeyuren2002/EvalMDE with Diffusers:
```shell
pip install -U diffusers transformers accelerate
```
```python
import torch
from diffusers import DiffusionPipeline

# Switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "zeyuren2002/EvalMDE",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```

- Notebooks
  - Google Colab
  - Kaggle
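The snippet above hard-codes `device_map="cuda"` and notes that Apple devices should use `"mps"`. A small helper for picking that string automatically is sketched below; it is an illustrative addition (the function name `pick_device` is not part of the model repo), assuming only that `torch` is installed:

```python
import torch


def pick_device() -> str:
    """Return a device string suitable for DiffusionPipeline's device_map.

    Prefers CUDA, falls back to Apple's MPS backend, then CPU.
    """
    if torch.cuda.is_available():
        return "cuda"
    # Guard the attribute lookup: older torch builds lack torch.backends.mps
    mps = getattr(torch.backends, "mps", None)
    if mps is not None and mps.is_available():
        return "mps"
    return "cpu"


print(pick_device())
```

The result can then be passed as `device_map=pick_device()` in the `from_pretrained` call.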
```shell
export CUDA=0
export BASE_TEST_DATA_DIR="datasets/eval/"
export CHECKPOINT_DIR="jingheya/lotus-normal-g-v1-1"
export OUTPUT_DIR="output/Normal_G_Eval"
export TASK_NAME="normal"
export MODE="generation"

CUDA_VISIBLE_DEVICES=$CUDA python eval.py \
    --pretrained_model_name_or_path=$CHECKPOINT_DIR \
    --prediction_type="sample" \
    --seed=42 \
    --half_precision \
    --base_test_data_dir=$BASE_TEST_DATA_DIR \
    --task_name=$TASK_NAME \
    --mode=$MODE \
    --output_dir=$OUTPUT_DIR \
    --disparity

# You can set `--processing_res` for high-resolution images. Default: `--processing_res=None`.
```
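As a sketch of the high-resolution case mentioned in the comment, the invocation only gains the extra flag; the value `1024` below is an illustrative choice, not a documented default:

```shell
# Hypothetical variant: run evaluation with a fixed processing resolution.
# The resolution value (1024) is illustrative, not a recommended setting.
CUDA_VISIBLE_DEVICES=$CUDA python eval.py \
    --pretrained_model_name_or_path=$CHECKPOINT_DIR \
    --prediction_type="sample" \
    --seed=42 \
    --half_precision \
    --base_test_data_dir=$BASE_TEST_DATA_DIR \
    --task_name=$TASK_NAME \
    --mode=$MODE \
    --output_dir=$OUTPUT_DIR \
    --disparity \
    --processing_res=1024
```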