Columns: `label` (class label, 16 classes) and `latent` (the encoded latent sequence).
# Dataset Card for "latent_lsun_church_256px"
This dataset is derived from https://huggingface.co/datasets/tglcourse/lsun_church_train. Each image is cropped to a 256px square and encoded to a 4x32x32 latent representation using the same VAE as that employed by Stable Diffusion.
## Decoding

```python
from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import numpy as np
import torch

# Load the dataset
dataset = load_dataset('tglcourse/latent_lsun_church_256px')

# Load the VAE (requires access - see the model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")

latent = torch.tensor([dataset['train'][0]['latent']])  # To tensor (bs, 4, 32, 32)
latent = (1 / 0.18215) * latent  # Scale to match SD implementation
with torch.no_grad():
    image = vae.decode(latent).sample[0]  # Decode
image = (image / 2 + 0.5).clamp(0, 1)  # To (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy()  # To numpy, channels last
image = (image * 255).round().astype("uint8")  # To (0, 255) and type uint8
image = Image.fromarray(image)  # To PIL
image  # The resulting PIL image
```