
Dataset Card for "latent_afhqv2_256px"

Each row holds a class `label` (one of 3 classes) and a `latent`. Each source image is cropped to a 256×256 px square and encoded to a 4×32×32 latent representation using the same VAE as Stable Diffusion (`CompVis/stable-diffusion-v1-4`).
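To make the sizes concrete, a 3×256×256 RGB image maps to a 4×32×32 latent: an 8× spatial downsampling and a 48× reduction in element count. A minimal sketch of that arithmetic, using dummy tensors rather than the actual VAE:

```python
import torch

# Dummy stand-ins with the shapes used by this dataset
image = torch.zeros(3, 256, 256)   # RGB image: channels, height, width
latent = torch.zeros(4, 32, 32)    # VAE latent: 4 channels at 1/8 resolution

downsample = image.shape[1] // latent.shape[1]   # spatial downsampling factor
compression = image.numel() / latent.numel()     # element-count ratio
print(downsample, compression)  # 8 48.0
```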

Decoding

from diffusers import AutoencoderKL
from datasets import load_dataset
from PIL import Image
import torch

# Load the dataset
dataset = load_dataset('tglcourse/latent_afhqv2_256px')

# Load the VAE (requires access - see repo model card for info)
vae = AutoencoderKL.from_pretrained("CompVis/stable-diffusion-v1-4", subfolder="vae")

latent = torch.tensor([dataset['train'][0]['latent']])  # To tensor, shape (bs, 4, 32, 32)
latent = (1 / 0.18215) * latent  # Undo the scaling factor used by the SD implementation
with torch.no_grad():
    image = vae.decode(latent).sample[0]  # Decode to an image tensor in roughly (-1, 1)
image = (image / 2 + 0.5).clamp(0, 1)  # Rescale to (0, 1)
image = image.detach().cpu().permute(1, 2, 0).numpy()  # To numpy, channels last
image = (image * 255).round().astype("uint8")  # To (0, 255) and type uint8
image = Image.fromarray(image)  # To PIL
image  # The resulting PIL image
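The post-decode conversion steps above can be exercised on their own with a dummy decoded tensor standing in for `vae.decode(latent).sample[0]` (a float tensor in roughly [-1, 1]), so no model download is needed to check them:

```python
import numpy as np
import torch
from PIL import Image

# Dummy "decoded" output with the shape the VAE produces for this dataset
decoded = torch.linspace(-1, 1, 3 * 256 * 256).reshape(3, 256, 256)

img = (decoded / 2 + 0.5).clamp(0, 1)              # map (-1, 1) -> (0, 1)
img = img.detach().cpu().permute(1, 2, 0).numpy()  # to numpy, channels last
img = (img * 255).round().astype("uint8")          # to (0, 255) uint8
pil_img = Image.fromarray(img)                     # to PIL

print(pil_img.size, pil_img.mode)  # (256, 256) RGB
```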