
Better alternative: I advise using another dataset, https://huggingface.co/datasets/cloneofsimo/imagenet.int8, which is already compressed (only 5 GB) and uses a better latent model (the SDXL VAE).

This dataset contains the latent representations of the ImageNet dataset, produced with the Stability AI VAE stabilityai/sd-vae-ft-ema.

Every latent (the `latents` field) has shape (4, 32, 32).

If you want to recover the original images, you have to use the same model that created the latents:

from diffusers import AutoencoderKL

vae_model = "stabilityai/sd-vae-ft-ema"
vae = AutoencoderKL.from_pretrained(vae_model)
vae.eval()

The images were encoded with:

from diffusers.image_processor import VaeImageProcessor

# vaeprocess was not defined in the original card; it is presumably a
# diffusers VaeImageProcessor, which rescales [0, 1] tensors to [-1, 1]
vaeprocess = VaeImageProcessor()

images = [DEFAULT_TRANSFORM(image.convert("RGB")) for image in examples["image"]]
images = torch.stack(images)
images = vaeprocess.preprocess(images)
images = images.to(device="cuda", dtype=torch.float)
with torch.no_grad():
    latents = vae.encode(images).latent_dist.sample()

With DEFAULT_TRANSFORM defined as:

from torchvision import transforms

DEFAULT_IMAGE_SIZE = 256

DEFAULT_TRANSFORM = transforms.Compose(
    [
        transforms.Resize((DEFAULT_IMAGE_SIZE, DEFAULT_IMAGE_SIZE)),
        transforms.ToTensor(),
    ]
)
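To make the value ranges explicit: `ToTensor` maps uint8 pixels in [0, 255] to floats in [0, 1], and (assuming the preprocessor is a default `VaeImageProcessor`, whose `do_normalize` flag is on) `preprocess` then rescales [0, 1] to [-1, 1] before encoding. A minimal numpy sketch of these mappings and their inverse:

```python
import numpy as np

def to_unit_range(img_uint8):
    # ToTensor: uint8 [0, 255] -> float [0, 1]
    return img_uint8.astype(np.float32) / 255.0

def to_vae_range(x):
    # VaeImageProcessor.preprocess (default do_normalize=True): [0, 1] -> [-1, 1]
    return 2.0 * x - 1.0

def from_vae_range(y):
    # inverse mapping, used when turning decoded samples back into images
    return (y + 1.0) / 2.0

x = np.array([0, 128, 255], dtype=np.uint8)
u = to_unit_range(x)
v = to_vae_range(u)
```

This round-trips exactly: `from_vae_range(to_vae_range(u))` recovers `u`.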

The latents can be decoded back into images with:

import datasets
import torch

latent_dataset = datasets.load_dataset("Forbu14/imagenet-1k-latent")

latent = torch.tensor(latent_dataset["train"][0]["latents"])
if latent.ndim == 3:  # vae.decode expects a batch dimension: (B, 4, 32, 32)
    latent = latent.unsqueeze(0)
with torch.no_grad():
    image = vae.decode(latent).sample
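The decoded `sample` is a float tensor with values roughly in [-1, 1] (matching the normalization applied at encoding time), so it still needs postprocessing before it can be viewed. A minimal sketch, assuming a single decoded (C, H, W) array, converting it to a standard HWC uint8 image:

```python
import numpy as np

def decoded_to_uint8(sample):
    # sample: (C, H, W) float array, e.g. vae.decode(latent).sample[0].cpu().numpy(),
    # with values roughly in [-1, 1]
    img = np.clip((sample + 1.0) / 2.0, 0.0, 1.0)  # [-1, 1] -> [0, 1]
    img = (img * 255.0).round().astype(np.uint8)   # [0, 1] -> [0, 255]
    return np.transpose(img, (1, 2, 0))            # CHW -> HWC

# illustrative input standing in for a real decoded sample
fake_sample = np.zeros((3, 256, 256), dtype=np.float32)
out = decoded_to_uint8(fake_sample)
```

The resulting array can be passed to `PIL.Image.fromarray(out)` for display or saving.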