
ImageNet-1k ADM Crop 256

This dataset is a preprocessed version of ILSVRC/imagenet-1k with all images center-cropped to 256×256 pixels using the ADM (Ablated Diffusion Model) algorithm.

🎯 Purpose

Optimized for training diffusion models and other generative models that require fixed-size square images.

πŸ“Š Dataset Details

| Split | Images    | Files | Size (approx.) |
|-------|-----------|-------|----------------|
| train | 1,281,167 | 294   | ~38 GB         |
| test  | 50,000    | 28    | ~3.5 GB        |

πŸ”§ Processing Method

Center Crop Algorithm (from ADM)

The center crop implementation follows the guided-diffusion approach:

from PIL import Image
import numpy as np

def center_crop_arr(pil_image, image_size):
    """
    Center cropping implementation from ADM.
    https://github.com/openai/guided-diffusion/blob/8fb3ad9197f16bbc40620447b2742e13458d2831/guided_diffusion/image_datasets.py#L126
    """
    # Progressively downsample if image is much larger than target
    while min(*pil_image.size) >= 2 * image_size:
        pil_image = pil_image.resize(
            tuple(x // 2 for x in pil_image.size), resample=Image.BOX
        )

    # Scale so shortest side equals target size
    scale = image_size / min(*pil_image.size)
    pil_image = pil_image.resize(
        tuple(round(x * scale) for x in pil_image.size), resample=Image.BICUBIC
    )

    # Center crop to exact target size
    arr = np.array(pil_image)
    crop_y = (arr.shape[0] - image_size) // 2
    crop_x = (arr.shape[1] - image_size) // 2
    return Image.fromarray(
        arr[crop_y : crop_y + image_size, crop_x : crop_x + image_size]
    )

Why this algorithm?

  1. Progressive downsampling: Uses BOX filter for initial reduction, preserving image quality
  2. BICUBIC scaling: High-quality interpolation for final resize
  3. Exact center crop: Ensures consistent 256×256 output
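To see how these three steps interact, the size arithmetic can be traced in plain Python. This is a hypothetical `crop_plan` helper (not part of the dataset tooling) that mirrors the bookkeeping in `center_crop_arr` without touching any pixels:

```python
def crop_plan(width, height, image_size=256):
    """Trace the resize/crop arithmetic of center_crop_arr for a given input size.

    Returns (resized_w, resized_h, crop_x, crop_y). Pure size bookkeeping;
    no image data is involved.
    """
    # Step 1: progressive BOX downsampling while the short side is >= 2x target
    while min(width, height) >= 2 * image_size:
        width, height = width // 2, height // 2
    # Step 2: BICUBIC scale so the short side lands exactly on image_size
    scale = image_size / min(width, height)
    width, height = round(width * scale), round(height * scale)
    # Step 3: center-crop offsets into the resized image
    crop_x = (width - image_size) // 2
    crop_y = (height - image_size) // 2
    return width, height, crop_x, crop_y

# A 1920x1080 frame is halved twice (to 480x270), scaled to 455x256,
# then cropped 99 px in from the left edge.
print(crop_plan(1920, 1080))  # (455, 256, 99, 0)
```

Note that the short side always ends up exactly at `image_size`, so one crop offset is always 0 and the crop never pads.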

πŸ“ Data Structure

data/
β”œβ”€β”€ train-00000-of-00294.parquet
β”œβ”€β”€ train-00001-of-00294.parquet
β”œβ”€β”€ ...
β”œβ”€β”€ train-00293-of-00294.parquet
β”œβ”€β”€ test-00000-of-00028.parquet
β”œβ”€β”€ ...
└── test-00027-of-00028.parquet

πŸ“‹ Schema

| Column | Type  | Description                                |
|--------|-------|--------------------------------------------|
| image  | Image | 256×256 RGB JPEG image                     |
| label  | int64 | Class label (0-999 for train, -1 for test) |

πŸš€ Usage

With πŸ€— Datasets

from datasets import load_dataset

# Load full dataset
dataset = load_dataset("Holasyb918/imagenet-1k-adm-crop-256")

# Load specific split
train_dataset = load_dataset("Holasyb918/imagenet-1k-adm-crop-256", split="train")
test_dataset = load_dataset("Holasyb918/imagenet-1k-adm-crop-256", split="test")

# Access data
for example in train_dataset:
    image = example["image"]  # PIL Image, 256×256
    label = example["label"]  # int, 0-999

With PyTorch DataLoader

import torch
from datasets import load_dataset
from torch.utils.data import DataLoader
from torchvision import transforms

# Load dataset
dataset = load_dataset("Holasyb918/imagenet-1k-adm-crop-256", split="train")

# Define transform
transform = transforms.Compose([
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5])  # [-1, 1]
])

def collate_fn(batch):
    images = torch.stack([transform(x["image"]) for x in batch])
    labels = torch.tensor([x["label"] for x in batch])
    return {"image": images, "label": labels}

# Create DataLoader
dataloader = DataLoader(dataset, batch_size=32, collate_fn=collate_fn, num_workers=4)

πŸ“œ License

This dataset follows the same license terms as the original ImageNet dataset. Please ensure you comply with ImageNet's terms of use.

πŸ™ Acknowledgments

πŸ“ Citation

If you use this dataset, please cite the original ImageNet paper:

@inproceedings{deng2009imagenet,
  title={ImageNet: A large-scale hierarchical image database},
  author={Deng, Jia and Dong, Wei and Socher, Richard and Li, Li-Jia and Li, Kai and Fei-Fei, Li},
  booktitle={2009 IEEE Conference on Computer Vision and Pattern Recognition},
  pages={248--255},
  year={2009}
}

πŸ“ This README was generated with the assistance of AI (Claude).
