# ZImage-Turbo WebDataset

Generated images from the ZImage-Turbo model using DiffusionDB prompts.
## Generation Details
- Hardware: 8x NVIDIA RTX 3090 GPUs
- Generation Time: ~2 days
- Estimated Cost: ~$70 (cloud compute)
## Dataset Statistics
- Total Samples: 211,081
- Total Shards: 216
- Samples per Shard: ~1000
## Shard Naming Convention

Tarballs are named `{base_resolution}-{aspect_ratio}-{shard_num:04d}-of-{total_shards:04d}.tar`, where `total_shards` counts the shards within that configuration.

For example: `1024-16_9-0001-of-0012.tar`
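The naming scheme can be reproduced with a plain format string. A minimal sketch (the `shard_names` helper is ours, not part of the dataset tooling):

```python
def shard_names(base_resolution: int, aspect_ratio: str, total_shards: int):
    """Build the tarball names for one resolution/aspect configuration.

    Follows {base_resolution}-{aspect_ratio}-{shard_num:04d}-of-{total_shards:04d}.tar,
    with shard numbers starting at 0001.
    """
    return [
        f"{base_resolution}-{aspect_ratio}-{shard:04d}-of-{total_shards:04d}.tar"
        for shard in range(1, total_shards + 1)
    ]

# The 1024 / 16:9 configuration has 12 shards (see the table below):
names = shard_names(1024, "16_9", 12)
print(names[0])   # 1024-16_9-0001-of-0012.tar
print(names[-1])  # 1024-16_9-0012-of-0012.tar
```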
## Available Configurations
| Base Resolution | Aspect Ratio | Samples | Shards |
|---|---|---|---|
| 1024 | 16_9 | 11,801 | 12 |
| 1024 | 1_1 | 11,859 | 12 |
| 1024 | 1_2.21 | 11,693 | 12 |
| 1024 | 2.21_1 | 11,793 | 12 |
| 1024 | 4_3 | 11,646 | 12 |
| 1024 | 9_16 | 11,814 | 12 |
| 512 | 16_9 | 11,657 | 12 |
| 512 | 1_1 | 11,761 | 12 |
| 512 | 1_2.21 | 11,580 | 12 |
| 512 | 2.21_1 | 11,921 | 12 |
| 512 | 4_3 | 11,760 | 12 |
| 512 | 9_16 | 11,743 | 12 |
| 640 | 16_9 | 11,528 | 12 |
| 640 | 1_1 | 11,730 | 12 |
| 640 | 1_2.21 | 11,552 | 12 |
| 640 | 2.21_1 | 11,809 | 12 |
| 640 | 4_3 | 11,592 | 12 |
| 640 | 9_16 | 11,842 | 12 |
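As a sanity check, the per-configuration counts above add up to the stated totals (18 configurations, 211,081 samples):

```python
# Per-configuration sample counts, copied from the table above
samples = {
    (1024, "16_9"): 11801, (1024, "1_1"): 11859, (1024, "1_2.21"): 11693,
    (1024, "2.21_1"): 11793, (1024, "4_3"): 11646, (1024, "9_16"): 11814,
    (512, "16_9"): 11657, (512, "1_1"): 11761, (512, "1_2.21"): 11580,
    (512, "2.21_1"): 11921, (512, "4_3"): 11760, (512, "9_16"): 11743,
    (640, "16_9"): 11528, (640, "1_1"): 11730, (640, "1_2.21"): 11552,
    (640, "2.21_1"): 11809, (640, "4_3"): 11592, (640, "9_16"): 11842,
}
print(len(samples), sum(samples.values()))  # 18 211081
```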
## File Format
Each tar shard contains pairs of files:
- `{sample_id}.jxl` - JPEG-XL encoded image
- `{sample_id}.json` - Metadata including prompt, seed, dimensions, etc.
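WebDataset groups members sharing a basename into one sample automatically, but the pairing is simple enough to reproduce with the standard library alone. A sketch (the `group_pairs` helper is ours, and the member list is a two-file example):

```python
import os
from collections import defaultdict

def group_pairs(member_names):
    """Group tar member names into {sample_id: {ext: member_name}} pairs."""
    pairs = defaultdict(dict)
    for name in member_names:
        stem, ext = os.path.splitext(name)
        pairs[stem][ext.lstrip(".")] = name
    return dict(pairs)

members = ["0003a39a533a5de9.jxl", "0003a39a533a5de9.json"]
pairs = group_pairs(members)
print(pairs["0003a39a533a5de9"]["json"])  # 0003a39a533a5de9.json
```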
## Usage with WebDataset

### Basic Usage
```python
import json

import webdataset as wds

# Load a specific resolution/aspect configuration
dataset = wds.WebDataset("1024-16_9-{0001..0012}.tar")

# Or load all shards
dataset = wds.WebDataset("*.tar")

for sample in dataset:
    image_bytes = sample["jxl"]
    metadata = json.loads(sample["json"])
    print(metadata["prompt"])
```
### Same-Shape Batching with Multiple Loaders
For training, you often want batches where all images have the same shape. This example shows how to create separate WebDataset loaders keyed by base resolution and aspect ratio, allowing you to sample same-shape batches:
```python
import json
import random

import webdataset as wds

# Configuration
BASE_RESOLUTIONS = [512, 640, 1024]
ASPECT_RATIOS = ["1_1", "4_3", "16_9", "9_16", "2.21_1", "1_2.21"]
DATASET_PATH = "/ml/datasets/hf-zimage-turbo"

def decode_sample(sample):
    """Decode a WebDataset sample."""
    return {
        "image": sample["jxl"],  # Raw JPEG-XL bytes - decode with pillow-jxl or imagecodecs
        "metadata": json.loads(sample["json"]),
    }

def create_datasets_by_shape(dataset_path: str, batch_size: int = 8):
    """
    Create a dictionary of WebDataset loaders keyed by (base_res, aspect_ratio).
    Each loader yields batches of same-shape images.
    """
    datasets = {}
    for base_res in BASE_RESOLUTIONS:
        for aspect in ASPECT_RATIOS:
            # Build the shard pattern for this resolution/aspect combo
            pattern = f"{dataset_path}/{base_res}-{aspect}-{{0001..0012}}.tar"
            try:
                ds = (
                    wds.WebDataset(pattern)
                    .shuffle(1000)
                    .map(decode_sample)
                    .batched(batch_size)
                )
                datasets[(base_res, aspect)] = ds
            except Exception as e:
                print(f"Skipping {base_res}-{aspect}: {e}")
    return datasets

def round_robin_sampler(datasets: dict, steps_per_epoch: int = 1000):
    """
    Sample from datasets in round-robin fashion.
    Each batch contains same-shape images.
    """
    # Create iterators for each dataset
    iterators = {key: iter(ds) for key, ds in datasets.items()}
    keys = list(iterators.keys())
    for step in range(steps_per_epoch):
        # Round-robin through shape configurations
        key = keys[step % len(keys)]
        try:
            batch = next(iterators[key])
        except StopIteration:
            # Restart this iterator
            iterators[key] = iter(datasets[key])
            batch = next(iterators[key])
        yield key, batch

def weighted_sampler(datasets: dict, weights: dict = None):
    """
    Sample from datasets with optional weights.
    Useful for emphasizing certain resolutions/aspects during training.
    """
    keys = list(datasets.keys())
    iterators = {key: iter(ds) for key, ds in datasets.items()}
    if weights is None:
        weights = {key: 1.0 for key in keys}
    total_weight = sum(weights.values())
    probs = [weights[k] / total_weight for k in keys]
    while True:
        key = random.choices(keys, weights=probs)[0]
        try:
            batch = next(iterators[key])
        except StopIteration:
            iterators[key] = iter(datasets[key])
            batch = next(iterators[key])
        yield key, batch

# Example usage
if __name__ == "__main__":
    datasets = create_datasets_by_shape(DATASET_PATH, batch_size=4)
    print(f"Created {len(datasets)} dataset loaders")

    # Training loop with same-shape batches
    for (base_res, aspect), batch in round_robin_sampler(datasets, steps_per_epoch=100):
        images = batch["image"]       # List of JPEG-XL bytes
        metadata = batch["metadata"]  # List of metadata dicts
        print(f"Batch from {base_res}-{aspect}: {len(images)} images")
        # Your training code here...
```
### Streaming from Hugging Face Hub
```python
import webdataset as wds

# Stream a specific configuration from the HF Hub
url = "https://huggingface.co/datasets/YOUR_USERNAME/zimage-turbo/resolve/main/1024-1_1-{0001..0012}.tar"
dataset = wds.WebDataset(url).shuffle(1000)

for sample in dataset:
    # Process sample...
    pass
```
## Metadata Fields
Each JSON file contains:
- `sample_id`: Unique identifier
- `prompt`: The text prompt used
- `seed`: Random seed for reproducibility
- `base_resolution`: Base resolution (512, 640, or 1024)
- `aspect_ratio`: Aspect ratio key (`1_1`, `4_3`, `16_9`, etc.)
- `width`, `height`: Actual pixel dimensions
- `num_inference_steps`: Steps used (8, 10, or 12)
- `guidance_scale`: CFG scale (1.0 for distilled model)
- `model_id`: Model identifier
- `timestamp`: Generation timestamp
- `generation_time_ms`: Time to generate in milliseconds
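The `width`/`height` fields are authoritative, but they appear consistent with area-preserving aspect bucketing (width × height ≈ base_resolution², with width:height matching the aspect key). Below is a sketch under that assumption; the `bucket_dimensions` helper is ours, and the rounding to a multiple of 16 is a guess, so treat its output as approximate and prefer the metadata fields:

```python
import math

def bucket_dimensions(base_resolution: int, aspect_ratio: str, multiple: int = 16):
    """Approximate pixel dimensions for an aspect bucket.

    Assumes area-preserving bucketing (width * height ~= base_resolution**2),
    rounded down to a multiple of `multiple`. The exact rounding rule used
    during generation is unknown; use the width/height metadata when available.
    """
    w_ratio, h_ratio = (float(x) for x in aspect_ratio.split("_"))
    scale = math.sqrt(w_ratio / h_ratio)
    width = round(base_resolution * scale) // multiple * multiple
    height = round(base_resolution / scale) // multiple * multiple
    return width, height

print(bucket_dimensions(1024, "1_1"))   # (1024, 1024)
print(bucket_dimensions(1024, "16_9"))  # height works out to 768, matching sample metadata
```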