---
license: cc0-1.0
pretty_name: The Met Open Access — apple-mobileclip embeddings
tags:
- art
- museum
- embeddings
- apple-mobileclip
configs:
- config_name: default
  data_files:
  - split: train
    path: default/train/apple-mobileclip-*.parquet
---
# metmuseum/openaccess-embeddings-apple-mobileclip
Image embeddings for every public-domain artwork in metmuseum/openaccess, produced by apple/MobileCLIP-S2-OpenCLIP.
| Column | Type | Notes |
|---|---|---|
| `objectID` | int64 | Primary key — matches `objectID` in metmuseum/openaccess |
| `embedding` | `list<float32>` | L2-normalised, dim = 512 |
| `model` | string | Source model id |
| `dim` | int32 | Embedding dimension |
Image bytes are not stored here; join against the main dataset to recover them.
Embedding spec: dim=512, expected image size=256px.
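To embed new images in the same space (for example, to use an outside image as a query), the same checkpoint can be loaded through `open_clip`. A minimal sketch, assuming the `open_clip_torch`, `torch`, and `Pillow` packages are installed; `query.jpg` is a hypothetical local file:

```python
import torch
import open_clip
from PIL import Image

# Load the checkpoint that produced this dataset; the returned preprocess
# transform handles resizing to the expected 256px input.
model, preprocess = open_clip.create_model_from_pretrained(
    "hf-hub:apple/MobileCLIP-S2-OpenCLIP"
)
model.eval()

image = preprocess(Image.open("query.jpg").convert("RGB")).unsqueeze(0)
with torch.no_grad():
    feat = model.encode_image(image)
    feat = feat / feat.norm(dim=-1, keepdim=True)  # L2-normalise to match the stored vectors

print(feat.shape)  # torch.Size([1, 512])
```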
## Joining with the main dataset
```python
from datasets import load_dataset

meta = load_dataset("metmuseum/openaccess", split="train")
emb = load_dataset("metmuseum/openaccess-embeddings-apple-mobileclip", split="train")

# Build an objectID -> embedding lookup, then attach to the metadata rows.
lookup = {r["objectID"]: r["embedding"] for r in emb}
joined = meta.map(lambda r: {"embedding": lookup.get(r["objectID"])})
print(joined[0].keys())
```
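Metadata rows with no matching embedding (if any) end up with `embedding` set to `None` by `lookup.get`; if you only want rows that have a vector, they can be dropped afterwards, as in this sketch:

```python
# Keep only rows where an embedding was attached.
joined = joined.filter(lambda r: r["embedding"] is not None)
```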
## Nearest-neighbour example
```python
import numpy as np
from datasets import load_dataset

emb = load_dataset("metmuseum/openaccess-embeddings-apple-mobileclip", split="train")

ids = np.array(emb["objectID"])
mat = np.array(emb["embedding"], dtype=np.float32)  # already L2-normalised

query = mat[0]
scores = mat @ query
top = np.argsort(-scores)[:10]
print(list(zip(ids[top].tolist(), scores[top].tolist())))
```
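Because MobileCLIP is a joint image-text model, the same matrix can also be searched with a text prompt encoded by the matching text tower. A sketch assuming the `open_clip_torch` package; the prompt string is only an illustration:

```python
import numpy as np
import torch
import open_clip
from datasets import load_dataset

model, _ = open_clip.create_model_from_pretrained("hf-hub:apple/MobileCLIP-S2-OpenCLIP")
tokenizer = open_clip.get_tokenizer("hf-hub:apple/MobileCLIP-S2-OpenCLIP")
model.eval()

emb = load_dataset("metmuseum/openaccess-embeddings-apple-mobileclip", split="train")
ids = np.array(emb["objectID"])
mat = np.array(emb["embedding"], dtype=np.float32)

# Encode the prompt and L2-normalise so dot products are cosine similarities.
with torch.no_grad():
    tokens = tokenizer(["a woodblock print of a wave"])
    query = model.encode_text(tokens)
    query = (query / query.norm(dim=-1, keepdim=True)).squeeze(0).numpy().astype(np.float32)

scores = mat @ query
top = np.argsort(-scores)[:10]
print(list(zip(ids[top].tolist(), scores[top].tolist())))
```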
Generated by et-openaccess-embeddings.