---
license: cc0-1.0
pretty_name: The Met Open Access apple-mobileclip embeddings
tags:
- art
- museum
- embeddings
- apple-mobileclip
configs:
- config_name: default
data_files:
- split: train
path: default/train/apple-mobileclip-*.parquet
---
# metmuseum/openaccess-embeddings-apple-mobileclip
Image embeddings for every public-domain artwork in [`metmuseum/openaccess`](https://huggingface.co/datasets/metmuseum/openaccess), produced by **apple/MobileCLIP-S2-OpenCLIP**.
| Column | Type | Notes |
|--------|------|-------|
| `objectID` | int64 | Primary key — matches `objectID` in `metmuseum/openaccess` |
| `embedding` | list<float32> | L2-normalised, dim = 512 |
| `model` | string | Source model id |
| `dim` | int32 | Embedding dimension |
Image bytes are **not** stored here; join on `objectID` against the main dataset to recover them.
Embedding spec: each vector is 512-dimensional, computed from images resized to 256 px.
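Because the stored vectors are already L2-normalised, a plain dot product equals cosine similarity. A minimal sketch with synthetic stand-in vectors (the real ones come from the `embedding` column):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for two 512-dim embedding vectors.
a = rng.standard_normal(512).astype(np.float32)
b = rng.standard_normal(512).astype(np.float32)

# L2-normalise, as the dataset's embeddings already are.
a /= np.linalg.norm(a)
b /= np.linalg.norm(b)

dot = float(a @ b)  # dot product of unit vectors
cosine = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
assert abs(dot - cosine) < 1e-6  # identical once the vectors are unit-length
```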
## Joining with the main dataset
```python
from datasets import load_dataset
meta = load_dataset("metmuseum/openaccess", split="train")
emb = load_dataset("metmuseum/openaccess-embeddings-apple-mobileclip", split="train")
# Build an objectID -> embedding lookup (materialises all embeddings in memory),
# then attach an embedding to each metadata row.
lookup = {r["objectID"]: r["embedding"] for r in emb}
joined = meta.map(lambda r: {"embedding": lookup.get(r["objectID"])})
print(joined[0].keys())
```
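If you prefer a tabular join, a pandas merge on `objectID` works too. A sketch with tiny hypothetical frames standing in for the two datasets (in practice, use `meta.to_pandas()` and `emb.to_pandas()`):

```python
import pandas as pd

# Hypothetical stand-in data; real frames come from Dataset.to_pandas().
meta_df = pd.DataFrame({"objectID": [1, 2, 3], "title": ["A", "B", "C"]})
emb_df = pd.DataFrame({"objectID": [2, 3], "embedding": [[0.1, 0.2], [0.3, 0.4]]})

# Left join keeps every artwork; rows without an embedding get NaN.
joined = meta_df.merge(emb_df, on="objectID", how="left")
print(joined)
```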
## Nearest-neighbour example
```python
import numpy as np
from datasets import load_dataset
emb = load_dataset("metmuseum/openaccess-embeddings-apple-mobileclip", split="train")
ids = np.array(emb["objectID"])
mat = np.array(emb["embedding"], dtype=np.float32) # already L2-normalised
query = mat[0]  # use the first artwork as the query
scores = mat @ query  # cosine similarity, since rows are unit-length
top = np.argsort(-scores)[:10]  # top-10; index 0 is the query itself (score 1.0)
print(list(zip(ids[top].tolist(), scores[top].tolist())))
```
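For hundreds of thousands of rows, `np.argpartition` avoids sorting the whole score vector: it selects the k largest in O(n), and only those k are then sorted. A sketch on random unit vectors standing in for the real embedding matrix:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the real embedding matrix: random rows, L2-normalised.
mat = rng.standard_normal((10_000, 512)).astype(np.float32)
mat /= np.linalg.norm(mat, axis=1, keepdims=True)

query = mat[0]
scores = mat @ query
k = 10

# Select the k largest scores without a full sort, then order just those k.
top = np.argpartition(-scores, k)[:k]
top = top[np.argsort(-scores[top])]
print(top, scores[top])
```

The query row itself comes back first with score 1.0, exactly as in the full-sort version above.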
Generated by [`et-openaccess-embeddings`](https://github.com/metmuseum/et-openaccess-embeddings).