AToMiC Prebuilt Indexes

Prebuilt FAISS indexes and encoded topics for the TREC AToMiC dense-retrieval baselines (openai/clip-vit-base-patch32).

Example Usage:

Reproduction

Toolkits: https://github.com/TREC-AToMiC/AToMiC/tree/main/examples/dense_retriever_baselines

# Skip the encode and index steps; search directly with the prebuilt indexes and topics

# Text-to-image (t2i): search the image index with text topics
python search.py \
    --topics topics/openai.clip-vit-base-patch32.text.validation \
    --index indexes/openai.clip-vit-base-patch32.image.faiss.flat \
    --hits 1000 \
    --output runs/run.openai.clip-vit-base-patch32.validation.t2i.large.trec

# Image-to-text (i2t): search the text index with image topics
python search.py \
    --topics topics/openai.clip-vit-base-patch32.image.validation \
    --index indexes/openai.clip-vit-base-patch32.text.faiss.flat \
    --hits 1000 \
    --output runs/run.openai.clip-vit-base-patch32.validation.i2t.large.trec
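
As a quick sanity check, a prebuilt flat index can also be opened directly with faiss (a minimal sketch, assuming faiss-cpu is installed and the indexes/ directory has been downloaded locally; each index directory contains a single file named index, matching the paths used below):

import faiss

# Load the flat (exact-search) image index
index = faiss.read_index('indexes/openai.clip-vit-base-patch32.image.faiss.flat/index')
print(index.ntotal, index.d)  # number of indexed vectors and their dimensionality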

Explore AToMiC datasets

import torch
from pathlib import Path
from datasets import load_dataset
from transformers import AutoModel, AutoProcessor

INDEX_DIR='indexes'
INDEX_NAME='openai.clip-vit-base-patch32.image.faiss.flat'
QUERY = 'Elizabeth II'

# Load the image collection and attach the prebuilt FAISS index to it
images = load_dataset('TREC-AToMiC/AToMiC-Images-v0.2', split='train')
images.load_faiss_index(index_name=INDEX_NAME, file=Path(INDEX_DIR, INDEX_NAME, 'index'))

model = AutoModel.from_pretrained('openai/clip-vit-base-patch32')
processor = AutoProcessor.from_pretrained('openai/clip-vit-base-patch32')

# prebuilt indexes contain L2-normalized vectors, so normalize the query the same way
with torch.no_grad():
    q_embedding = model.get_text_features(**processor(text=QUERY, return_tensors="pt"))
    q_embedding = torch.nn.functional.normalize(q_embedding, dim=-1).detach().numpy()

scores, retrieved = images.get_nearest_examples(INDEX_NAME, q_embedding, k=10)
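
get_nearest_examples returns the top-k scores together with a dict mapping column names to lists of field values; the exact columns depend on the AToMiC-Images schema, so inspect the keys first (a minimal sketch):

# Scores are inner products; with L2-normalized vectors this equals cosine similarity
print(retrieved.keys())  # available columns in the retrieved examples
for rank, score in enumerate(scores, start=1):
    print(f'{rank:2d} {score:.4f}')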