
TreeOfLife-200M Embeddings

Work in progress. This dataset and card are under active development.

Pre-computed image embeddings for 200M+ images from the TreeOfLife-200M dataset, sorted by taxonomic hierarchy for efficient filtered access.

This repository hosts embedding configs for TreeOfLife-200M. Each config corresponds to a different embedding model and/or precision. Currently available: BioCLIP 2 (float16). Additional configs (e.g., BioCLIP 2.5 Huge) will be added as new embeddings are generated.

We recommend using DuckDB to access this dataset.

This repository does not contain or redistribute any images. It contains only pre-computed embedding vectors and metadata derived from TreeOfLife-200M.

Dataset Details

Dataset Description

This repository contains pre-computed embedding vectors derived from the TreeOfLife-200M dataset. Each row pairs a TreeOfLife-200M image (identified by uuid) with its embedding vector and associated taxonomic metadata. The data is sorted by taxonomy and stored in Parquet format with page indexes, enabling fast filtered access through Hugging Face.

Supported Tasks

  • Feature extraction: Retrieve pre-computed embedding vectors for any (taxonomic) subset without re-running the image embedding model.
  • Taxonomy-filtered retrieval: Query embeddings by kingdom, order, family, genus, or species using DuckDB with page-index predicate pushdown.
  • Downstream training: Use filtered embedding subsets for fine-tuning classifiers, training probes, or building custom indices.
  • Exact similarity computation: Compute exact (non-approximate) pairwise similarities between embedding vectors (e.g., cosine similarity).

Dataset Structure

Configs

Each config corresponds to a different embedding model and precision. Users load a specific config:

# Python
from datasets import load_dataset
ds = load_dataset("imageomics/TreeOfLife-200M-Embeddings", "bioclip-2_float16")
| Config | Model | Precision | Dimensions | Files | Size | Rows |
|---|---|---|---|---|---|---|
| `bioclip-2_float16` | BioCLIP 2 (ViT-L/14) | float16 | 768 | 666 | ~346 GB | 233,055,986 |

Data Fields

| Column | Type | Description |
|---|---|---|
| `uuid` | string | Unique image identifier (matches TreeOfLife-200M) |
| `emb` | `fixed_size_list<float>[N]` | Image embedding vector (dimensions depend on config) |
| `source_dataset` | string | Data source: gbif, eol, bioscan, or fathomnet |
| `source_id` | string | Unique identifier from source (e.g., GBIF occurrence ID) |
| `kingdom` | string | Taxonomic kingdom |
| `phylum` | string | Taxonomic phylum |
| `class` | string | Taxonomic class |
| `order` | string | Taxonomic order |
| `family` | string | Taxonomic family |
| `genus` | string | Taxonomic genus |
| `species` | string | Species epithet (e.g., plexippus; use `scientific_name` for the full binomial Danaus plexippus) |
| `scientific_name` | string | Full scientific name |
| `common_name` | string | Vernacular/common name |
| `publisher` | string | Organization that published the data (GBIF records only; NULL for non-GBIF) |
| `basisOfRecord` | string | GBIF basis of record (GBIF records only; NULL for non-GBIF) |
| `identifier` | string | Source image URL |
| `img_type` | string | Image type (e.g., Citizen Science, Museum Specimen: Fungi, Camera-trap); GBIF records only, NULL for non-GBIF |

For more background on metadata columns, see the TreeOfLife-200M data field descriptions.

Data Organization

  • Sort order: source_dataset > kingdom > phylum > class > order > family > genus > species > common_name
  • Row groups: 50,000 rows each with column statistics and page indexes
  • Compression: ZSTD level 3
  • Precision: config-dependent (see Configs table)
  • File naming: train-00000-of-NNNNN.parquet (count depends on config)

Data Splits

Single train split containing all 233,055,986 rows.

Usage

We recommend using DuckDB to access this dataset. All examples below use the bioclip-2_float16 config; replace with the appropriate config name for other embeddings.

order and class are SQL reserved words: always quote them as "order" and "class".

Choosing between download vs. remote query. Downloading the full config is the preferred path. Once local, queries run in seconds, read access is deterministic, and there are no network failure modes. The tradeoffs: it requires ~346 GB of disk, and users must occasionally re-sync as the dataset is updated on Hugging Face (hf download skips unchanged files based on LFS hashes, so subsequent syncs only transfer what changed). Remote queries over hf:// are still useful for quick checks such as row counts, metadata browsing, aggregate statistics, and fetching a single species' embeddings. However, at the time of writing, COPY of larger slices (more than ~20 MB of emb data spanning multiple files) is unstable and frequently fails mid-transfer with TProtocolException. For any repeated batch access to embeddings at scale, download first.

Download, then Query Locally (Recommended)

The fastest and most reliable workflow is to download the config once, then run DuckDB queries against the local files. All filtered slices return in seconds thanks to ZSTD decompression and page-index predicate pushdown.

Benchmark (Cardinal login node, single session): full download ~6 min at ~1 GB/s. Once local, a Felidae slice (152,867 rows, 466 MB) is extracted in ~6 s.

1. Download the config

hf download imageomics/TreeOfLife-200M-Embeddings \
    --repo-type dataset \
    --include "bioclip-2_float16/*" \
    --local-dir ./data

Check if your local copy is in sync. Re-run the same hf download command. It performs an incremental sync: files whose LFS hashes already match the remote are skipped with no bytes transferred, and only new or changed files are downloaded. A fully in-sync local copy completes in about 1 second.

2. Query locally with DuckDB

import duckdb

con = duckdb.connect()
glob = "./data/bioclip-2_float16/*.parquet"

# Extract a species slice to a new file (fast: ~6s for 150K rows)
con.sql(f"""
    COPY (
        SELECT * FROM read_parquet('{glob}')
        WHERE family = 'Felidae'
    ) TO 'felidae.parquet'
""")

# Aggregate across the full config (fast: page index + dictionary encoding)
con.sql(f"""
    SELECT "order", COUNT(*) AS n
    FROM read_parquet('{glob}')
    WHERE "order" IS NOT NULL
    GROUP BY "order"
    ORDER BY n DESC
    LIMIT 10
""").show()
# CLI equivalent
duckdb -c "
    COPY (SELECT * FROM read_parquet('./data/bioclip-2_float16/*.parquet')
          WHERE \"order\" = 'Primates')
    TO 'primates.parquet';
"

Slice sizes (after local extraction)

| Slice | Rows | Size on disk | Local COPY time* |
|---|---|---|---|
| `scientific_name = 'Puma concolor'` | 11,811 | 36 MB | 1.7 s |
| `family = 'Felidae'` | 152,867 | 466 MB | 5.9 s |
| `"order" = 'Primates'` | 164,239 | 501 MB | 5.8 s |

\*Measured on the Cardinal cluster login node with the full config already downloaded locally.

Query Remotely

For aggregate queries and small embedding slices, you can query directly from Hugging Face with DuckDB. The hf:// protocol streams only the page ranges needed, so filtered aggregations typically return in ~10 s and a single-species COPY finishes in under a minute.

Limitations of remote COPY: sustained multi-file reads from hf:// (e.g., extracting >~20 MB of embeddings across many files) can be interrupted by the Hugging Face CDN and fail with TProtocolException: Invalid data. If this happens, fall back to the download-then-query workflow above.

Python

import duckdb

con = duckdb.connect()
# Authenticate for higher rate limits (recommended)
con.execute("CREATE SECRET (TYPE HUGGINGFACE, TOKEN 'hf_...')")

glob = "hf://datasets/imageomics/TreeOfLife-200M-Embeddings/bioclip-2_float16/*.parquet"

# Count by taxonomic order (aggregate: fast, ~10s)
con.sql(f"""
    SELECT "order", COUNT(*) AS n
    FROM read_parquet('{glob}')
    WHERE "order" IS NOT NULL
    GROUP BY "order"
    ORDER BY n DESC
    LIMIT 10
""").show()

# Fetch embeddings for a single species (~18 MB, safe slice size)
df = con.sql(f"""
    SELECT uuid, emb
    FROM read_parquet('{glob}')
    WHERE scientific_name = 'Puma concolor'
""").df()
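Once a slice like the one above is in memory, exact pairwise similarity is a single matrix product over normalized vectors. A sketch: the `np.stack(df["emb"].to_numpy())` conversion is an assumption about how the list column materializes in pandas, and the demo below runs on random stand-in vectors rather than real embeddings.

```python
import numpy as np

def pairwise_cosine(embs: np.ndarray) -> np.ndarray:
    """Exact pairwise cosine similarity for an (n, d) embedding matrix."""
    normed = embs / np.linalg.norm(embs, axis=1, keepdims=True)
    return normed @ normed.T

# With the `df` fetched above (assumed column layout):
#   embs = np.stack(df["emb"].to_numpy()).astype(np.float32)
#   sims = pairwise_cosine(embs)

# Toy check with random vectors standing in for embeddings:
rng = np.random.default_rng(0)
sims = pairwise_cosine(rng.normal(size=(4, 8)))
```

The diagonal is 1 (each vector is maximally similar to itself) and the matrix is symmetric, which makes a quick sanity check easy.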

CLI

duckdb -c "
    SELECT species, COUNT(*) AS n
    FROM read_parquet('hf://datasets/imageomics/TreeOfLife-200M-Embeddings/bioclip-2_float16/*.parquet')
    WHERE family = 'Nymphalidae'
    GROUP BY species
    ORDER BY n DESC
    LIMIT 10;
"

The first query in a session incurs a one-time ~3 min glob-resolution overhead: DuckDB expands the *.parquet wildcard by making HTTP requests to Hugging Face to discover all 666 filenames. Subsequent queries in the same session reuse the cached file list.
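One way to skip glob resolution entirely is to construct the file list yourself from the documented naming pattern. A sketch, under the assumption that the 666 files follow train-00000-of-00666.parquet through train-00665-of-00666.parquet:

```python
# Assumption: files follow the documented train-XXXXX-of-00666.parquet pattern.
base = "hf://datasets/imageomics/TreeOfLife-200M-Embeddings/bioclip-2_float16"
files = [f"{base}/train-{i:05d}-of-00666.parquet" for i in range(666)]

# DuckDB's read_parquet also accepts an explicit list of paths, e.g.
# (untested sketch):
#   con.sql(f"SELECT COUNT(*) FROM read_parquet({files!r})")
```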

Dataset Creation

Curation Rationale

The TreeOfLife-200M dataset contains 200M+ organism images totaling ~92 TB. Running an embedding model over this corpus is expensive. This repository provides pre-computed embeddings to enable downstream tasks such as:

  • Training taxonomic classifiers or linear probes on embedding subsets
  • Computing exact pairwise similarity between organisms
  • Building custom search indices or retrieval systems
  • Analyzing embedding space structure across taxonomic groups
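As an illustration of the first use case, the simplest possible probe on an embedding subset is a nearest-centroid classifier: one mean vector per class. A sketch on toy Gaussian clusters standing in for two species' embeddings (all data here is synthetic, not drawn from this dataset):

```python
import numpy as np

def fit_centroids(embs: np.ndarray, labels: np.ndarray):
    """Compute the per-class mean embedding."""
    classes = np.unique(labels)
    centroids = np.stack([embs[labels == c].mean(axis=0) for c in classes])
    return classes, centroids

def predict(embs: np.ndarray, classes: np.ndarray, centroids: np.ndarray) -> np.ndarray:
    """Assign each embedding to the class with the nearest centroid."""
    dists = np.linalg.norm(embs[:, None, :] - centroids[None, :, :], axis=-1)
    return classes[dists.argmin(axis=1)]

# Two well-separated toy clusters standing in for two species:
rng = np.random.default_rng(0)
embs = np.concatenate([
    rng.normal(loc=0.0, size=(50, 16)),
    rng.normal(loc=5.0, size=(50, 16)),
])
labels = np.array([0] * 50 + [1] * 50)

classes, centroids = fit_centroids(embs, labels)
preds = predict(embs, classes, centroids)
```

In practice `embs` and `labels` would come from a DuckDB query over this dataset (e.g., `emb` and `family` columns for a filtered slice), and a linear probe would replace the centroid rule.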

Source Data

Data Collection and Processing

  1. Embedding generation. All images in TreeOfLife-200M are embedded using a vision foundation model (see Configs for model details). Each image produces a fixed-dimensional vector.
  2. Global sort. All rows are sorted by source_dataset > kingdom > phylum > class > order > family > genus > species > common_name, so taxonomically similar rows are adjacent. This enables page-index predicate pushdown for fast filtered access.
  3. Precision. Embedding precision is config-dependent (e.g., float16 for bioclip-2_float16).
  4. Parquet write. PyArrow writes files targeting ~500 MB each, with ZSTD level 3 compression, page indexes, and 50,000-row row groups for optimal streaming and query performance.

Source Data Producers

Annotations

This dataset does not include annotations created specifically for this repository. All taxonomic labels, common names, and provenance metadata are inherited directly from the TreeOfLife-200M catalog, which aligned the taxonomic names provided by GBIF, EOL, BIOSCAN-5M, and FathomNet using TaxonoPy. See the TreeOfLife-200M dataset card for details on annotation processes and provenance.

Personal and Sensitive Information

This repository does not contain or redistribute any images. However, the metadata includes URLs (identifier column) pointing to source images that may occasionally contain humans in the background (e.g., citizen science observations). The upstream TreeOfLife-200M dataset applies human face detection filtering to minimize such occurrences. See the TreeOfLife-200M dataset card for details.

Considerations for Using the Data

Bias, Risks, and Limitations

This dataset inherits biases from TreeOfLife-200M:

  • Taxonomic coverage is uneven. Citizen science observations (primarily iNaturalist) comprise ~60% of the data, skewing representation toward charismatic species and regions where citizen science is most active.
  • Incomplete taxonomic labels. Not all records have complete taxonomy at every rank. Records may have NULL values at lower ranks and incertae sedis at kingdom level due to biodiversity data complexities (unresolved taxonomy, ambiguous identifications, etc.).
  • Embedding bias. Similarity is determined by the embedding model, which may encode biases from its training data.

Recommendations

  • When using results for research, verify taxonomic labels against authoritative sources, since labels are inherited from community-contributed data.
  • Be aware of geographic and taxonomic sampling biases when interpreting embedding-based analyses.
  • For issues with specific records, report via the Community tab.

Related Datasets

Licensing Information

The embedding vectors in this repository are dedicated to the public domain under the CC0 1.0 Universal Public Domain Dedication. The metadata inherits licensing terms from its upstream sources (GBIF, EOL, BIOSCAN-5M, FathomNet); see the TreeOfLife-200M licensing information for per-record details.

Important: This repository does not contain or redistribute any images. The metadata includes URLs pointing to images hosted by their original sources. Individual images retain their original source licenses. Users must respect each image's original license terms when accessing images via the provided URLs.

We ask that you cite this dataset and associated papers if you make use of it in your research.

Citation

Data:

@misc{treeoflife_200m_embeddings,
  author = {Zhang, Net and Campolongo, Elizabeth and Thompson, Matthew and Gu, Jianyang},
  title = {{TreeOfLife-200M Embeddings}},
  year = {2026},
  url = {https://huggingface.co/datasets/imageomics/TreeOfLife-200M-Embeddings},
  publisher = {Hugging Face}
}

Please also cite the source dataset and embedding model (include data sources as appropriate):

TreeOfLife-200M:

@dataset{treeoflife_200m,
  title = {{T}ree{O}f{L}ife-200{M} (Revision a8f38b4)}, 
  author = {Jianyang Gu and Samuel Stevens and Elizabeth G Campolongo and Matthew J Thompson and Net Zhang and Jiaman Wu and Andrei Kopanev and Zheda Mai and Alexander E. White and James Balhoff and Wasila M Dahdul and Daniel Rubenstein and Hilmar Lapp and Tanya Berger-Wolf and Wei-Lun Chao and Yu Su},
  year = {2025},
  url = {https://huggingface.co/datasets/imageomics/TreeOfLife-200M},
  doi = {10.57967/hf/6786},
  publisher = {Hugging Face}
}

BioCLIP 2:

@article{gu2025bioclip,
  title = {{B}io{CLIP} 2: Emergent Properties from Scaling Hierarchical Contrastive Learning}, 
  author = {Jianyang Gu and Samuel Stevens and Elizabeth G Campolongo and Matthew J Thompson and Net Zhang and Jiaman Wu and Andrei Kopanev and Zheda Mai and Alexander E. White and James Balhoff and Wasila M Dahdul and Daniel Rubenstein and Hilmar Lapp and Tanya Berger-Wolf and Wei-Lun Chao and Yu Su},
  year = {2025},
  eprint={2505.23883},
  archivePrefix={arXiv},
  primaryClass={cs.CV},
  url={https://arxiv.org/abs/2505.23883},
}

Acknowledgements

This work was supported by the Imageomics Institute, which is funded by the US National Science Foundation's Harnessing the Data Revolution (HDR) program under Award #2118240 (Imageomics: A New Frontier of Biological Information Powered by Knowledge-Guided Machine Learning). Any opinions, findings and conclusions or recommendations expressed in this material are those of the author(s) and do not necessarily reflect the views of the National Science Foundation.

This work used resources of the Ohio Supercomputer Center (OSC): Ohio Supercomputer Center. 1987. Ohio Supercomputer Center. Columbus OH: Ohio Supercomputer Center. https://ror.org/01apna436.

Dataset Card Authors

Net Zhang, Elizabeth Campolongo

Dataset Card Contact

For questions or issues, please use the Community tab on this repository.
