
Dataset Card for Embedding Enriched The Ultimate 1Million Movies Dataset (TMDB + IMDb)

This dataset contains movie metadata from TMDB sourced from Kaggle, with an added layer of embeddings and token counts for semantic search and ML applications.


Dataset Details

Dataset Description

A snapshot, taken in early 2025, of the daily-updated TMDB Movies Dataset 2025 on Kaggle, enriched with 768-dimensional nomic-embed-text embeddings and per-row token counts (computed with OpenAI's tiktoken cl100k_base tokenizer on the concatenated text fields) for context-management purposes.

I am not affiliated with the original dataset creator, who is credited below. I generated the embeddings for a personal project, which took some time, so I'm sharing the result for anyone who finds it useful.

  • Original Set Curated by: Alan Vourch (Kaggle)
  • License: Apache 2.0

Dataset Sources

  • TMDB Movies Dataset 2025 on Kaggle, curated by Alan Vourch

Uses

Direct Use

  • Semantic search and retrieval
  • Movie recommendation systems
  • ML model training for movie-related tasks
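
The semantic-search use case amounts to ranking rows by cosine similarity between a query embedding and the stored embeddings. A minimal sketch with NumPy (the function name and the synthetic shapes here are illustrative, not part of the dataset):

```python
import numpy as np

def cosine_top_k(query_vec, embedding_matrix, k=5):
    """Return the indices of the k rows most similar to the query
    by cosine similarity (rows of embedding_matrix are embeddings)."""
    q = query_vec / np.linalg.norm(query_vec)
    m = embedding_matrix / np.linalg.norm(embedding_matrix, axis=1, keepdims=True)
    scores = m @ q  # cosine similarity of every row against the query
    return np.argsort(scores)[::-1][:k]
```

In practice the query vector would come from the same nomic-embed-text model used to build the dataset, so that query and corpus share an embedding space.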

Out-of-Scope Use

  • Any use violating TMDB or Kaggle terms of service
  • Use as ground truth for sensitive or demographic analyses without independent validation

Dataset Structure

  • Fields: Standard TMDB movie metadata (title, release_date, genres, etc.)
  • title_tagline_overview: Concatenated text used for embedding and tokenization.
  • embedding: 768-dimensional vector, stored as a stringified list (e.g., "[0.1, 0.2, ...]").
  • token_count: Integer, calculated using OpenAI tiktoken cl100k_base tokenizer on the concatenated text fields.
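
Because the embedding column is a stringified list, it must be parsed before numeric use. One way to do that (the function name is illustrative):

```python
import json
import numpy as np

def parse_embedding(embedding_str):
    """Parse a stringified embedding like '[0.1, 0.2, ...]'
    into a float32 NumPy array."""
    return np.asarray(json.loads(embedding_str), dtype=np.float32)
```

Applied to a real row, the result should be a length-768 float array; with pandas, `df["embedding"].map(parse_embedding)` converts the whole column.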

Dataset Creation

Curation Rationale

To enable semantic search, recommendations, and ML research on a large, up-to-date movie dataset with ready-to-use embeddings and token counts.

Source Data

Data Collection and Processing

  • Downloaded from Kaggle (see above).
  • Embeddings generated using nomic-embed-text:latest via Ollama on concatenated text fields.
  • Token counts calculated with OpenAI's tiktoken cl100k_base tokenizer (not the nomic-embed tokenizer; chosen to follow a context-management convention).
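
The processing steps above can be sketched as follows. The concatenation helper is an assumption about field order and separator; the tiktoken and Ollama calls (shown as comments, since they need the library installed and a local Ollama server) reflect the tools named above:

```python
def concat_fields(title, tagline, overview):
    """Join the text fields used for embedding and tokenization,
    skipping empty or missing values. (Separator and order are
    illustrative assumptions, not confirmed by the card.)"""
    return " ".join(part for part in (title, tagline, overview) if part)

# Token counting with tiktoken's cl100k_base (matches token_count):
#   import tiktoken
#   enc = tiktoken.get_encoding("cl100k_base")
#   token_count = len(enc.encode(text))
#
# Embedding via a local Ollama server running nomic-embed-text:
#   import requests
#   resp = requests.post("http://localhost:11434/api/embeddings",
#                        json={"model": "nomic-embed-text", "prompt": text})
#   embedding = resp.json()["embedding"]  # 768 floats
```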

Who are the source data producers?

  • Original movie data: The Movie Database (TMDB) and contributors.
  • Embeddings and token counts: Jeremy Braun (Remsky)

Bias, Risks, and Limitations

  • The dataset inherits any biases present in TMDB and the embedding/tokenizer models.
  • Embeddings are stored as strings; converting the column to a numeric array type is recommended for downstream ML. An array-typed version is planned for a future update.
  • Note: most rows were embedded with the genre appended to the end of the text description, but the first ~10-15% of rows were embedded without it. I plan to update these for consistency; it has not caused significant issues in my own use.

Recommendations

Users should be aware of potential biases in both the source data and the embedding model. For ML use, convert the embedding column to a numeric array as needed, or load it into pgvector.
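
Conveniently, the stringified list is already close to pgvector's vector literal format. A small sketch of the conversion (function and table/column names are hypothetical):

```python
import json

def to_pgvector_literal(embedding_str):
    """Convert a stringified embedding to a compact pgvector
    literal such as '[0.1,0.2,...]' (no spaces)."""
    values = json.loads(embedding_str)
    return "[" + ",".join(repr(float(v)) for v in values) + "]"

# Insertion with psycopg against a table created as, e.g.:
#   CREATE TABLE movies (id int, embedding vector(768));
#   cur.execute("INSERT INTO movies (id, embedding) VALUES (%s, %s)",
#               (movie_id, to_pgvector_literal(row["embedding"])))
```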
