---
license: mit
tags:
  - embeddings
  - skill-retrieval
  - claude-code
---

# superskillret prebuilt index — full-context

Prebuilt embedding index for the superskillret Claude Code plugin.

Unlike the v1 index (which embedded only name + description), v2 encodes the full skill text (name + description + body) up to max_seq_length=32768 tokens. The index is larger, but recall is much higher on skills whose name/description don't capture every keyword in the body.

## Files

| File | Description |
|---|---|
| `skill_embeddings.npy` | FP16 NumPy array of shape `(16783, 1024)` |
| `skill_embeddings_int8.npy` | INT8 per-row quantized array of shape `(16783, 1024)` |
| `skill_embeddings_scale.npy` | float32 per-row scale of shape `(16783,)` — reconstruct as `(int8 / 127) * scale` |
| `skill_metadata.jsonl` | one JSON record per row, aligned with the embeddings (`name`, `description`, `body`, `source_url`, `namespace`, `repo`, `id`) |
| `VERSION` | integer version tag; bumped when the corpus, encoder, or encoded-text scheme changes |
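The per-row INT8 scheme above can be sketched as follows. Only the reconstruction formula `(int8 / 127) * scale` is documented here; the max-abs scaling used to produce the stored values is an assumption shown for illustration, demonstrated on small synthetic rows rather than the shipped files.

```python
import numpy as np

# Assumed quantizer: scale each row by its max absolute value so values fit
# in [-127, 127]. The repo only documents the dequantization formula.
def quantize_rows(fp: np.ndarray):
    scale = np.abs(fp).max(axis=1).astype(np.float32)  # per-row scale
    q = np.round(fp / scale[:, None] * 127).astype(np.int8)
    return q, scale

# Documented reconstruction: value ≈ (int8 / 127) * scale, per row.
def dequantize_rows(q: np.ndarray, scale: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) / 127.0) * scale[:, None]

rows = np.random.default_rng(0).normal(size=(4, 8)).astype(np.float32)
q, scale = quantize_rows(rows)
approx = dequantize_rows(q, scale)
```

For the real index, replace the synthetic `rows` by loading `skill_embeddings_int8.npy` and `skill_embeddings_scale.npy` with `np.load` and applying `dequantize_rows`.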

## Usage

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="ThakiCloud/superskillret-index",
    repo_type="dataset",
    local_dir="cache/",
)
```

Downstream consumers should check VERSION against their cached copy before reusing local files.
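One way to implement that check is to compare the integer in a freshly fetched `VERSION` file against the cached copy before trusting local files; this helper is a hypothetical sketch (paths and function name are illustrative, not part of the plugin):

```python
from pathlib import Path

def index_is_stale(cached_version: Path, fresh_version: Path) -> bool:
    """Return True when the cached index must be re-downloaded.

    A missing cache counts as stale; otherwise the two VERSION
    contents are compared after stripping whitespace.
    """
    if not cached_version.exists():
        return True
    return cached_version.read_text().strip() != fresh_version.read_text().strip()
```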