---
license: mit
tags:
- embeddings
- skill-retrieval
- claude-code
---
# superskillret prebuilt index — full-context

Prebuilt embedding index for the superskillret Claude Code plugin.
Unlike the v1 index (which embedded only name + description), v2 encodes
the full skill body (name + description + body) up to
`max_seq_length=32768` tokens. The index is larger, but recall is much
higher on skills whose name and description don't capture every keyword
in the body.
- Version: 2
- Corpus: ThakiCloud/SKILLRET (train + test)
- Encoder: ThakiCloud/SkillRet-Embedding-0.6B
- Skills indexed: 16783
- Embedding dim: 1024
- Encoded text: name + description + body (truncated to 32768 tokens)
- Normalized: yes (inner product = cosine similarity)
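Because the rows are L2-normalized, a plain inner product against the embedding matrix already gives cosine similarities. A minimal sketch with synthetic data (the shapes and file names of the real index are not used here):

```python
import numpy as np

np.random.seed(0)

# Synthetic stand-in for the index: 5 rows, 8 dims instead of (16783, 1024).
emb = np.random.rand(5, 8).astype(np.float32)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)  # L2-normalize each row

query = emb[0]          # pretend this is an encoded, normalized query
scores = emb @ query    # inner product == cosine similarity for unit vectors
best = int(np.argmax(scores))
```

A matched query scores exactly 1.0 against its own row, so no separate cosine step is needed at search time.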
## Files

| File | Description |
|---|---|
| `skill_embeddings.npy` | FP16 numpy array of shape (16783, 1024) |
| `skill_embeddings_int8.npy` | INT8 per-row quantized array of shape (16783, 1024) |
| `skill_embeddings_scale.npy` | float32 per-row scale of shape (16783,) — reconstruct as (int8 / 127) * scale |
| `skill_metadata.jsonl` | one JSON record per row, aligned with embeddings (name, description, body, source_url, namespace, repo, id) |
| `VERSION` | integer version tag; bumped when the corpus, encoder, or encoded-text scheme changes |
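The per-row int8 scheme from the table can be sketched end to end on synthetic data; the quantization side shown here (scaling each row by its max absolute value) is an assumption, but the reconstruction step matches the formula above, `(int8 / 127) * scale`:

```python
import numpy as np

np.random.seed(0)

# Synthetic embeddings standing in for the FP16 array.
x = np.random.randn(4, 8).astype(np.float32)

# Assumed quantization: per-row scale = max |value|, rows mapped to [-127, 127].
scale = np.abs(x).max(axis=1)                          # shape (4,), float32
q = np.round(x / scale[:, None] * 127).astype(np.int8)

# Reconstruction exactly as documented: (int8 / 127) * scale.
x_hat = (q.astype(np.float32) / 127.0) * scale[:, None]
err = float(np.abs(x - x_hat).max())  # bounded by scale / 254 per element
```

With the real files, `q` and `scale` would come from `np.load("skill_embeddings_int8.npy")` and `np.load("skill_embeddings_scale.npy")`.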
## Usage

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="ThakiCloud/superskillret-index",
    repo_type="dataset",
    local_dir="cache/",
)
```
Downstream consumers should check `VERSION` against their cached copy before reusing local files.
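One way that check could look, as a hypothetical helper (the function name and cached-file layout are assumptions, not part of the plugin):

```python
import tempfile
from pathlib import Path


def index_is_current(remote_dir, cached_version_file) -> bool:
    """Reuse local files only if the cached VERSION matches the downloaded one."""
    remote = int(Path(remote_dir, "VERSION").read_text().strip())
    cached = Path(cached_version_file)
    return cached.exists() and int(cached.read_text().strip()) == remote


# Tiny self-check against scratch files (illustration only).
with tempfile.TemporaryDirectory() as d:
    Path(d, "VERSION").write_text("2\n")
    Path(d, "cached_VERSION").write_text("2\n")
    match = index_is_current(d, Path(d, "cached_VERSION"))
```

A missing or stale cached tag makes the helper return `False`, signaling that the snapshot should be re-downloaded.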