# Upload Instructions
|
|
Target dataset name:

```text
agent-runtime-telemetry-small
```

Recommended repo id:

```text
Lightcap/agent-runtime-telemetry-small
```

Current uploaded URL:

```text
https://huggingface.co/datasets/Lightcap/agent-runtime-telemetry-small
```
|
|
## Files To Upload

Upload the full contents of this folder:

```text
data/huggingface_exports/agent-runtime-telemetry-small/
```

Do not upload the original runtime SQLite files. This folder already contains the viewer-friendly Parquet export and the Hugging Face dataset card.
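Before uploading, it can help to confirm the folder has the expected top-level layout. A minimal sketch, where the expected entries are an assumption based on the files named elsewhere in these instructions (`README.md`, `export_manifest.json`, and the `data/` directory):

```python
from pathlib import Path

# Assumed top-level entries of the export folder; adjust if your export differs.
EXPECTED = ("README.md", "export_manifest.json", "data")

def missing_entries(root: Path) -> list[str]:
    """Return the expected entries that are absent from the export folder."""
    return [name for name in EXPECTED if not (root / name).exists()]
```

Run it against `data/huggingface_exports/agent-runtime-telemetry-small/` and upload only when the returned list is empty.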
|
|
## CLI Upload

From the repository root:

```bash
python3 - <<'PY'
import os
from pathlib import Path

from dotenv import load_dotenv
from huggingface_hub import HfApi, create_repo, upload_folder

root = Path("data/huggingface_exports/agent-runtime-telemetry-small").resolve()
load_dotenv(".env")

# Accept any of the common token environment variable names.
token = (
    os.getenv("HF_TOKEN")
    or os.getenv("HUGGINGFACE_HUB_TOKEN")
    or os.getenv("HUGGING_FACE_HUB_TOKEN")
)
if not token:
    raise SystemExit("Missing HF_TOKEN or HUGGINGFACE_HUB_TOKEN.")

# Derive the repo id from the authenticated user.
api = HfApi(token=token)
username = api.whoami(token=token)["name"]
repo_id = f"{username}/agent-runtime-telemetry-small"

# Create the dataset repo if needed, then push the whole export folder.
create_repo(repo_id=repo_id, repo_type="dataset", token=token, exist_ok=True, private=False)
upload_folder(
    repo_id=repo_id,
    repo_type="dataset",
    folder_path=str(root),
    token=token,
    commit_message="Add small agent runtime telemetry dataset",
)

print(f"https://huggingface.co/datasets/{repo_id}")
PY
```
|
|
## Manual Web Upload

1. Create a new Hugging Face Dataset named `agent-runtime-telemetry-small`.
2. Upload `README.md`, `export_manifest.json`, and the full `data/` directory from this folder.
3. Wait for the Dataset Viewer to process the Parquet files.
4. Confirm the configs appear as `operations`, `operation_events`, `artifact_records`, `audit_records`, `tool_summary`, `artifact_summary`, `daily_activity`, and `dataset_overview`.
|
|
## Validation After Upload

```python
from datasets import load_dataset

repo_id = "Lightcap/agent-runtime-telemetry-small"
# Spot-check a subset of the configs; extend the list to cover all eight.
for config in ["operations", "operation_events", "artifact_records", "audit_records", "tool_summary"]:
    ds = load_dataset(repo_id, config)
    print(config, ds["train"].num_rows, ds["train"].column_names[:8])
```
|
|
## Export Policy

This folder is a sanitized export:

- no source SQLite database files
- no raw nested `payload_json` bodies
- no absolute local paths
- no secret-like token strings
- Parquet tables optimized for the Hugging Face Dataset Viewer

If you regenerate from newer runtime state, keep the same policy so the dataset remains useful and safe to browse.
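A small pre-upload scan can help enforce two of the policy rules before regenerated tables are pushed. A minimal sketch, where the patterns are illustrative assumptions rather than the exporter's actual rules:

```python
import re

# Illustrative patterns only: common absolute-path prefixes and
# Hugging Face-style "hf_..." token shapes.
ABS_PATH = re.compile(r"(?:^|[\s\"'])(/(?:home|Users|var|tmp)/|[A-Za-z]:\\)")
TOKEN_LIKE = re.compile(r"\bhf_[A-Za-z0-9]{20,}\b")

def flag_sensitive(text: str) -> list[str]:
    """Return which policy rules a string value appears to violate."""
    flags = []
    if ABS_PATH.search(text):
        flags.append("absolute_path")
    if TOKEN_LIKE.search(text):
        flags.append("token_like")
    return flags
```

Applied over the string columns of each Parquet table, any non-empty result marks a row that should be scrubbed before upload.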
|
|