# Upload Instructions
Target dataset name:
```text
agent-runtime-telemetry-small
```
Recommended repo id:
```text
Lightcap/agent-runtime-telemetry-small
```
Current uploaded URL:
```text
https://huggingface.co/datasets/Lightcap/agent-runtime-telemetry-small
```
## Files To Upload
Upload the full contents of this folder:
```text
data/huggingface_exports/agent-runtime-telemetry-small/
```
Do not upload the original runtime SQLite files. This folder already contains the viewer-friendly Parquet export and the Hugging Face dataset card.
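Before uploading, you can sanity-check that the folder matches this rule. A minimal sketch; the exact file inventory comes from `export_manifest.json`, so treat the checks below (no database files, card present, at least one Parquet table) as an assumed baseline rather than a complete validation:

```python
from pathlib import Path

def check_export_folder(root: Path) -> list:
    """Return a list of policy violations found under the export folder."""
    problems = []
    # The export must not contain raw runtime databases.
    for pattern in ("*.sqlite", "*.sqlite3", "*.db"):
        for path in root.rglob(pattern):
            problems.append(f"forbidden database file: {path.relative_to(root)}")
    # The dataset card must be present for the Hub to render a page.
    if not (root / "README.md").is_file():
        problems.append("missing README.md dataset card")
    # At least one Parquet table should exist for the Dataset Viewer.
    if not any(root.rglob("*.parquet")):
        problems.append("no Parquet files found")
    return problems
```

An empty return value means the folder is clear to upload.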
## CLI Upload
From the repository root:
```bash
python3 - <<'PY'
from pathlib import Path
import os
from dotenv import load_dotenv
from huggingface_hub import HfApi, create_repo, upload_folder
root = Path("data/huggingface_exports/agent-runtime-telemetry-small").resolve()
load_dotenv(".env")
token = (
    os.getenv("HF_TOKEN")
    or os.getenv("HUGGINGFACE_HUB_TOKEN")
    or os.getenv("HUGGING_FACE_HUB_TOKEN")
)
if not token:
    raise SystemExit("Missing HF_TOKEN or HUGGINGFACE_HUB_TOKEN.")
api = HfApi(token=token)
username = api.whoami(token=token)["name"]
repo_id = f"{username}/agent-runtime-telemetry-small"
create_repo(repo_id=repo_id, repo_type="dataset", token=token, exist_ok=True, private=False)
upload_folder(
    repo_id=repo_id,
    repo_type="dataset",
    folder_path=str(root),
    token=token,
    commit_message="Add small agent runtime telemetry dataset",
)
print(f"https://huggingface.co/datasets/{repo_id}")
PY
```
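`upload_folder` also accepts an `ignore_patterns` argument, which is a useful belt-and-braces guard against accidentally shipping SQLite files that happen to sit in the folder locally. The sketch below previews which files such patterns would skip; it approximates the matching locally with `fnmatch` and is not the Hub client's exact implementation:

```python
from fnmatch import fnmatch
from pathlib import Path

# Patterns we would pass as ignore_patterns to upload_folder.
IGNORE_PATTERNS = ["*.sqlite", "*.sqlite3", "*.db"]

def preview_upload(root: Path, ignore_patterns=IGNORE_PATTERNS):
    """Split files under root into (kept, skipped) relative paths."""
    kept, skipped = [], []
    for path in sorted(p for p in root.rglob("*") if p.is_file()):
        rel = path.relative_to(root).as_posix()
        if any(fnmatch(rel, pat) or fnmatch(path.name, pat) for pat in ignore_patterns):
            skipped.append(rel)
        else:
            kept.append(rel)
    return kept, skipped
```

If the preview looks right, add `ignore_patterns=IGNORE_PATTERNS` to the `upload_folder(...)` call in the script above.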
## Manual Web Upload
1. Create a new Hugging Face Dataset named `agent-runtime-telemetry-small`.
2. Upload `README.md`, `export_manifest.json`, and the full `data/` directory from this folder.
3. Wait for the Dataset Viewer to process the Parquet files.
4. Confirm the configs appear as `operations`, `operation_events`, `artifact_records`, `audit_records`, `tool_summary`, `artifact_summary`, `daily_activity`, and `dataset_overview`.
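If the export lays out one subdirectory per config under `data/` (an assumption about this export's layout; verify against your actual folder), you can derive the expected config names locally before comparing them with what the Viewer shows:

```python
from pathlib import Path

# The eight configs listed in step 4 above.
EXPECTED_CONFIGS = {
    "operations", "operation_events", "artifact_records", "audit_records",
    "tool_summary", "artifact_summary", "daily_activity", "dataset_overview",
}

def local_config_names(export_root: Path):
    """Infer config names from per-config subdirectories under data/."""
    data_dir = export_root / "data"
    if not data_dir.is_dir():
        return set()
    return {p.name for p in data_dir.iterdir() if p.is_dir()}
```

Comparing `local_config_names(root)` against `EXPECTED_CONFIGS` catches a missing or misnamed table before you wait on the Viewer.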
## Validation After Upload
```python
from datasets import load_dataset
repo_id = "Lightcap/agent-runtime-telemetry-small"
for config in ["operations", "operation_events", "artifact_records", "audit_records", "tool_summary"]:
    ds = load_dataset(repo_id, config)
    print(config, ds["train"].num_rows, ds["train"].column_names[:8])
```
## Export Policy
This folder is a sanitized export:
- no source SQLite database files
- no raw nested `payload_json` bodies
- no absolute local paths
- no secret-like token strings
- Parquet tables optimized for Hugging Face Dataset Viewer
If you regenerate from newer runtime state, keep the same policy so the dataset remains useful and safe to browse.
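The "no secret-like token strings" rule can be enforced mechanically when regenerating. A minimal sketch that scans exported text for common token shapes; the regexes below (Hugging Face `hf_` tokens plus a few generic API-key prefixes) are an assumed, non-exhaustive starting point:

```python
import re

# Patterns for common secret shapes; extend for your environment.
SECRET_PATTERNS = [
    re.compile(r"\bhf_[A-Za-z0-9]{20,}\b"),              # Hugging Face tokens
    re.compile(r"\b(?:sk|ghp|gho)_[A-Za-z0-9]{20,}\b"),  # common API-key prefixes
]

def find_secret_like(text: str):
    """Return all substrings of text that look like secret tokens."""
    hits = []
    for pattern in SECRET_PATTERNS:
        hits.extend(pattern.findall(text))
    return hits
```

Running this over every string column before export, and failing the export on any hit, keeps regenerated versions compliant with the policy above.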