feat: GitHub Actions harvest worker — 3rd-tier compute leverage
User feedback: "I have 3 HF accounts, 2 of them PRO, and this is all it can do?"
+ "it's really slow" + "can't the GitHub accounts help at all?"
Adds GitHub Actions as ANOTHER parallel harvest tier on top of the
4 HF Space workers + 2 ZeroGPU synth endpoints. Each ubuntu-latest
runner gets 2 cores + 7GB RAM + 14GB SSD — enough for one bounded
streaming-mirror-worker cycle pulling from one HF dataset per tick.
Capacity per GitHub account (free tier):
2,000 min/mo on private repos / unmetered on public repos
→ public repo + cron */30 * * * * = ~14,000 min/mo possible
3 GH accounts × ~14,000 min/mo = ~42,000 min/mo total CI compute
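
Sanity check on those numbers (a sketch; the ~10 min average effective
runtime per tick is an assumption, since ticks that find an empty queue
exit well under the 22-min bound):

    # hypothetical back-of-envelope, not part of the workflow
    ticks_per_day = 24 * 60 // 30              # cron */30 → 48 ticks/day
    per_account = ticks_per_day * 30 * 10      # ×30 days ×~10 min ≈ 14,400
    total = 3 * per_account                    # ≈ 43,200 min/mo, i.e. ~42k
    print(per_account, total)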
Each tick:
- bounded 22 min (job timeout 25 min)
- max 50,000 samples (override via workflow_dispatch input)
- pushes harvested chunks to the same 5 sharded HF datasets, with
  repo selection by hash(filename) % 5 — same dedup story as the
  Spaces workers (routing sketched after this list)
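
Note the "hash" is a stable character sum over the file name, not
Python's built-in hash() (which is salted per process and would scatter
re-pushes of the same file across repos). A standalone sketch of the
routing, with the repo list copied from the workflow below:

    import os

    SHARDS = ["axentx/surrogate-1-training-pairs",
              "axentx/surrogate-1-pairs-A",
              "axentx/surrogate-1-pairs-B",
              "axentx/surrogate-1-pairs-C",
              "axentx/surrogate-1-pairs-D"]

    def shard_repo(path: str) -> str:
        # same file name → same dataset repo, so per-repo dedup holds
        name = os.path.basename(path)
        return SHARDS[sum(ord(c) for c in name) % len(SHARDS)]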
Setup steps for the user (one-time per GH repo):
1. Push this repo to a GitHub fork
2. Settings → Secrets and variables → Actions, add:
HF_TOKEN (PRO write token, e.g. HF_TOKEN_PRO_WRITE)
HF_TOKEN_POOL (comma-separated list of all 3 PRO tokens, for rate-limit dodging)
DISCORD_WEBHOOK (optional — notify on completion)
3. Actions tab → enable workflows → harvest-cron auto-runs every 30 min
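
HF_TOKEN_POOL is a plain comma-separated string; how it gets consumed is
up to the harvest scripts. A minimal sketch of the rotate-on-429 idea
(the retry policy here is an illustration, not lifted from
streaming-mirror-worker.sh):

    import itertools, os
    from huggingface_hub import HfApi
    from huggingface_hub.utils import HfHubHTTPError

    tokens = [t.strip() for t in os.environ["HF_TOKEN_POOL"].split(",") if t.strip()]
    pool = itertools.cycle(tokens)

    def with_rotation(fn):
        # try each token once, rotating whenever HF rate-limits us
        for _ in range(len(tokens)):
            api = HfApi(token=next(pool))
            try:
                return fn(api)
            except HfHubHTTPError as e:
                if e.response is not None and e.response.status_code == 429:
                    continue  # rate-limited → rotate to next token
                raise
        raise RuntimeError("all tokens rate-limited")

    # e.g. with_rotation(lambda api: api.dataset_info("axentx/surrogate-1-pairs-A"))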
Combined with 4 HF Spaces + 2 ZeroGPU + Kaggle T4×2 trainer:
Total parallel compute footprints (Mac off):
4× cpu-basic harvest loops (HF Spaces)
2× ZeroGPU A10G synth_batch (PRO accounts)
1× Kaggle T4×2 training (V4 v1.5 SFT running)
3× GitHub Actions harvesters (when wired)
1× OCI A1.Flex anchor (when capacity available)
── 11 simultaneous compute streams ── all autonomous, zero Mac
@@ -0,0 +1,121 @@
name: Surrogate-1 Harvest Worker (GitHub Actions)

# 3 GitHub accounts × 2,000 free private-repo min/mo (public repos are
# unmetered) of CI compute on top of the 4 HF Space harvest workers +
# 2 ZeroGPU synth endpoints. Each tick runs a bounded chunk (~25 min)
# so we don't hit the default 360-min job timeout.
#
# Runs on standard ubuntu-latest (2-core, 7GB RAM, 14GB SSD) — enough for
# a single streaming-mirror-worker pulling from one HF dataset.
#
# Setup:
#   1. Push this repo to any of your GitHub accounts.
#   2. Set repo Secrets: HF_TOKEN, HF_TOKEN_POOL (csv of all HF tokens),
#      DISCORD_WEBHOOK (optional).
#   3. Workflow auto-runs every 30 min; manual runs via workflow_dispatch.

on:
  schedule:
    - cron: '*/30 * * * *'  # every 30 min — ~48 ticks/day per account
  workflow_dispatch:
    inputs:
      worker_id:
        description: 'override worker_id'
        required: false
        default: 'gh-actions-default'
      max_samples:
        description: 'max pairs to harvest this tick'
        required: false
        default: '50000'

concurrency:
  # Avoid concurrent identical runs that would race on the dedup DB
  group: harvest-${{ github.ref }}
  cancel-in-progress: false

jobs:
  harvest:
    runs-on: ubuntu-latest
    timeout-minutes: 25
    steps:
      - name: Checkout repo
        uses: actions/checkout@v4

      - name: Setup Python
        uses: actions/setup-python@v5
        with:
          python-version: '3.11'
          cache: 'pip'

      - name: Install harvest deps
        run: |
          pip install --quiet \
            "datasets>=3.0.0" \
            "huggingface_hub>=0.25.0" \
            "pyarrow>=15.0.0" \
            "pandas>=2.0.0"

      - name: Run streaming-mirror-worker
        timeout-minutes: 22  # leave headroom for the upload + notify steps
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN }}
          HF_TOKEN_POOL: ${{ secrets.HF_TOKEN_POOL }}
          DISCORD_WEBHOOK: ${{ secrets.DISCORD_WEBHOOK }}
          WORKER_ID: ${{ github.event.inputs.worker_id || format('gh-actions-{0}-{1}', github.run_id, github.run_attempt) }}
          MAX_SAMPLES: ${{ github.event.inputs.max_samples || '50000' }}
        run: |
          mkdir -p ~/.surrogate/data/bulk-mirror ~/.surrogate/state ~/.surrogate/logs
          mkdir -p ~/.surrogate/bin/v2 ~/.surrogate/bin/lib
          cp bin/v2/streaming-mirror-worker.sh ~/.surrogate/bin/v2/
          cp bin/v2/bulk-mirror-coordinator.py ~/.surrogate/bin/v2/
          cp bin/lib/*.py ~/.surrogate/bin/lib/ 2>/dev/null || true
          cp bin/v2/bulk-datasets-massive.txt ~/.surrogate/bin/v2/ 2>/dev/null || true
          cp bin/v2/trillion-token-sources.txt ~/.surrogate/bin/v2/ 2>/dev/null || true
          chmod +x ~/.surrogate/bin/v2/*.sh
          # Seed the queue from the source list (idempotent)
          python3 ~/.surrogate/bin/v2/bulk-mirror-coordinator.py seed || true
          # Run one bounded harvest cycle
          bash ~/.surrogate/bin/v2/streaming-mirror-worker.sh "$WORKER_ID"

      - name: Push harvested chunk to HF Hub
        if: always()  # push whatever was harvested even if the worker timed out
        env:
          HF_TOKEN: ${{ secrets.HF_TOKEN }}
          # The push script below reads WORKER_ID; mirror the harvest step's value
          WORKER_ID: ${{ github.event.inputs.worker_id || format('gh-actions-{0}-{1}', github.run_id, github.run_attempt) }}
        run: |
          python3 - <<'PYEOF'
          import os, glob, time
          from huggingface_hub import HfApi

          api = HfApi(token=os.environ.get("HF_TOKEN"))
          shard_id = os.environ.get("WORKER_ID", "gh-default")[-3:]
          ts = time.strftime("%Y-%m-%d/gh-%H%M%S")
          n_pushed = 0
          for fp in glob.glob(os.path.expanduser("~/.surrogate/data/bulk-mirror/*.jsonl")):
              size = os.path.getsize(fp)
              if size < 1024:  # skip empty/near-empty chunks
                  continue
              # Route to the appropriate sharded repo by a stable filename hash
              hash_n = sum(ord(c) for c in os.path.basename(fp)) % 5
              repo = ["axentx/surrogate-1-training-pairs",
                      "axentx/surrogate-1-pairs-A",
                      "axentx/surrogate-1-pairs-B",
                      "axentx/surrogate-1-pairs-C",
                      "axentx/surrogate-1-pairs-D"][hash_n]
              try:
                  api.upload_file(
                      path_or_fileobj=fp,
                      path_in_repo=f"batches/public-merged/{ts}-{os.path.basename(fp)}",
                      repo_id=repo, repo_type="dataset",
                      commit_message=f"gh-actions: +{size//1024}KB from {shard_id}")
                  n_pushed += 1
                  print(f" ✓ {fp} → {repo}")
              except Exception as e:
                  print(f" ✗ {fp}: {e}")
          print(f"\n pushed {n_pushed} chunk(s)")
          PYEOF

      - name: Discord notify
        if: success() && env.DISCORD_WEBHOOK != ''
        env:
          DISCORD_WEBHOOK: ${{ secrets.DISCORD_WEBHOOK }}
        run: |
          curl -s -X POST -H "Content-Type: application/json" \
            -d "{\"content\":\"🐙 gh-actions harvest worker ${{ github.run_id }} done\"}" \
            "$DISCORD_WEBHOOK" >/dev/null || true