Claude committed
Commit 55a628f · unverified · 1 parent: 0e83549

Add HF Space config + deploy instructions


YAML frontmatter makes the repo deployable as a Gradio Space at
athurlow/qcal. New README section explains pushing to the Space remote,
required/optional secrets (NVIDIA_API_KEY etc.), and which hardware tier
each stage needs.

https://claude.ai/code/session_01Cr4KXXgtGcnGFYqxCG3Uct

Files changed (1): README.md (+41 −0)

README.md CHANGED
@@ -1,3 +1,16 @@
+---
+title: QCal Copilot
+emoji: ⚛️
+colorFrom: blue
+colorTo: purple
+sdk: gradio
+sdk_version: 4.44.0
+app_file: app.py
+pinned: false
+license: mit
+short_description: AI-assisted quantum calibration + CUDA-Q + Ising decoder
+---
+
 # QCal Copilot — MVP
 
 AI-assisted quantum calibration. Upload a calibration plot or CSV, get an
@@ -106,6 +119,34 @@ Open <http://localhost:7860>. Upload a calibration plot, click
 **Analyze calibration**, inspect the generated CUDA-Q script, then click
 **Run simulation** to execute it on the `cudaq` simulator.
 
+## Deploy to Hugging Face Spaces
+
+This repo is ready to deploy as a Gradio Space (e.g. `athurlow/qcal`). The
+YAML frontmatter at the top of this README tells Spaces which SDK to use and
+which file to run.
+
+1. Push the repo to the Space:
+
+   ```bash
+   git remote add space https://huggingface.co/spaces/athurlow/qcal
+   git push space claude/qcal-copilot-mvp-OZ9wj:main
+   ```
+2. In the Space **Settings → Variables and secrets**, add:
+   - `NVIDIA_API_KEY` — required; the hosted Space can't download the 35B
+     VLM locally, so the app should call the NIM endpoint.
+3. (Optional) Override model ids via Space secrets if you have custom
+   deployments: `QCAL_NIM_MODEL`, `QCAL_DECODER_FAST_ID`,
+   `QCAL_DECODER_ACCURATE_ID`.
+4. **Hardware:** a free CPU Space runs the decoder's small CNN (~1.8M params)
+   and the NIM-backed analyzer fine. A GPU Space (T4 or better) is only
+   needed if you want to host the calibration VLM locally; `cudaq` requires
+   an NVIDIA GPU Space to run the simulation stage.
+
+The app falls back gracefully when dependencies are missing: no
+`NVIDIA_API_KEY` → analyzer reports the missing key; no `cudaq` → simulator
+button surfaces the install hint; no `pymatching` → decoder shows density
+metrics without MWPM timing.
+
 ## Error-correction decoder (optional stage)
 
 After a successful calibration analysis, expand the
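The graceful-fallback and env-override behavior the new README section describes can be sketched as a small dependency probe. This is a hedged sketch under stated assumptions: the function name, status strings, and the `<built-in default>` placeholder are illustrative, not the actual `app.py` API; only the env var names (`NVIDIA_API_KEY`, `QCAL_NIM_MODEL`) and module names (`cudaq`, `pymatching`) come from the diff.

```python
import importlib.util
import os


def check_dependencies():
    """Report which optional QCal stages are usable in this environment.

    Illustrative sketch of the fallback behavior described in the README
    diff; the real app's wiring may differ.
    """
    status = {}
    # No NVIDIA_API_KEY -> the analyzer reports the missing key.
    status["analyzer"] = (
        "ready" if os.environ.get("NVIDIA_API_KEY") else "missing NVIDIA_API_KEY"
    )
    # No cudaq -> the simulator button surfaces an install hint.
    status["simulator"] = (
        "ready" if importlib.util.find_spec("cudaq")
        else "install cudaq to enable simulation"
    )
    # No pymatching -> the decoder shows density metrics without MWPM timing.
    status["decoder"] = (
        "density metrics + MWPM timing" if importlib.util.find_spec("pymatching")
        else "density metrics only"
    )
    # Step 3's override pattern: a Space secret wins over the default
    # (the default shown here is a placeholder, not the app's real value).
    status["nim_model"] = os.environ.get("QCAL_NIM_MODEL", "<built-in default>")
    return status


print(check_dependencies())
```

Each stage degrades independently, so a free CPU Space with only `NVIDIA_API_KEY` set still gets a working analyzer while the other rows explain what is missing.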