binga committed on
Commit 7941c32 · verified · 1 Parent(s): bbed34a

Add comprehensive production inference benchmarks

Files changed (1)
  1. README.md +158 -4
README.md CHANGED
@@ -100,11 +100,144 @@ Per-class test accuracy:
  | Education & Reference | 0.310 |
  | Business & Finance | 0.263 |

- ### Inference Speed
-
- | Device | Latency |
- |--------|---------|
- | **GPU (A10G, bf16)** | **~154 ms/sample** |
+ ---
+
+ ## 🚀 Production Inference Guide
+
+ All numbers below were measured on real hardware with both task heads (NER + document classification) executing on every call: the benchmark's single forward pass produces PII entity tags **and** the document category simultaneously.
+
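+ For reference, here is a minimal sketch of the kind of timing loop behind these tables (our own illustration, not the exact benchmark script; it assumes the `model`/`tokenizer` objects from the usage section and a CUDA device):
+
+ ```python
+ import time
+ import numpy as np
+ import torch
+
+ def bench_latency(model, tokenizer, text, n_warmup=10, n_runs=100):
+     inputs = tokenizer(text, return_tensors="pt", truncation=True,
+                        max_length=512).to(model.device)
+     with torch.no_grad():
+         for _ in range(n_warmup):        # let CUDA kernels and MoE routing warm up
+             model(**inputs)
+         torch.cuda.synchronize()
+         times = []
+         for _ in range(n_runs):
+             t0 = time.perf_counter()
+             model(**inputs)
+             torch.cuda.synchronize()     # wait for the GPU before stopping the clock
+             times.append((time.perf_counter() - t0) * 1000)
+     p95, p99 = np.percentile(times, [95, 99])
+     return {"mean_ms": sum(times) / len(times),
+             "p95_ms": float(p95), "p99_ms": float(p99)}
+ ```
+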
+ ### Resource Requirements
+
+ | Resource | Value |
+ |----------|-------|
+ | Model weights (bf16) | **2.8 GB** GPU VRAM / RAM |
+ | Model weights (fp32) | **5.6 GB** RAM |
+ | ONNX variants available upstream | fp16, int8, q4 (see [openai/privacy-filter](https://huggingface.co/openai/privacy-filter/tree/main/onnx)) |
+ | Min GPU VRAM (bs=1, seq≤512) | **2.9 GB** |
+ | Min GPU VRAM (bs=64, seq=512) | **6.2 GB** |
+ | Fits on | T4 (16 GB), L4 (24 GB), A10G (24 GB), A100, any ≥8 GB GPU |
+
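+ To land in the bf16 footprint above, load the weights in bf16 directly. A sketch, assuming the usual `transformers` entry points (substitute the exact model classes from the usage example below for this dual-head architecture):
+
+ ```python
+ import torch
+ from transformers import AutoModel, AutoTokenizer
+
+ # bf16 halves the fp32 footprint (5.6 GB -> 2.8 GB)
+ model = AutoModel.from_pretrained("openai/privacy-filter",
+                                   torch_dtype=torch.bfloat16).to("cuda")
+ tokenizer = AutoTokenizer.from_pretrained("openai/privacy-filter")
+ ```
+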
+ ### GPU — Single-Document Latency (NVIDIA A10G, bf16)
+
+ Time from raw text to both NER tags + document category:
+
+ | Sequence Length | Latency (mean) | Latency (p95) | Latency (p99) |
+ |:-:|:-:|:-:|:-:|
+ | 64 tokens | 113 ms | 117 ms | 122 ms |
+ | 128 tokens | 106 ms | 110 ms | 115 ms |
+ | 256 tokens | 106 ms | 111 ms | 113 ms |
+ | 512 tokens | 106 ms | 113 ms | 116 ms |
+
+ > Latency is dominated by a fixed ~105 ms kernel-launch overhead from the Sparse MoE routing — it barely changes with sequence length up to 512 tokens.
+
+ ### GPU — Batched Throughput (NVIDIA A10G, bf16)
+
+ | Batch Size | Seq 64 | Seq 128 | Seq 256 | Seq 512 |
+ |:-:|:-:|:-:|:-:|:-:|
+ | **1** | 8.9 docs/s | 9.4 docs/s | 9.4 docs/s | 9.4 docs/s |
+ | **4** | 36 docs/s | 37 docs/s | 37 docs/s | 32 docs/s |
+ | **8** | 73 docs/s | 73 docs/s | 69 docs/s | 53 docs/s |
+ | **16** | 139 docs/s | 138 docs/s | 114 docs/s | 73 docs/s |
+ | **32** | 265 docs/s | 238 docs/s | 165 docs/s | 89 docs/s |
+ | **64** | **460 docs/s** | **348 docs/s** | **207 docs/s** | **101 docs/s** |
+
+ ### GPU — Batched Latency Detail (NVIDIA A10G, bf16)
+
+ <details>
+ <summary>Full latency table (click to expand)</summary>
+
+ | Batch | Seq Len | Batch Latency (ms) | Per-Doc (ms) | p95 (ms) | p99 (ms) |
+ |:-:|:-:|:-:|:-:|:-:|:-:|
+ | 1 | 64 | 113 | 112.7 | 117 | 122 |
+ | 4 | 64 | 111 | 27.8 | 116 | 118 |
+ | 8 | 64 | 110 | 13.8 | 114 | 126 |
+ | 16 | 64 | 115 | 7.2 | 121 | 125 |
+ | 32 | 64 | 121 | 3.8 | 127 | 135 |
+ | 64 | 64 | 139 | 2.2 | 144 | 144 |
+ | 1 | 128 | 106 | 105.9 | 110 | 115 |
+ | 4 | 128 | 107 | 26.9 | 112 | 115 |
+ | 8 | 128 | 110 | 13.7 | 115 | 116 |
+ | 16 | 128 | 116 | 7.3 | 121 | 128 |
+ | 32 | 128 | 134 | 4.2 | 139 | 143 |
+ | 64 | 128 | 184 | 2.9 | 189 | 191 |
+ | 1 | 256 | 106 | 106.1 | 111 | 113 |
+ | 4 | 256 | 109 | 27.2 | 114 | 115 |
+ | 8 | 256 | 117 | 14.6 | 123 | 126 |
+ | 16 | 256 | 140 | 8.8 | 145 | 147 |
+ | 32 | 256 | 194 | 6.1 | 199 | 202 |
+ | 64 | 256 | 309 | 4.8 | 314 | 315 |
+ | 1 | 512 | 106 | 106.5 | 113 | 116 |
+ | 4 | 512 | 125 | 31.2 | 129 | 130 |
+ | 8 | 512 | 152 | 19.0 | 158 | 165 |
+ | 16 | 512 | 219 | 13.7 | 223 | 225 |
+ | 32 | 512 | 358 | 11.2 | 361 | 364 |
+ | 64 | 512 | 636 | 9.9 | 639 | 641 |
+
+ </details>
+
+ ### GPU — Peak VRAM Usage (bf16)
+
+ | Batch Size | Seq 128 | Seq 256 | Seq 512 |
+ |:-:|:-:|:-:|:-:|
+ | 1 | 2,817 MB | 2,824 MB | 2,862 MB |
+ | 8 | 2,857 MB | 2,936 MB | 3,237 MB |
+ | 32 | 3,000 MB | 3,309 MB | 4,522 MB |
+ | 64 | 3,189 MB | 3,809 MB | **6,236 MB** |
+
+ > The model is extremely memory-efficient: even at batch=64, seq=512 it uses only 6.2 GB, which fits comfortably on a T4 (16 GB), because the Sparse MoE activates only 4 of 128 experts per token.
+
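+ Peak-VRAM numbers like these can be reproduced with PyTorch's built-in counters (a sketch; assumes a CUDA device and an `inputs` batch prepared as in the examples below):
+
+ ```python
+ import torch
+
+ torch.cuda.reset_peak_memory_stats()
+ with torch.no_grad():
+     model(**inputs)                      # e.g. a bs=64, seq=512 batch
+ torch.cuda.synchronize()
+ print(f"peak VRAM: {torch.cuda.max_memory_allocated() / 2**20:,.0f} MB")
+ ```
+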
+ ### CPU — Latency & Throughput (AMD EPYC 7R32, 8 cores, fp32)
+
+ Cells show total batch latency, with throughput in parentheses:
+
+ | Batch | Seq 64 | Seq 128 | Seq 256 | Seq 512 |
+ |:-:|:-:|:-:|:-:|:-:|
+ | **1** | 152 ms (6.6/s) | 193 ms (5.2/s) | 302 ms (3.3/s) | 569 ms (1.8/s) |
+ | **4** | 278 ms (14.4/s) | 468 ms (8.6/s) | 935 ms (4.3/s) | 2,464 ms (1.6/s) |
+ | **8** | 467 ms (17.1/s) | 862 ms (9.3/s) | 1,728 ms (4.6/s) | 4,745 ms (1.7/s) |
+ | **16** | 837 ms (19.1/s) | 1,624 ms (9.9/s) | 3,814 ms (4.2/s) | 9,143 ms (1.7/s) |
+
+ > On CPU the model runs at ~152 ms/doc for short texts (seq=64, bs=1) — suitable for low-volume or batch-offline pipelines.
+
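+ CPU performance is sensitive to thread count. A minimal sketch of an 8-core, fp32 CPU setup like the one benchmarked above (`torch.set_num_threads` is the standard knob; whether the original run used it is our assumption):
+
+ ```python
+ import torch
+
+ torch.set_num_threads(8)                 # match the physical core count
+ model = model.float().to("cpu")          # fp32 on CPU, as benchmarked above
+ inputs = tokenizer(texts, return_tensors="pt", padding=True,
+                    truncation=True, max_length=256)
+ with torch.no_grad():
+     outputs = model(**inputs)
+ ```
+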
+ ### Daily Throughput Projections
+
+ Sustained throughput for a **single device**, running 24/7 at the optimal batch size:
+
+ | Sequence Length | GPU (A10G, bf16) | CPU (8-core, fp32) |
+ |:-:|:-:|:-:|
+ | 64 tokens | **39.8M docs/day** (460/s, bs=64) | 1.7M docs/day (19/s, bs=16) |
+ | 128 tokens | **30.1M docs/day** (348/s, bs=64) | 855K docs/day (10/s, bs=16) |
+ | 256 tokens | **17.9M docs/day** (207/s, bs=64) | 397K docs/day (4.6/s, bs=8) |
+ | 512 tokens | **8.7M docs/day** (101/s, bs=64) | 156K docs/day (1.8/s, bs=1) |
+
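+ <sub>The projections are straight arithmetic: docs/day = docs/s × 86,400 s/day, e.g. 460 docs/s × 86,400 ≈ 39.7M docs/day.</sub>
+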
+ #### Multi-GPU Scaling Estimates
+
+ | Config | seq=128 | seq=256 | seq=512 |
+ |--------|:-:|:-:|:-:|
+ | 1× A10G (24 GB, ~$1/hr) | 30M/day | 18M/day | 8.7M/day |
+ | 1× A100 (80 GB, ~$3/hr) | ~70M/day¹ | ~42M/day¹ | ~20M/day¹ |
+ | 4× A10G data-parallel | 120M/day | 72M/day | 35M/day |
+ | 8× A10G data-parallel | 240M/day | 143M/day | 70M/day |
+
+ <sub>¹ A100 estimates are linearly extrapolated from A10G numbers using the A100's ~2.3× higher memory bandwidth and larger batch capacity. Actual numbers will vary — benchmark on your target hardware.</sub>
+
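+ The data-parallel rows assume one independent model replica per GPU with the input stream sharded across them. A minimal sketch of that pattern (the `run_replica` worker and the pre-sharded `shards` list are illustrative, not part of this repo):
+
+ ```python
+ import torch
+ import torch.multiprocessing as mp
+ from transformers import AutoModel, AutoTokenizer
+
+ def run_replica(rank, shards):
+     # one full copy of the model per GPU
+     device = f"cuda:{rank}"
+     tok = AutoTokenizer.from_pretrained("openai/privacy-filter")
+     model = AutoModel.from_pretrained("openai/privacy-filter",
+                                       torch_dtype=torch.bfloat16).to(device)
+     for batch in shards[rank]:           # each replica consumes its own shard
+         inputs = tok(batch, return_tensors="pt", padding=True,
+                      truncation=True, max_length=256).to(device)
+         with torch.no_grad():
+             model(**inputs)
+
+ if __name__ == "__main__":
+     n_gpus = 4
+     shards = [...]                       # n_gpus lists of text batches
+     mp.spawn(run_replica, args=(shards,), nprocs=n_gpus)
+ ```
+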
+ ### Serving Recommendations
+
+ | Deployment Scenario | Recommended Config | Expected Perf |
+ |---|---|---|
+ | **Real-time API** (SLA <200 ms) | 1× GPU, bs=1, seq≤512 | ~106 ms p50, ~113 ms p95 |
+ | **Near-real-time** (SLA <500 ms) | 1× GPU, bs=8–16, seq≤512 | 53–73 docs/s, p95 <225 ms |
+ | **High-throughput batch** | 1× GPU, bs=64, seq=256 | 207 docs/s, 17.9M/day |
+ | **Max-throughput batch** | 1× GPU, bs=64, seq=64² | 460 docs/s, 39.8M/day |
+ | **CPU offline / dev** | CPU, bs=1, seq≤256 | 3–7 docs/s |
+
+ <sub>² At seq=64 most documents will be truncated; seq=128–256 is the better production balance.</sub>
+
+ **Key observations:**
+ - The model has a **fixed ~105 ms overhead** per forward pass regardless of sequence length (MoE routing + expert dispatch). Batching amortizes this cost across documents: per-doc cost drops from 106 ms (bs=1) to under 10 ms (bs=64); see the batching sketch below.
+ - **Memory is not the bottleneck** — even at bs=64/seq=512 the model uses only 6.2 GB. You can run this on a T4 (16 GB) with room to spare.
+ - **Optimal batch size for throughput**: bs=64 for all sequence lengths on A10G.
+ - **Optimal batch size under a latency SLA**: bs=8–16 gives good per-doc latency (7–19 ms, depending on sequence length) while keeping batch latency under 225 ms.
+
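+ A sketch of the batching pattern behind these numbers (chunk the stream, one forward pass per chunk; assumes the `tokenizer`/`model` objects from the usage examples):
+
+ ```python
+ import torch
+
+ def run_batched(texts, batch_size=64, max_length=256):
+     """Amortize the fixed ~105 ms per-call overhead across each chunk."""
+     for i in range(0, len(texts), batch_size):
+         chunk = texts[i:i + batch_size]
+         inputs = tokenizer(chunk, return_tensors="pt", padding=True,
+                            truncation=True, max_length=max_length).to(model.device)
+         with torch.no_grad():
+             yield model(**inputs)        # NER + category for the whole chunk
+ ```
+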
+ ---

  ## Training Strategy

@@ -171,6 +304,27 @@ top = probs.argmax().item()
  print(f"\nCategory: {categories[top]} ({probs[top]:.1%})")
  ```

+ ### Batched Inference (Production)
+
+ ```python
+ # Process a batch of documents — both tasks in a single forward pass.
+ # Assumes tokenizer, model, and doc_head are set up as in the example above.
+ texts = ["doc1...", "doc2...", "doc3...", ...]
+ inputs = tokenizer(texts, return_tensors="pt", padding=True,
+                    truncation=True, max_length=256).to(model.device)
+
+ with torch.no_grad():
+     outputs = model(**inputs, output_hidden_states=True)
+
+ # NER predictions for all docs: [batch, seq_len]
+ ner_preds = outputs.logits.argmax(dim=-1)
+
+ # Doc class for all docs: [batch] — mean-pool the final hidden states over
+ # non-padding tokens, then apply the classification head
+ hidden = outputs.hidden_states[-1]
+ mask = inputs["attention_mask"].unsqueeze(-1).to(hidden.dtype)
+ pooled = (hidden * mask).sum(1) / mask.sum(1).clamp(min=1)
+ doc_preds = doc_head(pooled).argmax(dim=-1)
+ ```
+
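+ To turn the raw ids into names, something like the following works (a sketch; it assumes the standard `config.id2label` mapping and the `categories` list from the example above):
+
+ ```python
+ for i in range(len(texts)):
+     keep = inputs["attention_mask"][i].bool()
+     tags = [model.config.id2label[t.item()] for t in ner_preds[i][keep]]
+     print(categories[doc_preds[i].item()], tags)
+ ```
+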
  ## Example Outputs

  | Input | PII Detected | Category (confidence) |