# PasteProof PII Detector - ONNX (Browser-Ready)

Quantized ONNX version of pasteproof-pii-detector-v3 for fast browser-side inference.

## Performance

| Metric | Value |
|---|---|
| Model size | ~147 MB |
| Inference time | 50–100 ms (browser) |
| Accuracy | ~97% F1 (minimal loss from quantization) |

## Usage with Transformers.js

```js
import { pipeline } from '@xenova/transformers';

const detector = await pipeline(
  'token-classification',
  'joneauxedgar/pasteproof-pii-detector-onnx'
);

const results = await detector('const key = "sk_live_abc123";');
console.log(results);
```
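The pipeline returns one prediction per token, so multi-token secrets (like the API key above) come back as several BIO-tagged pieces. A minimal sketch of merging them into whole entities, assuming the usual Transformers.js token-classification output shape (`{ entity, word, index }`) and illustrative label names, not this model's actual labels:

```js
// Merge token-level BIO predictions into contiguous entity spans.
// The sample data below is illustrative, not real model output.
function mergeEntities(tokens) {
  const spans = [];
  for (const t of tokens) {
    const label = t.entity.replace(/^[BI]-/, '');
    // "I-" continues the previous span only if the labels match.
    const isContinuation =
      t.entity.startsWith('I-') &&
      spans.length > 0 &&
      spans[spans.length - 1].label === label;
    if (isContinuation) {
      spans[spans.length - 1].words.push(t.word);
    } else {
      spans.push({ label, words: [t.word] });
    }
  }
  return spans.map((s) => ({ label: s.label, text: s.words.join('') }));
}

// Hypothetical output for the snippet above (assumed label names):
const sample = [
  { entity: 'B-API_KEY', word: 'sk', index: 5 },
  { entity: 'I-API_KEY', word: '_live', index: 6 },
  { entity: 'I-API_KEY', word: '_abc123', index: 7 },
];
console.log(mergeEntities(sample));
// → [ { label: 'API_KEY', text: 'sk_live_abc123' } ]
```

Real tokenizers may prefix subword pieces (e.g. `##` or `▁`), so a production merger would also strip those markers before joining.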

## Usage with ONNX Runtime Web

```js
import * as ort from 'onnxruntime-web';

const session = await ort.InferenceSession.create('model.onnx');
// ... tokenize and run inference
```
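After `session.run()` returns, the token-classification output is a flat logits buffer of shape `[numTokens, numLabels]`; decoding it means taking the arg-max label per token. A sketch under assumptions: the label list below is hypothetical, and the real mapping is the `id2label` entry in `config.json`:

```js
// Hypothetical label set for illustration; read id2label from
// config.json in real code.
const ID2LABEL = ['O', 'B-API_KEY', 'I-API_KEY'];

// logits: flat Float32Array laid out as [numTokens, numLabels].
// Returns the arg-max label name for each token.
function decodeLogits(logits, numLabels) {
  const labels = [];
  for (let i = 0; i < logits.length; i += numLabels) {
    let best = 0;
    for (let j = 1; j < numLabels; j++) {
      if (logits[i + j] > logits[i + best]) best = j;
    }
    labels.push(ID2LABEL[best]);
  }
  return labels;
}

// Two tokens, three labels each (made-up numbers):
const labels = decodeLogits(
  new Float32Array([4.1, -1.0, 0.2, -2.0, 5.3, 1.1]),
  3
);
console.log(labels);
// → [ 'O', 'B-API_KEY' ]
```

Arg-max alone is enough for labeling; apply a softmax first only if you also need per-token confidence scores.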

## Files

- `model.onnx` - Quantized ONNX model
- `tokenizer.json` - Tokenizer vocabulary
- `tokenizer_config.json` - Tokenizer settings
- `config.json` - Model configuration
- `special_tokens_map.json` - Special token mappings

## Original Model

See `pasteproof-pii-detector-v3` for full details on entity types and training.
