# schift-nli

ONNX-quantized DeBERTa-v3-xsmall for Natural Language Inference (NLI), optimized for local inference in Dot.

## Usage in Dot

The model is loaded automatically by Dot's local NLI classifier; no manual setup is needed.

## Usage with Transformers.js

```js
import { pipeline } from '@huggingface/transformers';

// In Transformers.js v3 (@huggingface/transformers), quantization is
// selected with the `dtype` option; the older `quantized: true` flag
// belongs to the v2 (@xenova/transformers) API.
const classifier = await pipeline('text-classification', 'schift-io/schift-nli', {
  dtype: 'q8',
});

const result = await classifier(
  { text: 'A man is eating pizza', text_pair: 'A man is eating food' },
  { top_k: 3 }
);
// [{ label: 'entailment', score: 0.97 }, ...]
```
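With `top_k: 3`, the classifier returns one score per NLI label. A minimal post-processing sketch, assuming the three-label output shown above (the `topLabel` helper and the 0.5 threshold are illustrative choices, not part of this model card):

```js
// Pick the highest-scoring NLI label; fall back to 'neutral' when the
// best score is below a confidence threshold (0.5 is an assumption).
function topLabel(results, threshold = 0.5) {
  const best = results.reduce((a, b) => (b.score > a.score ? b : a));
  return best.score >= threshold ? best.label : 'neutral';
}

const scores = [
  { label: 'entailment', score: 0.97 },
  { label: 'neutral', score: 0.02 },
  { label: 'contradiction', score: 0.01 },
];
console.log(topLabel(scores)); // 'entailment'
```

Thresholding like this is useful when a downstream feature should only act on confident entailment/contradiction decisions.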

## License

Apache 2.0 (inherited from base model)
