---
title: groundlens — Hallucination Detection Demo
thumbnail: >-
  https://github.com/groundlens-dev/groundlens/blob/bc6d60ed03d2757fb71fa9317cef44f1da7d7f79/docs/assets/Logo_groundlens_new-05.png
colorFrom: yellow
colorTo: red
sdk: gradio
sdk_version: 6.14.0
app_file: app.py
pinned: true
license: mit
tags:
  - hallucination-detection
  - llm-evaluation
  - rag
  - grounding
  - nlp
  - groundlens
  - embedding-geometry
short_description: Geometric LLM hallucination detection. No second LLM.
---

[PyPI](https://pypi.org/project/groundlens/) · [GitHub](https://github.com/groundlens-dev/groundlens)

# groundlens — Hallucination Detection Demo

Detects LLM hallucinations using embedding geometry. No second LLM. Deterministic. Auditable. Benchmarked against Vectara HHEM-2.1-Open.

## Methods compared

- **groundlens SGI (with context):** ratio of Euclidean distances in the embedding space, dist(response, question) / dist(response, context). No model inference at evaluation time: one embedding call, one division (both geometric signals are sketched below).
- **groundlens DGI (without context):** cosine similarity between the response's displacement vector and the mean displacement of verified grounded pairs.
- **HHEM-2.1-Open (Vectara):** fine-tuned Flan-T5 classifier; full model inference per evaluation call.
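
Both geometric signals reduce to a few lines of arithmetic on top of an embedding call. The sketch below is illustrative, not the groundlens API: it assumes sentence-transformers (`all-MiniLM-L6-v2`) as the embedder and defines the displacement vector as response embedding minus question embedding; both choices are assumptions, and the actual groundlens embedder and definitions may differ.

```python
# Illustrative sketch of the SGI and DGI signals, NOT the groundlens API.
# Assumptions: sentence-transformers "all-MiniLM-L6-v2" as the embedder,
# and displacement = (response embedding - question embedding).
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

def sgi(question: str, context: str, response: str) -> float:
    """SGI: dist(response, question) / dist(response, context).
    A higher ratio means the response sits closer to the context than to
    the question; how groundlens thresholds this is not shown here."""
    e_q, e_c, e_r = model.encode([question, context, response])
    return float(np.linalg.norm(e_r - e_q) / np.linalg.norm(e_r - e_c))

def dgi(question: str, response: str,
        grounded_pairs: list[tuple[str, str]]) -> float:
    """DGI: cosine similarity between this response's displacement vector
    and the mean displacement of verified grounded (question, response) pairs."""
    e_q, e_r = model.encode([question, response])
    disp = e_r - e_q
    mean_disp = np.mean(
        [model.encode(r) - model.encode(q) for q, r in grounded_pairs],
        axis=0,
    )
    return float(np.dot(disp, mean_disp)
                 / (np.linalg.norm(disp) * np.linalg.norm(mean_disp)))
```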

## When they disagree

Disagreement surfaces Type III hallucinations — factual errors within a correct semantic frame. Embedding geometry cannot detect these: the response occupies the geometrically correct region of the space despite being factually wrong. HHEM's classifier may catch some of these cases. The two methods are orthogonal signals, not competing alternatives.
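
Orthogonal signals invite a simple triage: route to review the cases where geometry passes but the classifier fails, since that is the Type III pattern. The sketch below assumes the `sgi()` helper from the previous block, HHEM-2.1-Open loaded as described on its Hugging Face model card, and illustrative cutoffs (1.0 for the SGI ratio, 0.5 for HHEM); none of these thresholds come from groundlens.

```python
# Sketch of combining the two orthogonal signals; thresholds are assumed.
from transformers import AutoModelForSequenceClassification

# HHEM-2.1-Open exposes a predict() helper via trust_remote_code; see the
# model card for vectara/hallucination_evaluation_model.
hhem = AutoModelForSequenceClassification.from_pretrained(
    "vectara/hallucination_evaluation_model", trust_remote_code=True
)

def triage(question: str, context: str, response: str) -> str:
    geometric_ok = sgi(question, context, response) > 1.0  # assumed cutoff
    # HHEM scores (premise, hypothesis) pairs with a factual-consistency
    # probability in [0, 1]; 0.5 is an assumed cutoff.
    hhem_ok = float(hhem.predict([(context, response)])[0]) > 0.5
    if geometric_ok and not hhem_ok:
        # Geometry says the response occupies the right region of the
        # space, the classifier says it is unsupported: the Type III
        # pattern. Route to human review.
        return "review: possible Type III hallucination"
    return "pass" if geometric_ok and hhem_ok else "fail"
```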

## Install the library

```bash
pip install groundlens
```

## Links

Research