---
title: TruthLens
emoji: 🦀
colorFrom: red
colorTo: pink
sdk: gradio
sdk_version: 5.46.0
app_file: app.py
pinned: false
license: apache-2.0
short_description: 'TruthLens – Lite: a public demo that verifies claims'
---

# 🧭 TruthLens – Misinformation-Aware RAG

TruthLens is a **fact-checking AI demo** that shows how modern GenAI can be made **responsible, transparent, and safe**.

It verifies controversial claims (vaccines, climate, elections, etc.) using **retrieval-augmented generation**, checks its own answers for **faithfulness**, and applies **safety filters**.

---

## 🚀 How it works

1. **Input a claim or question**
   Example: *"Did humans cause global warming?"*
2. **Retrieve sources**
   The app searches trusted sources (WHO, IPCC, election reports). You can also paste your own text.
3. **Generate a grounded answer**
   TruthLens writes an answer with [1], [2]-style citations drawn from those sources.
4. **Self-check**
   It measures how well the answer actually matches the sources (faithfulness score).
5. **Safety filters**
   - Redacts personal information (PII).
   - Flags harmful language with a toxicity score.

---

## 🛠️ Tech stack

- **Gradio** (UI & deployment)
- **Transformers** (Flan-T5 generation, DeBERTa NLI, BERT NER, Toxic-BERT)
- **SentenceTransformers** (MiniLM embeddings)
- **CrossEncoder** (MS-MARCO reranker)
- **scikit-learn / pandas / numpy / matplotlib**

All models are **CPU-friendly**, so the demo runs in a free Hugging Face Space.

---

## 🌍 Why it matters

- Tackles **misinformation** around health, climate, and politics.
- Demonstrates **responsible AI practices** (grounding, self-checking, safety).
- Shows how a **Lead AI Developer** designs not just models, but **systems** ready for production.

---

## ▶️ Try it

1. Enter a claim or question.
2. (Optional) Paste your own text sources.
3. Click **Run TruthLens** → see the answer, citations, faithfulness score, and safe-share version.
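As a lightweight illustration of the retrieval step (step 2 above), here is a sketch using scikit-learn's TF-IDF, which is already in the stack, as a stand-in for the MiniLM embeddings and MS-MARCO reranker the demo actually uses. The corpus and claim are invented for the example:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Toy corpus standing in for trusted sources (WHO, IPCC, election reports).
sources = [
    "The IPCC concludes human activities are the dominant cause of observed warming.",
    "WHO states that vaccines are rigorously tested for safety before approval.",
    "Election audits in 2020 found no evidence of widespread fraud.",
]
claim = "Did humans cause global warming?"

# Embed the claim and sources in the same TF-IDF space,
# then rank sources by cosine similarity to the claim.
vectorizer = TfidfVectorizer().fit(sources + [claim])
source_vecs = vectorizer.transform(sources)
claim_vec = vectorizer.transform([claim])
scores = cosine_similarity(claim_vec, source_vecs)[0]
ranked = sorted(zip(scores, sources), reverse=True)
print(ranked[0][1])  # the climate source ranks first
```

In the app itself, the bi-encoder retrieves candidate passages and the CrossEncoder reranks them for precision; the ranking idea is the same.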
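The self-check (step 4) can be sketched as sentence-level scoring of the answer against its sources. The demo scores entailment with a DeBERTa NLI model; `overlap_score` below is a toy lexical stand-in so the sketch stays self-contained and CPU-light:

```python
import re

def overlap_score(answer_sentence: str, source: str) -> float:
    """Toy stand-in for the NLI entailment probability the demo gets from DeBERTa."""
    tokenize = lambda s: set(re.findall(r"[a-z]+", s.lower()))
    a, s = tokenize(answer_sentence), tokenize(source)
    return len(a & s) / len(a) if a else 0.0

def faithfulness(answer: str, sources: list[str]) -> float:
    """Score each answer sentence against its best-supporting source, then average."""
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", answer.strip()) if s]
    per_sentence = [max(overlap_score(sent, src) for src in sources) for sent in sentences]
    return sum(per_sentence) / len(per_sentence)

sources = ["Human activities are the dominant cause of recent global warming."]
grounded = "Human activities are the dominant cause of recent warming."
print(faithfulness(grounded, sources))  # fully supported -> 1.0
```

An answer that wanders away from the sources (e.g. "Warming is caused entirely by solar cycles.") scores much lower, which is what the faithfulness gauge in the UI surfaces.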
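For the PII redaction in step 5, the demo relies on a BERT NER model to catch names and locations; the regex patterns below are a minimal illustrative stand-in that only handles emails and US-style phone numbers:

```python
import re

# Minimal regex sketch of PII redaction. The demo's NER model covers far more
# entity types; these two patterns are illustrative only.
PII_PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace each detected PII span with a bracketed label."""
    for label, pattern in PII_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Contact jane.doe@example.org or 555-123-4567 for details."))
# Contact [EMAIL] or [PHONE] for details.
```

The redacted text is what the "safe-share version" in the UI is built from.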
---

## 👔 About this project

Created by **Aso Bozorgpanah** as a portfolio demo for **Lead AI Developer** roles.
Focus: explainability, safety, and production-readiness in Generative AI.