---
title: Humanzise API
emoji: πͺ
colorFrom: green
colorTo: indigo
sdk: docker
app_port: 7860
pinned: false
short_description: Free AI text humanizer and detector
---
# Humanzise

Free, open-source AI text humanizer + AI detector. Paste any AI-generated text and rewrite it to sound more natural, or check how likely an existing text was written by AI.
- Frontend: Next.js 16 + shadcn/ui + Tailwind CSS (deployed on Vercel)
- Backend: FastAPI + PyTorch + DeBERTa-v3 detector (deployed on Hugging Face Spaces)
- Detector model: desklib/ai-text-detector-v1.01, current leader on the RAID benchmark
- Humanizer: rule-based pipeline (WordNet synonyms + contraction expansion + academic transitions + citation preservation)
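The humanizer's rule-based steps can be sketched roughly as below. This is a minimal illustration of the contraction-expansion stage only, not the actual `humanizer_core.py` code; the contraction table and regex here are assumptions:

```python
import re

# Small sample contraction table; the real pipeline's table and its
# other rules (WordNet synonyms, academic transitions, citation
# preservation) are more extensive.
CONTRACTIONS = {
    "don't": "do not",
    "can't": "cannot",
    "it's": "it is",
    "won't": "will not",
}

def expand_contractions(text: str) -> str:
    """Replace each known contraction with its expanded form."""
    pattern = re.compile(
        r"\b(" + "|".join(re.escape(c) for c in CONTRACTIONS) + r")\b",
        flags=re.IGNORECASE,
    )

    def repl(m: re.Match) -> str:
        expanded = CONTRACTIONS[m.group(0).lower()]
        # Preserve a leading capital from the original token.
        if m.group(0)[0].isupper():
            expanded = expanded[0].upper() + expanded[1:]
        return expanded

    return pattern.sub(repl, text)
```

Each rule in the pipeline is a pure text-to-text transform like this one, which keeps the stages easy to chain and test independently.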
## Repository layout

```
humanzise/
├── api/                     FastAPI app (entry point: api.humanize_api:app)
│   └── humanize_api.py
├── utils/                   Backend logic
│   ├── humanizer_core.py    Text humanization pipeline
│   ├── ai_detection_utils.py
│   ├── desklib_model.py     Custom DeBERTa-v3 wrapper for desklib weights
│   ├── model_loaders.py
│   └── pdf_utils.py         PDF text extraction
├── web/                     Next.js frontend
│   └── src/
│       ├── app/
│       ├── components/
│       └── lib/
├── Dockerfile               HF Spaces Docker image
├── requirements.txt         Production deps (lean, CPU-only torch)
├── requirements-local.txt   All dev deps
└── DEPLOY.md                Step-by-step deployment guide
```
## Running locally

### Backend (Python 3.12)

```shell
python -m venv venv
source venv/Scripts/activate   # venv/bin/activate on macOS/Linux
pip install -r requirements-local.txt
python -m spacy download en_core_web_sm
python -m uvicorn api.humanize_api:app --reload --port 8000
```

Scalar docs: http://localhost:8000/docs
### Frontend (Node 20+)

```shell
cd web
npm install
npm run dev
```

Open http://localhost:3000. Set NEXT_PUBLIC_API_BASE_URL in web/.env.local if your backend isn't on http://127.0.0.1:8000.
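For example, pointing the frontend at a deployed Space might look like this (the hostname below is a placeholder; use your own backend's URL):

```shell
# web/.env.local
NEXT_PUBLIC_API_BASE_URL=https://your-username-humanzise.hf.space
```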
## API endpoints

| Method | Path | Description |
|---|---|---|
| GET | /health | Liveness probe |
| POST | /humanize | Rewrite AI text to sound more natural |
| POST | /detect | Score text for AI likelihood (desklib DeBERTa-v3) |
| POST | /extract-file | Extract text from uploaded PDF/TXT/MD |

All endpoints use JSON request/response, except /extract-file, which takes multipart/form-data.
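With the backend running locally, the JSON endpoints can be called from Python like this. The request field name `text` and the response shape are assumptions here; check the Scalar docs at /docs for the actual schema:

```python
import json
from urllib import request

API_BASE = "http://127.0.0.1:8000"

def humanize(text: str, base_url: str = API_BASE) -> dict:
    """POST text to /humanize and return the parsed JSON response."""
    payload = json.dumps({"text": text}).encode("utf-8")
    req = request.Request(
        f"{base_url}/humanize",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read().decode("utf-8"))
```

Swap `/humanize` for `/detect` in the URL to get an AI-likelihood score instead of a rewrite.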
## Deployment

The free deployment path is documented in DEPLOY.md:

- Frontend → Vercel (free, web/ subfolder)
- Backend → Hugging Face Spaces (Docker SDK, free 16 GB RAM)
## Credits

Forked from DadaNanjesha/AI-content-detector-Humanizer, the original Streamlit app. This fork replaced the Streamlit UI with a Next.js frontend, modernized the backend, and swapped in the desklib detector.
## License

MIT. See LICENSE.