Gemma-4-E2B-it-abliterated (LiteRT-LM)

LiteRT-LM export of huihui-ai/Huihui-gemma-4-E2B-it-abliterated for on-device / edge inference workflows.

Model File

  • Gemma-4-E2B-it-abliterated.litertlm

Source

  • Base checkpoint: huihui-ai/Huihui-gemma-4-E2B-it-abliterated
  • Export pipeline: safetensors-to-litertlm

Export Notes

  • Export format: .litertlm (LiteRT-LM bundle)
  • Quantization: INT8 profile (dynamic_wi8_afp32)
  • Intended runtime: litert-lm CLI / LiteRT-LM compatible apps
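
The `dynamic_wi8_afp32` profile means the weights are stored as INT8 while activations stay in FP32, with weights dequantized on the fly at runtime. The NumPy sketch below is purely illustrative (symmetric per-channel quantization, not the converter's actual kernels) and exists only to show what that profile name implies:

```python
import numpy as np

def quantize_weights_int8(w: np.ndarray):
    """Symmetric per-output-channel int8 quantization of a weight matrix."""
    scale = np.abs(w).max(axis=1, keepdims=True) / 127.0
    scale = np.where(scale == 0, 1.0, scale)  # avoid divide-by-zero on all-zero rows
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dynamic_matmul(x_fp32: np.ndarray, q: np.ndarray, scale: np.ndarray):
    """fp32 activations x int8 weights: dequantize weights, then matmul (the 'afp32' part)."""
    return x_fp32 @ (q.astype(np.float32) * scale).T

rng = np.random.default_rng(0)
w = rng.normal(size=(8, 16)).astype(np.float32)   # toy fp32 weights
x = rng.normal(size=(4, 16)).astype(np.float32)   # toy fp32 activations
q, s = quantize_weights_int8(w)
y_ref = x @ w.T                  # full-precision reference
y_q = dynamic_matmul(x, q, s)    # int8-weight path; small reconstruction error
```

This also illustrates why outputs can drift slightly from the original checkpoint: the int8 round-trip introduces a small per-weight error.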

Quick Start (CPU)

litert-lm run ./Gemma-4-E2B-it-abliterated.litertlm --prompt "Hi" --backend cpu
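
For scripting, the one-liner above can be assembled and launched from Python. This is a minimal sketch that only mirrors the command shown here; it assumes the `litert-lm` binary and these flags are available on your system:

```python
import shlex
import subprocess

def build_run_command(model_path: str, prompt: str, backend: str = "cpu") -> list[str]:
    """Assemble the litert-lm invocation from the quick-start line above."""
    return ["litert-lm", "run", model_path, "--prompt", prompt, "--backend", backend]

cmd = build_run_command("./Gemma-4-E2B-it-abliterated.litertlm", "Hi")
print(shlex.join(cmd))
# Run it only if litert-lm is installed:
# subprocess.run(cmd, check=True)
```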

Limitations

  • Behavior may differ from the original HF checkpoint due to conversion, quantization, and runtime differences.
  • Some export profiles that reduce memory pressure change the bundle's internal section layout, which can in turn affect runtime behavior.

Safety

This model may generate unsafe or incorrect content. Evaluate carefully for your use case and apply application-level safeguards where needed.

License

Please follow the upstream license and usage terms of:

  • huihui-ai/Huihui-gemma-4-E2B-it-abliterated
  • underlying Gemma model family terms