🧠 DQN GPT v0.1

Local AI for Everyone.

DQN GPT v0.1 is a lightweight, locally runnable assistant built on Phi-3 Mini (3.8B parameters).

This release is an early identity-alignment version focused on establishing personality and behavioral consistency. It is not yet a domain-specialized or heavily fine-tuned model.

This is the foundation.


🚀 Vision

Local AI for everyone.

DQN GPT exists to prove that powerful AI does not need to live in a datacenter.

It should run:

  • On laptops
  • On student machines
  • On modest hardware
  • On personal servers
  • On local networks

AI should be accessible.


🧠 Base Model

  • Architecture: Phi-3 Mini
  • Parameter Count: 3.8B
  • Context Length: 128K (as supported by base model)
  • Format: GGUF (llama.cpp / LM Studio compatible)
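Because the weights ship as GGUF, the model can be run directly with a llama.cpp build. A minimal sketch (the filename below is a placeholder for whichever quant file you actually download):

```shell
# Chat with the model using llama.cpp's CLI.
# -c sets the context window (the base model supports up to 128K,
# but larger contexts cost more memory); -n caps generated tokens.
./llama-cli \
  -m dqnGPT-v0.1-3.8B-Q4_K_M.gguf \
  -c 4096 \
  -n 256 \
  -p "Hello! Who are you?"
```

LM Studio users can skip the command line entirely and load the same GGUF file through the app's model browser.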

🔧 Fine-Tuning Details

This version has been fine-tuned on a minimal identity-alignment dataset for testing purposes.

Focus areas:

  • Assistant identity consistency
  • Stable conversational tone
  • Reduced drift from defined persona

This is not a performance-focused or coding-specialized release yet.

Future updates will include:

  • Coding-focused fine-tuning
  • Hallucination reduction
  • Improved reasoning
  • Broader conversational robustness

💻 Hardware Requirements

Designed to run locally.

Recommended:

  • 8GB+ RAM (Q4_K_M quant)
  • CPU inference supported
  • GPU optional

The quantization level trades memory footprint against output quality: smaller quants such as Q4_K_M use substantially less RAM at a modest quality cost, while Q8_0 or F16 preserve more fidelity but need correspondingly more memory.
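As a rough sketch of that trade-off, the on-disk (and approximate in-RAM) size can be estimated from the parameter count and the quant's average bits per weight. The bits-per-weight figures below are approximations for llama.cpp quant types, not exact values:

```python
# Rough GGUF file-size estimate per quantization level.
# Bits-per-weight values are approximate averages, not exact.
BITS_PER_WEIGHT = {
    "Q4_K_M": 4.85,
    "Q5_K_M": 5.69,
    "Q8_0": 8.50,
    "F16": 16.0,
}

def estimated_size_gb(params: float, quant: str) -> float:
    """Approximate model size in gigabytes for a given quant level."""
    return params * BITS_PER_WEIGHT[quant] / 8 / 1e9

# Estimates for a 3.8B-parameter model:
for quant in BITS_PER_WEIGHT:
    print(f"{quant}: ~{estimated_size_gb(3.8e9, quant):.1f} GB")
```

At roughly 2.3 GB for Q4_K_M, the weights plus KV cache and OS overhead fit comfortably within the recommended 8 GB of RAM.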


📦 Intended Use

  • Local assistant
  • Personal AI experimentation
  • LAN-hosted AI servers
  • Offline productivity
  • Student AI access
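For the LAN-hosted use case, llama.cpp's bundled `llama-server` exposes an OpenAI-compatible HTTP API. A minimal sketch (the model filename and the IP address are placeholders for your own setup):

```shell
# Serve the model to the local network.
# --host 0.0.0.0 makes it reachable from other machines on the LAN.
./llama-server -m dqnGPT-v0.1-3.8B-Q4_K_M.gguf --host 0.0.0.0 --port 8080

# From any machine on the network (replace the IP with the host's address):
curl http://192.168.1.10:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello!"}]}'
```

Because the endpoint follows the OpenAI chat-completions shape, most existing OpenAI-compatible client libraries can point at it by overriding the base URL.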

⚠ Limitations

  • Early-stage release
  • Minimal dataset fine-tune
  • Not benchmark-optimized
  • Not trained for specialized domains yet

This is v0.1: a foundation build.


🛣 Roadmap

  • Coding-specialized variant
  • Refined conversational dataset
  • Larger releases
  • Improved reliability
  • Public evaluation benchmarks

🌍 Philosophy

AI should not be locked behind subscriptions.

AI should not require a supercomputer.

AI should run where you are.

Local AI for everyone.

Model repository: DQN-Labs/dqnGPT-v0.1-3.8B (GGUF format, 4-bit quantization available).