🧠 DQN GPT v0.1
Local AI for Everyone.
DQN GPT v0.1 is a lightweight, locally runnable assistant built on Phi-3 Mini (3.8B parameters).
This release is an early identity-alignment version focused on establishing personality and behavioral consistency. It is not yet a domain-specialized or heavily fine-tuned model.
This is the foundation.
🚀 Vision
Local AI for everyone.
DQN GPT exists to prove that powerful AI does not need to live in a datacenter.
It should run:
- On laptops
- On student machines
- On modest hardware
- On personal servers
- On local networks
AI should be accessible.
🧠 Base Model
- Architecture: Phi-3 Mini
- Parameter Count: 3.8B
- Context Length: 4K (as supported by the base model, microsoft/Phi-3-mini-4k-instruct)
- Format: GGUF (llama.cpp / LM Studio compatible)
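As a minimal sketch, the GGUF build can be loaded with llama-cpp-python (assumptions: the `llama-cpp-python` package is installed and the quantized file has been downloaded; the filename below is hypothetical):

```python
# Sketch: run the GGUF build via llama-cpp-python (pip install llama-cpp-python).
# MODEL_PATH is a hypothetical local filename for the downloaded Q4_K_M quant.
MODEL_PATH = "dqnGPT-v0.1-3.8B.Q4_K_M.gguf"

def ask(prompt: str, max_tokens: int = 64) -> str:
    from llama_cpp import Llama  # imported lazily; requires llama-cpp-python
    llm = Llama(
        model_path=MODEL_PATH,
        n_ctx=4096,   # matches the base model's 4K context window
        n_threads=4,  # CPU inference; tune to your core count
    )
    out = llm.create_chat_completion(
        messages=[{"role": "user", "content": prompt}],
        max_tokens=max_tokens,
    )
    return out["choices"][0]["message"]["content"]
```

The same GGUF file loads unchanged in LM Studio; only the file path and thread count need to match your machine.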
🧠 Fine-Tuning Details
This version has been fine-tuned on a minimal identity-alignment dataset for testing purposes.
Focus areas:
- Assistant identity consistency
- Stable conversational tone
- Reduced drift from defined persona
This is not a performance-focused or coding-specialized release yet.
Future updates will include:
- Coding-focused fine-tuning
- Hallucination reduction
- Improved reasoning
- Broader conversational robustness
💻 Hardware Requirements
Designed to run locally.
Recommended:
- 8GB+ RAM (Q4_K_M quant)
- CPU inference supported
- GPU optional
The chosen quantization level trades memory footprint against output quality and inference speed.
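As a rough illustration, weight memory can be approximated as parameter count × bits-per-weight ÷ 8. The bits-per-weight figures below are approximate averages for common llama.cpp quant schemes, not exact file sizes:

```python
# Rough weight-memory estimate for quantized GGUF models.
# Assumption: the effective bits-per-weight values are approximate
# averages for llama.cpp quant schemes; real files vary slightly.

PARAMS = 3.8e9  # DQN GPT v0.1 parameter count

BITS_PER_WEIGHT = {
    "Q4_K_M": 4.8,   # ~4.8 effective bits per weight (approximate)
    "Q5_K_M": 5.7,
    "Q8_0": 8.5,
    "F16": 16.0,
}

def model_size_gb(quant: str, params: float = PARAMS) -> float:
    """Approximate weight-memory size in GiB for a given quant scheme."""
    bits = BITS_PER_WEIGHT[quant]
    return params * bits / 8 / 2**30

for quant in BITS_PER_WEIGHT:
    print(f"{quant:7s} ~{model_size_gb(quant):.1f} GiB weights")
```

By this estimate Q4_K_M needs roughly 2 GiB for weights alone; the KV cache, context buffers, and runtime overhead sit on top of that, which is why 8GB+ RAM is the recommendation.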
📦 Intended Use
- Local assistant
- Personal AI experimentation
- LAN-hosted AI servers
- Offline productivity
- Student AI access
⚠️ Limitations
- Early-stage release
- Minimal dataset fine-tune
- Not benchmark-optimized
- Not trained for specialized domains yet
This is v0.1: a foundation build.
🛣️ Roadmap
- Coding-specialized variant
- Refined conversational dataset
- Larger releases
- Improved reliability
- Public evaluation benchmarks
🌍 Philosophy
AI should not be locked behind subscriptions.
AI should not require a supercomputer.
AI should run where you are.
Local AI for everyone.
Model tree for DQN-Labs/dqnGPT-v0.1-3.8B
- Base model: microsoft/Phi-3-mini-4k-instruct