dqnWrite v0.1 3B-A0.8B

dqnWrite v0.1 is a lightweight creative writing language model developed by DQN Labs.
It is fine-tuned for storytelling, descriptive prose, dialogue, and imaginative writing tasks.

The model is based on IBM Granite 3.1 3B-A800M, a sparse Mixture-of-Experts (MoE) architecture where only ~800M parameters are active per token. This enables strong language capabilities while maintaining high inference speed and efficiency.
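To make the sparse-MoE idea concrete, here is a toy sketch of top-k expert routing: for each token, a router scores all experts but only the k best actually run, so only a fraction of total parameters is touched per token. This is an illustrative simplification, not the actual Granite router implementation.

```python
import numpy as np

def topk_moe_layer(x, gate_w, experts, k=2):
    """Toy Mixture-of-Experts layer: route each token to its top-k experts.

    x:       (n_tokens, d) token activations
    gate_w:  (d, n_experts) router weights
    experts: list of (d, d) weight matrices, one per expert
    """
    logits = x @ gate_w                          # (n_tokens, n_experts) router scores
    topk = np.argsort(logits, axis=1)[:, -k:]    # indices of the k best experts per token
    # softmax over only the selected experts' scores
    sel = np.take_along_axis(logits, topk, axis=1)
    weights = np.exp(sel - sel.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)

    out = np.zeros_like(x)
    for t in range(x.shape[0]):                  # only k experts execute per token
        for j, e in enumerate(topk[t]):
            out[t] += weights[t, j] * (x[t] @ experts[e])
    return out, topk

rng = np.random.default_rng(0)
d, n_experts, n_tokens = 8, 4, 3
x = rng.standard_normal((n_tokens, d))
gate_w = rng.standard_normal((d, n_experts))
experts = [rng.standard_normal((d, d)) for _ in range(n_experts)]
out, topk = topk_moe_layer(x, gate_w, experts, k=2)
print(topk.shape)  # each of the 3 tokens activated exactly 2 of the 4 experts
```

In the real model the same principle applies at much larger scale: roughly 800M of the ~3B parameters are read per token, which is what drives the speed advantage.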


Model Overview

  • Model Name: dqnWrite v0.1
  • Developer: DQN Labs
  • Base Model: IBM Granite 3.1 3B-A800M
  • Architecture: sparse Mixture-of-Experts (MoE)
  • Total Parameters: ~3B
  • Active Parameters: ~800M
  • Primary Domain: creative writing / language arts

Intended Use

dqnWrite is designed primarily for creative text generation.

Supported tasks include:

  • Story generation
  • Narrative scene writing
  • Dialogue writing
  • Character interactions
  • Descriptive worldbuilding
  • Writing prompts
  • Imaginative storytelling

The model is optimized for fast local inference, making it suitable for running on consumer hardware.
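A rough weight-memory estimate (assuming ~3B total parameters, 2 bytes per parameter in BF16, and ~0.5 bytes per parameter for a typical 4-bit quantization, ignoring activation and KV-cache overhead) shows why consumer hardware is viable:

```python
# Back-of-envelope weight memory for a ~3B-parameter model.
total_params = 3.0e9

bf16_gb = total_params * 2 / 1e9    # BF16: 2 bytes per parameter
q4_gb = total_params * 0.5 / 1e9    # 4-bit quantization: ~0.5 bytes per parameter

print(f"BF16 weights: ~{bf16_gb:.1f} GB")   # ~6.0 GB
print(f"4-bit weights: ~{q4_gb:.1f} GB")    # ~1.5 GB
```

At ~6 GB in BF16, the full-precision weights fit on common 8 GB GPUs, and a 4-bit quantization fits comfortably in laptop RAM.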


Dataset

The model was fine-tuned using the dataset:

TeichAI/mistral-small-creative-500x

This dataset consists of synthetic prompt-completion pairs distilled from the Mistral Small Creative model.

The dataset focuses on:

  • storytelling prompts
  • descriptive scene writing
  • creative narrative responses
  • imaginative writing tasks
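For readers unfamiliar with distilled fine-tuning data, a prompt-completion record typically looks like the sketch below. The field names and content here are hypothetical; the actual schema of TeichAI/mistral-small-creative-500x may differ.

```python
import json

# Hypothetical shape of one prompt-completion record; the real field
# names in TeichAI/mistral-small-creative-500x may differ.
record = {
    "prompt": "Write a short scene set in a lighthouse during a storm.",
    "completion": "The lamp swung in its gimbal as rain hammered the glass...",
}

line = json.dumps(record)    # serialized as one JSONL line
loaded = json.loads(line)
print(loaded["prompt"])
```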

Capabilities

dqnWrite performs best at:

  • descriptive storytelling
  • imaginative prompts
  • dialogue generation
  • narrative continuation
  • scene construction

The model tends to produce longer and more detailed responses compared to general-purpose models of similar size.


Limitations

As a 3B parameter model, dqnWrite has some limitations:

  • May struggle with complex reasoning tasks
  • Not designed for coding or technical tasks
  • Knowledge limited to pretraining cutoff
  • May occasionally repeat narrative patterns

The model is optimized specifically for creative text generation, not factual accuracy.
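Repeated narrative patterns can often be damped at inference time with a standard repetition penalty. The sketch below implements the common CTRL-style convention (divide positive logits, multiply negative ones, for tokens already generated); it is a generic sampler technique, not something specific to dqnWrite.

```python
def apply_repetition_penalty(logits, generated_ids, penalty=1.2):
    """Discourage tokens that already appeared in the output.

    Positive logits are divided by the penalty and negative ones multiplied,
    following the convention popularized by CTRL-style repetition penalties.
    """
    out = list(logits)
    for tid in set(generated_ids):
        out[tid] = out[tid] / penalty if out[tid] > 0 else out[tid] * penalty
    return out

logits = [3.0, 1.0, -0.5, 2.0]
penalized = apply_repetition_penalty(logits, generated_ids=[0, 3], penalty=1.5)
print(penalized)  # tokens 0 and 3 are now less likely to repeat
```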


Hardware and Performance

Because of its sparse architecture, only ~800M parameters are active per token, allowing fast inference even on lower-end devices.

Typical performance characteristics:

  • Fast local inference
  • Efficient memory usage
  • Suitable for laptops and consumer GPUs
  • Ideal for local writing tools
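Token-by-token decoding is typically memory-bandwidth bound: each generated token must read the active weights once, so the active parameter count sets a rough speed ceiling. A back-of-envelope estimate (the 100 GB/s bandwidth figure is an assumed laptop-class number, not a measurement):

```python
# Rough decode-speed ceiling: tokens/sec <= memory_bandwidth / active_weight_bytes.
active_params = 0.8e9        # ~800M active parameters per token
bytes_per_param = 2          # BF16
bandwidth_gb_s = 100         # assumed laptop-class memory bandwidth

active_bytes = active_params * bytes_per_param          # ~1.6 GB read per token
tokens_per_sec = bandwidth_gb_s * 1e9 / active_bytes
print(f"~{tokens_per_sec:.0f} tokens/sec upper bound")
```

A dense 3B model would need to read roughly 3.75x more weight bytes per token, which is why the sparse design decodes faster at the same total size.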

Version

dqnWrite v0.1

Initial experimental release of the dqnWrite creative writing model line.

Planned future versions may include:

  • larger parameter variants in the dqnWrite lineup
  • training on expanded creative datasets

Developed by DQN Labs

Local AI for everyone.
