Tags: GGUF, roleplay, conversational

LucentPersonika 1.2

LucentPersonika 1.2 is an improved iteration of LucentPersonika, a lightweight roleplay and personality-driven language model developed by Lucid Research.

This version uses a refined dataset and incorporates improvements to generation stability, short-prompt handling, and character consistency, while maintaining expressive dialogue and immersive character interaction.

Built on the Qwen2.5-0.5B base model and fine-tuned using the youndukn/ROLE_PLAY_INSTRUCT dataset with slight modifications, LucentPersonika 1.2 focuses on stylistic roleplay rather than raw reasoning performance.


Model Overview

  • Developer: Lucid Research
  • Model Name: LucentPersonika 1.2
  • Base Model: Qwen/Qwen2.5-0.5B
  • Architecture: Transformer
  • Fine-tuning Method: LoRA
  • Primary Use: Roleplay, character dialogue, creative interactions
  • Parameter Size: ~0.5B

Improvements in 1.2

  • Reduced repeated token generation
  • Better handling of short prompts (e.g., greetings, one-line inputs)
  • Improved in-character consistency across conversational turns
  • Fine-tuned on a slightly modified youndukn/ROLE_PLAY_INSTRUCT dataset for stronger personality anchoring

This release focuses on stability and stylistic improvements rather than architectural changes.


Intended Capabilities

LucentPersonika 1.2 is optimized for:

  • Character roleplay
  • Personality-driven responses
  • Creative conversations
  • Fictional scenarios
  • Dialogue generation

Its small size makes it well-suited for low-latency deployments and cost-efficient inference.
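The cost-efficiency claim can be checked with back-of-envelope arithmetic: at roughly 0.5B parameters, an 8-bit GGUF quantization stores about one byte per weight, so the weights alone fit in roughly half a gigabyte. The figures in the sketch below are approximations, not measured file sizes (real GGUF files add metadata and per-block quantization scales).

```python
# Rough weight-memory estimate for a ~0.5B-parameter model.
# Back-of-envelope figures only, not measured GGUF file sizes.

PARAMS = 0.5e9  # ~0.5B parameters, per the model card

def weight_bytes(params: float, bits_per_param: float) -> float:
    """Approximate bytes needed to store the weights alone."""
    return params * bits_per_param / 8

fp16_gb = weight_bytes(PARAMS, 16) / 1e9  # half precision
q8_gb = weight_bytes(PARAMS, 8) / 1e9     # 8-bit quantized

print(f"fp16: ~{fp16_gb:.1f} GB, 8-bit: ~{q8_gb:.1f} GB")
```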


Limitations

LucentPersonika 1.2 is not designed for factual tasks or complex reasoning.

Users should expect:

  • Occasional factual inaccuracies
  • Simplified reasoning
  • Confident but incorrect answers
  • Reduced performance on multi-step logic

Because it is a small model, it may occasionally over-rely on learned character archetypes when given low-context prompts.

It should not be relied upon for legal, medical, financial, or safety-critical use cases.


Training Data

  • Dataset: youndukn/ROLE_PLAY_INSTRUCT (slightly modified)
  • Focus: Character-driven multi-turn dialogue and roleplay interactions
  • No proprietary datasets were used

The dataset modifications focused on improving character consistency, scene immersion, and short-turn interactions.


Training Approach

  • Parameter-efficient LoRA fine-tuning
  • Preserved base model fluency while specializing in expressive, in-character responses

The goal of 1.2 was improved personality anchoring and conversational stability, not full behavioral retraining.
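The parameter efficiency of LoRA comes from learning two low-rank factors B (d×r) and A (r×d) in place of a full d×d weight update, so the effective weight is W' = W + BA. The toy NumPy sketch below illustrates the idea; the dimensions are illustrative only, not the actual Qwen2.5-0.5B shapes, and real fine-tuning applies this per attention/MLP projection via a library such as PEFT.

```python
import numpy as np

# Toy LoRA illustration: W' = W + B @ A, with rank r << d.
# Dimensions are illustrative, not the real Qwen2.5-0.5B shapes.
d, r = 1024, 8

rng = np.random.default_rng(0)
W = rng.standard_normal((d, d))  # frozen base weight
B = np.zeros((d, r))             # LoRA factor, initialized to zero
A = rng.standard_normal((r, d))  # LoRA factor, random init

# Effective weight at inference time; equals W while B is all zeros.
W_eff = W + B @ A

full_params = d * d          # what full fine-tuning would update
lora_params = d * r + r * d  # what LoRA actually trains
print(f"trainable: {lora_params} vs {full_params} "
      f"({lora_params / full_params:.1%})")
```

With these toy shapes, LoRA trains about 1.6% of the parameters a full update would touch, which is why the base model's fluency is preserved.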


Example Prompt

Prompt:

Imagine you are a medieval knight. Describe your morning routine before a tournament.

Expected Behavior:

The model responds in character, maintaining a thematic voice, immersive detail, and consistent personality traits.
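As a sketch of how such a prompt might be wrapped for inference, the helper below formats a system persona and a user turn using the ChatML-style template that the Qwen2.5 family uses. The persona text and helper name are illustrative assumptions, not taken from the model card; a runtime such as llama.cpp can usually apply the template automatically from the GGUF metadata.

```python
# Build a ChatML-style prompt, the template family used by Qwen2.5
# models. The persona text is a hypothetical example.

def build_prompt(system: str, user: str) -> str:
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        f"<|im_start|>assistant\n"
    )

prompt = build_prompt(
    "You are a medieval knight. Stay in character.",
    "Describe your morning routine before a tournament.",
)
print(prompt)
```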


Responsible Use

LucentPersonika 1.2 is intended for creative and entertainment-oriented applications. Developers should apply appropriate safeguards depending on deployment context.


License

LucentPersonika 1.2 is derived from Qwen2.5-0.5B, which is released under the Apache 2.0 license. All usage must comply with the terms of the original license.
