# Genshin Starlit
Genshin Starlit is a Llama-based causal language model fine-tuned for
roleplay and character-driven dialogue.
The model prioritizes immersive conversational flow and in-character consistency over strict factual accuracy.
## Model Details
- Architecture: LlamaForCausalLM
- Parameters: ~71B
- Precision: bfloat16 (BF16)
- Context Length: up to 131k tokens
- Format: Safetensors (sharded)
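The parameter count and precision above set a floor on the memory needed just to hold the weights. A rough back-of-the-envelope check (illustrative only; it ignores activations and KV-cache, and assumes 2 bytes per BF16 parameter):

```python
def weights_size_gib(n_params: float, bytes_per_param: int) -> float:
    """Approximate in-memory size of the model weights in GiB."""
    return n_params * bytes_per_param / 1024**3

# ~71B parameters stored in bfloat16 (2 bytes each)
n_params = 71e9
bf16_gib = weights_size_gib(n_params, 2)  # roughly 132 GiB
fp32_gib = weights_size_gib(n_params, 4)  # roughly 264 GiB, for comparison

print(f"BF16 weights: ~{bf16_gib:.0f} GiB")
print(f"FP32 weights: ~{fp32_gib:.0f} GiB")
```

In practice this means multi-GPU or CPU-offloaded loading (e.g. via `device_map="auto"`, as in the usage example below) unless the checkpoint is quantized.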
## Training and Composition
This model was created by merging two LoRA adapters into a single checkpoint:
- **Character Dialogue Adapter**: trained on character-specific dialogue transcripts to improve tone, personality, and speech consistency.
- **Lore Knowledge Adapter**: trained on lore-oriented question–answer style data to improve narrative coherence and lore-aware responses during roleplay.
The merged model balances immersive roleplay with contextual lore awareness during generation.
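For intuition, merging a LoRA adapter folds its low-rank update back into the dense base weight: W' = W + (alpha/r) · B · A. A minimal NumPy sketch with toy shapes (not the actual merge pipeline used for this checkpoint; dimensions and scaling are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; real adapters use ranks like 16-64 on much larger matrices.
d_out, d_in, r, alpha = 8, 8, 2, 4

W = rng.normal(size=(d_out, d_in))  # frozen base weight
A = rng.normal(size=(r, d_in))      # LoRA down-projection
B = rng.normal(size=(d_out, r))     # LoRA up-projection

# Merging folds the low-rank update into the dense weight.
W_merged = W + (alpha / r) * (B @ A)

# A forward pass through the merged weight equals the base layer
# plus the adapter applied separately.
x = rng.normal(size=(d_in,))
y_adapter = W @ x + (alpha / r) * (B @ (A @ x))
y_merged = W_merged @ x
assert np.allclose(y_adapter, y_merged)
```

Merging two adapters, as described above, applies two such updates to the same base checkpoint; after merging, inference needs no adapter machinery at all.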
## Intended Use

### Recommended uses
- Roleplay and character simulation
- Creative writing and dialogue
- Interactive chat applications
- Narrative-driven assistants
### Not recommended
- Factual or extractive question answering
- Retrieval-augmented generation
- Safety-critical or professional domains
## Chat / Roleplay Usage
This model uses a chat template and is intended to be run in a conversational setting.
### Example (Transformers)
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "your-username/genshin-starlit"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",
    device_map="auto",
)

messages = [
    {"role": "system", "content": "You are a fictional character engaging in immersive roleplay."},
    {"role": "user", "content": "Hello. Who are you?"},
]

# Apply the model's chat template and append the assistant turn header
# so generation continues as the character rather than the user.
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,
    return_tensors="pt",
).to(model.device)

outputs = model.generate(
    inputs,
    max_new_tokens=200,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
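The `temperature=0.8, top_p=0.9` settings above control sampling: logits are flattened by the temperature, then nucleus (top-p) sampling keeps only the smallest set of tokens whose cumulative probability reaches `top_p`. A standalone sketch of that filtering step (toy logits, not the model's; `top_p_filter` is a hypothetical helper written for illustration):

```python
import math

def top_p_filter(logits, temperature=0.8, top_p=0.9):
    """Return the (token_index, probability) pairs kept by nucleus sampling."""
    # Temperature-scaled softmax (stabilized by subtracting the max).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Sort tokens by probability and keep the smallest set covering top_p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, cumulative = [], 0.0
    for i in order:
        kept.append((i, probs[i]))
        cumulative += probs[i]
        if cumulative >= top_p:
            break
    return kept

# On a peaked toy distribution, the low-probability tail is dropped
# and sampling happens only among the two dominant tokens.
print(top_p_filter([5.0, 4.0, 1.0, 0.5, 0.1]))
```

Lower `top_p` or `temperature` makes output more deterministic and on-script; higher values make the roleplay more varied but less consistent.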
## Model tree for LumiCharles/genshin-starlit

- Base model: meta-llama/Llama-3.1-70B