# NekGPT XL

NekGPT XL is a full fine-tune of Qwen/Qwen3.5-0.8B-Base on authored Twitter posts from @nekstoer.
This release is the larger follow-up to the original nekGPT toy model. The target was simple: make it more cohesive, less funny, and less likely to collapse immediately in short conversations.
Validation loss was best after epoch 1 and worsened slightly at epoch 2:
- epoch 1: 0.4954
- epoch 2: 0.5053
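The regression between the two checkpoints is small in relative terms; a quick check of the numbers above:

```python
# Absolute and relative worsening of validation loss from epoch 1 to epoch 2.
e1, e2 = 0.4954, 0.5053
delta = e2 - e1
print(round(delta, 4))             # 0.0099 absolute increase
print(round(100 * delta / e1, 1))  # 2.0 percent increase
```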
## Description
The goal was not alignment polish or generic assistant quality. The goal was a model that still understands basic questions, but answers with the voice pressure of the source corpus instead of defaulting back into a helpful assistant persona.
The result is more coherent than the original GPT-2 release and less sanitized than the base Qwen checkpoint, but it still drifts under sustained multi-turn chat. It works best as an experimental voice model and chat toy, not as a reliable factual assistant.
## Methodology

### Corpus
- Source account: @nekstoer
- Collection method: GetXAPI scrape of authored posts only
- Reposts: excluded
- Replies: included
- Quotes: included
- Raw authored posts collected: 1526
### Dataset shaping
The scrape was converted into higher-signal supervised examples:
- standalone posts: `write a tweet` -> post
- replies: `reply to this post from @user: ...` -> reply
- quotes: `quote tweet this post: ...` -> quote tweet
To make the voice distribution more pronounced, a subset of standout standalone posts was repeated in the training split.
Effective dataset sizes for this run:
- train: 1786
- valid: 20
- test: 20
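The shaping step described above can be sketched as a small conversion function. The field names and exact template strings here are assumptions; the release does not publish its conversion script.

```python
def shape_example(post: dict) -> dict:
    """Map one scraped authored post to a supervised prompt/completion pair."""
    if post.get("reply_to"):  # replies keep the parent post as context
        prompt = f"reply to this post from @{post['reply_to']['user']}: {post['reply_to']['text']}"
    elif post.get("quoted"):  # quote tweets keep the quoted post as context
        prompt = f"quote tweet this post: {post['quoted']['text']}"
    else:                     # standalone posts get a generic instruction
        prompt = "write a tweet"
    return {"prompt": prompt, "completion": post["text"]}

examples = [
    shape_example({"text": "standalone post"}),
    shape_example({"text": "a reply", "reply_to": {"user": "someone", "text": "parent post"}}),
]
# A subset of standout standalone posts is then duplicated in the train split
# to sharpen the voice distribution (the selection criteria are an assumption).
```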
### Training
- base model: Qwen/Qwen3.5-0.8B-Base
- hardware: Apple Silicon MPS
- max length: 384
- epochs: 2
- learning rate: 2e-5
- batch size: 1
- gradient accumulation: 8
- gradient checkpointing: enabled
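A `TrainingArguments` configuration matching the hyperparameters above might look like this. The actual training script is not published, so treat this as a sketch; `output_dir` is a placeholder, and recent transformers versions pick up the MPS device automatically.

```python
from transformers import TrainingArguments

# Sketch only: mirrors the listed hyperparameters, not the release's real script.
args = TrainingArguments(
    output_dir="nekgpt-xl",          # placeholder path
    num_train_epochs=2,
    learning_rate=2e-5,
    per_device_train_batch_size=1,
    gradient_accumulation_steps=8,   # effective batch size 1 * 8 = 8
    gradient_checkpointing=True,     # trades compute for memory
)
# The 384-token max length is applied at tokenization time, not here.
```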
## Known behavior
Strengths:
- answers simple questions more coherently than the original nekGPT
- holds the local tone and phrasing much more aggressively than base Qwen
- stays chaotic without immediately collapsing into nonsense
Weaknesses:
- still drifts in longer multi-turn chats
- factual grounding is weak and can fail even on basic prompts
- overfits recurring motifs, formatting quirks, and usernames from the corpus
- validation loss was best at the epoch-1 checkpoint, but the exported release artifact is the epoch-2 checkpoint
## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "potteryrage/NekGPT-XL"
tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(repo_id)

# The model was tuned on prompt -> completion pairs, so a plain chat-style
# prompt works better than the base model's chat template.
prompt = "System: Output only the completion.\nUser: write a tweet\nAssistant:"
inputs = tokenizer(prompt, return_tensors="pt")

outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,
    temperature=1.05,
    top_p=0.98,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```
## License
Apache-2.0