# SchoolMoE
A tiny Korean Mixture-of-Experts language model with YaRN-scaled long context.
## Highlights
- architecture: `SchoolMoEForCausalLM`, a tiny sparse MoE with fish-school expert routing
- total params: about 5.84M
- active params per token: about 3.00M
- attention: GQA (8 query heads / 2 KV heads)
- routed experts: 8
- shared experts: 2
- top-k routed experts per token: 2 (a generic routing sketch follows this list)
- YaRN context scaling from 128 to 512 tokens
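
For intuition, below is a minimal sketch of plain top-2 softmax routing over 8 routed experts. This is the generic baseline, not the model's fish-school variant (that logic ships in the repo's remote code), and every name in it is illustrative:

```python
import torch
import torch.nn.functional as F

def top2_route(hidden, router_weight):
    """Generic top-2 routing sketch; `hidden` is (tokens, d_model),
    `router_weight` is (num_routed_experts, d_model)."""
    logits = hidden @ router_weight.t()              # (tokens, 8) router scores
    probs = F.softmax(logits, dim=-1)                # expert probabilities
    gates, expert_idx = probs.topk(2, dim=-1)        # keep the 2 best experts per token
    gates = gates / gates.sum(dim=-1, keepdim=True)  # renormalize the 2 gates to sum to 1
    return gates, expert_idx
```

In shared-expert MoE designs like this one, the 2 shared experts typically process every token unconditionally, alongside whichever 2 routed experts the gate selects.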
## Load
```python
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer

# trust_remote_code=True is required: the SchoolMoE architecture ships as custom code in the repo
config = AutoConfig.from_pretrained("YOUR_NAME/YOUR_REPO", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("YOUR_NAME/YOUR_REPO", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("YOUR_NAME/YOUR_REPO", trust_remote_code=True, use_fast=False)
```
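
As a quick sanity check after loading, the total parameter count should land near the ~5.84M quoted above. (The ~3.00M active figure counts only the weights a single token touches, so it cannot be read straight off `parameters()`.)

```python
total = sum(p.numel() for p in model.parameters())
print(f"total params: {total / 1e6:.2f}M")  # expect roughly 5.84M
```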
## Simple Generation
```python
# Korean prompt, roughly: "Recent observation log: 1. Observation: the current
# next to the rock has gotten a bit stronger. 2. Observation: a cat's shadow
# passed close to the glass. Question: where is it best to gather now?"
prompt = "<|user|> 최근 관찰 기록:\n1. 관찰: 돌 옆 쪽 물살이 조금 강해졌어.\n2. 관찰: 고양이 그림자가 유리 가까이 스쳤어.\n\n질문: 지금 어디에 모이는 게 좋아? <|assistant|>"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```
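
Greedy decoding can loop on a model this small; sampling usually helps. The values below are illustrative starting points, not tuned recommendations:

```python
outputs = model.generate(
    **inputs,
    max_new_tokens=64,
    do_sample=True,    # sample instead of greedy argmax
    temperature=0.8,   # illustrative value, not tuned
    top_p=0.95,        # nucleus sampling cutoff
)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```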