ModernBERT-JP-0.5B-PT-stage1


Overview

ModernBERT-JP-0.5B-PT-stage1 continues pre-training from iamtatsuki05/ModernBERT-JP-0.5B-init on the hotchpotch/fineweb-2-edu-japanese corpus. Stage-1 training processes roughly 10B tokens with 1,024-token context windows, yielding an encoder backbone for Japanese language-understanding tasks.

Concept

Usage

Requirements

transformers>=4.51.0
accelerate>=1.6.0
sentencepiece>=0.2.0
flash-attn>=2.7.3
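
The pinned versions above can be installed with pip. Note that flash-attn compiles CUDA kernels at install time, so it needs a CUDA toolchain; the `--no-build-isolation` flag shown here is a common workaround, not something the model card itself specifies:

```shell
pip install "transformers>=4.51.0" "accelerate>=1.6.0" "sentencepiece>=0.2.0"
# flash-attn builds against the already-installed torch, so disable build isolation
pip install "flash-attn>=2.7.3" --no-build-isolation
```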

Sample Code

import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "iamtatsuki05/ModernBERT-JP-0.5B-PT-stage1"
model_kwargs = {
  "torch_dtype": torch.bfloat16,
  "attn_implementation": "flash_attention_2",
  "device_map": "auto",
}
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name, **model_kwargs)

text = f"ハチワレは{tokenizer.mask_token}のキャラクターです。"
inputs = tokenizer(text, return_tensors="pt").to(model.device)
with torch.no_grad():
    outputs = model(**inputs)
masked_index = inputs["input_ids"][0].tolist().index(tokenizer.mask_token_id)
print(tokenizer.decode(outputs.logits[0, masked_index].argmax(dim=-1)))
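
The sample prints only the single best fill. A small helper like the one below (a sketch; `topk_candidates` is not part of the model card) turns the logits at the mask position into ranked candidates, which is often more useful when inspecting a masked LM. The toy logits stand in for `outputs.logits[0, masked_index]`:

```python
import torch

def topk_candidates(mask_logits: torch.Tensor, k: int = 5):
    """Return the top-k (token_id, probability) pairs at the mask position."""
    probs = torch.softmax(mask_logits, dim=-1)
    top = torch.topk(probs, k)
    return list(zip(top.indices.tolist(), top.values.tolist()))

# Toy logits over a 10-token vocabulary, in place of real model output.
logits = torch.tensor([0.0, 1.0, 3.0, 0.5, 2.0, -1.0, 0.0, 0.2, 0.1, 0.3])
for token_id, p in topk_candidates(logits, k=3):
    print(token_id, round(p, 3))
```

In the sample above you would call `topk_candidates(outputs.logits[0, masked_index])` and decode each returned id with `tokenizer.decode`.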

Model Details

  • Base model: iamtatsuki05/ModernBERT-JP-0.5B-init
  • Architecture: ModernBERT
  • Maximum sequence length: 8,192 tokens
  • Embedding dimension: 1280
  • Tokenizer: SentencePiece / vocabulary size 102,400
  • Positional encoding: RoPE
  • Supported languages: Japanese
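
The RoPE entry above can be illustrated with a minimal sketch. This shows only the core idea; ModernBERT's actual implementation applies the rotation per attention head, to queries and keys, with its own pairing of dimensions:

```python
import torch

def rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """Rotate each (x1, x2) dimension pair by an angle that grows with
    position and shrinks with frequency index; position 0 is unchanged."""
    seq_len, dim = x.shape
    half = dim // 2
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * freqs[None, :]
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, :half], x[:, half:]
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

x = torch.randn(6, 8)
print(rope(x).shape)  # shape is preserved
```

Because the rotation depends only on relative position after the attention dot product, RoPE extrapolates more gracefully to long contexts than learned absolute embeddings, which is relevant to the 8,192-token maximum above.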

Model Series

The checkpoints listed below share the same initialization but undergo approximately 10B-token pre-training on hotchpotch/fineweb-2-edu-japanese with 1,024-token contexts.

ID                                                       Architecture   #Param.   #Param. w/o Emb.
iamtatsuki05/ModernBERT-JP-0.5B-PT-stage1 (this model)   ModernBERT     679M      548M
iamtatsuki05/Llama-JP-0.5B-PT-stage1                     Llama          661M      530M
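
The two parameter columns are consistent with the Model Details above: the gap between total and non-embedding parameters matches an embedding matrix of vocabulary size 102,400 by embedding dimension 1,280:

```python
vocab_size, hidden_dim = 102_400, 1280
emb_params = vocab_size * hidden_dim
print(f"{emb_params / 1e6:.0f}M")  # 131M, i.e. 679M - 548M
```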

License

This model is distributed under the MIT License.

How to Cite

@article{MIREI,
  title={同一条件下における Encoder/Decoder アーキテクチャによる文埋め込みの性能分析},
  author={岡田 龍樹 and 杉本 徹},
  journal={言語処理学会第 32 回年次大会 (NLP2026)},
  year={2026}
}