Gemma3 CPT
This model is the Stage 3 Turkish Continued Pretraining (CPT) variant of Gemma-3-1B.
Unlike Stage 1, which was initialized from google/gemma-3-1b-pt,
this model was initialized from canbingol/gemma3_1B_base-tr-cpt-1epoch_stage2.
Stage 3 continues domain adaptation by exposing the model to new data rather than repeating the same subset.
The model was trained for 1 epoch on samples 100,000 to 150,000 of the Turkish web corpus.
Importantly, this model is a direct continuation of Stage 2.
Therefore, cumulatively it has been trained on samples 0–150,000 of the corpus (Stage 1: 0–50K, Stage 2: 50K–100K, Stage 3: 100K–150K).
Cumulative data exposure: 0–150,000 samples.
This represents sequential CPT across disjoint data shards.
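The staged shard layout described above can be sketched in a few lines. Note that `shard_range` is a hypothetical helper for illustration, not part of the released training code; it only assumes the 50K-samples-per-stage split stated in this card.

```python
# Hypothetical helper illustrating the disjoint 50K-sample shards
# used across the three sequential CPT stages.
SHARD_SIZE = 50_000

def shard_range(stage: int) -> tuple[int, int]:
    """Return the [start, end) sample indices for a 1-indexed CPT stage."""
    start = (stage - 1) * SHARD_SIZE
    return start, start + SHARD_SIZE

# Stage 1 covers samples 0-50K, Stage 2 50K-100K, Stage 3 100K-150K.
for stage in (1, 2, 3):
    print(f"Stage {stage}: samples {shard_range(stage)}")
```

With a `datasets`-style corpus object, the Stage 3 shard would then be selected with something like `dataset.select(range(*shard_range(3)))`.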
Training dataset: canbingol/vngrs-web-corpus-200k

Example usage:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
model_id = "canbingol/gemma3_1B_base-tr-cpt-1epoch_stage3"
device = "cuda" if torch.cuda.is_available() else "cpu"
model = AutoModelForCausalLM.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = model.to(device)
prompt = "bundan böyle"  # Turkish: "from now on"
inputs = tokenizer(prompt, return_tensors="pt").to(device)
outputs = model.generate(
**inputs,
max_new_tokens=50,
do_sample=True,
temperature=0.8,
top_p=0.9
)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
Base model: google/gemma-3-1b-pt