Gemma3 CPT
This model is a Turkish Continued Pretraining (CPT) variant of google/gemma-3-1b-pt.
The base model was further trained for 1 epoch on the first 50,000 samples of a Turkish web corpus.
This stage represents a broader data-exposure regime than the earlier small-subset experiments and aims to improve the model's Turkish language modeling ability.
This release corresponds to Stage 1 of a multi-stage CPT pipeline and is intended for research and experimental analysis.
Base model: google/gemma-3-1b-pt
Training dataset: canbingol/vngrs-web-corpus-200k

Usage:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
# Load the CPT checkpoint and its tokenizer
model_id = "canbingol/gemma3_1B_base-tr-cpt-1epoch_stage1"
device = "cuda" if torch.cuda.is_available() else "cpu"

model = AutoModelForCausalLM.from_pretrained(model_id).to(device)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Encode a Turkish prompt and sample a continuation
prompt = "bundan böyle"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.8,  # soften the next-token distribution before sampling
    top_p=0.9,        # nucleus (top-p) sampling
)
generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)