Model Card: Gemma3-1B Turkish CPT (Only Stage 2 Data, 50K–100K Subset)
Overview
This model is a Turkish Continued Pretraining (CPT) variant of Gemma-3-1B.
Unlike multi-stage CPT runs that progressively adapt the model across multiple data shards, this model was trained solely on the second shard of the Turkish web corpus (samples 50,000–100,000), with no prior stage adaptation, in order to isolate and measure the effect of that shard alone.
- Base model: google/gemma-3-1b-pt
- Training method: standard continued pretraining (full model update)
- Dataset shard: samples 50,000–100,000
- Objective: isolate and evaluate the standalone impact of Stage 2 Turkish web data
For anyone interested in the full experimental results, I’ve compiled all runs here:
https://docs.google.com/spreadsheets/d/10dbABNIMc_WL85ba0rfGwrkbU-VHu3aRa9tnuOAGpyc/edit?usp=sharing
In particular, the Gemma 3B CPT table is the main one to look at.
Training Setup
- Base Model: google/gemma-3-1b-pt
- Dataset: canbingol/vngrs-web-corpus-200k
- Subset Used: samples 50,000–100,000
- Training Objective: Continued Pretraining
- Data Regime: plain text
- Epochs: 1
- Token Count: ~21.6M tokens
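As a quick sanity check on the numbers above, the shard indices and token budget imply an average document length of roughly 430 tokens. A minimal sketch (the `train[...]` slice syntax is the Hugging Face `datasets` convention one could pass to `load_dataset`; the assumption that the split is named `train` is mine):

```python
# Reproducing the shard selection and token budget from the card.
shard_start, shard_end = 50_000, 100_000
split_spec = f"train[{shard_start}:{shard_end}]"  # datasets-style slice spec

num_samples = shard_end - shard_start   # 50,000 documents in the shard
total_tokens = 21_600_000               # ~21.6M tokens seen in 1 epoch
avg_tokens_per_doc = total_tokens / num_samples

print(split_spec)                 # train[50000:100000]
print(round(avg_tokens_per_doc))  # 432
```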
Training Details
All model parameters were updated during training (no parameter-efficient methods such as LoRA were used).
This run represents an isolated CPT experiment where only the second data shard is used, without any carry-over from earlier stages.
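The difference between this full-model update and a parameter-efficient run (e.g. LoRA) can be made concrete by counting trainable parameters. A minimal sketch on a tiny stand-in module, not Gemma itself; the same check applies to the real checkpoint:

```python
import torch.nn as nn

# Tiny stand-in model for illustration only.
model = nn.Sequential(nn.Linear(16, 32), nn.Linear(32, 16))

total = sum(p.numel() for p in model.parameters())
trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)

# In full CPT every parameter is updated, so the two counts match.
print(trainable == total)  # True

# A LoRA-style run would instead freeze the base weights and train
# only small adapter matrices (not shown here).
for p in model.parameters():
    p.requires_grad = False
frozen_trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(frozen_trainable)  # 0
```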
Training Notes
This model was trained specifically to test the isolated impact of Stage 2 data (samples 50K–100K), independent of Stage 1 adaptation.
It is intended for controlled comparison against:
- Stage 1-only CPT models
- Sequential multi-stage CPT models
- LoRA-based CPT variants
This setup enables analysis of:
- Data ordering effects
- Incremental vs isolated adaptation
- Sensitivity of the model to specific corpus segments
Usage Example
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "canbingol/gemma3_1B_base-tr-cpt-only_2nd_stage_data"
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the tokenizer and the bf16 model weights.
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.bfloat16,
).to(device)

prompt = "bundan böyle"  # Turkish: "from now on"
inputs = tokenizer(prompt, return_tensors="pt").to(device)

# Sample a short continuation.
outputs = model.generate(
    **inputs,
    max_new_tokens=50,
    do_sample=True,
    temperature=0.8,
    top_p=0.9,
)

generated_text = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated_text)
```