Pretrained from scratch using the GPT-2 architecture on a dataset of Latin texts (Corpus Corporum). Context length: 64 tokens; final loss: 4.5; trained for 1 epoch over 492 million tokens. Uses a GPT-2-style tokenizer trained with a min_frequency of 2000.

Output tends to be repetitive and not very coherent, due to the model's small size and limited training data.
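A minimal usage sketch, assuming the standard Hugging Face `transformers` loading API; the Latin prompt is illustrative only:

```python
# Sketch: load the model and sample a short Latin continuation.
# Assumes the `transformers` library and network access to the Hub.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gaodrew/cicero")
model = AutoModelForCausalLM.from_pretrained("gaodrew/cicero")

# Keep generation within the 64-token context window.
inputs = tokenizer("Gallia est omnis divisa", return_tensors="pt")
outputs = model.generate(**inputs, max_length=64, do_sample=True, top_k=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Sampling (`do_sample=True`) with a `top_k` cutoff helps somewhat with the repetition noted above, though coherence remains limited.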

Model size: 99.3M params (F32, safetensors).
