Chytrej2-90M-Base

A custom language model pretrained from scratch on the FineWeb-Edu dataset, using the LLaMA architecture.

Built by PingVortex Labs.


Model Details

  • Parameters: 90M
  • Context length: 8,192 tokens
  • Language: English only
  • Format: Base model
  • Architecture: LLaMA
  • License: Apache 2.0
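The stated parameter count can be sanity-checked with a rough formula for a LLaMA-style model (tied embeddings, grouped-query attention, SwiGLU MLP). The dimensions used in the example call below are illustrative placeholders, not this model's actual config:

```python
def llama_param_count(vocab_size, d_model, n_layers, d_ff,
                      n_heads, n_kv_heads, tie_embeddings=True):
    """Approximate parameter count for a LLaMA-style decoder."""
    head_dim = d_model // n_heads
    # Attention: Q and O projections are d_model x d_model;
    # K and V are d_model x (n_kv_heads * head_dim).
    attn = 2 * d_model * d_model + 2 * d_model * n_kv_heads * head_dim
    # SwiGLU MLP: gate, up, and down projections.
    mlp = 3 * d_model * d_ff
    # Two RMSNorm weight vectors per layer.
    norms = 2 * d_model
    per_layer = attn + mlp + norms
    # Input embedding (and output head, unless tied), plus the final norm.
    embeddings = vocab_size * d_model * (1 if tie_embeddings else 2)
    return embeddings + n_layers * per_layer + d_model

# Placeholder dimensions for illustration only:
print(llama_param_count(32000, 512, 8, 1376, 8, 8))  # -> 41689600
```

Note that "size on disk" counts from tools like safetensors can differ slightly from a round marketing number, since they count exactly the stored tensors.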

Benchmarks

  • No benchmarks. At this scale, benchmark scores reflect noise more than what the model has actually learned; for example, a 55k-step checkpoint scored better on ARC-Easy than the final model.

Usage

from transformers import LlamaForCausalLM, PreTrainedTokenizerFast

model = LlamaForCausalLM.from_pretrained("pvlabs/Chytrej2-90M-Base")
tokenizer = PreTrainedTokenizerFast.from_pretrained("pvlabs/Chytrej2-90M-Base")

prompt = "The capital of France is"
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=100, repetition_penalty=1.3)
print(tokenizer.decode(outputs[0]))
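Since this is a base model rather than an instruct model, it only continues text, and greedy decoding from a model this small tends to loop. A sketch of a generation helper using sampling (the temperature and top-p values here are untuned suggestions, not settings from this card):

```python
import torch
from transformers import LlamaForCausalLM, PreTrainedTokenizerFast

# Sampling settings that usually behave better than greedy decoding
# for a small base model; exact values are suggestions, not tuned.
gen_kwargs = dict(
    max_new_tokens=100,
    do_sample=True,
    temperature=0.8,
    top_p=0.95,
    repetition_penalty=1.3,
)

def generate(prompt: str, model_id: str = "pvlabs/Chytrej2-90M-Base") -> str:
    """Continue `prompt` with sampled text from the base model."""
    tokenizer = PreTrainedTokenizerFast.from_pretrained(model_id)
    model = LlamaForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        outputs = model.generate(**inputs, **gen_kwargs)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)
```

Expect sampled outputs to vary between runs; phrase prompts as text to be continued (e.g. "The capital of France is") rather than as questions or instructions.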


