T-lite-0.1

🚨 T-lite is designed for further fine-tuning and is not intended as a ready-to-use conversational assistant. Users are advised to exercise caution and are responsible for any additional training and oversight required to ensure the model's responses meet acceptable ethical and safety standards. The responsibility for incorporating this model into industrial or commercial solutions lies entirely with those who choose to deploy it.

Description

T-lite is a model obtained by continual pretraining, designed specifically for the Russian language and intended as a base for building Russian-language LLM applications. It aims to improve the quality of Russian text generation and to provide domain-specific and cultural knowledge relevant to the Russian context.

Model Training Details

πŸ›οΈ Architecture and Configuration

T-lite is an 8B-parameter decoder-only language model with:

  • pre-normalization via RMSNorm
  • SwiGLU activation function
  • rotary positional embeddings (RoPE)
  • grouped query attention (GQA)

T-lite was trained in bf16.
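
For reference, a minimal PyTorch sketch of two of these components, RMSNorm pre-normalization and the SwiGLU feed-forward block (dimension names and the eps value are illustrative assumptions, not taken from the released config):

import torch
import torch.nn as nn
import torch.nn.functional as F

class RMSNorm(nn.Module):
    # Pre-normalization: rescale by the root mean square, with no mean subtraction
    def __init__(self, dim: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.weight = nn.Parameter(torch.ones(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        rms = torch.rsqrt(x.pow(2).mean(dim=-1, keepdim=True) + self.eps)
        return self.weight * (x * rms)

class SwiGLU(nn.Module):
    # Gated feed-forward: silu(x @ W_gate) * (x @ W_up), then a down projection
    def __init__(self, dim: int, hidden_dim: int):
        super().__init__()
        self.gate_proj = nn.Linear(dim, hidden_dim, bias=False)
        self.up_proj = nn.Linear(dim, hidden_dim, bias=False)
        self.down_proj = nn.Linear(hidden_dim, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.down_proj(F.silu(self.gate_proj(x)) * self.up_proj(x))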

⚙️ Hyperparameters

We employed the Decoupled AdamW optimizer with Ξ²1 = 0.9, Ξ²2 = 0.95, and eps = 1.0e-8. The learning rate was set to 1.0e-5 with a constant schedule and a warmup period of 10 steps during stage 1, and a cosine schedule during stage 2. Weight decay was applied at a rate of 1.0e-6, and gradient clipping was performed with a maximum norm of 1.0. The maximum sequence length was set to 8192. Each batch contained approximately 6 million tokens.
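
A minimal sketch of this setup in PyTorch (torch.optim.AdamW stands in for the Decoupled AdamW used in training, and the scheduler composition is illustrative; the toy model is a placeholder for the actual network):

import torch
import torch.nn as nn

model = nn.Linear(8, 8)  # stand-in for the actual 8B-parameter network

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1.0e-5,
    betas=(0.9, 0.95),
    eps=1.0e-8,
    weight_decay=1.0e-6,
)

# Stage 1: 10 warmup steps, then a constant learning rate
warmup = torch.optim.lr_scheduler.LinearLR(optimizer, start_factor=0.1, total_iters=10)
constant = torch.optim.lr_scheduler.ConstantLR(optimizer, factor=1.0)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, constant], milestones=[10]
)

# Gradient clipping at max norm 1.0, applied before each optimizer step
torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)

At a sequence length of 8192, a ~6M-token batch corresponds to roughly 730 sequences per optimizer step.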

πŸ‹πŸ½ Hardware Configuration & Performance

Training was conducted on 96 A100 GPUs with 80 GB of memory each, using Fully Sharded Data Parallel (FSDP) with full-shard and hybrid-shard strategies. The setup achieved a throughput of 3,000 tokens/sec/GPU, processing 100B tokens in approximately 4 days, with a Model FLOPs Utilization (MFU) of 0.59.
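
These figures are mutually consistent; a quick back-of-envelope check:

gpus = 96
tokens_per_sec_per_gpu = 3_000
total_tokens = 100e9

cluster_tps = gpus * tokens_per_sec_per_gpu  # 288,000 tokens/sec across the cluster
days = total_tokens / cluster_tps / 86_400   # 86,400 seconds per day
print(f"{cluster_tps:,} tok/s -> {days:.1f} days for 100B tokens")  # ~4.0 days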

📚 Data

Stage 1

Massive continual pre-training

  • 300B-token corpus × 0.3 epoch (≈100B tokens processed)
  • 85% of the data is in Russian, a trade-off between Russian-language adaptation and retaining English performance
  • Styles and topics in Common Crawl (CC) data were downsampled
  • Domains in book datasets were balanced
  • Proportion of code data was increased

Stage 2

Focuses on refining the quality of the dataset

  • 20B-token dataset × 3 epochs
  • Includes instruction datasets of smaller volume
  • Advertisements and news were aggressively downsampled
  • Instructions and articles were upsampled
  • Educational content was balanced

📊 Benchmarks

🇷🇺 Russian

MERA benchmark results

| Task name    | Metric            | N-shot | Llama-3-8b            | T-lite-0.1            |
|--------------|-------------------|--------|-----------------------|-----------------------|
| Total score  |                   |        | 0.445                 | 0.492                 |
| BPS          | Accuracy          | 2-shot | 0.459                 | 0.358                 |
| CheGeKa      | F1 / EM           | 4-shot | 0.04 / 0              | 0.118 / 0.06          |
| LCS          | Accuracy          | 2-shot | 0.146                 | 0.14                  |
| MathLogicQA  | Accuracy          | 5-shot | 0.365                 | 0.37                  |
| MultiQ       | F1-score / EM     | 0-shot | 0.106 / 0.027         | 0.383 / 0.29          |
| PARus        | Accuracy          | 0-shot | 0.72                  | 0.858                 |
| RCB          | Avg F1 / Accuracy | 0-shot | 0.42 / 0.434          | 0.511 / 0.416         |
| ruHumanEval  | pass@k            | 0-shot | 0.017 / 0.085 / 0.171 | 0.023 / 0.113 / 0.226 |
| ruMMLU       | Accuracy          | 5-shot | 0.693                 | 0.759                 |
| ruModAr      | EM                | 0-shot | 0.708                 | 0.667                 |
| ruMultiAr    | EM                | 5-shot | 0.259                 | 0.269                 |
| ruOpenBookQA | Avg F1 / Accuracy | 5-shot | 0.745 / 0.744         | 0.783 / 0.782         |
| ruTiE        | Accuracy          | 0-shot | 0.553                 | 0.681                 |
| ruWorldTree  | Avg F1 / Accuracy | 5-shot | 0.838 / 0.839         | 0.88 / 0.88           |
| RWSD         | Accuracy          | 0-shot | 0.504                 | 0.585                 |
| SimpleAr     | EM                | 5-shot | 0.954                 | 0.955                 |
| USE          | Grade Norm        | 0-shot | 0.023                 | 0.05                  |

The evaluation was performed using https://github.com/ai-forever/MERA/tree/main.

🇬🇧 English

As expected, adapting the model to the Russian language led to some decline on English benchmarks.

| Benchmark         | N-shot | Llama-3-8b | T-lite-0.1 |
|-------------------|--------|------------|------------|
| ARC-challenge     | 0-shot | 0.518      | 0.489      |
| ARC-easy          | 0-shot | 0.789      | 0.787      |
| MMLU              | 0-shot | 0.62       | 0.6        |
| Natural Questions | 0-shot | 0.162      | 0.222      |
| TriviaQA          | 0-shot | 0.63       | 0.539      |

The evaluation was performed using https://github.com/EleutherAI/lm-evaluation-harness.
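
For reproduction, an invocation along these lines should work with the harness's Python API (task names follow current lm-evaluation-harness conventions and are assumptions; the exact command used is not documented here):

import lm_eval

# Evaluate the base model on the English benchmarks reported above
results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=t-bank-ai/T-lite-0.1,dtype=bfloat16",
    tasks=["arc_challenge", "arc_easy", "mmlu", "nq_open", "triviaqa"],
    num_fewshot=0,
)
print(results["results"])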

πŸ‘¨β€πŸ’» Examples of usage

from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

torch.manual_seed(42)

# Load the tokenizer and model, letting accelerate place weights across available devices
tokenizer = AutoTokenizer.from_pretrained("t-bank-ai/T-lite-0.1")
model = AutoModelForCausalLM.from_pretrained("t-bank-ai/T-lite-0.1", device_map="auto")

# T-lite is a base model, so use completion-style prompts rather than chat turns
input_text = "МашинноС ΠΎΠ±ΡƒΡ‡Π΅Π½ΠΈΠ΅ Π½ΡƒΠΆΠ½ΠΎ для"  # "Machine learning is needed for"
inputs = tokenizer(input_text, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0]))

Output:

МашинноС ΠΎΠ±ΡƒΡ‡Π΅Π½ΠΈΠ΅ Π½ΡƒΠΆΠ½ΠΎ для Ρ‚ΠΎΠ³ΠΎ, Ρ‡Ρ‚ΠΎΠ±Ρ‹ Π°Π²Ρ‚ΠΎΠΌΠ°Ρ‚ΠΈΠ·ΠΈΡ€ΠΎΠ²Π°Ρ‚ΡŒ процСсс принятия Ρ€Π΅ΡˆΠ΅Π½ΠΈΠΉ. ВмСсто Ρ‚ΠΎΠ³ΠΎ, Ρ‡Ρ‚ΠΎΠ±Ρ‹ Ρ‡Π΅Π»ΠΎΠ²Π΅ΠΊΡƒ Π½ΡƒΠΆΠ½ΠΎ Π±Ρ‹Π»ΠΎ Π²Ρ€ΡƒΡ‡Π½ΡƒΡŽ ΠΏΡ€ΠΎΡΠΌΠ°Ρ‚Ρ€ΠΈΠ²Π°Ρ‚ΡŒ ΠΈ Π°Π½Π°Π»ΠΈΠ·ΠΈΡ€ΠΎΠ²Π°Ρ‚ΡŒ Π΄Π°Π½Π½Ρ‹Π΅, Π°Π»Π³ΠΎΡ€ΠΈΡ‚ΠΌΡ‹ машинного обучСния ΠΌΠΎΠ³ΡƒΡ‚ автоматичСски Π²Ρ‹ΡΠ²Π»ΡΡ‚ΡŒ закономСрности ΠΈ Π΄Π΅Π»Π°Ρ‚ΡŒ ΠΏΡ€ΠΎΠ³Π½ΠΎΠ·Ρ‹ Π½Π° основС этих Π΄Π°Π½Π½Ρ‹Ρ…. Π­Ρ‚ΠΎ ΠΌΠΎΠΆΠ΅Ρ‚ Π±Ρ‹Ρ‚ΡŒ особСнно ΠΏΠΎΠ»Π΅Π·Π½ΠΎ Π² Ρ‚Π°ΠΊΠΈΡ… областях, ΠΊΠ°ΠΊ финансы, Π³Π΄Π΅ объСм Π΄Π°Π½Π½Ρ‹Ρ… ΠΎΠ³Ρ€ΠΎΠΌΠ΅Π½, Π° Ρ€Π΅ΡˆΠ΅Π½ΠΈΡ Π΄ΠΎΠ»ΠΆΠ½Ρ‹ ΠΏΡ€ΠΈΠ½ΠΈΠΌΠ°Ρ‚ΡŒΡΡ быстро.

Π’ΠΎΡ‚ нСсколько ΠΏΡ€ΠΈΠΌΠ΅Ρ€ΠΎΠ² Ρ‚ΠΎΠ³ΠΎ, ΠΊΠ°ΠΊ машинноС ΠΎΠ±ΡƒΡ‡Π΅Π½ΠΈΠ΅ ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΡƒΠ΅Ρ‚ΡΡ Π² финансах:

1. ΠžΠ±Π½Π°Ρ€ΡƒΠΆΠ΅Π½ΠΈΠ΅ ΠΌΠΎΡˆΠ΅Π½Π½ΠΈΡ‡Π΅ΡΡ‚Π²Π°: Π°Π»Π³ΠΎΡ€ΠΈΡ‚ΠΌΡ‹ машинного обучСния ΠΌΠΎΠ³ΡƒΡ‚ Π°Π½Π°Π»ΠΈΠ·ΠΈΡ€ΠΎΠ²Π°Ρ‚ΡŒ закономСрности Π² транзакциях ΠΈ Π²Ρ‹ΡΠ²Π»ΡΡ‚ΡŒ ΠΏΠΎΠ΄ΠΎΠ·Ρ€ΠΈΡ‚Π΅Π»ΡŒΠ½Ρ‹Π΅ дСйствия, ΠΊΠΎΡ‚ΠΎΡ€Ρ‹Π΅ ΠΌΠΎΠ³ΡƒΡ‚ ΡƒΠΊΠ°Π·Ρ‹Π²Π°Ρ‚ΡŒ Π½Π° ΠΌΠΎΡˆΠ΅Π½Π½ΠΈΡ‡Π΅ΡΡ‚Π²ΠΎ.
2. Π£ΠΏΡ€Π°Π²Π»Π΅Π½ΠΈΠ΅ рисками: МашинноС ΠΎΠ±ΡƒΡ‡Π΅Π½ΠΈΠ΅ ΠΌΠΎΠΆΠ΅Ρ‚ ΠΏΠΎΠΌΠΎΡ‡ΡŒ финансовым учрСТдСниям Π²Ρ‹ΡΠ²Π»ΡΡ‚ΡŒ ΠΈ ΠΎΡ†Π΅Π½ΠΈΠ²Π°Ρ‚ΡŒ риски, связанныС с Ρ€Π°Π·Π»ΠΈΡ‡Π½Ρ‹ΠΌΠΈ инвСстициями ΠΈΠ»ΠΈ ΠΊΡ€Π΅Π΄ΠΈΡ‚Π°ΠΌΠΈ.
3. ΠžΠ±Ρ€Π°Π±ΠΎΡ‚ΠΊΠ° Π΄Π°Π½Π½Ρ‹Ρ… Π½Π° СстСствСнном языкС: МашинноС ΠΎΠ±ΡƒΡ‡Π΅Π½ΠΈΠ΅ ΠΌΠΎΠΆΠ΅Ρ‚ ΠΈΡΠΏΠΎΠ»ΡŒΠ·ΠΎΠ²Π°Ρ‚ΡŒΡΡ для Π°Π½Π°Π»ΠΈΠ·Π° финансовых новостСй ΠΈ Π΄Ρ€ΡƒΠ³ΠΈΡ… тСкстовых Π΄Π°Π½Π½Ρ‹Ρ…, Ρ‡Ρ‚ΠΎΠ±Ρ‹ Π²Ρ‹ΡΠ²ΠΈΡ‚ΡŒ Ρ‚Π΅Π½Π΄Π΅Π½Ρ†ΠΈΠΈ