
🦜 Duchifat-1-Base (דוכיפת)

1. Overview

Duchifat-1-Base is a small-scale Hebrew language model built on a custom Llama-based architecture. With 132 million parameters, it is designed for efficiency, speed, and a deep understanding of Hebrew syntax and urban semantics.

The model was trained from scratch to demonstrate that high-quality Hebrew text generation is possible even with limited parameter counts, provided the architecture is optimized for the language's unique morphological structure.

2. Model Specifications

  • Architecture: Llama-2-style (Decoder-only Transformer)
  • Parameters: 132 Million
  • Vocabulary Size: 52,000 (AlephBERT Tokenizer)
  • Context Length: 1024 tokens
  • Training Device: Dual NVIDIA T4 GPUs
  • License: Apache 2.0
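
The card publishes only the parameter and vocabulary counts; the hidden size, layer count, and FFN width used below are illustrative assumptions, not released values. A rough back-of-the-envelope formula for a Llama-style decoder shows how a 52,000-token vocabulary and a few hypothetical hyperparameters land in the stated ~132M-parameter range:

```python
def llama_param_count(vocab, d_model, n_layers, d_ff, tied_embeddings=False):
    """Rough parameter count for a Llama-style decoder-only transformer.

    Ignores small terms (RMSNorm weights, biases) and assumes full-width
    Q/K/V/O attention projections (no grouped-query attention).
    """
    # Token embedding, plus a separate LM head when embeddings are untied.
    emb = vocab * d_model * (1 if tied_embeddings else 2)
    attn = 4 * d_model * d_model   # Q, K, V, O projections
    mlp = 3 * d_model * d_ff       # SwiGLU MLP: gate, up, down projections
    return emb + n_layers * (attn + mlp)

# Hypothetical configuration (NOT the published one) that lands near the
# stated figure with the 52,000-token vocabulary:
print(llama_param_count(vocab=52_000, d_model=768, n_layers=8, d_ff=2_048))
# → 136495104 (~136M, close to the stated 132M)
```

Note how the untied embedding and LM head alone account for roughly 80M of the total, which is typical for small models with large vocabularies.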

3. Key Capabilities

  • Syntactic Precision: Exceptional grasp of Hebrew prefixes (e.g., ו, כש, ל, ב) and sentence structure.
  • Domain Knowledge: Strong performance in social media context, financial reporting terminology, and sociological discourse.
  • Efficiency: Capable of running on edge devices and mobile hardware with minimal latency.
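
To make the prefix claim concrete: Hebrew proclitics such as ו ("and"), כש ("when"), ל ("to"), and ב ("in") attach directly to the following word and can stack. The toy sketch below, with a deliberately tiny and purely illustrative prefix list, shows the stacking behavior the model has to learn; real morphological segmentation is far more ambiguous than this greedy rule:

```python
# A small, illustrative subset of Hebrew proclitic prefixes (longest first).
HEBREW_PREFIXES = ("כש", "ו", "ל", "ב")

def split_proclitics(word):
    """Greedily peel stacked proclitics off a Hebrew word (toy illustration)."""
    prefixes = []
    changed = True
    while changed:
        changed = False
        for p in HEBREW_PREFIXES:
            # Keep at least two letters as a stem to avoid over-stripping.
            if word.startswith(p) and len(word) > len(p) + 1:
                prefixes.append(p)
                word = word[len(p):]
                changed = True
                break
    return prefixes, word

# וכשהלכתי ("and when I walked") = ו + כש + הלכתי
print(split_proclitics("וכשהלכתי"))  # → (['ו', 'כש'], 'הלכתי')
```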

4. Training Data

The model was pre-trained on the Hebrew Space Restoration Corpus by Dicta-IL, focusing on:

  • Social media interactions and contemporary slang.
  • Formal news reporting and financial data.
  • Modern Hebrew literature and essays.
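
The exact preprocessing Dicta-IL used for the corpus is not documented here, but the "space restoration" framing suggests training pairs of despaced text and the original text. A minimal, hypothetical sketch of how such pairs could be built:

```python
def make_space_restoration_pair(sentence):
    """Build a (corrupted, target) pair: spaces stripped vs. the original.

    Purely illustrative; the actual corpus construction may differ.
    """
    corrupted = sentence.replace(" ", "")
    return corrupted, sentence

src, tgt = make_space_restoration_pair("שלום לכם")  # "hello to you"
print(src, "->", tgt)
```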

5. Usage (Python)

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load model and tokenizer
model_id = "Raziel1234/Duchifat-1-Base"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

# Inference
prompt = "מצד שני ניכרים היום סימנים של"  # "On the other hand, signs are evident today of"
inputs = tokenizer(prompt, return_tensors="pt", return_token_type_ids=False).to(model.device)

output = model.generate(**inputs, max_new_tokens=50, temperature=0.6, do_sample=True)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```