# Wikilangs Models: Comprehensive Research Report

## ARY - Full Ablation Study

This report presents a comprehensive evaluation of language models trained on ARY (Moroccan Arabic) Wikipedia data. We analyze tokenizers, n-gram models, Markov chains, vocabulary statistics, and word embeddings.

---

## 1. Tokenizer Evaluation

![Tokenizer Compression](visualizations/01_tokenizer_compression.png)

### Results

| Vocab Size | Compression | Avg Token Len | UNK Rate | Total Tokens |
|------------|-------------|---------------|----------|--------------|
| **8k** | 3.134x | 3.09 | 0.0472% | 379,309 |
| **16k** | 3.346x | 3.30 | 0.0504% | 355,311 |
| **32k** | 3.535x | 3.49 | 0.0532% | 336,296 |
| **64k** | 3.683x 🏆 | 3.64 | 0.0555% | 322,761 |

### Tokenization Examples

Below are sample sentences tokenized with each vocabulary size:

**Sample 1:** `نينڭ بايزورا بنت الشيخ حمزة أولا نينڭ بايزورا هي مومتيلة وموغنية ماليزية. مصاد...`

| Vocab | Tokens | Count |
|-------|--------|-------|
| 8k | `▁ن ينڭ ▁باي ز ورا ▁بنت ▁الشيخ ▁حم زة ▁أولا ... (+32 more)` | 42 |
| 16k | `▁ن ينڭ ▁باي ز ورا ▁بنت ▁الشيخ ▁حمزة ▁أولا ▁ن ... (+29 more)` | 39 |
| 32k | `▁ن ينڭ ▁باي ز ورا ▁بنت ▁الشيخ ▁حمزة ▁أولا ▁ن ... (+29 more)` | 39 |
| 64k | `▁ن ينڭ ▁باي ز ورا ▁بنت ▁الشيخ ▁حمزة ▁أولا ▁ن ... (+27 more)` | 37 |

**Sample 2:** `هادي صفحة د التوضيح، كلمة بركان يمكن يكونو عندها هاد لمعاني: بْرْكان: مدينة مغ...`

| Vocab | Tokens | Count |
|-------|--------|-------|
| 8k | `▁هادي ▁صفحة ▁د ▁التوضيح ، ▁كلمة ▁بركان ▁يمكن ▁يكونو ▁عندها ... (+26 more)` | 36 |
| 16k | `▁هادي ▁صفحة ▁د ▁التوضيح ، ▁كلمة ▁بركان ▁يمكن ▁يكونو ▁عندها ... (+25 more)` | 35 |
| 32k | `▁هادي ▁صفحة ▁د ▁التوضيح ، ▁كلمة ▁بركان ▁يمكن ▁يكونو ▁عندها ... (+24 more)` | 34 |
| 64k | `▁هادي ▁صفحة ▁د ▁التوضيح ، ▁كلمة ▁بركان ▁يمكن ▁يكونو ▁عندها ... (+22 more)` | 32 |

**Sample 3:** `أسيل عمران (مزيودة ف 1989) هي مغنية و ممتلة سعودية كتعيش ف لإمارات. مصادر تص...`

| Vocab | Tokens | Count |
|-------|--------|-------|
| 8k | `▁أس يل ▁عمر ان ▁( مزيودة ▁ف ▁ 1 9 ... (+36 more)` | 46 |
| 16k | `▁أس يل ▁عمر ان ▁( مزيودة ▁ف ▁ 1 9 ... (+32 more)` | 42 |
| 32k | `▁أس يل ▁عمران ▁( مزيودة ▁ف ▁ 1 9 8 ... (+28 more)` | 38 |
| 64k | `▁أس يل ▁عمران ▁( مزيودة ▁ف ▁ 1 9 8 ... (+28 more)` | 38 |

### Key Findings

- **Best Compression:** 64k achieves 3.683x compression
- **Lowest UNK Rate:** 8k with 0.0472% unknown tokens
- **Trade-off:** Larger vocabularies improve compression but increase model size
- **Recommendation:** 32k provides the best balance of compression (3.535x), UNK rate, and model size for production use; the metrics can be recomputed as in the sketch below
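The compression, fertility, and UNK-rate figures above are straightforward to recompute from a trained tokenizer. A minimal sketch with SentencePiece, assuming one of the report's BPE models is available locally as `ary_32k.model` (a hypothetical filename):

```python
import sentencepiece as spm

# Load a trained BPE model (hypothetical path; any of the 8k-64k models works).
sp = spm.SentencePieceProcessor(model_file="ary_32k.model")

text = "هادي صفحة د التوضيح"  # Sample 2 from above, truncated
ids = sp.encode(text, out_type=int)
pieces = sp.encode(text, out_type=str)

# Compression ratio: characters per token (higher = shorter sequences).
compression = len(text) / len(ids)

# Fertility / average token length, ignoring the ▁ word-boundary marker.
avg_token_len = sum(len(p.lstrip("▁")) for p in pieces) / len(pieces)

# UNK rate: share of tokens that fall back to the unknown id.
unk_rate = sum(1 for i in ids if i == sp.unk_id()) / len(ids)

print(f"{compression:.3f}x | {avg_token_len:.2f} chars/token | {unk_rate:.4%} UNK")
```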
---

## 2. N-gram Model Evaluation

![N-gram Perplexity](visualizations/05_ngram_perplexity.png)
![N-gram Coverage](visualizations/07_ngram_coverage.png)

### Results

| N-gram | Perplexity | Entropy | Unique N-grams | Top-100 Coverage | Top-1000 Coverage |
|--------|------------|---------|----------------|------------------|-------------------|
| **2-gram** | 7,187 🏆 | 12.81 | 56,749 | 24.4% | 53.2% |
| **2-gram** | 486 🏆 | 8.93 | 6,227 | 54.9% | 95.4% |
| **3-gram** | 8,812 | 13.11 | 76,888 | 21.3% | 52.8% |
| **3-gram** | 4,295 | 12.07 | 51,256 | 22.1% | 58.7% |
| **4-gram** | 12,168 | 13.57 | 124,859 | 20.1% | 50.4% |
| **4-gram** | 22,008 | 14.43 | 260,844 | 12.0% | 35.5% |

### Top 5 N-grams by Size

**2-grams:**

| Rank | N-gram | Count |
|------|--------|-------|
| 1 | `تصنيف :` | 37,187 |
| 2 | `، و` | 18,746 |
| 3 | `ن ّ` | 10,639 |
| 4 | `) :` | 10,185 |
| 5 | `مصادر تصنيف` | 10,087 |

**3-grams:**

| Rank | N-gram | Count |
|------|--------|-------|
| 1 | `مصادر تصنيف :` | 10,087 |
| 2 | `تصنيف : مقالات` | 7,001 |
| 3 | `ن ّ اس` | 6,981 |
| 4 | `ل ّ ي` | 6,914 |
| 5 | `: دوار ف` | 5,007 |

**4-grams:**

| Rank | N-gram | Count |
|------|--------|-------|
| 1 | `تصنيف : دوار ف` | 5,005 |
| 2 | `نسبة ن ّ اس` | 4,061 |
| 3 | `. مصادر تصنيف :` | 3,827 |
| 4 | `تصنيف : مقالات زادهوم` | 3,506 |
| 5 | `: مقالات زادهوم داريجابوت` | 3,506 |

### Key Findings

- **Best Perplexity:** 2-gram with 486
- **Entropy Trend:** Entropy rises with larger n-grams on this corpus, reflecting context sparsity rather than greater predictability
- **Coverage:** The top-1000 2-grams cover up to 95.4% of the corpus, while the top-1000 4-grams cover as little as 35.5%
- **Recommendation:** 2-gram for the lowest measured perplexity; higher-order models would need more data or stronger smoothing before they pay off (see the sketch below)
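The perplexity and entropy columns are consistent with the relation perplexity = 2^entropy (spot-check: 2^8.93 ≈ 488, matching the best bigram row's 486 up to rounding). A minimal sketch of a bigram model evaluated this way, assuming add-one smoothing (an assumption; the pipeline's actual smoothing scheme is not documented here):

```python
import math
from collections import Counter

def bigram_perplexity(train_tokens, test_tokens):
    """Perplexity of an add-one (Laplace) smoothed bigram model."""
    unigrams = Counter(train_tokens)
    bigrams = Counter(zip(train_tokens, train_tokens[1:]))
    vocab_size = len(unigrams)

    log_prob = 0.0
    for prev, cur in zip(test_tokens, test_tokens[1:]):
        # P(cur | prev) with Laplace smoothing over the training vocabulary.
        p = (bigrams[(prev, cur)] + 1) / (unigrams[prev] + vocab_size)
        log_prob += math.log2(p)

    entropy = -log_prob / (len(test_tokens) - 1)  # bits per token
    return 2 ** entropy                           # perplexity = 2^entropy

tokens = "مصادر تصنيف : دوار ف لمغريب . مصادر تصنيف :".split()
print(bigram_perplexity(tokens, tokens))
```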
---

## 3. Markov Chain Evaluation

![Markov Entropy](visualizations/09_markov_entropy.png)
![Markov Branching](visualizations/10_markov_branching.png)

### Results

| Context | Avg Entropy | Perplexity | Branching Factor | Unique Contexts | Predictability |
|---------|-------------|------------|------------------|-----------------|----------------|
| **1** | 0.7813 | 1.719 | 5.36 | 189,320 | 21.9% |
| **1** | 1.1519 | 2.222 | 8.71 | 1,931 | 0.0% |
| **2** | 0.2761 | 1.211 | 1.68 | 1,014,676 | 72.4% |
| **2** | 0.9863 | 1.981 | 6.24 | 16,826 | 1.4% |
| **3** | 0.0931 | 1.067 | 1.18 | 1,701,309 | 90.7% |
| **3** | 0.8744 | 1.833 | 4.33 | 104,928 | 12.6% |
| **4** | 0.0366 🏆 | 1.026 | 1.07 | 2,000,181 | 96.3% |
| **4** | 0.6731 🏆 | 1.594 | 2.82 | 454,694 | 32.7% |

### Generated Text Samples

Below are text samples generated from each Markov chain model:

**Context Size 1:**

1. `. قرات لانفورماتيك ، وحسبوهم النسابون المسلمين ) غايب مجموعntnən ( قائم الزاوية هو ، الطابلو`
2. `، كان ل 6 % ، ولكن ماكخون ( ولا لبيطاليين اللي سبق ليهوم خدمو )`
3. `ف كتاب " ف جماعة قروية ف دوك لي جاو فالغرب د لكورة تا نتيجة لاندماج`

**Context Size 2:**

1. `تصنيف : عوام د تقويم لميلادي تصنيف : نهارات د لعام تصنيف : كتاتبيا مغاربا د لقرن`
2. `، و صدرات منو أغنية rip , love . الديسك خرج رسميا ً paypal holdings inc .`
3. `ن ّ اس ن ّ شيطين ( ل ّ ي قاريين فوق الليسي ( ليسي و جامعة`

**Context Size 3:**

1. `مصادر تصنيف : يناير تصنيف : نهارات د لعام تصنيف : مقالات فيها مصدر و 3000 بايت تصنيف`
2. `تصنيف : مقالات زادهوم داريجابوت تصنيف : بلايص مسكونين ف إقليم برشيد ، جهة د ّ ار لبيضا`
3. `ن ّ اس اللي خدامين ف د ّ ولة : 4 , 4 % إقتصاد نسبة ن ّ`

**Context Size 4:**

1. `تصنيف : دوار ف لمغريب تصنيف : دوار ف لمغريب تصنيف : دوار ف لمغريب تصنيف : دوار ف`
2. `نسبة ن ّ اس ن ّ شيطين ( ل ّ ي يقدرو يخدمو ) : 50 , 2 %`
3. `. مصادر تصنيف : عوام د تقويم لميلادي تصنيف : مقالات زادهوم داريجابوت تصنيف : عوام 380 قبل لميلاد`

### Key Findings

- **Best Predictability:** Context-4 with 96.3% predictability
- **Branching Factor:** Decreases with context size (more deterministic)
- **Memory Trade-off:** Larger contexts require more storage (up to 2,000,181 unique contexts at size 4)
- **Recommendation:** Context-3 or Context-4 for text generation; a minimal generation sketch follows below
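The samples above come from repeatedly sampling a next token conditioned on the preceding context window. A minimal sketch of such a chain, illustrative rather than the pipeline's actual implementation:

```python
import random
from collections import Counter, defaultdict

def build_chain(tokens, context_size):
    """Map each context window to a Counter of observed next tokens."""
    chain = defaultdict(Counter)
    for i in range(len(tokens) - context_size):
        context = tuple(tokens[i : i + context_size])
        chain[context][tokens[i + context_size]] += 1
    return chain

def generate(chain, seed, length=15):
    """Extend the seed (len(seed) must equal the chain's context size)
    by sampling proportionally to observed transition counts."""
    out = list(seed)
    while len(out) < length:
        candidates = chain.get(tuple(out[-len(seed):]))
        if not candidates:
            break
        tokens, weights = zip(*candidates.items())
        out.append(random.choices(tokens, weights=weights)[0])
    return " ".join(out)

corpus = "تصنيف : دوار ف لمغريب تصنيف : دوار ف لمغريب".split()
chain = build_chain(corpus, context_size=2)

# Branching factor as reported above: mean unique continuations per context.
branching = sum(len(c) for c in chain.values()) / len(chain)
print(generate(chain, seed=("تصنيف", ":")), f"(branching {branching:.2f})")
```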
---

## 4. Vocabulary Analysis

![Zipf's Law](visualizations/12_zipf_law.png)
![Top Words](visualizations/14_top20_words.png)
![Coverage Curve](visualizations/15_vocab_coverage.png)

### Statistics

| Metric | Value |
|--------|-------|
| Vocabulary Size | 81,712 |
| Total Tokens | 2,308,873 |
| Mean Frequency | 28.26 |
| Median Frequency | 4 |
| Frequency Std Dev | 559.90 |

### Most Common Words

| Rank | Word | Frequency |
|------|------|-----------|
| 1 | ف | 84,463 |
| 2 | د | 69,201 |
| 3 | و | 61,463 |
| 4 | تصنيف | 37,231 |
| 5 | ل | 34,076 |
| 6 | ديال | 32,761 |
| 7 | من | 29,612 |
| 8 | على | 19,717 |
| 9 | لي | 18,627 |
| 10 | ب | 18,189 |

### Least Common Words (from vocabulary)

| Rank | Word | Frequency |
|------|------|-----------|
| 1 | بيتسي | 2 |
| 2 | وصانعي | 2 |
| 3 | وأهميتها | 2 |
| 4 | بورديو | 2 |
| 5 | بلومر | 2 |
| 6 | مقترحة | 2 |
| 7 | anchor | 2 |
| 8 | الرسميةاللي | 2 |
| 9 | بعصبة | 2 |
| 10 | ماڭي | 2 |

### Zipf's Law Analysis

| Metric | Value |
|--------|-------|
| Zipf Coefficient (magnitude of slope) | 1.0380 |
| R² (Goodness of Fit) | 0.999162 |
| Adherence Quality | **excellent** |

### Coverage Analysis

| Top N Words | Coverage |
|-------------|----------|
| Top 100 | 39.3% |
| Top 1,000 | 63.8% |
| Top 5,000 | 78.6% |
| Top 10,000 | 84.8% |

### Key Findings

- **Zipf Compliance:** R² = 0.9992 indicates excellent adherence to Zipf's law
- **High-Frequency Dominance:** The top 100 words cover 39.3% of the corpus
- **Long Tail:** The remaining 71,712 vocabulary words account for only the final 15.2% of coverage
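The Zipf coefficient and R² above correspond to a least-squares line fitted to the log-log frequency-rank curve. A minimal sketch with NumPy, using the top-5 frequencies from the table as toy input (the full corpus yields a slope of about -1.04 with R² ≈ 0.9992):

```python
import numpy as np

def zipf_fit(frequencies):
    """Fit log(freq) = slope * log(rank) + b; return (slope, R^2)."""
    freqs = np.sort(np.asarray(frequencies, dtype=float))[::-1]  # descending
    ranks = np.arange(1, len(freqs) + 1)
    x, y = np.log(ranks), np.log(freqs)

    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    r_squared = 1 - residuals.var() / y.var()
    return slope, r_squared

# Top-5 word frequencies from the table above, as a stand-in corpus.
slope, r2 = zipf_fit([84463, 69201, 61463, 37231, 34076])
print(f"Zipf slope {slope:.3f}, R² {r2:.4f}")
```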
---

## 5. Word Embeddings Evaluation

![Embedding Isotropy](visualizations/16_embedding_isotropy.png)
![Similarity Matrix](visualizations/18_embedding_similarity.png)
![t-SNE Words](visualizations/20_tsne_words.png)
![t-SNE Sentences](visualizations/21_tsne_sentences.png)

### Model Comparison

| Model | Vocab Size | Dimension | Avg Norm | Std Norm | Isotropy |
|-------|------------|-----------|----------|----------|----------|
| **mono_32d** | 37,528 | 32 | 4.010 | 1.183 | 0.8264 🏆 |
| **mono_64d** | 37,528 | 64 | 4.579 | 1.040 | 0.8183 |
| **mono_128d** | 37,528 | 128 | 5.112 | 0.875 | 0.7212 |
| **embeddings_enhanced** | 0 | 0 | 0.000 | 0.000 | 0.0000 |

### Key Findings

- **Best Isotropy:** mono_32d with 0.8264 (more uniform distribution)
- **Dimension Trade-off:** Higher dimensions capture more semantics but reduce isotropy
- **Vocabulary Coverage:** All three mono models cover 37,528 words; the embeddings_enhanced artifact reports zeros across the board and appears empty
- **Recommendation:** mono_64d for a balance of semantic capture, isotropy, and efficiency; isotropy can be computed as in the sketch below
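The isotropy score in the table follows the appendix definition: the ratio of the smallest to the largest singular value of the embedding matrix. A minimal sketch, using a random matrix as a stand-in for the actual mono_32d weights:

```python
import numpy as np

def isotropy(embeddings):
    """Min/max singular value ratio; 1.0 = perfectly uniform directions."""
    singular_values = np.linalg.svd(embeddings, compute_uv=False)
    return singular_values.min() / singular_values.max()

def cosine_similarity(a, b):
    """Angular similarity between two word vectors, in [-1, 1]."""
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

rng = np.random.default_rng(0)
vectors = rng.normal(size=(37_528, 32))  # stand-in for mono_32d weights

print(f"isotropy: {isotropy(vectors):.4f}")
print(f"cosine(v0, v1): {cosine_similarity(vectors[0], vectors[1]):.4f}")
```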
---

## 6. Summary & Recommendations

![Performance Dashboard](visualizations/24_performance_dashboard.png)

### Production Recommendations

| Component | Recommended | Rationale |
|-----------|-------------|-----------|
| Tokenizer | **32k BPE** | Near-best compression (3.54x) with a low UNK rate |
| N-gram | **2-gram** | Lowest perplexity (486) |
| Markov | **Context-4** | Highest predictability (96.3%) |
| Embeddings | **mono_64d** | Balanced semantic capture and isotropy |

---

## Appendix: Metrics Glossary & Interpretation Guide

This section provides definitions, intuitions, and guidance for interpreting the metrics used throughout this report.

### Tokenizer Metrics

**Compression Ratio**

> *Definition:* The ratio of characters to tokens (chars/token). Measures how efficiently the tokenizer represents text.
>
> *Intuition:* Higher compression means fewer tokens are needed to represent the same text, reducing sequence lengths for downstream models. A 3x compression means ~3 characters per token on average.
>
> *What to seek:* Higher is generally better for efficiency, but extremely high compression may indicate overly aggressive merging that loses morphological information.

**Average Token Length (Fertility)**

> *Definition:* Mean number of characters per token produced by the tokenizer.
>
> *Intuition:* Reflects the granularity of tokenization. Longer tokens capture more context but may struggle with rare words; shorter tokens are more flexible but increase sequence length.
>
> *What to seek:* A balance between 2 and 5 characters for most languages. Arabic and other morphologically rich languages may benefit from slightly longer tokens.

**Unknown Token Rate (OOV Rate)**

> *Definition:* Percentage of tokens that map to the unknown/UNK token, indicating words the tokenizer cannot represent.
>
> *Intuition:* Lower OOV means better vocabulary coverage. High OOV indicates the tokenizer encounters many unseen character sequences.
>
> *What to seek:* Below 1% is excellent; below 5% is acceptable. BPE tokenizers typically achieve very low OOV due to subword fallback.

### N-gram Model Metrics

**Perplexity**

> *Definition:* Measures how "surprised" the model is by test data. Mathematically: 2^(cross-entropy). Lower values indicate better prediction.
>
> *Intuition:* If perplexity is 100, the model is as uncertain as if it were choosing uniformly among 100 options at each step. A perplexity of 10 means effectively choosing among 10 equally likely options.
>
> *What to seek:* Lower is better. Perplexity tends to decrease as n-grams gain context, provided the corpus is large enough to support the longer contexts. Values vary widely by language and corpus size.

**Entropy**

> *Definition:* Average information content (in bits) needed to encode the next token given the context. Related to perplexity: perplexity = 2^entropy.
>
> *Intuition:* High entropy means high uncertainty/randomness; low entropy means predictable patterns. Natural language typically has entropy between 1 and 4 bits per character.
>
> *What to seek:* Lower entropy indicates more predictable text patterns. Entropy should decrease as n-gram size increases, data sparsity permitting.

**Coverage (Top-K)**

> *Definition:* Percentage of corpus occurrences explained by the top K most frequent n-grams.
>
> *Intuition:* High coverage with few patterns indicates repetitive/formulaic text; low coverage suggests diverse vocabulary usage.
>
> *What to seek:* Depends on use case. For language modeling, moderate coverage (40-60% with top-1000) is typical for natural text.

### Markov Chain Metrics

**Average Entropy**

> *Definition:* Mean entropy across all contexts, measuring average uncertainty in next-word prediction.
>
> *Intuition:* Lower entropy means the model is more confident about what comes next. Context-1 has high entropy (many possible next words); Context-4 has low entropy (few likely continuations).
>
> *What to seek:* Decreasing entropy with larger context sizes. Very low entropy (<0.1) indicates highly deterministic transitions.

**Branching Factor**

> *Definition:* Average number of unique next tokens observed for each context.
>
> *Intuition:* High branching = many possible continuations (flexible but uncertain); low branching = few options (predictable but potentially repetitive).
>
> *What to seek:* The branching factor should decrease with context size. Values near 1.0 indicate nearly deterministic chains.

**Predictability**

> *Definition:* Derived metric: (1 - normalized_entropy) × 100%. Indicates how deterministic the model's predictions are (see the sketch at the end of this appendix).
>
> *Intuition:* 100% predictability means the next word is always certain; 0% means completely random. Real text falls between these extremes.
>
> *What to seek:* Higher predictability for text generation quality, but too high (>98%) may produce repetitive output.

### Vocabulary & Zipf's Law Metrics

**Zipf Coefficient**

> *Definition:* The slope of the log-log plot of word frequency vs. rank. Zipf's law predicts this should be approximately -1.
>
> *Intuition:* A coefficient near -1 indicates the corpus follows natural language patterns in which a few words are very common and most words are rare.
>
> *What to seek:* Values between -0.8 and -1.2 indicate a healthy natural language distribution. Deviations may suggest domain-specific or artificial text.

**R² (Coefficient of Determination)**

> *Definition:* Measures how well the linear fit explains the frequency-rank relationship. Ranges from 0 to 1.
>
> *Intuition:* R² near 1.0 means the data closely follows Zipf's law; lower values indicate deviation from expected word frequency patterns.
>
> *What to seek:* R² > 0.95 is excellent; > 0.99 indicates near-perfect Zipf adherence typical of large natural corpora.

**Vocabulary Coverage**

> *Definition:* Cumulative percentage of corpus tokens accounted for by the top N words.
>
> *Intuition:* Shows how concentrated word usage is. If the top 100 words cover 50% of the text, the corpus relies heavily on common words.
>
> *What to seek:* Top-100 coverage of 30-50% is typical. Higher coverage indicates more repetitive text; lower suggests a richer vocabulary.

### Word Embedding Metrics

**Isotropy**

> *Definition:* Measures how uniformly distributed vectors are in the embedding space. Computed as the ratio of minimum to maximum singular values.
>
> *Intuition:* High isotropy (near 1.0) means vectors spread evenly in all directions; low isotropy means vectors cluster in certain directions, reducing expressiveness.
>
> *What to seek:* Higher isotropy generally indicates better-quality embeddings. Values > 0.1 are reasonable; > 0.3 is good. Lower-dimensional embeddings tend to have higher isotropy.

**Average Norm**

> *Definition:* Mean magnitude (L2 norm) of word vectors in the embedding space.
>
> *Intuition:* Indicates the typical "length" of vectors. Consistent norms suggest stable training; high variance may indicate that some words are undertrained.
>
> *What to seek:* Relatively consistent norms across models. The absolute value matters less than consistency (low std deviation).

**Cosine Similarity**

> *Definition:* Measures angular similarity between vectors, ranging from -1 (opposite) to 1 (identical direction).
>
> *Intuition:* Words with similar meanings should have high cosine similarity. This is the standard metric for semantic relatedness in embeddings.
>
> *What to seek:* Semantically related words should score > 0.5; unrelated words should be near 0. Synonyms often score > 0.7.

**t-SNE Visualization**

> *Definition:* t-Distributed Stochastic Neighbor Embedding - a dimensionality reduction technique that preserves local structure for visualization.
>
> *Intuition:* Clusters in t-SNE plots indicate groups of semantically related words. Spread indicates vocabulary diversity; tight clusters suggest semantic coherence.
>
> *What to seek:* Meaningful clusters (e.g., numbers together, verbs together). Avoid over-interpreting distances - t-SNE preserves local, not global, structure.

### General Interpretation Guidelines

1. **Compare within model families:** Metrics are most meaningful when comparing models of the same type (e.g., 8k vs 64k tokenizer).
2. **Consider trade-offs:** Better performance on one metric often comes at the cost of another (e.g., compression vs. OOV rate).
3. **Context matters:** Optimal values depend on downstream tasks. Text generation may prioritize different metrics than classification.
4. **Corpus influence:** All metrics are influenced by corpus characteristics. Wikipedia text differs from social media or literature.
5. **Language-specific patterns:** Morphologically rich languages (like Arabic) may show different optimal ranges than analytic languages.
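To make the Predictability definition above concrete, a minimal sketch that derives it from a next-token distribution. It assumes normalized_entropy means entropy divided by its uniform maximum log2(#choices), which is one plausible reading; the pipeline's exact normalization is not documented here:

```python
import math
from collections import Counter

def predictability(next_token_counts):
    """(1 - H / H_max) * 100, with H_max = log2 of the number of choices."""
    total = sum(next_token_counts.values())
    probs = [count / total for count in next_token_counts.values()]
    entropy = -sum(p * math.log2(p) for p in probs)

    h_max = math.log2(len(next_token_counts))
    if h_max == 0:  # a single observed continuation is fully certain
        return 100.0
    return (1 - entropy / h_max) * 100

# A context followed by "دوار" 9 times out of 10 is highly predictable.
counts = Counter({"دوار": 9, "مقالات": 1})
print(f"{predictability(counts):.1f}%")
```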
### Visualizations Index

| # | Visualization | Description |
|---|---------------|-------------|
| 01 | Tokenizer Compression | Compression ratios by vocabulary size |
| 02 | Tokenizer Fertility | Average token length by vocabulary |
| 03 | Tokenizer OOV | Unknown token rates |
| 04 | Tokenizer Tokens | Total tokens by vocabulary |
| 05 | N-gram Perplexity | Perplexity by n-gram size |
| 06 | N-gram Entropy | Entropy by n-gram size |
| 07 | N-gram Coverage | Top pattern coverage |
| 08 | N-gram Unique | Unique n-gram counts |
| 09 | Markov Entropy | Entropy by context size |
| 10 | Markov Branching | Branching factor by context |
| 11 | Markov Contexts | Unique context counts |
| 12 | Zipf's Law | Frequency-rank distribution with fit |
| 13 | Vocab Frequency | Word frequency distribution |
| 14 | Top 20 Words | Most frequent words |
| 15 | Vocab Coverage | Cumulative coverage curve |
| 16 | Embedding Isotropy | Vector space uniformity |
| 17 | Embedding Norms | Vector magnitude distribution |
| 18 | Similarity Matrix | Word similarity heatmap |
| 19 | Nearest Neighbors | Similar words for key terms |
| 20 | t-SNE Words | 2D word embedding visualization |
| 21 | t-SNE Sentences | 2D sentence embedding visualization |
| 22 | Position Encoding | Encoding method comparison |
| 23 | Model Sizes | Storage requirements |
| 24 | Dashboard | Comprehensive performance overview |

---

*Generated by Wikilangs Models Pipeline*
*Report Date: 2025-12-27 03:37:35*