Dataset Card — Compact Ambitions: A Comparative Analysis of Portimbria-150M and SmolLM2-135M

Dataset Summary

This dataset contains the full text of a secondary comparative analysis examining two sub-200M parameter language models: Portimbria-150M (StentorLabs, 2026), a 151M-parameter decoder-only model trained on ~6B tokens at zero financial cost using Kaggle's free-tier TPU v5e-8; and SmolLM2-135M (HuggingFace, 2025), a 135M-parameter model trained on 2 trillion tokens by HuggingFace. The paper is released as a document dataset to support reproducible NLP research and to make the analysis citable, searchable, and accessible from the HuggingFace Hub.

The analysis covers seven dimensions:

  • Architectural design (depth vs. width tradeoffs, GQA configuration, vocabulary design, positional encoding)
  • Training data curation (FineWeb-HQ vs. FineWeb-Edu/DCLM curriculum, synthetic data)
  • Training infrastructure and compute cost
  • Optimizer and learning rate schedule configuration
  • Scaling law positioning (Chinchilla and inference-optimal frameworks)
  • Benchmark evaluation across eight tasks (HellaSwag, ARC-Easy, ARC-Challenge, PIQA, Winogrande, TruthfulQA MC2, OpenBookQA, CommonsenseQA)
  • Deployment characteristics (quantization, speculative decoding, edge deployment)

Key findings: SmolLM2-135M leads on the majority of standard benchmarks by 10–15 percentage points, consistent with its 333× larger training token budget. Portimbria-150M leads all documented peers on TruthfulQA MC2 (46.94%) and offers architectural advantages suited to speculative decoding draft model applications against Mistral-family targets. The paper provides a detailed examination of how data volume, data quality, architectural configuration, and training methodology interact at this scale.
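
As a rough illustration of the draft-model use case above, the sketch below pairs a compact draft with a larger Mistral-family target via the transformers assisted-generation API (the assistant_model argument of generate). The target checkpoint and prompt are illustrative placeholders, and the pairing assumes tokenizer compatibility between draft and target; this is a hedged sketch, not a procedure from the paper.

```python
# Sketch: compact model as a speculative-decoding draft for a larger target.
# Assumes a shared/compatible tokenizer between draft and target; the Mistral
# checkpoint and the prompt are illustrative placeholders.
from transformers import AutoModelForCausalLM, AutoTokenizer

target_id = "mistralai/Mistral-7B-v0.1"   # example Mistral-family target
draft_id = "StentorLabs/Portimbria-150M"  # compact draft model

tokenizer = AutoTokenizer.from_pretrained(target_id)
target = AutoModelForCausalLM.from_pretrained(target_id)
draft = AutoModelForCausalLM.from_pretrained(draft_id)

inputs = tokenizer("Speculative decoding works by letting a small draft model", return_tensors="pt")

# The draft proposes several tokens per step; the target verifies them in a
# single forward pass, so greedy output matches the target and only latency changes.
outputs = target.generate(**inputs, assistant_model=draft, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```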

No new models, benchmarks, or training methodologies are introduced. All analysis is secondary, drawing on published model cards and the SmolLM2 paper (Ben Allal et al., 2025).

Supported Tasks and Leaderboards

This dataset is primarily intended for document retrieval and research reference use cases. It may also serve as:

  • A reference document for researchers studying sub-200M parameter model design and tradeoffs
  • A training or evaluation sample for document understanding / long-document NLP tasks
  • A source text for citation extraction or scientific NLP benchmarks

No active leaderboard is associated with this dataset.

Languages

English (en). The paper is a monolingual English academic analysis. All cited benchmark datasets (HellaSwag, ARC, etc.) are also English-language tasks.


Dataset Structure

Data Instances

This dataset contains the full text of the analysis paper in two file formats: Markdown (paper_fixed.md) and PDF (paper_fixed.md.pdf). A typical instance is the complete paper, approximately 83,000 characters (~18,000 words) across 11 sections, a references section, and supplementary statements.

{
  "text": "# Compact Ambitions: A Comparative Analysis of Portimbria-150M and SmolLM2-135M\n\n**Data Volume vs. Architectural Efficiency in the Sub-200M Language Model Tier**\n\n...",
  "language": "en",
  "document_type": "research_paper_preprint",
  "sections": 11,
  "word_count": ~18000
}
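
For convenience, a minimal loading sketch follows. It assumes the repository ID given in the Citation Information section and that the Hub exposes the auto-converted data as a single train split; adjust both if the repository layout differs.

```python
# Minimal sketch: load the paper text from the HuggingFace Hub.
# The repo ID is taken from the citation section; the "train" split name is an
# assumption about how the auto-converted Parquet data is exposed.
from datasets import load_dataset

ds = load_dataset("StentorLabs/compact-ambitions-portimbria-smollm2", split="train")

record = ds[0]
print(record["language"], record["document_type"])
print(record["text"][:500])  # first 500 characters of the paper
```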

Data Fields

| Field | Type | Description |
| --- | --- | --- |
| text | string | Full Markdown text of the paper |
| language | string | ISO 639-1 language code (en) |
| document_type | string | Document classification (research_paper_preprint) |
| title | string | Full title of the paper |
| author | string | Author name and affiliation |
| date | string | Publication date (April 2026) |
| version | string | Document version (1.0) |

Data Splits

This dataset has no train/validation/test splits. It is a single-document reference dataset.

| Split | Documents |
| --- | --- |
| (none — single document) | 1 |

Dataset Creation

Curation Rationale

The paper was prepared for release by Kai Izumoto (StentorLabs, Independent) in April 2026. It was published as a HuggingFace dataset to:

  1. Make the full analysis text directly accessible from the same Hub where Portimbria-150M is hosted
  2. Enable citation via a persistent, versioned HuggingFace dataset URL
  3. Support discoverability through Hub tags and full-text search
  4. Provide a citable, DOI-linkable artifact separate from the model weights repository

The analysis itself was motivated by the practical value of comparing a zero-cost TPU-trained model against an institutionally-trained model in the same parameter tier, as a natural experiment in minimum viable resources for the sub-200M regime.

Source Data

Initial Data Collection and Normalization

The paper text was authored in Markdown and exported to PDF. Source material for the analysis consists of:

  • The Portimbria-150M model card (StentorLabs/Portimbria-150M) by Kai Izumoto (2026)
  • The SmolLM2 paper: Ben Allal et al. (2025), arXiv:2502.02737
  • The SmolLM2-135M model card (HuggingFaceTB/SmolLM2-135M)
  • Published benchmark evaluations and scaling law literature as cited in the paper's reference section

All benchmark scores are sourced from the respective model cards and the SmolLM2 paper; no new benchmarking was conducted.

Who are the source language producers?

The paper was written entirely by Kai Izumoto. It is an independent secondary analysis; no crowdsourcing or annotation was employed.

Annotations

Annotation process

No annotations were applied. The dataset contains the raw authored Markdown text of the paper and its PDF rendering.

Who are the annotators?

N/A — no external annotation.

Personal and Sensitive Information

The paper contains no personal data, personally identifiable information (PII), or sensitive individual-level data. It references publicly available model cards and papers by name of author, consistent with standard academic citation practice.


Considerations for Using the Data

Social Impact of Dataset

This paper and dataset contribute to transparency in the sub-200M language model tier by:

  • Documenting that functional language models can be trained at zero financial cost, lowering the barrier to entry for independent ML researchers
  • Providing an honest, evenhanded comparison that explicitly acknowledges the benchmark superiority of the institutionally trained model
  • Offering a detailed failure mode analysis of the zero-cost model to help practitioners understand its limitations before deployment

Researchers using this analysis to inform model selection should note the conflict of interest declaration in the paper: the author is the creator of Portimbria-150M. All interpretive claims about that model's relative strengths should be read with appropriate scrutiny.

Discussion of Biases

Author bias: The author of this analysis is the creator of Portimbria-150M. Despite explicit efforts at evenhanded treatment — including direct acknowledgment of SmolLM2-135M's benchmark superiority on the majority of tasks — readers should apply heightened scrutiny to interpretive claims about Portimbria-150M's advantages (TruthfulQA, speculative decoding suitability, context window length).

Evaluation framework bias: Benchmark scores for the two models were produced under different evaluation frameworks (lm-evaluation-harness vs. lighteval) with different prompt templates. Cross-model score comparisons are directional, not exact. Score differences below ~3 percentage points may fall within harness variance.
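
Readers who want directly comparable numbers can re-run both models under a single harness. A hedged sketch using the lm-evaluation-harness Python API is shown below; the task names, few-shot settings, and model ID are illustrative, and scores obtained this way will not exactly reproduce the model-card figures, which came from different frameworks.

```python
# Sketch: evaluate a model under one harness (lm-evaluation-harness) so that
# prompt templates and scoring are held constant across models.
# Task names and settings are illustrative, not the paper's exact configuration.
import lm_eval

results = lm_eval.simple_evaluate(
    model="hf",
    model_args="pretrained=HuggingFaceTB/SmolLM2-135M",
    tasks=["hellaswag", "truthfulqa_mc2"],
    num_fewshot=0,
)
print(results["results"])
```

Repeating the call with the other model's checkpoint yields scores produced under identical prompts and metrics, which makes small score gaps easier to interpret.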

Data coverage bias: The analysis draws exclusively on English-language models evaluated on English-language benchmarks. No multilingual or non-English evaluation is present.

Benchmark selection bias: Standard NLP benchmarks (HellaSwag, ARC, PIQA) reward exposure to specific domain patterns present in those datasets. SmolLM2-135M's data curation philosophy was explicitly designed to maximize performance on these benchmarks, which is legitimate but important context for interpreting the gaps.

Other Known Limitations

  • No new model training or benchmarking was conducted; all numerical claims are sourced from published model cards and the SmolLM2 paper.
  • Several SmolLM2-135M training hyperparameters for the 135M variant are not published; conclusions about that model's training dynamics are extrapolated from the documented 1.7B flagship run.
  • Neither model has been trained multiple times with different random seeds; reported benchmark scores represent a single sample from each training distribution.
  • The analysis covers base model behavior only and does not address instruction-tuned variants.

Additional Information

Dataset Curators

Kai Izumoto, StentorLabs (Independent). April 2026.

Licensing Information

This dataset (paper text) is released under the Creative Commons Attribution 4.0 International (CC BY 4.0) license. You are free to share and adapt the material for any purpose, provided appropriate credit is given, a link to the license is provided, and any changes are indicated.

License: CC BY 4.0

Citation Information

If you use or reference this analysis, please cite:

@misc{izumoto2026compactambitions,
  title        = {Compact Ambitions: A Comparative Analysis of {Portimbria-150M} and {SmolLM2-135M}},
  author       = {Izumoto, Kai},
  year         = {2026},
  month        = {April},
  institution  = {StentorLabs},
  note         = {Preprint. HuggingFace Datasets.},
  url          = {https://huggingface.co/datasets/StentorLabs/compact-ambitions-portimbria-smollm2}
}

Contributions

This analysis was conducted solely by Kai Izumoto. The author thanks Kaggle for providing free TPU v5e-8 compute access, and HuggingFace for hosting model weights and datasets under open licenses. No external funding was received.
