Scaling Laws of AI: What Happens When You Make Models Bigger?

Community Article Published April 24, 2026

A deep dive into one of the most consequential empirical findings in modern machine learning.



Tags: scaling-laws large-language-models training compute research


TL;DR

Neural language model performance improves predictably as you scale compute, data, and parameters following clean power-law relationships. This insight has fundamentally reshaped how AI labs think about model development, resource allocation, and the future trajectory of AI capabilities.


Table of Contents

  1. Introduction
  2. The Original Kaplan et al. Scaling Laws (2020)
  3. Key Variables: The Compute Triangle
  4. Power Laws in Practice
  5. Chinchilla: Revising the Laws (2022)
  6. Beyond Language Modeling
  7. Emergent Abilities and Phase Transitions
  8. Implications for Training Strategy
  9. Current Debates and Open Questions
  10. Conclusion

Introduction

One of the most profound discoveries in modern AI research wasn't a new architecture or a clever training trick; it was the realization that model performance follows remarkably predictable mathematical laws as you scale up resources.

Before scaling laws, training large models felt like navigating a fog. Researchers would scale up a model, run expensive experiments, and hope for the best. The discovery of scaling laws lifted that fog. Suddenly, you could predict how well your model would perform before training it, simply by knowing how much compute you planned to spend.

This article breaks down what scaling laws are, how they were discovered, how they've evolved, and why they matter for anyone building or studying large language models.


The Original Kaplan et al. Scaling Laws (2020)

The landmark paper "Scaling Laws for Neural Language Models" by Kaplan et al. (OpenAI, 2020) established the empirical foundation for this field. Their key finding:

"Model performance depends strongly on scale, weakly on model shape."

In other words: size matters far more than architecture details.

What they measured

The researchers trained hundreds of models varying along three axes:

Variable                 Symbol   Range studied
Model parameters         N        10³ – 10⁹
Dataset size (tokens)    D        10⁶ – 10¹¹
Compute budget (FLOPs)   C        10¹² – 10²³

They measured cross-entropy loss on a held-out test set, a proxy for how well the model predicts language. The relationship they found was clean, consistent, and startling.


Key Variables: The Compute Triangle

Think of model training as governed by a triangle of three resources:

          Compute (C)
          /          \
         /            \
   Parameters (N) — Data (D)

These three are related by the rough identity:

C ≈ 6 × N × D

(Each training token costs roughly 6 FLOPs per parameter: about 2 in the forward pass and 4 in the backward pass.)

The central insight is that all three matter, but in different ways, and they can be traded off against each other.
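As a quick sanity check on that identity, here is a minimal Python sketch; the parameter and token counts are illustrative, not pulled from any specific training run.

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute with the rule of thumb C ≈ 6 * N * D."""
    return 6.0 * n_params * n_tokens

# Illustrative example: a 70B-parameter model trained on 1.4T tokens
# (roughly the Chinchilla configuration discussed later in this article).
print(f"{training_flops(70e9, 1.4e12):.2e} FLOPs")  # ≈ 5.88e+23
```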


Power Laws in Practice

The Kaplan paper showed that loss L follows a power law with each resource individually (when the others are held in abundance):

L(N) = (N_c / N)^α_N        where α_N ≈ 0.076
L(D) = (D_c / D)^α_D        where α_D ≈ 0.095
L(C) = (C_c / C)^α_C        where α_C ≈ 0.050

These exponents are small, which is the whole point. It means:

  • Doubling your parameters → modest but reliable loss improvement
  • Doubling your data → similarly reliable improvement
  • The improvements never plateau within the studied range

This is not like overfitting dynamics or diminishing returns in traditional ML. The scaling is consistent across many orders of magnitude.
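To make those exponents concrete, here is a tiny sketch of what doubling each resource buys, using only the exponents quoted above:

```python
# With L ∝ x^(-alpha), doubling x multiplies the loss by 2**(-alpha).
for name, alpha in [("parameters (N)", 0.076), ("data (D)", 0.095), ("compute (C)", 0.050)]:
    print(f"doubling {name}: loss × {2 ** -alpha:.3f}")
# doubling parameters: ×0.949, data: ×0.936, compute: ×0.966
# i.e. a few percent of relative improvement per doubling, every time.
```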

Visualizing the Power Law

Loss (↓ better)
│
│ ●
│   ●
│     ●
│       ●
│         ●
│           ●
│             ●
└─────────────────────────────────────► Scale (log)
  10M    100M   1B    10B   100B

A log-log plot of loss vs. scale produces a straight line. That's the power law signature.
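For intuition, here is a minimal sketch that fits a straight line in log-log space and recovers the exponent. The loss values are synthetic and the constant N_c is a placeholder, so treat this as an illustration of the method rather than a reproduction of any published fit.

```python
import numpy as np

alpha_true, Nc = 0.076, 8.8e13   # exponent from the Kaplan fit; Nc is a placeholder constant
N = np.logspace(7, 11, 9)        # 10M to 100B parameters
loss = (Nc / N) ** alpha_true    # synthetic losses that follow the power law exactly

# In log-log space, L(N) = (Nc/N)^alpha becomes log L = -alpha * log N + alpha * log Nc,
# so a linear fit recovers the exponent as the (negated) slope.
slope, intercept = np.polyfit(np.log(N), np.log(loss), 1)
print(f"fitted exponent ≈ {-slope:.3f}")  # ≈ 0.076
```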


Chinchilla: Revising the Laws (2022)

In 2022, DeepMind published "Training Compute-Optimal Large Language Models", commonly known as the Chinchilla paper (after its 70B flagship model).

Their finding upended the conventional wisdom of the field:

Prior models were significantly undertrained relative to their size.

The Chinchilla Scaling Law

Kaplan et al. suggested that given a fixed compute budget, you should prioritize scaling parameters over data. Chinchilla showed this was wrong.

Their revised law found that parameters and tokens should scale roughly equally:

N_optimal ∝ C^0.50
D_optimal ∝ C^0.50

The rule of thumb that emerged: ~20 tokens per parameter for compute-optimal training.

Model        Parameters   Tokens (actual)   Tokens (optimal)   Verdict
GPT-3        175B         300B              3.5T               Undertrained
Gopher       280B         300B              5.6T               Undertrained
Chinchilla   70B          1.4T              1.4T               ✅ Optimal

Chinchilla (70B) outperformed Gopher (280B) despite being 4× smaller, because it was trained on proportionally more data.
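Combining the C ≈ 6·N·D identity with the ~20 tokens-per-parameter rule gives a back-of-the-envelope allocator; the compute budget below is illustrative.

```python
import math

def chinchilla_split(compute_flops: float, tokens_per_param: float = 20.0):
    """Split a compute budget using C ≈ 6*N*D together with D ≈ 20*N."""
    n_params = math.sqrt(compute_flops / (6.0 * tokens_per_param))
    n_tokens = tokens_per_param * n_params
    return n_params, n_tokens

n, d = chinchilla_split(1e24)  # illustrative budget of 1e24 FLOPs
print(f"N ≈ {n:.2e} parameters, D ≈ {d:.2e} tokens")  # ≈ 9.1e10 params, 1.8e12 tokens
```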


Beyond Language Modeling

Scaling laws aren't just for LLMs. Similar power-law relationships have been found in:

Vision Models

  • Image classification loss scales predictably with model size and dataset size
  • Vision-language models (like CLIP) follow similar laws across modalities

Code Generation

  • GitHub Copilot and similar models show scaling behavior consistent with language models

Multimodal Models

  • Early evidence suggests that cross-modal transfer follows its own scaling regime, potentially more data-efficient

Reinforcement Learning

  • Scaling laws in RL are less clean, but emerging evidence suggests similar dynamics for policy models trained via RLHF

Emergent Abilities and Phase Transitions

One of the most debated consequences of scaling is emergence: capabilities that appear suddenly at certain scale thresholds rather than improving gradually.

Examples include:

  • Chain-of-thought reasoning: appears at roughly 100B parameters
  • Multi-step arithmetic: jumps in capability at scale
  • In-context learning: becomes reliable only in sufficiently large models
  • Instruction following: qualitatively changes at scale

The "Phase Transition" View

Capability
│
│                               ████████████ (emergent)
│
│            ░░░░░░░░░░░░░░░░░░░ (gradual)
│
└──────────────────────────────────────────► Scale
           10B          100B         1T

However, this view is contested. A 2023 paper by Schaeffer et al. argued that many "emergent" abilities are artifacts of the metric used: when you use a continuous metric (rather than binary pass/fail), the apparent phase transition smooths out into gradual improvement.


Implications for Training Strategy

Understanding scaling laws has practical consequences for how labs train models:

1. Budget allocation

Given a fixed compute budget C, don't just make the biggest model you can. Follow the Chinchilla law to balance N and D.

2. Inference-aware scaling

The Chinchilla framework optimizes for training compute. But if you'll serve a model billions of times, a smaller model trained on more data may be cheaper overall (LLaMA's philosophy).
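Here is a rough sketch of that trade-off, assuming the common approximations of ≈6·N·D FLOPs for training and ≈2·N FLOPs per generated token at inference; the serving volume and model configurations are illustrative.

```python
def lifetime_flops(n_params: float, train_tokens: float, served_tokens: float) -> float:
    """Total compute: training (≈ 6*N*D) plus inference (≈ 2*N per served token)."""
    return 6.0 * n_params * train_tokens + 2.0 * n_params * served_tokens

served = 1e13  # illustrative lifetime serving volume: 10 trillion tokens

big = lifetime_flops(70e9, 1.4e12, served)   # Chinchilla-optimal 70B
small = lifetime_flops(8e9, 15e12, served)   # heavily overtrained 8B (LLaMA-3.1-style token count)
print(f"70B lifetime: {big:.2e} FLOPs, 8B lifetime: {small:.2e} FLOPs")
# The smaller, overtrained model ends up cheaper once serving dominates.
```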

3. Data quality matters

Scaling laws assume random sampling from a corpus. Data quality can shift the effective scaling exponent: cleaner data means better loss for the same token count.

4. Predicting final performance early

Because loss follows predictable trajectories, you can often estimate final model quality from early checkpoints, saving enormous compute on ablations.

5. The "overtrained" models trend

Post-Chinchilla, many labs deliberately train smaller models on far more tokens than "optimal" for inference efficiency:

  • LLaMA 3.1 8B: trained on 15T tokens (~1,875 tokens/param)
  • Mistral 7B: heavily overtrained vs. Chinchilla optimal
  • Phi-3: extreme data efficiency through careful curation

Current Debates and Open Questions

Do scaling laws hold at frontier scale?

We've observed clean scaling up to ~10²⁵ FLOPs. Whether the laws continue cleanly beyond that or whether we hit new phenomena is unknown. Some evidence from GPT-4 and Claude 3 suggests continued scaling, but the data is sparse and proprietary.

The data wall

If you need 20 tokens per parameter for Chinchilla-optimal training, a 10T parameter model needs 200 trillion tokens. Total estimated internet text is ~100T tokens. Are we approaching a data wall? Synthetic data generation (models training on model outputs) is the leading proposed solution.

Scaling laws for reasoning

Does chain-of-thought reasoning, which effectively extends inference compute, follow its own scaling laws? Early work on "test-time compute" scaling (as in OpenAI's o1/o3 series) suggests yes: performance scales with inference compute in a predictable way. This opens a second axis of scaling beyond training.

Architecture independence

Kaplan et al. found that performance was "weakly dependent on model shape." But this was studied in a narrow range. Does it hold for dramatically different architectures like Mamba (SSM-based) or mixture-of-experts? Evidence is mixed.


Conclusion

Scaling laws represent one of the rare moments in science where messy, complex phenomena reduce to clean, predictable mathematics. They've given AI researchers something precious: reliable extrapolation.

The key takeaways:

  • ✅ Performance scales as a power law with compute, parameters, and data
  • ✅ Parameters and tokens should scale roughly equally (Chinchilla)
  • ✅ These laws have held across many orders of magnitude
  • ⚠️ Emergence is real but its interpretation is contested
  • ❓ The data wall, test-time scaling, and frontier behavior remain open

Perhaps most importantly, scaling laws suggest that AI progress, at least in the current paradigm, is not a matter of luck or sudden breakthroughs. It's a function of resources. That's a sobering and clarifying lens through which to understand the arms race in AI compute infrastructure happening today.


If you found this article useful, consider sharing it or opening a discussion below. Help a brother out by upvoting the article!

License: CC BY 4.0
