provider | model | input_cost_per_1m | output_cost_per_1m | context_window | max_output | median_latency_ms | p99_latency_ms | date_benchmarked
---|---|---|---|---|---|---|---|---
OpenAI | gpt-4o | 2.5 | 10 | 128,000 | 16,384 | 520 | 1,800 | 2026-04-01 |
OpenAI | gpt-4o-mini | 0.15 | 0.6 | 128,000 | 16,384 | 310 | 950 | 2026-04-01 |
OpenAI | gpt-4-turbo | 10 | 30 | 128,000 | 4,096 | 680 | 2,200 | 2026-04-01 |
OpenAI | gpt-4 | 30 | 60 | 8,192 | 4,096 | 850 | 3,100 | 2026-04-01 |
OpenAI | gpt-3.5-turbo | 0.5 | 1.5 | 16,385 | 4,096 | 280 | 800 | 2026-04-01 |
OpenAI | o1 | 15 | 60 | 200,000 | 100,000 | 2,500 | 8,000 | 2026-04-01 |
OpenAI | o1-mini | 3 | 12 | 128,000 | 65,536 | 1,200 | 4,500 | 2026-04-01 |
OpenAI | o3 | 10 | 40 | 200,000 | 100,000 | 2,200 | 7,500 | 2026-04-01 |
OpenAI | o3-mini | 1.1 | 4.4 | 200,000 | 100,000 | 900 | 3,200 | 2026-04-01 |
Anthropic | claude-4-opus | 15 | 75 | 200,000 | 32,000 | 1,100 | 3,800 | 2026-04-01 |
Anthropic | claude-4-sonnet | 3 | 15 | 200,000 | 64,000 | 650 | 2,100 | 2026-04-01 |
Anthropic | claude-3.5-sonnet | 3 | 15 | 200,000 | 8,192 | 600 | 1,900 | 2026-04-01 |
Anthropic | claude-3.5-haiku | 0.8 | 4 | 200,000 | 8,192 | 350 | 1,100 | 2026-04-01 |
Anthropic | claude-3-opus | 15 | 75 | 200,000 | 4,096 | 1,200 | 4,200 | 2026-04-01 |
Anthropic | claude-3-sonnet | 3 | 15 | 200,000 | 4,096 | 580 | 1,800 | 2026-04-01 |
Anthropic | claude-3-haiku | 0.25 | 1.25 | 200,000 | 4,096 | 290 | 850 | 2026-04-01 |
Google | gemini-2.0-flash | 0.1 | 0.4 | 1,048,576 | 8,192 | 250 | 750 | 2026-04-01 |
Google | gemini-1.5-pro | 1.25 | 5 | 1,048,576 | 8,192 | 700 | 2,400 | 2026-04-01 |
Google | gemini-1.5-flash | 0.075 | 0.3 | 1,048,576 | 8,192 | 220 | 650 | 2026-04-01 |
Google | gemini-pro | 0.5 | 1.5 | 32,768 | 8,192 | 450 | 1,400 | 2026-04-01 |
Meta | llama-3.1-405b | 3 | 3 | 131,072 | 4,096 | 1,800 | 6,000 | 2026-04-01 |
Meta | llama-3.1-70b | 0.9 | 0.9 | 131,072 | 4,096 | 650 | 2,100 | 2026-04-01 |
Meta | llama-3.1-8b | 0.1 | 0.1 | 131,072 | 4,096 | 180 | 550 | 2026-04-01 |
Meta | llama-3-70b | 0.8 | 0.8 | 8,192 | 4,096 | 700 | 2,300 | 2026-04-01 |
Meta | llama-3-8b | 0.1 | 0.1 | 8,192 | 4,096 | 190 | 600 | 2026-04-01 |
Mistral | mistral-large | 2 | 6 | 128,000 | 4,096 | 550 | 1,800 | 2026-04-01 |
Mistral | mistral-small | 0.2 | 0.6 | 32,768 | 4,096 | 280 | 850 | 2026-04-01 |
Mistral | mixtral-8x7b | 0.5 | 0.5 | 32,768 | 4,096 | 350 | 1,100 | 2026-04-01 |
Mistral | mistral-7b | 0.15 | 0.15 | 32,768 | 4,096 | 200 | 600 | 2026-04-01 |
Cohere | command-r-plus | 3 | 15 | 128,000 | 4,096 | 750 | 2,500 | 2026-04-01 |
Cohere | command-r | 0.5 | 1.5 | 128,000 | 4,096 | 400 | 1,200 | 2026-04-01 |
# LLM Cost Benchmark

Token pricing and latency benchmarks across six major LLM providers. Intended for cost estimation, budget planning, and provider comparison.
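The two price columns combine into a per-request estimate in the obvious way: scale each price by the token count over one million. A minimal sketch (the `estimate_cost` helper is illustrative, not part of the dataset):

```python
def estimate_cost(input_tokens: int, output_tokens: int,
                  input_cost_per_1m: float, output_cost_per_1m: float) -> float:
    """Estimated USD cost of one request, given per-1M-token prices."""
    return (input_tokens / 1_000_000) * input_cost_per_1m \
         + (output_tokens / 1_000_000) * output_cost_per_1m

# Example: 10k input / 1k output tokens on gpt-4o ($2.5 in, $10 out per 1M)
cost = estimate_cost(10_000, 1_000, 2.5, 10)
print(f"${cost:.4f}")  # → $0.0350
```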
## Fields

- `provider`: API provider (OpenAI, Anthropic, Google, Meta, Mistral, Cohere)
- `model`: Model name
- `input_cost_per_1m`: Input cost per 1M tokens (USD)
- `output_cost_per_1m`: Output cost per 1M tokens (USD)
- `context_window`: Maximum context window size (tokens)
- `max_output`: Maximum output tokens
- `median_latency_ms`: Median latency for a 100-token response (ms)
- `p99_latency_ms`: P99 latency for a 100-token response (ms)
- `date_benchmarked`: Date the benchmark was run
## Usage

```python
from datasets import load_dataset

ds = load_dataset("zachz/llm-cost-benchmark")
```
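Once loaded, each split behaves like a sequence of records with the fields listed above, so provider comparisons are simple list operations. A sketch of one such query, using a few rows copied inline from the table so it runs without downloading anything:

```python
# A few rows copied from the table above, mimicking the dataset schema.
rows = [
    {"model": "gpt-4o-mini", "input_cost_per_1m": 0.15, "context_window": 128_000},
    {"model": "gemini-1.5-flash", "input_cost_per_1m": 0.075, "context_window": 1_048_576},
    {"model": "claude-3-haiku", "input_cost_per_1m": 0.25, "context_window": 200_000},
]

# Cheapest model (by input price) with at least a 128k-token context window.
eligible = [r for r in rows if r["context_window"] >= 128_000]
cheapest = min(eligible, key=lambda r: r["input_cost_per_1m"])
print(cheapest["model"])  # → gemini-1.5-flash
```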
## License
MIT