# Open Wikipedia (Markdown)

Every Wikipedia article converted to clean Markdown, organized by language and updated from the latest Wikimedia dumps.
## What is it?
This dataset contains every article from every language edition of Wikipedia, converted from raw MediaWiki markup into clean, readable Markdown. Headings, bold, italic, code blocks, and internal links are all preserved as proper Markdown syntax, while templates, infoboxes, references, tables, categories, and other noise are stripped away.
The dataset currently contains 139.4K articles across 1 language, sourced from the official Wikimedia database dumps. Each language's full XML export is streamed, parsed, and converted article by article. The results are stored as sharded Apache Parquet files with Zstandard compression, organized by language. Each language gets its own directory under data/, and each shard holds up to 500,000 articles.
This is the Markdown variant of the Open Wikipedia collection. If you need plain text with all formatting removed, see `open-index/open-wikipedia-text`. If you need the original MediaWiki source markup, see `open-index/open-wikipedia`.
**Live progress: 1/367 languages (0%).** This dataset is being actively populated. New languages are added as they finish processing, typically several per hour. The pipeline downloads each language's full XML dump from Wikimedia, converts every article, and uploads the results here. Estimated completion: April 13, 2026. Languages already available are listed in the statistics section below.
## What is being released?
The dataset is organized as one directory per language, with sharded Parquet files inside each:
```
data/
  en/
    en-00000.parquet   # English, shard 0
    en-00001.parquet   # English, shard 1
    ...
  de/
    de-00000.parquet   # German
  fr/
    fr-00000.parquet   # French
  es/
    es-00000.parquet   # Spanish
  ja/
    ja-00000.parquet   # Japanese
  ...
  la/
    la-00000.parquet   # Latin
```
Each Parquet file contains up to 500,000 rows. Languages with fewer articles than that fit in a single shard. Larger languages like English, German, and French are split across multiple shards. All files use Zstandard compression.
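Under this layout, the expected shard paths for a language follow mechanically from its article count. A minimal sketch (the `shard_paths` helper and `SHARD_SIZE` constant are illustrative, not part of the dataset's tooling):

```python
SHARD_SIZE = 500_000  # maximum rows per shard, per the description above

def shard_paths(lang: str, n_articles: int) -> list[str]:
    """Return the expected relative shard paths for a language."""
    n_shards = max(1, -(-n_articles // SHARD_SIZE))  # ceiling division
    return [f"data/{lang}/{lang}-{i:05d}.parquet" for i in range(n_shards)]

print(shard_paths("la", 139_400))   # ['data/la/la-00000.parquet']
```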
## How to download and use this dataset

### Using DuckDB
DuckDB can read Parquet files directly from Hugging Face without downloading anything first. This is the fastest way to explore the data.
```sql
-- Count articles per language
SELECT lang, COUNT(*) AS articles
FROM read_parquet('hf://datasets/open-index/open-wikipedia-markdown/data/*/*.parquet')
GROUP BY lang
ORDER BY articles DESC;
```

```sql
-- Search for articles about a topic across all languages
SELECT title, lang, length, url
FROM read_parquet('hf://datasets/open-index/open-wikipedia-markdown/data/*/*.parquet')
WHERE markdown ILIKE '%machine learning%'
ORDER BY length DESC
LIMIT 20;
```

```sql
-- Article length distribution for English Wikipedia
SELECT
    percentile_disc(0.25) WITHIN GROUP (ORDER BY length) AS p25,
    percentile_disc(0.50) WITHIN GROUP (ORDER BY length) AS p50,
    percentile_disc(0.75) WITHIN GROUP (ORDER BY length) AS p75,
    percentile_disc(0.90) WITHIN GROUP (ORDER BY length) AS p90,
    percentile_disc(0.99) WITHIN GROUP (ORDER BY length) AS p99,
    AVG(length)::INT AS avg_length,
    MAX(length) AS max_length
FROM read_parquet('hf://datasets/open-index/open-wikipedia-markdown/data/en/*.parquet');
```

```sql
-- Find the longest article in each language
SELECT lang, title, length, url
FROM (
    SELECT *, ROW_NUMBER() OVER (PARTITION BY lang ORDER BY length DESC) AS rn
    FROM read_parquet('hf://datasets/open-index/open-wikipedia-markdown/data/*/*.parquet')
)
WHERE rn = 1
ORDER BY length DESC
LIMIT 20;
```

```sql
-- How many articles contain code blocks?
-- pct is each language's share of all code-containing articles
SELECT lang, COUNT(*) AS articles_with_code,
       COUNT(*) * 100.0 / SUM(COUNT(*)) OVER () AS pct
FROM read_parquet('hf://datasets/open-index/open-wikipedia-markdown/data/*/*.parquet')
WHERE markdown LIKE '%```%'
GROUP BY lang
ORDER BY articles_with_code DESC
LIMIT 15;
```
### Using the `datasets` library

```python
from datasets import load_dataset

# Load English Wikipedia
ds = load_dataset("open-index/open-wikipedia-markdown", "en")
print(ds["train"][0]["title"])
print(ds["train"][0]["markdown"][:500])

# Stream the full dataset without downloading everything first
ds = load_dataset("open-index/open-wikipedia-markdown", "en", split="train", streaming=True)
for item in ds:
    print(item["title"], item["length"])

# Load a specific language
ds = load_dataset("open-index/open-wikipedia-markdown", "de")
print(f"German articles: {len(ds['train']):,}")
```
### Using `huggingface_hub`

```python
from huggingface_hub import snapshot_download

# Download only English
snapshot_download(
    "open-index/open-wikipedia-markdown",
    repo_type="dataset",
    local_dir="./wiki-md/",
    allow_patterns="data/en/*",
)
```
For faster downloads, install the transfer extra with `pip install 'huggingface_hub[hf_transfer]'` and set `HF_HUB_ENABLE_HF_TRANSFER=1`.
### Using the CLI

```sh
# Download a single language
huggingface-cli download open-index/open-wikipedia-markdown \
  --include "data/la/*" \
  --repo-type dataset --local-dir ./wiki-md/
```
### Using Polars

```python
import polars as pl

df = pl.read_parquet("data/en/*.parquet")
print(f"English articles: {len(df):,}")
print(f"Total Markdown text: {df['length'].sum() / 1e9:.1f} GB")
print(df.select("title", "length").describe())
```
## Dataset statistics

You can query the per-language statistics directly from the `stats.csv` file included in the dataset:

```sql
SELECT * FROM read_csv_auto('hf://datasets/open-index/open-wikipedia-markdown/stats.csv')
ORDER BY articles DESC;
```
The `stats.csv` file tracks each committed language with the following columns:

| Column | Description |
|---|---|
| `lang` | ISO 639 language code |
| `lang_name` | Human-readable language name |
| `articles` | Number of articles in this language |
| `md_shards` | Number of Markdown Parquet shards |
| `text_shards` | Number of plain-text Parquet shards |
| `wikitext_shards` | Number of wikitext Parquet shards |
| `md_bytes`, `text_bytes`, `wikitext_bytes` | Parquet file sizes per variant |
| `dump_bytes` | Original Wikimedia dump size |
| `dur_s` | Processing duration in seconds |
| `committed_at` | ISO 8601 timestamp of when this language was committed |
## Languages

| Language | Code | Articles | Shards |
|---|---|---|---|
| Latin | `la` | 139.4K | 1 |
## Schema

Every Parquet file shares the same schema:

| Column | Type | Description |
|---|---|---|
| `id` | int64 | Wikipedia page ID, unique within each language edition |
| `title` | string | Article title as it appears on Wikipedia |
| `markdown` | string | Full article body converted from wikitext to Markdown |
| `url` | string | Direct URL to the Wikipedia article |
| `lang` | string | ISO 639 language code (e.g. `en`, `de`, `fr`, `ja`) |
| `length` | int32 | Markdown body length in bytes |
| `timestamp` | string | Last revision timestamp in ISO 8601 format |
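A row can be sanity-checked against this schema in plain Python. The sketch below maps each column to its Python-level type (the `validate_row` helper is hypothetical, not shipped with the dataset):

```python
# Python-level types corresponding to the Parquet schema above.
SCHEMA = {
    "id": int,          # int64
    "title": str,
    "markdown": str,
    "url": str,
    "lang": str,
    "length": int,      # int32
    "timestamp": str,   # ISO 8601
}

def validate_row(row: dict) -> bool:
    """True if the row has exactly the schema's columns with matching types."""
    return set(row) == set(SCHEMA) and all(
        isinstance(row[k], t) for k, t in SCHEMA.items()
    )

row = {
    "id": 12, "title": "Anarchism", "markdown": "...",
    "url": "https://en.wikipedia.org/wiki/Anarchism",
    "lang": "en", "length": 87453, "timestamp": "2025-12-15T08:22:01Z",
}
print(validate_row(row))  # True
```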
## Example instance

Here is an example row from the English partition, showing a converted article:

```json
{
  "id": 12,
  "title": "Anarchism",
  "markdown": "# Anarchism\n\n**Anarchism** is a political philosophy and movement that is against all forms of authority...",
  "url": "https://en.wikipedia.org/wiki/Anarchism",
  "lang": "en",
  "length": 87453,
  "timestamp": "2025-12-15T08:22:01Z"
}
```
The `markdown` field contains the full article text with Markdown formatting. Internal wiki links are converted to full Wikipedia URLs, so `[[United States|US]]` becomes `[US](https://en.wikipedia.org/wiki/United_States)`.
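That link rewriting can be approximated with a single regex pass. A simplified sketch (the `convert_links` helper is illustrative; the actual converter is not published here and presumably handles more edge cases, such as namespaces and section anchors):

```python
import re

def convert_links(wikitext: str, lang: str = "en") -> str:
    """Rewrite [[Page]] and [[Page|Text]] links to Markdown links."""
    def repl(m: re.Match) -> str:
        page, _, label = m.group(1).partition("|")
        target = page.strip().replace(" ", "_")
        return f"[{label or page}](https://{lang}.wikipedia.org/wiki/{target})"
    return re.sub(r"\[\[([^\[\]]+?)\]\]", repl, wikitext)

print(convert_links("[[United States|US]]"))
# [US](https://en.wikipedia.org/wiki/United_States)
```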
## Wikitext to Markdown conversion

The conversion handles the most common MediaWiki syntax elements and maps them to their Markdown equivalents:

| MediaWiki syntax | Markdown output |
|---|---|
| `== Heading ==` | `## Heading` |
| `=== Subheading ===` | `### Subheading` |
| `'''bold'''` | `**bold**` |
| `''italic''` | `*italic*` |
| `[[Page\|Text]]` | `[Text](https://lang.wikipedia.org/wiki/Page)` |
| `[https://example.com text]` | `[text](https://example.com)` |
| `<syntaxhighlight lang="python">` | ```` ```python ``` ```` |
| `<code>x</code>` | `` `x` `` |
| `<pre>block</pre>` | ```` ```\nblock\n``` ```` |
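The first rows of this mapping can be expressed as ordered regex rules. A minimal sketch covering headings, bold, and italic only (illustrative, not the pipeline's actual converter):

```python
import re

# Order matters: ''' (bold) must be rewritten before '' (italic),
# and deeper headings before shallower ones.
RULES = [
    (re.compile(r"^=== (.+?) ===$", re.M), r"### \1"),
    (re.compile(r"^== (.+?) ==$", re.M), r"## \1"),
    (re.compile(r"'''(.+?)'''"), r"**\1**"),
    (re.compile(r"''(.+?)''"), r"*\1*"),
]

def to_markdown(wikitext: str) -> str:
    for pattern, repl in RULES:
        wikitext = pattern.sub(repl, wikitext)
    return wikitext

print(to_markdown("== History ==\n'''Bold''' and ''italic''."))
```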
## What gets stripped

The following elements are removed during conversion to produce clean, readable text:

| Element | Handling |
|---|---|
| `{{templates}}` | Removed entirely, including Infobox, Navbox, Taxobox, and all other templates |
| `{{Infobox ...}}` | Removed, including nested template parameters |
| `{\| ... \|}` tables | Removed |
| `<ref>` citations | Removed, including named references |
| `[[File:]]` / `[[Image:]]` | Removed, including thumbnails and captions |
| `[[Category:]]` | Removed |
| `<!-- comments -->` | Removed |
| Interwiki links | Removed |
| Magic words | `__NOTOC__`, `__FORCETOC__`, and similar directives are removed |
The goal is to preserve the article's readable content and structure while removing everything that only makes sense in the context of the MediaWiki rendering engine.
## How it works

The pipeline processes Wikipedia language editions through the following steps:

1. **Download.** The latest `{lang}wiki-latest-pages-articles.xml.bz2` dump is streamed from dumps.wikimedia.org. Downloads support HTTP range resumption, so interrupted transfers pick up where they left off.
2. **Parse.** A streaming XML parser processes the bzip2-compressed dump without extracting it to disk. Only namespace-0 pages (articles) are kept. Redirects, talk pages, user pages, and all other namespaces are skipped.
3. **Convert.** Each article's wikitext is converted to Markdown through a series of regex-based transformations. Templates are stripped with up to 5 nesting passes to handle deeply nested constructs. Internal wiki links are resolved to full Wikipedia URLs for the appropriate language edition.
4. **Filter.** Articles shorter than 100 bytes after conversion are excluded. This removes stubs, disambiguation pages, and other pages with minimal content.
5. **Shard.** Articles are written to Zstandard-compressed Parquet files, approximately 500,000 rows per shard. Multiple languages are processed in parallel using a worker pool.
6. **Publish.** Each language's shards are committed to this Hugging Face repository as they complete. Commit messages include article counts, shard counts, and file sizes for auditability.
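Steps 1 and 2 can be sketched with the Python standard library alone. This is an illustration, not the pipeline's actual code; in particular, the namespace URI is an assumption, since the export schema version varies between dump generations:

```python
import bz2
import xml.etree.ElementTree as ET

# Assumed export schema version; check the dump's root element to be sure.
NS = "{http://www.mediawiki.org/xml/export-0.11/}"

def iter_articles(dump_path: str):
    """Stream (title, wikitext) pairs from a bzip2-compressed dump
    without extracting it: namespace-0, non-redirect pages only."""
    with bz2.open(dump_path, "rb") as fh:
        for _, elem in ET.iterparse(fh):
            if elem.tag == NS + "page":
                if elem.findtext(NS + "ns") == "0" and elem.find(NS + "redirect") is None:
                    title = elem.findtext(NS + "title")
                    text = elem.findtext(f"{NS}revision/{NS}text") or ""
                    yield title, text
                elem.clear()  # release the page subtree as we stream
```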
## Considerations

### Why Markdown instead of plain text?
Plain text is sufficient for many NLP tasks, but it loses document structure. Markdown preserves headings, bold, italic, code blocks, and links, which makes it better suited for:
- Language model training where the model should understand document structure
- Retrieval-augmented generation (RAG) where chunking by heading sections produces more coherent results
- Knowledge graph construction where preserved links encode relationships between concepts
If you do not need formatting, the plain text variant is smaller and simpler.
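For the RAG use case, heading-aware chunking can be as simple as splitting on heading lines. A sketch (the `chunk_by_headings` helper is illustrative, not part of the dataset tooling):

```python
import re

def chunk_by_headings(markdown: str) -> list[str]:
    """Split a Markdown article into chunks, one per heading section."""
    parts = re.split(r"(?m)^(?=#{1,6} )", markdown)
    return [p.strip() for p in parts if p.strip()]

doc = "# Title\nIntro.\n## History\nOld.\n## Uses\nNew."
for chunk in chunk_by_headings(doc):
    print(repr(chunk))
```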
### Known limitations
- Conversion is regex-based, not a full parser. Some complex wikitext constructs (deeply nested tables inside templates, parser functions, Lua module output) may not convert perfectly. The vast majority of articles convert cleanly, but edge cases exist.
- Templates are stripped, not expanded. Infoboxes, navigation boxes, and other templates are removed entirely rather than expanded to their rendered output. This means some structured data that appears in rendered Wikipedia pages is not present in this dataset.
- One snapshot in time. This dataset represents a single snapshot of each language's dump. It does not track edit history or article revisions.
- Dump availability varies. Not all language editions have their dumps available at all times. Languages whose dumps fail to download are skipped and will be included in future updates.
## Related datasets

- `open-index/open-wikipedia-text` - Same articles as pure plain text with all formatting removed. Smaller files, better for embeddings and classification.
- `open-index/open-wikipedia` - Same articles in original MediaWiki wikitext markup. Use this if you need templates, tables, references, and other source elements.
## Thanks
The content in this dataset was written by millions of Wikipedia editors worldwide and is hosted by the Wikimedia Foundation. The raw data comes from the Wikimedia database dumps, which the Foundation makes freely available for download.
Wikipedia is one of humanity's greatest collaborative achievements. All credit for the content goes to the volunteer editors who write, review, and maintain it.
This dataset is an independent conversion and is not affiliated with or endorsed by the Wikimedia Foundation.
## Licensing
Wikipedia content is released under the Creative Commons Attribution-ShareAlike 4.0 International License (CC BY-SA 4.0). This dataset inherits that license. If you redistribute or build upon this data, you must give appropriate credit and share your contributions under the same license.
## Citation

```bibtex
@dataset{open_wikipedia_markdown,
  title     = {Open Wikipedia (Markdown)},
  author    = {Open Index},
  year      = {2026},
  url       = {https://huggingface.co/datasets/open-index/open-wikipedia-markdown},
  license   = {CC BY-SA 4.0},
  publisher = {Hugging Face}
}
```
Last updated: 2026-04-03