
GoodWiki Dataset

GoodWiki is a 179-million-token dataset of English Wikipedia articles that have been marked as Good or Featured by Wikipedia editors, collected on September 4, 2023. Unlike many other public Wikipedia datasets, it provides these articles in GitHub-flavored Markdown format, preserving layout features like lists, code blocks, math, and block quotes. Articles are accompanied by a short description of the page as well as any associated categories.

Thanks to a careful conversion process from wikicode, the markup language used by Wikipedia, articles in GoodWiki are generally faithful reproductions of the corresponding original Wikipedia pages, minus references, files, infoboxes, and tables. Curated template transclusion and HTML tag handling have minimized instances where entire words and phrases are missing mid-sentence.

The hope is that this more comprehensive data will play a small role in improving open-source NLP efforts in language modeling, summarization, and instruction tuning.

GoodWiki is more than 1.5 times larger (when compared using the same tokenizer) than the widely used WikiText-103 dataset by Merity et al., even after excluding article descriptions. WikiText, which is likewise limited to articles marked as Good or Featured, was the inspiration for GoodWiki.

The code used to build this dataset can be found on GitHub.

Composition

The dataset consists of 44,754 rows in a 482.7 MB snappy-compressed Parquet file. Each row contains the following fields:

  • pageid (int64): The Wikipedia ID of the article.
  • title (string): The title of the article.
  • revid (int64): The Wikipedia ID of the revision used.
  • description (string | null): Plaintext short description/summary of the article written by Wikipedia contributors.
  • categories (list[string]): The article's Wikipedia categories.
  • markdown (string): The content of the article in GitHub-flavored Markdown format.

Here's an example row in JSON format:

{
    "pageid": 40961074,
    "title": "Attarsiya",
    "revid": 1164804042,
    "description": "Military leader of Ahhiya",
    "categories": [
        "Ancient Anatolia",
        "Greek military leaders",
        "Mycenaean Greeks"
    ],
    "markdown": "Attarsiya was a 15th–14th century BCE military leader of Ahhiya. In the Hittite archives of circa 1400 BCE, he is described as a \"man of Ahhiya\", a country identified with the Achaeans and Mycenaean Greece. The campaigns of Attarsiya, as well as his conflict with the Hittite vassal, Madduwatta, represent the first recorded Mycenaean Greek military activity on the Anatolian mainland, as well as the first conflict between Achaeans and Hittites...",
}
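
For quick experimentation, the rows can be loaded directly with the Hugging Face datasets library. A minimal sketch, assuming the dataset's default train split:

from datasets import load_dataset

dataset = load_dataset("euirim/goodwiki", split="train")

row = dataset[0]
print(row["title"])
print(row["description"])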

The markdown field contains a total of 179,198,101 tokens, tokenized using Hugging Face's pretrained facebook/opt-350m tokenizer. It also contains 811,791,686 characters and 132,691,055 words.
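
Continuing the loading sketch above, a per-article token count can be reproduced roughly as follows (the tokenizer may prepend special tokens, so exact totals can differ slightly):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("facebook/opt-350m")

# Count tokens in one article's markdown field.
markdown = dataset[0]["markdown"]
num_tokens = len(tokenizer(markdown)["input_ids"])
print(num_tokens)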

Because Markdown syntax is fairly minimal, GoodWiki can also be used as a plaintext dataset despite the formatting.

Languages

While articles are taken exclusively from English Wikipedia, they sometimes contain small snippets from other languages as well as recurring use of the International Phonetic Alphabet in article ledes. Some articles include code blocks in pseudocode as well as in popular programming languages.

Markdown Details

GoodWiki articles follow the GitHub-flavored Markdown spec, including for blockquotes, code blocks, and lists. Bold, italic, underline, and strikethrough formatting has been removed, as it introduces a lot of noise, especially in math/computing articles.

Some markdown details are worth highlighting:

Math

Content in math templates and XML tags is enclosed in $ delimiters in the markdown. For example,

<math>O(n^2)</math>

becomes: $O(n^2)$.

Super/Subscript

Superscripts and subscripts are denoted using <sup></sup> and <sub></sub> tags respectively.
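
For example, the chemical formula for water appears as H<sub>2</sub>O, and a squared term as x<sup>2</sup>.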

$ and #

Dollar signs and hashes are escaped with \ to avoid interfering with math and heading syntax.
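
For example, a literal $100 in article text appears in the markdown as \$100, and a literal # as \#.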

Methodology

On the evening of September 4, 2023 PT, we downloaded the wikicode of articles associated with the Good and Featured categories in the main namespace (ns=0) on Wikipedia via the Query API.

After some preprocessing, including removing comments, applying magic words, and removing unrecognized or unnecessary template tags, we sent the resulting code to Wikipedia's Expandtemplates API. This endpoint transcludes template tags, turning them into HTML and plaintext. We chose the templates to transclude by counting all the templates used across the dataset and selecting the ones that are not rare, not used for citations, and not used for asides like infoboxes and tables.
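
A minimal sketch of such a call, assuming the requests library and using a stand-in snippet of wikicode:

import requests

# Stand-in for an article's preprocessed wikicode.
wikicode = "{{Convert|100|km|mi}}"

response = requests.get(
    "https://en.wikipedia.org/w/api.php",
    params={
        "action": "expandtemplates",
        "prop": "wikitext",
        "text": wikicode,
        "format": "json",
    },
)
expanded = response.json()["expandtemplates"]["wikitext"]
print(expanded)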

The Expandtemplates output is then postprocessed. During this phase, we remove sections associated with references (e.g. Sources Cited), extract text from wikilinks and external links, delete media links, and handle HTML tags. The postprocessed output is then converted to GitHub-flavored Markdown using Pandoc. We also discard articles detected by Pandoc to have corrupt wikicode (n=125).
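
The Pandoc step can be sketched as a subprocess call; the mediawiki reader and gfm writer are standard Pandoc options, while the input string here is a stand-in:

import subprocess

# Stand-in for an article's postprocessed output.
postprocessed = "== History ==\nMorpeth is a market town."

result = subprocess.run(
    ["pandoc", "--from=mediawiki", "--to=gfm", "--wrap=none"],
    input=postprocessed,
    capture_output=True,
    text=True,
    check=True,  # a non-zero exit flags corrupt wikicode
)
markdown = result.stdout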

The markdown output is then cleaned using regular expressions to remove excessive spacing, empty list items, and unnecessary escaping, and to resolve other problems with Pandoc's conversion. We also normalize the markdown output's Unicode to a composed form (NFKC).
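
The normalization step is a one-liner with Python's standard library (markdown here being the cleaned Pandoc output):

import unicodedata

normalized = unicodedata.normalize("NFKC", markdown)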

Alternatives Considered

Converting End-To-End Using Pandoc

While Pandoc can, in theory, convert raw wikicode to markdown, it is not a complete wikicode parser and therefore often produces errant output without preprocessing. Furthermore, direct conversion of raw wikicode would lose a lot of the content attached to wikicode templates, as Pandoc cannot perform transclusion.

Using TextExtracts API

Wikipedia has a TextExtracts API that directly returns limited HTML or plaintext for a page given its title. In practice, we found that the HTML generated by this endpoint often contains malformed or incomplete markup with injected references that are difficult to parse. The plaintext output was also often poor, including reference artifacts and missing content.

These issues, along with other documented caveats of the API, were the reasons this approach was discarded.

Transcluding All Templates

During preprocessing, we eliminate templates outside of a chosen subset. We did this because we found that transcluding all templates injected a lot of noise into the output, including janky HTML, styles, references, and unnecessary content. This noise made parsing difficult and error-prone, resulting in poor-quality markdown littered with artifacts similar to those visible in the TextExtracts output.

Transcluding a subset largely solved these issues while still preserving as much content as possible.

Limitations

  • Chemical equations sometimes include formatting issues like unnecessary line breaks. These equations, however, are rare.
  • In articles about ancient civilizations and languages, rare Unicode characters are occasionally included in the markdown. It might be worth removing these characters during the tokenization process.
  • In rare cases, book/article names may be missing from the markdown as they are considered citations in the wikicode.
  • Inflation data is missing from some articles. These articles use the Inflation template tag to include this information, which works poorly with the Expandtemplates API.
  • Articles may feature empty sections due to table/box removal.
  • Some code blocks are denoted using indents instead of formal code blocks. This is due to the original wikicode not denoting them as such.
  • The subset of templates allowed for transclusion will probably need to be updated for future data dumps, as the list of templates used on Wikipedia is constantly evolving.

Future Work

Time permitting, we hope to apply this careful conversion/generation process to all of English Wikipedia, which will require our conversion script to be much faster and better parallelized. We also hope to extract other information from pages, like entries in infoboxes, that could be useful for question answering and instruction tuning applications.

If you're interested in helping out, please reach out!

License

The dataset and accompanying code are licensed under an MIT license. Pandoc, which must be downloaded separately, is GPL-licensed.

While this project is permissively licensed, we hope that you contribute any improvements you make to this dataset.

Citation

If you use the GoodWiki Dataset in your research or projects, please cite it as follows:

@misc{GoodWiki,
  title = {GoodWiki Dataset},
  author = {Choi, Euirim},
  howpublished = {\url{https://www.github.com/euirim/goodwiki}},
  month = {September},
  year = {2023}
}

Feedback and Contributions

Contributions via pull requests and discussions are welcome. If you don't know how you could help improve this project, please look at the Future Work section.

Was this dataset useful for your work? Please let us know. We'd love to feature your project :)
