
GlossAPI Greek Nanochat Pretraining Dataset

This dataset is a unified Greek pretraining corpus assembled from the local GlossAPI collection plus four selected external sources:

  • HuggingFaceFW/finewiki (el)
  • HuggingFaceFW/finepdfs-edu (ell_Grek)
  • AI-team-UoA/greek_legal_code
  • OPUS/OpenSubtitles-el-v2018 (monolingual Greek subtitles)

The default dataset view is the broad canonical corpus under data/*.parquet. This repo also carries builder-facing extras such as dedup metadata, planning manifests, and reconstruction scripts for nanochat-style subset construction.

Scope

  • Total rows: 810,638
  • Included source datasets: 19
  • Explicitly excluded from the canonical corpus: 95k_deigma_ellinikis

Dedup Metadata

This repo also carries builder-facing dedup metadata under:

  • dedup_metadata/latest.json
  • dedup_metadata/full_publish_aws_strip_20260328T101312Z/

These artifacts are optional downstream builder inputs. They do not mutate the default dataset rows in data/*.parquet.

The intended builder flow is:

  1. load base rows from data/*.parquet
  2. resolve the current dedup bundle from dedup_metadata/latest.json
  3. apply builder-time dedup policy such as annotate, drop_intra, or drop_intra_and_inter
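The policy step can be sketched as follows. The policy names come from this card, but the bundle layout is an assumption for illustration: a JSON object with "intra_duplicates" and "inter_duplicates" lists of source_doc_id values. The real bundle under dedup_metadata/ may differ.

```python
import json
from pathlib import Path


def apply_dedup_policy(rows, bundle_path, policy="annotate"):
    """Apply a builder-time dedup policy to canonical rows.

    Assumed bundle layout (illustrative only): {"intra_duplicates": [...],
    "inter_duplicates": [...]}, each a list of source_doc_id values.
    """
    bundle = json.loads(Path(bundle_path).read_text(encoding="utf-8"))
    intra = set(bundle.get("intra_duplicates", []))
    inter = set(bundle.get("inter_duplicates", []))
    out = []
    for row in rows:
        doc_id = row["source_doc_id"]
        if policy == "annotate":
            # Keep every row, but flag duplicates for downstream mixing.
            row = {**row, "is_intra_dup": doc_id in intra, "is_inter_dup": doc_id in inter}
        elif policy == "drop_intra" and doc_id in intra:
            continue
        elif policy == "drop_intra_and_inter" and (doc_id in intra or doc_id in inter):
            continue
        out.append(row)
    return out
```

Annotate-style policies keep the canonical row count intact, which makes them the safest default when comparing training mixes.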

The configs block in this card keeps default loading scoped to data/*.parquet, so the extra dedup parquet files do not become part of the default dataset row view.

Canonical Columns

  • source_dataset
  • source_doc_id
  • text
  • title
  • author
  • source_metadata_json
  • is_historical_or_polytonic
  • contains_math
  • contains_latex
  • greek_percentage
  • latin_percentage
  • polytonic_ratio
  • table_ratio
  • greek_badness_score
  • mojibake_badness_score
  • needs_ocr
  • is_empty
  • filter
  • ocr_success
  • quality_method
  • reevaluated_at

Metadata Notes

  • title and author are the only normalized source metadata fields promoted to top-level canonical columns.
  • Pipeline-generated filenames and synthetic processing identifiers are excluded from source_metadata_json.
  • Some datasets have weak metadata or unresolved joins. Good text is still kept even when metadata are sparse.
  • contains_math and contains_latex are content-analysis flags, not source metadata.

Prepared-Source Notes

  • row_counts.csv and validation_summary.csv describe the broad canonical corpus under data/*.parquet.
  • prepare_manifest.json is a builder-oriented planning artifact with file-level text metrics for a prepared/source-ready subset view.
  • Because prepare_manifest.json is builder-oriented, its totals may differ from the broad-corpus totals reported in row_counts.csv.

Quality Notes

  • The canonical upload is intentionally broad. CLI-side filtering is expected for training subsets.
  • Stricter filtering of openarchives.gr rows (for example needs_ocr == false plus greek_badness_score < 25) is a downstream mix choice, not a canonical-upload exclusion.
  • greek_percentage is derived where needed, typically from existing Latin-percentage signals.
  • Rust badness scores are more reliable for modern Greek cleanliness than for older/polytonic/liturgical corpora.
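A minimal sketch of that downstream mix choice, assuming pandas and the canonical columns listed above. The thresholds come from this card; the helper name is hypothetical:

```python
import pandas as pd


def openarchives_strict(df: pd.DataFrame, max_badness: float = 25.0) -> pd.DataFrame:
    """Apply the openarchives.gr mix filter described above:
    keep only openarchives.gr rows that did not need OCR and score
    below the badness cap. Rows from other sources pass untouched.
    """
    is_oa = df["source_dataset"] == "openarchives.gr"
    keep = ~is_oa | ((~df["needs_ocr"]) & (df["greek_badness_score"] < max_badness))
    return df[keep].reset_index(drop=True)
```

Because badness scores are less reliable for polytonic and liturgical material, per-source thresholds like this are safer than one corpus-wide cutoff.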

Included Source Row Counts

| source_dataset | row_count | file_count |
|---|---:|---:|
| 1000_prwta_xronia_ellhnikhs | 1016 | 1 |
| AI-team-UoA/greek_legal_code | 47563 | 1 |
| Apothetirio_Kallipos | 4827 | 1 |
| Apothetirio_Pergamos | 15241 | 1 |
| Ekklisiastika_Keimena | 675 | 1 |
| Ellinika_Keimena_Project_Gutenberg | 214 | 1 |
| HuggingFaceFW/finepdfs-edu | 209039 | 1 |
| HuggingFaceFW/finewiki | 242517 | 1 |
| OPUS/OpenSubtitles-el-v2018 | 143441 | 1 |
| Sxolika_vivlia | 123 | 1 |
| Wikisource_Greek_texts | 5394 | 1 |
| dimodis_logotexnia | 11 | 1 |
| ellinika_dedomena_europaikou_koinovouliou | 28723 | 1 |
| eurlex-greek-legislation | 22694 | 1 |
| greek_phd | 37229 | 2 |
| klasikh_arx_ell_grammateia | 815 | 1 |
| openarchives.gr | 46000 | 1 |
| openbook_gr | 3719 | 1 |
| opengov.gr-diaboyleuseis | 1397 | 1 |

Intended Use

  • Greek pretraining experiments
  • Corpus filtering and mixture design
  • Text-only export for nanochat-style training pipelines
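Text-only export can be sketched as below. The document separator and the helper name are assumptions for illustration; nanochat-style pipelines may expect a different on-disk layout:

```python
from pathlib import Path


def export_text_only(rows, out_path, separator="\n\n"):
    """Write only the text column, one document after another,
    for pretraining pipelines that consume raw text files."""
    out = Path(out_path)
    with out.open("w", encoding="utf-8") as f:
        for i, row in enumerate(rows):
            if i:
                f.write(separator)
            f.write(row["text"])
    return out
```

In practice this would run after the dedup policy and quality filtering steps above, so only the chosen training subset is serialized.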

Experiment Reconstruction

The repo also includes builder-facing scripts under:

  • scripts/prepare_glossapi_greek_experiment_data.py
  • scripts/summarize_glossapi_greek_experiment_data.py
  • scripts/text_dedup.py
  • glossapi_corpus_cli/pipeline.py
  • rust_reevaluate_pdf_datasets.py

These scripts are convenience snapshots for reconstruction and inspection. The dedup bundle itself remains versioned under dedup_metadata/.

Limitations

  • This dataset mixes modern, historical, polytonic, legal, academic, educational, repository, and subtitle text.
  • Source licenses are mixed and remain governed by the upstream datasets and repositories.
  • Not every source has equally strong metadata quality or equally strong OCR/noise diagnostics.

Rebuild

See BUILD_REPLICATION.md in this repo for the broad-corpus staging flow and the builder-facing overlay notes.
