---
annotations_creators:
- expert-generated
language:
- en
license:
- apache-2.0
pretty_name: Watsonx Docs Document Type Classification
size_categories:
- n<1K
source_datasets:
- ibm-research/watsonxDocsQA
task_categories:
- text-classification
task_ids:
- multi-class-classification
---

# Watsonx Docs Document Type Classification
This dataset is a balanced binary document-level classification subset derived from `ibm-research/watsonxDocsQA`.
## Task
Classify IBM Watsonx documentation pages by their dominant user-facing purpose:
- `conceptual`: documents primarily used to understand or look up information.
- `how-to`: documents primarily used to complete a procedure or fix a problem.
## Splits
| Split | conceptual | how-to | Total |
|---|---|---|---|
| train | 140 | 140 | 280 |
| validation | 30 | 30 | 60 |
| test | 30 | 30 | 60 |
## Fields
- `doc_id`: original document ID from the source dataset.
- `url`: source documentation URL.
- `title`: documentation page title.
- `text`: model input text, constructed as `title + "\n" + <first 800 words of document>`. The title is preserved in full; the document body is truncated to keep inputs manageable for embedding-based classifiers (see the sketch after this list).
- `label`: string label, either `conceptual` or `how-to`.
- `label_id`: numeric label ID, where `conceptual = 0` and `how-to = 1`.
- `split`: dataset split.
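
The `text` field can be reproduced from a raw document with a few lines of Python. The sketch below is illustrative, not the exact curation code: `title` and `body` stand in for the source fields, and whitespace splitting is an assumption about how words were counted.

```python
def build_text(title: str, body: str, max_words: int = 800) -> str:
    # Keep the full title, then append at most `max_words`
    # whitespace-separated words of the document body.
    words = body.split()
    truncated_body = " ".join(words[:max_words])
    return title + "\n" + truncated_body
```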
## Usage
```python
from datasets import load_dataset

data_files = {
    "train": "train.csv",
    "validation": "validation.csv",
    "test": "test.csv",
}
dataset = load_dataset("csv", data_files=data_files)
```
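
Once loaded, each split behaves like a regular `datasets.Dataset`. For example, you can inspect a row and check the label balance against the Splits table (field names follow the schema above):

```python
from collections import Counter

# Peek at the first training example.
example = dataset["train"][0]
print(example["title"], example["label"], example["label_id"])

# Confirm the per-class counts reported in the Splits table.
print(Counter(dataset["train"]["label"]))  # Counter({'conceptual': 140, 'how-to': 140})
```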
## Curation Notes
IBM technical documentation has traditionally been structured around DITA (Darwin Information Typing Architecture), which classifies documents into four types: task, concept, reference, and troubleshooting. This dataset adapts that taxonomy into two classes: `conceptual` merges concept and reference (both primarily information-seeking); `how-to` merges task and troubleshooting (both action- or fix-oriented). The binary schema was chosen because troubleshooting was too rare to form a reliable standalone class, and reference and concept were difficult to separate consistently.
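
The merge amounts to a simple lookup from DITA type to dataset class, written here as an illustrative Python dict (the type names come from the taxonomy above):

```python
# Collapse the four DITA document types into the two dataset classes.
DITA_TO_LABEL = {
    "concept": "conceptual",
    "reference": "conceptual",
    "task": "how-to",
    "troubleshooting": "how-to",
}
```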
Annotation followed a semi-automatic process. Labelling rules were first defined based on IBM Writing Style guidelines, then applied by a heuristic script to generate candidate labels. Each candidate was assigned a confidence tier: `title_high` (strong title signal), `body_medium` (body-text signal only, no strong title match), or `default_low` (no strong signal in either title or body). All tiers except the `body_medium` how-to rows were manually reviewed. The `body_medium` how-to subset (333 rows) was left unreviewed because the remaining manually checked data was sufficient to construct a balanced 400-example dataset; retaining unreviewed borderline rows would have introduced noise without benefit.
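
To make the tier logic concrete, here is a hypothetical sketch of how such a heuristic could assign labels and tiers. The actual rule set is not published with this card; the regex patterns below are invented examples, and the sketch simplifies by defaulting unmatched documents to `conceptual`.

```python
import re

# Hypothetical patterns suggestive of procedural ("how-to") pages.
HOWTO_TITLE = re.compile(r"^(installing|configuring|creating|troubleshooting|how to)\b", re.I)
HOWTO_BODY = re.compile(r"\b(step \d+|click|run the following)\b", re.I)

def candidate_label(title: str, body: str) -> tuple[str, str]:
    """Return (label, confidence_tier) for one document."""
    if HOWTO_TITLE.search(title):
        return "how-to", "title_high"   # strong title signal
    if HOWTO_BODY.search(body):
        return "how-to", "body_medium"  # body-text signal only
    return "conceptual", "default_low"  # no strong signal in title or body
```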
Rows marked `X` during manual review were removed because the source document was incomplete or too ambiguous to label reliably. Rows marked `?` were interpreted as belonging to the opposite binary class.
The final subset contains 400 examples, sampled with random seed 42 after
manual correction and filtering.
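
A balanced draw like this is a one-liner with pandas. The sketch below assumes the cleaned candidates live in a DataFrame with a `label` column (`candidates` is an illustrative name, not from the curation code); 200 rows per class yields the 400-example total.

```python
import pandas as pd

# Sample 200 rows per class (400 total) reproducibly with seed 42.
balanced = candidates.groupby("label").sample(n=200, random_state=42)
```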
## License

This dataset is derived from `ibm-research/watsonxDocsQA`, which is licensed under Apache 2.0. This dataset inherits the same license.