---
annotations_creators:
- expert-generated
language:
- en
license:
- apache-2.0
pretty_name: Watsonx Docs Document Type Classification
size_categories:
- n<1K
source_datasets:
- ibm-research/watsonxDocsQA
task_categories:
- text-classification
task_ids:
- multi-class-classification
---

# Watsonx Docs Document Type Classification

This dataset is a balanced binary document-level classification subset derived
from `ibm-research/watsonxDocsQA`.

## Task

Classify IBM Watsonx documentation pages by their dominant user-facing purpose:

- `conceptual`: documents primarily used to understand or look up information.
- `how-to`: documents primarily used to complete a procedure or fix a problem.

## Splits

| Split | conceptual | how-to | Total |
|---|---:|---:|---:|
| train | 140 | 140 | 280 |
| validation | 30 | 30 | 60 |
| test | 30 | 30 | 60 |

## Fields

- `doc_id`: original document ID from the source dataset.
- `url`: source documentation URL.
- `title`: documentation page title.
- `text`: model input text, constructed as `title + "\n" + first 800 words of document`. The title is preserved in full; the document body is truncated to keep inputs manageable for embedding-based classifiers.
- `label`: string label, either `conceptual` or `how-to`.
- `label_id`: numeric label ID, where `conceptual = 0` and `how-to = 1`.
- `split`: dataset split.
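The preprocessing script is not included in this card; the following is a minimal sketch of how the `text` field is described as being constructed (the function name and signature are hypothetical):

```python
def build_text(title: str, body: str, max_words: int = 800) -> str:
    """Build the `text` field: full title, a newline, then the first 800 words of the body."""
    words = body.split()
    return title + "\n" + " ".join(words[:max_words])
```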

## Usage

```python
from datasets import load_dataset

# Assumes the split CSVs have been downloaded locally from this repository.
data_files = {
    "train": "train.csv",
    "validation": "validation.csv",
    "test": "test.csv",
}

dataset = load_dataset("csv", data_files=data_files)
```

## Curation Notes

IBM technical documentation has traditionally been structured around DITA
(Darwin Information Typing Architecture), which classifies documents into four
types: `task`, `concept`, `reference`, and `troubleshooting`. This dataset
adapts that taxonomy into two classes: `conceptual` merges `concept` and
`reference` (both primarily information-seeking); `how-to` merges `task` and
`troubleshooting` (both action- or fix-oriented). The binary schema was chosen
because `troubleshooting` was too rare to form a reliable standalone class, and
`reference` and `concept` were difficult to separate consistently.
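The merge described above can be written as a simple mapping (a sketch; the dictionary name is an assumption, not taken from the curation code):

```python
# Hypothetical mapping from the four DITA types to the two binary classes.
DITA_TO_BINARY = {
    "concept": "conceptual",
    "reference": "conceptual",
    "task": "how-to",
    "troubleshooting": "how-to",
}
```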

Annotation followed a semi-automatic process. Labelling rules were first defined
based on IBM Writing Style guidelines, then applied by a heuristic script to
generate candidate labels. Each candidate was assigned a confidence tier:
`title_high` (strong title signal), `body_medium` (body-text signal only, no
strong title match), or `default_low` (no strong signal in either title or
body). All tiers except `body_medium` how-to rows were manually reviewed. The
`body_medium` how-to subset (333 rows) was left unreviewed because the remaining
manually checked data was sufficient to construct a balanced 400-example
dataset; retaining unreviewed borderline rows would have introduced noise
without benefit.

Rows marked `X` during manual review were removed because the source document
was incomplete or too ambiguous to label reliably. Rows marked `?` were
reassigned to the opposite of their heuristic candidate label.

The final subset contains 400 examples, sampled with random seed `42` after
manual correction and filtering.
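A balanced, seeded sample like this could be drawn as follows (an illustrative sketch, not the exact curation script; the function name is an assumption):

```python
import random

def balanced_sample(rows, per_class, seed=42):
    """Draw `per_class` rows for each label with a fixed random seed."""
    rng = random.Random(seed)
    sampled = []
    for label in ("conceptual", "how-to"):
        pool = [r for r in rows if r["label"] == label]
        sampled.extend(rng.sample(pool, per_class))
    return sampled
```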

## License

This dataset is derived from [ibm-research/watsonxDocsQA](https://huggingface.co/datasets/ibm-research/watsonxDocsQA), which is licensed under Apache 2.0. This dataset inherits the same license.