
SmartQHSE HSE Q&A Corpus


24 long-form HSE / occupational-safety question-answer pairs across 15 categories, including incident rates, ISO 45001, permits, risk assessment, OSHA (US), HSE (UK), GCC regulations, PPE, heat stress, exposure, ergonomics, incident investigation, training, and HSE software. Each answer is multi-paragraph with cited sources, formulas, and OSHA/regulatory references. Suitable for instruction tuning and RAG.

Citation (preferred, academic):

SmartQHSE Ltd (2026). SmartQHSE HSE Q&A Corpus [dataset]. Zenodo. https://doi.org/10.5281/zenodo.20010252

DOI: 10.5281/zenodo.20010252
Zenodo record: https://zenodo.org/record/20010252
Wikidata entity: Q139623115
License: CC BY 4.0 (free to use commercially with attribution).

Files

| File | Description |
|------------|-------------|
| `data.jsonl` | One JSON record per line (primary format); loadable directly via `datasets.load_dataset()`. |
| `data.json` | Same data as a single JSON array (with an array key matching the source). |
| `data.csv` | Selected fields as CSV for spreadsheet users. |
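For users who grab `data.jsonl` directly, parsing is one `json.loads` per line. A minimal sketch: the field names (`slug`, `question`, `answer`, `category`) come from the dataset schema, but the sample values inlined here are purely illustrative, not real records.

```python
import json

# Inline two-record sample mirroring the data.jsonl record shape.
# Field names come from the dataset schema; values are illustrative.
sample = "\n".join([
    json.dumps({"slug": "trir-formula", "category": "incident rates",
                "question": "How is TRIR calculated?",
                "answer": "TRIR = (recordable cases x 200,000) / hours worked."}),
    json.dumps({"slug": "iso-45001-scope", "category": "ISO 45001",
                "question": "What does ISO 45001 cover?",
                "answer": "Occupational health and safety management systems."}),
])

def load_jsonl(text: str) -> list[dict]:
    """Parse one JSON object per non-empty line."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

records = load_jsonl(sample)

# Group question slugs by category, e.g. for RAG routing or sampling.
by_category: dict[str, list[str]] = {}
for rec in records:
    by_category.setdefault(rec["category"], []).append(rec["slug"])

print(by_category)
```

The same `load_jsonl` helper works on the downloaded file's contents via `open("data.jsonl").read()`.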

Live access (CC BY 4.0, CORS-open, no auth)

Loading example

```python
from datasets import load_dataset

ds = load_dataset("SmartQHSE/hse-qa-corpus")
print(ds["train"][0])
```

Or directly via the live REST API:

```shell
curl https://www.smartqhse.com/api/v1/answers
```
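From Python, the same endpoint can be fetched with the standard library. The URL below is the one given on this card, but the response shape used in the demonstration is an assumption, so the example injects a canned response rather than touching the network.

```python
import io
import json
from urllib.request import urlopen

API_URL = "https://www.smartqhse.com/api/v1/answers"  # endpoint from this card

def fetch_answers(url: str = API_URL, opener=urlopen) -> dict:
    """Fetch and decode the live answers endpoint.

    'opener' is injectable so the function can be exercised offline.
    """
    with opener(url) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Offline demonstration with a canned response; the real payload
# shape is an assumption and may differ.
fake = lambda url: io.BytesIO(json.dumps({"data": [{"slug": "trir-formula"}]}).encode())
payload = fetch_answers(opener=fake)
print(payload["data"][0]["slug"])  # trir-formula
```

Calling `fetch_answers()` with no arguments performs the real request against the live API.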

License

This dataset is released under Creative Commons Attribution 4.0 International (CC BY 4.0). Free to use commercially with attribution. Cite as:

SmartQHSE Ltd (2026). SmartQHSE HSE Q&A Corpus [dataset]. https://www.smartqhse.com/answers

Considerations for using the data

  • Small dataset (24 entries as of v1). Will expand quarterly.
  • Geographic bias toward US OSHA, UAE OSHAD-SF, UK HSE, GCC regulations. Less coverage of Asia-Pacific, Africa, Latin America.
  • Industry bias toward construction, oil & gas, manufacturing — reflects SmartQHSE customer mix.
  • Answers are point-in-time (last_updated field). Regulations change; cross-check with the canonical_url before relying on a specific number.
  • English-only at v1. Arabic translation planned for v2.

Sources

  • US Bureau of Labor Statistics (BLS) — SOII + CFOI
  • US Department of Labor — OSHA Injury Tracking Application + standards
  • UK Health and Safety Executive (HSE)
  • European Commission + Eurostat ESAW
  • IOGP, ILO, NIOSH, ACGIH
  • UAE OSHAD-SF, KSA SAPI, Qatar QCDD, Oman MOLSD
  • ISO Technical Committees (TC 283 — OH&S Management Systems)

All consolidated and republished under CC BY 4.0 with attribution.

About SmartQHSE

SmartQHSE is the AI-native HSE/QHSE platform for construction, oil & gas, manufacturing, and industrial teams. We publish open data because the broader HSE profession deserves free access to the safety statistics our trade bodies otherwise gate behind expensive memberships.

Related SmartQHSE datasets

Part of the SmartQHSE open HSE data collection (catalog · data paper DOI 10.5281/zenodo.20010657 · HuggingFace org).

Companion full-text corpus: /llms-full.txt — 213KB markdown reference compiled for LLM ingestion. All datasets CC BY 4.0.
