---
license: cdla-permissive-2.0
task_categories:
  - question-answering
language:
  - en
tags:
  - Business Process Management
  - Causal
  - NLP
  - Reasoning
pretty_name: BP^C
size_categories:
  - 1K<n<10K
---

BPC: A Benchmark Dataset for Causal Business Process Reasoning

Dataset Card for BPC

Table of Contents

  • Table of Contents
  • Dataset Description
    • Dataset Summary
    • Supported Tasks
    • Languages

Dataset Description

Dataset Summary

Abstract. Large Language Models (LLMs) are increasingly used to boost organizational efficiency and automate tasks. Although not originally designed for complex cognitive processes, recent efforts have extended LLMs to activities such as reasoning, planning, and decision-making. In business processes, such abilities could be invaluable for leveraging the massive corpora on which LLMs have been trained to gain a deep understanding of such processes. In pursuit of this goal, we present the BPC dataset, a newly developed set of process-aware question-and-answer pairs that can be used to assess the ability of LLMs to reason about causal and process perspectives of business operations. We refer to this view as Causally-augmented Business Processes (BP^C). The benchmark comprises a set of domain-specific BP^C-related situations, a set of questions about these situations, and a set of ground-truth answers to these questions. Reasoning about BP^C is of crucial importance for process interventions and process improvement. The benchmark can be used in one of two modalities: testing the performance of any target LLM, or training an LLM to advance its capability to reason about BP^C.
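The testing modality described above (situations, questions, and ground-truth answers) can be sketched as a simple exact-match evaluation loop. This is an illustrative sketch only: the column names (`situation`, `question`, `answer`) and the example rows are assumptions for demonstration, not the actual schema or contents of the BPC CSV files.

```python
import pandas as pd

# Hypothetical rows mimicking a situation/question/answer layout.
# Column names and contents are assumed, not taken from the dataset.
df = pd.DataFrame(
    {
        "situation": [
            "A loan application is rejected whenever the credit check fails.",
            "An order is shipped only after payment is confirmed.",
        ],
        "question": [
            "Does a failed credit check cause the rejection?",
            "Can shipping occur before payment confirmation?",
        ],
        "answer": ["Yes", "No"],
    }
)

def evaluate(predict, frame: pd.DataFrame) -> float:
    """Exact-match accuracy of `predict` over the Q&A pairs."""
    correct = sum(
        predict(row.situation, row.question) == row.answer
        for row in frame.itertuples()
    )
    return correct / len(frame)

# A trivial stand-in "model" that always answers "Yes":
# it matches 1 of the 2 ground-truth answers.
accuracy = evaluate(lambda s, q: "Yes", df)
print(accuracy)  # 0.5
```

In practice, `predict` would wrap a call to the target LLM, and the frame would be loaded from the benchmark's CSV files rather than constructed inline.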

Supported Tasks

  • Question Answering
  • Causal and Process Reasoning
  • LLM tuning and testing

Languages

  • English