[
{
"chunk_id": "2b6271cd-dfc7-4438-b0d5-92f58cf60ab2",
"text": "Human-aided Multi-Entity Bayesian Networks Learning from\nRelational Data Cheol Young Park CPARKF@MASONLIVE.GMU.EDU\nKathryn Blackmond Laskey KLASKEY@GMU.EDU\nThe Sensor Fusion Lab & Center of Excellence in C4I\nGeorge Mason University, MS 4B5\nFairfax, VA 22030-4444 U.S.A.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 0,
"total_chunks": 115,
"char_count": 269,
"word_count": 35,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5772d899-09ef-42b4-bf49-c4db97749f23",
"text": "Abstract\nAn Artificial Intelligence (AI) system is an autonomous system which emulates human's mental\nand physical activities such as Observe, Orient, Decide, and Act, called the OODA process. An AI\nsystem performing the OODA process requires a semantically rich representation to handle a\ncomplex real world situation and ability to reason under uncertainty about the situation. MultiEntity Bayesian Networks (MEBNs) combines First-Order Logic with Bayesian Networks for\nrepresenting and reasoning about uncertainty in complex, knowledge-rich domains. MEBN goes\nbeyond standard Bayesian networks to enable reasoning about an unknown number of entities\ninteracting with each other in various types of relationships, a key requirement for the OODA\nprocess of an AI system. MEBN models have heretofore been constructed manually by a domain\nexpert. However, manual MEBN modeling is labor-intensive and insufficiently agile. To address\nthese problems, an efficient method is needed for MEBN modeling. One of the methods is to use\nmachine learning to learn a MEBN model in whole or in part from data. In the era of Big Data,\ndata-rich environments, characterized by uncertainty and complexity, have become ubiquitous. The larger the data sample is, the more accurate the results of the machine learning approach can\nbe. Therefore, machine learning has potential to improve the quality of MEBN models as well as\nthe effectiveness for MEBN modeling. In this research, we study a MEBN learning framework to\ndevelop a MEBN model from a combination of domain expert's knowledge and data. To evaluate\nthe MEBN learning framework, we conduct an experiment to compare the MEBN learning\nframework and the existing manual MEBN modeling in terms of development efficiency. 
Keywords: Bayesian Networks, Multi-Entity Bayesian Networks, Human-aided Machine\nLearning 1 Introduction\nAn Artificial Intelligence (AI) system is an autonomous system which emulates human's mental\nand physical activities such as the OODA process [Boyd, 1976][Boyd, 1987].",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 1,
"total_chunks": 115,
"char_count": 2029,
"word_count": 303,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b0a2ff59-c5df-4500-b8a9-71d07796c621",
"text": "The OODA process\ncontains four steps (Observe, Orient, Decide, and Act). In the Observe step, data or signal from\nevery mental/physical situation (e.g., states, activities, and goals) of external systems (e.g., an\nadversary) as well as internal systems (e.g., a command center or an allied army) in the world are\nobserved in some internal observing guidance or control, and observations derived from data or\nsignal are produced. In the Orient step, observations become information, formed as a model, by\nreasoning, analysis, and synthesis influenced from knowledge, belief, condition, etc. The Orient\nstep can produce plan and COA (Course of Actions). Hypotheses or alternatives for models can\nbe decided by an AI in the Decide step. In the Act step, all decided results are implemented, and\nreal activities and states can be operated and produced, respectively. The four steps continue until\nthe end of the life cycle of the AI system. An AI system performing the OODA process requires a semantically rich representation to\nhandle situations in a complex real and/or cyber world.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 2,
"total_chunks": 115,
"char_count": 1080,
"word_count": 172,
"chunking_strategy": "semantic"
},
{
"chunk_id": "44f49a4d-ab61-48cc-aa93-e6d52397cb3d",
"text": "Furthermore, the number of entities and\nthe relationships among them may be uncertain. For this reason, the AI system needs an\nexpressive formal language for representing and reasoning about uncertain, complex, and\ndynamic situations. Multi-Entity Bayesian Networks (MEBNs) [Laskey, 2008] combines FirstOrder Logic with Bayesian Networks (BNs) [Pearl, 1988] for representing and reasoning about\nuncertainty in complex, knowledge-rich domains. MEBN goes beyond standard Bayesian\nnetworks to enable reasoning about an unknown number of entities interacting with each other in\nvarious types of relationships, a key requirement for the AI system. MEBN has been applied to AI systems [Laskey et al., 2000][Wright et al., 2002][Costa et al.,\n2005][Suzic, 2005][Costa et al., 2012][Park et al., 2014][Golestan, 2016][Li et al., 2016][Park et\nal., 2017]. In a recent review of knowledge representation formalisms for AI, Golestan et al.\n[2016] recommended MEBN as having the most comprehensive coverage of features needed to\nrepresent complex situations. Patnaikuni et al., [2017] reviewed various applications using\nMEBN. In previous applications of MEBN to the AI system, the MEBN model or MTheory was\nconstructed manually by a domain expert using a MEBN modeling process such as Uncertainty\nModeling Process for Semantic Technology (UMP-ST) [Carvalho et al., 2016]. Manual MEBN\nmodeling is a labor-intensive and insufficiently agile process. Greater automation through\nmachine learning may save labor and enhance agility. For this reason, Park et al. [2016]\nintroduced a process model called Human-aided Multi-Entity Bayesian Networks Learning for\nPredictive Situation Awareness by combining domain expertise with data. The process model was\nfocused on the predictive situation awareness (PSAW) domain. However, the process model is\nnot necessary to be only applied to the PSAW domain.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 3,
"total_chunks": 115,
"char_count": 1880,
"word_count": 273,
"chunking_strategy": "semantic"
},
{
"chunk_id": "9ec191bf-5718-4834-95e0-ffa56618dd5b",
"text": "This paper defines a general process\nmodel for Human-aided Multi-Entity Bayesian Networks Learning1 called HML. HML specifies\nfour steps with guidelines about how to associate with (1) domain knowledge, (2) database model,\nand (3) MEBN learning. Thus, the general process model is capable of generalization to reuse a\nvariety of domains to develop a domain specific MEBN model (e.g., predictive situation\nawareness, planning, natural language processing, and system modeling). (1) Domain knowledge\ncan be specified by a reference model which is an abstract framework to which a developer refers\nin order to develop a specific model. Such a reference model can support the design of a MEBN\nmodel in the certain domain and improve the quality of the MEBN model. (2) A database model\ncan support to the design of a MEBN model for automation, if there are common elements\nbetween the database model and MEBN model. For example, Relational Model (RM), which is a\ndatabase model based on first-order predicate logic [Codd, 1969; Codd, 1970] and the most\nwidely used data model in the world, represent entities and attributes. Such entities and attributes\nin RM can be mapped to entities and random variables in MEBN, respectively. Thus, common\nelements between a database model and MEBN can be used to automated conversion. In this\nresearch, we introduce the use of RM as the database model for MEBN learning. (3) MEBN\nlearning is to learn an optimal MEBN model which fits well an observed datasets in database\nmodels. MEBN learning can be classified into two types: One is MEBN structure learning (e.g.,\nfinding optimal structures of MEBN) and another is MEBN parameter learning (e.g., finding an\noptimal set of parameters for local distributions of random variables in MEBN). In this research,\nMEBN parameter learning is introduced. 
Overall, HML contains three supportive methodologies:\n(1) a domain reference model (e.g., a reference model for predictive situation awareness, planning,\nnatural language processing, or system modeling), (2) a mapping between MEBN and a database\nmodel (e.g., RM), and (3) MEBN learning (e.g., a conditional linear Gaussian parameter learning 1 This paper is an extension of the conference paper, [Park et al., 2016]. for MEBN) to develop a MEBN model efficiently and effectively. In this research, we conduct an\nexperiment to compare Human-aided Multi-Entity Bayesian Networks Learning (HML) and the\nexisting manual MEBN modeling in terms of development efficiency. Section 2 provides background information about MEBN and an existing MEBN modeling\nprocess.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 4,
"total_chunks": 115,
"char_count": 2586,
"word_count": 406,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c0bbeb97-5140-466d-b3fc-53f206eb55df",
"text": "In Section 3, a relational database, which is an illustrative database model used to\nexplain HML, is introduced. In Section 4, HML is presented with an illustrative example. In\nSection 5, an experiment comparing between HML and the existing MEBN modeling process is\nintroduced. Finally, conclusions are presented and future research directions are discussed.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 5,
"total_chunks": 115,
"char_count": 358,
"word_count": 54,
"chunking_strategy": "semantic"
},
{
"chunk_id": "45388c23-ed2e-4a19-a49b-81d0daf9162f",
"text": "2 Background\nThis section provides background information about Multi-Entity Bayesian Networks (MEBNs),\na script form of MEBN, and Uncertainty Modeling Process for Semantic Technology (UMP-ST). In Section 2.1, MEBN as a representation formalism is presented with some definitions and an\nexample for MEBN. In Section 2.2, a simple script form of MEBN is introduced. HML in this\nresearch is a modification of UMP-ST, so UMP-ST is introduced in Section 2.3.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 6,
"total_chunks": 115,
"char_count": 454,
"word_count": 70,
"chunking_strategy": "semantic"
},
{
"chunk_id": "83686110-a242-4090-bafd-0b5ce838ac6a",
"text": "2.1 Multi-Entity Bayesian Network\nIn this section, we describe MEBN and a graphical representation for MEBN. Details can be\nfound in [Laskey, 2008]. The following definitions are taken from [Laskey, 2008]. MEBN allows\ncompact representation of repeated structure in a joint distribution on a set of random variables. In MEBN, random variables are defined as templates than can be repeatedly instantiated to\nconstruct probabilistic models with repeated structure. MEBN represents domain knowledge\nusing an MTheory, which consists of a collection of MFrags (see Fig. 1). An MFrag is a fragment\nof a graphical model that is a template for probabilistic relationships among instances of its\nrandom variables. Random variables (RVs) may contain ordinary variables, which can be\ninstantiated for different domain entities.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 7,
"total_chunks": 115,
"char_count": 816,
"word_count": 122,
"chunking_strategy": "semantic"
},
{
"chunk_id": "35b1eaa1-95b1-4224-b5d5-d7bd103a9fa2",
"text": "We can think of an MFrag as a class which can generate\ninstances of BN fragments. These can then be assembled into a Bayesian network, called a\nsituation-specific Bayesian Network (SSBN), using an SSBN algorithm [Laskey, 2008]. In other\nwords, a given MTheory can be used to construct many different SSBNs for different situations. Fig. 1 Threat Assessment MTheory or MEBN Model To understand how this works, consider Fig. 1, which shows an MTheory called the Threat",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 8,
"total_chunks": 115,
"char_count": 466,
"word_count": 77,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c9e9c711-c8fa-4eb4-a64b-a7b1908730b0",
"text": "This MTheory contains six MFrags: Vehicle, MTI_Condition, Context,\nSituation, Speed, and Speed_Report. An MFrag (e.g., Fig. 2) may contain three types of random\nvariables: context RVs, denoted by green pentagons, resident RVs, denoted by yellow ovals, and\ninput RVs, denoted by gray trapezoids. Each MFrag defines local probability distributions for its\ninput RVs. These distributions may depend on the input RVs, whose distributions are defined in\nother MFrags. Context RVs express conditions that must be satisfied for the distributions defined\nin the MFrag to apply. Specifically, Fig. 2 shows the Situation MFrag (from the Threat Assessment MTheory) used for\nan illustrative example of an MFrag. The Situation MFrag represents probabilistic knowledge of\nhow the threat level of a region at a time is measured depending on the vehicle type of detected\nobjects. For example, if in a region there are many tracked vehicles (e.g., Tanks), the threat level\nof the region will be high.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 9,
"total_chunks": 115,
"char_count": 983,
"word_count": 154,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d84cfc74-ca2d-424f-bdcc-04a4ba25ad65",
"text": "An MFrag consists of a set of resident nodes, a set of context nodes, a\nset of input nodes, an acyclic directed graph for the nodes, and a set of class local distributions\n(CLD) for the nodes. The context nodes (i.e., isA(v, VEHICLE), isA(rgn, REGION), isA(t, TIME),\nand rgn = Location(v, t)) for this MFrag (shown as pentagons in the figure) show that this MFrag\napplies when a vehicle entity is substituted for the ordinary variable v, a region entity is\nsubstituted for the ordinary variable rgn, a time entity is substituted for the ordinary variable t,\nand a vehicle v is located in region rgn at time t. The context node rgn = Location(v, t) constrains\nthe values of v, rgn, and t from the possible instances of vehicle, region, and time, respectively. For example, suppose v1 and v2 are vehicles and r1 is a region in which the only v1 is located at\ntime t1. The context node rgn = Location(v, t) will allow only an instance of (v1, r1, t1) to be\nselected, but not (v2, r1, t1), because r1 is not the location of v2 at t1. Next, we see the input node\nVehicleType(v), depicted as a trapezoid. Input nodes are nodes whose distribution is defined in\nanother MFrag. For example, a resident node VehicleType(v) is found in the MFrag Vehicle from\nthe top left in Fig. 1. Fig. 2 Situation MFrag In Fig. 2, the node ThreatLevel(rgn, t) is a resident node, which means its distribution is defined\nin the MFrag of the figure. Like the graph of a common BN, the fragment graph shows\nprobabilistic dependencies. CLD 2.1 in the script below shows that a class local distribution for\nThreatLevel(rgn, t) describes its probability distribution as a function of the input nodes given the\ninstances that satisfy the context nodes. The class local distribution (CLD) 𝐿𝐿𝐢𝐢 can be used to\nproduce an instance local distribution (ILD) 𝐿𝐿𝐼𝐼 in a SSBN. Note that in Subsection 4.3.2, these\nCLD and ILD are defined formally. In this Subsection, we introduce CLD with a simple",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 10,
"total_chunks": 115,
"char_count": 1958,
"word_count": 347,
"chunking_strategy": "semantic"
},
{
"chunk_id": "cece55bb-9d64-42ca-8518-2ff5ae5d947b",
"text": "illustrative example. The class local distribution of ThreatLevel(rgn, t), which depends on the\ntype of vehicle, can be expressed as CLD 2.1. The CLD is defined in a language called Local\nProbability Description Language (LPDL).",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 11,
"total_chunks": 115,
"char_count": 228,
"word_count": 35,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5790b790-1237-41e5-b2f7-d279ec68f003",
"text": "In our example, the probabilities of the states, High\nand Low, of ThreatLevel(rgn, t) are defined as a function of the values, High and Low, of\ninstances rgn = Location(v, t) of the parent nodes that satisfy the context constraints. For the high\nstate in the first if-scope in CLD 2.1, the probability value is assigned by the function described\nby \"1 – 1 / CARDINALITY(v)\". The CARDINALITY function returns the number of instances\nof v satisfying the if-condition. For example, in CLD 2.1, if the situation involves three vehicles\nand two of them are tracked, then the CARDINALITY function will return 2. We see that as the\nnumber of tracked vehicles becomes very large, the function, \"1 – 1 / CARDINALITY(v)\", will\ntend to 1. This means the threat level of the region will be very high.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 12,
"total_chunks": 115,
"char_count": 788,
"word_count": 138,
"chunking_strategy": "semantic"
},
{
"chunk_id": "db1329a1-2fc4-420b-ae11-ef8e38005218",
"text": "CLD 2.1: The class local distribution for the resident node ThreatLevel(rgn, t)\n1 if some v have (VehicleType = Tracked) [\n2 High = 1 – 1 / CARDINALITY(v),\n3 Low = 1 – High\n4 ] else [\n5 High = 0,\n6 Low = 1\n7 ] Alternatively, we might model the resident node ThreatLevel(rgn, t) as a continuous random\nvariable. For a continuous resident node, the class local distribution is defined by a continuous\nprobability density function. The class local distribution of the continuous resident node (see\nCLD 2.2) can be described by LPDL, also. CLD 2.2: The class local distribution for the continuous resident node ThreatLevel(rgn, t)\n1 if some v have (VehicleType = Tracked) [\n2 10 * CARDINALITY(v) + NormalDist(10, 5)\n3 ] else [\n4 NormalDist(10, 5)\n5 ] The meaning of CLD 2.2 is that the degree of the threat in the region is 10 * the number of\ntracked vehicles plus a normally distributed error with mean 10 and variance 5. Currently, LPDL\nlimits continuous nodes to conditional linear Gaussian (CLG) distributions [Sun et al., 2010],\ndefined as: Pa(𝑅𝑅), 𝐢𝐢𝐢𝐢𝑗𝑗࡯= 𝒩𝒩(π‘šπ‘š+ 𝑏𝑏1𝑃𝑃1 + 𝑏𝑏2𝑃𝑃2 … , +𝑏𝑏𝑛𝑛𝑃𝑃𝑛𝑛, 𝜎𝜎2), where Pa() is a set of continuous parent resident nodes of the continuous resident node, R, having\n{𝑃𝑃1, … , 𝑃𝑃𝑛𝑛}, 𝑃𝑃𝑖𝑖 is a i-th continuous parent node, 𝐢𝐢𝐢𝐢𝑗𝑗 is a j-th configuration of the discrete parents\nof R (e.g., CF = {𝐢𝐢𝐢𝐢1 = (VehicleType = Tracked), 𝐢𝐢𝐢𝐢2 = (VehicleType = Wheeled)}), m is a\nregression intercept, 𝜎𝜎2 is a conditional variance, and 𝑏𝑏𝑖𝑖 is regression coefficient.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 13,
"total_chunks": 115,
"char_count": 1494,
"word_count": 264,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c3fc7f11-d518-4bfc-97fa-a4de1437a044",
"text": "Using the above MTheory example, we define elements of MTheory more precisely. The\nfollowing definitions are taken from [Laskey, 2008]. Definition 2.1 (MFrag) An MFrag F, or MEBN fragment, consists of: (i) a set π‘ͺπ‘ͺ of context\nnodes, which represent conditions under which the distribution defined in the MFrag is valid; (ii)\na set 𝑰𝑰 of input nodes, which have their distributions defined elsewhere and condition the\ndistributions defined in the MFrag; (iii) a set 𝑹𝑹 of resident nodes, whose distributions are defined\nin the MFrag2; (iv) an acyclic directed graph G, whose nodes are associated with resident and\ninput nodes; and (iv) a set 𝑳𝑳𝐢𝐢 of class local distributions, in which an element of 𝑳𝑳𝐢𝐢 is associated\nwith each resident node. The nodes in an MFrag are different from the nodes in a common BN. A node in a common BN\nrepresents a single random variable, whereas a node in an MFrag represents a collection of RVs:\nthose formed by replacing the ordinary variables with identifiers of entity instances that meet the\ncontext conditions. To emphasize the distinction, we call the resident nodes in the MBEN nodes,\nor MNodes. MNodes correspond to predicates (for true/false RVs) or terms (for other RVs) of first-order logic. An MNode is written as a predicate or term followed by a parenthesized list of ordinary variables\nas arguments. Definition 2.2 (MNode) An MNode, or MEBN Node, is a random variable N(ff) specified an nary function or predicate of first-order logic (FOL), a list of n arguments consisting of ordinary\nvariables, a set of mutually exclusive and collectively exhaustive possible values, and an\nassociated class local distribution.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 14,
"total_chunks": 115,
"char_count": 1661,
"word_count": 272,
"chunking_strategy": "semantic"
},
{
"chunk_id": "50bf1d07-2153-4b8e-9e5c-8cf32764d0cb",
"text": "The special values true and false are the possible values for\npredicates, but may not be possible values for functions. The RVs associated with the MNode are\nconstructed by substituting domain entities for the n arguments of the function or predicate. The\nclass local distribution specifies how to define local distributions for these RVs. For example, the node ThreatLevel(rgn, t) in Fig. 2 is an MNode specified by a FOL function\nThreatLevel(rgn, t) having two possible values (i.e., High and Low).",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 15,
"total_chunks": 115,
"char_count": 500,
"word_count": 81,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0b5522ce-a753-4b98-8747-cca0ad484d01",
"text": "Definition 2.3 (MTheory) An MTheory M, or MEBN Theory, is a collection of MFrags that\nsatisfies conditions given in [Laskey, 2008] ensuring the existence of a unique joint distribution\nover its random variables. An MTheory is a collection of MFrags that defines a consistent joint distribution over random\nvariables describing a domain. The MFrags forming an MTheory should be mutually consistent. To ensure consistency, conditions must be satisfied such as no-cycle, bounded causal depth,\nunique home MFrags, and recursive specification condition [Laskey, 2008]. No-cycle means that\nthe generated SSBN will contain no directed cycles. Bounded causal depth means that depth from\na root node to a leaf node of an instance SSBN should be finite.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 16,
"total_chunks": 115,
"char_count": 743,
"word_count": 115,
"chunking_strategy": "semantic"
},
{
"chunk_id": "26923f70-87f5-495e-8a28-1f349c7a4b7d",
"text": "Unique home MFrags means that\neach random variable has its distribution defined in a single MFrag, called its home MFrag. Recursive specification means that MEBN provides a means for defining the distribution for an\nRV depending on an ordered ordinary variable from previous instances of the RV. The IsA random variable is a special RV representing the type of an entity. IsA is commonly used\nas a context node to specify the type of entity that can be substituted for an ordinary variable in an\nMNode. Definition 2.4 (IsA random variable) An IsA random variable, IsA(ov, tp), is an RV\ncorresponding to a 2-argument FOL predicate. The IsA RV has value true when its second\nargument tp is filled by the type of its first argument ov and false otherwise. For example, in the Situation MFrag in Fig. 2, isA(v, VEHICLE) is an IsA RV. Its first argument v\nis filled by an entity instance and its second argument is the type symbol VEHICLE. 2 Bold italic letters are used to denote sets. true when its first argument is filled by an object of type VEHICLE.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 17,
"total_chunks": 115,
"char_count": 1050,
"word_count": 186,
"chunking_strategy": "semantic"
},
{
"chunk_id": "de719be8-8ad0-474f-aec4-591d18ffd30a",
"text": "2.2 Script for MEBN\nFig. 1 shows a graphical representation for an MTheory. In this subsection, we introduce a script\nrepresenting an MTheory. This script is useful to manage contents of an MTheory. The Threat\nAssessment MTheory in Fig. 1 can be represented by the following script (MTheory 2.1). MTheory 2.1: Part of Script MTheory for Threat Assessment\n1 [F: Situation\n2 [C: IsA (v, VEHICLE)][C: IsA (rgn, REGION)][C: IsA (t, TIME)]\n3 [C: rgn = Location (v, t)]\n4 [R: ThreatLevel (r, t)\n5 [IP: VehicleType (v)]\n6 ]\n7 ]\n8 [F: Vehicle\n9 [C: IsA (vid, VEHICLE)]\n10 [R: VehicleType (vid)]\n11 ]\n12 … The script contains several predefined single letters (F, C, R, IP, RP, and L). The single letters, F,\nC, and R denote an MFrag, a context node, and a resident node, respectively. For a resident node\n(e.g., Y) in an MFrag, a resident parent (RP) node (e.g., X), which is defined in the MFrag, is\ndenoted as RP (e.g., [R: Y [RP: X]]). For an input node, we use a single letter IP. Each node can\ncontain a CLD denoted as L. For example, suppose that there is a CLD type called\nThreatLevelCLD. If the resident node ThreatLevel in Line 4 uses the CLD type ThreatLevelCLD,\nthe resident node ThreatLevel can be represented as [R: ThreatLevel (rgn, t) [L:\nThreatLevelCLD]].",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 18,
"total_chunks": 115,
"char_count": 1263,
"word_count": 228,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f15b7550-5bea-42c4-b796-4d55dc75074f",
"text": "2.3 Uncertainty Modeling Process for Semantic Technology (UMP-ST)\nTraditional ontologies [Smith, 2003] are limited to deterministic knowledge. Probabilistic\nOntologies (POs) move beyond this limitation by incorporating formal probabilistic semantics. Probabilistic OWL (PR-OWL) [Costa, 2005] is a probabilistic ontology language that extends\nOWL with semantics based on Multi-Entity Bayesian Networks (MEBNs), a Bayesian\nprobabilistic logic [Laskey, 2008]. PR-OWL has been extended to PR-OWL 2 [Carvalho et al.,\n2017], which provides a tighter link between the deterministic and probabilistic aspects of the\nOntologies. Developing probabilistic ontologies can be greatly facilitated by the use of a\nmodeling framework such as the UMP-ST [Carvalho et al., 2016]. UMP-ST was applied for\nconstruction of PR-OWL 1 & 2 probabilistic ontologies. The UMP-ST process consists of four\nmain disciplines: (1) Requirement, (2) Analysis & Design, (3) Implementation, and (4) Test.\n(1) The Requirement discipline defines goals, queries, and evidence for a probabilistic ontology. The goals are objectives to be achieved by reasoning with the probabilistic ontology (e.g.,\nidentify a ground target). The queries are specific questions for which the answers help to achieve\nthe objectives. For example, what is the type of the target? The evidence is information used to\nanswer the query (e.g., history of the speed of the target). (2) The Analysis & Design discipline\ndesigns entities, attributes for the entities, relationships between the entities, and rules for\nattributes and relationships to represent uncertainty. These are associated with the goals, queries,\nand evidence in the Requirement discipline. For example, suppose that a vehicle entity has two\nattributes, type, and speed. Then an example of a rule might be that if the speed is low, the type is",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 19,
"total_chunks": 115,
"char_count": 1847,
"word_count": 273,
"chunking_strategy": "semantic"
},
{
"chunk_id": "20128bff-9928-415f-a734-41fcb7ce1bbe",
"text": "likely to be a tracked vehicle. (3) The Implementation discipline develops a\nprobabilistic ontology from the outputs of the Analysis & Design discipline. Entities,\nattributes, relationships, and rules are mapped to elements of the probabilistic ontology. For\nexample, the attributes type and speed are mapped to random variables type and speed,\nrespectively. The rule relating speed and type is converted to a joint probability distribution for the random\nvariables type and speed. (4) In the Test discipline, the probabilistic ontology developed in the\nprevious step is evaluated to assess its correctness. Correctness can be measured by three\napproaches: (a) Elicitation Review, in which the completeness of the probabilistic ontology\nin addressing the requirements is reviewed, (b) Importance Analysis, which is a form of sensitivity\nanalysis that examines the strength of influence of each random variable on other random\nvariables, and (c) Case-based Evaluation, in which various scenarios are defined and used to\nexamine the reasoning implications of the probabilistic ontology [Laskey & Mahoney, 2000].",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 20,
"total_chunks": 115,
"char_count": 1109,
"word_count": 166,
"chunking_strategy": "semantic"
},
{
"chunk_id": "90bf671a-ea04-4465-a396-21a295448797",
"text": "3 Illustrative Running Example of Relational Data for MEBN Learning\nIn this section, we introduce a threat assessment relational database (RDB) as an illustrative\nrunning example of relational data for MEBN learning. Fig. 3 Schema of a threat assessment relational database Fig. 3 shows a schema for the threat assessment RDB. In the example RDB schema, there are 14\nrelations: Region, Situation, Location, Time, Speed, Speed_Report, ActualObject, ObserverOf,\nVehicle, VehicleType, Predecessor, ReportedVehicle, MTI, and MTI_Condition. The relation\nRegion holds region information for this situation, namely a region index (e.g., region1\nand region2). The relation Time holds time information, a time stamp representing a time\ninterval (e.g., t1 and t2). The relation Vehicle holds vehicle information, an index of a\nground vehicle (e.g., v1 and v2). The relation VehicleType indicates the type of a vehicle (e.g.,\nWheeled and Tracked). The relation MTI is for a moving target indicator (e.g., mti1 and mti2). An\nMTI can be in a condition (e.g., Good and Bad) depending on weather and/or maintenance\nconditions. The relation MTI_Condition indicates the condition of an MTI. The relation Location\nrecords the location of a vehicle. The relation Situation indicates a threat level to a\nregion at a time (e.g., Low and High). The relation ReportedVehicle indicates a reported vehicle",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 21,
"total_chunks": 115,
"char_count": 1460,
"word_count": 226,
"chunking_strategy": "semantic"
},
{
"chunk_id": "cd0b4c80-c8c2-481c-9262-089b8e3dcdf8",
"text": "The relation Speed indicates the actual speed of a vehicle, while the relation\nSpeed_Report indicates the speed of the vehicle as reported by an MTI. The relation ActualObject\nmaps a reported vehicle to an actual vehicle. The relation ObserverOf indicates that an MTI\nobserves a vehicle. The relation Predecessor indicates a temporal order between two time stamps.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 22,
"total_chunks": 115,
"char_count": 360,
"word_count": 55,
"chunking_strategy": "semantic"
},
{
"chunk_id": "53632f26-72b7-4b8b-9dbd-caa3ed86341f",
"text": "Table 1 shows parts of the relations of the threat assessment RDB for the schema in Fig. 3. As\nshown in Table 1, we choose six relations (Vehicle, Time, Region, VehicleType, Location, and\nSituation), which serve as an illustrative example throughout the next section. For example, the\nrelation Vehicle contains a primary key VID. The relation VehicleType contains a primary key\nv/Vehicle.VID, which is a foreign key referencing the primary key VID of the relation Vehicle, and an\nattribute VehicleType. Table 1 Parts of the threat assessment relational database",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 23,
"total_chunks": 115,
"char_count": 551,
"word_count": 88,
"chunking_strategy": "semantic"
},
{
"chunk_id": "7740f510-2ef4-4357-b92c-716ba426b60f",
"text": "Vehicle: VID = v1, v2, ...\nTime: TID = t1, t2, ...\nRegion: RID = rgn1, rgn2, ...\nVehicleType: (v/Vehicle.VID, Type) = (v1, Wheeled), (v2, Tracked), ...\nLocation: (v/Vehicle.VID, t/Time.TID, Location/Region.RID) = (v1, t1, rgn1), (v1, t2, rgn1), ...\nSituation: (Rgn/Region.RID, t/Time.TID, ThreatLevel) = (rgn1, t1, High), (rgn2, t3, Low), ...",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 24,
"total_chunks": 115,
"char_count": 291,
"word_count": 56,
"chunking_strategy": "semantic"
},
{
"chunk_id": "35f2405a-25b4-4978-8958-5f0c5a872807",
"text": "4 Process for Human-Aided MEBN Learning\nThe process this research presents uses expert knowledge to define the set of possible parameters\nand structures. The process, called Human-aided MEBN learning (HML), modifies UMP-ST\n[Carvalho et al., 2016] to incorporate learning from data. As with UMP-ST, HML includes four\nsteps (Fig. 4): (1) Analyze Requirements, (2) Define World Model, (3) Construct Reasoning\nModel, and (4) Test Reasoning Model. Fig. 4 Process for Human-Aided MEBN Learning",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 25,
"total_chunks": 115,
"char_count": 487,
"word_count": 73,
"chunking_strategy": "semantic"
},
{
"chunk_id": "39c7dd2d-cacd-4eeb-ae69-7bb6b19c34cd",
"text": "Initial inputs of the process can be needs and/or missions from stakeholders in a certain domain\n(e.g., predictive situation awareness, planning, natural language processing, and system\nmodeling). In the Analyze Requirements step, specific requirements for a reasoning model (in our\ncase, an MTheory) are identified. The goals and the reasoning model differ according to the\ndomain type. For example, the goal of the reasoning model for the PSAW\ndomain can be to identify a threatening target. The reasoning model in such a domain may contain\nsensor models representing sensing noise. The goal of the reasoning model for the natural\nlanguage processing domain can be to analyze natural language or classify documents. The reasoning model in such a domain may contain random variables specifying a text corpus. In the\nDefine World Model step, a target world where the reasoning model operates is defined. In the\nConstruct Reasoning Model step, a training dataset can be an input for MEBN learning to learn a\nreasoning model. In the Test Reasoning Model step, a test dataset can be an input for the evaluation of\nthe learned reasoning model. The output of the process is the evaluated reasoning model.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 26,
"total_chunks": 115,
"char_count": 1202,
"word_count": 191,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b498fce4-c213-4c63-94be-fc937b0b621d",
"text": "The\nfollowing subsections describe these four steps with the illustrative example (Section 3) of threat\nassessment in the PSAW domain. 4.1 Analyze Requirements\nThis step identifies requirements for the development of a reasoning model. As with requirements\nin UMP-ST (Section 2.3), requirements in the HML define goals to be achieved, queries to\nanswer, and evidence to be used in answering queries. The requirements should also include\nperformance criteria for verification of the reasoning model. These performance criteria are used\nin the Test Reasoning Model step. Before the Analyze Requirements step begins, stakeholders\nprovide their initial requirements containing needs, wants, missions, and objectives. These initial\nrequirements may not be defined formally. Therefore, to clarify the initial requirements,\noperational scenarios are developed. In other words, the operational scenarios are used to identify\nthe goals, queries, and evidence in the requirements. This step contains three sub-steps (Fig. 5): (1) an Identify Goals step, (2) an Identify\nQueries/Evidence step, and (3) a Define Performance Criteria step. Fig. 5 Analyze Requirements 4.1.1 Identify Goals\nThe goals represent missions of the reasoning model we are developing. In this step, we can use a\nset of common questions in a certain domain, which helps us identify what\nquestions the reasoning model should answer. Such domain questions can be determined by\nknowledge from experts in the domain. For example, in the predictive situation awareness\n(PSAW) domain, several PSAW questions derived from PSAW domain knowledge can be used\nin this step (e.g., \"does a (grouped) target exist?\", \"what are\nthe environmental conditions?\", and \"what group does the target belong to?\").",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 27,
"total_chunks": 115,
"char_count": 1803,
"word_count": 272,
"chunking_strategy": "semantic"
},
{
"chunk_id": "47b9ba7a-df97-49a9-8455-cd77191117a5",
"text": "Requirement 4.1\nillustrates a goal we will use. Requirement 4.1\nGoal 1: Identify characteristics of a target. 4.1.2 Identify Queries/Evidence\nThe queries are specific questions for which the reasoning model is used to estimate and/or\npredict answers. The evidence consists of inputs used for reasoning. From these sub-steps, a set of\ngoals, a set of queries for each goal, and a set of evidence for each query are defined. The\nfollowing shows an illustrative example of defining a requirement. Requirement 4.1\nGoal 1: Identify characteristics of a target. Query 1.1: What is the speed of the target at a given time? Evidence 1.1.1: A speed report from a sensor.\n...",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 28,
"total_chunks": 115,
"char_count": 665,
"word_count": 110,
"chunking_strategy": "semantic"
},
{
"chunk_id": "37cd58d6-180f-4fb7-b83d-fa294116a8dc",
"text": "4.1.3 Define Performance Criteria\nIt is necessary to evaluate whether a reasoning model learned from\ndata meets the performance requirements in terms of reasoning. Criteria for evaluating reasoning\nperformance include: Speed (e.g., execution or computation time for reasoning), Accuracy (e.g.,\nmeasuring the gap between an actual value and an estimate), and Resource Usage (e.g., memory or\nCPU usage). In some situations, execution time for a reasoning model is the most important\nfactor. In other cases, accuracy of a reasoning model may be more important. For example,\ninitial missile tracking may require high-speed reasoning to estimate the missile trajectory, while\nmatching faces in a security video against a no-fly database may prioritize accuracy over\nexecution time. The performance criteria in the requirements can be specified in terms of some measure of\naccuracy (e.g., the Brier score [Brier, 1950] or the continuous ranked probability score (CRPS)\n[Gneiting & Raftery, 2007]). For example, we might require that the average of the CRPS values\nbetween ground truth and estimated results from a reasoning model shall be less than a given\nthreshold. The performance criteria are determined by stakeholder agreement. Such performance criteria can\nbe acquired through the following approaches: (1) survey, (2) experience, and (3) standard\nmetrics drawn from published literature and standards. (1) Performance criteria can be derived by\nagreement of stakeholders using a survey. (2) Subject matter experts can provide appropriate\nperformance criteria from their experience. (3) Standards or literature can be used to obtain such\nperformance criteria.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 29,
"total_chunks": 115,
"char_count": 1680,
"word_count": 248,
"chunking_strategy": "semantic"
},
{
"chunk_id": "963a5fdf-2470-4a69-8f94-5bfca9da99bf",
"text": "4.2 Define World Model\nThe Define World Model step develops a world model consisting of a structure model and rules. The world model describes the target situation of concern. The structure model\ncan contain entities (e.g., target and sensor), variables (e.g., Speed and ThreatLevel), and\nrelations (e.g., location and situation). The rules describe the causal relationships between entities\nin the structure model (e.g., the type of a target can influence the speed of the target). The causal\nrelationships can contain more specific information, such as types of distributions and parameters\nfor the distributions, which will be used to develop an initial MTheory in the next step. The\nstructure model and the rules provide a clear basis from which the reasoning model can be formed. Fig. 6 Define World Model This step decomposes into two sub-steps (Fig. 6): (1) a Define Structure Model step and (2) a\nDefine Rules step. The Define Structure Model step defines the structure model from the\nrequirements, domain knowledge, and/or existing data schemas. The structure model is used to\nidentify rules. The Define Rules step defines a rule, or an influencing relationship between\nattributes (e.g., A and B), in relations for the structure model. The influencing relationship is a\nrelationship between attributes in which causality between the attributes is unknown (e.g.,\ninfluencing(A, B)). If we know the causality, the influencing relationship becomes a causal\nrelationship (e.g., causal(A, B)). When several parent attributes influence a child attribute (or\nvariable), braces are used to indicate the set of parent attributes (e.g., causal({A, B}, C)). The child\nattribute is called a Target Attribute (or Variable).",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 30,
"total_chunks": 115,
"char_count": 1738,
"word_count": 274,
"chunking_strategy": "semantic"
},
{
"chunk_id": "28aab03c-3c52-4e71-8c32-4b7c83e0bacd",
"text": "Also, the set of rules should satisfy the No-cycle condition, which means that the generated SSBN will contain no directed cycles (Section\n2.1).",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 31,
"total_chunks": 115,
"char_count": 142,
"word_count": 23,
"chunking_strategy": "semantic"
},
{
"chunk_id": "099acb8e-63e1-46cd-9a79-4621f99f1a53",
"text": "4.2.1 Define Structure Model\nThe Define Structure Model step uses requirements, domain knowledge, and/or existing data\nschemas to develop a structure model. The structure can be represented in a modeling language\n(e.g., the Entity–Relationship (ER) model, Enhanced Entity–Relationship (EER) model, Relational Model\n(RM), or Unified Modeling Language (UML)). The structure model can contain information\nabout entities, attributes, and groups for the entities and the attributes (e.g., a relation in RM). We need to define how to treat the world model in terms of the closed-world assumption and\nthe open-world assumption. Under the closed-world assumption (CWA), data in a database not known to\nbe true is considered false, while under the open-world assumption (OWA) it is\nconsidered unknown, i.e., it can be either true or false [Reiter, 1978]. In the world model, entities,\nrelations, and attributes can be treated according to either CWA or OWA. For example, under CWA,\nif there is a set of disease entities, we assume the only diseases are the ones represented in the\nRDB. Under OWA, there may be other disease entities in addition to the ones represented in the RDB. The choice between CWA and OWA depends on the task and the quality of the data or knowledge. If it is\nsufficient for the task to assume we know all diseases (although in the real world this is impossible),\nCWA can be used.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 32,
"total_chunks": 115,
"char_count": 1375,
"word_count": 226,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c78a5cfa-ddd1-46bb-8ee9-b737dea3dc18",
"text": "As another example, suppose there is a group of trees in a region and we are trying to\nidentify the type of the trees, but the method used to count the trees performs poorly. In this case,\nit may make sense to treat data from such a method according to OWA (even though we can\nidentify the type of the trees). Therefore, the choice of CWA or OWA for data or\nknowledge can depend on how well these fit the real world and on the task. This can be seen as an issue\nof data quality. If our data or knowledge fits the real world well, we may use CWA. If it does not,\nwe may use OWA. How can such quality be measured? We may need an approach to quantify the fit by matching the data against the real\nworld. However, the topic of data quality goes beyond the scope of this research.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 33,
"total_chunks": 115,
"char_count": 805,
"word_count": 157,
"chunking_strategy": "semantic"
},
{
"chunk_id": "9d3807cb-1892-4bd2-8821-78d337d8506f",
"text": "The original formulation of the relational model assumed a closed world [Date, 2007]. Date [2007]\ndiscussed the problem of using OWA in the relational model. Under OWA, data not known to be true is considered unknown, which means that we do not know whether it is true or false. Date\n[2011] argued that this leads to a three-valued logic (3VL) containing three truth values (i.e.,\ntrue, false, and unknown). However, the relational model was not developed for such a logic (it\nis based on two-valued logic [Date, 2011]). Therefore, query results under\nthree-valued logic for the relational model can be wrong. \"Nulls and 3VL are supposed to be a\nsolution to the 'missing information' problem—but I believe I've shown that, to the extent they\ncan be considered a 'solution' at all, they're a disastrously bad one\" [Date, 2011]. In this\nresearch, we follow a limited closed-world assumption to maintain consistency between RM and\nMEBN in terms of MEBN learning:\n[Assumption 1] No Missing Data: Values of all RVs for entities explicitly represented in the\ndatabase are known.\n[Assumption 2] Boolean RV: For Boolean RVs, if the database does not indicate that the value\nis true, it is assumed false. We make no assumptions about entities that have not yet been represented in the database. The\npurpose of learning is to define a probability distribution over the attributes and relationships for\nnew entities.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 34,
"total_chunks": 115,
"char_count": 1436,
"word_count": 238,
"chunking_strategy": "semantic"
},
{
"chunk_id": "234e6f17-1b25-4cd4-9d5a-b1c6b8f25464",
"text": "Relaxing Assumption 1 and Assumption 2 is a topic for future research. A requirement specifies a query and evidence for the query. The elements of the requirement are\nused to define corresponding elements in the structure model. For example, suppose that the\nrequirements specify the attribute Speed as a query (Query 1.1) and the attribute Speed Report as evidence (Evidence\n1.1.1) for a target g at a time t. Based on these requirements, we know that these two attributes\nshould be included in the structure model. We can then identify additional attributes related to\nthese attributes by expert knowledge. For example, a TargetType attribute for the target g most\nlikely influences the Speed attribute, and the Speed attribute at the previous time probably\ninfluences the current Speed attribute. Therefore, these attributes (TargetType and PreviousSpeed)\ncan be included in the structure model. In this step, domain knowledge can be used to identify these possible entities, variables, and\nrelationships. Domain knowledge may provide information about possible entities (e.g., time,\ntarget, and sensor) involved in the domain situation. For example, Park et al. [2014] suggested\npossible entities (e.g., the time entity, observer entity, and observed entity). These entities can be\nsubjects that MEBN developers consider for the design of MEBN models in the PSAW\ndomain. 4.2.2 Define Rules\nIn the Define Rules step, causal relationships between random variables can be suggested by the\nPSAW-MEBN reference model. For example, a Reported Object RV (e.g., Speed_RPT) depends\non a Target Object RV (e.g., Speed). Also, expert knowledge can provide some causal\nrelationships between RVs.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 35,
"total_chunks": 115,
"char_count": 1663,
"word_count": 255,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d8a63682-50f0-426e-acc2-1f4737fd9085",
"text": "For example, an expert can note that the RV VehicleType most likely\ninfluences the RV Speed and the RV PreviousSpeed also likely influences the current RV Speed. These beliefs from expert knowledge become causal relationship rules, as shown in the following. Rule 1: causal({VehicleType, PreviousSpeed}, Speed)\nRule 2: causal(VehicleType, ThreatLevel)\nRule 3: causal({Speed, MTI_Condition}, Speed_RPT)\nRules 1 and 2 are derived from expert knowledge, while Rule 3 is derived from the reference\nmodel. Also, as described in Section 4.2.3.1, the PSAW-MEBN reference model provides knowledge about\nspecial context variable types (i.e., the context types ActualObject, ObserverOf, and Predecessor) to\nlink entities determined in different MFrags. Thus, the relation ActualObject is used as the context type ActualObject, the relation ObserverOf is used as the context type ObserverOf, and the relation\nPredecessor is used as the context type Predecessor. In the Define Rules step, a (conditional) local distribution for an attribute (e.g., the speed attribute)\ncan be defined by expert knowledge. In reality, we can encounter a situation in which there is no\ndataset for a rule and all we have is expert knowledge. For example, a conditional local\ndistribution for the speed attribute given the RV VehicleType can be specified by a domain expert\n(e.g., if a vehicle type is wheeled, then the speed of the vehicle on a road is normally distributed\nwith a mean of 50 MPH and a standard deviation of 20 MPH).",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 36,
"total_chunks": 115,
"char_count": 1477,
"word_count": 231,
"chunking_strategy": "semantic"
},
{
"chunk_id": "4eed548f-f111-4715-8d30-11b07aa3bad2",
"text": "The rules derived in this step are\nused in the next step to construct an MTheory, which will then be learned by MEBN\nparameter learning. In this step, we determine whether data can be obtained for the attribute, and if so, either\ncollect data or identify an existing dataset. We usually divide the data into a training dataset and a\ntest dataset. If no data can be obtained, we use the judgment of domain experts to specify the\nnecessary probability distributions. For example, a belief for the target type attribute can be\nP(Wheeled) = 0.8 and P(Tracked) = 0.2. If neither data nor expert judgment is available, we\nconsider whether the attribute is really necessary. For this, we can return to the Analyze\nRequirements step to modify the requirements.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 37,
"total_chunks": 115,
"char_count": 752,
"word_count": 129,
"chunking_strategy": "semantic"
},
{
"chunk_id": "29e9286d-0caa-4692-9106-47a732ba60cb",
"text": "4.3 Construct Reasoning Model\nThe Construct Reasoning Model step develops a reasoning model from a training dataset, a\nstructure model, and rules. Fig. 7 Construct Reasoning Model This step decomposes into two sub-steps (Fig. 7): (1) a Map to Reasoning Model step and (2) a\nLearn Reasoning Model step. The Map to Reasoning Model step converts the structure model and rules in the\nworld model to an initial reasoning model. The Learn Reasoning Model step uses a machine learning\nmethod to learn the model from a training dataset. 4.3.1 Map to Reasoning Model\nIn the Map to Reasoning Model step, MEBN-RM is used as a reference for a mapping rule\nbetween RM and MEBN [Park et al., 2013]. The relations in Fig. 3 can be converted to MFrags in\nan initial MTheory (MTheory 4.1) using MEBN-RM. 4.3.1.1 Perform Entity-Relationship Normalization\nBefore performing MEBN-RM [Park et al., 2013], the relations in Table 1 are normalized by\nEntity-Relationship Normalization. Definition 4.1 (Entity-Relationship Normalization) A relation is in Entity-Relationship Normal Form if either\nits primary key is a single attribute which is not a foreign key, or its primary key contains two or\nmore attributes, all of which are foreign keys. For example, among the relations in Table 1, the relation VehicleType has as its\nprimary key a single foreign key imported from the relation Vehicle. The two relations (Vehicle and\nVehicleType) can therefore be merged into a single relation Vehicle. The following table shows the normalized\nrelations. Note that after the Entity-Relationship Normalization, any foreign key in a relation comes\nfrom a certain entity relation (not a relationship relation), which has only one attribute for its\nprimary key, so there is no need to indicate which primary key is used for the entity relation and\nwe can simplify the notation for a foreign key (e.g., rgn/Region.RID becomes rgn/Region and t/Time.TID becomes t/Time). 
For\nexample, the notation of the foreign key for the vehicle (i.e., v/Vehicle.VID) in the relation\nLocation (Table 1) can be simplified as v/Vehicle. Table 2 Normalized relational dataset from Table 1",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 38,
"total_chunks": 115,
"char_count": 2049,
"word_count": 331,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a541b19b-3a44-47fe-9ef4-1bb999497068",
"text": "Time: TID = t1, t2, ...\nRegion: RID = rgn1, rgn2, ...\nVehicle: (VID, VehicleType) = (v1, Wheeled), (v2, Tracked), ...\nLocation: (v/Vehicle, t/Time, Location/Region) = (v1, t1, rgn1), (v1, t2, rgn1), ...\nSituation: (rgn/Region, t/Time, ThreatLevel) = (rgn1, t1, High), (rgn2, t3, Low), ...",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 39,
"total_chunks": 115,
"char_count": 257,
"word_count": 52,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6708207b-03a9-4cbc-a00b-3bf4196be82f",
"text": "4.3.1.2 Perform MEBN-RM Mapping\nThe relations in Fig. 3 can be converted to MFrags in an initial MTheory (MTheory 4.1) using\nMEBN-RM mapping [Park et al., 2013]. For example, the relation MTI_Condition is converted to\nthe MFrag F1, MTI_CONDITION, described in lines 1 through 4. Also, the attributes in relations\ncan become resident nodes in the initial MTheory using the resident node mapping defined in MEBN-RM. For example, the attribute VehicleType for the vehicle v becomes a resident node\nVehicleType(v). The attribute Speed for the vehicle v and at the time t becomes a resident node\nSpeed(v, t). MTheory 4.1: Initial Threat Assessment\n1 [F1: MTI_CONDITION\n2 [C: IsA (v, VEHICLE), IsA (mti, MTI), IsA (t, TIME)]\n3 [R: MTI_Condition(v, mti, t)]\n4 ]\n5 [F2: VEHICLE\n6 [C: IsA (VID, VEHICLE)]\n7 [R: VehicleType(VID)]\n8 ]\n9 [F3: SPEED\n10 [C: IsA (v, VEHICLE), IsA (t, TIME)]\n11 [R: Speed (v, t)]\n12 ]\n13 [F4: LOCATION\n14 [C: IsA (v, VEHICLE), IsA (t, TIME)]\n15 [R: Location (v, t)]\n16 ]\n17 [F5: SPEED_REPORT\n18 [C: IsA (r, REPORTEDVEHICLE), IsA (t, TIME)]\n19 [R: Speed_RPT (r, t)]\n20 ]\n21 [F6: SITUATION\n22 [C: IsA (rgn, REGION), IsA (t, TIME)]\n23 [R: ThreatLevel (rgn, t)]\n24 ]\n25 [F7: REPORTEDVEHICLE\n26 [C: IsA (r, REPORTEDVEHICLE)]\n27 [R: ActualObject(r)]\n28 ]\n29 [F8: OBSERVEROF\n30 [C: IsA (mti, MTI), IsA (v, VEHICLE)]\n31 [R: ObserverOf (mti, v)]\n32 ]\n33 [F9: PREDECESSOR\n34 [C: IsA (pret, TIME), IsA (t, TIME)]\n35 [R: Predecessor (pret, t)]\n36 ]",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 40,
"total_chunks": 115,
"char_count": 1453,
"word_count": 260,
"chunking_strategy": "semantic"
},
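The MEBN-RM resident-node mapping described above can be sketched in Python. This is an illustrative sketch, not the authors' implementation; the `relation_to_mfrag` helper and its schema format are assumptions made for this example.

```python
def relation_to_mfrag(name, keys, attributes):
    """Sketch of the MEBN-RM mapping: one relation becomes one MFrag,
    each primary-key component becomes an IsA context node, and each
    non-key attribute becomes a resident node parameterized by the keys."""
    context = [f"IsA ({var}, {entity})" for var, entity in keys]
    args = ", ".join(var for var, _ in keys)
    residents = [f"{attr}({args})" for attr in attributes]
    return {"name": name, "context": context, "residents": residents}

# The relation Speed(v/Vehicle, t/Time, Speed) from the running example:
speed_mfrag = relation_to_mfrag(
    "SPEED", keys=[("v", "VEHICLE"), ("t", "TIME")], attributes=["Speed"]
)
print(speed_mfrag["context"])    # ['IsA (v, VEHICLE)', 'IsA (t, TIME)']
print(speed_mfrag["residents"])  # ['Speed(v, t)']
```

The printed resident node `Speed(v, t)` matches the mapping stated in the text: the attribute Speed for the vehicle v at the time t becomes a resident node with those arguments.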
{
"chunk_id": "95554b65-31a2-4cc9-ac7c-477b17f59a1b",
"text": "The initial MTheory, which is directly derived from an RM using MEBN-RM, can be learned\nusing a dataset for each relation associated with the MFrag in the initial MTheory. More\nspecifically, the parameter for the distribution of each RV in the MFrag is learned from a\ncorresponding dataset of the relation for the MFrag. For example, the MFrag Situation is derived\nfrom the relation Situation in Table 2. The parameter for the distribution of the variable\nThreatLevel (Line 23 in MTheory 4.1) can be learned from the dataset of the attribute\nThreatLevel (Table 2). An RV (e.g., ThreatLevel) in MEBN can contain a default distribution\nwhich is used for reasoning, for cases in which none of the conditions associated with parent RVs\nis valid. In MEBN, the parameter for the default distribution should be learned from a dataset\ncontaining such cases. 4.3.1.3 Update Reasoning Model using the Rules\nThe initial MTheory can be updated by the rules defined in Section 4.2.2.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 41,
"total_chunks": 115,
"char_count": 970,
"word_count": 161,
"chunking_strategy": "semantic"
},
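In the simplest maximum-likelihood reading, learning the parameter of a discrete resident node from the corresponding relation column amounts to counting state frequencies. A minimal sketch, with an invented ThreatLevel column; `learn_categorical` is a hypothetical helper, not from the paper:

```python
from collections import Counter

def learn_categorical(values, states):
    """Maximum-likelihood estimate of a categorical distribution
    from one attribute column of a relation's dataset."""
    counts = Counter(values)
    n = len(values)
    return {s: counts[s] / n for s in states}

# Hypothetical ThreatLevel column from the Situation relation:
column = ["High", "High", "Low", "High"]
params = learn_categorical(column, states=["High", "Low"])
print(params)  # {'High': 0.75, 'Low': 0.25}
```

The same counting would be restricted to the "no valid parent condition" cases when estimating the default distribution mentioned above.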
{
"chunk_id": "04b6763b-4690-460a-a637-575525491dce",
"text": "We have three rules for\nthe three variables Speed, ThreatLevel, and Speed_RPT. Each variable is associated with its\nparent variables (e.g., Pa(ThreatLevel) = {VehicleType})3. If the parent variables in the rule for a\nchild variable are in the MFrag where the distribution of the child variable is defined, the dataset\nfor the relation associated with the MFrag is used for learning. For example, suppose that in Table\n2, there is an attribute VehicleSize in the relation Vehicle. The attribute VehicleSize becomes a\nvariable VehicleSize in the MFrag Vehicle using MEBN-RM. If there is a rule such as a\ncausal(VehicleType, VehicleSize), the dataset in the relation Vehicle is used to learn the parameter\nfor the distribution of the variable VehicleSize. However, if the parent variables in the rule for a\nchild variable are resident in different MFrags where the child variable is not defined, the\nrelations associated with these MFrags for the child variable and the parent variables should be\njoined to generate a joined dataset containing both datasets for the child and the parents. Then, the\ndataset for the joined relation from these relations is used for learning. For such a joining, target\n(or child) variables from a set of rules play an important role. The target variables in the MFrag\ngiven their parent variables are learned using the joined relation containing all attributes related to\nthe target variables. In the following, we describe how to join relations in a relational database\n(e.g., the threat assessment relational database). Rule 1 specifies that the probability distribution 3 Pa(X) is the set of parent nodes of the node X.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 42,
"total_chunks": 115,
"char_count": 1651,
"word_count": 267,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f82f8972-601f-4b02-b26c-c19eb2970b79",
"text": "of the variable Speed depends on the values of variables PreviousSpeed and VehicleType. To\nlearn the parameter for the variable Speed in this situation, it is not enough to use only the dataset\nfrom the relation Speed, because the dataset doesn't contain information associated with the\nvariable VehicleType. Therefore, relations related to each target variable and its parent variables\nshould be joined. For this purpose, we need to define a joining rule. The variable PreviousSpeed\nindicates a variable Speed which happens just before a current time, so the relation Predecessor,\nwhich indicates a previous time and a current time, is also used for this joining for Rule 1. In\nother words, the relations Speed, VehicleType, and Predecessor are joined. 4.3.1.3.1 Join Relations\nNow, let us discuss how to join these relations. A new dataset from the joined relation is called a\njoined dataset. For example, the attributes (e.g., VehicleType and ThreatLevel) which are located\nin different relations can be joined. Joining in RM is an operation to combine two or more\nrelations. In the example, the relation Vehicle can be joined to the relation Situation through the\nrelation Location, because the relation Location contains the attributes v/Vehicle, t/Time, and\nLocation/Region corresponding to the primary key, VID, in the relation Vehicle and the primary\nkey, t/Time and rgn/Region, in the relation Situation. Table 3 Joined dataset Location.v Location.t Situation. 
Case Location VehicleType ThreatLevel Vehicle.VID Situation.t Situation.rgn\n1 Tracked Vehicle13 Time18 Region6 High\n2 Tracked Vehicle15 Time21 Region7 High\n3 Tracked Vehicle17 Time24 Region8 Low\n4 Tracked Vehicle19 Time27 Region9 High\n5 Wheeled Vehicle21 Time30 Region10 High\n6 Wheeled Vehicle23 Time33 Region11 Low\n7 Wheeled Vehicle0 Time2 Region0 High\n8 Tracked Vehicle1 Time2 Region0 High\n9 Tracked Vehicle2 Time5 Region1 Low\n10 Wheeled Vehicle3 Time5 Region1 Low\n11 Tracked Vehicle4 Time8 Region2 High\n12 Tracked Vehicle5 Time8 Region2 High\n13 Tracked Vehicle6 Time11 Region3 Low\n14 Tracked Vehicle7 Time11 Region3 Low\n15 Tracked Vehicle8 Time14 Region4 High\n16 Tracked Vehicle9 Time14 Region4 High\n17 Wheeled Vehicle10 Time17 Region5 High\n18 Wheeled Vehicle11 Time17 Region5 High",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 43,
"total_chunks": 115,
"char_count": 2254,
"word_count": 343,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d0a13243-dde6-48f1-b113-5be9a8182dae",
"text": "There are several joining rules (e.g., Cartesian Product, Outer Join, Inner Join, and Natural Join)\n[Date, 2011]. Table 3 shows an illustrative example of a joined dataset derived from Table 2\nusing Inner Join. Inner Join produces all tuples from relations as long as there is a match between\nvalues in the columns being joined. Table 3 shows the result of performing an inner join of the\nrelations Situation and Vehicle through the relation Location and then selecting the columns to be\nused for learning. The rows (or tuples) in the relations Situation and Vehicle are joined when rows\nof the attributes v/Vehicle, t/Time, and Location/Region in the relation Location match rows of\nthe attribute VID in the relation Vehicle and rows of the attributes rgn/Region and t/Time in the\nrelation Situation. The first column denotes cases for the matched rows. The second column\n(Vehicle.VehicleType) denotes the rows from the attribute VehicleType of the relation Vehicle in\nTable 2. The third column (Location.v and Vehicle.VID) denotes the matched rows between the attribute v from the relation Location and the attribute VID from the relation Vehicle. The fourth\ncolumn (Location.t and Situation.t) denotes the matched rows between the attribute t from the\nrelation Location and the attribute t from the relation Situation. The fifth column\n(Location.Location and Situation.rgn) denotes the matched rows between the attribute Location\nfrom the relation Location and the attribute rgn from the relation Situation. The sixth column\n(Situation.ThreatLevel) denotes the rows from the attribute ThreatLevel from the relation\nSituation. Table 3 shows the joined dataset for the attributes VehicleType and ThreatLevel. Now, let us\nassume that the attribute ThreatLevel will be a target variable depending on the variable\nVehicleType (i.e., Rule 2: causal(VehicleType, ThreatLevel)). 
For each instance of the target\nvariable ThreatLevel, Table 3 provides relevant information about all the configurations of its\nparents (i.e., the parent variable VehicleType).",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 44,
"total_chunks": 115,
"char_count": 2050,
"word_count": 312,
"chunking_strategy": "semantic"
},
{
"chunk_id": "fbeacce6-c179-4f58-9d9b-9b3f98f2b2ff",
"text": "For example, there is the value High for the Threat\nlevel in the situation at Region5 in Time17 (i.e., Cases 17 and 18). The value High is associated\nwith the wheeled Vehicle10 and the wheeled Vehicle11. In other words, two parent instances (i.e.,\nthe wheeled Vehicle10 and the wheeled Vehicle11) influence the target instance (i.e., the value\nHigh).",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 45,
"total_chunks": 115,
"char_count": 350,
"word_count": 58,
"chunking_strategy": "semantic"
},
{
"chunk_id": "aa2d2e95-141d-4263-a8e2-a3338aaac76e",
"text": "The following shows a query script4 which is an example using Inner Join for Table 3. SQL script 4.1: Joining for Table 3\n1 SELECT\n2 Vehicletype, Location.v, Location.t, Location.Location, ThreatLevel\n3 FROM Situation\n5 JOIN Location ON\n6 Situation.rgn = Location.Location &&\n7 Situation.t = Location.t\n9 JOIN Vehicle ON\n10 Vehicle.VID = Location.v SQL script 4.1 joins the relations Situation and VehicleType through the relation Location. In\nother words, the rows (or tuples) in the relations Situation and VehicleType are joined as shown\nTable 3 in which the two attributes (VehicleType, ThreatLevel) are connected through the\nattributes of the relation Location. The joined table shows how the dataset of the attribute\nVehicleType and the dataset of the attribute ThreatLevel are linked. We introduced how to join relations according to given rules. In the following, we discuss how to\nupdate an MFrag from the given rules. The initial threat assessment MTheory (MTheory 4.1) was\nconstructed by MEBN-RM.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 46,
"total_chunks": 115,
"char_count": 1007,
"word_count": 158,
"chunking_strategy": "semantic"
},
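The three-way inner join of SQL script 4.1 can be exercised end to end with Python's built-in SQLite in place of MySQL (standard `AND` replaces MySQL's `&&`). The rows below are toy values invented for the example, not the paper's dataset:

```python
import sqlite3

# Toy copies of the Vehicle / Location / Situation relations.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE Vehicle  (VID TEXT, VehicleType TEXT);
CREATE TABLE Location (v TEXT, t TEXT, Location TEXT);
CREATE TABLE Situation(rgn TEXT, t TEXT, ThreatLevel TEXT);
INSERT INTO Vehicle   VALUES ('Vehicle10','Wheeled'), ('Vehicle1','Tracked');
INSERT INTO Location  VALUES ('Vehicle10','Time17','Region5'),
                             ('Vehicle1','Time2','Region0');
INSERT INTO Situation VALUES ('Region5','Time17','High'),
                             ('Region0','Time2','High');
""")

# The join of SQL script 4.1: Situation to Vehicle through Location.
rows = con.execute("""
SELECT Vehicle.VehicleType, Location.v, Location.t, Location.Location,
       Situation.ThreatLevel
FROM Situation
JOIN Location ON Situation.rgn = Location.Location
             AND Situation.t   = Location.t
JOIN Vehicle  ON Vehicle.VID   = Location.v
""").fetchall()
for r in rows:
    print(r)
```

Each output row has the shape of a Table 3 case: VehicleType, the matched vehicle/time/region keys, and ThreatLevel.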
{
"chunk_id": "b2f590f1-6e3f-4c95-8aba-94c01f3d5e89",
"text": "Each MFrag in the initial MTheory contains resident nodes without\nany causal relationship between the resident nodes. The given rules enable the resident nodes to\nspecify such causal relationships. Therefore, the MFrag in the initial MTheory may be changed\naccording to the updated resident nodes with the causal relationships by the given rules. This\nprocess contains three steps: Construct input/parent nodes, Construct context nodes, and Refine\ncontext nodes. 4.3.1.3.2 Construct Input/Parent Nodes\nA rule denotes a target variable and its parent variables. The joined table for such a given rule\ncontains parents of the resident node (i.e., the target variable) that may be resident in another\nMFrag and need to be added as input nodes for the resident node. For example, we defined a set\nof rules in the Define World Model step (e.g., Rule 2: causal(VehicleType, ThreatLevel)). 4 In this research, we used MySQL, an open-source relational database management system, and Structured Query Language (SQL) supported by MySQL.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 47,
"total_chunks": 115,
"char_count": 1027,
"word_count": 159,
"chunking_strategy": "semantic"
},
{
"chunk_id": "71df796e-3ba2-42b8-acfa-606ad1716074",
"text": "MTheory 4.1, for the target variable ThreatLevel in the MFrag Situation, its parent VehicleType is\ndefined in the MFrag Vehicle. The parent variable VehicleType should be an input node in the\nMFrag Situation. The following MFrag shows the updated result for the MFrag Situation using\nRule 2. MFrag 4.1: Situation\n1 [C: IsA (rgn, REGION), IsA (t, TIME)]\n2 [C: IsA (VID, VEHICLE)]\n3 [R: ThreatLevel (rgn, t)\n4 [IP: VehicleType (VID)]\n5 ] The primary key for VehicleType is VID associated with the entity VEHICLE, so IsA (v,\nVEHICLE) is added in the updated MFrag Situation (MFrag 4.1). 4.3.1.3.3 Construct Context Nodes\nIn this step, additional context nodes (other than IsA context nodes) are added to the updated\nMFrag.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 48,
"total_chunks": 115,
"char_count": 719,
"word_count": 120,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6451712e-f13f-4743-8da9-635a9809701c",
"text": "For this, we can use a joining script (e.g., SQL script 4.1) used for joining relations. In\nSQL script 4.1, there are conditions for joining. (e.g., Situation.rgn = Location.Location,\nSituation.t = Location.t, and Vehicle.VID = Location.v). These conditions are represented as\ncontext nodes. For example, the condition Situation.rgn = Location.Location can be a context\nnode rgn = Location(v, t1), where the ordinary variable rgn comes from the primary key rgn in\nthe relation Situation, the first v comes from the relation Location, and the second t1 comes from\nthe relation Location. Note that although the primary key t for the attribute ThreatLevel and the\nprimary key t for the attribute Location are same, they must be given different ordinary variable\nnames in the context nodes, because they refer to different entities. For example, Location(v, t)\nfor the attribute Location can be changed to Location(v, t1). The condition Situation.t =\nLocation.t can be a context node t = t1, where the first t comes from the relation Situation\nassociated with the attribute ThreatLevel and the second t1 comes from the relation Location\nassociated with the attribute Location. From the above process, the following script can be\ndeveloped.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 49,
"total_chunks": 115,
"char_count": 1235,
"word_count": 196,
"chunking_strategy": "semantic"
},
{
"chunk_id": "d2ad7630-34f1-4c95-8a7e-c642eaf5fba6",
"text": "MFrag 4.2: Situation\n1 [C: IsA (rgn, REGION), IsA (t, TIME)]\n2 [C: IsA (VID, VEHICLE)]\n3 [C: IsA (v, VEHICLE), IsA (t1, TIME)]\n4 [C: rgn = Location (v, t1)]\n5 [C: t = t1]\n6 [C: VID = v]\n7 [R: ThreatLevel (rgn, t)\n8 [IP: VehicleType (VID)]\n9 ] The primary key for the attribute Location are v and t, so the IsA context nodes IsA (v, VEHICLE)\nand IsA (t1, TIME) are added to MFrag 4.2. 4.3.1.3.4 Refine Context Nodes\nIn MFrag 4.2, we notice that two equal-context nodes (i.e., [C: t = t1] in Line 5 and [C: VID = v]\nin Line 6) indicate conditions that entities must be equal. Consequently, the equal-context node\nindicates that they are the same entity. The above script can be simplified by removing ordinary variables sharing the same entity and equal-context nodes as shown MFrag 4.2. MFrag 4.3: Situation\n1 [C: IsA (v, VEHICLE)]\n2 [C: IsA (t, TIME), IsA (rgn, REGION)]\n3 [C: rgn = Location (v, t)]\n4 [R: ThreatLevel (rgn, t)\n5 [IP: VehicleType (v)]\n6 ]",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 50,
"total_chunks": 115,
"char_count": 954,
"word_count": 182,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ab72c693-5e03-4dad-969c-9e113e21b2ef",
"text": "The Learn Reasoning Model step applies MTheory learning from relational data. In this research,\nwe focus on MEBN parameter learning given a training dataset D in RM and an initial MTheory\nM. Before introducing MEBN parameter learning, some definitions are introduced in the\nfollowing subsections. 4.3.2 Definitions for Class Local Distribution and Instance Local Distribution\nWe introduced Definition 2.2 (MFrag), Definition 2.3 (MNode), and Definition 2.4 (MTheory)\nfor MEBN in Section 2. An MTheory is composed of a set of MFrags F on the MTheory (i.e., M\n= {F1, F2, ... , Fn}) conditions (e.g., no-cycle, bounded causal depth, unique home MFrags, and\nrecursive specification condition [Laskey, 2008]) in Section 2. An MFrag F is composed of a set\nof MNodes N and a graph G for N (i.e., F = {N, G}). An MNode is composed of a function or\npredicate of FOL ff and a class local distribution (L) (i.e., N = {ff, L}). A CLD specifies how to define local distributions for instantiations of the MNode. The following\nCLD 4.1 and ILD 4.1 show illustrative examples for a CLD (Class Local Distribution) and an\nILD (Instance Local Distribution), respectively (recall that these examples were discussed in\nSection 2). CLD 4.1 defines a distribution for the threat level in a region.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 51,
"total_chunks": 115,
"char_count": 1274,
"word_count": 213,
"chunking_strategy": "semantic"
},
{
"chunk_id": "6695beba-184d-4878-9765-9f250b58c2ef",
"text": "If there are no tracked\nvehicles, the default probability distribution described in Line 6 is used. The default probability\ndistribution in a CLD is used for ILDs generated from the CLD, when no nodes meet the\nconditions defined in the MFrag for parent nodes. This CLD is composed of a class parent condition CPCi and a class-sub-local distribution CSDi. A CPC indicates a condition whether a CSD associated with the CPC is valid. The CSD (classsub-local distribution) is a sub-probability distribution which specifies how to define a local\ndistribution under a condition in an RV derived from an MNode. For example, the first line in\nCLD 4.1 is CPC1 which indicates a condition of the first class-sub-local distribution CSD1.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 52,
"total_chunks": 115,
"char_count": 726,
"word_count": 119,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2a8dd3bf-8dec-4663-931f-75e51c9e924e",
"text": "In this\ncase, the condition means that \"if there is an object whose type is Tracked\". If this is satisfied (i.e.,\nCPC1 is valid), then CSD1 is used. A CPC can be used for a default probability distribution. In\nsuch a case, it is called a default CPC specified by CPCd and also the CSD associated with CPCd\nis called a default CSD, CSDd. CLD 4.1 [Discrete CLD]: ThreatLevel(rgn, t)\n1 CPC1: if some v have (VehicleType = Tracked) [\n2 CSD1: High = Ɵ1.1, Low = Ɵ1.2\n3 CPC2: ] else if some v have (VehicleType = Wheeled) [\n4 CSD2: High = Ɵ2.1, Low = Ɵ2.2\n5 CPCd: ] else [\n6 CSDd: High = Ɵd.1, Low = Ɵd.2 ] For this case, we assume that the MNode contains two states (High and Low) and the discrete\nparent the RV VehicleType(v) has two states (Tracked and Wheeled).",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 53,
"total_chunks": 115,
"char_count": 759,
"word_count": 149,
"chunking_strategy": "semantic"
},
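The CPC/CSD selection logic of CLD 4.1 can be sketched as a small function. This is an illustrative sketch under two simplifying assumptions: the parameters are passed as plain numbers, and Low is treated as 1 minus High (the paper keeps Ɵi.1 and Ɵi.2 as separate parameters):

```python
def threat_level_cld(vehicle_types, theta1, theta2, theta_d):
    """Sketch of CLD 4.1: test each class parent condition in order and
    return the matching class-sub-local distribution, falling back to
    the default CSD when no condition holds."""
    if any(vt == "Tracked" for vt in vehicle_types):    # CPC1 -> CSD1
        high = theta1
    elif any(vt == "Wheeled" for vt in vehicle_types):  # CPC2 -> CSD2
        high = theta2
    else:                                               # CPCd -> CSDd
        high = theta_d
    return {"High": high, "Low": 1 - high}

print(threat_level_cld(["Tracked", "Wheeled"], 0.8, 0.4, 0.1))  # CSD1 wins
print(threat_level_cld([], 0.8, 0.4, 0.1))                      # default CSDd
```

Note the ordering mirrors the listing: a region with both tracked and wheeled vehicles satisfies CPC1 first.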
{
"chunk_id": "b20f5e7e-b715-4bf6-bc37-73902e98969d",
"text": "The pair of CSD1 and CSD1 (in Line 1 and 2) is for VehicleType(v) = Tracked. The pair of CSD2 and CSD2 (in Line 3 and 4) is\nfor VehicleType(v) = Wheeled. The pair of CPCd and CSDd (in Line 5 and 6) is for a default\ndistribution.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 54,
"total_chunks": 115,
"char_count": 228,
"word_count": 48,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5966f6e0-e76c-43c3-a52e-c0bc3b2fee97",
"text": "The following ILD 4.1 shows the ILD derived from the above CLD given one region entity\nregion1 and one vehicle entity v1. Like the CLD, the ILD is composed of an instance parent\ncondition IPCi and an instance-sub-local distribution ISDi. The IPC indicates a condition whether\nthe ISD associated with the IPC is valid. The ISD is a probability distribution which is defined in\nan ILD of a random variable. ILD 4.1: ILD with one region and one vehicle\n1 P(ThreatLevel_region1 | VehicleType_v1 )\n2 IPC1: if( VehicleType_V1 == Wheeled ) {\n3 ISD1: High = Ɵ1.1; Low = Ɵ1.2;\n4 IPC2: } else if( VehicleType_V1 == Wheeled ) {\n5 ISD2: High = Ɵ2.1; Low = Ɵ1.2;\n6 } Now, consider a situation in which there is a region containing no vehicles.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 55,
"total_chunks": 115,
"char_count": 730,
"word_count": 133,
"chunking_strategy": "semantic"
},
{
"chunk_id": "23ab49e8-20e9-4e81-b89c-1a245a99a2b1",
"text": "In this case, the\ndefault probability distribution in CLD 4.1 is used for such an ILD (i.e., ILD 4.2), because all\nconditions associated with parent nodes (i.e., CPC1 and CPC2 in CLD 4.1) are not valid. ILD 4.2: Default ILD with one region without any vehicle\n1 P(ThreatLevel_region1) =\n2 IPC1: {\n3 ISD1: High = Ɵd.1; Low = Ɵd.2;\n4 } Now, we introduce the ILD formally.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 56,
"total_chunks": 115,
"char_count": 369,
"word_count": 68,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c9420ae0-f8fa-4aef-9d71-9de287a8677e",
"text": "Definition 4.2 (Instance Local Distribution) An instance local distribution LI for a random\nvariable rv in a Bayesian network (Definition 2.1) is a function defining the probability\ndistribution for the random variable rv. It consists of a set of pairs (IPCi, ISDi) of an instance\nparent condition IPCi and an instance-sub-local distribution ISDi, and a rule for mapping an\ninstance parent condition IPCi into an instance-sub-local distribution ISDi. An ILD is derived from a CLD given entity information. For example, ILD 4.1 in the above\nexample is derived from CLD 4.1 given the three vehicle entities. Once an ILD is derived from a\nCLD, the ILD contains a set of pairs (IPCi, ISDi). In the following, the CLD is introduced\nformally. Definition 4.3 (Class Local Distribution) A class local distribution (CLD) LC (or simply L) for\nan MNode (Definition 2.3) is a function defining uncertainty for the MNode. It consists of a set of\npairs (CPCi, CSDi) of a class parent condition CPCi and a class-sub-local distribution CSDi, and\na rule for mapping it (CPCi, CSDi) into an instance local distribution (ILD) LI. A class local distribution defines a general rule for specifying distributions for instantiations of its\nrandom variables for specific entities. A CLD can refer to a parameterized family of distributions\n(e.g., normal distribution, categorical distribution). In this case, the CLD definition includes a\nspecification of the parameters. For example, a class local distribution CLD1 can represent a set of\nnormal distributions for CSDs in CLD1 and this CLD1 can be called a normal distribution CLD\n(i.e., TYPE(CLD1) = Normal Distribution CLD). A CSD can contain a set of parameters for its For example, CSD1 in CLD 4.1 is a distribution containing two parameters πœƒπœƒ1.1 and\nπœƒπœƒ1.2. We can think of a parameter function returning a set of parameters from a CSD (i.e., Θ\n(CSD1) = {πœƒπœƒ1.1, πœƒπœƒ1.2}).",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 57,
"total_chunks": 115,
"char_count": 1901,
"word_count": 310,
"chunking_strategy": "semantic"
},
{
"chunk_id": "cfa39f87-2144-4f36-8eda-73730723056c",
"text": "CLDs may be discrete or continuous. According to combination of the CLD types and parent\nCLD types, there are six categories for a CLD: (1) a discrete CLD with discrete parents, (2) a\ndiscrete CLD with continuous parents, (3) a discrete CLD with both discrete and continuous\nparents, (4) a continuous CLD with discrete parents, (5) a continuous CLD with continuous\nparents, and (6) a continuous CLD with both discrete and continuous parents. When a node has discrete parent nodes, influence counts (IC), the number of distinct entities in\nCPC, can be used to define a CLD. For example, we can think of a CLD for the MNode\nThreatLevel(rgn, t) described by LPDL as shown the following. CLD 4.2 is the case of a discrete\nCLD with discrete parents. CLD 4.2 [Inverse Cardinality Average]: ThreatLevel (rgn, t)\n1 CPC1: if some v have (VehicleType = Tracked ) [\n2 CSD1: High = 1 - Ɵ/(CARDINALITY(v) + 1), Low = 1 - High\n3 CPCd: ] else [\n4 CSDd: High = 0.1, Low = 0.9\n5 ] We name CLD 4.2 an Inverse Cardinality Average. Thus, the type of the class local distribution\nis the inverse cardinality average (i.e., TYPE(CLD 4.2) = Inverse Cardinality Average CLD). CLD 4.2 consists of two CSDs (CSD1 and CSDd). CSD1 contains a parameter πœƒπœƒ, where 0 < πœƒπœƒ < 1,\nas shown CLD 4.2. CLD 4.2 represents probabilistic knowledge of how the threat level of a\nregion is measured depending on the vehicle type of detected objects. For example, if in a region\nthere are many tracked vehicles (e.g., Tanks), the threat level of the region at a certain time will\nbe high. The influence counting (IC) function CARDINALITY(obj) returns the number of\ntracked vehicles from parents nodes. If there are many tracked vehicles, the probability of the\nstate High increases. If there is no tracked vehicles, the default probability distribution (i.e., CSDd)\ndescribed in Line 4 is used for the CLD of the MNode ThreatLevel(rgn, t).",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 58,
"total_chunks": 115,
"char_count": 1893,
"word_count": 333,
"chunking_strategy": "semantic"
},
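The Inverse Cardinality Average of CLD 4.2 can be sketched directly from its formula. This is an illustrative sketch; the default theta value 0.9 is an assumption for the example, not a value from the paper:

```python
def inverse_cardinality_average(n_tracked, theta=0.9):
    """Sketch of CLD 4.2: P(High) = 1 - theta/(CARDINALITY(v) + 1) when
    some tracked vehicle is present (CPC1); otherwise the default CSDd
    with P(High) = 0.1, P(Low) = 0.9."""
    if n_tracked > 0:   # CPC1 -> CSD1
        high = 1 - theta / (n_tracked + 1)
    else:               # CPCd -> CSDd
        high = 0.1
    return {"High": high, "Low": 1 - high}

print(inverse_cardinality_average(1))   # 1 - 0.9/2: one tracked vehicle
print(inverse_cardinality_average(9))   # 1 - 0.9/10: many tracked vehicles
print(inverse_cardinality_average(0))   # default peacetime distribution
```

As the text says, P(High) grows toward 1 as the number of tracked vehicles grows.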
{
"chunk_id": "3faf66d3-77b6-48f0-ad4f-66a0b6232f17",
"text": "Thus, it indicates a\nsituation in peace time. Here is another CLD example. CLD 4.3 shows the case of the continuous CLD with hybrid\nparents.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 59,
"total_chunks": 115,
"char_count": 140,
"word_count": 25,
"chunking_strategy": "semantic"
},
{
"chunk_id": "7f03c90e-c5b2-44e5-b01a-37a922a39dfb",
"text": "For this case, we assume that there is an MNode Range(v, t) which is a parent node of the\nMNode ThreatLevel(rgn, t) and means a range between the region rgn and the vehicle v at a time CLD 4.3 [Hybrid Cardinality]: ThreatLevel(rgn, t)\n1 CPC1: if some v have (VehicleType = Tracked ) [\n2 CSD1: CARDINALITY(v) / average( Range ) + NormalDist(Ɵ, 5)\n3 CPCd: ] else [\n4 CSDd: NormalDist(10, 5)\n5 ] The meaning of CLD 4.3 is that the threat level in the region is the number of tracked vehicles\ndivided by an average of the ranges of vehicles and then plus a normally distributed error with a\nmean of Ɵ and a variance of 5. If there is no tracked vehicles, the default probability distribution,\nNormalDist(10, 5), described in Lines 4 is used.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 60,
"total_chunks": 115,
"char_count": 737,
"word_count": 138,
"chunking_strategy": "semantic"
},
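The Hybrid Cardinality CLD 4.3 can be sketched as a sampling function. This is an illustrative sketch; the text gives the spread of the noise as a variance of 5, and for brevity the value 5 is passed to `random.gauss` as a standard deviation here, which is an assumption of the example:

```python
import random

def hybrid_cardinality(tracked_ranges, theta=0.0, rng=random):
    """Sketch of CLD 4.3: threat level = CARDINALITY(v) / average(Range)
    + Normal(theta, .) noise when tracked vehicles exist (CPC1);
    otherwise a draw from the default Normal(10, .) (CSDd)."""
    if tracked_ranges:  # CPC1 -> CSD1
        avg_range = sum(tracked_ranges) / len(tracked_ranges)
        return len(tracked_ranges) / avg_range + rng.gauss(theta, 5.0)
    return rng.gauss(10.0, 5.0)  # CPCd -> CSDd

random.seed(42)
print(hybrid_cardinality([2.0, 4.0]))  # 2 vehicles / average range 3.0, plus noise
print(hybrid_cardinality([]))          # drawn from the default distribution
```

Injecting `rng` keeps the deterministic part (count divided by average range) testable separately from the noise.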
{
"chunk_id": "b46728c0-ce81-42e3-af81-a0e2808a6418",
"text": "If there are continuous parents, various\nnumerical aggregating (AG) functions (e.g., average, sum, and multiply) can be used. example, if there are three continuous parents Range1, Range2, and Range3, the numerical\naggregating functions average, sum, and multiply will construct three IPDs IPD1 = (Range1 +\nRange2 + Range3)/3, IPD2 = (Range1 + Range2 + Range3), and IPD3 = (Range1 * Range2 *\nRange3), respectively. The above CLDs 4.2 and 4.3 are based on an influence counting (IC) function for discrete parents\nand an aggregating (AG) function for continuous parents. Using such a function is related to the\naggregating influence problem, which treats many instances from a parent RV. The CLD 4.1 uses a very simple aggregation rule that treats all counts greater than zero as\nequivalent. In other words, a shared parameter in a CSD is learned from all instances of the parent\nRV with counts greater than zero. For example, with CLD 4.1, suppose that there are two cases:\nIn Case 1, there is one tracked vehicle. And in Case 2, there are two tracked vehicles. For Case 1,\none VehicleType RV is constructed and CSD1 (Line 1) in CLD 4.1 is used for the parameter of the\ndistribution for the ThreatLevel. For Case 2, two VehicleType RVs are constructed and also CSD1\n(Line 1) in CLD 4.1 is used for the parameter of the distribution for the ThreatLevel, although\nthere are two tracked vehicles. Thus, the shared parameter (i.e., High = Ɵ1.1 and Low = Ɵ1.2) for\nCSD1 in CLD 4.1 is used regardless of the number of the parent instances (i.e., one vehicle in\nCase 2, two vehicles in Case 2, and so on). In the following sections, we use such a simple\naggregation rule for MEBN parameter learning.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 61,
"total_chunks": 115,
"char_count": 1691,
"word_count": 292,
"chunking_strategy": "semantic"
},
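The numerical aggregating (AG) functions described in the chunk above can be sketched in a few lines of Python; the function names and the sample values are illustrative, not from the paper.

```python
# Sketch of the numerical aggregating (AG) functions average, sum, and
# multiply, applied to instances of continuous parent RVs (Range1..Range3).
# Function names and sample values are illustrative, not from the paper.

def agg_average(values):
    # IPD1 = (Range1 + Range2 + Range3) / 3
    return sum(values) / len(values)

def agg_sum(values):
    # IPD2 = Range1 + Range2 + Range3
    return sum(values)

def agg_multiply(values):
    # IPD3 = Range1 * Range2 * Range3
    product = 1.0
    for v in values:
        product *= v
    return product

ranges = [2.0, 4.0, 6.0]  # hypothetical Range1, Range2, Range3
ipd1, ipd2, ipd3 = agg_average(ranges), agg_sum(ranges), agg_multiply(ranges)
```

Each function collapses an arbitrary number of parent instances into a single influencing parent distribution, which is what makes the aggregation rule independent of how many instances a case contains.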
{
"chunk_id": "48dd774b-6ed5-486c-8fb8-9c7504b861bf",
"text": "4.3.3 Dataset for Class-Sub-Local Distribution (CSD)\nA CLD can contain class parent conditions (CPCs). Each CPC requires its own dataset, from which the class-sub-local distribution (CSD) associated with that CPC is learned. For example, CLD 4.1 contains three CPCs (CPC1, CPC2, and CPCd), and each requires its own dataset. Such datasets fall into two categories: (1) a dataset for a common CPC (e.g., CPC1 and CPC2) and (2) a dataset for a default CPC (e.g., CPCd). In this section, we first introduce how to obtain the dataset for a common CPC, and then how to obtain the dataset for a default CPC.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 62,
"total_chunks": 115,
"char_count": 602,
"word_count": 106,
"chunking_strategy": "semantic"
},
{
"chunk_id": "83070adc-5153-4ad1-bd5b-19175cc789d0",
"text": "Table 3 is a joined dataset for the common CPCs (i.e., CPC1 and CPC2), with columns CPC, Case, VehicleType, Vehicle.VID (= Location.v), Situation.t (= Location.t), Situation.rgn (= Location.Location), and ThreatLevel:\nCPC1 (GC1): Case 1: Tracked, Vehicle13, Time18, Region6, High\nCase 2: Tracked, Vehicle15, Time21, Region7, High\nCase 3: Tracked, Vehicle17, Time24, Region8, Low\nCase 4: Tracked, Vehicle19, Time27, Region9, High\nCase 8: Tracked, Vehicle1, Time2, Region0, High\nCase 9: Tracked, Vehicle2, Time5, Region1, Low\nCase 11: Tracked, Vehicle4, Time8, Region2, High\nCase 12: Tracked, Vehicle5, Time8, Region2, High\nCase 13: Tracked, Vehicle6, Time11, Region3, Low\nCase 14: Tracked, Vehicle7, Time11, Region3, Low\nCase 15: Tracked, Vehicle8, Time14, Region4, High\nCase 16: Tracked, Vehicle9, Time14, Region4, High\nCPC2 (GC2): Case 5: Wheeled, Vehicle21, Time30, Region10, High\nCase 6: Wheeled, Vehicle23, Time33, Region11, Low\nCase 7: Wheeled, Vehicle0, Time2, Region0, High\nCase 10: Wheeled, Vehicle3, Time5, Region1, Low\nCase 17: Wheeled, Vehicle10, Time17, Region5, High\nCase 18: Wheeled, Vehicle11, Time17, Region5, High\nIt can be sorted according to each CPC as shown in Table 4. For example, CPC1 in CLD 4.1 defines that it is only valid if a case contains a tracked vehicle.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 63,
"total_chunks": 115,
"char_count": 1083,
"word_count": 170,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f0b552c9-d8e8-4e84-bbae-f448eed329a3",
"text": "Therefore, by CPC1, we can sort the joined dataset in Table 3. Thus, cases 1, 2, 3, 4, 8, 9, 11, 12, 13, 14, 15, and 16 are selected for CSD1, while the other cases are used for CSD2 (Table 4). We call this dataset a CSD dataset. Definition 4.4 (CSD Dataset) Let there be a dataset D = {C1, C2, …, Cn}, where Ci is each case (or row), and a CLD L = {(CPC1, CSD1), (CPC2, CSD2), …, (CPCm, CSDm)}. A CSD Dataset (CD) is a dataset which is grouped by matching each class parent condition CPCj of L against each case Ci in D. The set of grouped cases GCj = {C1, C2, …, Cl} is assigned to the corresponding class parent condition CPCj. For an RV, if there are cases for which the conditions associated with the parent RVs are not satisfied, a dataset for a default CPC is required. The dataset for the default CPC (i.e., CPCd) can be obtained by subtracting the joined dataset from the original dataset. This is necessary because we need a dataset which doesn't include cases for which the conditions associated with the parent RVs are satisfied. For example, in Table 2, there is an original dataset for the ThreatLevel RV (i.e., the dataset in the relation Situation). Table 3 shows a joined dataset associated with CPC1 and CPC2. The dataset for the default CPCd can be derived by subtracting the joined dataset (Table 3) from the original dataset (Table 2). For example, the following SQL script extracts the default dataset for the ThreatLevel RV. SQL script 4.2: SQL script for the default dataset of the ThreatLevel RV\n1 SELECT\n2 Situation.rgn, Situation.t, Situation.ThreatLevel\n3 FROM Situation\n4 WHERE NOT EXISTS (\n5 SELECT *\n6 FROM Location, Vehicle\n7 WHERE\n8 Situation.rgn = Location.Location &&\n9 Situation.t = Location.t &&\n10 Vehicle.VID = Location.v\n11 ) The dataset for the ThreatLevel RV comes from the relation Situation (Line 3).",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 64,
"total_chunks": 115,
"char_count": 1872,
"word_count": 340,
"chunking_strategy": "semantic"
},
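The grouping that Definition 4.4 describes, assigning each case to the GCj of the CPCj it matches, can be sketched as follows; the case tuples are a small slice of the Table 3/4 data, and the condition predicates are our own stand-ins for CPC1 and CPC2.

```python
# Group a dataset's cases by class parent conditions (CPCs) to form CSD
# datasets (Definition 4.4). The condition lambdas are stand-ins for
# CPC1/CPC2 of CLD 4.1; the cases are a slice of the Table 3/4 data.

cases = [  # (case_id, VehicleType, ThreatLevel)
    (1, "Tracked", "High"), (5, "Wheeled", "High"),
    (9, "Tracked", "Low"), (10, "Wheeled", "Low"),
]

cpcs = {
    "CPC1": lambda c: c[1] == "Tracked",  # case contains a tracked vehicle
    "CPC2": lambda c: c[1] == "Wheeled",  # case contains a wheeled vehicle
}

# GCj: the grouped cases matching CPCj; each GCj trains the matching CSDj.
grouped = {name: [c for c in cases if cond(c)] for name, cond in cpcs.items()}
```

Each GCj then serves as the training dataset for the parameters of its CSDj, exactly as GC1 and GC2 do for CSD1 and CSD2 in the text.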
{
"chunk_id": "6ee87623-5378-4bc0-bb00-9aec7495105d",
"text": "When the dataset is selected, a condition (Line 4) requires that it exclude the joined dataset derived by Lines 5~11. Using this script, the default dataset for the ThreatLevel RV is obtained; it represents the threat level in a region where there is no vehicle. In the following subsections, a training dataset D means the CSD dataset for a certain CLD. 4.3.4 Parameter Learning\nIn this section, we introduce a parameter learning method to estimate the parameters of a class local distribution L given a training dataset D (i.e., a CSD dataset).",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 65,
"total_chunks": 115,
"char_count": 567,
"word_count": 97,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c7f0c211-09d5-4cfd-9887-492e9f76af3b",
"text": "We can consider a basic type of CLD for the discrete case and the continuous case. For the discrete case, a Dirichlet distribution can be used (Section 4.3.4.1), while for the continuous case, a conditional Gaussian distribution can be used (Section 4.3.4.2). We introduce parameter learning for these types. In Definition 2.2 (MNode), a predicate RV for MEBN was discussed. Learning the parameter of the distribution for such a predicate RV, which corresponds to a Boolean RV with possible values true and false, from a relational database is discussed in Section 4.3.4.3. 4.3.4.1 Dirichlet Distribution Parameter Learning\nDetails on Dirichlet distribution parameter learning can be found in Appendix A.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 66,
"total_chunks": 115,
"char_count": 692,
"word_count": 107,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5f42b825-3180-49a7-b3a9-7a278e45eeda",
"text": "The Dirichlet distribution is commonly used because it is conjugate to the multinomial distribution. With a Dirichlet prior distribution, the posterior predictive distribution has a simple form [Heckerman et al., 1995][Koller & Friedman, 2009]. As an illustrative example of Dirichlet distribution parameter learning for a CLD, we use CLD 4.1. Parameter learning for this CLD is to estimate CSD1's parameters (Ɵ1.1 and Ɵ1.2), CSD2's parameters (Ɵ2.1 and Ɵ2.2), and CSDd's parameters (Ɵd.1 and Ɵd.2). To estimate these parameters, we can use the following predictive distribution based on a Dirichlet conjugate prior, discussed in Appendix A. Equation 4.1 shows the posterior predictive distribution for the value xk of the RV X given a parent value a, the dataset D, and a hyperparameter Ξ± for the Dirichlet conjugate prior.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 67,
"total_chunks": 115,
"char_count": 816,
"word_count": 126,
"chunking_strategy": "semantic"
},
{
"chunk_id": "453c6c82-a3e2-4a87-81da-daf6fbe24b50",
"text": "P(X = xk | A = a, D, Ξ±) = (Ξ±xk|a + C[xk, a]) / Ξ£q=1..N (Ξ±xq|a + C[xq, a]), (4.1) where a value xk ∈ Val(X), a ∈ Val(Pa(X) = A), C[xq, a] is the number of times outcome xq of X together with its parent outcome a of A appears in D, Ξ± = {Ξ±x1|a, …, Ξ±xN|a} is a hyperparameter, and N = |Val(X)|. For the case of CPC1 and CSD1, we can use the set of grouped cases GC1 in Table 4 as a training dataset. CSD1 has two parameters, Ɵ1.1 (for High) and Ɵ1.2 (for Low). For the parameter Ɵ1.1, we can use Equation 4.1 as Ɵ1.1 = P(ThreatLevel = High | VehicleType = Tracked, D = GC1, Ξ±), where Ξ± = {Ξ±High|Tracked, Ξ±Low|Tracked}. If there were previously one case for High|Tracked and two cases for Low|Tracked, Ξ±High|Tracked = 1 and Ξ±Low|Tracked = 2 are used. The same approach is used for the case of CPC2 and CSD2.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 68,
"total_chunks": 115,
"char_count": 837,
"word_count": 159,
"chunking_strategy": "semantic"
},
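Equation 4.1 can be computed directly from counts; below is a minimal sketch. The hyperparameters are the illustrative ones from the text (Ξ±High|Tracked = 1, Ξ±Low|Tracked = 2), while the counts stand in for a hypothetical GC1 dataset and are not from the paper.

```python
# Posterior predictive for a discrete RV under a Dirichlet prior (Eq. 4.1):
# P(X = xk | a, D, alpha) = (alpha[xk] + C[xk]) / sum_q (alpha[xq] + C[xq])

def dirichlet_predictive(counts, alpha):
    """counts, alpha: dicts mapping each value xk of X (for a fixed parent
    state a) to its observed count C[xk, a] and hyperparameter alpha_xk|a."""
    total = sum(alpha[x] + counts[x] for x in alpha)
    return {x: (alpha[x] + counts[x]) / total for x in alpha}

# Prior pseudo-counts from the text (1 for High|Tracked, 2 for Low|Tracked),
# plus counts from a hypothetical GC1 dataset with 8 High and 4 Low cases.
theta = dirichlet_predictive(counts={"High": 8, "Low": 4},
                             alpha={"High": 1, "Low": 2})
```

Here `theta["High"]` plays the role of Ɵ1.1 and `theta["Low"]` of Ɵ1.2; the pseudo-counts simply add to the observed counts before normalizing.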
{
"chunk_id": "6d9fcf46-5139-4251-8572-b97a7a492b8c",
"text": "To learn the parameters for CSDd, the default dataset discussed in Section 4.3.3 is required. The parameters Ɵd.1 and Ɵd.2 can be learned from the default dataset using Equation 4.1, as in the case of CPC1 and CSD1. 4.3.4.2 Conditional Linear Gaussian Distribution Parameter Learning\nParameters of a conditional Gaussian distribution can be estimated using multiple regression. In this section, we introduce parameter learning of a conditional linear Gaussian CLD using linear regression. The following CLD shows an illustrative example of a conditional linear Gaussian CLD for the RV Speed_RPT(r, t). The CLD of the RV is a continuous CLD with hybrid parents (MTI_Condition and Speed). In this case, we assume that the discrete parent RV MTI_Condition(v, mti, t) has two states (Good and Bad) and the RV Speed(v, t) is continuous. CLD 4.4 [Conditional Linear Gaussian]: Speed_RPT(r, t)\n1 CPC1: if some v.mti.t have (MTI_Condition = Good) [\n2 CSD1: Ɵ1.0 + Ɵ1.1 * Speed + NormalDist(0, Ɵ1.2)\n3 CPC2: if some v.mti.t have (MTI_Condition = Bad) [\n4 CSD2: Ɵ2.0 + Ɵ2.1 * Speed + NormalDist(0, Ɵ2.2)\n5 CPCd: ] else [\n6 CSDd: Ɵd.0 + NormalDist(0, Ɵd.2)\n7 ] Parameter learning for this CLD is to estimate CSD1's parameters (Ɵ1.0, Ɵ1.1, and Ɵ1.2), CSD2's parameters (Ɵ2.0, Ɵ2.1, and Ɵ2.2), and CSDd's parameters (Ɵd.0 and Ɵd.2). We can write this situation more formally.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 69,
"total_chunks": 115,
"char_count": 1365,
"word_count": 225,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ed05257d-4264-40ca-9c8a-5adc2029db71",
"text": "If X is a continuous node with n continuous parents U1, …, Un and m discrete parents A1, …, Am, then the conditional distribution p(X | u, a) given parent states U = u and A = a has the following form: p(X | u, a) = N(L(a)(u), Οƒ(a)), (4.2) where L(a)(u) = m(a) + b1(a)u1 + … + bn(a)un is a linear function of the continuous parents, with intercept m(a), coefficients bi(a), and standard deviation Οƒ(a) that depend on the state a of the discrete parents. Given CPCj (i.e., given the state aj), estimating the intercept m(aj), coefficients bi(aj), and standard deviation Οƒ(aj) corresponds to estimating the CSD's parameters Ɵj.0, Ɵj.1, and Ɵj.2, respectively. The following shows multiple linear regression, adapted from [Rencher, 2003]. L(a)(u) can be rewritten if we suppose that there are k observations (note that for a single CSD case we can omit the state a, because it is known): Li(u) = m + b1 ui1 + … + bn uin + Οƒi, i = 1, …, k, (4.3) where i indexes the observations.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 70,
"total_chunks": 115,
"char_count": 1038,
"word_count": 176,
"chunking_strategy": "semantic"
},
{
"chunk_id": "51b366c0-0dc1-4f3e-b01c-415f47c4141c",
"text": "For convenience, we can write the above equation more compactly in matrix notation as l = Ub + Οƒ, (4.4) where l denotes the vector of instances for the observations, U denotes a matrix containing all continuous parents in the observations, b denotes a vector containing an intercept m and the set of coefficients bi, and Οƒ denotes a vector of regression residuals. These variables have the following vector and matrix forms: l = [L1(u), L2(u), …, Lk(u)]áµ€; U is the k Γ— (n+1) matrix whose i-th row is [1, ui1, …, uin]; b = [m, b1, …, bn]áµ€; Οƒ = [Οƒ1, Οƒ2, …, Οƒk]áµ€. (4.5) From these settings, we can derive an optimal vector bΜ‚ for the intercept and the set of coefficients.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 71,
"total_chunks": 115,
"char_count": 674,
"word_count": 129,
"chunking_strategy": "semantic"
},
{
"chunk_id": "3918d7a6-f9d9-43ca-a1d4-005cea019be3",
"text": "bΜ‚ = (Uáµ€U)⁻¹Uáµ€l (4.6) Also, we can derive the optimal standard deviation σ̂ from the above linear algebra terms [Rencher, 2003]: σ̂ = sqrt((l βˆ’ UbΜ‚)áµ€(l βˆ’ UbΜ‚) / (k βˆ’ n βˆ’ 1)). (4.7) Using the above equations, the optimal parameters can be estimated. For CPC1 in CLD 4.4, CSD1 can be written as p(Speed_RPT | Speed, MTI_Condition = Good) = N(Ɵ1.0(Good) + Speed * Ɵ1.1(Good), Ɵ1.2(Good)).",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 72,
"total_chunks": 115,
"char_count": 381,
"word_count": 60,
"chunking_strategy": "semantic"
},
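For a CSD with a single continuous parent, as in CLD 4.4, Equations 4.6 and 4.7 reduce to ordinary simple linear regression, which can be written in pure Python without matrix libraries. The sketch below works under that n = 1 assumption, and the sample data are made up (noiseless, so the fitted σ̂ is zero).

```python
# Least-squares fit for one CSD with a single continuous parent (n = 1),
# i.e., Eqs. 4.6/4.7 specialised to Speed_RPT = m + b1 * Speed + N(0, sigma).
from math import sqrt

def fit_linear_csd(u, l):
    """u: parent values (e.g., Speed); l: observed values (e.g., Speed_RPT).
    Returns (m, b1, sigma), i.e., the CSD parameters theta_j.0..theta_j.2."""
    k = len(u)
    u_bar, l_bar = sum(u) / k, sum(l) / k
    b1 = sum((ui - u_bar) * (li - l_bar) for ui, li in zip(u, l)) / \
         sum((ui - u_bar) ** 2 for ui in u)
    m = l_bar - b1 * u_bar
    residual_ss = sum((li - (m + b1 * ui)) ** 2 for ui, li in zip(u, l))
    sigma = sqrt(residual_ss / (k - 1 - 1))  # Eq. 4.7 with n = 1
    return m, b1, sigma

# Hypothetical noiseless data generated by Speed_RPT = 2 + 0.5 * Speed.
m, b1, sigma = fit_linear_csd([10.0, 20.0, 30.0, 40.0], [7.0, 12.0, 17.0, 22.0])
```

With more than one continuous parent, the same estimate comes from solving the normal equations of Eq. 4.6 with a linear-algebra routine instead of the closed form above.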
{
"chunk_id": "1ab0383c-894c-40c0-895b-ee05b6d09abd",
"text": "In this section, we discussed how to learn parameters for the conditional linear Gaussian CLD\nusing linear regression. For a conditional nonlinear Gaussian CLD, we can use nonlinear\nregression. In this section, we didn't consider incremental parameter learning for the conditional\nlinear Gaussian CLD.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 73,
"total_chunks": 115,
"char_count": 301,
"word_count": 44,
"chunking_strategy": "semantic"
},
{
"chunk_id": "42013cc1-bfea-4b81-a6ce-1139468e9d4f",
"text": "For this, we can use Bayesian regression [Press, 2003], which is more robust to overfitting than traditional multiple regression. 4.3.4.3 Parameter Learning for the Distribution of the Predicate/Boolean RV\nThe parameter of the distribution for a predicate or Boolean RV (Definition 2.2) can be learned from a relational database. To introduce predicate RV parameter learning, the relations in Table 5 are used as an illustrative example to learn the parameter of the distribution for a predicate RV Communicate. The table contains three relations: Vehicle, Communicate, and Meet. The relation Communicate means that two vehicles communicate with each other by exchanging radio waves. The relation Meet means that two vehicles meet each other by being located in close proximity. Table 5 Communicate Relation and Meet Relation\nVehicle (VID): v1, v2, v3, v4\nCommunicate (VID1/Vehicle, VID2/Vehicle): (v1, v2), (v2, v3), (v3, v4)\nMeet (VID1/Vehicle, VID2/Vehicle): (v1, v2), (v2, v3), (v1, v4)",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 74,
"total_chunks": 115,
"char_count": 979,
"word_count": 151,
"chunking_strategy": "semantic"
},
{
"chunk_id": "fdd85c28-8a69-42c1-95a4-f483890c7d59",
"text": "The above relationship relations (i.e., Communicate and Meet) show the true cases for the predicates. For example, the relation Communicate contains the true cases {{v1, v2}, {v2, v3}, {v3, v4}}. However, relationship relations do not explicitly represent the false cases for the predicates. By converting the above relations to the following relations, we can see the false cases explicitly. This conversion is justified by the closed-world assumption (CWA).",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 75,
"total_chunks": 115,
"char_count": 416,
"word_count": 62,
"chunking_strategy": "semantic"
},
{
"chunk_id": "2d667e17-e269-40db-b7a6-e28a0414a483",
"text": "Thus, if a case in a relationship relation is not listed as true, it is assumed to be false. Table 6 Converted Relations from Communicate Relation and Meet Relation\nVehicle (VID): v1, v2, v3, v4\nCommunicate (VID1/Vehicle, VID2/Vehicle, Communicate): (v1, v2, True), (v1, v3, False), (v1, v4, False), (v2, v3, True), (v2, v4, False), (v3, v4, True)\nMeet (VID1/Vehicle, VID2/Vehicle, Meet): (v1, v2, True), (v1, v3, False), (v1, v4, True), (v2, v3, True), (v2, v4, False), (v3, v4, False)\nThe relation Vehicle in Table 6 contains four vehicle entities (v1 ~ v4). These entities can be used to develop the possible combinations of two vehicles interacting with each other, as shown in the first and second columns of the relations Communicate and Meet (i.e., {{v1, v2}, {v1, v3}, {v2, v3}, {v1, v4}, {v2, v4}, {v3, v4}}). The relation Communicate lists the possible combinations of two vehicles and contains an attribute Communicate indicating whether the two vehicles communicate (True) or not (False). From the data in the relation Communicate in Table 5, the true cases for the attribute Communicate in the relation Communicate in Table 6 can be derived. The true cases for the attribute Meet in the relation Meet in Table 6 are derived using the same approach. Now, as we can see in Table 6, the relations Communicate and Meet explicitly contain the true and false cases for the attributes Communicate and Meet, respectively. To construct the set of combinations of the four vehicles in the relation Vehicle, we can use the following script. SQL script 4.3: Combination between the four vehicles\n1 CREATE TABLE\n2 All_Vehicles AS\n3 ( SELECT\n4 t1.VID AS VID1,\n5 t2.VID AS VID2\n6 FROM vehicle AS t1\n7 JOIN vehicle AS t2\n8 ON t1.VID < t2.VID) The above script generates a new relation called All_Vehicles, whose dataset contains {{v1, v2}, {v1, v3}, {v2, v3}, {v1, v4}, {v2, v4}, {v3, v4}}. The script selects each combination of the four vehicles exactly once.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 76,
"total_chunks": 115,
"char_count": 1971,
"word_count": 341,
"chunking_strategy": "semantic"
},
{
"chunk_id": "138dccc2-417c-4a38-ba2e-06e7c401d7d5",
"text": "To generate the dataset for the relation Communicate in Table 6, we can use the following script. SQL script 4.4: For the Relation Communicate in Table 6\n1 SELECT t1.VID1, t1.VID2,\n2 IF(EXISTS\n3 (SELECT * FROM Communicate t2\n4 WHERE t1.VID1 = t2.VID1 && t1.VID2 = t2.VID2),\n5 'True', 'False') AS Communicate\n6 FROM All_Vehicles t1 The above script compares the data in the relations All_Vehicles and Communicate. If a pair of vehicles in All_Vehicles also appears in Communicate, the value True is assigned to the attribute Communicate; otherwise, the value False is assigned.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 77,
"total_chunks": 115,
"char_count": 553,
"word_count": 92,
"chunking_strategy": "semantic"
},
{
"chunk_id": "1740d95b-127b-4b24-adfb-3813a0b32ec4",
"text": "To generate the dataset in the relation Meet in Table 6, we can use the same approach. For the relations in Table 6, we assume the following CLD 4.5, in which the meeting between two vehicles may influence the event of communication between the vehicles (i.e., P(Communicate | Meet)). In CLD 4.5, CPC1 (Line 1) indicates a condition where two vehicles meet. CPC2 (Line 3) indicates a condition where two vehicles don't meet. For example, CSD2 (Line 4) represents the probability that two vehicles VID1 and VID2 communicate with each other in the situation where the two vehicles are not nearby. CLD 4.5 [Predicate RV]: Communicate (VID1, VID2)\n1 CPC1: if some VID1.VID2 have (Meet = True) [\n2 CSD1: True = Ɵ1.1, False = Ɵ1.2\n3 CPC2: if some VID1.VID2 have (Meet = False) [\n4 CSD2: True = Ɵ2.1, False = Ɵ2.2\n5 ] To learn the parameters in CLD 4.5, CSD datasets for CPC1 and CPC2 are required. To generate such datasets, the processes in Section 4.3.1 Map to Reasoning Model can be used. For example, a joined dataset between the relations Communicate and Meet is generated by matching the same vehicle entities in both relations. The joined dataset contains four attributes: VID1, VID2, Communicate, and Meet (e.g., {{v1, v2, True, True}, …, {v3, v4, True, False}}). Then, parameter learning as described in Section 4.3.4 Parameter Learning is used to construct the parameters in CLD 4.5 (i.e., P(Communicate | Meet)). 4.4 Test Reasoning Model\nIn the Test Reasoning Model step, a learned reasoning model is evaluated to determine whether to accept it. The accepted reasoning model is the output of this step. This step is decomposed into two sub-steps (Fig. 8): (1) an Experiment Reasoning Model step and (2) an Evaluate Experimental Results step. Fig. 8 Test Reasoning Model",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 78,
"total_chunks": 115,
"char_count": 1762,
"word_count": 301,
"chunking_strategy": "semantic"
},
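The five-step experiment described above amounts to scoring the model on each test case against ground truth and averaging the scores. A minimal sketch follows; the model, the test cases, and the Brier score stand-in for a proper metric such as the continuous ranked probability score are all hypothetical, not from the paper.

```python
# Steps 1-5 of the Experiment Reasoning Model step: exercise the learned
# model on each test case, score it against ground truth with a metric,
# and integrate the scores into one result value (here, an average).
# The model and metric below are hypothetical stand-ins.

def brier_score(p_true, outcome):
    # Simple proper scoring rule for a binary query (stand-in for CRPS).
    return (p_true - (1.0 if outcome else 0.0)) ** 2

def evaluate(model, test_cases):
    scores = [brier_score(model(evidence), truth)   # steps 1-3, per case
              for evidence, truth in test_cases]    # step 4: all cases
    return sum(scores) / len(scores)                # step 5: result value

# Hypothetical learned model: P(ThreatLevel = High) from one evidence flag.
model = lambda tracked_present: 0.8 if tracked_present else 0.2
test_cases = [(True, True), (True, False), (False, False), (False, True)]
avg = evaluate(model, test_cases)
```

The resulting average is then compared against the performance criterion from the Analyze Requirements step to decide whether the learned model is accepted.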
{
"chunk_id": "f2fb9472-0a21-4988-80f2-705bd52caa1f",
"text": "4.4.1 Experiment Reasoning Model\nIn this section, we introduce the Experiment Reasoning Model step, which tests the learned reasoning model using a test dataset. The test dataset can be generated from simulations, existing data, and/or actual experiments. The experiment consists of the following five steps. (1) The learned reasoning model is exercised on a test case from the test dataset. (2) The test dataset provides ground truth data to evaluate against a certain metric (e.g., the continuous ranked probability score) specified in the requirements defined in the Analyze Requirements step. (3) The metric is used to measure the difference between results from the learned reasoning model and the ground truth data. (4) Steps 1-3 are repeated for all test cases. (5) This step produces a result value integrating all measured values (e.g., an average of the continuous ranked probability scores). 4.4.2 Evaluate Experimental Results\nIn the Evaluate Experimental Results step, the estimation and prediction performance of the learned reasoning model is assessed against the performance criteria in the requirements defined in the Analyze Requirements step (e.g., an average of the continuous ranked probability scores < 0.001). If the measured value satisfies the criteria, the learned reasoning model is accepted and this step outputs the learned reasoning model. If the requirement is not satisfied, we can return to the previous steps to improve the performance of the learned reasoning model. 4.5 Summary of HML\nWe introduced a MEBN learning framework, called HML, which contains four steps: (1) Analyze Requirements, (2) Define World Model, (3) Construct Reasoning Model, and (4) Test Reasoning Model. The following list shows their specific sub-steps. (1) Analyze Requirements\n(1.1) Identify Goals\n(1.2) Identify Queries/Evidence\n(1.3) Define Performance Criteria\n(2) Define World Model\n(2.1) Define Structure Model\n(2.2) Define Rules\n(2.2.1) Define Causal Relationships between RVs\n(2.2.2) Define Distributions of RVs\n(3) Construct Reasoning Model\n(3.1) Map to Reasoning Model\n(3.1.1) Perform Entity-Relationship Normalization\n(3.1.2) Perform MEBN-RM Mapping\n(3.1.3) Update Reasoning Model using the Rules\n(3.1.3.1) Join Relations\n(3.1.3.2) Construct Input/Parent Nodes\n(3.1.3.3) Construct Context Nodes\n(3.1.3.4) Refine Context Nodes\n(3.2) Learn Reasoning Model\n(4) Test Reasoning Model\n(4.1) Conduct Experiments for Reasoning Model\n(4.1.1) Test Reasoning Model from Test Dataset\n(4.1.2) Measure Performance for Reasoning Model\n(4.2) Evaluate Experimental Results",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 79,
"total_chunks": 115,
"char_count": 2563,
"word_count": 374,
"chunking_strategy": "semantic"
},
{
"chunk_id": "fa650859-b092-4781-926c-84c37cef6708",
"text": "In (1) the Analyze Requirements step, there are three sub-steps: (1.1) the Identify Goals step, (1.2) the Identify Queries/Evidence step, and (1.3) the Define Performance Criteria step. The goals, representing the missions of the reasoning model, are defined in (1.1). The queries, specific questions for which the reasoning model is used to estimate and/or predict answers, and the evidence, inputs used for reasoning, are defined in (1.2). Each query should include performance criteria (1.3) for evaluation of the reasoning.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 80,
"total_chunks": 115,
"char_count": 516,
"word_count": 77,
"chunking_strategy": "semantic"
},
{
"chunk_id": "27afc2a3-c9c3-45d4-a2ca-e15d19c585f2",
"text": "In (2) the Define World Model step, there are two sub-steps: (2.1) the Define Structure Model step\nand (2.2) the Define Rules step. The Define Rules step (2.2) contains two sub-steps: (2.2.1) the\nDefine Causal Relationships between RVs step and (2.2.2) the Define Distributions of RVs step. In (2.2.1), candidate causal relationships (e.g., influencing(A, B) and causal(A, B)) between RVs\nare specified using expert knowledge. In (2.2.2), a (conditional) local distribution of an RV is\ndefined by expert knowledge.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 81,
"total_chunks": 115,
"char_count": 514,
"word_count": 79,
"chunking_strategy": "semantic"
},
{
"chunk_id": "73464bc2-08e9-4ae5-9ac6-36d995f30b01",
"text": "In (3) the Construct Reasoning Model step, there are two sub-steps: (3.1) the Map to Reasoning\nModel step and (3.2) the Learn Reasoning Model step. The Map to Reasoning Model step (3.1) is\ncomposed of three sub-steps: (3.1.1) the Perform Entity-Relationship Normalization step, (3.1.2)\nthe Perform MEBN-RM Mapping step, and (3.1.3) the Update Reasoning Model using the Rules\nstep. Before applying MEBN-RM to a relational model, the relational model is normalized using\nEntity-Relationship Normalization (3.1.1). In (3.1.2), MEBN-RM is performed to construct an\ninitial MTheory from the relational model. In (3.1.3), the initial MTheory is updated according to\nthe rules defined in (2.2). The Update Reasoning Model using the Rules step (3.1.3) contains four\nsub-steps: (3.1.3.1) the Join Relations step, (3.1.3.2) the Construct Input/Parent Nodes step,\n(3.1.3.3) the Construct Context Nodes step, and (3.1.3.4) the Refine Context Nodes step. In\n(3.1.3.1), some relations are joined and an updated MFrag is created, if RVs in a rule are defined\nin different relations. The causal relationships for the RVs in the rule are defined in the updated\nMFrag through (3.1.3.2). In (3.1.3.2), if there is an input node, ordinary variables associated with\nthe input node are defined in the updated MFrag. In (3.1.3.3), the context nodes associated with\nthe RVs in the rule are defined in the updated MFrag. For this, the conditions (specified by a\n\"Where\" conditioning statement in SQL) in a joining script, used for joining relations in (3.1.3.1),\ncan be reused to construct such context nodes. In (3.1.3.4), ordinary variables sharing the same\nentity (e.g., IsA (t, TIME) and IsA (t1, TIME)) are converted into a single ordinary variable (e.g.,\nIsA (t, TIME)). Then, equal-context nodes (e.g., t = t1) for such ordinary variables are removed.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 82,
"total_chunks": 115,
"char_count": 1833,
"word_count": 287,
"chunking_strategy": "semantic"
},
{
"chunk_id": "cda509d2-7950-4b78-a6fe-d5af1aa5c342",
"text": "In (3.2) the Learn Reasoning Model step, a parameter learning algorithm performs to each RV in\nthe updated MFrag using a training dataset to generate the parameter of the distribution for the\nRV. In (4) the Test Reasoning Model step, there are two sub-steps: (4.1) the Conduct Experiments for\nReasoning Model step and (4.2) the Evaluate Experimental Results step. In (4.1) there are two\nsub-steps: (4.1.1) the Test Reasoning Model from Test Dataset step and (4.1.2) the Measure\nPerformance for Reasoning Model step. In (4.1.1), the learned MTheory from (3) the Construct\nReasoning Model step is tested using a test dataset and (4.1.2) measured for performance between\nresults from the learned MTheory and the ground truth data in the test dataset. In (4.2) the\nEvaluate Experimental Results step, whether the learned MTheory is accepted or not is decided\nusing the performance criteria defined in (1.3). In this research, some steps in HML are automated (e.g., (3.1.2) the Perform MEBN-RM Mapping\nstep), while some other steps are not yet automated (e.g., (3.1.1) the Perform Entity-Relationship\nNormalization step) but could be automated. Also, some other steps (e.g., (1.1) the Identify Goals step) require aid from human (i.e., human centric). The following table shows the level of\nautomation (i.e., Automated, Automatable, and Human centric) for each step in HML. 
Table 7 Processing Method for Steps in HML Main Steps Sub-steps Processing Method (1.1) Identify Goals Human centric\n(1) Analyze Requirements\n(1.2) Identify Queries/Evidence Human centric\n(2.1) Design World Model Human centric\n(2) Design World Model and Rules\n(2.2) Design Rules Human centric\n(3.1.1) Perform Entity-Relationship Normalization Automatable\n(3.1.2) Perform MEBN-RM Mapping Automated\n(3) Construct Reasoning Model\n(3.1.3) Update Reasoning Model using the Rules Automatable\n(3.2) Learn Reasoning Model Automated\n(4.1) Conduct Experiments for Reasoning Model Automatable\n(4) Test Reasoning Model (4.2) Evaluate Experimental Results Automatable For example, the (3.2) Learn Reasoning Model step is automated by the MEBN-RM mapping\nalgorithm (Section 3.6). The (3.1.1) the Perform Entity-Relationship Normalization step is\nautomatable by developing an algorithm converting from ordinary relations to the relations\nsatisfying Entity-Relationship Normalization. The (1.1) Identify Goals step is human centric and\nrequire human support to perform it.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 83,
"total_chunks": 115,
"char_count": 2425,
"word_count": 357,
"chunking_strategy": "semantic"
},
{
"chunk_id": "515b06f3-ccdb-49c3-b2c7-fb4f41af603f",
"text": "Automatable steps can become automated steps by\ndeveloping specific processes, algorithms, and software programs. We leave these as future\nstudies.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 84,
"total_chunks": 115,
"char_count": 147,
"word_count": 20,
"chunking_strategy": "semantic"
},
{
"chunk_id": "651f49ff-0e84-48f8-9ef4-7bc0549bb3d9",
"text": "We developed HML Tool that performs MEBN-RM and the MEBN parameter learning. HML\nTool is a JAVA based open-source program that can be used to create an MTheory script from a\nrelational data. This enables rapid development of an MTheory script by just clicking a button in\nthe tool. This is available on Github5 (see Appendix B).",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 85,
"total_chunks": 115,
"char_count": 328,
"word_count": 57,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0fea5fab-5cd3-4ca3-a6d5-fdc75230110d",
"text": "5 Experiment for UMP-ST and HML\nWe conducted an experiment to compare two MEBN development processes (UMP-ST and HML)\nin terms of development time supervised by a IRBNet support team (IRBNet ID: 1054232-1). A\nMEBN model can be constructed by UMP-ST and HML. UMP-ST is the traditional manual\napproach to develop a MEBN model, while HML is the new approach which is studied in this\ndissertation. In this experiment, there were two groups (A and B) selected from six adult people.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 86,
"total_chunks": 115,
"char_count": 477,
"word_count": 81,
"chunking_strategy": "semantic"
},
{
"chunk_id": "301b95c9-efb9-43c0-ab41-74ef43f57a11",
"text": "Both groups\nwere required to develop a MEBN model from stakeholder requirements. The main requirement\nwas to develop a MEBN model for a very simplified domain of a steel plate factory. Thus, we\nconducted a simplified development experiment, MEBN modelling for simple heating machinery. For the experiment, we tried to constitute same conditions for both groups (e.g., same level of\nknowledge for a certain domain, BN, MEBN, and MEBN modelling) to draw more general\nconclusions. Finding and inviting participants who are working for a same domain and have same\nlevel of knowledge is difficult. For that reason, the participants who didn't have any experience in\nthe target domain for the experiment were selected and provided domain knowledge to develop a\nMEBN model. Thus, knowledge for simple heating machinery was given for both groups. 5 Github is a distributed version control system (https://github.com). knowledge given to the participants is introduced in Section 5.1.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 87,
"total_chunks": 115,
"char_count": 975,
"word_count": 152,
"chunking_strategy": "semantic"
},
{
"chunk_id": "cc121aa8-0957-46af-a4bc-3afc089ef73b",
"text": "For the experiment, we performed three processes (preparation, execution, and evaluation). In the\npreparation process, we prepared the experimental settings to make both groups to have same\nconditions in terms of knowledge and skill for MEBN modelling for the simple heating\nmachinery. In the execution process, the main experiment for MEBN modelling was conducted. In the process, participants in both groups had developed MEBN models using the two methods\nassigned to each of them. In the evaluation process, development times by the participants were\nanalysed and MEBN models developed by them were tested in terms of accuracy using a\nsimulated test dataset.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 88,
"total_chunks": 115,
"char_count": 661,
"word_count": 102,
"chunking_strategy": "semantic"
},
{
"chunk_id": "cbc4edfc-5c71-4f09-a824-6bd236bfd851",
"text": "5.1 Preparation Process\nFor the experiment, six adult people were invited as subjects. Initially, they didn't have much\nknowledge about BN and were completely unfamiliar with MEBN, UMP-ST, and HML. (1)\nBefore the execution process, the six people were given such knowledge to develop a MEBN\nmodel for the simple heating machinery in the execution process. The lecture contained the\nminimum amount of knowledge for (continuous) BN, BN modelling, MEBN, a script form of\nMEBN, and MEBN modelling (i.e., UMP-ST) to develop the MEBN model for the simple\nheating machinery. Note that a lecture for full knowledge of such domains may require several\nsemesters, so the scope of the experiment was reduced to a smaller size (i.e., the simple heating\nmachinery) rather than the development of a MEBN model for full heating machinery. The six\npeople were divided into two groups (Groups A and B). Group A used UMP-ST, while Group B\nused HML to develop a MEBN model. (2) To constitute same conditions for each group in terms\nof skills and domain knowledge for MEBN and UMP-ST, a short test for such knowledge was\ntaken to all of the participants. (3) The short test was graded by a MEBN expert who was not\ninvestigator for this experiment. An identity of each participant was not given to the MEBN\nexpert to prevent intervention of prejudice. To penalize our new approach HML, the first (third\nand fifth) ranked participant belonged to Group A. Second (fourth and sixth) ranked participant\nbelonged to Group B. Table 8 Preparation Process",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 89,
"total_chunks": 115,
"char_count": 1526,
"word_count": 256,
"chunking_strategy": "semantic"
},
{
"chunk_id": "408fd502-948d-4dc5-8bc0-3f218c99455e",
"text": "Group A (UMP- Steps Group B (HML) Time ST)\nProvided a lecture for BN, MEBN, the 1. Obtain relevant knowledge 4 hours script form of MEBN, and UMP-ST\nProvided a short test for UMP-ST & 2. Take a short test 30 min MEBN\nGraded the test results and selected 3. Divide into two groups 1 hour participants for two groups\nProvided a lecture for Time was checked 4. Obtain HML knowledge None HML (Time A)",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 90,
"total_chunks": 115,
"char_count": 396,
"word_count": 75,
"chunking_strategy": "semantic"
},
{
"chunk_id": "b5400b27-4cf5-49ae-bf46-596e9972716a",
"text": "Before the execution process, (4) HML lecture was provided to Group B. The time for the lecture\nwas checked as Time A. The lecture contained the process of HML, the reference PSAW-MEBN\nmodel, and how to use the HML tool. 5.2 Execution Process\nIn the execution process, the both groups were requested to develop a MEBN model for a simple\nheater system heating a slab to support a next manufacturing step (e.g., a pressing step for the slab\nto make a steel plate). The MEBN model aimed to predict a total cost for the heater given input\nslabs. Table 9 Execution Process Steps UMP-ST (Group A) HML (Group B) Time\n5.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 91,
"total_chunks": 115,
"char_count": 612,
"word_count": 111,
"chunking_strategy": "semantic"
},
{
"chunk_id": "017d8ab7-0154-43f4-871e-da211112259c",
"text": "Obtain stakeholder Provided stakeholder requirements and domain knowledge torequirements and domain 1 hour both groupsknowledge Analyze Requirements Developed MEBN model Requirements (Time B) Define World Model Developed a structure model and rules (Time C)\nDeveloped MEBN model using the Develop MEBN model Time was checked8. Construct Reasoning Model script form of MEBN using the HML tool (Time D) (5) In the first step of the execution process, both groups were given a stakeholder requirement,\n\"Develop a MEBN model which is used to predict a total cost given input slabs\". Also, domain\nknowledge was given to the participants.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 92,
"total_chunks": 115,
"char_count": 632,
"word_count": 97,
"chunking_strategy": "semantic"
},
{
"chunk_id": "17dfaca7-4c88-4aa1-895c-503f22674f4a",
"text": "The domain knowledge was about the following information: The simple heater system is\nassociated with two infrared thermal imaging sensors to sense the temperature of a slab, each\nsensor has a sensing error with a normal distribution with a mean zero and a variance three, N(0,\n3) (e.g., if it sensed 10 ℃, this means that the actual temperature is in a range between 7.15 ℃\nand 12.85 ℃ with the 95th percentile), the heater system contains an actuator which is used to\ncontrol an energy value to heat a slab (i.e., the actuator calculates the energy value given the input\nslab temperature), there is no energy loss when the energy value is used in the heater, all\nmanufacturing factors (e.g., the temperature, energy value, and cost) are normally distributed\ncontinuous values, the energy unit is kWh (kilowatt-hour), there is a fixed slab weight 100kg,\nthere is an ordered fixed temperature 1200 ℃ for an output slab coming from the heater, and the\nenergy cost is 20cent/kWh. Fig. 9 Situation for the simple heating machinery Also, an idea of how to model the sensor error using BN was given. For example, to include the\nsensor error, two random variables are used. The first random variable is for an actual\ntemperature, while the second random variable is for a sensed temperature. The actual\ntemperature, then, influences the sensed temperature with the error normal distribution (i.e., N(0,\n3)). This can be modelled in a BN as P(sensed temperature | actual temperature) = actual\ntemperature + N(0, 3).",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 93,
"total_chunks": 115,
"char_count": 1508,
"word_count": 255,
"chunking_strategy": "semantic"
},
{
"chunk_id": "07dbfb4f-4265-401d-b751-2af224aba608",
"text": "For the situation of the simple heating machinery, datasets were generated by a simulator\ncontaining a ground truth model designed by a domain expert. The ground truth model contained\ntwo parts. The first part is for an actual model which represents a physical world which can't be\nobserved exactly. The second part is for a sensed model which represents an observed world\nwhere we can see using sensors. Therefore, the datasets were divided into two parts: Actual data\nand sensed data (Fig. 10). The sensed data (data sets in the rounded boxes in Fig. 10) were\nprovided to both groups in two formats: The data in an excel format and the data in a relational\ndatabase (RDB) (Fig. 11). The actual data (e.g., actual temperatures) were not given to either\ngroup.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 94,
"total_chunks": 115,
"char_count": 760,
"word_count": 131,
"chunking_strategy": "semantic"
},
{
"chunk_id": "7b6ef64f-341f-4839-ae39-21ac36cfca32",
"text": "Fig. 10 Each of training and test data has sensed data and actual data for the simple heating machinery Also, the simulator generated two datasets (as shown in Fig. 10): One was a training dataset\nwhich was used by the participants to understand the context of the situation and learn a MEBN\nmodel using HML, and another was a test dataset which was used to evaluate the models\ndeveloped by the participants in terms of prediction accuracy for the total cost (6.4.3 Evaluation\nProcess). For this model evaluation, the actual and sensed data in the test dataset were used. For\nexample, sensed data for the temperature of an input slab were used as evidence for the developed\nmodel and the developed model was used to reason about a predicted total cost. The predicted\ntotal cost was compared with a total cost derived from the actual data in the test dataset. Participants were requested to (6) develop MEBN model requirements, (7) Define World Model,\nand (8) construct Reasoning Model. And the development times Time B, Time C, and Time D\nrespectively were checked. Fig. 11 Sensed datasets for the simple heating machinery",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 95,
"total_chunks": 115,
"char_count": 1122,
"word_count": 191,
"chunking_strategy": "semantic"
},
{
"chunk_id": "00767458-7367-4fad-98dc-b811bbb0aab8",
"text": "5.3 Evaluation Process\nIn this process, the MEBN models developed by the participants were evaluated and their\ndevelopment times were analyzed. For the model evaluations, simulated test datasets were used. The development times for both were measured according to the use of methods UMP-ST and\nHML. Our goal for this experiment is to compare two methods in terms of the development time.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 96,
"total_chunks": 115,
"char_count": 387,
"word_count": 62,
"chunking_strategy": "semantic"
},
{
"chunk_id": "0acd5586-ffa0-4449-a9fc-53f2886fbbab",
"text": "However, in some cases, a MEBN model is developed quickly with low accuracy. The\ncomparison between a low quality model and a high quality model in terms of the development\ntime is unfair. Thus, obviously, in order to demonstrate the superiority of HML against UMP-ST,\nthe results of this experiment should ensure two things: A quality of the model developed using\nHML is equal or better than a quality of the model developed using UMP-ST, and the\ndevelopment time using HML is faster than using UMP-ST. In this experiment, we used the\nprediction accuracy as the quality of the developed model, because the mission of the developed\nmodel is to predict the total cost for the simple heating machinery.\n(9) The first step in this process is to evaluate the accuracies of the MEBN models developed by\nthe participants. For this, an accuracy test for the models was performed to determine how well\nthe models predict the total cost using the test dataset generated from the simulator. The simulator\ngenerated the test dataset regarding a situation in which three slabs were inputs and a total cost\nfor heating the three slabs was an output. The total cost was calculated using the energy values in\nthe actuator. The output (the total cost) was used to compare a predictive cost reasoned from the\nMEBN models. To the comparison between the total cost and the predictive cost, we used a\ncontinuous ranked probability score (CRPS) in which a perfect prediction yields a score of zero. For prediction accuracy metrics, we can use a mean absolute error (MAE). MAE uses a mean\nvalue only to compare an actual (or observed) value with a predictive value, while CRPS uses a\npredicted probability distribution for comparison (i.e., a mean and a variance). Therefore, CRPS\nanalysis is more precise than MAE analysis. In this step, for a case (i.e., three input slabs and one\noutput total cost) in the test data, CRPS was calculated using a predictive cost. 
Then, 100 cases\nwere used to compute 100 CRPSs and they were averaged (i.e., Average CRPS).\n(10) The development times for both groups were measured according to each step in UMP-ST\nand HML. The development times Time A~D were checked in the preparation process and the\nexecution process.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 97,
"total_chunks": 115,
"char_count": 2231,
"word_count": 379,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5677b63f-5d4d-4985-916a-6bed98fe03c5",
"text": "In this step, a total development time was calculated. development time included Time B~D, while for Group B, a total development time included\nTime A~D. Table 10 Evaluation Process Steps UMP-ST (Group A) HML (Group B) Tested both models in terms of accuracy using a simulated test 9. Evaluate accuracy of model dataset",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 98,
"total_chunks": 115,
"char_count": 319,
"word_count": 53,
"chunking_strategy": "semantic"
},
{
"chunk_id": "c31c1b11-c7e8-473e-905d-d980b86aa444",
"text": "Measured the development times for both according to the use of 10. Evaluate development time methods UMP-ST and HML 5.4 Comparison Results\nTable 11 shows an average CRPS to show model accuracy and a total development times to show\nefficiency of modelling methods for each participant. In the table, the grand CRPS average for\nGroup A is higher than the grand CRPS average for Group B. This means that the MEBN models\nfrom Group B are better than the models from Group A in terms of accuracy. Then, the\ncomparison for the total development times makes sense. The average of the total development\ntimes from Group A is higher than the average of the total development times from Group B. This\nimplies that HML is a faster process than UMP-ST. Table 11 Comparison results Total Development Average Group Participants Times CRPS (Hours: Minutes)\n#1 1735.3 1:06\n#2 74.6 2:48\nGroup A (UMP-ST) #3 114.78 2:21\nGrand Average (Standard 641.53 2:05\nDeviation) (947.45) (0:52)\n#4 45.05 1:02\n#5 45.05 0:58\nGroup B (HML) #6 40.48 1:28\nGrand Average (Standard 43.53 1:09\nDeviation) (2.64) (0:16) In the experiment, we expected the participants would develop an ideal MEBN model.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 99,
"total_chunks": 115,
"char_count": 1164,
"word_count": 196,
"chunking_strategy": "semantic"
},
{
"chunk_id": "f2e052ad-bd49-4524-915b-2e3816798dab",
"text": "The\nfollowing figure shows ideal conditional relationships between random variables for the simple\nheating machinery. In the ideal model, there are three parts: A situation group, an actual target\ngroup, and a report group. The situation group contains a random variable representing an overall\ntotal cost for this system. The actual target group contains three random variables (a temperature\nfor an input slab, an actual energy for heating, and a temperature for an output slab). The report\ngroup contains two random variables (an observed/sensed temperature for the input slab and an\nobserved/sensed temperature for the output slab). Fig. 12 Ideal conditional relationships between random variables for the simple heating machinery",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 100,
"total_chunks": 115,
"char_count": 734,
"word_count": 110,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a2deb9fe-080c-4f10-9f15-f6083c67fca5",
"text": "In the experiment, we observed where the participants spent a lot of time. Table 12 shows timeconsuming tasks in the experiment. The mark \"X\" in the table means that it is a time-consuming\ntask for the method. Table 12 Comparison results for time-consuming tasks Group A Group B Time-consuming tasks in the experiment (UMP-ST) (HML)\n- 1. Following process (UMP-ST or HML) X (Supported by HML tool)\n2. Finding structure model/rules X (Supported by the PSAWMEBN reference model)\n3. Finding entity/RV/MFrag from relational - X data (Supported by MEBN-RM)\n4. Finding parameter X (Supported by MEBN parameter\nlearning)",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 101,
"total_chunks": 115,
"char_count": 613,
"word_count": 98,
"chunking_strategy": "semantic"
},
{
"chunk_id": "7b97b8f2-39c4-4433-95b3-d3f73e65f8ac",
"text": "(1) Following UMP-ST process: Although the participants had studied UMP-ST, it was not easy\nto follow the process. They didn't have many experiences to develop a MEBN model using\nUMP-ST, so they were not familiar with the process. They remembered the process by reading a\nUMP-ST paper and developed their model according to each step of UMP-ST.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 102,
"total_chunks": 115,
"char_count": 344,
"word_count": 57,
"chunking_strategy": "semantic"
},
{
"chunk_id": "89276309-caa0-49f3-a140-04afd568c0d6",
"text": "For Group B, the\nHML tool supported the development of a MEBN model. By clicking some buttons in the HML\ntool, each step in HML was shown and the participants could make the models quickly. (2)\nFinding Structure Model/Rules: The participants in both Groups were required to find the\nstructure model for the simple heating machinery. Although knowledge of the simple heating\nmachinery situation was given, the participants in both groups struggled to find the structure\nmodel and rules. Group B was taught about the PSAW-MEBN reference model [Park et al., 2014]. The PSAW-MEBN reference model provides knowledge about a set of random variable groups\n(Situation, Actual Target, and Report) and causal relationships (i.e., rules) for PSAW. However,\nsuch knowledge did not have much influence on the development time for the structure model\nand rules, because the context for the simple heating machinery was too simple to use the PSAWMEBN reference model.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 103,
"total_chunks": 115,
"char_count": 952,
"word_count": 152,
"chunking_strategy": "semantic"
},
{
"chunk_id": "cd0f2be0-54b1-4c9c-b358-278796b206b3",
"text": "So, the participants in the two groups thought about their models in\nsimilar ways. However, the participants could not be sure whether or not their models were\ncorrect, so they spent relatively more time to think about their structure models and rules. (3)\nFinding entity/RV/MFrag from the RDB: The participants in Group A could not be sure of which\nelements in the RDB can be entity/RV/MFrag in MEBN, so they used times to figure out this. On\nthe other hand, the participants in Group B used the HML tool containing MEBN-RM, so they\ndidn't consider this step much. (4) Finding CLD: The participants in Group A looked at data to find normal distributions and regression models for RVs, while the participants in Group B used\nthe MEBN parameter learning built in the HML tool. 6 Conclusion\nIn this research, we introduced a new development framework for MEBN, providing a\nsemantically rich representation that also captures uncertainty. MEBN was used to develop\nArtificial Intelligence (AI) systems. MEBN models for such systems were constructed manually\nwith the help of domain experts. This manual MEBN modeling was labor-intensive and\ninsufficiently agile. To address this problem, we introduced a development framework (HML)\ncombining machine learning with subject matter expertise to construct MEBN models. We also\npresented a MEBN parameter learning for MEBN. In this research, we conducted an experiment\nbetween HML and an existing MEBN modeling process in terms of the development efficiency. In conclusion, HML could be used to develop more quickly a MEBN model than the existing\napproach. Future steps for HML are to apply it to realistic Artificial Intelligence systems. Also,\nHML should be more thoroughly investigated in terms of efficiency (agility for the development\nof a reasoning model) and effectiveness (producing a correct reasoning model).",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 104,
"total_chunks": 115,
"char_count": 1860,
"word_count": 295,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ef4c7c30-2f56-490d-b3b7-67aa376cf617",
"text": "Appendix A: Bayesian Network Learning\nBayesian Networks (BN) learning from data is a process to find a Bayesian network that fits data\nwell. Given a graph of BN and a dataset, Parameter Learning is the problem of finding a\nparameter ΞΈ that provides a good fit to the data. Structure Learning is the problem of finding a\ngraph G of BN that provides a good fit to the data. Structure Learning can have following topics:\n(1) dependency or independency between nodes can be learned, (2) a hidden or an unobserved\nnode in a BN can be found, and (3) a functional form of a local distribution for a node can be\nidentified.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 105,
"total_chunks": 115,
"char_count": 615,
"word_count": 112,
"chunking_strategy": "semantic"
},
{
"chunk_id": "8bca50c4-b984-47a4-b9d4-39c84e82270f",
"text": "In Bayesian theory, any uncertain aspect of the world can be represented as a random variable\n(RV). In BN learning, data D, graph G, and parameter πœƒπœƒ can be RVs. The set of data D, graph G,\nand parameter πœƒπœƒ are represented as D, G, and Σ¨, respectively. For BN, data D means flat data\nwhich has no relationships among its records, while for MEBN learning in this research, data D\nmeans relational data. In the following subsections, we introduce common approaches for BN\nparameter learning. We refer to [Pearl 1988][Heckerman, 1998][Koller & Friedman, 2009] for\nfollowing subsections. BN Parameter Learning\nBN parameter learning is to find parameters that fit a dataset well.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 106,
"total_chunks": 115,
"char_count": 674,
"word_count": 113,
"chunking_strategy": "semantic"
},
{
"chunk_id": "9c89c9c3-cac6-4653-90c2-6804bf63158c",
"text": "We can think of this as\ninference in probability theory. Two of the most popular approaches to estimating parameters\nfrom data are Maximum Likelihood Estimation (MLE) and Bayesian inference. In the Bayesian\napproach, we begin with a prior distribution for the parameter and use the data to obtain a\nposterior distribution. MLE finds the parameter that maximizes the likelihood function, and does\nnot use a prior distribution. The use of the prior distribution in the Bayesian approach can help to\novercome the over-fitting problem of learned parameters. We introduce MLE first and then the\nBayesian approach. Maximum Likelihood Estimation\nFor MLE, let's assume that we are doing a statistical experiment (e.g., tossing coins, where H is a\nhead and T is a tail). In the experiment, there is a set of independent and identically distributed\n(IID) observations D = {D1, D2, …, Dn} (e.g., {H, H, T}), which are drawn at random from a\ndistribution with an unknown probability density or mass function f(Di | ΞΈA), where n is the number of the observations and ΞΈA is an actual parameter for the distribution. Since we don't\nknow the actual parameter ΞΈA, we find an estimator ΞΈ* which would be close to the actual\nparameter ΞΈA. For this, we introduce a function, called likelihood, which is a function of a\nparameter ΞΈ for given observations D and is used to find the estimator ΞΈ*.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 107,
"total_chunks": 115,
"char_count": 1373,
"word_count": 233,
"chunking_strategy": "semantic"
},
{
"chunk_id": "a111bb60-5938-40da-a85f-c838e080b0c8",
"text": "𝐿𝐿(πœƒπœƒβˆΆπ‘«π‘«) = 𝑃𝑃 (𝑫𝑫 | πœƒπœƒ), (A.1) where D is the observations and ΞΈ is a parameter. The parameter ΞΈ can contain sub-parameters {ΞΈ1, ΞΈ2, …, ΞΈm}, where m is the number of the subparameters in ΞΈ. For example, the parameter ΞΈ for a normal distribution can contain two subparameters ΞΈ1 (for mean) and ΞΈ2 (for variance). Also, a parameter ΞΈ can have only one subparameter ΞΈ1 (e.g., ΞΈ1 = P(H)). The likelihood function L(ΞΈ : D) in Equation A.1 is equal to the probability of the observations D\ngiven a parameter ΞΈ. We then define an estimator ΞΈ*, called a maximum likelihood estimator in\nEquation A.2. πœƒπœƒβˆ—= arg π‘šπ‘šπ‘šπ‘šπ‘šπ‘šπœƒπœƒβˆˆπšΉπšΉπΏπΏ(πœƒπœƒβˆΆπ‘«π‘«). (A.2) where Ο΄ is a set of parameters. Equation A.2 means that in the set of parameters, a maximum likelihood estimate or parameter\nwhich maximizes the likelihood function is found. We can think of various types of distribution\nfor the observations D in the experiment. If we consider an RV X for the observations D with the\nmultinomial distribution, we can have the following equation which is the maximum likelihood\nestimator for the parameter ΞΈ.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 108,
"total_chunks": 115,
"char_count": 1071,
"word_count": 185,
"chunking_strategy": "semantic"
},
{
"chunk_id": "5a47ae09-d2b4-4b11-93da-aebcf3d1dd35",
"text": "C[π‘₯π‘₯π‘˜π‘˜]\nπœƒπœƒπ‘˜π‘˜βˆ—= N , (A.3)\nβˆ‘π‘žπ‘ž=1 C[π‘₯π‘₯π‘žπ‘ž] where C[.] is a function returning the number of times a value xk ∈ Val(X) in an RV X appears in\nD and N = |Val(X)|. Note that for a variable X, a function Val(X) returns a set of values for X. For example, suppose that there is an RV X for an observation Di. The RV X contain two values\nx1 = H and x2 = T and there is a set of observations D = {H, H, H, T}. On a set of observations,\nthe count of the number for x1 and x2 is observed using the function C[.]. For example, C[x1] = 3\nand C[x2] = 1. Using Equation A.3, we can calculate the maximum likelihood estimates for x1 and\nx2 as πœƒπœƒ1βˆ—= 3/4 and πœƒπœƒ2βˆ—= 1/4, respectively. We can use MLE for a Bayesian network (BN) to estimate a parameter of an RV in the BN. Suppose that there is an RV Xi in the BN, xk is a value for the RV Xi (i.e., xk ∈ Val(Xi)), there is a\nset of parent RVs for the RV (i.e., Pa(Xi) = U), and u is some instantiation for the set of parent\nRVs (i.e., u ∈ Val(U)). If we assume that each RV Xi is the multinomial distribution and the\nobservations associated with the RV Xi are independent and identically distributed, then the\nmaximum likelihood estimator for a value xk|u in the RV Xi in the BN can be formed as Equation\nA.4. βˆ— C[xk, u ] = πœƒπœƒπ‘–π‘– , (A.4) xk|u βˆ‘nq=1 Cΰ΅£xq, uΰ΅§ where C[xq, u] is the number of times observation xq in X and its parent observation u in Val(U)\nappears in D. For example, we assume that there are a node X1 in a BN, Val(X1) = {x1 = T, x2 = F}, the set of\nparent nodes for X1 (i.e., Pa(X1) = U = {U1}), and Val(U1) = {u1 = A, u2 = B}. Also, there is a\ndesta set D = {D1 = {T, A}, D2 = {T, A}, D3 = {F, A}}, where the first value in Dk is for X1 and\nthe second value in Dk is for U1.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 109,
"total_chunks": 115,
"char_count": 1718,
"word_count": 369,
"chunking_strategy": "semantic"
},
{
"chunk_id": "fcd9583f-4fbf-4f35-80dd-54c0eee91b27",
"text": "Bayesian Parameter Estimation\nFor Bayesian approach, we assume that the parameter ΞΈ in the statistical experiment in Section\nA.1 is a value of an RV Ο΄ (i.e., P(Ο΄ = ΞΈ) = g(ΞΈ)). In this setting, we try to draw inference about\nthe RV Ο΄ given a set of IID observations D = {D1, D2, …, Dn}, where n is the number of the\nobservations. We then find a posterior distribution of the parameter ΞΈ given the observations D\n(i.e., P(ΞΈ | For this, we can use Bayes' theorem, shown in Equation A.5. Note that the\nposterior distribution can be used to compute a posterior predictive distribution (or simply a\npredictive distribution) which is the distribution of a future observation given past observations. The posterior predictive distribution will be discussed later. The following equation shows the\nposterior distribution. P(𝑫𝑫 | πœƒπœƒ)P(πœƒπœƒ)\nP(πœƒπœƒ | 𝑫𝑫) = , (A.5)\nP(𝑫𝑫) where D is the current observations and P(D) > 0. Bayesian inference computes the posterior distribution P(ΞΈ | D) using a prior probability P(ΞΈ) and\na likelihood function P(D | ΞΈ) in Bayes' theorem. The prior probability P(ΞΈ) is the probability of a\nparameter ΞΈ before the current observations D are observed. The likelihood function P(D | ΞΈ)\n(Equation A.1) is the probability of the observations D given the parameter ΞΈ. In Equation A.5,\nP(D) is a marginal likelihood (or a normalizing constant) which is the probability distribution for\nthe observations D integrated over all parameters (i.e., P(D) = βˆ«πœƒπœƒ P(𝑫𝑫 | πœƒπœƒ)P(πœƒπœƒ)dπœƒπœƒ ). The posterior\ndistribution P(ΞΈ |",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 111,
"total_chunks": 115,
"char_count": 1517,
"word_count": 256,
"chunking_strategy": "semantic"
},
{
"chunk_id": "baf8db4e-79ba-42d2-9794-1f91ffeadd59",
"text": "D) is updated using the prior probability P(ΞΈ) and the likelihood function P(D |\nΞΈ), and this update can be repeated. For example, after applying some observations to Equation\nA.5, we can have a posterior distribution. This posterior distribution can be regarded as a prior\nprobability. And then given some new observations, we can compute a new posterior distribution. For the parameter ΞΈ, we can have a hyperparameter Ξ± which influences the parameter ΞΈ and can\nbe formed as ΞΈ ~ P(ΞΈ | Ξ±), where the hyperparameter Ξ± can be a vector or have subhyperparameters. In this setting, the probability for the parameter P(ΞΈ) can be changed to the\nprobability given the hyperparameter P(ΞΈ | Ξ±). By adding the hyperparameter to Equation A.5,\nEquation A.6 is derived under some assumptions that (1) the observations are independent of a\nhyperparameter given the parameter associated with the hyperparameter and (2) the sample space\nfor parameters is a partition. P(𝑫𝑫 | πœƒπœƒ)P(πœƒπœƒ | Ξ±)\nP(πœƒπœƒ | 𝑫𝑫, Ξ±) = . (A.6)\n∫ P(𝑫𝑫 | πœƒπœƒ)P(πœƒπœƒ | Ξ±) dπœƒπœƒ πœƒπœƒ We use this posterior distribution containing the hyperparameter (Equation A.6) to compute the\npredictive distribution. The predictive distribution is the distribution of a new observation given\npast observations. P(𝐷𝐷𝑛𝑛𝑛𝑛𝑛𝑛 | 𝑫𝑫, Ξ±) = ΰΆ±P(𝐷𝐷𝑛𝑛𝑛𝑛𝑛𝑛 | πœƒπœƒ)P(πœƒπœƒ | 𝑫𝑫, Ξ±)𝑑𝑑𝑑𝑑 , (A.7)",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 112,
"total_chunks": 115,
"char_count": 1303,
"word_count": 217,
"chunking_strategy": "semantic"
},
{
"chunk_id": "9ab797b3-a427-4baf-8e2c-5f77b0307597",
"text": "where Dnew is a new observation and is independent of the past IID observations D given a\nparameter ΞΈ. In Equation A.7, the predictive distribution integrates over all parameters for the new observation\nand the posterior distributions (Equation A.6). To compute the predictive distribution (Equation\nA.7), we should deal with the posterior distribution (Equation A.6) first. If there is no closed form\nexpression for the integral in the denominator in Equation A.6, we may need to approximate the\nposterior distribution. If there is a closed form expression for the integral in the denominator and,\nthe prior distribution and the likelihood are a conjugate pair, then an exact posterior distribution\ncan be found. A probability distribution in the exponential family (e.g., normal, exponential, and gamma) has a\nconjugate prior [Gelman et al, 2014]. We can consider an RV X with a categorical probability\ndistribution. For such a categorical probability distribution, Dirichlet conjugate distribution is\ncommonly used. Using Dirichlet distribution, the predictive distribution will be a compact form\n[Koller & Friedman, 2009]. π›Όπ›Όπ‘˜π‘˜+ C[xk]\nP(𝐷𝐷𝑛𝑛𝑛𝑛𝑛𝑛 | 𝑫𝑫, Ξ±) = N , (A.8)\nβˆ‘π›Όπ›Όπ‘—π‘—π‘—π‘— + βˆ‘q=1 C[xq] where C[.] is a function returning the number of times a value xk ∈ Val(X) in a variable X appears\nin D and N = |Val(X)|, Ξ± is a hyperparameter, and 𝛼𝛼𝑗𝑗 is a sub-hyperparameter in Dirichlet\ndistribution as shown the following. π›Όπ›Όπ‘—π‘—βˆ’1\nπœƒπœƒ~Dirichlet (𝛼𝛼1, 𝛼𝛼2, … , 𝛼𝛼𝑁𝑁) if P(πœƒπœƒ) ∝ ΰ·‘πœƒπœƒπ‘—π‘— , (A.9) where the sub-hyperparameter 𝛼𝛼𝑗𝑗 is the number of samples which have already happened [Koller\n& Friedman, 2009]. The Bayesian approach above can be used for BN parameter learning. If a prior distribution for\nan RV Xi, P(πœƒπœƒπ‘–π‘– | Ξ±), is the Dirichlet prior with a hyperparameter Ξ± ={Ξ±x1|u, … , Ξ±xN |u}, then the",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 113,
"total_chunks": 115,
"char_count": 1795,
"word_count": 293,
"chunking_strategy": "semantic"
},
{
"chunk_id": "340ab492-815d-49b2-8013-f7aaae03ee39",
"text": "Dirichlet posterior for P( πœƒπœƒπ‘–π‘–| Ξ±) is P( πœƒπœƒπ‘–π‘–| 𝑫𝑫, Ξ± ) with a hyperparameter Ξ± ={ Ξ±x1|u + C[x1,\nu], … , Ξ±xN |u + C[ xN, u]}, where a value xk ∈ Val(Xi), u ∈ Val(Pa(Xi) = U), and C[xq, u] is the\nnumber of times observation xq in Xi and its parent observation u in Val(U) appears in D. Using\nthe Dirichlet posterior, we can derive the predictive distribution for a value of Xi in a BN under\nsome assumptions: (1) local parameter independences and (2) global parameter independences\n[Heckerman et al., 1995]. 𝛼𝛼xk|u + C[xk, u]\n𝑃𝑃(Xi = xk | U = u, 𝑫𝑫, Ξ±) = N , (A.10)\nβˆ‘q=1 (𝛼𝛼xq|u + Cΰ΅£xq, uΰ΅§) Equation A.10 shows the posterior predictive distribution for the value xk of the i-th RV Xi in the\nBN given a parent value u, the observations D, and a hyperparameter Ξ± for Dirichlet conjugate\ndistribution. Appendix B: Bayesian Network Learning\nWe developed HML Tool (Fig. B.1) that performs MEBN-RM and the MEBN parameter learning. HML Tool is a JAVA based open-source program6 that can be used to create an MTheory script\nfrom a relational schema. This enables rapid development of an MTheory script by just clicking a\nbutton in the tool.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 114,
"total_chunks": 115,
"char_count": 1131,
"word_count": 209,
"chunking_strategy": "semantic"
},
{
"chunk_id": "ad5d7220-c011-4ec6-903f-c1fd04b00007",
"text": "The current version of HML Tool uses MySQL. The most recent version and\nsource codes of HML Tool are available online at\nhttps://github.com/pcyoung75/GMU_HMLP.git. HML Tool codes are in the GMU_HMLP\nGithub repository.7\nMEBN-RM Tool which contains three panels: (1) a left tree panel shows a list of relational\ndatabase, (2) a right top panel shows a result MTheory script, and (3) a right bottom panel shows\nan input window in which we can insert some information. The following figure shows the\ninterface of HML Tool and a result MTheory script using the tool. 6Researchers around the world can debug and extend MEBN-RM Tool.\n7Github is a distributed version control system (https://github.com). Acknowledgements\nThe research was partially supported by the Office of Naval Research (ONR), under Contract#:\nN00173-09-C-4008. Shou Matsumoto for their helpful\ncomments on this research.",
"paper_id": "1806.02421",
"title": "Human-aided Multi-Entity Bayesian Networks Learning from Relational Data",
"authors": [
"Cheol Young Park",
"Kathryn Blackmond Laskey"
],
"published_date": "2018-06-06",
"primary_category": "cs.LG",
"arxiv_url": "http://arxiv.org/abs/1806.02421v1",
"chunk_index": 115,
"total_chunks": 115,
"char_count": 884,
"word_count": 136,
"chunking_strategy": "semantic"
}
]