| [ |
| { |
| "chunk_id": "c57fc2c7-10d0-4781-b079-2c2434b7da53", |
| "text": "Contrastive Explanations with Local Foil Trees Jasper van der Waa * 1 2 Marcel Robeer * 1 3 Jurriaan van Diggelen 1 Matthieu Brinkhuis 3 Mark Neerincx 1 2", |
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 0, |
| "total_chunks": 25, |
| "char_count": 154, |
| "word_count": 29, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "92b11bd8-5949-453a-a96f-3037802fabb7", |
| "text": "Abstract\n\nRecent advances in interpretable Machine Learning (iML) and eXplainable AI (XAI) construct explanations based on the importance of features in classification tasks. However, in a high-dimensional feature space this approach may become unfeasible without restraining the set of important features. We propose to utilize the human tendency to ask questions like \"Why this output (the fact) instead of that output (the foil)?\" to reduce the number of features to those that play a main role in the asked contrast. Our proposed method utilizes locally trained one-versus-all decision trees to identify the disjoint set of rules that causes the tree to classify data points as the foil and not as the fact. In this study we illustrate this approach on three benchmark classification tasks.\n\n1. Introduction\n\nThe research field of making Machine Learning (ML) models more interpretable is receiving much attention. One of the main reasons for this is the advance in such ML models and their applications to high-risk domains.\n\n...model engineers to build better models and debug existing models (Kulesza et al., 2011; 2015).\n\nThe existing methods in iML focus on different approaches of how the information for an explanation can be obtained and how the explanation itself can be constructed; see for an overview the review papers of Guidotti et al. (2018) and Chakraborty et al. (2017). A number of examples of common methods are: ordering the feature's contribution to an output (Datta et al., 2016; Lei et al., 2016; Ribeiro et al., 2016), attention maps and saliency of the features (Selvaraju et al., 2016; Montavon et al., 2017; Sundararajan et al., 2017; Zhang et al., 2017), prototype selection, construction and presentation (Nguyen et al., 2016), word annotations (Hendricks et al., 2016; Ehsan et al., 2017), and summaries with decision trees (Krishnan et al., 1999; Thiagarajan et al., 2016; Zhou & Hooker, 2016) and decision rules (Hein et al., 2017; Malioutov et al., 2017; Puri et al., 2017; Wang et al., 2017). In this study we focus on feature-based explanations. Such explanations tend to be long when based on all features or use an arbitrary cutoff point. We propose a model-agnostic method to limit the explanation length with the help of contrastive explanations. The method also adds information on how a feature contributes to the output in the form of decision rules.\n\nThroughout this paper, the main reason for explanations is",
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 1, |
| "total_chunks": 25, |
| "char_count": 2496, |
| "word_count": 405, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "bbe97906-06d2-4843-adb5-7b8249911372", |
| "text": "...to offer transparency in the model's given output based on which features play a role and what that role is. A few methods that offer similar explanations are LIME (Ribeiro et al., 2016), QII (Datta et al., 2016), STREAK (Elenberg ... Each provides an ordered list of all features, either visualized or structured in a text template. However, when humans answer such questions to each other they tend to limit their explanations to ... This tendency for simplicity also shows in iML: when multiple explanations hold we should pick the simplest explanation that is consistent with the data (Huysmans et al., 2011). The mentioned approaches do this by either thresholding the contribution parameter to a fixed value, presenting the entire ordered list or by applying it only to low-dimensional ...\n\nInterpretability in ML can be applied for the following purposes: (i) transparency in the model to facilitate understanding by users (Herman, 2017); (ii) the detection of biased views in a model (Crawford, 2016; Caliskan et al., 2017); (iii) the ...; (iv) the construction of accurate explanations that explain the underlying causal phenomena (Lipton, 2016); and (v) to build tools that allow ...\n\n1 ...Research Organization for Applied Research (TNO), Soesterberg, The Netherlands; 2 Interactive Intelligence group, Technical University of Delft, Delft, The Netherlands; 3 Department of Information and Computing Sciences, Utrecht University, Utrecht, The Netherlands. Correspondence to: Jasper van der Waa <jasper.vanderwaa@tno.nl>. ...Learning (WHI 2018), Stockholm, Sweden. Copyright by the author(s).",
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 2, |
| "total_chunks": 25, |
| "char_count": 1525, |
| "word_count": 228, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "8eebf686-1c28-4fc4-8bd3-47855679e463", |
| "text": "This study offers a more human-like way of limiting the list of contributing features by setting a contrast between two outputs. The proposed contrastive explanations ...\n\n...the foil class. The complement is then the set of decision nodes (representing rules) that are a parent of the foil-leaf but not of the fact-leaf.",
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 3, |
| "total_chunks": 25, |
| "char_count": 362, |
| "word_count": 58, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "afe452de-3b9c-4581-a227-b1056ab67716", |
| "text": "...present only the information that causes some data point to be classified as some class instead of another (Miller et al., 2017). Recently, Dhurandhar et al. (2018) have proposed constructing explanations by finding contrastive perturbations—minimal changes required to change the current classification to any arbitrary other class. Instead, our approach creates contrastive targeted explanations by first defining the output of interest. In other words, our contrastive explanations answer the question \"Why this output instead of that output?\". The contrast is made between the fact, the given output, and the foil, the output of interest.\n\nA relatively straightforward way to construct contrastive explanations given a foil, based on feature contributions, is to compare the two ordered feature lists and see how much some feature differs in its ranking. However, a feature may have the same rank in both ordered lists but can be used in entirely different ways for the fact and foil classes. To mitigate this problem we propose a more meaningful comparison based on how a feature is used to distinguish the foil from the fact. We train an arbitrary model to distinguish between fact and foil that is more accessible. Rules that overlap are merged to obtain a minimum coverage rule set. The rules are then used to construct our explanation. The method is discussed in more detail in section 2. An example of its usage is discussed in section 3 on three benchmark classification tasks. The validation on these three tasks shows that the proposed method constructs shorter explanations than the full feature list, provides more information on how these features contribute, and that this contribution matches the underlying model closely.\n\n2. Foil Trees; a way for obtaining contrastive explanations\n\nThe method we propose learns a decision tree centred around any questioned data point. The decision tree is trained to locally distinguish the foil-class from any other class, including the fact class. Its training occurs on data points that can either be generated or sampled from an existing data set, each labeled with predictions from the model it aims to explain. As such, our method is model-agnostic. Similar to LIME (Ribeiro et al., 2016), the sample",
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 4, |
| "total_chunks": 25, |
| "char_count": 2250, |
| "word_count": 354, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "31f6c4d7-5adc-4447-9d2d-808412abf877", |
| "text": "From that model we distill two sets of rules; one used to identify data points as a fact and the other to identify data points as a foil. Given these two sets, we subtract the factual rule set from the foil rule set. This relative complement of the fact rules in the foil rules is used to construct our contrastive explanation. See Figure 1 for an illustration. The method we propose in this study obtains this complement by training a one-versus-all decision tree to recognize the foil class. We refer to this decision tree as the Foil Tree.\n\nFigure 1. This figure shows the general idea of our approach to contrastive explanations. Given a set of rules that define data points as either the fact or foil, we take the relative complement of the fact rules in the foil rules to obtain a description of how the foil differs from the fact in terms of features.\n\n...weights of each generated or sampled data point depend on its similarity to the data point in question. Samples in the vicinity of the questioned data point receive higher weights in training the tree, ensuring its local faithfulness.\n\nGiven this tree, the 'foil-tree', we search for the leaf in which the data point in question resides, the so-called 'fact-leaf'. This gives us the set of rules that defines that data point as the not-foil class according to the foil-tree. These rules respect the decision boundary of the underlying ML model as it is trained to mirror the foil class outputs. Next, we use an arbitrary strategy to locate the 'foil-leaf'—for example the leaf that classifies data points as the foil class with the lowest number of nodes between itself and the fact-leaf. This results in two rule sets, whose relative complement defines how the data point in question differs from the foil data points as classified by the foil-leaf. The explanation of the difference is done in terms of the input data features themselves.\n\nIn summary, the proposed method goes through the following steps to obtain a contrastive explanation for an arbitrary ML model, the questioned data point and its output according to that ML model:\n\n1.",
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 5, |
| "total_chunks": 25, |
| "char_count": 2069, |
| "word_count": 356, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "715de4db-a022-4b76-acba-4bc6d014fd5f", |
| "text": "Retrieve the fact; the output class.\n\n2. Identify the foil; explicitly given in the question or derived (e.g. second most likely class).\n\nNext, we identify the fact-leaf—the leaf in which the currently questioned data point resides. This is followed by identifying the foil-leaf, which is obtained by searching the tree with some strategy. Currently our strategy is simply to choose...\n\n3.",
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 6, |
| "total_chunks": 25, |
| "char_count": 375, |
| "word_count": 60, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "15cfa516-1b1e-4b04-ae09-7ad259a60fb7", |
| "text": "Generate or sample a local data set; either randomly sampled from an existing data set, generated according to a normal distribution, generated based on marginal distributions of feature values, or more complex methods.\n\n4. Train a decision tree; with sample weights depending on the training point's proximity or similarity to the data point in question.\n\n5. Locate the 'fact-leaf'; the leaf in which the data point in question resides.\n\n6. Locate a 'foil-leaf'; we select the leaf that classifies data points as part of the foil class with the lowest number of decision nodes between it and the fact-leaf.\n\n...the closest leaf to the fact-leaf that classifies data points as ... The strategy used in this preliminary study simply reduces to each edge having a weight of one, resulting in the nearest foil-leaf when minimizing the total weights. As an example, an improved strategy may be one where the edge weights are based on the relative accuracy of a node (based on its leaves) or leaf, where a higher accuracy results in a lower weight, allowing the strategy to find more distant, but more accurate, foil-leaves. This may result in relatively more complex and longer explanations, which nonetheless hold in more general cases. ...The nearest foil-leaf may only classify a few data points accurately, whereas a slightly more distant leaf classifies significantly more data points accurately. Given the fact that\n\n7.",
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 7, |
| "total_chunks": 25, |
| "char_count": 1444, |
| "word_count": 235, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "e332f10d-d9aa-402c-ad2d-5399be2af26b", |
| "text": "Compute differences; to obtain the two sets of rules that define the difference between fact- and foil-leaf, all common parent decision nodes are removed from each rule set. From the decision nodes that remain, those that regard the same feature are combined to form a single literal.\n\n8. Construct explanation; the actual presentation of the differences between the fact-leaf and foil-leaf.\n\n...an explanation should be both accurate and fairly general, this proposed strategy may be more beneficial (Craven & Shavlik, 1999).\n\nNote that the proposed method assumes knowledge of the used foil. In all cases we take the second most likely class as our foil. Although this may be an interesting foil, it may not be the contrast the user actually wants to make. Either the user makes their foil explicit or we introduce a feedback loop in the interaction that allows our approach to learn which foil is asked for in which situations. We leave this for future work.\n\nFigure 2 illustrates the aforementioned steps. The search for the appropriate foil-leaf in step 6 can vary. In Section 2.1 we discuss this in more detail. Finally, note that the method is not symmetrical. There will be a different answer to the question \"Why class A and not B?\" than to \"Why class B and not A?\", as the foil-tree is trained in the first case to identify class B and in the second case to identify class A. This is because we treat the foil as the expected class or the class of interest to which we compare everything else. In addition, even if the trees are similar, the relative complements of their rule sets are reversed.\n\n3. Validation\n\nThe proposed method is validated on three benchmark classification tasks from the UCI Machine Learning Repository (Dua & Karra Taniskidou, 2017); the Iris data set, the PIMA Indians Diabetes data set and the Cleveland Heart Disease data set. The first data set is a well-known classification task of plants based on four flower leaf characteristics, with a size of 150 data points and three classes.",
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 8, |
| "total_chunks": 25, |
| "char_count": 2016, |
| "word_count": 349, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "011ebfe6-533e-433b-8a72-c224c47cb3a7", |
| "text": "The second data set is a binary classification task whose task is to correctly diagnose diabetes; it contains 769 data points and has nine features. The third data set aims at classifying the risk of heart disease from no presence (0) to presence (1–4), consisting of 297 instances with 13 features.\n\nTo show the model-agnostic nature of our proposed method we applied four distinct classification models to each data set: a random forest, logistic regression, support vector machine (SVM) and a neural network. Table 1 shows for each data set and classifier the F1 score of the trained model.\n\nWe validated our approach on four measures; explanation length, accuracy, fidelity and time. These measures for evaluating iML decision rules are adapted from Craven & Shavlik (1999), where the mean length serves as a proxy measure demonstrating the relative explanation comprehensibility (Doshi-Velez & Kim, 2017). The fidelity allows us to state how well the tree explains the underlying model, and the accuracy tells us how well its explanations generalize to unseen data points.\n\n2.1. Foil-leaf strategies\n\nUp to now we mentioned one strategy to find a foil-leaf; however, multiple strategies are possible—although not all strategies may result in a satisfactory explanation according to the user. The strategy used in this study is simply the first leaf that is closest to the fact-leaf in terms of number of decision nodes, resulting in a minimal length explanation.\n\nA disadvantage of this strategy is its ignorance towards the value of the foil-leaf compared to the rest of the tree. The nearest foil-leaf may be a leaf that classifies only relatively few data points or classifies them with a relatively high error rate. To mitigate such issues the foil-leaf selection mechanism can be generalized to a graph-search from a specific (fact) vertex to a different (foil) vertex while minimizing edge weights. The foil-tree is treated as a graph whose decision node and leaf properties influence some weight function. This generalization allows for a number of strategies, and each may result in a different foil-leaf.\n\nFigure 2. The steps needed to define and train a Foil Tree and to use it to construct a contrastive explanation. Each step corresponds with the listed steps in section 2.\n\n...mean fidelity of 0.93 and generalizes well to unseen data",
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 9, |
| "total_chunks": 25, |
| "char_count": 2386, |
| "word_count": 387, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "655468b9-a181-48f3-adcb-2e09500b15b8", |
| "text": "Below we describe each in detail:\n\n1. Mean length; average length of the explanation in terms of decision nodes. The ideal value is in the range [1.0, Nr. features), since a length of 0 means that no explanation is found and a length near the number of features offers little gain compared to ...\n\n...with a mean accuracy of 0.92. The foil-tree performs similarly to the underlying ML model in terms of accuracy. For the random forest, logistic regression and SVM models on the diabetes data set rules of length zero were found—i.e. no explanatory differences were found between facts and foils in a number of cases—resulting in a mean length of less than one.",
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 10, |
| "total_chunks": 25, |
| "char_count": 653, |
| "word_count": 115, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "f35b1ee1-2e53-44a9-ad89-3fb39f64a29e", |
| "text": "For all other models our method was able to find a difference for every questioned data point.\n\n...showing the entire ordered feature contribution list as in other iML methods.\n\n2. Accuracy; F1 score of the foil-tree for its binary classification task on the test set compared to the true labels. This measure indicates how general the explanations generated from the Foil Tree are on an unseen test set.\n\n3. Fidelity; F1 score of the foil-tree on the test set compared to the model output. This measure provides a quantitative value of how well the Foil Tree agrees with the underlying classification model it tries to explain.\n\n4. Time; number of seconds needed on average to explain a test data point.\n\nEach measure is cross-validated three times to account for randomness in foil-tree construction. These results are shown in their respective columns in Table 1. They show that on average the Foil Tree is able to provide concise explanations, with a mean length of 1.33, while accurately mimicking the decision boundaries used by the model with a ...\n\nTo further illustrate the proposed method, below we present a single explanation of two classes of the Iris data set in a dialogue setting:\n\n• System: The flower type is 'Setosa'.\n• User: Why 'Setosa' and not 'Versicolor'?\n• System: Because for it to be 'Versicolor' the 'petal width (cm)' should be smaller and the 'sepal width (cm)' should be larger.\n• User: How much smaller and larger?\n• System: The 'petal width (cm)' should be smaller than or equal to 0.8 and the 'sepal width (cm)' should be larger than 3.3.\n\nThe fact is the 'Setosa' class, the foil is the 'Versicolor' class and the total length of the explanation contains two decision nodes or literals. The generation of this small dialogue is based on text templates and fixed interactions for the user.\n\nTable 1. Performance of foil-tree explanations on the Iris, PIMA Indians Diabetes and Heart Disease classification tasks. The column 'Mean length' also contains the total number of features for that data set as the upper bound of the explanation length.\n\nDATA SET | MODEL | F1 SCORE | MEAN LENGTH | ACCURACY | FIDELITY | TIME\nIRIS | RANDOM FOREST | 0.93 | 1.94 (4) | 0.96 | 0.97 | 0.014\nIRIS | LOGISTIC REGRESSION | 0.93 | 1.50 (4) | 0.89 | 0.96 | 0.007\nIRIS | SVM | 0.93 | 1.37 (4) | 0.89 | 0.92 | 0.010\nIRIS | NEURAL NETWORK | 0.97 | 1.32 (4) | 0.87 | 0.87 | 0.005",
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 11, |
| "total_chunks": 25, |
| "char_count": 2344, |
| "word_count": 400, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "c2c2b436-da47-47dc-ae2a-f26d13fa7801", |
| "text": "DIABETES | RANDOM FOREST | 1.00 | 0.98 (9) | 0.94 | 0.94 | 0.041\nDIABETES | LOGISTIC REGRESSION | 1.00 | 0.98 (9) | 0.94 | 0.94 | 0.032\nDIABETES | SVM | 1.00 | 0.98 (9) | 0.94 | 0.94 | 0.034\nDIABETES | NEURAL NETWORK | 1.00 | 1.66 (9) | 0.99 | 0.99 | 0.009\nHEART DISEASE | RANDOM FOREST | 0.94 | 1.32 (13) | 0.88 | 0.90 | 0.106\nHEART DISEASE | LOGISTIC REGRESSION | 1.00 | 1.21 (13) | 0.99 | 0.99 | 0.006\nHEART DISEASE | SVM | 1.00 | 1.19 (13) | 0.86 | 0.86 | 0.012\nHEART DISEASE | NEURAL NETWORK | 1.00 | 1.56 (13) | 0.92 | 0.92 | 0.009\n\nConclusion\n\nCurrent developments in Interpretable Machine Learning (iML) created new methods to answer \"Why output A?\" for Machine Learning (ML) models. A large set of such methods use the contributions of each feature used to classify A and then provide either a subset of features whose contribution is above a threshold, the entire ordered feature list, or simply apply it only to low-dimensional data.\n\nThis study proposes a novel method to reduce the number of contributing features for a class by answering a contrasting question of the form \"Why output A (fact) instead of output B (foil)?\" for an arbitrary data point. This allows us to construct an explanation in which only those features play a role that distinguish A from B. Our approach finds the contrastive explanation by taking the complement set of decision rules that cause the classification of A in the rule set of B. In this study we implemented this idea by training ...\n\n...underlying ML models to show its model-agnostic capacity. The results showed that for different classifiers our method is able to offer concise explanations that accurately describe the decision boundaries of the model it explains.\n\nAs mentioned, our future work will consist of extending this preliminary method with more foil-leaf search strategies as well as applying the method to more complex tasks and validating its explanations with users. Furthermore, we plan to extend the method with an adaptive foil-leaf search to adapt explanations towards a specific user based on user feedback.\n\nReferences\n\nBarocas, S. and Selbst, A. Big Data's Disparate Impact. Cal.",
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 12, |
| "total_chunks": 25, |
| "char_count": 1982, |
| "word_count": 326, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "2694e400-8bc3-4b2a-b895-d47f1c808590", |
| "text": "Rev., 104:671, 2016.\n\n...a decision tree to distinguish between B and not-B (one-versus-all approach). A fact-leaf is found in which the data point in question resides. Also, a foil-leaf is selected according to a strategy where all data points are classified as the foil (output B). We then form the contrasting rules by extracting the decision nodes in the sub-tree from the lowest common ancestor between the fact-leaf and foil-leaf, that hold for the foil-leaf but not for the fact-leaf. Overlapping rules are merged and eventually used to construct an explanation.\n\nWe introduced a simple and naive strategy of finding an ap-\n\nCaliskan, A., Bryson, J. J., and Narayanan, A. Semantics Derived Automatically from Language Corpora Contain Human-Like Biases. Science, 356(6334):183–186, 2017.\n\nChakraborty, S., Tomsett, R., Raghavendra, R., Harborne, D., Alzantot, M., Cerutti, F., Srivastava, M., Preece, A., Julier, S., Rao, R. M., Kelley, Troy D., Braines, D., Sensoy, M., Willis, C. Interpretability of Deep Learning Models: A Survey of Results. In IEEE Smart World Congr. ... Infrastruct.",
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 13, |
| "total_chunks": 25, |
| "char_count": 1090, |
| "word_count": 171, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "81623ab2-0d31-45e1-8129-cc553d40db30", |
| "text": "...Algorithms Multi-Organization Fed. IEEE, 2017.\n\n...propriate foil-leaf. We also provided an idea to extend this method with more complex and accurate strategies, which\n\nCoglianese, C. and Lehr, D.",
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 14, |
| "total_chunks": 25, |
| "char_count": 191, |
| "word_count": 28, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "08511113-1de9-406a-b4bf-781a7f6ae65c", |
| "text": "Regulating by Robot: Administrative Decision Making in the Machine-Learning Era. Geo. LJ, 105:1147, 2016.\n\n...is part of our future work. We plan a user validation of our explanations with non-experts in Machine Learning to test the satisfaction of our explanations. In this study we\n\nCraven, M.",
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 15, |
| "total_chunks": 25, |
| "char_count": 290, |
| "word_count": 46, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "4dd7f391-a7e6-424b-b517-92b7b4f3f19c", |
| "text": "Rule Extraction: Where Do We Go from Here? Technical report, University of Wisconsin Machine Learning Research Group, 1999.\n\n...tested if the proposed method is viable on three different benchmark tasks as well as to test its fidelity on different\n\nArtificial Intelligence's White Guy Problem. The New York Times, 2016.\n\nHuysmans, J., Dejaeger, K., Mues, C., Vanthienen, J., and ... An Empirical Evaluation of the Comprehensibility of Decision Table, Tree and Rule Based Predictive Models. Decis. Support Syst., 51(1):141–154, 2011. ISSN 01679236. doi: 10.1016/j.dss.2010.12.003.\n\nDatta, A., Sen, S., and Zick, Y. Algorithmic Transparency via Quantitative Input Influence: Theory and Experiments with Learning Systems. In Proc. 2016 IEEE Symp. Secur. Priv. (SP 2016), pp. 598–617. ISBN 9781509008247. doi: 10.1109/SP.2016.42.\n\nKrishnan, R., Sivakumar, G., and Bhattacharya, P. Extracting Decision Trees From Trained Neural Networks. Pattern Recognit., 32:1999–2009, 1999. doi: 10.1145/775047.775113.\n\nDhurandhar, Amit, Chen, Pin-Yu, Luss, Ronny, Tu, Chun-Chen, Ting, Paishun, Shanmugam, Karthikeyan, and ...\n\n...Trans. Syst. (TiiS), 1(1):2, 2011.",
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 16, |
| "total_chunks": 25, |
| "char_count": 1164, |
| "word_count": 162, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "866c5d07-4e30-4347-8255-6a94c37049da", |
| "text": "Towards A Rigorous Science of Interpretable Machine Learning. arXiv preprint\nKulesza, T., Burnett, M., Wong, W.-K., and Stumpf, S. Interactive Machine Learning. User Interfaces, pp. 126–137. ISBN 9781450321389. doi: 10.1145/2939672.2939778.\nEhsan, U., Harrison, B., Chan, L., and Riedl, M. Rationalization: A Neural Machine Translation Approach to Generating Natural Language Explanations. arXiv preprint\nLipton, Z. The Mythos of Model Interpretability. In 2016 ICML Work.\nLundberg, S. and Lee, S.-I. An Unexpected Unity Among Methods for Interpreting Model Predictions. Conf. Syst. (NIPS 2016), 2016.\nElenberg, E. G., Feldman, M., and Karbasi, A. Streaming Weak Submodularity: Interpreting Neural\nMalioutov, D. M., Varshney, K. R., Emad, A., and Dash, S., 2017.", |
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 18, |
| "total_chunks": 25, |
| "char_count": 713, |
| "word_count": 99, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "27bde2eb-0f01-4aad-8038-43c166a0b9bc", |
| "text": "Learning Interpretable Classification Rules with Boolean Compressed Sensing. In Transparent Data Mining for Big and Small Data. Studies in Big Data, 32, 2017. doi: 10.1007/\nFriedler, S. A., Scheidegger, C., Venkatasubramanian, S., Choudhary, S., Hamilton, E. A Compar-\nMiller, T., Howe, P., and Sonenberg, L. Explainable AI: Beware of Inmates Running the Asylum. Conf. Intell. (IJCAI), pp. 36–41, 2017.\nGuidotti, R., Monreale, A., Turini, F., Pedreschi, D., and Giannotti, F. A Survey Of Methods For Explaining Black Box Models. 2018.\nMontavon, G., Lapuschkin, S., Binder, A., Samek, W., and Müller, K. Explaining Nonlinear Classification Decisions with Deep Taylor Decomposition. Pattern Recognit., 65(C):211–222, 2017. ISSN 00313203. doi: 10.1016/j.patcog.2016.11.008.\nHein, D., Udluft, S., and Runkler, T. Interpretable Policies for Reinforcement Learning by Genetic Programming.\nNguyen, A., Dosovitskiy, A., Yosinski, J., Brox, T., and Clune, J.\nHendricks, L. A., Akata, Z., Rohrbach, M., Donahue, J.,", |
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 19, |
| "total_chunks": 25, |
| "char_count": 892, |
| "word_count": 124, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "13e80562-8767-49f1-bc59-2a651cd9984b", |
| "text": "Synthesizing the Preferred Inputs for Neurons in Neural Networks via Deep Generator Networks.\nSchiele, B., and Darrell, T. Generating Visual Explanations.", |
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 20, |
| "total_chunks": 25, |
| "char_count": 149, |
| "word_count": 21, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "cab01e18-878a-4256-9045-41ff6b3933d0", |
| "text": "Vis., pp. 3–19, 2016. ISBN 9783319464923. doi: 10.1007/978-3-319-46493-0_1.\nSyst., 29, 2016.\nHerman, B. The Promise and Peril of Human Evaluation for Model Interpretability.\nPacer, M. and Lombrozo, T. Ockham's Razor Cuts to the Root: Simplicity in Causal Explanation. Psychol.", |
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 21, |
| "total_chunks": 25, |
| "char_count": 271, |
| "word_count": 39, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "8fe0d32d-731e-4f24-bd1c-7177e34010f0", |
| "text": "Gen., 146(12):1761–1780, 2017. ISSN 1556-5068. doi: 10.1037/xge0000318.\nSyst., 2017.\nPuri, N., Gupta, P., Agarwal, P., Verma, S., and Krishnamurthy, B.", |
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 22, |
| "total_chunks": 25, |
| "char_count": 198, |
| "word_count": 26, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "6e4f71c8-539a-4e0f-b3a9-83b97ec74f00", |
| "text": "MAGIX: Model Agnostic Globally Interpretable Explanations. arXiv preprint\nRibeiro, M. T., Singh, S., and Guestrin, C. \"Why Should I Trust You?\": Explaining the Predictions of Any Classifier. In Proc. 22nd ACM SIGKDD Int. Conf. Knowl. Discov. Data Min. (KDD'16), pp. 1135–1144, 2016. ISBN 9781450321389. doi: 10.1145/2939672.2939778.\nSelvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., and Batra, D. Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization. ISBN 9781538610329. doi: 10.1109/ICCV.2017.74.\nSundararajan, M., Taly, A., and Yan, Q. Axiomatic Attribution for Deep Networks.\nThiagarajan, J. J., Kailkhura, B., Sattigeri, P., and Ramamurthy, K.", |
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 23, |
| "total_chunks": 25, |
| "char_count": 629, |
| "word_count": 86, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "97fc0c3c-5a7d-402f-bbae-13e4518d40f0", |
| "text": "TreeView: Peeking into Deep Neural Networks Via Feature-Space Partitioning.\nWang, T., Rudin, C., Velez-Doshi, F., Liu, Y., Klampfl, E., and Macneille, P. Bayesian Rule Sets for Interpretable Classification. Data Min. (ICDM), pp. 1269–1274. ISBN 9781509054725. doi: 10.1109/ICDM.2016.130.\nZhang, J., Bargal, S. A., Lin, Z., Brandt, J., Shen, X., and Sclaroff, S. Top-Down Neural Attention by Excitation Backprop.", |
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 24, |
| "total_chunks": 25, |
| "char_count": 411, |
| "word_count": 57, |
| "chunking_strategy": "semantic" |
| }, |
| { |
| "chunk_id": "3ed12b57-d00b-4206-b3c7-d5c80a213e0a", |
| "text": "Vis., pp. 1–19, 2017. ISSN 15731405. doi: 10.1007/s11263-017-1059-x.\nZhou, Y. and Hooker, G. Interpreting Models via Single Tree Approximation. 2016.", |
| "paper_id": "1806.07470", |
| "title": "Contrastive Explanations with Local Foil Trees", |
| "authors": [ |
| "Jasper van der Waa", |
| "Marcel Robeer", |
| "Jurriaan van Diggelen", |
| "Matthieu Brinkhuis", |
| "Mark Neerincx" |
| ], |
| "published_date": "2018-06-19", |
| "primary_category": "stat.ML", |
| "arxiv_url": "http://arxiv.org/abs/1806.07470v1", |
| "chunk_index": 25, |
| "total_chunks": 25, |
| "char_count": 129, |
| "word_count": 18, |
| "chunking_strategy": "semantic" |
| } |
| ] |