[ { "chunk_id": "1a7605f7-a1ad-4e1f-83f4-e8a250254851", "text": "Dennis Collaris 1 Leo M. Vink 2 Jarke J. van Wijk 1 Abstract In order to find out how effective these explanations are in\nFraud detection is a difficult problem that can a real world application, we conducted a case study at a\nbenefit from predictive modeling. However, the large insurance firm. To this end, we designed two novel\nverification of a prediction is challenging; for a dashboards combining various state-of-the-art explanation\nsingle insurance policy, the model only provides techniques, extended where needed. They enable domain\na prediction score. We present a case study where experts to analyze and understand individual predictions of2018\nwe reflect on different instance-level model expla- Random Forest models. At the insurance firm, the dashtively identify potential fraud cases.Jun nationtheir work.techniquesTo thisto end,aid awefrauddesigneddetectiontwoteamnovelin boards are used to aid a fraud detection team to more effecdashboards combining various state-of-the-art ex-\n19 planation techniques. These enable the domain Theprovideremainderan overviewof thisof papercurrentisexplanationstructured astechniquesfollows: thatwe\nexpert to analyze and understand predictions, dra- are relevant for the interpretation of Random Forest models\nmatically speeding up the process of filtering po- in Section 2. Next, in Section 3 the case study and dashtential fraud cases.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 1, "total_chunks": 28, "char_count": 1389, "word_count": 195, "chunking_strategy": "semantic" }, { "chunk_id": "fcb9cd65-fdb5-4308-8651-f90561ef26bc", "text": "Finally, we discuss the lessons boards are presented. 
Applying these techniques in practice\nlearned and outline open research issues. revealed many issues and biases that need to be addressed. In Sections 4 and 5, we reflect on the lessons learned and[cs.LG] 1. Introduction outline open research issues to stimulate the potential of\nmodel explanations. Many Machine Learning models have been introduced to\nsolve tasks faster and more accurate.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 2, "total_chunks": 28, "char_count": 444, "word_count": 68, "chunking_strategy": "semantic" }, { "chunk_id": "41a5d61b-2a47-475a-b63b-053a26f81b51", "text": "However, along with\n2. Related work these improvements, the complexity of these models also\nrapidly increases. This negatively affects the comprehensi- As Random Forests can get notoriously complex, the interbility of these models. For instance, Random Forest models pretability of these models is increasingly important. We\nare often used for fraud detection. However, for models can distinguish between two types of approaches. Authors\ncomprised of hundreds of trees, it can be difficult to grasp either analyze the features in the context of a model or work\nwhich choices are made to yield a prediction. Especially on creating a simpler model that behaves and performs like\nfor applications where the consequences of a bad decision the original model. A visual overview of the taxonomy is\nare significant and the problem is difficult to predict, an shown in Figure 1.\nexplanation of the choices can be essential for the model to\nbe useful. Random Forest\nexplanation\nated explanations of the model prediction on a global level. 
However, a simple global explanation may omit many poten- Feature analysis Model simplification\ntially important details, decreasing accuracy with respect to\nthe reference model. To alleviate this problem, authors have\nFeature Sensitivity Meta-learning Model\ntaken a local approach: explanations that are simple and re- importance analysis condensing\nmain accurate by only explaining a single instance (Ribeiro\net al., 2016; Lundberg & Lee, 2017; Robnik-ˇSikonja, 2018). Taxonomy of explanation techniques for Random Forests. Feature analysis hoven University of Technology, The Netherlands 2Achmea BV. Learning (WHI 2018), Stockholm, Sweden.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 3, "total_chunks": 28, "char_count": 1672, "word_count": 251, "chunking_strategy": "semantic" }, { "chunk_id": "f17b81c0-beb6-4d1e-808e-38d99529887a", "text": "Copyright by the model. By understanding which features are more relevant,\nauthor(s). we reveal information about the decision making process. Instance-Level Explanations for Fraud Detection: A Case Study Feature importance Feature importance metrics enable Fraud detection, however, is a challenging problem. It is\nexperts to effectively compare and rank features. They fundamentally incomplete (Doshi-Velez, 2017) in the sense\noutput a single score for a feature based on their contribution that no perfect rule exists to distinguish a fraudulent case\nto the prediction. from a non-fraudulent one.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. 
van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 4, "total_chunks": 28, "char_count": 599, "word_count": 86, "chunking_strategy": "semantic" }, { "chunk_id": "13853e0c-bed2-4f8a-b067-d781865d717e", "text": "To substantiate this, Figure 2\nshows a data set of sick leave insurances plotted using t-SNE\nThis can be done globally or locally. In the original imple-\n(Maaten & Hinton, 2008). Similar insurances will appear\nmentation of Random Forests by Breiman (Breiman, 2001),\nclose to each other in the plot. For any perplexity value,\na global feature importance metric was already included,\nthe fraud cases are uniformly distributed among the rest of\nwhich was efficiently estimated due to random subspace\nthe data. This clearly shows that no particular subset of the\nprojection (Ho, 2002) in the training process.\ninsurances is more likely to contain fraud; fraud seems to\nRecently there have been efforts to create local feature appear in all shapes and sizes.\nimportance metrics specifically for Random Forests (Palczewska et al., 2014; Kuz'min et al., 2011; Altmann et al.,\n2010; Tolomei et al., 2017) as well as model-agnostic approaches (Lundberg & Lee, 2017; ˇStrumbelj et al., 2009;\nRobnik-ˇSikonja & Kononenko, 2008). Sensitivity analysis Another approach to analyze features\nis through sensitivity analysis (Cortez & Embrechts, 2013;\nGoldstein et al., 2015; Friedman, 2001; Welling et al., 2016;\nKrause et al., 2016; Lou et al., 2013). This approach analyzes how the output of the model changes when the value\nof a feature of interest is varied. This is an example of a\nmodel-agnostic (or black box) approach, as only the input\nand output of the model are considered. Figure 2. t-SNE projection of sick leave insurances, with 1000\n2.2. 
Model simplification iterations and perplexity 30.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 5, "total_chunks": 28, "char_count": 1587, "word_count": 254, "chunking_strategy": "semantic" }, { "chunk_id": "5e34a320-2739-419d-a120-c0cb9ad2d81c", "text": "Fraud cases are colored red. Model simplification methods take a reference model and\nderive a simpler model, while retaining the original behavior To aid fraud experts in their work, Achmea trained a preas as well as possible. These simplified models are far less dictive model to detect fraud among sick leave insurances.\ncomplex and thus easier to interpret, but at the expense A dataset of around 40,000 insurance policies was used, of\nof generality or accuracy. We distinguish two varieties which 129 records are labeled fraudulent. It contains 49 feaof methods: meta-learning, a black box approach where tures sourced from different internal systems; 8 categorical\nanother model is trained on synthetically generated data and 41 continuous. To achieve the best possible accuracy,\nfrom the reference model (Domingos, 1997; Stiglic & Kokol, Achmea created a complex bagging ensemble of 100 Ran-\n2007; Buciluˇa et al., 2006; Zhou et al., 2003; Ribeiro et al., dom Forests with 500 decision trees each. With an OOB\n2016), and model condensing, which is a white box method error of 27.7%, this model still makes mistakes.\nthat tries to remove the least relevant parts of the model\n(Assche & Blockeel, 2007; P´erez et al., 2007; Gurrutxaga The verification of the model prediction is challenging; for\net al., 2006; Deng, 2014; Hara & Hayashi, 2016). a single insurance policy, the fraud expert is only provided\nwith a prediction score (see Figure 3). 
Even if the model is very certain, manual investigation is required to validate the suspicion of fraud.\n\n3. Case study: insurance fraud detection\n\n3.1.\n\nA case study is carried out at Achmea BV: one of the leading providers of insurances in the Netherlands. A major concern for this company is fraud. As much as 5% of insurances are estimated to be fraudulent by the company. However, Achmea is only able to detect a fraction of the estimated amount of fraud. Naturally, there is high interest in automated fraud detection techniques.\n\nFigure 3. Fraud detection pipeline. (Diagram: policy data flows into the ML model, which provides the fraud expert with a prediction score, e.g. 88% risk of fraud; an explainer additionally produces an explanation.)", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 6, "total_chunks": 28, "char_count": 2083, "word_count": 341, "chunking_strategy": "semantic" },
{ "chunk_id": "ac9fef4c-26a5-4160-bdb1-4e4273cc2660", "text": "Contributions highlighted in blue.\n\nFraud detection augmented with model explanations. To provide additional explanations along with the prediction (blue highlights in Figure 3), two dashboards were designed. They combine feature importance, sensitivity analysis and model simplification techniques to enable the fraud detection team to more effectively identify potential fraud cases.\n\nFeature dashboard. This dashboard is centered around features, giving a per-feature explanation of its contribution. The main element is a table of features along with their values for the selected instance, ranked according to feature importance. Various visualization techniques can be chosen and configured, the table can be sorted, and tooltips reveal values behind visualizations. To not expose sensitive information, we show an example in Figure 4 explaining a Random Forest trained on the Pima Indian UCI data set (Smith et al., 1988).\n\nFigure 5. Rule dashboard for the Pima data set. (B) is a Sankey diagram representation of locally extracted decision rules. As these rules are discarding some detail from the model, an explicit indication of the faithfulness of the model is included (A).\n\n3.3. Explanation techniques used\n\nThe visualizations in the dashboards are made possible by the following techniques.\n\nFeature contribution. We use the instance-level feature importance method of Palczewska et al. (Palczewska et al., 2014) for the bar chart in Figure 4(a). It is a white-box approach as it utilizes the structure of the model in order to derive the contribution of a feature to the final prediction.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 7, "total_chunks": 28, "char_count": 1655, "word_count": 249, "chunking_strategy": "semantic" },
{ "chunk_id": "1749e2cc-3631-40f1-9027-5ccadc91e048", "text": "It is based on the concept of local increments LI^c_f for a feature f between a parent node p and child node c:\n\nLI^c_f = Y_mean,c − Y_mean,p if the parent p splits on feature f, and LI^c_f = 0 otherwise. (1)\n\nwhere Y_mean,N is the probability that an arbitrary element from the training data subset in node N belongs to the target class. This metric is closely related to Gini impurity (Breiman et al., 1984), the split criterion used for decision trees in a Random Forest, but is specific to the target class. The feature contribution FC^{i,t}_f for an instance i is first calculated for every tree t in the forest as\n\nFC^{i,t}_f = sum over N in R_{i,t} of LI^N_f (2)\n\nwhere R_{i,t} is the composition of all nodes on the path of instance i from the root node to the leaf node in tree t. Next, the contribution of a Random Forest can be computed by averaging over all trees.\n\nFigure 4. Feature dashboard for the Pima data set. (A) shows a bar chart expressing feature contribution to the target class. A feature with negative contribution indicates this instance is less likely to belong to the target class. (B) shows partial dependence plots, showing the impact of changing the feature value (indicated with a vertical line) on the final prediction. Along with these model explanations, the distributions of the two classes of training data and the current case are presented (C).\n\nRule dashboard. The second dashboard takes the possible target classes as a starting point and uses model simplification to present a set of rules that describe the choices the model made for the prediction of those classes. An example is shown in Figure 5. Locally extracted decision rules are visualized as a Sankey diagram. The ratio of color in the first block corresponds to the model posterior probability. Next, a number of rules for the class are connected, where the width of the edge corresponds to the rule importance. Every rule is connected to one or more constraints, where the width of these edges corresponds to the feature contribution. Clicking on a constraint in the diagram reveals more information about that feature, such as a histogram and partial dependence plot. As these rules are discarding some details from the model, an explicit indication of the faithfulness of the explanation is included at Figure 5(a).\n\nPartial dependence. Feature contribution is unable to capture the influence of the value of a feature on the prediction. To obtain this insight, we use a sensitivity analysis technique by Friedman (Friedman, 2001) called partial dependence.\n\n4. Discussion", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 8, "total_chunks": 28, "char_count": 2574, "word_count": 434, "chunking_strategy": "semantic" },
{ "chunk_id": "2387082e-26ae-4cf7-9ad3-fcbcf6ea374a", "text": "It is visualized using line charts in Figure 4(b).\n\nWe have applied our methods to the sick leave insurance data", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 9, "total_chunks": 28, "char_count": 110, "word_count": 19, "chunking_strategy": "semantic" },
{ "chunk_id": "96cf4765-2f1e-47d1-8567-e9b4444db600", "text": "and presented the results to five fraud experts. In general, they were very positive and considered it a highly useful tool to accelerate their understanding. From the different dashboards, they preferred the rule-based version, as they found it to be clear and concise. The partial dependence plots were less appreciated, but for their cases, most of these showed almost flat curves.\n\nFor a local understanding on a feature fa of instance i, n uniformly distributed points are sampled along the range of this feature. Next, n records with values of instance i of all features f with f ∈ F, f ≠ fa are created and the uniformly sampled values are used for feature fa. Finally, a prediction score is obtained for all n created records and plotted against the", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. 
Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 10, "total_chunks": 28, "char_count": 753, "word_count": 124, "chunking_strategy": "semantic" }, { "chunk_id": "a91f1dbc-c986-4631-a256-fd864a7d7d73", "text": "However, this application invalues of fa. The resulting curve shows how the prediction\npractice also revealed many issues and biases that need toscore changes when feature fa in instance i is varied.\nbe addressed. For a global understanding, the instance-level sensitivity\nanalysis results for all k training records can be combined. Understanding explanations First and foremost, it was\nEither the mean of all prediction scores is plotted for each challenging to evaluate explanations. Even though recent\nof the n samples (Friedman, 2001), or k different lines are literature tries to address this issue (Lipton, 2016; Doshiplotted to reveal an overall trend (Goldstein et al., 2015). Velez, 2017), the community is far from reaching consensus\non what best practices are.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 11, "total_chunks": 28, "char_count": 772, "word_count": 119, "chunking_strategy": "semantic" }, { "chunk_id": "da99c4aa-8196-45cd-ac8b-e6e2fd49e223", "text": "Local rule extraction By using model simplification we We applied three explanation techniques that yielded differcan present a simplified model as an explanation. A popular ent results for the insurance case; features with the highest\nmethod of doing this is by extracting logical rules. 
How- contribution did not correspond with features with the highever, to the best of our knowledge, local rule extraction est variance in partial dependence. Likewise, the most\ntechniques have not yet been proposed. To this end, we important local rules used yet another set of important feacombined existing approaches to obtain a concise set of tures. These explanations may be equally valid and useful,\ndecision rules that only has to be faithful locally. These but do not establish trust in the system, nor will they provide\nrules are represented as a Sankey diagram in Figure 5(b). a coherent explanation when combined. More research is\nFirst, a synthetic pruning data set is obtained in the local needed to understand the solution space of possible explavicinity of instance i of interest. This can be done by uni- nations and to identify trade-offs between desiderata.\nformly sampling from an n-ball with n = |F|, radius r = δ Alarmingly, this incongruency did not affect the evaluation\nand centered at instance i. These records are labeled by by both fraud team nor various data science teams at the\nthe reference Random Forest.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 12, "total_chunks": 28, "char_count": 1425, "word_count": 232, "chunking_strategy": "semantic" }, { "chunk_id": "bd1bf53b-019c-4fe2-a199-365eb3a0595b", "text": "This method is similar to the insurance firm. They readily trusted the provided explanamethod used by Ribeiro et al. (Ribeiro et al., 2016) but tion and did not question their validity, even when provoked.\nyields discrete records rather than weighted ones. 
There seems to be an Illusion Of Explanatory Depth (Keil,\nNext, all decision rules applicable to instance i are extracted 2006) causing overconfidence of understanding and the disfrom the Random Forest by extracting the path from root to regard of uncertainties. This can be especially dangerous\nleaf node when classifying instance i for every tree. These considering various works on the topic of explainability evaldecision rules are first pruned by iteratively removing con- uate their systems by means of user testing (Doshi-Velez,\nstraints from the rule and leaving them out when the impact 2017; Ribeiro et al., 2016; Tolomei et al., 2017).\non the prediction on the synthetic pruning set is not worse Another issue is that the fraud experts confused the prethan a given threshold. Duplicates introduced as result of sented explanations for actual causality. Using explanations\npruning are removed. in this context only provides a conjecture on what possiFinally, the relevance of each rule is estimated by a tech- ble causalities may exist, based on the correlations found\nnique introduced by Deng (Deng, 2014). A binary matrix is by the classifier. We should be very careful not to present\ncreated with the synthetic pruning data along the rows and misleading explanations to our users.\nset of rules along the columns. Another Random Forest is\ntrained to predict the labels of the synthetic pruning data. Data quality The real world data set introduced more difThe global feature importance from this Random Forest now ficulties as compared to standardized UCI data sets. Missing\nconstitutes a metric of importance for individual rules. By values and imbalanced data have a significant impact on the\nusing regularization (Deng & Runger, 2013), this impor- interpretability, and should always be considered when cretance metric can be biased to favor shorter rules. Discarding ating explanations. 
For instance, if the feature cannot be\nirrelevant rules with rule importance below a given thresh- meaningfully explained, this will have a direct impact on\nold yields a set of relevant rules that are locally relevant the interpretability of the explanation, regardless of the clasaround instance i. sifier.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 13, "total_chunks": 28, "char_count": 2467, "word_count": 388, "chunking_strategy": "semantic" }, { "chunk_id": "ba0e82e4-8730-4caf-b001-704a38eaeebb", "text": "Instance-Level Explanations for Fraud Detection: A Case Study Likewise, the value of a feature can also lose meaning by References\nimputed values that do not follow the feature distribution\nAltmann, Andr´e, Toloi, Laura, Sander, Oliver, and Lengauer,\n(e.g., 9999). In our project imputations skewed histograms\nThomas. Permutation importance: A corrected feature\nobscuring the actual trend, and shifted the decision boundary\nimportance measure. Bioinformatics, 26(10):1340–1347,\nfor features to unrealistic values (e.g., the constraint Fraud\n2010. ISSN 13674803. doi: 10.1093/bioinformatics/\nwhen the duration of sickness is less than 50 years would\nbtq134.\nonly select non-imputed values). Additionally, heavy imbalance can make showing his- Assche, Anneleen Van and Blockeel, Hendrik.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. 
van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 14, "total_chunks": 28, "char_count": 785, "word_count": 107, "chunking_strategy": "semantic" }, { "chunk_id": "2ea27716-aac8-40bd-8a19-141097f02d6f", "text": "Seeing\ntograms of data impossible without some form of normal- the forest through the trees: Learning a comprehensiization. This, in turn, can mislead the expert. ble model from an ensemble. In European Conference\non Machine Learning, pp. 418–429. Generality We found that global insights are often too\nBreiman, Leo.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 15, "total_chunks": 28, "char_count": 316, "word_count": 49, "chunking_strategy": "semantic" }, { "chunk_id": "a7d2f806-08b2-4666-8810-222121eac3ff", "text": "Machine learning, 45(1):\nsimplistic to capture the behavior of a complex model. The\n5–32, 2001.\nfraud model has various different 'strategies' to detect fraud,\nthat will be lost when averaging over all local effects like Breiman, Leo, Friedman, Jerome, Olshen, RA, and Stone,\ndone with feature importance metrics and partial depen- Charles J. Classification and Regression Trees (CART),\ndence. volume 40. 09 1984. The latter technique did not even work in a local setting\nBuciluˇa, Cristian, Caruana, Rich, and Niculescu-Mizil,for the complex model: no single feature had a significant\nAlexandru.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. 
van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 16, "total_chunks": 28, "char_count": 596, "word_count": 90, "chunking_strategy": "semantic" }, { "chunk_id": "b487f9a9-3104-47c9-b8e1-c76ee8aed5fb", "text": "Proceedings of the 12thimpact on the prediction. Rather, the prediction is based\nACM SIGKDD international conference on Knowledgeon various features in unison. Such interactions are not\ndiscovery and data mining - KDD '06, pp. 535, 2006. doi:captured in partial dependence.\n10.1145/1150402.1150464. The Random Forest model tested on was vastly complex\n(1.3 million decisions), but we were still able to obtain sim- Cortez, Paulo and Embrechts, Mark J. Using sensitivity\nple and reasonably accurate explanations by considering analysis and visualization techniques to open black box\nsingle instances. However, even though instance-level ex- data mining models. Information Sciences, 225:1–17,\nplanations offer a solution for this case, we argue this makes 2013.\nit challenging to explore what is happening on the global\nlevel; exploring many instance-level explanations is imprac- Deng, Houtao. Interpreting tree ensembles with inTrees.\nexplanations may again be misleading, as the expert may\nDeng, Houtao and Runger, George. Gene selection withfalsely assume that the presented behavior applies in more\nguided regularized random forest. Pattern Recognition,situations than just the instance.\n46(12):3483–3489, 2013. Conclusion Domingos, Pedro.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. 
van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 17, "total_chunks": 28, "char_count": 1243, "word_count": 173, "chunking_strategy": "semantic" }, { "chunk_id": "823ca536-f8c8-45e2-bfe3-4152c387993b", "text": "Knowledge acquisition from examples\nvia multiple models. In Proceedings of the Fourteenth\nIn order to find out how effective these explanations are in a International Conference on Machine Learning, pp. 98–\nreal world application, we have conducted a case study at 106, 1997. We created two dashboards to enable domain expert to analyze and understand individual predictions. The Doshi-Velez, Finale; Kim, Been.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 18, "total_chunks": 28, "char_count": 411, "word_count": 62, "chunking_strategy": "semantic" }, { "chunk_id": "8258d43b-0d57-4179-9de1-65aa59eba8f8", "text": "Towards a rigorous\nlocal focus allowed us to explain a very complex model, science of interpretable machine learning. In eprint\nthat different explanation techniques may yield widely varying results, yet may all be considered reasonably valid and Friedman, Jerome H. Greedy function approximation: A\nuseful. This incongruency is unclear to the domain experts, gradient boosting machine.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. 
van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 19, "total_chunks": 28, "char_count": 386, "word_count": 56, "chunking_strategy": "semantic" },
{ "chunk_id": "7196bbab-880a-4bdb-93c6-c7bdb9f45c5c", "text": "who were eager to trust any explanation provided to them. This can be especially dangerous for application-grounded evaluation of explanation techniques. Finally, data quality can have a significant impact on the explanation and should not be taken for granted.\nAnnals of Statistics, 29(5):1189–1232, 2001. ISSN 00905364. doi: 10.1214/aos/1013203451.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 20, "total_chunks": 28, "char_count": 416, "word_count": 59, "chunking_strategy": "semantic" },
{ "chunk_id": "8216164b-667b-4407-a556-8a3ee3808189", "text": "Goldstein, Alex, Kapelner, Adam, Bleich, Justin, and Pitkin, Emil. Peeking inside the black box: Visualizing statistical learning with plots of individual conditional expectation. Journal of Computational and Graphical Statistics, 24(1):44–65, 2015.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 21, "total_chunks": 28, "char_count": 245, "word_count": 31, "chunking_strategy": "semantic" },
{ "chunk_id": "fe4aa3a1-0d73-4acc-b50c-cf6bd5214f3e", "text": "Gurrutxaga, Ibai, Pérez, Jesús Ma, Arbelaitz, Olatz, Muguerza, Javier, Martín, José I, and Ansuategi, Ander. CTC: An alternative to extract explanation from bagging. Artificial Intelligence, 4177(1):49–62, 2006. ISSN 03029743. doi: 10.1007/11881216.\nRibeiro, Marco Tulio, Singh, Sameer, and Guestrin, Carlos. Why should I trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1135–1144, 2016.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 22, "total_chunks": 28, "char_count": 537, "word_count": 73, "chunking_strategy": "semantic" },
{ "chunk_id": "8d2db889-d92e-4a51-94fe-ea4ff308f748", "text": "Hara, Satoshi and Hayashi, Kohei. Making tree ensembles interpretable. ICML Workshop on Human Interpretability in Machine Learning, pp. 81–85, 2016.\nRobnik-Šikonja, Marko. Explanation of prediction models\nRobnik-Šikonja, Marko and Kononenko, Igor. Explaining classifications for individual instances. IEEE Transactions on Knowledge and Data Engineering, 20(5):589–600, 2008.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 23, "total_chunks": 28, "char_count": 198, "word_count": 25, "chunking_strategy": "semantic" },
{ "chunk_id": "ca68b8ed-dd21-42e8-859f-24e06215b712", "text": "Ho, Tin Kam. A data complexity analysis of comparative advantages of decision forest constructors. Pattern Analysis & Applications, 5(2):102–112, 2002.\nKeil, Frank C. Explanation and understanding. Annu. Rev. Psychol., 57:227–254, 2006.\nSmith, Jack W, Everhart, JE, Dickson, WC, Knowler, WC, and Johannes, RS. Using the ADAP learning algorithm to forecast the onset of diabetes mellitus. In Proceedings of the Annual Symposium on Computer Application in Medical Care, pp. 261. American Medical Informatics Association, 1988.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 24, "total_chunks": 28, "char_count": 545, "word_count": 74, "chunking_strategy": "semantic" },
{ "chunk_id": "ec442d77-ec21-45dd-af81-2b060de4df99", "text": "Krause, Josua, Perer, Adam, and Ng, Kenney. Interacting with predictions: Visual inspection of black-box machine learning models. ACM Conference on Human Factors in Computing Systems, pp. 5686–5697, 2016. doi: 10.1145/2858036.2858529.\nKuz'min, Victor E, Polishchuk, Pavel G, Artemenko, Anatoly G, and Andronati, Sergey A. Interpretation of QSAR models based on random forest methods. Molecular Informatics, 30(6-7):593–603, 2011.\nLipton, Zachary C. The mythos of model interpretability. ICML Workshop on Human Interpretability in Machine Learning, pp. 96–100, 2016.\nStiglic, Gregor and Kokol, Peter. Evolutionary approach to combined multiple models tuning. International Journal of Knowledge-based and Intelligent Engineering Systems, 11(4):227–235, 2007. ISSN 1327-2314.\nŠtrumbelj, Erik, Kononenko, Igor, and Šikonja, M Robnik. Explaining instance classifications with interactions of subsets of feature values. Data & Knowledge Engineering, 68(10):886–904, 2009.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 25, "total_chunks": 28, "char_count": 947, "word_count": 121, "chunking_strategy": "semantic" },
{ "chunk_id": "320ee5f5-2719-432d-a1bd-5d22992dc5f1", "text": "Lou, Yin, Caruana, Rich, Gehrke, Johannes, and Hooker, Giles. Accurate intelligible models with pairwise interactions. Proceedings of the 2013 KDD Conference on Knowledge Discovery and Data Mining, pp. 623, 2013. doi: 10.1145/2487575.2487579.\nLundberg, Scott M and Lee, Su-In. A unified approach to interpreting model predictions. In Advances in Neural Information Processing Systems, 2017.\nMaaten, Laurens van der and Hinton, Geoffrey. Visualizing data using t-SNE. Journal of Machine Learning Research, 9(Nov):2579–2605, 2008.\nTolomei, Gabriele, Silvestri, Fabrizio, Haines, Andrew, and Lalmas, Mounia. Interpretable predictions of tree-based ensembles via actionable feature tweaking. In Proceedings of the 23rd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 465–474. ACM, 2017.\nWelling, Soeren H, Refsgaard, Hanne HF, Brockhoff, Per B, and Clemmensen, Line H. Forest floor visualizations of random forests.\nZhou, Zhi-Hua, Jiang, Y., and Chen, S.-F. Extracting symbolic rules from trained neural network ensembles. AI Communications, 16(1):3–15, 2003. ISSN 09217126.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 26, "total_chunks": 28, "char_count": 1101, "word_count": 146, "chunking_strategy": "semantic" },
{ "chunk_id": "01087a66-8fa4-4a18-bb11-5247c0df7ed9", "text": "Palczewska, Anna, Palczewski, Jan, Robinson, Richard Marchese, and Neagu, Daniel. Interpreting random forest classification models using a feature contribution method. In Integration of reusable systems, pp. 193–218. Springer, 2014.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 27, "total_chunks": 28, "char_count": 143, "word_count": 20, "chunking_strategy": "semantic" },
{ "chunk_id": "c80c97f0-9518-410a-954d-2734a8ade96b", "text": "Pérez, Jesús M., Muguerza, Javier, Arbelaitz, Olatz, Gurrutxaga, Ibai, and Martín, José I. Combining multiple class distribution modified subsamples in a single tree. Pattern Recognition Letters, 28(4):414–422, 2007. ISSN 01678655. doi: 10.1016/j.patrec.2006.08.013.", "paper_id": "1806.07129", "title": "Instance-Level Explanations for Fraud Detection: A Case Study", "authors": [ "Dennis Collaris", "Leo M. Vink", "Jarke J. van Wijk" ], "published_date": "2018-06-19", "primary_category": "cs.LG", "arxiv_url": "http://arxiv.org/abs/1806.07129v1", "chunk_index": 28, "total_chunks": 28, "char_count": 175, "word_count": 19, "chunking_strategy": "semantic" } ]