---
title: "DALEX: Descriptive mAchine Learning EXplanations"
author: "Przemysław Biecek"
date: "`r Sys.Date()`"
site: bookdown::bookdown_site
output:
  tint::tufte_html:
    split_by: chapter
documentclass: book
bibliography: [book.bib, packages.bib]
biblio-style: apalike
link-citations: yes
github-repo: rstudio/bookdown-demo
description: "Do not trust a black-box model. Unless it explains itself."
---
# Introduction
*Machine Learning* (ML) models have a wide range of applications in classification and regression problems. With the increasing computational power of computers and the growing complexity of data sources, ML models are becoming more and more sophisticated. Models created with techniques such as boosting or bagging of neural networks are parametrized by thousands of coefficients. They are opaque: it is hard to trace the link between input variables and model outcomes, so in practice they are treated as black boxes.
Such models are used because of their flexibility and high performance, but their lack of interpretability is one of their weakest points.
In many applications we need to know, understand, or prove how the input variables are used in the model and what impact particular variables have on the final predictions. We therefore need tools that can extract useful information from thousands of model parameters.
DALEX [see @DALEX] is an R [@RcoreT] package with such tools. DALEX helps to understand how complex models work.
In this document we show two typical use-cases for DALEX: one case will increase our understanding of a model, while the other will increase our understanding of predictions for particular data points.
<p><span class="marginnote">Figure 1.1. A typical machine learning modeling workflow. <br/>
A) Modeling is a process in which domain knowledge and data are turned into models. <br/>
B) Models are used to generate predictions. <br/>
C) Understanding a model's structure may increase our knowledge and, in consequence, may lead to a better model. DALEX helps here.<br/>
D) Understanding the drivers behind a particular model prediction may help to correct wrong decisions and, in consequence, may lead to a better model. DALEX helps here.
</span>
<img src="images/mp_understanding.png"/></p>
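Before diving into details, the minimal sketch below shows what working with DALEX looks like. It is illustrative only: it uses the `apartments` dataset shipped with DALEX and an ordinary linear model purely as an example; any other fitted model can be wrapped in the same way.

```{r explainer-sketch, eval=FALSE}
library(DALEX)

# fit any predictive model; here a simple linear model on the apartments data
# (this dataset ships with the DALEX package)
model_lm <- lm(m2.price ~ ., data = apartments)

# wrap the model, the data, and the true responses into an explainer object;
# all other DALEX explainers are built on top of such an object
explainer_lm <- explain(model_lm,
                        data  = apartments,
                        y     = apartments$m2.price,
                        label = "linear model")
```

The resulting `explainer_lm` object is then passed to the explainers presented in the following chapters.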
## Motivation
*Machine Learning* is a vague name. There is some *learning* and some *machines*, but what the heck is going on? What does it really mean? Is it possible that the meaning of this term evolves over time?
* A few years ago I would have said that the term refers to *machines learning from humans*. In supervised learning problems, a human creates a labeled dataset and machines are tuned/trained to predict the correct labels from data.
* Recently we have seen more and more examples of *machines learning from other machines*. Self-playing neural nets like AlphaGo Zero [@AlphaGoZero] learn from themselves at blazing speed. Humans are still involved in designing the learning environment, but labeling turns out to be very expensive or infeasible, so we look for other ways to learn from partial labels, fuzzy labels, or no labels at all.
* I can imagine that in the near future *humans will learn from machines*. Well-trained black boxes may teach us how to play Go better, how to read PET (Positron-Emission Tomography) images better, or how to diagnose patients better.
As human supervision over learning decreases, understanding black boxes becomes more and more important. To make this future possible, we need tools that extract useful information from black-box models.
DALEX is *the tool* for this.
### Why DALEX?
In recent years we have been observing an increasing interest in tools for knowledge extraction from complex machine learning models, see [@Strumbelj], [@nnet_vis], [@magix], [@Zeiler_Fergus_2014].
There are some very useful R packages that may be used for knowledge extraction from R models, see for example `pdp` [@pdp], `ALEPlot` [@ALEPlot], `randomForestExplainer` [@randomForestExplainer], `xgboostExplainer` [@xgboostExplainer], `live` [@live] and others.
Do we need yet another R package to better understand ML models?
I think so. There are some features available in the DALEX package which make it unique.
* Scope. DALEX is a wrapper for a large number of very good tools / model explainers. It offers a wide range of state-of-the-art techniques for model exploration. Some of these techniques are more useful for understanding model predictions; other techniques are more handy for understanding model structure.
* Consistency. DALEX offers a consistent grammar across various techniques for model explanation. It's a wrapper that smoothes differences across different R packages.
* Model agnostic. DALEX explainers are model agnostic. One can use them for linear models, tree ensembles, or other structures, hence we are not limited to any particular family of black-box models.
* Model comparisons. One can learn a lot from a single black-box model, but one can learn much more by contrasting models with different structures, like linear models with ensembles of trees. All DALEX explainers support model comparisons.
* Visual consistency. Each DALEX explainer can be plotted with the generic `plot()` function. These visual explanations are based on the `ggplot2` [@ggplot2] package, which generates elegant, customizable, and consistent graphs. A short sketch of this consistent grammar follows the list.
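To make this grammar more tangible, the sketch below contrasts two models of different classes using the same function calls and a single `plot()`. It is illustrative only: the `apartments` data ship with DALEX, while the `randomForest` package is used here merely as an example of a second model class.

```{r grammar-sketch, eval=FALSE}
library(DALEX)
library(randomForest)   # only an example of a second, very different model class

# two models with very different structures, fitted to the same data
model_lm <- lm(m2.price ~ ., data = apartments)
model_rf <- randomForest(m2.price ~ ., data = apartments)

# the same explain() call wraps both models
explainer_lm <- explain(model_lm, data = apartments, y = apartments$m2.price, label = "lm")
explainer_rf <- explain(model_rf, data = apartments, y = apartments$m2.price, label = "randomForest")

# the same explainer function and the same generic plot() support model comparisons
mp_lm <- model_performance(explainer_lm)
mp_rf <- model_performance(explainer_rf)
plot(mp_lm, mp_rf)
```

The same pattern (build explainers once, then pass them to any explainer function and to `plot()`) carries over to the other techniques discussed in the following chapters.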
Chapter \@ref(architecture) presents the overall architecture of the DALEX package.
Chapter \@ref(modelUnderstanding) presents explainers that explore global model performance, variable importance, and feature effects.
Chapter \@ref(predictionUnderstanding) presents explainers that explore feature attributions for single predictions and help to validate the reliability of individual model predictions.
In this document we focus on three primary use-cases for DALEX explainers.
### To validate
Explainers presented in Section \@ref(modelPerformance) help in understanding model performance and comparing performance of different models.
Explainers presented in Section \@ref(outlierDetection) help to identify outliers or observations with particularly large residuals.
Explainers presented in Section \@ref(predictionBreakdown) help to understand which key features influence model predictions.
### To understand
Explainers presented in Section \@ref(featureImportance) help to understand which variables are the most important in the model. Explainers presented in Section \@ref(predictionBreakdown) help to understand which features influence single predictions. They are useful in identifying key influencers behind the black-box.
Explainers presented in Section \@ref(variableResponse) help to understand how particular features affect model prediction.
### To improve
Explainers presented in Section \@ref(variableResponse) help to perform feature engineering based on the model's conditional responses.
Explainers presented in Section \@ref(predictionBreakdown) help to understand which variables result in incorrect model decisions. These explainers are useful in identifying and correcting biases in the training data.
## Trivia
<span class="marginnote">
![](images/dalex01small.jpg)
</span>
[The Daleks](https://en.wikipedia.org/wiki/Dalek) are a fictional extraterrestrial race portrayed in the [Doctor Who](https://en.wikipedia.org/wiki/Doctor_Who) BBC series. They are rather dim aliens, known for repeating the phrase *Explain!* very often.
Daleks were engineered. They consist of live bodies enclosed in tank-like robotic shells. They seem like nice mascots for explanations of Machine Learning models.
```{r bibliography, include=FALSE}
# automatically create a bib database for R packages
knitr::write_bib(c(
  .packages(), 'bookdown', 'knitr', 'rmarkdown'
), 'packages.bib')
```