Diffstat (limited to 'site-from-md/posts/2018-12-02-lime-shapley.html')
-rw-r--r--  site-from-md/posts/2018-12-02-lime-shapley.html  202
1 files changed, 202 insertions, 0 deletions
diff --git a/site-from-md/posts/2018-12-02-lime-shapley.html b/site-from-md/posts/2018-12-02-lime-shapley.html
new file mode 100644
index 0000000..cd7903b
--- /dev/null
+++ b/site-from-md/posts/2018-12-02-lime-shapley.html
@@ -0,0 +1,202 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>Shapley, LIME and SHAP</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script data-isso="/comments/"
+ data-isso-css="true"
+ data-isso-lang="en"
+ data-isso-reply-to-self="false"
+ data-isso-require-author="true"
+ data-isso-require-email="true"
+ data-isso-max-comments-top="10"
+ data-isso-max-comments-nested="5"
+ data-isso-reveal-on-click="5"
+ data-isso-avatar="true"
+ data-isso-avatar-bg="#f0f0f0"
+ data-isso-avatar-fg="#9abf88 #5698c4 #e279a3 #9163b6 ..."
+ data-isso-vote="true"
+ data-vote-levels=""
+ src="/comments/js/embed.min.js"></script>
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="../blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="../index.html">About</a><a href="../postlist.html">All posts</a><a href="../blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <h2> Shapley, LIME and SHAP </h2>
+ <p>Posted on 2018-12-02 | <a href="/posts/2018-12-02-lime-shapley.html#isso-thread">Comments</a> </p>
+<nav id="TOC">
+<ul>
+<li><a href="#shapley-values">Shapley values</a></li>
+<li><a href="#lime">LIME</a></li>
+<li><a href="#shapley-values-and-lime">Shapley values and LIME</a></li>
+<li><a href="#shap">SHAP</a></li>
+<li><a href="#evaluating-shap">Evaluating SHAP</a></li>
+<li><a href="#references">References</a></li>
+</ul>
+</nav>
+<p>In this post I explain LIME (Ribeiro et al. 2016), the Shapley values (Shapley, 1953) and the SHAP values (Strumbelj-Kononenko, 2014; Lundberg-Lee, 2017).</p>
+<p><strong>Acknowledgement</strong>. Thanks to Josef Lindman Hörnlund for bringing the LIME and SHAP papers to my attention. This research was done while I was working at the KTH mathematics department.</p>
+<p><em>If you are reading on a mobile device, you may need to “request desktop site” for the equations to be properly displayed. This post is licensed under CC BY-SA and GNU FDL.</em></p>
+<h2 id="shapley-values">Shapley values</h2>
+<p>A coalitional game <span class="math inline">\((v, N)\)</span> of <span class="math inline">\(n\)</span> players involves</p>
+<ul>
+<li>The set <span class="math inline">\(N = \{1, 2, ..., n\}\)</span> that represents the players.</li>
+<li>A function <span class="math inline">\(v: 2^N \to \mathbb R\)</span>, where <span class="math inline">\(v(S)\)</span> is the worth of coalition <span class="math inline">\(S \subset N\)</span>.</li>
+</ul>
+<p>The Shapley values <span class="math inline">\(\phi_i(v)\)</span> of such a game specify a fair way to distribute the total worth <span class="math inline">\(v(N)\)</span> among the players. They are defined as follows (for a set <span class="math inline">\(S \subset N\)</span> we write <span class="math inline">\(s = |S|\)</span> for the number of elements of <span class="math inline">\(S\)</span>, and use the shorthands <span class="math inline">\(S - i := S \setminus \{i\}\)</span> and <span class="math inline">\(S + i := S \cup \{i\}\)</span>):</p>
+<p><span class="math display">\[\phi_i(v) = \sum_{S: i \in S} {(n - s)! (s - 1)! \over n!} (v(S) - v(S - i)).\]</span></p>
+<p>It is not hard to see that <span class="math inline">\(\phi_i(v)\)</span> can be viewed as an expectation:</p>
+<p><span class="math display">\[\phi_i(v) = \mathbb E_{S \sim \nu_i} (v(S) - v(S - i))\]</span></p>
+<p>where <span class="math inline">\(\nu_i(S) = n^{-1} {n - 1 \choose s - 1}^{-1} 1_{i \in S}\)</span>; that is, first pick the size <span class="math inline">\(s\)</span> uniformly from <span class="math inline">\(\{1, 2, ..., n\}\)</span>, then pick <span class="math inline">\(S\)</span> uniformly from the subsets of <span class="math inline">\(N\)</span> that have size <span class="math inline">\(s\)</span> and contain <span class="math inline">\(i\)</span>.</p>
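+<p>To make the definition concrete, here is a minimal Python sketch (mine, not from any of the papers) that computes the Shapley values of a small game by direct enumeration; the toy characteristic function at the end is an assumption for illustration.</p>
+<pre><code>from itertools import combinations
+from math import factorial
+
+def shapley(v, n):
+    """Exact Shapley values by enumeration:
+    phi_i = sum over S containing i of (n-s)!(s-1)!/n! (v(S) - v(S - i))."""
+    phi = [0.0] * n
+    for i in range(n):
+        others = [j for j in range(n) if j != i]
+        for s in range(1, n + 1):
+            for rest in combinations(others, s - 1):
+                S = frozenset(rest) | {i}
+                weight = factorial(n - s) * factorial(s - 1) / factorial(n)
+                phi[i] += weight * (v(S) - v(S - {i}))
+    return phi
+
+# A toy 3-player game: v(S) = |S|^2.  By symmetry each player gets
+# (v(N) - v(empty)) / 3 = 3.
+v = lambda S: len(S) ** 2
+print(shapley(v, 3))  # [3.0, 3.0, 3.0]
+</code></pre>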
+<p>The Shapley values satisfy some nice properties, all readily verified (and spot-checked numerically after the list), including:</p>
+<ul>
+<li><strong>Efficiency</strong>. <span class="math inline">\(\sum_i \phi_i(v) = v(N) - v(\emptyset)\)</span>.</li>
+<li><strong>Symmetry</strong>. If for some <span class="math inline">\(i, j \in N\)</span>, for all <span class="math inline">\(S \subset N\)</span>, we have <span class="math inline">\(v(S + i) = v(S + j)\)</span>, then <span class="math inline">\(\phi_i(v) = \phi_j(v)\)</span>.</li>
+<li><strong>Null player</strong>. If for some <span class="math inline">\(i \in N\)</span>, for all <span class="math inline">\(S \subset N\)</span>, we have <span class="math inline">\(v(S + i) = v(S)\)</span>, then <span class="math inline">\(\phi_i(v) = 0\)</span>.</li>
+<li><strong>Linearity</strong>. <span class="math inline">\(\phi_i\)</span> is linear in games. That is <span class="math inline">\(\phi_i(v) + \phi_i(w) = \phi_i(v + w)\)</span>, where <span class="math inline">\(v + w\)</span> is defined by <span class="math inline">\((v + w)(S) := v(S) + w(S)\)</span>.</li>
+</ul>
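+<p>These properties can be spot-checked numerically with the <code>shapley</code> sketch above; the toy games below are my own choices for illustration.</p>
+<pre><code>from math import isclose
+
+n = 3
+v = lambda S: len(S) ** 2              # all players interchangeable
+w = lambda S: 1.0 if 0 in S else 0.0   # only player 0 matters
+phi_v, phi_w = shapley(v, n), shapley(w, n)
+
+N, empty = frozenset(range(n)), frozenset()
+assert isclose(sum(phi_v), v(N) - v(empty))            # Efficiency
+assert isclose(phi_v[0], phi_v[1])                     # Symmetry
+assert isclose(phi_w[1], 0, abs_tol=1e-12)             # Null player
+phi_vw = shapley(lambda S: v(S) + w(S), n)
+assert all(isclose(a + b, c)                           # Linearity
+           for a, b, c in zip(phi_v, phi_w, phi_vw))
+</code></pre>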
+<p>In the literature, the additional assumption <span class="math inline">\(v(\emptyset) = 0\)</span> is often made, in which case the Efficiency property reads <span class="math inline">\(\sum_i \phi_i(v) = v(N)\)</span>. Here I discard this assumption to avoid minor inconsistencies across different sources. For example, in the LIME paper, the local model is defined without an intercept, even though the underlying <span class="math inline">\(v(\emptyset)\)</span> may not be <span class="math inline">\(0\)</span>. In the SHAP paper, an intercept <span class="math inline">\(\phi_0 = v(\emptyset)\)</span> is added, which fixes this problem when making connections to the Shapley values.</p>
+<p>Conversely, according to Strumbelj-Kononenko (2010), it was shown in Shapley's original paper (Shapley, 1953) that these four properties together with <span class="math inline">\(v(\emptyset) = 0\)</span> uniquely determine the Shapley values.</p>
+<h2 id="lime">LIME</h2>
+<p>LIME (Ribeiro et al. 2016) is a method for explaining the feature contributions of a supervised learning model locally.</p>
+<p>Let <span class="math inline">\(f: X_1 \times X_2 \times ... \times X_n \to \mathbb R\)</span> be a function. We can think of <span class="math inline">\(f\)</span> as a model, where <span class="math inline">\(X_j\)</span> is the space of the <span class="math inline">\(j\)</span>th feature. For example, in a language model, <span class="math inline">\(X_j\)</span> may correspond to the count of the <span class="math inline">\(j\)</span>th word in the vocabulary, i.e. the bag-of-words model.</p>
+<p>The output may be something like a housing price or a log-probability.</p>
+<p>LIME tries to assign a value to each feature <em>locally</em>. By locally, we mean that given a specific sample <span class="math inline">\(x \in X := \prod_{i = 1}^n X_i\)</span>, we want to fit a model around it.</p>
+<p>More specifically, let <span class="math inline">\(h_x: 2^N \to X\)</span> be a function defined by</p>
+<p><span class="math display">\[(h_x(S))_i =
+\begin{cases}
+x_i, &amp; \text{if }i \in S; \\
+0, &amp; \text{otherwise.}
+\end{cases}\]</span></p>
+<p>That is, <span class="math inline">\(h_x(S)\)</span> masks the features that are not in <span class="math inline">\(S\)</span>, or in other words, we are perturbing the sample <span class="math inline">\(x\)</span>. Specifically, <span class="math inline">\(h_x(N) = x\)</span>. Alternatively, the <span class="math inline">\(0\)</span> in the "otherwise" case can be replaced by some kind of default value (see the section titled SHAP in this post).</p>
+<p>For a set <span class="math inline">\(S \subset N\)</span>, let <span class="math inline">\(1_S \in \{0, 1\}^n\)</span> denote the <span class="math inline">\(n\)</span>-bit vector whose <span class="math inline">\(k\)</span>th bit is <span class="math inline">\(1\)</span> if and only if <span class="math inline">\(k \in S\)</span>.</p>
+<p>Basically, LIME samples <span class="math inline">\(S_1, S_2, ..., S_m \subset N\)</span> to obtain a set of perturbed samples <span class="math inline">\(h_x(S_i)\)</span> in the space <span class="math inline">\(X\)</span>, and then fits a linear model <span class="math inline">\(g\)</span> using the <span class="math inline">\(1_{S_i}\)</span> as input samples and the <span class="math inline">\(f(h_x(S_i))\)</span> as output samples:</p>
+<p><strong>Problem</strong>(LIME). Find <span class="math inline">\(w = (w_1, w_2, ..., w_n)\)</span> that minimises</p>
+<p><span class="math display">\[\sum_i (w \cdot 1_{S_i} - f(h_x(S_i)))^2 \pi_x(h_x(S_i))\]</span></p>
+<p>where <span class="math inline">\(\pi_x(x&#39;)\)</span> is a function that penalises <span class="math inline">\(x&#39;\)</span>s that are far away from <span class="math inline">\(x\)</span>. In the LIME paper the Gaussian kernel was used:</p>
+<p><span class="math display">\[\pi_x(x&#39;) = \exp\left({- \|x - x&#39;\|^2 \over \sigma^2}\right).\]</span></p>
+<p>Then <span class="math inline">\(w_i\)</span> represents the importance of the <span class="math inline">\(i\)</span>th feature.</p>
+<p>The LIME model has a more general framework, but the specific model considered in the paper is the one described above, with a Lasso for feature selection.</p>
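+<p>A minimal sketch of this procedure, assuming numpy is available; the model <code>f</code>, the sample <code>x</code> and the kernel width are placeholders of mine, and plain weighted least squares stands in for the Lasso for brevity.</p>
+<pre><code>import numpy as np
+
+rng = np.random.default_rng(0)
+
+def lime_weights(f, x, m=1000, sigma=1.0):
+    n = len(x)
+    Z = rng.integers(0, 2, size=(m, n))       # rows are the indicators 1_S
+    X_pert = Z * x                            # h_x(S): zero out features not in S
+    y = np.array([f(row) for row in X_pert])  # f(h_x(S))
+    pi = np.exp(-np.sum((X_pert - x) ** 2, axis=1) / sigma ** 2)  # Gaussian kernel
+    # weighted least squares: minimise sum_i pi_i (w . 1_{S_i} - f(h_x(S_i)))^2
+    sw = np.sqrt(pi)
+    coef, *_ = np.linalg.lstsq(Z * sw[:, None], y * sw, rcond=None)
+    return coef
+
+f = lambda z: 2 * z[0] - 3 * z[1] + z[2]      # a toy model (an assumption)
+x = np.array([1.0, 1.0, 1.0])
+print(lime_weights(f, x))                     # approximately [2, -3, 1]
+</code></pre>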
+<p><strong>Remark</strong>. One difference between our account here and the one in the LIME paper is that the dimension of the data space may differ from <span class="math inline">\(n\)</span> (see Section 3.1 of that paper). But in the case of text data, they do use the bag-of-words representation (our <span class="math inline">\(X\)</span>) as an “intermediate” representation. So my understanding is that in their context there is an “original” data space (call it <span class="math inline">\(X&#39;\)</span>), and a one-one correspondence between <span class="math inline">\(X&#39;\)</span> and <span class="math inline">\(X\)</span> (call it <span class="math inline">\(r: X&#39; \to X\)</span>), so that given a sample <span class="math inline">\(x&#39; \in X&#39;\)</span>, we can compute the output of <span class="math inline">\(S\)</span> in the local model with <span class="math inline">\(f(r^{-1}(h_{r(x&#39;)}(S)))\)</span>. For example, when <span class="math inline">\(X\)</span> is the bag-of-words space, <span class="math inline">\(X&#39;\)</span> may be a word-embedding vector space, so that <span class="math inline">\(r(x&#39;) = A^{-1} x&#39;\)</span>, where <span class="math inline">\(A\)</span> is the word embedding matrix. Therefore, without loss of generality, we assume the input space to be <span class="math inline">\(X\)</span>, which is of dimension <span class="math inline">\(n\)</span>.</p>
+<h2 id="shapley-values-and-lime">Shapley values and LIME</h2>
+<p>The connection between the Shapley values and LIME is noted in Lundberg-Lee (2017), but the underlying connection goes back to 1988 (Charnes et al.).</p>
+<p>To see the connection, we need to modify LIME a bit.</p>
+<p>First, we need to make LIME less efficient by considering <em>all</em> the <span class="math inline">\(2^n\)</span> subsets instead of the <span class="math inline">\(m\)</span> samples <span class="math inline">\(S_1, S_2, ..., S_{m}\)</span>.</p>
+<p>Then we need to relax the definition of <span class="math inline">\(\pi_x\)</span>. It no longer needs to penalise samples that are far away from <span class="math inline">\(x\)</span>. In fact, we will see later that the choice of <span class="math inline">\(\pi_x(x&#39;)\)</span> that yields the Shapley values is high when <span class="math inline">\(x&#39;\)</span> is very close or very far away from <span class="math inline">\(x\)</span>, and low otherwise. We further add the restriction that <span class="math inline">\(\pi_x(h_x(S))\)</span> only depends on the size of <span class="math inline">\(S\)</span>, and thus write it as <span class="math inline">\(q(s)\)</span> instead.</p>
+<p>We also denote <span class="math inline">\(v(S) := f(h_x(S))\)</span> and <span class="math inline">\(w(S) = \sum_{i \in S} w_i\)</span>.</p>
+<p>Finally, we add the Efficiency property as a constraint: <span class="math inline">\(\sum_{i = 1}^n w_i = f(x) - f(h_x(\emptyset)) = v(N) - v(\emptyset)\)</span>.</p>
+<p>Then the problem becomes a weighted linear regression:</p>
+<p><strong>Problem</strong>. Minimise <span class="math inline">\(\sum_{S \subset N} (w(S) - v(S))^2 q(s)\)</span> over <span class="math inline">\(w\)</span>, subject to <span class="math inline">\(w(N) = v(N) - v(\emptyset)\)</span>.</p>
+<p><strong>Claim</strong> (Charnes et al. 1988). The solution to this problem is</p>
+<p><span class="math display">\[w_i = {1 \over n} (v(N) - v(\emptyset)) + \left(\sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s)\right)^{-1} \sum_{S \subset N: i \in S} \left({n - s \over n} q(s) v(S) - {s - 1 \over n} q(s - 1) v(S - i)\right). \qquad (-1)\]</span></p>
+<p>Specifically, if we choose</p>
+<p><span class="math display">\[q(s) = c {n - 2 \choose s - 1}^{-1}\]</span></p>
+<p>for any constant <span class="math inline">\(c\)</span>, then <span class="math inline">\(w_i = \phi_i(v)\)</span> are the Shapley values.</p>
+<p><strong>Remark</strong>. Don't worry about this specific choice of <span class="math inline">\(q(s)\)</span> when <span class="math inline">\(s = 0\)</span> or <span class="math inline">\(n\)</span>, because <span class="math inline">\(q(0)\)</span> and <span class="math inline">\(q(n)\)</span> do not appear on the right hand side of (-1). Therefore they can be defined to be of any value. A common convention of the binomial coefficients is to set <span class="math inline">\({\ell \choose k} = 0\)</span> if <span class="math inline">\(k &lt; 0\)</span> or <span class="math inline">\(k &gt; \ell\)</span>, in which case <span class="math inline">\(q(0) = q(n) = \infty\)</span>.</p>
+<p>In Lundberg-Lee (2017), <span class="math inline">\(c\)</span> is chosen to be <span class="math inline">\(1 / n\)</span>, see Theorem 2 there.</p>
+<p>In Charnes et al. 1988, the <span class="math inline">\(w_i\)</span>s defined in (-1) are called the generalised Shapley values.</p>
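+<p>The Claim can be spot-checked numerically. The sketch below (mine, assuming numpy) enumerates all proper nonempty subsets (the terms for the empty set and for <span class="math inline">\(N\)</span> are constant under the constraint, so they may be dropped), weights them by <span class="math inline">\(q(s)\)</span> with <span class="math inline">\(c = 1\)</span>, and solves the equality-constrained least squares problem via its KKT system; the result agrees with <code>shapley(v, 3)</code> from the sketch in the Shapley values section.</p>
+<pre><code>import numpy as np
+from itertools import combinations
+from math import comb
+
+def kernel_regression_weights(v, n):
+    rows, targets, weights = [], [], []
+    for s in range(1, n):                      # proper nonempty subsets only
+        for S in combinations(range(n), s):
+            rows.append([1.0 if i in S else 0.0 for i in range(n)])
+            targets.append(v(frozenset(S)))
+            weights.append(1.0 / comb(n - 2, s - 1))   # q(s), with c = 1
+    sw = np.sqrt(np.array(weights))
+    A = np.array(rows) * sw[:, None]
+    b = np.array(targets) * sw
+    # KKT system for: min ||A w - b||^2  subject to  sum(w) = v(N) - v(empty)
+    K = np.block([[2 * A.T @ A, np.ones((n, 1))],
+                  [np.ones((1, n)), np.zeros((1, 1))]])
+    rhs = np.append(2 * A.T @ b, v(frozenset(range(n))) - v(frozenset()))
+    return np.linalg.solve(K, rhs)[:n]
+
+v = lambda S: len(S) ** 2                      # the same toy game as before
+print(kernel_regression_weights(v, 3))         # [3.0, 3.0, 3.0], the Shapley values
+</code></pre>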
+<p><strong>Proof</strong>. The Lagrangian is</p>
+<p><span class="math display">\[L(w, \lambda) = \sum_{S \subset N} (v(S) - w(S))^2 q(s) - \lambda(w(N) - v(N) + v(\emptyset)).\]</span></p>
+<p>Setting <span class="math inline">\(\partial_{w_i} L(w, \lambda) = 0\)</span>, we have</p>
+<p><span class="math display">\[{1 \over 2} \lambda = \sum_{S \subset N: i \in S} (w(S) - v(S)) q(s). \qquad (0)\]</span></p>
+<p>Summing (0) over <span class="math inline">\(i\)</span> and dividing by <span class="math inline">\(n\)</span>, we have</p>
+<p><span class="math display">\[{1 \over 2} \lambda = {1 \over n} \sum_i \sum_{S: i \in S} (w(S) q(s) - v(S) q(s)). \qquad (1)\]</span></p>
+<p>We examine each of the two terms on the right hand side.</p>
+<p>Counting the terms involving <span class="math inline">\(w_i\)</span> and <span class="math inline">\(w_j\)</span> for <span class="math inline">\(j \neq i\)</span>, and using <span class="math inline">\(w(N) = v(N) - v(\emptyset)\)</span> we have</p>
+<p><span class="math display">\[\begin{aligned}
+&amp;\sum_{S \subset N: i \in S} w(S) q(s) \\
+&amp;= \sum_{s = 1}^n {n - 1 \choose s - 1} q(s) w_i + \sum_{j \neq i}\sum_{s = 2}^n {n - 2 \choose s - 2} q(s) w_j \\
+&amp;= q(1) w_i + \sum_{s = 2}^n q(s) \left({n - 1 \choose s - 1} w_i + \sum_{j \neq i} {n - 2 \choose s - 2} w_j\right) \\
+&amp;= q(1) w_i + \sum_{s = 2}^n \left({n - 2 \choose s - 1} w_i + {n - 2 \choose s - 2} (v(N) - v(\emptyset))\right) q(s) \\
+&amp;= \sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s) w_i + \sum_{s = 2}^n {n - 2 \choose s - 2} q(s) (v(N) - v(\emptyset)). \qquad (2)
+\end{aligned}\]</span></p>
+<p>Summing (2) over <span class="math inline">\(i\)</span>, we obtain</p>
+<p><span class="math display">\[\begin{aligned}
+&amp;\sum_i \sum_{S: i \in S} w(S) q(s)\\
+&amp;= \sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s) (v(N) - v(\emptyset)) + \sum_{s = 2}^n n {n - 2 \choose s - 2} q(s) (v(N) - v(\emptyset))\\
+&amp;= \sum_{s = 1}^n s{n - 1 \choose s - 1} q(s) (v(N) - v(\emptyset)). \qquad (3)
+\end{aligned}\]</span></p>
+<p>For the second term in (1), we have</p>
+<p><span class="math display">\[\sum_i \sum_{S: i \in S} v(S) q(s) = \sum_{S \subset N} s v(S) q(s). \qquad (4)\]</span></p>
+<p>Plugging (3) and (4) into (1), we have</p>
+<p><span class="math display">\[{1 \over 2} \lambda = {1 \over n} \left(\sum_{S \subset N} s q(s) v(S) - \sum_{s = 1}^n s {n - 1 \choose s - 1} q(s) (v(N) - v(\emptyset))\right). \qquad (5)\]</span></p>
+<p>Plugging (5) and (2) into (0) and solving for <span class="math inline">\(w_i\)</span>, we have</p>
+<p><span class="math display">\[w_i = {1 \over n} (v(N) - v(\emptyset)) + \left(\sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s) \right)^{-1} \left( \sum_{S: i \in S} q(s) v(S) - {1 \over n} \sum_{S \subset N} s q(s) v(S) \right). \qquad (6)\]</span></p>
+<p>By splitting all subsets of <span class="math inline">\(N\)</span> into ones that contain <span class="math inline">\(i\)</span> and ones that do not, and pairing them up, we have</p>
+<p><span class="math display">\[\sum_{S \subset N} s q(s) v(S) = \sum_{S: i \in S} (s q(s) v(S) + (s - 1) q(s - 1) v(S - i)).\]</span></p>
+<p>Plugging this back into (6), we get the desired result. <span class="math inline">\(\square\)</span></p>
+<h2 id="shap">SHAP</h2>
+<p>The paper that coined the term "SHAP values" (Lundberg-Lee 2017) is not clear in its definition of the "SHAP values" and their relation to LIME, so the following is my interpretation of their interpretation model, which coincides with a model studied in Strumbelj-Kononenko 2014.</p>
+<p>Recall that we want to calculate feature contributions to a model <span class="math inline">\(f\)</span> at a sample <span class="math inline">\(x\)</span>.</p>
+<p>Let <span class="math inline">\(\mu\)</span> be a probability density function over the input space <span class="math inline">\(X = X_1 \times ... \times X_n\)</span>. A natural choice would be the density that generates the data, or one that approximates such density (e.g. empirical distribution).</p>
+<p>The feature contribution (SHAP value) is thus defined as the Shapley value <span class="math inline">\(\phi_i(v)\)</span>, where</p>
+<p><span class="math display">\[v(S) = \mathbb E_{z \sim \mu} (f(z) | z_S = x_S). \qquad (7)\]</span></p>
+<p>So it is a conditional expectation where <span class="math inline">\(z_i\)</span> is clamped for <span class="math inline">\(i \in S\)</span>. In fact, the definition of feature contributions in this form predates Lundberg-Lee 2017. For example, it can be found in Strumbelj-Kononenko 2014.</p>
+<p>One simplification is to assume the <span class="math inline">\(n\)</span> features are independent, so that <span class="math inline">\(\mu = \mu_1 \times \mu_2 \times ... \times \mu_n\)</span>. In this case, (7) becomes</p>
+<p><span class="math display">\[v(S) = \mathbb E_{z_{N \setminus S} \sim \mu_{N \setminus S}} f(x_S, z_{N \setminus S}) \qquad (8)\]</span></p>
+<p>For example, Strumbelj-Kononenko (2010) considers this scenario where <span class="math inline">\(\mu\)</span> is the uniform distribution over <span class="math inline">\(X\)</span>, see Definition 4 there.</p>
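+<p>Under the independence assumption, <span class="math inline">\(v(S)\)</span> in (8) can be estimated by Monte Carlo: clamp the features in <span class="math inline">\(S\)</span> to <span class="math inline">\(x\)</span> and resample the remaining features from their empirical marginals. A sketch (mine), reusing the <code>shapley</code> function from earlier; the model and background data are toy assumptions.</p>
+<pre><code>import numpy as np
+
+rng = np.random.default_rng(0)
+
+def make_v(f, x, background, n_samples=5000):
+    """v(S) ~= E f(x_S, z), with each z_j drawn independently from the
+    j-th empirical marginal, i.e. mu = mu_1 x ... x mu_n."""
+    def v(S):
+        z = np.column_stack([rng.choice(background[:, j], size=n_samples)
+                             for j in range(background.shape[1])])
+        for i in S:
+            z[:, i] = x[i]                    # clamp the features in S to x
+        return float(np.mean([f(row) for row in z]))
+    return v
+
+background = rng.normal(size=(500, 3))        # stand-in for the training data
+f = lambda z: z[0] * z[1] + z[2]              # a toy non-linear model
+x = np.array([1.0, 2.0, 3.0])
+print(shapley(make_v(f, x, background), 3))   # roughly [1.0, 1.0, 3.0]
+# exact values for centred marginals E z_j = 0:
+#   [x_0 x_1 / 2, x_0 x_1 / 2, x_2] = [1, 1, 3]
+</code></pre>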
+<p>A further simplification is model linearity, which means <span class="math inline">\(f\)</span> is linear. In this case, (8) becomes</p>
+<p><span class="math display">\[v(S) = f(x_S, \mathbb E_{\mu_{N \setminus S}} z_{N \setminus S}). \qquad (9)\]</span></p>
+<p>It is worth noting that to make the modified LIME model considered in the previous section fall under the linear SHAP framework (9), we need to make two further specialisations. The first is rather cosmetic: we change the definition of <span class="math inline">\(h_x(S)\)</span> to</p>
+<p><span class="math display">\[(h_x(S))_i =
+\begin{cases}
+x_i, &amp; \text{if }i \in S; \\
+\mathbb E_{\mu_i} z_i, &amp; \text{otherwise.}
+\end{cases}\]</span></p>
+<p>But we also need to boldly assume the original <span class="math inline">\(f\)</span> to be linear, which, in my view, defeats the purpose of interpretability, because linear models are interpretable by themselves.</p>
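+<p>For completeness: in the linear case, the Shapley values of the game (9) collapse to a closed form. For <span class="math inline">\(f(z) = c \cdot z + b\)</span>, every marginal contribution <span class="math inline">\(v(S) - v(S - i)\)</span> equals <span class="math inline">\(c_i (x_i - \mathbb E_{\mu_i} z_i)\)</span>, and since the weights in the definition of <span class="math inline">\(\phi_i\)</span> sum to <span class="math inline">\(1\)</span>, we get <span class="math inline">\(\phi_i(v) = c_i (x_i - \mathbb E_{\mu_i} z_i)\)</span>. A quick check, again reusing <code>shapley</code>; the coefficients and feature means below are assumptions of mine.</p>
+<pre><code>import numpy as np
+
+c, b = np.array([2.0, -3.0, 1.0]), 0.5        # a toy linear model f(z) = c.z + b
+mu = np.array([0.1, 0.2, 0.3])                # assumed feature means E z_i
+x = np.array([1.0, 1.0, 1.0])
+
+def v(S):                                     # (9): features in S clamped to x,
+    z = mu.copy()                             # the rest held at their means
+    for i in S:
+        z[i] = x[i]
+    return float(c @ z + b)
+
+print(shapley(v, 3))                          # [1.8, -2.4, 0.7]
+print(c * (x - mu))                           # the closed form agrees
+</code></pre>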
+<p>One may argue that perhaps we do not need linearity to define <span class="math inline">\(v(S)\)</span> as in (9). If we do so, however, then (9) loses its mathematical justification, since it no longer agrees with the expectation (8). A bigger question is: how effective is SHAP? An even bigger question: in general, how do we evaluate models of interpretation?</p>
+<h2 id="evaluating-shap">Evaluating SHAP</h2>
+<p>The project of the SHAP paper can be decoupled into two independent components: showing the nice properties of the Shapley values, and choosing the coalitional game <span class="math inline">\(v\)</span>.</p>
+<p>The SHAP paper argues that the Shapley values <span class="math inline">\(\phi_i(v)\)</span> are a good measurement because they are the only values satisfying some nice properties, including the Efficiency property mentioned at the beginning of this post, invariance under permutation, and monotonicity; see the paragraph below Theorem 1 there, which refers to Theorem 2 of Young (1985).</p>
+<p>Indeed, both efficiency (the “additive feature attribution methods” in the paper) and monotonicity are meaningful when considering <span class="math inline">\(\phi_i(v)\)</span> as the feature contribution of the <span class="math inline">\(i\)</span>th feature.</p>
+<p>The question is thus reduced to the second component: what constitutes a nice choice of <span class="math inline">\(v\)</span>?</p>
+<p>The SHAP paper answers this question with three options of increasing simplification: (7), (8) and (9) in the previous section of this post (corresponding to (9), (11) and (12) in the paper). They are intuitive, but it will be interesting to see more concrete (or even mathematical) justifications of such choices.</p>
+<h2 id="references">References</h2>
+<ul>
+<li>Charnes, A., B. Golany, M. Keane, and J. Rousseau. “Extremal Principle Solutions of Games in Characteristic Function Form: Core, Chebychev and Shapley Value Generalizations.” In Econometrics of Planning and Efficiency, edited by Jati K. Sengupta and Gopal K. Kadekodi, 123–33. Dordrecht: Springer Netherlands, 1988. <a href="https://doi.org/10.1007/978-94-009-3677-5_7" class="uri">https://doi.org/10.1007/978-94-009-3677-5_7</a>.</li>
+<li>Lundberg, Scott, and Su-In Lee. “A Unified Approach to Interpreting Model Predictions.” ArXiv:1705.07874 [Cs, Stat], May 22, 2017. <a href="http://arxiv.org/abs/1705.07874" class="uri">http://arxiv.org/abs/1705.07874</a>.</li>
+<li>Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” ArXiv:1602.04938 [Cs, Stat], February 16, 2016. <a href="http://arxiv.org/abs/1602.04938" class="uri">http://arxiv.org/abs/1602.04938</a>.</li>
+<li>Shapley, L. S. “17. A Value for n-Person Games.” In Contributions to the Theory of Games (AM-28), Volume II, Vol. 2. Princeton: Princeton University Press, 1953. <a href="https://doi.org/10.1515/9781400881970-018" class="uri">https://doi.org/10.1515/9781400881970-018</a>.</li>
+<li>Strumbelj, Erik, and Igor Kononenko. “An Efficient Explanation of Individual Classifications Using Game Theory.” J. Mach. Learn. Res. 11 (March 2010): 1–18.</li>
+<li>Strumbelj, Erik, and Igor Kononenko. “Explaining Prediction Models and Individual Predictions with Feature Contributions.” Knowledge and Information Systems 41, no. 3 (December 2014): 647–65. <a href="https://doi.org/10.1007/s10115-013-0679-x" class="uri">https://doi.org/10.1007/s10115-013-0679-x</a>.</li>
+<li>Young, H. P. “Monotonic Solutions of Cooperative Games.” International Journal of Game Theory 14, no. 2 (June 1, 1985): 65–72. <a href="https://doi.org/10.1007/BF01769885" class="uri">https://doi.org/10.1007/BF01769885</a>.</li>
+</ul>
+
+ </div>
+ <section id="isso-thread"></section>
+ </div>
+ </body>
+</html>