authorYuchen Pei <me@ypei.me>2021-06-18 12:58:44 +1000
committerYuchen Pei <me@ypei.me>2021-06-18 12:58:44 +1000
commit147a19e84a743f1379f05bf2f444143b4afd7bd6 (patch)
tree3127395250cb958f06a98b86f73e77658150b43c
parent4fa26fec8b7e978955e5630d3f820ba9c53be72c (diff)
Updated.
-rw-r--r--Makefile20
-rw-r--r--blog.html21
-rwxr-xr-xconvert_microposts.sh12
-rwxr-xr-xconvert_microposts_standalone.sh11
-rw-r--r--css/default.css71
-rw-r--r--filter-standalone.hs39
-rw-r--r--filter.hs41
-rw-r--r--html-templates/post-preamble.html8
-rwxr-xr-xmd2org.sh9
-rw-r--r--microposts/2048-mdp.org7
-rw-r--r--microposts/ats.org23
-rw-r--r--microposts/bostoncalling.org7
-rw-r--r--microposts/boyer-moore.org21
-rw-r--r--microposts/catalan-overflow.org6
-rw-r--r--microposts/colah-blog.org13
-rw-r--r--microposts/coursera-basic-income.org7
-rw-r--r--microposts/darknet-diaries.org8
-rw-r--r--microposts/decss-haiku.org61
-rw-r--r--microposts/defense-stallman.org12
-rw-r--r--microposts/fsf-membership.org16
-rw-r--r--microposts/gavin-belson.org14
-rw-r--r--microposts/google-search-not-ai.org18
-rw-r--r--microposts/hacker-ethics.org20
-rw-r--r--microposts/hackers-excerpt.org34
-rw-r--r--microposts/how-can-you-help-ia.org7
-rw-r--r--microposts/how-to-learn-on-your-own.org9
-rw-r--r--microposts/ia-lawsuit.org24
-rw-r--r--microposts/learning-knowledge-graph-reddit-journal-club.org34
-rw-r--r--microposts/learning-undecidable.org70
-rw-r--r--microposts/margins.org7
-rw-r--r--microposts/math-writing-decoupling.org26
-rw-r--r--microposts/neural-nets-activation.org24
-rw-r--r--microposts/neural-nets-regularization.org25
-rw-r--r--microposts/neural-networks-programming-paradigm.org21
-rw-r--r--microposts/neural-turing-machine.org37
-rw-r--r--microposts/nlp-arxiv.org10
-rw-r--r--microposts/open-library.org17
-rw-r--r--microposts/open-review-net.org15
-rw-r--r--microposts/pun-generator.org6
-rw-r--r--microposts/random-forests.org24
-rw-r--r--microposts/rnn-fsm.org30
-rw-r--r--microposts/rnn-turing.org11
-rw-r--r--microposts/sanders-suspend-campaign.org8
-rw-r--r--microposts/short-science.org23
-rw-r--r--microposts/simple-solution-lack-of-math-rendering.org10
-rw-r--r--microposts/sql-injection-video.org10
-rw-r--r--microposts/stallman-resign.org24
-rw-r--r--microposts/static-site-generator.org13
-rw-r--r--microposts/zitierkartell.org7
-rw-r--r--org-template/style.org0
-rw-r--r--pages/all-microposts.org773
-rw-r--r--pages/blog.org20
-rw-r--r--pages/microblog.org683
-rw-r--r--posts/2013-06-01-q-robinson-schensted-paper.org28
-rw-r--r--posts/2014-04-01-q-robinson-schensted-symmetry-paper.org16
-rw-r--r--posts/2015-01-20-weighted-interpretation-super-catalan-numbers.org39
-rw-r--r--posts/2015-04-01-unitary-double-products.org10
-rw-r--r--posts/2015-04-02-juggling-skill-tree.org28
-rw-r--r--posts/2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.org67
-rw-r--r--posts/2015-07-01-causal-quantum-product-levy-area.org26
-rw-r--r--posts/2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.org64
-rw-r--r--posts/2016-10-13-q-robinson-schensted-knuth-polymer.org50
-rw-r--r--posts/2017-04-25-open_research_toywiki.org21
-rw-r--r--posts/2017-08-07-mathematical_bazaar.org213
-rw-r--r--posts/2018-04-10-update-open-research.org185
-rw-r--r--posts/2018-06-03-automatic_differentiation.org100
-rw-r--r--posts/2018-12-02-lime-shapley.org362
-rw-r--r--posts/2019-01-03-discriminant-analysis.org293
-rw-r--r--posts/2019-02-14-raise-your-elbo.org1150
-rw-r--r--posts/2019-03-13-a-tail-of-two-densities.org1304
-rw-r--r--posts/2019-03-14-great-but-manageable-expectations.org836
-rw-r--r--posts/blog.html21
-rw-r--r--publish.el119
l---------site-from-md/assets1
-rw-r--r--site-from-md/blog-feed.xml1864
-rw-r--r--site-from-md/blog.html62
-rw-r--r--site-from-md/index.html54
-rw-r--r--site-from-md/links.html113
-rw-r--r--site-from-md/microblog-feed.xml291
-rw-r--r--site-from-md/microblog.html341
-rw-r--r--site-from-md/notations.html67
-rw-r--r--site-from-md/postlist.html82
-rw-r--r--site-from-md/posts/2013-06-01-q-robinson-schensted-paper.html52
-rw-r--r--site-from-md/posts/2014-04-01-q-robinson-schensted-symmetry-paper.html53
-rw-r--r--site-from-md/posts/2015-01-20-weighted-interpretation-super-catalan-numbers.html52
-rw-r--r--site-from-md/posts/2015-04-01-unitary-double-products.html49
-rw-r--r--site-from-md/posts/2015-04-02-juggling-skill-tree.html52
-rw-r--r--site-from-md/posts/2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.html69
-rw-r--r--site-from-md/posts/2015-07-01-causal-quantum-product-levy-area.html51
-rw-r--r--site-from-md/posts/2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.html61
-rw-r--r--site-from-md/posts/2016-10-13-q-robinson-schensted-knuth-polymer.html58
-rw-r--r--site-from-md/posts/2017-04-25-open_research_toywiki.html53
-rw-r--r--site-from-md/posts/2017-08-07-mathematical_bazaar.html108
-rw-r--r--site-from-md/posts/2018-04-10-update-open-research.html104
-rw-r--r--site-from-md/posts/2018-06-03-automatic_differentiation.html98
-rw-r--r--site-from-md/posts/2018-12-02-lime-shapley.html202
-rw-r--r--site-from-md/posts/2019-01-03-discriminant-analysis.html177
-rw-r--r--site-from-md/posts/2019-02-14-raise-your-elbo.html562
-rw-r--r--site-from-md/posts/2019-03-13-a-tail-of-two-densities.html542
-rw-r--r--site-from-md/posts/2019-03-14-great-but-manageable-expectations.html359
100 files changed, 12976 insertions, 0 deletions
diff --git a/Makefile b/Makefile
new file mode 100644
index 0000000..4531888
--- /dev/null
+++ b/Makefile
@@ -0,0 +1,20 @@
+# Makefile for myblog
+
+.PHONY: all publish publish_no_init
+
+all: publish
+
+publish: publish.el
+ @echo "Publishing... with current Emacs configurations."
+ emacs --batch --load publish.el --funcall org-publish-all
+
+publish_no_init: publish.el
+ @echo "Publishing... with --no-init."
+ emacs --batch --no-init --load publish.el --funcall org-publish-all
+
+clean:
+ @echo "Cleaning up.."
+ @rm -rvf *.elc
+ @rm -rvf site
+ @rm -rvf ~/.org-timestamps/*
+ @rm -rvf pages/blog.org
diff --git a/blog.html b/blog.html
new file mode 100644
index 0000000..80176e7
--- /dev/null
+++ b/blog.html
@@ -0,0 +1,21 @@
+#+TITLE: All posts
+
+- *[[file:sitemap.org][All posts]]* - 2021-06-17
+- *[[file:2019-03-14-great-but-manageable-expectations.org][Great but Manageable Expectations]]* - 2019-03-14
+- *[[file:2019-03-13-a-tail-of-two-densities.org][A Tail of Two Densities]]* - 2019-03-13
+- *[[file:2019-02-14-raise-your-elbo.org][Raise your ELBO]]* - 2019-02-14
+- *[[file:2019-01-03-discriminant-analysis.org][Discriminant analysis]]* - 2019-01-03
+- *[[file:2018-12-02-lime-shapley.org][Shapley, LIME and SHAP]]* - 2018-12-02
+- *[[file:2018-06-03-automatic_differentiation.org][Automatic differentiation]]* - 2018-06-03
+- *[[file:2018-04-10-update-open-research.org][Updates on open research]]* - 2018-04-29
+- *[[file:2017-08-07-mathematical_bazaar.org][The Mathematical Bazaar]]* - 2017-08-07
+- *[[file:2017-04-25-open_research_toywiki.org][Open mathematical research and launching toywiki]]* - 2017-04-25
+- *[[file:2016-10-13-q-robinson-schensted-knuth-polymer.org][A \(q\)-Robinson-Schensted-Knuth algorithm and a \(q\)-polymer]]* - 2016-10-13
+- *[[file:2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.org][AMS review of 'Double Macdonald polynomials as the stable limit of Macdonald superpolynomials' by Blondeau-Fournier, Lapointe and Mathieu]]* - 2015-07-15
+- *[[file:2015-07-01-causal-quantum-product-levy-area.org][On a causal quantum double product integral related to Lévy stochastic area.]]* - 2015-07-01
+- *[[file:2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.org][AMS review of 'Infinite binary words containing repetitions of odd period' by Badkobeh and Crochemore]]* - 2015-05-30
+- *[[file:2015-04-02-juggling-skill-tree.org][jst]]* - 2015-04-02
+- *[[file:2015-04-01-unitary-double-products.org][Unitary causal quantum stochastic double products as universal]]* - 2015-04-01
+- *[[file:2015-01-20-weighted-interpretation-super-catalan-numbers.org][AMS review of 'A weighted interpretation for the super Catalan]]* - 2015-01-20
+- *[[file:2014-04-01-q-robinson-schensted-symmetry-paper.org][Symmetry property of \(q\)-weighted Robinson-Schensted algorithms and branching algorithms]]* - 2014-04-01
+- *[[file:2013-06-01-q-robinson-schensted-paper.org][A \(q\)-weighted Robinson-Schensted algorithm]]* - 2013-06-01 \ No newline at end of file
diff --git a/convert_microposts.sh b/convert_microposts.sh
new file mode 100755
index 0000000..8df8a3f
--- /dev/null
+++ b/convert_microposts.sh
@@ -0,0 +1,12 @@
+#!/bin/bash
+
+for md in $(ls ./microposts/*.md); do
+ base=$(basename $md)
+ fname="${base%%.*}"
+# pandoc -s -f markdown -t org -o ./microposts/${fname}.org ./microposts/${md}
+ echo "pandoc -s -f markdown -t org --filter ./filter.hs --metadata filename=${fname} -o ./microposts/${fname}.org ${md}"
+ pandoc -s -f markdown -t org --filter ./filter.hs --metadata filename=${fname} -o ./microposts/${fname}.org ${md}
+done
+
+rm ./microposts/microblog.org -f
+cat ./microposts/*.org > ./microposts/microblog.org
diff --git a/convert_microposts_standalone.sh b/convert_microposts_standalone.sh
new file mode 100755
index 0000000..b78b733
--- /dev/null
+++ b/convert_microposts_standalone.sh
@@ -0,0 +1,11 @@
+#!/bin/bash
+
+for md in $(ls ./microposts/*.md); do
+ base=$(basename $md)
+ fname="${base%%.*}"
+# pandoc -s -f markdown -t org -o ./microposts/${fname}.org ${md}
+ echo "pandoc -s -f markdown -t org --filter ./filter-standalone.hs --metadata title=${fname} -o ./microposts/${fname}.org ${md}"
+ pandoc -s -f markdown -t org --filter ./filter-standalone.hs --metadata title=${fname} -o ./microposts/${fname}.org ${md}
+# sed -i "s/:END:/&\n/" ./microposts/${fname}.org # add a new line before the content
+done
+sed -i 's/^\(#+date:\s\+\)\(.\{10\}\)/\1<\2>/' ./microposts/*.org # fix the dates
diff --git a/css/default.css b/css/default.css
new file mode 100644
index 0000000..5a3782c
--- /dev/null
+++ b/css/default.css
@@ -0,0 +1,71 @@
+/*
+*{
+ background-color: #faebbc;
+}
+*/
+nav {
+ display: inline;
+ float: right;
+}
+
+#TOC:before {
+ content: "Table of Contents";
+}
+
+#TOC{
+ display: inline;
+ float: right;
+ margin: 1rem;
+}
+
+/*
+nav#TOC li{
+ list-style-type: none;
+}
+*/
+
+span.logo {
+ float: left;
+}
+
+header {
+ width: 40rem;
+ margin: auto;
+ overflow: auto;
+ background-color: #f3f3f3;
+}
+
+div#content {
+ width: 40rem;
+ margin: auto;
+ margin-bottom: 3rem;
+ line-height: 1.6;
+}
+
+a {
+ text-decoration: none;
+}
+
+header a {
+ padding: 1rem;
+ display: inline-block;
+ background-color: #f3f3f3;
+}
+
+blockquote {
+ border-left: .3rem solid #ccc;
+ padding-left: 1rem;
+}
+
+a:hover{
+ background-color: #ddd;
+}
+
+li.postlistitem{
+ margin-bottom: .5rem;
+}
+
+ul.postlist{
+ list-style-type: none;
+ padding: 0;
+}
diff --git a/filter-standalone.hs b/filter-standalone.hs
new file mode 100644
index 0000000..8bd769d
--- /dev/null
+++ b/filter-standalone.hs
@@ -0,0 +1,39 @@
+#!/usr/bin/env runhaskell
+-- filter.hs
+{--
+A filter to help convert a markdown file to an org file.
+It does the following:
+1. Remove all headings
+--}
+import Text.Pandoc.JSON
+import Data.Text
+import Data.Map.Strict
+
+main :: IO ()
+main = toJSONFilter filter''
+
+filter'' :: Pandoc -> Pandoc
+filter'' (Pandoc meta blocks) =
+ Pandoc meta (filter' <$> blocks)
+
+getFilename :: Meta -> Text
+getFilename meta =
+ case lookupMeta (pack "filename") meta of
+ Just (MetaString s) -> s
+ _ -> pack ""
+
+makeInlines :: Text -> [Inline]
+makeInlines s = [Str s]
+
+getFilenameInlines :: Meta -> [Inline]
+getFilenameInlines = makeInlines . getFilename
+
+makeCustomId :: Text -> Attr
+makeCustomId s = (pack "", [], [(pack "CUSTOM_ID", s)])
+
+emptyAttr :: Attr
+emptyAttr = (pack "", [], [])
+
+filter' :: Block -> Block
+filter' (Header _ _ _) = Null
+filter' x = x
diff --git a/filter.hs b/filter.hs
new file mode 100644
index 0000000..8db4980
--- /dev/null
+++ b/filter.hs
@@ -0,0 +1,41 @@
+#!/usr/bin/env runhaskell
+-- filter.hs
+{--
+A filter to help convert a vimwiki file to an org file.
+It does the following:
+1. Remove metadata
+2. Add filename or empty as a level three heading
+3. Remove all other headings
+--}
+import Text.Pandoc.JSON
+import Data.Text
+import Data.Map.Strict
+
+main :: IO ()
+main = toJSONFilter filter''
+
+filter'' :: Pandoc -> Pandoc
+filter'' (Pandoc meta blocks) =
+ Pandoc (Meta {unMeta = Data.Map.Strict.empty}) ((Header 3 (makeCustomId $ getFilename meta) ((docDate meta) ++ [Str $ pack ": "] ++ (getFilenameInlines meta))) : (filter' <$> blocks))
+
+getFilename :: Meta -> Text
+getFilename meta =
+ case lookupMeta (pack "filename") meta of
+ Just (MetaString s) -> s
+ _ -> pack ""
+
+makeInlines :: Text -> [Inline]
+makeInlines s = [Str s]
+
+getFilenameInlines :: Meta -> [Inline]
+getFilenameInlines = makeInlines . getFilename
+
+makeCustomId :: Text -> Attr
+makeCustomId s = (pack "", [], [(pack "CUSTOM_ID", s)])
+
+emptyAttr :: Attr
+emptyAttr = (pack "", [], [])
+
+filter' :: Block -> Block
+filter' (Header _ _ _) = Null
+filter' x = x
diff --git a/html-templates/post-preamble.html b/html-templates/post-preamble.html
new file mode 100644
index 0000000..56c3ea7
--- /dev/null
+++ b/html-templates/post-preamble.html
@@ -0,0 +1,8 @@
+<header>
+ <span class="logo">
+ <a href="../blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="../index.html">About</a><a href="../postlist.html">All posts</a><a href="../blog-feed.xml">Feed</a>
+ </nav>
+</header>
diff --git a/md2org.sh b/md2org.sh
new file mode 100755
index 0000000..ef01b6d
--- /dev/null
+++ b/md2org.sh
@@ -0,0 +1,9 @@
+#!/bin/bash
+
+dir=$1
+for file in $(ls $dir); do
+ base=$(basename $file)
+ name="${base%%.*}"
+ echo "pandoc -f markdown -t org -s -o $dir/$name.org $dir/$file"
+ pandoc -f markdown -t org -s -o $dir/$name.org $dir/$file
+done
diff --git a/microposts/2048-mdp.org b/microposts/2048-mdp.org
new file mode 100644
index 0000000..4794780
--- /dev/null
+++ b/microposts/2048-mdp.org
@@ -0,0 +1,7 @@
+#+title: 2048-mdp
+
+#+date: <2018-05-25>
+
+[[http://jdlm.info/articles/2018/03/18/markov-decision-process-2048.html][This
+post]] models 2048 as an MDP and solves it using policy iteration and
+backward induction.
diff --git a/microposts/ats.org b/microposts/ats.org
new file mode 100644
index 0000000..45d6417
--- /dev/null
+++ b/microposts/ats.org
@@ -0,0 +1,23 @@
+#+title: ats
+
+#+date: <2018-05-22>
+
+#+begin_quote
+ ATS (Applied Type System) is a programming language designed to unify
+ programming with formal specification. ATS has support for combining
+ theorem proving with practical programming through the use of advanced
+ type systems. A past version of The Computer Language Benchmarks Game
+ has demonstrated that the performance of ATS is comparable to that of
+ the C and C++ programming languages. By using theorem proving and
+ strict type checking, the compiler can detect and prove that its
+ implemented functions are not susceptible to bugs such as division by
+ zero, memory leaks, buffer overflow, and other forms of memory
+ corruption by verifying pointer arithmetic and reference counting
+ before the program compiles. Additionally, by using the integrated
+ theorem-proving system of ATS (ATS/LF), the programmer may make use of
+ static constructs that are intertwined with the operative code to
+ prove that a function attains its specification.
+#+end_quote
+
+[[https://en.wikipedia.org/wiki/ATS_(programming_language)][Wikipedia
+entry on ATS]]
diff --git a/microposts/bostoncalling.org b/microposts/bostoncalling.org
new file mode 100644
index 0000000..02ba871
--- /dev/null
+++ b/microposts/bostoncalling.org
@@ -0,0 +1,7 @@
+#+title: bostoncalling
+
+#+date: <2018-05-20>
+
+(5-second fame) I sent a picture of my kitchen sink to BBC and got
+mentioned in the [[https://www.bbc.co.uk/programmes/w3cswg8c][latest
+Boston Calling episode]] (listen at 25:54).
diff --git a/microposts/boyer-moore.org b/microposts/boyer-moore.org
new file mode 100644
index 0000000..6298454
--- /dev/null
+++ b/microposts/boyer-moore.org
@@ -0,0 +1,21 @@
+#+title: boyer-moore
+
+#+date: <2018-06-04>
+
+The
+[[https://en.wikipedia.org/wiki/Boyer–Moore_majority_vote_algorithm][Boyer-Moore
+algorithm for finding the majority of a sequence of elements]] falls in
+the category of "very clever algorithms".
+
+#+begin_example
+ int majorityElement(vector<int>& xs) {
+ int count = 0;
+ int maj = xs[0];
+ for (auto x : xs) {
+ if (x == maj) count++;
+ else if (count == 0) maj = x;
+ else count--;
+ }
+ return maj;
+ }
+#+end_example
diff --git a/microposts/catalan-overflow.org b/microposts/catalan-overflow.org
new file mode 100644
index 0000000..8ddf294
--- /dev/null
+++ b/microposts/catalan-overflow.org
@@ -0,0 +1,6 @@
+#+title: catalan-overflow
+
+#+date: <2018-06-11>
+
+To compute Catalan numbers without unnecessary overflow, use the
+recurrence formula \(C_n = {4 n - 2 \over n + 1} C_{n - 1}\).
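
As a quick illustration (my own sketch, not code from the post), the trick is to multiply before dividing, so every intermediate value stays an exact integer and no floating-point or premature division error creeps in:

```python
def catalan(n):
    """Compute the n-th Catalan number via C_n = (4n - 2) / (n + 1) * C_{n-1},
    multiplying first so the integer division at each step is exact."""
    c = 1  # C_0 = 1
    for k in range(1, n + 1):
        c = c * (4 * k - 2) // (k + 1)
    return c

print([catalan(n) for n in range(6)])  # [1, 1, 2, 5, 14, 42]
```

The division is exact at every step because `c * (4k - 2) // (k + 1)` equals the integer C_k, so the result never leaves the integers.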
diff --git a/microposts/colah-blog.org b/microposts/colah-blog.org
new file mode 100644
index 0000000..de0f28d
--- /dev/null
+++ b/microposts/colah-blog.org
@@ -0,0 +1,13 @@
+#+title: colah-blog
+
+#+date: <2018-05-18>
+
+[[https://colah.github.io/][colah's blog]] has a cool feature that
+allows you to comment on any paragraph of a blog post. Here's an
+[[https://colah.github.io/posts/2015-08-Understanding-LSTMs/][example]].
+If it is doable on a static site hosted on GitHub Pages, I suppose it
+shouldn't be too hard to implement. This also seems to work more
+seamlessly than [[https://fermatslibrary.com/][Fermat's Library]],
+because the latter has to embed pdfs in webpages. Now fantasy time:
+imagine that one day arXiv shows html versions of papers (through author
+uploading or conversion from TeX) with this feature.
diff --git a/microposts/coursera-basic-income.org b/microposts/coursera-basic-income.org
new file mode 100644
index 0000000..e051a7a
--- /dev/null
+++ b/microposts/coursera-basic-income.org
@@ -0,0 +1,7 @@
+#+title: coursera-basic-income
+
+#+date: <2018-06-20>
+
+Coursera is having
+[[https://www.coursera.org/learn/exploring-basic-income-in-a-changing-economy][a
+Teach-Out on Basic Income]].
diff --git a/microposts/darknet-diaries.org b/microposts/darknet-diaries.org
new file mode 100644
index 0000000..19e87f9
--- /dev/null
+++ b/microposts/darknet-diaries.org
@@ -0,0 +1,8 @@
+#+title: darknet-diaries
+
+#+date: <2018-08-13>
+
+[[https://darknetdiaries.com][Darknet Diaries]] is a cool podcast.
+According to its about page it covers "true stories from the dark side
+of the Internet. Stories about hackers, defenders, threats, malware,
+botnets, breaches, and privacy."
diff --git a/microposts/decss-haiku.org b/microposts/decss-haiku.org
new file mode 100644
index 0000000..643ff7d
--- /dev/null
+++ b/microposts/decss-haiku.org
@@ -0,0 +1,61 @@
+#+title: decss-haiku
+
+#+date: <2019-03-16>
+
+#+begin_quote
+ #+begin_example
+ Muse! When we learned to
+ count, little did we know all
+ the things we could do
+
+ some day by shuffling
+ those numbers: Pythagoras
+ said "All is number"
+
+ long before he saw
+ computers and their effects,
+ or what they could do
+
+ by computation,
+ naive and mechanical
+ fast arithmetic.
+
+ It changed the world, it
+ changed our consciousness and lives
+ to have such fast math
+
+ available to
+ us and anyone who cared
+ to learn programming.
+
+ Now help me, Muse, for
+ I wish to tell a piece of
+ controversial math,
+
+ for which the lawyers
+ of DVD CCA
+ don't forbear to sue:
+
+ that they alone should
+ know or have the right to teach
+ these skills and these rules.
+
+ (Do they understand
+ the content, or is it just
+ the effects they see?)
+
+ And all mathematics
+ is full of stories (just read
+ Eric Temple Bell);
+
+ and CSS is
+ no exception to this rule.
+ Sing, Muse, decryption
+
+ once secret, as all
+ knowledge, once unknown: how to
+ decrypt DVDs.
+ #+end_example
+#+end_quote
+
+Seth Schoen, [[https://en.wikipedia.org/wiki/DeCSS_haiku][DeCSS haiku]]
diff --git a/microposts/defense-stallman.org b/microposts/defense-stallman.org
new file mode 100644
index 0000000..8c6fc07
--- /dev/null
+++ b/microposts/defense-stallman.org
@@ -0,0 +1,12 @@
+#+title: defense-stallman
+
+#+date: <2019-09-30>
+
+Someone wrote a bold article titled
+[[https://geoff.greer.fm/2019/09/30/in-defense-of-richard-stallman/]["In
+Defense of Richard Stallman"]]. Kudos to him.
+
+Also, an interesting read:
+[[https://cfenollosa.com/blog/famous-computer-public-figure-suffers-the-consequences-for-asshole-ish-behavior.html][Famous
+public figure in tech suffers the consequences for asshole-ish
+behavior]].
diff --git a/microposts/fsf-membership.org b/microposts/fsf-membership.org
new file mode 100644
index 0000000..78ea8bb
--- /dev/null
+++ b/microposts/fsf-membership.org
@@ -0,0 +1,16 @@
+#+title: fsf-membership
+
+#+date: <2020-08-02>
+
+I am a proud associate member of the Free Software Foundation. For me
+the philosophy of Free Software is about ensuring the enrichment of a
+digital commons, so that knowledge and information are not concentrated
+in the hands of a privileged few and locked up as "intellectual
+property". The genius of copyleft licenses like the GNU (A)GPL is that
+they ensure software released to the public remains public. Open source
+does not care about that.
+
+If you also care about the public good, the hacker ethics, or the spirit
+of the web, please take a moment to consider joining FSF as an associate
+member. It comes with [[https://www.fsf.org/associate/benefits][numerous
+perks and benefits]].
diff --git a/microposts/gavin-belson.org b/microposts/gavin-belson.org
new file mode 100644
index 0000000..2078e50
--- /dev/null
+++ b/microposts/gavin-belson.org
@@ -0,0 +1,14 @@
+#+title: gavin-belson
+
+#+date: <2018-12-11>
+
+#+begin_quote
+ I don't know about you people, but I don't want to live in a world
+ where someone else makes the world a better place better than we do.
+#+end_quote
+
+Gavin Belson, Silicon Valley S2E1.
+
+I came across this quote in
+[[https://slate.com/business/2018/12/facebook-emails-lawsuit-embarrassing-mark-zuckerberg.html][a
+Slate post about Facebook]]
diff --git a/microposts/google-search-not-ai.org b/microposts/google-search-not-ai.org
new file mode 100644
index 0000000..f75532a
--- /dev/null
+++ b/microposts/google-search-not-ai.org
@@ -0,0 +1,18 @@
+#+title: google-search-not-ai
+
+#+date: <2018-04-30>
+
+#+begin_quote
+ But, users have learned to accommodate to Google not the other way
+ around. We know what kinds of things we can type into Google and what
+ we can't and we keep our searches to things that Google is likely to
+ help with. We know we are looking for texts and not answers to start a
+ conversation with an entity that knows what we really need to talk
+ about. People learn from conversation and Google can't have one. It
+ can pretend to have one using Siri but really those conversations tend
+ to get tiresome when you are past asking about where to eat.
+#+end_quote
+
+Roger Schank -
+[[http://www.rogerschank.com/fraudulent-claims-made-by-IBM-about-Watson-and-AI][Fraudulent
+claims made by IBM about Watson and AI]]
diff --git a/microposts/hacker-ethics.org b/microposts/hacker-ethics.org
new file mode 100644
index 0000000..a81bca5
--- /dev/null
+++ b/microposts/hacker-ethics.org
@@ -0,0 +1,20 @@
+#+title: hacker-ethics
+
+#+date: <2018-04-06>
+
+#+begin_quote
+
+ - Access to computers---and anything that might teach you something
+ about the way the world works---should be unlimited and total.
+ Always yield to the Hands-On Imperative!
+ - All information should be free.
+ - Mistrust Authority---Promote Decentralization.
+ - Hackers should be judged by their hacking, not bogus criteria such
+ as degrees, age, race, or position.
+ - You can create art and beauty on a computer.
+ - Computers can change your life for the better.
+#+end_quote
+
+[[https://en.wikipedia.org/wiki/Hacker_ethic][The Hacker Ethic]],
+[[https://en.wikipedia.org/wiki/Hackers:_Heroes_of_the_Computer_Revolution][Hackers:
+Heroes of the Computer Revolution]], by Steven Levy
diff --git a/microposts/hackers-excerpt.org b/microposts/hackers-excerpt.org
new file mode 100644
index 0000000..412a35a
--- /dev/null
+++ b/microposts/hackers-excerpt.org
@@ -0,0 +1,34 @@
+#+title: hackers-excerpt
+
+#+date: <2018-06-15>
+
+#+begin_quote
+ But as more nontechnical people bought computers, the things that
+ impressed hackers were not as essential. While the programs themselves
+ had to maintain a certain standard of quality, it was quite possible
+ that the most exacting standards---those applied by a hacker who
+ wanted to add one more feature, or wouldn't let go of a project until
+ it was demonstrably faster than anything else around---were probably
+ counterproductive. What seemed more important was marketing. There
+ were plenty of brilliant programs which no one knew about. Sometimes
+ hackers would write programs and put them in the public domain, give
+ them away as easily as John Harris had lent his early copy of
+ Jawbreaker to the guys at the Fresno computer store. But rarely would
+ people ask for public domain programs by name: they wanted the ones
+ they saw advertised and discussed in magazines, demonstrated in
+ computer stores. It was not so important to have amazingly clever
+ algorithms. Users would put up with more commonplace ones.
+
+ The Hacker Ethic, of course, held that every program should be as good
+ as you could make it (or better), infinitely flexible, admired for its
+ brilliance of concept and execution, and designed to extend the user's
+ powers. Selling computer programs like toothpaste was heresy. But it
+ was happening. Consider the prescription for success offered by one of
+ a panel of high-tech venture capitalists, gathered at a 1982 software
+ show: "I can summarize what it takes in three words: marketing,
+ marketing, marketing." When computers are sold like toasters, programs
+ will be sold like toothpaste. The Hacker Ethic notwithstanding.
+#+end_quote
+
+[[http://www.stevenlevy.com/index.php/books/hackers][Hackers: Heroes of
+the Computer Revolution]], by Steven Levy.
diff --git a/microposts/how-can-you-help-ia.org b/microposts/how-can-you-help-ia.org
new file mode 100644
index 0000000..ae6b80b
--- /dev/null
+++ b/microposts/how-can-you-help-ia.org
@@ -0,0 +1,7 @@
+#+title: how-can-you-help-ia
+
+#+date: <2020-06-21>
+
+[[https://blog.archive.org/2020/06/14/how-can-you-help-the-internet-archive/][How
+can you help the Internet Archive?]] Use it. It's more than the Wayback
+Machine. And get involved.
diff --git a/microposts/how-to-learn-on-your-own.org b/microposts/how-to-learn-on-your-own.org
new file mode 100644
index 0000000..14e4dfd
--- /dev/null
+++ b/microposts/how-to-learn-on-your-own.org
@@ -0,0 +1,9 @@
+#+title: how-to-learn-on-your-own
+
+#+date: <2018-05-30>
+
+Roger Grosse's post
+[[https://metacademy.org/roadmaps/rgrosse/learn_on_your_own][How to
+learn on your own (2015)]] is an excellent modern guide on how to learn
+and research technical stuff (especially machine learning and maths) on
+one's own.
diff --git a/microposts/ia-lawsuit.org b/microposts/ia-lawsuit.org
new file mode 100644
index 0000000..f5952e9
--- /dev/null
+++ b/microposts/ia-lawsuit.org
@@ -0,0 +1,24 @@
+#+title: ia-lawsuit
+
+#+date: <2020-08-02>
+
+The four big publishers Hachette, HarperCollins, Wiley, and Penguin
+Random House are still pursuing their lawsuit against the Internet
+Archive.
+
+#+begin_quote
+ [Their] lawsuit does not stop at seeking to end the practice of
+ Controlled Digital Lending. These publishers call for the destruction
+ of the 1.5 million digital books that Internet Archive makes available
+ to our patrons. This form of digital book burning is unprecedented and
+ unfairly disadvantages people with print disabilities. For the blind,
+ ebooks are a lifeline, yet less than one in ten exists in accessible
+ formats. Since 2010, Internet Archive has made our lending library
+ available to the blind and print disabled community, in addition to
+ sighted users. If the publishers are successful with their lawsuit,
+ more than a million of those books would be deleted from the
+ Internet's digital shelves forever.
+#+end_quote
+
+[[https://blog.archive.org/2020/07/29/internet-archive-responds-to-publishers-lawsuit/][Libraries
+lend books, and must continue to lend books: Internet Archive responds
+to publishers' lawsuit]]
diff --git a/microposts/learning-knowledge-graph-reddit-journal-club.org b/microposts/learning-knowledge-graph-reddit-journal-club.org
new file mode 100644
index 0000000..c382fc0
--- /dev/null
+++ b/microposts/learning-knowledge-graph-reddit-journal-club.org
@@ -0,0 +1,34 @@
+#+title: learning-knowledge-graph-reddit-journal-club
+
+#+date: <2018-05-07>
+
+It is a natural idea to look for ways to learn things like going through
+a skill tree in a computer RPG.
+
+For example I made a
+[[https://ypei.me/posts/2015-04-02-juggling-skill-tree.html][DAG for
+juggling]].
+
+Websites like [[https://knowen.org][Knowen]] and
+[[https://metacademy.org][Metacademy]] explore this idea with an added
+flavour of open collaboration.
+
+The design of Metacademy looks quite promising. It also has a nice
+tagline: "your package manager for knowledge".
+
+There are so so many tools to assist learning / research / knowledge
+sharing today, and we should keep experimenting, in the hope that
+eventually one of them will scale.
+
+On another note, I often complain about the lack of a place to discuss
+math research online, but today I found on Reddit some journal clubs on
+machine learning:
+[[https://www.reddit.com/r/MachineLearning/comments/8aluhs/d_machine_learning_wayr_what_are_you_reading_week/][1]],
+[[https://www.reddit.com/r/MachineLearning/comments/8elmd8/d_anyone_having_trouble_reading_a_particular/][2]].
+If only we had this for maths. On the other hand r/math does have some
+interesting recurring threads as well:
+[[https://www.reddit.com/r/math/wiki/everythingaboutx][Everything about
+X]] and
+[[https://www.reddit.com/r/math/search?q=what+are+you+working+on?+author:automoderator+&sort=new&restrict_sr=on&t=all][What
+Are You Working On?]]. Hopefully these threads can last for years to
+come.
diff --git a/microposts/learning-undecidable.org b/microposts/learning-undecidable.org
new file mode 100644
index 0000000..a4e3af9
--- /dev/null
+++ b/microposts/learning-undecidable.org
@@ -0,0 +1,70 @@
+#+title: learning-undecidable
+
+#+date: <2019-01-27>
+
+My take on the
+[[https://www.nature.com/articles/s42256-018-0002-3][Nature paper
+/Learnability can be undecidable/]]:
+
+Fantastic article, very clearly written.
+
+So it reduces a kind of learnability called estimating the maximum
+(EMX) to a question about the cardinality of the real numbers, which is
+undecidable.
+
+When it comes to the relation between EMX and the rest of the machine
+learning framework, the article mentions that EMX falls under
+"extensions of PAC learnability [which] include Vapnik's statistical
+learning setting and the equivalent general learning setting by
+Shalev-Shwartz and colleagues" (I have no idea what these two things
+are), but it does not say whether EMX is representative of, or reduces
+to, common learning tasks. So it is not clear whether its
+undecidability applies to ML at large.
+
+Another condition in the main theorem is the union bounded closure
+assumption. It seems a reasonable property for a family of sets, but
+then again I wonder how it translates to learning.
+
+The article says, "By now, we know of quite a few independence [from
+mathematical axioms] results, mostly for set theoretic questions like
+the continuum hypothesis, but also for results in algebra, analysis,
+infinite combinatorics and more. Machine learning, so far, has escaped
+this fate." But the description of EMX learnability makes it read more
+like a classical mathematics / theoretical computer science problem
+than machine learning.
+
+An insightful conclusion: "How come learnability can neither be proved
+nor refuted? A closer look reveals that the source of the problem is in
+defining learnability as the existence of a learning function rather
+than the existence of a learning algorithm. In contrast with the
+existence of algorithms, the existence of functions over infinite
+domains is a (logically) subtle issue."
+
+In relation to practical problems, it uses an example of ad targeting.
+However, a lot is lost in translation from the main theorem to this ad
+example.
+
+The EMX problem states: given a domain X, a distribution P over X which
+is unknown, some samples from P, and a family of subsets of X called F,
+find A in F that approximately maximises P(A).
+
+The undecidability rests on X being the continuous [0, 1] interval, and
+from the insight, we know the problem comes from the cardinality of
+subsets of the [0, 1] interval, which is "logically subtle".
+
+In the ad problem, the domain X is all potential visitors, which is
+finite because there is a finite number of people in the world. In this
+case P is a categorical distribution over 1..n, where n is the
+population of the world. One can get a good estimate of the parameters
+of a categorical distribution by asking for a sufficiently large number
+of samples and computing the empirical distribution. Let's call the
+estimated distribution Q. One can then choose from F (also finite) the
+set A that maximises Q(A), which will be a solution to EMX.
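The finite case described above can be sketched in a few lines (a toy illustration of the argument, not a construction from the paper; the function names and the list-of-sets representation of F are my own):

```python
from collections import Counter

def finite_emx(samples, family):
    """Toy solver for the finite ad-targeting case: estimate the
    unknown categorical distribution P empirically from samples,
    then pick the set A in the (finite) family F maximising Q(A)."""
    n = len(samples)
    counts = Counter(samples)      # empirical distribution Q (unnormalised)
    def q_mass(a):                 # Q(A) = sum of Q(x) for x in A
        return sum(counts[x] for x in a) / n
    return max(family, key=q_mass)
```

With enough samples, Q(A) is close to P(A) for every A in a finite family, so the returned set approximately maximises P(A), which is why the finite instance is unproblematic.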
+
+In other words, the theorem states: EMX is undecidable because not all
+EMX instances are decidable, because there are some nasty ones due to
+infinities. That does not mean no EMX instance is decidable. And I think
+the ad instance is decidable. Is there a learning task that actually
+corresponds to an undecidable EMX instance? I don't know, but I will not
+believe the result of this paper is useful until I see one.
+
+h/t Reynaldo Boulogne
diff --git a/microposts/margins.org b/microposts/margins.org
new file mode 100644
index 0000000..d29804b
--- /dev/null
+++ b/microposts/margins.org
@@ -0,0 +1,7 @@
+#+title: margins
+
+#+date: <2018-10-05>
+
+With Fermat's Library's new tool
+[[https://fermatslibrary.com/margins][margins]], you can host your own
+journal club.
diff --git a/microposts/math-writing-decoupling.org b/microposts/math-writing-decoupling.org
new file mode 100644
index 0000000..3ccb9d1
--- /dev/null
+++ b/microposts/math-writing-decoupling.org
@@ -0,0 +1,26 @@
+#+title: math-writing-decoupling
+
+#+date: <2018-05-10>
+
+One way to write readable mathematics is to decouple concepts. One idea
+is the following template. First write a toy example with all the
+important components present in this example, then analyse each
+component individually and elaborate how (perhaps more complex)
+variations of the component can extend the toy example and induce more
+complex or powerful versions of the toy example. Through such
+incremental development, one should be able to arrive at any result in
+cutting edge research after a pleasant journey.
+
+It's a bit like the UNIX philosophy, where you have a basic system of
+modules like IO, memory management, graphics etc, and modify / improve
+each module individually (H/t [[http://nand2tetris.org/][NAND2Tetris]]).
+
+The book [[http://neuralnetworksanddeeplearning.com/][Neural networks
+and deep learning]] by Michael Nielsen is an example of such an
+approach. It begins the journey with a very simple neural net with one
+hidden layer, no regularisation, and sigmoid activations. It then
+analyses each component, including cost functions, the backpropagation
+algorithm, the activation functions, regularisation and the overall
+architecture (from fully connected to CNN), individually, and improves
+the toy example incrementally. Over the course of the book, the
+accuracy of the MNIST example grows from 95.42% to 99.67%.
diff --git a/microposts/neural-nets-activation.org b/microposts/neural-nets-activation.org
new file mode 100644
index 0000000..aee7c2d
--- /dev/null
+++ b/microposts/neural-nets-activation.org
@@ -0,0 +1,24 @@
+#+title: neural-nets-activation
+
+#+date: <2018-05-09>
+
+#+begin_quote
+ What makes the rectified linear activation function better than the
+ sigmoid or tanh functions? At present, we have a poor understanding of
+ the answer to this question. Indeed, rectified linear units have only
+ begun to be widely used in the past few years. The reason for that
+ recent adoption is empirical: a few people tried rectified linear
+ units, often on the basis of hunches or heuristic arguments. They got
+ good results classifying benchmark data sets, and the practice has
+ spread. In an ideal world we'd have a theory telling us which
+ activation function to pick for which application. But at present
+ we're a long way from such a world. I should not be at all surprised
+ if further major improvements can be obtained by an even better choice
+ of activation function. And I also expect that in coming decades a
+ powerful theory of activation functions will be developed. Today, we
+ still have to rely on poorly understood rules of thumb and experience.
+#+end_quote
+
+Michael Nielsen,
+[[http://neuralnetworksanddeeplearning.com/chap6.html#convolutional_neural_networks_in_practice][Neural
+networks and deep learning]]
diff --git a/microposts/neural-nets-regularization.org b/microposts/neural-nets-regularization.org
new file mode 100644
index 0000000..f92feb6
--- /dev/null
+++ b/microposts/neural-nets-regularization.org
@@ -0,0 +1,25 @@
+#+title: neural-nets-regularization
+
+#+date: <2018-05-08>
+
+#+begin_quote
+ no-one has yet developed an entirely convincing theoretical
+ explanation for why regularization helps networks generalize. Indeed,
+ researchers continue to write papers where they try different
+ approaches to regularization, compare them to see which works better,
+ and attempt to understand why different approaches work better or
+ worse. And so you can view regularization as something of a kludge.
+ While it often helps, we don't have an entirely satisfactory
+ systematic understanding of what's going on, merely incomplete
+ heuristics and rules of thumb.
+
+ There's a deeper set of issues here, issues which go to the heart of
+ science. It's the question of how we generalize. Regularization may
+ give us a computational magic wand that helps our networks generalize
+ better, but it doesn't give us a principled understanding of how
+ generalization works, nor of what the best approach is.
+#+end_quote
+
+Michael Nielsen,
+[[http://neuralnetworksanddeeplearning.com/chap3.html#why_does_regularization_help_reduce_overfitting][Neural
+networks and deep learning]]
diff --git a/microposts/neural-networks-programming-paradigm.org b/microposts/neural-networks-programming-paradigm.org
new file mode 100644
index 0000000..c96f2b8
--- /dev/null
+++ b/microposts/neural-networks-programming-paradigm.org
@@ -0,0 +1,21 @@
+#+title: neural-networks-programming-paradigm
+
+#+date: <2018-05-01>
+
+#+begin_quote
+ Neural networks are one of the most beautiful programming paradigms
+ ever invented. In the conventional approach to programming, we tell
+ the computer what to do, breaking big problems up into many small,
+ precisely defined tasks that the computer can easily perform. By
+ contrast, in a neural network we don't tell the computer how to solve
+ our problem. Instead, it learns from observational data, figuring out
+ its own solution to the problem at hand.
+#+end_quote
+
+Michael Nielsen -
+[[http://neuralnetworksanddeeplearning.com/about.html][What this book
+(Neural Networks and Deep Learning) is about]]
+
+Unrelated to the quote, note that Nielsen's book is licensed under
+[[https://creativecommons.org/licenses/by-nc/3.0/deed.en_GB][CC BY-NC]],
+so one can build on it and redistribute non-commercially.
diff --git a/microposts/neural-turing-machine.org b/microposts/neural-turing-machine.org
new file mode 100644
index 0000000..b4212c2
--- /dev/null
+++ b/microposts/neural-turing-machine.org
@@ -0,0 +1,37 @@
+#+title: neural-turing-machine
+
+#+date: <2018-05-09>
+
+#+begin_quote
+ One way RNNs are currently being used is to connect neural networks
+ more closely to traditional ways of thinking about algorithms, ways of
+ thinking based on concepts such as Turing machines and (conventional)
+ programming languages. [[https://arxiv.org/abs/1410.4615][A 2014
+ paper]] developed an RNN which could take as input a
+ character-by-character description of a (very, very simple!) Python
+ program, and use that description to predict the output. Informally,
+ the network is learning to "understand" certain Python programs.
+ [[https://arxiv.org/abs/1410.5401][A second paper, also from 2014]],
+ used RNNs as a starting point to develop what they called a neural
+ Turing machine (NTM). This is a universal computer whose entire
+ structure can be trained using gradient descent. They trained their
+ NTM to infer algorithms for several simple problems, such as sorting
+ and copying.
+
+ As it stands, these are extremely simple toy models. Learning to
+ execute the Python program =print(398345+42598)= doesn't make a
+ network into a full-fledged Python interpreter! It's not clear how
+ much further it will be possible to push the ideas. Still, the results
+ are intriguing. Historically, neural networks have done well at
+ pattern recognition problems where conventional algorithmic approaches
+ have trouble. Vice versa, conventional algorithmic approaches are good
+ at solving problems that neural nets aren't so good at. No-one today
+ implements a web server or a database program using a neural network!
+ It'd be great to develop unified models that integrate the strengths
+ of both neural networks and more traditional approaches to algorithms.
+ RNNs and ideas inspired by RNNs may help us do that.
+#+end_quote
+
+Michael Nielsen,
+[[http://neuralnetworksanddeeplearning.com/chap6.html#other_approaches_to_deep_neural_nets][Neural
+networks and deep learning]]
diff --git a/microposts/nlp-arxiv.org b/microposts/nlp-arxiv.org
new file mode 100644
index 0000000..da9525c
--- /dev/null
+++ b/microposts/nlp-arxiv.org
@@ -0,0 +1,10 @@
+#+title: nlp-arxiv
+
+#+date: <2018-05-08>
+
+Primer Science is a tool by a startup called Primer that uses NLP to
+summarize content (but not single papers, yet) on arxiv. A developer of
+this tool predicts in
+[[https://twimlai.com/twiml-talk-136-taming-arxiv-w-natural-language-processing-with-john-bohannon/#][an
+interview]] that progress in AI's ability to extract meaning from AI
+research papers will be the biggest accelerant of AI research.
diff --git a/microposts/open-library.org b/microposts/open-library.org
new file mode 100644
index 0000000..c1a64c3
--- /dev/null
+++ b/microposts/open-library.org
@@ -0,0 +1,17 @@
+#+title: open-library
+
+#+date: <2020-06-12>
+
+Open Library was cofounded by Aaron Swartz. As part of the Internet
+Archive, it has done good work to spread knowledge. However it is
+currently
+[[https://arstechnica.com/tech-policy/2020/06/internet-archive-ends-emergency-library-early-to-appease-publishers/][being
+sued by four major publishers]] for the
+[[https://archive.org/details/nationalemergencylibrary][National
+Emergency Library]]. IA decided to
+[[https://blog.archive.org/2020/06/10/temporary-national-emergency-library-to-close-2-weeks-early-returning-to-traditional-controlled-digital-lending/][close
+the NEL two weeks earlier than planned]], but the lawsuit is not over,
+which in the worst case could result in Controlled Digital Lending
+being ruled illegal and (less likely) the bankruptcy of the Internet
+Archive. If this happens it will be a big setback for the free-culture
+movement.
diff --git a/microposts/open-review-net.org b/microposts/open-review-net.org
new file mode 100644
index 0000000..72eacfd
--- /dev/null
+++ b/microposts/open-review-net.org
@@ -0,0 +1,15 @@
+#+title: open-review-net
+
+#+date: <2018-05-14>
+
+Open peer review means a peer review process in which communications,
+e.g. comments and responses, are public.
+
+Like [[https://scipost.org/][SciPost]] mentioned in
+[[/posts/2018-04-10-update-open-research.html][my post]],
+[[https://openreview.net][OpenReview.net]] is an example of open peer
+review in research. It looks like their focus is machine learning. Their
+[[https://openreview.net/about][about page]] states their mission, and
+here's [[https://openreview.net/group?id=ICLR.cc/2018/Conference][an
+example]] where you can click on each entry to see what it is like. We
+definitely need this in the maths research community.
diff --git a/microposts/pun-generator.org b/microposts/pun-generator.org
new file mode 100644
index 0000000..9b71ff9
--- /dev/null
+++ b/microposts/pun-generator.org
@@ -0,0 +1,6 @@
+#+title: pun-generator
+
+#+date: <2018-06-19>
+
+[[https://en.wikipedia.org/wiki/Computational_humor#Pun_generation][Pun
+generators exist]].
diff --git a/microposts/random-forests.org b/microposts/random-forests.org
new file mode 100644
index 0000000..f52c176
--- /dev/null
+++ b/microposts/random-forests.org
@@ -0,0 +1,24 @@
+#+title: random-forests
+
+#+date: <2018-05-15>
+
+[[https://lagunita.stanford.edu/courses/HumanitiesSciences/StatLearning/Winter2016/info][Stanford
+Lagunita's statistical learning course]] has some excellent lectures on
+random forests. It starts with explanations of decision trees, followed
+by bagged trees and random forests, and ends with boosting. From these
+lectures it seems that:
+
+1. The term "predictors" in statistical learning = "features" in machine
+   learning.
+2. The main idea of random forests, namely dropping predictors for
+   individual trees and aggregating by majority or average, is the same
+   as the idea of dropout in neural networks, where a proportion of
+   neurons in the hidden layers are dropped temporarily during
+   different minibatches of training, effectively averaging over an
+   ensemble of subnetworks. Both tricks are used as regularisation,
+   i.e. to reduce the variance. The only difference is: in random
+   forests, all but roughly a square-root number of the total features
+   are dropped, whereas the dropout ratio in neural networks is usually
+   a half.
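The analogy in point 2 can be made concrete by writing out the two masking rules side by side (a sketch only; the function names are mine, and real implementations resample the feature subset at every split):

```python
import math
import random

def rf_feature_subset(p):
    """Random forests: each tree/split considers only about sqrt(p)
    of the p predictors, i.e. all but a square-root number of the
    features are dropped."""
    k = max(1, round(math.sqrt(p)))
    return random.sample(range(p), k)

def dropout_mask(n_units, rate=0.5):
    """Dropout: each hidden unit is kept independently with
    probability 1 - rate for the current minibatch."""
    return [random.random() >= rate for _ in range(n_units)]
```

Both functions produce a random mask over components whose predictions are later averaged, which is the shared regularising mechanism; they differ only in how aggressive the masking is.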
+
+By the way, here's a comparison between statistical learning and
+machine learning from the slides of the Statistical Learning course:
diff --git a/microposts/rnn-fsm.org b/microposts/rnn-fsm.org
new file mode 100644
index 0000000..a1bdf2d
--- /dev/null
+++ b/microposts/rnn-fsm.org
@@ -0,0 +1,30 @@
+#+title: rnn-fsm
+
+#+date: <2018-05-11>
+
+Related to [[file:neural-turing-machine][a previous micropost]].
+
+[[http://www.cs.toronto.edu/~rgrosse/csc321/lec9.pdf][These slides from
+Toronto]] are a nice introduction to RNNs (recurrent neural networks)
+from a computational point of view. They state that an RNN can simulate
+any FSM (finite state machine, a.k.a. finite automaton, FA), with a toy
+example computing the parity of a binary string.
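The parity example can be realised with hand-set threshold units (a sketch of one standard construction, not necessarily the slides' exact network): two hidden units compute OR and AND of the input bit and the carried state, and their difference gives XOR, i.e. the updated parity.

```python
def step(z):
    """Heaviside threshold activation."""
    return 1.0 if z > 0 else 0.0

def parity_rnn(bits):
    """Simulate the two-state parity FSM with threshold units.
    All weights are hand-set, not learned."""
    h = 0.0                      # hidden state: parity of bits so far
    for x in bits:
        a = step(x + h - 0.5)    # OR(x, h)
        b = step(x + h - 1.5)    # AND(x, h)
        h = step(a - b - 0.5)    # XOR(x, h): the new parity
    return int(h)
```

The state h plays the role of the FSM's current state, and the fixed weights implement its transition function, which is the sense in which an RNN simulates an FSM.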
+
+[[http://www.deeplearningbook.org/contents/rnn.html][Goodfellow et
+al.'s book]] (see pages 372 and 374) goes one step further, stating
+that an RNN with a hidden-to-hidden layer can simulate Turing machines,
+and not only that, but also the /universal/ Turing machine (UTM) (the
+book references
+[[https://www.sciencedirect.com/science/article/pii/S0022000085710136][Siegelmann-Sontag]]),
+a property not shared by the weaker network where the hidden-to-hidden
+layer is replaced by an output-to-hidden layer (page 376).
+
+By the way, the RNN with a hidden-to-hidden layer has the same
+architecture as the so-called linear dynamical system mentioned in
+[[https://www.coursera.org/learn/neural-networks/lecture/Fpa7y/modeling-sequences-a-brief-overview][Hinton's
+video]].
+
+From what I have learned, the universality of RNNs and that of
+feedforward networks are therefore due to different arguments, the
+former coming from Turing machines and the latter from an analytical
+view of approximation by step functions.
diff --git a/microposts/rnn-turing.org b/microposts/rnn-turing.org
new file mode 100644
index 0000000..8636a5a
--- /dev/null
+++ b/microposts/rnn-turing.org
@@ -0,0 +1,11 @@
+#+title: rnn-turing
+
+#+date: <2018-09-18>
+
+Just a non-rigorous guess / thought: feedforward networks are like
+combinational logic, and recurrent networks are like sequential logic
+(e.g. the data flip-flop is like the feedback connection in an RNN).
+Since NAND + combinational logic + sequential logic = a von Neumann
+machine, which is an approximation of a Turing machine, it is not
+surprising that RNNs (with feedforward components) are Turing complete
+(assuming that neural networks can learn the NAND gate).
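As a minimal check on that last assumption, a single threshold neuron with hand-picked weights already computes NAND (a textbook construction, e.g. in chapter 1 of Nielsen's book; the weights (-2, -2) and bias 3 are one choice among many), and since NAND is functionally complete, other gates follow:

```python
def nand_neuron(x1, x2):
    """A perceptron with weights (-2, -2) and bias 3 fires (outputs 1)
    exactly when NOT (x1 AND x2)."""
    return 1 if -2 * x1 - 2 * x2 + 3 > 0 else 0

def xor(x1, x2):
    """XOR built from four NAND neurons, the standard gate construction."""
    a = nand_neuron(x1, x2)
    return nand_neuron(nand_neuron(x1, a), nand_neuron(x2, a))
```

So a network of such neurons can realise any combinational circuit; the recurrent feedback connections then supply the state, completing the flip-flop analogy.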
diff --git a/microposts/sanders-suspend-campaign.org b/microposts/sanders-suspend-campaign.org
new file mode 100644
index 0000000..fb6865b
--- /dev/null
+++ b/microposts/sanders-suspend-campaign.org
@@ -0,0 +1,8 @@
+#+title: sanders-suspend-campaign
+
+#+date: <2020-04-15>
+
+Suspending the campaign is different from dropping out of the race.
+Bernie Sanders remains on the ballot, and indeed in his campaign
+suspension speech he encouraged people to continue voting for him in the
+Democratic primaries to push for changes at the convention.
diff --git a/microposts/short-science.org b/microposts/short-science.org
new file mode 100644
index 0000000..a98facb
--- /dev/null
+++ b/microposts/short-science.org
@@ -0,0 +1,23 @@
+#+title: short-science
+
+#+date: <2018-09-05>
+
+#+begin_quote
+
+ - ShortScience.org is a platform for post-publication discussion
+ aiming to improve accessibility and reproducibility of research
+ ideas.
+ - The website has over 800 summaries, mostly in machine learning,
+ written by the community and organized by paper, conference, and
+ year.
+ - Reading summaries of papers is useful to obtain the perspective and
+ insight of another reader, why they liked or disliked it, and their
+ attempt to demystify complicated sections.
+ - Also, writing summaries is a good exercise to understand the content
+ of a paper because you are forced to challenge your assumptions when
+ explaining it.
+ - Finally, you can keep up to date with the flood of research by
+ reading the latest summaries on our Twitter and Facebook pages.
+#+end_quote
+
+[[https://shortscience.org][ShortScience.org]]
diff --git a/microposts/simple-solution-lack-of-math-rendering.org b/microposts/simple-solution-lack-of-math-rendering.org
new file mode 100644
index 0000000..e8e3d83
--- /dev/null
+++ b/microposts/simple-solution-lack-of-math-rendering.org
@@ -0,0 +1,10 @@
+#+title: simple-solution-lack-of-math-rendering
+
+#+date: <2018-05-02>
+
+The lack of maths rendering in major online communication platforms like
+instant messaging, email or Github has been a minor obsession of mine
+for quite a while, as I saw it as a big factor preventing people from
+talking more maths online. But today I realised this is totally a
+non-issue. Just do what people on IRC have been doing since the
+inception of the universe: use a (latex) pastebin.
diff --git a/microposts/sql-injection-video.org b/microposts/sql-injection-video.org
new file mode 100644
index 0000000..cc616c5
--- /dev/null
+++ b/microposts/sql-injection-video.org
@@ -0,0 +1,10 @@
+#+title: sql-injection-video
+
+#+date: <2018-05-08>
+
+Computerphile has some brilliant educational videos on computer science,
+like [[https://www.youtube.com/watch?v=ciNHn38EyRc][a demo of SQL
+injection]], [[https://www.youtube.com/watch?v=eis11j_iGMs][a toy
+example of the lambda calculus]], and
+[[https://www.youtube.com/watch?v=9T8A89jgeTI][explaining the Y
+combinator]].
diff --git a/microposts/stallman-resign.org b/microposts/stallman-resign.org
new file mode 100644
index 0000000..727d10f
--- /dev/null
+++ b/microposts/stallman-resign.org
@@ -0,0 +1,24 @@
+#+title: stallman-resign
+
+#+date: <2019-09-29>
+
+Last week Richard Stallman resigned from FSF. It is a great loss for the
+free software movement.
+
+The apparent cause of his resignation and the events that triggered it
+reflect some alarming trends of the zeitgeist. Here is a detailed review
+of what happened: [[https://sterling-archermedes.github.io/][Low grade
+"journalists" and internet mob attack RMS with lies. In-depth review.]].
+Some interesting articles on this are:
+[[https://jackbaruth.com/?p=16779][Weekly Roundup: The Passion Of Saint
+iGNUcius Edition]],
+[[http://techrights.org/2019/09/17/rms-witch-hunt/][Why I Once Called
+for Richard Stallman to Step Down]].
+
+Dishonest and misleading media pieces involved in this incident include
+[[https://www.thedailybeast.com/famed-mit-computer-scientist-richard-stallman-defends-epstein-victims-were-entirely-willing][The
+Daily Beast]],
+[[https://www.vice.com/en_us/article/9ke3ke/famed-computer-scientist-richard-stallman-described-epstein-victims-as-entirely-willing][Vice]],
+[[https://techcrunch.com/2019/09/16/computer-scientist-richard-stallman-who-defended-jeffrey-epstein-resigns-from-mit-csail-and-the-free-software-foundation/][Tech
+Crunch]],
+[[https://www.wired.com/story/richard-stallmans-exit-heralds-a-new-era-in-tech/][Wired]].
diff --git a/microposts/static-site-generator.org b/microposts/static-site-generator.org
new file mode 100644
index 0000000..1deac71
--- /dev/null
+++ b/microposts/static-site-generator.org
@@ -0,0 +1,13 @@
+#+title: static-site-generator
+
+#+date: <2018-03-23>
+
+#+begin_quote
+ "Static site generators seem like music databases, in that everyone
+ eventually writes their own crappy one that just barely scratches the
+ itch they had (and I'm no exception)."
+#+end_quote
+
+__david__@hackernews
+
+So did I.
diff --git a/microposts/zitierkartell.org b/microposts/zitierkartell.org
new file mode 100644
index 0000000..eedaf2f
--- /dev/null
+++ b/microposts/zitierkartell.org
@@ -0,0 +1,7 @@
+#+title: zitierkartell
+
+#+date: <2018-09-07>
+
+[[https://academia.stackexchange.com/questions/116489/counter-strategy-against-group-that-repeatedly-does-strategic-self-citations-and][Counter
+strategy against group that repeatedly does strategic self-citations and
+ignores other relevant research]]
diff --git a/org-template/style.org b/org-template/style.org
new file mode 100644
index 0000000..e69de29
--- /dev/null
+++ b/org-template/style.org
diff --git a/pages/all-microposts.org b/pages/all-microposts.org
new file mode 100644
index 0000000..92896bc
--- /dev/null
+++ b/pages/all-microposts.org
@@ -0,0 +1,773 @@
+#+title: Yuchen's Microblog
+
+*** 2020-08-02: ia-lawsuit
+ :PROPERTIES:
+ :CUSTOM_ID: ia-lawsuit
+ :END:
+The four big publishers Hachette, HarperCollins, Wiley, and Penguin
+Random House are still pursuing Internet Archive.
+
+#+begin_quote
+ [Their] lawsuit does not stop at seeking to end the practice of
+ Controlled Digital Lending. These publishers call for the destruction
+ of the 1.5 million digital books that Internet Archive makes available
+ to our patrons. This form of digital book burning is unprecedented and
+ unfairly disadvantages people with print disabilities. For the blind,
+ ebooks are a lifeline, yet less than one in ten exists in accessible
+ formats. Since 2010, Internet Archive has made our lending library
+ available to the blind and print disabled community, in addition to
+ sighted users. If the publishers are successful with their lawsuit,
+ more than a million of those books would be deleted from the
+ Internet's digital shelves forever.
+#+end_quote
+
+[[https://blog.archive.org/2020/07/29/internet-archive-responds-to-publishers-lawsuit/][Libraries
+lend books, and must continue to lend books: Internet Archive responds
+to publishers' lawsuit]]
+*** 2020-08-02: fsf-membership
+ :PROPERTIES:
+ :CUSTOM_ID: fsf-membership
+ :END:
+I am a proud associate member of the Free Software Foundation. For me
+the philosophy of Free Software is about ensuring the enrichment of a
+digital commons, so that knowledge and information are not concentrated
+in the hands of a select privileged few and locked up as "intellectual
+property". The genius of copyleft licenses like the GNU (A)GPL ensures
+that software released to the public remains public. Open source does
+not care about that.
+
+If you also care about the public good, the hacker ethics, or the spirit
+of the web, please take a moment to consider joining FSF as an associate
+member. It comes with [[https://www.fsf.org/associate/benefits][numerous
+perks and benefits]].
+*** 2020-06-21: how-can-you-help-ia
+ :PROPERTIES:
+ :CUSTOM_ID: how-can-you-help-ia
+ :END:
+[[https://blog.archive.org/2020/06/14/how-can-you-help-the-internet-archive/][How
+can you help the Internet Archive?]] Use it. It's more than the Wayback
+Machine. And get involved.
+*** 2020-06-12: open-library
+ :PROPERTIES:
+ :CUSTOM_ID: open-library
+ :END:
+Open Library was cofounded by Aaron Swartz. As part of the Internet
+Archive, it has done good work to spread knowledge. However it is
+currently
+[[https://arstechnica.com/tech-policy/2020/06/internet-archive-ends-emergency-library-early-to-appease-publishers/][being
+sued by four major publishers]] for the
+[[https://archive.org/details/nationalemergencylibrary][National
+Emergency Library]]. IA decided to
+[[https://blog.archive.org/2020/06/10/temporary-national-emergency-library-to-close-2-weeks-early-returning-to-traditional-controlled-digital-lending/][close
+the NEL two weeks earlier than planned]], but the lawsuit is not over,
+which in the worst case could result in Controlled Digital Lending
+being ruled illegal and (less likely) the bankruptcy of the Internet
+Archive. If this happens it will be a big setback for the free-culture
+movement.
+*** 2020-04-15: sanders-suspend-campaign
+ :PROPERTIES:
+ :CUSTOM_ID: sanders-suspend-campaign
+ :END:
+Suspending the campaign is different from dropping out of the race.
+Bernie Sanders remains on the ballot, and indeed in his campaign
+suspension speech he encouraged people to continue voting for him in the
+Democratic primaries to push for changes at the convention.
+*** 2019-09-30: defense-stallman
+ :PROPERTIES:
+ :CUSTOM_ID: defense-stallman
+ :END:
+Someone wrote a bold article titled
+[[https://geoff.greer.fm/2019/09/30/in-defense-of-richard-stallman/]["In
+Defense of Richard Stallman"]]. Kudos to him.
+
+Also, an interesting read:
+[[https://cfenollosa.com/blog/famous-computer-public-figure-suffers-the-consequences-for-asshole-ish-behavior.html][Famous
+public figure in tech suffers the consequences for asshole-ish
+behavior]].
+*** 2019-09-29: stallman-resign
+ :PROPERTIES:
+ :CUSTOM_ID: stallman-resign
+ :END:
+Last week Richard Stallman resigned from FSF. It is a great loss for the
+free software movement.
+
+The apparent cause of his resignation and the events that triggered it
+reflect some alarming trends of the zeitgeist. Here is a detailed review
+of what happened: [[https://sterling-archermedes.github.io/][Low grade
+"journalists" and internet mob attack RMS with lies. In-depth review.]].
+Some interesting articles on this are:
+[[https://jackbaruth.com/?p=16779][Weekly Roundup: The Passion Of Saint
+iGNUcius Edition]],
+[[http://techrights.org/2019/09/17/rms-witch-hunt/][Why I Once Called
+for Richard Stallman to Step Down]].
+
+Dishonest and misleading media pieces involved in this incident include
+[[https://www.thedailybeast.com/famed-mit-computer-scientist-richard-stallman-defends-epstein-victims-were-entirely-willing][The
+Daily Beast]],
+[[https://www.vice.com/en_us/article/9ke3ke/famed-computer-scientist-richard-stallman-described-epstein-victims-as-entirely-willing][Vice]],
+[[https://techcrunch.com/2019/09/16/computer-scientist-richard-stallman-who-defended-jeffrey-epstein-resigns-from-mit-csail-and-the-free-software-foundation/][Tech
+Crunch]],
+[[https://www.wired.com/story/richard-stallmans-exit-heralds-a-new-era-in-tech/][Wired]].
+*** 2019-03-16: decss-haiku
+ :PROPERTIES:
+ :CUSTOM_ID: decss-haiku
+ :END:
+
+#+begin_quote
+ #+begin_example
+ Muse! When we learned to
+ count, little did we know all
+ the things we could do
+
+ some day by shuffling
+ those numbers: Pythagoras
+ said "All is number"
+
+ long before he saw
+ computers and their effects,
+ or what they could do
+
+ by computation,
+ naive and mechanical
+ fast arithmetic.
+
+ It changed the world, it
+ changed our consciousness and lives
+ to have such fast math
+
+ available to
+ us and anyone who cared
+ to learn programming.
+
+ Now help me, Muse, for
+ I wish to tell a piece of
+ controversial math,
+
+ for which the lawyers
+ of DVD CCA
+ don't forbear to sue:
+
+ that they alone should
+ know or have the right to teach
+ these skills and these rules.
+
+ (Do they understand
+ the content, or is it just
+ the effects they see?)
+
+ And all mathematics
+ is full of stories (just read
+ Eric Temple Bell);
+
+ and CSS is
+ no exception to this rule.
+ Sing, Muse, decryption
+
+ once secret, as all
+ knowledge, once unknown: how to
+ decrypt DVDs.
+ #+end_example
+#+end_quote
+
+Seth Schoen, [[https://en.wikipedia.org/wiki/DeCSS_haiku][DeCSS haiku]]
+*** 2019-01-27: learning-undecidable
+ :PROPERTIES:
+ :CUSTOM_ID: learning-undecidable
+ :END:
+My take on the
+[[https://www.nature.com/articles/s42256-018-0002-3][Nature paper
+/Learning can be undecidable/]]:
+
+Fantastic article, very clearly written.
+
+So it reduces a kind of learnability called estimating the maximum
+(EMX) to the cardinality of the real numbers, which is undecidable.
+
+When it comes to the relation between EMX and the rest of the machine
+learning framework, the article mentions that EMX belongs to "extensions
+of PAC learnability include Vapnik's statistical learning setting and
+the equivalent general learning setting by Shalev-Shwartz and
+colleagues" (I have no idea what these two things are), but it does not
+say whether EMX is representative of or reduces to common learning
+tasks. So it is not clear whether its undecidability applies to ML at
+large.
+
+Another condition of the main theorem is the union bounded closure
+assumption. It seems a reasonable property of a family of sets, but then
+again I wonder how that translates to learning.
+
+The article says "By now, we know of quite a few independence [from
+mathematical axioms] results, mostly for set theoretic questions like
+the continuum hypothesis, but also for results in algebra, analysis,
+infinite combinatorics and more. Machine learning, so far, has escaped
+this fate." But the description of EMX learnability makes it look more
+like a classical mathematical / theoretical computer science problem
+than machine learning.
+
+An insightful conclusion: "How come learnability can neither be proved
+nor refuted? A closer look reveals that the source of the problem is in
+defining learnability as the existence of a learning function rather
+than the existence of a learning algorithm. In contrast with the
+existence of algorithms, the existence of functions over infinite
+domains is a (logically) subtle issue."
+
+In relation to practical problems, it uses an example of ad targeting.
+However, a lot is lost in translation from the main theorem to this ad
+example.
+
+The EMX problem states: given a domain X, a distribution P over X which
+is unknown, some samples from P, and a family of subsets of X called F,
+find A in F that approximately maximises P(A).
+
+The undecidability rests on X being the continuous [0, 1] interval, and
+from the insight, we know the problem comes from the cardinality of
+subsets of the [0, 1] interval, which is "logically subtle".
+
+In the ad problem, the domain X is all potential visitors, which is
+finite because there is a finite number of people in the world. In this
+case P is a categorical distribution over {1, ..., n}, where n is the
+population of the world. One can obtain a good estimate of the
+parameters of a categorical distribution by taking a sufficiently large
+number of samples and computing the empirical distribution. Let's call
+the estimated distribution Q. One can then choose from F (also finite)
+the set A that maximises Q(A), which will be a solution to EMX.
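+
+This finite recipe can be sketched in a few lines. Below is a toy
+illustration with a made-up domain and family F (not taken from the
+paper): estimate the categorical distribution empirically, then pick
+the set in F with the largest empirical mass.

```python
import random
from collections import Counter

def emx_finite(samples, F):
    """Solve a finite EMX instance empirically: return the set A in F
    maximising the empirical probability Q(A) of the samples."""
    n = len(samples)
    counts = Counter(samples)
    def q(A):
        # Q(A) = fraction of samples falling in A
        return sum(counts[x] for x in A) / n
    return max(F, key=q)

# Toy instance: domain {0,...,4} with a skewed distribution P.
random.seed(0)
samples = random.choices(range(5), weights=[5, 1, 1, 1, 2], k=10000)
F = [{0}, {1, 2}, {3, 4}, {0, 4}]
best = emx_finite(samples, F)  # {0, 4} has the largest mass under P
```

+With enough samples Q is close to P, so the empirical maximiser is an
+approximate maximiser of P(A), which is all EMX asks for.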
+
+In other words, the theorem states: EMX is undecidable because not all
+EMX instances are decidable, because there are some nasty ones due to
+infinities. That does not mean no EMX instance is decidable. And I think
+the ad instance is decidable. Is there a learning task that actually
+corresponds to an undecidable EMX instance? I don't know, but I will not
+believe the result of this paper is useful until I see one.
+
+h/t Reynaldo Boulogne
+*** 2018-12-11: gavin-belson
+ :PROPERTIES:
+ :CUSTOM_ID: gavin-belson
+ :END:
+
+#+begin_quote
+ I don't know about you people, but I don't want to live in a world
+ where someone else makes the world a better place better than we do.
+#+end_quote
+
+Gavin Belson, Silicon Valley S2E1.
+
+I came across this quote in
+[[https://slate.com/business/2018/12/facebook-emails-lawsuit-embarrassing-mark-zuckerberg.html][a
+Slate post about Facebook]].
+*** 2018-10-05: margins
+ :PROPERTIES:
+ :CUSTOM_ID: margins
+ :END:
+With Fermat's Library's new tool
+[[https://fermatslibrary.com/margins][margins]], you can host your own
+journal club.
+*** 2018-09-18: rnn-turing
+ :PROPERTIES:
+ :CUSTOM_ID: rnn-turing
+ :END:
+Just some non-rigorous guess / thought: feedforward networks are like
+combinatorial logic, and recurrent networks are like sequential logic
+(e.g. a data flip-flop is like the feedback connection in an RNN).
+Since NAND + combinatorial logic + sequential logic = a von Neumann
+machine, which is an approximation of the Turing machine, it is not
+surprising that RNNs (with feedforward networks) are Turing complete
+(assuming that neural networks can learn the NAND gate).
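+
+The last assumption is at least plausible: a single threshold neuron
+already computes NAND with hand-picked (rather than learned) weights.
+A minimal sketch:

```python
def nand_neuron(a, b):
    """NAND as one perceptron: weights (-2, -2), bias 3, step activation."""
    return 1 if -2 * a - 2 * b + 3 > 0 else 0

# The full truth table of the NAND gate
table = {(a, b): nand_neuron(a, b) for a in (0, 1) for b in (0, 1)}
```

+Whether gradient descent reliably finds such weights is a separate
+question, but the gate itself clearly fits in the architecture.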
+*** 2018-09-07: zitierkartell
+ :PROPERTIES:
+ :CUSTOM_ID: zitierkartell
+ :END:
+[[https://academia.stackexchange.com/questions/116489/counter-strategy-against-group-that-repeatedly-does-strategic-self-citations-and][Counter
+strategy against group that repeatedly does strategic self-citations and
+ignores other relevant research]]
+*** 2018-09-05: short-science
+ :PROPERTIES:
+ :CUSTOM_ID: short-science
+ :END:
+
+#+begin_quote
+
+ - ShortScience.org is a platform for post-publication discussion
+ aiming to improve accessibility and reproducibility of research
+ ideas.
+ - The website has over 800 summaries, mostly in machine learning,
+ written by the community and organized by paper, conference, and
+ year.
+ - Reading summaries of papers is useful to obtain the perspective and
+ insight of another reader, why they liked or disliked it, and their
+ attempt to demystify complicated sections.
+ - Also, writing summaries is a good exercise to understand the content
+ of a paper because you are forced to challenge your assumptions when
+ explaining it.
+ - Finally, you can keep up to date with the flood of research by
+ reading the latest summaries on our Twitter and Facebook pages.
+#+end_quote
+
+[[https://shortscience.org][ShortScience.org]]
+*** 2018-08-13: darknet-diaries
+ :PROPERTIES:
+ :CUSTOM_ID: darknet-diaries
+ :END:
+[[https://darknetdiaries.com][Darknet Diaries]] is a cool podcast.
+According to its about page it covers "true stories from the dark side
+of the Internet. Stories about hackers, defenders, threats, malware,
+botnets, breaches, and privacy."
+*** 2018-06-20: coursera-basic-income
+ :PROPERTIES:
+ :CUSTOM_ID: coursera-basic-income
+ :END:
+Coursera is having
+[[https://www.coursera.org/learn/exploring-basic-income-in-a-changing-economy][a
+Teach-Out on Basic Income]].
+*** 2018-06-19: pun-generator
+ :PROPERTIES:
+ :CUSTOM_ID: pun-generator
+ :END:
+[[https://en.wikipedia.org/wiki/Computational_humor#Pun_generation][Pun
+generators exist]].
+*** 2018-06-15: hackers-excerpt
+ :PROPERTIES:
+ :CUSTOM_ID: hackers-excerpt
+ :END:
+
+#+begin_quote
+ But as more nontechnical people bought computers, the things that
+ impressed hackers were not as essential. While the programs themselves
+ had to maintain a certain standard of quality, it was quite possible
+ that the most exacting standards---those applied by a hacker who
+ wanted to add one more feature, or wouldn't let go of a project until
+ it was demonstrably faster than anything else around---were probably
+ counterproductive. What seemed more important was marketing. There
+ were plenty of brilliant programs which no one knew about. Sometimes
+ hackers would write programs and put them in the public domain, give
+ them away as easily as John Harris had lent his early copy of
+ Jawbreaker to the guys at the Fresno computer store. But rarely would
+ people ask for public domain programs by name: they wanted the ones
+ they saw advertised and discussed in magazines, demonstrated in
+ computer stores. It was not so important to have amazingly clever
+ algorithms. Users would put up with more commonplace ones.
+
+ The Hacker Ethic, of course, held that every program should be as good
+ as you could make it (or better), infinitely flexible, admired for its
+ brilliance of concept and execution, and designed to extend the user's
+ powers. Selling computer programs like toothpaste was heresy. But it
+ was happening. Consider the prescription for success offered by one of
+ a panel of high-tech venture capitalists, gathered at a 1982 software
+ show: "I can summarize what it takes in three words: marketing,
+ marketing, marketing." When computers are sold like toasters, programs
+ will be sold like toothpaste. The Hacker Ethic notwithstanding.
+#+end_quote
+
+[[http://www.stevenlevy.com/index.php/books/hackers][Hackers: Heroes of
+the Computer Revolution]], by Steven Levy.
+*** 2018-06-11: catalan-overflow
+ :PROPERTIES:
+ :CUSTOM_ID: catalan-overflow
+ :END:
+To compute Catalan numbers without unnecessary overflow, use the
+recurrence formula \(C_n = {4 n - 2 \over n + 1} C_{n - 1}\).
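+
+In code, multiplying before dividing keeps every intermediate value an
+exact integer of modest size. A quick sketch:

```python
def catalan(n):
    """Return [C_0, ..., C_n] via C_k = (4k - 2) * C_{k-1} / (k + 1).

    Multiplying first makes the division exact at every step, since
    C_{k-1} * (4k - 2) is always divisible by k + 1."""
    cs = [1]
    for k in range(1, n + 1):
        cs.append(cs[-1] * (4 * k - 2) // (k + 1))
    return cs
```

+Each intermediate product is at most a small multiple of a Catalan
+number, which is what makes the recurrence overflow-friendly.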
+*** 2018-06-04: boyer-moore
+ :PROPERTIES:
+ :CUSTOM_ID: boyer-moore
+ :END:
+The
+[[https://en.wikipedia.org/wiki/Boyer–Moore_majority_vote_algorithm][Boyer-Moore
+algorithm for finding the majority of a sequence of elements]] falls in
+the category of "very clever algorithms".
+
+#+begin_example
+  int majorityElement(vector<int>& xs) {
+      int count = 0;
+      int maj = xs[0];
+      for (auto x : xs) {
+          if (x == maj) count++;        // another vote for the candidate
+          else if (count == 0) maj = x; // candidate cancelled out: replace it
+          else count--;                 // a vote against the candidate
+      }
+      return maj;  // the majority element, provided one exists
+  }
+#+end_example
+*** 2018-05-30: how-to-learn-on-your-own
+ :PROPERTIES:
+ :CUSTOM_ID: how-to-learn-on-your-own
+ :END:
+Roger Grosse's post
+[[https://metacademy.org/roadmaps/rgrosse/learn_on_your_own][How to
+learn on your own (2015)]] is an excellent modern guide on how to learn
+and research technical stuff (especially machine learning and maths) on
+one's own.
+*** 2018-05-25: 2048-mdp
+ :PROPERTIES:
+ :CUSTOM_ID: 2048-mdp
+ :END:
+[[http://jdlm.info/articles/2018/03/18/markov-decision-process-2048.html][This
+post]] models 2048 as an MDP and solves it using policy iteration and
+backward induction.
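+
+The post's model of 2048 is much more elaborate; as a reminder of what
+policy iteration itself does, here is a generic sketch on a made-up
+two-state MDP (nothing below is taken from the post):

```python
import numpy as np

def policy_iteration(P, R, gamma=0.9):
    """Generic policy iteration: P[a, s, s'] transition probabilities,
    R[a, s] expected rewards."""
    n_actions, n_states = R.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = np.array([P[policy[s], s] for s in range(n_states)])
        r_pi = np.array([R[policy[s], s] for s in range(n_states)])
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: greedy one-step lookahead.
        q = R + gamma * (P @ v)   # q[a, s]
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            return policy, v
        policy = new_policy

# Toy MDP: action 1 has higher reward and drifts toward the better state 1.
P = np.array([[[0.9, 0.1], [0.1, 0.9]],   # action 0: tend to stay put
              [[0.1, 0.9], [0.1, 0.9]]])  # action 1: drift to state 1
R = np.array([[0.0, 1.0],                 # action 0 rewards per state
              [0.5, 1.5]])                # action 1 rewards per state
policy, v = policy_iteration(P, R)
```

+Here action 1 dominates in both states, so the iteration settles on it
+immediately; the 2048 version replaces this toy table with the game's
+transition dynamics.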
+*** 2018-05-22: ats
+ :PROPERTIES:
+ :CUSTOM_ID: ats
+ :END:
+
+#+begin_quote
+ ATS (Applied Type System) is a programming language designed to unify
+ programming with formal specification. ATS has support for combining
+ theorem proving with practical programming through the use of advanced
+ type systems. A past version of The Computer Language Benchmarks Game
+ has demonstrated that the performance of ATS is comparable to that of
+ the C and C++ programming languages. By using theorem proving and
+ strict type checking, the compiler can detect and prove that its
+ implemented functions are not susceptible to bugs such as division by
+ zero, memory leaks, buffer overflow, and other forms of memory
+ corruption by verifying pointer arithmetic and reference counting
+ before the program compiles. Additionally, by using the integrated
+ theorem-proving system of ATS (ATS/LF), the programmer may make use of
+ static constructs that are intertwined with the operative code to
+ prove that a function attains its specification.
+#+end_quote
+
+[[https://en.wikipedia.org/wiki/ATS_(programming_language)][Wikipedia
+entry on ATS]]
+*** 2018-05-20: bostoncalling
+ :PROPERTIES:
+ :CUSTOM_ID: bostoncalling
+ :END:
+(5-second fame) I sent a picture of my kitchen sink to BBC and got
+mentioned in the [[https://www.bbc.co.uk/programmes/w3cswg8c][latest
+Boston Calling episode]] (listen at 25:54).
+*** 2018-05-18: colah-blog
+ :PROPERTIES:
+ :CUSTOM_ID: colah-blog
+ :END:
+[[https://colah.github.io/][colah's blog]] has a cool feature that
+allows you to comment on any paragraph of a blog post. Here's an
+[[https://colah.github.io/posts/2015-08-Understanding-LSTMs/][example]].
+If it is doable on a static site hosted on Github pages, I suppose it
+shouldn't be too hard to implement. This also seems to work more
+seamlessly than [[https://fermatslibrary.com/][Fermat's Library]],
+because the latter has to embed pdfs in webpages. Now fantasy time:
+imagine that one day arXiv shows html versions of papers (through author
+uploading or conversion from TeX) with this feature.
+*** 2018-05-15: random-forests
+ :PROPERTIES:
+ :CUSTOM_ID: random-forests
+ :END:
+[[https://lagunita.stanford.edu/courses/HumanitiesSciences/StatLearning/Winter2016/info][Stanford
+Lagunita's statistical learning course]] has some excellent lectures on
+random forests. It starts with explanations of decision trees, followed
+by bagged trees and random forests, and ends with boosting. From these
+lectures it seems that:
+
+1. The term "predictors" in statistical learning = "features" in machine
+   learning.
+2. The main idea of random forests, dropping predictors for individual
+   trees and aggregating by majority vote or averaging, is the same as
+   the idea of dropout in neural networks, where a proportion of neurons
+   in the hidden layers are dropped temporarily during different
+   minibatches of training, effectively averaging over an ensemble of
+   subnetworks. Both tricks are used as regularisations, i.e. to reduce
+   the variance. The only difference is: in random forests, all but
+   roughly the square root of the total number of features are dropped,
+   whereas the dropout ratio in neural networks is usually a half.
+
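+The predictor-dropping idea can be illustrated with a toy ensemble of
+one-feature threshold stumps, each fitted on a random sqrt-sized subset
+of the predictors and aggregated by majority vote (only a caricature of
+a random forest: no bootstrap, no real trees):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points, 16 predictors, label determined by predictor 3.
X = rng.normal(size=(200, 16))
y = (X[:, 3] > 0).astype(int)

def fit_stump(X, y, feats):
    """Best single-predictor threshold-at-zero classifier among `feats`."""
    best = None
    for j in feats:
        for sign in (1, -1):
            acc = ((sign * X[:, j] > 0).astype(int) == y).mean()
            if best is None or acc > best[0]:
                best = (acc, j, sign)
    return best[1], best[2]

# Each "tree" sees only sqrt(16) = 4 randomly chosen predictors.
stumps = [fit_stump(X, y, rng.choice(16, size=4, replace=False))
          for _ in range(50)]

# Aggregate by majority vote, as in a random forest.
votes = np.mean([(s * X[:, j] > 0).astype(int) for j, s in stumps], axis=0)
acc = ((votes > 0.5).astype(int) == y).mean()
```

+Most stumps never see the informative predictor, yet the vote recovers
+it, which is the decorrelation effect both tricks rely on.
+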
+By the way, the slides of the Statistical Learning course also include
+a comparison between statistical learning and machine learning.
+*** 2018-05-14: open-review-net
+ :PROPERTIES:
+ :CUSTOM_ID: open-review-net
+ :END:
+Open peer review means a peer review process in which communications,
+e.g. comments and responses, are public.
+
+Like [[https://scipost.org/][SciPost]] mentioned in
+[[/posts/2018-04-10-update-open-research.html][my post]],
+[[https://openreview.net][OpenReview.net]] is an example of open peer
+review in research. It looks like their focus is machine learning. Their
+[[https://openreview.net/about][about page]] states their mission, and
+here's [[https://openreview.net/group?id=ICLR.cc/2018/Conference][an
+example]] where you can click on each entry to see what it is like. We
+definitely need this in the maths research community.
+*** 2018-05-11: rnn-fsm
+ :PROPERTIES:
+ :CUSTOM_ID: rnn-fsm
+ :END:
+Related to [[#neural-turing-machine][a previous micropost]].
+
+[[http://www.cs.toronto.edu/~rgrosse/csc321/lec9.pdf][These slides from
+Toronto]] are a nice introduction to RNNs (recurrent neural networks)
+from a computational point of view. They state that an RNN can simulate
+any FSM (finite state machine, a.k.a. finite automaton) and illustrate
+this with a toy example computing the parity of a binary string.
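+
+The parity example can be written down concretely: a recurrent cell of
+two threshold units whose hidden state is the running parity. The
+weights below are hand-set rather than learned, and the sketch is only
+meant to show that the FSM fits in the architecture:

```python
def parity_rnn(bits):
    """Threshold-unit 'RNN' tracking the parity of the input bits.

    The hidden state update is h' = XOR(h, x), built from two step
    units: AND = step(h + x - 1.5), XOR = step(h + x - 2*AND - 0.5)."""
    step = lambda z: 1 if z > 0 else 0
    h = 0  # hidden state: parity so far
    for x in bits:
        a = step(h + x - 1.5)          # unit 1: AND(h, x)
        h = step(h + x - 2 * a - 0.5)  # unit 2: XOR(h, x)
    return h

results = [parity_rnn(b) for b in ([0, 1, 1], [1, 0, 1, 1], [0, 0], [1])]
```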
+
+[[http://www.deeplearningbook.org/contents/rnn.html][Goodfellow et
+al.'s book]] (see pages 372 and 374) goes one step further, stating that
+an RNN with a hidden-to-hidden layer can simulate Turing machines, and
+not only that, but also the /universal/ Turing machine (UTM) (the book
+referenced
+[[https://www.sciencedirect.com/science/article/pii/S0022000085710136][Siegelmann-Sontag]]),
+a property not shared by the weaker network where the hidden-to-hidden
+layer is replaced by an output-to-hidden layer (page 376).
+
+By the way, the RNN with a hidden-to-hidden layer has the same
+architecture as the so-called linear dynamical system mentioned in
+[[https://www.coursera.org/learn/neural-networks/lecture/Fpa7y/modeling-sequences-a-brief-overview][Hinton's
+video]].
+
+From what I have learned, the universality of RNNs and that of
+feedforward networks are therefore due to different arguments, the
+former coming from Turing machines and the latter from an analytical
+view of approximation by step functions.
+*** 2018-05-10: math-writing-decoupling
+ :PROPERTIES:
+ :CUSTOM_ID: math-writing-decoupling
+ :END:
+One way to write readable mathematics is to decouple concepts. One idea
+is the following template. First write a toy example with all the
+important components present in this example, then analyse each
+component individually and elaborate how (perhaps more complex)
+variations of the component can extend the toy example and induce more
+complex or powerful versions of the toy example. Through such
+incremental development, one should be able to arrive at any result in
+cutting edge research after a pleasant journey.
+
+It's a bit like the UNIX philosophy, where you have a basic system of
+modules like IO, memory management, graphics, etc., and modify / improve
+each module individually (h/t [[http://nand2tetris.org/][NAND2Tetris]]).
+
+The book [[http://neuralnetworksanddeeplearning.com/][Neural networks
+and deep learning]] by Michael Nielsen is an example of such an
+approach. It begins the journey with a very simple neural net with one
+hidden layer, no regularisation, and sigmoid activations. It then
+analyses each component, including cost functions, the backpropagation
+algorithm, the activation functions, regularisation and the overall
+architecture (from fully connected to CNN), individually, and improves
+the toy example incrementally. Over the course of the book, the accuracy
+on MNIST grows from 95.42% to 99.67%.
+*** 2018-05-09: neural-turing-machine
+ :PROPERTIES:
+ :CUSTOM_ID: neural-turing-machine
+ :END:
+
+#+begin_quote
+ One way RNNs are currently being used is to connect neural networks
+ more closely to traditional ways of thinking about algorithms, ways of
+ thinking based on concepts such as Turing machines and (conventional)
+ programming languages. [[https://arxiv.org/abs/1410.4615][A 2014
+ paper]] developed an RNN which could take as input a
+ character-by-character description of a (very, very simple!) Python
+ program, and use that description to predict the output. Informally,
+ the network is learning to "understand" certain Python programs.
+ [[https://arxiv.org/abs/1410.5401][A second paper, also from 2014]],
+ used RNNs as a starting point to develop what they called a neural
+ Turing machine (NTM). This is a universal computer whose entire
+ structure can be trained using gradient descent. They trained their
+ NTM to infer algorithms for several simple problems, such as sorting
+ and copying.
+
+ As it stands, these are extremely simple toy models. Learning to
+ execute the Python program =print(398345+42598)= doesn't make a
+ network into a full-fledged Python interpreter! It's not clear how
+ much further it will be possible to push the ideas. Still, the results
+ are intriguing. Historically, neural networks have done well at
+ pattern recognition problems where conventional algorithmic approaches
+ have trouble. Vice versa, conventional algorithmic approaches are good
+ at solving problems that neural nets aren't so good at. No-one today
+ implements a web server or a database program using a neural network!
+ It'd be great to develop unified models that integrate the strengths
+ of both neural networks and more traditional approaches to algorithms.
+ RNNs and ideas inspired by RNNs may help us do that.
+#+end_quote
+
+Michael Nielsen,
+[[http://neuralnetworksanddeeplearning.com/chap6.html#other_approaches_to_deep_neural_nets][Neural
+networks and deep learning]]
+*** 2018-05-09: neural-nets-activation
+ :PROPERTIES:
+ :CUSTOM_ID: neural-nets-activation
+ :END:
+
+#+begin_quote
+ What makes the rectified linear activation function better than the
+ sigmoid or tanh functions? At present, we have a poor understanding of
+ the answer to this question. Indeed, rectified linear units have only
+ begun to be widely used in the past few years. The reason for that
+ recent adoption is empirical: a few people tried rectified linear
+ units, often on the basis of hunches or heuristic arguments. They got
+ good results classifying benchmark data sets, and the practice has
+ spread. In an ideal world we'd have a theory telling us which
+ activation function to pick for which application. But at present
+ we're a long way from such a world. I should not be at all surprised
+ if further major improvements can be obtained by an even better choice
+ of activation function. And I also expect that in coming decades a
+ powerful theory of activation functions will be developed. Today, we
+ still have to rely on poorly understood rules of thumb and experience.
+#+end_quote
+
+Michael Nielsen,
+[[http://neuralnetworksanddeeplearning.com/chap6.html#convolutional_neural_networks_in_practice][Neural
+networks and deep learning]]
+*** 2018-05-08: sql-injection-video
+ :PROPERTIES:
+ :CUSTOM_ID: sql-injection-video
+ :END:
+Computerphile has some brilliant educational videos on computer science,
+like [[https://www.youtube.com/watch?v=ciNHn38EyRc][a demo of SQL
+injection]], [[https://www.youtube.com/watch?v=eis11j_iGMs][a toy
+example of the lambda calculus]], and
+[[https://www.youtube.com/watch?v=9T8A89jgeTI][explaining the Y
+combinator]].
+*** 2018-05-08: nlp-arxiv
+ :PROPERTIES:
+ :CUSTOM_ID: nlp-arxiv
+ :END:
+Primer Science is a tool by a startup called Primer that uses NLP to
+summarize content (but not single papers, yet) on arXiv. A developer of
+this tool predicts in
+[[https://twimlai.com/twiml-talk-136-taming-arxiv-w-natural-language-processing-with-john-bohannon/#][an
+interview]] that progress in AI's ability to extract meaning from AI
+research papers will be the biggest accelerant of AI research.
+*** 2018-05-08: neural-nets-regularization
+ :PROPERTIES:
+ :CUSTOM_ID: neural-nets-regularization
+ :END:
+
+#+begin_quote
+ no-one has yet developed an entirely convincing theoretical
+ explanation for why regularization helps networks generalize. Indeed,
+ researchers continue to write papers where they try different
+ approaches to regularization, compare them to see which works better,
+ and attempt to understand why different approaches work better or
+ worse. And so you can view regularization as something of a kludge.
+ While it often helps, we don't have an entirely satisfactory
+ systematic understanding of what's going on, merely incomplete
+ heuristics and rules of thumb.
+
+ There's a deeper set of issues here, issues which go to the heart of
+ science. It's the question of how we generalize. Regularization may
+ give us a computational magic wand that helps our networks generalize
+ better, but it doesn't give us a principled understanding of how
+ generalization works, nor of what the best approach is.
+#+end_quote
+
+Michael Nielsen,
+[[http://neuralnetworksanddeeplearning.com/chap3.html#why_does_regularization_help_reduce_overfitting][Neural
+networks and deep learning]]
+*** 2018-05-07: learning-knowledge-graph-reddit-journal-club
+ :PROPERTIES:
+ :CUSTOM_ID: learning-knowledge-graph-reddit-journal-club
+ :END:
+It is a natural idea to look for ways to learn things like going through
+a skill tree in a computer RPG.
+
+For example I made a
+[[https://ypei.me/posts/2015-04-02-juggling-skill-tree.html][DAG for
+juggling]].
+
+Websites like [[https://knowen.org][Knowen]] and
+[[https://metacademy.org][Metacademy]] explore this idea with added
+flavour of open collaboration.
+
+The design of Metacademy looks quite promising. It also has a nice
+tagline: "your package manager for knowledge".
+
+There are so so many tools to assist learning / research / knowledge
+sharing today, and we should keep experimenting, in the hope that
+eventually one of them will scale.
+
+On another note, I often complain about the lack of a place to discuss
+math research online, but today I found on Reddit some journal clubs on
+machine learning:
+[[https://www.reddit.com/r/MachineLearning/comments/8aluhs/d_machine_learning_wayr_what_are_you_reading_week/][1]],
+[[https://www.reddit.com/r/MachineLearning/comments/8elmd8/d_anyone_having_trouble_reading_a_particular/][2]].
+If only we had this for maths. On the other hand r/math does have some
+interesting recurring threads as well:
+[[https://www.reddit.com/r/math/wiki/everythingaboutx][Everything about
+X]] and
+[[https://www.reddit.com/r/math/search?q=what+are+you+working+on?+author:automoderator+&sort=new&restrict_sr=on&t=all][What
+Are You Working On?]]. Hopefully these threads can last for years to
+come.
+*** 2018-05-02: simple-solution-lack-of-math-rendering
+ :PROPERTIES:
+ :CUSTOM_ID: simple-solution-lack-of-math-rendering
+ :END:
+The lack of maths rendering in major online communication platforms like
+instant messaging, email or Github has been a minor obsession of mine
+for quite a while, as I saw it as a big factor preventing people from
+talking more maths online. But today I realised this is totally a
+non-issue. Just do what people on IRC have been doing since the
+inception of the universe: use a (latex) pastebin.
+*** 2018-05-01: neural-networks-programming-paradigm
+ :PROPERTIES:
+ :CUSTOM_ID: neural-networks-programming-paradigm
+ :END:
+
+#+begin_quote
+ Neural networks are one of the most beautiful programming paradigms
+ ever invented. In the conventional approach to programming, we tell
+ the computer what to do, breaking big problems up into many small,
+ precisely defined tasks that the computer can easily perform. By
+ contrast, in a neural network we don't tell the computer how to solve
+ our problem. Instead, it learns from observational data, figuring out
+ its own solution to the problem at hand.
+#+end_quote
+
+Michael Nielsen -
+[[http://neuralnetworksanddeeplearning.com/about.html][What this book
+(Neural Networks and Deep Learning) is about]]
+
+Unrelated to the quote, note that Nielsen's book is licensed under
+[[https://creativecommons.org/licenses/by-nc/3.0/deed.en_GB][CC BY-NC]],
+so one can build on it and redistribute non-commercially.
+*** 2018-04-30: google-search-not-ai
+ :PROPERTIES:
+ :CUSTOM_ID: google-search-not-ai
+ :END:
+
+#+begin_quote
+ But, users have learned to accommodate to Google not the other way
+ around. We know what kinds of things we can type into Google and what
+ we can't and we keep our searches to things that Google is likely to
+ help with. We know we are looking for texts and not answers to start a
+ conversation with an entity that knows what we really need to talk
+ about. People learn from conversation and Google can't have one. It
+ can pretend to have one using Siri but really those conversations tend
+ to get tiresome when you are past asking about where to eat.
+#+end_quote
+
+Roger Schank -
+[[http://www.rogerschank.com/fraudulent-claims-made-by-IBM-about-Watson-and-AI][Fraudulent
+claims made by IBM about Watson and AI]]
+*** 2018-04-06: hacker-ethics
+ :PROPERTIES:
+ :CUSTOM_ID: hacker-ethics
+ :END:
+
+#+begin_quote
+
+ - Access to computers---and anything that might teach you something
+ about the way the world works---should be unlimited and total.
+ Always yield to the Hands-On Imperative!
+ - All information should be free.
+ - Mistrust Authority---Promote Decentralization.
+ - Hackers should be judged by their hacking, not bogus criteria such
+ as degrees, age, race, or position.
+ - You can create art and beauty on a computer.
+ - Computers can change your life for the better.
+#+end_quote
+
+[[https://en.wikipedia.org/wiki/Hacker_ethic][The Hacker Ethic]],
+[[https://en.wikipedia.org/wiki/Hackers:_Heroes_of_the_Computer_Revolution][Hackers:
+Heroes of the Computer Revolution]], by Steven Levy
+*** 2018-03-23: static-site-generator
+ :PROPERTIES:
+ :CUSTOM_ID: static-site-generator
+ :END:
+
+#+begin_quote
+ "Static site generators seem like music databases, in that everyone
+ eventually writes their own crappy one that just barely scratches the
+ itch they had (and I'm no exception)."
+#+end_quote
+
+__david__@hackernews
+
+So did I.
diff --git a/pages/blog.org b/pages/blog.org
new file mode 100644
index 0000000..d8928f5
--- /dev/null
+++ b/pages/blog.org
@@ -0,0 +1,20 @@
+#+TITLE: All posts
+
+- *[[file:posts/2019-03-14-great-but-manageable-expectations.org][Great but Manageable Expectations]]* - 2019-03-14
+- *[[file:posts/2019-03-13-a-tail-of-two-densities.org][A Tail of Two Densities]]* - 2019-03-13
+- *[[file:posts/2019-02-14-raise-your-elbo.org][Raise your ELBO]]* - 2019-02-14
+- *[[file:posts/2019-01-03-discriminant-analysis.org][Discriminant analysis]]* - 2019-01-03
+- *[[file:posts/2018-12-02-lime-shapley.org][Shapley, LIME and SHAP]]* - 2018-12-02
+- *[[file:posts/2018-06-03-automatic_differentiation.org][Automatic differentiation]]* - 2018-06-03
+- *[[file:posts/2018-04-10-update-open-research.org][Updates on open research]]* - 2018-04-29
+- *[[file:posts/2017-08-07-mathematical_bazaar.org][The Mathematical Bazaar]]* - 2017-08-07
+- *[[file:posts/2017-04-25-open_research_toywiki.org][Open mathematical research and launching toywiki]]* - 2017-04-25
+- *[[file:posts/2016-10-13-q-robinson-schensted-knuth-polymer.org][A \(q\)-Robinson-Schensted-Knuth algorithm and a \(q\)-polymer]]* - 2016-10-13
+- *[[file:posts/2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.org][AMS review of 'Double Macdonald polynomials as the stable limit of Macdonald superpolynomials' by Blondeau-Fournier, Lapointe and Mathieu]]* - 2015-07-15
+- *[[file:posts/2015-07-01-causal-quantum-product-levy-area.org][On a causal quantum double product integral related to Lévy stochastic area.]]* - 2015-07-01
+- *[[file:posts/2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.org][AMS review of 'Infinite binary words containing repetitions of odd period' by Badkobeh and Crochemore]]* - 2015-05-30
+- *[[file:posts/2015-04-02-juggling-skill-tree.org][jst]]* - 2015-04-02
+- *[[file:posts/2015-04-01-unitary-double-products.org][Unitary causal quantum stochastic double products as universal]]* - 2015-04-01
+- *[[file:posts/2015-01-20-weighted-interpretation-super-catalan-numbers.org][AMS review of 'A weighted interpretation for the super Catalan]]* - 2015-01-20
+- *[[file:posts/2014-04-01-q-robinson-schensted-symmetry-paper.org][Symmetry property of \(q\)-weighted Robinson-Schensted algorithms and branching algorithms]]* - 2014-04-01
+- *[[file:posts/2013-06-01-q-robinson-schensted-paper.org][A \(q\)-weighted Robinson-Schensted algorithm]]* - 2013-06-01 \ No newline at end of file
diff --git a/pages/microblog.org b/pages/microblog.org
new file mode 100644
index 0000000..fb39a67
--- /dev/null
+++ b/pages/microblog.org
@@ -0,0 +1,683 @@
+#+TITLE: Microblog
+
+- 2020-08-02 - *[[file:microposts/ia-lawsuit.org][ia-lawsuit]]*
+
+ The four big publishers Hachette, HarperCollins, Wiley, and Penguin
+ Random House are still pursuing Internet Archive.
+
+ #+begin_quote
+ [Their] lawsuit does not stop at seeking to end the practice of
+ Controlled Digital Lending. These publishers call for the destruction
+ of the 1.5 million digital books that Internet Archive makes available
+ to our patrons. This form of digital book burning is unprecedented and
+ unfairly disadvantages people with print disabilities. For the blind,
+ ebooks are a lifeline, yet less than one in ten exists in accessible
+ formats. Since 2010, Internet Archive has made our lending library
+ available to the blind and print disabled community, in addition to
+ sighted users. If the publishers are successful with their lawsuit,
+ more than a million of those books would be deleted from the
+ Internet's digital shelves forever.
+ #+end_quote
+
+ [[https://blog.archive.org/2020/07/29/internet-archive-responds-to-publishers-lawsuit/][Libraries
+ lend books, and must continue to lend books: Internet Archive responds
+ to publishers' lawsuit]]
+- 2020-08-02 - *[[file:microposts/fsf-membership.org][fsf-membership]]*
+
+  I am a proud associate member of the Free Software Foundation. For me
+  the philosophy of Free Software is about ensuring the enrichment of a
+  digital commons, so that knowledge and information are not
+  concentrated in the hands of a privileged few and locked up as
+  "intellectual property". The genius of copyleft licenses like the GNU
+  (A)GPL is that they ensure software released to the public remains
+  public. Open source does not care about that.
+
+ If you also care about the public good, the hacker ethics, or the spirit
+ of the web, please take a moment to consider joining FSF as an associate
+ member. It comes with [[https://www.fsf.org/associate/benefits][numerous
+ perks and benefits]].
+- 2020-06-21 - *[[file:microposts/how-can-you-help-ia.org][how-can-you-help-ia]]*
+
+ [[https://blog.archive.org/2020/06/14/how-can-you-help-the-internet-archive/][How
+ can you help the Internet Archive?]] Use it. It's more than the Wayback
+ Machine. And get involved.
+- 2020-06-12 - *[[file:microposts/open-library.org][open-library]]*
+
+ Open Library was cofounded by Aaron Swartz. As part of the Internet
+ Archive, it has done good work to spread knowledge. However it is
+ currently
+ [[https://arstechnica.com/tech-policy/2020/06/internet-archive-ends-emergency-library-early-to-appease-publishers/][being
+ sued by four major publishers]] for the
+ [[https://archive.org/details/nationalemergencylibrary][National
+ Emergency Library]]. IA decided to
+ [[https://blog.archive.org/2020/06/10/temporary-national-emergency-library-to-close-2-weeks-early-returning-to-traditional-controlled-digital-lending/][close
+ the NEL two weeks earlier than planned]], but the lawsuit is not over,
+  which in the worst-case scenario could result in Controlled Digital
+  Lending being ruled illegal and (less likely) the bankruptcy of the
+  Internet Archive. If this happens it will be a big setback for the
+  free-culture movement.
+- 2020-04-15 - *[[file:microposts/sanders-suspend-campaign.org][sanders-suspend-campaign]]*
+
+ Suspending the campaign is different from dropping out of the race.
+ Bernie Sanders remains on the ballot, and indeed in his campaign
+ suspension speech he encouraged people to continue voting for him in the
+ democratic primaries to push for changes in the convention.
+- 2019-09-30 - *[[file:microposts/defense-stallman.org][defense-stallman]]*
+
+ Someone wrote a bold article titled
+ [[https://geoff.greer.fm/2019/09/30/in-defense-of-richard-stallman/]["In
+ Defense of Richard Stallman"]]. Kudos to him.
+
+ Also, an interesting read:
+ [[https://cfenollosa.com/blog/famous-computer-public-figure-suffers-the-consequences-for-asshole-ish-behavior.html][Famous
+ public figure in tech suffers the consequences for asshole-ish
+ behavior]].
+- 2019-09-29 - *[[file:microposts/stallman-resign.org][stallman-resign]]*
+
+ Last week Richard Stallman resigned from FSF. It is a great loss for the
+ free software movement.
+
+ The apparent cause of his resignation and the events that triggered it
+ reflect some alarming trends of the zeitgeist. Here is a detailed review
+ of what happened: [[https://sterling-archermedes.github.io/][Low grade
+ "journalists" and internet mob attack RMS with lies. In-depth review.]].
+ Some interesting articles on this are:
+ [[https://jackbaruth.com/?p=16779][Weekly Roundup: The Passion Of Saint
+ iGNUcius Edition]],
+ [[http://techrights.org/2019/09/17/rms-witch-hunt/][Why I Once Called
+ for Richard Stallman to Step Down]].
+
+ Dishonest and misleading media pieces involved in this incident include
+ [[https://www.thedailybeast.com/famed-mit-computer-scientist-richard-stallman-defends-epstein-victims-were-entirely-willing][The
+ Daily Beast]],
+ [[https://www.vice.com/en_us/article/9ke3ke/famed-computer-scientist-richard-stallman-described-epstein-victims-as-entirely-willing][Vice]],
+ [[https://techcrunch.com/2019/09/16/computer-scientist-richard-stallman-who-defended-jeffrey-epstein-resigns-from-mit-csail-and-the-free-software-foundation/][Tech
+ Crunch]],
+ [[https://www.wired.com/story/richard-stallmans-exit-heralds-a-new-era-in-tech/][Wired]].
+- 2019-03-16 - *[[file:microposts/decss-haiku.org][decss-haiku]]*
+
+ #+begin_quote
+ #+begin_example
+ Muse! When we learned to
+ count, little did we know all
+ the things we could do
+
+ some day by shuffling
+ those numbers: Pythagoras
+ said "All is number"
+
+ long before he saw
+ computers and their effects,
+ or what they could do
+
+ by computation,
+ naive and mechanical
+ fast arithmetic.
+
+ It changed the world, it
+ changed our consciousness and lives
+ to have such fast math
+
+ available to
+ us and anyone who cared
+ to learn programming.
+
+ Now help me, Muse, for
+ I wish to tell a piece of
+ controversial math,
+
+ for which the lawyers
+ of DVD CCA
+ don't forbear to sue:
+
+ that they alone should
+ know or have the right to teach
+ these skills and these rules.
+
+ (Do they understand
+ the content, or is it just
+ the effects they see?)
+
+ And all mathematics
+ is full of stories (just read
+ Eric Temple Bell);
+
+ and CSS is
+ no exception to this rule.
+ Sing, Muse, decryption
+
+ once secret, as all
+ knowledge, once unknown: how to
+ decrypt DVDs.
+ #+end_example
+ #+end_quote
+
+ Seth Schoen, [[https://en.wikipedia.org/wiki/DeCSS_haiku][DeCSS haiku]]
+- 2019-01-27 - *[[file:microposts/learning-undecidable.org][learning-undecidable]]*
+
+ My take on the
+ [[https://www.nature.com/articles/s42256-018-0002-3][Nature paper
+ /Learning can be undecidable/]]:
+
+ Fantastic article, very clearly written.
+
+ So it reduces a kind of learnability called estimating the maximum
+ (EMX) to a question about the cardinality of subsets of the real
+ numbers (the continuum hypothesis), which is undecidable.
+
+ When it comes to the relation between EMX and the rest of machine
+ learning framework, the article mentions that EMX belongs to "extensions
+ of PAC learnability include Vapnik's statistical learning setting and
+ the equivalent general learning setting by Shalev-Shwartz and
+ colleagues" (I have no idea what these two things are), but it does not
+ say whether EMX is representative of or reduces to common learning
+ tasks. So it is not clear whether its undecidability applies to ML at
+ large.
+
+ Another condition of the main theorem is the union-bounded closure
+ assumption. It seems a reasonable property of a family of sets, but
+ then again I wonder how that translates to learning.
+
+ The article says "By now, we know of quite a few independence [from
+ mathematical axioms] results, mostly for set theoretic questions like
+ the continuum hypothesis, but also for results in algebra, analysis,
+ infinite combinatorics and more. Machine learning, so far, has escaped
+ this fate." But the description of EMX learnability makes it read
+ more like a classical mathematics / theoretical computer science
+ problem than like machine learning.
+
+ An insightful conclusion: "How come learnability can neither be proved
+ nor refuted? A closer look reveals that the source of the problem is in
+ defining learnability as the existence of a learning function rather
+ than the existence of a learning algorithm. In contrast with the
+ existence of algorithms, the existence of functions over infinite
+ domains is a (logically) subtle issue."
+
+ In relation to practical problems, it uses an example of ad targeting.
+ However, a lot is lost in translation from the main theorem to this
+ ad example.
+
+ The EMX problem states: given a domain X, a distribution P over X which
+ is unknown, some samples from P, and a family of subsets of X called F,
+ find A in F that approximately maximises P(A).
+
+ The undecidability rests on X being the continuous [0, 1] interval, and
+ from the insight, we know the problem comes from the cardinality of
+ subsets of the [0, 1] interval, which is "logically subtle".
+
+ In the ad problem, the domain X is all potential visitors, which is
+ finite because there is a finite number of people in the world. In
+ this case P is a categorical distribution over 1..n, where n is the
+ population of the world. One can get a good estimate of the
+ parameters of a categorical distribution by drawing a sufficiently
+ large number of samples and computing the empirical distribution.
+ Let's call the estimated distribution Q. One can then choose from F
+ (also finite) the set A that maximises Q(A), which will be a solution
+ to EMX.
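To make the finite-domain argument concrete, here is a minimal sketch of solving such an EMX instance via the empirical distribution. The domain, the family F and the distribution below are all made up for illustration:

```python
import random
from collections import Counter

def emx_finite(sample, family):
    """Approximately maximise P(A) over A in family, using the
    empirical distribution Q of the sample (finite-domain case)."""
    n = len(sample)
    counts = Counter(sample)  # empirical counts of each element
    # Q(A) = (1/n) * sum of counts of elements in A
    return max(family, key=lambda a: sum(counts[x] for x in a) / n)

# Toy instance: domain {0..4}, true P puts most mass on small elements.
random.seed(0)
population = [0] * 50 + [1] * 30 + [2] * 10 + [3] * 5 + [4] * 5
sample = [random.choice(population) for _ in range(10000)]
family = [{0, 1}, {2, 3, 4}, {1, 4}]
best = emx_finite(sample, family)
print(best)  # {0, 1}: empirical mass ~0.8, the largest in the family
```

With 10000 samples the empirical masses are close to the true ones (0.8, 0.2, 0.35), so the maximiser is found reliably; none of the set-theoretic subtlety of the continuous case arises.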
+
+ In other words, the theorem states that EMX is undecidable because
+ not all EMX instances are decidable: there are some nasty ones due to
+ infinities. That does not mean no EMX instance is decidable, and I
+ think the ad instance is. Is there a learning task that actually
+ corresponds to an undecidable EMX instance? I don't know, but I will
+ not believe the result of this paper is useful until I see one.
+
+ h/t Reynaldo Boulogne
+- 2018-12-11 - *[[file:microposts/gavin-belson.org][gavin-belson]]*
+
+ #+begin_quote
+ I don't know about you people, but I don't want to live in a world
+ where someone else makes the world a better place better than we do.
+ #+end_quote
+
+ Gavin Belson, Silicon Valley S2E1.
+
+ I came across this quote in
+ [[https://slate.com/business/2018/12/facebook-emails-lawsuit-embarrassing-mark-zuckerberg.html][a
+ Slate post about Facebook]].
+- 2018-10-05 - *[[file:microposts/margins.org][margins]]*
+
+ With Fermat's Library's new tool
+ [[https://fermatslibrary.com/margins][margins]], you can host your own
+ journal club.
+- 2018-09-18 - *[[file:microposts/rnn-turing.org][rnn-turing]]*
+
+ Just some non-rigorous guess / thought: feedforward networks are like
+ combinational logic, and recurrent networks are like sequential logic
+ (e.g. the data flip-flop is like the feedback connection in an RNN).
+ Since NAND gates give combinational logic, and combinational plus
+ sequential logic give a von Neumann machine, which is an
+ approximation of the Turing machine, it is not surprising that RNNs
+ (with feedforward networks) are Turing complete (assuming that neural
+ networks can learn the NAND gate).
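The last assumption is plausible because a single threshold unit already computes NAND with hand-picked weights. A sketch (weights set by hand, not learned):

```python
def nand_unit(a, b):
    # One threshold neuron: weights -2, -2, bias 3.
    # Fires (1) unless both inputs are 1, i.e. computes NAND.
    return 1 if (-2 * a - 2 * b + 3) > 0 else 0

# Truth table over all four input pairs.
print([nand_unit(a, b) for a in (0, 1) for b in (0, 1)])  # [1, 1, 1, 0]
```

Since NAND is functionally complete, a network that can realise this unit can in principle realise any boolean circuit.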
+- 2018-09-07 - *[[file:microposts/zitierkartell.org][zitierkartell]]*
+
+ [[https://academia.stackexchange.com/questions/116489/counter-strategy-against-group-that-repeatedly-does-strategic-self-citations-and][Counter
+ strategy against group that repeatedly does strategic self-citations and
+ ignores other relevant research]]
+- 2018-09-05 - *[[file:microposts/short-science.org][short-science]]*
+
+ #+begin_quote
+
+
+ - ShortScience.org is a platform for post-publication discussion
+ aiming to improve accessibility and reproducibility of research
+ ideas.
+ - The website has over 800 summaries, mostly in machine learning,
+ written by the community and organized by paper, conference, and
+ year.
+ - Reading summaries of papers is useful to obtain the perspective and
+ insight of another reader, why they liked or disliked it, and their
+ attempt to demystify complicated sections.
+ - Also, writing summaries is a good exercise to understand the content
+ of a paper because you are forced to challenge your assumptions when
+ explaining it.
+ - Finally, you can keep up to date with the flood of research by
+ reading the latest summaries on our Twitter and Facebook pages.
+ #+end_quote
+
+ [[https://shortscience.org][ShortScience.org]]
+- 2018-08-13 - *[[file:microposts/darknet-diaries.org][darknet-diaries]]*
+
+ [[https://darknetdiaries.com][Darknet Diaries]] is a cool podcast.
+ According to its about page it covers "true stories from the dark side
+ of the Internet. Stories about hackers, defenders, threats, malware,
+ botnets, breaches, and privacy."
+- 2018-06-20 - *[[file:microposts/coursera-basic-income.org][coursera-basic-income]]*
+
+ Coursera is having
+ [[https://www.coursera.org/learn/exploring-basic-income-in-a-changing-economy][a
+ Teach-Out on Basic Income]].
+- 2018-06-19 - *[[file:microposts/pun-generator.org][pun-generator]]*
+
+ [[https://en.wikipedia.org/wiki/Computational_humor#Pun_generation][Pun
+ generators exist]].
+- 2018-06-15 - *[[file:microposts/hackers-excerpt.org][hackers-excerpt]]*
+
+ #+begin_quote
+ But as more nontechnical people bought computers, the things that
+ impressed hackers were not as essential. While the programs themselves
+ had to maintain a certain standard of quality, it was quite possible
+ that the most exacting standards---those applied by a hacker who
+ wanted to add one more feature, or wouldn't let go of a project until
+ it was demonstrably faster than anything else around---were probably
+ counterproductive. What seemed more important was marketing. There
+ were plenty of brilliant programs which no one knew about. Sometimes
+ hackers would write programs and put them in the public domain, give
+ them away as easily as John Harris had lent his early copy of
+ Jawbreaker to the guys at the Fresno computer store. But rarely would
+ people ask for public domain programs by name: they wanted the ones
+ they saw advertised and discussed in magazines, demonstrated in
+ computer stores. It was not so important to have amazingly clever
+ algorithms. Users would put up with more commonplace ones.
+
+ The Hacker Ethic, of course, held that every program should be as good
+ as you could make it (or better), infinitely flexible, admired for its
+ brilliance of concept and execution, and designed to extend the user's
+ powers. Selling computer programs like toothpaste was heresy. But it
+ was happening. Consider the prescription for success offered by one of
+ a panel of high-tech venture capitalists, gathered at a 1982 software
+ show: "I can summarize what it takes in three words: marketing,
+ marketing, marketing." When computers are sold like toasters, programs
+ will be sold like toothpaste. The Hacker Ethic notwithstanding.
+ #+end_quote
+
+ [[http://www.stevenlevy.com/index.php/books/hackers][Hackers: Heroes
+ of the Computer Revolution]], by Steven Levy.
+- 2018-06-11 - *[[file:microposts/catalan-overflow.org][catalan-overflow]]*
+
+ To compute Catalan numbers without unnecessary overflow, use the
+ recurrence formula \(C_n = {4 n - 2 \over n + 1} C_{n - 1}\).
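A sketch of the recurrence (Python integers are exact, so the point here is that multiplying before dividing keeps every intermediate value an integer no larger than the answer, which is what avoids overflow in fixed-width languages):

```python
def catalan(n):
    # C_0 = 1; C_n = (4n - 2) / (n + 1) * C_{n-1}.
    # Multiply first: (4k - 2) * C_{k-1} is always divisible by k + 1,
    # so the floor division below is exact.
    c = 1
    for k in range(1, n + 1):
        c = c * (4 * k - 2) // (k + 1)
    return c

print([catalan(n) for n in range(8)])  # [1, 1, 2, 5, 14, 42, 132, 429]
```

Computing \(C_n = {2n \choose n}/(n+1)\) via factorials instead would blow past 64-bit range long before the result itself does.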
+- 2018-06-04 - *[[file:microposts/boyer-moore.org][boyer-moore]]*
+
+ The
+ [[https://en.wikipedia.org/wiki/Boyer–Moore_majority_vote_algorithm][Boyer-Moore
+ algorithm for finding the majority of a sequence of elements]] falls in
+ the category of "very clever algorithms".
+
+ #+begin_example
+   #include <vector>
+   using std::vector;
+
+   // Boyer-Moore majority vote: one pass, O(1) extra space.
+   // Assumes xs is non-empty and a majority element exists.
+   int majorityElement(vector<int>& xs) {
+     int count = 0;
+     int maj = xs[0];
+     for (auto x : xs) {
+       if (x == maj) count++;        // vote for the current candidate
+       else if (count == 0) maj = x; // candidate used up, replace it
+       else count--;                 // a different element cancels a vote
+     }
+     return maj;
+   }
+ #+end_example
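One caveat: the one-pass vote returns the true majority only when a majority (more than n/2 occurrences) actually exists; otherwise the surviving candidate is arbitrary, so in general a second verifying pass is needed. A sketch of the same vote with that check added:

```python
def majority_element(xs):
    # First pass: Boyer-Moore vote yields a candidate.
    count, maj = 0, xs[0]
    for x in xs:
        if x == maj:
            count += 1
        elif count == 0:
            maj = x
        else:
            count -= 1
    # Second pass: confirm the candidate really has > n/2 votes.
    return maj if xs.count(maj) > len(xs) // 2 else None

print(majority_element([3, 1, 3, 3, 2]))  # 3
print(majority_element([1, 2, 3]))        # None: no majority exists
```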
+- 2018-05-30 - *[[file:microposts/how-to-learn-on-your-own.org][how-to-learn-on-your-own]]*
+
+ Roger Grosse's post
+ [[https://metacademy.org/roadmaps/rgrosse/learn_on_your_own][How to
+ learn on your own (2015)]] is an excellent modern guide on how to learn
+ and research technical stuff (especially machine learning and maths) on
+ one's own.
+- 2018-05-25 - *[[file:microposts/2048-mdp.org][2048-mdp]]*
+
+ [[http://jdlm.info/articles/2018/03/18/markov-decision-process-2048.html][This
+ post]] models 2048 as an MDP and solves it using policy iteration and
+ backward induction.
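The policy-iteration half of the approach can be sketched in isolation; below is a minimal version on a made-up two-state MDP (not the post's 2048 model), alternating iterative policy evaluation with greedy improvement:

```python
# Tiny made-up MDP: states 0, 1; actions 'a', 'b';
# P[s][act] = list of (probability, next_state, reward).
P = {
    0: {'a': [(1.0, 0, 1.0)],                  # stay in 0, small reward
        'b': [(0.5, 0, 0.0), (0.5, 1, 2.0)]},  # gamble on reaching 1
    1: {'a': [(1.0, 1, 3.0)],                  # stay in 1, big reward
        'b': [(1.0, 0, 0.0)]},
}
gamma = 0.9  # discount factor

def evaluate(policy, n_iter=500):
    """Iterative policy evaluation: repeatedly apply the Bellman
    expectation backup for the fixed policy."""
    v = {s: 0.0 for s in P}
    for _ in range(n_iter):
        v = {s: sum(p * (r + gamma * v[s2])
                    for p, s2, r in P[s][policy[s]]) for s in P}
    return v

def policy_iteration():
    policy = {s: 'a' for s in P}
    while True:
        v = evaluate(policy)
        # Greedy improvement w.r.t. the current value function.
        new = {s: max(P[s], key=lambda act: sum(
            p * (r + gamma * v[s2]) for p, s2, r in P[s][act]))
            for s in P}
        if new == policy:
            return policy, v
        policy = new

policy, v = policy_iteration()
print(policy)  # {0: 'b', 1: 'a'}: gamble to reach state 1, then stay
```

Backward induction, the other technique the post uses, would instead sweep a finite-horizon problem from the last step backwards, with no fixed-point iteration needed.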
+- 2018-05-22 - *[[file:microposts/ats.org][ats]]*
+
+ #+begin_quote
+ ATS (Applied Type System) is a programming language designed to unify
+ programming with formal specification. ATS has support for combining
+ theorem proving with practical programming through the use of advanced
+ type systems. A past version of The Computer Language Benchmarks Game
+ has demonstrated that the performance of ATS is comparable to that of
+ the C and C++ programming languages. By using theorem proving and
+ strict type checking, the compiler can detect and prove that its
+ implemented functions are not susceptible to bugs such as division by
+ zero, memory leaks, buffer overflow, and other forms of memory
+ corruption by verifying pointer arithmetic and reference counting
+ before the program compiles. Additionally, by using the integrated
+ theorem-proving system of ATS (ATS/LF), the programmer may make use of
+ static constructs that are intertwined with the operative code to
+ prove that a function attains its specification.
+ #+end_quote
+
+ [[https://en.wikipedia.org/wiki/ATS_(programming_language)][Wikipedia
+ entry on ATS]]
+- 2018-05-20 - *[[file:microposts/bostoncalling.org][bostoncalling]]*
+
+ (5-second fame) I sent a picture of my kitchen sink to BBC and got
+ mentioned in the [[https://www.bbc.co.uk/programmes/w3cswg8c][latest
+ Boston Calling episode]] (listen at 25:54).
+- 2018-05-18 - *[[file:microposts/colah-blog.org][colah-blog]]*
+
+ [[https://colah.github.io/][colah's blog]] has a cool feature that
+ allows you to comment on any paragraph of a blog post. Here's an
+ [[https://colah.github.io/posts/2015-08-Understanding-LSTMs/][example]].
+ If it is doable on a static site hosted on Github pages, I suppose it
+ shouldn't be too hard to implement. This also seems to work more
+ seamlessly than [[https://fermatslibrary.com/][Fermat's Library]],
+ because the latter has to embed pdfs in webpages. Now fantasy time:
+ imagine that one day arXiv shows html versions of papers (through author
+ uploading or conversion from TeX) with this feature.
+- 2018-05-15 - *[[file:microposts/random-forests.org][random-forests]]*
+
+ [[https://lagunita.stanford.edu/courses/HumanitiesSciences/StatLearning/Winter2016/info][Stanford
+ Lagunita's statistical learning course]] has some excellent lectures on
+ random forests. It starts with explanations of decision trees, followed
+ by bagged trees and random forests, and ends with boosting. From these
+ lectures it seems that:
+
+ 1. The term "predictors" in statistical learning = "features" in
+    machine learning.
+ 2. The main idea of random forests, dropping predictors for
+    individual trees and aggregating by majority or average, is the
+    same as the idea of dropout in neural networks, where a proportion
+    of neurons in the hidden layers are dropped temporarily during
+    different minibatches of training, effectively averaging over an
+    ensemble of subnetworks. Both tricks are used as regularisation,
+    i.e. to reduce the variance. The only difference is: in random
+    forests, all but roughly the square root of the total number of
+    features are dropped, whereas the dropout ratio in neural networks
+    is usually a half.
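The analogy shows up directly in the masks the two methods draw; a small numpy sketch (sizes made up) contrasting a per-tree feature subset of size \(\sqrt{p}\) with a per-minibatch dropout mask:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 16  # total number of features / hidden units (made-up size)

# Random forest: each tree considers a random subset of ~sqrt(p)
# features at a split.
n_keep = int(np.sqrt(p))  # 4 of 16
tree_mask = np.zeros(p, dtype=bool)
tree_mask[rng.choice(p, size=n_keep, replace=False)] = True

# Dropout: each minibatch keeps each hidden unit independently with
# probability 1/2.
dropout_mask = rng.random(p) < 0.5

print(tree_mask.sum())     # always 4
print(dropout_mask.sum())  # around 8, varies per draw
```

In both cases the final model averages over the ensemble of sub-models induced by these random masks.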
+
+ By the way, here's a comparison between statistical learning and
+ machine learning from the slides of the Statistical Learning course:
+- 2018-05-14 - *[[file:microposts/open-review-net.org][open-review-net]]*
+
+ Open peer review means a peer review process where communications,
+ e.g. comments and responses, are public.
+
+ Like [[https://scipost.org/][SciPost]] mentioned in
+ [[file:/posts/2018-04-10-update-open-research.html][my post]],
+ [[https://openreview.net][OpenReview.net]] is an example of open peer
+ review in research. It looks like their focus is machine learning. Their
+ [[https://openreview.net/about][about page]] states their mission, and
+ here's [[https://openreview.net/group?id=ICLR.cc/2018/Conference][an
+ example]] where you can click on each entry to see what it is like. We
+ definitely need this in the maths research community.
+- 2018-05-11 - *[[file:microposts/rnn-fsm.org][rnn-fsm]]*
+
+ Related to [[file:neural-turing-machine][a previous micropost]].
+
+ [[http://www.cs.toronto.edu/~rgrosse/csc321/lec9.pdf][These slides from
+ Toronto]] are a nice introduction to RNN (recurrent neural network) from
+ a computational point of view. It states that RNN can simulate any FSM
+ (finite state machine, a.k.a. finite automata abbr. FA) with a toy
+ example computing the parity of a binary string.
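The parity example can be written out explicitly with a hand-set recurrent net of threshold units (weights picked by hand, not trained; this is my own reconstruction, not the slides' exact network):

```python
def step(x):
    # Heaviside threshold activation.
    return 1 if x > 0 else 0

def parity_rnn(bits):
    # Hand-set recurrent threshold units computing s' = XOR(s, x):
    # a unit detecting "both on" inhibits an OR-like state unit.
    s = 0  # hidden state = parity so far
    for x in bits:
        both = step(s + x - 1.5)          # fires iff s = x = 1
        s = step(s + x - 2 * both - 0.5)  # fires iff exactly one fires
    return s

print(parity_rnn([1, 0, 1, 1]))  # 1 (three ones: odd parity)
print(parity_rnn([1, 1, 0]))     # 0 (two ones: even parity)
```

The hidden state plays exactly the role of the FSM's state, and the weight matrix encodes the transition function, which is the content of the RNN-simulates-FSM claim.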
+
+ [[http://www.deeplearningbook.org/contents/rnn.html][Goodfellow et
+ al.'s book]] (see pages 372 and 374) goes one step further, stating
+ that an RNN with hidden-to-hidden connections can simulate not just
+ Turing machines but the /universal/ Turing machine (UTM; the book
+ references
+ [[https://www.sciencedirect.com/science/article/pii/S0022000085710136][Siegelmann-Sontag]]),
+ a property not shared by the weaker network where the hidden-to-hidden
+ connections are replaced by output-to-hidden connections (page 376).
+
+ By the way, the RNN with a hidden-to-hidden layer has the same
+ architecture as the so-called linear dynamical system mentioned in
+ [[https://www.coursera.org/learn/neural-networks/lecture/Fpa7y/modeling-sequences-a-brief-overview][Hinton's
+ video]].
+
+ From what I have learned, the universality of RNNs and that of
+ feedforward networks are therefore due to different arguments, the
+ former coming from Turing machines and the latter from an analytical
+ view of approximation by step functions.
+- 2018-05-10 - *[[file:microposts/math-writing-decoupling.org][math-writing-decoupling]]*
+
+ One way to write readable mathematics is to decouple concepts. One idea
+ is the following template. First write a toy example with all the
+ important components present in this example, then analyse each
+ component individually and elaborate how (perhaps more complex)
+ variations of the component can extend the toy example and induce more
+ complex or powerful versions of the toy example. Through such
+ incremental development, one should be able to arrive at any result in
+ cutting edge research after a pleasant journey.
+
+ It's a bit like the UNIX philosophy, where you have a basic system of
+ modules like IO, memory management, graphics etc, and modify / improve
+ each module individually (H/t [[http://nand2tetris.org/][NAND2Tetris]]).
+
+ The book [[http://neuralnetworksanddeeplearning.com/][Neural networks
+ and deep learning]] by Michael Nielsen is an example of such an
+ approach. It begins the journey with a very simple neural net with
+ one hidden layer, no regularisation, and sigmoid activations. It then
+ analyses each component, including cost functions, the
+ backpropagation algorithm, the activation functions, regularisation
+ and the overall architecture (from fully connected to CNN),
+ individually, improving the toy example incrementally. Over the
+ course of the book the accuracy on MNIST grows from 95.42% to 99.67%.
+- 2018-05-09 - *[[file:microposts/neural-nets-activation.org][neural-nets-activation]]*
+
+ #+begin_quote
+ What makes the rectified linear activation function better than the
+ sigmoid or tanh functions? At present, we have a poor understanding of
+ the answer to this question. Indeed, rectified linear units have only
+ begun to be widely used in the past few years. The reason for that
+ recent adoption is empirical: a few people tried rectified linear
+ units, often on the basis of hunches or heuristic arguments. They got
+ good results classifying benchmark data sets, and the practice has
+ spread. In an ideal world we'd have a theory telling us which
+ activation function to pick for which application. But at present
+ we're a long way from such a world. I should not be at all surprised
+ if further major improvements can be obtained by an even better choice
+ of activation function. And I also expect that in coming decades a
+ powerful theory of activation functions will be developed. Today, we
+ still have to rely on poorly understood rules of thumb and experience.
+ #+end_quote
+
+ Michael Nielsen,
+ [[http://neuralnetworksanddeeplearning.com/chap6.html#convolutional_neural_networks_in_practice][Neural
+ networks and deep learning]]
+- 2018-05-09 - *[[file:microposts/neural-turing-machine.org][neural-turing-machine]]*
+
+ #+begin_quote
+ One way RNNs are currently being used is to connect neural networks
+ more closely to traditional ways of thinking about algorithms, ways of
+ thinking based on concepts such as Turing machines and (conventional)
+ programming languages. [[https://arxiv.org/abs/1410.4615][A 2014
+ paper]] developed an RNN which could take as input a
+ character-by-character description of a (very, very simple!) Python
+ program, and use that description to predict the output. Informally,
+ the network is learning to "understand" certain Python programs.
+ [[https://arxiv.org/abs/1410.5401][A second paper, also from 2014]],
+ used RNNs as a starting point to develop what they called a neural
+ Turing machine (NTM). This is a universal computer whose entire
+ structure can be trained using gradient descent. They trained their
+ NTM to infer algorithms for several simple problems, such as sorting
+ and copying.
+
+ As it stands, these are extremely simple toy models. Learning to
+ execute the Python program =print(398345+42598)= doesn't make a
+ network into a full-fledged Python interpreter! It's not clear how
+ much further it will be possible to push the ideas. Still, the results
+ are intriguing. Historically, neural networks have done well at
+ pattern recognition problems where conventional algorithmic approaches
+ have trouble. Vice versa, conventional algorithmic approaches are good
+ at solving problems that neural nets aren't so good at. No-one today
+ implements a web server or a database program using a neural network!
+ It'd be great to develop unified models that integrate the strengths
+ of both neural networks and more traditional approaches to algorithms.
+ RNNs and ideas inspired by RNNs may help us do that.
+ #+end_quote
+
+ Michael Nielsen,
+ [[http://neuralnetworksanddeeplearning.com/chap6.html#other_approaches_to_deep_neural_nets][Neural
+ networks and deep learning]]
+- 2018-05-08 - *[[file:microposts/nlp-arxiv.org][nlp-arxiv]]*
+
+ Primer Science is a tool by a startup called Primer that uses NLP to
+ summarise content on arXiv (but not single papers, yet). A developer
+ of this tool predicts in
+ [[https://twimlai.com/twiml-talk-136-taming-arxiv-w-natural-language-processing-with-john-bohannon/#][an
+ interview]] that progress on AI's ability to extract meaning from AI
+ research papers will be the biggest accelerant of AI research.
+- 2018-05-08 - *[[file:microposts/neural-nets-regularization.org][neural-nets-regularization]]*
+
+ #+begin_quote
+ no-one has yet developed an entirely convincing theoretical
+ explanation for why regularization helps networks generalize. Indeed,
+ researchers continue to write papers where they try different
+ approaches to regularization, compare them to see which works better,
+ and attempt to understand why different approaches work better or
+ worse. And so you can view regularization as something of a kludge.
+ While it often helps, we don't have an entirely satisfactory
+ systematic understanding of what's going on, merely incomplete
+ heuristics and rules of thumb.
+
+ There's a deeper set of issues here, issues which go to the heart of
+ science. It's the question of how we generalize. Regularization may
+ give us a computational magic wand that helps our networks generalize
+ better, but it doesn't give us a principled understanding of how
+ generalization works, nor of what the best approach is.
+ #+end_quote
+
+ Michael Nielsen,
+ [[http://neuralnetworksanddeeplearning.com/chap3.html#why_does_regularization_help_reduce_overfitting][Neural
+ networks and deep learning]]
+- 2018-05-08 - *[[file:microposts/sql-injection-video.org][sql-injection-video]]*
+
+ Computerphile has some brilliant educational videos on computer science,
+ like [[https://www.youtube.com/watch?v=ciNHn38EyRc][a demo of SQL
+ injection]], [[https://www.youtube.com/watch?v=eis11j_iGMs][a toy
+ example of the lambda calculus]], and
+ [[https://www.youtube.com/watch?v=9T8A89jgeTI][explaining the Y
+ combinator]].
+- 2018-05-07 - *[[file:microposts/learning-knowledge-graph-reddit-journal-club.org][learning-knowledge-graph-reddit-journal-club]]*
+
+ It is a natural idea to look for ways to learn things like going through
+ a skill tree in a computer RPG.
+
+ For example I made a
+ [[https://ypei.me/posts/2015-04-02-juggling-skill-tree.html][DAG for
+ juggling]].
+
+ Websites like [[https://knowen.org][Knowen]] and
+ [[https://metacademy.org][Metacademy]] explore this idea with added
+ flavour of open collaboration.
+
+ The design of Metacademy looks quite promising. It also has a nice
+ tagline: "your package manager for knowledge".
+
+ There are so so many tools to assist learning / research / knowledge
+ sharing today, and we should keep experimenting, in the hope that
+ eventually one of them will scale.
+
+ On another note, I often complain about the lack of a place to discuss
+ math research online, but today I found on Reddit some journal clubs on
+ machine learning:
+ [[https://www.reddit.com/r/MachineLearning/comments/8aluhs/d_machine_learning_wayr_what_are_you_reading_week/][1]],
+ [[https://www.reddit.com/r/MachineLearning/comments/8elmd8/d_anyone_having_trouble_reading_a_particular/][2]].
+ If only we had this for maths. On the other hand r/math does have some
+ interesting recurring threads as well:
+ [[https://www.reddit.com/r/math/wiki/everythingaboutx][Everything about
+ X]] and
+ [[https://www.reddit.com/r/math/search?q=what+are+you+working+on?+author:automoderator+&sort=new&restrict_sr=on&t=all][What
+ Are You Working On?]]. Hopefully these threads can last for years to
+ come.
+- 2018-05-02 - *[[file:microposts/simple-solution-lack-of-math-rendering.org][simple-solution-lack-of-math-rendering]]*
+
+ The lack of maths rendering in major online communication platforms like
+ instant messaging, email or Github has been a minor obsession of mine
+ for quite a while, as I saw it as a big factor preventing people from
+ talking more maths online. But today I realised this is totally a
+ non-issue. Just do what people on IRC have been doing since the
+ inception of the universe: use a (latex) pastebin.
+- 2018-05-01 - *[[file:microposts/neural-networks-programming-paradigm.org][neural-networks-programming-paradigm]]*
+
+ #+begin_quote
+ Neural networks are one of the most beautiful programming paradigms
+ ever invented. In the conventional approach to programming, we tell
+ the computer what to do, breaking big problems up into many small,
+ precisely defined tasks that the computer can easily perform. By
+ contrast, in a neural network we don't tell the computer how to solve
+ our problem. Instead, it learns from observational data, figuring out
+ its own solution to the problem at hand.
+ #+end_quote
+
+ Michael Nielsen -
+ [[http://neuralnetworksanddeeplearning.com/about.html][What this book
+ (Neural Networks and Deep Learning) is about]]
+
+ Unrelated to the quote, note that Nielsen's book is licensed under
+ [[https://creativecommons.org/licenses/by-nc/3.0/deed.en_GB][CC BY-NC]],
+ so one can build on it and redistribute non-commercially.
+- 2018-04-30 - *[[file:microposts/google-search-not-ai.org][google-search-not-ai]]*
+
+ #+begin_quote
+ But, users have learned to accommodate to Google not the other way
+ around. We know what kinds of things we can type into Google and what
+ we can't and we keep our searches to things that Google is likely to
+ help with. We know we are looking for texts and not answers to start a
+ conversation with an entity that knows what we really need to talk
+ about. People learn from conversation and Google can't have one. It
+ can pretend to have one using Siri but really those conversations tend
+ to get tiresome when you are past asking about where to eat.
+ #+end_quote
+
+ Roger Schank -
+ [[http://www.rogerschank.com/fraudulent-claims-made-by-IBM-about-Watson-and-AI][Fraudulent
+ claims made by IBM about Watson and AI]]
+- 2018-04-06 - *[[file:microposts/hacker-ethics.org][hacker-ethics]]*
+
+ #+begin_quote
+
+
+ - Access to computers---and anything that might teach you something
+ about the way the world works---should be unlimited and total.
+ Always yield to the Hands-On Imperative!
+ - All information should be free.
+ - Mistrust Authority---Promote Decentralization.
+ - Hackers should be judged by their hacking, not bogus criteria such
+ as degrees, age, race, or position.
+ - You can create art and beauty on a computer.
+ - Computers can change your life for the better.
+ #+end_quote
+
+ [[https://en.wikipedia.org/wiki/Hacker_ethic][The Hacker Ethic]],
+ [[https://en.wikipedia.org/wiki/Hackers:_Heroes_of_the_Computer_Revolution][Hackers:
+ Heroes of the Computer Revolution]], by Steven Levy
+- 2018-03-23 - *[[file:microposts/static-site-generator.org][static-site-generator]]*
+
+ #+begin_quote
+ "Static site generators seem like music databases, in that everyone
+ eventually writes their own crappy one that just barely scratches the
+ itch they had (and I'm no exception)."
+ #+end_quote
+
+ __david__@hackernews
+
+ So did I. \ No newline at end of file
diff --git a/posts/2013-06-01-q-robinson-schensted-paper.org b/posts/2013-06-01-q-robinson-schensted-paper.org
new file mode 100644
index 0000000..27a6b0e
--- /dev/null
+++ b/posts/2013-06-01-q-robinson-schensted-paper.org
@@ -0,0 +1,28 @@
+#+title: A \(q\)-weighted Robinson-Schensted algorithm
+
+#+date: <2013-06-01>
+
+In [[https://projecteuclid.org/euclid.ejp/1465064320][this paper]] with
+[[http://www.bristol.ac.uk/maths/people/neil-m-oconnell/][Neil]] we
+construct a \(q\)-version of the Robinson-Schensted algorithm with
+column insertion. Like the
+[[http://en.wikipedia.org/wiki/Robinson–Schensted_correspondence][usual
+RS correspondence]] with column insertion, this algorithm can take
+words as input. Unlike the usual RS algorithm, the output is a set of
+weighted pairs of semistandard and standard Young tableaux \((P,Q)\)
+with the same shape. The weights are rational functions of the
+indeterminate \(q\).
+
+If \(q\in[0,1]\), the algorithm can be considered as a randomised RS
+algorithm, with 0 and 1 being two interesting cases. When \(q\to0\), it
+reduces to the usual RS algorithm, while when \(q\to1\), with proper
+scaling, it should converge to the directed random polymer model in
+[[http://arxiv.org/abs/0910.0069][(O'Connell 2012)]]. When the input
+word \(w\) is a random walk:
+
+\begin{align*}\mathbb
+P(w=v)=\prod_{i=1}^na_{v_i},\qquad\sum_ja_j=1\end{align*}
+
+the shape of the output evolves as a Markov chain with a kernel
+related to \(q\)-Whittaker functions, which are Macdonald functions at
+\(t=0\) up to a factor.
diff --git a/posts/2014-04-01-q-robinson-schensted-symmetry-paper.org b/posts/2014-04-01-q-robinson-schensted-symmetry-paper.org
new file mode 100644
index 0000000..b1c967d
--- /dev/null
+++ b/posts/2014-04-01-q-robinson-schensted-symmetry-paper.org
@@ -0,0 +1,16 @@
+#+title: Symmetry property of \(q\)-weighted Robinson-Schensted algorithms and branching algorithms
+#+date: <2014-04-01>
+
+In [[http://link.springer.com/article/10.1007/s10801-014-0505-x][this
+paper]] a symmetry property analogous to the well known symmetry
+property of the normal Robinson-Schensted algorithm has been shown for
+the \(q\)-weighted Robinson-Schensted algorithm. The proof uses a
+generalisation of the growth diagram approach introduced by Fomin. This
+approach, which uses "growth graphs", can also be applied to a wider
+class of insertion algorithms which have a branching structure.
+
+#+caption: Growth graph of q-RS for 1423
+[[../assets/resources/1423graph.jpg]]
+
+Above is the growth graph of the \(q\)-weighted Robinson-Schensted
+algorithm for the permutation \({1 2 3 4\choose1 4 2 3}\).
diff --git a/posts/2015-01-20-weighted-interpretation-super-catalan-numbers.org b/posts/2015-01-20-weighted-interpretation-super-catalan-numbers.org
new file mode 100644
index 0000000..9cde382
--- /dev/null
+++ b/posts/2015-01-20-weighted-interpretation-super-catalan-numbers.org
@@ -0,0 +1,39 @@
+#+title: AMS review of 'A weighted interpretation for the super Catalan
+#+title: numbers' by Allen and Gheorghiciuc
+
+#+date: <2015-01-20>
+
+The super Catalan numbers are defined as $$ T(m,n) = {(2 m)! (2 n)!
+\over 2 m! n! (m + n)!}. $$
+
+This paper has two main results. First, a combinatorial interpretation
+of the super Catalan numbers is given: $$ T(m,n) = P(m,n) - N(m,n) $$
+where \(P(m,n)\) enumerates the 2-Motzkin paths whose \(m\)-th step
+begins at an even level (called \(m\)-positive paths) and \(N(m,n)\)
+those whose \(m\)-th step begins at an odd level
+(\(m\)-negative paths). The proof uses a recursive argument on the
+number of \(m\)-positive and -negative paths, based on a recursion of
+the super Catalan numbers appearing in [I. M. Gessel, J. Symbolic
+Comput. *14* (1992), no. 2-3, 179--194;
+[[http://www.ams.org/mathscinet/search/publdoc.html?r=1&pg1=MR&s1=1187230&loc=fromrevtext][MR1187230]]]:
+$$ 4T(m,n) = T(m+1, n) + T(m, n+1). $$ This result gives an expression
+for the super Catalan numbers in terms of numbers counting the so-called
+ballot paths. The latter are sometimes also referred to as the
+generalised Catalan numbers, forming the entries of the Catalan triangle.
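As a numerical sanity check (mine, not part of the paper), the
definition and the recursion above can be verified directly;
=super_catalan= is a name I made up:

```python
from fractions import Fraction
from math import factorial

def super_catalan(m, n):
    """T(m, n) = (2m)! (2n)! / (2 m! n! (m+n)!), as an exact fraction."""
    return Fraction(factorial(2 * m) * factorial(2 * n),
                    2 * factorial(m) * factorial(n) * factorial(m + n))

# Gessel's recursion 4 T(m, n) = T(m+1, n) + T(m, n+1) on a small grid:
for m in range(6):
    for n in range(6):
        assert 4 * super_catalan(m, n) == \
            super_catalan(m + 1, n) + super_catalan(m, n + 1)
```

Except for \(T(0,0)=1/2\), the values on this grid are integers, e.g.
\(T(1,1)=1\) and \(T(2,2)=3\).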
+
+Based on the first result, the second result is a combinatorial
+interpretation of the super Catalan numbers \(T(2,n)\) in terms of
+counting certain Dyck paths. This is equivalent to a theorem in [I.
+M. Gessel and G. Xin, J. Integer Seq. *8* (2005), no. 2, Article 05.2.3,
+13 pp.;
+[[http://www.ams.org/mathscinet/search/publdoc.html?r=1&pg1=MR&s1=2134162&loc=fromrevtext][MR2134162]]]
+which represents \(T(2,n)\) as the count of certain pairs of Dyck paths;
+the equivalence is explained at the end of the paper by a bijection
+between the Dyck paths and the pairs of Dyck paths. The proof of the
+theorem itself is also done by constructing two bijections between Dyck
+paths satisfying certain conditions. All three bijections are
+formulated by locating, removing and adding steps.
+
+Copyright notice: This review is published at
+http://www.ams.org/mathscinet-getitem?mr=3275875, its copyright owned by
+the AMS.
diff --git a/posts/2015-04-01-unitary-double-products.org b/posts/2015-04-01-unitary-double-products.org
new file mode 100644
index 0000000..d545b3a
--- /dev/null
+++ b/posts/2015-04-01-unitary-double-products.org
@@ -0,0 +1,10 @@
+#+title: Unitary causal quantum stochastic double products as universal
+#+title: interactions I
+
+#+date: <2015-04-01>
+
+In
+[[http://www.actaphys.uj.edu.pl/findarticle?series=Reg&vol=46&page=1851][this
+paper]] with [[http://homepages.lboro.ac.uk/~marh3/][Robin]] we show the
+explicit formulae for a family of unitary triangular and rectangular
+double product integrals which can be described as second quantisations.
diff --git a/posts/2015-04-02-juggling-skill-tree.org b/posts/2015-04-02-juggling-skill-tree.org
new file mode 100644
index 0000000..79b35ad
--- /dev/null
+++ b/posts/2015-04-02-juggling-skill-tree.org
@@ -0,0 +1,28 @@
+#+title: jst
+
+#+date: <2015-04-02>
+
+jst = juggling skill tree
+
+If you have ever played a computer role-playing game, you may have
+noticed that the protagonist sometimes has a skill "tree" (most of the
+time it is actually a directed acyclic graph), where certain skills lead
+to others. For example,
+[[http://hydra-media.cursecdn.com/diablo.gamepedia.com/3/37/Sorceress_Skill_Trees_%28Diablo_II%29.png?version=b74b3d4097ef7ad4e26ebee0dcf33d01][here]]
+is the skill tree of the sorceress in
+[[https://en.wikipedia.org/wiki/Diablo_II][Diablo II]].
+
+Now suppose our hero embarks on a quest to learn all the possible
+juggling patterns. Everyone would agree she should start with the
+cascade, the simplest nontrivial 3-ball pattern, but what comes
+afterwards? A few other accessible patterns for beginners are juggler's
+tennis, two in one and even the reverse cascade, but what to learn after
+that? The encyclopaedic
+[[http://libraryofjuggling.com/][Library of Juggling]] serves as a good
+guide, as it records more than 160 patterns, some of which are very
+aesthetically appealing. On this website almost all the patterns have a
+"prerequisite" section, indicating what one should learn beforehand. I
+have therefore written a script using [[http://python.org][Python]],
+[[http://www.crummy.com/software/BeautifulSoup/][BeautifulSoup]] and
+[[http://pygraphviz.github.io/][pygraphviz]] to generate a jst (graded
+by difficulty, which is the leftmost column) from the Library of
+Juggling (click the image for the full size):
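The core of the script is simple: collect each pattern's prerequisites
and topologically sort the resulting DAG. Below is a stdlib-only sketch
of that step; the real script scrapes the data with BeautifulSoup and
draws the graph with pygraphviz, and the sample data here is hand-copied
and purely illustrative:

```python
from graphlib import TopologicalSorter  # Python 3.9+

# Illustrative sample of the "prerequisite" data (pattern -> prerequisites).
prereqs = {
    "Cascade": [],
    "Reverse Cascade": ["Cascade"],
    "Juggler's Tennis": ["Cascade", "Reverse Cascade"],
    "Two in One": ["Cascade"],
    "Mills Mess": ["Reverse Cascade"],
}

def learning_order(prereqs):
    """One valid order to learn the patterns: every pattern appears
    after all of its prerequisites (a topological sort of the DAG)."""
    return list(TopologicalSorter(prereqs).static_order())
```

Every valid order starts with the cascade, as it is the only pattern
with no prerequisites in this sample.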
diff --git a/posts/2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.org b/posts/2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.org
new file mode 100644
index 0000000..b632c03
--- /dev/null
+++ b/posts/2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.org
@@ -0,0 +1,67 @@
+#+title: AMS review of 'Infinite binary words containing repetitions of
+#+title: odd period' by Badkobeh and Crochemore
+
+#+date: <2015-05-30>
+
+This paper is about the existence of pattern-avoiding infinite binary
+words, where the patterns are squares, cubes and \(3^+\)-powers.
+There are mainly two kinds of results, positive (existence of an
+infinite binary word avoiding a certain pattern) and negative
+(non-existence of such a word). Each positive result is proved by the
+construction of a word with finitely many squares and cubes which are
+listed explicitly. First a synchronising (also known as comma-free)
+uniform morphism \(g\colon \Sigma_3^* \to \Sigma_2^*\) is constructed.
+Then an argument is given to show that the length of
+squares in the code \(g(w)\) for a squarefree \(w\) is bounded, hence
+all the squares can be obtained by examining all \(g(s)\) for \(s\) of
+bounded lengths. The argument resembles that of the proof of, e.g.,
+Theorem 1, Lemma 2, Theorem 3 and Lemma 4 in [N. Rampersad, J. O.
+Shallit and M. Wang, Theoret. Comput. Sci. *339* (2005), no. 1, 19--34;
+[[http://www.ams.org/mathscinet/search/publdoc.html?r=1&pg1=MR&s1=2142071&loc=fromrevtext][MR2142071]]].
+The negative results are proved by traversing all possible finite words
+satisfying the conditions.
+
+Let \(L(n_2, n_3, S)\) be the maximum length of a word with \(n_2\)
+distinct squares, \(n_3\) distinct cubes, and such that the periods of
+the squares can take values only in \(S\), where \(n_2, n_3 \in \Bbb N
+\cup \{\infty, \omega\}\) and \(S \subset \Bbb N_+\). Here \(n_k = 0\)
+corresponds to \(k\)-free, \(n_k = \infty\) means no restriction on the
+number of distinct \(k\)-powers, and \(n_k = \omega\) means
+\(k^+\)-free.
+
+Below is a summary of the positive and negative results:
+
+1) (Negative) \(L(\infty, \omega, 2 \Bbb N) < \infty\): \(\nexists\) an
+ infinite \(3^+\)-free binary word avoiding all squares of odd
+ periods (Proposition 1).
+
+2) (Negative) \(L(\infty, 0, 2 \Bbb N + 1) \le 23\): \(\nexists\) an
+ infinite 3-free binary word avoiding squares of even periods; the
+ longest such word has length \(\le 23\) (Proposition 2).
+
+3) (Positive) \(L(\infty, \omega, 2 \Bbb N + 1) = \infty\): \(\exists\)
+ an infinite \(3^+\)-free binary word avoiding squares of even
+ periods (Theorem 1).
+
+4) (Positive) \(L(\infty, \omega, \{1, 3\}) = \infty\): \(\exists\) an
+ infinite \(3^+\)-free binary word containing only squares of period
+ 1 or 3 (Theorem 2).
+
+5) (Negative) \(L(6, 1, 2 \Bbb N + 1) = 57\): \(\nexists\) an infinite
+ binary word avoiding squares of even period containing \(< 7\)
+ squares and \(< 2\) cubes; the longest one containing 6 squares and 1
+ cube has length 57 (Proposition 6).
+
+6) (Positive) \(L(7, 1, 2 \Bbb N + 1) = \infty\): \(\exists\) an
+ infinite \(3^+\)-free binary word avoiding squares of even period
+ with 1 cube and 7 squares (Theorem 3).
+
+7) (Positive) \(L(4, 2, 2 \Bbb N + 1) = \infty\): \(\exists\) an
+ infinite \(3^+\)-free binary word avoiding squares of even period
+ and containing 2 cubes and 4 squares (Theorem 4).
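The negative results are obtained by traversing all admissible finite
words. A naive version of such an exhaustive search, sketched here for
the setting of Proposition 2 (cube-free binary words with no square of
even period), is my own illustration with an artificial length cap, not
the authors' program:

```python
def has_forbidden_suffix(w):
    """True if the newest letter of w completes a cube, or a square of
    even period (the patterns forbidden in Proposition 2's setting)."""
    n = len(w)
    for p in range(1, n // 2 + 1):
        if w[-2 * p:-p] == w[-p:]:       # a square of period p ends here
            if p % 2 == 0:               # even period: forbidden
                return True
            if 3 * p <= n and w[-3 * p:-2 * p] == w[-p:]:  # cube: forbidden
                return True
    return False

def longest(limit):
    """Length of the longest admissible binary word, searched up to
    `limit` by depth-first search with pruning."""
    best, stack = 0, [""]
    while stack:
        w = stack.pop()
        best = max(best, len(w))
        if len(w) == limit:
            continue
        for c in "01":
            if not has_forbidden_suffix(w + c):
                stack.append(w + c)
    return best
```

Since every forbidden factor is detected when its last letter is placed,
the search visits exactly the admissible words, and by Proposition 2 it
terminates well below any cap larger than 23.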
+
+Copyright notice: This review is published at
+http://www.ams.org/mathscinet-getitem?mr=3313467, its copyright owned by
+the AMS.
diff --git a/posts/2015-07-01-causal-quantum-product-levy-area.org b/posts/2015-07-01-causal-quantum-product-levy-area.org
new file mode 100644
index 0000000..528b9b7
--- /dev/null
+++ b/posts/2015-07-01-causal-quantum-product-levy-area.org
@@ -0,0 +1,26 @@
+#+title: On a causal quantum double product integral related to Lévy
+#+title: stochastic area.
+
+#+date: <2015-07-01>
+
+In [[https://arxiv.org/abs/1506.04294][this paper]] with
+[[http://homepages.lboro.ac.uk/~marh3/][Robin]] we study the family of
+causal double product integrals \[ \prod_{a < x < y < b}\left(1 +
+i{\lambda \over 2}(dP_x dQ_y - dQ_x dP_y) + i {\mu \over 2}(dP_x dP_y +
+dQ_x dQ_y)\right) \]
+
+where $P$ and $Q$ are the mutually noncommuting momentum and position
+Brownian motions of quantum stochastic calculus. The evaluation is
+motivated heuristically by approximating the continuous double product
+by a discrete product in which infinitesimals are replaced by finite
+increments. The latter is in turn approximated by the second
+quantisation of a discrete double product of rotation-like operators in
+different planes due to a result in
+[[http://www.actaphys.uj.edu.pl/findarticle?series=Reg&vol=46&page=1851][(Hudson-Pei 2015)]].
+The main problem solved in this paper is the explicit evaluation of the
+continuum limit $W$ of the latter, and showing that $W$ is a unitary
+operator. The kernel of $W$ is written in terms of Bessel functions, and
+the evaluation is achieved by working on a lattice path model and
+enumerating linear extensions of related partial orderings, where the
+enumeration turns out to be heavily related to Dyck paths and
+generalisations of Catalan numbers.
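As background for the last sentence (a standard fact, not a result of
the paper): Dyck paths with \(n\) up-steps are counted by the Catalan
numbers \(C_n = {1 \over n+1}{2n \choose n}\), which a short dynamic
program over path heights can confirm:

```python
from math import comb

def dyck_count(n):
    """Count lattice paths of 2n unit up/down steps that start and end
    at height 0 and never go below 0 (Dyck paths)."""
    paths = {0: 1}  # paths[h] = number of prefixes ending at height h
    for _ in range(2 * n):
        nxt = {}
        for h, c in paths.items():
            for h2 in (h + 1, h - 1):
                if h2 >= 0:
                    nxt[h2] = nxt.get(h2, 0) + c
        paths = nxt
    return paths.get(0, 0)

def catalan(n):
    """Closed form C_n = binom(2n, n) / (n + 1)."""
    return comb(2 * n, n) // (n + 1)
```

The two agree: the first few values are 1, 1, 2, 5, 14, 42.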
diff --git a/posts/2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.org b/posts/2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.org
new file mode 100644
index 0000000..cda6967
--- /dev/null
+++ b/posts/2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.org
@@ -0,0 +1,64 @@
+#+title: AMS review of 'Double Macdonald polynomials as the stable limit
+#+title: of Macdonald superpolynomials' by Blondeau-Fournier, Lapointe and
+#+title: Mathieu
+
+#+date: <2015-07-15>
+
+A Macdonald superpolynomial (introduced in [O. Blondeau-Fournier et al.,
+Lett. Math. Phys. 101 (2012), no. 1, 27--47;
+[[http://www.ams.org/mathscinet/search/publdoc.html?pg1=MR&s1=2935476&loc=fromrevtext][MR2935476]];
+J. Comb. 3 (2012), no. 3, 495--561;
+[[http://www.ams.org/mathscinet/search/publdoc.html?pg1=MR&s1=3029444&loc=fromrevtext][MR3029444]]])
+in \(N\) Grassmannian variables indexed by a superpartition \(\Lambda\)
+is said to be stable if \({m (m + 1) \over 2} \ge |\Lambda|\) and \(N
+\ge |\Lambda| - {m (m - 3) \over 2}\), where \(m\) is the fermionic
+degree. A stable Macdonald superpolynomial (corresponding to a
+bisymmetric polynomial) is also called a double Macdonald polynomial
+(dMp). The main result of this paper is the factorisation of a dMp into
+plethysms of two classical Macdonald polynomials (Theorem 5). Based on
+this result, this paper
+
+1) shows that the dMp has a unique decomposition into bisymmetric
+ monomials;
+
+2) calculates the norm of the dMp;
+
+3) calculates the kernel of the Cauchy-Littlewood-type identity of the
+ dMp;
+
+4) shows the specialisation of the aforementioned factorisation to the
+ Jack, Hall-Littlewood and Schur cases. One of the three Schur
+ specialisations, denoted as \(s_{\lambda, \mu}\), also appears in (7)
+ and (9) below;
+
+5) defines the \(\omega\)-automorphism in this setting, which is used
+ to prove an identity involving products of four Littlewood-Richardson
+ coefficients;
+
+6) shows an explicit evaluation of the dMp motivated by the most general
+ evaluation of the usual Macdonald polynomials;
+
+7) relates dMps with the representation theory of the hyperoctahedral
+ group \(B_n\) via the double Kostka coefficients (which are defined
+ as the entries of the transition matrix from the bisymmetric Schur
+ functions \(s_{\lambda, \mu}\) to the modified dMps);
+
+8) shows that the double Kostka coefficients have the positivity and the
+ symmetry property, and can be written as sums of products of the
+ usual Kostka coefficients;
+
+9) defines an operator \(\nabla^B\) as an analogue of the nabla operator
+ \(\nabla\) introduced in [F. Bergeron and A. M. Garsia, in /Algebraic
+ methods and \(q\)-special functions/ (Montréal, QC, 1996), 1--52, CRM
+ Proc. Lecture Notes, 22, Amer. Math. Soc., Providence, RI, 1999;
+ [[http://www.ams.org/mathscinet/search/publdoc.html?r=1&pg1=MR&s1=1726826&loc=fromrevtext][MR1726826]]].
+ The action of \(\nabla^B\) on the bisymmetric Schur function
+ \(s_{\lambda, \mu}\) yields the dimension formula \((h + 1)^r\) for
+ the corresponding representation of \(B_n\), where \(h\) and \(r\)
+ are the Coxeter number and the rank of \(B_n\), in the same way that
+ the action of \(\nabla\) on the \(n\)th elementary symmetric
+ function leads to the same formula for the group of type \(A_n\).
+
+Copyright notice: This review is published at
+http://www.ams.org/mathscinet-getitem?mr=3306078, its copyright owned by
+the AMS.
diff --git a/posts/2016-10-13-q-robinson-schensted-knuth-polymer.org b/posts/2016-10-13-q-robinson-schensted-knuth-polymer.org
new file mode 100644
index 0000000..93da639
--- /dev/null
+++ b/posts/2016-10-13-q-robinson-schensted-knuth-polymer.org
@@ -0,0 +1,50 @@
+#+title: A \(q\)-Robinson-Schensted-Knuth algorithm and a \(q\)-polymer
+
+#+date: <2016-10-13>
+
+(Latest update: 2017-01-12) In
+[[http://arxiv.org/abs/1504.00666][Matveev-Petrov 2016]] a
+\(q\)-deformed Robinson-Schensted-Knuth algorithm (\(q\)RSK) was
+introduced. In this article we give reformulations of this algorithm in
+terms of Noumi-Yamada description, growth diagrams and local moves. We
+show that the algorithm is symmetric, namely that the output tableau
+pair is swapped in the sense of distribution when the input matrix is
+transposed. We also formulate a \(q\)-polymer model based on the
+\(q\)RSK and prove the corresponding Burke property, which we use to
+show a strong law of large numbers for the partition function given
+stationary boundary conditions and \(q\)-geometric weights. We use the
+\(q\)-local moves to define a generalisation of the \(q\)RSK taking a
+Young-diagram-shaped array as the input. We write down the joint
+distribution of partition functions in the space-like direction of the
+\(q\)-polymer in \(q\)-geometric environment, formulate a \(q\)-version
+of the multilayer polynuclear growth model (\(q\)PNG) and write down the
+joint distribution of the \(q\)-polymer partition functions at a fixed
+time.
+
+This article is available at
+[[https://arxiv.org/abs/1610.03692][arXiv]]. It seems to me that one
+difference between arXiv and GitHub is that on arXiv each preprint has
+only a few versions. On GitHub many projects have a "dev" branch hosting
+continuous updates, whereas the master branch is where the stable
+releases live.
+
+[[../assets/resources/qrsklatest.pdf][Here]]
+is a "dev" version of the article, which I shall push to arXiv when it
+stabilises. Below is the changelog.
+
+- 2017-01-12: Typos and grammar, arXiv v2.
+- 2016-12-20: Added remarks on the geometric \(q\)-pushTASEP. Added
+ remarks on the converse of the Burke property. Added natural language
+ description of the \(q\)RSK. Fixed typos.
+- 2016-11-13: Fixed some typos in the proof of Theorem 3.
+- 2016-11-07: Fixed some typos. The \(q\)-Burke property is now stated
+ in a more symmetric way, so is the law of large numbers Theorem 2.
+- 2016-10-20: Fixed a few typos. Updated some references. Added a
+ reference: [[http://web.mit.edu/~shopkins/docs/rsk.pdf][a set of notes
+ titled "RSK via local transformations"]]. It is written by
+ [[http://web.mit.edu/~shopkins/][Sam Hopkins]] in 2014 as an
+ expository article based on MIT combinatorics preseminar presentations
+ of Alex Postnikov. It contains an idea (applying local moves to a
+ general Young-diagram-shaped array in the order that matches any
+ growth sequence of the underlying Young diagram) which I thought I
+ was the first to write down.
diff --git a/posts/2017-04-25-open_research_toywiki.org b/posts/2017-04-25-open_research_toywiki.org
new file mode 100644
index 0000000..1e672b0
--- /dev/null
+++ b/posts/2017-04-25-open_research_toywiki.org
@@ -0,0 +1,21 @@
+#+title: Open mathematical research and launching toywiki
+
+#+date: <2017-04-25>
+
+As an experimental project, I am launching toywiki.
+
+It hosts a collection of my research notes.
+
+It takes some ideas from the open source culture and applies them to
+mathematical research:
+
+1. It uses a very permissive license (CC-BY-SA). For example, anyone
+ can fork the project and make their own version if they have a
+ different vision and want to build upon the project.
+2. All edits are done with maximum transparency, and discussions of
+ any of the notes should also be as public as possible (e.g. Github
+ issues).
+3. Anyone can suggest changes by opening issues and submitting pull
+ requests.
+
+Here are the links: [[http://toywiki.xyz][toywiki]] and
+[[https://github.com/ycpei/toywiki][github repo]].
+
+Feedback is welcome by email.
diff --git a/posts/2017-08-07-mathematical_bazaar.org b/posts/2017-08-07-mathematical_bazaar.org
new file mode 100644
index 0000000..64bf335
--- /dev/null
+++ b/posts/2017-08-07-mathematical_bazaar.org
@@ -0,0 +1,213 @@
+#+title: The Mathematical Bazaar
+
+#+date: <2017-08-07>
+
+In this essay I describe some problems in the mathematical academia and
+propose an open source model, which I call open research in mathematics.
+
+This essay is a work in progress - comments and criticisms are
+welcome! [fn:1]
+
+Before I start I should point out that
+
+1. Open research is /not/ open access. In fact the latter is a
+ prerequisite to the former.
+2. I am not proposing to replace the current academic model with the
+ open model - I know academia works well for many people and I am
+ happy for them, but I think an open research community is long
+ overdue since the wide adoption of the World Wide Web more than two
+ decades ago. In fact, I fail to see why an open model can not run in
+ tandem with the academia, just like open source and closed source
+ software development coexist today.
+
+** problems of academia
+ :PROPERTIES:
+ :CUSTOM_ID: problems-of-academia
+ :END:
+Open source projects are characterised by publicly available source
+code as well as open invitations for public collaboration, whereas
+closed source projects do not make their source code accessible to the
+public. What about the mathematical academia then: is it open source or
+closed source? The answer is neither.
+
+Compared to some other scientific disciplines, mathematics does not
+require expensive equipment or resources to replicate results; compared
+to programming in the conventional software industry, mathematical
+findings are not meant to be commercial, as credits and reputation
+rather than money are the direct incentives (even though the former are
+commonly used to trade for the latter). It is also a custom and common
+belief that mathematical derivations and theorems shouldn't be patented.
+Because of this, mathematical research is an open source activity in the
+sense that proofs of new results are all available in papers, and thanks
+to open access (e.g. the arXiv preprint repository) most of the new
+mathematical knowledge is accessible for free.
+
+Then why, you may ask, do I claim that maths research is not open
+source? Well, this is because 1. mathematical arguments are not easily
+replicable and 2. mathematical research projects are mostly not open for
+public participation.
+
+Compared to computer programs, mathematical arguments are not written in
+an unambiguous language, and they are terse rather than maximally
+verbose (this is especially true in research papers, as journals
+encourage limiting the length of submissions), so the understanding of a
+proof depends on whether the reader is equipped with the right
+background knowledge, and the completeness of a proof is highly
+subjective. More generally speaking, computer programs are mostly
+portable because all machines with the correct configurations can
+understand and execute a piece of program, whereas humans depend on
+their environment, upbringing, resources etc. to have a brain ready to
+comprehend a proof that interests them. (These barriers are softer than
+the expensive equipment and resources in other scientific fields
+mentioned before, because it is all about having access to the right
+information.)
+
+On the other hand, as far as the pursuit of reputation and prestige
+(which can be used to trade for the scarce resource of research
+positions and grant money) goes, there is often little practical
+motivation for career mathematicians to explain their results to the
+public carefully. And so the weird reality of the mathematical academia
+is that it is not an uncommon practice to keep trade secrets in order to
+protect one's territory and maintain a monopoly. This is doable because
+as long as a paper passes the opaque and sometimes political peer review
+process and is accepted by a journal, it is considered work done,
+accepted by the whole academic community and adds to the reputation of
+the author(s). Just like in the software industry, trade secrets and
+monopoly hinder the development of research as a whole, as well as
+demoralise outsiders who are interested in participating in related
+research.
+
+Apart from trade secrets and territoriality, another reason for the
+nonexistence of an open research community is an elitist tradition in the
+mathematical academia, which goes as follows:
+
+- Whoever is not good at mathematics or does not possess a degree in
+ maths is not eligible to do research, or else they run high risks of
+ being labelled a crackpot.
+- Mistakes made by established mathematicians are more tolerable than
+ those made by less established ones.
+- Good mathematical writing should be deep, and expositions of
+ non-original results are viewed as inferior work that does not add to
+ (and in some cases may even damage) one's reputation.
+
+All these customs potentially discourage public participation in
+mathematical research, and I do not see them easily going away unless an
+open source community gains momentum.
+
+To solve the above problems, I propose an open source model of
+mathematical research, which has high levels of openness and
+transparency, and also some added benefits listed in the last section
+of this essay. This model tries to achieve two major goals:
+
+- Open and public discussions and collaborations of mathematical
+ research projects online
+- Open review to validate results, where author name, reviewer name,
+ comments and responses are all publicly available online.
+
+To this end, a Github model is fitting. Let me first describe how open
+source collaboration works on Github.
+
+** open source collaborations on Github
+ :PROPERTIES:
+ :CUSTOM_ID: open-source-collaborations-on-github
+ :END:
+On [[https://github.com][Github]], every project is publicly available
+in a repository (we do not consider private repos). The owner can update
+the project by "committing" changes, which include a message of what has
+been changed, the author of the changes and a timestamp. Each project
+has an issue tracker, which is basically a discussion forum about the
+project, where anyone can open an issue (start a discussion), and the
+owner of the project as well as the original poster of the issue can
+close it if it is resolved, e.g. bug fixed, feature added, or out of the
+scope of the project. Closing the issue is like ending the discussion,
+except that the thread is still open to more posts for anyone
+interested. People can react to each issue post, e.g. upvote, downvote,
+celebration, and importantly, all the reactions are public too, so you
+can find out who upvoted or downvoted your post.
+
+When one is interested in contributing code to a project, they fork it,
+i.e. make a copy of the project, and make the changes they like in the
+fork. Once they are happy with the changes, they submit a pull request
+to the original project. The owner of the original project may accept or
+reject the request, and they can comment on the code in the pull
+request, asking for clarification, pointing out problematic parts of the
+code, etc., and the author of the pull request can respond to the comments.
+Anyone, not just the owner, can participate in this review process,
+turning it into a public discussion. In fact, a pull request is a
+special issue thread. Once the owner is happy with the pull request,
+they accept it and the changes are merged into the original project. The
+author of the changes will show up in the commit history of the original
+project, so they get the credits.
+
+As an alternative to forking, if one is interested in a project but has
+a different vision, or if the maintainer has stopped working on it,
+they can clone it and make their own version. This is a more independent
+kind of fork, because there is no longer an intention to contribute back to
+the original project.
+
+Moreover, on Github there is no way to send private messages, which
+forces people to interact publicly. If say you want someone to see and
+reply to your comment in an issue post or pull request, you simply
+mention them by =@someone=.
+
+** open research in mathematics
+ :PROPERTIES:
+ :CUSTOM_ID: open-research-in-mathematics
+ :END:
+All this points to a promising direction for open research. A maths
+project may have a wiki / collection of notes, the paper being written,
+computer programs implementing the results etc. The issue tracker can
+serve as a discussion forum about the project as well as a platform for
+open review (bugs are analogous to mistakes, enhancements are possible
+ways of improving the main results etc.), and anyone can make their own
+version of the project, and (optionally) contribute back by making pull
+requests, which will also be openly reviewed. One may want to add an
+extra "review this project" functionality, so that people can comment on
+the original project like they do in a pull request. This may or may not
+be necessary, as anyone can make comments or point out mistakes in the
+issue tracker.
+
+One may doubt this model due to concerns about credit, because work in
+progress is available to anyone. Well, since all the contributions are
+trackable in the project commit history and in public discussions in
+issues and pull request reviews, there is in fact /less/ room for
+cheating than in the current academic model, where scooping can happen
+without any witnesses. What we need is a platform with a good amount of
+trust like arXiv, so that the open research community honours (and
+cannot ignore) the commit history, and the chance of mis-attribution can
+be reduced to a minimum.
+
+Compared to the academic model, open research also has the following
+advantages:
+
+- Anyone in the world with Internet access will have a chance to
+ participate in research, whether or not they are affiliated with a
+ university, have the financial means to attend conferences, or are
+ colleagues of one of the handful of experts in a specific field.
+- The problem of replicating / understanding maths results will be
+ solved, as people help each other out. This will also remove the
+ burden of answering queries about one's research. For example, say one
+ has a project "Understanding the fancy results in [paper name]", they
+ write up some initial notes but get stuck understanding certain
+ arguments. In this case they can simply post the questions on the
+ issue tracker, and anyone who knows the answer, or just has a
+ speculation, can participate in the discussion. In the end the problem
+ may be resolved without bothering the authors of the paper, who may
+ be too busy to answer.
+- Similarly, the burden of peer review can also be shifted from a few
+ appointed reviewers to the crowd.
+
+** related readings
+ :PROPERTIES:
+ :CUSTOM_ID: related-readings
+ :END:
+
+- [[http://www.catb.org/esr/writings/cathedral-bazaar/][The Cathedral
+ and the Bazaar by Eric Raymond]]
+- [[http://michaelnielsen.org/blog/doing-science-online/][Doing science
+ online by Michael Nielsen]]
+- [[https://gowers.wordpress.com/2009/01/27/is-massively-collaborative-mathematics-possible/][Is
+ massively collaborative mathematics possible? by Timothy Gowers]]
+
+[fn:1] Please send your comments to my email address - I am still
+ looking for ways to add a comment functionality to this website.
diff --git a/posts/2018-04-10-update-open-research.org b/posts/2018-04-10-update-open-research.org
new file mode 100644
index 0000000..4b078d5
--- /dev/null
+++ b/posts/2018-04-10-update-open-research.org
@@ -0,0 +1,185 @@
+#+title: Updates on open research
+
+#+date: <2018-04-29>
+
+It has been 9 months since I last wrote about open (maths) research.
+Since then two things happened which prompted me to write an update.
+
+As always I discuss open research only in mathematics, not because I
+think it should not be applied to other disciplines, but simply because
+I do not have experience in nor sufficient interest in non-mathematical
+subjects.
+
+First, I read about Richard Stallman, the founder of the free software
+movement, in [[http://shop.oreilly.com/product/9780596002879.do][his
+biography by Sam Williams]] and his own collection of essays
+[[https://shop.fsf.org/books-docs/free-software-free-society-selected-essays-richard-m-stallman-3rd-edition][/Free
+software, free society/]], from which I learned a bit more about the
+context and philosophy of free software and its relation to that of open
+source software. For anyone interested in open research, I highly
+recommend having a look at these two books. I am also reading Levy's
+[[http://www.stevenlevy.com/index.php/books/hackers][Hackers]], which
+documented the development of the hacker culture predating Stallman. I
+can see the connection of ideas from the hacker ethic to the free
+software philosophy and to the open source philosophy. My guess is that
+the software world is fortunate to have pioneers who advocated for
+various kinds of freedom and openness from the beginning, whereas in
+academia, which has a much longer history, credit protection has always
+been a bigger concern.
+
+Also a month ago I attended a workshop called
+[[https://www.perimeterinstitute.ca/conferences/open-research-rethinking-scientific-collaboration][Open
+research: rethinking scientific collaboration]]. That was the first time
+I met a group of people (mostly physicists) who also want open research
+to happen, and we had some stimulating discussions. Many thanks to the
+organisers at Perimeter Institute for organising the event, and special
+thanks to
+[[https://www.perimeterinstitute.ca/people/matteo-smerlak][Matteo
+Smerlak]] and
+[[https://www.perimeterinstitute.ca/people/ashley-milsted][Ashley
+Milsted]] for the invitation and hosting.
+
+From both of these I feel that I should write an updated post on open
+research.
+
+*** Freedom and community
+ :PROPERTIES:
+ :CUSTOM_ID: freedom-and-community
+ :END:
+Ideals matter. Stallman's struggles stemmed from the frustration of a
+denied request for source code (a frustration I shared in academia,
+except that source code is replaced by maths knowledge), and revolved
+around two
+things that underlie the free software movement: freedom and community.
+That is, the freedom to use, modify and share a work, and by sharing, to
+help the community.
+
+Likewise, as for open research, apart from the utilitarian view that
+open research is more efficient and makes credit theft harder, we should not
+ignore the ethical aspect that open research is right and fair. In
+particular, I think freedom and community can also serve as principles
+in open research. One way to make this argument more concrete is to
+describe what I feel are the central problems: NDAs (non-disclosure
+agreements) and reproducibility.
+
+*NDAs*. It is assumed that when establishing a research collaboration,
+or just having a discussion, all those involved own the joint work in
+progress, and no one has the freedom to disclose any information
+(e.g. intermediate results) without getting permission from all
+collaborators. In effect this amounts to signing an NDA. NDAs are
+harmful because they restrict people's freedom to share information
+that can benefit their own or others' research. Considering that in
+contrast to the private sector, the primary goal of academia is
+knowledge, not profit, NDAs in research are unacceptable.
+
+*Reproducibility*. Research papers as written are not necessarily
+reproducible, even though they appear in peer-reviewed journals. This is
+because the peer-review process is opaque and the proofs in the papers
+may not be clear to everyone. To make things worse, there are no open
+channels to discuss results in these papers and one may have to rely on
+interacting with the small circle of the informed. One example is folk
+theorems. Another is trade secrets required to decipher published works.
+
+I should clarify that freedom works both ways. One should have the
+freedom to disclose maths knowledge, but they should also be free to
+withhold any information that does not hamper the reproducibility of
+published works (e.g. results in ongoing research yet to be published),
+even though it may not be nice to do so when such information can help
+others with their research.
+
+Similar to the solution offered by the free software movement, we need a
+community that promotes and respects free flow of maths knowledge, in
+the spirit of the [[https://www.gnu.org/philosophy/][four essential
+freedoms]], a community that rejects NDAs and upholds reproducibility.
+
+Here are some ideas on how to tackle these two problems and build the
+community:
+
+1. Free licensing. It solves the NDA problem - free licenses permit
+ redistribution and modification of works, so if you adopt them in
+ your joint work, then you have the freedom to modify and distribute
+ the work; it also helps with reproducibility - if a paper is not
+ clear, anyone can write their own version and publish it. Bonus
+ points with the use of copyleft licenses like
+ [[https://creativecommons.org/licenses/by-sa/4.0/][Creative Commons
+ Share-Alike]] or the [[https://www.gnu.org/licenses/fdl.html][GNU
+ Free Documentation License]].
+2. A forum for discussions of mathematics. It helps solve the
+ reproducibility problem - public interaction may help quickly clarify
+ problems. By the way, Math Overflow is not a forum.
+3. An infrastructure of mathematical knowledge. Like the GNU system, a
+ mathematics encyclopedia under a copyleft license maintained in the
+ Github-style rather than Wikipedia-style by a "Free Mathematics
+ Foundation", and drawing contributions from the public (inside or
+   outside of academia). To begin with, crowd-source (again,
+ Github-style) the proofs of say 1000 foundational theorems covered in
+   the curriculum of a bachelor's degree. Perhaps start by taking
+   contributions from people with some credentials (e.g. a bachelor's
+   degree in maths) and then expand the contribution permission to the
+   public, or take advantage of existing corpora under free licenses,
+   like Wikipedia.
+4. Citing with care: if a work is considered authoritative but you
+ couldn't reproduce the results, whereas another paper which tries to
+ explain or discuss similar results makes the first paper
+ understandable to you, give both papers due attribution (something
+ like: see [1], but I couldn't reproduce the proof in [1], and the
+ proofs in [2] helped clarify it). No one should be offended if you
+   say you cannot reproduce something - there may be causes on both
+ sides, whereas citing [2] is fairer and helps readers with a similar
+ background.
+
+*** Tools for open research
+ :PROPERTIES:
+ :CUSTOM_ID: tools-for-open-research
+ :END:
+The open research workshop revolved around how to lead academia towards
+a more open culture. There were discussions on open research tools,
+improving credit attributions, the peer-review process and the path to
+adoption.
+
+During the workshop many efforts for open research were mentioned, and
+afterwards I was made aware of more of them, like the following:
+
+- [[https://osf.io][OSF]], an online research platform. It has a clean
+ and simple interface with commenting, wiki, citation generation, DOI
+ generation, tags, license generation etc. Like Github it supports
+ private and public repositories (but defaults to private), version
+ control, with the ability to fork or bookmark a project.
+- [[https://scipost.org/][SciPost]], physics journals whose peer review
+  reports and responses are public (peer-witnessed refereeing), and
+  which allow comments (post-publication evaluation). Like arXiv, it
+  requires some academic credential (PhD or above) to register.
+- [[https://knowen.org/][Knowen]], a platform to organise knowledge in
+ directed acyclic graphs. Could be useful for building the
+ infrastructure of mathematical knowledge.
+- [[https://fermatslibrary.com/][Fermat's Library]], the journal club
+  website that crowd-annotates one notable paper per week, released a
+  Chrome extension, [[https://fermatslibrary.com/librarian][Librarian]],
+  that overlays a commenting interface on arXiv. As an example, Ian
+ Goodfellow did an
+ [[https://fermatslibrary.com/arxiv_comments?url=https://arxiv.org/pdf/1406.2661.pdf][AMA
+ (ask me anything) on his GAN paper]].
+- [[https://polymathprojects.org/][The Polymath project]], the famous
+ massive collaborative mathematical project. Not exactly new, the
+ Polymath project is the only open maths research project that has
+ gained some traction and recognition. However, it does not have many
+ active projects
+ ([[http://michaelnielsen.org/polymath1/index.php?title=Main_Page][currently
+ only one active project]]).
+- [[https://stacks.math.columbia.edu/][The Stacks Project]]. I was made
+ aware of this project by [[https://people.kth.se/~yitingl/][Yiting]].
+  Its data is hosted on Github, accepts contributions via pull
+  requests, and is licensed under the GNU Free Documentation License,
+  ticking many boxes of the free and open source model.
+
+*** An anecdote from the workshop
+ :PROPERTIES:
+ :CUSTOM_ID: an-anecdote-from-the-workshop
+ :END:
+In a conversation during the workshop, one of the participants called
+open science "normal science", because reproducibility, open access,
+collaborations, and fair attributions are all what science is supposed
+to be, and practices like treating the readers as buyers rather than
+users should be called "bad science", rather than "closed science".
+
+To which an organiser replied: maybe we should rename the workshop
+"Not-bad science".
diff --git a/posts/2018-06-03-automatic_differentiation.org b/posts/2018-06-03-automatic_differentiation.org
new file mode 100644
index 0000000..cebcf8c
--- /dev/null
+++ b/posts/2018-06-03-automatic_differentiation.org
@@ -0,0 +1,100 @@
+#+title: Automatic differentiation
+
+#+date: <2018-06-03>
+
+This post serves as a note and explainer of autodiff. It is licensed
+under [[https://www.gnu.org/licenses/fdl.html][GNU FDL]].
+
+For my learning I benefited a lot from
+[[http://www.cs.toronto.edu/%7Ergrosse/courses/csc321_2018/slides/lec10.pdf][Toronto
+CSC321 slides]] and the
+[[https://github.com/mattjj/autodidact/][autodidact]] project which is a
+pedagogical implementation of
+[[https://github.com/hips/autograd][Autograd]]. That said, any mistakes
+in this note are mine (especially since some of the knowledge is
+obtained from interpreting slides!), and if you do spot any I would be
+grateful if you can let me know.
+
+Automatic differentiation (AD) is a way to compute derivatives. It does
+so by traversing through a computational graph using the chain rule.
+
+There are two modes, forward mode AD and reverse mode AD, which are
+roughly symmetric to each other: understanding one of them makes
+understanding the other straightforward.
+
+In the language of neural networks, one can say that the forward mode AD
+is used when one wants to compute the derivatives of functions at all
+layers with respect to input layer weights, whereas the reverse mode AD
+is used to compute the derivatives of output functions with respect to
+weights at all layers. Therefore reverse mode AD (rmAD) is the one to
+use for gradient descent, and the one we focus on in this post.
+
+Basically rmAD requires the computation to be sufficiently decomposed,
+so that in the computational graph, each node as a function of its
+parent nodes is an elementary function that the AD engine has knowledge
+about.
+
+For example, the Sigmoid activation $a' = \sigma(w a + b)$ is quite
+simple, but it should be decomposed to simpler computations:
+
+- $a' = 1 / t_1$
+- $t_1 = 1 + t_2$
+- $t_2 = \exp(t_3)$
+- $t_3 = - t_4$
+- $t_4 = t_5 + b$
+- $t_5 = w a$
+
+Thus the function $a'(a)$ is decomposed to elementary operations like
+addition, subtraction, multiplication, reciprocation, exponentiation,
+logarithm etc. And the rmAD engine stores the Jacobian of these
+elementary operations.
+
+Since in neural networks we want to find derivatives of a single loss
+function $L(x; \theta)$, we can omit $L$ when writing derivatives and
+denote, say $\bar \theta_k := \partial_{\theta_k} L$.
+
+In implementations of rmAD, one can represent the Jacobian as a
+transformation $j: (x \to y) \to (y, \bar y, x) \to \bar x$. $j$ is
+called the /Vector Jacobian Product/ (VJP). For example,
+$j(\exp)(y, \bar y, x) = y \bar y$ since given $y = \exp(x)$,
+
+$\partial_x L = \partial_x y \cdot \partial_y L = \partial_x \exp(x) \cdot \partial_y L = y \bar y$
+
+As another example, $j(+)(y, \bar y, x_1, x_2) = (\bar y, \bar y)$ since
+given $y = x_1 + x_2$, $\bar{x_1} = \bar{x_2} = \bar y$.
+
+Similarly,
+
+1. $j(/)(y, \bar y, x_1, x_2) = (\bar y / x_2, - \bar y x_1 / x_2^2)$
+2. $j(\log)(y, \bar y, x) = \bar y / x$
+3. $j((A, \beta) \mapsto A \beta)(y, \bar y, A, \beta) = (\bar y \otimes \beta, A^T \bar y)$.
+4. etc...
+
+In the third one, the function is a matrix $A$ multiplied on the right
+by a column vector $\beta$, and $\bar y \otimes \beta$ is the tensor
+product which is a fancy way of writing $\bar y \beta^T$. See
+[[https://github.com/mattjj/autodidact/blob/master/autograd/numpy/numpy_vjps.py][numpy_vjps.py]]
+for the implementation in autodidact.
+
+So, given a node say $y = y(x_1, x_2, ..., x_n)$, and given the value of
+$y$, $x_{1 : n}$ and $\bar y$, rmAD computes the values of
+$\bar x_{1 : n}$ by using the Jacobians.
+
+This is the gist of rmAD. It stores the values of each node in a forward
+pass, and computes the derivatives of each node exactly once in a
+backward pass.
+
+It is a nice exercise to derive the backpropagation in the fully
+connected feedforward neural networks
+(e.g. [[http://neuralnetworksanddeeplearning.com/chap2.html#the_four_fundamental_equations_behind_backpropagation][the
+one for MNIST in Neural Networks and Deep Learning]]) using rmAD.
+
+AD is an approach lying between the extremes of numerical approximation
+(e.g. finite difference) and symbolic evaluation. It uses exact formulas
+(VJP) at each elementary operation, like symbolic evaluation, while
+evaluating each VJP numerically rather than lumping all the VJPs into an
+unwieldy symbolic formula.
+
+Things to look further into: the higher-order functional currying form
+$j: (x \to y) \to (y, \bar y, x) \to \bar x$ begs for a functional
+programming implementation.
diff --git a/posts/2018-12-02-lime-shapley.org b/posts/2018-12-02-lime-shapley.org
new file mode 100644
index 0000000..05ef4ee
--- /dev/null
+++ b/posts/2018-12-02-lime-shapley.org
@@ -0,0 +1,362 @@
+#+title: Shapley, LIME and SHAP
+
+#+date: <2018-12-02>
+
+In this post I explain LIME (Ribeiro et al. 2016), the Shapley values
+(Shapley, 1953) and the SHAP values (Strumbelj-Kononenko, 2014;
+Lundberg-Lee, 2017).
+
+*Acknowledgement*. Thanks to Josef Lindman Hörnlund for bringing the
+LIME and SHAP papers to my attention. The research was done while
+working at KTH mathematics department.
+
+/If you are reading on a mobile device, you may need to "request desktop
+site" for the equations to be properly displayed. This post is licensed
+under CC BY-SA and GNU FDL./
+
+** Shapley values
+ :PROPERTIES:
+ :CUSTOM_ID: shapley-values
+ :END:
+A coalitional game $(v, N)$ of $n$ players involves
+
+- The set $N = \{1, 2, ..., n\}$ that represents the players.
+- A function $v: 2^N \to \mathbb R$, where $v(S)$ is the worth of
+ coalition $S \subset N$.
+
+The Shapley values $\phi_i(v)$ of such a game specify a fair way to
+distribute the total worth $v(N)$ to the players. It is defined as (in
+the following, for a set $S \subset N$ we use the convention $s = |S|$
+to be the number of elements of set $S$ and the shorthand
+$S - i := S \setminus \{i\}$ and $S + i := S \cup \{i\}$)
+
+$$\phi_i(v) = \sum_{S: i \in S} {(n - s)! (s - 1)! \over n!} (v(S) - v(S - i)).$$
+
+It is not hard to see that $\phi_i(v)$ can be viewed as an expectation:
+
+$$\phi_i(v) = \mathbb E_{S \sim \nu_i} (v(S) - v(S - i))$$
+
+where $\nu_i(S) = n^{-1} {n - 1 \choose s - 1}^{-1} 1_{i \in S}$, that
+is, first pick the size $s$ uniformly from $\{1, 2, ..., n\}$, then pick
+$S$ uniformly from the subsets of $N$ that have size $s$ and contain
+$i$.
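To make the definition concrete, here is a direct (exponential-time, so small-$n$ only) computation of the Shapley values in Python, tried out on a toy "glove" game of my own choosing for illustration:

```python
from itertools import combinations
from math import factorial

def shapley(v, n):
    """Shapley values phi_i(v) of an n-player game, computed directly from
    the sum over subsets S containing i, weighted by (n-s)!(s-1)!/n!.
    v maps a frozenset (a subset of {0, ..., n-1}) to its worth."""
    phi = [0.0] * n
    for i in range(n):
        for s in range(1, n + 1):
            for S in combinations(range(n), s):
                if i in S:
                    S = frozenset(S)
                    weight = factorial(n - s) * factorial(s - 1) / factorial(n)
                    phi[i] += weight * (v(S) - v(S - {i}))
    return phi

# A toy 3-player game: players 0 and 1 each hold a left glove, player 2
# holds a right one; only a coalition with a matched pair has worth 1.
v = lambda S: 1.0 if 2 in S and (0 in S or 1 in S) else 0.0
phi = shapley(v, 3)

assert abs(sum(phi) - 1.0) < 1e-12   # Efficiency: sums to v(N) - v(empty)
assert abs(phi[0] - phi[1]) < 1e-12  # Symmetry: players 0 and 1 interchangeable
```

The assertions check the Efficiency and Symmetry properties listed below; the holder of the scarce right glove ends up with value $2/3$ against $1/6$ each for the other two.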
+
+The Shapley values satisfy some nice properties which are readily
+verified, including:
+
+- *Efficiency*. $\sum_i \phi_i(v) = v(N) - v(\emptyset)$.
+- *Symmetry*. If for some $i, j \in N$, for all $S \subset N$, we have
+ $v(S + i) = v(S + j)$, then $\phi_i(v) = \phi_j(v)$.
+- *Null player*. If for some $i \in N$, for all $S \subset N$, we have
+ $v(S + i) = v(S)$, then $\phi_i(v) = 0$.
+- *Linearity*. $\phi_i$ is linear in games. That is
+ $\phi_i(v) + \phi_i(w) = \phi_i(v + w)$, where $v + w$ is defined by
+ $(v + w)(S) := v(S) + w(S)$.
+
+In the literature, an added assumption $v(\emptyset) = 0$ is often
+given, in which case the Efficiency property is defined as
+$\sum_i \phi_i(v) = v(N)$. Here I discard this assumption to avoid minor
+inconsistencies across different sources. For example, in the LIME
+paper, the local model is defined without an intercept, even though the
+underlying $v(\emptyset)$ may not be $0$. In the SHAP paper, an
+intercept $\phi_0 = v(\emptyset)$ is added which fixes this problem when
+making connections to the Shapley values.
+
+Conversely, according to Strumbelj-Kononenko (2010), it was shown in
+Shapley's original paper (Shapley, 1953) that these four properties
+together with $v(\emptyset) = 0$ uniquely define the Shapley values.
+
+** LIME
+ :PROPERTIES:
+ :CUSTOM_ID: lime
+ :END:
+LIME (Ribeiro et al. 2016) is a method that offers a way to explain
+feature contributions of supervised learning models locally.
+
+Let $f: X_1 \times X_2 \times ... \times X_n \to \mathbb R$ be a
+function. We can think of $f$ as a model, where $X_j$ is the space of the
+$j$th feature. For example, in a language model, $X_j$ may correspond to
+the count of the $j$th word in the vocabulary, i.e. the bag-of-words
+model.
+
+The output may be something like housing price, or log-probability of
+something.
+
+LIME tries to assign a value to each feature /locally/. By locally, we
+mean that given a specific sample $x \in X := \prod_{i = 1}^n X_i$, we
+want to fit a model around it.
+
+More specifically, let $h_x: 2^N \to X$ be a function defined by
+
+$$(h_x(S))_i =
+\begin{cases}
+x_i, & \text{if }i \in S; \\
+0, & \text{otherwise.}
+\end{cases}$$
+
+That is, $h_x(S)$ masks the features that are not in $S$, or in other
+words, we are perturbing the sample $x$. Specifically, $h_x(N) = x$.
+Alternatively, the $0$ in the "otherwise" case can be replaced by some
+kind of default value (see the section titled SHAP in this post).
+
+For a set $S \subset N$, let us denote by $1_S \in \{0, 1\}^n$ the
+$n$-bit vector whose $k$th bit is $1$ if and only if $k \in S$.
+
+Basically, LIME samples $S_1, S_2, ..., S_m \subset N$ to obtain a set
+of perturbed samples $h_x(S_i)$ in the $X$ space, and then fits a
+linear model $g$ using $1_{S_i}$ as the input samples and $f(h_x(S_i))$
+as the output samples:
+
+*Problem*(LIME). Find $w = (w_1, w_2, ..., w_n)$ that minimises
+
+$$\sum_i (w \cdot 1_{S_i} - f(h_x(S_i)))^2 \pi_x(h_x(S_i))$$
+
+where $\pi_x(x')$ is a function that penalises $x'$s that are far away
+from $x$. In the LIME paper the Gaussian kernel was used:
+
+$$\pi_x(x') = \exp\left({- \|x - x'\|^2 \over \sigma^2}\right).$$
+
+Then $w_i$ represents the importance of the $i$th feature.
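The procedure above can be sketched in a few lines of numpy. This is my own minimal sketch, not the reference implementation: the black-box model, kernel width and sample count are made-up, and weighted least squares stands in for the paper's Lasso.

```python
import numpy as np

def lime_weights(f, x, m=1000, sigma=1.0, seed=0):
    """Solve the Problem (LIME) above: draw m random subsets S_i, form the
    perturbed samples h_x(S_i) by zeroing the masked features, and fit w
    by pi_x-weighted least squares on the pairs (1_{S_i}, f(h_x(S_i)))."""
    rng = np.random.default_rng(seed)
    n = len(x)
    Z = rng.integers(0, 2, size=(m, n)).astype(float)  # rows are 1_{S_i}
    X_pert = Z * x                                     # h_x(S_i)
    y = np.array([f(xp) for xp in X_pert])             # black-box outputs
    pi = np.exp(-np.sum((X_pert - x) ** 2, axis=1) / sigma ** 2)
    sq = np.sqrt(pi)
    # weighted least squares: min sum_i pi_i (w . 1_{S_i} - y_i)^2
    w, *_ = np.linalg.lstsq(Z * sq[:, None], y * sq, rcond=None)
    return w

# Sanity check on a black box that happens to be linear with no intercept:
# then f(h_x(S)) = sum_{i in S} beta_i x_i, so LIME recovers w_i = beta_i x_i.
beta = np.array([1.0, -2.0, 0.5])
f = lambda z: z @ beta
x = np.array([1.0, 1.0, 2.0])
w = lime_weights(f, x)
assert np.allclose(w, beta * x)
```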
+
+The LIME model has a more general framework, but the specific model
+considered in the paper is the one described above, with a Lasso for
+feature selection.
+
+*Remark*. One difference between our account here and the one in the
+LIME paper is: the dimension of the data space may differ from $n$ (see
+Section 3.1 of that paper). But in the case of text data, they do use
+bag-of-words (our $X$) for an "intermediate" representation. So my
+understanding is, in their context, there is an "original" data space
+(let's call it $X'$). And there is a one-one correspondence between $X'$
+and $X$ (let's call it $r: X' \to X$), so that given a sample
+$x' \in X'$, we can compute the output of $S$ in the local model with
+$f(r^{-1}(h_{r(x')}(S)))$. For example, when $X$ is the bag-of-words
+space, $X'$ may be the word embedding vector space, so that
+$r(x') = A^{-1} x'$, where $A$ is the word embedding matrix. Therefore,
+without loss of generality, we assume the input space to be $X$ which is
+of dimension $n$.
+
+** Shapley values and LIME
+ :PROPERTIES:
+ :CUSTOM_ID: shapley-values-and-lime
+ :END:
+The connection between the Shapley values and LIME is noted in
+Lundberg-Lee (2017), but the underlying connection goes back to 1988
+(Charnes et al.).
+
+To see the connection, we need to modify LIME a bit.
+
+First, we need to make LIME less efficient by considering /all/ the
+$2^n$ subsets instead of the $m$ samples $S_1, S_2, ..., S_{m}$.
+
+Then we need to relax the definition of $\pi_x$. It no longer needs to
+penalise samples that are far away from $x$. In fact, we will see later
+that the choice of $\pi_x(x')$ that yields the Shapley values is high
+when $x'$ is very close or very far away from $x$, and low otherwise. We
+further add the restriction that $\pi_x(h_x(S))$ only depends on the
+size of $S$, thus we rewrite it as $q(s)$ instead.
+
+We also denote $v(S) := f(h_x(S))$ and $w(S) := \sum_{i \in S} w_i$.
+
+Finally, we add the Efficiency property as a constraint:
+$\sum_{i = 1}^n w_i = f(x) - f(h_x(\emptyset)) = v(N) - v(\emptyset)$.
+
+Then the problem becomes a weighted linear regression:
+
+*Problem*. Minimise $\sum_{S \subset N} (w(S) - v(S))^2 q(s)$ over $w$,
+subject to $w(N) = v(N) - v(\emptyset)$.
+
+*Claim* (Charnes et al. 1988). The solution to this problem is
+
+$$w_i = {1 \over n} (v(N) - v(\emptyset)) + \left(\sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s)\right)^{-1} \sum_{S \subset N: i \in S} \left({n - s \over n} q(s) v(S) - {s - 1 \over n} q(s - 1) v(S - i)\right). \qquad (-1)$$
+
+Specifically, if we choose
+
+$$q(s) = c {n - 2 \choose s - 1}^{-1}$$
+
+for any constant $c$, then $w_i = \phi_i(v)$ are the Shapley values.
+
+*Remark*. Don't worry about this specific choice of $q(s)$ when $s = 0$
+or $n$, because $q(0)$ and $q(n)$ do not appear on the right hand side
+of (-1). Therefore they can be defined to be of any value. A common
+convention of the binomial coefficients is to set ${\ell \choose k} = 0$
+if $k < 0$ or $k > \ell$, in which case $q(0) = q(n) = \infty$.
+
+In Lundberg-Lee (2017), $c$ is chosen to be $1 / n$, see Theorem 2
+there.
+
+In Charnes et al. 1988, the $w_i$s defined in (-1) are called the
+generalised Shapley values.
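The claim can also be checked numerically. The sketch below is my own check (with $c = 1$): it solves the constrained weighted regression via its KKT system on a random game and compares against the Shapley values computed directly. The sum runs over proper nonempty subsets only, which changes nothing since the $S = \emptyset$ and $S = N$ terms are constant on the constraint set.

```python
import numpy as np
from itertools import combinations
from math import comb, factorial

rng = np.random.default_rng(1)
n = 4
N = frozenset(range(n))
subsets = [frozenset(S) for s in range(n + 1)
           for S in combinations(range(n), s)]
v = {S: rng.standard_normal() for S in subsets}  # a random game

# Direct Shapley values
phi = np.zeros(n)
for i in range(n):
    for S in subsets:
        if i in S:
            s = len(S)
            phi[i] += (factorial(n - s) * factorial(s - 1) / factorial(n)
                       * (v[S] - v[S - {i}]))

# Constrained weighted regression with q(s) = C(n-2, s-1)^{-1} (i.e. c = 1)
inner = [S for S in subsets if 0 < len(S) < n]
A = np.array([[float(i in S) for i in range(n)] for S in inner])
b = np.array([v[S] for S in inner])
q = np.array([1.0 / comb(n - 2, len(S) - 1) for S in inner])
# KKT system for: min (Aw - b)^T diag(q) (Aw - b)
#                 s.t. 1^T w = v(N) - v(empty)
H = 2 * A.T @ (q[:, None] * A)
g = 2 * A.T @ (q * b)
K = np.block([[H, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
rhs = np.append(g, v[N] - v[frozenset()])
w = np.linalg.solve(K, rhs)[:n]

assert np.allclose(w, phi)  # the regression recovers the Shapley values
```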
+
+*Proof*. The Lagrangian is
+
+$$L(w, \lambda) = \sum_{S \subset N} (v(S) - w(S))^2 q(s) - \lambda(w(N) - v(N) + v(\emptyset)).$$
+
+and by making $\partial_{w_i} L(w, \lambda) = 0$ we have
+
+$${1 \over 2} \lambda = \sum_{S \subset N: i \in S} (w(S) - v(S)) q(s). \qquad (0)$$
+
+Summing (0) over $i$ and dividing by $n$, we have
+
+$${1 \over 2} \lambda = {1 \over n} \sum_i \sum_{S: i \in S} (w(S) q(s) - v(S) q(s)). \qquad (1)$$
+
+We examine each of the two terms on the right hand side.
+
+Counting the terms involving $w_i$ and $w_j$ for $j \neq i$, and using
+$w(N) = v(N) - v(\emptyset)$ we have
+
+$$\begin{aligned}
+&\sum_{S \subset N: i \in S} w(S) q(s) \\
+&= \sum_{s = 1}^n {n - 1 \choose s - 1} q(s) w_i + \sum_{j \neq i}\sum_{s = 2}^n {n - 2 \choose s - 2} q(s) w_j \\
+&= q(1) w_i + \sum_{s = 2}^n q(s) \left({n - 1 \choose s - 1} w_i + \sum_{j \neq i} {n - 2 \choose s - 2} w_j\right) \\
+&= q(1) w_i + \sum_{s = 2}^n \left({n - 2 \choose s - 1} w_i + {n - 2 \choose s - 2} (v(N) - v(\emptyset))\right) q(s) \\
+&= \sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s) w_i + \sum_{s = 2}^n {n - 2 \choose s - 2} q(s) (v(N) - v(\emptyset)). \qquad (2)
+\end{aligned}$$
+
+Summing (2) over $i$, we obtain
+
+$$\begin{aligned}
+&\sum_i \sum_{S: i \in S} w(S) q(s)\\
+&= \sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s) (v(N) - v(\emptyset)) + \sum_{s = 2}^n n {n - 2 \choose s - 2} q(s) (v(N) - v(\emptyset))\\
+&= \sum_{s = 1}^n s{n - 1 \choose s - 1} q(s) (v(N) - v(\emptyset)). \qquad (3)
+\end{aligned}$$
+
+For the second term in (1), we have
+
+$$\sum_i \sum_{S: i \in S} v(S) q(s) = \sum_{S \subset N} s v(S) q(s). \qquad (4)$$
+
+Plugging (3)(4) in (1), we have
+
+$${1 \over 2} \lambda = {1 \over n} \left(\sum_{s = 1}^n s {n - 1 \choose s - 1} q(s) (v(N) - v(\emptyset)) - \sum_{S \subset N} s q(s) v(S) \right). \qquad (5)$$
+
+Plugging (5)(2) in (0) and solve for $w_i$, we have
+
+$$w_i = {1 \over n} (v(N) - v(\emptyset)) + \left(\sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s) \right)^{-1} \left( \sum_{S: i \in S} q(s) v(S) - {1 \over n} \sum_{S \subset N} s q(s) v(S) \right). \qquad (6)$$
+
+By splitting all subsets of $N$ into ones that contain $i$ and ones that
+do not and pair them up, we have
+
+$$\sum_{S \subset N} s q(s) v(S) = \sum_{S: i \in S} (s q(s) v(S) + (s - 1) q(s - 1) v(S - i)).$$
+
+Plugging this back into (6) we get the desired result. $\square$
+
+** SHAP
+ :PROPERTIES:
+ :CUSTOM_ID: shap
+ :END:
+The paper that coined the term "SHAP values" (Lundberg-Lee 2017) is not
+clear in its definition of the "SHAP values" and its relation to LIME,
+so the following is my interpretation of their interpretation model,
+which coincides with a model studied in Strumbelj-Kononenko 2014.
+
+Recall that we want to calculate feature contributions to a model $f$ at
+a sample $x$.
+
+Let $\mu$ be a probability density function over the input space
+$X = X_1 \times ... \times X_n$. A natural choice would be the density
+that generates the data, or one that approximates such density (e.g.
+empirical distribution).
+
+The feature contribution (SHAP value) is thus defined as the Shapley
+value $\phi_i(v)$, where
+
+$$v(S) = \mathbb E_{z \sim \mu} (f(z) | z_S = x_S). \qquad (7)$$
+
+So it is a conditional expectation where $z_i$ is clamped for $i \in S$.
+In fact, the definition of feature contributions in this form predates
+Lundberg-Lee 2017. For example, it can be found in
+Strumbelj-Kononenko 2014.
+
+One simplification is to assume the $n$ features are independent, thus
+$\mu = \mu_1 \times \mu_2 \times ... \times \mu_n$. In this case, (7)
+becomes
+
+$$v(S) = \mathbb E_{z_{N \setminus S} \sim \mu_{N \setminus S}} f(x_S, z_{N \setminus S}) \qquad (8)$$
+
+For example, Strumbelj-Kononenko (2010) considers this scenario where
+$\mu$ is the uniform distribution over $X$, see Definition 4 there.
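Under the independence assumption (8), the SHAP values can be computed by clamping $z_S = x_S$ and averaging $f$ over background samples. Here is a brute-force sketch of my own (real SHAP implementations approximate the subset enumeration), checked against the linear case (9):

```python
import numpy as np
from itertools import combinations
from math import factorial

def shap_independent(f, x, background):
    """SHAP values with v(S) as in (8): features in S are clamped to x_S
    and the rest averaged over the rows of `background`, i.e. mu is taken
    to be the empirical distribution of the background data (with the
    features assumed independent)."""
    n = len(x)
    def v(S):
        Z = background.copy()
        idx = list(S)
        if idx:
            Z[:, idx] = x[idx]               # clamp z_S = x_S
        return float(np.mean([f(z) for z in Z]))
    vals = {S: v(S) for s in range(n + 1)
            for S in map(frozenset, combinations(range(n), s))}
    phi = np.zeros(n)
    for i in range(n):
        for S, vS in vals.items():
            if i in S:
                s = len(S)
                phi[i] += (factorial(n - s) * factorial(s - 1) / factorial(n)
                           * (vS - vals[S - {i}]))
    return phi

# Check against (9): for the linear model f(z) = z . beta, the SHAP value
# of feature i reduces to beta_i (x_i - E z_i), E over the background.
rng = np.random.default_rng(2)
beta = np.array([2.0, -1.0, 0.5])
f = lambda z: z @ beta
x = np.array([1.0, 0.5, 3.0])
background = rng.standard_normal((500, 3))
phi = shap_independent(f, x, background)
assert np.allclose(phi, beta * (x - background.mean(axis=0)))
```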
+
+A further simplification is model linearity, which means $f$ is linear.
+In this case, (8) becomes
+
+$$v(S) = f(x_S, \mathbb E_{\mu_{N \setminus S}} z_{N \setminus S}). \qquad (9)$$
+
+It is worth noting that to make the modified LIME model considered in
+the previous section fall under the linear SHAP framework (9), we need
+to make two further specialisations, the first is rather cosmetic: we
+need to change the definition of $h_x(S)$ to
+
+$$(h_x(S))_i =
+\begin{cases}
+x_i, & \text{if }i \in S; \\
+\mathbb E_{\mu_i} z_i, & \text{otherwise.}
+\end{cases}$$
+
+But we also need to boldly assume the original $f$ to be linear, which
+in my view, defeats the purpose of interpretability, because linear
+models are interpretable by themselves.
+
+One may argue that perhaps we do not need linearity to define $v(S)$ as
+in (9). If we do so, however, then (9) loses mathematical meaning. A
+bigger question is: how effective is SHAP? An even bigger question: in
+general, how do we evaluate models of interpretation?
+
+** Evaluating SHAP
+ :PROPERTIES:
+ :CUSTOM_ID: evaluating-shap
+ :END:
+The quest of the SHAP paper can be decoupled into two independent
+components: showing the niceties of Shapley values and choosing the
+coalitional game $v$.
+
+The SHAP paper argues that Shapley values $\phi_i(v)$ are a good
+measurement because they are the only values satisfying some nice
+properties including the Efficiency property mentioned at the beginning
+of the post, invariance under permutation and monotonicity, see the
+paragraph below Theorem 1 there, which refers to Theorem 2 of Young
+(1985).
+
+Indeed, both efficiency (the "additive feature attribution methods" in
+the paper) and monotonicity are meaningful when considering $\phi_i(v)$
+as the feature contribution of the $i$th feature.
+
+The question is thus reduced to the second component: what constitutes a
+nice choice of $v$?
+
+The SHAP paper answers this question with 3 options with increasing
+simplification: (7)(8)(9) in the previous section of this post
+(corresponding to (9)(11)(12) in the paper). They are intuitive, but it
+will be interesting to see more concrete (or even mathematical)
+justifications of such choices.
+
+** References
+ :PROPERTIES:
+ :CUSTOM_ID: references
+ :END:
+
+- Charnes, A., B. Golany, M. Keane, and J. Rousseau. "Extremal Principle
+ Solutions of Games in Characteristic Function Form: Core, Chebychev
+ and Shapley Value Generalizations." In Econometrics of Planning and
+ Efficiency, edited by Jati K. Sengupta and Gopal K. Kadekodi, 123--33.
+ Dordrecht: Springer Netherlands, 1988.
+ [[https://doi.org/10.1007/978-94-009-3677-5_7]].
+- Lundberg, Scott, and Su-In Lee. "A Unified Approach to Interpreting
+ Model Predictions." ArXiv:1705.07874 [Cs, Stat], May 22, 2017.
+ [[http://arxiv.org/abs/1705.07874]].
+- Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. "'Why Should
+ I Trust You?': Explaining the Predictions of Any Classifier."
+ ArXiv:1602.04938 [Cs, Stat], February 16, 2016.
+ [[http://arxiv.org/abs/1602.04938]].
+- Shapley, L. S. "17. A Value for n-Person Games." In Contributions to
+ the Theory of Games (AM-28), Volume II, Vol. 2. Princeton: Princeton
+ University Press, 1953. [[https://doi.org/10.1515/9781400881970-018]].
+- Strumbelj, Erik, and Igor Kononenko. "An Efficient Explanation of
+ Individual Classifications Using Game Theory." J. Mach. Learn. Res. 11
+ (March 2010): 1--18.
+- Strumbelj, Erik, and Igor Kononenko. "Explaining Prediction Models and
+ Individual Predictions with Feature Contributions." Knowledge and
+ Information Systems 41, no. 3 (December 2014): 647--65.
+ [[https://doi.org/10.1007/s10115-013-0679-x]].
+- Young, H. P. "Monotonic Solutions of Cooperative Games." International
+ Journal of Game Theory 14, no. 2 (June 1, 1985): 65--72.
+ [[https://doi.org/10.1007/BF01769885]].
diff --git a/posts/2019-01-03-discriminant-analysis.org b/posts/2019-01-03-discriminant-analysis.org
new file mode 100644
index 0000000..34c16bf
--- /dev/null
+++ b/posts/2019-01-03-discriminant-analysis.org
@@ -0,0 +1,293 @@
+#+title: Discriminant analysis
+
+#+DATE: <2019-01-03>
+
+In this post I talk about the theory and implementation of linear and
+quadratic discriminant analysis, classical methods in statistical
+learning.
+
+*Acknowledgement*. Various sources were of great help to my
+understanding of the subject, including Chapter 4 of
+[[https://web.stanford.edu/~hastie/ElemStatLearn/][The Elements of
+Statistical Learning]],
+[[http://cs229.stanford.edu/notes/cs229-notes2.pdf][Stanford CS229
+Lecture notes]], and
+[[https://github.com/scikit-learn/scikit-learn/blob/7389dba/sklearn/discriminant_analysis.py][the
+scikit-learn code]]. Research was done while working at KTH mathematics
+department.
+
+/If you are reading on a mobile device, you may need to "request desktop
+site" for the equations to be properly displayed. This post is licensed
+under CC BY-SA and GNU FDL./
+
+** Theory
+ :PROPERTIES:
+ :CUSTOM_ID: theory
+ :END:
+Quadratic discriminant analysis (QDA) is a classical classification
+algorithm. It assumes that the data is generated by Gaussian
+distributions, where each class has its own mean and covariance.
+
+$$(x | y = i) \sim N(\mu_i, \Sigma_i).$$
+
+It also assumes a categorical class prior:
+
+$$\mathbb P(y = i) = \pi_i$$
+
+The log-likelihood is thus
+
+$$\begin{aligned}
+\log \mathbb P(y = i | x) &= \log \mathbb P(x | y = i) + \log \mathbb P(y = i) + C\\
+&= - {1 \over 2} \log \det \Sigma_i - {1 \over 2} (x - \mu_i)^T \Sigma_i^{-1} (x - \mu_i) + \log \pi_i + C', \qquad (0)
+\end{aligned}$$
+
+where $C$ and $C'$ are constants.
+
+Thus the prediction is done by taking argmax of the above formula.
+
+In training, let $X$, $y$ be the input data, where $X$ is of shape
+$m \times n$, and $y$ of shape $m$. We adopt the convention that each
+row of $X$ is a sample $x^{(i)T}$. So there are $m$ samples and $n$
+features. Let $m_i = \#\{j: y_j = i\}$ be the number of samples in
+class $i$. Let $n_c$ be the number of classes.
+
+We estimate $\mu_i$ by the sample means, and $\pi_i$ by the frequencies:
+
+$$\begin{aligned}
+\mu_i &:= {1 \over m_i} \sum_{j: y_j = i} x^{(j)}, \\
+\pi_i &:= \mathbb P(y = i) = {m_i \over m}.
+\end{aligned}$$
+
+Linear discriminant analysis (LDA) is a specialisation of QDA: it
+assumes all classes share the same covariance, i.e. $\Sigma_i = \Sigma$
+for all $i$.
+
+Gaussian Naive Bayes is a different specialisation of QDA: it assumes
+that all $\Sigma_i$ are diagonal, since all the features are assumed to
+be independent.
+
+*** QDA
+ :PROPERTIES:
+ :CUSTOM_ID: qda
+ :END:
+We look at QDA.
+
+We estimate $\Sigma_i$ by the sample covariance:
+
+$$\begin{aligned}
+\Sigma_i &= {1 \over m_i - 1} \sum_{j: y_j = i} \hat x^{(j)} \hat x^{(j)T}.
+\end{aligned}$$
+
+where $\hat x^{(j)} = x^{(j)} - \mu_{y_j}$ are the centred $x^{(j)}$.
+Plugging this into (0) we are done.
+
+There are two problems that can break the algorithm. First, if one of
+the $m_i$ is $1$, then $\Sigma_i$ is ill-defined. Second, one of the
+$\Sigma_i$ might be singular.
+
+In either case, there is no way around it, and the implementation should
+throw an exception.
+
+This won't be a problem for LDA, though, unless there is only one
+sample for each class.
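The estimation and prediction steps above can be sketched as follows. This is a toy implementation of mine, not scikit-learn's; the class name and checks are illustrative.

```python
import numpy as np

class QDA:
    """Toy QDA following (0): per-class mean, covariance and log prior;
    prediction by argmax of the log-posterior."""
    def fit(self, X, y):
        self.classes = np.unique(y)
        self.mu, self.sigma, self.log_pi = [], [], []
        for c in self.classes:
            Xc = X[y == c]
            if len(Xc) < 2:
                raise ValueError("Sigma_i ill-defined: class with one sample")
            S = np.cov(Xc, rowvar=False)   # 1 / (m_i - 1) normalisation
            if np.linalg.matrix_rank(S) < X.shape[1]:
                raise ValueError("singular class covariance")
            self.mu.append(Xc.mean(axis=0))
            self.sigma.append(S)
            self.log_pi.append(np.log(len(Xc) / len(X)))
        return self

    def predict(self, X):
        scores = []
        for mu, S, lp in zip(self.mu, self.sigma, self.log_pi):
            d = X - mu
            _, logdet = np.linalg.slogdet(S)
            maha = np.einsum('ij,jk,ik->i', d, np.linalg.inv(S), d)
            # (0): -(1/2) log det Sigma_i
            #      - (1/2) (x - mu_i)^T Sigma_i^{-1} (x - mu_i) + log pi_i
            scores.append(-0.5 * logdet - 0.5 * maha + lp)
        return self.classes[np.argmax(scores, axis=0)]

# Two well-separated blobs with different (diagonal) covariances
rng = np.random.default_rng(3)
X0 = rng.standard_normal((100, 2)) @ np.diag([1.0, 0.2])
X1 = rng.standard_normal((100, 2)) @ np.diag([0.2, 1.0]) + 5.0
X, y = np.vstack([X0, X1]), np.repeat([0, 1], 100)
pred = QDA().fit(X, y).predict(X)
assert (pred == y).mean() > 0.99
```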
+
+*** Vanilla LDA
+ :PROPERTIES:
+ :CUSTOM_ID: vanilla-lda
+ :END:
+Now let us look at LDA.
+
+Since all classes share the same covariance, we estimate $\Sigma$ using
+sample variance
+
+$$\begin{aligned}
+\Sigma &= {1 \over m - n_c} \sum_j \hat x^{(j)} \hat x^{(j)T},
+\end{aligned}$$
+
+where $\hat x^{(j)} = x^{(j)} - \mu_{y_j}$ and ${1 \over m - n_c}$ comes
+from [[https://en.wikipedia.org/wiki/Bessel%27s_correction][Bessel's
+Correction]].
+
+Let us write down the decision function (0). We can remove the first
+term on the right hand side, since all $\Sigma_i$ are the same, and we
+only care about argmax of that equation. Thus it becomes
+
+$$- {1 \over 2} (x - \mu_i)^T \Sigma^{-1} (x - \mu_i) + \log\pi_i. \qquad (1)$$
+
+Notice that we have just circumvented the problem of figuring out
+$\log \det \Sigma$ when $\Sigma$ is singular.
+
+But how about $\Sigma^{-1}$?
+
+We sidestep this problem by using the pseudoinverse of $\Sigma$ instead.
+This can be seen as applying a linear transformation to $X$ to turn its
+covariance matrix to identity. And thus the model becomes a sort of a
+nearest neighbour classifier.
+
+*** Nearest neighbour classifier
+ :PROPERTIES:
+ :CUSTOM_ID: nearest-neighbour-classifier
+ :END:
+More specifically, we want to transform the first term of (0) to a norm
+to get a classifier based on nearest neighbour modulo $\log \pi_i$:
+
+$$- {1 \over 2} \|A(x - \mu_i)\|^2 + \log\pi_i$$
+
+To compute $A$, we denote
+
+$$X_c = X - M,$$
+
+where the $i$th row of $M$ is $\mu_{y_i}^T$, the mean of the class $x_i$
+belongs to, so that $\Sigma = {1 \over m - n_c} X_c^T X_c$.
+
+Let
+
+$${1 \over \sqrt{m - n_c}} X_c = U_x \Sigma_x V_x^T$$
+
+be the SVD of ${1 \over \sqrt{m - n_c}}X_c$. Let
+$D_x = \text{diag} (s_1, ..., s_r)$ be the diagonal matrix with all the
+nonzero singular values, and rewrite $V_x$ as an $n \times r$ matrix
+consisting of the first $r$ columns of $V_x$.
+
+Then with an abuse of notation, the pseudoinverse of $\Sigma$ is
+
+$$\Sigma^{-1} = V_x D_x^{-2} V_x^T.$$
+
+So we just need to make $A = D_x^{-1} V_x^T$. When it comes to
+prediction, just transform $x$ with $A$, and find the nearest centroid
+$A \mu_i$ (again, modulo $\log \pi_i$) and label the input with $i$.
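The whitening-then-nearest-centroid procedure can be sketched as follows (numpy assumed; the names and the =tol= cutoff for deciding which singular values are nonzero are my choices):

```python
import numpy as np

def lda_whitening(X, y, mus, n_classes, tol=1e-10):
    """A = D_x^{-1} V_x^T from the SVD of the centred data."""
    m, n = X.shape
    Xc = X - mus[y]                          # centre each sample by its class mean
    _, s, Vt = np.linalg.svd(Xc / np.sqrt(m - n_classes), full_matrices=False)
    r = int((s > tol).sum())                 # keep only nonzero singular values
    return Vt[:r] / s[:r, None]              # shape (r, n)

def predict(A, x, mus, pis):
    """Nearest transformed centroid, modulo log pi_i."""
    d = ((A @ (x - mus).T) ** 2).sum(axis=0)
    return int(np.argmax(-0.5 * d + np.log(pis)))
```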
+
+*** Dimensionality reduction
+ :PROPERTIES:
+ :CUSTOM_ID: dimensionality-reduction
+ :END:
+We can further simplify the prediction by dimensionality reduction.
+Assume $n_c \le n$. Then the centroids span an affine space of dimension
+$p$, which is at most $n_c - 1$. So what we can do is to project both the
+transformed sample $Ax$ and centroids $A\mu_i$ to the linear subspace
+parallel to the affine space, and do the nearest neighbour
+classification there.
+
+So we can perform SVD on the matrix $(M - \bar x) V_x D_x^{-1}$ where
+$\bar x$, a row vector, is the sample mean of all data i.e. average of
+rows of $X$:
+
+$$(M - \bar x) V_x D_x^{-1} = U_m \Sigma_m V_m^T.$$
+
+Again, we let $V_m$ be the $r \times p$ matrix obtained by keeping the
+first $p$ columns of $V_m$.
+
+The projection operator is thus $V_m$. And so the final transformation
+is $V_m^T D_x^{-1} V_x^T$.
+
+There is no reason to stop here, and we can set $p$ even smaller, which
+will result in a lossy compression / regularisation equivalent to doing
+[[https://en.wikipedia.org/wiki/Principal_component_analysis][principal
+component analysis]] on $(M - \bar x) V_x D_x^{-1}$.
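Putting the two SVDs together, the final transformation might look like this (a sketch under the same assumptions as above; =p= is the target dimension and the function name is mine):

```python
import numpy as np

def lda_reduced_transform(X, y, mus, n_classes, p, tol=1e-10):
    """B = V_m^T D_x^{-1} V_x^T: whitening followed by projection."""
    m, n = X.shape
    xbar = X.mean(axis=0)                    # sample mean of all data
    Xc = X - mus[y]
    _, s, Vxt = np.linalg.svd(Xc / np.sqrt(m - n_classes), full_matrices=False)
    r = int((s > tol).sum())
    A = Vxt[:r] / s[:r, None]                # D_x^{-1} V_x^T
    Mc = (mus[y] - xbar) @ A.T               # (M - xbar) V_x D_x^{-1}
    _, _, Vmt = np.linalg.svd(Mc, full_matrices=False)
    return Vmt[:p] @ A                       # shape (p, n)
```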
+
+Note that as of 2019-01-04, in the
+[[https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/discriminant_analysis.py][scikit-learn
+implementation of LDA]], the prediction is done without any lossy
+compression, even if the parameter =n_components= is set to be smaller
+than dimension of the affine space spanned by the centroids. In other
+words, the prediction does not change regardless of =n_components=.
+
+*** Fisher discriminant analysis
+ :PROPERTIES:
+ :CUSTOM_ID: fisher-discriminant-analysis
+ :END:
+The Fisher discriminant analysis involves finding an $n$-dimensional
+vector $a$ that maximises between-class covariance with respect to
+within-class covariance:
+
+$${a^T M_c^T M_c a \over a^T X_c^T X_c a},$$
+
+where $M_c = M - \bar x$ is the centred sample mean matrix.
+
+As it turns out, this is (almost) equivalent to the derivation above,
+modulo a constant. In particular, $a = c V_x D_x^{-1} V_m$ where
+$p = 1$, for an arbitrary constant $c$.
+
+To see this, we can first multiply the denominator with a constant
+${1 \over m - n_c}$ so that the matrix in the denominator becomes the
+covariance estimate $\Sigma$.
+
+We decompose $a$: $a = V_x D_x^{-1} b + \tilde V_x \tilde b$, where
+$\tilde V_x$ consists of column vectors orthogonal to the column space
+of $V_x$.
+
+We ignore the second term in the decomposition. In other words, we only
+consider $a$ in the column space of $V_x$.
+
+Then the problem is to find an $r$-dimensional vector $b$ to maximise
+
+$${b^T (M_c V_x D_x^{-1})^T (M_c V_x D_x^{-1}) b \over b^T b}.$$
+
+This is the problem of principal component analysis, and so $b$ is the
+first column of $V_m$.
+
+Therefore, the solution to Fisher discriminant analysis is
+$a = c V_x D_x^{-1} V_m$ with $p = 1$.
+
+*** Linear model
+ :PROPERTIES:
+ :CUSTOM_ID: linear-model
+ :END:
+The model is called linear discriminant analysis because it is a linear
+model. To see this, let $B = V_m^T D_x^{-1} V_x^T$ be the matrix of
+transformation. Now we are comparing
+
+$$- {1 \over 2} \| B x - B \mu_k\|^2 + \log \pi_k$$
+
+across all $k$s. Expanding the norm and removing the common term
+$\|B x\|^2$, we see a linear form:
+
+$$\mu_k^T B^T B x - {1 \over 2} \|B \mu_k\|^2 + \log\pi_k$$
+
+So, writing $K$ for the matrix whose $k$th row is $\mu_k^T$, the
+prediction for $X_{\text{new}}$ is
+
+$$\text{argmax}_{\text{axis}=0} \left(K B^T B X_{\text{new}}^T - {1 \over 2} \|K B^T\|_{\text{axis}=1}^2 + \log \pi\right)$$
+
+thus the decision boundaries are linear.
+
+This is how scikit-learn implements LDA, by inheriting from
+=LinearClassifierMixin= and redirecting the classification there.
+
+** Implementation
+ :PROPERTIES:
+ :CUSTOM_ID: implementation
+ :END:
+This is where things get interesting. How do I validate my understanding
+of the theory? By implementing and testing the algorithm.
+
+I try to implement it as close as possible to the natural language /
+mathematical descriptions of the model, which means clarity over
+performance.
+
+How about testing? Numerical experiments are harder to test than
+combinatorial / discrete algorithms in general because the output is
+less verifiable by hand. My shortcut solution to this problem is to test
+against output from the scikit-learn package.
+
+It turned out to be harder than expected, as I had to dig into the code
+of scikit-learn whenever the outputs mismatched. Their code is quite
+well-written though.
+
+The result is
+[[https://github.com/ycpei/machine-learning/tree/master/discriminant-analysis][here]].
+
+*** Fun facts about LDA
+ :PROPERTIES:
+ :CUSTOM_ID: fun-facts-about-lda
+ :END:
+One property that can be used to test the LDA implementation is the fact
+that the scatter matrix $B(X - \bar x)^T (X - \bar x) B^T$ of the
+transformed centred sample is diagonal.
+
+This can be derived by using another fun fact that the sum of the
+in-class scatter matrix and the between-class scatter matrix is the
+sample scatter matrix:
+
+$$X_c^T X_c + M_c^T M_c = (X - \bar x)^T (X - \bar x) = (X_c + M_c)^T (X_c + M_c).$$
+
+The verification is not very hard and left as an exercise.
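The scatter decomposition is also easy to check numerically; a sketch with numpy on random data with two classes:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(10, 3))
y = np.array([0, 1, 0, 1, 0, 1, 0, 1, 0, 1])
xbar = X.mean(axis=0)
mus = np.stack([X[y == k].mean(axis=0) for k in range(2)])
Xc = X - mus[y]            # in-class centred samples
Mc = mus[y] - xbar         # centred per-sample class means
total = (X - xbar).T @ (X - xbar)
# in-class scatter + between-class scatter = total scatter
assert np.allclose(Xc.T @ Xc + Mc.T @ Mc, total)
```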
diff --git a/posts/2019-02-14-raise-your-elbo.org b/posts/2019-02-14-raise-your-elbo.org
new file mode 100644
index 0000000..9e15552
--- /dev/null
+++ b/posts/2019-02-14-raise-your-elbo.org
@@ -0,0 +1,1150 @@
+#+title: Raise your ELBO
+
+#+date: <2019-02-14>
+
+In this post I give an introduction to variational inference, which is
+about maximising the evidence lower bound (ELBO).
+
+I use a top-down approach, starting with the KL divergence and the ELBO,
+to lay the mathematical framework of all the models in this post.
+
+Then I define mixture models and the EM algorithm, with Gaussian mixture
+model (GMM), probabilistic latent semantic analysis (pLSA) and the
+hidden Markov model (HMM) as examples.
+
+After that I present the fully Bayesian version of EM, also known as
+mean field approximation (MFA), and apply it to fully Bayesian mixture
+models, with fully Bayesian GMM (also known as variational GMM), latent
+Dirichlet allocation (LDA) and Dirichlet process mixture model (DPMM) as
+examples.
+
+Then I explain stochastic variational inference, a modification of EM
+and MFA to improve efficiency.
+
+Finally I talk about autoencoding variational Bayes (AEVB), a
+Monte-Carlo + neural network approach to raising the ELBO, exemplified
+by the variational autoencoder (VAE). I also show its fully Bayesian
+version.
+
+*Acknowledgement*. The following texts and resources were illuminating
+during the writing of this post: the Stanford CS228 notes
+([[https://ermongroup.github.io/cs228-notes/inference/variational/][1]],[[https://ermongroup.github.io/cs228-notes/learning/latent/][2]]),
+the
+[[https://www.cs.tau.ac.il/~rshamir/algmb/presentations/EM-BW-Ron-16%20.pdf][Tel
+Aviv Algorithms in Molecular Biology slides]] (clear explanations of the
+connection between EM and Baum-Welch), Chapter 10 of
+[[https://www.springer.com/us/book/9780387310732][Bishop's book]]
+(brilliant introduction to variational GMM), Section 2.5 of
+[[http://cs.brown.edu/~sudderth/papers/sudderthPhD.pdf][Sudderth's
+thesis]] and [[https://metacademy.org][metacademy]]. Also thanks to
+Josef Lindman Hörnlund for discussions. The research was done while
+working at KTH mathematics department.
+
+/If you are reading on a mobile device, you may need to "request desktop
+site" for the equations to be properly displayed. This post is licensed
+under CC BY-SA and GNU FDL./
+
+** KL divergence and ELBO
+ :PROPERTIES:
+ :CUSTOM_ID: kl-divergence-and-elbo
+ :END:
+Let $p$ and $q$ be two probability measures. The Kullback-Leibler (KL)
+divergence is defined as
+
+$$D(q||p) = E_q \log{q \over p}.$$
+
+It achieves minimum $0$ when $p = q$.
+
+If $p$ can be further written as
+
+$$p(x) = {w(x) \over Z}, \qquad (0)$$
+
+where $Z$ is a normaliser, then
+
+$$\log Z = D(q||p) + L(w, q), \qquad(1)$$
+
+where $L(w, q)$ is called the evidence lower bound (ELBO), defined by
+
+$$L(w, q) = E_q \log{w \over q}. \qquad (1.25)$$
+
+From (1), we see that to minimise the nonnegative term $D(q || p)$, one
+can maximise the ELBO.
+
+To this end, we can simply discard $D(q || p)$ in (1) and obtain:
+
+$$\log Z \ge L(w, q) \qquad (1.3)$$
+
+and keep in mind that the inequality becomes an equality when
+$q = {w \over Z}$.
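For discrete distributions, identity (1) and bound (1.3) are easy to verify numerically; a sketch with numpy (the unnormalised $w$ and the $q$ are arbitrary choices of mine):

```python
import numpy as np

w = np.array([2.0, 1.0, 1.0])          # unnormalised target, p = w / Z
Z = w.sum()
p = w / Z
q = np.array([0.4, 0.3, 0.3])          # any distribution on the same support
D = (q * np.log(q / p)).sum()          # KL divergence D(q || p)
L = (q * np.log(w / q)).sum()          # ELBO L(w, q)
assert np.isclose(np.log(Z), D + L)    # identity (1)
assert np.log(Z) >= L and D >= 0       # inequality (1.3)
```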
+
+It is time to define the task of variational inference (VI), also known
+as variational Bayes (VB).
+
+*Definition*. Variational inference is concerned with maximising the
+ELBO $L(w, q)$.
+
+There are mainly two versions of VI, the half Bayesian and the fully
+Bayesian cases. Half Bayesian VI, to which the expectation-maximisation
+algorithm (EM) applies, instantiates (1.3) with
+
+$$\begin{aligned}
+Z &= p(x; \theta)\\
+w &= p(x, z; \theta)\\
+q &= q(z)
+\end{aligned}$$
+
+and the dummy variable $x$ in Equation (0) is substituted with $z$.
+
+Fully Bayesian VI, often just called VI, has the following
+instantiations:
+
+$$\begin{aligned}
+Z &= p(x) \\
+w &= p(x, z, \theta) \\
+q &= q(z, \theta)
+\end{aligned}$$
+
+and $x$ in Equation (0) is substituted with $(z, \theta)$.
+
+In both cases $\theta$ are parameters and $z$ are latent variables.
+
+*Remark on the naming of things*. The term "variational" comes from the
+fact that we perform calculus of variations: maximise some functional
+($L(w, q)$) over a set of functions ($q$). Note however that most VI /
+VB algorithms do not use any techniques from the calculus of variations,
+but only Jensen's inequality / the fact that $D(q||p)$ reaches its
+minimum when $p = q$. By this reasoning, EM is also a kind of VI, even
+though in the literature VI often refers to its fully Bayesian version.
+
+** EM
+ :PROPERTIES:
+ :CUSTOM_ID: em
+ :END:
+To illustrate the EM algorithms, we first define the mixture model.
+
+*Definition (mixture model)*. Given dataset $x_{1 : m}$, we assume the
+data has some underlying latent variable $z_{1 : m}$ that may take a
+value from a finite set $\{1, 2, ..., n_z\}$. Let $p(z_{i}; \pi)$ be
+categorically distributed according to the probability vector $\pi$.
+That is, $p(z_{i} = k; \pi) = \pi_k$. Also assume
+$p(x_{i} | z_{i} = k; \eta) = p(x_{i}; \eta_k)$. Find
+$\theta = (\pi, \eta)$ that maximises the likelihood
+$p(x_{1 : m}; \theta)$.
+
+Represented as a DAG (a.k.a. plate notation), the model looks like
+this:
+
+[[/assets/resources/mixture-model.png]]
+
+where the boxes with $m$ mean repetition $m$ times, since there are $m$
+independent pairs of $(x, z)$, and the same goes for $\eta$.
+
+The direct maximisation
+
+$$\max_\theta \sum_i \log p(x_{i}; \theta) = \max_\theta \sum_i \log \int p(x_{i} | z_i; \theta) p(z_i; \theta) dz_i$$
+
+is hard because of the integral in the log.
+
+We can fit this problem in (1.3) by having $Z = p(x_{1 : m}; \theta)$
+and $w = p(z_{1 : m}, x_{1 : m}; \theta)$. The plan is to update
+$\theta$ repeatedly so that $L(p(z, x; \theta_t), q(z))$ is
+non-decreasing over time $t$.
+
+Equation (1.3) at time $t$ for the $i$th datapoint is
+
+$$\log p(x_{i}; \theta_t) \ge L(p(z_i, x_{i}; \theta_t), q(z_i)) \qquad (2)$$
+
+Each timestep consists of two steps, the E-step and the M-step.
+
+At E-step, we set
+
+$$q(z_{i}) = p(z_{i}|x_{i}; \theta_t), $$
+
+to turn the inequality into equality. We denote $r_{ik} = q(z_i = k)$
+and call them responsibilities, so the posterior $q(z_i)$ is categorical
+distribution with parameter $r_i = r_{i, 1 : n_z}$.
+
+At M-step, we maximise $\sum_i L(p(x_{i}, z_{i}; \theta), q(z_{i}))$
+over $\theta$:
+
+$$\begin{aligned}
+\theta_{t + 1} &= \text{argmax}_\theta \sum_i L(p(x_{i}, z_{i}; \theta), p(z_{i} | x_{i}; \theta_t)) \\
+&= \text{argmax}_\theta \sum_i \mathbb E_{p(z_{i} | x_{i}; \theta_t)} \log p(x_{i}, z_{i}; \theta) \qquad (2.3)
+\end{aligned}$$
+
+So $\sum_i L(p(x_{i}, z_{i}; \theta), q(z_i))$ is non-decreasing at both
+the E-step and the M-step.
+
+We can see from this derivation that EM is half-Bayesian. The E-step is
+Bayesian because it computes the posterior of the latent variables, and
+the M-step is frequentist because it performs a maximum likelihood
+estimate of $\theta$.
+
+It is clear that the ELBO sum converges, as it is nondecreasing with an
+upper bound, but it is not clear whether the sum converges to the
+correct value, i.e. $\max_\theta p(x_{1 : m}; \theta)$. In fact EM does
+sometimes get stuck in a local maximum.
+
+A different way of describing EM, which will be useful in hidden Markov
+model is:
+
+- At E-step, one writes down the formula
+ $$\sum_i \mathbb E_{p(z_i | x_{i}; \theta_t)} \log p(x_{i}, z_i; \theta). \qquad (2.5)$$
+
+- At M-step, one finds $\theta_{t + 1}$ to be the $\theta$ that
+  maximises the above formula.
+
+*** GMM
+ :PROPERTIES:
+ :CUSTOM_ID: gmm
+ :END:
+Gaussian mixture model (GMM) is an example of mixture models.
+
+The space of the data is $\mathbb R^n$. We use the hypothesis that the
+data is Gaussian conditioned on the latent variable:
+
+$$(x_i; \eta_k) \sim N(\mu_k, \Sigma_k),$$
+
+so we write $\eta_k = (\mu_k, \Sigma_k)$.
+
+During E-step, the $q(z_i)$ can be directly computed using Bayes'
+theorem:
+
+$$r_{ik} = q(z_i = k) = \mathbb P(z_i = k | x_{i}; \theta_t)
+= {g_{\mu_{t, k}, \Sigma_{t, k}} (x_{i}) \pi_{t, k} \over \sum_{j = 1 : n_z} g_{\mu_{t, j}, \Sigma_{t, j}} (x_{i}) \pi_{t, j}},$$
+
+where
+$g_{\mu, \Sigma} (x) = (2 \pi)^{- n / 2} (\det \Sigma)^{-1 / 2} \exp(- {1 \over 2} (x - \mu)^T \Sigma^{-1} (x - \mu))$
+is the pdf of the Gaussian distribution $N(\mu, \Sigma)$.
+
+During M-step, we need to compute
+
+$$\text{argmax}_{\Sigma, \mu, \pi} \sum_{i = 1 : m} \sum_{k = 1 : n_z} r_{ik} \log (g_{\mu_k, \Sigma_k}(x_{i}) \pi_k).$$
+
+This is similar to the quadratic discriminant analysis, and the solution
+is
+
+$$\begin{aligned}
+\pi_{k} &= {1 \over m} \sum_{i = 1 : m} r_{ik}, \\
+\mu_{k} &= {\sum_i r_{ik} x_{i} \over \sum_i r_{ik}}, \\
+\Sigma_{k} &= {\sum_i r_{ik} (x_{i} - \mu_{t, k}) (x_{i} - \mu_{t, k})^T \over \sum_i r_{ik}}.
+\end{aligned}$$
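One EM iteration for GMM can then be sketched as follows (numpy assumed; function and variable names are mine, and no numerical safeguards such as log-space computation are included):

```python
import numpy as np

def gmm_em_step(X, mus, sigmas, pis):
    m, n = X.shape
    n_z = len(pis)
    # E-step: responsibilities via Bayes' theorem
    r = np.zeros((m, n_z))
    for k in range(n_z):
        diff = X - mus[k]
        inv = np.linalg.inv(sigmas[k])
        quad = np.einsum('ij,jk,ik->i', diff, inv, diff)
        g = (2 * np.pi) ** (-n / 2) * np.linalg.det(sigmas[k]) ** -0.5 \
            * np.exp(-0.5 * quad)
        r[:, k] = pis[k] * g
    r /= r.sum(axis=1, keepdims=True)
    # M-step: closed-form maximisers
    pis = r.mean(axis=0)
    mus = (r.T @ X) / r.sum(axis=0)[:, None]
    sigmas = [(r[:, k, None] * (X - mus[k])).T @ (X - mus[k]) / r[:, k].sum()
              for k in range(n_z)]
    return mus, sigmas, pis, r
```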
+
+*Remark*. The k-means algorithm is the $\epsilon \to 0$ limit of the GMM
+with constraints $\Sigma_k = \epsilon I$. See Section 9.3.2 of Bishop
+2006 for derivation. It is also briefly mentioned there that a variant
+in this setting where the covariance matrix is not restricted to
+$\epsilon I$ is called elliptical k-means algorithm.
+
+*** SMM
+ :PROPERTIES:
+ :CUSTOM_ID: smm
+ :END:
+As a transition to the next models to study, let us consider a simpler
+mixture model obtained by making one modification to GMM: change
+$(x; \eta_k) \sim N(\mu_k, \Sigma_k)$ to
+$\mathbb P(x = w; \eta_k) = \eta_{kw}$ where $\eta$ is a stochastic
+matrix and $w$ is an arbitrary element of the space for $x$. So now the
+space for both $x$ and $z$ are finite. We call this model the simple
+mixture model (SMM).
+
+As in GMM, at E-step $r_{ik}$ can be explicitly computed using Bayes'
+theorem.
+
+It is not hard to write down the solution to the M-step in this case:
+
+$$\begin{aligned}
+\pi_{k} &= {1 \over m} \sum_i r_{ik}, \qquad (2.7)\\
+\eta_{k, w} &= {\sum_i r_{ik} 1_{x_i = w} \over \sum_i r_{ik}}. \qquad (2.8)
+\end{aligned}$$
+
+where $1_{x_i = w}$ is the
+[[https://en.wikipedia.org/wiki/Indicator_function][indicator
+function]], and evaluates to $1$ if $x_i = w$ and $0$ otherwise.
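The M-step (2.7)(2.8) can be sketched as (numpy assumed; =x= is the array of observed symbol indices, =r= the responsibilities, and the names are mine):

```python
import numpy as np

def smm_m_step(r, x, n_x):
    pi = r.mean(axis=0)                          # (2.7)
    onehot = np.eye(n_x)[x]                      # rows are 1_{x_i = w}
    eta = r.T @ onehot / r.sum(axis=0)[:, None]  # (2.8)
    return pi, eta
```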
+
+Two trivial variants of the SMM are the two versions of probabilistic
+latent semantic analysis (pLSA), which we call pLSA1 and pLSA2.
+
+The model pLSA1 is a probabilistic version of latent semantic analysis,
+which is basically a simple matrix factorisation model in collaborative
+filtering, whereas pLSA2 has a fully Bayesian version called latent
+Dirichlet allocation (LDA), not to be confused with the other LDA
+(linear discriminant analysis).
+
+*** pLSA
+ :PROPERTIES:
+ :CUSTOM_ID: plsa
+ :END:
+The pLSA model (Hoffman 2000) is a mixture model, where the dataset is
+now pairs $(d_i, x_i)_{i = 1 : m}$. In natural language processing, $x$
+are words and $d$ are documents, and a pair $(d, x)$ represents an
+occurrence of word $x$ in document $d$.
+
+For each datapoint $(d_{i}, x_{i})$,
+
+$$\begin{aligned}
+p(d_i, x_i; \theta) &= \sum_{z_i} p(z_i; \theta) p(d_i | z_i; \theta) p(x_i | z_i; \theta) \qquad (2.91)\\
+&= p(d_i; \theta) \sum_z p(x_i | z_i; \theta) p (z_i | d_i; \theta) \qquad (2.92).
+\end{aligned}$$
+
+Of the two formulations, (2.91) corresponds to pLSA type 1, and (2.92)
+corresponds to type 2.
+
+**** pLSA1
+ :PROPERTIES:
+ :CUSTOM_ID: plsa1
+ :END:
+The pLSA1 model (Hoffman 2000) is basically SMM with $x_i$ substituted
+with $(d_i, x_i)$, which conditioned on $z_i$ are independently
+categorically distributed:
+
+$$p(d_i = u, x_i = w | z_i = k; \theta) = p(d_i ; \xi_k) p(x_i; \eta_k) = \xi_{ku} \eta_{kw}.$$
+
+The model can be illustrated in the plate notations:
+
+[[/assets/resources/plsa1.png]]
+
+So the solution of the M-step is
+
+$$\begin{aligned}
+\pi_{k} &= {1 \over m} \sum_i r_{ik} \\
+\xi_{k, u} &= {\sum_i r_{ik} 1_{d_{i} = u} \over \sum_i r_{ik}} \\
+\eta_{k, w} &= {\sum_i r_{ik} 1_{x_{i} = w} \over \sum_i r_{ik}}.
+\end{aligned}$$
+
+*Remark*. pLSA1 is the probabilistic version of LSA, also known as
+matrix factorisation.
+
+Let $n_d$ and $n_x$ be the number of values $d_i$ and $x_i$ can take.
+
+*Problem* (LSA). Let $R$ be a $n_d \times n_x$ matrix, fix
+$s \le \min\{n_d, n_x\}$. Find $n_d \times s$ matrix $D$ and
+$n_x \times s$ matrix $X$ that minimises
+
+$$J(D, X) = \|R - D X^T\|_F.$$
+
+where $\|\cdot\|_F$ is the Frobenius norm.
+
+*Claim*. Let $R = U \Sigma V^T$ be the SVD of $R$, then the solution to
+the above problem is $D = U_s \Sigma_s^{{1 \over 2}}$ and
+$X = V_s \Sigma_s^{{1 \over 2}}$, where $U_s$ (resp. $V_s$) is the
+matrix of the first $s$ columns of $U$ (resp. $V$) and $\Sigma_s$ is the
+$s \times s$ submatrix of $\Sigma$.
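The claim (a form of the Eckart-Young theorem) is straightforward to check numerically; a sketch with numpy, where the residual of the optimal rank-$s$ factorisation equals the norm of the discarded singular values:

```python
import numpy as np

def lsa(R, s):
    """Embeddings D = U_s Sigma_s^{1/2} and X = V_s Sigma_s^{1/2}."""
    U, S, Vt = np.linalg.svd(R, full_matrices=False)
    D = U[:, :s] * np.sqrt(S[:s])
    X = Vt[:s].T * np.sqrt(S[:s])
    return D, X

rng = np.random.default_rng(0)
R = rng.normal(size=(6, 4))
D, X = lsa(R, 2)
S = np.linalg.svd(R, compute_uv=False)
assert np.isclose(np.linalg.norm(R - D @ X.T), np.sqrt((S[2:] ** 2).sum()))
```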
+
+One can compare pLSA1 with LSA. Both procedures produce embeddings of
+$d$ and $x$: in pLSA we obtain $n_z$ dimensional embeddings
+$\xi_{\cdot, u}$ and $\eta_{\cdot, w}$, whereas in LSA we obtain $s$
+dimensional embeddings $D_{u, \cdot}$ and $X_{w, \cdot}$.
+
+**** pLSA2
+ :PROPERTIES:
+ :CUSTOM_ID: plsa2
+ :END:
+Let us turn to pLSA2 (Hoffman 2004), corresponding to (2.92). We rewrite
+it as
+
+$$p(x_i | d_i; \theta) = \sum_{z_i} p(x_i | z_i; \theta) p(z_i | d_i; \theta).$$
+
+To simplify notation, we collect all the $x_i$s with the corresponding
+$d_i$ equal to 1 (suppose there are $m_1$ of them), and write them as
+$(x_{1, j})_{j = 1 : m_1}$. In the same fashion we construct
+$x_{2, 1 : m_2}, x_{3, 1 : m_3}, ... x_{n_d, 1 : m_{n_d}}$. Similarly,
+we relabel the corresponding $d_i$ and $z_i$ accordingly.
+
+With almost no loss of generality, we assume all $m_\ell$s are equal and
+write them as $m$.
+
+Now the model becomes
+
+$$p(x_{\ell, i} | d_{\ell, i} = \ell; \theta) = \sum_k p(x_{\ell, i} | z_{\ell, i} = k; \theta) p(z_{\ell, i} = k | d_{\ell, i} = \ell; \theta).$$
+
+Since we have regrouped the $x$'s and $z$'s whose indices record the
+values of the $d$'s, we can remove the $d$'s from the equation
+altogether:
+
+$$p(x_{\ell, i}; \theta) = \sum_k p(x_{\ell, i} | z_{\ell, i} = k; \theta) p(z_{\ell, i} = k; \theta).$$
+
+It is effectively a modification of SMM by making $n_d$ copies of $\pi$.
+More specifically the parameters are
+$\theta = (\pi_{1 : n_d, 1 : n_z}, \eta_{1 : n_z, 1 : n_x})$, where we
+model $(z | d = \ell) \sim \text{Cat}(\pi_{\ell, \cdot})$ and, as in
+pLSA1, $(x | z = k) \sim \text{Cat}(\eta_{k, \cdot})$.
+
+Illustrated in the plate notations, pLSA2 is:
+
+[[/assets/resources/plsa2.png]]
+
+The computation is basically adding an index $\ell$ to the computation
+of SMM wherever applicable.
+
+The update at the E-step is
+
+$$r_{\ell i k} = p(z_{\ell i} = k | x_{\ell i}; \theta) \propto \pi_{\ell k} \eta_{k, x_{\ell i}}.$$
+
+And at the M-step
+
+$$\begin{aligned}
+\pi_{\ell k} &= {1 \over m} \sum_i r_{\ell i k} \\
+\eta_{k w} &= {\sum_{\ell, i} r_{\ell i k} 1_{x_{\ell i} = w} \over \sum_{\ell, i} r_{\ell i k}}.
+\end{aligned}$$
+
+*** HMM
+ :PROPERTIES:
+ :CUSTOM_ID: hmm
+ :END:
+The hidden Markov model (HMM) is a sequential version of SMM, in the
+same sense that recurrent neural networks are sequential versions of
+feed-forward neural networks.
+
+HMM is an example where the posterior $p(z_i | x_i; \theta)$ is not easy
+to compute, and one has to utilise properties of the underlying Bayesian
+network to go around it.
+
+Now each sample is a sequence $x_i = (x_{ij})_{j = 1 : T}$, and so are
+the latent variables $z_i = (z_{ij})_{j = 1 : T}$.
+
+The latent variables are assumed to form a Markov chain with transition
+matrix $(\xi_{k \ell})_{k \ell}$, and $x_{ij}$ depends only
+on $z_{ij}$:
+
+$$\begin{aligned}
+p(z_{ij} | z_{i, j - 1}) &= \xi_{z_{i, j - 1}, z_{ij}},\\
+p(x_{ij} | z_{ij}) &= \eta_{z_{ij}, x_{ij}}.
+\end{aligned}$$
+
+Also, the distribution of $z_{i1}$ is again categorical with parameter
+$\pi$:
+
+$$p(z_{i1}) = \pi_{z_{i1}}$$
+
+So the parameters are $\theta = (\pi, \xi, \eta)$. And HMM can be shown
+in plate notations as:
+
+[[/assets/resources/hmm.png]]
+
+Now we apply EM to HMM, which is called the
+[[https://en.wikipedia.org/wiki/Baum%E2%80%93Welch_algorithm][Baum-Welch
+algorithm]]. Unlike the previous examples, it is too messy to compute
+$p(z_i | x_{i}; \theta)$, so during the E-step we instead write down
+formula (2.5) directly, in the hope of simplifying it:
+
+$$\begin{aligned}
+\mathbb E_{p(z_i | x_i; \theta_t)} \log p(x_i, z_i; \theta) &=\mathbb E_{p(z_i | x_i; \theta_t)} \left(\log \pi_{z_{i1}} + \sum_{j = 2 : T} \log \xi_{z_{i, j - 1}, z_{ij}} + \sum_{j = 1 : T} \log \eta_{z_{ij}, x_{ij}}\right). \qquad (3)
+\end{aligned}$$
+
+Let us compute the summand in the second term:
+
+$$\begin{aligned}
+\mathbb E_{p(z_i | x_{i}; \theta_t)} \log \xi_{z_{i, j - 1}, z_{ij}} &= \sum_{k, \ell} (\log \xi_{k, \ell}) \mathbb E_{p(z_{i} | x_{i}; \theta_t)} 1_{z_{i, j - 1} = k, z_{i, j} = \ell} \\
+&= \sum_{k, \ell} p(z_{i, j - 1} = k, z_{ij} = \ell | x_{i}; \theta_t) \log \xi_{k, \ell}. \qquad (4)
+\end{aligned}$$
+
+Similarly, one can write down the first term and the summand in the
+third term to obtain
+
+$$\begin{aligned}
+\mathbb E_{p(z_i | x_{i}; \theta_t)} \log \pi_{z_{i1}} &= \sum_k p(z_{i1} = k | x_{i}; \theta_t) \log \pi_k, \qquad (5) \\
+\mathbb E_{p(z_i | x_{i}; \theta_t)} \log \eta_{z_{i, j}, x_{i, j}} &= \sum_{k, w} 1_{x_{ij} = w} p(z_{i, j} = k | x_i; \theta_t) \log \eta_{k, w}. \qquad (6)
+\end{aligned}$$
+
+Plugging (4)(5)(6) back into (3) and summing over $j$, we obtain the
+formula to maximise over $\theta$:
+
+$$\sum_k \sum_i r_{i1k} \log \pi_k + \sum_{k, \ell} \sum_{j = 2 : T, i} s_{ijk\ell} \log \xi_{k, \ell} + \sum_{k, w} \sum_{j = 1 : T, i} r_{ijk} 1_{x_{ij} = w} \log \eta_{k, w},$$
+
+where
+
+$$\begin{aligned}
+r_{ijk} &:= p(z_{ij} = k | x_{i}; \theta_t), \\
+s_{ijk\ell} &:= p(z_{i, j - 1} = k, z_{ij} = \ell | x_{i}; \theta_t).
+\end{aligned}$$
+
+Now we proceed to the M-step. Since each of the
+$\pi_k, \xi_{k, \ell}, \eta_{k, w}$ is nicely confined in the inner sum
+of each term, together with the constraint
+$\sum_k \pi_k = \sum_\ell \xi_{k, \ell} = \sum_w \eta_{k, w} = 1$ it is
+not hard to find the argmax at time $t + 1$ (the same way one finds the
+MLE for any categorical distribution):
+
+$$\begin{aligned}
+\pi_{k} &= {1 \over m} \sum_i r_{i1k}, \qquad (6.1) \\
+\xi_{k, \ell} &= {\sum_{j = 2 : T, i} s_{ijk\ell} \over \sum_{j = 1 : T - 1, i} r_{ijk}}, \qquad(6.2) \\
+\eta_{k, w} &= {\sum_{ij} 1_{x_{ij} = w} r_{ijk} \over \sum_{ij} r_{ijk}}. \qquad(6.3)
+\end{aligned}$$
+
+Note that (6.1)(6.3) are almost identical to (2.7)(2.8). This makes
+sense as the only modification HMM makes over SMM is the added
+dependencies between the latent variables.
+
+What remains is to compute $r$ and $s$.
+
+This is done using the forward and backward procedures, which take
+advantage of the conditional independence / topology of the underlying
+Bayesian network. It is out of the scope of this post, but for the sake
+of completeness I include it here.
+
+Let
+
+$$\begin{aligned}
+\alpha_k(i, j) &:= p(x_{i, 1 : j}, z_{ij} = k; \theta_t), \\
+\beta_k(i, j) &:= p(x_{i, j + 1 : T} | z_{ij} = k; \theta_t).
+\end{aligned}$$
+
+They can be computed recursively as
+
+$$\begin{aligned}
+\alpha_k(i, j) &= \begin{cases}
+\eta_{k, x_{i1}} \pi_k, & j = 1; \\
+\eta_{k, x_{ij}} \sum_\ell \alpha_\ell(i, j - 1) \xi_{\ell k}, & j \ge 2.
+\end{cases}\\
+\beta_k(i, j) &= \begin{cases}
+1, & j = T;\\
+\sum_\ell \xi_{k\ell} \beta_\ell(i, j + 1) \eta_{\ell, x_{i, j + 1}}, & j < T.
+\end{cases}
+\end{aligned}$$
+
+Then
+
+$$\begin{aligned}
+p(z_{ij} = k, x_{i}; \theta_t) &= \alpha_k(i, j) \beta_k(i, j), \qquad (7)\\
+p(x_{i}; \theta_t) &= \sum_k \alpha_k(i, j) \beta_k(i, j), \quad \forall j = 1 : T, \qquad (8)\\
+p(z_{i, j - 1} = k, z_{i, j} = \ell, x_{i}; \theta_t) &= \alpha_k(i, j - 1) \xi_{k\ell} \eta_{\ell, x_{ij}} \beta_\ell(i, j). \qquad (9)
+\end{aligned}$$
+
+And this yields $r_{ijk}$ and $s_{ijk\ell}$ since they can be computed
+as ${(7) \over (8)}$ and ${(9) \over (8)}$ respectively.
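The forward and backward recursions for a single sequence can be sketched as follows (numpy assumed; =x= is a sequence of symbol indices, =xi= the transition matrix, =eta= the emission matrix, and the names are mine; a practical implementation would also rescale to avoid underflow):

```python
import numpy as np

def forward_backward(x, pi, xi, eta):
    T, K = len(x), len(pi)
    alpha = np.zeros((T, K))
    beta = np.zeros((T, K))
    alpha[0] = pi * eta[:, x[0]]                       # j = 1 case
    for j in range(1, T):
        alpha[j] = eta[:, x[j]] * (alpha[j - 1] @ xi)
    beta[T - 1] = 1.0                                  # j = T case
    for j in range(T - 2, -1, -1):
        beta[j] = xi @ (eta[:, x[j + 1]] * beta[j + 1])
    evidence = alpha[-1].sum()                         # p(x; theta), eq. (8)
    r = alpha * beta / evidence                        # r_{jk}, i.e. (7)/(8)
    return alpha, beta, r
```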
+
+** Fully Bayesian EM / MFA
+ :PROPERTIES:
+ :CUSTOM_ID: fully-bayesian-em-mfa
+ :END:
+Let us now venture into the fully Bayesian realm.
+
+In EM we aim to maximise the ELBO
+
+$$\int q(z) \log {p(x, z; \theta) \over q(z)} dz$$
+
+alternately over $q$ and $\theta$. As mentioned before, the E-step of
+maximising over $q$ is Bayesian, in that it computes the posterior of
+$z$, whereas the M-step of maximising over $\theta$ is maximum
+likelihood and frequentist.
+
+The fully Bayesian EM makes the M-step Bayesian by making $\theta$ a
+random variable, so the ELBO becomes
+
+$$L(p(x, z, \theta), q(z, \theta)) = \int q(z, \theta) \log {p(x, z, \theta) \over q(z, \theta)} dz d\theta$$
+
+We further assume $q$ can be factorised into distributions on $z$ and
+$\theta$: $q(z, \theta) = q_1(z) q_2(\theta)$. So the above formula is
+rewritten as
+
+$$L(p(x, z, \theta), q(z, \theta)) = \int q_1(z) q_2(\theta) \log {p(x, z, \theta) \over q_1(z) q_2(\theta)} dz d\theta$$
+
+To find argmax over $q_1$, we rewrite
+
+$$\begin{aligned}
+L(p(x, z, \theta), q(z, \theta)) &= \int q_1(z) \left(\int q_2(\theta) \log p(x, z, \theta) d\theta\right) dz - \int q_1(z) \log q_1(z) dz - \int q_2(\theta) \log q_2(\theta) d\theta \\&= - D(q_1(z) || p_x(z)) + C,
+\end{aligned}$$
+
+where $p_x$ is a density in $z$ with
+
+$$\log p_x(z) = \mathbb E_{q_2(\theta)} \log p(x, z, \theta) + C.$$
+
+So the $q_1$ that maximises the ELBO is $q_1^* = p_x$.
+
+Similarly, the optimal $q_2$ is such that
+
+$$\log q_2^*(\theta) = \mathbb E_{q_1(z)} \log p(x, z, \theta) + C.$$
+
+The fully Bayesian EM thus alternately evaluates $q_1^*$ (E-step) and
+$q_2^*$ (M-step).
+
+It is also called mean field approximation (MFA), and can be easily
+generalised to models with more than two groups of latent variables, see
+e.g. Section 10.1 of Bishop 2006.
+
+*** Application to mixture models
+ :PROPERTIES:
+ :CUSTOM_ID: application-to-mixture-models
+ :END:
+*Definition (Fully Bayesian mixture model)*. The relations between
+$\pi$, $\eta$, $x$, $z$ are the same as in the definition of mixture
+models. Furthermore, we assume the distribution of $(x | \eta_k)$
+belongs to the
+[[https://en.wikipedia.org/wiki/Exponential_family][exponential family]]
+(the definition of the exponential family is briefly touched at the end
+of this section). But now both $\pi$ and $\eta$ are random variables.
+Let the prior distribution $p(\pi)$ be Dirichlet with parameter
+$(\alpha, \alpha, ..., \alpha)$. Let the prior $p(\eta_k)$ be the
+conjugate prior of $(x | \eta_k)$, with parameter $\beta$; we will see
+later in this section that the posterior $q(\eta_k)$ belongs to the same
+family as $p(\eta_k)$. Represented in plate notation, a fully Bayesian
+mixture model looks like:
+
+[[/assets/resources/fully-bayesian-mm.png]]
+
+Given this structure we can write down the mean-field approximation:
+
+$$\log q(z) = \mathbb E_{q(\eta)q(\pi)} (\log p(x | z, \eta) + \log p(z | \pi)) + C.$$
+
+Both sides can be factored into per-sample expressions, giving us
+
+$$\log q(z_i) = \mathbb E_{q(\eta)} \log p(x_i | z_i, \eta) + \mathbb E_{q(\pi)} \log p(z_i | \pi) + C$$
+
+Therefore
+
+$$\log r_{ik} = \log q(z_i = k) = \mathbb E_{q(\eta_k)} \log p(x_i | \eta_k) + \mathbb E_{q(\pi)} \log \pi_k + C. \qquad (9.1)$$
+
+So the posterior of each $z_i$ is categorical regardless of the $p$s and
+$q$s.
+
+Computing the posterior of $\pi$ and $\eta$:
+
+$$\log q(\pi) + \log q(\eta) = \log p(\pi) + \log p(\eta) + \sum_i \mathbb E_{q(z_i)} \log p(x_i | z_i, \eta) + \sum_i \mathbb E_{q(z_i)} \log p(z_i | \pi) + C.$$
+
+So we can separate the terms involving $\pi$ and those involving $\eta$.
+First compute the posterior of $\pi$:
+
+$$\log q(\pi) = \log p(\pi) + \sum_i \mathbb E_{q(z_i)} \log p(z_i | \pi) = \log p(\pi) + \sum_i \sum_k r_{ik} \log \pi_k + C.$$
+
+The right hand side is the log of a Dirichlet density modulo the
+constant $C$, from which we can update the posterior parameter $\phi^\pi$:
+
+$$\phi^\pi_k = \alpha + \sum_i r_{ik}. \qquad (9.3)$$
+
+Similarly we can obtain the posterior of $\eta$:
+
+$$\log q(\eta) = \log p(\eta) + \sum_i \sum_k r_{ik} \log p(x_i | \eta_k) + C.$$
+
+Again we can factor the terms with respect to $k$ and get:
+
+$$\log q(\eta_k) = \log p(\eta_k) + \sum_i r_{ik} \log p(x_i | \eta_k) + C. \qquad (9.5)$$
+
+Here we can see why conjugate prior works. Mathematically, given a
+probability distribution $p(x | \theta)$, the distribution $p(\theta)$
+is called conjugate prior of $p(x | \theta)$ if
+$\log p(\theta) + \log p(x | \theta)$ has the same form as
+$\log p(\theta)$.
+
+For example, the conjugate prior for the exponential family
+$p(x | \theta) = h(x) \exp(\theta \cdot T(x) - A(\theta))$ where $T$,
+$A$ and $h$ are some functions is
+$p(\theta; \chi, \nu) \propto \exp(\chi \cdot \theta - \nu A(\theta))$.
+
+Here what we want is a bit different from conjugate priors because of
+the coefficients $r_{ik}$. But the computation carries over to the
+conjugate priors of the exponential family (try it yourself and you'll
+see). That is, if $p(x_i | \eta_k)$ belongs to the exponential family
+
+$$p(x_i | \eta_k) = h(x) \exp(\eta_k \cdot T(x) - A(\eta_k))$$
+
+and if $p(\eta_k)$ is the conjugate prior of $p(x_i | \eta_k)$
+
+$$p(\eta_k) \propto \exp(\chi \cdot \eta_k - \nu A(\eta_k))$$
+
+then $q(\eta_k)$ has the same form as $p(\eta_k)$, and from (9.5) we can
+compute the updates of $\phi^{\eta_k}$:
+
+$$\begin{aligned}
+\phi^{\eta_k}_1 &= \chi + \sum_i r_{ik} T(x_i), \qquad (9.7) \\
+\phi^{\eta_k}_2 &= \nu + \sum_i r_{ik}. \qquad (9.9)
+\end{aligned}$$
+
+So the mean field approximation for the fully Bayesian mixture model is
+the alternating iteration of (9.1) (E-step) and (9.3)(9.7)(9.9)
+(M-step) until convergence.
+
+*** Fully Bayesian GMM
+ :PROPERTIES:
+ :CUSTOM_ID: fully-bayesian-gmm
+ :END:
+A typical example of fully Bayesian mixture models is the fully Bayesian
+Gaussian mixture model (Attias 2000, also called variational GMM in the
+literature). It is obtained by applying to the vanilla GMM the same
+modification that turns the vanilla mixture model into the fully
+Bayesian mixture model.
+
+More specifically:
+
+- $p(z_{i}) = \text{Cat}(\pi)$ as in vanilla GMM
+- $p(\pi) = \text{Dir}(\alpha, \alpha, ..., \alpha)$ has Dirichlet
+ distribution, the conjugate prior to the parameters of the categorical
+ distribution.
+- $p(x_i | z_i = k) = p(x_i | \eta_k) = N(\mu_{k}, \Sigma_{k})$ as in
+ vanilla GMM
+- $p(\mu_k, \Sigma_k) = \text{NIW} (\mu_0, \lambda, \Psi, \nu)$ is the
+ normal-inverse-Wishart distribution, the conjugate prior to the mean
+ and covariance matrix of the Gaussian distribution.
+
+The E-step and M-step can be computed using (9.1) and (9.3)(9.7)(9.9) in
+the previous section. The details of the computation can be found in
+Chapter 10.2 of Bishop 2006 or Attias 2000.
+
+*** LDA
+ :PROPERTIES:
+ :CUSTOM_ID: lda
+ :END:
+As the second example of fully Bayesian mixture models, Latent Dirichlet
+allocation (LDA) (Blei-Ng-Jordan 2003) is the fully Bayesian version of
+pLSA2, with the following plate notations:
+
+[[/assets/resources/lda.png]]
+
+It is the smoothed version in the paper.
+
+More specifically, on the basis of pLSA2, we add prior distributions to
+$\eta$ and $\pi$:
+
+$$\begin{aligned}
+p(\eta_k) &= \text{Dir} (\beta, ..., \beta), \qquad k = 1 : n_z \\
+p(\pi_\ell) &= \text{Dir} (\alpha, ..., \alpha), \qquad \ell = 1 : n_d \\
+\end{aligned}$$
+
+And as before, the prior of $z$ is
+
+$$p(z_{\ell, i}) = \text{Cat} (\pi_\ell), \qquad \ell = 1 : n_d, i = 1 : m$$
+
+We also denote posterior distribution
+
+$$\begin{aligned}
+q(\eta_k) &= \text{Dir} (\phi^{\eta_k}), \qquad k = 1 : n_z \\
+q(\pi_\ell) &= \text{Dir} (\phi^{\pi_\ell}), \qquad \ell = 1 : n_d \\
+p(z_{\ell, i}) &= \text{Cat} (r_{\ell, i}), \qquad \ell = 1 : n_d, i = 1 : m
+\end{aligned}$$
+
+As before, in E-step we update $r$, and in M-step we update $\phi^\pi$
+and $\phi^\eta$ (denoted $\gamma$ and $\lambda$ respectively in the LDA
+paper).
+
+But in the LDA paper, one treats the optimisation over $r$, $\phi^\pi$
+and $\phi^\eta$ as an E-step, and treats $\alpha$ and $\beta$ as
+parameters, which are optimised over at the M-step. This makes it more
+akin to the classical EM where the E-step is Bayesian and the M-step
+MLE. This is more complicated, and we do not consider it this way here.
+
+Plugging in (9.1) we obtain the updates at E-step
+
+$$r_{\ell i k} \propto \exp(\psi(\phi^{\pi_\ell}_k) + \psi(\phi^{\eta_k}_{x_{\ell i}}) - \psi(\sum_w \phi^{\eta_k}_w)), \qquad (10)$$
+
+where $\psi$ is the digamma function. Similarly, plugging in
+(9.3)(9.7)(9.9), at M-step, we update the posterior of $\pi$ and $\eta$:
+
+$$\begin{aligned}
+\phi^{\pi_\ell}_k &= \alpha + \sum_i r_{\ell i k}, \qquad (11)\\
+\phi^{\eta_k}_w &= \beta + \sum_{\ell, i} r_{\ell i k} 1_{x_{\ell i} = w}. \qquad (12)
+\end{aligned}$$
+
+So the algorithm iterates over (10) and (11)(12) until convergence.
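
To make the iteration concrete, here is a minimal NumPy sketch of updates (10), (11) and (12) on a toy corpus. The corpus, hyperparameter values and the hand-rolled digamma are my own illustrative choices, not from the LDA paper.

```python
import numpy as np

def digamma(x):
    """Digamma via the recurrence psi(x) = psi(x + 1) - 1/x and an
    asymptotic series, to keep the sketch dependency-free."""
    x = np.array(x, dtype=float)
    r = np.zeros_like(x)
    while np.any(x < 6):
        m = x < 6
        r[m] -= 1.0 / x[m]
        x = x + m
    f = 1.0 / (x * x)
    return r + np.log(x) - 0.5 / x - f * (1/12 - f * (1/120 - f / 252))

def lda_cavi(docs, n_z, n_x, alpha=0.1, beta=0.1, iters=50, seed=0):
    """Mean-field updates (10), (11), (12) for smoothed LDA.
    docs: list of arrays of word indices, one array per document."""
    rng = np.random.default_rng(seed)
    phi_eta = rng.gamma(1.0, 1.0, (n_z, n_x))       # q(eta_k) = Dir(phi_eta[k])
    phi_pi = rng.gamma(1.0, 1.0, (len(docs), n_z))  # q(pi_l) = Dir(phi_pi[l])
    for _ in range(iters):
        new_eta = np.full((n_z, n_x), beta)
        for l, x in enumerate(docs):
            # E-step, eq (10): log r[i, k] up to normalisation
            log_r = (digamma(phi_pi[l])[None, :]
                     + digamma(phi_eta[:, x]).T
                     - digamma(phi_eta.sum(1))[None, :])
            r = np.exp(log_r - log_r.max(1, keepdims=True))
            r /= r.sum(1, keepdims=True)
            phi_pi[l] = alpha + r.sum(0)            # M-step, eq (11)
            new_eta += r.T @ np.eye(n_x)[x]         # accumulate eq (12)
        phi_eta = new_eta
    return phi_pi, phi_eta
```

Here =phi_pi[l]= and =phi_eta[k]= are the parameters of the posteriors $q(\pi_\ell)$ and $q(\eta_k)$.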
+
+*** DPMM
+ :PROPERTIES:
+ :CUSTOM_ID: dpmm
+ :END:
+The Dirichlet process mixture model (DPMM) is like the fully Bayesian
+mixture model except $n_z = \infty$, i.e. $z$ can take any positive
+integer value.
+
+The probability of $z_i = k$ is defined using the so-called
+stick-breaking process: let $v_1, v_2, ...$ be i.i.d. random variables
+with distribution $\text{Beta} (\alpha, \beta)$, then
+
+$$\mathbb P(z_i = k | v_{1:\infty}) = (1 - v_1) (1 - v_2) ... (1 - v_{k - 1}) v_k.$$
+
+So $v$ plays a similar role to $\pi$ in the previous models.
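
One can get a feel for the stick-breaking weights by sampling them; a short stdlib sketch (the parameter values are arbitrary):

```python
import random

def stick_breaking_weights(alpha, beta, n, rng):
    """First n probabilities P(z = k | v), k = 1 : n, from the
    stick-breaking process with v_k ~ Beta(alpha, beta)."""
    weights, remaining = [], 1.0
    for _ in range(n):
        v = rng.betavariate(alpha, beta)
        weights.append(remaining * v)  # (1 - v_1) ... (1 - v_{k-1}) v_k
        remaining *= 1.0 - v           # stick length left after k breaks
    return weights
```

The weights are nonnegative and sum to $1$ almost surely as $n \to \infty$; with a few hundred terms the partial sum is already very close to $1$.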
+
+As before, we have that the distribution of $x$ belongs to the
+exponential family:
+
+$$p(x | z = k, \eta) = p(x | \eta_k) = h(x) \exp(\eta_k \cdot T(x) - A(\eta_k))$$
+
+so the prior of $\eta_k$ is
+
+$$p(\eta_k) \propto \exp(\chi \cdot \eta_k - \nu A(\eta_k)).$$
+
+Because of the infinities we can't directly apply the formulas in the
+general fully Bayesian mixture models. So let us carefully derive the
+whole thing again.
+
+As before, we can write down the ELBO:
+
+$$L(p(x, z, \theta), q(z, \theta)) = \mathbb E_{q(\theta)} \log {p(\theta) \over q(\theta)} + \mathbb E_{q(\theta) q(z)} \log {p(x, z | \theta) \over q(z)}.$$
+
+Both terms are infinite series:
+
+$$L(p, q) = \sum_{k = 1 : \infty} \mathbb E_{q(\theta_k)} \log {p(\theta_k) \over q(\theta_k)} + \sum_{i = 1 : m} \sum_{k = 1 : \infty} q(z_i = k) \mathbb E_{q(\theta)} \log {p(x_i, z_i = k | \theta) \over q(z_i = k)}.$$
+
+There are several ways to deal with the infinities. One is to fix some
+level $T > 0$ and set $v_T = 1$ almost surely (Blei-Jordan 2006). This
+effectively turns the model into a finite one, and both terms become
+finite sums over $k = 1 : T$.
+
+Another workaround (Kurihara-Welling-Vlassis 2007) is also a kind of
+truncation, but less heavy-handed: setting the posterior
+$q(\theta) = q(\eta) q(v)$ to be:
+
+$$q(\theta) = q(\theta_{1 : T}) p(\theta_{T + 1 : \infty}) =: q(\theta_{\le T}) p(\theta_{> T}).$$
+
+That is, tie the posterior after $T$ to the prior. This effectively
+turns the first term in the ELBO to a finite sum over $k = 1 : T$, while
+keeping the second sum an infinite series:
+
+$$L(p, q) = \sum_{k = 1 : T} \mathbb E_{q(\theta_k)} \log {p(\theta_k) \over q(\theta_k)} + \sum_i \sum_{k = 1 : \infty} q(z_i = k) \mathbb E_{q(\theta)} \log {p(x_i, z_i = k | \theta) \over q(z_i = k)}. \qquad (13)$$
+
+The plate notation of this model is:
+
+[[/assets/resources/dpmm.png]]
+
+As it turns out, the infinities can be tamed in this case.
+
+As before, the optimal $q(z_i)$ is computed as
+
+$$r_{ik} = q(z_i = k) = s_{ik} / S_i$$
+
+where
+
+$$\begin{aligned}
+s_{ik} &= \exp(\mathbb E_{q(\theta)} \log p(x_i, z_i = k | \theta)) \\
+S_i &= \sum_{k = 1 : \infty} s_{ik}.
+\end{aligned}$$
+
+Plugging this back to (13) we have
+
+$$\begin{aligned}
+\sum_{k = 1 : \infty} r_{ik} &\mathbb E_{q(\theta)} \log {p(x_i, z_i = k | \theta) \over r_{ik}} \\
+&= \sum_{k = 1 : \infty} r_{ik} \mathbb E_{q(\theta)} (\log p(x_i, z_i = k | \theta) - \mathbb E_{q(\theta)} \log p(x_i, z_i = k | \theta) + \log S_i) = \log S_i.
+\end{aligned}$$
+
+So it all rests upon $S_i$ being finite.
+
+For $k \le T + 1$, we compute the quantity $s_{ik}$ directly. For
+$k > T$, it is not hard to show that
+
+$$s_{ik} = s_{i, T + 1} \exp((k - T - 1) \mathbb E_{p(w)} \log (1 - w)),$$
+
+where $w$ is a random variable with same distribution as $p(v_k)$, i.e.
+$\text{Beta}(\alpha, \beta)$.
+
+Hence
+
+$$S_i = \sum_{k = 1 : T} s_{ik} + {s_{i, T + 1} \over 1 - \exp(\psi(\beta) - \psi(\alpha + \beta))}$$
+
+is indeed finite. Similarly we also obtain
+
+$$q(z_i > k) = S_i^{-1} \left(\sum_{\ell = k + 1 : T} s_{i \ell} + {s_{i, T + 1} \over 1 - \exp(\psi(\beta) - \psi(\alpha + \beta))}\right), \qquad k \le T. \qquad (14)$$
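
The digamma expression in the denominator comes from a standard Beta log-moment identity; for completeness, with $w \sim \text{Beta}(\alpha, \beta)$:

```latex
\mathbb E \log (1 - w)
  = \frac{\partial}{\partial \beta} \log B(\alpha, \beta)
  = \frac{\partial}{\partial \beta}
      \bigl( \log \Gamma(\beta) - \log \Gamma(\alpha + \beta) \bigr)
  = \psi(\beta) - \psi(\alpha + \beta)
```

The first equality follows by differentiating $B(\alpha, \beta) = \int_0^1 w^{\alpha - 1} (1 - w)^{\beta - 1} \,dw$ under the integral sign. Since this expectation is negative, the tail $\sum_{k > T} s_{ik}$ is a convergent geometric series, which gives the closed form for $S_i$.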
+
+Now let us compute the posterior of $\theta_{\le T}$. In the following
+we exchange the integrals without justifying them (c.f. Fubini's
+Theorem). Equation (13) can be rewritten as
+
+$$L(p, q) = \mathbb E_{q(\theta_{\le T})} \left(\log p(\theta_{\le T}) + \sum_i \mathbb E_{q(z_i) p(\theta_{> T})} \log {p(x_i, z_i | \theta) \over q(z_i)} - \log q(\theta_{\le T})\right).$$
+
+Note that unlike the derivation of the mean-field approximation, we keep
+the $- \mathbb E_{q(z)} \log q(z)$ term even though we are only
+interested in $\theta$ at this stage. This is again due to the problem
+of infinities: as in the computation of $S$, we would like to cancel out
+some undesirable unbounded terms using $q(z)$. We now have
+
+$$\log q(\theta_{\le T}) = \log p(\theta_{\le T}) + \sum_i \mathbb E_{q(z_i) p(\theta_{> T})} \log {p(x_i, z_i | \theta) \over q(z_i)} + C.$$
+
+By plugging in $q(z = k)$ we obtain
+
+$$\log q(\theta_{\le T}) = \log p(\theta_{\le T}) + \sum_i \sum_{k = 1 : \infty} q(z_i = k) \left(\mathbb E_{p(\theta_{> T})} \log {p(x_i, z_i = k | \theta) \over q(z_i = k)} - \mathbb E_{q(\theta)} \log {p(x_i, z_i = k | \theta) \over q(z_i = k)}\right) + C.$$
+
+Again, we separate the $v_k$'s and the $\eta_k$'s to obtain
+
+$$\log q(v_{\le T}) = \log p(v_{\le T}) + \sum_i \sum_k q(z_i = k) \left(\mathbb E_{p(v_{> T})} \log p(z_i = k | v) - \mathbb E_{q(v)} \log p (z_i = k | v)\right) + C.$$
+
+Denote by $D_k$ the difference between the two expectations on the right
+hand side. It is easy to show that
+
+$$D_k = \begin{cases}
+\log(1 - v_1) ... (1 - v_{k - 1}) v_k - \mathbb E_{q(v)} \log (1 - v_1) ... (1 - v_{k - 1}) v_k & k \le T\\
+\log(1 - v_1) ... (1 - v_T) - \mathbb E_{q(v)} \log (1 - v_1) ... (1 - v_T) & k > T
+\end{cases}$$
+
+so $D_k$ is bounded. With this we can derive the update for
+$\phi^{v, 1}$ and $\phi^{v, 2}$:
+
+$$\begin{aligned}
+\phi^{v, 1}_k &= \alpha + \sum_i q(z_i = k) \\
+\phi^{v, 2}_k &= \beta + \sum_i q(z_i > k),
+\end{aligned}$$
+
+where $q(z_i > k)$ can be computed as in (14).
+
+When it comes to $\eta$, we have
+
+$$\log q(\eta_{\le T}) = \log p(\eta_{\le T}) + \sum_i \sum_{k = 1 : \infty} q(z_i = k) (\mathbb E_{p(\eta_k)} \log p(x_i | \eta_k) - \mathbb E_{q(\eta_k)} \log p(x_i | \eta_k)).$$
+
+Since $q(\eta_k) = p(\eta_k)$ for $k > T$, the inner sum on the right
+hand side is a finite sum over $k = 1 : T$. By factorising
+$q(\eta_{\le T})$ and $p(\eta_{\le T})$, we have
+
+$$\log q(\eta_k) = \log p(\eta_k) + \sum_i q(z_i = k) \log p(x_i | \eta_k) + C,$$
+
+which gives us
+
+$$\begin{aligned}
+\phi^{\eta, 1}_k &= \chi + \sum_i q(z_i = k) T(x_i) \\
+\phi^{\eta, 2}_k &= \nu + \sum_i q(z_i = k).
+\end{aligned}$$
+
+** SVI
+ :PROPERTIES:
+ :CUSTOM_ID: svi
+ :END:
+In variational inference, the computation of some parameters is more
+expensive than that of others.
+
+For example, the computation of M-step is often much more expensive than
+that of E-step:
+
+- In the vanilla mixture models with the EM algorithm, the update of
+ $\theta$ requires the computation of $r_{ik}$ for all $i = 1 : m$, see
+ Eq (2.3).
+- In the fully Bayesian mixture model with mean field approximation, the
+ updates of $\phi^\pi$ and $\phi^\eta$ require the computation of a sum
+ over all samples (see Eq (9.3)(9.7)(9.9)).
+
+Similarly, in pLSA2 (resp. LDA), the updates of $\eta_k$ (resp.
+$\phi^{\eta_k}$) require a sum over $\ell = 1 : n_d$, whereas the
+updates of other parameters do not.
+
+In these cases, the parameters that require more computation are called
+global and the other ones local.
+
+Stochastic variational inference (SVI, Hoffman-Blei-Wang-Paisley 2012)
+addresses this problem in the same way as stochastic gradient descent
+improves efficiency of gradient descent.
+
+Each time SVI picks a sample, updates the corresponding local
+parameters, and computes the update of the global parameters as if all
+the $m$ samples are identical to the picked sample. Finally it
+incorporates this global parameter value into previous computations of
+the global parameters, by means of an exponential moving average.
+
+As an example, here's SVI applied to LDA:
+
+1. Set $t = 1$.
+
+2. Pick $\ell$ uniformly from $\{1, 2, ..., n_d\}$.
+
+3. Repeat until convergence:
+
+ 1. Compute $(r_{\ell i k})_{i = 1 : m, k = 1 : n_z}$ using (10).
+ 2. Compute $(\phi^{\pi_\ell}_k)_{k = 1 : n_z}$ using (11).
+
+4. Compute $(\tilde \phi^{\eta_k}_w)_{k = 1 : n_z, w = 1 : n_x}$ using
+ the following formula (compare with (12))
+ $$\tilde \phi^{\eta_k}_w = \beta + n_d \sum_{i} r_{\ell i k} 1_{x_{\ell i} = w}$$
+
+5. Update the exponential moving average
+ $(\phi^{\eta_k}_w)_{k = 1 : n_z, w = 1 : n_x}$:
+ $$\phi^{\eta_k}_w = (1 - \rho_t) \phi^{\eta_k}_w + \rho_t \tilde \phi^{\eta_k}_w$$
+
+6. Increment $t$ and go back to Step 2.
+
+In the original paper, $\rho_t$ needs to satisfy some conditions that
+guarantee convergence of the global parameters:
+
+$$\begin{aligned}
+\sum_t \rho_t = \infty \\
+\sum_t \rho_t^2 < \infty
+\end{aligned}$$
+
+and the choice made there is
+
+$$\rho_t = (t + \tau)^{-\kappa}$$
+
+for some $\kappa \in (.5, 1]$ and $\tau \ge 0$.
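
A tiny sketch of the step-size schedule and the averaging in step 5; the particular values of $\tau$ and $\kappa$ are my own picks within the stated ranges:

```python
def svi_step_size(t, tau=1.0, kappa=0.7):
    """rho_t = (t + tau)^(-kappa): the sum of rho_t diverges
    (kappa <= 1) while the sum of rho_t^2 converges (kappa > 1/2)."""
    return (t + tau) ** (-kappa)

def ema_update(old, new, rho):
    """The exponential moving average of step 5."""
    return (1 - rho) * old + rho * new
```

Each outer iteration then does =phi = ema_update(phi, phi_tilde, svi_step_size(t))=.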
+
+** AEVB
+ :PROPERTIES:
+ :CUSTOM_ID: aevb
+ :END:
+SVI adds to variational inference stochastic updates similar to
+stochastic gradient descent. Why not just use neural networks with
+stochastic gradient descent while we are at it? Autoencoding variational
+Bayes (AEVB) (Kingma-Welling 2013) is such an algorithm.
+
+Let's look back to the original problem of maximising the ELBO:
+
+$$\max_{\theta, q} \sum_{i = 1 : m} L(p(x_i | z_i; \theta) p(z_i; \theta), q(z_i))$$
+
+Since for any given $\theta$, the optimal $q(z_i)$ is the posterior
+$p(z_i | x_i; \theta)$, the problem reduces to
+
+$$\max_{\theta} \sum_i L(p(x_i | z_i; \theta) p(z_i; \theta), p(z_i | x_i; \theta))$$
+
+Let us assume $p(z_i; \theta) = p(z_i)$ is independent of $\theta$ to
+simplify the problem. In the old mixture models, we have
+$p(x_i | z_i; \theta) = p(x_i; \eta_{z_i})$, which we can generalise to
+$p(x_i; f(\theta, z_i))$ for some function $f$. Using Bayes' theorem we
+can also write down $p(z_i | x_i; \theta) = q(z_i; g(\theta, x_i))$ for
+some function $g$. So the problem becomes
+
+$$\max_{\theta} \sum_i L(p(x_i; f(\theta, z_i)) p(z_i), q(z_i; g(\theta, x_i)))$$
+
+In some cases $g$ can be hard to write down or compute. AEVB addresses
+this problem by replacing $g(\theta, x_i)$ with a neural network
+$g_\phi(x_i)$ with input $x_i$ and some separate parameters $\phi$. It
+also replaces $f(\theta, z_i)$ with a neural network $f_\theta(z_i)$
+with input $z_i$ and parameters $\theta$. And now the problem becomes
+
+$$\max_{\theta, \phi} \sum_i L(p(x_i; f_\theta(z_i)) p(z_i), q(z_i; g_\phi(x_i))).$$
+
+The objective function can be written as
+
+$$\sum_i \mathbb E_{q(z_i; g_\phi(x_i))} \log p(x_i; f_\theta(z_i)) - D(q(z_i; g_\phi(x_i)) || p(z_i)).$$
+
+The first term is called the negative reconstruction error, like the
+$- \|decoder(encoder(x)) - x\|$ in autoencoders, which is where the
+"autoencoder" in the name comes from.
+
+The second term is a regularisation term that penalises the posterior
+$q(z_i)$ that is very different from the prior $p(z_i)$. We assume this
+term can be computed analytically.
+
+So only the first term requires computing.
+
+We can approximate the sum over $i$ in a similar fashion as SVI: pick
+$j$ uniformly randomly from $\{1 ... m\}$ and treat the whole dataset as
+$m$ replicates of $x_j$, and approximate the expectation using
+Monte-Carlo:
+
+$$U(x_i, \theta, \phi) := \sum_i \mathbb E_{q(z_i; g_\phi(x_i))} \log p(x_i; f_\theta(z_i)) \approx m \mathbb E_{q(z_j; g_\phi(x_j))} \log p(x_j; f_\theta(z_j)) \approx {m \over L} \sum_{\ell = 1}^L \log p(x_j; f_\theta(z_{j, \ell})),$$
+
+where each $z_{j, \ell}$ is sampled from $q(z_j; g_\phi(x_j))$.
+
+But then it is not easy to approximate the gradient over $\phi$. One can
+use the log trick as in policy gradients, but it has the problem of high
+variance. In policy gradients this is overcome by using baseline
+subtractions. In the AEVB paper it is tackled with the
+reparameterisation trick.
+
+Assume there exists a transformation $T_\phi$ and a random variable
+$\epsilon$ with distribution independent of $\phi$ or $\theta$, such
+that $T_\phi(x_i, \epsilon)$ has distribution $q(z_i; g_\phi(x_i))$. In
+this case we can rewrite $U(x, \phi, \theta)$ as
+
+$$\sum_i \mathbb E_{\epsilon \sim p(\epsilon)} \log p(x_i; f_\theta(T_\phi(x_i, \epsilon))).$$
+
+This way one can use Monte-Carlo to approximate
+$\nabla_\phi U(x, \phi, \theta)$:
+
+$$\nabla_\phi U(x, \phi, \theta) \approx {m \over L} \sum_{\ell = 1 : L} \nabla_\phi \log p(x_j; f_\theta(T_\phi(x_j, \epsilon_\ell))),$$
+
+where each $\epsilon_{\ell}$ is sampled from $p(\epsilon)$. The
+approximation of $U(x, \phi, \theta)$ itself can be done similarly.
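
A minimal numeric illustration of the reparameterisation trick on a toy objective of my own choosing (not from the paper): to estimate $\nabla_\mu \mathbb E_{z \sim N(\mu, \sigma^2)} z^2$, write $z = \mu + \sigma \epsilon$ and differentiate inside the expectation; the analytic answer is $2\mu$.

```python
import random
import statistics

def reparam_grad_mu(mu, sigma, n=200000, seed=0):
    """Monte-Carlo estimate of d/dmu E_{z ~ N(mu, sigma^2)}[z^2]
    using z = mu + sigma * eps with eps ~ N(0, 1), so that
    d/dmu z^2 = 2 z can be averaged over samples of eps."""
    rng = random.Random(seed)
    return statistics.fmean(
        2.0 * (mu + sigma * rng.gauss(0.0, 1.0)) for _ in range(n))
```

Because the sampling distribution of $\epsilon$ does not depend on $\mu$, the gradient passes through the expectation, avoiding the high variance of the log trick.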
+
+*** VAE
+ :PROPERTIES:
+ :CUSTOM_ID: vae
+ :END:
+As an example of AEVB, the paper introduces variational autoencoder
+(VAE), with the following instantiations:
+
+- The prior $p(z_i) = N(0, I)$ is standard normal, thus independent of
+ $\theta$.
+- The distribution $p(x_i; \eta)$ is either Gaussian or categorical.
+- The distribution $q(z_i; \mu, \Sigma)$ is Gaussian with diagonal
+ covariance matrix. So
+  $g_\phi(x_i) = (\mu_\phi(x_i), \text{diag}(\sigma^2_\phi(x_i)_{1 : d}))$.
+ Thus in the reparameterisation trick $\epsilon \sim N(0, I)$ and
+ $T_\phi(x_i, \epsilon) = \epsilon \odot \sigma_\phi(x_i) + \mu_\phi(x_i)$,
+ where $\odot$ is elementwise multiplication.
+- The KL divergence can be easily computed analytically as
+ $- D(q(z_i; g_\phi(x_i)) || p(z_i)) = {d \over 2} + \sum_{j = 1 : d} \log\sigma_\phi(x_i)_j - {1 \over 2} \sum_{j = 1 : d} (\mu_\phi(x_i)_j^2 + \sigma_\phi(x_i)_j^2)$.
+
+With this, one can use backprop to maximise the ELBO.
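
The closed-form KL term is simple enough to implement directly; a small stdlib sketch of the formula above:

```python
import math

def neg_kl_diag_gauss(mu, sigma):
    """-D( N(mu, diag(sigma^2)) || N(0, I) ) computed analytically,
    the regularisation term of the VAE objective."""
    d = len(mu)
    return (d / 2
            + sum(math.log(s) for s in sigma)
            - 0.5 * sum(m * m + s * s for m, s in zip(mu, sigma)))
```

At $\mu = 0$, $\sigma = 1$ the approximate posterior coincides with the prior and the term vanishes; elsewhere it is negative, penalising deviation from the prior.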
+
+*** Fully Bayesian AEVB
+ :PROPERTIES:
+ :CUSTOM_ID: fully-bayesian-aevb
+ :END:
+Let us turn to fully Bayesian version of AEVB. Again, we first recall
+the ELBO of the fully Bayesian mixture models:
+
+$$L(p(x, z, \pi, \eta; \alpha, \beta), q(z, \pi, \eta; r, \phi)) = L(p(x | z, \eta) p(z | \pi) p(\pi; \alpha) p(\eta; \beta), q(z; r) q(\eta; \phi^\eta) q(\pi; \phi^\pi)).$$
+
+We write $\theta = (\pi, \eta)$, rewrite $\alpha := (\alpha, \beta)$,
+$\phi := r$, and $\gamma := (\phi^\eta, \phi^\pi)$. Furthermore, as in
+the half-Bayesian version we assume $p(z | \theta) = p(z)$, i.e. $z$
+does not depend on $\theta$. Similarly we also assume
+$p(\theta; \alpha) = p(\theta)$. Now we have
+
+$$L(p(x, z, \theta; \alpha), q(z, \theta; \phi, \gamma)) = L(p(x | z, \theta) p(z) p(\theta), q(z; \phi) q(\theta; \gamma)).$$
+
+And the objective is to maximise it over $\phi$ and $\gamma$. We no
+longer maximise over $\theta$, because it is now a random variable, like
+$z$. Now let us transform it to a neural network model, as in the
+half-Bayesian case:
+
+$$L\left(\left(\prod_{i = 1 : m} p(x_i; f_\theta(z_i))\right) \left(\prod_{i = 1 : m} p(z_i) \right) p(\theta), \left(\prod_{i = 1 : m} q(z_i; g_\phi(x_i))\right) q(\theta; h_\gamma(x))\right).$$
+
+where $f_\theta$, $g_\phi$ and $h_\gamma$ are neural networks. Again, by
+separating out KL-divergence terms, the above formula becomes
+
+$$\sum_i \mathbb E_{q(\theta; h_\gamma(x))q(z_i; g_\phi(x_i))} \log p(x_i; f_\theta(z_i)) - \sum_i D(q(z_i; g_\phi(x_i)) || p(z_i)) - D(q(\theta; h_\gamma(x)) || p(\theta)).$$
+
+Again, we assume the latter two terms can be computed analytically.
+Using reparameterisation trick, we write
+
+$$\begin{aligned}
+\theta &= R_\gamma(\zeta, x) \\
+z_i &= T_\phi(\epsilon, x_i)
+\end{aligned}$$
+
+for some transformations $R_\gamma$ and $T_\phi$ and random variables
+$\zeta$ and $\epsilon$ so that the output has the desired distributions.
+
+Then the first term can be written as
+
+$$\mathbb E_{\zeta, \epsilon} \log p(x_i; f_{R_\gamma(\zeta, x)} (T_\phi(\epsilon, x_i))),$$
+
+so that the gradients can be computed accordingly.
+
+Again, one may use Monte-Carlo to approximate this expectation.
+
+** References
+ :PROPERTIES:
+ :CUSTOM_ID: references
+ :END:
+
+- Attias, Hagai. "A variational baysian framework for graphical models."
+ In Advances in neural information processing systems, pp.
+ 209-215. 2000.
+- Bishop, Christopher M. Pattern recognition and machine learning.
+  Springer. 2006.
+- Blei, David M., and Michael I. Jordan. "Variational Inference for
+ Dirichlet Process Mixtures." Bayesian Analysis 1, no. 1 (March 2006):
+ 121--43. [[https://doi.org/10.1214/06-BA104]].
+- Blei, David M., Andrew Y. Ng, and Michael I. Jordan. "Latent Dirichlet
+ Allocation." Journal of Machine Learning Research 3, no. Jan (2003):
+ 993--1022.
+- Hofmann, Thomas. "Latent Semantic Models for Collaborative Filtering."
+ ACM Transactions on Information Systems 22, no. 1 (January 1, 2004):
+ 89--115. [[https://doi.org/10.1145/963770.963774]].
+- Hofmann, Thomas. "Learning the similarity of documents: An
+ information-geometric approach to document retrieval and
+ categorization." In Advances in neural information processing systems,
+ pp. 914-920. 2000.
+- Hoffman, Matt, David M. Blei, Chong Wang, and John Paisley.
+ "Stochastic Variational Inference." ArXiv:1206.7051 [Cs, Stat], June
+ 29, 2012. [[http://arxiv.org/abs/1206.7051]].
+- Kingma, Diederik P., and Max Welling. "Auto-Encoding Variational
+ Bayes." ArXiv:1312.6114 [Cs, Stat], December 20, 2013.
+ [[http://arxiv.org/abs/1312.6114]].
+- Kurihara, Kenichi, Max Welling, and Nikos Vlassis. "Accelerated
+ variational Dirichlet process mixtures." In Advances in neural
+ information processing systems, pp. 761-768. 2007.
+- Sudderth, Erik Blaine. "Graphical models for visual object recognition
+ and tracking." PhD diss., Massachusetts Institute of Technology, 2006.
diff --git a/posts/2019-03-13-a-tail-of-two-densities.org b/posts/2019-03-13-a-tail-of-two-densities.org
new file mode 100644
index 0000000..783e0c5
--- /dev/null
+++ b/posts/2019-03-13-a-tail-of-two-densities.org
@@ -0,0 +1,1304 @@
+#+title: A Tail of Two Densities
+
+#+date: <2019-03-13>
+
+This is Part 1 of a two-part post where I give an introduction to the
+mathematics of differential privacy.
+
+Practically speaking,
+[[https://en.wikipedia.org/wiki/Differential_privacy][differential
+privacy]] is a technique of perturbing database queries so that query
+results do not leak too much information while still being relatively
+accurate.
+
+This post however focuses on the mathematical aspects of differential
+privacy, which is a study of
+[[https://en.wikipedia.org/wiki/Concentration_inequality][tail bounds]]
+of the divergence between two probability measures, with the end goal of
+applying it to
+[[https://en.wikipedia.org/wiki/Stochastic_gradient_descent][stochastic
+gradient descent]]. This post should be suitable for anyone familiar
+with probability theory.
+
+I start with the definition of \(\epsilon\)-differential privacy
+(corresponding to max divergence), followed by
+\((\epsilon, \delta)\)-differential privacy (a.k.a. approximate
+differential privacy, corresponding to the \(\delta\)-approximate max
+divergence). I show a characterisation of the
+\((\epsilon, \delta)\)-differential privacy as conditioned
+\(\epsilon\)-differential privacy. Also, as examples, I illustrate the
+\(\epsilon\)-dp with Laplace mechanism and, using some common tail bounds,
+the approximate dp with the Gaussian mechanism.
+
+Then I continue to show the effect of combinatorial and sequential
+compositions of randomised queries (called mechanisms) on privacy by
+stating and proving the composition theorems for differential privacy,
+as well as the effect of mixing mechanisms, by presenting the
+subsampling theorem (a.k.a. amplification theorem).
+
+In [[/posts/2019-03-14-great-but-manageable-expectations.html][Part 2]],
+I discuss the Rényi differential privacy, corresponding to the Rényi
+divergence, a study of the
+[[https://en.wikipedia.org/wiki/Moment-generating_function][moment
+generating functions]] of the divergence between probability measures to
+derive the tail bounds.
+
+Like in Part 1, I prove a composition theorem and a subsampling theorem.
+
+I also attempt to reproduce a seemingly better moment bound for the
+Gaussian mechanism with subsampling, with one intermediate step which I
+am not able to prove.
+
+After that I explain the Tensorflow implementation of differential
+privacy in its
+[[https://github.com/tensorflow/privacy/tree/master/privacy][Privacy]]
+module, which focuses on the differentially private stochastic gradient
+descent algorithm (DP-SGD).
+
+Finally I use the results from both Part 1 and Part 2 to obtain some
+privacy guarantees for composed subsampling queries in general, and for
+DP-SGD in particular. I also compare these privacy guarantees.
+
+*Acknowledgement*. I would like to thank
+[[http://stockholm.ai][Stockholm AI]] for introducing me to the subject
+of differential privacy. Thanks to Amir Hossein Rahnama for hosting the
+discussions at Stockholm AI. Thanks to (in chronological order) Reynaldo
+Boulogne, Martin Abedi, Ilya Mironov, Kurt Johansson, Mark Bun, Salil
+Vadhan, Jonathan Ullman, Yuanyuan Xu and Yiting Li for communication and
+discussions. Also thanks to the
+[[https://www.reddit.com/r/MachineLearning/][r/MachineLearning]]
+community for comments and suggestions which result in improvement of
+readability of this post. The research was done while working at
+[[https://www.kth.se/en/sci/institutioner/math][KTH Department of
+Mathematics]].
+
+/If you are confused by any notations, ask me or try
+[[/notations.html][this]]. This post (including both Part 1 and Part 2)
+is licensed under [[https://creativecommons.org/licenses/by-sa/4.0/][CC
+BY-SA]] and [[https://www.gnu.org/licenses/fdl.html][GNU FDL]]./
+
+** The gist of differential privacy
+ :PROPERTIES:
+ :CUSTOM_ID: the-gist-of-differential-privacy
+ :END:
+If you only have one minute, here is what differential privacy is about:
+
+Let \(p\) and \(q\) be two probability densities, we define the /divergence
+variable/[fn:1] of \((p, q)\) to be
+
+\[L(p || q) := \log {p(\xi) \over q(\xi)}\]
+
+where \(\xi\) is a random variable distributed according to \(p\).
+
+Roughly speaking, differential privacy is the study of the tail bound of
+\(L(p || q)\): for certain \(p\)s and \(q\)s, and for \(\epsilon > 0\), find
+\(\delta(\epsilon)\) such that
+
+\[\mathbb P(L(p || q) > \epsilon) < \delta(\epsilon),\]
+
+where \(p\) and \(q\) are the laws of the outputs of a randomised function
+on two very similar inputs. Moreover, to make matters even simpler, only
+three situations need to be considered:
+
+1. (General case) \(q\) is in the form of \(q(y) = p(y + \Delta)\) for some
+ bounded constant \(\Delta\).
+2. (Compositions) \(p\) and \(q\) are combinatorial or sequential
+ compositions of some simpler \(p_i\)'s and \(q_i\)'s respectively
+3. (Subsampling) \(p\) and \(q\) are mixtures / averages of some simpler
+ \(p_i\)'s and \(q_i\)'s respectively
+
+In applications, the inputs are databases and the randomised functions
+are queries with an added noise, and the tail bounds give privacy
+guarantees. When it comes to gradient descent, the input is the training
+dataset, and the query updates the parameters, and privacy is achieved
+by adding noise to the gradients.
+
+Now if you have an hour...
+
+** \(\epsilon\)-dp
+ :PROPERTIES:
+ :CUSTOM_ID: epsilon-dp
+ :END:
+*Definition (Mechanisms)*. Let \(X\) be a space with a metric
+\(d: X \times X \to \mathbb N\). A /mechanism/ \(M\) is a function that
+takes \(x \in X\) as input and outputs a random variable on \(Y\).
+
+In this post, \(X = Z^m\) is the space of datasets of \(m\) rows for some
+integer \(m\), where each item resides in some space \(Z\). In this case the
+distance \(d(x, x') := \#\{i: x_i \neq x'_i\}\) is the number of rows that
+differ between \(x\) and \(x'\).
+
+Normally we have a query \(f: X \to Y\), and construct the mechanism \(M\)
+from \(f\) by adding noise:
+
+\[M(x) := f(x) + \text{noise}.\]
+
+Later, we will also consider mechanisms constructed from composition or
+mixture of other mechanisms.
+
+In this post \(Y = \mathbb R^d\) for some \(d\).
+
+*Definition (Sensitivity)*. Let \(f: X \to \mathbb R^d\) be a function.
+The /sensitivity/ \(S_f\) of \(f\) is defined as
+
+\[S_f := \sup_{x, x' \in X: d(x, x') = 1} \|f(x) - f(x')\|_2,\]
+
+where \(\|y\|_2 = \sqrt{y_1^2 + ... + y_d^2}\) is the \(\ell^2\)-norm.
+
+*Definition (Differential Privacy)*. A mechanism \(M\) is called
+\(\epsilon\)/-differentially private/ (\(\epsilon\)-dp) if it satisfies the
+following condition: for all \(x, x' \in X\) with \(d(x, x') = 1\), and for
+all measurable sets \(S \subset \mathbb R^d\),
+
+\[\mathbb P(M(x) \in S) \le e^\epsilon \mathbb P(M(x') \in S). \qquad (1)\]
+
+Practically speaking, this means that given the results of the perturbed
+query on two known databases that differ by one row, it is hard to
+determine which result is from which database.
+
+An example of \(\epsilon\)-dp mechanism is the Laplace mechanism.
+
+*Definition*. The /Laplace distribution/ over \(\mathbb R\) with parameter
+\(b > 0\) has probability density function
+
+\[f_{\text{Lap}(b)}(x) = {1 \over 2 b} e^{- {|x| \over b}}.\]
+
+*Definition*. Let \(d = 1\). The /Laplace mechanism/ is defined by
+
+\[M(x) = f(x) + \text{Lap}(b).\]
+
+*Claim*. The Laplace mechanism with
+
+\[b \ge \epsilon^{-1} S_f \qquad (1.5)\]
+
+is \(\epsilon\)-dp.
+
+*Proof*. Quite straightforward. Let \(p\) and \(q\) be the laws of \(M(x)\)
+and \(M(x')\) respectively.
+
+\[{p (y) \over q (y)} = {f_{\text{Lap}(b)} (y - f(x)) \over f_{\text{Lap}(b)} (y - f(x'))} = \exp(b^{-1} (|y - f(x')| - |y - f(x)|))\]
+
+Using the triangle inequality \(|A| - |B| \le |A - B|\) on the right hand
+side, we have
+
+\[{p (y) \over q (y)} \le \exp(b^{-1} (|f(x) - f(x')|)) \le \exp(\epsilon)\]
+
+where in the last step we use the condition (1.5). \(\square\)
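
A small stdlib sketch of the Laplace mechanism, with an inverse-CDF sampler for the noise; the counting-query example and parameter values are my own.

```python
import math
import random

def laplace_noise(b, rng):
    """Sample Lap(b) by inverse CDF: u uniform on (-1/2, 1/2)."""
    u = rng.random() - 0.5
    return -b * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def laplace_mechanism(f, x, sensitivity, epsilon, rng):
    """Release f(x) + Lap(b) with b = sensitivity / epsilon, which by
    the claim above is epsilon-dp."""
    return f(x) + laplace_noise(sensitivity / epsilon, rng)

def lap_density(y, mean, b):
    """Density of Lap(b) centred at mean, as in the proof."""
    return math.exp(-abs(y - mean) / b) / (2.0 * b)
```

For a counting query (how many rows satisfy some predicate) the sensitivity is \(1\), and the density ratio \(p(y) / q(y)\) in the proof is bounded by \(\exp(|f(x) - f(x')| / b)\) for every \(y\).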
+
+** Approximate differential privacy
+ :PROPERTIES:
+ :CUSTOM_ID: approximate-differential-privacy
+ :END:
+Unfortunately, \(\epsilon\)-dp does not apply to the most commonly used
+noise, the Gaussian noise. To fix this, we need to relax the definition
+a bit.
+
+*Definition*. A mechanism \(M\) is said to be
+\((\epsilon, \delta)\)/-differentially private/ if for all \(x, x' \in X\)
+with \(d(x, x') = 1\) and for all measurable \(S \subset \mathbb R^d\)
+
+\[\mathbb P(M(x) \in S) \le e^\epsilon \mathbb P(M(x') \in S) + \delta. \qquad (2)\]
+
+Immediately we see that the \((\epsilon, \delta)\)-dp is meaningful only
+if \(\delta < 1\).
+
+*** Indistinguishability
+ :PROPERTIES:
+ :CUSTOM_ID: indistinguishability
+ :END:
+To understand \((\epsilon, \delta)\)-dp, it is helpful to study
+\((\epsilon, \delta)\)-indistinguishability.
+
+*Definition*. Two probability measures \(p\) and \(q\) on the same space are
+called \((\epsilon, \delta)\)/-ind(istinguishable)/ if for all measurable
+sets \(S\):
+
+$$\begin{aligned}
+p(S) \le e^\epsilon q(S) + \delta, \qquad (3) \\
+q(S) \le e^\epsilon p(S) + \delta. \qquad (4)
+\end{aligned}$$
+
+As before, we also say that random variables \(\xi\) and \(\eta\) are
+\((\epsilon, \delta)\)-ind if their laws are \((\epsilon, \delta)\)-ind.
+When \(\delta = 0\), we call it \(\epsilon\)-ind.
+
+Immediately we have
+
+*Claim 0*. \(M\) is \((\epsilon, \delta)\)-dp (resp. \(\epsilon\)-dp) iff
+\(M(x)\) and \(M(x')\) are \((\epsilon, \delta)\)-ind (resp. \(\epsilon\)-ind)
+for all \(x\) and \(x'\) with distance \(1\).
+
+*Definition (Divergence Variable)*. Let \(p\) and \(q\) be two probability
+measures. Let \(\xi\) be a random variable distributed according to \(p\),
+we define a random variable \(L(p || q)\) by
+
+\[L(p || q) := \log {p(\xi) \over q(\xi)},\]
+
+and call it the /divergence variable/ of \((p, q)\).
+
+One interesting and readily verifiable fact is
+
+\[\mathbb E L(p || q) = D(p || q)\]
+
+where \(D\) is the
+[[https://en.wikipedia.org/wiki/Kullback%E2%80%93Leibler_divergence][KL-divergence]].
+
+*Claim 1*. If
+
+$$\begin{aligned}
+\mathbb P(L(p || q) \le \epsilon) &\ge 1 - \delta, \qquad(5) \\
+\mathbb P(L(q || p) \le \epsilon) &\ge 1 - \delta
+\end{aligned}$$
+
+then \(p\) and \(q\) are \((\epsilon, \delta)\)-ind.
+
+*Proof*. We verify (3), and (4) can be shown in the same way. Let
+\(A := \{y \in Y: \log {p(y) \over q(y)} > \epsilon\}\), then by (5) we
+have
+
+\[p(A) \le \delta.\]
+
+So
+
+\[p(S) = p(S \cap A) + p(S \setminus A) \le \delta + e^\epsilon q(S \setminus A) \le \delta + e^\epsilon q(S).\]
+
+\(\square\)
+
+This Claim translates differential privacy into a tail bound on the
+divergence variable, and for the rest of this post all dp results are
+obtained by estimating this tail bound.
+
+In the following we discuss the converse of Claim 1. The discussions are
+rather technical, and readers can skip to the
+[[#back-to-approximate-differential-privacy][next subsection]] on first
+reading.
+
+The converse of Claim 1 is not true.
+
+*Claim 2*. There exists \(\epsilon, \delta > 0\), and \(p\) and \(q\) that are
+\((\epsilon, \delta)\)-ind, such that
+
+$$\begin{aligned}
+\mathbb P(L(p || q) \le \epsilon) &< 1 - \delta, \\
+\mathbb P(L(q || p) \le \epsilon) &< 1 - \delta
+\end{aligned}$$
+
+*Proof*. Here's an example. Let \(Y = \{0, 1\}\), and \(p(0) = q(1) = 2 / 5\)
+and \(p(1) = q(0) = 3 / 5\). Then it is not hard to verify that \(p\) and
+\(q\) are \((\log {4 \over 3}, {1 \over 3})\)-ind: just check (3) for all
+four possible \(S \subset Y\) and (4) holds by symmetry. On the other
+hand,
+
+\[\mathbb P(L(p || q) \le \log {4 \over 3}) = \mathbb P(L(q || p) \le \log {4 \over 3}) = {2 \over 5} < {2 \over 3}.\]
+
+\(\square\)
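+Since \(Y\) has only two points, the counterexample can be verified
+mechanically. Here is a small sketch checking (3) and (4) for all four
+subsets of \(Y\), and then checking that the tail condition (5) of
+Claim 1 fails:

```python
import math

# The counterexample of Claim 2 on Y = {0, 1}.
p = {0: 2/5, 1: 3/5}
q = {0: 3/5, 1: 2/5}
eps, delta = math.log(4/3), 1/3

def measure(m, S):
    return sum(m[y] for y in S)

# (epsilon, delta)-ind: inequalities (3) and (4) for every S subset of Y.
for S in [set(), {0}, {1}, {0, 1}]:
    assert measure(p, S) <= math.exp(eps) * measure(q, S) + delta + 1e-12
    assert measure(q, S) <= math.exp(eps) * measure(p, S) + delta + 1e-12

# ...yet the tail condition (5) fails: L(p || q) <= eps only at y = 0,
# an event of p-probability 2/5 < 1 - delta = 2/3.
tail = measure(p, {y for y in p if math.log(p[y] / q[y]) <= eps})
assert abs(tail - 2/5) < 1e-12 and tail < 1 - delta
```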
+
+A weaker version of the converse of Claim 1 is, however, true
+(Kasiviswanathan-Smith 2015):
+
+*Claim 3*. Let \(\alpha > 1\). If \(p\) and \(q\) are
+\((\epsilon, \delta)\)-ind, then
+
+\[\mathbb P(L(p || q) > \alpha \epsilon) < {1 \over 1 - \exp((1 - \alpha) \epsilon)} \delta.\]
+
+*Proof*. Define
+
+\[S = \{y: p(y) > e^{\alpha \epsilon} q(y)\}.\]
+
+Then we have
+
+\[e^{\alpha \epsilon} q(S) < p(S) \le e^\epsilon q(S) + \delta,\]
+
+where the first inequality is due to the definition of \(S\), and the
+second due to the \((\epsilon, \delta)\)-ind. Therefore
+
+\[q(S) \le {\delta \over e^{\alpha \epsilon} - e^\epsilon}.\]
+
+Using the \((\epsilon, \delta)\)-ind again we have
+
+\[p(S) \le e^\epsilon q(S) + \delta \le {1 \over 1 - e^{(1 - \alpha) \epsilon}} \delta.\]
+
+\(\square\)
+
+This bound can be quite bad if \(\epsilon\) is small, since then the
+factor \((1 - \exp((1 - \alpha) \epsilon))^{-1}\) blows up.
+
+To prove the composition theorems in the next section, we need a
+condition better than that in Claim 1 so that we can go back and forth
+between indistinguishability and such condition. In other words, we need
+a /characterisation/ of indistinguishability.
+
+Let us take a careful look at the condition in Claim 1 and call it *C1*:
+
+*C1*. \(\mathbb P(L(p || q) \le \epsilon) \ge 1 - \delta\) and
+\(\mathbb P(L(q || p) \le \epsilon) \ge 1 - \delta\)
+
+It is equivalent to
+
+*C2*. there exist events \(A, B \subset Y\) with probabilities \(p(A)\) and
+\(q(B)\) at least \(1 - \delta\) such that
+\(\log p(y) - \log q(y) \le \epsilon\) for all \(y \in A\) and
+\(\log q(y) - \log p(y) \le \epsilon\) for all \(y \in B\).
+
+A similar-looking condition to *C2* is the following:
+
+*C3*. Let \(\Omega\) be the
+[[https://en.wikipedia.org/wiki/Probability_space#Definition][underlying
+probability space]]. There exist two events \(E, F \subset \Omega\) with
+\(\mathbb P(E), \mathbb P(F) \ge 1 - \delta\), such that
+\(|\log p_{|E}(y) - \log q_{|F}(y)| \le \epsilon\) for all \(y \in Y\).
+
+Here \(p_{|E}\) (resp. \(q_{|F}\)) is \(p\) (resp. \(q\)) conditioned on event
+\(E\) (resp. \(F\)).
+
+*Remark*. Note that the events in *C2* and *C3* are in different spaces,
+and therefore we can not write \(p_{|E}(S)\) as \(p(S | E)\) or \(q_{|F}(S)\)
+as \(q(S | F)\). In fact, if we let \(E\) and \(F\) in *C3* be subsets of \(Y\)
+with \(p(E), q(F) \ge 1 - \delta\) and assume \(p\) and \(q\) have the same
+supports, then *C3* degenerates to a stronger condition than *C2*.
+Indeed, in this case \(p_E(y) = p(y) 1_{y \in E}\) and
+\(q_F(y) = q(y) 1_{y \in F}\), and so \(p_E(y) \le e^\epsilon q_F(y)\)
+forces \(E \subset F\). We also obtain \(F \subset E\) in the same way. This
+gives us \(E = F\), and *C3* becomes *C2* with \(A = B = E = F\).
+
+As it turns out, *C3* is the condition we need.
+
+*Claim 4*. Two probability measures \(p\) and \(q\) are
+\((\epsilon, \delta)\)-ind if and only if *C3* holds.
+
+*Proof* (Murtagh-Vadhan 2018). The "if" direction is proved in the same
+way as Claim 1. Without loss of generality we may assume
+\(\mathbb P(E) = \mathbb P(F) \ge 1 - \delta\): if, say, \(F\) has higher
+probability than \(E\), we can substitute \(F\) with a subset of \(F\) that
+has the same probability as \(E\) (with a possible enlargement of the
+probability space).
+
+Let \(\xi \sim p\) and \(\eta \sim q\) be two independent random variables,
+then
+
+$$\begin{aligned}
+p(S) &= \mathbb P(\xi \in S | E) \mathbb P(E) + \mathbb P(\xi \in S; E^c) \\
+&\le e^\epsilon \mathbb P(\eta \in S | F) \mathbb P(E) + \delta \\
+&= e^\epsilon \mathbb P(\eta \in S | F) \mathbb P(F) + \delta\\
+&\le e^\epsilon q(S) + \delta.
+\end{aligned}$$
+
+The "only-if" direction is more involved.
+
+We construct events \(E\) and \(F\) by constructing functions
+\(e, f: Y \to [0, \infty)\) satisfying the following conditions:
+
+1. \(0 \le e(y) \le p(y)\) and \(0 \le f(y) \le q(y)\) for all \(y \in Y\).
+2. \(|\log e(y) - \log f(y)| \le \epsilon\) for all \(y \in Y\).
+3. \(e(Y), f(Y) \ge 1 - \delta\).
+4. \(e(Y) = f(Y)\).
+
+Here for a set \(S \subset Y\), \(e(S) := \int_S e(y) dy\), and the same
+goes for \(f(S)\).
+
+Let \(\xi \sim p\) and \(\eta \sim q\). Then we define \(E\) and \(F\) by
+
+$$\begin{aligned}
+\mathbb P(E | \xi = y) &= e(y) / p(y), \\
+\mathbb P(F | \eta = y) &= f(y) / q(y).
+\end{aligned}$$
+
+*Remark inside proof*. This can seem a bit confusing. Intuitively, we
+can think of it this way when \(Y\) is finite: Recall a random variable on
+\(Y\) is a function from the probability space \(\Omega\) to \(Y\). Let event
+\(G_y \subset \Omega\) be defined as \(G_y = \xi^{-1} (y)\). We cut \(G_y\)
+into the disjoint union of \(E_y\) and \(G_y \setminus E_y\) such that
+\(\mathbb P(E_y) = e(y)\). Then \(E = \bigcup_{y \in Y} E_y\). So \(e(y)\) can
+be seen as the "density" of \(E\).
+
+Indeed, given \(E\) and \(F\) defined this way, we have
+
+\[p_{|E}(y) = {e(y) \over e(Y)} \le {\exp(\epsilon) f(y) \over e(Y)} = {\exp(\epsilon) f(y) \over f(Y)} = \exp(\epsilon) q_{|F}(y),\]
+
+and
+
+\[\mathbb P(E) = \int \mathbb P(E | \xi = y) p(y) dy = e(Y) \ge 1 - \delta,\]
+
+and the same goes for \(\mathbb P(F)\).
+
+What remains is to construct \(e(y)\) and \(f(y)\) satisfying the four
+conditions.
+
+Like in the proof of Claim 1, let \(S, T \subset Y\) be defined as
+
+$$\begin{aligned}
+S := \{y: p(y) > \exp(\epsilon) q(y)\},\\
+T := \{y: q(y) > \exp(\epsilon) p(y)\}.
+\end{aligned}$$
+
+Let
+
+$$\begin{aligned}
+e(y) &:= \exp(\epsilon) q(y) 1_{y \in S} + p(y) 1_{y \notin S}\\
+f(y) &:= \exp(\epsilon) p(y) 1_{y \in T} + q(y) 1_{y \notin T}. \qquad (6)
+\end{aligned}$$
+
+By checking them on the three disjoint subsets \(S\), \(T\), \((S \cup T)^c\),
+it is not hard to verify that the \(e(y)\) and \(f(y)\) constructed this way
+satisfy the first two conditions. They also satisfy the third condition:
+
+$$\begin{aligned}
+e(Y) &= 1 - (p(S) - \exp(\epsilon) q(S)) \ge 1 - \delta, \\
+f(Y) &= 1 - (q(T) - \exp(\epsilon) p(T)) \ge 1 - \delta.
+\end{aligned}$$
+
+If \(e(Y) = f(Y)\) then we are done. Otherwise, without loss of
+generality, assume \(e(Y) < f(Y)\); then all that remains is to reduce
+the value of \(f(y)\), while preserving Conditions 1, 2 and 3, until
+\(f(Y) = e(Y)\).
+
+As it turns out, this can be achieved by reducing \(f(y)\) on the set
+\(\{y \in Y: q(y) > p(y)\}\). To see this, let us rename the \(f(y)\)
+defined in (6) \(f_+(y)\), and construct \(f_-(y)\) by
+
+\[f_-(y) := p(y) 1_{y \in T} + (q(y) \wedge p(y)) 1_{y \notin T}.\]
+
+It is not hard to show that \(e(y)\) and \(f_-(y)\) not only satisfy
+Conditions 1-3, but also satisfy
+
+\[e(y) \ge f_-(y), \forall y \in Y,\]
+
+and thus \(e(Y) \ge f_-(Y)\). Therefore there exists an \(f\) that
+interpolates between \(f_-\) and \(f_+\) with \(f(Y) = e(Y)\). \(\square\)
+
+To prove the adaptive composition theorem for approximate differential
+privacy, we need a similar claim (we use the index shorthand
+\(\xi_{< i} := \xi_{1 : i - 1}\), and similarly for other notations):
+
+*Claim 5*. Let \(\xi_{1 : i}\) and \(\eta_{1 : i}\) be random variables. Let
+
+$$\begin{aligned}
+p_i(S | y_{1 : i - 1}) := \mathbb P(\xi_i \in S | \xi_{1 : i - 1} = y_{1 : i - 1})\\
+q_i(S | y_{1 : i - 1}) := \mathbb P(\eta_i \in S | \eta_{1 : i - 1} = y_{1 : i - 1})
+\end{aligned}$$
+
+be the conditional laws of \(\xi_i | \xi_{< i}\) and \(\eta_i | \eta_{< i}\)
+respectively. Then the following are equivalent:
+
+1. For any \(y_{< i} \in Y^{i - 1}\), \(p_i(\cdot | y_{< i})\) and
+ \(q_i(\cdot | y_{< i})\) are \((\epsilon, \delta)\)-ind
+
+2. There exist events \(E_i, F_i \subset \Omega\) with
+   \(\mathbb P(E_i | \xi_{<i} = y_{<i}) = \mathbb P(F_i | \eta_{<i} = y_{< i}) \ge 1 - \delta\)
+   for any \(y_{< i}\), such that \(p_{i | E_i}(\cdot | y_{< i})\) and
+   \(q_{i | F_i} (\cdot | y_{< i})\) are \(\epsilon\)-ind for any \(y_{< i}\),
+ where $$\begin{aligned}
+ p_{i | E_i}(S | y_{1 : i - 1}) := \mathbb P(\xi_i \in S | E_i, \xi_{1 : i - 1} = y_{1 : i - 1})\\
+ q_{i | F_i}(S | y_{1 : i - 1}) := \mathbb P(\eta_i \in S | F_i, \eta_{1 : i - 1} = y_{1 : i - 1})
+ \end{aligned}$$
+
+ are \(p_i\) and \(q_i\) conditioned on \(E_i\) and \(F_i\) respectively.
+
+*Proof*. Item 2 => Item 1: as in the Proof of Claim 4,
+
+$$\begin{aligned}
+p_i(S | y_{< i}) &= p_{i | E_i} (S | y_{< i}) \mathbb P(E_i | \xi_{< i} = y_{< i}) + p_{i | E_i^c}(S | y_{< i}) \mathbb P(E_i^c | \xi_{< i} = y_{< i}) \\
+&\le p_{i | E_i} (S | y_{< i}) \mathbb P(E_i | \xi_{< i} = y_{< i}) + \delta \\
+&= p_{i | E_i} (S | y_{< i}) \mathbb P(F_i | \eta_{< i} = y_{< i}) + \delta \\
+&\le e^\epsilon q_{i | F_i} (S | y_{< i}) \mathbb P(F_i | \eta_{< i} = y_{< i}) + \delta \\
+&= e^\epsilon q_i (S | y_{< i}) + \delta.
+\end{aligned}$$
+
+The other direction,
+\(q_i(S | y_{< i}) \le e^\epsilon p_i(S | y_{< i}) + \delta\), is shown
+in the same way.
+
+Item 1 => Item 2: as in the Proof of Claim 4 we construct \(e(y_{1 : i})\)
+and \(f(y_{1 : i})\) as "densities" of events \(E_i\) and \(F_i\).
+
+Let
+
+$$\begin{aligned}
+e(y_{1 : i}) &:= e^\epsilon q_i(y_i | y_{< i}) 1_{y_i \in S_i(y_{< i})} + p_i(y_i | y_{< i}) 1_{y_i \notin S_i(y_{< i})}\\
+f(y_{1 : i}) &:= e^\epsilon p_i(y_i | y_{< i}) 1_{y_i \in T_i(y_{< i})} + q_i(y_i | y_{< i}) 1_{y_i \notin T_i(y_{< i})}\\
+\end{aligned}$$
+
+where
+
+$$\begin{aligned}
+S_i(y_{< i}) = \{y_i \in Y: p_i(y_i | y_{< i}) > e^\epsilon q_i(y_i | y_{< i})\}\\
+T_i(y_{< i}) = \{y_i \in Y: q_i(y_i | y_{< i}) > e^\epsilon p_i(y_i | y_{< i})\}.
+\end{aligned}$$
+
+Then \(E_i\) and \(F_i\) are defined as
+
+$$\begin{aligned}
+\mathbb P(E_i | \xi_{\le i} = y_{\le i}) &= {e(y_{\le i}) \over p_i(y_i | y_{< i})},\\
+\mathbb P(F_i | \eta_{\le i} = y_{\le i}) &= {f(y_{\le i}) \over q_i(y_i | y_{< i})}.
+\end{aligned}$$
+
+The rest of the proof is almost the same as the proof of Claim 4.
+\(\square\)
+
+*** Back to approximate differential privacy
+ :PROPERTIES:
+ :CUSTOM_ID: back-to-approximate-differential-privacy
+ :END:
+By Claims 0 and 1 we have
+
+*Claim 6*. If for all \(x, x' \in X\) with distance \(1\)
+
+\[\mathbb P(L(M(x) || M(x')) \le \epsilon) \ge 1 - \delta,\]
+
+then \(M\) is \((\epsilon, \delta)\)-dp.
+
+Note that in the literature the divergence variable \(L(M(x) || M(x'))\)
+is also called the /privacy loss/.
+
+By Claim 0 and Claim 4 we have
+
+*Claim 7*. \(M\) is \((\epsilon, \delta)\)-dp if and only if for every
+\(x, x' \in X\) with distance \(1\), there exist events
+\(E, F \subset \Omega\) with \(\mathbb P(E) = \mathbb P(F) \ge 1 - \delta\)
+such that \(M(x) | E\) and \(M(x') | F\) are \(\epsilon\)-ind.
+
+We can further simplify the privacy loss \(L(M(x) || M(x'))\), by
+observing the translational and scaling invariance of \(L(\cdot||\cdot)\):
+
+$$\begin{aligned}
+L(\xi || \eta) &\overset{d}{=} L(\alpha \xi + \beta || \alpha \eta + \beta), \qquad \alpha \neq 0. \qquad (6.1)
+\end{aligned}$$
+
+With this and the definition
+
+\[M(x) = f(x) + \zeta\]
+
+for some random variable \(\zeta\), we have
+
+\[L(M(x) || M(x')) \overset{d}{=} L(\zeta || \zeta + f(x') - f(x)).\]
+
+Without loss of generality, we can consider \(f\) with sensitivity \(1\),
+for
+
+\[L(f(x) + S_f \zeta || f(x') + S_f \zeta) \overset{d}{=} L(S_f^{-1} f(x) + \zeta || S_f^{-1} f(x') + \zeta)\]
+
+so for any noise \(\zeta\) that achieves \((\epsilon, \delta)\)-dp for a
+function with sensitivity \(1\), we obtain the same privacy guarantee for
+an arbitrary function with sensitivity \(S_f\) by adding the noise
+\(S_f \zeta\).
+
+With Claim 6 we can show that the Gaussian mechanism is approximately
+differentially private. But first we need to define it.
+
+*Definition (Gaussian mechanism)*. Given a query \(f: X \to Y\), the
+/Gaussian mechanism/ \(M\) adds a Gaussian noise to the query:
+
+\[M(x) = f(x) + N(0, \sigma^2 I).\]
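+Here is a minimal sketch of the Gaussian mechanism (the query, the
+database and the value of \(\sigma\) are hypothetical placeholders;
+calibrating \(\sigma\) to \((\epsilon, \delta)\) is the content of
+Claim 9 below):

```python
import random

random.seed(0)

def gaussian_mechanism(f, x, sigma):
    """Release f(x) + N(0, sigma^2 I): i.i.d. Gaussian noise added to
    each coordinate of the query output."""
    return [v + random.gauss(0.0, sigma) for v in f(x)]

# Hypothetical counting query with sensitivity S_f = 1: adding or
# removing one row changes the count by at most 1.
def count_over_30(rows):
    return [float(sum(1 for age in rows if age > 30))]

database = [23, 45, 37, 61, 28, 34]
noisy = gaussian_mechanism(count_over_30, database, sigma=4.0)
assert len(noisy) == 1
```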
+
+Some tail bounds for the Gaussian distribution will be useful.
+
+*Claim 8 (Gaussian tail bounds)*. Let \(\xi \sim N(0, 1)\) be a standard
+normal random variable. Then for \(t > 0\)
+
+\[\mathbb P(\xi > t) < {1 \over \sqrt{2 \pi} t} e^{- {t^2 \over 2}}, \qquad (6.3)\]
+
+and
+
+\[\mathbb P(\xi > t) < e^{- {t^2 \over 2}}. \qquad (6.5)\]
+
+*Proof*. Both bounds are well known. The first can be proved using
+
+\[\int_t^\infty e^{- {y^2 \over 2}} dy < \int_t^\infty {y \over t} e^{- {y^2 \over 2}} dy.\]
+
+The second is shown using
+[[https://en.wikipedia.org/wiki/Chernoff_bound][Chernoff bound]]. For
+any random variable \(\xi\),
+
+\[\mathbb P(\xi > t) < {\mathbb E \exp(\lambda \xi) \over \exp(\lambda t)} = \exp(\kappa_\xi(\lambda) - \lambda t), \qquad (6.7)\]
+
+where \(\kappa_\xi(\lambda) = \log \mathbb E \exp(\lambda \xi)\) is the
+cumulant of \(\xi\). Since (6.7) holds for any \(\lambda\), we can get the
+best bound by minimising \(\kappa_\xi(\lambda) - \lambda t\) (a.k.a. the
+[[https://en.wikipedia.org/wiki/Legendre_transformation][Legendre
+transformation]]). When \(\xi\) is standard normal, we get (6.5).
+\(\square\)
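+Both bounds can be compared numerically against the exact tail
+\(\mathbb P(\xi > t) = {1 \over 2} \operatorname{erfc}(t / \sqrt 2)\); a
+quick sketch (the values of \(t\) are arbitrary) also shows that (6.3) is
+the sharper bound once \(t > (2 \pi)^{- 1/2}\):

```python
import math

def exact_tail(t):
    """P(xi > t) for xi ~ N(0, 1), via the complementary error function."""
    return 0.5 * math.erfc(t / math.sqrt(2))

def bound_63(t):
    """(6.3): (2 pi)^{-1/2} t^{-1} exp(-t^2 / 2), for t > 0."""
    return math.exp(-t * t / 2) / (math.sqrt(2 * math.pi) * t)

def bound_65(t):
    """(6.5): exp(-t^2 / 2), the Chernoff bound."""
    return math.exp(-t * t / 2)

for t in [0.5, 1.0, 2.0, 4.0]:
    assert exact_tail(t) < bound_65(t)
    if t >= 1:  # above t = 1 > (2 pi)^{-1/2}, (6.3) is sharper than (6.5)
        assert exact_tail(t) < bound_63(t) < bound_65(t)
```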
+
+*Remark*. We will use the Chernoff bound extensively in the second part
+of this post when considering Rényi differential privacy.
+
+*Claim 9*. The Gaussian mechanism on a query \(f\) is
+\((\epsilon, \delta)\)-dp, where
+
+\[\delta = \exp(- (\epsilon \sigma / S_f - (2 \sigma / S_f)^{-1})^2 / 2). \qquad (6.8)\]
+
+Conversely, to achieve \((\epsilon, \delta)\)-dp, we may set
+
+\[\sigma > \left(\epsilon^{-1} \sqrt{2 \log \delta^{-1}} + (2 \epsilon)^{- {1 \over 2}}\right) S_f \qquad (6.81)\]
+
+or
+
+\[\sigma > (\epsilon^{-1} (1 \vee \sqrt{(\log (2 \pi)^{-1} \delta^{-2})_+}) + (2 \epsilon)^{- {1 \over 2}}) S_f \qquad (6.82)\]
+
+or
+
+\[\sigma > \epsilon^{-1} \sqrt{\log e^\epsilon \delta^{-2}} S_f \qquad (6.83)\]
+
+or
+
+\[\sigma > \epsilon^{-1} (\sqrt{1 + \epsilon} \vee \sqrt{(\log e^\epsilon (2 \pi)^{-1} \delta^{-2})_+}) S_f. \qquad (6.84)\]
+
+*Proof*. As discussed before we only need to consider the case where
+\(S_f = 1\). Fix arbitrary \(x, x' \in X\) with \(d(x, x') = 1\). Let
+\(\zeta = (\zeta_1, ..., \zeta_d) \sim N(0, I_d)\).
+
+By Claim 6 it suffices to bound
+
+\[\mathbb P(L(M(x) || M(x')) > \epsilon)\]
+
+We have by the linear invariance of \(L\),
+
+\[L(M(x) || M(x')) = L(f(x) + \sigma \zeta || f(x') + \sigma \zeta) \overset{d}{=} L(\zeta|| \zeta + \Delta / \sigma),\]
+
+where \(\Delta := f(x') - f(x)\).
+
+Plugging in the Gaussian density, we have
+
+\[L(M(x) || M(x')) \overset{d}{=} \sum_i {\Delta_i \over \sigma} \zeta_i + \sum_i {\Delta_i^2 \over 2 \sigma^2} \overset{d}{=} {\|\Delta\|_2 \over \sigma} \xi + {\|\Delta\|_2^2 \over 2 \sigma^2},\]
+
+where \(\xi \sim N(0, 1)\).
+
+Hence
+
+\[\mathbb P(L(M(x) || M(x')) > \epsilon) = \mathbb P(\xi > {\sigma \over \|\Delta\|_2} \epsilon - {\|\Delta\|_2 \over 2 \sigma}).\]
+
+Since \(\|\Delta\|_2 \le S_f = 1\), we have
+
+\[\mathbb P(L(M(x) || M(x')) > \epsilon) \le \mathbb P(\xi > \sigma \epsilon - (2 \sigma)^{-1}).\]
+
+Thus the problem is reduced to the tail bound of a standard normal
+distribution, so we can use Claim 8. Note that we implicitly require
+\(\sigma > (2 \epsilon)^{- 1 / 2}\) here so that
+\(\sigma \epsilon - (2 \sigma)^{-1} > 0\) and we can use the tail bounds.
+
+Using (6.5) we have
+
+\[\mathbb P(L(M(x) || M(x')) > \epsilon) < \exp(- (\epsilon \sigma - (2 \sigma)^{-1})^2 / 2).\]
+
+This gives us (6.8).
+
+To bound the right hand side by \(\delta\), we require
+
+\[\epsilon \sigma - {1 \over 2 \sigma} > \sqrt{2 \log \delta^{-1}}. \qquad (6.91)\]
+
+Solving this inequality we have
+
+\[\sigma > {\sqrt{2 \log \delta^{-1}} + \sqrt{2 \log \delta^{-1} + 2 \epsilon} \over 2 \epsilon}.\]
+
+Using
+\(\sqrt{2 \log \delta^{-1} + 2 \epsilon} \le \sqrt{2 \log \delta^{-1}} + \sqrt{2 \epsilon}\),
+we can achieve the above inequality by having
+
+\[\sigma > \epsilon^{-1} \sqrt{2 \log \delta^{-1}} + (2 \epsilon)^{-{1 \over 2}}.\]
+
+This gives us (6.81).
+
+Alternatively, we can use the concavity of \(\sqrt{\cdot}\):
+
+\[(2 \epsilon)^{-1} (\sqrt{2 \log \delta^{-1}} + \sqrt{2 \log \delta^{-1} + 2 \epsilon}) \le \epsilon^{-1} \sqrt{\log e^\epsilon \delta^{-2}},\]
+
+which gives us (6.83).
+
+Going back to the tail bound, if we use (6.3) instead, we need
+
+\[\log t + {t^2 \over 2} > \log {(2 \pi)^{- 1 / 2} \delta^{-1}}\]
+
+where \(t = \epsilon \sigma - (2 \sigma)^{-1}\). This can be satisfied if
+
+$$\begin{aligned}
+t &> 1 \qquad (6.93)\\
+t &> \sqrt{\log (2 \pi)^{-1} \delta^{-2}}. \qquad (6.95)
+\end{aligned}$$
+
+We can solve both inequalities as before and obtain
+
+\[\sigma > \epsilon^{-1} (1 \vee \sqrt{(\log (2 \pi)^{-1} \delta^{-2})_+}) + (2 \epsilon)^{- {1 \over 2}},\]
+
+or
+
+\[\sigma > \epsilon^{-1}(\sqrt{1 + \epsilon} \vee \sqrt{(\log e^\epsilon (2 \pi)^{-1} \delta^{-2})_+}).\]
+
+This gives us (6.82) and (6.84). \(\square\)
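+The calibration can be sketched in a few lines: compute \(\sigma\) from
+(6.81) and check, via (6.8), that the requested \(\delta\) is indeed
+achieved (the numeric values of \(\epsilon\) and \(\delta\) are arbitrary
+examples):

```python
import math

def sigma_681(eps, delta, S_f=1.0):
    """Noise level from (6.81), sufficient for (eps, delta)-dp."""
    return (math.sqrt(2 * math.log(1 / delta)) / eps
            + 1 / math.sqrt(2 * eps)) * S_f

def delta_68(eps, sigma, S_f=1.0):
    """The delta guaranteed by (6.8) for a given sigma."""
    t = eps * sigma / S_f - 1 / (2 * sigma / S_f)
    return math.exp(-t * t / 2)

eps, delta = 1.0, 1e-5
sigma = sigma_681(eps, delta)
assert sigma > 1 / math.sqrt(2 * eps)   # implicit requirement in the proof
assert delta_68(eps, sigma) <= delta    # the calibrated sigma suffices
```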
+
+When \(\epsilon\) is bounded, say \(\epsilon \le \alpha\), by (6.83) and
+(6.84) we can require either
+
+\[\sigma > \epsilon^{-1} (\sqrt{\log e^\alpha \delta^{-2}}) S_f\]
+
+or
+
+\[\sigma > \epsilon^{-1} (\sqrt{1 + \alpha} \vee \sqrt{(\log (2 \pi)^{-1} e^\alpha \delta^{-2})_+}) S_f.\]
+
+The second bound is similar to and slightly better than the one in
+Theorem A.1 of Dwork-Roth 2013, where \(\alpha = 1\):
+
+\[\sigma > \epsilon^{-1} \left({3 \over 2} \vee \sqrt{(2 \log {5 \over 4} \delta^{-1})_+}\right) S_f.\]
+
+Note that the lower bound of \({3 \over 2}\) is implicitly required in the
+proof of Theorem A.1.
+
+** Composition theorems
+ :PROPERTIES:
+ :CUSTOM_ID: composition-theorems
+ :END:
+So far we have seen how a mechanism made of a single query plus a noise
+can be proved to be differentially private. But we need to understand
+the privacy when composing several mechanisms, combinatorially or
+sequentially. Let us first define the combinatorial case:
+
+*Definition (Independent composition)*. Let \(M_1, ..., M_k\) be \(k\)
+mechanisms with independent noises. The mechanism \(M = (M_1, ..., M_k)\)
+is called the /independent composition/ of \(M_{1 : k}\).
+
+To define the adaptive composition, let us motivate it with the example
+of gradient descent. Consider the loss function \(\ell(x; \theta)\) of a
+neural network, where \(\theta\) is the parameter and \(x\) the input.
+Gradient descent updates the parameter \(\theta\) at each time \(t\):
+
+\[\theta_{t} = \theta_{t - 1} - \alpha m^{-1} \sum_{i = 1 : m} \nabla_\theta \ell(x_i; \theta) |_{\theta = \theta_{t - 1}}.\]
+
+We may add privacy by adding noise \(\zeta_t\) at each step:
+
+\[\theta_{t} = \theta_{t - 1} - \alpha m^{-1} \sum_{i = 1 : m} \nabla_\theta \ell(x_i; \theta) |_{\theta = \theta_{t - 1}} + \zeta_t. \qquad (6.97)\]
+
+Viewed as a sequence of mechanisms, we have that at each time \(t\), the
+mechanism \(M_t\) takes input \(x\) and outputs \(\theta_t\). But \(M_t\) also
+depends on the output of the previous mechanism \(M_{t - 1}\). To this
+end, we define the adaptive composition.
+
+*Definition (Adaptive composition)*. Let
+\(({M_i(y_{1 : i - 1})})_{i = 1 : k}\) be \(k\) mechanisms with independent
+noises, where \(M_1\) has no parameter, \(M_2\) has one parameter in \(Y\),
+\(M_3\) has two parameters in \(Y\) and so on. For \(x \in X\), define \(\xi_i\)
+recursively by
+
+$$\begin{aligned}
+\xi_1 &:= M_1(x)\\
+\xi_i &:= M_i(\xi_1, \xi_2, ..., \xi_{i - 1}) (x).
+\end{aligned}$$
+
+The /adaptive composition/ of \(M_{1 : k}\) is defined by
+\(M(x) := (\xi_1, \xi_2, ..., \xi_k)\).
+
+The definition of adaptive composition may look a bit complicated, but
+the point is to describe \(k\) mechanisms such that for each \(i\), the
+outputs of the first, second, ..., \((i - 1)\)th mechanisms determine the
+\(i\)th mechanism, as in the case of gradient descent.
+
+It is not hard to write down the differentially private gradient descent
+as an adaptive composition:
+
+\[M_t(\theta_{1 : t - 1})(x) = \theta_{t - 1} - \alpha m^{-1} \sum_{i = 1 : m} \nabla_\theta \ell(x_i; \theta) |_{\theta = \theta_{t - 1}} + \zeta_t.\]
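+One step of (6.97) can be sketched as follows, with a hypothetical
+quadratic loss and arbitrary step size and noise level (no claim is made
+about the privacy budget of this particular choice of \(\sigma\)):

```python
import random

random.seed(0)

def dp_gd_step(theta, xs, grad, alpha, sigma):
    """One noisy gradient descent step, as in (6.97):
    theta <- theta - alpha * (mean gradient) + zeta, zeta ~ N(0, sigma^2)."""
    g = sum(grad(x, theta) for x in xs) / len(xs)
    return theta - alpha * g + random.gauss(0.0, sigma)

# Hypothetical loss l(x; theta) = (theta - x)^2 / 2, so the gradient in
# theta is theta - x, and the non-private optimum is the mean of xs.
grad = lambda x, theta: theta - x
xs = [1.0, 2.0, 3.0]

theta = 10.0
for _ in range(200):
    theta = dp_gd_step(theta, xs, grad, alpha=0.1, sigma=0.05)

# theta hovers around the non-private optimum 2.0, up to the noise.
assert abs(theta - 2.0) < 1.0
```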
+
+In Dwork-Rothblum-Vadhan 2010 (see also Dwork-Roth 2013) the adaptive
+composition is defined in a more general way, but the definition is
+based on the same principle, and proofs in this post on adaptive
+compositions carry over.
+
+It is not hard to see that the adaptive composition degenerates to the
+independent composition when each \(M_i(y_{1 : i - 1})\) evaluates to the
+same mechanism regardless of \(y_{1 : i - 1}\), in which case the \(\xi_i\)s
+are independent.
+
+In the following, when discussing adaptive compositions, we sometimes
+omit the parameters for convenience, without risk of ambiguity, and
+write \(M_i(y_{1 : i - 1})\) as \(M_i\), but keep in mind the dependence on
+the parameters.
+
+It is time to state and prove the composition theorems. In this section
+we consider \(2 \times 2 \times 2 = 8\) cases, i.e. three dimensions with
+two choices in each dimension:
+
+1. Composition of \(\epsilon\)-dp or more generally
+ \((\epsilon, \delta)\)-dp mechanisms
+2. Composition of independent or more generally adaptive mechanisms
+3. Basic or advanced compositions
+
+Note that in the first two dimensions the second choice is more general
+than the first.
+
+The proofs of these composition theorems will be laid out as follows:
+
+1. Claim 10 - Basic composition theorem for \((\epsilon, \delta)\)-dp with
+ adaptive mechanisms: by a direct proof with an induction argument
+2. Claim 14 - Advanced composition theorem for \(\epsilon\)-dp with
+ independent mechanisms: by factorising privacy loss and using
+ [[https://en.wikipedia.org/wiki/Hoeffding%27s_inequality][Hoeffding's
+ Inequality]]
+3. Claim 16 - Advanced composition theorem for \(\epsilon\)-dp with
+ adaptive mechanisms: by factorising privacy loss and using
+ [[https://en.wikipedia.org/wiki/Azuma%27s_inequality][Azuma's
+ Inequality]]
+4. Claims 17 and 18 - Advanced composition theorem for
+ \((\epsilon, \delta)\)-dp with independent / adaptive mechanisms: by
+ using characterisations of \((\epsilon, \delta)\)-dp in Claims 4 and 5
+ as an approximation of \(\epsilon\)-dp and then using Proofs in Item 2
+ or 3.
+
+*Claim 10 (Basic composition theorem).* Let \(M_{1 : k}\) be \(k\)
+mechanisms with independent noises such that for each \(i\) and
+\(y_{1 : i - 1}\), \(M_i(y_{1 : i - 1})\) is \((\epsilon_i, \delta_i)\)-dp.
+Then the adaptive composition of \(M_{1 : k}\) is
+\((\sum_i \epsilon_i, \sum_i \delta_i)\)-dp.
+
+*Proof (Dwork-Lei 2009, see also Dwork-Roth 2013 Appendix B.1)*. Let \(x\)
+and \(x'\) be neighbouring points in \(X\). Let \(M\) be the adaptive
+composition of \(M_{1 : k}\). Define
+
+\[\xi_{1 : k} := M(x), \qquad \eta_{1 : k} := M(x').\]
+
+Let \(p^i\) and \(q^i\) be the laws of \((\xi_{1 : i})\) and \((\eta_{1 : i})\)
+respectively.
+
+Let \(S_1, ..., S_k \subset Y\) and \(T_i := \prod_{j = 1 : i} S_j\). We use
+two tricks.
+
+1. Since \(\xi_i | \xi_{< i} = y_{< i}\) and
+ \(\eta_i | \eta_{< i} = y_{< i}\) are \((\epsilon_i, \delta_i)\)-ind, and
+ a probability is no greater than \(1\), $$\begin{aligned}
+ \mathbb P(\xi_i \in S_i | \xi_{< i} = y_{< i}) &\le (e^{\epsilon_i} \mathbb P(\eta_i \in S_i | \eta_{< i} = y_{< i}) + \delta_i) \wedge 1 \\
+ &\le (e^{\epsilon_i} \mathbb P(\eta_i \in S_i | \eta_{< i} = y_{< i}) + \delta_i) \wedge (1 + \delta_i) \\
+ &= (e^{\epsilon_i} \mathbb P(\eta_i \in S_i | \eta_{< i} = y_{< i}) \wedge 1) + \delta_i
+ \end{aligned}$$
+
+2. Given \(p\) and \(q\) that are \((\epsilon, \delta)\)-ind, define
+ \[\mu(x) = (p(x) - e^\epsilon q(x))_+.\]
+
+ We have \[\mu(S) \le \delta, \forall S\]
+
+   In the following we define
+   \(\mu^{i - 1} = (p^{i - 1} - e^{\epsilon_1 + ... + \epsilon_{i - 1}} q^{i - 1})_+\)
+   for the same purpose.
+
+We use an inductive argument to prove the theorem:
+
+$$\begin{aligned}
+\mathbb P(\xi_{\le i} \in T_i) &= \int_{T_{i - 1}} \mathbb P(\xi_i \in S_i | \xi_{< i} = y_{< i}) p^{i - 1} (y_{< i}) dy_{< i} \\
+&\le \int_{T_{i - 1}} (e^{\epsilon_i} \mathbb P(\eta_i \in S_i | \eta_{< i} = y_{< i}) \wedge 1) p^{i - 1}(y_{< i}) dy_{< i} + \delta_i\\
+&\le \int_{T_{i - 1}} (e^{\epsilon_i} \mathbb P(\eta_i \in S_i | \eta_{< i} = y_{< i}) \wedge 1) (e^{\epsilon_1 + ... + \epsilon_{i - 1}} q^{i - 1}(y_{< i}) + \mu^{i - 1} (y_{< i})) dy_{< i} + \delta_i\\
+&\le \int_{T_{i - 1}} e^{\epsilon_i} \mathbb P(\eta_i \in S_i | \eta_{< i} = y_{< i}) e^{\epsilon_1 + ... + \epsilon_{i - 1}} q^{i - 1}(y_{< i}) dy_{< i} + \mu^{i - 1}(T_{i - 1}) + \delta_i\\
+&\le e^{\epsilon_1 + ... + \epsilon_i} \mathbb P(\eta_{\le i} \in T_i) + \delta_1 + ... + \delta_{i - 1} + \delta_i.\\
+\end{aligned}$$
+
+In the second line we use Trick 1; in the third line we use the
+induction assumption; in the fourth line we multiply the first term in
+the first bracket with the first term in the second bracket, and the
+second term (i.e. \(1\)) in the first bracket with the second term in the
+second bracket (i.e. the \(\mu\) term); in the last line we use Trick 2.
+
+The base case \(i = 1\) is true since \(M_1\) is
+\((\epsilon_1, \delta_1)\)-dp. \(\square\)
+
+To prove the advanced composition theorem, we start with some lemmas.
+
+*Claim 11*. If \(p\) and \(q\) are \(\epsilon\)-ind, then
+
+\[D(p || q) + D(q || p) \le \epsilon(e^\epsilon - 1).\]
+
+*Proof*. Since \(p\) and \(q\) are \(\epsilon\)-ind, we have
+\(|\log p(x) - \log q(x)| \le \epsilon\) for all \(x\). Let
+\(S := \{x: p(x) > q(x)\}\). Then we have
+$$\begin{aligned}
+D(p || q) + D(q || p) &= \int (p(x) - q(x)) (\log p(x) - \log q(x)) dx\\
+&= \int_S (p(x) - q(x)) (\log p(x) - \log q(x)) dx + \int_{S^c} (q(x) - p(x)) (\log q(x) - \log p(x)) dx\\
+&\le \epsilon(\int_S p(x) - q(x) dx + \int_{S^c} q(x) - p(x) dx)
+\end{aligned}$$
+
+Since on \(S\) we have \(q(x) \le p(x) \le e^\epsilon q(x)\), and on \(S^c\)
+we have \(p(x) \le q(x) \le e^\epsilon p(x)\), we obtain
+
+\[D(p || q) + D(q || p) \le \epsilon \int_S (1 - e^{-\epsilon}) p(x) dx + \epsilon \int_{S^c} (e^{\epsilon} - 1) p(x) dx \le \epsilon (e^{\epsilon} - 1),\]
+
+where in the last step we use \(e^\epsilon - 1 \ge 1 - e^{- \epsilon}\)
+and \(p(S) + p(S^c) = 1\). \(\square\)
+
+*Claim 12*. If \(p\) and \(q\) are \(\epsilon\)-ind, then
+
+\[D(p || q) \vee D(q || p) \le a(\epsilon),\]
+
+where
+
+\[a(\epsilon) = \epsilon (e^\epsilon - 1) 1_{\epsilon \le \log 2} + \epsilon 1_{\epsilon > \log 2} \le (\log 2)^{-1} \epsilon^2 1_{\epsilon \le \log 2} + \epsilon 1_{\epsilon > \log 2}. \qquad (6.98)\]
+
+*Proof*. Since \(p\) and \(q\) are \(\epsilon\)-ind, we have
+
+\[D(p || q) = \mathbb E_{\xi \sim p} \log {p(\xi) \over q(\xi)} \le \max_y \log {p(y) \over q(y)} \le \epsilon.\]
+
+Comparing the quantity in Claim 11 (\(\epsilon(e^\epsilon - 1)\)) with the
+quantity above (\(\epsilon\)), we arrive at the conclusion. \(\square\)
+
+*Claim 13
+([[https://en.wikipedia.org/wiki/Hoeffding%27s_inequality][Hoeffding's
+Inequality]])*. Let \(L_i\) be independent random variables with
+\(|L_i| \le b\), and let \(L = L_1 + ... + L_k\), then for \(t > 0\),
+
+\[\mathbb P(L - \mathbb E L \ge t) \le \exp(- {t^2 \over 2 k b^2}).\]
+
+*Claim 14 (Advanced Independent Composition Theorem)* (\(\delta = 0\)).
+Fix \(0 < \beta < 1\). Let \(M_1, ..., M_k\) be \(\epsilon\)-dp, then the
+independent composition \(M\) of \(M_{1 : k}\) is
+\((k a(\epsilon) + \sqrt{2 k \log \beta^{-1}} \epsilon, \beta)\)-dp.
+
+*Remark*. By (6.98) we know that
+\(k a(\epsilon) + \sqrt{2 k \log \beta^{-1}} \epsilon = \sqrt{2 k \log \beta^{-1}} \epsilon + k O(\epsilon^2)\)
+when \(\epsilon\) is sufficiently small, in which case the leading term is
+of order \(O(\sqrt k \epsilon)\) and we save a \(\sqrt k\) in the
+\(\epsilon\)-part compared to the Basic Composition Theorem (Claim 10).
+
+*Remark*. In practice one can try different choices of \(\beta\) and
+settle with the one that gives the best privacy guarantee. See the
+discussions at the end of
+[[/posts/2019-03-14-great-but-manageable-expectations.html][Part 2 of
+this post]].
+
+*Proof*. Let \(p_i\), \(q_i\), \(p\) and \(q\) be the laws of \(M_i(x)\),
+\(M_i(x')\), \(M(x)\) and \(M(x')\) respectively. By Claim 12,
+
+\[\mathbb E L_i = D(p_i || q_i) \le a(\epsilon),\]
+
+where \(L_i := L(p_i || q_i)\). Due to the \(\epsilon\)-ind we also have
+
+\[|L_i| \le \epsilon.\]
+
+Therefore, by Hoeffding's Inequality,
+
+\[\mathbb P(L - k a(\epsilon) \ge t) \le \mathbb P(L - \mathbb E L \ge t) \le \exp(- t^2 / 2 k \epsilon^2),\]
+
+where \(L := \sum_i L_i = L(p || q)\).
+
+Plugging in \(t = \sqrt{2 k \epsilon^2 \log \beta^{-1}}\), we have
+
+\[\mathbb P(L(p || q) \le k a(\epsilon) + \sqrt{2 k \epsilon^2 \log \beta^{-1}}) \ge 1 - \beta.\]
+
+Similarly we also have
+
+\[\mathbb P(L(q || p) \le k a(\epsilon) + \sqrt{2 k \epsilon^2 \log \beta^{-1}}) \ge 1 - \beta.\]
+
+By Claim 1 we arrive at the conclusion. \(\square\)
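+To see the \(\sqrt k\) saving concretely, here is a sketch comparing the
+basic bound \(k \epsilon\) (Claim 10 with \(\delta_i = 0\)) with the
+bound of Claim 14 (the values of \(k\), \(\epsilon\) and \(\beta\) are
+arbitrary):

```python
import math

def a(eps):
    """a(epsilon) from (6.98)."""
    return eps * (math.exp(eps) - 1) if eps <= math.log(2) else eps

def basic_eps(k, eps):
    """Claim 10 with delta_i = 0: the composed epsilon is k * eps."""
    return k * eps

def advanced_eps(k, eps, beta):
    """Claim 14: k a(eps) + sqrt(2 k log(1/beta)) eps, at the price of beta."""
    return k * a(eps) + math.sqrt(2 * k * math.log(1 / beta)) * eps

k, eps, beta = 100, 0.01, 1e-5
# For small epsilon the advanced bound wins despite the extra beta.
assert advanced_eps(k, eps, beta) < basic_eps(k, eps)
```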
+
+*Claim 15 ([[https://en.wikipedia.org/wiki/Azuma%27s_inequality][Azuma's
+Inequality]])*. Let \(X_{0 : k}\) be a
+[[https://en.wikipedia.org/wiki/Martingale_(probability_theory)][supermartingale]].
+If \(|X_i - X_{i - 1}| \le b\), then
+
+\[\mathbb P(X_k - X_0 \ge t) \le \exp(- {t^2 \over 2 k b^2}).\]
+
+Azuma's Inequality implies a slightly weaker version of Hoeffding's
+Inequality. To see this, let \(L_{1 : k}\) be independent variables with
+\(|L_i| \le b\). Let \(X_i = \sum_{j = 1 : i} L_j - \mathbb E L_j\). Then
+\(X_{0 : k}\) is a martingale, and
+
+\[| X_i - X_{i - 1} | = | L_i - \mathbb E L_i | \le 2 b,\]
+
+since \(|\mathbb E L_i| \le \mathbb E |L_i| \le \|L_i\|_\infty \le b\). Hence by Azuma's Inequality,
+
+\[\mathbb P(L - \mathbb E L \ge t) \le \exp(- {t^2 \over 8 k b^2}).\]
+
+Of course here we have made no assumption on \(\mathbb E L_i\). If instead
+we have some bound for the expectation, say \(|\mathbb E L_i| \le a\),
+then by the same derivation we have
+
+\[\mathbb P(L - \mathbb E L \ge t) \le \exp(- {t^2 \over 2 k (a + b)^2}).\]
+
+It is not hard to see that Azuma's Inequality is to Hoeffding's
+Inequality what adaptive composition is to independent composition.
+Indeed, we can use Azuma's Inequality to prove the Advanced Adaptive
+Composition Theorem for \(\delta = 0\).
+
+*Claim 16 (Advanced Adaptive Composition Theorem)* (\(\delta = 0\)). Fix
+\(0 < \beta < 1\). Let \(M_{1 : k}\) be \(k\) mechanisms with independent
+noises such that for each \(i\) and \(y_{1 : i - 1}\), \(M_i(y_{1 : i - 1})\)
+is \((\epsilon, 0)\)-dp. Then the adaptive composition of \(M_{1 : k}\) is
+\((k a(\epsilon) + \sqrt{2 k \log \beta^{-1}} (\epsilon + a(\epsilon)), \beta)\)-dp.
+
+*Proof*. As before, let \(\xi_{1 : k} \overset{d}{=} M(x)\) and
+\(\eta_{1 : k} \overset{d}{=} M(x')\), where \(M\) is the adaptive
+composition of \(M_{1 : k}\). Let \(p_i\) (resp. \(q_i\)) be the law of
+\(\xi_i | \xi_{< i}\) (resp. \(\eta_i | \eta_{< i}\)). Let \(p^i\) (resp.
+\(q^i\)) be the law of \(\xi_{\le i}\) (resp. \(\eta_{\le i}\)). We want to
+construct a supermartingale \(X\). To this end, let
+
+\[X_i = \log {p^i(\xi_{\le i}) \over q^i(\xi_{\le i})} - i a(\epsilon) \]
+
+We show that \((X_i)\) is a supermartingale:
+
+$$\begin{aligned}
+\mathbb E(X_i - X_{i - 1} | X_{i - 1}) &= \mathbb E \left(\log {p_i (\xi_i | \xi_{< i}) \over q_i (\xi_i | \xi_{< i})} - a(\epsilon) | \log {p^{i - 1} (\xi_{< i}) \over q^{i - 1} (\xi_{< i})}\right) \\
+&= \mathbb E \left( \mathbb E \left(\log {p_i (\xi_i | \xi_{< i}) \over q_i (\xi_i | \xi_{< i})} | \xi_{< i}\right) | \log {p^{i - 1} (\xi_{< i}) \over q^{i - 1} (\xi_{< i})}\right) - a(\epsilon) \\
+&= \mathbb E \left( D(p_i (\cdot | \xi_{< i}) || q_i (\cdot | \xi_{< i})) | \log {p^{i - 1} (\xi_{< i}) \over q^{i - 1} (\xi_{< i})}\right) - a(\epsilon) \\
+&\le 0,
+\end{aligned}$$
+
+since by Claim 12
+\(D(p_i(\cdot | y_{< i}) || q_i(\cdot | y_{< i})) \le a(\epsilon)\) for
+all \(y_{< i}\).
+
+Since
+
+\[| X_i - X_{i - 1} | = | \log {p_i(\xi_i | \xi_{< i}) \over q_i(\xi_i | \xi_{< i})} - a(\epsilon) | \le \epsilon + a(\epsilon),\]
+
+by Azuma's Inequality,
+
+\[\mathbb P(\log {p^k(\xi_{1 : k}) \over q^k(\xi_{1 : k})} \ge k a(\epsilon) + t) \le \exp(- {t^2 \over 2 k (\epsilon + a(\epsilon))^2}). \qquad(6.99)\]
+
+Letting \(t = \sqrt{2 k \log \beta^{-1}} (\epsilon + a(\epsilon))\), we
+are done. \(\square\)
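+To get a feel for the guarantee in Claim 16, here is a small Python
+sketch of the composed privacy parameter. The bound \(a(\epsilon)\) is
+passed in as a function, since its definition (Claim 12) appears
+earlier in the post; the stand-in \(\epsilon (e^\epsilon - 1)\) below
+is only an illustrative assumption, not necessarily the bound from
+Claim 12.

```python
import math

def adaptive_composition_eps(k, eps, beta, a):
    """Epsilon of the k-fold adaptive composition in Claim 16:
    k a(eps) + sqrt(2 k log(1 / beta)) (eps + a(eps))."""
    return k * a(eps) + math.sqrt(2 * k * math.log(1 / beta)) * (eps + a(eps))

# illustrative stand-in for a(eps); substitute the actual bound from Claim 12
a = lambda e: e * (math.exp(e) - 1)

total_eps = adaptive_composition_eps(k=100, eps=0.1, beta=1e-5, a=a)
```

+For small \(\epsilon\) the \(\sqrt k\)-term dominates the
+\(k a(\epsilon)\)-term, which is the point of the advanced bound.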
+
+*Claim 17 (Advanced Independent Composition Theorem)*. Fix
+\(0 < \beta < 1\). Let \(M_1, ..., M_k\) be \((\epsilon, \delta)\)-dp.
+Then the independent composition \(M\) of \(M_{1 : k}\) is
+\((k a(\epsilon) + \sqrt{2 k \log \beta^{-1}} \epsilon, k \delta + \beta)\)-dp.
+
+*Proof*. By Claim 4, there exist events \(E_{1 : k}\) and \(F_{1 : k}\) such
+that
+
+1. The laws \(p_{i | E_i}\) and \(q_{i | F_i}\) are \(\epsilon\)-ind.
+2. \(\mathbb P(E_i), \mathbb P(F_i) \ge 1 - \delta\).
+
+Let \(E := \bigcap E_i\) and \(F := \bigcap F_i\), then they both have
+probability at least \(1 - k \delta\), and \(p_{i | E}\) and \(q_{i | F}\) are
+\(\epsilon\)-ind.
+
+By Claim 14, \(p_{|E}\) and \(q_{|F}\) are
+\((\epsilon' := k a(\epsilon) + \sqrt{2 k \epsilon^2 \log \beta^{-1}}, \beta)\)-ind.
+Let us shrink the bigger event between \(E\) and \(F\) so that they have
+equal probabilities. Then
+
+$$\begin{aligned}
+p (S) &\le p_{|E}(S) \mathbb P(E) + \mathbb P(E^c) \\
+&\le (e^{\epsilon'} q_{|F}(S) + \beta) \mathbb P(F) + k \delta\\
+&\le e^{\epsilon'} q(S) + \beta + k \delta.
+\end{aligned}$$
+
+\(\square\)
+
+*Claim 18 (Advanced Adaptive Composition Theorem)*. Fix \(0 < \beta < 1\).
+Let \(M_{1 : k}\) be \(k\) mechanisms with independent noises such that for
+each \(i\) and \(y_{1 : i}\), \(M_i(y_{1 : i})\) is \((\epsilon, \delta)\)-dp.
+Then the adaptive composition of \(M_{1 : k}\) is
+\((k a(\epsilon) + \sqrt{2 k \log \beta^{-1}} (\epsilon + a(\epsilon)), \beta + k \delta)\)-dp.
+
+*Remark*. This theorem appeared in Dwork-Rothblum-Vadhan 2010, but I
+could not find a proof there. A proof can be found in Dwork-Roth 2013
+(See Theorem 3.20 there). Here I prove it in a similar way, except that
+instead of the use of an intermediate random variable there, I use the
+conditional probability results from Claim 5, the approach mentioned in
+Vadhan 2017.
+
+*Proof*. By Claim 5, there exist events \(E_{1 : k}\) and \(F_{1 : k}\) such
+that
+
+1. The laws \(p_{i | E_i}(\cdot | y_{< i})\) and
+ \(q_{i | F_i}(\cdot | y_{< i})\) are \(\epsilon\)-ind for all \(y_{< i}\).
+2. \(\mathbb P(E_i | y_{< i}), \mathbb P(F_i | y_{< i}) \ge 1 - \delta\)
+ for all \(y_{< i}\).
+
+Let \(E := \bigcap E_i\) and \(F := \bigcap F_i\), then they both have
+probability at least \(1 - k \delta\), and \(p_{i | E}(\cdot | y_{< i})\) and
+\(q_{i | F}(\cdot | y_{< i})\) are \(\epsilon\)-ind.
+
+By Advanced Adaptive Composition Theorem (\(\delta = 0\)), \(p_{|E}\) and
+\(q_{|F}\) are
+\((\epsilon' := k a(\epsilon) + \sqrt{2 k \log \beta^{-1}} (\epsilon + a(\epsilon)), \beta)\)-ind.
+
+The rest is the same as in the proof of Claim 17. \(\square\)
+
+** Subsampling
+ :PROPERTIES:
+ :CUSTOM_ID: subsampling
+ :END:
+Stochastic gradient descent is like gradient descent, but with random
+subsampling.
+
+Recall we have been considering databases in the space \(Z^m\). Let
+\(n < m\) be a positive integer,
+\(\mathcal I := \{I \subset [m]: |I| = n\}\) be the set of subsets of
+\([m]\) of size \(n\), and \(\gamma\) a random subset sampled uniformly from
+\(\mathcal I\). Let \(r = {n \over m}\) which we call the subsampling rate.
+Then we may add a subsampling module to the noisy gradient descent
+algorithm (6.97) considered before
+
+\[\theta_{t} = \theta_{t - 1} - \alpha n^{-1} \sum_{i \in \gamma} \nabla_\theta h_\theta(x_i) |_{\theta = \theta_{t - 1}} + \zeta_t. \qquad (7)\]
+
+It turns out subsampling has an amplification effect on privacy.
+
+*Claim 19 (Ullman 2017)*. Fix \(r \in [0, 1]\). Let \(n \le m\) be two
+nonnegative integers with \(n = r m\). Let \(N\) be an
+\((\epsilon, \delta)\)-dp mechanism on \(Z^n\). Define mechanism \(M\) on
+\(Z^m\) by
+
+\[M(x) = N(x_\gamma)\]
+
+Then \(M\) is \((\log (1 + r(e^\epsilon - 1)), r \delta)\)-dp.
+
+*Remark*. Some seem to cite
+Kasiviswanathan-Lee-Nissim-Raskhodnikova-Smith 2005 for this result, but
+it is not clear to me how it appears there.
+
+*Proof*. Let \(x, x' \in Z^m\) such that they differ by one row
+\(x_i \neq x_i'\). Naturally we would like to consider the cases where the
+index \(i\) is picked and the ones where it is not separately. Let
+\(\mathcal I_\in\) and \(\mathcal I_\notin\) be these two cases:
+
+$$\begin{aligned}
+\mathcal I_\in = \{J \in \mathcal I: i \in J\}\\
+\mathcal I_\notin = \{J \in \mathcal I: i \notin J\}\\
+\end{aligned}$$
+
+We will use these notations later. Let \(A\) be the event
+\(\{\gamma \ni i\}\).
+
+Let \(p\) and \(q\) be the laws of \(M(x)\) and \(M(x')\) respectively. We
+collect some useful facts about them. First due to \(N\) being
+\((\epsilon, \delta)\)-dp,
+
+\[p_{|A}(S) \le e^\epsilon q_{|A}(S) + \delta.\]
+
+Also,
+
+\[p_{|A}(S) \le e^\epsilon p_{|A^c}(S) + \delta.\]
+
+To see this, note that being conditional laws, \(p_{|A}\) and \(p_{|A^c}\) are
+averages of laws over \(\mathcal I_\in\) and \(\mathcal I_\notin\)
+respectively:
+
+$$\begin{aligned}
+p_{|A}(S) = |\mathcal I_\in|^{-1} \sum_{I \in \mathcal I_\in} \mathbb P(N(x_I) \in S)\\
+p_{|A^c}(S) = |\mathcal I_\notin|^{-1} \sum_{J \in \mathcal I_\notin} \mathbb P(N(x_J) \in S).
+\end{aligned}$$
+
+Now we want to pair the \(I\)'s in \(\mathcal I_\in\) and \(J\)'s in
+\(\mathcal I_\notin\) so that they differ by one index only, which means
+\(d(x_I, x_J) = 1\). Formally, this means we want to consider the set:
+
+\[\mathcal D := \{(I, J) \in \mathcal I_\in \times \mathcal I_\notin: |I \cap J| = n - 1\}.\]
+
+We may observe by trying out some simple cases that every
+\(I \in \mathcal I_\in\) is paired with \(m - n\) elements in
+\(\mathcal I_\notin\), and every \(J \in \mathcal I_\notin\) is paired with
+\(n\) elements in \(\mathcal I_\in\). Therefore
+
+\[p_{|A}(S) = |\mathcal D|^{-1} \sum_{(I, J) \in \mathcal D} \mathbb P(N(x_I) \in S) \le |\mathcal D|^{-1} \sum_{(I, J) \in \mathcal D} (e^\epsilon \mathbb P(N(x_J) \in S) + \delta) = e^\epsilon p_{|A^c} (S) + \delta.\]
+
+Since \(\gamma\) is a uniform size-\(n\) subset of \([m]\), each index is
+picked with probability \(n / m = r\), so
+
+\[\mathbb P(A) = r.\]
+
+Note also that \(p_{|A^c} = q_{|A^c}\), since \(x_J = x'_J\) for every
+\(J \in \mathcal I_\notin\).
+
+Let \(t \in [0, 1]\) to be determined. We may write
+
+$$\begin{aligned}
+p(S) &= r p_{|A} (S) + (1 - r) p_{|A^c} (S)\\
+&\le r(t e^\epsilon q_{|A}(S) + (1 - t) e^\epsilon q_{|A^c}(S) + \delta) + (1 - r) q_{|A^c} (S)\\
+&= rte^\epsilon q_{|A}(S) + (r(1 - t) e^\epsilon + (1 - r)) q_{|A^c} (S) + r \delta\\
+&= te^\epsilon r q_{|A}(S) + \left({r \over 1 - r}(1 - t) e^\epsilon + 1\right) (1 - r) q_{|A^c} (S) + r \delta \\
+&\le \left(t e^\epsilon \wedge \left({r \over 1 - r} (1 - t) e^\epsilon + 1\right)\right) q(S) + r \delta. \qquad (7.5)
+\end{aligned}$$
+
+We can see from the last line that the best bound we can get is when
+
+\[t e^\epsilon = {r \over 1 - r} (1 - t) e^\epsilon + 1.\]
+
+Solving this equation we obtain
+
+\[t = r + e^{- \epsilon} - r e^{- \epsilon}\]
+
+and plugging this in (7.5) we have
+
+\[p(S) \le (1 + r(e^\epsilon - 1)) q(S) + r \delta.\]
+
+\(\square\)
+
+Since \(\log (1 + x) < x\) for \(x > 0\), we can rewrite the conclusion of
+the Claim to \((r(e^\epsilon - 1), r \delta)\)-dp. Furthermore, if
+\(\epsilon < \alpha\) for some \(\alpha\), we can rewrite it as
+\((r \alpha^{-1} (e^\alpha - 1) \epsilon, r \delta)\)-dp or
+\((O(r \epsilon), r \delta)\)-dp.
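+As a quick numeric sanity check of the amplification (a Python sketch
+with arbitrary illustrative parameters):

```python
import math

def subsampled_eps(eps, r):
    """Amplified privacy parameter log(1 + r (e^eps - 1)) from Claim 19."""
    return math.log(1 + r * (math.exp(eps) - 1))

eps, r = 1.0, 0.01
amplified = subsampled_eps(eps, r)
assert amplified < r * (math.exp(eps) - 1)  # since log(1 + x) < x for x > 0
assert amplified < 2 * r * eps              # the weakened form for eps < 1
```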
+
+Let \(\epsilon < 1\). We see that if the mechanism \(N\) is
+\((\epsilon, \delta)\)-dp on \(Z^n\), then \(M\) is
+\((2 r \epsilon, r \delta)\)-dp, and if we run it over \(k / r\)
+minibatches, by Advanced Adaptive Composition theorem, we have
+\((\sqrt{2 k r \log \beta^{-1}} \epsilon + 2 k r \epsilon^2, k \delta + \beta)\)-dp.
+
+This is better than the privacy guarantee without subsampling, where we
+run over \(k\) iterations and obtain
+\((\sqrt{2 k \log \beta^{-1}} \epsilon + 2 k \epsilon^2, k \delta + \beta)\)-dp.
+So with subsampling we gain an extra \(\sqrt r\) in the \(\epsilon\)-part of
+the privacy guarantee. But, smaller subsampling rate means smaller
+minibatch size, which would result in bigger variance, so there is a
+trade-off here.
+
+Finally we define the differentially private stochastic gradient descent
+(DP-SGD) with the Gaussian mechanism
+(Abadi-Chu-Goodfellow-McMahan-Mironov-Talwar-Zhang 2016), which is (7)
+with the noise specialised to Gaussian and an added clipping operation
+to bound the sensitivity of the query by a chosen \(C\):
+
+\[\theta_{t} = \theta_{t - 1} - \alpha \left(n^{-1} \sum_{i \in \gamma} \nabla_\theta \ell(x_i; \theta) |_{\theta = \theta_{t - 1}}\right)_{\text{Clipped at }C / 2} + N(0, \sigma^2 C^2 I),\]
+
+where
+
+\[y_{\text{Clipped at } \alpha} := y / (1 \vee {\|y\|_2 \over \alpha})\]
+
+is \(y\) clipped to have norm at most \(\alpha\).
+
+Note that the clipping in DP-SGD is much stronger than making the query
+have sensitivity \(C\). It makes the difference between the query results
+of two /arbitrary/ inputs bounded by \(C\), rather than /neighbouring/
+inputs.
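+The clipping operation itself is a one-liner. Here is a sketch in
+Python with NumPy; the function name =clip= is ours, not from any
+particular library:

```python
import numpy as np

def clip(y, alpha):
    """Scale y down so that its l2 norm is at most alpha;
    leave it unchanged if the norm is already below alpha."""
    return y / max(1.0, np.linalg.norm(y) / alpha)

g = np.array([3.0, 4.0])                              # norm 5
assert np.isclose(np.linalg.norm(clip(g, 1.0)), 1.0)  # clipped down to norm 1
assert np.allclose(clip(g, 10.0), g)                  # small vectors pass through
```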
+
+In [[/posts/2019-03-14-great-but-manageable-expectations.html][Part 2 of
+this post]] we will use the tools developed above to discuss the privacy
+guarantee for DP-SGD, among other things.
+
+** References
+ :PROPERTIES:
+ :CUSTOM_ID: references
+ :END:
+
+- Abadi, Martín, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya
+ Mironov, Kunal Talwar, and Li Zhang. "Deep Learning with Differential
+ Privacy." Proceedings of the 2016 ACM SIGSAC Conference on Computer
+ and Communications Security - CCS'16, 2016, 308--18.
+ [[https://doi.org/10.1145/2976749.2978318]].
+- Dwork, Cynthia, and Aaron Roth. "The Algorithmic Foundations of
+ Differential Privacy." Foundations and Trends® in Theoretical Computer
+ Science 9, no. 3--4 (2013): 211--407.
+ [[https://doi.org/10.1561/0400000042]].
+- Dwork, Cynthia, Guy N. Rothblum, and Salil Vadhan. "Boosting and
+ Differential Privacy." In 2010 IEEE 51st Annual Symposium on
+ Foundations of Computer Science, 51--60. Las Vegas, NV, USA:
+ IEEE, 2010. [[https://doi.org/10.1109/FOCS.2010.12]].
+- Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya
+ Raskhodnikova, and Adam Smith. "What Can We Learn Privately?" In 46th
+ Annual IEEE Symposium on Foundations of Computer Science (FOCS'05).
+ Pittsburgh, PA, USA: IEEE, 2005.
+ [[https://doi.org/10.1109/SFCS.2005.1]].
+- Murtagh, Jack, and Salil Vadhan. "The Complexity of Computing the
+ Optimal Composition of Differential Privacy." In Theory of
+ Cryptography, edited by Eyal Kushilevitz and Tal Malkin, 9562:157--75.
+ Berlin, Heidelberg: Springer Berlin Heidelberg, 2016.
+ [[https://doi.org/10.1007/978-3-662-49096-9_7]].
+- Ullman, Jonathan. "Solution to CS7880 Homework 1.", 2017.
+ [[http://www.ccs.neu.edu/home/jullman/cs7880s17/HW1sol.pdf]]
+- Vadhan, Salil. "The Complexity of Differential Privacy." In Tutorials
+ on the Foundations of Cryptography, edited by Yehuda Lindell,
+ 347--450. Cham: Springer International Publishing, 2017.
+ [[https://doi.org/10.1007/978-3-319-57048-8_7]].
+
+[fn:1] For those who have read about differential privacy and never
+ heard of the term "divergence variable", it is closely related to
+ the notion of "privacy loss", see the paragraph under Claim 6 in
+ [[#back-to-approximate-differential-privacy][Back to approximate
+ differential privacy]]. I defined the term this way so that we
+ can focus on the more general stuff: compared to the privacy loss
+ \(L(M(x) || M(x'))\), the term \(L(p || q)\) removes the "distracting
+ information" that \(p\) and \(q\) are related to databases, queries,
+ mechanisms etc., but merely probability laws. By removing the
+ distraction, we simplify the analysis. And once we are done with
+ the analysis of \(L(p || q)\), we can apply the results obtained in
+ the general setting to the special case where \(p\) is the law of
+ \(M(x)\) and \(q\) is the law of \(M(x')\).
diff --git a/posts/2019-03-14-great-but-manageable-expectations.org b/posts/2019-03-14-great-but-manageable-expectations.org
new file mode 100644
index 0000000..6438090
--- /dev/null
+++ b/posts/2019-03-14-great-but-manageable-expectations.org
@@ -0,0 +1,836 @@
+#+title: Great but Manageable Expectations
+
+#+date: <2019-03-14>
+
+This is Part 2 of a two-part blog post on differential privacy.
+Continuing from [[file:2019-03-13-a-tail-of-two-densities.org][Part 1]], I discuss the Rényi differential privacy, corresponding to the
+Rényi divergence, a study of the
+[[https://en.wikipedia.org/wiki/Moment-generating_function][moment
+generating functions]] of the divergence between probability measures in
+order to derive the tail bounds.
+
+Like in Part 1, I prove a composition theorem and a subsampling theorem.
+
+I also attempt to reproduce a seemingly better moment bound for the
+Gaussian mechanism with subsampling, with one intermediate step which I
+am not able to prove.
+
+After that I explain the Tensorflow implementation of differential
+privacy in its
+[[https://github.com/tensorflow/privacy/tree/master/privacy][Privacy]]
+module, which focuses on the differentially private stochastic gradient
+descent algorithm (DP-SGD).
+
+Finally I use the results from both Part 1 and Part 2 to obtain some
+privacy guarantees for composed subsampling queries in general, and for
+DP-SGD in particular. I also compare these privacy guarantees.
+
+/If you are confused by any notations, ask me or try
+[[/notations.html][this]]./
+
+** Rényi divergence and differential privacy
+ :PROPERTIES:
+ :CUSTOM_ID: rényi-divergence-and-differential-privacy
+ :END:
+Recall in the proof of Gaussian mechanism privacy guarantee (Claim 8) we
+used the Chernoff bound for the Gaussian noise. Why not use the Chernoff
+bound for the divergence variable / privacy loss directly, since the
+latter is closer to the core subject than the noise? This leads us to
+the study of Rényi divergence.
+
+So far we have seen several notions of divergence used in differential
+privacy: the max divergence which is \(\epsilon\)-ind in disguise:
+
+\[D_\infty(p || q) := \max_y \log {p(y) \over q(y)},\]
+
+the \(\delta\)-approximate max divergence that defines the
+\((\epsilon, \delta)\)-ind:
+
+\[D_\infty^\delta(p || q) := \max_y \log{p(y) - \delta \over q(y)},\]
+
+and the KL-divergence which is the expectation of the divergence
+variable:
+
+\[D(p || q) = \mathbb E L(p || q) = \int \log {p(y) \over q(y)} p(y) dy.\]
+
+The Rényi divergence is an interpolation between the max divergence and
+the KL-divergence, defined as the log moment generating function /
+cumulants of the divergence variable:
+
+\[D_\lambda(p || q) = (\lambda - 1)^{-1} \log \mathbb E \exp((\lambda - 1) L(p || q)) = (\lambda - 1)^{-1} \log \int {p(y)^\lambda \over q(y)^{\lambda - 1}} dy.\]
+
+Indeed, when \(\lambda \to \infty\) we recover the max divergence, and
+when \(\lambda \to 1\), by recognising \(D_\lambda\) as a derivative in
+\(\lambda\) at \(\lambda = 1\), we recover the KL divergence. In this post
+we only consider \(\lambda > 1\).
+
+Using the Rényi divergence we may define:
+
+*Definition (Rényi differential privacy)* (Mironov 2017). A mechanism
+\(M\) is \((\lambda, \rho)\)/-Rényi differentially private/
+(\((\lambda, \rho)\)-rdp) if for all \(x\) and \(x'\) with distance \(1\),
+
+\[D_\lambda(M(x) || M(x')) \le \rho.\]
+
+For convenience we also define two related notions, \(G_\lambda (f || g)\)
+and \(\kappa_{f, g} (t)\) for \(\lambda > 1\), \(t > 0\) and positive
+functions \(f\) and \(g\):
+
+\[G_\lambda(f || g) = \int f(y)^{\lambda} g(y)^{1 - \lambda} dy; \qquad \kappa_{f, g} (t) = \log G_{t + 1}(f || g).\]
+
+For probability densities \(p\) and \(q\), \(G_{t + 1}(p || q)\) and
+\(\kappa_{p, q}(t)\) are the moment generating function and the
+[[https://en.wikipedia.org/wiki/Cumulant][cumulant]] generating function
+of the divergence variable \(L(p || q)\), both evaluated at \(t\), and
+
+\[D_\lambda(p || q) = (\lambda - 1)^{-1} \kappa_{p, q}(\lambda - 1).\]
+
+In the following, whenever you see \(t\), think of it as \(\lambda - 1\).
+
+*Example 1 (RDP for the Gaussian mechanism)*. Using the scaling and
+translation invariance of \(L\) (6.1), we have that the divergence
+variable for two Gaussians with the same variance is
+
+\[L(N(\mu_1, \sigma^2 I) || N(\mu_2, \sigma^2 I)) \overset{d}{=} L(N(0, I) || N((\mu_2 - \mu_1) / \sigma, I)).\]
+
+With this we get
+
+\[D_\lambda(N(\mu_1, \sigma^2 I) || N(\mu_2, \sigma^2 I)) = {\lambda \|\mu_2 - \mu_1\|_2^2 \over 2 \sigma^2} = D_\lambda(N(\mu_2, \sigma^2 I) || N(\mu_1, \sigma^2 I)).\]
+
+Again due to the scaling invariance of \(L\), we only need to consider \(f\)
+with sensitivity \(1\), see the discussion under (6.1). The Gaussian
+mechanism on query \(f\) is thus \((\lambda, \lambda / 2 \sigma^2)\)-rdp for
+any \(\lambda > 1\).
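+We can verify the closed form in Example 1 numerically by integrating
+\(p^\lambda q^{1 - \lambda}\) on a grid for two one-dimensional
+Gaussians (a sketch; the grid and tolerance are ad hoc choices):

```python
import numpy as np

def renyi_gaussians(lam, mu1, mu2, sigma):
    """Numeric D_lam(N(mu1, sigma^2) || N(mu2, sigma^2)) by a Riemann sum."""
    y, dy = np.linspace(-30.0, 30.0, 600001, retstep=True)
    logp = -(y - mu1) ** 2 / (2 * sigma ** 2)
    logq = -(y - mu2) ** 2 / (2 * sigma ** 2)
    # the normalising constant 1 / (sigma sqrt(2 pi)) survives with exponent 1
    integrand = np.exp(lam * logp + (1 - lam) * logq) / (sigma * np.sqrt(2 * np.pi))
    return np.log(np.sum(integrand) * dy) / (lam - 1)

lam, sigma = 4.0, 2.0
closed_form = lam * 1.0 / (2 * sigma ** 2)  # lam ||mu2 - mu1||^2 / (2 sigma^2)
assert abs(renyi_gaussians(lam, 0.0, 1.0, sigma) - closed_form) < 1e-6
```

+The symmetry noted in Example 1 also checks out: swapping \(\mu_1\) and
+\(\mu_2\) gives the same value.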
+
+From the example of Gaussian mechanism, we see that the relation between
+\(\lambda\) and \(\rho\) is like that between \(\epsilon\) and \(\delta\). Given
+\(\lambda\) (resp. \(\rho\)) and parameters like variance of the noise and
+the sensitivity of the query, we can write \(\rho = \rho(\lambda)\) (resp.
+\(\lambda = \lambda(\rho)\)).
+
+Using the Chernoff bound (6.7), we can bound the divergence variable:
+
+\[\mathbb P(L(p || q) \ge \epsilon) \le {\mathbb E \exp(t L(p || q)) \over \exp(t \epsilon)} = \exp (\kappa_{p, q}(t) - \epsilon t). \qquad (7.7)\]
+
+For a function \(f: I \to \mathbb R\), denote its
+[[https://en.wikipedia.org/wiki/Legendre_transformation][Legendre
+transform]] by
+
+\[f^*(\epsilon) := \sup_{t \in I} (\epsilon t - f(t)).\]
+
+By taking infimum on the RHS of (7.7), we obtain
+
+*Claim 20*. Two probability densities \(p\) and \(q\) are
+\((\epsilon, \exp(-\kappa_{p, q}^*(\epsilon)))\)-ind.
+
+Given a mechanism \(M\), let \(\kappa_M(t)\) denote an upper bound for the
+cumulant of its privacy loss:
+
+\[\log \mathbb E \exp(t L(M(x) || M(x'))) \le \kappa_M(t), \qquad \forall x, x'\text{ with } d(x, x') = 1.\]
+
+For example, we can set \(\kappa_M(t) = t \rho(t + 1)\). Using the same
+argument we have the following:
+
+*Claim 21*. If \(M\) is \((\lambda, \rho)\)-rdp, then
+
+1. it is also \((\epsilon, \exp((\lambda - 1) (\rho - \epsilon)))\)-dp for
+ any \(\epsilon \ge \rho\).
+2. Alternatively, \(M\) is \((\epsilon, \exp(- \kappa_M^*(\epsilon)))\)-dp
+ for any \(\epsilon > 0\).
+3. Alternatively, for any \(0 < \delta \le 1\), \(M\) is
+ \((\rho + (\lambda - 1)^{-1} \log \delta^{-1}, \delta)\)-dp.
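+Parts 1 and 3 of Claim 21 are simple formulas; here is a Python
+sketch, checking that at the \(\epsilon\) produced by Part 3 the two
+conversions agree (the parameter values are arbitrary):

```python
import math

def rdp_to_dp_eps(lam, rho, delta):
    """Part 3 of Claim 21: a (lam, rho)-rdp mechanism is (eps, delta)-dp
    with eps = rho + log(1 / delta) / (lam - 1)."""
    return rho + math.log(1 / delta) / (lam - 1)

def rdp_to_dp_delta(lam, rho, eps):
    """Part 1 of Claim 21: delta for a target eps >= rho."""
    assert eps >= rho
    return math.exp((lam - 1) * (rho - eps))

eps = rdp_to_dp_eps(lam=32, rho=0.05, delta=1e-5)
delta = rdp_to_dp_delta(lam=32, rho=0.05, eps=eps)
assert abs(delta - 1e-5) < 1e-12  # the two conversions are inverses here
```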
+
+*Example 2 (Gaussian mechanism)*. We can apply the above argument to the
+Gaussian mechanism on query \(f\) and get:
+
+\[\delta \le \inf_{\lambda > 1} \exp((\lambda - 1) ({\lambda \over 2 \sigma^2} - \epsilon))\]
+
+By assuming \(\sigma^2 > (2 \epsilon)^{-1}\) we have that the infimum is
+achieved when \(\lambda = (1 + 2 \epsilon \sigma^2) / 2\) and
+
+\[\delta \le \exp(- ((2 \sigma)^{-1} - \epsilon \sigma)^2 / 2)\]
+
+which is the same result as (6.8), obtained using the Chernoff bound of
+the noise.
+
+However, as we will see later, compositions will yield different results
+from those obtained from methods in
+[[/posts/2019-03-13-a-tail-of-two-densities.html][Part 1]] when
+considering Rényi dp.
+
+*** Moment Composition
+ :PROPERTIES:
+ :CUSTOM_ID: moment-composition
+ :END:
+*Claim 22 (Moment Composition Theorem)*. Let \(M\) be the adaptive
+composition of \(M_{1 : k}\). Suppose for any \(y_{< i}\), \(M_i(y_{< i})\) is
+\((\lambda, \rho)\)-rdp. Then \(M\) is \((\lambda, k\rho)\)-rdp.
+
+*Proof*. Rather straightforward. As before let \(p_i\) and \(q_i\) be the
+conditional laws of the adaptive composition of \(M_{1 : i}\) at \(x\) and \(x'\)
+respectively, and \(p^i\) and \(q^i\) be the joint laws of \(M_{1 : i}\) at
+\(x\) and \(x'\) respectively. Denote
+
+\[D_i = \mathbb E \exp((\lambda - 1)\log {p^i(\xi_{1 : i}) \over q^i(\xi_{1 : i})})\]
+
+Then
+
+$$\begin{aligned}
+D_i &= \mathbb E\mathbb E \left(\exp((\lambda - 1)\log {p_i(\xi_i | \xi_{< i}) \over q_i(\xi_i | \xi_{< i})}) \exp((\lambda - 1)\log {p^{i - 1}(\xi_{< i}) \over q^{i - 1}(\xi_{< i})}) \big| \xi_{< i}\right) \\
+&= \mathbb E \mathbb E \left(\exp((\lambda - 1)\log {p_i(\xi_i | \xi_{< i}) \over q_i(\xi_i | \xi_{< i})}) | \xi_{< i}\right) \exp\left((\lambda - 1)\log {p^{i - 1}(\xi_{< i}) \over q^{i - 1}(\xi_{< i})}\right)\\
+&\le \mathbb E \exp((\lambda - 1) \rho) \exp\left((\lambda - 1)\log {p^{i - 1}(\xi_{< i}) \over q^{i - 1}(\xi_{< i})}\right)\\
+&= \exp((\lambda - 1) \rho) D_{i - 1}.
+\end{aligned}$$
+
+Applying this recursively we have
+
+\[D_k \le \exp(k(\lambda - 1) \rho),\]
+
+and so
+
+\[(\lambda - 1)^{-1} \log \mathbb E \exp((\lambda - 1)\log {p^k(\xi_{1 : k}) \over q^k(\xi_{1 : k})}) = (\lambda - 1)^{-1} \log D_k \le k \rho.\]
+
+Since this holds for all \(x\) and \(x'\), we are done. \(\square\)
+
+This, together with the scaling property of the Legendre transformation:
+
+\[(k f)^*(x) = k f^*(x / k)\]
+
+yields
+
+*Claim 23*. The \(k\)-fold adaptive composition of
+\((\lambda, \rho(\lambda))\)-rdp mechanisms is
+\((\epsilon, \exp(- k \kappa^*(\epsilon / k)))\)-dp, where
+\(\kappa(t) := t \rho(t + 1)\).
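+The scaling property of the Legendre transform used above is easy to
+check numerically on a grid; here is a sketch with the test function
+\(f(t) = t^2\) (an arbitrary convex choice):

```python
import numpy as np

t = np.linspace(0.0, 10.0, 100001)  # grid containing the maximisers below
f = t ** 2
k, x = 3.0, 4.0

def legendre(vals, x):
    # discrete sup over the grid of (x t - vals(t))
    return np.max(x * t - vals)

lhs = legendre(k * f, x)      # (k f)^*(x)
rhs = k * legendre(f, x / k)  # k f^*(x / k)
assert abs(lhs - rhs) < 1e-6
assert abs(lhs - x ** 2 / (4 * k)) < 1e-6  # closed form of (k t^2)^* at x
```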
+
+*Example 3 (Gaussian mechanism)*. We can apply the above claim to
+Gaussian mechanism. Again, without loss of generality we assume
+\(S_f = 1\). But let us do it manually to get the same results. If we
+apply the Moment Composition Theorem to an adaptive composition of
+Gaussian mechanisms on the same query, then since each \(M_i\) is
+\((\lambda, (2 \sigma^2)^{-1} \lambda)\)-rdp, the composition \(M\) is
+\((\lambda, (2 \sigma^2)^{-1} k \lambda)\)-rdp. Processing this using the
+Chernoff bound as in the previous example, we have
+
+\[\delta = \exp(- ((2 \sigma / \sqrt k)^{-1} - \epsilon \sigma / \sqrt k)^2 / 2).\]
+
+Substituting \(\sigma\) with \(\sigma / \sqrt k\) in (6.81), we conclude
+that if
+
+\[\sigma > \sqrt k \left(\epsilon^{-1} \sqrt{2 \log \delta^{-1}} + (2 \epsilon)^{- {1 \over 2}}\right)\]
+
+then the composition \(M\) is \((\epsilon, \delta)\)-dp.
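+As a sketch, the sufficient noise scale above as a Python function
+(the parameter values are arbitrary):

```python
import math

def sigma_for_composition(k, eps, delta):
    """Noise scale sufficient for the k-fold composed Gaussian mechanism
    (sensitivity 1) to be (eps, delta)-dp, per the bound above."""
    return math.sqrt(k) * (math.sqrt(2 * math.log(1 / delta)) / eps
                           + 1 / math.sqrt(2 * eps))

sigma = sigma_for_composition(k=100, eps=1.0, delta=1e-5)
```

+Note how the requirement grows like \(\sqrt k\), matching the
+substitution of \(\sigma\) with \(\sigma / \sqrt k\) above.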
+
+As we will see in the discussions at the end of this post, this result
+is different from (and probably better than) the one obtained by using
+the Advanced Composition Theorem (Claim 18).
+
+*** Subsampling
+ :PROPERTIES:
+ :CUSTOM_ID: subsampling
+ :END:
+We also have a subsampling theorem for the Rényi dp.
+
+*Claim 24*. Fix \(r \in [0, 1]\). Let \(m \le n\) be two nonnegative
+integers with \(m = r n\). Let \(N\) be a \((\lambda, \rho)\)-rdp mechanism on
+\(X^m\). Let \(\mathcal I := \{J \subset [n]: |J| = m\}\) be the set of
+subsets of \([n]\) of size \(m\). Define mechanism \(M\) on \(X^n\) by
+
+\[M(x) = N(x_\gamma)\]
+
+where \(\gamma\) is sampled uniformly from \(\mathcal I\). Then \(M\) is
+\((\lambda, {1 \over \lambda - 1} \log (1 + r(e^{(\lambda - 1) \rho} - 1)))\)-rdp.
+
+To prove Claim 24, we need a useful lemma:
+
+*Claim 25*. Let \(p_{1 : n}\) and \(q_{1 : n}\) be nonnegative numbers, and
+\(\lambda > 1\). Then
+
+\[{(\sum p_i)^\lambda \over (\sum q_i)^{\lambda - 1}} \le \sum_i {p_i^\lambda \over q_i^{\lambda - 1}}. \qquad (8)\]
+
+*Proof*. Let
+
+\[r(i) := p_i / P, \qquad u(i) := q_i / Q\]
+
+where
+
+\[P := \sum p_i, \qquad Q := \sum q_i\]
+
+then \(r\) and \(u\) are probability mass functions. Plugging
+\(p_i = r(i) P\) and \(q_i = u(i) Q\) into the objective (8), it suffices
+to show
+
+\[1 \le \sum_i {r(i)^\lambda \over u(i)^{\lambda - 1}} = \mathbb E_{\xi \sim u} \left({r(\xi) \over u(\xi)}\right)^\lambda.\]
+
+This is true due to Jensen's Inequality:
+
+\[\mathbb E_{\xi \sim u} \left({r(\xi) \over u(\xi)}\right)^\lambda \ge \left(\mathbb E_{\xi \sim u} {r(\xi) \over u(\xi)} \right)^\lambda = 1.\]
+
+\(\square\)
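+Claim 25 is easy to check numerically on a small example (a sketch;
+the particular numbers are arbitrary, and equality holds when \(p\) is
+proportional to \(q\)):

```python
p = [3.0, 1.0, 4.0]
q = [2.0, 7.0, 1.0]
lam = 2.5

lhs = sum(p) ** lam / sum(q) ** (lam - 1)
rhs = sum(pi ** lam / qi ** (lam - 1) for pi, qi in zip(p, q))
assert lhs <= rhs

# equality when p is proportional to q
p2 = [2.0 * qi for qi in q]
lhs2 = sum(p2) ** lam / sum(q) ** (lam - 1)
rhs2 = sum(pi ** lam / qi ** (lam - 1) for pi, qi in zip(p2, q))
assert abs(lhs2 - rhs2) < 1e-9
```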
+
+*Proof of Claim 24*. Define \(\mathcal I\) as before.
+
+Let \(p\) and \(q\) be the laws of \(M(x)\) and \(M(x')\) respectively. For any
+\(I \in \mathcal I\), let \(p_I\) and \(q_I\) be the laws of \(N(x_I)\) and
+\(N(x_I')\) respectively. Then we have
+
+$$\begin{aligned}
+p(y) &= n^{-1} \sum_{I \in \mathcal I} p_I(y) \\
+q(y) &= n^{-1} \sum_{I \in \mathcal I} q_I(y),
+\end{aligned}$$
+
+where \(n = |\mathcal I|\).
+
+The MGF of \(L(p || q)\) is thus
+
+\[\mathbb E \exp((\lambda - 1) L(p || q)) = n^{-1} \int {(\sum_I p_I(y))^\lambda \over (\sum_I q_I(y))^{\lambda - 1}} dy \le n^{-1} \sum_I \int {p_I(y)^\lambda \over q_I(y)^{\lambda - 1}} dy \qquad (9)\]
+
+where in the last step we used Claim 25. As in the proof of Claim 19, we
+divide \(\mathcal I\) into disjoint sets \(\mathcal I_\in\) and
+\(\mathcal I_\notin\). Furthermore we denote by \(n_\in\) and \(n_\notin\)
+their cardinalities. Then the right hand side of (9) becomes
+
+\[n^{-1} \sum_{I \in \mathcal I_\in} \int {p_I(y)^\lambda \over q_I(y)^{\lambda - 1}} dy + n^{-1} \sum_{I \in \mathcal I_\notin} \int {p_I(y)^\lambda \over q_I(y)^{\lambda - 1}} dy\]
+
+The summands in the first term are the MGFs of \(L(p_I || q_I)\), and
+the summands in the second term are \(1\), so
+
+$$\begin{aligned}
+\mathbb E \exp((\lambda - 1) L(p || q)) &\le n^{-1} \sum_{I \in \mathcal I_\in} \mathbb E \exp((\lambda - 1) L(p_I || q_I)) + (1 - r) \\
+&\le n^{-1} \sum_{I \in \mathcal I_\in} \exp((\lambda - 1) D_\lambda(p_I || q_I)) + (1 - r) \\
+&\le r \exp((\lambda - 1) \rho) + (1 - r).
+\end{aligned}$$
+
+Taking log and dividing by \((\lambda - 1)\) on both sides we have
+
+\[D_\lambda(p || q) \le (\lambda - 1)^{-1} \log (1 + r(\exp((\lambda - 1) \rho) - 1)).\]
+
+\(\square\)
+
+As before, we can rewrite the conclusion of Claim 24 using
+\(1 + z \le e^z\) and obtain
+\((\lambda, (\lambda - 1)^{-1} r (e^{(\lambda - 1) \rho} - 1))\)-rdp,
+which further gives \((\lambda, \alpha^{-1} (e^\alpha - 1) r \rho)\)-rdp
+(or \((\lambda, O(r \rho))\)-rdp) if \((\lambda - 1) \rho < \alpha\) for
+some \(\alpha\).
+
+It is not hard to see that the subsampling theorem in the moment
+method, though similar to its counterpart in the usual method, does not
+help much, due to the lack of an analogue of the advanced composition
+theorem for the moments.
+
+*Example 4 (Gaussian mechanism)*. Applying the moment subsampling
+theorem to the Gaussian mechanism, we obtain
+\((\lambda, O(r \lambda / \sigma^2))\)-rdp for a subsampled Gaussian
+mechanism with rate \(r\).
+Abadi-Chu-Goodfellow-McMahan-Mironov-Talwar-Zhang 2016 (ACGMMTZ16 in the
+following), however, gains an extra \(r\) in the bound given certain
+assumptions.
+
+** ACGMMTZ16
+ :PROPERTIES:
+ :CUSTOM_ID: acgmmtz16
+ :END:
+What follows is my understanding of this result. I call part of it a
+conjecture because there is a gap: I am able neither to reproduce the
+corresponding step of their proof nor to prove it myself. This does not
+mean the result is false. On the contrary, I am inclined to believe it
+is true.
+
+*Claim 26*. Assume Conjecture 1 (see below) is true. For a subsampled
+Gaussian mechanism with ratio \(r\), if \(r = O(\sigma^{-1})\) and
+\(\lambda = O(\sigma^2)\), then we have
+\((\lambda, O(r^2 \lambda / \sigma^2))\)-rdp.
+
+Wait, why is there a conjecture? Well, I have tried but not been able to
+prove the following, which is a hidden assumption in the original proof:
+
+*Conjecture 1*. Let \(M\) be the Gaussian mechanism with subsampling rate
+\(r\), and \(p\) and \(q\) be the laws of \(M(x)\) and \(M(x')\) respectively,
+where \(d(x, x') = 1\). Then
+
+\[D_\lambda (p || q) \le D_\lambda (r \mu_1 + (1 - r) \mu_0 || \mu_0) \qquad(9.3)\]
+
+where \(\mu_i = N(i, \sigma^2)\).
+
+*Remark*. Conjecture 1 is heuristically reasonable. To see this, let us
+use the notations \(p_I\) and \(q_I\) for \(p\) and \(q\) conditioned on the
+subsampling index \(I\), just like in the proof of the subsampling
+theorems (Claims 19 and 24). Then for \(I \in \mathcal I_\in\),
+
+\[D_\lambda(p_I || q_I) \le D_\lambda(\mu_0 || \mu_1),\]
+
+and for \(I \in \mathcal I_\notin\),
+
+\[D_\lambda(p_I || q_I) = 0 = D_\lambda(\mu_0 || \mu_0).\]
+
+Since we are taking an average over \(\mathcal I\), of which
+\(r |\mathcal I|\) are in \(\mathcal I_\in\) and \((1 - r) |\mathcal I|\) are
+in \(\mathcal I_\notin\), (9.3) says "the inequalities carry over
+averaging".
+
+[[https://math.stackexchange.com/a/3152296/149540][A more general
+version of Conjecture 1 has been proven false]]. The counter example for
+the general version does not apply here, so it is still possible
+Conjecture 1 is true.
+
+Let \(p_\in\) (resp. \(q_\in\)) be the average of \(p_I\) (resp. \(q_I\)) over
+\(I \in \mathcal I_\in\), and \(p_\notin\) (resp. \(q_\notin\)) be the average
+of \(p_I\) (resp. \(q_I\)) over \(I \in \mathcal I_\notin\).
+
+Immediately we have \(p_\notin = q_\notin\), hence
+
+\[D_\lambda(p_\notin || q_\notin) = 0 = D_\lambda(\mu_0 || \mu_0). \qquad(9.7)\]
+
+By Claim 25, we have
+
+\[D_\lambda(p_\in || q_\in) \le D_\lambda (\mu_1 || \mu_0). \qquad(9.9) \]
+
+So one way to prove Conjecture 1 is perhaps to prove a more specialised
+comparison theorem than the false conjecture:
+
+Given (9.7) and (9.9), show that
+
+\[D_\lambda(r p_\in + (1 - r) p_\notin || r q_\in + (1 - r) q_\notin) \le D_\lambda(r \mu_1 + (1 - r) \mu_0 || \mu_0).\]
+
+[End of Remark]
+
+#+begin_html
+ <!---
+ **Conjecture 1** \[Probably [FALSE](https://math.stackexchange.com/a/3152296/149540), to be removed\]. Let \(p_i\), \(q_i\), \(\mu_i\), \(\nu_i\) be
+ probability densities on the same space for \(i = 1 : n\). If
+ \(D_\lambda(p_i || q_i) \le D_\lambda(\mu_i || \nu_i)\) for all \(i\), then
+
+ \[D_\lambda(n^{-1} \sum_i p_i || n^{-1} \sum_i q_i) \le D_\lambda(n^{-1} \sum_i \mu_i || n^{-1} \sum_i \nu_i).\]
+
+ Basically, it is saying \"if for each \(i\), \(p_i\) and \(q_i\) are closer to
+ each other than \(\mu_i\) and \(\nu_i\), then so are their averages over
+ \(i\)\".
+ So it is heuristically reasonable.
+ But it is probably [**FALSE**](https://math.stackexchange.com/a/3152296/149540).
+ This does not mean Claim 26 is false, as it might still be possible that Conjecture 2 (see below) is true.
+
+ This conjecture is equivalent to its special case when \(n = 2\) by an induction argument
+ (replacing one pair of densities at a time).
+ -->
+#+end_html
+
+Recall the definition of \(G_\lambda\) under the definition of Rényi
+differential privacy. The following Claim will be useful.
+
+*Claim 27*. Let \(\lambda\) be a positive integer, then
+
+\[G_\lambda(r p + (1 - r) q || q) = \sum_{k = 0 : \lambda} {\lambda \choose k} r^k (1 - r)^{\lambda - k} G_k(p || q).\]
+
+*Proof*. Quite straightforward, by expanding the numerator
+\((r p + (1 - r) q)^\lambda\) using binomial expansion. \(\square\)
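+Claim 27 can be checked numerically with discrete distributions, where
+\(G_k(f || g) = \sum_y f(y)^k g(y)^{1 - k}\) (a sketch; the identity is
+exact for integer \(\lambda\)):

```python
import math

r, lam = 0.3, 5  # lam must be a positive integer here
p = [0.2, 0.5, 0.3]
q = [0.4, 0.4, 0.2]

def G(f, g, k):
    # discrete analogue of G_k(f || g); note G_0(f || g) = sum(g) = 1
    return sum(fi ** k * gi ** (1 - k) for fi, gi in zip(f, g))

mix = [r * pi + (1 - r) * qi for pi, qi in zip(p, q)]
lhs = G(mix, q, lam)
rhs = sum(math.comb(lam, j) * r ** j * (1 - r) ** (lam - j) * G(p, q, j)
          for j in range(lam + 1))
assert abs(lhs - rhs) < 1e-12
```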
+
+*Proof of Claim 26*. By Conjecture 1, it suffices to prove the
+following:
+
+If \(r \le c_1 \sigma^{-1}\) and \(\lambda \le c_2 \sigma^2\) for some
+positive constant \(c_1\) and \(c_2\), then there exists \(C = C(c_1, c_2)\)
+such that \(G_\lambda (r \mu_1 + (1 - r) \mu_0 || \mu_0) \le C\) (since
+\(O(r^2 \lambda^2 / \sigma^2) = O(1)\)).
+
+*Remark in the proof*. Note that the choice of \(c_1\), \(c_2\) and the
+function \(C(c_1, c_2)\) are important to the practicality and usefulness
+of Claim 26.
+
+#+begin_html
+ <!---
+ Part 1 can be derived using Conjecture 1, but since Conjecture 1 is probably false,
+ let us rename Part 1 itself _Conjecture 2_, which needs to be verified by other means.
+ We use the notations \(p_I\) and \(q_I\) to be \(q\) and \(p\) conditioned on
+ the subsampling index \(I\), just like in the proof of the subsampling theorems (Claim 19 and 24).
+ Then
+
+ $$D_\lambda(q_I || p_I) = D_\lambda(p_I || q_I)
+ \begin{cases}
+ \le D_\lambda(\mu_0 || \mu_1) = D_\lambda(\mu_1 || \mu_0), & I \in \mathcal I_\in\\
+ = D_\lambda(\mu_0 || \mu_0) = D_\lambda(\mu_1 || \mu_1) = 0 & I \in \mathcal I_\notin
+ \end{cases}$$
+
+ Since \(p = |\mathcal I|^{-1} \sum_{I \in \mathcal I} p_I\) and
+ \(q = |\mathcal I|^{-1} \sum_{I \in \mathcal I} q_I\) and
+ \(|\mathcal I_\in| = r |\mathcal I|\), by Conjecture 1, we have Part 1.
+
+ **Remark in the proof**. As we can see here, instead of trying to prove Conjecture 1,
+ it suffices to prove a weaker version of it, by specialising on mixture of Gaussians,
+ in order to have a Claim 26 without any conjectural assumptions.
+ I have in fact posted the Conjecture on [Stackexchange](https://math.stackexchange.com/questions/3147963/an-inequality-related-to-the-renyi-divergence).
+
+ Now let us verify Part 2.
+ -->
+#+end_html
+
+Using Claim 27 and Example 1, we have
+
+$$\begin{aligned}
+G_\lambda(r \mu_1 + (1 - r) \mu_0 || \mu_0) &= \sum_{j = 0 : \lambda} {\lambda \choose j} r^j (1 - r)^{\lambda - j} G_j(\mu_1 || \mu_0)\\
+&=\sum_{j = 0 : \lambda} {\lambda \choose j} r^j (1 - r)^{\lambda - j} \exp(j (j - 1) / 2 \sigma^2). \qquad (9.5)
+\end{aligned}$$
+
+Let \(n := \lceil \sigma^2 \rceil\). It suffices to show
+
+\[\sum_{j = 0 : n} {n \choose j} (c_1 n^{- 1 / 2})^j (1 - c_1 n^{- 1 / 2})^{n - j} \exp(c_2 j (j - 1) / 2 n) \le C\]
+
+Note that we can discard the negative linear term \(- c_2 j / 2 n\) in the
+exponent, since we want to bound the sum from above.
+
+We examine the asymptotics of this sum when \(n\) is large, and treat the
+sum as an approximation to the integral of a function
+\(\phi_n: [0, 1] \to \mathbb R\). For \(j = x n\), where \(x \in (0, 1)\),
+\(\phi_n\) is thus defined as follows (note we multiply the summand by \(n\)
+to compensate for the uniform measure on \(\{1, ..., n\}\)):
+
+$$\begin{aligned}
+\phi_n(x) &:= n {n \choose j} (c_1 n^{- 1 / 2})^j (1 - c_1 n^{- 1 / 2})^{n - j} \exp(c_2 j^2 / 2 n) \\
+&= n {n \choose x n} (c_1 n^{- 1 / 2})^{x n} (1 - c_1 n^{- 1 / 2})^{(1 - x) n} \exp(c_2 x^2 n / 2)
+\end{aligned}$$
+
+Using Stirling's approximation
+
+\[n! \approx \sqrt{2 \pi n} n^n e^{- n},\]
+
+we can approximate the binomial coefficient:
+
+\[{n \choose x n} \approx (\sqrt{2 \pi n x (1 - x)} x^{x n} (1 - x)^{(1 - x) n})^{-1}.\]
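
As a quick numerical sanity check, we can compare the approximation \({n \choose x n} \approx (\sqrt{2 \pi n x (1 - x)} x^{x n} (1 - x)^{(1 - x) n})^{-1}\) against an exact log-gamma computation (a sketch; the helper names are mine):

```python
import math

def log_binom_exact(n, k):
    # exact log C(n, k) via log-gamma
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def log_binom_stirling(n, x):
    # log of (sqrt(2 pi n x (1-x)) x^(xn) (1-x)^((1-x)n))^(-1)
    return (-0.5 * math.log(2 * math.pi * n * x * (1 - x))
            - x * n * math.log(x)
            - (1 - x) * n * math.log(1 - x))

n, x = 1000, 0.3
print(log_binom_exact(n, int(n * x)), log_binom_stirling(n, x))
```

For moderately large \(n\) the two values agree to within a small additive error in the log, as expected from the \(O(n^{-1})\) error of Stirling's approximation.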
+
+We also approximate
+
+\[(1 - c_1 n^{- 1 / 2})^{(1 - x) n} \approx \exp(- c_1 \sqrt{n} (1 - x)).\]
+
+With these we have
+
+\[\phi_n(x) \approx {1 \over \sqrt{2 \pi x (1 - x)}} \exp\left(- {1 \over 2} x n \log n + (x \log c_1 - x \log x - (1 - x) \log (1 - x) + {1 \over 2} c_2 x^2) n + {1 \over 2} \log n\right).\]
+
+This vanishes as \(n \to \infty\), and since \(\phi_n(x)\) is bounded above
+by the integrable function \({1 \over \sqrt{2 \pi x (1 - x)}}\) (cf. the
+arcsine law), and below by \(0\), we may invoke the dominated convergence
+theorem to exchange the integral with the limit and get
+
+$$\begin{aligned}
+\lim_{n \to \infty} &G_n (r \mu_1 + (1 - r) \mu_0 || \mu_0) \\
+&\le \lim_{n \to \infty} \int \phi_n(x) dx = \int \lim_{n \to \infty} \phi_n(x) dx = 0.
+\end{aligned}$$
+
+Thus we have that the generating function of the divergence variable
+\(L(r \mu_1 + (1 - r) \mu_0 || \mu_0)\) is bounded.
+
+Can this be true for better orders
+
+\[r \le c_1 \sigma^{- d_r},\qquad \lambda \le c_2 \sigma^{d_\lambda}\]
+
+for some \(d_r \in (0, 1]\) and \(d_\lambda \in [2, \infty)\)? If we follow
+the same approximation using these exponents, then letting
+\(n = c_2 \sigma^{d_\lambda}\),
+
+$$\begin{aligned}
+{n \choose j} &r^j (1 - r)^{n - j} G_j(\mu_0 || \mu_1) \le \phi_n(x) \\
+&\approx {1 \over \sqrt{2 \pi x (1 - x)}} \exp\left({1 \over 2} c_2^{2 \over d_\lambda} x^2 n^{2 - {2 \over d_\lambda}} - {d_r \over 2} x n \log n + (x \log c_1 - x \log x - (1 - x) \log (1 - x)) n + {1 \over 2} \log n\right).
+\end{aligned}$$
+
+So we see that to keep the divergence moments bounded it is possible to
+have any \(r = O(\sigma^{- d_r})\) for \(d_r \in (0, 1)\), but relaxing
+\(\lambda\) may not be safe.
+
+If we relax \(r\), then we get
+
+\[G_\lambda(r \mu_1 + (1 - r) \mu_0 || \mu_0) = O(r^{2 / d_r} \lambda^2 \sigma^{-2}) = O(1).\]
+
+Note that now the constant \(C\) depends on \(d_r\) as well. Numerical
+experiments seem to suggest that \(C\) can increase quite rapidly as \(d_r\)
+decreases from \(1\). \(\square\)
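
To get a rough numerical feel for the constant \(C\) in the previous remark, we can evaluate the sum above directly (a sketch, assuming \(c_1 = c_2 = 1\); the function and the chosen values of \(n\) are mine):

```python
import math

def log_binom(n, k):
    # log of the binomial coefficient C(n, k), via log-gamma
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def moment_sum(n, c1=1.0, c2=1.0):
    # sum_{j=0}^{n} C(n, j) r^j (1 - r)^(n - j) exp(c2 j (j - 1) / (2 n))
    # with r = c1 n^(-1/2), evaluated in log space for numerical stability
    r = c1 / math.sqrt(n)
    logs = []
    for j in range(n + 1):
        t = log_binom(n, j)
        if j > 0:
            t += j * math.log(r)
        if j < n:
            t += (n - j) * math.log(1 - r)
        t += c2 * j * (j - 1) / (2 * n)
        logs.append(t)
    m = max(logs)  # log-sum-exp trick
    return math.exp(m) * sum(math.exp(t - m) for t in logs)

for n in [50, 100, 200, 400]:
    print(n, moment_sum(n))
```

In this range the values stay below \(2\), consistent with the boundedness claim; estimating how \(C(c_1, c_2)\) grows would require scanning over \(c_1\) and \(c_2\) as well.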
+
+In the following for consistency we retain \(k\) as the number of epochs,
+and use \(T := k / r\) to denote the number of compositions / steps /
+minibatches. With Claim 26 we have:
+
+*Claim 28*. Assume Conjecture 1 is true. Let \(\epsilon, c_1, c_2 > 0\),
+\(r \le c_1 \sigma^{-1}\), and
+\(T = {c_2 \over 2 C(c_1, c_2)} \epsilon \sigma^2\). Then DP-SGD with
+subsampling rate \(r\) and \(T\) steps is \((\epsilon, \delta)\)-dp for
+
+\[\delta = \exp(- {1 \over 2} c_2 \sigma^2 \epsilon).\]
+
+In other words, for
+
+\[\sigma \ge \sqrt{2 c_2^{-1}} \epsilon^{- {1 \over 2}} \sqrt{\log \delta^{-1}},\]
+
+we can achieve \((\epsilon, \delta)\)-dp.
+
+*Proof*. By Claim 26 and the Moment Composition Theorem (Claim 22), for
+\(\lambda = c_2 \sigma^2\), substituting
+\(T = {c_2 \over 2 C(c_1, c_2)} \epsilon \sigma^2\), we have
+
+\[\mathbb P(L(p || q) \ge \epsilon) \le \exp(T C(c_1, c_2) - \lambda \epsilon) = \exp\left(- {1 \over 2} c_2 \sigma^2 \epsilon\right).\]
+
+\(\square\)
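
As a concrete translation of Claim 28's bound into numbers (a sketch; the helper names are mine, and \(c_2\) is taken as given):

```python
import math

def sigma_for(eps, delta, c2):
    # smallest sigma satisfying Claim 28's condition:
    # sigma >= sqrt(2 / c2) * eps^(-1/2) * sqrt(log(1 / delta))
    return math.sqrt(2.0 / c2 * math.log(1.0 / delta) / eps)

def delta_of(eps, sigma, c2):
    # delta = exp(-(1/2) c2 sigma^2 eps), as in Claim 28
    return math.exp(-0.5 * c2 * sigma ** 2 * eps)

# round trip: the sigma chosen for a target (eps, delta) yields that delta
sigma = sigma_for(1.0, 1e-5, c2=1.0)
print(sigma, delta_of(1.0, sigma, c2=1.0))
```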
+
+*Remark*. Claim 28 is my understanding / version of Theorem 1 in
+[ACGMMTZ16], using the same proof technique. Here I quote the
+original version of the theorem, with notions and notations altered for
+consistency with this post:
+
+#+begin_quote
+ There exists constants \(c_1', c_2' > 0\) so that for any
+ \(\epsilon < c_1' r^2 T\), DP-SGD is \((\epsilon, \delta)\)-differentially
+ private for any \(\delta > 0\) if we choose
+#+end_quote
+
+\[\sigma \ge c_2' {r \sqrt{T \log (1 / \delta)} \over \epsilon}. \qquad (10)\]
+
+I am however unable to reproduce this version, assuming Conjecture 1 is
+true, for the following reasons:
+
+1. In the proof in the paper, we have \(\epsilon = c_1' r^2 T\) instead of
+   "less than" in the statement of the Theorem. If we change it to
+   \(\epsilon < c_1' r^2 T\) then the direction of the inequality becomes
+   opposite to the direction we want to prove:
+   \[\exp(T C(c_1, c_2) - \lambda \epsilon) \ge ...\]
+
+2. The condition \(r = O(\sigma^{-1})\) of Claim 26, whose result is used
+   in the proof of this theorem, is not mentioned in the statement of the
+   theorem. The implication is that (10) becomes an ill-formed condition,
+   as its right hand side also depends on \(\sigma\).
+
+** Tensorflow implementation
+ :PROPERTIES:
+ :CUSTOM_ID: tensorflow-implementation
+ :END:
+The DP-SGD is implemented in
+[[https://github.com/tensorflow/privacy][TensorFlow Privacy]]. In the
+following I discuss the package in its current state (as of 2019-03-11). It is
+divided into two parts:
+[[https://github.com/tensorflow/privacy/tree/master/privacy/optimizers][=optimizers=]]
+which implements the actual differentially private algorithms, and
+[[https://github.com/tensorflow/privacy/tree/master/privacy/analysis][=analysis=]]
+which computes the privacy guarantee.
+
+The =analysis= part implements a privacy ledger that "keeps a record of
+all queries executed over a given dataset for the purpose of computing
+privacy guarantees". On the other hand, all the computation is done in
+[[https://github.com/tensorflow/privacy/blob/7e2d796bdee9b60dce21a82a397eefda35b0ac10/privacy/analysis/rdp_accountant.py][=rdp_accountant.py=]].
+At this moment, =rdp_accountant.py= only implements the computation of
+the privacy guarantees for DP-SGD with Gaussian mechanism. In the
+following I will briefly explain the code in this file.
+
+Some notational correspondences: their =alpha= is our \(\lambda\), their
+=q= is our \(r\), their =A_alpha= (in the comments) is our
+\(\kappa_{r N(1, \sigma^2) + (1 - r) N(0, \sigma^2)} (\lambda - 1)\), at
+least when \(\lambda\) is an integer.
+
+- The function =_compute_log_a= presumably computes the cumulants
+ \(\kappa_{r N(1, \sigma^2) + (1 - r) N(0, \sigma^2), N(0, \sigma^2)}(\lambda - 1)\).
+ It calls =_compute_log_a_int= or =_compute_log_a_frac= depending on
+ whether \(\lambda\) is an integer.
+- The function =_compute_log_a_int= computes the cumulant using (9.5).
+- When \(\lambda\) is not an integer, we can't use (9.5). I have yet to
+  decode how =_compute_log_a_frac= computes the cumulant (or an upper
+  bound of it) in this case.
+- The function =_compute_delta= computes \(\delta\)s for a list of
+  \(\lambda\)s and \(\kappa\)s using Item 1 of Claim 25 and returns the
+  smallest one, and the function =_compute_epsilon= computes \(\epsilon\)
+  using Item 3 of Claim 25 in the same way.
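
For the integer case, here is a minimal sketch of the (9.5)-based computation (my own reimplementation for illustration, not the library's actual code; the function name is made up):

```python
import math

def log_binom(n, k):
    # log of the binomial coefficient C(n, k)
    return math.lgamma(n + 1) - math.lgamma(k + 1) - math.lgamma(n - k + 1)

def compute_log_moment_int(q, sigma, lam):
    # kappa(lam - 1) = log G_lam(q N(1, sigma^2) + (1 - q) N(0, sigma^2) || N(0, sigma^2))
    # = log sum_{j=0}^{lam} C(lam, j) q^j (1-q)^(lam-j) exp(j (j-1) / (2 sigma^2)),
    # evaluated with a log-sum-exp for numerical stability (equation (9.5))
    logs = []
    for j in range(lam + 1):
        if (j > 0 and q == 0) or (j < lam and q == 1):
            continue  # term has a zero coefficient
        t = log_binom(lam, j)
        if j > 0:
            t += j * math.log(q)
        if j < lam:
            t += (lam - j) * math.log(1 - q)
        t += j * (j - 1) / (2 * sigma ** 2)
        logs.append(t)
    m = max(logs)
    return m + math.log(sum(math.exp(t - m) for t in logs))
```

With \(q = 1\) (no subsampling) this reduces to the Gaussian cumulant \(\lambda (\lambda - 1) / 2 \sigma^2\) of Example 1, and with \(q = 0\) it is \(0\).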
+
+In =optimizers=, among other things, the DP-SGD with Gaussian mechanism
+is implemented in =dp_optimizer.py= and =gaussian_query.py=. See the
+definition of =DPGradientDescentGaussianOptimizer= in =dp_optimizer.py=
+and trace the calls therein.
+
+At this moment, the privacy guarantee computation part and the optimizer
+part are separated, with =rdp_accountant.py= called in
+=compute_dp_sgd_privacy.py= with user-supplied parameters. I think this
+is due to the lack of implementation in =rdp_accountant.py= of any
+non-DPSGD-with-Gaussian privacy guarantee computation. There is already
+[[https://github.com/tensorflow/privacy/issues/23][an issue on this]],
+so hopefully it won't be long before the privacy guarantees can be
+automatically computed given a DP-SGD instance.
+
+** Comparison among different methods
+ :PROPERTIES:
+ :CUSTOM_ID: comparison-among-different-methods
+ :END:
+So far we have seen three routes to compute the privacy guarantees for
+DP-SGD with the Gaussian mechanism:
+
+1. Claim 9 (single Gaussian mechanism privacy guarantee) -> Claim 19
+ (Subsampling theorem) -> Claim 18 (Advanced Adaptive Composition
+ Theorem)
+2. Example 1 (RDP for the Gaussian mechanism) -> Claim 22 (Moment
+ Composition Theorem) -> Example 3 (Moment composition applied to the
+ Gaussian mechanism)
+3. Claim 26 (RDP for Gaussian mechanism with specific magnitudes for
+ subsampling rate) -> Claim 28 (Moment Composition Theorem and
+ translation to conventional DP)
+
+Which one is the best?
+
+To make a fair comparison, we may use one parameter as the metric and set
+all others to be the same. For example, we can
+
+1. Given the same \(\epsilon\), \(r\) (in Route 1 and 3), \(k\), \(\sigma\),
+ compare the \(\delta\)s
+2. Given the same \(\epsilon\), \(r\) (in Route 1 and 3), \(k\), \(\delta\),
+ compare the \(\sigma\)s
+3. Given the same \(\delta\), \(r\) (in Route 1 and 3), \(k\), \(\sigma\),
+ compare the \(\epsilon\)s.
+
+I find the first one, where \(\delta\) is used as the metric, the best.
+This is because we have the tightest bounds and the cleanest formulas
+when comparing the \(\delta\)s. For example, the Azuma and Chernoff bounds
+are both expressed as bounds on \(\delta\). On the other hand, inverting
+these bounds costs either tightness (Claim 9, bounds on \(\sigma\)) or
+simplicity of the formula (Claim 18, the Advanced Adaptive Composition
+Theorem, bounds on \(\epsilon\)).
+
+So if we use \(\sigma\) or \(\epsilon\) as a metric, either we get a less
+fair comparison, or have to use a much more complicated formula as the
+bounds.
+
+Let us first compare Route 1 and Route 2 without specialising to the
+Gaussian mechanism.
+
+*Warning*. What follows is a bit messy.
+
+Suppose each mechanism \(N_i\) is
+\((\epsilon', \delta(\epsilon'))\)-dp. Let
+\(\tilde \epsilon := \log (1 + r (e^{\epsilon'} - 1))\); then the
+subsampled mechanism \(M_i(x) = N_i(x_\gamma)\) is
+\((\tilde \epsilon, r \tilde \delta(\tilde \epsilon))\)-dp, where
+
+\[\tilde \delta(\tilde \epsilon) = \delta(\log (r^{-1} (\exp(\tilde \epsilon) - 1) + 1))\]
+
+Using the Azuma bound in the proof of the Advanced Adaptive Composition
+Theorem (6.99):
+
+\[\mathbb P(L(p^k || q^k) \ge \epsilon) \le \exp(- {(\epsilon - r^{-1} k a(\tilde\epsilon))^2 \over 2 r^{-1} k (\tilde\epsilon + a(\tilde\epsilon))^2}).\]
+
+So we have the final bound for Route 1:
+
+\[\delta_1(\epsilon) = \min_{\tilde \epsilon: \epsilon > r^{-1} k a(\tilde \epsilon)} \exp(- {(\epsilon - r^{-1} k a(\tilde\epsilon))^2 \over 2 r^{-1} k (\tilde\epsilon + a(\tilde\epsilon))^2}) + k \tilde \delta(\tilde \epsilon).\]
+
+As for Route 2, since we do not gain anything from subsampling in RDP,
+we do not subsample at all.
+
+By Claim 23, we have the bound for Route 2:
+
+\[\delta_2(\epsilon) = \exp(- k \kappa^* (\epsilon / k)).\]
+
+On one hand, one can compare \(\delta_1\) and \(\delta_2\) with numerical
+experiments. On the other hand, if we further specify
+\(\delta(\epsilon')\) in Route 1 as the Chernoff bound for the cumulants
+of divergence variable, i.e.
+
+\[\delta(\epsilon') = \exp(- \kappa^* (\epsilon')),\]
+
+we have
+
+\[\delta_1 (\epsilon) = \min_{\tilde \epsilon: a(\tilde \epsilon) < r k^{-1} \epsilon} \exp(- {(\epsilon - r^{-1} k a(\tilde\epsilon))^2 \over 2 r^{-1} k (\tilde\epsilon + a(\tilde\epsilon))^2}) + k \exp(- \kappa^* (b(\tilde\epsilon))),\]
+
+where
+
+\[b(\tilde \epsilon) := \log (r^{-1} (\exp(\tilde \epsilon) - 1) + 1) \le r^{-1} \tilde\epsilon.\]
+
+We note that since
+\(a(\tilde \epsilon) = \tilde\epsilon(e^{\tilde \epsilon} - 1) 1_{\tilde\epsilon < \log 2} + \tilde\epsilon 1_{\tilde\epsilon \ge \log 2}\),
+we may compare the two cases separately.
+
+Note that \(\kappa^*\) is a monotonically increasing function,
+therefore
+
+\[\kappa^* (b(\tilde\epsilon)) \le \kappa^*(r^{-1} \tilde\epsilon).\]
+
+So for \(\tilde \epsilon \ge \log 2\), we have
+
+\[k \exp(- \kappa^*(b(\tilde\epsilon))) \ge k \exp(- \kappa^*(r^{-1} \tilde \epsilon)) \ge k \exp(- \kappa^*(k^{-1} \epsilon)) \ge \delta_2(\epsilon).\]
+
+For \(\tilde\epsilon < \log 2\), it is harder to compare, as now
+
+\[k \exp(- \kappa^*(b(\tilde\epsilon))) \ge k \exp(- \kappa^*(\epsilon / \sqrt{r k})).\]
+
+It is tempting to believe that this should also be greater than
+\(\delta_2(\epsilon)\), but I cannot say for sure. At least in the
+special case of the Gaussian, we have
+
+\[k \exp(- \kappa^*(\epsilon / \sqrt{r k})) = k \exp(- (\sigma \sqrt{\epsilon / k r} - (2 \sigma)^{-1})^2) \ge \exp(- k ({\sigma \epsilon \over k} - (2 \sigma)^{-1})^2) = \delta_2(\epsilon)\]
+
+when \(\epsilon\) is sufficiently small. However, we still need to consider
+the case where \(\epsilon\) is not too small. But overall it seems most
+likely that Route 2 is superior to Route 1.
+
+So let us compare Route 2 with Route 3:
+
+Given the condition to obtain the Chernoff bound
+
+\[{\sigma \epsilon \over k} > (2 \sigma)^{-1}\]
+
+we have
+
+\[\delta_2(\epsilon) > \exp(- k (\sigma \epsilon / k)^2) = \exp(- \sigma^2 \epsilon^2 / k).\]
+
+For this to achieve the same bound
+
+\[\delta_3(\epsilon) = \exp\left(- {1 \over 2} c_2 \sigma^2 \epsilon\right)\]
+
+we need \(k < {2 \epsilon \over c_2}\). This is only possible if \(c_2\) is
+small or \(\epsilon\) is large, since \(k\) is a positive integer.
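+
+Matching the exponents makes this explicit:
+
+\[\exp(- \sigma^2 \epsilon^2 / k) \le \exp\left(- {1 \over 2} c_2 \sigma^2 \epsilon\right) \iff {\epsilon \over k} \ge {c_2 \over 2} \iff k \le {2 \epsilon \over c_2}.\]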
+
+So taken at face value, Route 3 seems to achieve the best results.
+However, it also has some similar implicit conditions that need to be
+satisfied: First, \(T\) needs to be at least \(1\), meaning
+
+\[{c_2 \over C(c_1, c_2)} \epsilon \sigma^2 \ge 1.\]
+
+Second, \(k\) needs to be at least \(1\) as well, i.e.
+
+\[k = r T \ge {c_1 c_2 \over C(c_1, c_2)} \epsilon \sigma \ge 1.\]
+
+Both conditions rely on the magnitudes of \(\epsilon\), \(\sigma\), \(c_1\),
+\(c_2\), and the rate of growth of \(C(c_1, c_2)\). The biggest problem in
+this list is the last, because if we knew how fast \(C\) grows, we would
+have a better idea of what constraints the parameters must satisfy to
+achieve the result in Route 3.
+
+** Further questions
+ :PROPERTIES:
+ :CUSTOM_ID: further-questions
+ :END:
+Here is a list of what I think may be interesting topics or potential
+problems to look at, with no guarantee that they are all awesome
+untouched research problems:
+
+1. Prove Conjecture 1.
+2. Find a theoretically definitive answer as to whether the methods in
+   Part 1 or Part 2 yield better privacy guarantees.
+3. Study the non-Gaussian cases, general or specific. Let \(p\) be some
+ probability density, what is the tail bound of
+ \(L(p(y) || p(y + \alpha))\) for \(|\alpha| \le 1\)? Can you find
+ anything better than Gaussian? For a start, perhaps the nice tables
+ of Rényi divergence in Gil-Alajaji-Linder 2013 may be useful?
+4. Find out how useful Claim 26 is. Perhaps start with computing the
+   constant \(C\) numerically.
+5. Help with [[https://github.com/tensorflow/privacy/issues/23][the
+ aforementioned issue]] in the Tensorflow privacy package.
+
+** References
+ :PROPERTIES:
+ :CUSTOM_ID: references
+ :END:
+
+- Abadi, Martín, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya
+ Mironov, Kunal Talwar, and Li Zhang. "Deep Learning with Differential
+ Privacy." Proceedings of the 2016 ACM SIGSAC Conference on Computer
+ and Communications Security - CCS'16, 2016, 308--18.
+ [[https://doi.org/10.1145/2976749.2978318]].
+- Erven, Tim van, and Peter Harremoës. "Rényi Divergence and
+  Kullback-Leibler Divergence." IEEE Transactions on Information Theory
+  60, no. 7 (July 2014): 3797--3820.
+ [[https://doi.org/10.1109/TIT.2014.2320500]].
+- Gil, M., F. Alajaji, and T. Linder. "Rényi Divergence Measures for
+ Commonly Used Univariate Continuous Distributions." Information
+ Sciences 249 (November 2013): 124--31.
+ [[https://doi.org/10.1016/j.ins.2013.06.018]].
+- Mironov, Ilya. "Renyi Differential Privacy." 2017 IEEE 30th Computer
+ Security Foundations Symposium (CSF), August 2017, 263--75.
+ [[https://doi.org/10.1109/CSF.2017.11]].
diff --git a/posts/blog.html b/posts/blog.html
new file mode 100644
index 0000000..80176e7
--- /dev/null
+++ b/posts/blog.html
@@ -0,0 +1,21 @@
+#+TITLE: All posts
+
+- *[[file:sitemap.org][All posts]]* - 2021-06-17
+- *[[file:2019-03-14-great-but-manageable-expectations.org][Great but Manageable Expectations]]* - 2019-03-14
+- *[[file:2019-03-13-a-tail-of-two-densities.org][A Tail of Two Densities]]* - 2019-03-13
+- *[[file:2019-02-14-raise-your-elbo.org][Raise your ELBO]]* - 2019-02-14
+- *[[file:2019-01-03-discriminant-analysis.org][Discriminant analysis]]* - 2019-01-03
+- *[[file:2018-12-02-lime-shapley.org][Shapley, LIME and SHAP]]* - 2018-12-02
+- *[[file:2018-06-03-automatic_differentiation.org][Automatic differentiation]]* - 2018-06-03
+- *[[file:2018-04-10-update-open-research.org][Updates on open research]]* - 2018-04-29
+- *[[file:2017-08-07-mathematical_bazaar.org][The Mathematical Bazaar]]* - 2017-08-07
+- *[[file:2017-04-25-open_research_toywiki.org][Open mathematical research and launching toywiki]]* - 2017-04-25
+- *[[file:2016-10-13-q-robinson-schensted-knuth-polymer.org][A \(q\)-Robinson-Schensted-Knuth algorithm and a \(q\)-polymer]]* - 2016-10-13
+- *[[file:2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.org][AMS review of 'Double Macdonald polynomials as the stable limit of Macdonald superpolynomials' by Blondeau-Fournier, Lapointe and Mathieu]]* - 2015-07-15
+- *[[file:2015-07-01-causal-quantum-product-levy-area.org][On a causal quantum double product integral related to Lévy stochastic area.]]* - 2015-07-01
+- *[[file:2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.org][AMS review of 'Infinite binary words containing repetitions of odd period' by Badkobeh and Crochemore]]* - 2015-05-30
+- *[[file:2015-04-02-juggling-skill-tree.org][jst]]* - 2015-04-02
+- *[[file:2015-04-01-unitary-double-products.org][Unitary causal quantum stochastic double products as universal]]* - 2015-04-01
+- *[[file:2015-01-20-weighted-interpretation-super-catalan-numbers.org][AMS review of 'A weighted interpretation for the super Catalan]]* - 2015-01-20
+- *[[file:2014-04-01-q-robinson-schensted-symmetry-paper.org][Symmetry property of \(q\)-weighted Robinson-Schensted algorithms and branching algorithms]]* - 2014-04-01
+- *[[file:2013-06-01-q-robinson-schensted-paper.org][A \(q\)-weighted Robinson-Schensted algorithm]]* - 2013-06-01 \ No newline at end of file
diff --git a/publish.el b/publish.el
new file mode 100644
index 0000000..a9c3a5c
--- /dev/null
+++ b/publish.el
@@ -0,0 +1,119 @@
+(require 'ox-publish)
+
+(defvar this-date-format "%Y-%m-%d")
+
+(defun me/html-preamble-post (plist)
+ "PLIST: An entry."
+ (if (org-export-get-date plist this-date-format)
+ (plist-put plist
+ :subtitle (format "Published on %s by %s"
+ (org-export-get-date plist this-date-format)
+ (car (plist-get plist :author)))))
+ ;; Preamble
+ (with-temp-buffer
+ (insert-file-contents "../html-templates/post-preamble.html") (buffer-string)))
+
+(defun me/org-posts-sitemap-format-entry (entry style project)
+ "Format posts with author and published data in the index page.
+
+ENTRY: file-name
+STYLE:
+PROJECT: `posts' in this case."
+ (cond ((not (directory-name-p entry))
+ (format "*[[file:posts/%s][%s]]* - %s"
+ entry
+ (org-publish-find-title entry project)
+ (format-time-string this-date-format
+ (org-publish-find-date entry project))))
+ ((eq style 'tree) (file-name-nondirectory (directory-file-name entry)))
+ (t entry)))
+
+(defun me/org-microposts-sitemap (title list)
+  "Site map for microposts, as a string.
+TITLE is the title of the site map.  LIST is an internal
+representation for the files to include, as returned by
+`org-list-to-lisp'."
+ (concat "#+TITLE: " title "\n\n"
+ (org-list-to-org list)))
+
+
+(defun org-publish-find-content (file project)
+ (let ((file (org-publish--expand-file-name file project)))
+ (when (and (file-readable-p file) (not (directory-name-p file)))
+ (with-temp-buffer
+ (insert-file-contents file)
+ (goto-char (point-min))
+ (let ((beg (+ 1 (re-search-forward "^$"))))
+; (print (concat file ": " (number-to-string beg) ", " (number-to-string (point-max))))
+ (buffer-substring beg (point-max)))))))
+
+(defun me/org-microposts-sitemap-format-entry (entry style project)
+ "Format posts with author and published data in the index page.
+
+ENTRY: file-name
+STYLE:
+PROJECT: `posts' in this case."
+ (cond ((not (directory-name-p entry))
+ (format "%s - *[[file:microposts/%s][%s]]*\n\n%s"
+ (format-time-string this-date-format
+ (org-publish-find-date entry project))
+ entry
+ (org-publish-find-title entry project)
+ (org-publish-find-content entry project)
+ ))
+ ((eq style 'tree) (file-name-nondirectory (directory-file-name entry)))
+ (t entry)))
+
+(setq org-publish-project-alist
+ '(("posts"
+ :base-directory "posts/"
+ :base-extension "org"
+ :publishing-directory "site/posts"
+ :recursive t
+ :publishing-function org-html-publish-to-html
+ :auto-sitemap t
+ :sitemap-format-entry me/org-posts-sitemap-format-entry
+ :sitemap-title "All posts"
+ :sitemap-sort-files anti-chronologically
+ :sitemap-filename "../pages/blog.org"
+ :html-head "<link rel='stylesheet' href='../css/default.css' type='text/css'/>"
+ :html-preamble me/html-preamble-post
+ :author ("Yuchen Pei")
+ :html-postamble ""
+ )
+ ("microposts"
+ :base-directory "microposts/"
+ :base-extension "org"
+ :publishing-directory "site/microposts"
+ :recursive t
+ :publishing-function org-html-publish-to-html
+ :auto-sitemap t
+ :sitemap-format-entry me/org-microposts-sitemap-format-entry
+ :sitemap-function me/org-microposts-sitemap
+ :sitemap-title "Microblog"
+ :sitemap-sort-files anti-chronologically
+ :sitemap-filename "../pages/microblog.org"
+ :html-head "<link rel='stylesheet' href='../css/default.css' type='text/css'/>"
+ :html-preamble me/html-preamble-post
+ :author ("Yuchen Pei")
+ :html-postamble ""
+ )
+ ("pages"
+ :base-directory "pages/"
+ :base-extension "org"
+ :publishing-directory "site/"
+ :recursive t
+ :publishing-function org-html-publish-to-html
+ :html-head "<link rel='stylesheet' href='../css/default.css' type='text/css'/>"
+ :html-preamble me/html-preamble-post
+ :author ("Yuchen Pei")
+ :html-postamble ""
+ )
+ ("css"
+ :base-directory "css/"
+ :base-extension "css"
+ :publishing-directory "site/css"
+ :publishing-function org-publish-attachment
+ :recursive t
+ )
+ ("all" :components ("posts" "microposts" "pages" "css"))))
diff --git a/site-from-md/assets b/site-from-md/assets
new file mode 120000
index 0000000..bae6859
--- /dev/null
+++ b/site-from-md/assets
@@ -0,0 +1 @@
+../assets/ \ No newline at end of file
diff --git a/site-from-md/blog-feed.xml b/site-from-md/blog-feed.xml
new file mode 100644
index 0000000..9606227
--- /dev/null
+++ b/site-from-md/blog-feed.xml
@@ -0,0 +1,1864 @@
+<?xml version="1.0" encoding="utf-8"?>
+<feed xmlns="http://www.w3.org/2005/Atom">
+ <title type="text">Yuchen Pei's Blog</title>
+ <id>https://ypei.me/blog-feed.xml</id>
+ <updated>2019-03-14T00:00:00Z</updated>
+ <link href="https://ypei.me" />
+ <link href="https://ypei.me/blog-feed.xml" rel="self" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <generator>PyAtom</generator>
+ <entry xml:base="https://ypei.me/blog-feed.xml">
+ <title type="text">Great but Manageable Expectations</title>
+ <id>posts/2019-03-14-great-but-manageable-expectations.html</id>
+ <updated>2019-03-14T00:00:00Z</updated>
+ <link href="posts/2019-03-14-great-but-manageable-expectations.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+    <content type="html">&lt;p&gt;This is Part 2 of a two-part blog post on differential privacy. Continuing from &lt;a href="/posts/2019-03-13-a-tail-of-two-densities.html"&gt;Part 1&lt;/a&gt;, I discuss the Rényi differential privacy, corresponding to the Rényi divergence, a study of the moment generating functions of the divergence between probability measures to derive the tail bounds.&lt;/p&gt;
+&lt;p&gt;Like in Part 1, I prove a composition theorem and a subsampling theorem.&lt;/p&gt;
+&lt;p&gt;I also attempt to reproduce a seemingly better moment bound for the Gaussian mechanism with subsampling, with one intermediate step which I am not able to prove.&lt;/p&gt;
+&lt;p&gt;After that I explain the Tensorflow implementation of differential privacy in its &lt;a href="https://github.com/tensorflow/privacy/tree/master/privacy"&gt;Privacy&lt;/a&gt; module, which focuses on the differentially private stochastic gradient descent algorithm (DP-SGD).&lt;/p&gt;
+&lt;p&gt;Finally I use the results from both Part 1 and Part 2 to obtain some privacy guarantees for composed subsampling queries in general, and for DP-SGD in particular. I also compare these privacy guarantees.&lt;/p&gt;
+&lt;p&gt;&lt;em&gt;If you are confused by any notations, ask me or try &lt;a href="/notations.html"&gt;this&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
+&lt;h2 id="rényi-divergence-and-differential-privacy"&gt;Rényi divergence and differential privacy&lt;/h2&gt;
+&lt;p&gt;Recall in the proof of Gaussian mechanism privacy guarantee (Claim 8) we used the Chernoff bound for the Gaussian noise. Why not use the Chernoff bound for the divergence variable / privacy loss directly, since the latter is closer to the core subject than the noise? This leads us to the study of Rényi divergence.&lt;/p&gt;
+&lt;p&gt;So far we have seen several notions of divergence used in differential privacy: the max divergence which is &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-ind in disguise:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[D_\infty(p || q) := \max_y \log {p(y) \over q(y)},\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;the &lt;span class="math inline"&gt;\(\delta\)&lt;/span&gt;-approximate max divergence that defines the &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-ind:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[D_\infty^\delta(p || q) := \max_y \log{p(y) - \delta \over q(y)},\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;and the KL-divergence which is the expectation of the divergence variable:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[D(p || q) = \mathbb E L(p || q) = \int \log {p(y) \over q(y)} p(y) dy.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;The Rényi divergence is an interpolation between the max divergence and the KL-divergence, defined as the log moment generating function / cumulants of the divergence variable:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[D_\lambda(p || q) = (\lambda - 1)^{-1} \log \mathbb E \exp((\lambda - 1) L(p || q)) = (\lambda - 1)^{-1} \log \int {p(y)^\lambda \over q(y)^{\lambda - 1}} dy.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Indeed, when &lt;span class="math inline"&gt;\(\lambda \to \infty\)&lt;/span&gt; we recover the max divergence, and when &lt;span class="math inline"&gt;\(\lambda \to 1\)&lt;/span&gt;, by recognising &lt;span class="math inline"&gt;\(D_\lambda\)&lt;/span&gt; as a derivative in &lt;span class="math inline"&gt;\(\lambda\)&lt;/span&gt; at &lt;span class="math inline"&gt;\(\lambda = 1\)&lt;/span&gt;, we recover the KL divergence. In this post we only consider &lt;span class="math inline"&gt;\(\lambda &amp;gt; 1\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Using the Rényi divergence we may define:&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Definition (Rényi differential privacy)&lt;/strong&gt; (Mironov 2017). A mechanism &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\lambda, \rho)\)&lt;/span&gt;&lt;em&gt;-Rényi differentially private&lt;/em&gt; (&lt;span class="math inline"&gt;\((\lambda, \rho)\)&lt;/span&gt;-rdp) if for all &lt;span class="math inline"&gt;\(x\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(x&amp;#39;\)&lt;/span&gt; with distance &lt;span class="math inline"&gt;\(1\)&lt;/span&gt;,&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[D_\lambda(M(x) || M(x&amp;#39;)) \le \rho.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;For convenience we also define two related notions, &lt;span class="math inline"&gt;\(G_\lambda (f || g)\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\kappa_{f, g} (t)\)&lt;/span&gt; for &lt;span class="math inline"&gt;\(\lambda &amp;gt; 1\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(t &amp;gt; 0\)&lt;/span&gt; and positive functions &lt;span class="math inline"&gt;\(f\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(g\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[G_\lambda(f || g) = \int f(y)^{\lambda} g(y)^{1 - \lambda} dy; \qquad \kappa_{f, g} (t) = \log G_{t + 1}(f || g).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;For probability densities &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(G_{t + 1}(p || q)\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\kappa_{p, q}(t)\)&lt;/span&gt; are the &lt;span class="math inline"&gt;\(t\)&lt;/span&gt;th moment generating function and cumulant of the divergence variable &lt;span class="math inline"&gt;\(L(p || q)\)&lt;/span&gt;, and&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[D_\lambda(p || q) = (\lambda - 1)^{-1} \kappa_{p, q}(\lambda - 1).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;In the following, whenever you see &lt;span class="math inline"&gt;\(t\)&lt;/span&gt;, think of it as &lt;span class="math inline"&gt;\(\lambda - 1\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Example 1 (RDP for the Gaussian mechanism)&lt;/strong&gt;. Using the scaling and translation invariance of &lt;span class="math inline"&gt;\(L\)&lt;/span&gt; (6.1), we have that the divergence variable for two Gaussians with the same variance is&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[L(N(\mu_1, \sigma^2 I) || N(\mu_2, \sigma^2 I)) \overset{d}{=} L(N(0, I) || N((\mu_2 - \mu_1) / \sigma, I)).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;With this we get&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[D_\lambda(N(\mu_1, \sigma^2 I) || N(\mu_2, \sigma^2 I)) = {\lambda \|\mu_2 - \mu_1\|_2^2 \over 2 \sigma^2} = D_\lambda(N(\mu_2, \sigma^2 I) || N(\mu_1, \sigma^2 I)).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Again due to the scaling invariance of &lt;span class="math inline"&gt;\(L\)&lt;/span&gt;, we only need to consider &lt;span class="math inline"&gt;\(f\)&lt;/span&gt; with sensitivity &lt;span class="math inline"&gt;\(1\)&lt;/span&gt;, see the discussion under (6.1). The Gaussian mechanism on query &lt;span class="math inline"&gt;\(f\)&lt;/span&gt; is thus &lt;span class="math inline"&gt;\((\lambda, \lambda / 2 \sigma^2)\)&lt;/span&gt;-rdp for any &lt;span class="math inline"&gt;\(\lambda &amp;gt; 1\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;From the example of Gaussian mechanism, we see that the relation between &lt;span class="math inline"&gt;\(\lambda\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\rho\)&lt;/span&gt; is like that between &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\delta\)&lt;/span&gt;. Given &lt;span class="math inline"&gt;\(\lambda\)&lt;/span&gt; (resp. &lt;span class="math inline"&gt;\(\rho\)&lt;/span&gt;) and parameters like variance of the noise and the sensitivity of the query, we can write &lt;span class="math inline"&gt;\(\rho = \rho(\lambda)\)&lt;/span&gt; (resp. &lt;span class="math inline"&gt;\(\lambda = \lambda(\rho)\)&lt;/span&gt;).&lt;/p&gt;
+&lt;p&gt;Using the Chernoff bound (6.7), we can bound the divergence variable:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(L(p || q) \ge \epsilon) \le {\mathbb E \exp(t L(p || q)) \over \exp(t \epsilon))} = \exp (\kappa_{p, q}(t) - \epsilon t). \qquad (7.7)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;For a function &lt;span class="math inline"&gt;\(f: I \to \mathbb R\)&lt;/span&gt;, denote its Legendre transform by&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[f^*(\epsilon) := \sup_{t \in I} (\epsilon t - f(t)).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;By taking the infimum over &lt;span class="math inline"&gt;\(t\)&lt;/span&gt; on the RHS of (7.7), we obtain&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 20&lt;/strong&gt;. Two probability densities &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; are &lt;span class="math inline"&gt;\((\epsilon, \exp(-\kappa_{p, q}^*(\epsilon)))\)&lt;/span&gt;-ind.&lt;/p&gt;
+&lt;p&gt;Given a mechanism &lt;span class="math inline"&gt;\(M\)&lt;/span&gt;, let &lt;span class="math inline"&gt;\(\kappa_M(t)\)&lt;/span&gt; denote an upper bound for the cumulant of its privacy loss:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\log \mathbb E \exp(t L(M(x) || M(x&amp;#39;))) \le \kappa_M(t), \qquad \forall x, x&amp;#39;\text{ with } d(x, x&amp;#39;) = 1.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;For example, we can set &lt;span class="math inline"&gt;\(\kappa_M(t) = t \rho(t + 1)\)&lt;/span&gt;. Using the same argument we have the following:&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 21&lt;/strong&gt;. If &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\lambda, \rho)\)&lt;/span&gt;-rdp, then&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;it is also &lt;span class="math inline"&gt;\((\epsilon, \exp((\lambda - 1) (\rho - \epsilon)))\)&lt;/span&gt;-dp for any &lt;span class="math inline"&gt;\(\epsilon \ge \rho\)&lt;/span&gt;.&lt;/li&gt;
+&lt;li&gt;Alternatively, &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\epsilon, \exp(- \kappa_M^*(\epsilon)))\)&lt;/span&gt;-dp for any &lt;span class="math inline"&gt;\(\epsilon &amp;gt; 0\)&lt;/span&gt;.&lt;/li&gt;
+&lt;li&gt;Alternatively, for any &lt;span class="math inline"&gt;\(0 &amp;lt; \delta \le 1\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\rho + (\lambda - 1)^{-1} \log \delta^{-1}, \delta)\)&lt;/span&gt;-dp.&lt;/li&gt;
+&lt;/ol&gt;
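+&lt;p&gt;Item 3 is the form most convenient in practice. As a small illustration (a hypothetical helper, not from any library):&lt;/p&gt;

```python
import math

def rdp_to_dp(lam, rho, delta):
    # Claim 21, item 3: a (lam, rho)-rdp mechanism is
    # (rho + log(1/delta) / (lam - 1), delta)-dp for any 0 < delta <= 1
    assert lam > 1 and 0 < delta <= 1
    return rho + math.log(1 / delta) / (lam - 1)
```

+&lt;p&gt;For the Gaussian mechanism one would plug in &lt;span class="math inline"&gt;\(\rho = \lambda / 2 \sigma^2\)&lt;/span&gt; and minimise the resulting &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt; over &lt;span class="math inline"&gt;\(\lambda\)&lt;/span&gt;.&lt;/p&gt;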
+&lt;p&gt;&lt;strong&gt;Example 2 (Gaussian mechanism)&lt;/strong&gt;. We can apply the above argument to the Gaussian mechanism on query &lt;span class="math inline"&gt;\(f\)&lt;/span&gt; and get:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\delta \le \inf_{\lambda &amp;gt; 1} \exp((\lambda - 1) ({\lambda \over 2 \sigma^2} - \epsilon))\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;By assuming &lt;span class="math inline"&gt;\(\sigma^2 &amp;gt; (2 \epsilon)^{-1}\)&lt;/span&gt; we have that the infimum is achieved when &lt;span class="math inline"&gt;\(\lambda = (1 + 2 \epsilon \sigma^2) / 2\)&lt;/span&gt; and&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\delta \le \exp(- ((2 \sigma)^{-1} - \epsilon \sigma)^2 / 2)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;which is the same result as (6.8), obtained using the Chernoff bound of the noise.&lt;/p&gt;
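+&lt;p&gt;One can check numerically that the Legendre-transform route reproduces this exponent. A sketch (hypothetical helpers, assuming &lt;span class="math inline"&gt;\(\kappa_M(t) = t \rho(t + 1)\)&lt;/span&gt; with &lt;span class="math inline"&gt;\(\rho(\lambda) = \lambda / 2 \sigma^2\)&lt;/span&gt;, and a maximiser inside the search interval):&lt;/p&gt;

```python
import math

def kappa(t, sigma):
    # kappa_M(t) = t * rho(t + 1) with rho(lam) = lam / (2 sigma^2),
    # the cumulant bound of the Gaussian mechanism's privacy loss
    return t * (t + 1) / (2 * sigma ** 2)

def kappa_star(eps, sigma, t_hi=50.0, n=500001):
    # Legendre transform kappa*(eps) = sup_t (eps t - kappa(t)),
    # by grid search; valid when the maximiser lies in [0, t_hi]
    h = t_hi / (n - 1)
    return max(eps * (i * h) - kappa(i * h, sigma) for i in range(n))

def chernoff_exponent(eps, sigma):
    # the exponent in (6.8): ((2 sigma)^{-1} - eps sigma)^2 / 2
    return ((2 * sigma) ** -1 - eps * sigma) ** 2 / 2
```

+&lt;p&gt;The grid search recovers &lt;span class="math inline"&gt;\(\kappa^*(\epsilon) = ((2 \sigma)^{-1} - \epsilon \sigma)^2 / 2\)&lt;/span&gt; to high accuracy when &lt;span class="math inline"&gt;\(\sigma^2 &amp;gt; (2 \epsilon)^{-1}\)&lt;/span&gt;.&lt;/p&gt;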
+&lt;p&gt;However, as we will see later, compositions will yield different results from those obtained from methods in &lt;a href="/posts/2019-03-13-a-tail-of-two-densities.html"&gt;Part 1&lt;/a&gt; when considering Rényi dp.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 22 (Moment Composition Theorem)&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; be the adaptive composition of &lt;span class="math inline"&gt;\(M_{1 : k}\)&lt;/span&gt;. Suppose for any &lt;span class="math inline"&gt;\(y_{&amp;lt; i}\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(M_i(y_{&amp;lt; i})\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\lambda, \rho)\)&lt;/span&gt;-rdp. Then &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\lambda, k\rho)\)&lt;/span&gt;-rdp.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt;. Rather straightforward. As before let &lt;span class="math inline"&gt;\(p_i\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q_i\)&lt;/span&gt; be the conditional laws of the adaptive composition of &lt;span class="math inline"&gt;\(M_{1 : i}\)&lt;/span&gt; at &lt;span class="math inline"&gt;\(x\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(x&amp;#39;\)&lt;/span&gt; respectively, and &lt;span class="math inline"&gt;\(p^i\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q^i\)&lt;/span&gt; be the joint laws of &lt;span class="math inline"&gt;\(M_{1 : i}\)&lt;/span&gt; at &lt;span class="math inline"&gt;\(x\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(x&amp;#39;\)&lt;/span&gt; respectively. Denote&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[D_i = \mathbb E \exp((\lambda - 1)\log {p^i(\xi_{1 : i}) \over q^i(\xi_{1 : i})})\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Then&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+D_i &amp;amp;= \mathbb E\mathbb E \left(\exp((\lambda - 1)\log {p_i(\xi_i | \xi_{&amp;lt; i}) \over q_i(\xi_i | \xi_{&amp;lt; i})}) \exp((\lambda - 1)\log {p^{i - 1}(\xi_{&amp;lt; i}) \over q^{i - 1}(\xi_{&amp;lt; i})}) \big| \xi_{&amp;lt; i}\right) \\
+&amp;amp;= \mathbb E \mathbb E \left(\exp((\lambda - 1)\log {p_i(\xi_i | \xi_{&amp;lt; i}) \over q_i(\xi_i | \xi_{&amp;lt; i})}) | \xi_{&amp;lt; i}\right) \exp\left((\lambda - 1)\log {p^{i - 1}(\xi_{&amp;lt; i}) \over q^{i - 1}(\xi_{&amp;lt; i})}\right)\\
+&amp;amp;\le \mathbb E \exp((\lambda - 1) \rho) \exp\left((\lambda - 1)\log {p^{i - 1}(\xi_{&amp;lt; i}) \over q^{i - 1}(\xi_{&amp;lt; i})}\right)\\
+&amp;amp;= \exp((\lambda - 1) \rho) D_{i - 1}.
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Applying this recursively we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[D_k \le \exp(k(\lambda - 1) \rho),\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;and so&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[(\lambda - 1)^{-1} \log \mathbb E \exp((\lambda - 1)\log {p^k(\xi_{1 : i}) \over q^k(\xi_{1 : i})}) = (\lambda - 1)^{-1} \log D_k \le k \rho.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Since this holds for all &lt;span class="math inline"&gt;\(x\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(x&amp;#39;\)&lt;/span&gt;, we are done. &lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;This, together with the scaling property of the Legendre transform:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[(k f)^*(x) = k f^*(x / k)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;yields&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 23&lt;/strong&gt;. The &lt;span class="math inline"&gt;\(k\)&lt;/span&gt;-fold adaptive composition of &lt;span class="math inline"&gt;\((\lambda, \rho(\lambda))\)&lt;/span&gt;-rdp mechanisms is &lt;span class="math inline"&gt;\((\epsilon, \exp(- k \kappa^*(\epsilon / k)))\)&lt;/span&gt;-dp, where &lt;span class="math inline"&gt;\(\kappa(t) := t \rho(t + 1)\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Example 3 (Gaussian mechanism)&lt;/strong&gt;. We can apply the above claim to the Gaussian mechanism, but let us instead do the computation manually and recover the same result. Again, without loss of generality we assume &lt;span class="math inline"&gt;\(S_f = 1\)&lt;/span&gt;. If we apply the Moment Composition Theorem to an adaptive composition of Gaussian mechanisms on the same query, then since each &lt;span class="math inline"&gt;\(M_i\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\lambda, (2 \sigma^2)^{-1} \lambda)\)&lt;/span&gt;-rdp, the composition &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\lambda, (2 \sigma^2)^{-1} k \lambda)\)&lt;/span&gt;-rdp. Processing this using the Chernoff bound as in the previous example, we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\delta = \exp(- ((2 \sigma / \sqrt k)^{-1} - \epsilon \sigma / \sqrt k)^2 / 2),\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Substituting &lt;span class="math inline"&gt;\(\sigma\)&lt;/span&gt; with &lt;span class="math inline"&gt;\(\sigma / \sqrt k\)&lt;/span&gt; in (6.81), we conclude that if&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sigma &amp;gt; \sqrt k \left(\epsilon^{-1} \sqrt{2 \log \delta^{-1}} + (2 \epsilon)^{- {1 \over 2}}\right)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;then the composition &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp.&lt;/p&gt;
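+&lt;p&gt;As a small illustration, the condition above translates directly into a noise calibration helper (hypothetical, assuming &lt;span class="math inline"&gt;\(S_f = 1\)&lt;/span&gt;):&lt;/p&gt;

```python
import math

def sigma_threshold(k, eps, delta):
    # noise scale sufficient for the k-fold composed Gaussian mechanism
    # on a sensitivity-1 query to be (eps, delta)-dp, per the bound above
    return math.sqrt(k) * (math.sqrt(2 * math.log(1 / delta)) / eps
                           + 1 / math.sqrt(2 * eps))
```

+&lt;p&gt;The required noise level grows as &lt;span class="math inline"&gt;\(\sqrt k\)&lt;/span&gt; with the number of compositions.&lt;/p&gt;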
+&lt;p&gt;As we will see in the discussions at the end of this post, this result is different from (and probably better than) the one obtained by using the Advanced Composition Theorem (Claim 18).&lt;/p&gt;
+&lt;p&gt;We also have a subsampling theorem for the Rényi dp.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 24&lt;/strong&gt;. Fix &lt;span class="math inline"&gt;\(r \in [0, 1]\)&lt;/span&gt;. Let &lt;span class="math inline"&gt;\(m \le n\)&lt;/span&gt; be two nonnegative integers with &lt;span class="math inline"&gt;\(m = r n\)&lt;/span&gt;. Let &lt;span class="math inline"&gt;\(N\)&lt;/span&gt; be a &lt;span class="math inline"&gt;\((\lambda, \rho)\)&lt;/span&gt;-rdp mechanism on &lt;span class="math inline"&gt;\(X^m\)&lt;/span&gt;. Let &lt;span class="math inline"&gt;\(\mathcal I := \{J \subset [n]: |J| = m\}\)&lt;/span&gt; be the set of subsets of &lt;span class="math inline"&gt;\([n]\)&lt;/span&gt; of size &lt;span class="math inline"&gt;\(m\)&lt;/span&gt;. Define mechanism &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; on &lt;span class="math inline"&gt;\(X^n\)&lt;/span&gt; by&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[M(x) = N(x_\gamma)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(\gamma\)&lt;/span&gt; is sampled uniformly from &lt;span class="math inline"&gt;\(\mathcal I\)&lt;/span&gt;. Then &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\lambda, {1 \over \lambda - 1} \log (1 + r(e^{(\lambda - 1) \rho} - 1)))\)&lt;/span&gt;-rdp.&lt;/p&gt;
+&lt;p&gt;To prove Claim 24, we need a useful lemma:&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 25&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(p_{1 : n}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q_{1 : n}\)&lt;/span&gt; be nonnegative reals, and &lt;span class="math inline"&gt;\(\lambda &amp;gt; 1\)&lt;/span&gt;. Then&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[{(\sum p_i)^\lambda \over (\sum q_i)^{\lambda - 1}} \le \sum_i {p_i^\lambda \over q_i^{\lambda - 1}}. \qquad (8)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt;. Let&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[r(i) := p_i / P, \qquad u(i) := q_i / Q\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[P := \sum p_i, \qquad Q := \sum q_i\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;then &lt;span class="math inline"&gt;\(r\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(u\)&lt;/span&gt; are probability mass functions. Plugging in &lt;span class="math inline"&gt;\(p_i = r(i) P\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q_i = u(i) Q\)&lt;/span&gt; into the objective (8), it suffices to show&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[1 \le \sum_i {r(i)^\lambda \over u(i)^{\lambda - 1}} = \mathbb E_{\xi \sim u} \left({r(\xi) \over u(\xi)}\right)^\lambda\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;This is true due to Jensen's Inequality:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb E_{\xi \sim u} \left({r(\xi) \over u(\xi)}\right)^\lambda \ge \left(\mathbb E_{\xi \sim u} {r(\xi) \over u(\xi)} \right)^\lambda = 1.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Proof of Claim 24&lt;/strong&gt;. Define &lt;span class="math inline"&gt;\(\mathcal I\)&lt;/span&gt; as before.&lt;/p&gt;
+&lt;p&gt;Let &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; be the laws of &lt;span class="math inline"&gt;\(M(x)\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(M(x&amp;#39;)\)&lt;/span&gt; respectively. For any &lt;span class="math inline"&gt;\(I \in \mathcal I\)&lt;/span&gt;, let &lt;span class="math inline"&gt;\(p_I\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q_I\)&lt;/span&gt; be the laws of &lt;span class="math inline"&gt;\(N(x_I)\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(N(x_I&amp;#39;)\)&lt;/span&gt; respectively. Then we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+p(y) &amp;amp;= n^{-1} \sum_{I \in \mathcal I} p_I(y) \\
+q(y) &amp;amp;= n^{-1} \sum_{I \in \mathcal I} q_I(y),
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(n = |\mathcal I|\)&lt;/span&gt; (overloading the symbol &lt;span class="math inline"&gt;\(n\)&lt;/span&gt; for the rest of this proof).&lt;/p&gt;
+&lt;p&gt;The MGF of &lt;span class="math inline"&gt;\(L(p || q)\)&lt;/span&gt; is thus&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb E((\lambda - 1) L(p || q)) = n^{-1} \int {(\sum_I p_I(y))^\lambda \over (\sum_I q_I(y))^{\lambda - 1}} dy \le n^{-1} \sum_I \int {p_I(y)^\lambda \over q_I(y)^{\lambda - 1}} dy \qquad (9)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where in the last step we used Claim 25. As in the proof of Claim 19, we divide &lt;span class="math inline"&gt;\(\mathcal I\)&lt;/span&gt; into disjoint sets &lt;span class="math inline"&gt;\(\mathcal I_\in\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\mathcal I_\notin\)&lt;/span&gt;. Furthermore we denote by &lt;span class="math inline"&gt;\(n_\in\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(n_\notin\)&lt;/span&gt; their cardinalities. Then the right hand side of (9) becomes&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[n^{-1} \sum_{I \in \mathcal I_\in} \int {p_I(y)^\lambda \over q_I(y)^{\lambda - 1}} dy + n^{-1} \sum_{I \in \mathcal I_\notin} \int {p_I(y)^\lambda \over q_I(y)^{\lambda - 1}} dy\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;The summands in the first term are the MGFs of &lt;span class="math inline"&gt;\(L(p_I || q_I)\)&lt;/span&gt; evaluated at &lt;span class="math inline"&gt;\(\lambda - 1\)&lt;/span&gt;, and the summands in the second term are &lt;span class="math inline"&gt;\(1\)&lt;/span&gt;, so&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\mathbb E \exp((\lambda - 1) L(p || q)) &amp;amp;\le n^{-1} \sum_{I \in \mathcal I_\in} \mathbb E \exp((\lambda - 1) L(p_I || q_I)) + (1 - r) \\
+&amp;amp;\le n^{-1} \sum_{I \in \mathcal I_\in} \exp((\lambda - 1) D_\lambda(p_I || q_I)) + (1 - r) \\
+&amp;amp;\le r \exp((\lambda - 1) \rho) + (1 - r).
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Taking log and dividing by &lt;span class="math inline"&gt;\((\lambda - 1)\)&lt;/span&gt; on both sides we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[D_\lambda(p || q) \le (\lambda - 1)^{-1} \log (1 + r(\exp((\lambda - 1) \rho) - 1)).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;As before, we can rewrite the conclusion of Claim 24 using &lt;span class="math inline"&gt;\(1 + z \le e^z\)&lt;/span&gt; and obtain &lt;span class="math inline"&gt;\((\lambda, (\lambda - 1)^{-1} r (e^{(\lambda - 1) \rho} - 1))\)&lt;/span&gt;-rdp, which further gives &lt;span class="math inline"&gt;\((\lambda, \alpha^{-1} (e^\alpha - 1) r \rho)\)&lt;/span&gt;-rdp (or &lt;span class="math inline"&gt;\((\lambda, O(r \rho))\)&lt;/span&gt;-rdp) if &lt;span class="math inline"&gt;\((\lambda - 1) \rho &amp;lt; \alpha\)&lt;/span&gt; for some &lt;span class="math inline"&gt;\(\alpha\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;It is not hard to see that the subsampling theorem for the moment method, though similar to its counterpart for the usual method, does not help as much, because the moment method lacks an analogue of the advanced composition theorem.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Example 4 (Gaussian mechanism)&lt;/strong&gt;. Applying the moment subsampling theorem to the Gaussian mechanism, we obtain &lt;span class="math inline"&gt;\((\lambda, O(r \lambda / \sigma^2))\)&lt;/span&gt;-rdp for a subsampled Gaussian mechanism with rate &lt;span class="math inline"&gt;\(r\)&lt;/span&gt;. Abadi-Chu-Goodfellow-McMahan-Mironov-Talwar-Zhang 2016 (ACGMMTZ16 in the following), however, gains an extra &lt;span class="math inline"&gt;\(r\)&lt;/span&gt; in the bound given certain assumptions.&lt;/p&gt;
+&lt;h2 id="acgmmtz16"&gt;ACGMMTZ16&lt;/h2&gt;
+&lt;p&gt;What follows is my understanding of this result. I call part of it a conjecture because there is a gap in the proof: I am unable to reproduce the authors&amp;#39; argument or to prove it myself. This does not mean the result is false. On the contrary, I am inclined to believe it is true.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 26&lt;/strong&gt;. Assuming Conjecture 1 (see below) is true, for a subsampled Gaussian mechanism with ratio &lt;span class="math inline"&gt;\(r\)&lt;/span&gt;, if &lt;span class="math inline"&gt;\(r = O(\sigma^{-1})\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\lambda = O(\sigma^2)\)&lt;/span&gt;, then we have &lt;span class="math inline"&gt;\((\lambda, O(r^2 \lambda / \sigma^2))\)&lt;/span&gt;-rdp.&lt;/p&gt;
+&lt;p&gt;Wait, why is there a conjecture? Well, I have tried but not been able to prove the following, which is a hidden assumption in the original proof:&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Conjecture 1&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(p_i\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(q_i\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(\mu_i\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(\nu_i\)&lt;/span&gt; be probability densities on the same space for &lt;span class="math inline"&gt;\(i = 1 : n\)&lt;/span&gt;. If &lt;span class="math inline"&gt;\(D_\lambda(p_i || q_i) \le D_\lambda(\mu_i || \nu_i)\)&lt;/span&gt; for all &lt;span class="math inline"&gt;\(i\)&lt;/span&gt;, then&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[D_\lambda(n^{-1} \sum_i p_i || n^{-1} \sum_i q_i) \le D_\lambda(n^{-1} \sum_i \mu_i || n^{-1} \sum_i \nu_i).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Basically, it is saying "if for each &lt;span class="math inline"&gt;\(i\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(p_i\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q_i\)&lt;/span&gt; are closer to each other than &lt;span class="math inline"&gt;\(\mu_i\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\nu_i\)&lt;/span&gt;, then so are their averages over &lt;span class="math inline"&gt;\(i\)&lt;/span&gt;". So it is heuristically reasonable.&lt;/p&gt;
+&lt;p&gt;This conjecture is equivalent to its special case when &lt;span class="math inline"&gt;\(n = 2\)&lt;/span&gt; by an induction argument (replacing one pair of densities at a time).&lt;/p&gt;
+&lt;p&gt;Recall the definition of &lt;span class="math inline"&gt;\(G_\lambda\)&lt;/span&gt; under the definition of Rényi differential privacy. The following Claim will be useful.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 27&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(\lambda\)&lt;/span&gt; be a positive integer, then&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[G_\lambda(r p + (1 - r) q || q) = \sum_{k = 1 : \lambda} {\lambda \choose k} r^k (1 - r)^{\lambda - k} G_k(p || q).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt;. Quite straightforward, by expanding the numerator &lt;span class="math inline"&gt;\((r p + (1 - r) q)^\lambda\)&lt;/span&gt; using binomial expansion. &lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
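+&lt;p&gt;Claim 27 can be verified numerically on discrete distributions, for which &lt;span class="math inline"&gt;\(G_\lambda(a || b) = \sum_x a(x)^\lambda b(x)^{1 - \lambda}\)&lt;/span&gt; — again a sanity check rather than a proof:&lt;/p&gt;

```python
import math
import random

def G(a, b, lam):
    # discrete analogue of G_lambda(a || b) = sum_x a(x)^lam b(x)^(1 - lam)
    return sum(ai ** lam * bi ** (1 - lam) for ai, bi in zip(a, b))

def normalize(w):
    s = sum(w)
    return [x / s for x in w]

random.seed(0)
p = normalize([random.uniform(0.1, 1.0) for _ in range(6)])
q = normalize([random.uniform(0.1, 1.0) for _ in range(6)])
r, lam = 0.3, 5  # the expansion needs lam to be a positive integer

mix = [r * pi + (1 - r) * qi for pi, qi in zip(p, q)]
lhs = G(mix, q, lam)
rhs = sum(math.comb(lam, k) * r ** k * (1 - r) ** (lam - k) * G(p, q, k)
          for k in range(lam + 1))
```

+&lt;p&gt;Note the &lt;span class="math inline"&gt;\(k = 0\)&lt;/span&gt; term contributes &lt;span class="math inline"&gt;\(G_0(p || q) = 1\)&lt;/span&gt;, which is why the sum starts at &lt;span class="math inline"&gt;\(0\)&lt;/span&gt;.&lt;/p&gt;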
+&lt;p&gt;&lt;strong&gt;Proof of Claim 26&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; be the Gaussian mechanism with subsampling rate &lt;span class="math inline"&gt;\(r\)&lt;/span&gt;, and &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; be the laws of &lt;span class="math inline"&gt;\(M(x)\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(M(x&amp;#39;)\)&lt;/span&gt; respectively, where &lt;span class="math inline"&gt;\(d(x, x&amp;#39;) = 1\)&lt;/span&gt;. I will break the proof into two parts:&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;The MGF of the privacy loss &lt;span class="math inline"&gt;\(L(p || q)\)&lt;/span&gt; is bounded by that of &lt;span class="math inline"&gt;\(L(r \mu_1 + (1 - r) \mu_0 || \mu_0)\)&lt;/span&gt; where &lt;span class="math inline"&gt;\(\mu_i = N(i, \sigma^2)\)&lt;/span&gt;.&lt;/li&gt;
+&lt;li&gt;If &lt;span class="math inline"&gt;\(r \le c_1 \sigma^{-1}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\lambda \le c_2 \sigma^2\)&lt;/span&gt;, then there exists &lt;span class="math inline"&gt;\(C = C(c_1, c_2)\)&lt;/span&gt; such that &lt;span class="math inline"&gt;\(G_\lambda (r \mu_1 + (1 - r) \mu_0 || \mu_0) \le C\)&lt;/span&gt; (since &lt;span class="math inline"&gt;\(O(r^2 \lambda^2 / \sigma^2) = O(1)\)&lt;/span&gt;).&lt;/li&gt;
+&lt;/ol&gt;
+&lt;p&gt;&lt;strong&gt;Remark in the proof&lt;/strong&gt;. Note that the choice of &lt;span class="math inline"&gt;\(c_1\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(c_2\)&lt;/span&gt; and the function &lt;span class="math inline"&gt;\(C(c_1, c_2)\)&lt;/span&gt; is important to the practicality and usefulness of Claim 26.&lt;/p&gt;
+&lt;p&gt;Part 1 can be derived using Conjecture 1. We use the notations &lt;span class="math inline"&gt;\(p_I\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q_I\)&lt;/span&gt; for &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; conditioned on the subsampling index &lt;span class="math inline"&gt;\(I\)&lt;/span&gt;, just like in the proofs of the subsampling theorems (Claims 19 and 24). Then&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[D_\lambda(q_I || p_I) = D_\lambda(p_I || q_I)
+\begin{cases}
+\le D_\lambda(\mu_0 || \mu_1) = D_\lambda(\mu_1 || \mu_0), &amp;amp; I \in \mathcal I_\in\\
+= D_\lambda(\mu_0 || \mu_0) = D_\lambda(\mu_1 || \mu_1) = 0 &amp;amp; I \in \mathcal I_\notin
+\end{cases}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Since &lt;span class="math inline"&gt;\(p = |\mathcal I|^{-1} \sum_{I \in \mathcal I} p_I\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q = |\mathcal I|^{-1} \sum_{I \in \mathcal I} q_I\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(|\mathcal I_\in| = r |\mathcal I|\)&lt;/span&gt;, by Conjecture 1, we have Part 1.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Remark in the proof&lt;/strong&gt;. As we can see here, instead of trying to prove Conjecture 1, it suffices to prove a weaker version of it, by specialising it to mixtures of Gaussians, in order to have a Claim 26 without any conjectural assumptions. I have in fact posted the conjecture on &lt;a href="https://math.stackexchange.com/questions/3147963/an-inequality-related-to-the-renyi-divergence"&gt;Stackexchange&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;Now let us verify Part 2.&lt;/p&gt;
+&lt;p&gt;Using Claim 27 and Example 1, we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+G_\lambda(r \mu_1 + (1 - r) \mu_0 || \mu_0) &amp;amp;= \sum_{j = 0 : \lambda} {\lambda \choose j} r^j (1 - r)^{\lambda - j} G_j(\mu_1 || \mu_0)\\
+&amp;amp;=\sum_{j = 0 : \lambda} {\lambda \choose j} r^j (1 - r)^{\lambda - j} \exp(j (j - 1) / 2 \sigma^2). \qquad (9.5)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Write &lt;span class="math inline"&gt;\(n = \lceil \sigma^2 \rceil\)&lt;/span&gt;. It suffices to show&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sum_{j = 0 : n} {n \choose j} (c_1 n^{- 1 / 2})^j (1 - c_1 n^{- 1 / 2})^{n - j} \exp(c_2 j (j - 1) / 2 n) \le C\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Note that we can discard the negative linear term &lt;span class="math inline"&gt;\(- c_2 j / 2 n\)&lt;/span&gt; in the exponent, since we want to bound the sum from above.&lt;/p&gt;
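+&lt;p&gt;The sum can also be evaluated directly for moderate &lt;span class="math inline"&gt;\(n\)&lt;/span&gt;, computing each summand in log space to avoid overflowing the binomial coefficients. With &lt;span class="math inline"&gt;\(c_1 = c_2 = 1\)&lt;/span&gt; it appears to stay below a small constant, consistent with the existence of such a &lt;span class="math inline"&gt;\(C\)&lt;/span&gt; (a numerical probe, not a proof):&lt;/p&gt;

```python
import math

def log_binom(n, j):
    return math.lgamma(n + 1) - math.lgamma(j + 1) - math.lgamma(n - j + 1)

def moment_sum(n, c1=1.0, c2=1.0):
    # sum_{j=0}^{n} C(n, j) (c1 / sqrt(n))^j (1 - c1 / sqrt(n))^(n - j)
    #               * exp(c2 j (j - 1) / (2 n)), summand by summand in log space
    r = c1 / math.sqrt(n)
    total = 0.0
    for j in range(n + 1):
        log_term = (log_binom(n, j) + j * math.log(r)
                    + (n - j) * math.log1p(-r)
                    + c2 * j * (j - 1) / (2 * n))
        total += math.exp(log_term)
    return total

values = {n: moment_sum(n) for n in (100, 400, 1600, 6400)}
```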
+&lt;p&gt;We examine the asymptotics of this sum when &lt;span class="math inline"&gt;\(n\)&lt;/span&gt; is large, and treat the sum as an approximation to the integral of a function &lt;span class="math inline"&gt;\(\phi: [0, 1] \to \mathbb R\)&lt;/span&gt;. For &lt;span class="math inline"&gt;\(j = x n\)&lt;/span&gt;, where &lt;span class="math inline"&gt;\(x \in (0, 1)\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(\phi\)&lt;/span&gt; is thus defined as follows (note that we multiply the summand by &lt;span class="math inline"&gt;\(n\)&lt;/span&gt; to compensate for the &lt;span class="math inline"&gt;\(1 / n\)&lt;/span&gt; spacing of the grid points):&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\phi_n(x) &amp;amp;:= n {n \choose j} (c_1 n^{- 1 / 2})^j (1 - c_1 n^{- 1 / 2})^{n - j} \exp(c_2 j^2 / 2 n) \\
+&amp;amp;= n {n \choose x n} (c_1 n^{- 1 / 2})^{x n} (1 - c_1 n^{- 1 / 2})^{(1 - x) n} \exp(c_2 x^2 n / 2)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Using Stirling's approximation&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[n! \approx \sqrt{2 \pi n} n^n e^{- n},\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;we can approximate the binomial coefficient:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[{n \choose x n} \approx (\sqrt{2 \pi x (1 - x)} x^{x n} (1 - x)^{(1 - x) n})^{-1}.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;We also approximate&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[(1 - c_1 n^{- 1 / 2})^{(1 - x) n} \approx \exp(- c_1 \sqrt{n} (1 - x)).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;With these we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\phi_n(x) \approx {1 \over \sqrt{2 \pi x (1 - x)}} \exp\left(- {1 \over 2} x n \log n + (x \log c_1 - x \log x - (1 - x) \log (1 - x) + {1 \over 2} c_2 x^2) n + {1 \over 2} \log n\right).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;This vanishes as &lt;span class="math inline"&gt;\(n \to \infty\)&lt;/span&gt;, and since &lt;span class="math inline"&gt;\(\phi_n(x)\)&lt;/span&gt; is bounded above by the integrable function &lt;span class="math inline"&gt;\({1 \over \sqrt{2 \pi x (1 - x)}}\)&lt;/span&gt; (cf. the arcsine law), and below by &lt;span class="math inline"&gt;\(0\)&lt;/span&gt;, we may invoke the dominated convergence theorem and exchange the integral with the limit and get&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\lim_{n \to \infty} &amp;amp;G_n (r \mu_1 + (1 - r) \mu_0 || \mu_0) \\
+&amp;amp;\le \lim_{n \to \infty} \int \phi_n(x) dx = \int \lim_{n \to \infty} \phi_n(x) dx = 0.
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Thus we have that the generating function of the divergence variable &lt;span class="math inline"&gt;\(L(r \mu_1 + (1 - r) \mu_0 || \mu_0)\)&lt;/span&gt; is bounded.&lt;/p&gt;
+&lt;p&gt;Can this be true for better orders&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[r \le c_1 \sigma^{- d_r},\qquad \lambda \le c_2 \sigma^{d_\lambda}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;for some &lt;span class="math inline"&gt;\(d_r \in (0, 1]\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(d_\lambda \in [2, \infty)\)&lt;/span&gt;? If we follow the same approximation using these exponents, then letting &lt;span class="math inline"&gt;\(n = c_2 \sigma^{d_\lambda}\)&lt;/span&gt;,&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+{n \choose j} &amp;amp;r^j (1 - r)^{n - j} G_j(\mu_0 || \mu_1) \le \phi_n(x) \\
+&amp;amp;\approx {1 \over \sqrt{2 \pi x (1 - x)}} \exp\left({1 \over 2} c_2^{2 \over d_\lambda} x^2 n^{2 - {2 \over d_\lambda}} - {d_r \over 2} x n \log n + (x \log c_1 - x \log x - (1 - x) \log (1 - x)) n + {1 \over 2} \log n\right).
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;So we see that to keep the divergence moments bounded it is possible to have any &lt;span class="math inline"&gt;\(r = O(\sigma^{- d_r})\)&lt;/span&gt; for &lt;span class="math inline"&gt;\(d_r \in (0, 1)\)&lt;/span&gt;, but relaxing &lt;span class="math inline"&gt;\(\lambda\)&lt;/span&gt; may not be safe.&lt;/p&gt;
+&lt;p&gt;If we relax &lt;span class="math inline"&gt;\(r\)&lt;/span&gt;, then we get&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[G_\lambda(r \mu_1 + (1 - r) \mu_0 || \mu_0) = O(r^{2 / d_r} \lambda^2 \sigma^{-2}) = O(1).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Note that now the constant &lt;span class="math inline"&gt;\(C\)&lt;/span&gt; depends on &lt;span class="math inline"&gt;\(d_r\)&lt;/span&gt; as well. Numerical experiments seem to suggest that &lt;span class="math inline"&gt;\(C\)&lt;/span&gt; can increase quite rapidly as &lt;span class="math inline"&gt;\(d_r\)&lt;/span&gt; decreases from &lt;span class="math inline"&gt;\(1\)&lt;/span&gt;. &lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;In the following for consistency we retain &lt;span class="math inline"&gt;\(k\)&lt;/span&gt; as the number of epochs, and use &lt;span class="math inline"&gt;\(T := k / r\)&lt;/span&gt; to denote the number of compositions / steps / minibatches. With Claim 26 we have:&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 28&lt;/strong&gt;. Assuming Conjecture 1 is true, let &lt;span class="math inline"&gt;\(\epsilon, c_1, c_2 &amp;gt; 0\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(r \le c_1 \sigma^{-1}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(T = {c_2 \over 2 C(c_1, c_2)} \epsilon \sigma^2\)&lt;/span&gt;. Then DP-SGD with subsampling rate &lt;span class="math inline"&gt;\(r\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(T\)&lt;/span&gt; steps is &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp for&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\delta = \exp(- {1 \over 2} c_2 \sigma^2 \epsilon).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;In other words, for&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sigma \ge \sqrt{2 c_2^{-1}} \epsilon^{- {1 \over 2}} \sqrt{\log \delta^{-1}},\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;we can achieve &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt;. By Claim 26 and the Moment Composition Theorem (Claim 22), for &lt;span class="math inline"&gt;\(\lambda = c_2 \sigma^2\)&lt;/span&gt;, substituting &lt;span class="math inline"&gt;\(T = {c_2 \over 2 C(c_1, c_2)} \epsilon \sigma^2\)&lt;/span&gt;, we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(L(p || q) \ge \epsilon) \le \exp(k C(c_1, c_2) - \lambda \epsilon) = \exp\left(- {1 \over 2} c_2 \sigma^2 \epsilon\right).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Remark&lt;/strong&gt;. Claim 28 is my understanding / version of Theorem 1 in [ACGMMTZ16], obtained using the same proof technique. Here I quote the original version of the theorem with notions and notations altered for consistency with this post:&lt;/p&gt;
+&lt;blockquote&gt;
+&lt;p&gt;There exist constants &lt;span class="math inline"&gt;\(c_1&amp;#39;, c_2&amp;#39; &amp;gt; 0\)&lt;/span&gt; so that for any &lt;span class="math inline"&gt;\(\epsilon &amp;lt; c_1&amp;#39; r^2 T\)&lt;/span&gt;, DP-SGD is &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-differentially private for any &lt;span class="math inline"&gt;\(\delta &amp;gt; 0\)&lt;/span&gt; if we choose&lt;/p&gt;
+&lt;/blockquote&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sigma \ge c_2&amp;#39; {r \sqrt{T \log (1 / \delta)} \over \epsilon}. \qquad (10)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;I am, however, unable to reproduce this version, even assuming Conjecture 1 is true, for the following reasons:&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;&lt;p&gt;In the proof in the paper, we have &lt;span class="math inline"&gt;\(\epsilon = c_1&amp;#39; r^2 T\)&lt;/span&gt; instead of "less than" in the statement of the Theorem. If we change it to &lt;span class="math inline"&gt;\(\epsilon &amp;lt; c_1&amp;#39; r^2 T\)&lt;/span&gt; then the direction of the inequality becomes opposite to the direction we want to prove: &lt;span class="math display"&gt;\[\exp(k C(c_1, c_2) - \lambda \epsilon) \ge ...\]&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;The implicit condition &lt;span class="math inline"&gt;\(r = O(\sigma^{-1})\)&lt;/span&gt; of Conjecture 1, whose result is used in the proof of this theorem, is not mentioned in the statement of the theorem. The implication is that (10) becomes an ill-formed condition, as the right hand side also depends on &lt;span class="math inline"&gt;\(\sigma\)&lt;/span&gt;.&lt;/p&gt;&lt;/li&gt;
+&lt;/ol&gt;
+&lt;h2 id="tensorflow-implementation"&gt;Tensorflow implementation&lt;/h2&gt;
+&lt;p&gt;The DP-SGD is implemented in &lt;a href="https://github.com/tensorflow/privacy"&gt;TensorFlow Privacy&lt;/a&gt;. In the following I discuss the package in the current state (2019-03-11). It is divided into two parts: &lt;a href="https://github.com/tensorflow/privacy/tree/master/privacy/optimizers"&gt;&lt;code&gt;optimizers&lt;/code&gt;&lt;/a&gt; which implements the actual differentially private algorithms, and &lt;a href="https://github.com/tensorflow/privacy/tree/master/privacy/analysis"&gt;&lt;code&gt;analysis&lt;/code&gt;&lt;/a&gt; which computes the privacy guarantee.&lt;/p&gt;
+&lt;p&gt;The &lt;code&gt;analysis&lt;/code&gt; part implements a privacy ledger that "keeps a record of all queries executed over a given dataset for the purpose of computing privacy guarantees". On the other hand, all the computation is done in &lt;a href="https://github.com/tensorflow/privacy/blob/7e2d796bdee9b60dce21a82a397eefda35b0ac10/privacy/analysis/rdp_accountant.py"&gt;&lt;code&gt;rdp_accountant.py&lt;/code&gt;&lt;/a&gt;. At this moment, &lt;code&gt;rdp_accountant.py&lt;/code&gt; only implements the computation of the privacy guarantees for DP-SGD with Gaussian mechanism. In the following I will briefly explain the code in this file.&lt;/p&gt;
+&lt;p&gt;Some notational correspondences: their &lt;code&gt;alpha&lt;/code&gt; is our &lt;span class="math inline"&gt;\(\lambda\)&lt;/span&gt;, their &lt;code&gt;q&lt;/code&gt; is our &lt;span class="math inline"&gt;\(r\)&lt;/span&gt;, their &lt;code&gt;A_alpha&lt;/code&gt; (in the comments) is our &lt;span class="math inline"&gt;\(\kappa_{r N(1, \sigma^2) + (1 - r) N(0, \sigma^2)} (\lambda - 1)\)&lt;/span&gt;, at least when &lt;span class="math inline"&gt;\(\lambda\)&lt;/span&gt; is an integer.&lt;/p&gt;
+&lt;ul&gt;
+&lt;li&gt;The function &lt;code&gt;_compute_log_a&lt;/code&gt; presumably computes the cumulants &lt;span class="math inline"&gt;\(\kappa_{r N(1, \sigma^2) + (1 - r) N(0, \sigma^2), N(0, \sigma^2)}(\lambda - 1)\)&lt;/span&gt;. It calls &lt;code&gt;_compute_log_a_int&lt;/code&gt; or &lt;code&gt;_compute_log_a_frac&lt;/code&gt; depending on whether &lt;span class="math inline"&gt;\(\lambda\)&lt;/span&gt; is an integer.&lt;/li&gt;
+&lt;li&gt;The function &lt;code&gt;_compute_log_a_int&lt;/code&gt; computes the cumulant using (9.5).&lt;/li&gt;
+&lt;li&gt;When &lt;span class="math inline"&gt;\(\lambda\)&lt;/span&gt; is not an integer, we can't use (9.5). I have yet to decode how &lt;code&gt;_compute_log_a_frac&lt;/code&gt; computes the cumulant (or an upper bound of it) in this case&lt;/li&gt;
+&lt;li&gt;The function &lt;code&gt;_compute_delta&lt;/code&gt; computes &lt;span class="math inline"&gt;\(\delta\)&lt;/span&gt;s for a list of &lt;span class="math inline"&gt;\(\lambda\)&lt;/span&gt;s and &lt;span class="math inline"&gt;\(\kappa\)&lt;/span&gt;s using Item 1 of Claim 25 and returns the smallest one, and the function &lt;code&gt;_compute_epsilon&lt;/code&gt; computes &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt; using Item 3 of Claim 25 in the same way.&lt;/li&gt;
+&lt;/ul&gt;
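+&lt;p&gt;For illustration, here is a minimal sketch of the binomial-expansion computation that &lt;code&gt;_compute_log_a_int&lt;/code&gt; presumably performs for integer &lt;span class="math inline"&gt;\(\lambda\)&lt;/span&gt;. This is my own reconstruction from the mixture form of the subsampled Gaussian, not the actual library code, which works in log space with far more attention to numerical stability:&lt;/p&gt;

```python
import math

def log_a_int(q, sigma, alpha):
    """log E_{z ~ p_0} [((q p_1(z) + (1 - q) p_0(z)) / p_0(z))^alpha]
    for p_0 = N(0, sigma^2), p_1 = N(1, sigma^2) and integer alpha, via the
    binomial expansion: each term is a Gaussian moment in closed form,
    E_{p_0}[(p_1 / p_0)^i] = exp(i (i - 1) / (2 sigma^2))."""
    total = sum(math.comb(alpha, i) * q ** i * (1 - q) ** (alpha - i)
                * math.exp(i * (i - 1) / (2 * sigma ** 2))
                for i in range(alpha + 1))
    return math.log(total)

# sanity checks on the degenerate mixtures, where the moment is known exactly:
# q = 0 gives 0, and q = 1 gives the pure Gaussian moment alpha(alpha-1)/(2 sigma^2)
assert abs(log_a_int(0.0, 2.0, 5)) < 1e-12
assert abs(log_a_int(1.0, 2.0, 5) - 5 * 4 / (2 * 2.0 ** 2)) < 1e-9
```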
+&lt;p&gt;In &lt;code&gt;optimizers&lt;/code&gt;, among other things, the DP-SGD with Gaussian mechanism is implemented in &lt;code&gt;dp_optimizer.py&lt;/code&gt; and &lt;code&gt;gaussian_query.py&lt;/code&gt;. See the definition of &lt;code&gt;DPGradientDescentGaussianOptimizer&lt;/code&gt; in &lt;code&gt;dp_optimizer.py&lt;/code&gt; and trace the calls therein.&lt;/p&gt;
+&lt;p&gt;At this moment, the privacy guarantee computation part and the optimizer part are separate, with &lt;code&gt;rdp_accountant.py&lt;/code&gt; called in &lt;code&gt;compute_dp_sgd_privacy.py&lt;/code&gt; with user-supplied parameters. I think this is because &lt;code&gt;rdp_accountant.py&lt;/code&gt; does not yet implement privacy guarantee computations for anything other than DP-SGD with the Gaussian mechanism. There is already &lt;a href="https://github.com/tensorflow/privacy/issues/23"&gt;an issue on this&lt;/a&gt;, so hopefully it won't be long before the privacy guarantees can be automatically computed given a DP-SGD instance.&lt;/p&gt;
+&lt;h2 id="comparison-among-different-methods"&gt;Comparison among different methods&lt;/h2&gt;
+&lt;p&gt;So far we have seen three routes to compute the privacy guarantees for DP-SGD with the Gaussian mechanism:&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;Claim 9 (single Gaussian mechanism privacy guarantee) -&amp;gt; Claim 19 (Subsampling theorem) -&amp;gt; Claim 18 (Advanced Adaptive Composition Theorem)&lt;/li&gt;
+&lt;li&gt;Example 1 (RDP for the Gaussian mechanism) -&amp;gt; Claim 22 (Moment Composition Theorem) -&amp;gt; Example 3 (Moment composition applied to the Gaussian mechanism)&lt;/li&gt;
+&lt;li&gt;Claim 26 (RDP for Gaussian mechanism with specific magnitudes for subsampling rate) -&amp;gt; Claim 28 (Moment Composition Theorem and translation to conventional DP)&lt;/li&gt;
+&lt;/ol&gt;
+&lt;p&gt;Which one is the best?&lt;/p&gt;
+&lt;p&gt;To make a fair comparison, we may use one parameter as the metric and set all others to be the same. For example, we can:&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;Given the same &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(r\)&lt;/span&gt; (in Route 1 and 3), &lt;span class="math inline"&gt;\(k\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(\sigma\)&lt;/span&gt;, compare the &lt;span class="math inline"&gt;\(\delta\)&lt;/span&gt;s&lt;/li&gt;
+&lt;li&gt;Given the same &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(r\)&lt;/span&gt; (in Route 1 and 3), &lt;span class="math inline"&gt;\(k\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(\delta\)&lt;/span&gt;, compare the &lt;span class="math inline"&gt;\(\sigma\)&lt;/span&gt;s&lt;/li&gt;
+&lt;li&gt;Given the same &lt;span class="math inline"&gt;\(\delta\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(r\)&lt;/span&gt; (in Route 1 and 3), &lt;span class="math inline"&gt;\(k\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(\sigma\)&lt;/span&gt;, compare the &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;s.&lt;/li&gt;
+&lt;/ol&gt;
+&lt;p&gt;I find the first one, where &lt;span class="math inline"&gt;\(\delta\)&lt;/span&gt; is used as the metric, to be the best. This is because we have the tightest bounds and the cleanest formulas when comparing the &lt;span class="math inline"&gt;\(\delta\)&lt;/span&gt;s. For example, the Azuma and Chernoff bounds are both expressed as bounds on &lt;span class="math inline"&gt;\(\delta\)&lt;/span&gt;. On the other hand, inverting these bounds costs either tightness (Claim 9, bounds on &lt;span class="math inline"&gt;\(\sigma\)&lt;/span&gt;) or simplicity of the formula (Claim 18, the Advanced Adaptive Composition Theorem, bounds on &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;).&lt;/p&gt;
+&lt;p&gt;So if we use &lt;span class="math inline"&gt;\(\sigma\)&lt;/span&gt; or &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt; as the metric, we either get a less fair comparison or have to use much more complicated formulas as the bounds.&lt;/p&gt;
+&lt;p&gt;Let us first compare Route 1 and Route 2 without specialising to the Gaussian mechanism.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Disclaimer&lt;/strong&gt;. What follows is a bit messy and has not been reviewed by anyone.&lt;/p&gt;
+&lt;p&gt;Suppose each mechanism &lt;span class="math inline"&gt;\(N_i\)&lt;/span&gt; satisfies &lt;span class="math inline"&gt;\((\epsilon&amp;#39;, \delta(\epsilon&amp;#39;))\)&lt;/span&gt;-dp. Let &lt;span class="math inline"&gt;\(\tilde \epsilon := \log (1 + r (e^{\epsilon&amp;#39;} - 1))\)&lt;/span&gt;; then the subsampled mechanism &lt;span class="math inline"&gt;\(M_i(x) = N_i(x_\gamma)\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\tilde \epsilon, r \tilde \delta(\tilde \epsilon))\)&lt;/span&gt;-dp, where&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\tilde \delta(\tilde \epsilon) = \delta(\log (r^{-1} (\exp(\tilde \epsilon) - 1) + 1))\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Using the Azuma bound in the proof of the Advanced Adaptive Composition Theorem (6.99), we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(L(p^k || q^k) \ge \epsilon) \le \exp(- {(\epsilon - r^{-1} k a(\tilde\epsilon))^2 \over 2 r^{-1} k (\tilde\epsilon + a(\tilde\epsilon))^2}).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;So we have the final bound for Route 1:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\delta_1(\epsilon) = \min_{\tilde \epsilon: \epsilon &amp;gt; r^{-1} k a(\tilde \epsilon)} \exp(- {(\epsilon - r^{-1} k a(\tilde\epsilon))^2 \over 2 r^{-1} k (\tilde\epsilon + a(\tilde\epsilon))^2}) + k \tilde \delta(\tilde \epsilon).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;As for Route 2, since we do not gain anything from subsampling in RDP, we do not subsample at all.&lt;/p&gt;
+&lt;p&gt;By Claim 23, we have the bound for Route 2:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\delta_2(\epsilon) = \exp(- k \kappa^* (\epsilon / k)).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;On one hand, one can compare &lt;span class="math inline"&gt;\(\delta_1\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\delta_2\)&lt;/span&gt; with numerical experiments. On the other hand, if we further specify &lt;span class="math inline"&gt;\(\delta(\epsilon&amp;#39;)\)&lt;/span&gt; in Route 1 as the Chernoff bound for the cumulants of divergence variable, i.e.&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\delta(\epsilon&amp;#39;) = \exp(- \kappa^* (\epsilon&amp;#39;)),\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\delta_1 (\epsilon) = \min_{\tilde \epsilon: a(\tilde \epsilon) &amp;lt; r k^{-1} \epsilon} \exp(- {(\epsilon - r^{-1} k a(\tilde\epsilon))^2 \over 2 r^{-1} k (\tilde\epsilon + a(\tilde\epsilon))^2}) + k \exp(- \kappa^* (b(\tilde\epsilon))),\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[b(\tilde \epsilon) := \log (r^{-1} (\exp(\tilde \epsilon) - 1) + 1) \le r^{-1} \tilde\epsilon.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;We note that since &lt;span class="math inline"&gt;\(a(\tilde \epsilon) = \tilde\epsilon(e^{\tilde \epsilon} - 1) 1_{\tilde\epsilon &amp;lt; \log 2} + \tilde\epsilon 1_{\tilde\epsilon \ge \log 2}\)&lt;/span&gt;, we may compare the two cases separately.&lt;/p&gt;
+&lt;p&gt;Note that &lt;span class="math inline"&gt;\(\kappa^*\)&lt;/span&gt; is a monotonically increasing function, therefore&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\kappa^* (b(\tilde\epsilon)) \le \kappa^*(r^{-1} \tilde\epsilon).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;So for &lt;span class="math inline"&gt;\(\tilde \epsilon \ge \log 2\)&lt;/span&gt;, we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[k \exp(- \kappa^*(b(\tilde\epsilon))) \ge k \exp(- \kappa^*(r^{-1} \tilde \epsilon)) \ge k \exp(- \kappa^*(k^{-1} \epsilon)) \ge \delta_2(\epsilon).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;For &lt;span class="math inline"&gt;\(\tilde\epsilon &amp;lt; \log 2\)&lt;/span&gt;, it is harder to compare, as now&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[k \exp(- \kappa^*(b(\tilde\epsilon))) \ge k \exp(- \kappa^*(\epsilon / \sqrt{r k})).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;It is tempting to believe that this should also be greater than &lt;span class="math inline"&gt;\(\delta_2(\epsilon)\)&lt;/span&gt;, but I cannot say for sure. At least in the special case of the Gaussian, we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[k \exp(- \kappa^*(\epsilon / \sqrt{r k})) = k \exp(- (\sigma \sqrt{\epsilon / k r} - (2 \sigma)^{-1})^2) \ge \exp(- k ({\sigma \epsilon \over k} - (2 \sigma)^{-1})^2) = \delta_2(\epsilon)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;when &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt; is sufficiently small. However we still need to consider the case where &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt; is not too small. But overall it seems most likely Route 2 is superior than Route 1.&lt;/p&gt;
+&lt;p&gt;So let us compare Route 2 with Route 3:&lt;/p&gt;
+&lt;p&gt;Given the condition to obtain the Chernoff bound&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[{\sigma \epsilon \over k} &amp;gt; (2 \sigma)^{-1}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\delta_2(\epsilon) &amp;gt; \exp(- k (\sigma \epsilon / k)^2) = \exp(- \sigma^2 \epsilon^2 / k).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;For this to achieve the same bound&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\delta_3(\epsilon) = \exp\left(- {1 \over 2} c_2 \sigma^2 \epsilon\right)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;we need &lt;span class="math inline"&gt;\(k &amp;lt; {2 \epsilon \over c_2}\)&lt;/span&gt;. This is only possible if &lt;span class="math inline"&gt;\(c_2\)&lt;/span&gt; is small or &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt; is large, since &lt;span class="math inline"&gt;\(k\)&lt;/span&gt; is a positive integer.&lt;/p&gt;
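+&lt;p&gt;The two bounds are easy to compare numerically (a sketch with hypothetical parameters of my choosing; in particular &lt;span class="math inline"&gt;\(c_2\)&lt;/span&gt; is a free constant not pinned down by Claim 28):&lt;/p&gt;

```python
import math

def delta_2(eps, sigma, k):
    # Route 2: delta_2 = exp(-k (sigma eps / k - (2 sigma)^{-1})^2),
    # valid in the Chernoff regime sigma * eps / k > 1 / (2 * sigma)
    s = sigma * eps / k - 1 / (2 * sigma)
    assert s > 0, "outside the regime of the Chernoff bound"
    return math.exp(-k * s ** 2)

def delta_3(eps, sigma, c2):
    # Route 3: the bound from Claim 28
    return math.exp(-0.5 * c2 * sigma ** 2 * eps)

# hypothetical parameters; note k < 2 * eps / c2 = 16 holds here,
# matching the condition derived in the text
eps, sigma, k, c2 = 4.0, 2.0, 2, 0.5
assert 0 < delta_2(eps, sigma, k) < 1
assert 0 < delta_3(eps, sigma, c2) < 1
```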
+&lt;p&gt;So taken at face value, Route 3 seems to achieve the best results. However, it also has some similar implicit conditions that need to be satisfied: first, &lt;span class="math inline"&gt;\(T\)&lt;/span&gt; needs to be at least &lt;span class="math inline"&gt;\(1\)&lt;/span&gt;, meaning&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[{c_2 \over C(c_1, c_2)} \epsilon \sigma^2 \ge 1.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Second, &lt;span class="math inline"&gt;\(k\)&lt;/span&gt; needs to be at least &lt;span class="math inline"&gt;\(1\)&lt;/span&gt; as well, i.e.&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[k = r T \ge {c_1 c_2 \over C(c_1, c_2)} \epsilon \sigma \ge 1.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Both conditions rely on the magnitudes of &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(\sigma\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(c_1\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(c_2\)&lt;/span&gt;, and the rate of growth of &lt;span class="math inline"&gt;\(C(c_1, c_2)\)&lt;/span&gt;. The biggest unknown on this list is the last one: if we knew how fast &lt;span class="math inline"&gt;\(C\)&lt;/span&gt; grows, we would have a better idea of the constraints on the parameters needed to achieve the result in Route 3.&lt;/p&gt;
+&lt;h2 id="further-questions"&gt;Further questions&lt;/h2&gt;
+&lt;p&gt;Here is a list of what I think may be interesting topics or potential problems to look at, with no guarantee that they are all awesome untouched research problems:&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;Prove Conjecture 1&lt;/li&gt;
+&lt;li&gt;Find a theoretically definitive answer as to whether the methods in Part 1 or Part 2 yield better privacy guarantees.&lt;/li&gt;
+&lt;li&gt;Study the non-Gaussian cases, general or specific. Let &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; be some probability density, what is the tail bound of &lt;span class="math inline"&gt;\(L(p(y) || p(y + \alpha))\)&lt;/span&gt; for &lt;span class="math inline"&gt;\(|\alpha| \le 1\)&lt;/span&gt;? Can you find anything better than Gaussian? For a start, perhaps the nice tables of Rényi divergence in Gil-Alajaji-Linder 2013 may be useful?&lt;/li&gt;
+&lt;li&gt;Find out how useful Claim 26 is. Perhaps start with computing the constant &lt;span class="math inline"&gt;\(C\)&lt;/span&gt; numerically.&lt;/li&gt;
+&lt;li&gt;Help with &lt;a href="https://github.com/tensorflow/privacy/issues/23"&gt;the aforementioned issue&lt;/a&gt; in the TensorFlow Privacy package.&lt;/li&gt;
+&lt;/ol&gt;
+&lt;h2 id="references"&gt;References&lt;/h2&gt;
+&lt;ul&gt;
+&lt;li&gt;Abadi, Martín, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. “Deep Learning with Differential Privacy.” Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security - CCS’16, 2016, 308–18. &lt;a href="https://doi.org/10.1145/2976749.2978318" class="uri"&gt;https://doi.org/10.1145/2976749.2978318&lt;/a&gt;.&lt;/li&gt;
+&lt;li&gt;Erven, Tim van, and Peter Harremoës. “Rényi Divergence and Kullback-Leibler Divergence.” IEEE Transactions on Information Theory 60, no. 7 (July 2014): 3797–3820. &lt;a href="https://doi.org/10.1109/TIT.2014.2320500" class="uri"&gt;https://doi.org/10.1109/TIT.2014.2320500&lt;/a&gt;.&lt;/li&gt;
+&lt;li&gt;Gil, M., F. Alajaji, and T. Linder. “Rényi Divergence Measures for Commonly Used Univariate Continuous Distributions.” Information Sciences 249 (November 2013): 124–31. &lt;a href="https://doi.org/10.1016/j.ins.2013.06.018" class="uri"&gt;https://doi.org/10.1016/j.ins.2013.06.018&lt;/a&gt;.&lt;/li&gt;
+&lt;li&gt;Mironov, Ilya. “Rényi Differential Privacy.” 2017 IEEE 30th Computer Security Foundations Symposium (CSF), August 2017, 263–75. &lt;a href="https://doi.org/10.1109/CSF.2017.11" class="uri"&gt;https://doi.org/10.1109/CSF.2017.11&lt;/a&gt;.&lt;/li&gt;
+&lt;/ul&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/blog-feed.xml">
+ <title type="text">A Tail of Two Densities</title>
+ <id>posts/2019-03-13-a-tail-of-two-densities.html</id>
+ <updated>2019-03-13T00:00:00Z</updated>
+ <link href="posts/2019-03-13-a-tail-of-two-densities.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;This is Part 1 of a two-part post where I give an introduction to differential privacy, which is a study of tail bounds of the divergence between probability measures, with the end goal of applying it to stochastic gradient descent.&lt;/p&gt;
+&lt;p&gt;I start with the definition of &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-differential privacy (corresponding to max divergence), followed by &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-differential privacy (a.k.a. approximate differential privacy, corresponding to the &lt;span class="math inline"&gt;\(\delta\)&lt;/span&gt;-approximate max divergence). I show a characterisation of &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-differential privacy as conditioned &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-differential privacy. Also, as examples, I illustrate &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-dp with the Laplace mechanism and, using some common tail bounds, approximate dp with the Gaussian mechanism.&lt;/p&gt;
+&lt;p&gt;Then I continue to show the effect of combinatorial and sequential compositions of randomised queries (called mechanisms) on privacy by stating and proving the composition theorems for differential privacy, as well as the effect of mixing mechanisms, by presenting the subsampling theorem (a.k.a. amplification theorem).&lt;/p&gt;
+&lt;p&gt;In &lt;a href="/posts/2019-03-14-great-but-manageable-expectations.html"&gt;Part 2&lt;/a&gt;, I discuss the Rényi differential privacy, corresponding to the Rényi divergence, a study of the moment generating functions of the divergence between probability measures to derive the tail bounds.&lt;/p&gt;
+&lt;p&gt;Like in Part 1, I prove a composition theorem and a subsampling theorem.&lt;/p&gt;
+&lt;p&gt;I also attempt to reproduce a seemingly better moment bound for the Gaussian mechanism with subsampling, with one intermediate step which I am not able to prove.&lt;/p&gt;
+&lt;p&gt;After that I explain the Tensorflow implementation of differential privacy in its &lt;a href="https://github.com/tensorflow/privacy/tree/master/privacy"&gt;Privacy&lt;/a&gt; module, which focuses on the differentially private stochastic gradient descent algorithm (DP-SGD).&lt;/p&gt;
+&lt;p&gt;Finally I use the results from both Part 1 and Part 2 to obtain some privacy guarantees for composed subsampling queries in general, and for DP-SGD in particular. I also compare these privacy guarantees.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Acknowledgement&lt;/strong&gt;. I would like to thank &lt;a href="https://stockholm.ai"&gt;Stockholm AI&lt;/a&gt; for introducing me to the subject of differential privacy. Thanks to (in chronological order) Reynaldo Boulogne, Martin Abedi, Ilya Mironov, Kurt Johansson, Mark Bun, Salil Vadhan, Jonathan Ullman, Yuanyuan Xu and Yiting Li for communication and discussions. The research was done while working at &lt;a href="https://www.kth.se/en/sci/institutioner/math"&gt;KTH Department of Mathematics&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;&lt;em&gt;If you are confused by any notations, ask me or try &lt;a href="/notations.html"&gt;this&lt;/a&gt;. This post (including both Part 1 and Part 2) is licensed under &lt;a href="https://creativecommons.org/licenses/by-sa/4.0/"&gt;CC BY-SA&lt;/a&gt; and &lt;a href="https://www.gnu.org/licenses/fdl.html"&gt;GNU FDL&lt;/a&gt;.&lt;/em&gt;&lt;/p&gt;
+&lt;h2 id="the-gist-of-differential-privacy"&gt;The gist of differential privacy&lt;/h2&gt;
+&lt;p&gt;If you only have one minute, here is what differential privacy is about:&lt;/p&gt;
+&lt;p&gt;Let &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; be two probability densities, we define the &lt;em&gt;divergence variable&lt;/em&gt; of &lt;span class="math inline"&gt;\((p, q)\)&lt;/span&gt; to be&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[L(p || q) := \log {p(\xi) \over q(\xi)}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(\xi\)&lt;/span&gt; is a random variable distributed according to &lt;span class="math inline"&gt;\(p\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Roughly speaking, differential privacy is the study of the tail bound of &lt;span class="math inline"&gt;\(L(p || q)\)&lt;/span&gt;: for certain &lt;span class="math inline"&gt;\(p\)&lt;/span&gt;s and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt;s, and for &lt;span class="math inline"&gt;\(\epsilon &amp;gt; 0\)&lt;/span&gt;, find &lt;span class="math inline"&gt;\(\delta(\epsilon)\)&lt;/span&gt; such that&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(L(p || q) &amp;gt; \epsilon) &amp;lt; \delta(\epsilon),\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; are the laws of the outputs of a randomised functions on two very similar inputs. Moreover, to make matters even simpler, only three situations need to be considered:&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;(General case) &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; is in the form of &lt;span class="math inline"&gt;\(q(y) = p(y + \Delta)\)&lt;/span&gt; for some bounded constant &lt;span class="math inline"&gt;\(\Delta\)&lt;/span&gt;.&lt;/li&gt;
+&lt;li&gt;(Compositions) &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; are combinatorial or sequential compositions of some simpler &lt;span class="math inline"&gt;\(p_i\)&lt;/span&gt;’s and &lt;span class="math inline"&gt;\(q_i\)&lt;/span&gt;’s respectively&lt;/li&gt;
+&lt;li&gt;(Subsampling) &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; are mixtures / averages of some simpler &lt;span class="math inline"&gt;\(p_i\)&lt;/span&gt;’s and &lt;span class="math inline"&gt;\(q_i\)&lt;/span&gt;’s respectively&lt;/li&gt;
+&lt;/ol&gt;
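+&lt;p&gt;As a toy instance of the tail bound above (a sketch with arbitrary parameters of my choosing): for &lt;span class="math inline"&gt;\(p = N(0, \sigma^2)\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q = N(1, \sigma^2)\)&lt;/span&gt;, an instance of situation 1, the divergence variable &lt;span class="math inline"&gt;\(L(p || q) = (1 - 2 \xi) / (2 \sigma^2)\)&lt;/span&gt; is itself Gaussian, so its tail can be estimated by simulation and checked against the closed form:&lt;/p&gt;

```python
import math
import random

random.seed(0)
sigma, eps, n = 1.0, 1.0, 200_000

# p = N(0, sigma^2), q = N(1, sigma^2); for xi ~ p the divergence variable is
# L(p || q) = log p(xi) - log q(xi) = (1 - 2 xi) / (2 sigma^2)
hits = sum((1 - 2 * random.gauss(0.0, sigma)) / (2 * sigma ** 2) > eps
           for _ in range(n))

# L is itself Gaussian with mean 1/(2 sigma^2) and variance 1/sigma^2, so
# P(L > eps) = P(Z > sigma * eps - 1/(2 sigma)) for a standard Gaussian Z
exact = 0.5 * math.erfc((sigma * eps - 1 / (2 * sigma)) / math.sqrt(2))
assert abs(hits / n - exact) < 0.01
```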
+&lt;p&gt;In applications, the inputs are databases and the randomised functions are queries with an added noise, and the tail bounds give privacy guarantees. When it comes to gradient descent, the input is the training dataset, and the query updates the parameters, and privacy is achieved by adding noise to the gradients.&lt;/p&gt;
+&lt;p&gt;Now if you have an hour...&lt;/p&gt;
+&lt;h2 id="epsilon-dp"&gt;&lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-dp&lt;/h2&gt;
+&lt;p&gt;&lt;strong&gt;Definition (Mechanisms)&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(X\)&lt;/span&gt; be a space with a metric &lt;span class="math inline"&gt;\(d: X \times X \to \mathbb N\)&lt;/span&gt;. A &lt;em&gt;mechanism&lt;/em&gt; &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; is a function that takes &lt;span class="math inline"&gt;\(x \in X\)&lt;/span&gt; as input and outputs a random variable on &lt;span class="math inline"&gt;\(Y\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;In this post, &lt;span class="math inline"&gt;\(X = Z^m\)&lt;/span&gt; is the space of datasets of &lt;span class="math inline"&gt;\(m\)&lt;/span&gt; rows for some integer &lt;span class="math inline"&gt;\(m\)&lt;/span&gt;, where each item resides in &lt;span class="math inline"&gt;\(Z\)&lt;/span&gt;. In this case the distance &lt;span class="math inline"&gt;\(d(x, x&amp;#39;) := \#\{i: x_i \neq x&amp;#39;_i\}\)&lt;/span&gt; is the number of rows that differ between &lt;span class="math inline"&gt;\(x\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(x&amp;#39;\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Normally we have a query &lt;span class="math inline"&gt;\(f: X \to Y\)&lt;/span&gt;, and construct the mechanism &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; from &lt;span class="math inline"&gt;\(f\)&lt;/span&gt; by adding noise:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[M(x) := f(x) + \text{noise}.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Later, we will also consider mechanisms constructed from composition or mixture of other mechanisms.&lt;/p&gt;
+&lt;p&gt;In this post &lt;span class="math inline"&gt;\(Y = \mathbb R^d\)&lt;/span&gt; for some &lt;span class="math inline"&gt;\(d\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Definition (Sensitivity)&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(f: X \to \mathbb R^d\)&lt;/span&gt; be a function. The &lt;em&gt;sensitivity&lt;/em&gt; &lt;span class="math inline"&gt;\(S_f\)&lt;/span&gt; of &lt;span class="math inline"&gt;\(f\)&lt;/span&gt; is defined as&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[S_f := \sup_{x, x&amp;#39; \in X: d(x, x&amp;#39;) = 1} \|f(x) - f(x&amp;#39;)\|_2,\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(\|y\|_2 = \sqrt{y_1^2 + ... + y_d^2}\)&lt;/span&gt; is the &lt;span class="math inline"&gt;\(\ell^2\)&lt;/span&gt;-norm.&lt;/p&gt;
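+&lt;p&gt;For example (a toy illustration): if each row of the dataset is a number in &lt;span class="math inline"&gt;\([0, 1]\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(f\)&lt;/span&gt; is the mean, then changing one row moves the mean by at most &lt;span class="math inline"&gt;\(1 / m\)&lt;/span&gt;, so &lt;span class="math inline"&gt;\(S_f = 1 / m\)&lt;/span&gt;. A brute-force check on a tiny discretised dataset:&lt;/p&gt;

```python
import itertools

m = 3
grid = (0.0, 0.5, 1.0)    # discretised rows, each in [0, 1]

def f(x):                 # the query: mean of the dataset
    return sum(x) / len(x)

# S_f: maximise |f(x) - f(x')| over pairs of datasets differing in one row
sens = max(abs(f(x) - f(x[:i] + (v,) + x[i + 1:]))
           for x in itertools.product(grid, repeat=m)
           for i in range(m) for v in grid)
assert abs(sens - 1 / m) < 1e-12
```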
+&lt;p&gt;&lt;strong&gt;Definition (Differential Privacy)&lt;/strong&gt;. A mechanism &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; is called &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;&lt;em&gt;-differentially private&lt;/em&gt; (&lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-dp) if it satisfies the following condition: for all &lt;span class="math inline"&gt;\(x, x&amp;#39; \in X\)&lt;/span&gt; with &lt;span class="math inline"&gt;\(d(x, x&amp;#39;) = 1\)&lt;/span&gt;, and for all measurable sets &lt;span class="math inline"&gt;\(S \subset \mathbb R^d\)&lt;/span&gt;,&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(M(x) \in S) \le e^\epsilon P(M(x&amp;#39;) \in S). \qquad (1)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;An example of &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-dp mechanism is the Laplace mechanism.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Definition&lt;/strong&gt;. The Laplace distribution over &lt;span class="math inline"&gt;\(\mathbb R\)&lt;/span&gt; with parameter &lt;span class="math inline"&gt;\(b &amp;gt; 0\)&lt;/span&gt; has probability density function&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[f_{\text{Lap}(b)}(x) = {1 \over 2 b} e^{- {|x| \over b}}.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Definition&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(d = 1\)&lt;/span&gt;. The Laplace mechanism is defined by&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[M(x) = f(x) + \text{Lap}(b).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim&lt;/strong&gt;. The Laplace mechanism with&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[b \ge \epsilon^{-1} S_f \qquad (1.5)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;is &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-dp.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt;. Quite straightforward. Let &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; be the laws of &lt;span class="math inline"&gt;\(M(x)\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(M(x&amp;#39;)\)&lt;/span&gt; respectively.&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[{p (y) \over q (y)} = {f_{\text{Lap}(b)} (y - f(x)) \over f_{\text{Lap}(b)} (y - f(x&amp;#39;))} = \exp(b^{-1} (|y - f(x&amp;#39;)| - |y - f(x)|))\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Using the triangle inequality &lt;span class="math inline"&gt;\(|A| - |B| \le |A - B|\)&lt;/span&gt; on the right hand side, we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[{p (y) \over q (y)} \le \exp(b^{-1} (|f(x) - f(x&amp;#39;)|)) \le \exp(\epsilon)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where in the last step we use the condition (1.5). &lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
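+&lt;p&gt;The inequality in the proof can also be checked numerically (a sketch with a hypothetical query of my choosing, the sum of three rows each in &lt;span class="math inline"&gt;\([0, 1]\)&lt;/span&gt;, for which &lt;span class="math inline"&gt;\(S_f = 1\)&lt;/span&gt;):&lt;/p&gt;

```python
import math

def lap_density(y, b):
    # density of Lap(b) at y
    return math.exp(-abs(y) / b) / (2 * b)

f = sum                                           # hypothetical query: the sum
x, x_prime = [0.2, 0.5, 0.9], [0.2, 0.5, 0.1]     # neighbouring datasets
eps = 0.5
b = 1 / eps                                       # b = S_f / eps with S_f = 1

# the density ratio of M(x) and M(x') never exceeds e^eps
for i in range(-100, 101):
    y = i / 10
    ratio = lap_density(y - f(x), b) / lap_density(y - f(x_prime), b)
    assert ratio <= math.exp(eps) * (1 + 1e-12)
```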
+&lt;h2 id="approximate-differential-privacy"&gt;Approximate differential privacy&lt;/h2&gt;
+&lt;p&gt;Unfortunately, &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-dp does not apply to the most commonly used noise, the Gaussian noise. To fix this, we need to relax the definition a bit.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Definition&lt;/strong&gt;. A mechanism &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; is said to be &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;&lt;em&gt;-differentially private&lt;/em&gt; if for all &lt;span class="math inline"&gt;\(x, x&amp;#39; \in X\)&lt;/span&gt; with &lt;span class="math inline"&gt;\(d(x, x&amp;#39;) = 1\)&lt;/span&gt; and for all measurable &lt;span class="math inline"&gt;\(S \subset \mathbb R^d\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(M(x) \in S) \le e^\epsilon P(M(x&amp;#39;) \in S) + \delta. \qquad (2)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Immediately we see that the &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp is meaningful only if &lt;span class="math inline"&gt;\(\delta &amp;lt; 1\)&lt;/span&gt;.&lt;/p&gt;
+&lt;h3 id="indistinguishability"&gt;Indistinguishability&lt;/h3&gt;
+&lt;p&gt;To understand &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp, it is helpful to study &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-indistinguishability.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Definition&lt;/strong&gt;. Two probability measures &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; on the same space are called &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;&lt;em&gt;-ind(istinguishable)&lt;/em&gt; if for all measurable sets &lt;span class="math inline"&gt;\(S\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+p(S) \le e^\epsilon q(S) + \delta, \qquad (3) \\
+q(S) \le e^\epsilon p(S) + \delta. \qquad (4)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;As before, we also say that random variables &lt;span class="math inline"&gt;\(\xi\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\eta\)&lt;/span&gt; are &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-ind if their laws are &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-ind. When &lt;span class="math inline"&gt;\(\delta = 0\)&lt;/span&gt;, we call it &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-ind.&lt;/p&gt;
+&lt;p&gt;Immediately we have&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 0&lt;/strong&gt;. &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp (resp. &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-dp) iff &lt;span class="math inline"&gt;\(M(x)\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(M(x&amp;#39;)\)&lt;/span&gt; are &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-ind (resp. &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-ind) for all &lt;span class="math inline"&gt;\(x\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(x&amp;#39;\)&lt;/span&gt; with distance &lt;span class="math inline"&gt;\(1\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Definition (Divergence Variable)&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; be two probability measures. Let &lt;span class="math inline"&gt;\(\xi\)&lt;/span&gt; be a random variable distributed according to &lt;span class="math inline"&gt;\(p\)&lt;/span&gt;, we define a random variable &lt;span class="math inline"&gt;\(L(p || q)\)&lt;/span&gt; by&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[L(p || q) := \log {p(\xi) \over q(\xi)},\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;and call it the &lt;em&gt;divergence variable&lt;/em&gt; of &lt;span class="math inline"&gt;\((p, q)\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;One interesting and readily verifiable fact is&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb E L(p || q) = D(p || q)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(D\)&lt;/span&gt; is the KL-divergence.&lt;/p&gt;
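For discrete distributions, both the expectation of the divergence variable (the KL-divergence) and its tail are directly computable. A minimal sketch in Python (function names are mine, not from any library):

```python
from math import log

def kl(p, q):
    """E[L(p || q)] = D(p || q): the KL-divergence of two discrete
    distributions given as dicts mapping outcome -> probability."""
    return sum(py * log(py / q[y]) for y, py in p.items())

def divergence_tail(p, q, eps):
    """P(L(p || q) > eps): the mass p puts on outcomes whose
    log-likelihood ratio exceeds eps."""
    return sum(py for y, py in p.items() if log(py / q[y]) > eps)
```

By Claim 1 below, `divergence_tail(p, q, eps) <= delta` (together with the same bound in the other direction) implies that `p` and `q` are \((\epsilon, \delta)\)-ind.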
+&lt;p&gt;&lt;strong&gt;Claim 1&lt;/strong&gt;. If&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\mathbb P(L(p || q) \le \epsilon) &amp;amp;\ge 1 - \delta, \qquad(5) \\
+\mathbb P(L(q || p) \le \epsilon) &amp;amp;\ge 1 - \delta
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;then &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; are &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-ind.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt;. We verify (3), and (4) can be shown in the same way. Let &lt;span class="math inline"&gt;\(A := \{y \in Y: \log {p(y) \over q(y)} &amp;gt; \epsilon\}\)&lt;/span&gt;, then by (5) we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[p(A) &amp;lt; \delta.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;So&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[p(S) = p(S \cap A) + p(S \setminus A) \le \delta + e^\epsilon q(S \setminus A) \le \delta + e^\epsilon q(S).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;This Claim translates differential privacy to the tail bound of divergence variables, and for the rest of this post all dp results are obtained by estimating this tail bound.&lt;/p&gt;
+&lt;p&gt;In the following we discuss the converse of Claim 1. The discussions are rather technical, and readers can skip to the next subsection on first reading.&lt;/p&gt;
+&lt;p&gt;The converse of Claim 1 is not true.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 2&lt;/strong&gt;. There exist &lt;span class="math inline"&gt;\(\epsilon, \delta &amp;gt; 0\)&lt;/span&gt;, and &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; that are &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-ind, such that&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\mathbb P(L(p || q) \le \epsilon) &amp;amp;&amp;lt; 1 - \delta, \\
+\mathbb P(L(q || p) \le \epsilon) &amp;amp;&amp;lt; 1 - \delta
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt;. Here's a example. Let &lt;span class="math inline"&gt;\(Y = \{0, 1\}\)&lt;/span&gt;, and &lt;span class="math inline"&gt;\(p(0) = q(1) = 2 / 5\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(p(1) = q(0) = 3 / 5\)&lt;/span&gt;. Then it is not hard to verify that &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; are &lt;span class="math inline"&gt;\((\log {4 \over 3}, {1 \over 3})\)&lt;/span&gt;-ind: just check (3) for all four possible &lt;span class="math inline"&gt;\(S \subset Y\)&lt;/span&gt; and (4) holds by symmetry. On the other hand,&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(L(p || q) \le \log {4 \over 3}) = \mathbb P(L(q || p) \le \log {4 \over 3}) = {2 \over 5} &amp;lt; {2 \over 3}.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;A weaker version of the converse of Claim 1 is true (Kasiviswanathan-Smith 2015), though:&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 3&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(\alpha &amp;gt; 1\)&lt;/span&gt;. If &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; are &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-ind, then&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(L(p || q) &amp;gt; \alpha \epsilon) &amp;lt; {1 \over 1 - \exp((1 - \alpha) \epsilon)} \delta.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt;. Define&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[S = \{y: p(y) &amp;gt; e^{\alpha \epsilon} q(y)\}.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Then we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[e^{\alpha \epsilon} q(S) &amp;lt; p(S) \le e^\epsilon q(S) + \delta,\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where the first inequality is due to the definition of &lt;span class="math inline"&gt;\(S\)&lt;/span&gt;, and the second due to the &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-ind. Therefore&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[q(S) \le {\delta \over e^{\alpha \epsilon} - e^\epsilon}.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Using the &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-ind again we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[p(S) \le e^\epsilon q(S) + \delta = {1 \over 1 - e^{(1 - \alpha) \epsilon}} \delta.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;This can be quite bad if &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt; is small.&lt;/p&gt;
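To see how weak this can get, evaluate the Claim 3 bound on the Claim 2 example (a quick sketch; the function name is mine):

```python
from math import exp, log

def claim3_bound(eps, delta, alpha):
    """Upper bound from Claim 3 on P(L(p || q) > alpha * eps) for
    (eps, delta)-ind measures p and q, with alpha > 1."""
    return delta / (1 - exp((1 - alpha) * eps))
```

At \(\epsilon = \log {4 \over 3}\), \(\delta = {1 \over 3}\) and \(\alpha = 2\), the bound evaluates to \(4/3 > 1\), i.e. it is vacuous.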
+&lt;p&gt;To prove the composition theorems in the next section, we need a condition better than that in Claim 1 so that we can go back and forth between indistinguishability and such condition. In other words, we need a &lt;em&gt;characterisation&lt;/em&gt; of indistinguishability.&lt;/p&gt;
+&lt;p&gt;Let us take a careful look at the condition in Claim 1 and call it &lt;strong&gt;C1&lt;/strong&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;C1&lt;/strong&gt;. &lt;span class="math inline"&gt;\(\mathbb P(L(p || q) \le \epsilon) \ge 1 - \delta\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\mathbb P(L(q || p) \le \epsilon) \ge 1 - \delta\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;It is equivalent to&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;C2&lt;/strong&gt;. There exist events &lt;span class="math inline"&gt;\(A, B \subset Y\)&lt;/span&gt; with probabilities &lt;span class="math inline"&gt;\(p(A)\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q(B)\)&lt;/span&gt; at least &lt;span class="math inline"&gt;\(1 - \delta\)&lt;/span&gt; such that &lt;span class="math inline"&gt;\(\log p(y) - \log q(y) \le \epsilon\)&lt;/span&gt; for all &lt;span class="math inline"&gt;\(y \in A\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\log q(y) - \log p(y) \le \epsilon\)&lt;/span&gt; for all &lt;span class="math inline"&gt;\(y \in B\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;A similar-looking condition to &lt;strong&gt;C2&lt;/strong&gt; is the following:&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;C3&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(\Omega\)&lt;/span&gt; be the &lt;a href="https://en.wikipedia.org/wiki/Probability_space#Definition"&gt;underlying probability space&lt;/a&gt;. There exist two events &lt;span class="math inline"&gt;\(E, F \subset \Omega\)&lt;/span&gt; with &lt;span class="math inline"&gt;\(\mathbb P(E), \mathbb P(F) \ge 1 - \delta\)&lt;/span&gt;, such that &lt;span class="math inline"&gt;\(|\log p_{|E}(y) - \log q_{|F}(y)| \le \epsilon\)&lt;/span&gt; for all &lt;span class="math inline"&gt;\(y \in Y\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Here &lt;span class="math inline"&gt;\(p_{|E}\)&lt;/span&gt; (resp. &lt;span class="math inline"&gt;\(q_{|F}\)&lt;/span&gt;) is &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; (resp. &lt;span class="math inline"&gt;\(q\)&lt;/span&gt;) conditioned on event &lt;span class="math inline"&gt;\(E\)&lt;/span&gt; (resp. &lt;span class="math inline"&gt;\(F\)&lt;/span&gt;).&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Remark&lt;/strong&gt;. Note that the events in &lt;strong&gt;C2&lt;/strong&gt; and &lt;strong&gt;C3&lt;/strong&gt; are in different spaces, and therefore we can not write &lt;span class="math inline"&gt;\(p_{|E}(S)\)&lt;/span&gt; as &lt;span class="math inline"&gt;\(p(S | E)\)&lt;/span&gt; or &lt;span class="math inline"&gt;\(q_{|F}(S)\)&lt;/span&gt; as &lt;span class="math inline"&gt;\(q(S | F)\)&lt;/span&gt;. In fact, if we let &lt;span class="math inline"&gt;\(E\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(F\)&lt;/span&gt; in &lt;strong&gt;C3&lt;/strong&gt; be subsets of &lt;span class="math inline"&gt;\(Y\)&lt;/span&gt; with &lt;span class="math inline"&gt;\(p(E), q(F) \ge 1 - \delta\)&lt;/span&gt; and assume &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; have the same supports, then &lt;strong&gt;C3&lt;/strong&gt; degenerates to a stronger condition than &lt;strong&gt;C2&lt;/strong&gt;. Indeed, in this case &lt;span class="math inline"&gt;\(p_E(y) = p(y) 1_{y \in E}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q_F(y) = q(y) 1_{y \in F}\)&lt;/span&gt;, and so &lt;span class="math inline"&gt;\(p_E(y) \le e^\epsilon q_F(y)\)&lt;/span&gt; forces &lt;span class="math inline"&gt;\(E \subset F\)&lt;/span&gt;. We also obtain &lt;span class="math inline"&gt;\(F \subset E\)&lt;/span&gt; in the same way. This gives us &lt;span class="math inline"&gt;\(E = F\)&lt;/span&gt;, and &lt;strong&gt;C3&lt;/strong&gt; becomes &lt;strong&gt;C2&lt;/strong&gt; with &lt;span class="math inline"&gt;\(A = B = E = F\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;As it turns out, &lt;strong&gt;C3&lt;/strong&gt; is the condition we need.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 4&lt;/strong&gt;. Two probability measures &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; are &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-ind if and only if &lt;strong&gt;C3&lt;/strong&gt; holds.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt; (Murtagh-Vadhan 2018). The "if" direction is proved in the same way as Claim 1. Without loss of generality we may assume &lt;span class="math inline"&gt;\(\mathbb P(E) = \mathbb P(F) \ge 1 - \delta\)&lt;/span&gt;. To see this, suppose &lt;span class="math inline"&gt;\(F\)&lt;/span&gt; has higher probability than &lt;span class="math inline"&gt;\(E\)&lt;/span&gt;, then we can substitute &lt;span class="math inline"&gt;\(F\)&lt;/span&gt; with a subset of &lt;span class="math inline"&gt;\(F\)&lt;/span&gt; that has the same probability as &lt;span class="math inline"&gt;\(E\)&lt;/span&gt; (with possible enlargement of the probability space).&lt;/p&gt;
+&lt;p&gt;Let &lt;span class="math inline"&gt;\(\xi \sim p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\eta \sim q\)&lt;/span&gt; be two independent random variables, then&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+p(S) &amp;amp;= \mathbb P(\xi \in S | E) \mathbb P(E) + \mathbb P(\xi \in S; E^c) \\
+&amp;amp;\le e^\epsilon \mathbb P(\eta \in S | F) \mathbb P(E) + \delta \\
+&amp;amp;= e^\epsilon \mathbb P(\eta \in S | F) \mathbb P(F) + \delta\\
+&amp;amp;\le e^\epsilon q(S) + \delta.
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;The "only-if" direction is more involved.&lt;/p&gt;
+&lt;p&gt;We construct events &lt;span class="math inline"&gt;\(E\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(F\)&lt;/span&gt; by constructing functions &lt;span class="math inline"&gt;\(e, f: Y \to [0, \infty)\)&lt;/span&gt; satisfying the following conditions:&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;&lt;span class="math inline"&gt;\(0 \le e(y) \le p(y)\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(0 \le f(y) \le q(y)\)&lt;/span&gt; for all &lt;span class="math inline"&gt;\(y \in Y\)&lt;/span&gt;.&lt;/li&gt;
+&lt;li&gt;&lt;span class="math inline"&gt;\(|\log e(y) - \log f(y)| \le \epsilon\)&lt;/span&gt; for all &lt;span class="math inline"&gt;\(y \in Y\)&lt;/span&gt;.&lt;/li&gt;
+&lt;li&gt;&lt;span class="math inline"&gt;\(e(Y), f(Y) \ge 1 - \delta\)&lt;/span&gt;.&lt;/li&gt;
+&lt;li&gt;&lt;span class="math inline"&gt;\(e(Y) = f(Y)\)&lt;/span&gt;.&lt;/li&gt;
+&lt;/ol&gt;
+&lt;p&gt;Here for a set &lt;span class="math inline"&gt;\(S \subset Y\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(e(S) := \int_S e(y) dy\)&lt;/span&gt;, and the same goes for &lt;span class="math inline"&gt;\(f(S)\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Let &lt;span class="math inline"&gt;\(\xi \sim p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\eta \sim q\)&lt;/span&gt;. Then we define &lt;span class="math inline"&gt;\(E\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(F\)&lt;/span&gt; by&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(E | \xi = y) = e(y) / p(y) \\
+\mathbb P(F | \eta = y) = f(y) / q(y).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Remark inside proof&lt;/strong&gt;. This can seem a bit confusing. Intuitively, we can think of it this way when &lt;span class="math inline"&gt;\(Y\)&lt;/span&gt; is finite: Recall a random variable on &lt;span class="math inline"&gt;\(Y\)&lt;/span&gt; is a function from the probability space &lt;span class="math inline"&gt;\(\Omega\)&lt;/span&gt; to &lt;span class="math inline"&gt;\(Y\)&lt;/span&gt;. Let event &lt;span class="math inline"&gt;\(G_y \subset \Omega\)&lt;/span&gt; be defined as &lt;span class="math inline"&gt;\(G_y = \xi^{-1} (y)\)&lt;/span&gt;. We cut &lt;span class="math inline"&gt;\(G_y\)&lt;/span&gt; into the disjoint union of &lt;span class="math inline"&gt;\(E_y\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(G_y \setminus E_y\)&lt;/span&gt; such that &lt;span class="math inline"&gt;\(\mathbb P(E_y) = e(y)\)&lt;/span&gt;. Then &lt;span class="math inline"&gt;\(E = \bigcup_{y \in Y} E_y\)&lt;/span&gt;. So &lt;span class="math inline"&gt;\(e(y)\)&lt;/span&gt; can be seen as the "density" of &lt;span class="math inline"&gt;\(E\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Indeed, given &lt;span class="math inline"&gt;\(E\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(F\)&lt;/span&gt; defined this way, we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[p_E(y) = {e(y) \over e(Y)} \le {\exp(\epsilon) f(y) \over e(Y)} = {\exp(\epsilon) f(y) \over f(Y)} = \exp(\epsilon) q_F(y).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;and&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(E) = \int \mathbb P(E | \xi = y) p(y) dy = e(Y) \ge 1 - \delta,\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;and the same goes for &lt;span class="math inline"&gt;\(\mathbb P(F)\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;What remains is to construct &lt;span class="math inline"&gt;\(e(y)\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(f(y)\)&lt;/span&gt; satisfying the four conditions.&lt;/p&gt;
+&lt;p&gt;Like in the proof of Claim 1, let &lt;span class="math inline"&gt;\(S, T \subset Y\)&lt;/span&gt; be defined as&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+S := \{y: p(y) &amp;gt; \exp(\epsilon) q(y)\},\\
+T := \{y: q(y) &amp;gt; \exp(\epsilon) p(y)\}.
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Let&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+e(y) &amp;amp;:= \exp(\epsilon) q(y) 1_{y \in S} + p(y) 1_{y \notin S}\\
+f(y) &amp;amp;:= \exp(\epsilon) p(y) 1_{y \in T} + q(y) 1_{y \notin T}. \qquad (6)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;By checking them on the three disjoint subsets &lt;span class="math inline"&gt;\(S\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(T\)&lt;/span&gt;, &lt;span class="math inline"&gt;\((S \cup T)^c\)&lt;/span&gt;, it is not hard to verify that the &lt;span class="math inline"&gt;\(e(y)\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(f(y)\)&lt;/span&gt; constructed this way satisfy the first two conditions. They also satisfy the third condition:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+e(Y) &amp;amp;= 1 - (p(S) - \exp(\epsilon) q(S)) \ge 1 - \delta, \\
+f(Y) &amp;amp;= 1 - (q(T) - \exp(\epsilon) p(T)) \ge 1 - \delta.
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;If &lt;span class="math inline"&gt;\(e(Y) = f(Y)\)&lt;/span&gt; then we are done. Otherwise, without loss of generality, assume &lt;span class="math inline"&gt;\(e(Y) &amp;lt; f(Y)\)&lt;/span&gt;, then all it remains to do is to reduce the value of &lt;span class="math inline"&gt;\(f(y)\)&lt;/span&gt; while preserving Condition 1, 2 and 3, until &lt;span class="math inline"&gt;\(f(Y) = e(Y)\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;As it turns out, this can be achieved by reducing &lt;span class="math inline"&gt;\(f(y)\)&lt;/span&gt; on the set &lt;span class="math inline"&gt;\(\{y \in Y: q(y) &amp;gt; p(y)\}\)&lt;/span&gt;. To see this, let us rename the &lt;span class="math inline"&gt;\(f(y)\)&lt;/span&gt; defined in (6) &lt;span class="math inline"&gt;\(f_+(y)\)&lt;/span&gt;, and construct &lt;span class="math inline"&gt;\(f_-(y)\)&lt;/span&gt; by&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[f_-(y) := p(y) 1_{y \in T} + (q(y) \wedge p(y)) 1_{y \notin T}.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;It is not hard to show that &lt;span class="math inline"&gt;\(e(y)\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(f_-(y)\)&lt;/span&gt; not only satisfy Conditions 1-3, but also&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[e(y) \ge f_-(y), \forall y \in Y,\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;and thus &lt;span class="math inline"&gt;\(e(Y) \ge f_-(Y)\)&lt;/span&gt;. Therefore there exists an &lt;span class="math inline"&gt;\(f\)&lt;/span&gt; that interpolates between &lt;span class="math inline"&gt;\(f_-\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(f_+\)&lt;/span&gt; with &lt;span class="math inline"&gt;\(f(Y) = e(Y)\)&lt;/span&gt;. &lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
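For discrete \(p\) and \(q\), the construction (6) can be carried out explicitly and its first three conditions checked numerically. A sketch (the function name is mine):

```python
from math import exp, log

def construct_densities(p, q, eps):
    """Construct e and f from (6) for discrete distributions p, q
    (dicts outcome -> probability): cap p at exp(eps) * q on S and
    cap q at exp(eps) * p on T."""
    e = {y: exp(eps) * q[y] if p[y] > exp(eps) * q[y] else p[y]
         for y in p}
    f = {y: exp(eps) * p[y] if q[y] > exp(eps) * p[y] else q[y]
         for y in q}
    return e, f
```

Conditions 1 and 2 hold pointwise (with equality \(|\log e - \log f| = \epsilon\) on \(S\) and \(T\)), and \(e(Y), f(Y) \ge 1 - \delta\) follows from indistinguishability; equalising \(e(Y)\) and \(f(Y)\) is the interpolation step at the end of the proof.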
+&lt;p&gt;To prove the adaptive composition theorem for approximate differential privacy, we need a similar claim (we use the index shorthand &lt;span class="math inline"&gt;\(\xi_{&amp;lt; i} = \xi_{1 : i - 1}\)&lt;/span&gt;, and similarly for other notations):&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 5&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(\xi_{1 : i}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\eta_{1 : i}\)&lt;/span&gt; be random variables. Let&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+p_i(S | y_{1 : i - 1}) := \mathbb P(\xi_i \in S | \xi_{1 : i - 1} = y_{1 : i - 1})\\
+q_i(S | y_{1 : i - 1}) := \mathbb P(\eta_i \in S | \eta_{1 : i - 1} = y_{1 : i - 1})
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;be the conditional laws of &lt;span class="math inline"&gt;\(\xi_i | \xi_{&amp;lt; i}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\eta_i | \eta_{&amp;lt; i}\)&lt;/span&gt; respectively. Then the following are equivalent:&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;For any &lt;span class="math inline"&gt;\(y_{&amp;lt; i} \in Y^{i - 1}\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(p_i(\cdot | y_{&amp;lt; i})\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q_i(\cdot | y_{&amp;lt; i})\)&lt;/span&gt; are &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-ind&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;There exist events &lt;span class="math inline"&gt;\(E_i, F_i \subset \Omega\)&lt;/span&gt; with &lt;span class="math inline"&gt;\(\mathbb P(E_i | \xi_{&amp;lt;i} = y_{&amp;lt;i}) = \mathbb P(F_i | \eta_{&amp;lt;i} = y_{&amp;lt; i}) \ge 1 - \delta\)&lt;/span&gt; for any &lt;span class="math inline"&gt;\(y_{&amp;lt; i}\)&lt;/span&gt;, such that &lt;span class="math inline"&gt;\(p_{i | E_i}(\cdot | y_{&amp;lt; i})\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q_{i | F_i} (\cdot | y_{&amp;lt; i})\)&lt;/span&gt; are &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-ind for any &lt;span class="math inline"&gt;\(y_{&amp;lt; i}\)&lt;/span&gt;, where &lt;span class="math display"&gt;\[\begin{aligned}
+p_{i | E_i}(S | y_{1 : i - 1}) := \mathbb P(\xi_i \in S | E_i, \xi_{1 : i - 1} = y_{1 : i - 1})\\
+ q_{i | F_i}(S | y_{1 : i - 1}) := \mathbb P(\eta_i \in S | F_i, \eta_{1 : i - 1} = y_{1 : i - 1})
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;are &lt;span class="math inline"&gt;\(p_i\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q_i\)&lt;/span&gt; conditioned on &lt;span class="math inline"&gt;\(E_i\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(F_i\)&lt;/span&gt; respectively.&lt;/p&gt;&lt;/li&gt;
+&lt;/ol&gt;
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt;. Item 2 =&amp;gt; Item 1: as in the Proof of Claim 4,&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+p_i(S | y_{&amp;lt; i}) &amp;amp;= p_{i | E_i} (S | y_{&amp;lt; i}) \mathbb P(E_i | \xi_{&amp;lt; i} = y_{&amp;lt; i}) + p_{i | E_i^c}(S | y_{&amp;lt; i}) \mathbb P(E_i^c | \xi_{&amp;lt; i} = y_{&amp;lt; i}) \\
+&amp;amp;\le p_{i | E_i} (S | y_{&amp;lt; i}) \mathbb P(E_i | \xi_{&amp;lt; i} = y_{&amp;lt; i}) + \delta \\
+&amp;amp;= p_{i | E_i} (S | y_{&amp;lt; i}) \mathbb P(F_i | \xi_{&amp;lt; i} = y_{&amp;lt; i}) + \delta \\
+&amp;amp;\le e^\epsilon q_{i | F_i} (S | y_{&amp;lt; i}) \mathbb P(F_i | \xi_{&amp;lt; i} = y_{&amp;lt; i}) + \delta \\
+&amp;amp;= e^\epsilon q_i (S | y_{&amp;lt; i}) + \delta.
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;The other inequality, &lt;span class="math inline"&gt;\(q_i(S | y_{&amp;lt; i}) \le e^\epsilon p_i(S | y_{&amp;lt; i}) + \delta\)&lt;/span&gt;, can be shown in the same way.&lt;/p&gt;
+&lt;p&gt;Item 1 =&amp;gt; Item 2: as in the Proof of Claim 4 we construct &lt;span class="math inline"&gt;\(e(y_{1 : i})\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(f(y_{1 : i})\)&lt;/span&gt; as "densities" of events &lt;span class="math inline"&gt;\(E_i\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(F_i\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Let&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+e(y_{1 : i}) &amp;amp;:= e^\epsilon q_i(y_i | y_{&amp;lt; i}) 1_{y_i \in S_i(y_{&amp;lt; i})} + p_i(y_i | y_{&amp;lt; i}) 1_{y_i \notin S_i(y_{&amp;lt; i})}\\
+f(y_{1 : i}) &amp;amp;:= e^\epsilon p_i(y_i | y_{&amp;lt; i}) 1_{y_i \in T_i(y_{&amp;lt; i})} + q_i(y_i | y_{&amp;lt; i}) 1_{y_i \notin T_i(y_{&amp;lt; i})}\\
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+S_i(y_{&amp;lt; i}) = \{y_i \in Y: p_i(y_i | y_{&amp;lt; i}) &amp;gt; e^\epsilon q_i(y_i | y_{&amp;lt; i})\}\\
+T_i(y_{&amp;lt; i}) = \{y_i \in Y: q_i(y_i | y_{&amp;lt; i}) &amp;gt; e^\epsilon p_i(y_i | y_{&amp;lt; i})\}.
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Then &lt;span class="math inline"&gt;\(E_i\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(F_i\)&lt;/span&gt; are defined as&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\mathbb P(E_i | \xi_{\le i} = y_{\le i}) &amp;amp;= {e(y_{\le i}) \over p_i(y_i | y_{&amp;lt; i})},\\
+\mathbb P(F_i | \eta_{\le i} = y_{\le i}) &amp;amp;= {f(y_{\le i}) \over q_i(y_i | y_{&amp;lt; i})}.
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;The rest of the proof is almost the same as the proof of Claim 4. &lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
+&lt;h3 id="back-to-approximate-differential-privacy"&gt;Back to approximate differential privacy&lt;/h3&gt;
+&lt;p&gt;By Claim 0 and 1 we have&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 6&lt;/strong&gt;. If for all &lt;span class="math inline"&gt;\(x, x&amp;#39; \in X\)&lt;/span&gt; with distance &lt;span class="math inline"&gt;\(1\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(L(M(x) || M(x&amp;#39;)) \le \epsilon) \ge 1 - \delta,\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;then &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp.&lt;/p&gt;
+&lt;p&gt;Note that in the literature the divergence variable &lt;span class="math inline"&gt;\(L(M(x) || M(x&amp;#39;))\)&lt;/span&gt; is also called the &lt;em&gt;privacy loss&lt;/em&gt;.&lt;/p&gt;
+&lt;p&gt;By Claim 0 and Claim 4 we have&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 7&lt;/strong&gt;. &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp if and only if for every &lt;span class="math inline"&gt;\(x, x&amp;#39; \in X\)&lt;/span&gt; with distance &lt;span class="math inline"&gt;\(1\)&lt;/span&gt;, there exist events &lt;span class="math inline"&gt;\(E, F \subset \Omega\)&lt;/span&gt; with &lt;span class="math inline"&gt;\(\mathbb P(E) = \mathbb P(F) \ge 1 - \delta\)&lt;/span&gt; such that &lt;span class="math inline"&gt;\(M(x) | E\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(M(x&amp;#39;) | F\)&lt;/span&gt; are &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-ind.&lt;/p&gt;
+&lt;p&gt;We can further simplify the privacy loss &lt;span class="math inline"&gt;\(L(M(x) || M(x&amp;#39;))\)&lt;/span&gt;, by observing the translational and scaling invariance of &lt;span class="math inline"&gt;\(L(\cdot||\cdot)\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+L(\xi || \eta) &amp;amp;\overset{d}{=} L(\alpha \xi + \beta || \alpha \eta + \beta), \qquad \alpha \neq 0. \qquad (6.1)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;With this and the definition&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[M(x) = f(x) + \zeta\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;for some random variable &lt;span class="math inline"&gt;\(\zeta\)&lt;/span&gt;, we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[L(M(x) || M(x&amp;#39;)) \overset{d}{=} L(\zeta || \zeta + f(x&amp;#39;) - f(x)).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Without loss of generality, we can consider &lt;span class="math inline"&gt;\(f\)&lt;/span&gt; with sensitivity &lt;span class="math inline"&gt;\(1\)&lt;/span&gt;, for&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[L(f(x) + S_f \zeta || f(x&amp;#39;) + S_f \zeta) \overset{d}{=} L(S_f^{-1} f(x) + \zeta || S_f^{-1} f(x&amp;#39;) + \zeta)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;so for any noise &lt;span class="math inline"&gt;\(\zeta\)&lt;/span&gt; that achieves &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp for a function with sensitivity &lt;span class="math inline"&gt;\(1\)&lt;/span&gt;, we obtain the same privacy guarantee for an arbitrary function with sensitivity &lt;span class="math inline"&gt;\(S_f\)&lt;/span&gt; by adding the noise &lt;span class="math inline"&gt;\(S_f \zeta\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;With Claim 6 we can show that the Gaussian mechanism is approximately differentially private. But first we need to define it.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Definition (Gaussian mechanism)&lt;/strong&gt;. Given a query &lt;span class="math inline"&gt;\(f: X \to Y\)&lt;/span&gt;, the &lt;em&gt;Gaussian mechanism&lt;/em&gt; &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; adds Gaussian noise to the query:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[M(x) = f(x) + N(0, \sigma^2 I).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Some tail bounds for the Gaussian distribution will be useful.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 8 (Gaussian tail bounds)&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(\xi \sim N(0, 1)\)&lt;/span&gt; be a standard normal distribution. Then for &lt;span class="math inline"&gt;\(t &amp;gt; 0\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(\xi &amp;gt; t) &amp;lt; {1 \over \sqrt{2 \pi} t} e^{- {t^2 \over 2}}, \qquad (6.3)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;and&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(\xi &amp;gt; t) &amp;lt; e^{- {t^2 \over 2}}. \qquad (6.5)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt;. Both bounds are well known. The first can be proved using&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\int_t^\infty e^{- {y^2 \over 2}} dy &amp;lt; \int_t^\infty {y \over t} e^{- {y^2 \over 2}} dy.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;The second is shown using the Chernoff bound. For any random variable &lt;span class="math inline"&gt;\(\xi\)&lt;/span&gt;,&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(\xi &amp;gt; t) &amp;lt; {\mathbb E \exp(\lambda \xi) \over \exp(\lambda t)} = \exp(\kappa_\xi(\lambda) - \lambda t), \qquad (6.7)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(\kappa_\xi(\lambda) = \log \mathbb E \exp(\lambda \xi)\)&lt;/span&gt; is the cumulant of &lt;span class="math inline"&gt;\(\xi\)&lt;/span&gt;. Since (6.7) holds for any &lt;span class="math inline"&gt;\(\lambda\)&lt;/span&gt;, we can get the best bound by minimising &lt;span class="math inline"&gt;\(\kappa_\xi(\lambda) - \lambda t\)&lt;/span&gt; (a.k.a. the Legendre transformation). When &lt;span class="math inline"&gt;\(\xi\)&lt;/span&gt; is standard normal, we get (6.5). &lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Remark&lt;/strong&gt;. We will use the Chernoff bound extensively in the second part of this post when considering Rényi differential privacy.&lt;/p&gt;
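Both bounds can be compared against the exact Gaussian tail, which is expressible via the complementary error function. A quick numerical check (function names are mine):

```python
from math import erfc, exp, pi, sqrt

def gauss_tail(t):
    """Exact P(xi > t) for xi ~ N(0, 1)."""
    return 0.5 * erfc(t / sqrt(2))

def bound_6_3(t):
    """The tail bound (6.3): exp(-t^2/2) / (sqrt(2 pi) t)."""
    return exp(-t * t / 2) / (sqrt(2 * pi) * t)

def bound_6_5(t):
    """The Chernoff tail bound (6.5): exp(-t^2/2)."""
    return exp(-t * t / 2)
```

Bound (6.3) is the sharper of the two once \(t &gt; (2\pi)^{-1/2}\), while (6.5) is simpler and generalises through the cumulant method.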
+&lt;p&gt;&lt;strong&gt;Claim 9&lt;/strong&gt;. The Gaussian mechanism on a query &lt;span class="math inline"&gt;\(f\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp, where&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\delta = \exp(- (\epsilon \sigma / S_f - (2 \sigma / S_f)^{-1})^2 / 2). \qquad (6.8)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Conversely, to achieve &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp, we may set&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sigma &amp;gt; \left(\epsilon^{-1} \sqrt{2 \log \delta^{-1}} + (2 \epsilon)^{- {1 \over 2}}\right) S_f \qquad (6.81)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;or&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sigma &amp;gt; (\epsilon^{-1} (1 \vee \sqrt{(\log (2 \pi)^{-1} \delta^{-2})_+}) + (2 \epsilon)^{- {1 \over 2}}) S_f \qquad (6.82)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;or&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sigma &amp;gt; \epsilon^{-1} \sqrt{\log e^\epsilon \delta^{-2}} S_f \qquad (6.83)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;or&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sigma &amp;gt; \epsilon^{-1} (\sqrt{1 + \epsilon} \vee \sqrt{(\log e^\epsilon (2 \pi)^{-1} \delta^{-2})_+}) S_f. \qquad (6.84)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt;. As discussed before, we only need to consider the case where &lt;span class="math inline"&gt;\(S_f = 1\)&lt;/span&gt;. Fix arbitrary &lt;span class="math inline"&gt;\(x, x&amp;#39; \in X\)&lt;/span&gt; with &lt;span class="math inline"&gt;\(d(x, x&amp;#39;) = 1\)&lt;/span&gt;. Let &lt;span class="math inline"&gt;\(\zeta = (\zeta_1, ..., \zeta_d) \sim N(0, I_d)\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;By Claim 6 it suffices to bound&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(L(M(x) || M(x&amp;#39;)) &amp;gt; \epsilon)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;We have by the linear invariance of &lt;span class="math inline"&gt;\(L\)&lt;/span&gt;,&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[L(M(x) || M(x&amp;#39;)) = L(f(x) + \sigma \zeta || f(x&amp;#39;) + \sigma \zeta) \overset{d}{=} L(\zeta|| \zeta + \Delta / \sigma),\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(\Delta := f(x&amp;#39;) - f(x)\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Plugging in the Gaussian density, we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[L(M(x) || M(x&amp;#39;)) \overset{d}{=} \sum_i {\Delta_i \over \sigma} \zeta_i + \sum_i {\Delta_i^2 \over 2 \sigma^2} \overset{d}{=} {\|\Delta\|_2 \over \sigma} \xi + {\|\Delta\|_2^2 \over 2 \sigma^2}.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(\xi \sim N(0, 1)\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Hence&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(L(M(x) || M(x&amp;#39;)) &amp;gt; \epsilon) = \mathbb P(\zeta &amp;gt; {\sigma \over \|\Delta\|_2} \epsilon - {\|\Delta\|_2 \over 2 \sigma}).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Since &lt;span class="math inline"&gt;\(\|\Delta\|_2 \le S_f = 1\)&lt;/span&gt;, we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(L(M(x) || M(x&amp;#39;)) &amp;gt; \epsilon) \le \mathbb P(\xi &amp;gt; \sigma \epsilon - (2 \sigma)^{-1}).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Thus the problem is reduced to the tail bound of a standard normal distribution, so we can use Claim 8. Note that we implicitly require &lt;span class="math inline"&gt;\(\sigma &amp;gt; (2 \epsilon)^{- 1 / 2}\)&lt;/span&gt; here so that &lt;span class="math inline"&gt;\(\sigma \epsilon - (2 \sigma)^{-1} &amp;gt; 0\)&lt;/span&gt; and we can use the tail bounds.&lt;/p&gt;
+&lt;p&gt;Using (6.3) we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(L(M(x) || M(x&amp;#39;)) &amp;gt; \epsilon) &amp;lt; \exp(- (\epsilon \sigma - (2 \sigma)^{-1})^2 / 2).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;This gives us (6.8).&lt;/p&gt;
+&lt;p&gt;To bound the right-hand side by &lt;span class="math inline"&gt;\(\delta\)&lt;/span&gt;, we require&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\epsilon \sigma - {1 \over 2 \sigma} &amp;gt; \sqrt{2 \log \delta^{-1}}. \qquad (6.91)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Solving this inequality we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sigma &amp;gt; {\sqrt{2 \log \delta^{-1}} + \sqrt{2 \log \delta^{-1} + 2 \epsilon} \over 2 \epsilon}.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Using &lt;span class="math inline"&gt;\(\sqrt{2 \log \delta^{-1} + 2 \epsilon} \le \sqrt{2 \log \delta^{-1}} + \sqrt{2 \epsilon}\)&lt;/span&gt;, we can achieve the above inequality by having&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sigma &amp;gt; \epsilon^{-1} \sqrt{2 \log \delta^{-1}} + (2 \epsilon)^{-{1 \over 2}}.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;This gives us (6.81).&lt;/p&gt;
+&lt;p&gt;Alternatively, we can use the concavity of &lt;span class="math inline"&gt;\(\sqrt{\cdot}\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[(2 \epsilon)^{-1} (\sqrt{2 \log \delta^{-1}} + \sqrt{2 \log \delta^{-1} + 2 \epsilon}) \le \epsilon^{-1} \sqrt{\log e^\epsilon \delta^{-2}},\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;which gives us (6.83).&lt;/p&gt;
+&lt;p&gt;Back to (6.91), if we use (6.5) instead, we need&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\log t + {t^2 \over 2} &amp;gt; \log {(2 \pi)^{- 1 / 2} \delta^{-1}}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(t = \epsilon \sigma - (2 \sigma)^{-1}\)&lt;/span&gt;. This can be satisfied if&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+t &amp;amp;&amp;gt; 1 \qquad (6.93)\\
+t &amp;amp;&amp;gt; \sqrt{\log (2 \pi)^{-1} \delta^{-2}}. \qquad (6.95)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;We can solve both inequalities as before and obtain&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sigma &amp;gt; \epsilon^{-1} (1 \vee \sqrt{(\log (2 \pi)^{-1} \delta^{-2})_+}) + (2 \epsilon)^{- {1 \over 2}},\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;or&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sigma &amp;gt; \epsilon^{-1}(\sqrt{1 + \epsilon} \vee \sqrt{(\log e^\epsilon (2 \pi)^{-1} \delta^{-2})_+}).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;This gives us (6.82) and (6.84). &lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;When &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt; is bounded by some &lt;span class="math inline"&gt;\(\alpha\)&lt;/span&gt;, i.e. &lt;span class="math inline"&gt;\(\epsilon \le \alpha\)&lt;/span&gt;, by (6.83) and (6.84) we can require either&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sigma &amp;gt; \epsilon^{-1} (\sqrt{\log e^\alpha \delta^{-2}}) S_f\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;or&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sigma &amp;gt; \epsilon^{-1} (\sqrt{1 + \alpha} \vee \sqrt{(\log (2 \pi)^{-1} e^\alpha \delta^{-2})_+}).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;The second bound is similar to and slightly better than the one in Theorem A.1 of Dwork-Roth 2013, where &lt;span class="math inline"&gt;\(\alpha = 1\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sigma &amp;gt; \epsilon^{-1} \left({3 \over 2} \vee \sqrt{(2 \log {5 \over 4} \delta^{-1})_+}\right) S_f.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Note that the lower bound of &lt;span class="math inline"&gt;\({3 \over 2}\)&lt;/span&gt; is implicitly required in the proof of Theorem A.1.&lt;/p&gt;
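+&lt;p&gt;As a numerical illustration (a sketch, not part of the original derivation; the values of &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\delta\)&lt;/span&gt; are arbitrary), we can compare the bound (6.81), the bound (6.84) with &lt;span class="math inline"&gt;\(\alpha = 1\)&lt;/span&gt;, and the Dwork-Roth bound, all with &lt;span class="math inline"&gt;\(S_f = 1\)&lt;/span&gt;:&lt;/p&gt;

```python
import math

# sigma lower bounds for the (eps, delta)-dp Gaussian mechanism, S_f = 1

def sigma_681(eps, delta):
    # (6.81): sqrt(2 log(1/delta)) / eps + 1 / sqrt(2 eps)
    return math.sqrt(2 * math.log(1 / delta)) / eps + 1 / math.sqrt(2 * eps)

def sigma_684(eps, delta, alpha=1.0):
    # (6.84) with eps <= alpha:
    # (sqrt(1 + alpha) v sqrt((log e^alpha (2 pi)^{-1} delta^{-2})_+)) / eps
    log_term = alpha + math.log(1 / (2 * math.pi)) + 2 * math.log(1 / delta)
    return max(math.sqrt(1 + alpha), math.sqrt(max(0.0, log_term))) / eps

def sigma_dwork_roth(eps, delta):
    # Theorem A.1 of Dwork-Roth 2013: (3/2 v sqrt((2 log(5/4 delta^{-1}))_+)) / eps
    return max(1.5, math.sqrt(max(0.0, 2 * math.log(1.25 / delta)))) / eps

eps, delta = 0.5, 1e-5
print(sigma_681(eps, delta), sigma_684(eps, delta), sigma_dwork_roth(eps, delta))
```

As expected, (6.84) is slightly smaller than the Dwork-Roth bound at these values, and both beat (6.81).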
+&lt;h2 id="composition-theorems"&gt;Composition theorems&lt;/h2&gt;
+&lt;p&gt;So far we have seen how a mechanism made of a single query plus noise can be proved to be differentially private. But we need to understand the privacy guarantee when composing several mechanisms, combinatorially or sequentially. Let us first define the combinatorial case:&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Definition (Independent composition)&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(M_1, ..., M_k\)&lt;/span&gt; be &lt;span class="math inline"&gt;\(k\)&lt;/span&gt; mechanisms with independent noises. The mechanism &lt;span class="math inline"&gt;\(M = (M_1, ..., M_k)\)&lt;/span&gt; is called the &lt;em&gt;independent composition&lt;/em&gt; of &lt;span class="math inline"&gt;\(M_{1 : k}\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;To define the adaptive composition, let us motivate it with the example of gradient descent. Given the loss function &lt;span class="math inline"&gt;\(\ell(x; \theta)\)&lt;/span&gt; of a neural network, where &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt; is the parameter and &lt;span class="math inline"&gt;\(x\)&lt;/span&gt; the input, gradient descent updates the parameter &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt; at each time &lt;span class="math inline"&gt;\(t\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\theta_{t} = \theta_{t - 1} - \alpha m^{-1} \sum_{i = 1 : m} \nabla_\theta \ell(x_i; \theta) |_{\theta = \theta_{t - 1}}.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;We may add privacy by adding noise &lt;span class="math inline"&gt;\(\zeta_t\)&lt;/span&gt; at each step:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\theta_{t} = \theta_{t - 1} - \alpha m^{-1} \sum_{i = 1 : m} \nabla_\theta \ell(x_i; \theta) |_{\theta = \theta_{t - 1}} + \zeta_t. \qquad (6.97)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Viewed as a sequence of mechanisms, we have that at each time &lt;span class="math inline"&gt;\(t\)&lt;/span&gt;, the mechanism &lt;span class="math inline"&gt;\(M_t\)&lt;/span&gt; takes input &lt;span class="math inline"&gt;\(x\)&lt;/span&gt; and outputs &lt;span class="math inline"&gt;\(\theta_t\)&lt;/span&gt;. But &lt;span class="math inline"&gt;\(M_t\)&lt;/span&gt; also depends on the output of the previous mechanism &lt;span class="math inline"&gt;\(M_{t - 1}\)&lt;/span&gt;. To this end, we define the adaptive composition.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Definition (Adaptive composition)&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(({M_i(y_{1 : i - 1})})_{i = 1 : k}\)&lt;/span&gt; be &lt;span class="math inline"&gt;\(k\)&lt;/span&gt; mechanisms with independent noises, where &lt;span class="math inline"&gt;\(M_1\)&lt;/span&gt; has no parameter, &lt;span class="math inline"&gt;\(M_2\)&lt;/span&gt; has one parameter in &lt;span class="math inline"&gt;\(Y\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(M_3\)&lt;/span&gt; has two parameters in &lt;span class="math inline"&gt;\(Y\)&lt;/span&gt; and so on. For &lt;span class="math inline"&gt;\(x \in X\)&lt;/span&gt;, define &lt;span class="math inline"&gt;\(\xi_i\)&lt;/span&gt; recursively by&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\xi_1 &amp;amp;:= M_1(x)\\
+\xi_i &amp;amp;:= M_i(\xi_1, \xi_2, ..., \xi_{i - 1}) (x).
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;The &lt;em&gt;adaptive composition&lt;/em&gt; of &lt;span class="math inline"&gt;\(M_{1 : k}\)&lt;/span&gt; is defined by &lt;span class="math inline"&gt;\(M(x) := (\xi_1, \xi_2, ..., \xi_k)\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;The definition of adaptive composition may look a bit complicated, but the point is to describe &lt;span class="math inline"&gt;\(k\)&lt;/span&gt; mechanisms such that for each &lt;span class="math inline"&gt;\(i\)&lt;/span&gt;, the outputs of the first &lt;span class="math inline"&gt;\(i - 1\)&lt;/span&gt; mechanisms determine the &lt;span class="math inline"&gt;\(i\)&lt;/span&gt;th mechanism, as in the case of gradient descent.&lt;/p&gt;
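+&lt;p&gt;The recursion in the definition can be transcribed directly into code (a sketch with hypothetical toy mechanisms; each mechanism maps the previous outputs to a mechanism, which is then applied to the database):&lt;/p&gt;

```python
import random

def adaptive_composition(mechanisms, x):
    # xi_i := M_i(xi_1, ..., xi_{i-1})(x), returned jointly as M(x)
    outputs = []
    for M in mechanisms:
        outputs.append(M(tuple(outputs))(x))
    return tuple(outputs)

# Toy example (hypothetical): two noisy-sum mechanisms, the second
# recentring its query at the first output.
random.seed(0)
M1 = lambda prev: lambda x: sum(x) + random.gauss(0, 1)
M2 = lambda prev: lambda x: sum(x) - prev[0] + random.gauss(0, 1)
print(adaptive_composition([M1, M2], [1.0, 2.0, 3.0]))
```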
+&lt;p&gt;It is not hard to write down the differentially private gradient descent as an adaptive composition:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[M_t(\theta_{1 : t - 1})(x) = \theta_{t - 1} - \alpha m^{-1} \sum_{i = 1 : m} \nabla_\theta \ell(x_i; \theta) |_{\theta = \theta_{t - 1}} + \zeta_t.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;In Dwork-Rothblum-Vadhan 2010 (see also Dwork-Roth 2013) the adaptive composition is defined in a more general way, but the definition is based on the same principle, and proofs in this post on adaptive compositions carry over.&lt;/p&gt;
+&lt;p&gt;It is not hard to see that the adaptive composition degenerates to the independent composition when each &lt;span class="math inline"&gt;\(M_i(y_{1 : i - 1})\)&lt;/span&gt; evaluates to the same mechanism regardless of &lt;span class="math inline"&gt;\(y_{1 : i - 1}\)&lt;/span&gt;, in which case the &lt;span class="math inline"&gt;\(\xi_i\)&lt;/span&gt;s are independent.&lt;/p&gt;
+&lt;p&gt;In the following, when discussing adaptive compositions we sometimes omit the parameters for convenience without risk of ambiguity, writing &lt;span class="math inline"&gt;\(M_i(y_{1 : i - 1})\)&lt;/span&gt; as &lt;span class="math inline"&gt;\(M_i\)&lt;/span&gt;, but keep in mind the dependence on the parameters.&lt;/p&gt;
+&lt;p&gt;It is time to state and prove the composition theorems. In this section we consider &lt;span class="math inline"&gt;\(2 \times 2 \times 2 = 8\)&lt;/span&gt; cases, i.e. three dimensions with two choices in each:&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;Composition of &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-dp or more generally &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp mechanisms&lt;/li&gt;
+&lt;li&gt;Composition of independent or more generally adaptive mechanisms&lt;/li&gt;
+&lt;li&gt;Basic or advanced compositions&lt;/li&gt;
+&lt;/ol&gt;
+&lt;p&gt;Note that in the first two dimensions the second choice is more general than the first.&lt;/p&gt;
+&lt;p&gt;The proofs of these composition theorems will be laid out as follows:&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;Claim 10 - Basic composition theorem for &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp with adaptive mechanisms: by a direct proof with an induction argument&lt;/li&gt;
+&lt;li&gt;Claim 14 - Advanced composition theorem for &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-dp with independent mechanisms: by factorising privacy loss and using Hoeffding's Inequality&lt;/li&gt;
+&lt;li&gt;Claim 16 - Advanced composition theorem for &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-dp with adaptive mechanisms: by factorising privacy loss and using Azuma's Inequality&lt;/li&gt;
+&lt;li&gt;Claims 17 and 18 - Advanced composition theorem for &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp with independent / adaptive mechanisms: by using characterisations of &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp in Claims 4 and 5 as an approximation of &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-dp and then using the proofs in Items 2 and 3.&lt;/li&gt;
+&lt;/ol&gt;
+&lt;p&gt;&lt;strong&gt;Claim 10 (Basic composition theorem).&lt;/strong&gt; Let &lt;span class="math inline"&gt;\(M_{1 : k}\)&lt;/span&gt; be &lt;span class="math inline"&gt;\(k\)&lt;/span&gt; mechanisms with independent noises such that for each &lt;span class="math inline"&gt;\(i\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(y_{1 : i - 1}\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(M_i(y_{1 : i - 1})\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\epsilon_i, \delta_i)\)&lt;/span&gt;-dp. Then the adaptive composition of &lt;span class="math inline"&gt;\(M_{1 : k}\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\sum_i \epsilon_i, \sum_i \delta_i)\)&lt;/span&gt;-dp.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Proof (Dwork-Lei 2009, see also Dwork-Roth 2013 Appendix B.1)&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(x\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(x&amp;#39;\)&lt;/span&gt; be neighbouring points in &lt;span class="math inline"&gt;\(X\)&lt;/span&gt;. Let &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; be the adaptive composition of &lt;span class="math inline"&gt;\(M_{1 : k}\)&lt;/span&gt;. Define&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\xi_{1 : k} := M(x), \qquad \eta_{1 : k} := M(x&amp;#39;).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Let &lt;span class="math inline"&gt;\(p^i\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q^i\)&lt;/span&gt; be the laws of &lt;span class="math inline"&gt;\((\xi_{1 : i})\)&lt;/span&gt; and &lt;span class="math inline"&gt;\((\eta_{1 : i})\)&lt;/span&gt; respectively.&lt;/p&gt;
+&lt;p&gt;Let &lt;span class="math inline"&gt;\(S_1, ..., S_k \subset Y\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(T_i := \prod_{j = 1 : i} S_j\)&lt;/span&gt;. We use two tricks.&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;&lt;p&gt;Since &lt;span class="math inline"&gt;\(\xi_i | \xi_{&amp;lt; i} = y_{&amp;lt; i}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\eta_i | \eta_{&amp;lt; i} = y_{&amp;lt; i}\)&lt;/span&gt; are &lt;span class="math inline"&gt;\((\epsilon_i, \delta_i)\)&lt;/span&gt;-ind, and a probability is no greater than &lt;span class="math inline"&gt;\(1\)&lt;/span&gt;, &lt;span class="math display"&gt;\[\begin{aligned}
+\mathbb P(\xi_i \in S_i | \xi_{&amp;lt; i} = y_{&amp;lt; i}) &amp;amp;\le (e^{\epsilon_i} \mathbb P(\eta_i \in S_i | \eta_{&amp;lt; i} = y_{&amp;lt; i}) + \delta_i) \wedge 1 \\
+ &amp;amp;\le (e^{\epsilon_i} \mathbb P(\eta_i \in S_i | \eta_{&amp;lt; i} = y_{&amp;lt; i}) + \delta_i) \wedge (1 + \delta_i) \\
+ &amp;amp;= (e^{\epsilon_i} \mathbb P(\eta_i \in S_i | \eta_{&amp;lt; i} = y_{&amp;lt; i}) \wedge 1) + \delta_i
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;Given &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; that are &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-ind, define &lt;span class="math display"&gt;\[\mu(x) = (p(x) - e^\epsilon q(x))_+.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;We have &lt;span class="math display"&gt;\[\mu(S) \le \delta \qquad \forall S.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;In the following we define &lt;span class="math inline"&gt;\(\mu^{i - 1} = (p^{i - 1} - e^{\epsilon_1 + ... + \epsilon_{i - 1}} q^{i - 1})_+\)&lt;/span&gt; for the same purpose.&lt;/p&gt;&lt;/li&gt;
+&lt;/ol&gt;
+&lt;p&gt;We use an inductive argument to prove the theorem:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\mathbb P(\xi_{\le i} \in T_i) &amp;amp;= \int_{T_{i - 1}} \mathbb P(\xi_i \in S_i | \xi_{&amp;lt; i} = y_{&amp;lt; i}) p^{i - 1} (y_{&amp;lt; i}) dy_{&amp;lt; i} \\
+&amp;amp;\le \int_{T_{i - 1}} (e^{\epsilon_i} \mathbb P(\eta_i \in S_i | \eta_{&amp;lt; i} = y_{&amp;lt; i}) \wedge 1) p^{i - 1}(y_{&amp;lt; i}) dy_{&amp;lt; i} + \delta_i\\
+&amp;amp;\le \int_{T_{i - 1}} (e^{\epsilon_i} \mathbb P(\eta_i \in S_i | \eta_{&amp;lt; i} = y_{&amp;lt; i}) \wedge 1) (e^{\epsilon_1 + ... + \epsilon_{i - 1}} q^{i - 1}(y_{&amp;lt; i}) + \mu^{i - 1} (y_{&amp;lt; i})) dy_{&amp;lt; i} + \delta_i\\
+&amp;amp;\le \int_{T_{i - 1}} e^{\epsilon_i} \mathbb P(\eta_i \in S_i | \eta_{&amp;lt; i} = y_{&amp;lt; i}) e^{\epsilon_1 + ... + \epsilon_{i - 1}} q^{i - 1}(y_{&amp;lt; i}) dy_{&amp;lt; i} + \mu^{i - 1}(T_{i - 1}) + \delta_i\\
+&amp;amp;\le e^{\epsilon_1 + ... + \epsilon_i} \mathbb P(\eta_{\le i} \in T_i) + \delta_1 + ... + \delta_{i - 1} + \delta_i.\\
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;In the second line we use Trick 1; in the third line we use the induction assumption; in the fourth line we multiply the first term in the first bracket with the first term in the second bracket, and the second term (i.e. &lt;span class="math inline"&gt;\(1\)&lt;/span&gt;) in the first bracket with the second term in the second bracket (i.e. the &lt;span class="math inline"&gt;\(\mu\)&lt;/span&gt; term); in the last line we use Trick 2.&lt;/p&gt;
+&lt;p&gt;The base case &lt;span class="math inline"&gt;\(i = 1\)&lt;/span&gt; is true since &lt;span class="math inline"&gt;\(M_1\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\epsilon_1, \delta_1)\)&lt;/span&gt;-dp. &lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
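+&lt;p&gt;In code, Claim 10 amounts to simply adding up budgets (a trivial sketch of a privacy accountant):&lt;/p&gt;

```python
def basic_composition(budgets):
    """Claim 10: composing (eps_i, delta_i)-dp mechanisms (adaptively or
    independently) yields (sum eps_i, sum delta_i)-dp."""
    eps = sum(e for e, _ in budgets)
    delta = sum(d for _, d in budgets)
    return eps, delta

# e.g. three (0.1, 1e-6)-dp mechanisms compose to roughly (0.3, 3e-6)-dp
print(basic_composition([(0.1, 1e-6)] * 3))
```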
+&lt;p&gt;To prove the advanced composition theorem, we start with some lemmas.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 11&lt;/strong&gt;. If &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; are &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-ind, then&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[D(p || q) + D(q || p) \le \epsilon(e^\epsilon - 1).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt;. Since &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; are &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-ind, we have &lt;span class="math inline"&gt;\(|\log p(x) - \log q(x)| \le \epsilon\)&lt;/span&gt; for all &lt;span class="math inline"&gt;\(x\)&lt;/span&gt;. Let &lt;span class="math inline"&gt;\(S := \{x: p(x) &amp;gt; q(x)\}\)&lt;/span&gt;. Then we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+D(p || q) + D(q || p) &amp;amp;= \int (p(x) - q(x)) (\log p(x) - \log q(x)) dx\\
+&amp;amp;= \int_S (p(x) - q(x)) (\log p(x) - \log q(x)) dx + \int_{S^c} (q(x) - p(x)) (\log q(x) - \log p(x)) dx\\
+&amp;amp;\le \epsilon(\int_S p(x) - q(x) dx + \int_{S^c} q(x) - p(x) dx)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Since on &lt;span class="math inline"&gt;\(S\)&lt;/span&gt; we have &lt;span class="math inline"&gt;\(q(x) \le p(x) \le e^\epsilon q(x)\)&lt;/span&gt;, and on &lt;span class="math inline"&gt;\(S^c\)&lt;/span&gt; we have &lt;span class="math inline"&gt;\(p(x) \le q(x) \le e^\epsilon p(x)\)&lt;/span&gt;, we obtain&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[D(p || q) + D(q || p) \le \epsilon \int_S (1 - e^{-\epsilon}) p(x) dx + \epsilon \int_{S^c} (e^{\epsilon} - 1) p(x) dx \le \epsilon (e^{\epsilon} - 1),\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where in the last step we use &lt;span class="math inline"&gt;\(e^\epsilon - 1 \ge 1 - e^{- \epsilon}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(p(S) + p(S^c) = 1\)&lt;/span&gt;. &lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 12&lt;/strong&gt;. If &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; are &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-ind, then&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[D(p || q) \le a(\epsilon) \ge D(q || p),\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[a(\epsilon) = \epsilon (e^\epsilon - 1) 1_{\epsilon \le \log 2} + \epsilon 1_{\epsilon &amp;gt; \log 2} \le (\log 2)^{-1} \epsilon^2 1_{\epsilon \le \log 2} + \epsilon 1_{\epsilon &amp;gt; \log 2}. \qquad (6.98)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt;. Since &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; are &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-ind, we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[D(p || q) = \mathbb E_{\xi \sim p} \log {p(\xi) \over q(\xi)} \le \max_y {\log p(y) \over \log q(y)} \le \epsilon.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Comparing the quantity in Claim 11 (&lt;span class="math inline"&gt;\(\epsilon(e^\epsilon - 1)\)&lt;/span&gt;) with the quantity above (&lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;), we arrive at the conclusion. &lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 13 (Hoeffding's Inequality)&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(L_i\)&lt;/span&gt; be independent random variables with &lt;span class="math inline"&gt;\(|L_i| \le b\)&lt;/span&gt;, and let &lt;span class="math inline"&gt;\(L = L_1 + ... + L_k\)&lt;/span&gt;, then for &lt;span class="math inline"&gt;\(t &amp;gt; 0\)&lt;/span&gt;,&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(L - \mathbb E L \ge t) \le \exp(- {t^2 \over 2 k b^2}).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 14 (Advanced Independent Composition Theorem)&lt;/strong&gt; (&lt;span class="math inline"&gt;\(\delta = 0\)&lt;/span&gt;). Fix &lt;span class="math inline"&gt;\(0 &amp;lt; \beta &amp;lt; 1\)&lt;/span&gt;. Let &lt;span class="math inline"&gt;\(M_1, ..., M_k\)&lt;/span&gt; be &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-dp, then the independent composition &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; of &lt;span class="math inline"&gt;\(M_{1 : k}\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((k a(\epsilon) + \sqrt{2 k \log \beta^{-1}} \epsilon, \beta)\)&lt;/span&gt;-dp.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Remark&lt;/strong&gt;. By (6.98) we know that &lt;span class="math inline"&gt;\(k a(\epsilon) + \sqrt{2 k \log \beta^{-1}} \epsilon = \sqrt{2 k \log \beta^{-1}} \epsilon + k O(\epsilon^2)\)&lt;/span&gt; when &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt; is sufficiently small, in which case the leading term is of order &lt;span class="math inline"&gt;\(O(\sqrt k \epsilon)\)&lt;/span&gt; and we save a &lt;span class="math inline"&gt;\(\sqrt k\)&lt;/span&gt; in the &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-part compared to the Basic Composition Theorem (Claim 10).&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Remark&lt;/strong&gt;. In practice one can try different choices of &lt;span class="math inline"&gt;\(\beta\)&lt;/span&gt; and settle with the one that gives the best privacy guarantee. See the discussions at the end of &lt;a href="/posts/2019-03-14-great-but-manageable-expectations.html"&gt;Part 2 of this post&lt;/a&gt;.&lt;/p&gt;
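+&lt;p&gt;To see the saving concretely, here is a sketch comparing the &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-parts of the basic and advanced guarantees (the values of &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(k\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\beta\)&lt;/span&gt; are arbitrary):&lt;/p&gt;

```python
import math

def a(eps):
    # the function a(eps) from (6.98)
    return eps * (math.exp(eps) - 1) if eps <= math.log(2) else eps

def advanced_eps(eps, k, beta):
    # eps-part of Claim 14: k a(eps) + sqrt(2 k log(1/beta)) eps
    return k * a(eps) + math.sqrt(2 * k * math.log(1 / beta)) * eps

eps, k, beta = 0.01, 1000, 1e-5
print(advanced_eps(eps, k, beta), "vs basic:", k * eps)
```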
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(p_i\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(q_i\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; be the laws of &lt;span class="math inline"&gt;\(M_i(x)\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(M_i(x&amp;#39;)\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(M(x)\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(M(x&amp;#39;)\)&lt;/span&gt; respectively. By Claim 12,&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb E L_i = D(p_i || q_i) \le a(\epsilon),\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(L_i := L(p_i || q_i)\)&lt;/span&gt;. Due to &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-ind also have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[|L_i| \le \epsilon.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Therefore, by Hoeffding's Inequality,&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(L - k a(\epsilon) \ge t) \le \mathbb P(L - \mathbb E L \ge t) \le \exp(- t^2 / 2 k \epsilon^2),\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(L := \sum_i L_i = L(p || q)\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Plugging in &lt;span class="math inline"&gt;\(t = \sqrt{2 k \epsilon^2 \log \beta^{-1}}\)&lt;/span&gt;, we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(L(p || q) \le k a(\epsilon) + \sqrt{2 k \epsilon^2 \log \beta^{-1}}) \ge 1 - \beta.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Similarly we also have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(L(q || p) \le k a(\epsilon) + \sqrt{2 k \epsilon^2 \log \beta^{-1}}) \ge 1 - \beta.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;By Claim 1 we arrive at the conclusion. &lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 15 (Azuma's Inequality)&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(X_{0 : k}\)&lt;/span&gt; be a supermartingale. If &lt;span class="math inline"&gt;\(|X_i - X_{i - 1}| \le b\)&lt;/span&gt;, then&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(X_k - X_0 \ge t) \le \exp(- {t^2 \over 2 k b^2}).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Azuma's Inequality implies a slightly weaker version of Hoeffding's Inequality. To see this, let &lt;span class="math inline"&gt;\(L_{1 : k}\)&lt;/span&gt; be independent variables with &lt;span class="math inline"&gt;\(|L_i| \le b\)&lt;/span&gt;. Let &lt;span class="math inline"&gt;\(X_i = \sum_{j = 1 : i} L_j - \mathbb E L_j\)&lt;/span&gt;. Then &lt;span class="math inline"&gt;\(X_{0 : k}\)&lt;/span&gt; is a martingale, and&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[| X_i - X_{i - 1} | = | L_i - \mathbb E L_i | \le 2 b,\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;since &lt;span class="math inline"&gt;\(\|L_i\|_1 \le \|L_i\|_\infty\)&lt;/span&gt;. Hence by Azuma's Inequality,&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(L - \mathbb E L \ge t) \le \exp(- {t^2 \over 8 k b^2}).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Of course here we have made no assumption on &lt;span class="math inline"&gt;\(\mathbb E L_i\)&lt;/span&gt;. If instead we have some bound for the expectation, say &lt;span class="math inline"&gt;\(|\mathbb E L_i| \le a\)&lt;/span&gt;, then by the same derivation we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(L - \mathbb E L \ge t) \le \exp(- {t^2 \over 2 k (a + b)^2}).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;It is not hard to see that Azuma is to Hoeffding as adaptive composition is to independent composition. Indeed, we can use Azuma's Inequality to prove the Advanced Adaptive Composition Theorem for &lt;span class="math inline"&gt;\(\delta = 0\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 16 (Advanced Adaptive Composition Theorem)&lt;/strong&gt; (&lt;span class="math inline"&gt;\(\delta = 0\)&lt;/span&gt;). Let &lt;span class="math inline"&gt;\(\beta &amp;gt; 0\)&lt;/span&gt;. Let &lt;span class="math inline"&gt;\(M_{1 : k}\)&lt;/span&gt; be &lt;span class="math inline"&gt;\(k\)&lt;/span&gt; mechanisms with independent noises such that for each &lt;span class="math inline"&gt;\(i\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(y_{1 : i - 1}\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(M_i(y_{1 : i - 1})\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\epsilon, 0)\)&lt;/span&gt;-dp. Then the adaptive composition of &lt;span class="math inline"&gt;\(M_{1 : k}\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((k a(\epsilon) + \sqrt{2 k \log \beta^{-1}} (\epsilon + a(\epsilon)), \beta)\)&lt;/span&gt;-dp.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt;. As before, let &lt;span class="math inline"&gt;\(\xi_{1 : k} \overset{d}{=} M(x)\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\eta_{1 : k} \overset{d}{=} M(x&amp;#39;)\)&lt;/span&gt;, where &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; is the adaptive composition of &lt;span class="math inline"&gt;\(M_{1 : k}\)&lt;/span&gt;. Let &lt;span class="math inline"&gt;\(p_i\)&lt;/span&gt; (resp. &lt;span class="math inline"&gt;\(q_i\)&lt;/span&gt;) be the law of &lt;span class="math inline"&gt;\(\xi_i | \xi_{&amp;lt; i}\)&lt;/span&gt; (resp. &lt;span class="math inline"&gt;\(\eta_i | \eta_{&amp;lt; i}\)&lt;/span&gt;). Let &lt;span class="math inline"&gt;\(p^i\)&lt;/span&gt; (resp. &lt;span class="math inline"&gt;\(q^i\)&lt;/span&gt;) be the law of &lt;span class="math inline"&gt;\(\xi_{\le i}\)&lt;/span&gt; (resp. &lt;span class="math inline"&gt;\(\eta_{\le i}\)&lt;/span&gt;). We want to construct a supermartingale &lt;span class="math inline"&gt;\(X\)&lt;/span&gt;. To this end, let&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[X_i = \log {p^i(\xi_{\le i}) \over q^i(\xi_{\le i})} - i a(\epsilon) \]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;We show that &lt;span class="math inline"&gt;\((X_i)\)&lt;/span&gt; is a supermartingale:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\mathbb E(X_i - X_{i - 1} | X_{i - 1}) &amp;amp;= \mathbb E \left(\log {p_i (\xi_i | \xi_{&amp;lt; i}) \over q_i (\xi_i | \xi_{&amp;lt; i})} - a(\epsilon) | \log {p^{i - 1} (\xi_{&amp;lt; i}) \over q^{i - 1} (\xi_{&amp;lt; i})}\right) \\
+&amp;amp;= \mathbb E \left( \mathbb E \left(\log {p_i (\xi_i | \xi_{&amp;lt; i}) \over q_i (\xi_i | \xi_{&amp;lt; i})} | \xi_{&amp;lt; i}\right) | \log {p^{i - 1} (\xi_{&amp;lt; i}) \over q^{i - 1} (\xi_{&amp;lt; i})}\right) - a(\epsilon) \\
+&amp;amp;= \mathbb E \left( D(p_i (\cdot | \xi_{&amp;lt; i}) || q_i (\cdot | \xi_{&amp;lt; i})) | \log {p^{i - 1} (\xi_{&amp;lt; i}) \over q^{i - 1} (\xi_{&amp;lt; i})}\right) - a(\epsilon) \\
+&amp;amp;\le 0,
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;since by Claim 12 &lt;span class="math inline"&gt;\(D(p_i(\cdot | y_{&amp;lt; i}) || q_i(\cdot | y_{&amp;lt; i})) \le a(\epsilon)\)&lt;/span&gt; for all &lt;span class="math inline"&gt;\(y_{&amp;lt; i}\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Since&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[| X_i - X_{i - 1} | = | \log {p_i(\xi_i | \xi_{&amp;lt; i}) \over q_i(\xi_i | \xi_{&amp;lt; i})} - a(\epsilon) | \le \epsilon + a(\epsilon),\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;by Azuma's Inequality,&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(\log {p^k(\xi_{1 : k}) \over q^k(\xi_{1 : k})} \ge k a(\epsilon) + t) \le \exp(- {t^2 \over 2 k (\epsilon + a(\epsilon))^2}). \qquad(6.99)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Letting &lt;span class="math inline"&gt;\(t = \sqrt{2 k \log \beta^{-1}} (\epsilon + a(\epsilon))\)&lt;/span&gt;, we are done. &lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 17 (Advanced Independent Composition Theorem)&lt;/strong&gt;. Fix &lt;span class="math inline"&gt;\(0 &amp;lt; \beta &amp;lt; 1\)&lt;/span&gt;. Let &lt;span class="math inline"&gt;\(M_1, ..., M_k\)&lt;/span&gt; be &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp mechanisms. Then the independent composition &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; of &lt;span class="math inline"&gt;\(M_{1 : k}\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((k a(\epsilon) + \sqrt{2 k \log \beta^{-1}} \epsilon, k \delta + \beta)\)&lt;/span&gt;-dp.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt;. By Claim 4, there exist events &lt;span class="math inline"&gt;\(E_{1 : k}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(F_{1 : k}\)&lt;/span&gt; such that&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;The laws &lt;span class="math inline"&gt;\(p_{i | E_i}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q_{i | F_i}\)&lt;/span&gt; are &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-ind.&lt;/li&gt;
+&lt;li&gt;&lt;span class="math inline"&gt;\(\mathbb P(E_i), \mathbb P(F_i) \ge 1 - \delta\)&lt;/span&gt;.&lt;/li&gt;
+&lt;/ol&gt;
+&lt;p&gt;Let &lt;span class="math inline"&gt;\(E := \bigcap E_i\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(F := \bigcap F_i\)&lt;/span&gt;, then they both have probability at least &lt;span class="math inline"&gt;\(1 - k \delta\)&lt;/span&gt;, and &lt;span class="math inline"&gt;\(p_{i | E}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q_{i | F}\)&lt;/span&gt; are &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-ind.&lt;/p&gt;
+&lt;p&gt;By Claim 14, &lt;span class="math inline"&gt;\(p_{|E}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q_{|F}\)&lt;/span&gt; are &lt;span class="math inline"&gt;\((\epsilon&amp;#39; := k a(\epsilon) + \sqrt{2 k \epsilon^2 \log \beta^{-1}}, \beta)\)&lt;/span&gt;-ind. Let us shrink the bigger event between &lt;span class="math inline"&gt;\(E\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(F\)&lt;/span&gt; so that they have equal probabilities. Then&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+p (S) &amp;amp;\le p_{|E}(S) \mathbb P(E) + \mathbb P(E^c) \\
+&amp;amp;\le (e^{\epsilon&amp;#39;} q_{|F}(S) + \beta) \mathbb P(F) + k \delta\\
+&amp;amp;\le e^{\epsilon&amp;#39;} q(S) + \beta + k \delta.
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 18 (Advanced Adaptive Composition Theorem)&lt;/strong&gt;. Fix &lt;span class="math inline"&gt;\(0 &amp;lt; \beta &amp;lt; 1\)&lt;/span&gt;. Let &lt;span class="math inline"&gt;\(M_{1 : k}\)&lt;/span&gt; be &lt;span class="math inline"&gt;\(k\)&lt;/span&gt; mechanisms with independent noises such that for each &lt;span class="math inline"&gt;\(i\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(y_{1 : i}\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(M_i(y_{1 : i})\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp. Then the adaptive composition of &lt;span class="math inline"&gt;\(M_{1 : k}\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((k a(\epsilon) + \sqrt{2 k \log \beta^{-1}} (\epsilon + a(\epsilon)), \beta + k \delta)\)&lt;/span&gt;-dp.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt;. By Claim 5, there exist events &lt;span class="math inline"&gt;\(E_{1 : k}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(F_{1 : k}\)&lt;/span&gt; such that&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;The laws &lt;span class="math inline"&gt;\(p_{i | E_i}(\cdot | y_{&amp;lt; i})\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q_{i | F_i}(\cdot | y_{&amp;lt; i})\)&lt;/span&gt; are &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-ind for all &lt;span class="math inline"&gt;\(y_{&amp;lt; i}\)&lt;/span&gt;.&lt;/li&gt;
+&lt;li&gt;&lt;span class="math inline"&gt;\(\mathbb P(E_i | y_{&amp;lt; i}), \mathbb P(F_i | y_{&amp;lt; i}) \ge 1 - \delta\)&lt;/span&gt; for all &lt;span class="math inline"&gt;\(y_{&amp;lt; i}\)&lt;/span&gt;.&lt;/li&gt;
+&lt;/ol&gt;
+&lt;p&gt;Let &lt;span class="math inline"&gt;\(E := \bigcap E_i\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(F := \bigcap F_i\)&lt;/span&gt;, then they both have probability at least &lt;span class="math inline"&gt;\(1 - k \delta\)&lt;/span&gt;, and &lt;span class="math inline"&gt;\(p_{i | E}(\cdot | y_{&amp;lt; i})\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q_{i | F}(\cdot | y_{&amp;lt; i})\)&lt;/span&gt; are &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-ind.&lt;/p&gt;
+&lt;p&gt;By Advanced Adaptive Composition Theorem (&lt;span class="math inline"&gt;\(\delta = 0\)&lt;/span&gt;), &lt;span class="math inline"&gt;\(p_{|E}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q_{|F}\)&lt;/span&gt; are &lt;span class="math inline"&gt;\((\epsilon&amp;#39; := k a(\epsilon) + \sqrt{2 k \log \beta^{-1}} (\epsilon + a(\epsilon)), \beta)\)&lt;/span&gt;-ind.&lt;/p&gt;
+&lt;p&gt;The rest is the same as in the proof of Claim 17. &lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
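+&lt;p&gt;As a numerical aside (not part of the original derivation), the advanced guarantee is easy to evaluate. The sketch below assumes &lt;span class="math inline"&gt;\(a(\epsilon) = \epsilon (e^\epsilon - 1)\)&lt;/span&gt;, one common bound on the KL divergence between &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-ind laws; the exact &lt;span class="math inline"&gt;\(a\)&lt;/span&gt; used in this post may differ.&lt;/p&gt;

```python
import math

def advanced_adaptive_epsilon(k, eps, beta, a=None):
    """Epsilon part of the Claim 18 guarantee:
    k a(eps) + sqrt(2 k log(1 / beta)) (eps + a(eps))."""
    if a is None:
        # Assumption: a(eps) = eps (e^eps - 1); substitute the post's a if different.
        a = lambda e: e * (math.exp(e) - 1)
    return k * a(eps) + math.sqrt(2 * k * math.log(1 / beta)) * (eps + a(eps))

# For many small-epsilon compositions the sqrt(k) term beats
# basic composition's k * eps.
k, eps, beta = 1000, 0.01, 1e-5
print(advanced_adaptive_epsilon(k, eps, beta), k * eps)
```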
+&lt;h2 id="subsampling"&gt;Subsampling&lt;/h2&gt;
+&lt;p&gt;Stochastic gradient descent is like gradient descent, but with random subsampling.&lt;/p&gt;
+&lt;p&gt;Recall we have been considering databases in the space &lt;span class="math inline"&gt;\(Z^m\)&lt;/span&gt;. Let &lt;span class="math inline"&gt;\(n &amp;lt; m\)&lt;/span&gt; be a positive integer, &lt;span class="math inline"&gt;\(\mathcal I := \{I \subset [m]: |I| = n\}\)&lt;/span&gt; be the set of subsets of &lt;span class="math inline"&gt;\([m]\)&lt;/span&gt; of size &lt;span class="math inline"&gt;\(n\)&lt;/span&gt;, and &lt;span class="math inline"&gt;\(\gamma\)&lt;/span&gt; a random subset sampled uniformly from &lt;span class="math inline"&gt;\(\mathcal I\)&lt;/span&gt;. Let &lt;span class="math inline"&gt;\(r = {n \over m}\)&lt;/span&gt;, which we call the subsampling rate. Then we may add a subsampling module to the noisy gradient descent algorithm (6.97) considered before:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\theta_{t} = \theta_{t - 1} - \alpha n^{-1} \sum_{i \in \gamma} \nabla_\theta h_\theta(x_i) |_{\theta = \theta_{t - 1}} + \zeta_t. \qquad (7)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;It turns out subsampling has an amplification effect on privacy.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim 19 (Ullman 2017)&lt;/strong&gt;. Fix &lt;span class="math inline"&gt;\(r \in [0, 1]\)&lt;/span&gt;. Let &lt;span class="math inline"&gt;\(n \le m\)&lt;/span&gt; be two nonnegative integers with &lt;span class="math inline"&gt;\(n = r m\)&lt;/span&gt;. Let &lt;span class="math inline"&gt;\(N\)&lt;/span&gt; be an &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp mechanism on &lt;span class="math inline"&gt;\(X^n\)&lt;/span&gt;. Define mechanism &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; on &lt;span class="math inline"&gt;\(X^m\)&lt;/span&gt; by&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[M(x) = N(x_\gamma)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Then &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\log (1 + r(e^\epsilon - 1)), r \delta)\)&lt;/span&gt;-dp.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Remark&lt;/strong&gt;. Some seem to cite Kasiviswanathan-Lee-Nissim-Raskhodnikova-Smith 2005 for this result, but it is not clear to me how it appears there.&lt;/p&gt;
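+&lt;p&gt;The amplified parameters in Claim 19 take one line to compute; for small &lt;span class="math inline"&gt;\(r\)&lt;/span&gt; the new &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt; behaves like &lt;span class="math inline"&gt;\(r (e^\epsilon - 1)\)&lt;/span&gt;. A minimal sketch:&lt;/p&gt;

```python
import math

def subsampled_epsilon(eps, r):
    """Claim 19: an (eps, delta)-dp mechanism run on a uniform
    r-fraction subsample is (log(1 + r * (e^eps - 1)), r * delta)-dp."""
    return math.log(1 + r * (math.exp(eps) - 1))

# For small r the amplified epsilon is close to r * (e^eps - 1).
print(subsampled_epsilon(1.0, 0.01))
```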
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(x, x&amp;#39; \in X^m\)&lt;/span&gt; such that they differ by one row &lt;span class="math inline"&gt;\(x_i \neq x_i&amp;#39;\)&lt;/span&gt;. Naturally we would like to consider the cases where the index &lt;span class="math inline"&gt;\(i\)&lt;/span&gt; is picked and the ones where it is not separately. Let &lt;span class="math inline"&gt;\(\mathcal I_\in\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\mathcal I_\notin\)&lt;/span&gt; be these two cases:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\mathcal I_\in = \{J \in \mathcal I: i \in J\}\\
+\mathcal I_\notin = \{J \in \mathcal I: i \notin J\}\\
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;We will use these notations later. Let &lt;span class="math inline"&gt;\(A\)&lt;/span&gt; be the event &lt;span class="math inline"&gt;\(\{\gamma \ni i\}\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Let &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; be the laws of &lt;span class="math inline"&gt;\(M(x)\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(M(x&amp;#39;)\)&lt;/span&gt; respectively. We collect some useful facts about them. First due to &lt;span class="math inline"&gt;\(N\)&lt;/span&gt; being &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp,&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[p_{|A}(S) \le e^\epsilon q_{|A}(S) + \delta.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Also,&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[p_{|A}(S) \le e^\epsilon p_{|A^c}(S) + \delta.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;To see this, note that being conditional laws, &lt;span class="math inline"&gt;\(p_{|A}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(p_{|A^c}\)&lt;/span&gt; are averages of laws over &lt;span class="math inline"&gt;\(\mathcal I_\in\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\mathcal I_\notin\)&lt;/span&gt; respectively:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+p_{|A}(S) = |\mathcal I_\in|^{-1} \sum_{I \in \mathcal I_\in} \mathbb P(N(x_I) \in S)\\
+p_{|A^c}(S) = |\mathcal I_\notin|^{-1} \sum_{J \in \mathcal I_\notin} \mathbb P(N(x_J) \in S).
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Now we want to pair the &lt;span class="math inline"&gt;\(I\)&lt;/span&gt;'s in &lt;span class="math inline"&gt;\(\mathcal I_\in\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(J\)&lt;/span&gt;'s in &lt;span class="math inline"&gt;\(\mathcal I_\notin\)&lt;/span&gt; so that they differ by one index only, which means &lt;span class="math inline"&gt;\(d(x_I, x_J) = 1\)&lt;/span&gt;. Formally, this means we want to consider the set:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathcal D := \{(I, J) \in \mathcal I_\in \times \mathcal I_\notin: |I \cap J| = n - 1\}.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;We may observe by trying out some simple cases that every &lt;span class="math inline"&gt;\(I \in \mathcal I_\in\)&lt;/span&gt; is paired with &lt;span class="math inline"&gt;\(m - n\)&lt;/span&gt; elements in &lt;span class="math inline"&gt;\(\mathcal I_\notin\)&lt;/span&gt; (swap &lt;span class="math inline"&gt;\(i\)&lt;/span&gt; for one of the &lt;span class="math inline"&gt;\(m - n\)&lt;/span&gt; indices outside &lt;span class="math inline"&gt;\(I\)&lt;/span&gt;), and every &lt;span class="math inline"&gt;\(J \in \mathcal I_\notin\)&lt;/span&gt; is paired with &lt;span class="math inline"&gt;\(n\)&lt;/span&gt; elements in &lt;span class="math inline"&gt;\(\mathcal I_\in\)&lt;/span&gt;. Therefore&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[p_{|A}(S) = |\mathcal D|^{-1} \sum_{(I, J) \in \mathcal D} \mathbb P(N(x_I \in S)) \le |\mathcal D|^{-1} \sum_{(I, J) \in \mathcal D} (e^\epsilon \mathbb P(N(x_J \in S)) + \delta) = e^\epsilon p_{|A^c} (S) + \delta.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Since &lt;span class="math inline"&gt;\(\gamma\)&lt;/span&gt; is sampled uniformly from &lt;span class="math inline"&gt;\(\mathcal I\)&lt;/span&gt;, each of the &lt;span class="math inline"&gt;\(m\)&lt;/span&gt; indices is included with probability &lt;span class="math inline"&gt;\(n / m = r\)&lt;/span&gt;, so we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(A) = r.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Let &lt;span class="math inline"&gt;\(t \in [0, 1]\)&lt;/span&gt; be a parameter to be determined. We may write&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+p(S) &amp;amp;= r p_{|A} (S) + (1 - r) p_{|A^c} (S)\\
+&amp;amp;\le r(t e^\epsilon q_{|A}(S) + (1 - t) e^\epsilon q_{|A^c}(S) + \delta) + (1 - r) q_{|A^c} (S)\\
+&amp;amp;= rte^\epsilon q_{|A}(S) + (r(1 - t) e^\epsilon + (1 - r)) q_{|A^c} (S) + r \delta\\
+&amp;amp;= te^\epsilon r q_{|A}(S) + \left({r \over 1 - r}(1 - t) e^\epsilon + 1\right) (1 - r) q_{|A^c} (S) + r \delta \\
+&amp;amp;\le \left(t e^\epsilon \wedge \left({r \over 1 - r} (1 - t) e^\epsilon + 1\right)\right) q(S) + r \delta. \qquad (7.5)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;We can see from the last line that the best bound we can get is when&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[t e^\epsilon = {r \over 1 - r} (1 - t) e^\epsilon + 1.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Solving this equation we obtain&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[t = r + e^{- \epsilon} - r e^{- \epsilon}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;and plugging this in (7.5) we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[p(S) \le (1 + r(e^\epsilon - 1)) q(S) + r \delta.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Since &lt;span class="math inline"&gt;\(\log (1 + x) &amp;lt; x\)&lt;/span&gt; for &lt;span class="math inline"&gt;\(x &amp;gt; 0\)&lt;/span&gt;, we can rewrite the conclusion of the Claim as &lt;span class="math inline"&gt;\((r(e^\epsilon - 1), r \delta)\)&lt;/span&gt;-dp. Furthermore, if &lt;span class="math inline"&gt;\(\epsilon &amp;lt; \alpha\)&lt;/span&gt; for some &lt;span class="math inline"&gt;\(\alpha\)&lt;/span&gt;, we can rewrite it as &lt;span class="math inline"&gt;\((r \alpha^{-1} (e^\alpha - 1) \epsilon, r \delta)\)&lt;/span&gt;-dp or &lt;span class="math inline"&gt;\((O(r \epsilon), r \delta)\)&lt;/span&gt;-dp.&lt;/p&gt;
+&lt;p&gt;Let &lt;span class="math inline"&gt;\(\epsilon &amp;lt; 1\)&lt;/span&gt;. We see that if the mechanism &lt;span class="math inline"&gt;\(N\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((\epsilon, \delta)\)&lt;/span&gt;-dp on &lt;span class="math inline"&gt;\(X^n\)&lt;/span&gt;, then &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; is &lt;span class="math inline"&gt;\((2 r \epsilon, r \delta)\)&lt;/span&gt;-dp, and if we run it over &lt;span class="math inline"&gt;\(k / r\)&lt;/span&gt; minibatches, by the Advanced Adaptive Composition theorem, we have &lt;span class="math inline"&gt;\((\sqrt{2 k r \log \beta^{-1}} \epsilon + 2 k r \epsilon^2, k \delta + \beta)\)&lt;/span&gt;-dp.&lt;/p&gt;
+&lt;p&gt;This is better than the privacy guarantee without subsampling, where we run over &lt;span class="math inline"&gt;\(k\)&lt;/span&gt; iterations and obtain &lt;span class="math inline"&gt;\((\sqrt{2 k \log \beta^{-1}} \epsilon + 2 k \epsilon^2, k \delta + \beta)\)&lt;/span&gt;-dp. So with subsampling we gain an extra &lt;span class="math inline"&gt;\(\sqrt r\)&lt;/span&gt; in the &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt;-part of the privacy guarantee. But a smaller subsampling rate means a smaller minibatch size, which results in bigger variance, so there is a trade-off here.&lt;/p&gt;
+&lt;p&gt;Finally we define the differentially private stochastic gradient descent (DP-SGD) with the Gaussian mechanism (Abadi-Chu-Goodfellow-McMahan-Mironov-Talwar-Zhang 2016), which is (7) with the noise specialised to Gaussian and an added clipping operation to bound the sensitivity of the query by a chosen &lt;span class="math inline"&gt;\(C\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\theta_{t} = \theta_{t - 1} - \alpha \left(n^{-1} \sum_{i \in \gamma} \nabla_\theta \ell(x_i; \theta) |_{\theta = \theta_{t - 1}}\right)_{\text{Clipped at }C / 2} + N(0, \sigma^2 C^2 I),\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[y_{\text{Clipped at } \alpha} := y / (1 \vee {\|y\|_2 \over \alpha})\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;is &lt;span class="math inline"&gt;\(y\)&lt;/span&gt; clipped to have norm at most &lt;span class="math inline"&gt;\(\alpha\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Note that the clipping in DP-SGD is much stronger than making the query have sensitivity &lt;span class="math inline"&gt;\(C\)&lt;/span&gt;. It bounds the difference between the query results of two &lt;em&gt;arbitrary&lt;/em&gt; inputs by &lt;span class="math inline"&gt;\(C\)&lt;/span&gt;, rather than only &lt;em&gt;neighbouring&lt;/em&gt; inputs.&lt;/p&gt;
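+&lt;p&gt;A minimal numpy sketch of one DP-SGD step following the displayed update (the function and argument names are made up for illustration, and the per-example gradients are assumed to be precomputed):&lt;/p&gt;

```python
import numpy as np

def clip(y, alpha):
    """y clipped at alpha: y / (1 v ||y||_2 / alpha), so ||result||_2 <= alpha."""
    return y / max(1.0, float(np.linalg.norm(y)) / alpha)

def dpsgd_step(theta, per_example_grads, lr, C, sigma, rng):
    """One step of the displayed update: average the minibatch gradients,
    clip the average at C / 2, take a gradient step, add N(0, sigma^2 C^2 I)."""
    g = clip(per_example_grads.mean(axis=0), C / 2)
    noise = rng.normal(0.0, sigma * C, size=theta.shape)
    return theta - lr * g + noise
```

+&lt;p&gt;With the average clipped at &lt;span class="math inline"&gt;\(C / 2\)&lt;/span&gt;, the query results of two arbitrary inputs differ by at most &lt;span class="math inline"&gt;\(C\)&lt;/span&gt;, as noted above.&lt;/p&gt;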
+&lt;p&gt;In &lt;a href="/posts/2019-03-14-great-but-manageable-expectations.html"&gt;Part 2 of this post&lt;/a&gt; we will use the tools developed above to discuss the privacy guarantee for DP-SGD, among other things.&lt;/p&gt;
+&lt;h2 id="references"&gt;References&lt;/h2&gt;
+&lt;ul&gt;
+&lt;li&gt;Abadi, Martín, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. “Deep Learning with Differential Privacy.” Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security - CCS’16, 2016, 308–18. &lt;a href="https://doi.org/10.1145/2976749.2978318" class="uri"&gt;https://doi.org/10.1145/2976749.2978318&lt;/a&gt;.&lt;/li&gt;
+&lt;li&gt;Dwork, Cynthia, and Aaron Roth. “The Algorithmic Foundations of Differential Privacy.” Foundations and Trends® in Theoretical Computer Science 9, no. 3–4 (2013): 211–407. &lt;a href="https://doi.org/10.1561/0400000042" class="uri"&gt;https://doi.org/10.1561/0400000042&lt;/a&gt;.&lt;/li&gt;
+&lt;li&gt;Dwork, Cynthia, Guy N. Rothblum, and Salil Vadhan. “Boosting and Differential Privacy.” In 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, 51–60. Las Vegas, NV, USA: IEEE, 2010. &lt;a href="https://doi.org/10.1109/FOCS.2010.12" class="uri"&gt;https://doi.org/10.1109/FOCS.2010.12&lt;/a&gt;.&lt;/li&gt;
+&lt;li&gt;Shiva Prasad Kasiviswanathan, Homin K. Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. “What Can We Learn Privately?” In 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS’05). Pittsburgh, PA, USA: IEEE, 2005. &lt;a href="https://doi.org/10.1109/SFCS.2005.1" class="uri"&gt;https://doi.org/10.1109/SFCS.2005.1&lt;/a&gt;.&lt;/li&gt;
+&lt;li&gt;Murtagh, Jack, and Salil Vadhan. “The Complexity of Computing the Optimal Composition of Differential Privacy.” In Theory of Cryptography, edited by Eyal Kushilevitz and Tal Malkin, 9562:157–75. Berlin, Heidelberg: Springer Berlin Heidelberg, 2016. &lt;a href="https://doi.org/10.1007/978-3-662-49096-9_7" class="uri"&gt;https://doi.org/10.1007/978-3-662-49096-9_7&lt;/a&gt;.&lt;/li&gt;
+&lt;li&gt;Ullman, Jonathan. “Solution to CS7880 Homework 1.”, 2017. &lt;a href="http://www.ccs.neu.edu/home/jullman/cs7880s17/HW1sol.pdf" class="uri"&gt;http://www.ccs.neu.edu/home/jullman/cs7880s17/HW1sol.pdf&lt;/a&gt;&lt;/li&gt;
+&lt;li&gt;Vadhan, Salil. “The Complexity of Differential Privacy.” In Tutorials on the Foundations of Cryptography, edited by Yehuda Lindell, 347–450. Cham: Springer International Publishing, 2017. &lt;a href="https://doi.org/10.1007/978-3-319-57048-8_7" class="uri"&gt;https://doi.org/10.1007/978-3-319-57048-8_7&lt;/a&gt;.&lt;/li&gt;
+&lt;/ul&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/blog-feed.xml">
+ <title type="text">Raise your ELBO</title>
+ <id>posts/2019-02-14-raise-your-elbo.html</id>
+ <updated>2019-02-14T00:00:00Z</updated>
+ <link href="posts/2019-02-14-raise-your-elbo.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;In this post I give an introduction to variational inference, which is about maximising the evidence lower bound (ELBO).&lt;/p&gt;
+&lt;p&gt;I use a top-down approach, starting with the KL divergence and the ELBO, to lay the mathematical framework of all the models in this post.&lt;/p&gt;
+&lt;p&gt;Then I define mixture models and the EM algorithm, with Gaussian mixture model (GMM), probabilistic latent semantic analysis (pLSA) and the hidden Markov model (HMM) as examples.&lt;/p&gt;
+&lt;p&gt;After that I present the fully Bayesian version of EM, also known as mean field approximation (MFA), and apply it to fully Bayesian mixture models, with fully Bayesian GMM (also known as variational GMM), latent Dirichlet allocation (LDA) and Dirichlet process mixture model (DPMM) as examples.&lt;/p&gt;
+&lt;p&gt;Then I explain stochastic variational inference, a modification of EM and MFA to improve efficiency.&lt;/p&gt;
+&lt;p&gt;Finally I talk about autoencoding variational Bayes (AEVB), a Monte-Carlo + neural network approach to raising the ELBO, exemplified by the variational autoencoder (VAE). I also show its fully Bayesian version.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Acknowledgement&lt;/strong&gt;. The following texts and resources were illuminating during the writing of this post: the Stanford CS228 notes (&lt;a href="https://ermongroup.github.io/cs228-notes/inference/variational/"&gt;1&lt;/a&gt;,&lt;a href="https://ermongroup.github.io/cs228-notes/learning/latent/"&gt;2&lt;/a&gt;), the &lt;a href="https://www.cs.tau.ac.il/~rshamir/algmb/presentations/EM-BW-Ron-16%20.pdf"&gt;Tel Aviv Algorithms in Molecular Biology slides&lt;/a&gt; (clear explanations of the connection between EM and Baum-Welch), Chapter 10 of &lt;a href="https://www.springer.com/us/book/9780387310732"&gt;Bishop's book&lt;/a&gt; (brilliant introduction to variational GMM), Section 2.5 of &lt;a href="http://cs.brown.edu/~sudderth/papers/sudderthPhD.pdf"&gt;Sudderth's thesis&lt;/a&gt; and &lt;a href="https://metacademy.org"&gt;metacademy&lt;/a&gt;. Also thanks to Josef Lindman Hörnlund for discussions. The research was done while working at KTH mathematics department.&lt;/p&gt;
+&lt;p&gt;&lt;em&gt;If you are reading on a mobile device, you may need to "request desktop site" for the equations to be properly displayed. This post is licensed under CC BY-SA and GNU FDL.&lt;/em&gt;&lt;/p&gt;
+&lt;h2 id="kl-divergence-and-elbo"&gt;KL divergence and ELBO&lt;/h2&gt;
+&lt;p&gt;Let &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; be two probability measures. The Kullback-Leibler (KL) divergence is defined as&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[D(q||p) = E_q \log{q \over p}.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;It achieves minimum &lt;span class="math inline"&gt;\(0\)&lt;/span&gt; when &lt;span class="math inline"&gt;\(p = q\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;If &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; can be further written as&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[p(x) = {w(x) \over Z}, \qquad (0)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(Z\)&lt;/span&gt; is a normaliser, then&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\log Z = D(q||p) + L(w, q), \qquad(1)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(L(w, q)\)&lt;/span&gt; is called the evidence lower bound (ELBO), defined by&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[L(w, q) = E_q \log{w \over q}. \qquad (1.25)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;From (1), we see that to minimise the nonnegative term &lt;span class="math inline"&gt;\(D(q || p)\)&lt;/span&gt;, one can maximise the ELBO.&lt;/p&gt;
+&lt;p&gt;To this end, we can simply discard &lt;span class="math inline"&gt;\(D(q || p)\)&lt;/span&gt; in (1) and obtain:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\log Z \ge L(w, q) \qquad (1.3)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;and keep in mind that the inequality becomes an equality when &lt;span class="math inline"&gt;\(q = {w \over Z}\)&lt;/span&gt;.&lt;/p&gt;
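+&lt;p&gt;For discrete distributions, identity (1) and inequality (1.3) can be checked directly; a small numerical sketch:&lt;/p&gt;

```python
import numpy as np

def kl(q, p):
    """KL divergence D(q || p) = E_q log(q / p) for discrete distributions."""
    return float(np.sum(q * np.log(q / p)))

def elbo(w, q):
    """ELBO L(w, q) = E_q log(w / q) for unnormalised weights w."""
    return float(np.sum(q * np.log(w / q)))

# Unnormalised weights w with normaliser Z, target p = w / Z,
# and an arbitrary variational distribution q.
w = np.array([2.0, 1.0, 1.0])
Z = w.sum()
p = w / Z
q = np.array([0.5, 0.25, 0.25])

# Identity (1): log Z = D(q || p) + L(w, q); inequality (1.3): L <= log Z,
# with equality iff q = p.
assert abs(np.log(Z) - (kl(q, p) + elbo(w, q))) < 1e-12
assert elbo(w, q) <= np.log(Z) + 1e-12
```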
+&lt;p&gt;It is time to define the task of variational inference (VI), also known as variational Bayes (VB).&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Definition&lt;/strong&gt;. Variational inference is concerned with maximising the ELBO &lt;span class="math inline"&gt;\(L(w, q)\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;There are mainly two versions of VI, the half Bayesian and the fully Bayesian cases. Half Bayesian VI, to which expectation-maximisation algorithms (EM) apply, instantiates (1.3) with&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+Z &amp;amp;= p(x; \theta)\\
+w &amp;amp;= p(x, z; \theta)\\
+q &amp;amp;= q(z)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;and the dummy variable &lt;span class="math inline"&gt;\(x\)&lt;/span&gt; in Equation (0) is substituted with &lt;span class="math inline"&gt;\(z\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Fully Bayesian VI, often just called VI, has the following instantiations:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+Z &amp;amp;= p(x) \\
+w &amp;amp;= p(x, z, \theta) \\
+q &amp;amp;= q(z, \theta)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;and &lt;span class="math inline"&gt;\(x\)&lt;/span&gt; in Equation (0) is substituted with &lt;span class="math inline"&gt;\((z, \theta)\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;In both cases &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt; are parameters and &lt;span class="math inline"&gt;\(z\)&lt;/span&gt; are latent variables.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Remark on the naming of things&lt;/strong&gt;. The term "variational" comes from the fact that we perform calculus of variations: maximise some functional (&lt;span class="math inline"&gt;\(L(w, q)\)&lt;/span&gt;) over a set of functions (&lt;span class="math inline"&gt;\(q\)&lt;/span&gt;). Note however, most VI / VB algorithms do not involve any techniques from calculus of variations, but only use Jensen's inequality / the fact that &lt;span class="math inline"&gt;\(D(q||p)\)&lt;/span&gt; reaches its minimum when &lt;span class="math inline"&gt;\(p = q\)&lt;/span&gt;. By this reasoning, EM is also a kind of VI, even though in the literature VI often refers to its fully Bayesian version.&lt;/p&gt;
+&lt;h2 id="em"&gt;EM&lt;/h2&gt;
+&lt;p&gt;To illustrate the EM algorithms, we first define the mixture model.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Definition (mixture model)&lt;/strong&gt;. Given dataset &lt;span class="math inline"&gt;\(x_{1 : m}\)&lt;/span&gt;, we assume the data has some underlying latent variable &lt;span class="math inline"&gt;\(z_{1 : m}\)&lt;/span&gt; that may take a value from a finite set &lt;span class="math inline"&gt;\(\{1, 2, ..., n_z\}\)&lt;/span&gt;. Assume &lt;span class="math inline"&gt;\(z_{i}\)&lt;/span&gt; is categorically distributed according to the probability vector &lt;span class="math inline"&gt;\(\pi\)&lt;/span&gt;. That is, &lt;span class="math inline"&gt;\(p(z_{i} = k; \pi) = \pi_k\)&lt;/span&gt;. Also assume &lt;span class="math inline"&gt;\(p(x_{i} | z_{i} = k; \eta) = p(x_{i}; \eta_k)\)&lt;/span&gt;. Find &lt;span class="math inline"&gt;\(\theta = (\pi, \eta)\)&lt;/span&gt; that maximises the likelihood &lt;span class="math inline"&gt;\(p(x_{1 : m}; \theta)\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Represented as a DAG (a.k.a the plate notations), the model looks like this:&lt;/p&gt;
+&lt;p&gt;&lt;img src="/assets/resources/mixture-model.png" style="width:250px" /&gt;&lt;/p&gt;
+&lt;p&gt;where the boxes with &lt;span class="math inline"&gt;\(m\)&lt;/span&gt; mean repetition for &lt;span class="math inline"&gt;\(m\)&lt;/span&gt; times, since there are &lt;span class="math inline"&gt;\(m\)&lt;/span&gt; independent pairs of &lt;span class="math inline"&gt;\((x, z)\)&lt;/span&gt;, and the same goes for &lt;span class="math inline"&gt;\(\eta\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;The direct maximisation&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\max_\theta \sum_i \log p(x_{i}; \theta) = \max_\theta \sum_i \log \int p(x_{i} | z_i; \theta) p(z_i; \theta) dz_i\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;is hard because of the integral in the log.&lt;/p&gt;
+&lt;p&gt;We can fit this problem in (1.3) by having &lt;span class="math inline"&gt;\(Z = p(x_{1 : m}; \theta)\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(w = p(z_{1 : m}, x_{1 : m}; \theta)\)&lt;/span&gt;. The plan is to update &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt; repeatedly so that &lt;span class="math inline"&gt;\(L(p(z, x; \theta_t), q(z))\)&lt;/span&gt; is non-decreasing over time &lt;span class="math inline"&gt;\(t\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Equation (1.3) at time &lt;span class="math inline"&gt;\(t\)&lt;/span&gt; for the &lt;span class="math inline"&gt;\(i\)&lt;/span&gt;th datapoint is&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\log p(x_{i}; \theta_t) \ge L(p(z_i, x_{i}; \theta_t), q(z_i)) \qquad (2)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Each timestep consists of two steps, the E-step and the M-step.&lt;/p&gt;
+&lt;p&gt;At E-step, we set&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[q(z_{i}) = p(z_{i}|x_{i}; \theta_t), \]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;to turn the inequality into equality. We denote &lt;span class="math inline"&gt;\(r_{ik} = q(z_i = k)\)&lt;/span&gt; and call them responsibilities, so the posterior &lt;span class="math inline"&gt;\(q(z_i)\)&lt;/span&gt; is a categorical distribution with parameter &lt;span class="math inline"&gt;\(r_i = r_{i, 1 : n_z}\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;At M-step, we maximise &lt;span class="math inline"&gt;\(\sum_i L(p(x_{i}, z_{i}; \theta), q(z_{i}))\)&lt;/span&gt; over &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\theta_{t + 1} &amp;amp;= \text{argmax}_\theta \sum_i L(p(x_{i}, z_{i}; \theta), p(z_{i} | x_{i}; \theta_t)) \\
+&amp;amp;= \text{argmax}_\theta \sum_i \mathbb E_{p(z_{i} | x_{i}; \theta_t)} \log p(x_{i}, z_{i}; \theta) \qquad (2.3)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;So &lt;span class="math inline"&gt;\(\sum_i L(p(x_{i}, z_{i}; \theta), q(z_i))\)&lt;/span&gt; is non-decreasing at both the E-step and the M-step.&lt;/p&gt;
+&lt;p&gt;We can see from this derivation that EM is half-Bayesian. The E-step is Bayesian because it computes the posterior of the latent variables, and the M-step is frequentist because it performs a maximum likelihood estimate of &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;It is clear that the ELBO sum converges as it is non-decreasing with an upper bound, but it is not clear whether the sum converges to the correct value, i.e. &lt;span class="math inline"&gt;\(\max_\theta p(x_{1 : m}; \theta)\)&lt;/span&gt;. In fact EM is known to get stuck in local maxima sometimes.&lt;/p&gt;
+&lt;p&gt;A different way of describing EM, which will be useful in hidden Markov model is:&lt;/p&gt;
+&lt;ul&gt;
+&lt;li&gt;&lt;p&gt;At E-step, one writes down the formula &lt;span class="math display"&gt;\[\sum_i \mathbb E_{p(z_i | x_{i}; \theta_t)} \log p(x_{i}, z_i; \theta). \qquad (2.5)\]&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;At M-step, one finds &lt;span class="math inline"&gt;\(\theta_{t + 1}\)&lt;/span&gt; to be the &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt; that maximises the above formula.&lt;/p&gt;&lt;/li&gt;
+&lt;/ul&gt;
+&lt;h3 id="gmm"&gt;GMM&lt;/h3&gt;
+&lt;p&gt;Gaussian mixture model (GMM) is an example of mixture models.&lt;/p&gt;
+&lt;p&gt;The space of the data is &lt;span class="math inline"&gt;\(\mathbb R^n\)&lt;/span&gt;. We use the hypothesis that the data is Gaussian conditioned on the latent variable:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[(x_i; \eta_k) \sim N(\mu_k, \Sigma_k),\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;so we write &lt;span class="math inline"&gt;\(\eta_k = (\mu_k, \Sigma_k)\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;During E-step, the &lt;span class="math inline"&gt;\(q(z_i)\)&lt;/span&gt; can be directly computed using Bayes’ theorem:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[r_{ik} = q(z_i = k) = \mathbb P(z_i = k | x_{i}; \theta_t)
+= {g_{\mu_{t, k}, \Sigma_{t, k}} (x_{i}) \pi_{t, k} \over \sum_{j = 1 : n_z} g_{\mu_{t, j}, \Sigma_{t, j}} (x_{i}) \pi_{t, j}},\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(g_{\mu, \Sigma} (x) = (2 \pi)^{- n / 2} (\det \Sigma)^{-1 / 2} \exp(- {1 \over 2} (x - \mu)^T \Sigma^{-1} (x - \mu))\)&lt;/span&gt; is the pdf of the Gaussian distribution &lt;span class="math inline"&gt;\(N(\mu, \Sigma)\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;During M-step, we need to compute&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\text{argmax}_{\Sigma, \mu, \pi} \sum_{i = 1 : m} \sum_{k = 1 : n_z} r_{ik} \log (g_{\mu_k, \Sigma_k}(x_{i}) \pi_k).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;This is similar to the quadratic discriminant analysis, and the solution is&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\pi_{k} &amp;amp;= {1 \over m} \sum_{i = 1 : m} r_{ik}, \\
+\mu_{k} &amp;amp;= {\sum_i r_{ik} x_{i} \over \sum_i r_{ik}}, \\
+\Sigma_{k} &amp;amp;= {\sum_i r_{ik} (x_{i} - \mu_{k}) (x_{i} - \mu_{k})^T \over \sum_i r_{ik}}.
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Remark&lt;/strong&gt;. The k-means algorithm is the &lt;span class="math inline"&gt;\(\epsilon \to 0\)&lt;/span&gt; limit of the GMM with constraints &lt;span class="math inline"&gt;\(\Sigma_k = \epsilon I\)&lt;/span&gt;. See Section 9.3.2 of Bishop 2006 for derivation. It is also briefly mentioned there that a variant in this setting where the covariance matrix is not restricted to &lt;span class="math inline"&gt;\(\epsilon I\)&lt;/span&gt; is called elliptical k-means algorithm.&lt;/p&gt;
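+&lt;p&gt;To make the GMM updates above concrete, here is a minimal NumPy sketch of the EM loop (the function name &lt;code&gt;em_gmm&lt;/code&gt; is made up; the log-domain normalisation and the small diagonal jitter are implementation choices for numerical stability, not part of the derivation):&lt;/p&gt;

```python
import numpy as np

def em_gmm(x, n_z, n_iter=50, seed=0):
    # x: (m, n) data; n_z: number of mixture components.
    m, n = x.shape
    rng = np.random.default_rng(seed)
    pi = np.full(n_z, 1.0 / n_z)
    mu = x[rng.choice(m, n_z, replace=False)]          # initialise means at data points
    sigma = np.stack([np.cov(x.T) + 1e-6 * np.eye(n)] * n_z)
    for _ in range(n_iter):
        # E-step: responsibilities r_ik via Bayes' theorem, computed in log domain
        logr = np.empty((m, n_z))
        for k in range(n_z):
            diff = x - mu[k]
            inv = np.linalg.inv(sigma[k])
            logdet = np.linalg.slogdet(sigma[k])[1]
            logg = -0.5 * (n * np.log(2 * np.pi) + logdet
                           + np.einsum('ij,jk,ik->i', diff, inv, diff))
            logr[:, k] = np.log(pi[k]) + logg
        r = np.exp(logr - logr.max(axis=1, keepdims=True))
        r /= r.sum(axis=1, keepdims=True)
        # M-step: closed-form argmax, as in the formulas above
        nk = r.sum(axis=0)
        pi = nk / m
        mu = (r.T @ x) / nk[:, None]
        for k in range(n_z):
            diff = x - mu[k]
            sigma[k] = (r[:, k, None] * diff).T @ diff / nk[k] + 1e-6 * np.eye(n)
    return pi, mu, sigma, r
```

&lt;p&gt;The E-step computes &lt;span class="math inline"&gt;\(r_{ik}\)&lt;/span&gt; exactly as in the Bayes formula above, and the M-step applies the three closed-form updates.&lt;/p&gt;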
+&lt;h3 id="smm"&gt;SMM&lt;/h3&gt;
+&lt;p&gt;As a transition to the next models to study, let us consider a simpler mixture model obtained by making one modification to GMM: change &lt;span class="math inline"&gt;\((x; \eta_k) \sim N(\mu_k, \Sigma_k)\)&lt;/span&gt; to &lt;span class="math inline"&gt;\(\mathbb P(x = w; \eta_k) = \eta_{kw}\)&lt;/span&gt;, where &lt;span class="math inline"&gt;\(\eta\)&lt;/span&gt; is a stochastic matrix and &lt;span class="math inline"&gt;\(w\)&lt;/span&gt; is an arbitrary element of the space for &lt;span class="math inline"&gt;\(x\)&lt;/span&gt;. So now the spaces of both &lt;span class="math inline"&gt;\(x\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(z\)&lt;/span&gt; are finite. We call this model the simple mixture model (SMM).&lt;/p&gt;
+&lt;p&gt;As in GMM, at E-step &lt;span class="math inline"&gt;\(r_{ik}\)&lt;/span&gt; can be explicitly computed using Bayes' theorem.&lt;/p&gt;
+&lt;p&gt;It is not hard to write down the solution to the M-step in this case:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\pi_{k} &amp;amp;= {1 \over m} \sum_i r_{ik}, \qquad (2.7)\\
+\eta_{k, w} &amp;amp;= {\sum_i r_{ik} 1_{x_i = w} \over \sum_i r_{ik}}. \qquad (2.8)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(1_{x_i = w}\)&lt;/span&gt; is the &lt;a href="https://en.wikipedia.org/wiki/Indicator_function"&gt;indicator function&lt;/a&gt;, and evaluates to &lt;span class="math inline"&gt;\(1\)&lt;/span&gt; if &lt;span class="math inline"&gt;\(x_i = w\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(0\)&lt;/span&gt; otherwise.&lt;/p&gt;
+&lt;p&gt;Two trivial variants of the SMM are the two versions of probabilistic latent semantic analysis (pLSA), which we call pLSA1 and pLSA2.&lt;/p&gt;
+&lt;p&gt;The model pLSA1 is a probabilistic version of latent semantic analysis, which is basically a simple matrix factorisation model in collaborative filtering, whereas pLSA2 has a fully Bayesian version called latent Dirichlet allocation (LDA), not to be confused with the other LDA (linear discriminant analysis).&lt;/p&gt;
+&lt;h3 id="plsa"&gt;pLSA&lt;/h3&gt;
+&lt;p&gt;The pLSA model (Hoffman 2000) is a mixture model, where the dataset now consists of pairs &lt;span class="math inline"&gt;\((d_i, x_i)_{i = 1 : m}\)&lt;/span&gt;. In natural language processing, the &lt;span class="math inline"&gt;\(x\)&lt;/span&gt; are words and the &lt;span class="math inline"&gt;\(d\)&lt;/span&gt; are documents, and a pair &lt;span class="math inline"&gt;\((d, x)\)&lt;/span&gt; represents an occurrence of word &lt;span class="math inline"&gt;\(x\)&lt;/span&gt; in document &lt;span class="math inline"&gt;\(d\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;For each datapoint &lt;span class="math inline"&gt;\((d_{i}, x_{i})\)&lt;/span&gt;,&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+p(d_i, x_i; \theta) &amp;amp;= \sum_{z_i} p(z_i; \theta) p(d_i | z_i; \theta) p(x_i | z_i; \theta) \qquad (2.91)\\
+&amp;amp;= p(d_i; \theta) \sum_{z_i} p(x_i | z_i; \theta) p (z_i | d_i; \theta) \qquad (2.92).
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Of the two formulations, (2.91) corresponds to pLSA type 1, and (2.92) corresponds to type 2.&lt;/p&gt;
+&lt;h4 id="plsa1"&gt;pLSA1&lt;/h4&gt;
+&lt;p&gt;The pLSA1 model (Hoffman 2000) is basically SMM with &lt;span class="math inline"&gt;\(x_i\)&lt;/span&gt; substituted with &lt;span class="math inline"&gt;\((d_i, x_i)\)&lt;/span&gt;, which conditioned on &lt;span class="math inline"&gt;\(z_i\)&lt;/span&gt; are independently categorically distributed:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[p(d_i = u, x_i = w | z_i = k; \theta) = p(d_i ; \xi_k) p(x_i; \eta_k) = \xi_{ku} \eta_{kw}.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;The model can be illustrated in the plate notations:&lt;/p&gt;
+&lt;p&gt;&lt;img src="/assets/resources/plsa1.png" style="width:350px" /&gt;&lt;/p&gt;
+&lt;p&gt;So the solution of the M-step is&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\pi_{k} &amp;amp;= {1 \over m} \sum_i r_{ik} \\
+\xi_{k, u} &amp;amp;= {\sum_i r_{ik} 1_{d_{i} = u} \over \sum_i r_{ik}} \\
+\eta_{k, w} &amp;amp;= {\sum_i r_{ik} 1_{x_{i} = w} \over \sum_i r_{ik}}.
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Remark&lt;/strong&gt;. pLSA1 is the probabilistic version of LSA, also known as matrix factorisation.&lt;/p&gt;
+&lt;p&gt;Let &lt;span class="math inline"&gt;\(n_d\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(n_x\)&lt;/span&gt; be the number of values &lt;span class="math inline"&gt;\(d_i\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(x_i\)&lt;/span&gt; can take.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Problem&lt;/strong&gt; (LSA). Let &lt;span class="math inline"&gt;\(R\)&lt;/span&gt; be an &lt;span class="math inline"&gt;\(n_d \times n_x\)&lt;/span&gt; matrix, and fix &lt;span class="math inline"&gt;\(s \le \min\{n_d, n_x\}\)&lt;/span&gt;. Find an &lt;span class="math inline"&gt;\(n_d \times s\)&lt;/span&gt; matrix &lt;span class="math inline"&gt;\(D\)&lt;/span&gt; and an &lt;span class="math inline"&gt;\(n_x \times s\)&lt;/span&gt; matrix &lt;span class="math inline"&gt;\(X\)&lt;/span&gt; that minimise&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[J(D, X) = \|R - D X^T\|_F,\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(\|\cdot\|_F\)&lt;/span&gt; is the Frobenius norm.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim&lt;/strong&gt;. Let &lt;span class="math inline"&gt;\(R = U \Sigma V^T\)&lt;/span&gt; be the SVD of &lt;span class="math inline"&gt;\(R\)&lt;/span&gt;, then the solution to the above problem is &lt;span class="math inline"&gt;\(D = U_s \Sigma_s^{{1 \over 2}}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(X = V_s \Sigma_s^{{1 \over 2}}\)&lt;/span&gt;, where &lt;span class="math inline"&gt;\(U_s\)&lt;/span&gt; (resp. &lt;span class="math inline"&gt;\(V_s\)&lt;/span&gt;) is the matrix of the first &lt;span class="math inline"&gt;\(s\)&lt;/span&gt; columns of &lt;span class="math inline"&gt;\(U\)&lt;/span&gt; (resp. &lt;span class="math inline"&gt;\(V\)&lt;/span&gt;) and &lt;span class="math inline"&gt;\(\Sigma_s\)&lt;/span&gt; is the &lt;span class="math inline"&gt;\(s \times s\)&lt;/span&gt; submatrix of &lt;span class="math inline"&gt;\(\Sigma\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;One can compare pLSA1 with LSA. Both procedures produce embeddings of &lt;span class="math inline"&gt;\(d\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(x\)&lt;/span&gt;: in pLSA we obtain &lt;span class="math inline"&gt;\(n_z\)&lt;/span&gt; dimensional embeddings &lt;span class="math inline"&gt;\(\xi_{\cdot, u}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\eta_{\cdot, w}\)&lt;/span&gt;, whereas in LSA we obtain &lt;span class="math inline"&gt;\(s\)&lt;/span&gt; dimensional embeddings &lt;span class="math inline"&gt;\(D_{u, \cdot}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(X_{w, \cdot}\)&lt;/span&gt;.&lt;/p&gt;
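+&lt;p&gt;The Claim above is easy to check numerically: by the Eckart&amp;#8211;Young theorem, the Frobenius residual of the best rank-&lt;span class="math inline"&gt;\(s\)&lt;/span&gt; factorisation equals the norm of the discarded singular values. A small sketch with a random matrix (shapes chosen arbitrarily):&lt;/p&gt;

```python
import numpy as np

# Check the LSA claim: D = U_s sqrt(Sigma_s), X = V_s sqrt(Sigma_s) attains
# the optimal residual, which equals the norm of the discarded singular values.
rng = np.random.default_rng(0)
R = rng.normal(size=(6, 4))
s = 2
U, sig, Vt = np.linalg.svd(R, full_matrices=False)
D = U[:, :s] * np.sqrt(sig[:s])          # n_d x s
X = Vt[:s].T * np.sqrt(sig[:s])          # n_x x s
best_residual = np.sqrt((sig[s:] ** 2).sum())
assert np.allclose(np.linalg.norm(R - D @ X.T, 'fro'), best_residual)
```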
+&lt;h4 id="plsa2"&gt;pLSA2&lt;/h4&gt;
+&lt;p&gt;Let us turn to pLSA2 (Hoffman 2004), corresponding to (2.92). We rewrite it as&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[p(x_i | d_i; \theta) = \sum_{z_i} p(x_i | z_i; \theta) p(z_i | d_i; \theta).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;To simplify notation, we collect all the &lt;span class="math inline"&gt;\(x_i\)&lt;/span&gt;s whose corresponding &lt;span class="math inline"&gt;\(d_i\)&lt;/span&gt; equals 1 (suppose there are &lt;span class="math inline"&gt;\(m_1\)&lt;/span&gt; of them), and write them as &lt;span class="math inline"&gt;\((x_{1, j})_{j = 1 : m_1}\)&lt;/span&gt;. In the same fashion we construct &lt;span class="math inline"&gt;\(x_{2, 1 : m_2}, x_{3, 1 : m_3}, ... x_{n_d, 1 : m_{n_d}}\)&lt;/span&gt;, and we relabel the corresponding &lt;span class="math inline"&gt;\(d_i\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(z_i\)&lt;/span&gt; accordingly.&lt;/p&gt;
+&lt;p&gt;With almost no loss of generality, we assume all &lt;span class="math inline"&gt;\(m_\ell\)&lt;/span&gt;s are equal and write them as &lt;span class="math inline"&gt;\(m\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Now the model becomes&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[p(x_{\ell, i} | d_{\ell, i} = \ell; \theta) = \sum_k p(x_{\ell, i} | z_{\ell, i} = k; \theta) p(z_{\ell, i} = k | d_{\ell, i} = \ell; \theta).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Since we have regrouped the &lt;span class="math inline"&gt;\(x\)&lt;/span&gt;’s and &lt;span class="math inline"&gt;\(z\)&lt;/span&gt;’s whose indices record the values of the &lt;span class="math inline"&gt;\(d\)&lt;/span&gt;’s, we can remove the &lt;span class="math inline"&gt;\(d\)&lt;/span&gt;’s from the equation altogether:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[p(x_{\ell, i}; \theta) = \sum_k p(x_{\ell, i} | z_{\ell, i} = k; \theta) p(z_{\ell, i} = k; \theta).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;It is effectively a modification of SMM by making &lt;span class="math inline"&gt;\(n_d\)&lt;/span&gt; copies of &lt;span class="math inline"&gt;\(\pi\)&lt;/span&gt;. More specifically the parameters are &lt;span class="math inline"&gt;\(\theta = (\pi_{1 : n_d, 1 : n_z}, \eta_{1 : n_z, 1 : n_x})\)&lt;/span&gt;, where we model &lt;span class="math inline"&gt;\((z | d = \ell) \sim \text{Cat}(\pi_{\ell, \cdot})\)&lt;/span&gt; and, as in pLSA1, &lt;span class="math inline"&gt;\((x | z = k) \sim \text{Cat}(\eta_{k, \cdot})\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Illustrated in the plate notations, pLSA2 is:&lt;/p&gt;
+&lt;p&gt;&lt;img src="/assets/resources/plsa2.png" style="width:350px" /&gt;&lt;/p&gt;
+&lt;p&gt;The computation is basically adding an index &lt;span class="math inline"&gt;\(\ell\)&lt;/span&gt; to the computation of SMM wherever applicable.&lt;/p&gt;
+&lt;p&gt;The update at the E-step is&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[r_{\ell i k} = p(z_{\ell i} = k | x_{\ell i}; \theta) \propto \pi_{\ell k} \eta_{k, x_{\ell i}}.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;And at the M-step&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\pi_{\ell k} &amp;amp;= {1 \over m} \sum_i r_{\ell i k} \\
+\eta_{k w} &amp;amp;= {\sum_{\ell, i} r_{\ell i k} 1_{x_{\ell i} = w} \over \sum_{\ell, i} r_{\ell i k}}.
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
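+&lt;p&gt;As with SMM, the pLSA2 iteration is short in NumPy. A minimal sketch (the name &lt;code&gt;em_plsa2&lt;/code&gt; is made up; for simplicity all documents are assumed to have the same length &lt;span class="math inline"&gt;\(m\)&lt;/span&gt;, as in the text):&lt;/p&gt;

```python
import numpy as np

def em_plsa2(x, n_z, n_x, n_iter=100, seed=0):
    # x: (n_d, m) integer matrix; x[l, i] is the i-th word of document l.
    # Parameters: per-document topic weights pi (n_d, n_z), shared eta (n_z, n_x).
    n_d, m = x.shape
    rng = np.random.default_rng(seed)
    pi = rng.dirichlet(np.ones(n_z), size=n_d)
    eta = rng.dirichlet(np.ones(n_x), size=n_z)
    for _ in range(n_iter):
        # E-step: r_{l i k} proportional to pi_{l k} * eta_{k, x_{l i}}
        r = pi[:, None, :] * eta.T[x]            # (n_d, m, n_z)
        r /= r.sum(axis=2, keepdims=True)
        # M-step: per-document pi, eta pooled over all documents
        pi = r.mean(axis=1)
        onehot = np.eye(n_x)[x]                  # (n_d, m, n_x)
        eta = np.einsum('dik,diw->kw', r, onehot) / r.sum(axis=(0, 1))[:, None]
    return pi, eta
```

&lt;p&gt;The only difference from the SMM code is the extra document index &lt;span class="math inline"&gt;\(\ell\)&lt;/span&gt;, exactly as described above.&lt;/p&gt;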
+&lt;h3 id="hmm"&gt;HMM&lt;/h3&gt;
+&lt;p&gt;The hidden Markov model (HMM) is a sequential version of SMM, in the same sense that recurrent neural networks are sequential versions of feed-forward neural networks.&lt;/p&gt;
+&lt;p&gt;HMM is an example where the posterior &lt;span class="math inline"&gt;\(p(z_i | x_i; \theta)\)&lt;/span&gt; is not easy to compute, and one has to utilise properties of the underlying Bayesian network to get around it.&lt;/p&gt;
+&lt;p&gt;Now each sample is a sequence &lt;span class="math inline"&gt;\(x_i = (x_{ij})_{j = 1 : T}\)&lt;/span&gt;, and so are the latent variables &lt;span class="math inline"&gt;\(z_i = (z_{ij})_{j = 1 : T}\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;The latent variables are assumed to form a Markov chain with transition matrix &lt;span class="math inline"&gt;\((\xi_{k \ell})_{k \ell}\)&lt;/span&gt;, and the distribution of &lt;span class="math inline"&gt;\(x_{ij}\)&lt;/span&gt; depends only on &lt;span class="math inline"&gt;\(z_{ij}\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+p(z_{ij} | z_{i, j - 1}) &amp;amp;= \xi_{z_{i, j - 1}, z_{ij}},\\
+p(x_{ij} | z_{ij}) &amp;amp;= \eta_{z_{ij}, x_{ij}}.
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Also, the distribution of &lt;span class="math inline"&gt;\(z_{i1}\)&lt;/span&gt; is again categorical with parameter &lt;span class="math inline"&gt;\(\pi\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[p(z_{i1}) = \pi_{z_{i1}}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;So the parameters are &lt;span class="math inline"&gt;\(\theta = (\pi, \xi, \eta)\)&lt;/span&gt;. And HMM can be shown in plate notations as:&lt;/p&gt;
+&lt;p&gt;&lt;img src="/assets/resources/hmm.png" style="width:350px" /&gt;&lt;/p&gt;
+&lt;p&gt;Now we apply EM to HMM, which yields the &lt;a href="https://en.wikipedia.org/wiki/Baum%E2%80%93Welch_algorithm"&gt;Baum-Welch algorithm&lt;/a&gt;. Unlike in the previous examples, it is too messy to compute &lt;span class="math inline"&gt;\(p(z_i | x_{i}; \theta)\)&lt;/span&gt; directly, so during the E-step we instead write down formula (2.5) in the hope of simplifying it:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\mathbb E_{p(z_i | x_i; \theta_t)} \log p(x_i, z_i; \theta) &amp;amp;=\mathbb E_{p(z_i | x_i; \theta_t)} \left(\log \pi_{z_{i1}} + \sum_{j = 2 : T} \log \xi_{z_{i, j - 1}, z_{ij}} + \sum_{j = 1 : T} \log \eta_{z_{ij}, x_{ij}}\right). \qquad (3)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Let us compute the summand in the second term:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\mathbb E_{p(z_i | x_{i}; \theta_t)} \log \xi_{z_{i, j - 1}, z_{ij}} &amp;amp;= \sum_{k, \ell} (\log \xi_{k, \ell}) \mathbb E_{p(z_{i} | x_{i}; \theta_t)} 1_{z_{i, j - 1} = k, z_{i, j} = \ell} \\
+&amp;amp;= \sum_{k, \ell} p(z_{i, j - 1} = k, z_{ij} = \ell | x_{i}; \theta_t) \log \xi_{k, \ell}. \qquad (4)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Similarly, one can write down the first term and the summand in the third term to obtain&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\mathbb E_{p(z_i | x_{i}; \theta_t)} \log \pi_{z_{i1}} &amp;amp;= \sum_k p(z_{i1} = k | x_{i}; \theta_t) \log \pi_k, \qquad (5) \\
+\mathbb E_{p(z_i | x_{i}; \theta_t)} \log \eta_{z_{i, j}, x_{i, j}} &amp;amp;= \sum_{k, w} 1_{x_{ij} = w} p(z_{i, j} = k | x_i; \theta_t) \log \eta_{k, w}. \qquad (6)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Plugging (4)(5)(6) back into (3) and summing over &lt;span class="math inline"&gt;\(j\)&lt;/span&gt;, we obtain the formula to maximise over &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sum_k \sum_i r_{i1k} \log \pi_k + \sum_{k, \ell} \sum_{j = 2 : T, i} s_{ijk\ell} \log \xi_{k, \ell} + \sum_{k, w} \sum_{j = 1 : T, i} r_{ijk} 1_{x_{ij} = w} \log \eta_{k, w},\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+r_{ijk} &amp;amp;:= p(z_{ij} = k | x_{i}; \theta_t), \\
+s_{ijk\ell} &amp;amp;:= p(z_{i, j - 1} = k, z_{ij} = \ell | x_{i}; \theta_t).
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Now we proceed to the M-step. Since each of the &lt;span class="math inline"&gt;\(\pi_k, \xi_{k, \ell}, \eta_{k, w}\)&lt;/span&gt; is nicely confined in the inner sum of each term, together with the constraint &lt;span class="math inline"&gt;\(\sum_k \pi_k = \sum_\ell \xi_{k, \ell} = \sum_w \eta_{k, w} = 1\)&lt;/span&gt; it is not hard to find the argmax at time &lt;span class="math inline"&gt;\(t + 1\)&lt;/span&gt; (the same way one finds the MLE for any categorical distribution):&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\pi_{k} &amp;amp;= {1 \over m} \sum_i r_{i1k}, \qquad (6.1) \\
+\xi_{k, \ell} &amp;amp;= {\sum_{j = 2 : T, i} s_{ijk\ell} \over \sum_{j = 1 : T - 1, i} r_{ijk}}, \qquad(6.2) \\
+\eta_{k, w} &amp;amp;= {\sum_{ij} 1_{x_{ij} = w} r_{ijk} \over \sum_{ij} r_{ijk}}. \qquad(6.3)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Note that (6.1)(6.3) are almost identical to (2.7)(2.8). This makes sense as the only modification HMM makes over SMM is the added dependencies between the latent variables.&lt;/p&gt;
+&lt;p&gt;What remains is to compute &lt;span class="math inline"&gt;\(r\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(s\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;This is done by using the forward and backward procedures, which take advantage of the conditional independence / topology of the underlying Bayesian network. It is out of the scope of this post, but for the sake of completeness I include it here.&lt;/p&gt;
+&lt;p&gt;Let&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\alpha_k(i, j) &amp;amp;:= p(x_{i, 1 : j}, z_{ij} = k; \theta_t), \\
+\beta_k(i, j) &amp;amp;:= p(x_{i, j + 1 : T} | z_{ij} = k; \theta_t).
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;They can be computed recursively as&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\alpha_k(i, j) &amp;amp;= \begin{cases}
+\eta_{k, x_{1j}} \pi_k, &amp;amp; j = 1; \\
+\eta_{k, x_{ij}} \sum_\ell \alpha_\ell(j - 1, i) \xi_{k\ell}, &amp;amp; j \ge 2.
+\end{cases}\\
+\beta_k(i, j) &amp;amp;= \begin{cases}
+1, &amp;amp; j = T;\\
+\sum_\ell \xi_{k\ell} \beta_\ell(j + 1, i) \eta_{\ell, x_{i, j + 1}}, &amp;amp; j &amp;lt; T.
+\end{cases}
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Then&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+p(z_{ij} = k, x_{i}; \theta_t) &amp;amp;= \alpha_k(i, j) \beta_k(i, j), \qquad (7)\\
+p(x_{i}; \theta_t) &amp;amp;= \sum_k \alpha_k(i, j) \beta_k(i, j),\forall j = 1 : T \qquad (8)\\
+p(z_{i, j - 1} = k, z_{i, j} = \ell, x_{i}; \theta_t) &amp;amp;= \alpha_k(i, j - 1) \xi_{k\ell} \eta_{\ell, x_{ij}} \beta_\ell(i, j). \qquad (9)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;And this yields &lt;span class="math inline"&gt;\(r_{ijk}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(s_{ijk\ell}\)&lt;/span&gt; since they can be computed as &lt;span class="math inline"&gt;\({(7) \over (8)}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\({(9) \over (8)}\)&lt;/span&gt; respectively.&lt;/p&gt;
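+&lt;p&gt;For one sample, the forward-backward computation of &lt;span class="math inline"&gt;\(r\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(s\)&lt;/span&gt; can be sketched as follows (names are illustrative; indices are 0-based, so &lt;code&gt;s[j]&lt;/code&gt; covers the pair of times &lt;code&gt;(j, j + 1)&lt;/code&gt;, and no log-domain scaling is done, so this is only suitable for short sequences):&lt;/p&gt;

```python
import numpy as np

def forward_backward(x, pi, xi, eta):
    # x: (T,) observations of one sample; pi: (n_z,) initial distribution;
    # xi: (n_z, n_z) transition matrix; eta: (n_z, n_x) emission matrix.
    # Returns r[j, k] = p(z_j = k | x) and s[j, k, l] = p(z_j = k, z_{j+1} = l | x).
    T, n_z = len(x), len(pi)
    alpha = np.zeros((T, n_z))
    beta = np.zeros((T, n_z))
    alpha[0] = pi * eta[:, x[0]]                       # forward base case
    for j in range(1, T):
        alpha[j] = eta[:, x[j]] * (alpha[j - 1] @ xi)  # forward recursion
    beta[T - 1] = 1.0                                  # backward base case
    for j in range(T - 2, -1, -1):
        beta[j] = xi @ (eta[:, x[j + 1]] * beta[j + 1])
    px = alpha[-1].sum()                               # p(x), formula (8)
    r = alpha * beta / px                              # (7) / (8)
    s = (alpha[:-1, :, None] * xi[None] *
         (eta[:, x[1:]].T * beta[1:])[:, None, :]) / px   # (9) / (8)
    return r, s, px
```

&lt;p&gt;The two recursions are just matrix-vector products with &lt;span class="math inline"&gt;\(\xi\)&lt;/span&gt;, which is why forward-backward costs &lt;span class="math inline"&gt;\(O(T n_z^2)\)&lt;/span&gt; per sample.&lt;/p&gt;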
+&lt;h2 id="fully-bayesian-em-mfa"&gt;Fully Bayesian EM / MFA&lt;/h2&gt;
+&lt;p&gt;Let us now venture into the realm of the fully Bayesian.&lt;/p&gt;
+&lt;p&gt;In EM we aim to maximise the ELBO&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\int q(z) \log {p(x, z; \theta) \over q(z)} dz d\theta\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;alternately over &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt;. As mentioned before, the E-step of maximising over &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; is Bayesian, in that it computes the posterior of &lt;span class="math inline"&gt;\(z\)&lt;/span&gt;, whereas the M-step of maximising over &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt; is maximum likelihood and frequentist.&lt;/p&gt;
+&lt;p&gt;The fully Bayesian EM makes the M-step Bayesian by making &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt; a random variable, so the ELBO becomes&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[L(p(x, z, \theta), q(z, \theta)) = \int q(z, \theta) \log {p(x, z, \theta) \over q(z, \theta)} dz d\theta\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;We further assume &lt;span class="math inline"&gt;\(q\)&lt;/span&gt; can be factorised into distributions on &lt;span class="math inline"&gt;\(z\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt;: &lt;span class="math inline"&gt;\(q(z, \theta) = q_1(z) q_2(\theta)\)&lt;/span&gt;. So the above formula is rewritten as&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[L(p(x, z, \theta), q(z, \theta)) = \int q_1(z) q_2(\theta) \log {p(x, z, \theta) \over q_1(z) q_2(\theta)} dz d\theta\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;To find argmax over &lt;span class="math inline"&gt;\(q_1\)&lt;/span&gt;, we rewrite&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+L(p(x, z, \theta), q(z, \theta)) &amp;amp;= \int q_1(z) \left(\int q_2(\theta) \log p(x, z, \theta) d\theta\right) dz - \int q_1(z) \log q_1(z) dz - \int q_2(\theta) \log q_2(\theta) d\theta \\&amp;amp;= - D(q_1(z) || p_x(z)) + C,
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(p_x\)&lt;/span&gt; is a density in &lt;span class="math inline"&gt;\(z\)&lt;/span&gt; with&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\log p_x(z) = \mathbb E_{q_2(\theta)} \log p(x, z, \theta) + C.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;So the &lt;span class="math inline"&gt;\(q_1\)&lt;/span&gt; that maximises the ELBO is &lt;span class="math inline"&gt;\(q_1^* = p_x\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Similarly, the optimal &lt;span class="math inline"&gt;\(q_2\)&lt;/span&gt; is such that&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\log q_2^*(\theta) = \mathbb E_{q_1(z)} \log p(x, z, \theta) + C.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;The fully Bayesian EM thus alternately evaluates &lt;span class="math inline"&gt;\(q_1^*\)&lt;/span&gt; (E-step) and &lt;span class="math inline"&gt;\(q_2^*\)&lt;/span&gt; (M-step).&lt;/p&gt;
+&lt;p&gt;It is also called mean field approximation (MFA), and can be easily generalised to models with more than two groups of latent variables, see e.g. Section 10.1 of Bishop 2006.&lt;/p&gt;
+&lt;h3 id="application-to-mixture-models"&gt;Application to mixture models&lt;/h3&gt;
+&lt;p&gt;&lt;strong&gt;Definition (Fully Bayesian mixture model)&lt;/strong&gt;. The relations between &lt;span class="math inline"&gt;\(\pi\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(\eta\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(x\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(z\)&lt;/span&gt; are the same as in the definition of mixture models. Furthermore, we assume the distribution of &lt;span class="math inline"&gt;\((x | \eta_k)\)&lt;/span&gt; belongs to the &lt;a href="https://en.wikipedia.org/wiki/Exponential_family"&gt;exponential family&lt;/a&gt; (the definition of the exponential family is briefly touched on at the end of this section). But now both &lt;span class="math inline"&gt;\(\pi\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\eta\)&lt;/span&gt; are random variables. Let the prior distribution &lt;span class="math inline"&gt;\(p(\pi)\)&lt;/span&gt; be Dirichlet with parameter &lt;span class="math inline"&gt;\((\alpha, \alpha, ..., \alpha)\)&lt;/span&gt;, and let the prior &lt;span class="math inline"&gt;\(p(\eta_k)\)&lt;/span&gt; be the conjugate prior of &lt;span class="math inline"&gt;\((x | \eta_k)\)&lt;/span&gt;, with parameter &lt;span class="math inline"&gt;\(\beta\)&lt;/span&gt;; we will see later in this section that the posterior &lt;span class="math inline"&gt;\(q(\eta_k)\)&lt;/span&gt; belongs to the same family as &lt;span class="math inline"&gt;\(p(\eta_k)\)&lt;/span&gt;. Represented in plate notations, a fully Bayesian mixture model looks like:&lt;/p&gt;
+&lt;p&gt;&lt;img src="/assets/resources/fully-bayesian-mm.png" style="width:450px" /&gt;&lt;/p&gt;
+&lt;p&gt;Given this structure we can write down the mean-field approximation:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\log q(z) = \mathbb E_{q(\eta)q(\pi)} (\log(x | z, \eta) + \log(z | \pi)) + C.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Both sides can be factored into per-sample expressions, giving us&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\log q(z_i) = \mathbb E_{q(\eta)} \log p(x_i | z_i, \eta) + \mathbb E_{q(\pi)} \log p(z_i | \pi) + C\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Therefore&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\log r_{ik} = \log q(z_i = k) = \mathbb E_{q(\eta_k)} \log p(x_i | \eta_k) + \mathbb E_{q(\pi)} \log \pi_k + C. \qquad (9.1)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;So the posterior of each &lt;span class="math inline"&gt;\(z_i\)&lt;/span&gt; is categorical regardless of the &lt;span class="math inline"&gt;\(p\)&lt;/span&gt;s and &lt;span class="math inline"&gt;\(q\)&lt;/span&gt;s.&lt;/p&gt;
+&lt;p&gt;Computing the posterior of &lt;span class="math inline"&gt;\(\pi\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\eta\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\log q(\pi) + \log q(\eta) = \log p(\pi) + \log p(\eta) + \sum_i \mathbb E_{q(z_i)} p(x_i | z_i, \eta) + \sum_i \mathbb E_{q(z_i)} p(z_i | \pi) + C.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;So we can separate the terms involving &lt;span class="math inline"&gt;\(\pi\)&lt;/span&gt; and those involving &lt;span class="math inline"&gt;\(\eta\)&lt;/span&gt;. First compute the posterior of &lt;span class="math inline"&gt;\(\pi\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\log q(\pi) = \log p(\pi) + \sum_i \mathbb E_{q(z_i)} \log p(z_i | \pi) = \log p(\pi) + \sum_i \sum_k r_{ik} \log \pi_k + C.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;The right hand side is the log of a Dirichlet modulo the constant &lt;span class="math inline"&gt;\(C\)&lt;/span&gt;, from which we can update the posterior parameter &lt;span class="math inline"&gt;\(\phi^\pi\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\phi^\pi_k = \alpha + \sum_i r_{ik}. \qquad (9.3)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Similarly we can obtain the posterior of &lt;span class="math inline"&gt;\(\eta\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\log q(\eta) = \log p(\eta) + \sum_i \sum_k r_{ik} \log p(x_i | \eta_k) + C.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Again we can factor the terms with respect to &lt;span class="math inline"&gt;\(k\)&lt;/span&gt; and get:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\log q(\eta_k) = \log p(\eta_k) + \sum_i r_{ik} \log p(x_i | \eta_k) + C. \qquad (9.5)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Here we can see why conjugate prior works. Mathematically, given a probability distribution &lt;span class="math inline"&gt;\(p(x | \theta)\)&lt;/span&gt;, the distribution &lt;span class="math inline"&gt;\(p(\theta)\)&lt;/span&gt; is called conjugate prior of &lt;span class="math inline"&gt;\(p(x | \theta)\)&lt;/span&gt; if &lt;span class="math inline"&gt;\(\log p(\theta) + \log p(x | \theta)\)&lt;/span&gt; has the same form as &lt;span class="math inline"&gt;\(\log p(\theta)\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;For example, the conjugate prior for the exponential family &lt;span class="math inline"&gt;\(p(x | \theta) = h(x) \exp(\theta \cdot T(x) - A(\theta))\)&lt;/span&gt;, where &lt;span class="math inline"&gt;\(T\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(A\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(h\)&lt;/span&gt; are some functions, is &lt;span class="math inline"&gt;\(p(\theta; \chi, \nu) \propto \exp(\chi \cdot \theta - \nu A(\theta))\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Here what we want is a bit different from conjugate priors because of the coefficients &lt;span class="math inline"&gt;\(r_{ik}\)&lt;/span&gt;. But the computation carries over to the conjugate priors of the exponential family (try it yourself and you'll see). That is, if &lt;span class="math inline"&gt;\(p(x_i | \eta_k)\)&lt;/span&gt; belongs to the exponential family&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[p(x_i | \eta_k) = h(x) \exp(\eta_k \cdot T(x) - A(\eta_k))\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;and if &lt;span class="math inline"&gt;\(p(\eta_k)\)&lt;/span&gt; is the conjugate prior of &lt;span class="math inline"&gt;\(p(x_i | \eta_k)\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[p(\eta_k) \propto \exp(\chi \cdot \eta_k - \nu A(\eta_k))\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;then &lt;span class="math inline"&gt;\(q(\eta_k)\)&lt;/span&gt; has the same form as &lt;span class="math inline"&gt;\(p(\eta_k)\)&lt;/span&gt;, and from (9.5) we can compute the updates of &lt;span class="math inline"&gt;\(\phi^{\eta_k}\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\phi^{\eta_k}_1 &amp;amp;= \chi + \sum_i r_{ik} T(x_i), \qquad (9.7) \\
+\phi^{\eta_k}_2 &amp;amp;= \nu + \sum_i r_{ik}. \qquad (9.9)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;So the mean field approximation for the fully Bayesian mixture model is the alternate iteration of (9.1) (E-step) and (9.3)(9.7)(9.9) (M-step) until convergence.&lt;/p&gt;
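+&lt;p&gt;As a toy numerical instance of (9.7)(9.9), consider a Bernoulli component &lt;span class="math inline"&gt;\(p(x | \eta_k)\)&lt;/span&gt;, for which &lt;span class="math inline"&gt;\(T(x) = x\)&lt;/span&gt; (all names below are made up, and the prior parameters &lt;span class="math inline"&gt;\(\chi = 1, \nu = 2\)&lt;/span&gt; are an arbitrary choice):&lt;/p&gt;

```python
import numpy as np

# Weighted conjugate update (9.7)(9.9) for one Bernoulli component k:
# T(x) = x, prior parameters (chi, nu), responsibilities r_{ik} as weights.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=20).astype(float)   # binary observations
r_k = rng.random(20)                            # responsibilities r_{ik}
chi, nu = 1.0, 2.0                              # prior parameters
phi_1 = chi + (r_k * x).sum()                   # (9.7): chi + sum_i r_ik T(x_i)
phi_2 = nu + r_k.sum()                          # (9.9): nu + sum_i r_ik
# The posterior q(eta_k) stays in the same family, with updated parameters;
# since T(x) <= 1 and chi <= nu here, phi_1 <= phi_2 always holds.
assert phi_1 <= phi_2
```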
+&lt;h3 id="fully-bayesian-gmm"&gt;Fully Bayesian GMM&lt;/h3&gt;
+&lt;p&gt;A typical example of fully Bayesian mixture models is the fully Bayesian Gaussian mixture model (Attias 2000, also called variational GMM in the literature). It is defined by applying to GMM the same modification that turns the vanilla mixture model into the fully Bayesian mixture model.&lt;/p&gt;
+&lt;p&gt;More specifically:&lt;/p&gt;
+&lt;ul&gt;
+&lt;li&gt;&lt;span class="math inline"&gt;\(p(z_{i}) = \text{Cat}(\pi)\)&lt;/span&gt; as in vanilla GMM&lt;/li&gt;
+&lt;li&gt;&lt;span class="math inline"&gt;\(p(\pi) = \text{Dir}(\alpha, \alpha, ..., \alpha)\)&lt;/span&gt; has Dirichlet distribution, the conjugate prior to the parameters of the categorical distribution.&lt;/li&gt;
+&lt;li&gt;&lt;span class="math inline"&gt;\(p(x_i | z_i = k) = p(x_i | \eta_k) = N(\mu_{k}, \Sigma_{k})\)&lt;/span&gt; as in vanilla GMM&lt;/li&gt;
+&lt;li&gt;&lt;span class="math inline"&gt;\(p(\mu_k, \Sigma_k) = \text{NIW} (\mu_0, \lambda, \Psi, \nu)\)&lt;/span&gt; is the normal-inverse-Wishart distribution, the conjugate prior to the mean and covariance matrix of the Gaussian distribution.&lt;/li&gt;
+&lt;/ul&gt;
+&lt;p&gt;The E-step and M-step can be computed using (9.1) and (9.3)(9.7)(9.9) in the previous section. The details of the computation can be found in Chapter 10.2 of Bishop 2006 or Attias 2000.&lt;/p&gt;
+&lt;h3 id="lda"&gt;LDA&lt;/h3&gt;
+&lt;p&gt;As the second example of fully Bayesian mixture models, Latent Dirichlet allocation (LDA) (Blei-Ng-Jordan 2003) is the fully Bayesian version of pLSA2, with the following plate notations:&lt;/p&gt;
+&lt;p&gt;&lt;img src="/assets/resources/lda.png" style="width:450px" /&gt;&lt;/p&gt;
+&lt;p&gt;It is the smoothed version in the paper.&lt;/p&gt;
+&lt;p&gt;More specifically, on the basis of pLSA2, we add prior distributions to &lt;span class="math inline"&gt;\(\eta\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\pi\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+p(\eta_k) &amp;amp;= \text{Dir} (\beta, ..., \beta), \qquad k = 1 : n_z \\
+p(\pi_\ell) &amp;amp;= \text{Dir} (\alpha, ..., \alpha), \qquad \ell = 1 : n_d \\
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;And as before, the prior of &lt;span class="math inline"&gt;\(z\)&lt;/span&gt; is&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[p(z_{\ell, i}) = \text{Cat} (\pi_\ell), \qquad \ell = 1 : n_d, i = 1 : m\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;We also denote the posterior distributions:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+q(\eta_k) &amp;amp;= \text{Dir} (\phi^{\eta_k}), \qquad k = 1 : n_z \\
+q(\pi_\ell) &amp;amp;= \text{Dir} (\phi^{\pi_\ell}), \qquad \ell = 1 : n_d \\
+q(z_{\ell, i}) &amp;amp;= \text{Cat} (r_{\ell, i}), \qquad \ell = 1 : n_d, i = 1 : m
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;As before, in the E-step we update &lt;span class="math inline"&gt;\(r\)&lt;/span&gt;, and in the M-step we update &lt;span class="math inline"&gt;\(\phi^\pi\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\phi^\eta\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;In the LDA paper, however, one treats the optimisation over all the variational parameters (there denoted &lt;span class="math inline"&gt;\(\gamma\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(\phi\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\lambda\)&lt;/span&gt;) as the E-step, and treats &lt;span class="math inline"&gt;\(\alpha\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\beta\)&lt;/span&gt; as parameters, which are optimised over at the M-step. This makes it more akin to the classical EM where the E-step is Bayesian and the M-step MLE. It is more complicated, however, and we do not consider it this way here.&lt;/p&gt;
+&lt;p&gt;Plugging in (9.1) we obtain the updates at E-step&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[r_{\ell i k} \propto \exp(\psi(\phi^{\pi_\ell}_k) + \psi(\phi^{\eta_k}_{x_{\ell i}}) - \psi(\sum_w \phi^{\eta_k}_w)), \qquad (10)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(\psi\)&lt;/span&gt; is the digamma function. Similarly, plugging in (9.3)(9.7)(9.9), at M-step, we update the posterior of &lt;span class="math inline"&gt;\(\pi\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\eta\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\phi^{\pi_\ell}_k &amp;amp;= \alpha + \sum_i r_{\ell i k}. \qquad (11)\\
+\phi^{\eta_k}_w &amp;amp;= \beta + \sum_{\ell, i} r_{\ell i k} 1_{x_{\ell i} = w}. \qquad (12)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;So the algorithm iterates over (10) and (11)(12) until convergence.&lt;/p&gt;
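+&lt;p&gt;The iteration over (10) and (11)(12) can be sketched in a few lines of numpy. This is only a minimal illustration under assumed array shapes (all documents of equal length &lt;span class="math inline"&gt;\(m\)&lt;/span&gt;, words encoded as integers); the function name is made up for the sketch:&lt;/p&gt;

```python
import numpy as np
from scipy.special import digamma

def lda_vi_step(X, r, alpha, beta, n_z, n_x):
    """One sweep of updates (10)-(12).

    X: (n_d, m) integer array of word indices x_{l, i}.
    r: (n_d, m, n_z) responsibilities q(z_{l, i} = k).
    Returns updated (r, phi_pi, phi_eta).
    """
    n_d, m = X.shape
    # (11): phi^{pi_l}_k = alpha + sum_i r_{l i k}
    phi_pi = alpha + r.sum(axis=1)                       # (n_d, n_z)
    # (12): phi^{eta_k}_w = beta + sum_{l, i} r_{l i k} 1{x_{l i} = w}
    phi_eta = np.full((n_z, n_x), float(beta))
    for l in range(n_d):
        for i in range(m):
            phi_eta[:, X[l, i]] += r[l, i]
    # (10): r_{l i k} proportional to
    #       exp(psi(phi^{pi_l}_k) + psi(phi^{eta_k}_{x_li}) - psi(sum_w phi^{eta_k}_w))
    psi_eta = digamma(phi_eta) - digamma(phi_eta.sum(axis=1, keepdims=True))
    log_r = digamma(phi_pi)[:, None, :] + psi_eta.T[X]   # (n_d, m, n_z)
    r_new = np.exp(log_r - log_r.max(axis=2, keepdims=True))
    r_new /= r_new.sum(axis=2, keepdims=True)
    return r_new, phi_pi, phi_eta
```

+&lt;p&gt;Each call performs one M-step (11)(12) followed by one E-step (10); iterating until &lt;span class="math inline"&gt;\(r\)&lt;/span&gt; stabilises gives the algorithm above.&lt;/p&gt;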
+&lt;h3 id="dpmm"&gt;DPMM&lt;/h3&gt;
+&lt;p&gt;The Dirichlet process mixture model (DPMM) is like the fully Bayesian mixture model except &lt;span class="math inline"&gt;\(n_z = \infty\)&lt;/span&gt;, i.e. &lt;span class="math inline"&gt;\(z\)&lt;/span&gt; can take any positive integer value.&lt;/p&gt;
+&lt;p&gt;The probability of &lt;span class="math inline"&gt;\(z_i = k\)&lt;/span&gt; is defined using the so-called stick-breaking process: let &lt;span class="math inline"&gt;\(v_i \sim \text{Beta} (\alpha, \beta)\)&lt;/span&gt; be i.i.d. random variables with Beta distributions, then&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(z_i = k | v_{1:\infty}) = (1 - v_1) (1 - v_2) ... (1 - v_{k - 1}) v_k.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;So &lt;span class="math inline"&gt;\(v\)&lt;/span&gt; plays a similar role to &lt;span class="math inline"&gt;\(\pi\)&lt;/span&gt; in the previous models.&lt;/p&gt;
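+&lt;p&gt;The stick-breaking probabilities are easy to compute numerically. Here is a small sketch; with the truncation &lt;span class="math inline"&gt;\(v_T = 1\)&lt;/span&gt; discussed below, the probabilities sum to one:&lt;/p&gt;

```python
import numpy as np

def stick_breaking_probs(v):
    """P(z = k | v) = (1 - v_1) ... (1 - v_{k-1}) v_k for a truncated sequence v."""
    v = np.asarray(v, dtype=float)
    # remaining[k] = (1 - v_1) ... (1 - v_k), the length of stick left before break k+1
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v)[:-1]))
    return remaining * v

# Truncate by forcing the last v to 1, so the sticks use up all the mass.
rng = np.random.default_rng(0)
v = rng.beta(1.0, 5.0, size=10)
v[-1] = 1.0
p = stick_breaking_probs(v)
```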
+&lt;p&gt;As before, we have that the distribution of &lt;span class="math inline"&gt;\(x\)&lt;/span&gt; belongs to the exponential family:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[p(x | z = k, \eta) = p(x | \eta_k) = h(x) \exp(\eta_k \cdot T(x) - A(\eta_k))\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;so the prior of &lt;span class="math inline"&gt;\(\eta_k\)&lt;/span&gt; is&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[p(\eta_k) \propto \exp(\chi \cdot \eta_k - \nu A(\eta_k)).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Because of the infinities, we can't directly apply the formulas for the general fully Bayesian mixture models, so let us carefully derive the whole thing again.&lt;/p&gt;
+&lt;p&gt;As before, we can write down the ELBO:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[L(p(x, z, \theta), q(z, \theta)) = \mathbb E_{q(\theta)} \log {p(\theta) \over q(\theta)} + \mathbb E_{q(\theta) q(z)} \log {p(x, z | \theta) \over q(z)}.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Both terms are infinite series:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[L(p, q) = \sum_{k = 1 : \infty} \mathbb E_{q(\theta_k)} \log {p(\theta_k) \over q(\theta_k)} + \sum_{i = 1 : m} \sum_{k = 1 : \infty} q(z_i = k) \mathbb E_{q(\theta)} \log {p(x_i, z_i = k | \theta) \over q(z_i = k)}.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;There are several ways to deal with the infinities. One is to fix some level &lt;span class="math inline"&gt;\(T &amp;gt; 0\)&lt;/span&gt; and set &lt;span class="math inline"&gt;\(v_T = 1\)&lt;/span&gt; almost surely (Blei-Jordan 2006). This effectively turns the model into a finite one, and both terms become finite sums over &lt;span class="math inline"&gt;\(k = 1 : T\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Another workaround (Kurihara-Welling-Vlassis 2007) is also a kind of truncation, but less heavy-handed: setting the posterior &lt;span class="math inline"&gt;\(q(\theta) = q(\eta) q(v)\)&lt;/span&gt; to be:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[q(\theta) = q(\theta_{1 : T}) p(\theta_{T + 1 : \infty}) =: q(\theta_{\le T}) p(\theta_{&amp;gt; T}).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;That is, tie the posterior after &lt;span class="math inline"&gt;\(T\)&lt;/span&gt; to the prior. This effectively turns the first term in the ELBO to a finite sum over &lt;span class="math inline"&gt;\(k = 1 : T\)&lt;/span&gt;, while keeping the second sum an infinite series:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[L(p, q) = \sum_{k = 1 : T} \mathbb E_{q(\theta_k)} \log {p(\theta_k) \over q(\theta_k)} + \sum_i \sum_{k = 1 : \infty} q(z_i = k) \mathbb E_{q(\theta)} \log {p(x_i, z_i = k | \theta) \over q(z_i = k)}. \qquad (13)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;The plate notation of this model is:&lt;/p&gt;
+&lt;p&gt;&lt;img src="/assets/resources/dpmm.png" style="width:450px" /&gt;&lt;/p&gt;
+&lt;p&gt;As it turns out, the infinities can be tamed in this case.&lt;/p&gt;
+&lt;p&gt;As before, the optimal &lt;span class="math inline"&gt;\(q(z_i)\)&lt;/span&gt; is computed as&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[r_{ik} = q(z_i = k) = s_{ik} / S_i\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+s_{ik} &amp;amp;= \exp(\mathbb E_{q(\theta)} \log p(x_i, z_i = k | \theta)) \\
+S_i &amp;amp;= \sum_{k = 1 : \infty} s_{ik}.
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Plugging this back to (13) we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\sum_{k = 1 : \infty} r_{ik} &amp;amp;\mathbb E_{q(\theta)} \log {p(x_i, z_i = k | \theta) \over r_{ik}} \\
+&amp;amp;= \sum_{k = 1 : \infty} r_{ik} \mathbb E_{q(\theta)} (\log p(x_i, z_i = k | \theta) - \mathbb E_{q(\theta)} \log p(x_i, z_i = k | \theta) + \log S_i) = \log S_i.
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;So it all rests upon &lt;span class="math inline"&gt;\(S_i\)&lt;/span&gt; being finite.&lt;/p&gt;
+&lt;p&gt;For &lt;span class="math inline"&gt;\(k \le T + 1\)&lt;/span&gt;, we compute the quantity &lt;span class="math inline"&gt;\(s_{ik}\)&lt;/span&gt; directly. For &lt;span class="math inline"&gt;\(k &amp;gt; T\)&lt;/span&gt;, it is not hard to show that&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[s_{ik} = s_{i, T + 1} \exp((k - T - 1) \mathbb E_{p(w)} \log (1 - w)),\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(w\)&lt;/span&gt; is a random variable with same distribution as &lt;span class="math inline"&gt;\(p(v_k)\)&lt;/span&gt;, i.e. &lt;span class="math inline"&gt;\(\text{Beta}(\alpha, \beta)\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Hence&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[S_i = \sum_{k = 1 : T} s_{ik} + {s_{i, T + 1} \over 1 - \exp(\psi(\beta) - \psi(\alpha + \beta))}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;is indeed finite. Similarly we also obtain&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[q(z_i &amp;gt; k) = S_i^{-1} \left(\sum_{\ell = k + 1 : T} s_{i \ell} + {s_{i, T + 1} \over 1 - \exp(\psi(\beta) - \psi(\alpha + \beta))}\right), k \le T \qquad (14)\]&lt;/span&gt;&lt;/p&gt;
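+&lt;p&gt;Under this truncation, &lt;span class="math inline"&gt;\(S_i\)&lt;/span&gt; and the tail probabilities &lt;span class="math inline"&gt;\(q(z_i &amp;gt; k)\)&lt;/span&gt; can be computed from the finitely many &lt;span class="math inline"&gt;\(s_{ik}\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(k \le T + 1\)&lt;/span&gt;, using the geometric tail. A minimal numerical sketch (the function names are made up):&lt;/p&gt;

```python
import numpy as np
from scipy.special import digamma

def tail_ratio(alpha, beta):
    """exp(E_{p(w)} log(1 - w)) for w ~ Beta(alpha, beta)."""
    return np.exp(digamma(beta) - digamma(alpha + beta))

def normaliser_and_tail_probs(s, alpha, beta):
    """Given s_{i, 1 : T+1} (an array of length T + 1), return S_i and
    q(z_i > k) for k = 1 : T, using s_{ik} = s_{i, T+1} rho^{k - T - 1} for k > T."""
    s = np.asarray(s, dtype=float)
    T = len(s) - 1
    rho = tail_ratio(alpha, beta)
    tail = s[T] / (1.0 - rho)              # sum of s_{ik} over k = T+1 : infinity
    S = s[:T].sum() + tail
    # (14): q(z_i > k) = S_i^{-1} (sum_{l = k+1 : T} s_{il} + tail)
    q_gt = np.array([(s[k:T].sum() + tail) / S for k in range(1, T + 1)])
    return S, q_gt
```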
+&lt;p&gt;Now let us compute the posterior of &lt;span class="math inline"&gt;\(\theta_{\le T}\)&lt;/span&gt;. In the following we exchange the integrals without justifying them (c.f. Fubini's Theorem). Equation (13) can be rewritten as&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[L(p, q) = \mathbb E_{q(\theta_{\le T})} \left(\log p(\theta_{\le T}) + \sum_i \mathbb E_{q(z_i) p(\theta_{&amp;gt; T})} \log {p(x_i, z_i | \theta) \over q(z_i)} - \log q(\theta_{\le T})\right).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Note that unlike the derivation of the mean-field approximation, we keep the &lt;span class="math inline"&gt;\(- \mathbb E_{q(z)} \log q(z)\)&lt;/span&gt; term even though we are only interested in &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt; at this stage. This is again due to the problem of infinities: as in the computation of &lt;span class="math inline"&gt;\(S\)&lt;/span&gt;, we would like to cancel out some undesirable unbounded terms using &lt;span class="math inline"&gt;\(q(z)\)&lt;/span&gt;. We now have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\log q(\theta_{\le T}) = \log p(\theta_{\le T}) + \sum_i \mathbb E_{q(z_i) p(\theta_{&amp;gt; T})} \log {p(x_i, z_i | \theta) \over q(z_i)} + C.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;By plugging in &lt;span class="math inline"&gt;\(q(z = k)\)&lt;/span&gt; we obtain&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\log q(\theta_{\le T}) = \log p(\theta_{\le T}) + \sum_{k = 1 : \infty} q(z_i = k) \left(\mathbb E_{p(\theta_{&amp;gt; T})} \log {p(x_i, z_i = k | \theta) \over q(z_i = k)} - \mathbb E_{q(\theta)} \log {p(x_i, z_i = k | \theta) \over q(z_i = k)}\right) + C.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Again, we separate the &lt;span class="math inline"&gt;\(v_k\)&lt;/span&gt;'s and the &lt;span class="math inline"&gt;\(\eta_k\)&lt;/span&gt;'s to obtain&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\log q(v_{\le T}) = \log p(v_{\le T}) + \sum_i \sum_k q(z_i = k) \left(\mathbb E_{p(v_{&amp;gt; T})} \log p(z_i = k | v) - \mathbb E_{q(v)} \log p (z_i = k | v)\right).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Denote by &lt;span class="math inline"&gt;\(D_k\)&lt;/span&gt; the difference between the two expectations on the right hand side. It is easy to show that&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[D_k = \begin{cases}
+\log(1 - v_1) ... (1 - v_{k - 1}) v_k - \mathbb E_{q(v)} \log (1 - v_1) ... (1 - v_{k - 1}) v_k &amp;amp; k \le T\\
+\log(1 - v_1) ... (1 - v_T) - \mathbb E_{q(v)} \log (1 - v_1) ... (1 - v_T) &amp;amp; k &amp;gt; T
+\end{cases}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;so &lt;span class="math inline"&gt;\(D_k\)&lt;/span&gt; is bounded. With this we can derive the update for &lt;span class="math inline"&gt;\(\phi^{v, 1}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\phi^{v, 2}\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\phi^{v, 1}_k &amp;amp;= \alpha + \sum_i q(z_i = k) \\
+\phi^{v, 2}_k &amp;amp;= \beta + \sum_i q(z_i &amp;gt; k),
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(q(z_i &amp;gt; k)\)&lt;/span&gt; can be computed as in (14).&lt;/p&gt;
+&lt;p&gt;When it comes to &lt;span class="math inline"&gt;\(\eta\)&lt;/span&gt;, we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\log q(\eta_{\le T}) = \log p(\eta_{\le T}) + \sum_i \sum_{k = 1 : \infty} q(z_i = k) (\mathbb E_{p(\eta_k)} \log p(x_i | \eta_k) - \mathbb E_{q(\eta_k)} \log p(x_i | \eta_k)).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Since &lt;span class="math inline"&gt;\(q(\eta_k) = p(\eta_k)\)&lt;/span&gt; for &lt;span class="math inline"&gt;\(k &amp;gt; T\)&lt;/span&gt;, the inner sum on the right hand side is a finite sum over &lt;span class="math inline"&gt;\(k = 1 : T\)&lt;/span&gt;. By factorising &lt;span class="math inline"&gt;\(q(\eta_{\le T})\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(p(\eta_{\le T})\)&lt;/span&gt;, we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\log q(\eta_k) = \log p(\eta_k) + \sum_i q(z_i = k) \log p(x_i | \eta_k) + C,\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;which gives us&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\phi^{\eta, 1}_k &amp;amp;= \chi + \sum_i q(z_i = k) T(x_i) \\
+\phi^{\eta, 2}_k &amp;amp;= \nu + \sum_i q(z_i = k).
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;h2 id="svi"&gt;SVI&lt;/h2&gt;
+&lt;p&gt;In variational inference, the computation of some parameters is more expensive than that of others.&lt;/p&gt;
+&lt;p&gt;For example, the computation of the M-step is often much more expensive than that of the E-step:&lt;/p&gt;
+&lt;ul&gt;
+&lt;li&gt;In the vanilla mixture models with the EM algorithm, the update of &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt; requires the computation of &lt;span class="math inline"&gt;\(r_{ik}\)&lt;/span&gt; for all &lt;span class="math inline"&gt;\(i = 1 : m\)&lt;/span&gt;, see Eq (2.3).&lt;/li&gt;
+&lt;li&gt;In the fully Bayesian mixture model with mean field approximation, the updates of &lt;span class="math inline"&gt;\(\phi^\pi\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\phi^\eta\)&lt;/span&gt; require the computation of a sum over all samples (see Eq (9.3)(9.7)(9.9)).&lt;/li&gt;
+&lt;/ul&gt;
+&lt;p&gt;Similarly, in pLSA2 (resp. LDA), the updates of &lt;span class="math inline"&gt;\(\eta_k\)&lt;/span&gt; (resp. &lt;span class="math inline"&gt;\(\phi^{\eta_k}\)&lt;/span&gt;) requires a sum over &lt;span class="math inline"&gt;\(\ell = 1 : n_d\)&lt;/span&gt;, whereas the updates of other parameters do not.&lt;/p&gt;
+&lt;p&gt;In these cases, the parameters that require more computation are called global, and the others local.&lt;/p&gt;
+&lt;p&gt;Stochastic variational inference (SVI, Hoffman-Blei-Wang-Paisley 2012) addresses this problem in the same way that stochastic gradient descent improves the efficiency of gradient descent.&lt;/p&gt;
+&lt;p&gt;Each time, SVI picks a sample, updates the corresponding local parameters, and computes the update of the global parameters as if all &lt;span class="math inline"&gt;\(m\)&lt;/span&gt; samples were identical to the picked one. Finally, it incorporates this value into the previous estimates of the global parameters by means of an exponential moving average.&lt;/p&gt;
+&lt;p&gt;As an example, here's SVI applied to LDA:&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;Set &lt;span class="math inline"&gt;\(t = 1\)&lt;/span&gt;.&lt;/li&gt;
+&lt;li&gt;Pick &lt;span class="math inline"&gt;\(\ell\)&lt;/span&gt; uniformly from &lt;span class="math inline"&gt;\(\{1, 2, ..., n_d\}\)&lt;/span&gt;.&lt;/li&gt;
+&lt;li&gt;Repeat until convergence:
+&lt;ol type="1"&gt;
+&lt;li&gt;Compute &lt;span class="math inline"&gt;\((r_{\ell i k})_{i = 1 : m, k = 1 : n_z}\)&lt;/span&gt; using (10).&lt;/li&gt;
+&lt;li&gt;Compute &lt;span class="math inline"&gt;\((\phi^{\pi_\ell}_k)_{k = 1 : n_z}\)&lt;/span&gt; using (11).&lt;/li&gt;
+&lt;/ol&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;Compute &lt;span class="math inline"&gt;\((\tilde \phi^{\eta_k}_w)_{k = 1 : n_z, w = 1 : n_x}\)&lt;/span&gt; using the following formula (compare with (12)) &lt;span class="math display"&gt;\[\tilde \phi^{\eta_k}_w = \beta + n_d \sum_{i} r_{\ell i k} 1_{x_{\ell i} = w}\]&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;Update the exponential moving average &lt;span class="math inline"&gt;\((\phi^{\eta_k}_w)_{k = 1 : n_z, w = 1 : n_x}\)&lt;/span&gt;: &lt;span class="math display"&gt;\[\phi^{\eta_k}_w = (1 - \rho_t) \phi^{\eta_k}_w + \rho_t \tilde \phi^{\eta_k}_w\]&lt;/span&gt;&lt;/p&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;Increment &lt;span class="math inline"&gt;\(t\)&lt;/span&gt; and go back to Step 2.&lt;/p&gt;&lt;/li&gt;
+&lt;/ol&gt;
+&lt;p&gt;In the original paper, &lt;span class="math inline"&gt;\(\rho_t\)&lt;/span&gt; needs to satisfy some conditions that guarantee convergence of the global parameters:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\sum_t \rho_t = \infty \\
+\sum_t \rho_t^2 &amp;lt; \infty
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;and the choice made there is&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\rho_t = (t + \tau)^{-\kappa}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;for some &lt;span class="math inline"&gt;\(\kappa \in (.5, 1]\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\tau \ge 0\)&lt;/span&gt;.&lt;/p&gt;
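+&lt;p&gt;The step-size schedule and the moving-average update (Step 5) are simple to implement; a small sketch, with assumed default values for &lt;span class="math inline"&gt;\(\tau\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\kappa\)&lt;/span&gt;:&lt;/p&gt;

```python
import numpy as np

def rho(t, tau=1.0, kappa=0.7):
    """Step size rho_t = (t + tau)^(-kappa), with kappa in (.5, 1] and tau >= 0."""
    return (t + tau) ** (-kappa)

def svi_global_update(phi, phi_tilde, t, tau=1.0, kappa=0.7):
    """Exponential moving average of a global parameter (Step 5):
    phi <- (1 - rho_t) phi + rho_t phi_tilde."""
    r = rho(t, tau, kappa)
    return (1 - r) * phi + r * phi_tilde
```

+&lt;p&gt;With &lt;span class="math inline"&gt;\(\kappa \in (.5, 1]\)&lt;/span&gt; the schedule satisfies the Robbins-Monro conditions &lt;span class="math inline"&gt;\(\sum_t \rho_t = \infty\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\sum_t \rho_t^2 &amp;lt; \infty\)&lt;/span&gt;.&lt;/p&gt;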
+&lt;h2 id="aevb"&gt;AEVB&lt;/h2&gt;
+&lt;p&gt;SVI adds to variational inference stochastic updates similar to stochastic gradient descent. Why not just use neural networks with stochastic gradient descent while we are at it? Autoencoding variational Bayes (AEVB) (Kingma-Welling 2013) is such an algorithm.&lt;/p&gt;
+&lt;p&gt;Let's look back to the original problem of maximising the ELBO:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\max_{\theta, q} \sum_{i = 1 : m} L(p(x_i | z_i; \theta) p(z_i; \theta), q(z_i))\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Since for any given &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt;, the optimal &lt;span class="math inline"&gt;\(q(z_i)\)&lt;/span&gt; is the posterior &lt;span class="math inline"&gt;\(p(z_i | x_i; \theta)\)&lt;/span&gt;, the problem reduces to&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\max_{\theta} \sum_i L(p(x_i | z_i; \theta) p(z_i; \theta), p(z_i | x_i; \theta))\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Let us assume &lt;span class="math inline"&gt;\(p(z_i; \theta) = p(z_i)\)&lt;/span&gt; is independent of &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt; to simplify the problem. In the old mixture models, we have &lt;span class="math inline"&gt;\(p(x_i | z_i; \theta) = p(x_i; \eta_{z_i})\)&lt;/span&gt;, which we can generalise to &lt;span class="math inline"&gt;\(p(x_i; f(\theta, z_i))\)&lt;/span&gt; for some function &lt;span class="math inline"&gt;\(f\)&lt;/span&gt;. Using Bayes' theorem we can also write down &lt;span class="math inline"&gt;\(p(z_i | x_i; \theta) = q(z_i; g(\theta, x_i))\)&lt;/span&gt; for some function &lt;span class="math inline"&gt;\(g\)&lt;/span&gt;. So the problem becomes&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\max_{\theta} \sum_i L(p(x_i; f(\theta, z_i)) p(z_i), q(z_i; g(\theta, x_i)))\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;In some cases &lt;span class="math inline"&gt;\(g\)&lt;/span&gt; can be hard to write down or compute. AEVB addresses this problem by replacing &lt;span class="math inline"&gt;\(g(\theta, x_i)\)&lt;/span&gt; with a neural network &lt;span class="math inline"&gt;\(g_\phi(x_i)\)&lt;/span&gt; with input &lt;span class="math inline"&gt;\(x_i\)&lt;/span&gt; and some separate parameters &lt;span class="math inline"&gt;\(\phi\)&lt;/span&gt;. It also replaces &lt;span class="math inline"&gt;\(f(\theta, z_i)\)&lt;/span&gt; with a neural network &lt;span class="math inline"&gt;\(f_\theta(z_i)\)&lt;/span&gt; with input &lt;span class="math inline"&gt;\(z_i\)&lt;/span&gt; and parameters &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt;. And now the problem becomes&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\max_{\theta, \phi} \sum_i L(p(x_i; f_\theta(z_i)) p(z_i), q(z_i; g_\phi(x_i))).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;The objective function can be written as&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sum_i \mathbb E_{q(z_i; g_\phi(x_i))} \log p(x_i; f_\theta(z_i)) - D(q(z_i; g_\phi(x_i)) || p(z_i)).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;The first term is called the negative reconstruction error, like the &lt;span class="math inline"&gt;\(- \|decoder(encoder(x)) - x\|\)&lt;/span&gt; in autoencoders, which is where the "autoencoder" in the name comes from.&lt;/p&gt;
+&lt;p&gt;The second term is a regularisation term that penalises the posterior &lt;span class="math inline"&gt;\(q(z_i)\)&lt;/span&gt; that is very different from the prior &lt;span class="math inline"&gt;\(p(z_i)\)&lt;/span&gt;. We assume this term can be computed analytically.&lt;/p&gt;
+&lt;p&gt;So only the first term requires computing.&lt;/p&gt;
+&lt;p&gt;We can approximate the sum over &lt;span class="math inline"&gt;\(i\)&lt;/span&gt; in a similar fashion to SVI: pick &lt;span class="math inline"&gt;\(j\)&lt;/span&gt; uniformly at random from &lt;span class="math inline"&gt;\(\{1 ... m\}\)&lt;/span&gt;, treat the whole dataset as &lt;span class="math inline"&gt;\(m\)&lt;/span&gt; replicates of &lt;span class="math inline"&gt;\(x_j\)&lt;/span&gt;, and approximate the expectation using Monte-Carlo:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[U(x, \theta, \phi) := \sum_i \mathbb E_{q(z_i; g_\phi(x_i))} \log p(x_i; f_\theta(z_i)) \approx m \mathbb E_{q(z_j; g_\phi(x_j))} \log p(x_j; f_\theta(z_j)) \approx {m \over L} \sum_{\ell = 1}^L \log p(x_j; f_\theta(z_{j, \ell})),\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where each &lt;span class="math inline"&gt;\(z_{j, \ell}\)&lt;/span&gt; is sampled from &lt;span class="math inline"&gt;\(q(z_j; g_\phi(x_j))\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;But then it is not easy to approximate the gradient over &lt;span class="math inline"&gt;\(\phi\)&lt;/span&gt;. One can use the log trick as in policy gradients, but it has the problem of high variance. In policy gradients this is overcome by using baseline subtractions. In the AEVB paper it is tackled with the reparameterisation trick.&lt;/p&gt;
+&lt;p&gt;Assume there exists a transformation &lt;span class="math inline"&gt;\(T_\phi\)&lt;/span&gt; and a random variable &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt; with distribution independent of &lt;span class="math inline"&gt;\(\phi\)&lt;/span&gt; or &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt;, such that &lt;span class="math inline"&gt;\(T_\phi(x_i, \epsilon)\)&lt;/span&gt; has distribution &lt;span class="math inline"&gt;\(q(z_i; g_\phi(x_i))\)&lt;/span&gt;. In this case we can rewrite &lt;span class="math inline"&gt;\(U(x, \phi, \theta)\)&lt;/span&gt; as&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sum_i \mathbb E_{\epsilon \sim p(\epsilon)} \log p(x_i; f_\theta(T_\phi(x_i, \epsilon))),\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;This way one can use Monte-Carlo to approximate &lt;span class="math inline"&gt;\(\nabla_\phi U(x, \phi, \theta)\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\nabla_\phi U(x, \phi, \theta) \approx {m \over L} \sum_{\ell = 1 : L} \nabla_\phi \log p(x_j; f_\theta(T_\phi(x_j, \epsilon_\ell))),\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where each &lt;span class="math inline"&gt;\(\epsilon_{\ell}\)&lt;/span&gt; is sampled from &lt;span class="math inline"&gt;\(p(\epsilon)\)&lt;/span&gt;. The approximation of &lt;span class="math inline"&gt;\(U(x, \phi, \theta)\)&lt;/span&gt; itself can be done similarly.&lt;/p&gt;
+&lt;h3 id="vae"&gt;VAE&lt;/h3&gt;
+&lt;p&gt;As an example of AEVB, the paper introduces variational autoencoder (VAE), with the following instantiations:&lt;/p&gt;
+&lt;ul&gt;
+&lt;li&gt;The prior &lt;span class="math inline"&gt;\(p(z_i) = N(0, I)\)&lt;/span&gt; is standard normal, thus independent of &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt;.&lt;/li&gt;
+&lt;li&gt;The distribution &lt;span class="math inline"&gt;\(p(x_i; \eta)\)&lt;/span&gt; is either Gaussian or categorical.&lt;/li&gt;
+&lt;li&gt;The distribution &lt;span class="math inline"&gt;\(q(z_i; \mu, \Sigma)\)&lt;/span&gt; is Gaussian with diagonal covariance matrix. So &lt;span class="math inline"&gt;\(g_\phi(x_i) = (\mu_\phi(x_i), \text{diag}(\sigma^2_\phi(x_i)_{1 : d}))\)&lt;/span&gt;. Thus in the reparameterisation trick &lt;span class="math inline"&gt;\(\epsilon \sim N(0, I)\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(T_\phi(x_i, \epsilon) = \epsilon \odot \sigma_\phi(x_i) + \mu_\phi(x_i)\)&lt;/span&gt;, where &lt;span class="math inline"&gt;\(\odot\)&lt;/span&gt; is elementwise multiplication.&lt;/li&gt;
+&lt;li&gt;The KL divergence can be easily computed analytically as &lt;span class="math inline"&gt;\(- D(q(z_i; g_\phi(x_i)) || p(z_i)) = {d \over 2} + \sum_{j = 1 : d} \log\sigma_\phi(x_i)_j - {1 \over 2} \sum_{j = 1 : d} (\mu_\phi(x_i)_j^2 + \sigma_\phi(x_i)_j^2)\)&lt;/span&gt;.&lt;/li&gt;
+&lt;/ul&gt;
+&lt;p&gt;With this, one can use backprop to maximise the ELBO.&lt;/p&gt;
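+&lt;p&gt;As a sanity check of the analytic KL term, one can compare it against a Monte-Carlo estimate obtained with the reparameterisation &lt;span class="math inline"&gt;\(z = \mu + \sigma \odot \epsilon\)&lt;/span&gt;. A minimal numpy sketch for fixed &lt;span class="math inline"&gt;\(\mu\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\sigma\)&lt;/span&gt; (in a real VAE these would be the outputs of the encoder network):&lt;/p&gt;

```python
import numpy as np

def kl_analytic(mu, sigma):
    """D(N(mu, diag(sigma^2)) || N(0, I)): the negative of the formula above."""
    return -(0.5 * len(mu)
             + np.sum(np.log(sigma))
             - 0.5 * np.sum(mu**2 + sigma**2))

def kl_monte_carlo(mu, sigma, n=200000, seed=0):
    """Estimate the same KL as E_q[log q(z) - log p(z)] using z = mu + sigma * eps."""
    rng = np.random.default_rng(seed)
    eps = rng.standard_normal((n, len(mu)))
    z = mu + sigma * eps
    log_q = np.sum(-0.5 * eps**2 - np.log(sigma) - 0.5 * np.log(2 * np.pi), axis=1)
    log_p = np.sum(-0.5 * z**2 - 0.5 * np.log(2 * np.pi), axis=1)
    return np.mean(log_q - log_p)
```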
+&lt;h3 id="fully-bayesian-aevb"&gt;Fully Bayesian AEVB&lt;/h3&gt;
+&lt;p&gt;Let us turn to fully Bayesian version of AEVB. Again, we first recall the ELBO of the fully Bayesian mixture models:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[L(p(x, z, \pi, \eta; \alpha, \beta), q(z, \pi, \eta; r, \phi)) = L(p(x | z, \eta) p(z | \pi) p(\pi; \alpha) p(\eta; \beta), q(z; r) q(\eta; \phi^\eta) q(\pi; \phi^\pi)).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;We write &lt;span class="math inline"&gt;\(\theta = (\pi, \eta)\)&lt;/span&gt;, rewrite &lt;span class="math inline"&gt;\(\alpha := (\alpha, \beta)\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(\phi := r\)&lt;/span&gt;, and &lt;span class="math inline"&gt;\(\gamma := (\phi^\eta, \phi^\pi)\)&lt;/span&gt;. Furthermore, as in the half-Bayesian version we assume &lt;span class="math inline"&gt;\(p(z | \theta) = p(z)\)&lt;/span&gt;, i.e. &lt;span class="math inline"&gt;\(z\)&lt;/span&gt; does not depend on &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt;. Similarly we also assume &lt;span class="math inline"&gt;\(p(\theta; \alpha) = p(\theta)\)&lt;/span&gt;. Now we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[L(p(x, z, \theta; \alpha), q(z, \theta; \phi, \gamma)) = L(p(x | z, \theta) p(z) p(\theta), q(z; \phi) q(\theta; \gamma)).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;And the objective is to maximise it over &lt;span class="math inline"&gt;\(\phi\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\gamma\)&lt;/span&gt;. We no longer maximise over &lt;span class="math inline"&gt;\(\theta\)&lt;/span&gt;, because it is now a random variable, like &lt;span class="math inline"&gt;\(z\)&lt;/span&gt;. Now let us transform it to a neural network model, as in the half-Bayesian case:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[L\left(\left(\prod_{i = 1 : m} p(x_i; f_\theta(z_i))\right) \left(\prod_{i = 1 : m} p(z_i) \right) p(\theta), \left(\prod_{i = 1 : m} q(z_i; g_\phi(x_i))\right) q(\theta; h_\gamma(x))\right).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(f_\theta\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(g_\phi\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(h_\gamma\)&lt;/span&gt; are neural networks. Again, by separating out KL-divergence terms, the above formula becomes&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sum_i \mathbb E_{q(\theta; h_\gamma(x))q(z_i; g_\phi(x_i))} \log p(x_i; f_\theta(z_i)) - \sum_i D(q(z_i; g_\phi(x_i)) || p(z_i)) - D(q(\theta; h_\gamma(x)) || p(\theta)).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Again, we assume the latter two terms can be computed analytically. Using reparameterisation trick, we write&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\theta &amp;amp;= R_\gamma(\zeta, x) \\
+z_i &amp;amp;= T_\phi(\epsilon, x_i)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;for some transformations &lt;span class="math inline"&gt;\(R_\gamma\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(T_\phi\)&lt;/span&gt; and random variables &lt;span class="math inline"&gt;\(\zeta\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\epsilon\)&lt;/span&gt; so that the output has the desired distributions.&lt;/p&gt;
+&lt;p&gt;Then the first term can be written as&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb E_{\zeta, \epsilon} \log p(x_i; f_{R_\gamma(\zeta, x)} (T_\phi(\epsilon, x_i))),\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;so that the gradients can be computed accordingly.&lt;/p&gt;
+&lt;p&gt;Again, one may use Monte-Carlo to approximate this expectation.&lt;/p&gt;
+&lt;h2 id="references"&gt;References&lt;/h2&gt;
+&lt;ul&gt;
+&lt;li&gt;Attias, Hagai. "A variational Bayesian framework for graphical models." In Advances in neural information processing systems, pp. 209-215. 2000.&lt;/li&gt;
+&lt;li&gt;Bishop, Christopher M. Pattern recognition and machine learning. Springer. 2006.&lt;/li&gt;
+&lt;li&gt;Blei, David M., and Michael I. Jordan. “Variational Inference for Dirichlet Process Mixtures.” Bayesian Analysis 1, no. 1 (March 2006): 121–43. &lt;a href="https://doi.org/10.1214/06-BA104" class="uri"&gt;https://doi.org/10.1214/06-BA104&lt;/a&gt;.&lt;/li&gt;
+&lt;li&gt;Blei, David M., Andrew Y. Ng, and Michael I. Jordan. “Latent Dirichlet Allocation.” Journal of Machine Learning Research 3, no. Jan (2003): 993–1022.&lt;/li&gt;
+&lt;li&gt;Hofmann, Thomas. “Latent Semantic Models for Collaborative Filtering.” ACM Transactions on Information Systems 22, no. 1 (January 1, 2004): 89–115. &lt;a href="https://doi.org/10.1145/963770.963774" class="uri"&gt;https://doi.org/10.1145/963770.963774&lt;/a&gt;.&lt;/li&gt;
+&lt;li&gt;Hofmann, Thomas. "Learning the similarity of documents: An information-geometric approach to document retrieval and categorization." In Advances in neural information processing systems, pp. 914-920. 2000.&lt;/li&gt;
+&lt;li&gt;Hoffman, Matt, David M. Blei, Chong Wang, and John Paisley. “Stochastic Variational Inference.” ArXiv:1206.7051 [Cs, Stat], June 29, 2012. &lt;a href="http://arxiv.org/abs/1206.7051" class="uri"&gt;http://arxiv.org/abs/1206.7051&lt;/a&gt;.&lt;/li&gt;
+&lt;li&gt;Kingma, Diederik P., and Max Welling. “Auto-Encoding Variational Bayes.” ArXiv:1312.6114 [Cs, Stat], December 20, 2013. &lt;a href="http://arxiv.org/abs/1312.6114" class="uri"&gt;http://arxiv.org/abs/1312.6114&lt;/a&gt;.&lt;/li&gt;
+&lt;li&gt;Kurihara, Kenichi, Max Welling, and Nikos Vlassis. "Accelerated variational Dirichlet process mixtures." In Advances in neural information processing systems, pp. 761-768. 2007.&lt;/li&gt;
+&lt;li&gt;Sudderth, Erik Blaine. "Graphical models for visual object recognition and tracking." PhD diss., Massachusetts Institute of Technology, 2006.&lt;/li&gt;
+&lt;/ul&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/blog-feed.xml">
+ <title type="text">Discriminant analysis</title>
+ <id>posts/2019-01-03-discriminant-analysis.html</id>
+ <updated>2019-01-03T00:00:00Z</updated>
+ <link href="posts/2019-01-03-discriminant-analysis.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;In this post I talk about the theory and implementation of linear and quadratic discriminant analysis, classical methods in statistical learning.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Acknowledgement&lt;/strong&gt;. Various sources were of great help to my understanding of the subject, including Chapter 4 of &lt;a href="https://web.stanford.edu/~hastie/ElemStatLearn/"&gt;The Elements of Statistical Learning&lt;/a&gt;, &lt;a href="http://cs229.stanford.edu/notes/cs229-notes2.pdf"&gt;Stanford CS229 Lecture notes&lt;/a&gt;, and &lt;a href="https://github.com/scikit-learn/scikit-learn/blob/7389dba/sklearn/discriminant_analysis.py"&gt;the scikit-learn code&lt;/a&gt;. Research was done while working at KTH mathematics department.&lt;/p&gt;
+&lt;p&gt;&lt;em&gt;If you are reading on a mobile device, you may need to “request desktop site” for the equations to be properly displayed. This post is licensed under CC BY-SA and GNU FDL.&lt;/em&gt;&lt;/p&gt;
+&lt;h2 id="theory"&gt;Theory&lt;/h2&gt;
+&lt;p&gt;Quadratic discriminant analysis (QDA) is a classical classification algorithm. It assumes that the data is generated by Gaussian distributions, where each class has its own mean and covariance.&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[(x | y = i) \sim N(\mu_i, \Sigma_i).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;It also assumes a categorical class prior:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mathbb P(y = i) = \pi_i\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;The log of the posterior probability is thus&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\log \mathbb P(y = i | x) &amp;amp;= \log \mathbb P(x | y = i) + \log \mathbb P(y = i) + C\\
+&amp;amp;= - {1 \over 2} \log \det \Sigma_i - {1 \over 2} (x - \mu_i)^T \Sigma_i^{-1} (x - \mu_i) + \log \pi_i + C&amp;#39;, \qquad (0)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(C\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(C&amp;#39;\)&lt;/span&gt; are constants.&lt;/p&gt;
+&lt;p&gt;Thus the prediction is done by taking the argmax of the above formula over &lt;span class="math inline"&gt;\(i\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;In training, let &lt;span class="math inline"&gt;\(X\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(y\)&lt;/span&gt; be the input data, where &lt;span class="math inline"&gt;\(X\)&lt;/span&gt; is of shape &lt;span class="math inline"&gt;\(m \times n\)&lt;/span&gt;, and &lt;span class="math inline"&gt;\(y\)&lt;/span&gt; of shape &lt;span class="math inline"&gt;\(m\)&lt;/span&gt;. We adopt the convention that each row of &lt;span class="math inline"&gt;\(X\)&lt;/span&gt; is a sample &lt;span class="math inline"&gt;\(x^{(i)T}\)&lt;/span&gt;. So there are &lt;span class="math inline"&gt;\(m\)&lt;/span&gt; samples and &lt;span class="math inline"&gt;\(n\)&lt;/span&gt; features. Denote by &lt;span class="math inline"&gt;\(m_i = \#\{j: y_j = i\}\)&lt;/span&gt; the number of samples in class &lt;span class="math inline"&gt;\(i\)&lt;/span&gt;, and let &lt;span class="math inline"&gt;\(n_c\)&lt;/span&gt; be the number of classes.&lt;/p&gt;
+&lt;p&gt;We estimate &lt;span class="math inline"&gt;\(\mu_i\)&lt;/span&gt; by the sample means, and &lt;span class="math inline"&gt;\(\pi_i\)&lt;/span&gt; by the frequencies:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\mu_i &amp;amp;:= {1 \over m_i} \sum_{j: y_j = i} x^{(j)}, \\
+\pi_i &amp;amp;:= \mathbb P(y = i) = {m_i \over m}.
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
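&lt;p&gt;These estimates can be written down directly in numpy. A minimal sketch on a made-up toy dataset (the variable names are mine):&lt;/p&gt;

```python
import numpy as np

# Hypothetical toy data: m = 6 samples, n = 2 features, two classes.
X = np.array([[0., 0.], [1., 0.], [0., 1.],
              [4., 4.], [5., 4.], [4., 5.]])
y = np.array([0, 0, 0, 1, 1, 1])

classes = np.unique(y)
# mu[i] is the sample mean of class i; pi[i] is its empirical frequency.
mu = np.array([X[y == i].mean(axis=0) for i in classes])
pi = np.array([np.mean(y == i) for i in classes])
```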
+&lt;p&gt;Linear discriminant analysis (LDA) is a specialisation of QDA: it assumes all classes share the same covariance, i.e. &lt;span class="math inline"&gt;\(\Sigma_i = \Sigma\)&lt;/span&gt; for all &lt;span class="math inline"&gt;\(i\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Gaussian Naive Bayes is a different specialisation of QDA: it assumes that all &lt;span class="math inline"&gt;\(\Sigma_i\)&lt;/span&gt; are diagonal, since all the features are assumed to be independent within each class.&lt;/p&gt;
+&lt;h3 id="qda"&gt;QDA&lt;/h3&gt;
+&lt;p&gt;We look at QDA.&lt;/p&gt;
+&lt;p&gt;We estimate &lt;span class="math inline"&gt;\(\Sigma_i\)&lt;/span&gt; by the sample covariance of each class:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\Sigma_i &amp;amp;= {1 \over m_i - 1} \sum_{j: y_j = i} \hat x^{(j)} \hat x^{(j)T}.
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(\hat x^{(j)} = x^{(j)} - \mu_{y_j}\)&lt;/span&gt; are the centred &lt;span class="math inline"&gt;\(x^{(j)}\)&lt;/span&gt;. Plugging this into (0) we are done.&lt;/p&gt;
+&lt;p&gt;There are two problems that can break the algorithm. First, if one of the &lt;span class="math inline"&gt;\(m_i\)&lt;/span&gt; is &lt;span class="math inline"&gt;\(1\)&lt;/span&gt;, then &lt;span class="math inline"&gt;\(\Sigma_i\)&lt;/span&gt; is ill-defined. Second, one of &lt;span class="math inline"&gt;\(\Sigma_i\)&lt;/span&gt;'s might be singular.&lt;/p&gt;
+&lt;p&gt;In either case, there is no way around it, and the implementation should throw an exception.&lt;/p&gt;
+&lt;p&gt;This won't be a problem for LDA, though, unless there is only one sample per class.&lt;/p&gt;
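&lt;p&gt;The estimation and prediction steps above can be sketched in a few lines of numpy. This is a minimal illustration on made-up data, not the scikit-learn implementation:&lt;/p&gt;

```python
import numpy as np

# Made-up data: four samples per class so each Sigma_i is well-defined.
X = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.],
              [4., 4.], [5., 4.], [4., 5.], [5., 5.]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
classes = np.unique(y)

mu = np.array([X[y == i].mean(axis=0) for i in classes])
pi = np.array([np.mean(y == i) for i in classes])
# Per-class covariance; np.cov applies Bessel's correction 1 / (m_i - 1).
sigmas = [np.cov(X[y == i].T) for i in classes]

def qda_scores(x):
    # Log-posterior of each class up to an additive constant, as in (0).
    scores = []
    for i in classes:
        d = x - mu[i]
        sign, logdet = np.linalg.slogdet(sigmas[i])
        scores.append(-0.5 * logdet
                      - 0.5 * d @ np.linalg.solve(sigmas[i], d)
                      + np.log(pi[i]))
    return np.array(scores)

def qda_predict(x):
    return classes[np.argmax(qda_scores(x))]
```

For real use one would raise an exception when some class covariance is ill-defined or singular, as discussed above.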
+&lt;h3 id="vanilla-lda"&gt;Vanilla LDA&lt;/h3&gt;
+&lt;p&gt;Now let us look at LDA.&lt;/p&gt;
+&lt;p&gt;Since all classes share the same covariance, we estimate &lt;span class="math inline"&gt;\(\Sigma\)&lt;/span&gt; using sample variance&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+\Sigma &amp;amp;= {1 \over m - n_c} \sum_j \hat x^{(j)} \hat x^{(j)T},
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(\hat x^{(j)} = x^{(j)} - \mu_{y_j}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\({1 \over m - n_c}\)&lt;/span&gt; comes from &lt;a href="https://en.wikipedia.org/wiki/Bessel%27s_correction"&gt;Bessel's Correction&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;Let us write down the decision function (0). We can remove the first term on the right hand side, since all &lt;span class="math inline"&gt;\(\Sigma_i\)&lt;/span&gt; are the same, and we only care about argmax of that equation. Thus it becomes&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[- {1 \over 2} (x - \mu_i)^T \Sigma^{-1} (x - \mu_i) + \log\pi_i. \qquad (1)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Notice that we have just avoided the problem of computing &lt;span class="math inline"&gt;\(\log \det \Sigma\)&lt;/span&gt; when &lt;span class="math inline"&gt;\(\Sigma\)&lt;/span&gt; is singular.&lt;/p&gt;
+&lt;p&gt;But how about &lt;span class="math inline"&gt;\(\Sigma^{-1}\)&lt;/span&gt;?&lt;/p&gt;
+&lt;p&gt;We sidestep this problem by using the pseudoinverse of &lt;span class="math inline"&gt;\(\Sigma\)&lt;/span&gt; instead. This can be seen as applying a linear transformation to &lt;span class="math inline"&gt;\(X\)&lt;/span&gt; to turn its covariance matrix to identity. And thus the model becomes a sort of a nearest neighbour classifier.&lt;/p&gt;
+&lt;h3 id="nearest-neighbour-classifier"&gt;Nearest neighbour classifier&lt;/h3&gt;
+&lt;p&gt;More specifically, we want to transform the first term of (0) to a norm to get a classifier based on nearest neighbour modulo &lt;span class="math inline"&gt;\(\log \pi_i\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[- {1 \over 2} \|A(x - \mu_i)\|^2 + \log\pi_i\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;To compute &lt;span class="math inline"&gt;\(A\)&lt;/span&gt;, we denote&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[X_c = X - M,\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where the &lt;span class="math inline"&gt;\(i\)&lt;/span&gt;th row of &lt;span class="math inline"&gt;\(M\)&lt;/span&gt; is &lt;span class="math inline"&gt;\(\mu_{y_i}^T\)&lt;/span&gt;, the mean of the class &lt;span class="math inline"&gt;\(x^{(i)}\)&lt;/span&gt; belongs to, so that &lt;span class="math inline"&gt;\(\Sigma = {1 \over m - n_c} X_c^T X_c\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Let&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[{1 \over \sqrt{m - n_c}} X_c = U_x \Sigma_x V_x^T\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;be the SVD of &lt;span class="math inline"&gt;\({1 \over \sqrt{m - n_c}}X_c\)&lt;/span&gt;. Let &lt;span class="math inline"&gt;\(D_x = \text{diag} (s_1, ..., s_r)\)&lt;/span&gt; be the diagonal matrix with all the nonzero singular values, and rewrite &lt;span class="math inline"&gt;\(V_x\)&lt;/span&gt; as an &lt;span class="math inline"&gt;\(n \times r\)&lt;/span&gt; matrix consisting of the first &lt;span class="math inline"&gt;\(r\)&lt;/span&gt; columns of &lt;span class="math inline"&gt;\(V_x\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Then with an abuse of notation, the pseudoinverse of &lt;span class="math inline"&gt;\(\Sigma\)&lt;/span&gt; is&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\Sigma^{-1} = V_x D_x^{-2} V_x^T.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;So we just need to make &lt;span class="math inline"&gt;\(A = D_x^{-1} V_x^T\)&lt;/span&gt;. When it comes to prediction, just transform &lt;span class="math inline"&gt;\(x\)&lt;/span&gt; with &lt;span class="math inline"&gt;\(A\)&lt;/span&gt;, and find the nearest centroid &lt;span class="math inline"&gt;\(A \mu_i\)&lt;/span&gt; (again, modulo &lt;span class="math inline"&gt;\(\log \pi_i\)&lt;/span&gt;) and label the input with &lt;span class="math inline"&gt;\(i\)&lt;/span&gt;.&lt;/p&gt;
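&lt;p&gt;A sketch of this SVD route, following the notation of the text (the toy data and the rank threshold are my own choices):&lt;/p&gt;

```python
import numpy as np

X = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.],
              [4., 4.], [5., 4.], [4., 5.], [5., 5.]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
classes = np.unique(y)
m, n = X.shape
nc = len(classes)

mu = np.array([X[y == i].mean(axis=0) for i in classes])
pi = np.array([np.mean(y == i) for i in classes])
Xc = X - mu[y]                         # centre each row by its class mean

# SVD of X_c / sqrt(m - n_c); keep the r nonzero singular values.
_, s, Vt = np.linalg.svd(Xc / np.sqrt(m - nc), full_matrices=False)
r = int(np.sum(np.greater(s, 1e-12)))
A = Vt[:r] / s[:r, None]               # A = D_x^{-1} V_x^T

def predict(x):
    # Nearest transformed centroid A mu_i, modulo log pi_i.
    d2 = np.sum((A @ (x - mu).T).T ** 2, axis=1)
    return classes[np.argmax(-0.5 * d2 + np.log(pi))]
```

One can check that <code>A</code> indeed whitens the data: conjugating the covariance estimate by <code>A</code> gives the identity on the rank subspace.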
+&lt;h3 id="dimensionality-reduction"&gt;Dimensionality reduction&lt;/h3&gt;
+&lt;p&gt;We can further simplify the prediction by dimensionality reduction. Assume &lt;span class="math inline"&gt;\(n_c \le n\)&lt;/span&gt;. Then the centroids span an affine space of dimension &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; which is at most &lt;span class="math inline"&gt;\(n_c - 1\)&lt;/span&gt;. So what we can do is to project both the transformed sample &lt;span class="math inline"&gt;\(Ax\)&lt;/span&gt; and centroids &lt;span class="math inline"&gt;\(A\mu_i\)&lt;/span&gt; to the linear subspace parallel to the affine space, and do the nearest neighbour classification there.&lt;/p&gt;
+&lt;p&gt;So we can perform SVD on the matrix &lt;span class="math inline"&gt;\((M - \bar x) V_x D_x^{-1}\)&lt;/span&gt; where &lt;span class="math inline"&gt;\(\bar x\)&lt;/span&gt;, a row vector, is the sample mean of all data i.e. average of rows of &lt;span class="math inline"&gt;\(X\)&lt;/span&gt;:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[(M - \bar x) V_x D_x^{-1} = U_m \Sigma_m V_m^T.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Again, we let &lt;span class="math inline"&gt;\(V_m\)&lt;/span&gt; be the &lt;span class="math inline"&gt;\(r \times p\)&lt;/span&gt; matrix by keeping the first &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; columns of &lt;span class="math inline"&gt;\(V_m\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;The projection operator is thus &lt;span class="math inline"&gt;\(V_m\)&lt;/span&gt;. And so the final transformation is &lt;span class="math inline"&gt;\(V_m^T D_x^{-1} V_x^T\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;There is no reason to stop here, and we can set &lt;span class="math inline"&gt;\(p\)&lt;/span&gt; even smaller, which will result in a lossy compression / regularisation equivalent to doing &lt;a href="https://en.wikipedia.org/wiki/Principal_component_analysis"&gt;principal component analysis&lt;/a&gt; on &lt;span class="math inline"&gt;\((M - \bar x) V_x D_x^{-1}\)&lt;/span&gt;.&lt;/p&gt;
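&lt;p&gt;The reduction step can be sketched as follows; the names follow the text and the data is again a made-up example:&lt;/p&gt;

```python
import numpy as np

X = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.],
              [4., 4.], [5., 4.], [4., 5.], [5., 5.]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
classes = np.unique(y)
m, n = X.shape
nc = len(classes)
mu = np.array([X[y == i].mean(axis=0) for i in classes])
Xc = X - mu[y]

_, s, Vxt = np.linalg.svd(Xc / np.sqrt(m - nc), full_matrices=False)
r = int(np.sum(np.greater(s, 1e-12)))
DinvVxt = Vxt[:r] / s[:r, None]            # D_x^{-1} V_x^T

M = mu[y]                                   # row j is mu_{y_j}^T
xbar = X.mean(axis=0)
# SVD of (M - xbar) V_x D_x^{-1}; its rank p is at most n_c - 1.
_, sm, Vmt = np.linalg.svd((M - xbar) @ DinvVxt.T, full_matrices=False)
p = int(np.sum(np.greater(sm, 1e-12)))
B = Vmt[:p] @ DinvVxt                       # final transform V_m^T D_x^{-1} V_x^T
```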
+&lt;p&gt;Note that as of 2019-01-04, in the &lt;a href="https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/discriminant_analysis.py"&gt;scikit-learn implementation of LDA&lt;/a&gt;, the prediction is done without any lossy compression, even if the parameter &lt;code&gt;n_components&lt;/code&gt; is set to be smaller than dimension of the affine space spanned by the centroids. In other words, the prediction does not change regardless of &lt;code&gt;n_components&lt;/code&gt;.&lt;/p&gt;
+&lt;h3 id="fisher-discriminant-analysis"&gt;Fisher discriminant analysis&lt;/h3&gt;
+&lt;p&gt;The Fisher discriminant analysis involves finding an &lt;span class="math inline"&gt;\(n\)&lt;/span&gt;-dimensional vector &lt;span class="math inline"&gt;\(a\)&lt;/span&gt; that maximises between-class covariance with respect to within-class covariance:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[{a^T M_c^T M_c a \over a^T X_c^T X_c a},\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(M_c = M - \bar x\)&lt;/span&gt; is the centred sample mean matrix.&lt;/p&gt;
+&lt;p&gt;As it turns out, this is (almost) equivalent to the derivation above, modulo a constant. In particular, &lt;span class="math inline"&gt;\(a = c V_x D_x^{-1} V_m\)&lt;/span&gt; where &lt;span class="math inline"&gt;\(p = 1\)&lt;/span&gt;, for an arbitrary constant &lt;span class="math inline"&gt;\(c\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;To see this, we can first multiply the denominator with a constant &lt;span class="math inline"&gt;\({1 \over m - n_c}\)&lt;/span&gt; so that the matrix in the denominator becomes the covariance estimate &lt;span class="math inline"&gt;\(\Sigma\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;We decompose &lt;span class="math inline"&gt;\(a\)&lt;/span&gt;: &lt;span class="math inline"&gt;\(a = V_x D_x^{-1} b + \tilde V_x \tilde b\)&lt;/span&gt;, where &lt;span class="math inline"&gt;\(\tilde V_x\)&lt;/span&gt; consists of column vectors orthogonal to the column space of &lt;span class="math inline"&gt;\(V_x\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;We ignore the second term in the decomposition. In other words, we only consider &lt;span class="math inline"&gt;\(a\)&lt;/span&gt; in the column space of &lt;span class="math inline"&gt;\(V_x\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Then the problem is to find an &lt;span class="math inline"&gt;\(r\)&lt;/span&gt;-dimensional vector &lt;span class="math inline"&gt;\(b\)&lt;/span&gt; to maximise&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[{b^T (M_c V_x D_x^{-1})^T (M_c V_x D_x^{-1}) b \over b^T b}.\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;This is the problem of principal component analysis, and so &lt;span class="math inline"&gt;\(b\)&lt;/span&gt; is the first column of &lt;span class="math inline"&gt;\(V_m\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Therefore, the solution to Fisher discriminant analysis is &lt;span class="math inline"&gt;\(a = c V_x D_x^{-1} V_m\)&lt;/span&gt; with &lt;span class="math inline"&gt;\(p = 1\)&lt;/span&gt;.&lt;/p&gt;
+&lt;h3 id="linear-model"&gt;Linear model&lt;/h3&gt;
+&lt;p&gt;The model is called linear discriminant analysis because it is a linear model. To see this, let &lt;span class="math inline"&gt;\(B = V_m^T D_x^{-1} V_x^T\)&lt;/span&gt; be the matrix of transformation. Now we are comparing&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[- {1 \over 2} \| B x - B \mu_k\|^2 + \log \pi_k\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;across all &lt;span class="math inline"&gt;\(k\)&lt;/span&gt;s. Expanding the norm and removing the common term &lt;span class="math inline"&gt;\(\|B x\|^2\)&lt;/span&gt;, we see a linear form:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\mu_k^T B^T B x - {1 \over 2} \|B \mu_k\|^2 + \log\pi_k\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;So, writing &lt;span class="math inline"&gt;\(K\)&lt;/span&gt; for the matrix whose &lt;span class="math inline"&gt;\(k\)&lt;/span&gt;th row is &lt;span class="math inline"&gt;\(\mu_k^T\)&lt;/span&gt;, the prediction for &lt;span class="math inline"&gt;\(X_{\text{new}}\)&lt;/span&gt; is&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\text{argmax}_{\text{axis}=0} \left(K B^T B X_{\text{new}}^T - {1 \over 2} \|K B^T\|_{\text{axis}=1}^2 + \log \pi\right)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;thus the decision boundaries are linear.&lt;/p&gt;
+&lt;p&gt;This is how scikit-learn implements LDA, by inheriting from &lt;code&gt;LinearClassifierMixin&lt;/code&gt; and redirecting the classification there.&lt;/p&gt;
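&lt;p&gt;As a sanity check, the linear form can be assembled explicitly. The sketch below skips the dimensionality-reduction step (so &lt;span class="math inline"&gt;\(B = D_x^{-1} V_x^T\)&lt;/span&gt;) and computes the linear scores on the toy data:&lt;/p&gt;

```python
import numpy as np

X = np.array([[0., 0.], [1., 0.], [0., 1.], [1., 1.],
              [4., 4.], [5., 4.], [4., 5.], [5., 5.]])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
classes = np.unique(y)
m, n = X.shape
nc = len(classes)
mu = np.array([X[y == i].mean(axis=0) for i in classes])
pi = np.array([np.mean(y == i) for i in classes])
Xc = X - mu[y]

_, s, Vxt = np.linalg.svd(Xc / np.sqrt(m - nc), full_matrices=False)
r = int(np.sum(np.greater(s, 1e-12)))
B = Vxt[:r] / s[:r, None]        # skipping the reduction step: B = D_x^{-1} V_x^T

# Linear decision function: row k of coef is mu_k^T B^T B.
Bmu = (B @ mu.T).T               # the transformed centroids B mu_k, as rows
coef = Bmu @ B
intercept = -0.5 * np.sum(Bmu ** 2, axis=1) + np.log(pi)
scores = X @ coef.T + intercept  # linear in the input
pred = classes[np.argmax(scores, axis=1)]
```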
+&lt;h2 id="implementation"&gt;Implementation&lt;/h2&gt;
+&lt;p&gt;This is where things get interesting. How do I validate my understanding of the theory? By implementing and testing the algorithm.&lt;/p&gt;
+&lt;p&gt;I try to implement it as closely as possible to the natural language / mathematical descriptions of the model, which means clarity over performance.&lt;/p&gt;
+&lt;p&gt;How about testing? Numerical experiments are harder to test than combinatorial / discrete algorithms in general because the output is less verifiable by hand. My shortcut solution to this problem is to test against output from the scikit-learn package.&lt;/p&gt;
+&lt;p&gt;It turned out to be harder than expected, as I had to dig into the code of scikit-learn whenever the outputs did not match. Their code is quite well-written though.&lt;/p&gt;
+&lt;p&gt;The result is &lt;a href="https://github.com/ycpei/machine-learning/tree/master/discriminant-analysis"&gt;here&lt;/a&gt;.&lt;/p&gt;
+&lt;h3 id="fun-facts-about-lda"&gt;Fun facts about LDA&lt;/h3&gt;
+&lt;p&gt;One property that can be used to test the LDA implementation is the fact that the scatter matrix &lt;span class="math inline"&gt;\(B(X - \bar x)^T (X - \bar x) B^T\)&lt;/span&gt; of the transformed centred sample is diagonal.&lt;/p&gt;
+&lt;p&gt;This can be derived by using another fun fact that the sum of the in-class scatter matrix and the between-class scatter matrix is the sample scatter matrix:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[X_c^T X_c + M_c^T M_c = (X - \bar x)^T (X - \bar x) = (X_c + M_c)^T (X_c + M_c).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;The verification is not very hard and left as an exercise.&lt;/p&gt;
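&lt;p&gt;Since the identity is purely algebraic, it is also easy to check numerically on any data:&lt;/p&gt;

```python
import numpy as np

# Any dataset works; the identity is algebraic.
X = np.random.default_rng(0).normal(size=(30, 4))
y = np.arange(30) % 3
classes = np.unique(y)

mu = np.array([X[y == i].mean(axis=0) for i in classes])
xbar = X.mean(axis=0)
Xc = X - mu[y]          # in-class deviations
Mc = mu[y] - xbar       # between-class deviations

# The cross terms vanish because in-class deviations sum to zero per class.
lhs = Xc.T @ Xc + Mc.T @ Mc
rhs = (X - xbar).T @ (X - xbar)
```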
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/blog-feed.xml">
+ <title type="text">Shapley, LIME and SHAP</title>
+ <id>posts/2018-12-02-lime-shapley.html</id>
+ <updated>2018-12-02T00:00:00Z</updated>
+ <link href="posts/2018-12-02-lime-shapley.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;In this post I explain LIME (Ribeiro et. al. 2016), the Shapley values (Shapley, 1953) and the SHAP values (Strumbelj-Kononenko, 2014; Lundberg-Lee, 2017).&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Acknowledgement&lt;/strong&gt;. Thanks to Josef Lindman Hörnlund for bringing the LIME and SHAP papers to my attention. The research was done while working at KTH mathematics department.&lt;/p&gt;
+&lt;p&gt;&lt;em&gt;If you are reading on a mobile device, you may need to “request desktop site” for the equations to be properly displayed. This post is licensed under CC BY-SA and GNU FDL.&lt;/em&gt;&lt;/p&gt;
+&lt;h2 id="shapley-values"&gt;Shapley values&lt;/h2&gt;
+&lt;p&gt;A coalitional game &lt;span class="math inline"&gt;\((v, N)\)&lt;/span&gt; of &lt;span class="math inline"&gt;\(n\)&lt;/span&gt; players involves&lt;/p&gt;
+&lt;ul&gt;
+&lt;li&gt;The set &lt;span class="math inline"&gt;\(N = \{1, 2, ..., n\}\)&lt;/span&gt; that represents the players.&lt;/li&gt;
+&lt;li&gt;A function &lt;span class="math inline"&gt;\(v: 2^N \to \mathbb R\)&lt;/span&gt;, where &lt;span class="math inline"&gt;\(v(S)\)&lt;/span&gt; is the worth of coalition &lt;span class="math inline"&gt;\(S \subset N\)&lt;/span&gt;.&lt;/li&gt;
+&lt;/ul&gt;
+&lt;p&gt;The Shapley values &lt;span class="math inline"&gt;\(\phi_i(v)\)&lt;/span&gt; of such a game specify a fair way to distribute the total worth &lt;span class="math inline"&gt;\(v(N)\)&lt;/span&gt; to the players. It is defined as (in the following, for a set &lt;span class="math inline"&gt;\(S \subset N\)&lt;/span&gt; we use the convention &lt;span class="math inline"&gt;\(s = |S|\)&lt;/span&gt; to be the number of elements of set &lt;span class="math inline"&gt;\(S\)&lt;/span&gt; and the shorthand &lt;span class="math inline"&gt;\(S - i := S \setminus \{i\}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(S + i := S \cup \{i\}\)&lt;/span&gt;)&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\phi_i(v) = \sum_{S: i \in S} {(n - s)! (s - 1)! \over n!} (v(S) - v(S - i)).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;It is not hard to see that &lt;span class="math inline"&gt;\(\phi_i(v)\)&lt;/span&gt; can be viewed as an expectation:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\phi_i(v) = \mathbb E_{S \sim \nu_i} (v(S) - v(S - i))\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(\nu_i(S) = n^{-1} {n - 1 \choose s - 1}^{-1} 1_{i \in S}\)&lt;/span&gt;, that is, first pick the size &lt;span class="math inline"&gt;\(s\)&lt;/span&gt; uniformly from &lt;span class="math inline"&gt;\(\{1, 2, ..., n\}\)&lt;/span&gt;, then pick &lt;span class="math inline"&gt;\(S\)&lt;/span&gt; uniformly from the subsets of &lt;span class="math inline"&gt;\(N\)&lt;/span&gt; that have size &lt;span class="math inline"&gt;\(s\)&lt;/span&gt; and contain &lt;span class="math inline"&gt;\(i\)&lt;/span&gt;.&lt;/p&gt;
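&lt;p&gt;For a concrete feel, here is a direct computation of the Shapley values of a small made-up game (the worth function &lt;span class="math inline"&gt;\(v\)&lt;/span&gt; below is hypothetical):&lt;/p&gt;

```python
import numpy as np
from itertools import combinations
from math import factorial

n = 3

def v(S):
    # Hypothetical game: each player is worth 1, plus a synergy
    # bonus of 2 when players 0 and 1 are both present.
    S = frozenset(S)
    bonus = 2.0 if frozenset({0, 1}).issubset(S) else 0.0
    return len(S) + bonus

def shapley(v, n):
    phi = np.zeros(n)
    for i in range(n):
        for size in range(1, n + 1):
            for S in combinations(range(n), size):
                if i in S:
                    S = frozenset(S)
                    weight = (factorial(n - size) * factorial(size - 1)
                              / factorial(n))
                    phi[i] += weight * (v(S) - v(S - {i}))
    return phi

phi = shapley(v, n)
```

The synergy bonus is split equally between players 0 and 1, and the Efficiency property holds.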
+&lt;p&gt;The Shapley values satisfy some nice properties which are readily verified, including:&lt;/p&gt;
+&lt;ul&gt;
+&lt;li&gt;&lt;strong&gt;Efficiency&lt;/strong&gt;. &lt;span class="math inline"&gt;\(\sum_i \phi_i(v) = v(N) - v(\emptyset)\)&lt;/span&gt;.&lt;/li&gt;
+&lt;li&gt;&lt;strong&gt;Symmetry&lt;/strong&gt;. If for some &lt;span class="math inline"&gt;\(i, j \in N\)&lt;/span&gt;, for all &lt;span class="math inline"&gt;\(S \subset N\)&lt;/span&gt;, we have &lt;span class="math inline"&gt;\(v(S + i) = v(S + j)\)&lt;/span&gt;, then &lt;span class="math inline"&gt;\(\phi_i(v) = \phi_j(v)\)&lt;/span&gt;.&lt;/li&gt;
+&lt;li&gt;&lt;strong&gt;Null player&lt;/strong&gt;. If for some &lt;span class="math inline"&gt;\(i \in N\)&lt;/span&gt;, for all &lt;span class="math inline"&gt;\(S \subset N\)&lt;/span&gt;, we have &lt;span class="math inline"&gt;\(v(S + i) = v(S)\)&lt;/span&gt;, then &lt;span class="math inline"&gt;\(\phi_i(v) = 0\)&lt;/span&gt;.&lt;/li&gt;
+&lt;li&gt;&lt;strong&gt;Linearity&lt;/strong&gt;. &lt;span class="math inline"&gt;\(\phi_i\)&lt;/span&gt; is linear in games. That is &lt;span class="math inline"&gt;\(\phi_i(v) + \phi_i(w) = \phi_i(v + w)\)&lt;/span&gt;, where &lt;span class="math inline"&gt;\(v + w\)&lt;/span&gt; is defined by &lt;span class="math inline"&gt;\((v + w)(S) := v(S) + w(S)\)&lt;/span&gt;.&lt;/li&gt;
+&lt;/ul&gt;
+&lt;p&gt;In the literature, an added assumption &lt;span class="math inline"&gt;\(v(\emptyset) = 0\)&lt;/span&gt; is often given, in which case the Efficiency property is defined as &lt;span class="math inline"&gt;\(\sum_i \phi_i(v) = v(N)\)&lt;/span&gt;. Here I discard this assumption to avoid minor inconsistencies across different sources. For example, in the LIME paper, the local model is defined without an intercept, even though the underlying &lt;span class="math inline"&gt;\(v(\emptyset)\)&lt;/span&gt; may not be &lt;span class="math inline"&gt;\(0\)&lt;/span&gt;. In the SHAP paper, an intercept &lt;span class="math inline"&gt;\(\phi_0 = v(\emptyset)\)&lt;/span&gt; is added which fixes this problem when making connections to the Shapley values.&lt;/p&gt;
+&lt;p&gt;Conversely, according to Strumbelj-Kononenko (2010), it was shown in Shapley's original paper (Shapley, 1953) that these four properties together with &lt;span class="math inline"&gt;\(v(\emptyset) = 0\)&lt;/span&gt; defines the Shapley values.&lt;/p&gt;
+&lt;h2 id="lime"&gt;LIME&lt;/h2&gt;
+&lt;p&gt;LIME (Ribeiro et. al. 2016) is a model that offers a way to explain feature contributions of supervised learning models locally.&lt;/p&gt;
+&lt;p&gt;Let &lt;span class="math inline"&gt;\(f: X_1 \times X_2 \times ... \times X_n \to \mathbb R\)&lt;/span&gt; be a function. We can think of &lt;span class="math inline"&gt;\(f\)&lt;/span&gt; as a model, where &lt;span class="math inline"&gt;\(X_j\)&lt;/span&gt; is the space of the &lt;span class="math inline"&gt;\(j\)&lt;/span&gt;th feature. For example, in a language model, &lt;span class="math inline"&gt;\(X_j\)&lt;/span&gt; may correspond to the count of the &lt;span class="math inline"&gt;\(j\)&lt;/span&gt;th word in the vocabulary, i.e. the bag-of-words model.&lt;/p&gt;
+&lt;p&gt;The output may be something like housing price, or log-probability of something.&lt;/p&gt;
+&lt;p&gt;LIME tries to assign a value to each feature &lt;em&gt;locally&lt;/em&gt;. By locally, we mean that given a specific sample &lt;span class="math inline"&gt;\(x \in X := \prod_{i = 1}^n X_i\)&lt;/span&gt;, we want to fit a model around it.&lt;/p&gt;
+&lt;p&gt;More specifically, let &lt;span class="math inline"&gt;\(h_x: 2^N \to X\)&lt;/span&gt; be a function defined by&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[(h_x(S))_i =
+\begin{cases}
+x_i, &amp;amp; \text{if }i \in S; \\
+0, &amp;amp; \text{otherwise.}
+\end{cases}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;That is, &lt;span class="math inline"&gt;\(h_x(S)\)&lt;/span&gt; masks the features that are not in &lt;span class="math inline"&gt;\(S\)&lt;/span&gt;, or in other words, we are perturbing the sample &lt;span class="math inline"&gt;\(x\)&lt;/span&gt;. Specifically, &lt;span class="math inline"&gt;\(h_x(N) = x\)&lt;/span&gt;. Alternatively, the &lt;span class="math inline"&gt;\(0\)&lt;/span&gt; in the "otherwise" case can be replaced by some kind of default value (see the section titled SHAP in this post).&lt;/p&gt;
+&lt;p&gt;For a set &lt;span class="math inline"&gt;\(S \subset N\)&lt;/span&gt;, let us denote by &lt;span class="math inline"&gt;\(1_S \in \{0, 1\}^n\)&lt;/span&gt; the &lt;span class="math inline"&gt;\(n\)&lt;/span&gt;-bit string whose &lt;span class="math inline"&gt;\(k\)&lt;/span&gt;th bit is &lt;span class="math inline"&gt;\(1\)&lt;/span&gt; if and only if &lt;span class="math inline"&gt;\(k \in S\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Basically, LIME samples &lt;span class="math inline"&gt;\(S_1, S_2, ..., S_m \subset N\)&lt;/span&gt; to obtain a set of perturbed samples &lt;span class="math inline"&gt;\(x_i = h_x(S_i)\)&lt;/span&gt; in the &lt;span class="math inline"&gt;\(X\)&lt;/span&gt; space, and then fits a linear model &lt;span class="math inline"&gt;\(g\)&lt;/span&gt; using &lt;span class="math inline"&gt;\(1_{S_i}\)&lt;/span&gt; as the input samples and &lt;span class="math inline"&gt;\(f(h_x(S_i))\)&lt;/span&gt; as the output samples:&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Problem&lt;/strong&gt;(LIME). Find &lt;span class="math inline"&gt;\(w = (w_1, w_2, ..., w_n)\)&lt;/span&gt; that minimises&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sum_i (w \cdot 1_{S_i} - f(h_x(S_i)))^2 \pi_x(h_x(S_i))\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(\pi_x(x&amp;#39;)\)&lt;/span&gt; is a function that penalises &lt;span class="math inline"&gt;\(x&amp;#39;\)&lt;/span&gt;s that are far away from &lt;span class="math inline"&gt;\(x\)&lt;/span&gt;. In the LIME paper the Gaussian kernel was used:&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\pi_x(x&amp;#39;) = \exp\left({- \|x - x&amp;#39;\|^2 \over \sigma^2}\right).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Then &lt;span class="math inline"&gt;\(w_i\)&lt;/span&gt; represents the importance of the &lt;span class="math inline"&gt;\(i\)&lt;/span&gt;th feature.&lt;/p&gt;
+&lt;p&gt;The LIME model has a more general framework, but the specific model considered in the paper is the one described above, with a Lasso for feature selection.&lt;/p&gt;
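&lt;p&gt;A bare-bones sketch of this procedure, with plain weighted least squares in place of the Lasso; the black-box &lt;span class="math inline"&gt;\(f\)&lt;/span&gt; and the kernel width are made up:&lt;/p&gt;

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
x = np.array([1.0, 2.0, 3.0, 4.0])    # the sample to explain

def f(z):
    # A hypothetical black-box model; feature 3 is irrelevant.
    return 3 * z[0] + z[1] ** 2 - z[2]

# Sample subsets S_i as binary masks; h_x(S) zeroes features outside S.
masks = rng.integers(0, 2, size=(200, n)).astype(float)
perturbed = masks * x
outputs = np.array([f(z) for z in perturbed])

# Gaussian kernel pi_x, weighting perturbed samples by proximity to x.
sigma = 2.0
weights = np.exp(-np.sum((perturbed - x) ** 2, axis=1) / sigma ** 2)

# Weighted least squares for w minimising
# sum_i (w . 1_{S_i} - f(h_x(S_i)))^2 pi_x(h_x(S_i)).
sw = np.sqrt(weights)
w, *_ = np.linalg.lstsq(sw[:, None] * masks, sw * outputs, rcond=None)
```

On binary masks this particular <code>f</code> happens to be exactly linear, so the fit recovers the coefficients exactly whatever the kernel.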
+&lt;p&gt;&lt;strong&gt;Remark&lt;/strong&gt;. One difference between our account here and the one in the LIME paper is: the dimension of the data space may differ from &lt;span class="math inline"&gt;\(n\)&lt;/span&gt; (see Section 3.1 of that paper). But in the case of text data, they do use bag-of-words (our &lt;span class="math inline"&gt;\(X\)&lt;/span&gt;) for an “intermediate” representation. So my understanding is, in their context, there is an “original” data space (let’s call it &lt;span class="math inline"&gt;\(X&amp;#39;\)&lt;/span&gt;). And there is a one-to-one correspondence between &lt;span class="math inline"&gt;\(X&amp;#39;\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(X\)&lt;/span&gt; (let’s call it &lt;span class="math inline"&gt;\(r: X&amp;#39; \to X\)&lt;/span&gt;), so that given a sample &lt;span class="math inline"&gt;\(x&amp;#39; \in X&amp;#39;\)&lt;/span&gt;, we can compute the output of &lt;span class="math inline"&gt;\(S\)&lt;/span&gt; in the local model with &lt;span class="math inline"&gt;\(f(r^{-1}(h_{r(x&amp;#39;)}(S)))\)&lt;/span&gt;. For example, when &lt;span class="math inline"&gt;\(X\)&lt;/span&gt; is the bag-of-words space, &lt;span class="math inline"&gt;\(X&amp;#39;\)&lt;/span&gt; may be the embedding vector space, so that &lt;span class="math inline"&gt;\(r(x&amp;#39;) = A^{-1} x&amp;#39;\)&lt;/span&gt;, where &lt;span class="math inline"&gt;\(A\)&lt;/span&gt; is the word embedding matrix. Therefore, without loss of generality, we assume the input space to be &lt;span class="math inline"&gt;\(X\)&lt;/span&gt; which is of dimension &lt;span class="math inline"&gt;\(n\)&lt;/span&gt;.&lt;/p&gt;
+&lt;h2 id="shapley-values-and-lime"&gt;Shapley values and LIME&lt;/h2&gt;
+&lt;p&gt;The connection between the Shapley values and LIME is noted in Lundberg-Lee (2017), but the underlying connection goes back to 1988 (Charnes et. al.).&lt;/p&gt;
+&lt;p&gt;To see the connection, we need to modify LIME a bit.&lt;/p&gt;
+&lt;p&gt;First, we need to make LIME less efficient by considering &lt;em&gt;all&lt;/em&gt; the &lt;span class="math inline"&gt;\(2^n\)&lt;/span&gt; subsets instead of the &lt;span class="math inline"&gt;\(m\)&lt;/span&gt; samples &lt;span class="math inline"&gt;\(S_1, S_2, ..., S_{m}\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Then we need to relax the definition of &lt;span class="math inline"&gt;\(\pi_x\)&lt;/span&gt;. It no longer needs to penalise samples that are far away from &lt;span class="math inline"&gt;\(x\)&lt;/span&gt;. In fact, we will see later that the choice of &lt;span class="math inline"&gt;\(\pi_x(x&amp;#39;)\)&lt;/span&gt; that yields the Shapley values is high when &lt;span class="math inline"&gt;\(x&amp;#39;\)&lt;/span&gt; is very close or very far away from &lt;span class="math inline"&gt;\(x\)&lt;/span&gt;, and low otherwise. We further add the restriction that &lt;span class="math inline"&gt;\(\pi_x(h_x(S))\)&lt;/span&gt; only depends on the size of &lt;span class="math inline"&gt;\(S\)&lt;/span&gt;, thus we rewrite it as &lt;span class="math inline"&gt;\(q(s)\)&lt;/span&gt; instead.&lt;/p&gt;
+&lt;p&gt;We also denote &lt;span class="math inline"&gt;\(v(S) := f(h_x(S))\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(w(S) = \sum_{i \in S} w_i\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Finally, we add the Efficiency property as a constraint: &lt;span class="math inline"&gt;\(\sum_{i = 1}^n w_i = f(x) - f(h_x(\emptyset)) = v(N) - v(\emptyset)\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Then the problem becomes a weighted linear regression:&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Problem&lt;/strong&gt;. Minimise &lt;span class="math inline"&gt;\(\sum_{S \subset N} (w(S) - v(S))^2 q(s)\)&lt;/span&gt; over &lt;span class="math inline"&gt;\(w\)&lt;/span&gt; subject to &lt;span class="math inline"&gt;\(w(N) = v(N) - v(\emptyset)\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Claim&lt;/strong&gt; (Charnes et. al. 1988). The solution to this problem is&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[w_i = {1 \over n} (v(N) - v(\emptyset)) + \left(\sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s)\right)^{-1} \sum_{S \subset N: i \in S} \left({n - s \over n} q(s) v(S) - {s - 1 \over n} q(s - 1) v(S - i)\right). \qquad (-1)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Specifically, if we choose&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[q(s) = c {n - 2 \choose s - 1}^{-1}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;for any constant &lt;span class="math inline"&gt;\(c\)&lt;/span&gt;, then &lt;span class="math inline"&gt;\(w_i = \phi_i(v)\)&lt;/span&gt; are the Shapley values.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Remark&lt;/strong&gt;. Don't worry about this specific choice of &lt;span class="math inline"&gt;\(q(s)\)&lt;/span&gt; when &lt;span class="math inline"&gt;\(s = 0\)&lt;/span&gt; or &lt;span class="math inline"&gt;\(n\)&lt;/span&gt;, because &lt;span class="math inline"&gt;\(q(0)\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(q(n)\)&lt;/span&gt; do not appear on the right hand side of (-1). Therefore they can be defined to be of any value. A common convention of the binomial coefficients is to set &lt;span class="math inline"&gt;\({\ell \choose k} = 0\)&lt;/span&gt; if &lt;span class="math inline"&gt;\(k &amp;lt; 0\)&lt;/span&gt; or &lt;span class="math inline"&gt;\(k &amp;gt; \ell\)&lt;/span&gt;, in which case &lt;span class="math inline"&gt;\(q(0) = q(n) = \infty\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;In Lundberg-Lee (2017), &lt;span class="math inline"&gt;\(c\)&lt;/span&gt; is chosen to be &lt;span class="math inline"&gt;\(1 / n\)&lt;/span&gt;, see Theorem 2 there.&lt;/p&gt;
+&lt;p&gt;In Charnes et. al. 1988, the &lt;span class="math inline"&gt;\(w_i\)&lt;/span&gt;s defined in (-1) are called the generalised Shapley values.&lt;/p&gt;
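&lt;p&gt;The claim is easy to check numerically: solve the constrained weighted regression with &lt;span class="math inline"&gt;\(q(s) = {n - 2 \choose s - 1}^{-1}\)&lt;/span&gt; via its KKT system (the sets &lt;span class="math inline"&gt;\(\emptyset\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(N\)&lt;/span&gt; are left out, per the remark above) and compare with the exact Shapley values. The game below is made up and has &lt;span class="math inline"&gt;\(v(\emptyset) = 0\)&lt;/span&gt;:&lt;/p&gt;

```python
import numpy as np
from itertools import combinations
from math import comb, factorial

n = 4
players = range(n)

def v(S):
    # Hypothetical game: additive worths plus a synergy between 0 and 2.
    S = frozenset(S)
    bonus = 3.0 if frozenset({0, 2}).issubset(S) else 0.0
    return sum(2.0 ** i for i in S) + bonus

# Exact Shapley values for reference.
phi = np.zeros(n)
for i in players:
    for size in range(1, n + 1):
        for S in combinations(players, size):
            if i in S:
                S = frozenset(S)
                phi[i] += (factorial(n - size) * factorial(size - 1)
                           / factorial(n)) * (v(S) - v(S - {i}))

# Weighted regression over all subsets with 1 to n - 1 elements.
subsets = [S for size in range(1, n) for S in combinations(players, size)]
Z = np.array([[1.0 if i in S else 0.0 for i in players] for S in subsets])
b = np.array([v(S) for S in subsets])
q = np.array([1.0 / comb(n - 2, len(S) - 1) for S in subsets])

# KKT system for: minimise sum_S q(s) (w(S) - v(S))^2  s.t.  w(N) = v(N).
A = Z.T @ (q[:, None] * Z)
ones = np.ones((n, 1))
K = np.block([[2 * A, ones], [ones.T, np.zeros((1, 1))]])
rhs = np.concatenate([2 * Z.T @ (q * b), [v(players)]])
w = np.linalg.solve(K, rhs)[:n]
```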
+&lt;p&gt;&lt;strong&gt;Proof&lt;/strong&gt;. The Lagrangian is&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[L(w, \lambda) = \sum_{S \subset N} (v(S) - w(S))^2 q(s) - \lambda(w(N) - v(N) + v(\emptyset)).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;and by making &lt;span class="math inline"&gt;\(\partial_{w_i} L(w, \lambda) = 0\)&lt;/span&gt; we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[{1 \over 2} \lambda = \sum_{S \subset N: i \in S} (w(S) - v(S)) q(s). \qquad (0)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Summing (0) over &lt;span class="math inline"&gt;\(i\)&lt;/span&gt; and dividing by &lt;span class="math inline"&gt;\(n\)&lt;/span&gt;, we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[{1 \over 2} \lambda = {1 \over n} \sum_i \sum_{S: i \in S} (w(S) q(s) - v(S) q(s)). \qquad (1)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;We examine each of the two terms on the right hand side.&lt;/p&gt;
+&lt;p&gt;Counting the terms involving &lt;span class="math inline"&gt;\(w_i\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(w_j\)&lt;/span&gt; for &lt;span class="math inline"&gt;\(j \neq i\)&lt;/span&gt;, and using &lt;span class="math inline"&gt;\(w(N) = v(N) - v(\emptyset)\)&lt;/span&gt; we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+&amp;amp;\sum_{S \subset N: i \in S} w(S) q(s) \\
+&amp;amp;= \sum_{s = 1}^n {n - 1 \choose s - 1} q(s) w_i + \sum_{j \neq i}\sum_{s = 2}^n {n - 2 \choose s - 2} q(s) w_j \\
+&amp;amp;= q(1) w_i + \sum_{s = 2}^n q(s) \left({n - 1 \choose s - 1} w_i + \sum_{j \neq i} {n - 2 \choose s - 2} w_j\right) \\
+&amp;amp;= q(1) w_i + \sum_{s = 2}^n \left({n - 2 \choose s - 1} w_i + {n - 2 \choose s - 2} (v(N) - v(\emptyset))\right) q(s) \\
+&amp;amp;= \sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s) w_i + \sum_{s = 2}^n {n - 2 \choose s - 2} q(s) (v(N) - v(\emptyset)). \qquad (2)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Summing (2) over &lt;span class="math inline"&gt;\(i\)&lt;/span&gt;, we obtain&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\begin{aligned}
+&amp;amp;\sum_i \sum_{S: i \in S} w(S) q(s)\\
+&amp;amp;= \sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s) (v(N) - v(\emptyset)) + \sum_{s = 2}^n n {n - 2 \choose s - 2} q(s) (v(N) - v(\emptyset))\\
+&amp;amp;= \sum_{s = 1}^n s{n - 1 \choose s - 1} q(s) (v(N) - v(\emptyset)). \qquad (3)
+\end{aligned}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;For the second term in (1), we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sum_i \sum_{S: i \in S} v(S) q(s) = \sum_{S \subset N} s v(S) q(s). \qquad (4)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Plugging (3) and (4) into (1), we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[{1 \over 2} \lambda = {1 \over n} \left(\sum_{S \subset N} s q(s) v(S) - \sum_{s = 1}^n s {n - 1 \choose s - 1} q(s) (v(N) - v(\emptyset))\right). \qquad (5)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Plugging (5) and (2) into (0) and solving for &lt;span class="math inline"&gt;\(w_i\)&lt;/span&gt;, we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[w_i = {1 \over n} (v(N) - v(\emptyset)) + \left(\sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s) \right)^{-1} \left( \sum_{S: i \in S} q(s) v(S) - {1 \over n} \sum_{S \subset N} s q(s) v(S) \right). \qquad (6)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;By splitting all subsets of &lt;span class="math inline"&gt;\(N\)&lt;/span&gt; into ones that contain &lt;span class="math inline"&gt;\(i\)&lt;/span&gt; and ones that do not, and pairing them up, we have&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[\sum_{S \subset N} s q(s) v(S) = \sum_{S: i \in S} (s q(s) v(S) + (s - 1) q(s - 1) v(S - i)).\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;Plugging this back into (6) we get the desired result. &lt;span class="math inline"&gt;\(\square\)&lt;/span&gt;&lt;/p&gt;
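+&lt;p&gt;As a sanity check on the formulas above, here is a small brute-force computation of Shapley values from the permutation-average definition (a hypothetical helper, not code from any of the cited papers); on a toy symmetric game the Efficiency property &lt;span class="math inline"&gt;\(\sum_i \phi_i(v) = v(N) - v(\emptyset)\)&lt;/span&gt; can be verified numerically:&lt;/p&gt;

```python
from itertools import combinations
from math import factorial

def shapley_values(n, v):
    # Brute-force Shapley values phi_i(v) for a game v on N = {0, ..., n-1},
    # where v maps a frozenset S to a real number.  Uses the standard weights
    # |S|! (n - |S| - 1)! / n! over subsets not containing i.
    phi = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        total = 0.0
        for s in range(n):
            weight = factorial(s) * factorial(n - s - 1) / factorial(n)
            for S in combinations(others, s):
                S = frozenset(S)
                total += weight * (v(S | {i}) - v(S))
        phi.append(total)
    return phi
```

+&lt;p&gt;For the toy game &lt;span class="math inline"&gt;\(v(S) = |S|^2\)&lt;/span&gt; with three players, symmetry forces each value to be &lt;span class="math inline"&gt;\(v(N)/3 = 3\)&lt;/span&gt;, and their sum equals &lt;span class="math inline"&gt;\(v(N) - v(\emptyset) = 9\)&lt;/span&gt;.&lt;/p&gt;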
+&lt;h2 id="shap"&gt;SHAP&lt;/h2&gt;
+&lt;p&gt;The paper that coined the term "SHAP values" (Lundberg-Lee 2017) is not clear in its definition of the "SHAP values" and their relation to LIME, so the following is my interpretation of their interpretation model, which coincides with a model studied in Strumbelj-Kononenko 2014.&lt;/p&gt;
+&lt;p&gt;Recall that we want to calculate feature contributions to a model &lt;span class="math inline"&gt;\(f\)&lt;/span&gt; at a sample &lt;span class="math inline"&gt;\(x\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Let &lt;span class="math inline"&gt;\(\mu\)&lt;/span&gt; be a probability density function over the input space &lt;span class="math inline"&gt;\(X = X_1 \times ... \times X_n\)&lt;/span&gt;. A natural choice would be the density that generates the data, or one that approximates such a density (e.g. the empirical distribution).&lt;/p&gt;
+&lt;p&gt;The feature contribution (SHAP value) is thus defined as the Shapley value &lt;span class="math inline"&gt;\(\phi_i(v)\)&lt;/span&gt;, where&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[v(S) = \mathbb E_{z \sim \mu} (f(z) | z_S = x_S). \qquad (7)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;So it is a conditional expectation where &lt;span class="math inline"&gt;\(z_i\)&lt;/span&gt; is clamped for &lt;span class="math inline"&gt;\(i \in S\)&lt;/span&gt;. In fact, the definition of feature contributions in this form predates Lundberg-Lee 2017. For example, it can be found in Strumbelj-Kononenko 2014.&lt;/p&gt;
+&lt;p&gt;One simplification is to assume the &lt;span class="math inline"&gt;\(n\)&lt;/span&gt; features are independent, thus &lt;span class="math inline"&gt;\(\mu = \mu_1 \times \mu_2 \times ... \times \mu_n\)&lt;/span&gt;. In this case, (7) becomes&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[v(S) = \mathbb E_{z_{N \setminus S} \sim \mu_{N \setminus S}} f(x_S, z_{N \setminus S}) \qquad (8)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;For example, Strumbelj-Kononenko (2010) considers this scenario where &lt;span class="math inline"&gt;\(\mu\)&lt;/span&gt; is the uniform distribution over &lt;span class="math inline"&gt;\(X\)&lt;/span&gt;, see Definition 4 there.&lt;/p&gt;
+&lt;p&gt;A further simplification is model linearity, which means &lt;span class="math inline"&gt;\(f\)&lt;/span&gt; is linear. In this case, (8) becomes&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[v(S) = f(x_S, \mathbb E_{\mu_{N \setminus S}} z_{N \setminus S}). \qquad (9)\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;It is worth noting that to make the modified LIME model considered in the previous section fall under the linear SHAP framework (9), we need to make two further specialisations, the first is rather cosmetic: we need to change the definition of &lt;span class="math inline"&gt;\(h_x(S)\)&lt;/span&gt; to&lt;/p&gt;
+&lt;p&gt;&lt;span class="math display"&gt;\[(h_x(S))_i =
+\begin{cases}
+x_i, &amp;amp; \text{if }i \in S; \\
+\mathbb E_{\mu_i} z_i, &amp;amp; \text{otherwise.}
+\end{cases}\]&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;But we also need to boldly assume the original &lt;span class="math inline"&gt;\(f\)&lt;/span&gt; to be linear, which in my view, defeats the purpose of interpretability, because linear models are interpretable by themselves.&lt;/p&gt;
+&lt;p&gt;One may argue that perhaps we do not need linearity to define &lt;span class="math inline"&gt;\(v(S)\)&lt;/span&gt; as in (9). If we do so, however, then (9) loses mathematical meaning. A bigger question is: how effective is SHAP? An even bigger question: in general, how do we evaluate models of interpretation?&lt;/p&gt;
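+&lt;p&gt;To make (8) concrete, here is a minimal sketch (hypothetical function names, with a small background sample standing in for &lt;span class="math inline"&gt;\(\mu\)&lt;/span&gt;) that computes the SHAP value of one feature by brute force under the independence assumption. For a linear &lt;span class="math inline"&gt;\(f(z) = \sum_j a_j z_j\)&lt;/span&gt; it reproduces the linear SHAP values &lt;span class="math inline"&gt;\(\phi_i = a_i (x_i - \mathbb E_{\mu_i} z_i)\)&lt;/span&gt; implied by (9):&lt;/p&gt;

```python
from itertools import combinations
from math import factorial

def v_independent(f, x, background, S):
    # v(S) per (8): clamp the features in S to x, and average f over a
    # background sample for the remaining features (independence assumed).
    n = len(x)
    return sum(f([x[j] if j in S else z[j] for j in range(n)])
               for z in background) / len(background)

def shap_value(f, x, background, i):
    # Shapley value phi_i of the game v_independent, by brute force
    # over all subsets of the other features.
    n = len(x)
    others = [j for j in range(n) if j != i]
    phi = 0.0
    for s in range(n):
        w = factorial(s) * factorial(n - s - 1) / factorial(n)
        for S in combinations(others, s):
            S = set(S)
            phi += w * (v_independent(f, x, background, S | {i})
                        - v_independent(f, x, background, S))
    return phi
```

+&lt;p&gt;For example, with &lt;span class="math inline"&gt;\(f(z) = 2 z_1 + 3 z_2\)&lt;/span&gt;, a background whose mean is &lt;span class="math inline"&gt;\((0.5, 0.5)\)&lt;/span&gt;, and &lt;span class="math inline"&gt;\(x = (1, 2)\)&lt;/span&gt;, the contributions come out as &lt;span class="math inline"&gt;\(2 \times 0.5 = 1\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(3 \times 1.5 = 4.5\)&lt;/span&gt;.&lt;/p&gt;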
+&lt;h2 id="evaluating-shap"&gt;Evaluating SHAP&lt;/h2&gt;
+&lt;p&gt;The quest of the SHAP paper can be decoupled into two independent components: showing the niceties of Shapley values and choosing the coalitional game &lt;span class="math inline"&gt;\(v\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;The SHAP paper argues that Shapley values &lt;span class="math inline"&gt;\(\phi_i(v)\)&lt;/span&gt; are a good measurement because they are the only values satisfying some nice properties, including the Efficiency property mentioned at the beginning of the post, invariance under permutation, and monotonicity; see the paragraph below Theorem 1 there, which refers to Theorem 2 of Young (1985).&lt;/p&gt;
+&lt;p&gt;Indeed, both efficiency (the “additive feature attribution methods” in the paper) and monotonicity are meaningful when considering &lt;span class="math inline"&gt;\(\phi_i(v)\)&lt;/span&gt; as the feature contribution of the &lt;span class="math inline"&gt;\(i\)&lt;/span&gt;th feature.&lt;/p&gt;
+&lt;p&gt;The question is thus reduced to the second component: what constitutes a nice choice of &lt;span class="math inline"&gt;\(v\)&lt;/span&gt;?&lt;/p&gt;
+&lt;p&gt;The SHAP paper answers this question with three options of increasing simplification: (7)(8)(9) in the previous section of this post (corresponding to (9)(11)(12) in the paper). They are intuitive, but it would be interesting to see more concrete (or even mathematical) justifications of such choices.&lt;/p&gt;
+&lt;h2 id="references"&gt;References&lt;/h2&gt;
+&lt;ul&gt;
+&lt;li&gt;Charnes, A., B. Golany, M. Keane, and J. Rousseau. “Extremal Principle Solutions of Games in Characteristic Function Form: Core, Chebychev and Shapley Value Generalizations.” In Econometrics of Planning and Efficiency, edited by Jati K. Sengupta and Gopal K. Kadekodi, 123–33. Dordrecht: Springer Netherlands, 1988. &lt;a href="https://doi.org/10.1007/978-94-009-3677-5_7" class="uri"&gt;https://doi.org/10.1007/978-94-009-3677-5_7&lt;/a&gt;.&lt;/li&gt;
+&lt;li&gt;Lundberg, Scott, and Su-In Lee. “A Unified Approach to Interpreting Model Predictions.” ArXiv:1705.07874 [Cs, Stat], May 22, 2017. &lt;a href="http://arxiv.org/abs/1705.07874" class="uri"&gt;http://arxiv.org/abs/1705.07874&lt;/a&gt;.&lt;/li&gt;
+&lt;li&gt;Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” ArXiv:1602.04938 [Cs, Stat], February 16, 2016. &lt;a href="http://arxiv.org/abs/1602.04938" class="uri"&gt;http://arxiv.org/abs/1602.04938&lt;/a&gt;.&lt;/li&gt;
+&lt;li&gt;Shapley, L. S. “17. A Value for n-Person Games.” In Contributions to the Theory of Games (AM-28), Volume II, Vol. 2. Princeton: Princeton University Press, 1953. &lt;a href="https://doi.org/10.1515/9781400881970-018" class="uri"&gt;https://doi.org/10.1515/9781400881970-018&lt;/a&gt;.&lt;/li&gt;
+&lt;li&gt;Strumbelj, Erik, and Igor Kononenko. “An Efficient Explanation of Individual Classifications Using Game Theory.” J. Mach. Learn. Res. 11 (March 2010): 1–18.&lt;/li&gt;
+&lt;li&gt;Strumbelj, Erik, and Igor Kononenko. “Explaining Prediction Models and Individual Predictions with Feature Contributions.” Knowledge and Information Systems 41, no. 3 (December 2014): 647–65. &lt;a href="https://doi.org/10.1007/s10115-013-0679-x" class="uri"&gt;https://doi.org/10.1007/s10115-013-0679-x&lt;/a&gt;.&lt;/li&gt;
+&lt;li&gt;Young, H. P. “Monotonic Solutions of Cooperative Games.” International Journal of Game Theory 14, no. 2 (June 1, 1985): 65–72. &lt;a href="https://doi.org/10.1007/BF01769885" class="uri"&gt;https://doi.org/10.1007/BF01769885&lt;/a&gt;.&lt;/li&gt;
+&lt;/ul&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/blog-feed.xml">
+ <title type="text">Automatic differentiation</title>
+ <id>posts/2018-06-03-automatic_differentiation.html</id>
+ <updated>2018-06-03T00:00:00Z</updated>
+ <link href="posts/2018-06-03-automatic_differentiation.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;This post serves as a note and explainer of autodiff. It is licensed under &lt;a href="https://www.gnu.org/licenses/fdl.html"&gt;GNU FDL&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;For my learning I benefited a lot from &lt;a href="http://www.cs.toronto.edu/%7Ergrosse/courses/csc321_2018/slides/lec10.pdf"&gt;Toronto CSC321 slides&lt;/a&gt; and the &lt;a href="https://github.com/mattjj/autodidact/"&gt;autodidact&lt;/a&gt; project which is a pedagogical implementation of &lt;a href="https://github.com/hips/autograd"&gt;Autograd&lt;/a&gt;. That said, any mistakes in this note are mine (especially since some of the knowledge is obtained from interpreting slides!), and if you do spot any I would be grateful if you can let me know.&lt;/p&gt;
+&lt;p&gt;Automatic differentiation (AD) is a way to compute derivatives. It does so by traversing through a computational graph using the chain rule.&lt;/p&gt;
+&lt;p&gt;There are two modes, forward mode AD and reverse mode AD, which are roughly symmetric to each other: understanding one of them makes understanding the other straightforward.&lt;/p&gt;
+&lt;p&gt;In the language of neural networks, one can say that the forward mode AD is used when one wants to compute the derivatives of functions at all layers with respect to input layer weights, whereas the reverse mode AD is used to compute the derivatives of output functions with respect to weights at all layers. Therefore reverse mode AD (rmAD) is the one to use for gradient descent, and the one we focus on in this post.&lt;/p&gt;
+&lt;p&gt;Basically rmAD requires the computation to be sufficiently decomposed, so that in the computational graph, each node as a function of its parent nodes is an elementary function that the AD engine has knowledge about.&lt;/p&gt;
+&lt;p&gt;For example, the Sigmoid activation &lt;span class="math inline"&gt;\(a&amp;#39; = \sigma(w a + b)\)&lt;/span&gt; is quite simple, but it should be decomposed into simpler computations:&lt;/p&gt;
+&lt;ul&gt;
+&lt;li&gt;&lt;span class="math inline"&gt;\(a&amp;#39; = 1 / t_1\)&lt;/span&gt;&lt;/li&gt;
+&lt;li&gt;&lt;span class="math inline"&gt;\(t_1 = 1 + t_2\)&lt;/span&gt;&lt;/li&gt;
+&lt;li&gt;&lt;span class="math inline"&gt;\(t_2 = \exp(t_3)\)&lt;/span&gt;&lt;/li&gt;
+&lt;li&gt;&lt;span class="math inline"&gt;\(t_3 = - t_4\)&lt;/span&gt;&lt;/li&gt;
+&lt;li&gt;&lt;span class="math inline"&gt;\(t_4 = t_5 + b\)&lt;/span&gt;&lt;/li&gt;
+&lt;li&gt;&lt;span class="math inline"&gt;\(t_5 = w a\)&lt;/span&gt;&lt;/li&gt;
+&lt;/ul&gt;
+&lt;p&gt;Thus the function &lt;span class="math inline"&gt;\(a&amp;#39;(a)\)&lt;/span&gt; is decomposed into elementary operations like addition, subtraction, multiplication, reciprocation, exponentiation, logarithm etc. And the rmAD engine stores the Jacobians of these elementary operations.&lt;/p&gt;
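+&lt;p&gt;The decomposition above can be written directly as code, with each assignment corresponding to one node of the computational graph (a sketch of the idea only, not how any particular AD engine represents its graph):&lt;/p&gt;

```python
import math

def sigmoid_decomposed(w, a, b):
    # a' = sigma(w*a + b) written as the chain of elementary operations
    # listed above; each intermediate t_i is one node of the graph.
    t5 = w * a
    t4 = t5 + b
    t3 = -t4
    t2 = math.exp(t3)
    t1 = 1 + t2
    return 1 / t1
```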
+&lt;p&gt;Since in neural networks we want to find derivatives of a single loss function &lt;span class="math inline"&gt;\(L(x; \theta)\)&lt;/span&gt;, we can omit &lt;span class="math inline"&gt;\(L\)&lt;/span&gt; when writing derivatives and denote, say &lt;span class="math inline"&gt;\(\bar \theta_k := \partial_{\theta_k} L\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;In implementations of rmAD, one can represent the Jacobian as a transformation &lt;span class="math inline"&gt;\(j: (x \to y) \to (y, \bar y, x) \to \bar x\)&lt;/span&gt;. &lt;span class="math inline"&gt;\(j\)&lt;/span&gt; is called the &lt;em&gt;Vector Jacobian Product&lt;/em&gt; (VJP). For example, &lt;span class="math inline"&gt;\(j(\exp)(y, \bar y, x) = y \bar y\)&lt;/span&gt; since given &lt;span class="math inline"&gt;\(y = \exp(x)\)&lt;/span&gt;,&lt;/p&gt;
+&lt;p&gt;&lt;span class="math inline"&gt;\(\partial_x L = \partial_x y \cdot \partial_y L = \partial_x \exp(x) \cdot \partial_y L = y \bar y\)&lt;/span&gt;&lt;/p&gt;
+&lt;p&gt;As another example, &lt;span class="math inline"&gt;\(j(+)(y, \bar y, x_1, x_2) = (\bar y, \bar y)\)&lt;/span&gt; since given &lt;span class="math inline"&gt;\(y = x_1 + x_2\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(\bar{x_1} = \bar{x_2} = \bar y\)&lt;/span&gt;.&lt;/p&gt;
+&lt;p&gt;Similarly,&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;&lt;span class="math inline"&gt;\(j(/)(y, \bar y, x_1, x_2) = (\bar y / x_2, - \bar y x_1 / x_2^2)\)&lt;/span&gt;&lt;/li&gt;
+&lt;li&gt;&lt;span class="math inline"&gt;\(j(\log)(y, \bar y, x) = \bar y / x\)&lt;/span&gt;&lt;/li&gt;
+&lt;li&gt;&lt;span class="math inline"&gt;\(j((A, \beta) \mapsto A \beta)(y, \bar y, A, \beta) = (\bar y \otimes \beta, A^T \bar y)\)&lt;/span&gt;.&lt;/li&gt;
+&lt;li&gt;etc...&lt;/li&gt;
+&lt;/ol&gt;
+&lt;p&gt;In the third one, the function is a matrix &lt;span class="math inline"&gt;\(A\)&lt;/span&gt; multiplied on the right by a column vector &lt;span class="math inline"&gt;\(\beta\)&lt;/span&gt;, and &lt;span class="math inline"&gt;\(\bar y \otimes \beta\)&lt;/span&gt; is the tensor product which is a fancy way of writing &lt;span class="math inline"&gt;\(\bar y \beta^T\)&lt;/span&gt;. See &lt;a href="https://github.com/mattjj/autodidact/blob/master/autograd/numpy/numpy_vjps.py"&gt;numpy_vjps.py&lt;/a&gt; for the implementation in autodidact.&lt;/p&gt;
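+&lt;p&gt;These VJPs translate directly into code. The following sketch (hypothetical function names, not the autodidact API) mirrors the signatures &lt;span class="math inline"&gt;\((y, \bar y, x) \to \bar x\)&lt;/span&gt; above:&lt;/p&gt;

```python
import math
import numpy as np

# Each VJP takes (y, y_bar, inputs...) and returns the input adjoints x_bar.

def vjp_exp(y, y_bar, x):
    return y * y_bar                        # since dy/dx = exp(x) = y

def vjp_add(y, y_bar, x1, x2):
    return y_bar, y_bar                     # y = x1 + x2

def vjp_div(y, y_bar, x1, x2):
    return y_bar / x2, -y_bar * x1 / x2**2  # y = x1 / x2

def vjp_matvec(y, y_bar, A, beta):
    # y = A beta: A_bar = y_bar (x) beta = y_bar beta^T, beta_bar = A^T y_bar
    return np.outer(y_bar, beta), A.T @ y_bar
```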
+&lt;p&gt;So, given a node say &lt;span class="math inline"&gt;\(y = y(x_1, x_2, ..., x_n)\)&lt;/span&gt;, and given the value of &lt;span class="math inline"&gt;\(y\)&lt;/span&gt;, &lt;span class="math inline"&gt;\(x_{1 : n}\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(\bar y\)&lt;/span&gt;, rmAD computes the values of &lt;span class="math inline"&gt;\(\bar x_{1 : n}\)&lt;/span&gt; by using the Jacobians.&lt;/p&gt;
+&lt;p&gt;This is the gist of rmAD. It stores the values of each node in a forward pass, and computes the derivatives of each node exactly once in a backward pass.&lt;/p&gt;
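+&lt;p&gt;Here is a toy sketch of this gist (hypothetical and much simplified): each node records its value in the forward pass together with the local derivatives to its parents, and the backward pass pushes adjoints through them. A caveat: a real engine such as autodidact propagates each node's adjoint exactly once by walking the graph in topological order, whereas this naive recursion revisits shared subgraphs once per path (the totals are still correct):&lt;/p&gt;

```python
import math

class Var:
    # A node in the computational graph: stores its value (forward pass)
    # and (parent, local derivative) pairs for the backward pass.
    def __init__(self, value, parents=()):
        self.value = value
        self.parents = parents
        self.grad = 0.0

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, y_bar=1.0):
        # Accumulate the adjoint and push it to the parents via the
        # local derivatives (the VJPs of + and *).
        self.grad += y_bar
        for parent, local in self.parents:
            parent.backward(y_bar * local)

def v_exp(x):
    # exp as an elementary operation with its VJP, as in j(exp) above.
    y = math.exp(x.value)
    return Var(y, [(x, y)])
```

+&lt;p&gt;For instance, with &lt;span class="math inline"&gt;\(z = x^2 + x\)&lt;/span&gt; at &lt;span class="math inline"&gt;\(x = 3\)&lt;/span&gt;, a single backward pass accumulates &lt;span class="math inline"&gt;\(\bar x = 2x + 1 = 7\)&lt;/span&gt;.&lt;/p&gt;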
+&lt;p&gt;It is a nice exercise to derive the backpropagation in the fully connected feedforward neural networks (e.g. &lt;a href="http://neuralnetworksanddeeplearning.com/chap2.html#the_four_fundamental_equations_behind_backpropagation"&gt;the one for MNIST in Neural Networks and Deep Learning&lt;/a&gt;) using rmAD.&lt;/p&gt;
+&lt;p&gt;AD is an approach lying between the extremes of numerical approximation (e.g. finite difference) and symbolic evaluation. It uses exact formulas (VJPs) at each elementary operation like symbolic evaluation, while evaluating each VJP numerically rather than lumping all the VJPs into an unwieldy symbolic formula.&lt;/p&gt;
+&lt;p&gt;Things to look further into: the higher-order functional currying form &lt;span class="math inline"&gt;\(j: (x \to y) \to (y, \bar y, x) \to \bar x\)&lt;/span&gt; begs for a functional programming implementation.&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/blog-feed.xml">
+ <title type="text">Updates on open research</title>
+ <id>posts/2018-04-10-update-open-research.html</id>
+ <updated>2018-04-29T00:00:00Z</updated>
+ <link href="posts/2018-04-10-update-open-research.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;It has been 9 months since I last wrote about open (maths) research. Since then two things happened which prompted me to write an update.&lt;/p&gt;
+&lt;p&gt;As always I discuss open research only in mathematics, not because I think it should not be applied to other disciplines, but simply because I do not have experience in or sufficient interest in non-mathematical subjects.&lt;/p&gt;
+&lt;p&gt;First, I read about Richard Stallman, the founder of the free software movement, in &lt;a href="http://shop.oreilly.com/product/9780596002879.do"&gt;his biography by Sam Williams&lt;/a&gt; and his own collection of essays &lt;a href="https://shop.fsf.org/books-docs/free-software-free-society-selected-essays-richard-m-stallman-3rd-edition"&gt;&lt;em&gt;Free software, free society&lt;/em&gt;&lt;/a&gt;, from which I learned a bit more about the context and philosophy of free software and its relation to that of open source software. For anyone interested in open research, I highly recommend having a look at these two books. I am also reading Levy’s &lt;a href="http://www.stevenlevy.com/index.php/books/hackers"&gt;Hackers&lt;/a&gt;, which documents the development of the hacker culture predating Stallman. I can see the connection of ideas from the hacker ethic to the free software philosophy and to the open source philosophy. My guess is that the software world is fortunate to have had pioneers who advocated for various kinds of freedom and openness from the beginning, whereas in academia, which has a much longer history, credit protection has always been a bigger concern.&lt;/p&gt;
+&lt;p&gt;Also a month ago I attended a workshop called &lt;a href="https://www.perimeterinstitute.ca/conferences/open-research-rethinking-scientific-collaboration"&gt;Open research: rethinking scientific collaboration&lt;/a&gt;. That was the first time I met a group of people (mostly physicists) who also want open research to happen, and we had some stimulating discussions. Many thanks to the organisers at Perimeter Institute for organising the event, and special thanks to &lt;a href="https://www.perimeterinstitute.ca/people/matteo-smerlak"&gt;Matteo Smerlak&lt;/a&gt; and &lt;a href="https://www.perimeterinstitute.ca/people/ashley-milsted"&gt;Ashley Milsted&lt;/a&gt; for invitation and hosting.&lt;/p&gt;
+&lt;p&gt;From both of these I feel like I should write an updated post on open research.&lt;/p&gt;
+&lt;h3 id="freedom-and-community"&gt;Freedom and community&lt;/h3&gt;
+&lt;p&gt;Ideals matter. Stallman’s struggles stemmed from the frustration of having a request for source code denied (a frustration I have shared in academia, except that source code is replaced by maths knowledge), and revolved around two things that underlie the free software movement: freedom and community. That is, the freedom to use, modify and share a work, and by sharing, to help the community.&lt;/p&gt;
+&lt;p&gt;Likewise, as for open research, apart from the utilitarian view that open research is more efficient / harder for credit theft, we should not ignore the ethical aspect that open research is right and fair. In particular, I think freedom and community can also serve as principles in open research. One way to make this argument more concrete is to describe what I feel are the central problems: NDAs (non-disclosure agreements) and reproducibility.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;NDAs&lt;/strong&gt;. It is assumed that when establishing a research collaboration, or just having a discussion, all those involved own the joint work in progress, and no one has the freedom to disclose any information e.g. intermediate results without getting permission from all collaborators. In effect this amounts to signing an NDA. NDAs are harmful because they restrict people’s freedom from sharing information that can benefit their own or others’ research. Considering that in contrast to the private sector, the primary goal of academia is knowledge but not profit, NDAs in research are unacceptable.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Reproducibility&lt;/strong&gt;. Research papers are not necessarily reproducible, even though they appear in peer-reviewed journals. This is because the peer-review process is opaque and the proofs in the papers may not be clear to everyone. To make things worse, there are no open channels to discuss results in these papers and one may have to rely on interacting with the small circle of the informed. One example is folk theorems. Another is trade secrets required to decipher published works.&lt;/p&gt;
+&lt;p&gt;I should clarify that freedom works both ways. One should have the freedom to disclose maths knowledge, but also the freedom to withhold any information that does not hamper the reproducibility of published works (e.g. results in ongoing research yet to be published), even though it may not be nice to do so when such information can help others with their research.&lt;/p&gt;
+&lt;p&gt;Similar to the solution offered by the free software movement, we need a community that promotes and respects free flow of maths knowledge, in the spirit of the &lt;a href="https://www.gnu.org/philosophy/"&gt;four essential freedoms&lt;/a&gt;, a community that rejects NDAs and upholds reproducibility.&lt;/p&gt;
+&lt;p&gt;Here are some ideas on how to tackle these two problems and build the community:&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;Free licensing. It solves the NDA problem - free licenses permit redistribution and modification of works, so if you adopt them in your joint work, then you have the freedom to modify and distribute the work; it also helps with reproducibility - if a paper is not clear, anyone can write their own version and publish it. Bonus points with the use of copyleft licenses like &lt;a href="https://creativecommons.org/licenses/by-sa/4.0/"&gt;Creative Commons Share-Alike&lt;/a&gt; or the &lt;a href="https://www.gnu.org/licenses/fdl.html"&gt;GNU Free Documentation License&lt;/a&gt;.&lt;/li&gt;
+&lt;li&gt;A forum for discussions of mathematics. It helps solve the reproducibility problem - public interaction may help quickly clarify problems. By the way, Math Overflow is not a forum.&lt;/li&gt;
+&lt;li&gt;An infrastructure of mathematical knowledge. Like the GNU system, a mathematics encyclopedia under a copyleft license maintained in the Github-style rather than Wikipedia-style by a “Free Mathematics Foundation”, and drawing contributions from the public (inside or outside of the academia). To begin with, crowd-source (again, Github-style) the proofs of say 1000 foundational theorems covered in the curriculum of a bachelor’s degree. Perhaps start with taking contributions from people with some credentials (e.g. having a bachelor degree in maths) and then expand the contribution permission to the public, or taking advantage of existing corpus under free license like Wikipedia.&lt;/li&gt;
+&lt;li&gt;Citing with care: if a work is considered authoritative but you couldn’t reproduce the results, whereas another paper which tries to explain or discuss similar results makes the first paper understandable to you, give both papers due attribution (something like: see [1], but I couldn’t reproduce the proof in [1], and the proofs in [2] helped clarify it). No one should be offended if you say you cannot reproduce something - there may be causes on both sides, whereas citing [2] is fairer and helps readers with a similar background.&lt;/li&gt;
+&lt;/ol&gt;
+&lt;h3 id="tools-for-open-research"&gt;Tools for open research&lt;/h3&gt;
+&lt;p&gt;The open research workshop revolved around how to lead academia towards a more open culture. There were discussions on open research tools, improving credit attributions, the peer-review process and the path to adoption.&lt;/p&gt;
+&lt;p&gt;During the workshop many efforts for open research were mentioned, and afterwards I was made aware of more of them, like the following:&lt;/p&gt;
+&lt;ul&gt;
+&lt;li&gt;&lt;a href="https://osf.io"&gt;OSF&lt;/a&gt;, an online research platform. It has a clean and simple interface with commenting, wiki, citation generation, DOI generation, tags, license generation etc. Like Github it supports private and public repositories (but defaults to private), version control, with the ability to fork or bookmark a project.&lt;/li&gt;
+&lt;li&gt;&lt;a href="https://scipost.org/"&gt;SciPost&lt;/a&gt;, physics journals whose peer review reports and responses are public (peer-witnessed refereeing), and allows comments (post-publication evaluation). Like arXiv, it requires some academic credential (PhD or above) to register.&lt;/li&gt;
+&lt;li&gt;&lt;a href="https://knowen.org/"&gt;Knowen&lt;/a&gt;, a platform to organise knowledge in directed acyclic graphs. Could be useful for building the infrastructure of mathematical knowledge.&lt;/li&gt;
+&lt;li&gt;&lt;a href="https://fermatslibrary.com/"&gt;Fermat’s Library&lt;/a&gt;, the journal club website that crowd-annotates one notable paper per week released a Chrome extension &lt;a href="https://fermatslibrary.com/librarian"&gt;Librarian&lt;/a&gt; that overlays a commenting interface on arXiv. As an example Ian Goodfellow did an &lt;a href="https://fermatslibrary.com/arxiv_comments?url=https://arxiv.org/pdf/1406.2661.pdf"&gt;AMA (ask me anything) on his GAN paper&lt;/a&gt;.&lt;/li&gt;
+&lt;li&gt;&lt;a href="https://polymathprojects.org/"&gt;The Polymath project&lt;/a&gt;, the famous massive collaborative mathematical project. Not exactly new, the Polymath project is the only open maths research project that has gained some traction and recognition. However, it does not have many active projects (&lt;a href="http://michaelnielsen.org/polymath1/index.php?title=Main_Page"&gt;currently only one active project&lt;/a&gt;).&lt;/li&gt;
+&lt;li&gt;&lt;a href="https://stacks.math.columbia.edu/"&gt;The Stacks Project&lt;/a&gt;. I was made aware of this project by &lt;a href="https://people.kth.se/~yitingl/"&gt;Yiting&lt;/a&gt;. Its data is hosted on github and accepts contributions via pull requests and is licensed under the GNU Free Documentation License, ticking many boxes of the free and open source model.&lt;/li&gt;
+&lt;/ul&gt;
+&lt;h3 id="an-anecdote-from-the-workshop"&gt;An anecdote from the workshop&lt;/h3&gt;
+&lt;p&gt;In a conversation during the workshop, one of the participants called open science “normal science”, because reproducibility, open access, collaborations, and fair attributions are all what science is supposed to be, and practices like treating the readers as buyers rather than users should be called “bad science”, rather than “closed science”.&lt;/p&gt;
+&lt;p&gt;To which an organiser replied: maybe we should rename the workshop “Not-bad science”.&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/blog-feed.xml">
+ <title type="text">The Mathematical Bazaar</title>
+ <id>posts/2017-08-07-mathematical_bazaar.html</id>
+ <updated>2017-08-07T00:00:00Z</updated>
+ <link href="posts/2017-08-07-mathematical_bazaar.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;In this essay I describe some problems in academia of mathematics and propose an open source model, which I call open research in mathematics.&lt;/p&gt;
+&lt;p&gt;This essay is a work in progress - comments and criticisms are welcome! &lt;a href="#fn1" class="footnote-ref" id="fnref1"&gt;&lt;sup&gt;1&lt;/sup&gt;&lt;/a&gt;&lt;/p&gt;
+&lt;p&gt;Before I start I should point out that&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;Open research is &lt;em&gt;not&lt;/em&gt; open access. In fact the latter is a prerequisite to the former.&lt;/li&gt;
+&lt;li&gt;I am not proposing to replace the current academic model with the open model - I know academia works well for many people and I am happy for them, but I think an open research community is long overdue since the wide adoption of the World Wide Web more than two decades ago. In fact, I fail to see why an open model cannot run in tandem with academia, just like open source and closed source software development coexist today.&lt;/li&gt;
+&lt;/ol&gt;
+&lt;h2 id="problems-of-academia"&gt;problems of academia&lt;/h2&gt;
+&lt;p&gt;Open source projects are characterised by publicly available source code as well as open invitations for public collaboration, whereas closed source projects do not make source code accessible to the public. How about mathematical academia then, is it open source or closed source? The answer is neither.&lt;/p&gt;
+&lt;p&gt;Compared to some other scientific disciplines, mathematics does not require expensive equipment or resources to replicate results; compared to programming in the conventional software industry, mathematical findings are not meant to be commercial, as credits and reputation rather than money are the direct incentives (even though the former are commonly used to trade for the latter). It is also a custom and common belief that mathematical derivations and theorems shouldn't be patented. Because of this, mathematical research is an open source activity in the sense that proofs of new results are all available in papers, and thanks to open access, e.g. the arXiv preprint repository, most of the new mathematical knowledge is accessible for free.&lt;/p&gt;
+&lt;p&gt;Then why, you may ask, do I claim that maths research is not open source? Well, this is because 1. mathematical arguments are not easily replicable and 2. mathematical research projects are mostly not open for public participation.&lt;/p&gt;
+&lt;p&gt;Compared to computer programs, mathematical arguments are not written in an unambiguous language, and they are terse rather than maximally verbose (this is especially true in research papers as journals encourage limiting the length of submissions), so the understanding of a proof depends on whether the reader is equipped with the right background knowledge, and the completeness of a proof is highly subjective. More generally speaking, computer programs are mostly portable because all machines with the correct configurations can understand and execute a piece of program, whereas humans depend on their environment, upbringing, resources etc. to have a brain ready to comprehend a proof that interests them. (These barriers are softer than the expensive equipment and resources in other scientific fields mentioned before, because it is all about having access to the right information.)&lt;/p&gt;
+&lt;p&gt;On the other hand, as far as the pursuit of reputation and prestige (which can be used to trade for the scarce resource of research positions and grant money) goes, there is often little practical motivation for career mathematicians to explain their results to the public carefully. And so the weird reality of mathematical academia is that it is not an uncommon practice to keep trade secrets in order to protect one's territory and maintain a monopoly. This is doable because as long as a paper passes the opaque and sometimes political peer review process and is accepted by a journal, it is considered work done, accepted by the whole academic community, and adds to the reputation of the author(s). Just like in the software industry, trade secrets and monopoly hinder the development of research as a whole, as well as demoralise outsiders who are interested in participating in related research.&lt;/p&gt;
+&lt;p&gt;Apart from trade secrets and territoriality, another reason for the nonexistence of an open research community is an elitist tradition in mathematical academia, which goes as follows:&lt;/p&gt;
+&lt;ul&gt;
+&lt;li&gt;Whoever is not good at mathematics or does not possess a degree in maths is not eligible to do research, or else they run high risks of being labelled a crackpot.&lt;/li&gt;
+&lt;li&gt;Mistakes made by established mathematicians are more tolerable than those made by less established ones.&lt;/li&gt;
+&lt;li&gt;Good mathematical writing should be deep, and expositions of non-original results are viewed as inferior work that does not add to (and in some cases may even damage) one's reputation.&lt;/li&gt;
+&lt;/ul&gt;
+&lt;p&gt;All these customs potentially discourage public participation in mathematical research, and I do not see them easily going away unless an open source community gains momentum.&lt;/p&gt;
+&lt;p&gt;To solve the above problems, I propose an open source model of mathematical research, which has high levels of openness and transparency and also has some added benefits listed in the last section of this essay. This model tries to achieve two major goals:&lt;/p&gt;
+&lt;ul&gt;
+&lt;li&gt;Open and public discussions and collaborations of mathematical research projects online&lt;/li&gt;
+&lt;li&gt;Open review to validate results, where author name, reviewer name, comments and responses are all publicly available online.&lt;/li&gt;
+&lt;/ul&gt;
+&lt;p&gt;To this end, a Github model is fitting. Let me first describe how open source collaboration works on Github.&lt;/p&gt;
+&lt;h2 id="open-source-collaborations-on-github"&gt;open source collaborations on Github&lt;/h2&gt;
+&lt;p&gt;On &lt;a href="https://github.com"&gt;Github&lt;/a&gt;, every project is publicly available in a repository (we do not consider private repos). The owner can update the project by "committing" changes, which include a message of what has been changed, the author of the changes and a timestamp. Each project has an issue tracker, which is basically a discussion forum about the project, where anyone can open an issue (start a discussion), and the owner of the project as well as the original poster of the issue can close it if it is resolved, e.g. bug fixed, feature added, or out of the scope of the project. Closing the issue is like ending the discussion, except that the thread is still open to more posts for anyone interested. People can react to each issue post, e.g. upvote, downvote, celebration, and importantly, all the reactions are public too, so you can find out who upvoted or downvoted your post.&lt;/p&gt;
+&lt;p&gt;When one is interested in contributing code to a project, they fork it, i.e. make a copy of the project, and make the changes they like in the fork. Once they are happy with the changes, they submit a pull request to the original project. The owner of the original project may accept or reject the request, and they can comment on the code in the pull request, asking for clarification, pointing out problematic parts of the code etc., and the author of the pull request can respond to the comments. Anyone, not just the owner, can participate in this review process, turning it into a public discussion. In fact, a pull request is a special issue thread. Once the owner is happy with the pull request, they accept it and the changes are merged into the original project. The author of the changes will show up in the commit history of the original project, so they get the credit.&lt;/p&gt;
+&lt;p&gt;As an alternative to forking, if one is interested in a project but has a different vision, or the maintainer has stopped working on it, they can clone it and make their own version. This is a more independent kind of fork, because there is no longer any intention to contribute back to the original project.&lt;/p&gt;
+&lt;p&gt;Moreover, on Github there is no way to send private messages, which forces people to interact publicly. If, say, you want someone to see and reply to your comment in an issue post or pull request, you simply mention them with &lt;code&gt;@someone&lt;/code&gt;.&lt;/p&gt;
+&lt;h2 id="open-research-in-mathematics"&gt;open research in mathematics&lt;/h2&gt;
+&lt;p&gt;All this points to a promising direction of open research. A maths project may have a wiki / collection of notes, the paper being written, computer programs implementing the results etc. The issue tracker can serve as a discussion forum about the project as well as a platform for open review (bugs are analogous to mistakes, enhancements are possible ways of improving the main results etc.), and anyone can make their own version of the project, and (optionally) contribute back by making pull requests, which will also be openly reviewed. One may want to add an extra "review this project" functionality, so that people can comment on the original project like they do in a pull request. This may or may not be necessary, as anyone can make comments or point out mistakes in the issue tracker.&lt;/p&gt;
+&lt;p&gt;One may doubt this model due to concerns about credit, because work in progress is available to anyone. Well, since all the contributions are trackable in the project commit history and in public discussions in issues and pull request reviews, there is in fact &lt;em&gt;less&lt;/em&gt; room for cheating than in the current model in academia, where scooping can happen without any witnesses. What we need is a platform with a good amount of trust, like arXiv, so that the open research community honours (and cannot ignore) the commit history, and the chance of mis-attribution can be reduced to a minimum.&lt;/p&gt;
+&lt;p&gt;Compared to the academic model, open research also has the following advantages:&lt;/p&gt;
+&lt;ul&gt;
+&lt;li&gt;Anyone in the world with Internet access will have a chance to participate in research, regardless of whether they are affiliated with a university, have the financial means to attend conferences, or are colleagues of one of the handful of experts in a specific field.&lt;/li&gt;
+&lt;li&gt;The problem of replicating / understanding maths results will be solved, as people help each other out. This will also remove the burden of answering queries about one's research. For example, say one has a project "Understanding the fancy results in [paper name]": they write up some initial notes but get stuck understanding certain arguments. In this case they can simply post the questions on the issue tracker, and anyone who knows the answer, or just has a speculation, can participate in the discussion. In the end the problem may be resolved without bothering the authors of the paper, who may be too busy to answer.&lt;/li&gt;
+&lt;li&gt;Similarly, the burden of peer review can also be shifted from a few appointed reviewers to the crowd.&lt;/li&gt;
+&lt;/ul&gt;
+&lt;h2 id="related-readings"&gt;related readings&lt;/h2&gt;
+&lt;ul&gt;
+&lt;li&gt;&lt;a href="http://www.catb.org/esr/writings/cathedral-bazaar/"&gt;The Cathedral and the Bazaar by Eric Raymond&lt;/a&gt;&lt;/li&gt;
+&lt;li&gt;&lt;a href="http://michaelnielsen.org/blog/doing-science-online/"&gt;Doing sience online by Michael Nielson&lt;/a&gt;&lt;/li&gt;
+&lt;li&gt;&lt;a href="https://gowers.wordpress.com/2009/01/27/is-massively-collaborative-mathematics-possible/"&gt;Is massively collaborative mathematics possible? by Timothy Gowers&lt;/a&gt;&lt;/li&gt;
+&lt;/ul&gt;
+&lt;section class="footnotes"&gt;
+&lt;hr /&gt;
+&lt;ol&gt;
+&lt;li id="fn1"&gt;&lt;p&gt;Please send your comments to my email address - I am still looking for ways to add a comment functionality to this website.&lt;a href="#fnref1" class="footnote-back"&gt;↩&lt;/a&gt;&lt;/p&gt;&lt;/li&gt;
+&lt;/ol&gt;
+&lt;/section&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/blog-feed.xml">
+ <title type="text">Open mathematical research and launching toywiki</title>
+ <id>posts/2017-04-25-open_research_toywiki.html</id>
+ <updated>2017-04-25T00:00:00Z</updated>
+ <link href="posts/2017-04-25-open_research_toywiki.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;As an experimental project, I am launching toywiki.&lt;/p&gt;
+&lt;p&gt;It hosts a collection of my research notes.&lt;/p&gt;
+&lt;p&gt;It takes some ideas from the open source culture and applies them to mathematical research: 1. It uses a very permissive license (CC-BY-SA), so for example anyone with a different vision can fork the project and make their own version, building upon it. 2. All edits are done with maximum transparency, and discussions of any of the notes should also be as public as possible (e.g. Github issues). 3. Anyone can suggest changes by opening issues and submitting pull requests.&lt;/p&gt;
+&lt;p&gt;Here are the links: &lt;a href="http://toywiki.xyz"&gt;toywiki&lt;/a&gt; and &lt;a href="https://github.com/ycpei/toywiki"&gt;github repo&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;Feedback by email is welcome.&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/blog-feed.xml">
+ <title type="text">A \(q\)-Robinson-Schensted-Knuth algorithm and a \(q\)-polymer</title>
+ <id>posts/2016-10-13-q-robinson-schensted-knuth-polymer.html</id>
+ <updated>2016-10-13T00:00:00Z</updated>
+ <link href="posts/2016-10-13-q-robinson-schensted-knuth-polymer.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;(Latest update: 2017-01-12) In &lt;a href="http://arxiv.org/abs/1504.00666"&gt;Matveev-Petrov 2016&lt;/a&gt; a \(q\)-deformed Robinson-Schensted-Knuth algorithm (\(q\)RSK) was introduced. In this article we give reformulations of this algorithm in terms of Noumi-Yamada description, growth diagrams and local moves. We show that the algorithm is symmetric, namely the output tableaux pair are swapped in a sense of distribution when the input matrix is transposed. We also formulate a \(q\)-polymer model based on the \(q\)RSK and prove the corresponding Burke property, which we use to show a strong law of large numbers for the partition function given stationary boundary conditions and \(q\)-geometric weights. We use the \(q\)-local moves to define a generalisation of the \(q\)RSK taking a Young diagram-shape of array as the input. We write down the joint distribution of partition functions in the space-like direction of the \(q\)-polymer in \(q\)-geometric environment, formulate a \(q\)-version of the multilayer polynuclear growth model (\(q\)PNG) and write down the joint distribution of the \(q\)-polymer partition functions at a fixed time.&lt;/p&gt;
+&lt;p&gt;This article is available at &lt;a href="https://arxiv.org/abs/1610.03692"&gt;arXiv&lt;/a&gt;. It seems to me that one difference between arXiv and Github is that on arXiv each preprint has only a few versions. On Github many projects have a “dev” branch hosting continuous updates, whereas the master branch is where the stable releases live.&lt;/p&gt;
+&lt;p&gt;&lt;a href="%7B%7B%20site.url%20%7D%7D/assets/resources/qrsklatest.pdf"&gt;Here&lt;/a&gt; is a “dev” version of the article, which I shall push to arXiv when it stablises. Below is the changelog.&lt;/p&gt;
+&lt;ul&gt;
+&lt;li&gt;2017-01-12: Typos and grammar, arXiv v2.&lt;/li&gt;
+&lt;li&gt;2016-12-20: Added remarks on the geometric \(q\)-pushTASEP. Added remarks on the converse of the Burke property. Added natural language description of the \(q\)RSK. Fixed typos.&lt;/li&gt;
+&lt;li&gt;2016-11-13: Fixed some typos in the proof of Theorem 3.&lt;/li&gt;
+&lt;li&gt;2016-11-07: Fixed some typos. The \(q\)-Burke property is now stated in a more symmetric way, so is the law of large numbers Theorem 2.&lt;/li&gt;
+&lt;li&gt;2016-10-20: Fixed a few typos. Updated some references. Added a reference: &lt;a href="http://web.mit.edu/~shopkins/docs/rsk.pdf"&gt;a set of notes titled “RSK via local transformations”&lt;/a&gt;. It is written by &lt;a href="http://web.mit.edu/~shopkins/"&gt;Sam Hopkins&lt;/a&gt; in 2014 as an expository article based on MIT combinatorics preseminar presentations of Alex Postnikov. It contains an idea (applying local moves to a general Young-diagram-shaped array in the order that matches any growth sequence of the underlying Young diagram) that I thought I was the first to write down.&lt;/li&gt;
+&lt;/ul&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/blog-feed.xml">
+ <title type="text">AMS review of 'Double Macdonald polynomials as the stable limit of Macdonald superpolynomials' by Blondeau-Fournier, Lapointe and Mathieu</title>
+ <id>posts/2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.html</id>
+ <updated>2015-07-15T00:00:00Z</updated>
+ <link href="posts/2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;A Macdonald superpolynomial (introduced in [O. Blondeau-Fournier et al., Lett. Math. Phys. &lt;span class="bf"&gt;101&lt;/span&gt; (2012), no. 1, 27–47; &lt;a href="http://www.ams.org/mathscinet/search/publdoc.html?pg1=MR&amp;amp;s1=2935476&amp;amp;loc=fromrevtext"&gt;MR2935476&lt;/a&gt;; J. Comb. &lt;span class="bf"&gt;3&lt;/span&gt; (2012), no. 3, 495–561; &lt;a href="http://www.ams.org/mathscinet/search/publdoc.html?pg1=MR&amp;amp;s1=3029444&amp;amp;loc=fromrevtext"&gt;MR3029444&lt;/a&gt;]) in \(N\) Grassmannian variables indexed by a superpartition \(\Lambda\) is said to be stable if \({m (m + 1) \over 2} \ge |\Lambda|\) and \(N \ge |\Lambda| - {m (m - 3) \over 2}\) , where \(m\) is the fermionic degree. A stable Macdonald superpolynomial (corresponding to a bisymmetric polynomial) is also called a double Macdonald polynomial (dMp). The main result of this paper is the factorisation of a dMp into plethysms of two classical Macdonald polynomials (Theorem 5). Based on this result, this paper&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;&lt;p&gt;shows that the dMp has a unique decomposition into bisymmetric monomials;&lt;/p&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;calculates the norm of the dMp;&lt;/p&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;calculates the kernel of the Cauchy-Littlewood-type identity of the dMp;&lt;/p&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;shows the specialisation of the aforementioned factorisation to the Jack, Hall-Littlewood and Schur cases. One of the three Schur specialisations, denoted as \(s_{\lambda, \mu}\), also appears in (7) and (9) below;&lt;/p&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;defines the \(\omega\) -automorphism in this setting, which was used to prove an identity involving products of four Littlewood-Richardson coefficients;&lt;/p&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;shows an explicit evaluation of the dMp motivated by the most general evaluation of the usual Macdonald polynomials;&lt;/p&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;relates dMps with the representation theory of the hyperoctahedral group \(B_n\) via the double Kostka coefficients (which are defined as the entries of the transition matrix from the bisymmetric Schur functions \(s_{\lambda, \mu}\) to the modified dMps);&lt;/p&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;shows that the double Kostka coefficients have the positivity and the symmetry property, and can be written as sums of products of the usual Kostka coefficients;&lt;/p&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;defines an operator \(\nabla^B\) as an analogue of the nabla operator \(\nabla\) introduced in [F. Bergeron and A. M. Garsia, in &lt;em&gt;Algebraic methods and \(q\)-special functions&lt;/em&gt; (Montréal, QC, 1996), 1–52, CRM Proc. Lecture Notes, 22, Amer. Math. Soc., Providence, RI, 1999; &lt;a href="http://www.ams.org/mathscinet/search/publdoc.html?r=1&amp;amp;pg1=MR&amp;amp;s1=1726826&amp;amp;loc=fromrevtext"&gt;MR1726826&lt;/a&gt;]. The action of \(\nabla^B\) on the bisymmetric Schur function \(s_{\lambda, \mu}\) yields the dimension formula \((h + 1)^r\) for the corresponding representation of \(B_n\) , where \(h\) and \(r\) are the Coxeter number and the rank of \(B_n\) , in the same way that the action of \(\nabla\) on the \(n\) th elementary symmetric function leads to the same formula for the group of type \(A_n\) .&lt;/p&gt;&lt;/li&gt;
+&lt;/ol&gt;
+&lt;p&gt;Copyright notice: This review is published at http://www.ams.org/mathscinet-getitem?mr=3306078, its copyright owned by the AMS.&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/blog-feed.xml">
+ <title type="text">On a causal quantum double product integral related to Lévy stochastic area.</title>
+ <id>posts/2015-07-01-causal-quantum-product-levy-area.html</id>
+ <updated>2015-07-01T00:00:00Z</updated>
+ <link href="posts/2015-07-01-causal-quantum-product-levy-area.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;In &lt;a href="https://arxiv.org/abs/1506.04294"&gt;this paper&lt;/a&gt; with &lt;a href="http://homepages.lboro.ac.uk/~marh3/"&gt;Robin&lt;/a&gt; we study the family of causal double product integrals \[ \prod_{a &amp;lt; x &amp;lt; y &amp;lt; b}\left(1 + i{\lambda \over 2}(dP_x dQ_y - dQ_x dP_y) + i {\mu \over 2}(dP_x dP_y + dQ_x dQ_y)\right) \]&lt;/p&gt;
+&lt;p&gt;where &lt;span class="math inline"&gt;\(P\)&lt;/span&gt; and &lt;span class="math inline"&gt;\(Q\)&lt;/span&gt; are the mutually noncommuting momentum and position Brownian motions of quantum stochastic calculus. The evaluation is motivated heuristically by approximating the continuous double product by a discrete product in which infinitesimals are replaced by finite increments. The latter is in turn approximated by the second quantisation of a discrete double product of rotation-like operators in different planes due to a result in &lt;a href="http://www.actaphys.uj.edu.pl/findarticle?series=Reg&amp;amp;vol=46&amp;amp;page=1851"&gt;(Hudson-Pei2015)&lt;/a&gt;. The main problem solved in this paper is the explicit evaluation of the continuum limit &lt;span class="math inline"&gt;\(W\)&lt;/span&gt; of the latter, and showing that &lt;span class="math inline"&gt;\(W\)&lt;/span&gt; is a unitary operator. The kernel of &lt;span class="math inline"&gt;\(W\)&lt;/span&gt; is written in terms of Bessel functions, and the evaluation is achieved by working on a lattice path model and enumerating linear extensions of related partial orderings, where the enumeration turns out to be heavily related to Dyck paths and generalisations of Catalan numbers.&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/blog-feed.xml">
+ <title type="text">AMS review of 'Infinite binary words containing repetitions of odd period' by Badkobeh and Crochemore</title>
+ <id>posts/2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.html</id>
+ <updated>2015-05-30T00:00:00Z</updated>
+ <link href="posts/2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;This paper is about the existence of pattern-avoiding infinite binary words, where the patterns are squares, cubes and \(3^+\)-powers.    There are mainly two kinds of results, positive (existence of an infinite binary word avoiding a certain pattern) and negative (non-existence of such a word). Each positive result is proved by the construction of a word with finitely many squares and cubes which are listed explicitly. First a synchronising (also known as comma-free) uniform morphism \(g\: \Sigma_3^* \to \Sigma_2^*\)&lt;/p&gt;
+&lt;p&gt;is constructed. Then an argument is given to show that the length of squares in the code \(g(w)\) for a squarefree \(w\) is bounded, hence all the squares can be obtained by examining all \(g(s)\) for \(s\) of bounded lengths. The argument resembles that of the proof of, e.g., Theorem 1, Lemma 2, Theorem 3 and Lemma 4 in [N. Rampersad, J. O. Shallit and M. Wang, Theoret. Comput. Sci. &lt;strong&gt;339&lt;/strong&gt; (2005), no. 1, 19–34; &lt;a href="http://www.ams.org/mathscinet/search/publdoc.html?r=1&amp;amp;pg1=MR&amp;amp;s1=2142071&amp;amp;loc=fromrevtext"&gt;MR2142071&lt;/a&gt;]. The negative results are proved by traversing all possible finite words satisfying the conditions.&lt;/p&gt;
+&lt;p&gt;Let \(L(n_2, n_3, S)\) be the maximum length of a word with \(n_2\) distinct squares, \(n_3\) distinct cubes, and whose squares have periods taking values only in \(S\), where \(n_2, n_3 \in \Bbb N \cup \{\infty, \omega\}\) and \(S \subset \Bbb N_+\). Here \(n_k = 0\) corresponds to \(k\)-free, \(n_k = \infty\) means no restriction on the number of distinct \(k\)-powers, and \(n_k = \omega\) means \(k^+\)-free.&lt;/p&gt;
+&lt;p&gt;Below is the summary of the positive and negative results:&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;&lt;p&gt;(Negative) \(L(\infty, \omega, 2 \Bbb N) &amp;lt; \infty\) : \(\nexists\) an infinite \(3^+\) -free binary word avoiding all squares of odd periods. (Proposition 1)&lt;/p&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;(Negative) \(L(\infty, 0, 2 \Bbb N + 1) \le 23\) : \(\nexists\) an infinite 3-free binary word, avoiding squares of even periods. The longest one has length \(\le 23\) (Proposition 2).&lt;/p&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;(Positive) \(L(\infty, \omega, 2 \Bbb N + 1) = \infty\) : \(\exists\) an infinite \(3^+\) -free binary word avoiding squares of even periods (Theorem 1).&lt;/p&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;(Positive) \(L(\infty, \omega, \{1, 3\}) = \infty\) : \(\exists\) an infinite \(3^+\) -free binary word containing only squares of period 1 or 3 (Theorem 2).&lt;/p&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;(Negative) \(L(6, 1, 2 \Bbb N + 1) = 57\) : \(\nexists\) an infinite binary word avoiding squares of even period containing \(&amp;lt; 7\) squares and \(&amp;lt; 2\) cubes. The longest one containing 6 squares and 1 cube has length 57 (Proposition 6).&lt;/p&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;(Positive) \(L(7, 1, 2 \Bbb N + 1) = \infty\) : \(\exists\) an infinite \(3^+\) -free binary word avoiding squares of even period with 1 cube and 7 squares (Theorem 3).&lt;/p&gt;&lt;/li&gt;
+&lt;li&gt;&lt;p&gt;(Positive) \(L(4, 2, 2 \Bbb N + 1) = \infty\) : \(\exists\) an infinite \(3^+\) -free binary word avoiding squares of even period and containing 2 cubes and 4 squares (Theorem 4).&lt;/p&gt;&lt;/li&gt;
+&lt;/ol&gt;
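As a concrete illustration of the notions above, here is a short stdlib-only Python sketch (my own addition, not code from the paper under review) that enumerates the distinct square factors of a binary word together with the periods at which they occur:

```python
def distinct_squares(w):
    """Return the set of factors of w of the form xx (squares)."""
    n = len(w)
    return {w[i:i + 2 * p]
            for p in range(1, n // 2 + 1)     # p is the period |x|
            for i in range(n - 2 * p + 1)
            if w[i:i + p] == w[i + p:i + 2 * p]}

def square_periods(w):
    """Periods of the distinct squares of w, i.e. the allowed set S."""
    return {len(s) // 2 for s in distinct_squares(w)}

# "010" is square-free; (011)^2 = "011011" contains exactly the
# squares "11" (period 1) and "011011" itself (period 3), the kind
# of odd-period constraint appearing in Theorem 2.
print(square_periods("011011"))  # -> {1, 3}
```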
+&lt;p&gt;Copyright notice: This review is published at http://www.ams.org/mathscinet-getitem?mr=3313467, its copyright owned by the AMS.&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/blog-feed.xml">
+ <title type="text">jst</title>
+ <id>posts/2015-04-02-juggling-skill-tree.html</id>
+ <updated>2015-04-02T00:00:00Z</updated>
+ <link href="posts/2015-04-02-juggling-skill-tree.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;jst = juggling skill tree&lt;/p&gt;
+&lt;p&gt;If you have ever played a computer role playing game, you may have noticed that the protagonist sometimes has a skill “tree” (most of the time it is actually a directed acyclic graph), where certain skills lead to others. For example, &lt;a href="http://hydra-media.cursecdn.com/diablo.gamepedia.com/3/37/Sorceress_Skill_Trees_%28Diablo_II%29.png?version=b74b3d4097ef7ad4e26ebee0dcf33d01"&gt;here&lt;/a&gt; is the skill tree of the sorceress in &lt;a href="https://en.wikipedia.org/wiki/Diablo_II"&gt;Diablo II&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;Now suppose our hero embarks on a quest to learn all the possible juggling patterns. Everyone would agree she should start with the cascade, the simplest nontrivial 3-ball pattern, but what afterwards? A few other accessible patterns for beginners are juggler’s tennis, two in one and even reverse cascade, but what to learn after that? The encyclopaedic &lt;a href="http://libraryofjuggling.com/"&gt;Library of Juggling&lt;/a&gt; serves as a good guide, as it records more than 160 patterns, some of which are very aesthetically appealing. On this website almost all the patterns have a “prerequisite” section, indicating what one should learn beforehand. I have therefore written a script using &lt;a href="http://python.org"&gt;Python&lt;/a&gt;, &lt;a href="http://www.crummy.com/software/BeautifulSoup/"&gt;BeautifulSoup&lt;/a&gt; and &lt;a href="http://pygraphviz.github.io/"&gt;pygraphviz&lt;/a&gt; to generate a jst (graded by difficulty, shown in the leftmost column) from the Library of Juggling (click the image for the full size):&lt;/p&gt;
+&lt;p&gt;&lt;a href="../assets/resources/juggling.png"&gt;&lt;img src="../assets/resources/juggling.png" alt="The juggling skill tree" style="width:38em" /&gt;&lt;/a&gt;&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/blog-feed.xml">
+ <title type="text">Unitary causal quantum stochastic double products as universal interactions I</title>
+ <id>posts/2015-04-01-unitary-double-products.html</id>
+ <updated>2015-04-01T00:00:00Z</updated>
+ <link href="posts/2015-04-01-unitary-double-products.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;In &lt;a href="http://www.actaphys.uj.edu.pl/findarticle?series=Reg&amp;amp;vol=46&amp;amp;page=1851"&gt;this paper&lt;/a&gt; with &lt;a href="http://homepages.lboro.ac.uk/~marh3/"&gt;Robin&lt;/a&gt; we show the explicit formulae for a family of unitary triangular and rectangular double product integrals which can be described as second quantisations.&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/blog-feed.xml">
+ <title type="text">AMS review of 'A weighted interpretation for the super Catalan numbers' by Allen and Gheorghiciuc</title>
+ <id>posts/2015-01-20-weighted-interpretation-super-catalan-numbers.html</id>
+ <updated>2015-01-20T00:00:00Z</updated>
+ <link href="posts/2015-01-20-weighted-interpretation-super-catalan-numbers.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;The super Catalan numbers are defined as $$ T(m,n) = {(2 m)! (2 n)! 2 m! n! (m + n)!}. $$&lt;/p&gt;
+&lt;p&gt;This paper has two main results. First a combinatorial interpretation of the super Catalan numbers is given: $$ T(m,n) = P(m,n) - N(m,n) $$ where \(P(m,n)\) enumerates the number of 2-Motzkin paths whose \(m\) -th step begins at an even level (called \(m\)-positive paths) and \(N(m,n)\) those with \(m\)-th step beginning at an odd level (\(m\)-negative paths). The proof uses a recursive argument on the number of \(m\)-positive and -negative paths, based on a recursion of the super Catalan numbers appearing in [I. M. Gessel, J. Symbolic Comput. &lt;strong&gt;14&lt;/strong&gt; (1992), no. 2-3, 179–194; &lt;a href="http://www.ams.org/mathscinet/search/publdoc.html?r=1&amp;amp;pg1=MR&amp;amp;s1=1187230&amp;amp;loc=fromrevtext"&gt;MR1187230&lt;/a&gt;]: $$ 4T(m,n) = T(m+1, n) + T(m, n+1). $$ This result gives an expression for the super Catalan numbers in terms of numbers counting the so-called ballot paths. The latter are sometimes also referred to as the generalised Catalan numbers forming the entries of the Catalan triangle.&lt;/p&gt;
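As a quick numerical sanity check of the definition and the recursion above (my own sketch, not part of the review), using exact rational arithmetic since \(T(0,0) = 1/2\):

```python
from fractions import Fraction
from math import factorial

def T(m, n):
    """Super Catalan number T(m,n) = (2m)!(2n)! / (2 m! n! (m+n)!)."""
    return Fraction(factorial(2 * m) * factorial(2 * n),
                    2 * factorial(m) * factorial(n) * factorial(m + n))

# Gessel's recursion: 4 T(m,n) = T(m+1,n) + T(m,n+1).
assert all(4 * T(m, n) == T(m + 1, n) + T(m, n + 1)
           for m in range(8) for n in range(8))

print(T(2, 3))  # -> 6; T(m,n) is a positive integer for (m,n) != (0,0)
```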
+&lt;p&gt;Based on the first result, the second result is a combinatorial interpretation of the super Catalan numbers \(T(2,n)\) in terms of counting certain Dyck paths. This is equivalent to a theorem, which represents \(T(2,n)\) as counting certain pairs of Dyck paths, in [I. M. Gessel and G. Xin, J. Integer Seq. &lt;strong&gt;8&lt;/strong&gt; (2005), no. 2, Article 05.2.3, 13 pp.; &lt;a href="http://www.ams.org/mathscinet/search/publdoc.html?r=1&amp;amp;pg1=MR&amp;amp;s1=2134162&amp;amp;loc=fromrevtext"&gt;MR2134162&lt;/a&gt;], and the equivalence is explained at the end of the paper by a bijection between the Dyck paths and the pairs of Dyck paths. The proof of the theorem itself is also done by constructing two bijections between Dyck paths satisfying certain conditions. All three bijections are formulated by locating, removing and adding steps.&lt;/p&gt;
+&lt;p&gt;Copyright notice: This review is published at http://www.ams.org/mathscinet-getitem?mr=3275875, its copyright owned by the AMS.&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/blog-feed.xml">
+ <title type="text">Symmetry property of \(q\)-weighted Robinson-Schensted algorithms and branching algorithms</title>
+ <id>posts/2014-04-01-q-robinson-schensted-symmetry-paper.html</id>
+ <updated>2014-04-01T00:00:00Z</updated>
+ <link href="posts/2014-04-01-q-robinson-schensted-symmetry-paper.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;In &lt;a href="http://link.springer.com/article/10.1007/s10801-014-0505-x"&gt;this paper&lt;/a&gt; a symmetry property analogous to the well known symmetry property of the normal Robinson-Schensted algorithm has been shown for the \(q\)-weighted Robinson-Schensted algorithm. The proof uses a generalisation of the growth diagram approach introduced by Fomin. This approach, which uses “growth graphs”, can also be applied to a wider class of insertion algorithms which have a branching structure.&lt;/p&gt;
+&lt;figure&gt;
+&lt;img src="../assets/resources/1423graph.jpg" alt="Growth graph of q-RS for 1423" /&gt;&lt;figcaption&gt;Growth graph of q-RS for 1423&lt;/figcaption&gt;
+&lt;/figure&gt;
+&lt;p&gt;Above is the growth graph of the \(q\)-weighted Robinson-Schensted algorithm for the permutation \({1 2 3 4\choose1 4 2 3}\).&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/blog-feed.xml">
+ <title type="text">A \(q\)-weighted Robinson-Schensted algorithm</title>
+ <id>posts/2013-06-01-q-robinson-schensted-paper.html</id>
+ <updated>2013-06-01T00:00:00Z</updated>
+ <link href="posts/2013-06-01-q-robinson-schensted-paper.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;In &lt;a href="https://projecteuclid.org/euclid.ejp/1465064320"&gt;this paper&lt;/a&gt; with &lt;a href="http://www.bristol.ac.uk/maths/people/neil-m-oconnell/"&gt;Neil&lt;/a&gt; we construct a \(q\)-version of the Robinson-Schensted algorithm with column insertion. Like the &lt;a href="http://en.wikipedia.org/wiki/Robinson–Schensted_correspondence"&gt;usual RS correspondence&lt;/a&gt; with column insertion, this algorithm could take words as input. Unlike the usual RS algorithm, the output is a set of weighted pairs of semistandard and standard Young tableaux \((P,Q)\) with the same shape. The weights are rational functions of indeterminant \(q\).&lt;/p&gt;
+&lt;p&gt;If \(q\in[0,1]\), the algorithm can be considered as a randomised RS algorithm, with 0 and 1 being two interesting cases. When \(q\to0\), it reduces to the usual RS algorithm; while when \(q\to1\), with proper scaling, it should scale to the directed random polymer model in &lt;a href="http://arxiv.org/abs/0910.0069"&gt;(O’Connell 2012)&lt;/a&gt;. When the input word \(w\) is a random walk:&lt;/p&gt;
+&lt;p&gt;\begin{align*}\mathbb P(w=v)=\prod_{i=1}^na_{v_i},\qquad\sum_ja_j=1\end{align*}&lt;/p&gt;
+&lt;p&gt;the shape of the output evolves as a Markov chain with kernel related to \(q\)-Whittaker functions, which are Macdonald functions at \(t=0\) up to a factor.&lt;/p&gt;
+</content>
+ </entry>
+</feed>
diff --git a/site-from-md/blog.html b/site-from-md/blog.html
new file mode 100644
index 0000000..e8ab9a1
--- /dev/null
+++ b/site-from-md/blog.html
@@ -0,0 +1,62 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>Yuchen's Blog</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="postlist.html">All posts</a><a href="index.html">About</a><a href="blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <a href="posts/2019-03-14-great-but-manageable-expectations.html"><h2> Great but Manageable Expectations </h2></a>
+ <p>Posted on 2019-03-14</p>
+ <p>This is Part 2 of a two-part blog post on differential privacy. Continuing from <a href="/posts/2019-03-13-a-tail-of-two-densities.html">Part 1</a>, I discuss Rényi differential privacy, which corresponds to the Rényi divergence, and study the moment generating functions of the divergence between probability measures to derive tail bounds.</p>
+
+ <a href="posts/2019-03-14-great-but-manageable-expectations.html">Continue reading</a>
+</div>
+<div class="bodyitem">
+ <a href="posts/2019-03-13-a-tail-of-two-densities.html"><h2> A Tail of Two Densities </h2></a>
+ <p>Posted on 2019-03-13</p>
+ <p>This is Part 1 of a two-part post where I give an introduction to differential privacy, which is a study of tail bounds of the divergence between probability measures, with the end goal of applying it to stochastic gradient descent.</p>
+
+ <a href="posts/2019-03-13-a-tail-of-two-densities.html">Continue reading</a>
+</div>
+<div class="bodyitem">
+ <a href="posts/2019-02-14-raise-your-elbo.html"><h2> Raise your ELBO </h2></a>
+ <p>Posted on 2019-02-14</p>
+ <p>In this post I give an introduction to variational inference, which is about maximising the evidence lower bound (ELBO).</p>
+
+ <a href="posts/2019-02-14-raise-your-elbo.html">Continue reading</a>
+</div>
+<div class="bodyitem">
+ <a href="posts/2019-01-03-discriminant-analysis.html"><h2> Discriminant analysis </h2></a>
+ <p>Posted on 2019-01-03</p>
+ <p>In this post I talk about the theory and implementation of linear and quadratic discriminant analysis, classical methods in statistical learning.</p>
+
+ <a href="posts/2019-01-03-discriminant-analysis.html">Continue reading</a>
+</div>
+<div class="bodyitem">
+ <a href="posts/2018-12-02-lime-shapley.html"><h2> Shapley, LIME and SHAP </h2></a>
+ <p>Posted on 2018-12-02</p>
+ <p>In this post I explain LIME (Ribeiro et al. 2016), the Shapley values (Shapley, 1953) and the SHAP values (Strumbelj-Kononenko, 2014; Lundberg-Lee, 2017).</p>
+
+ <a href="posts/2018-12-02-lime-shapley.html">Continue reading</a>
+</div>
+
+ <div class="bodyitem">
+ <p><a href="postlist.html">older posts</a></p>
+ </div>
+ </div>
+ </body>
+</html>
diff --git a/site-from-md/index.html b/site-from-md/index.html
new file mode 100644
index 0000000..d61b4e6
--- /dev/null
+++ b/site-from-md/index.html
@@ -0,0 +1,54 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>Yuchen Pei</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="index.html">Yuchen Pei</a>
+ </span>
+ <nav>
+ <a href="blog.html">Blog</a><a href="microblog.html">Microblog</a><a href="links.html">Links</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<p>Yuchen is a post-doctoral researcher in mathematics at the <a href="https://www.math.kth.se/RMSMA/">KTH RMSMA group</a>. Before KTH he did a PhD at the <a href="https://warwick.ac.uk/fac/sci/masdoc">MASDOC program at Warwick</a>, and spent two years in a postdoc position at <a href="http://cmsa.fas.harvard.edu">CMSA at Harvard</a>.</p>
+<p>He is interested in machine learning, with a preference for its theoretical (which really translates to mathematical because he views statistics and theoretical computer science as subsets of mathematics) aspects.</p>
+<p>As an academic his job is to seek truth and share his findings with the public.</p>
+<p>He is also interested in the idea of open research and has open-sourced his research on Robinson-Schensted algorithms as a <a href="https://toywiki.xyz">wiki</a>.</p>
+<p>He can be reached at: ypei@kth.se | hi@ypei.me | <a href="https://github.com/ycpei">Github</a> | <a href="https://www.linkedin.com/in/ycpei/">LinkedIn</a></p>
+<p>This website is made using a <a href="https://github.com/ycpei/ypei.me/blob/master/engine/engine.py">handmade static site generator</a>.</p>
+<p>Unless otherwise specified, all contents on this website are licensed under <a href="https://creativecommons.org/licenses/by-nd/4.0/">Creative Commons Attribution-NoDerivatives 4.0 International License</a>.</p>
+</body>
+</html>
+
+ </div>
+ </div>
+
+ </body>
+</html>
diff --git a/site-from-md/links.html b/site-from-md/links.html
new file mode 100644
index 0000000..aa808c9
--- /dev/null
+++ b/site-from-md/links.html
@@ -0,0 +1,113 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>Links</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="index.html">Yuchen Pei</a>
+ </span>
+ <nav>
+ <a href="blog.html">Blog</a><a href="microblog.html">Microblog</a><a href="links.html">Links</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<p>Here are some links I find interesting or helpful, or both. Listed in no particular order.</p>
+<ul>
+<li><a href="http://worrydream.com/">Bret Victor</a></li>
+<li><a href="https://www.peterkrautzberger.org/archive/">Peter Krautzberger</a></li>
+<li><a href="https://web.stanford.edu/~cpiech/bio/index.html">Chris Piech</a></li>
+<li><a href="https://www.scilag.net/">SciLag</a></li>
+<li><a href="https://satwcomic.com/">Scandinavia and the World</a></li>
+<li><a href="http://www.arxiv-sanity.com/">Arxiv Sanity Preserver</a></li>
+<li><a href="http://www.shortscience.org/">ShortScience.org</a></li>
+<li><a href="https://paperswithcode.com/">Papers with Code</a></li>
+<li><a href="https://distill.pub/">Distill</a></li>
+<li><a href="https://competitions.codalab.org/competitions/">CodaLab</a></li>
+<li><a href="https://haskellformaths.blogspot.com/">HaskellForMaths</a></li>
+<li><a href="http://www.openproblemgarden.org/">Open Problem Garden</a></li>
+<li><a href="http://www.ams.org/open-math-notes">AMS open notes</a></li>
+<li><a href="http://garsia.math.yorku.ca/MPWP/">Macdonald polynomials webpage</a></li>
+<li><a href="https://news.ycombinator.com/">Hacker News</a></li>
+<li><a href="http://arminstraub.com/">Armin Straub</a></li>
+<li><a href="http://www-math.ucdenver.edu/~wcherowi/">Bill Cherowitzo</a></li>
+<li><a href="https://stallman.org/">Richard Stallman</a></li>
+<li><a href="http://www.aaronsw.com/">Aaron Swartz</a> - The Internet’s own boy</li>
+<li><a href="https://docs.google.com/document/d/10eA5-mCZLSS4MQY5QGb5ewC3VAL6pLkT53V_81ZyitM/preview">False, Misleading, Clickbait-y, and/or Satirical “News” Sources</a></li>
+<li><a href="http://www.math.utah.edu/~jasonu/deala/">Differential Equations &amp; Linear Algebra</a> - Lecture notes on the web</li>
+<li><a href="http://wstein.org/">William Stein</a> - the creator of SageMath
+<ul>
+<li><a href="http://wstein.org/talks/2016-06-sage-bp/">The origins of SageMath</a> Stein’s BP centenary talk at Harvard</li>
+</ul></li>
+<li><a href="http://www.sagemath.org/">SageMath</a> - Open-source maths software system<br />
+</li>
+<li><a href="https://projecteuler.net/">Project Euler</a></li>
+<li><a href="https://blockly-games.appspot.com/about?lang=en">Blockly games</a></li>
+<li><a href="https://jeremykun.com/">Math ∩ Programming</a></li>
+<li><a href="https://www.authorea.com/">Authorea</a></li>
+<li><a href="http://bigdata.show">Big Data</a></li>
+<li><a href="http://fermatslibrary.com/">Fermat’s Library</a></li>
+<li><a href="http://www.tricki.org/">Tricki</a></li>
+<li><a href="http://www.ams.org/samplings/feature-column/fc-current.cgi">AMS Feature Column</a></li>
+<li><a href="https://arxiv.org">arXiv</a></li>
+<li><a href="https://terrytao.wordpress.com/">What’s new</a> - Terence Tao’s blog</li>
+<li><a href="https://gowers.wordpress.com/">Gowers’s weblog</a> - Timothy Gowers’s blog</li>
+<li><a href="http://michaelnielsen.org/polymath1/index.php?title=Main_Page">Polymath</a> - MMO maths research</li>
+<li><a href="https://oeis.org/">OEIS</a> - The On-Line Encyclopedia of Integer Sequences® (OEIS®)</li>
+<li><a href="http://www.vim.org">Vi IMproved</a> - the one true text editor. Plugins:
+<ul>
+<li><a href="http://vim-latex.sourceforge.net/">vim-latex</a> - for latexing</li>
+<li><a href="https://code.google.com/p/vimwiki/">vimwiki</a> - a wiki tool with google wiki-like markup</li>
+<li><a href="https://github.com/Shougo/neocomplete.vim">neocomplete</a> - for auto-completion</li>
+</ul></li>
+<li><a href="http://www.vimperator.org/vimperator">vimperator</a> - turn your Firefox into Vim</li>
+<li><a href="http://www.vimperator.org/muttator">muttator</a> - turn your Thunderbird into Vim</li>
+<li><a href="http://pwmt.org/projects/zathura/">zathura</a> - turn your pdf reader into Vim</li>
+<li><a href="https://i3wm.org/">i3wm</a> - turn your window manager into Vim</li>
+<li><a href="http://www.vimgolf.com/">VimGolf</a></li>
+<li><a href="http://regex.alf.nu/">Regex Golf</a></li>
+<li><a href="http://regexcrossword.com/">Regex Crossword</a></li>
+<li><a href="http://archlinux.org">Arch Linux</a></li>
+<li><a href="https://jupyter.org/">Jupyter notebook</a> - An open-source notebook</li>
+<li>Stackexchange sites
+<ul>
+<li><a href="https://mathoverflow.net/">Mathoverflow</a></li>
+<li><a href="https://math.stackexchange.com/">Mathematics</a></li>
+<li><a href="https://codegolf.stackexchange.com/">Codegolf</a> - The most fun corner of Stackexchange.</li>
+</ul></li>
+<li><a href="http://math.stanford.edu/~bump/">Daniel Bump</a></li>
+<li><a href="http://www.math.ubc.ca/~cass/">Bill Casselman</a></li>
+</ul>
+</body>
+</html>
+
+ </div>
+ </div>
+
+ </body>
+</html>
diff --git a/site-from-md/microblog-feed.xml b/site-from-md/microblog-feed.xml
new file mode 100644
index 0000000..4563861
--- /dev/null
+++ b/site-from-md/microblog-feed.xml
@@ -0,0 +1,291 @@
+<?xml version="1.0" encoding="utf-8"?>
+<feed xmlns="http://www.w3.org/2005/Atom">
+ <title type="text">Yuchen Pei's Microblog</title>
+ <id>https://ypei.me/microblog-feed.xml</id>
+ <updated>2018-05-30T00:00:00Z</updated>
+ <link href="https://ypei.me" />
+ <link href="https://ypei.me/microblog-feed.xml" rel="self" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <generator>PyAtom</generator>
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-05-30</title>
+ <id>microblog.html</id>
+ <updated>2018-05-30T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;Roger Grosse’s post &lt;a href="https://metacademy.org/roadmaps/rgrosse/learn_on_your_own"&gt;How to learn on your own (2015)&lt;/a&gt; is an excellent modern guide on how to learn and research technical stuff (especially machine learning and maths) on one’s own.&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-05-25</title>
+ <id>microblog.html</id>
+ <updated>2018-05-25T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;&lt;a href="http://jdlm.info/articles/2018/03/18/markov-decision-process-2048.html"&gt;This post&lt;/a&gt; models 2048 as an MDP and solves it using policy iteration and backward induction.&lt;/p&gt;
+</content>
+ </entry>
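The linked post solves 2048 by policy iteration. As a generic illustration of that algorithm (on a made-up two-state MDP, entirely unrelated to the actual 2048 model in the post), a sketch with exact policy evaluation might look like:

```python
import numpy as np

# Hypothetical toy MDP: P[a, s, s'] are transition probabilities under
# action a, R[s, a] are immediate rewards, gamma is the discount factor.
P = np.array([[[0.9, 0.1],
               [0.2, 0.8]],
              [[0.5, 0.5],
               [0.0, 1.0]]])
R = np.array([[1.0, 0.0],
              [2.0, 0.5]])
gamma = 0.9

def policy_iteration(P, R, gamma):
    n_states, n_actions = R.shape
    policy = np.zeros(n_states, dtype=int)
    while True:
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi exactly.
        P_pi = P[policy, np.arange(n_states)]
        r_pi = R[np.arange(n_states), policy]
        v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
        # Policy improvement: act greedily w.r.t. the one-step lookahead.
        q = R + gamma * np.einsum('ast,t->sa', P, v)
        new_policy = q.argmax(axis=1)
        if np.array_equal(new_policy, policy):   # stable: greedy fixed point
            return policy, v
        policy = new_policy
```

The loop alternates exact evaluation of the current policy with a greedy improvement step, and terminates when the policy is greedy with respect to its own value function.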
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-05-22</title>
+ <id>microblog.html</id>
+ <updated>2018-05-22T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;blockquote&gt;
+&lt;p&gt;ATS (Applied Type System) is a programming language designed to unify programming with formal specification. ATS has support for combining theorem proving with practical programming through the use of advanced type systems. A past version of The Computer Language Benchmarks Game has demonstrated that the performance of ATS is comparable to that of the C and C++ programming languages. By using theorem proving and strict type checking, the compiler can detect and prove that its implemented functions are not susceptible to bugs such as division by zero, memory leaks, buffer overflow, and other forms of memory corruption by verifying pointer arithmetic and reference counting before the program compiles. Additionally, by using the integrated theorem-proving system of ATS (ATS/LF), the programmer may make use of static constructs that are intertwined with the operative code to prove that a function attains its specification.&lt;/p&gt;
+&lt;/blockquote&gt;
+&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/ATS_(programming_language)"&gt;Wikipedia entry on ATS&lt;/a&gt;&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-05-20</title>
+ <id>microblog.html</id>
+ <updated>2018-05-20T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;(5-second fame) I sent a picture of my kitchen sink to BBC and got mentioned in the &lt;a href="https://www.bbc.co.uk/programmes/w3cswg8c"&gt;latest Boston Calling episode&lt;/a&gt; (listen at 25:54).&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-05-18</title>
+ <id>microblog.html</id>
+ <updated>2018-05-18T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;&lt;a href="https://colah.github.io/"&gt;colah’s blog&lt;/a&gt; has a cool feature that allows you to comment on any paragraph of a blog post. Here’s an &lt;a href="https://colah.github.io/posts/2015-08-Understanding-LSTMs/"&gt;example&lt;/a&gt;. If it is doable on a static site hosted on Github pages, I suppose it shouldn’t be too hard to implement. This also seems to work more seamlessly than &lt;a href="https://fermatslibrary.com/"&gt;Fermat’s Library&lt;/a&gt;, because the latter has to embed pdfs in webpages. Now fantasy time: imagine that one day arXiv shows html versions of papers (through author uploading or conversion from TeX) with this feature.&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-05-15</title>
+ <id>microblog.html</id>
+ <updated>2018-05-15T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;h3 id="notes-on-random-froests"&gt;Notes on random froests&lt;/h3&gt;
+&lt;p&gt;&lt;a href="https://lagunita.stanford.edu/courses/HumanitiesSciences/StatLearning/Winter2016/info"&gt;Stanford Lagunita’s statistical learning course&lt;/a&gt; has some excellent lectures on random forests. It starts with explanations of decision trees, followed by bagged trees and random forests, and ends with boosting. From these lectures it seems that:&lt;/p&gt;
+&lt;ol type="1"&gt;
+&lt;li&gt;The term “predictors” in statistical learning = “features” in machine learning.&lt;/li&gt;
+&lt;li&gt;The main idea of random forests, namely dropping predictors for individual trees and aggregating by majority or average, is the same as the idea of dropout in neural networks, where a proportion of neurons in the hidden layers are dropped temporarily during different minibatches of training, effectively averaging over an ensemble of subnetworks. Both tricks are used as regularisations, i.e. to reduce the variance. The only difference is: in random forests, all but roughly the square root of the total number of predictors are dropped, whereas the dropout ratio in neural networks is usually a half.&lt;/li&gt;
+&lt;/ol&gt;
+&lt;p&gt;By the way, here’s a comparison between statistical learning and machine learning from the slides of the Statistical Learning course:&lt;/p&gt;
+&lt;p&gt;&lt;a href="../assets/resources/sl-vs-ml.png"&gt;&lt;img src="../assets/resources/sl-vs-ml.png" alt="SL vs ML" style="width:38em" /&gt;&lt;/a&gt;&lt;/p&gt;
+</content>
+ </entry>
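The predictor-dropping described in point 2 of the entry above can be illustrated in a few lines. This is a hypothetical toy, using depth-1 trees (stumps) rather than full decision trees to keep it short, with invented names and data; real random forests grow deep trees and resample features at every split.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_stump(X, y, feats):
    """Exhaustively pick the best one-feature threshold rule among `feats`."""
    best_acc, best = -1.0, None
    for f in feats:
        for t in np.unique(X[:, f]):
            mask = X[:, f] > t
            for sign in (0, 1):                  # which side predicts class 1
                pred = np.where(mask, sign, 1 - sign)
                acc = (pred == y).mean()
                if acc > best_acc:
                    best_acc, best = acc, (f, t, sign)
    return best

def predict_stump(stump, X):
    f, t, sign = stump
    return np.where(X[:, f] > t, sign, 1 - sign)

def random_forest_stumps(X, y, n_trees=25):
    n, p = X.shape
    k = max(1, int(np.sqrt(p)))      # keep only ~sqrt(p) predictors per tree
    stumps = []
    for _ in range(n_trees):
        idx = rng.integers(0, n, n)                    # bootstrap resample
        feats = rng.choice(p, size=k, replace=False)   # drop the other predictors
        stumps.append(fit_stump(X[idx], y[idx], feats))
    return stumps

def forest_predict(stumps, X):
    votes = np.stack([predict_stump(s, X) for s in stumps])
    return (votes.mean(axis=0) > 0.5).astype(int)      # aggregate by majority
```

Trees that happen to draw only irrelevant predictors vote roughly at random, while the rest vote correctly, so the majority recovers the signal: the same variance-reduction effect the dropout analogy points at.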
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-05-14</title>
+ <id>microblog.html</id>
+ <updated>2018-05-14T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;h3 id="open-peer-review"&gt;Open peer review&lt;/h3&gt;
+&lt;p&gt;Open peer review means a peer review process where communications, e.g. comments and responses, are public.&lt;/p&gt;
+&lt;p&gt;Like &lt;a href="https://scipost.org/"&gt;SciPost&lt;/a&gt; mentioned in &lt;a href="/posts/2018-04-10-update-open-research.html"&gt;my post&lt;/a&gt;, &lt;a href="https://openreview.net"&gt;OpenReview.net&lt;/a&gt; is an example of open peer review in research. It looks like their focus is machine learning. Their &lt;a href="https://openreview.net/about"&gt;about page&lt;/a&gt; states their mission, and here’s &lt;a href="https://openreview.net/group?id=ICLR.cc/2018/Conference"&gt;an example&lt;/a&gt; where you can click on each entry to see what it is like. We definitely need this in the maths research community.&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-05-11</title>
+ <id>microblog.html</id>
+ <updated>2018-05-11T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;h3 id="some-notes-on-rnn-fsm-fa-tm-and-utm"&gt;Some notes on RNN, FSM / FA, TM and UTM&lt;/h3&gt;
+&lt;p&gt;Related to &lt;a href="#neural-turing-machine"&gt;a previous micropost&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;&lt;a href="http://www.cs.toronto.edu/~rgrosse/csc321/lec9.pdf"&gt;These slides from Toronto&lt;/a&gt; are a nice introduction to RNN (recurrent neural network) from a computational point of view. It states that RNN can simulate any FSM (finite state machine, a.k.a. finite automata abbr. FA) with a toy example computing the parity of a binary string.&lt;/p&gt;
+&lt;p&gt;&lt;a href="http://www.deeplearningbook.org/contents/rnn.html"&gt;Goodfellow et. al.’s book&lt;/a&gt; (see page 372 and 374) goes one step further, stating that RNN with a hidden-to-hidden layer can simulate Turing machines, and not only that, but also the &lt;em&gt;universal&lt;/em&gt; Turing machine abbr. UTM (the book referenced &lt;a href="https://www.sciencedirect.com/science/article/pii/S0022000085710136"&gt;Siegelmann-Sontag&lt;/a&gt;), a property not shared by the weaker network where the hidden-to-hidden layer is replaced by an output-to-hidden layer (page 376).&lt;/p&gt;
+&lt;p&gt;By the way, the RNN with a hidden-to-hidden layer has the same architecture as the so-called linear dynamical system mentioned in &lt;a href="https://www.coursera.org/learn/neural-networks/lecture/Fpa7y/modeling-sequences-a-brief-overview"&gt;Hinton’s video&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;From what I have learned, the universality of RNNs and that of feedforward networks are therefore due to different arguments, the former coming from Turing machines and the latter from an analytical view of approximation by step functions.&lt;/p&gt;
+</content>
+ </entry>
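The parity example mentioned in the entry above can be made concrete with a tiny hand-weighted RNN. The particular weights below are my own construction, not taken from the slides: one hidden unit computes OR and the other AND of (previous parity, current bit), and their difference is XOR, which is exactly the update rule of the 2-state parity FSM.

```python
import numpy as np

W = np.array([[1.0, -1.0],
              [1.0, -1.0]])   # hidden-to-hidden weights
U = np.array([1.0, 1.0])      # input-to-hidden weights
b = np.array([-0.5, -1.5])    # biases: thresholds for OR and AND

def step(z):
    return (z > 0).astype(float)   # heaviside activation

def parity_rnn(bits):
    """Run the recurrence h_t = step(W h_{t-1} + U x_t + b) over a bit string."""
    h = np.zeros(2)
    for x in bits:
        h = step(W @ h + U * x + b)
    return int(h[0] - h[1])        # linear readout: 1 iff an odd number of ones
```

The hidden state only ever visits a finite set of values, mirroring the two states of the parity automaton, which is the sense in which an RNN simulates an FSM.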
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-05-10</title>
+ <id>microblog.html</id>
+ <updated>2018-05-10T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;h3 id="writing-readable-mathematics-like-writing-an-operating-system"&gt;Writing readable mathematics like writing an operating system&lt;/h3&gt;
+&lt;p&gt;One way to write readable mathematics is to decouple concepts. One idea is the following template. First write a toy example with all the important components present in this example, then analyse each component individually and elaborate how (perhaps more complex) variations of the component can extend the toy example and induce more complex or powerful versions of the toy example. Through such incremental development, one should be able to arrive at any result in cutting edge research after a pleasant journey.&lt;/p&gt;
+&lt;p&gt;It’s a bit like the UNIX philosophy, where you have a basic system of modules like IO, memory management, graphics etc, and modify / improve each module individually (H/t &lt;a href="http://nand2tetris.org/"&gt;NAND2Tetris&lt;/a&gt;).&lt;/p&gt;
+&lt;p&gt;The book &lt;a href="http://neuralnetworksanddeeplearning.com/"&gt;Neutral networks and deep learning&lt;/a&gt; by Michael Nielsen is an example of such approach. It begins the journey with a very simple neutral net with one hidden layer, no regularisation, and sigmoid activations. It then analyses each component including cost functions, the back propagation algorithm, the activation functions, regularisation and the overall architecture (from fully connected to CNN) individually and improve the toy example incrementally. Over the course the accuracy of the example of mnist grows incrementally from 95.42% to 99.67%.&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-05-09</title>
+ <id>microblog.html</id>
+ <updated>2018-05-09T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;blockquote&gt;
+&lt;p&gt;What makes the rectified linear activation function better than the sigmoid or tanh functions? At present, we have a poor understanding of the answer to this question. Indeed, rectified linear units have only begun to be widely used in the past few years. The reason for that recent adoption is empirical: a few people tried rectified linear units, often on the basis of hunches or heuristic arguments. They got good results classifying benchmark data sets, and the practice has spread. In an ideal world we’d have a theory telling us which activation function to pick for which application. But at present we’re a long way from such a world. I should not be at all surprised if further major improvements can be obtained by an even better choice of activation function. And I also expect that in coming decades a powerful theory of activation functions will be developed. Today, we still have to rely on poorly understood rules of thumb and experience.&lt;/p&gt;
+&lt;/blockquote&gt;
+&lt;p&gt;Michael Nielsen, &lt;a href="http://neuralnetworksanddeeplearning.com/chap6.html#convolutional_neural_networks_in_practice"&gt;Neutral networks and deep learning&lt;/a&gt;&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-05-09</title>
+ <id>microblog.html</id>
+ <updated>2018-05-09T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;blockquote&gt;
+&lt;p&gt;One way RNNs are currently being used is to connect neural networks more closely to traditional ways of thinking about algorithms, ways of thinking based on concepts such as Turing machines and (conventional) programming languages. &lt;a href="https://arxiv.org/abs/1410.4615"&gt;A 2014 paper&lt;/a&gt; developed an RNN which could take as input a character-by-character description of a (very, very simple!) Python program, and use that description to predict the output. Informally, the network is learning to “understand” certain Python programs. &lt;a href="https://arxiv.org/abs/1410.5401"&gt;A second paper, also from 2014&lt;/a&gt;, used RNNs as a starting point to develop what they called a neural Turing machine (NTM). This is a universal computer whose entire structure can be trained using gradient descent. They trained their NTM to infer algorithms for several simple problems, such as sorting and copying.&lt;/p&gt;
+&lt;p&gt;As it stands, these are extremely simple toy models. Learning to execute the Python program &lt;code&gt;print(398345+42598)&lt;/code&gt; doesn’t make a network into a full-fledged Python interpreter! It’s not clear how much further it will be possible to push the ideas. Still, the results are intriguing. Historically, neural networks have done well at pattern recognition problems where conventional algorithmic approaches have trouble. Vice versa, conventional algorithmic approaches are good at solving problems that neural nets aren’t so good at. No-one today implements a web server or a database program using a neural network! It’d be great to develop unified models that integrate the strengths of both neural networks and more traditional approaches to algorithms. RNNs and ideas inspired by RNNs may help us do that.&lt;/p&gt;
+&lt;/blockquote&gt;
+&lt;p&gt;Michael Nielsen, &lt;a href="http://neuralnetworksanddeeplearning.com/chap6.html#other_approaches_to_deep_neural_nets"&gt;Neural networks and deep learning&lt;/a&gt;&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-05-08</title>
+ <id>microblog.html</id>
+ <updated>2018-05-08T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;Primer Science is a tool by a startup called Primer that uses NLP to summarize contents (but not single papers, yet) on arxiv. A developer of this tool predicts in &lt;a href="https://twimlai.com/twiml-talk-136-taming-arxiv-w-natural-language-processing-with-john-bohannon/#"&gt;an interview&lt;/a&gt; that progress on AI’s ability to extract meanings from AI research papers will be the biggest accelerant on AI research.&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-05-08</title>
+ <id>microblog.html</id>
+ <updated>2018-05-08T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;blockquote&gt;
+&lt;p&gt;no-one has yet developed an entirely convincing theoretical explanation for why regularization helps networks generalize. Indeed, researchers continue to write papers where they try different approaches to regularization, compare them to see which works better, and attempt to understand why different approaches work better or worse. And so you can view regularization as something of a kludge. While it often helps, we don’t have an entirely satisfactory systematic understanding of what’s going on, merely incomplete heuristics and rules of thumb.&lt;/p&gt;
+&lt;p&gt;There’s a deeper set of issues here, issues which go to the heart of science. It’s the question of how we generalize. Regularization may give us a computational magic wand that helps our networks generalize better, but it doesn’t give us a principled understanding of how generalization works, nor of what the best approach is.&lt;/p&gt;
+&lt;/blockquote&gt;
+&lt;p&gt;Michael Nielsen, &lt;a href="http://neuralnetworksanddeeplearning.com/chap3.html#why_does_regularization_help_reduce_overfitting"&gt;Neural networks and deep learning&lt;/a&gt;&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-05-08</title>
+ <id>microblog.html</id>
+ <updated>2018-05-08T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;p&gt;Computerphile has some brilliant educational videos on computer science, like &lt;a href="https://www.youtube.com/watch?v=ciNHn38EyRc"&gt;a demo of SQL injection&lt;/a&gt;, &lt;a href="https://www.youtube.com/watch?v=eis11j_iGMs"&gt;a toy example of the lambda calculus&lt;/a&gt;, and &lt;a href="https://www.youtube.com/watch?v=9T8A89jgeTI"&gt;explaining the Y combinator&lt;/a&gt;.&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-05-07</title>
+ <id>microblog.html</id>
+ <updated>2018-05-07T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;h3 id="learning-via-knowledge-graph-and-reddit-journal-clubs"&gt;Learning via knowledge graph and reddit journal clubs&lt;/h3&gt;
+&lt;p&gt;It is a natural idea to look for ways to learn things like going through a skill tree in a computer RPG.&lt;/p&gt;
+&lt;p&gt;For example I made a &lt;a href="https://ypei.me/posts/2015-04-02-juggling-skill-tree.html"&gt;DAG for juggling&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;Websites like &lt;a href="https://knowen.org"&gt;Knowen&lt;/a&gt; and &lt;a href="https://metacademy.org"&gt;Metacademy&lt;/a&gt; explore this idea with added flavour of open collaboration.&lt;/p&gt;
+&lt;p&gt;The design of Metacademy looks quite promising. It also has a nice tagline: “your package manager for knowledge”.&lt;/p&gt;
+&lt;p&gt;There are so so many tools to assist learning / research / knowledge sharing today, and we should keep experimenting, in the hope that eventually one of them will scale.&lt;/p&gt;
+&lt;p&gt;On another note, I often complain about the lack of a place to discuss math research online, but today I found on Reddit some journal clubs on machine learning: &lt;a href="https://www.reddit.com/r/MachineLearning/comments/8aluhs/d_machine_learning_wayr_what_are_you_reading_week/"&gt;1&lt;/a&gt;, &lt;a href="https://www.reddit.com/r/MachineLearning/comments/8elmd8/d_anyone_having_trouble_reading_a_particular/"&gt;2&lt;/a&gt;. If only we had this for maths. On the other hand r/math does have some interesting recurring threads as well: &lt;a href="https://www.reddit.com/r/math/wiki/everythingaboutx"&gt;Everything about X&lt;/a&gt; and &lt;a href="https://www.reddit.com/r/math/search?q=what+are+you+working+on?+author:automoderator+&amp;amp;sort=new&amp;amp;restrict_sr=on&amp;amp;t=all"&gt;What Are You Working On?&lt;/a&gt;. Hopefully these threads can last for years to come.&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-05-02</title>
+ <id>microblog.html</id>
+ <updated>2018-05-02T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;h3 id="pastebin-for-the-win"&gt;Pastebin for the win&lt;/h3&gt;
+&lt;p&gt;The lack of maths rendering in major online communication platforms like instant messaging, email or Github has been a minor obsession of mine for quite a while, as I saw it as a big factor preventing people from talking more maths online. But today I realised this is totally a non-issue. Just do what people on IRC have been doing since the inception of the universe: use a (latex) pastebin.&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-05-01</title>
+ <id>microblog.html</id>
+ <updated>2018-05-01T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;blockquote&gt;
+&lt;p&gt;Neural networks are one of the most beautiful programming paradigms ever invented. In the conventional approach to programming, we tell the computer what to do, breaking big problems up into many small, precisely defined tasks that the computer can easily perform. By contrast, in a neural network we don’t tell the computer how to solve our problem. Instead, it learns from observational data, figuring out its own solution to the problem at hand.&lt;/p&gt;
+&lt;/blockquote&gt;
+&lt;p&gt;Michael Nielsen - &lt;a href="http://neuralnetworksanddeeplearning.com/about.html"&gt;What this book (Neural Networks and Deep Learning) is about&lt;/a&gt;&lt;/p&gt;
+&lt;p&gt;Unrelated to the quote, note that Nielsen’s book is licensed under &lt;a href="https://creativecommons.org/licenses/by-nc/3.0/deed.en_GB"&gt;CC BY-NC&lt;/a&gt;, so one can build on it and redistribute non-commercially.&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-04-30</title>
+ <id>microblog.html</id>
+ <updated>2018-04-30T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;blockquote&gt;
+&lt;p&gt;But, users have learned to accommodate to Google not the other way around. We know what kinds of things we can type into Google and what we can’t and we keep our searches to things that Google is likely to help with. We know we are looking for texts and not answers to start a conversation with an entity that knows what we really need to talk about. People learn from conversation and Google can’t have one. It can pretend to have one using Siri but really those conversations tend to get tiresome when you are past asking about where to eat.&lt;/p&gt;
+&lt;/blockquote&gt;
+&lt;p&gt;Roger Schank - &lt;a href="http://www.rogerschank.com/fraudulent-claims-made-by-IBM-about-Watson-and-AI"&gt;Fraudulent claims made by IBM about Watson and AI&lt;/a&gt;&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-04-06</title>
+ <id>microblog.html</id>
+ <updated>2018-04-06T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;blockquote&gt;
+&lt;ul&gt;
+&lt;li&gt;Access to computers—and anything that might teach you something about the way the world works—should be unlimited and total. Always yield to the Hands-On Imperative!&lt;/li&gt;
+&lt;li&gt;All information should be free.&lt;/li&gt;
+&lt;li&gt;Mistrust Authority—Promote Decentralization.&lt;/li&gt;
+&lt;li&gt;Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position.&lt;/li&gt;
+&lt;li&gt;You can create art and beauty on a computer.&lt;/li&gt;
+&lt;li&gt;Computers can change your life for the better.&lt;/li&gt;
+&lt;/ul&gt;
+&lt;/blockquote&gt;
+&lt;p&gt;&lt;a href="https://en.wikipedia.org/wiki/Hacker_ethic"&gt;The Hacker Ethic&lt;/a&gt;, &lt;a href="https://en.wikipedia.org/wiki/Hackers:_Heroes_of_the_Computer_Revolution"&gt;Hackers: Heroes of the Computer Revolution&lt;/a&gt;, by Steven Levy&lt;/p&gt;
+</content>
+ </entry>
+ <entry xml:base="https://ypei.me/microblog-feed.xml">
+ <title type="text">2018-03-23</title>
+ <id>microblog.html</id>
+ <updated>2018-03-23T00:00:00Z</updated>
+ <link href="microblog.html" />
+ <author>
+ <name>Yuchen Pei</name>
+ </author>
+ <content type="html">&lt;blockquote&gt;
+&lt;p&gt;“Static site generators seem like music databases, in that everyone eventually writes their own crappy one that just barely scratches the itch they had (and I’m no exception).”&lt;/p&gt;
+&lt;/blockquote&gt;
+&lt;p&gt;&lt;a href="https://news.ycombinator.com/item?id=7747651"&gt;__david__@hackernews&lt;/a&gt;&lt;/p&gt;
+&lt;p&gt;So did I.&lt;/p&gt;
+</content>
+ </entry>
+</feed>
diff --git a/site-from-md/microblog.html b/site-from-md/microblog.html
new file mode 100644
index 0000000..7bb7da6
--- /dev/null
+++ b/site-from-md/microblog.html
@@ -0,0 +1,341 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>Yuchen's Microblog</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="microblog.html">Yuchen's Microblog</a>
+ </span>
+ <nav>
+ <a href="index.html">About</a><a href="microblog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <span id=decss-haiku><p><a href="#decss-haiku">2019-03-16</a></p></span>
+ <blockquote>
+<pre><code>Muse! When we learned to
+count, little did we know all
+the things we could do
+
+some day by shuffling
+those numbers: Pythagoras
+said &quot;All is number&quot;
+
+long before he saw
+computers and their effects,
+or what they could do
+
+by computation,
+naive and mechanical
+fast arithmetic.
+
+It changed the world, it
+changed our consciousness and lives
+to have such fast math
+
+available to
+us and anyone who cared
+to learn programming.
+
+Now help me, Muse, for
+I wish to tell a piece of
+controversial math,
+
+for which the lawyers
+of DVD CCA
+don&#39;t forbear to sue:
+
+that they alone should
+know or have the right to teach
+these skills and these rules.
+
+(Do they understand
+the content, or is it just
+the effects they see?)
+
+And all mathematics
+is full of stories (just read
+Eric Temple Bell);
+
+and CSS is
+no exception to this rule.
+Sing, Muse, decryption
+
+once secret, as all
+knowledge, once unknown: how to
+decrypt DVDs.</code></pre>
+</blockquote>
+<p>Seth Schoen, <a href="https://en.wikipedia.org/wiki/DeCSS_haiku">DeCSS haiku</a></p>
+
+</div>
+<div class="bodyitem">
+ <span id=learning-undecidable><p><a href="#learning-undecidable">2019-01-27</a></p></span>
+ <p>My take on the <a href="https://www.nature.com/articles/s42256-018-0002-3">Nature paper <em>Learning can be undecidable</em></a>:</p>
+<p>Fantastic article, very clearly written.</p>
+<p>So it reduces a kind of learnability called estimating the maximum (EMX) to a question about the cardinality of the real numbers (the continuum hypothesis), which is undecidable within the standard axioms.</p>
+<p>When it comes to the relation between EMX and the rest of the machine learning framework, the article mentions that EMX belongs to “extensions of PAC learnability include Vapnik’s statistical learning setting and the equivalent general learning setting by Shalev-Shwartz and colleagues” (I have no idea what these two things are), but it does not say whether EMX is representative of or reduces to common learning tasks. So it is not clear whether its undecidability applies to ML at large.</p>
+<p>Another condition in the main theorem is the union bounded closure assumption. It seems a reasonable property of a family of sets, but then again I wonder how that translates to learning.</p>
+<p>The article says “By now, we know of quite a few independence [from mathematical axioms] results, mostly for set theoretic questions like the continuum hypothesis, but also for results in algebra, analysis, infinite combinatorics and more. Machine learning, so far, has escaped this fate.” But the description of EMX learnability makes it look more like a classical mathematical / theoretical computer science problem than like machine learning.</p>
+<p>An insightful conclusion: “How come learnability can neither be proved nor refuted? A closer look reveals that the source of the problem is in defining learnability as the existence of a learning function rather than the existence of a learning algorithm. In contrast with the existence of algorithms, the existence of functions over infinite domains is a (logically) subtle issue.”</p>
+<p>In relation to practical problems, it uses an example of ad targeting. However, a lot is lost in translation from the main theorem to this ad example.</p>
+<p>The EMX problem states: given a domain X, a distribution P over X which is unknown, some samples from P, and a family of subsets of X called F, find A in F that approximately maximises P(A).</p>
+<p>The undecidability rests on X being the continuous [0, 1] interval, and from the insight, we know the problem comes from the cardinality of subsets of the [0, 1] interval, which is “logically subtle”.</p>
+<p>In the ad problem, the domain X is all potential visitors, which is finite because there is a finite number of people in the world. In this case P is a categorical distribution over 1..n, where n is the population of the world. One can estimate the parameters of a categorical distribution well by asking for a sufficiently large number of samples and computing the empirical distribution. Let’s call the estimated distribution Q. One can then choose from F (also finite) the set A that maximises Q(A), which will be a solution to EMX.</p>
+<p>In other words, the theorem states: EMX is undecidable because not all EMX instances are decidable; there are some nasty ones due to infinities. That does not mean no EMX instance is decidable. And I think the ad instance is decidable. Is there a learning task that actually corresponds to an undecidable EMX instance? I don’t know, but I will not believe the result of this paper is useful until I see one.</p>
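<p>The finite procedure above can be sketched in a few lines (purely illustrative: the representation of F, the function name, and the interface are made up here, not taken from the paper):</p>

```cpp
#include <cassert>
#include <map>
#include <set>
#include <vector>

// Illustrative finite EMX instance: estimate the categorical
// distribution empirically from samples, then return the index of
// the set A in F with the largest empirical mass Q(A).
int best_set(const std::vector<int>& samples,
             const std::vector<std::set<int>>& F) {
    std::map<int, int> counts;  // empirical (unnormalised) distribution
    for (int x : samples) counts[x]++;
    int best = 0, best_mass = -1;
    for (int i = 0; i < (int)F.size(); ++i) {
        int mass = 0;  // n * Q(A) for A = F[i]
        for (int x : F[i])
            if (counts.count(x)) mass += counts[x];
        if (mass > best_mass) { best_mass = mass; best = i; }
    }
    return best;
}
```

<p>Everything here is finite, so the procedure trivially terminates; the subtlety in the paper only appears over infinite domains.</p>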
+<p>h/t Reynaldo Boulogne</p>
+
+</div>
+<div class="bodyitem">
+ <span id=gavin-belson><p><a href="#gavin-belson">2018-12-11</a></p></span>
+ <blockquote>
+<p>I don’t know about you people, but I don’t want to live in a world where someone else makes the world a better place better than we do.</p>
+</blockquote>
+<p>Gavin Belson, Silicon Valley S2E1.</p>
+<p>I came across this quote in <a href="https://slate.com/business/2018/12/facebook-emails-lawsuit-embarrassing-mark-zuckerberg.html">a Slate post about Facebook</a>.</p>
+
+</div>
+<div class="bodyitem">
+ <span id=margins><p><a href="#margins">2018-10-05</a></p></span>
+ <p>With Fermat’s Library’s new tool <a href="https://fermatslibrary.com/margins">margins</a>, you can host your own journal club.</p>
+
+</div>
+<div class="bodyitem">
+ <span id=rnn-turing><p><a href="#rnn-turing">2018-09-18</a></p></span>
+  <p>Just some non-rigorous guess / thought: Feedforward networks are like combinatorial logic, and recurrent networks are like sequential logic (e.g. the data flip-flop is like the feedback connection in an RNN). Since NAND + combinatorial logic + sequential logic = a von Neumann machine, which is an approximation of the Turing machine, it is not surprising that an RNN (with feedforward networks) is Turing complete (assuming that neural networks can learn the NAND gate).</p>
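<p>Incidentally, the NAND assumption is easy to meet by construction, if not by learning: a single perceptron with weights -2, -2 and bias 3 computes NAND (this is the hand-picked example from chapter 1 of Nielsen's book). A quick sketch:</p>

```cpp
#include <cassert>

// A single perceptron computing NAND: weights -2, -2 and bias 3
// (the hand-picked example from chapter 1 of Nielsen's book).
// Output is 1 unless both inputs are 1 -- the NAND truth table.
int nand_perceptron(int x1, int x2) {
    int z = -2 * x1 - 2 * x2 + 3;  // weighted sum plus bias
    return z > 0 ? 1 : 0;          // step activation
}
```

<p>All four rows of the NAND truth table check out, so a network of such units can in principle express any boolean circuit.</p>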
+
+</div>
+<div class="bodyitem">
+ <span id=zitierkartell><p><a href="#zitierkartell">2018-09-07</a></p></span>
+ <p><a href="https://academia.stackexchange.com/questions/116489/counter-strategy-against-group-that-repeatedly-does-strategic-self-citations-and">Counter strategy against group that repeatedly does strategic self-citations and ignores other relevant research</a></p>
+
+</div>
+<div class="bodyitem">
+ <span id=short-science><p><a href="#short-science">2018-09-05</a></p></span>
+ <blockquote>
+<ul>
+<li>ShortScience.org is a platform for post-publication discussion aiming to improve accessibility and reproducibility of research ideas.</li>
+<li>The website has over 800 summaries, mostly in machine learning, written by the community and organized by paper, conference, and year.</li>
+<li>Reading summaries of papers is useful to obtain the perspective and insight of another reader, why they liked or disliked it, and their attempt to demystify complicated sections.</li>
+<li>Also, writing summaries is a good exercise to understand the content of a paper because you are forced to challenge your assumptions when explaining it.</li>
+<li>Finally, you can keep up to date with the flood of research by reading the latest summaries on our Twitter and Facebook pages.</li>
+</ul>
+</blockquote>
+<p><a href="https://shortscience.org">ShortScience.org</a></p>
+
+</div>
+<div class="bodyitem">
+ <span id=darknet-diaries><p><a href="#darknet-diaries">2018-08-13</a></p></span>
+ <p><a href="https://darknetdiaries.com">Darknet Diaries</a> is a cool podcast. According to its about page it covers “true stories from the dark side of the Internet. Stories about hackers, defenders, threats, malware, botnets, breaches, and privacy.”</p>
+
+</div>
+<div class="bodyitem">
+ <span id=coursera-basic-income><p><a href="#coursera-basic-income">2018-06-20</a></p></span>
+ <p>Coursera is having <a href="https://www.coursera.org/learn/exploring-basic-income-in-a-changing-economy">a Teach-Out on Basic Income</a>.</p>
+
+</div>
+<div class="bodyitem">
+ <span id=pun-generator><p><a href="#pun-generator">2018-06-19</a></p></span>
+ <p><a href="https://en.wikipedia.org/wiki/Computational_humor#Pun_generation">Pun generators exist</a>.</p>
+
+</div>
+<div class="bodyitem">
+ <span id=hackers-excerpt><p><a href="#hackers-excerpt">2018-06-15</a></p></span>
+ <blockquote>
+<p>But as more nontechnical people bought computers, the things that impressed hackers were not as essential. While the programs themselves had to maintain a certain standard of quality, it was quite possible that the most exacting standards—those applied by a hacker who wanted to add one more feature, or wouldn’t let go of a project until it was demonstrably faster than anything else around—were probably counterproductive. What seemed more important was marketing. There were plenty of brilliant programs which no one knew about. Sometimes hackers would write programs and put them in the public domain, give them away as easily as John Harris had lent his early copy of Jawbreaker to the guys at the Fresno computer store. But rarely would people ask for public domain programs by name: they wanted the ones they saw advertised and discussed in magazines, demonstrated in computer stores. It was not so important to have amazingly clever algorithms. Users would put up with more commonplace ones.</p>
+<p>The Hacker Ethic, of course, held that every program should be as good as you could make it (or better), infinitely flexible, admired for its brilliance of concept and execution, and designed to extend the user’s powers. Selling computer programs like toothpaste was heresy. But it was happening. Consider the prescription for success offered by one of a panel of high-tech venture capitalists, gathered at a 1982 software show: “I can summarize what it takes in three words: marketing, marketing, marketing.” When computers are sold like toasters, programs will be sold like toothpaste. The Hacker Ethic notwithstanding.</p>
+</blockquote>
+<p><a href="http://www.stevenlevy.com/index.php/books/hackers">Hackers: Heroes of the Computer Revolution</a>, by Steven Levy.</p>
+
+</div>
+<div class="bodyitem">
+ <span id=catalan-overflow><p><a href="#catalan-overflow">2018-06-11</a></p></span>
+ <p>To compute Catalan numbers without unnecessary overflow, use the recurrence formula <span class="math inline">\(C_n = {4 n - 2 \over n + 1} C_{n - 1}\)</span>.</p>
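<p>A minimal sketch of this (the function name and types are my own): multiplying by (4n - 2) before dividing by (n + 1) keeps every intermediate value an exact integer, since (4n - 2) C_{n-1} = (n + 1) C_n, and avoids the much-earlier overflow of the factorial-based closed form.</p>

```cpp
#include <cassert>
#include <cstdint>

// Catalan numbers via the recurrence C_n = (4n - 2) / (n + 1) * C_{n-1}.
// Multiplying before dividing is safe: (4n - 2) * C_{n-1} is always
// divisible by (n + 1), because the result C_n is an integer.
uint64_t catalan(int n) {
    uint64_t c = 1;  // C_0 = 1
    for (int k = 1; k <= n; ++k)
        c = c * (4 * k - 2) / (k + 1);
    return c;
}
```

<p>catalan(10) returns 16796, matching the known value.</p>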
+
+</div>
+<div class="bodyitem">
+ <span id=boyer-moore><p><a href="#boyer-moore">2018-06-04</a></p></span>
+ <p>The <a href="https://en.wikipedia.org/wiki/Boyer–Moore_majority_vote_algorithm">Boyer-Moore algorithm for finding the majority of a sequence of elements</a> falls in the category of “very clever algorithms”.</p>
+<pre><code>// Boyer-Moore majority vote. Assumes a strict majority element
+// exists in the non-empty input; otherwise the result is arbitrary.
+int majorityElement(vector&lt;int&gt;&amp; xs) {
+  int count = 0;
+  int maj = xs[0];
+  for (auto x : xs) {
+    if (x == maj) count++;        // a vote for the current candidate
+    else if (count == 0) maj = x; // votes cancelled out: new candidate
+    else count--;                 // pair off a vote against the candidate
+  }
+  return maj;
+}</code></pre>
+
+</div>
+<div class="bodyitem">
+ <span id=how-to-learn-on-your-own><p><a href="#how-to-learn-on-your-own">2018-05-30</a></p></span>
+ <p>Roger Grosse’s post <a href="https://metacademy.org/roadmaps/rgrosse/learn_on_your_own">How to learn on your own (2015)</a> is an excellent modern guide on how to learn and research technical stuff (especially machine learning and maths) on one’s own.</p>
+
+</div>
+<div class="bodyitem">
+ <span id=2048-mdp><p><a href="#2048-mdp">2018-05-25</a></p></span>
+ <p><a href="http://jdlm.info/articles/2018/03/18/markov-decision-process-2048.html">This post</a> models 2048 as an MDP and solves it using policy iteration and backward induction.</p>
+
+</div>
+<div class="bodyitem">
+ <span id=ats><p><a href="#ats">2018-05-22</a></p></span>
+ <blockquote>
+<p>ATS (Applied Type System) is a programming language designed to unify programming with formal specification. ATS has support for combining theorem proving with practical programming through the use of advanced type systems. A past version of The Computer Language Benchmarks Game has demonstrated that the performance of ATS is comparable to that of the C and C++ programming languages. By using theorem proving and strict type checking, the compiler can detect and prove that its implemented functions are not susceptible to bugs such as division by zero, memory leaks, buffer overflow, and other forms of memory corruption by verifying pointer arithmetic and reference counting before the program compiles. Additionally, by using the integrated theorem-proving system of ATS (ATS/LF), the programmer may make use of static constructs that are intertwined with the operative code to prove that a function attains its specification.</p>
+</blockquote>
+<p><a href="https://en.wikipedia.org/wiki/ATS_(programming_language)">Wikipedia entry on ATS</a></p>
+
+</div>
+<div class="bodyitem">
+ <span id=bostoncalling><p><a href="#bostoncalling">2018-05-20</a></p></span>
+ <p>(5-second fame) I sent a picture of my kitchen sink to BBC and got mentioned in the <a href="https://www.bbc.co.uk/programmes/w3cswg8c">latest Boston Calling episode</a> (listen at 25:54).</p>
+
+</div>
+<div class="bodyitem">
+ <span id=colah-blog><p><a href="#colah-blog">2018-05-18</a></p></span>
+ <p><a href="https://colah.github.io/">colah’s blog</a> has a cool feature that allows you to comment on any paragraph of a blog post. Here’s an <a href="https://colah.github.io/posts/2015-08-Understanding-LSTMs/">example</a>. If it is doable on a static site hosted on Github pages, I suppose it shouldn’t be too hard to implement. This also seems to work more seamlessly than <a href="https://fermatslibrary.com/">Fermat’s Library</a>, because the latter has to embed pdfs in webpages. Now fantasy time: imagine that one day arXiv shows html versions of papers (through author uploading or conversion from TeX) with this feature.</p>
+
+</div>
+<div class="bodyitem">
+ <span id=random-forests><p><a href="#random-forests">2018-05-15</a></p></span>
+  <h3 id="notes-on-random-froests">Notes on random forests</h3>
+<p><a href="https://lagunita.stanford.edu/courses/HumanitiesSciences/StatLearning/Winter2016/info">Stanford Lagunita’s statistical learning course</a> has some excellent lectures on random forests. It starts with explanations of decision trees, followed by bagged trees and random forests, and ends with boosting. From these lectures it seems that:</p>
+<ol type="1">
+<li>The term “predictors” in statistical learning = “features” in machine learning.</li>
+<li>The main idea of random forests, dropping predictors for individual trees and aggregating by majority or average, is the same as the idea of dropout in neural networks, where a proportion of the neurons in the hidden layers is dropped temporarily during different minibatches of training, effectively averaging over an ensemble of subnetworks. Both tricks are used as regularisations, i.e. to reduce the variance. The only difference is: in random forests, all but roughly the square root of the total number of features are dropped, whereas the dropout ratio in neural networks is usually a half.</li>
+</ol>
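<p>The per-tree feature dropping in point 2 can be sketched as follows (an illustrative sketch with names of my own, not any library's API):</p>

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>
#include <numeric>
#include <random>
#include <vector>

// For each tree in a random forest, keep only a random subset of
// about sqrt(p) of the p features, dropping the rest for that tree.
std::vector<int> sample_features(int p, std::mt19937& rng) {
    int keep = (int)std::round(std::sqrt((double)p));
    std::vector<int> idx(p);
    std::iota(idx.begin(), idx.end(), 0);      // feature indices 0..p-1
    std::shuffle(idx.begin(), idx.end(), rng); // random permutation
    idx.resize(keep);                          // drop all but ~sqrt(p)
    return idx;
}
```

<p>With p = 16 features this keeps 4 of them per tree, mirroring the all-but-square-root dropping described above.</p>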
+<p>By the way, here’s a comparison between statistical learning and machine learning from the slides of the Statistical Learning course:</p>
+<p><a href="../assets/resources/sl-vs-ml.png"><img src="../assets/resources/sl-vs-ml.png" alt="SL vs ML" style="width:38em" /></a></p>
+
+</div>
+<div class="bodyitem">
+ <span id=open-review-net><p><a href="#open-review-net">2018-05-14</a></p></span>
+ <h3 id="open-peer-review">Open peer review</h3>
+<p>Open peer review means a peer review process where communications, e.g. comments and responses, are public.</p>
+<p>Like <a href="https://scipost.org/">SciPost</a> mentioned in <a href="/posts/2018-04-10-update-open-research.html">my post</a>, <a href="https://openreview.net">OpenReview.net</a> is an example of open peer review in research. It looks like their focus is machine learning. Their <a href="https://openreview.net/about">about page</a> states their mission, and here’s <a href="https://openreview.net/group?id=ICLR.cc/2018/Conference">an example</a> where you can click on each entry to see what it is like. We definitely need this in the maths research community.</p>
+
+</div>
+<div class="bodyitem">
+ <span id=rnn-fsm><p><a href="#rnn-fsm">2018-05-11</a></p></span>
+ <h3 id="some-notes-on-rnn-fsm-fa-tm-and-utm">Some notes on RNN, FSM / FA, TM and UTM</h3>
+<p>Related to <a href="#neural-turing-machine">a previous micropost</a>.</p>
+<p><a href="http://www.cs.toronto.edu/~rgrosse/csc321/lec9.pdf">These slides from Toronto</a> are a nice introduction to RNNs (recurrent neural networks) from a computational point of view. They state that an RNN can simulate any FSM (finite state machine, a.k.a. finite automaton, abbr. FA), with a toy example computing the parity of a binary string.</p>
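<p>For concreteness, here is a hand-wired sketch of that parity FSM as a two-unit threshold network (weights chosen by hand, not learned; all names are mine):</p>

```cpp
#include <cassert>
#include <vector>

// A hand-wired "RNN" of threshold units simulating the two-state
// parity FSM: per step, an OR unit and an AND unit read the input
// bit and the previous state; OR minus AND is XOR, the new state.
int parity(const std::vector<int>& bits) {
    int h = 0;  // hidden state: parity of the bits seen so far
    for (int x : bits) {
        int or_gate  = (x + h >= 1) ? 1 : 0;  // threshold at 0.5
        int and_gate = (x + h >= 2) ? 1 : 0;  // threshold at 1.5
        h = or_gate - and_gate;               // XOR(x, h)
    }
    return h;
}
```

<p>It returns 1 exactly when the number of ones is odd, i.e. it tracks the two FSM states with a fixed-size hidden state.</p>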
+<p><a href="http://www.deeplearningbook.org/contents/rnn.html">Goodfellow et al.’s book</a> (see pages 372 and 374) goes one step further, stating that an RNN with a hidden-to-hidden layer can simulate Turing machines, and not only that, but also the <em>universal</em> Turing machine, abbr. UTM (the book references <a href="https://www.sciencedirect.com/science/article/pii/S0022000085710136">Siegelmann-Sontag</a>), a property not shared by the weaker network where the hidden-to-hidden layer is replaced by an output-to-hidden layer (page 376).</p>
+<p>By the way, the RNN with a hidden-to-hidden layer has the same architecture as the so-called linear dynamical system mentioned in <a href="https://www.coursera.org/learn/neural-networks/lecture/Fpa7y/modeling-sequences-a-brief-overview">Hinton’s video</a>.</p>
+<p>From what I have learned, the universality of RNNs and that of feedforward networks are therefore due to different arguments, the former coming from Turing machines and the latter from an analytical view of approximation by step functions.</p>
+
+</div>
+<div class="bodyitem">
+ <span id=math-writing-decoupling><p><a href="#math-writing-decoupling">2018-05-10</a></p></span>
+ <h3 id="writing-readable-mathematics-like-writing-an-operating-system">Writing readable mathematics like writing an operating system</h3>
+<p>One way to write readable mathematics is to decouple concepts. One idea is the following template. First write a toy example with all the important components present in this example, then analyse each component individually and elaborate how (perhaps more complex) variations of the component can extend the toy example and induce more complex or powerful versions of the toy example. Through such incremental development, one should be able to arrive at any result in cutting edge research after a pleasant journey.</p>
+<p>It’s a bit like the UNIX philosophy, where you have a basic system of modules like IO, memory management, graphics etc, and modify / improve each module individually (H/t <a href="http://nand2tetris.org/">NAND2Tetris</a>).</p>
+<p>The book <a href="http://neuralnetworksanddeeplearning.com/">Neural networks and deep learning</a> by Michael Nielsen is an example of such an approach. It begins the journey with a very simple neural net with one hidden layer, no regularisation, and sigmoid activations. It then analyses each component, including the cost functions, the backpropagation algorithm, the activation functions, regularisation and the overall architecture (from fully connected to CNN), individually, and improves the toy example incrementally. Over the course of the book, the accuracy on the MNIST example grows from 95.42% to 99.67%.</p>
+
+</div>
+<div class="bodyitem">
+ <span id=neural-nets-activation><p><a href="#neural-nets-activation">2018-05-09</a></p></span>
+ <blockquote>
+<p>What makes the rectified linear activation function better than the sigmoid or tanh functions? At present, we have a poor understanding of the answer to this question. Indeed, rectified linear units have only begun to be widely used in the past few years. The reason for that recent adoption is empirical: a few people tried rectified linear units, often on the basis of hunches or heuristic arguments. They got good results classifying benchmark data sets, and the practice has spread. In an ideal world we’d have a theory telling us which activation function to pick for which application. But at present we’re a long way from such a world. I should not be at all surprised if further major improvements can be obtained by an even better choice of activation function. And I also expect that in coming decades a powerful theory of activation functions will be developed. Today, we still have to rely on poorly understood rules of thumb and experience.</p>
+</blockquote>
+<p>Michael Nielsen, <a href="http://neuralnetworksanddeeplearning.com/chap6.html#convolutional_neural_networks_in_practice">Neural networks and deep learning</a></p>
+
+</div>
+<div class="bodyitem">
+ <span id=neural-turing-machine><p><a href="#neural-turing-machine">2018-05-09</a></p></span>
+ <blockquote>
+<p>One way RNNs are currently being used is to connect neural networks more closely to traditional ways of thinking about algorithms, ways of thinking based on concepts such as Turing machines and (conventional) programming languages. <a href="https://arxiv.org/abs/1410.4615">A 2014 paper</a> developed an RNN which could take as input a character-by-character description of a (very, very simple!) Python program, and use that description to predict the output. Informally, the network is learning to “understand” certain Python programs. <a href="https://arxiv.org/abs/1410.5401">A second paper, also from 2014</a>, used RNNs as a starting point to develop what they called a neural Turing machine (NTM). This is a universal computer whose entire structure can be trained using gradient descent. They trained their NTM to infer algorithms for several simple problems, such as sorting and copying.</p>
+<p>As it stands, these are extremely simple toy models. Learning to execute the Python program <code>print(398345+42598)</code> doesn’t make a network into a full-fledged Python interpreter! It’s not clear how much further it will be possible to push the ideas. Still, the results are intriguing. Historically, neural networks have done well at pattern recognition problems where conventional algorithmic approaches have trouble. Vice versa, conventional algorithmic approaches are good at solving problems that neural nets aren’t so good at. No-one today implements a web server or a database program using a neural network! It’d be great to develop unified models that integrate the strengths of both neural networks and more traditional approaches to algorithms. RNNs and ideas inspired by RNNs may help us do that.</p>
+</blockquote>
+<p>Michael Nielsen, <a href="http://neuralnetworksanddeeplearning.com/chap6.html#other_approaches_to_deep_neural_nets">Neural networks and deep learning</a></p>
+
+</div>
+<div class="bodyitem">
+ <span id=nlp-arxiv><p><a href="#nlp-arxiv">2018-05-08</a></p></span>
+  <p>Primer Science is a tool by a startup called Primer that uses NLP to summarize content on arXiv (though not single papers, yet). A developer of this tool predicts in <a href="https://twimlai.com/twiml-talk-136-taming-arxiv-w-natural-language-processing-with-john-bohannon/#">an interview</a> that progress on AI’s ability to extract meaning from AI research papers will be the biggest accelerant of AI research.</p>
+
+</div>
+<div class="bodyitem">
+ <span id=neural-nets-regularization><p><a href="#neural-nets-regularization">2018-05-08</a></p></span>
+ <blockquote>
+<p>no-one has yet developed an entirely convincing theoretical explanation for why regularization helps networks generalize. Indeed, researchers continue to write papers where they try different approaches to regularization, compare them to see which works better, and attempt to understand why different approaches work better or worse. And so you can view regularization as something of a kludge. While it often helps, we don’t have an entirely satisfactory systematic understanding of what’s going on, merely incomplete heuristics and rules of thumb.</p>
+<p>There’s a deeper set of issues here, issues which go to the heart of science. It’s the question of how we generalize. Regularization may give us a computational magic wand that helps our networks generalize better, but it doesn’t give us a principled understanding of how generalization works, nor of what the best approach is.</p>
+</blockquote>
+<p>Michael Nielsen, <a href="http://neuralnetworksanddeeplearning.com/chap3.html#why_does_regularization_help_reduce_overfitting">Neural networks and deep learning</a></p>
+
+</div>
+<div class="bodyitem">
+ <span id=sql-injection-video><p><a href="#sql-injection-video">2018-05-08</a></p></span>
+ <p>Computerphile has some brilliant educational videos on computer science, like <a href="https://www.youtube.com/watch?v=ciNHn38EyRc">a demo of SQL injection</a>, <a href="https://www.youtube.com/watch?v=eis11j_iGMs">a toy example of the lambda calculus</a>, and <a href="https://www.youtube.com/watch?v=9T8A89jgeTI">explaining the Y combinator</a>.</p>
+
+</div>
+<div class="bodyitem">
+ <span id=learning-knowledge-graph-reddit-journal-club><p><a href="#learning-knowledge-graph-reddit-journal-club">2018-05-07</a></p></span>
+ <h3 id="learning-via-knowledge-graph-and-reddit-journal-clubs">Learning via knowledge graph and reddit journal clubs</h3>
+<p>It is a natural idea to look for ways to learn things like going through a skill tree in a computer RPG.</p>
+<p>For example I made a <a href="https://ypei.me/posts/2015-04-02-juggling-skill-tree.html">DAG for juggling</a>.</p>
+<p>Websites like <a href="https://knowen.org">Knowen</a> and <a href="https://metacademy.org">Metacademy</a> explore this idea with added flavour of open collaboration.</p>
+<p>The design of Metacademy looks quite promising. It also has a nice tagline: “your package manager for knowledge”.</p>
+<p>There are so so many tools to assist learning / research / knowledge sharing today, and we should keep experimenting, in the hope that eventually one of them will scale.</p>
+<p>On another note, I often complain about the lack of a place to discuss math research online, but today I found on Reddit some journal clubs on machine learning: <a href="https://www.reddit.com/r/MachineLearning/comments/8aluhs/d_machine_learning_wayr_what_are_you_reading_week/">1</a>, <a href="https://www.reddit.com/r/MachineLearning/comments/8elmd8/d_anyone_having_trouble_reading_a_particular/">2</a>. If only we had this for maths. On the other hand r/math does have some interesting recurring threads as well: <a href="https://www.reddit.com/r/math/wiki/everythingaboutx">Everything about X</a> and <a href="https://www.reddit.com/r/math/search?q=what+are+you+working+on?+author:automoderator+&amp;sort=new&amp;restrict_sr=on&amp;t=all">What Are You Working On?</a>. Hopefully these threads can last for years to come.</p>
+
+</div>
+<div class="bodyitem">
+ <span id=simple-solution-lack-of-math-rendering><p><a href="#simple-solution-lack-of-math-rendering">2018-05-02</a></p></span>
+ <h3 id="pastebin-for-the-win">Pastebin for the win</h3>
+<p>The lack of maths rendering in major online communication platforms like instant messaging, email or Github has been a minor obsession of mine for quite a while, as I saw it as a big factor preventing people from talking more maths online. But today I realised this is totally a non-issue. Just do what people on IRC have been doing since the inception of the universe: use a (latex) pastebin.</p>
+
+</div>
+<div class="bodyitem">
+ <span id=neural-networks-programming-paradigm><p><a href="#neural-networks-programming-paradigm">2018-05-01</a></p></span>
+ <blockquote>
+<p>Neural networks are one of the most beautiful programming paradigms ever invented. In the conventional approach to programming, we tell the computer what to do, breaking big problems up into many small, precisely defined tasks that the computer can easily perform. By contrast, in a neural network we don’t tell the computer how to solve our problem. Instead, it learns from observational data, figuring out its own solution to the problem at hand.</p>
+</blockquote>
+<p>Michael Nielsen - <a href="http://neuralnetworksanddeeplearning.com/about.html">What this book (Neural Networks and Deep Learning) is about</a></p>
+<p>Unrelated to the quote, note that Nielsen’s book is licensed under <a href="https://creativecommons.org/licenses/by-nc/3.0/deed.en_GB">CC BY-NC</a>, so one can build on it and redistribute non-commercially.</p>
+
+</div>
+<div class="bodyitem">
+ <span id=google-search-not-ai><p><a href="#google-search-not-ai">2018-04-30</a></p></span>
+ <blockquote>
+<p>But, users have learned to accommodate to Google not the other way around. We know what kinds of things we can type into Google and what we can’t and we keep our searches to things that Google is likely to help with. We know we are looking for texts and not answers to start a conversation with an entity that knows what we really need to talk about. People learn from conversation and Google can’t have one. It can pretend to have one using Siri but really those conversations tend to get tiresome when you are past asking about where to eat.</p>
+</blockquote>
+<p>Roger Schank - <a href="http://www.rogerschank.com/fraudulent-claims-made-by-IBM-about-Watson-and-AI">Fraudulent claims made by IBM about Watson and AI</a></p>
+
+</div>
+<div class="bodyitem">
+ <span id=hacker-ethics><p><a href="#hacker-ethics">2018-04-06</a></p></span>
+ <blockquote>
+<ul>
+<li>Access to computers—and anything that might teach you something about the way the world works—should be unlimited and total. Always yield to the Hands-On Imperative!</li>
+<li>All information should be free.</li>
+<li>Mistrust Authority—Promote Decentralization.</li>
+<li>Hackers should be judged by their hacking, not bogus criteria such as degrees, age, race, or position.</li>
+<li>You can create art and beauty on a computer.</li>
+<li>Computers can change your life for the better.</li>
+</ul>
+</blockquote>
+<p><a href="https://en.wikipedia.org/wiki/Hacker_ethic">The Hacker Ethic</a>, <a href="https://en.wikipedia.org/wiki/Hackers:_Heroes_of_the_Computer_Revolution">Hackers: Heroes of the Computer Revolution</a>, by Steven Levy</p>
+
+</div>
+<div class="bodyitem">
+ <span id=static-site-generator><p><a href="#static-site-generator">2018-03-23</a></p></span>
+ <blockquote>
+<p>“Static site generators seem like music databases, in that everyone eventually writes their own crappy one that just barely scratches the itch they had (and I’m no exception).”</p>
+</blockquote>
+<p><a href="https://news.ycombinator.com/item?id=7747651">__david__@hackernews</a></p>
+<p>So did I.</p>
+
+</div>
+
+ </div>
+
+ </body>
+</html>
diff --git a/site-from-md/notations.html b/site-from-md/notations.html
new file mode 100644
index 0000000..753cff7
--- /dev/null
+++ b/site-from-md/notations.html
@@ -0,0 +1,67 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>List of Notations</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="index.html">Yuchen Pei</a>
+ </span>
+ <nav>
+ <a href="blog.html">Blog</a><a href="microblog.html">Microblog</a><a href="links.html">Links</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/MathJax.js?config=TeX-AMS_CHTML-full" type="text/javascript"></script>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<p>Here I list meanings of notations that may have not been explained elsewhere.</p>
+<ul>
+<li><span class="math inline">\(\text{ty}\)</span>: type. Given a word <span class="math inline">\(w \in [n]^\ell\)</span>, <span class="math inline">\(\text{ty} w = (m_1, m_2, ..., m_n)\)</span> where <span class="math inline">\(m_i\)</span> is the number of <span class="math inline">\(i\)</span>'s in <span class="math inline">\(w\)</span>. For example <span class="math inline">\(\text{ty} (1, 2, 2, 1, 4, 2) = (2, 3, 0, 1)\)</span>. The definition of <span class="math inline">\(\text{ty} T\)</span> for a tableau <span class="math inline">\(T\)</span> is similar.</li>
+<li><span class="math inline">\([n]\)</span>: for <span class="math inline">\(n \in \mathbb N_{&gt;0}\)</span>, <span class="math inline">\([n]\)</span> stands for the set <span class="math inline">\(\{1, 2, ..., n\}\)</span>.</li>
+<li><span class="math inline">\(i : j\)</span>: for <span class="math inline">\(i, j \in \mathbb Z\)</span>, <span class="math inline">\(i : j\)</span> stands for the set <span class="math inline">\(\{i, i + 1, ..., j\}\)</span>, or the sequence <span class="math inline">\((i, i + 1, ..., j)\)</span>, depending on the context.</li>
+<li><span class="math inline">\(k = i : j\)</span>: means <span class="math inline">\(k\)</span> iterates over <span class="math inline">\(i\)</span>, <span class="math inline">\(i + 1\)</span>,..., <span class="math inline">\(j\)</span>. For example <span class="math inline">\(\sum_{k = 1 : n} a_k := \sum_{k = 1}^n a_k\)</span>.</li>
+<li><span class="math inline">\(x_{i : j}\)</span>: stands for the set <span class="math inline">\(\{x_k: k = i : j\}\)</span> or the sequence <span class="math inline">\((x_i, x_{i + 1}, ..., x_j)\)</span>, depending on the context. So are notations like <span class="math inline">\(f(i : j)\)</span>, <span class="math inline">\(y^{i : j}\)</span> etc.</li>
+<li><span class="math inline">\(\mathbb N\)</span>: the set of natural numbers, i.e. the nonnegative integers <span class="math inline">\(\{0, 1, 2,...\}\)</span>, whereas</li>
+<li><span class="math inline">\(\mathbb N_{&gt;0}\)</span> or <span class="math inline">\(\mathbb N^+\)</span>: the set of positive integers.</li>
+<li><span class="math inline">\(x^w\)</span>: when both <span class="math inline">\(x\)</span> and <span class="math inline">\(w\)</span> are tuples of objects, this means <span class="math inline">\(\prod_i x_{w_i}\)</span>. For example say <span class="math inline">\(w = (1, 2, 2, 1, 4, 2)\)</span>, and <span class="math inline">\(x = x_{1 : 7}\)</span>, then <span class="math inline">\(x^w = x_1^2 x_2^3 x_4\)</span>.</li>
+<li><span class="math inline">\(LHS\)</span>, LHS, <span class="math inline">\(RHS\)</span>, RHS: the left-hand side and right-hand side of a formula.</li>
+<li><span class="math inline">\(e_i\)</span>: the <span class="math inline">\(i\)</span>th standard basis vector of a vector space: <span class="math inline">\(e_i = (0, ..., 0, 1, 0, ...)\)</span>, where the <span class="math inline">\(1\)</span> is in the <span class="math inline">\(i\)</span>th entry and all other entries are <span class="math inline">\(0\)</span>; the sequence is finite or infinite depending on the dimension of the vector space.</li>
+<li><span class="math inline">\(1_{A}(x)\)</span> where <span class="math inline">\(A\)</span> is a set: an indicator function, which evaluates to <span class="math inline">\(1\)</span> if <span class="math inline">\(x \in A\)</span>, and <span class="math inline">\(0\)</span> otherwise.</li>
+<li><span class="math inline">\(1_{p}\)</span>: an indicator function, which evaluates to <span class="math inline">\(1\)</span> if the predicate <span class="math inline">\(p\)</span> is true and <span class="math inline">\(0\)</span> otherwise. Example: <span class="math inline">\(1_{x \in A}\)</span>, same as <span class="math inline">\(1_A(x)\)</span>.</li>
+<li><span class="math inline">\(\xi \sim p\)</span>: the random variable <span class="math inline">\(\xi\)</span> is distributed according to the probability density function / probability mass function / probability measure <span class="math inline">\(p\)</span>.</li>
+<li><span class="math inline">\(\xi \overset{d}{=} \eta\)</span>: the random variables <span class="math inline">\(\xi\)</span> and <span class="math inline">\(\eta\)</span> have the same distribution.</li>
+<li><span class="math inline">\(\mathbb E f(\xi)\)</span>: expectation of <span class="math inline">\(f(\xi)\)</span>.</li>
+<li><span class="math inline">\(\mathbb P(A)\)</span>: probability of event <span class="math inline">\(A\)</span>.</li>
+</ul>
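<p>For concreteness, the type notation and the monomial <span class="math inline">\(x^w\)</span> can be sketched in a few lines of Python (the function names <code>ty</code> and <code>monomial</code> are just for illustration):</p>

```python
from collections import Counter

def ty(w, n):
    """Type of a word w over [n] = {1, ..., n}: the tuple
    (m_1, ..., m_n) where m_i is the number of i's in w."""
    c = Counter(w)
    return tuple(c.get(i, 0) for i in range(1, n + 1))

def monomial(w, n):
    """Render x^w = prod_i x_{w_i} as a string; since the product only
    depends on multiplicities, x^w = prod_i x_i^{m_i} with
    (m_1, ..., m_n) = ty(w)."""
    parts = []
    for i, m in enumerate(ty(w, n), start=1):
        if m == 1:
            parts.append(f"x{i}")
        elif m > 1:
            parts.append(f"x{i}^{m}")
    return " ".join(parts)

print(ty((1, 2, 2, 1, 4, 2), 4))        # (2, 3, 0, 1)
print(monomial((1, 2, 2, 1, 4, 2), 4))  # x1^2 x2^3 x4
```

Both calls reproduce the examples given in the list above.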
+</body>
+</html>
+
+ </div>
+ </div>
+
+ </body>
+</html>
diff --git a/site-from-md/postlist.html b/site-from-md/postlist.html
new file mode 100644
index 0000000..1f016d0
--- /dev/null
+++ b/site-from-md/postlist.html
@@ -0,0 +1,82 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>All posts</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a>All posts</a><a href="index.html">About</a><a href="blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <ul class="postlist">
+ <li class="postlistitem">
+ <a href="posts/2019-03-14-great-but-manageable-expectations.html">Great but Manageable Expectations</a> - 2019-03-14
+</li>
+<li class="postlistitem">
+ <a href="posts/2019-03-13-a-tail-of-two-densities.html">A Tail of Two Densities</a> - 2019-03-13
+</li>
+<li class="postlistitem">
+ <a href="posts/2019-02-14-raise-your-elbo.html">Raise your ELBO</a> - 2019-02-14
+</li>
+<li class="postlistitem">
+ <a href="posts/2019-01-03-discriminant-analysis.html">Discriminant analysis</a> - 2019-01-03
+</li>
+<li class="postlistitem">
+ <a href="posts/2018-12-02-lime-shapley.html">Shapley, LIME and SHAP</a> - 2018-12-02
+</li>
+<li class="postlistitem">
+ <a href="posts/2018-06-03-automatic_differentiation.html">Automatic differentiation</a> - 2018-06-03
+</li>
+<li class="postlistitem">
+ <a href="posts/2018-04-10-update-open-research.html">Updates on open research</a> - 2018-04-29
+</li>
+<li class="postlistitem">
+ <a href="posts/2017-08-07-mathematical_bazaar.html">The Mathematical Bazaar</a> - 2017-08-07
+</li>
+<li class="postlistitem">
+ <a href="posts/2017-04-25-open_research_toywiki.html">Open mathematical research and launching toywiki</a> - 2017-04-25
+</li>
+<li class="postlistitem">
+ <a href="posts/2016-10-13-q-robinson-schensted-knuth-polymer.html">A \(q\)-Robinson-Schensted-Knuth algorithm and a \(q\)-polymer</a> - 2016-10-13
+</li>
+<li class="postlistitem">
+ <a href="posts/2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.html">AMS review of 'Double Macdonald polynomials as the stable limit of Macdonald superpolynomials' by Blondeau-Fournier, Lapointe and Mathieu</a> - 2015-07-15
+</li>
+<li class="postlistitem">
+ <a href="posts/2015-07-01-causal-quantum-product-levy-area.html">On a causal quantum double product integral related to Lévy stochastic area.</a> - 2015-07-01
+</li>
+<li class="postlistitem">
+ <a href="posts/2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.html">AMS review of 'Infinite binary words containing repetitions of odd period' by Badkobeh and Crochemore</a> - 2015-05-30
+</li>
+<li class="postlistitem">
+ <a href="posts/2015-04-02-juggling-skill-tree.html">jst</a> - 2015-04-02
+</li>
+<li class="postlistitem">
+ <a href="posts/2015-04-01-unitary-double-products.html">Unitary causal quantum stochastic double products as universal interactions I</a> - 2015-04-01
+</li>
+<li class="postlistitem">
+ <a href="posts/2015-01-20-weighted-interpretation-super-catalan-numbers.html">AMS review of 'A weighted interpretation for the super Catalan numbers' by Allen and Gheorghiciuc</a> - 2015-01-20
+</li>
+<li class="postlistitem">
+ <a href="posts/2014-04-01-q-robinson-schensted-symmetry-paper.html">Symmetry property of \(q\)-weighted Robinson-Schensted algorithms and branching algorithms</a> - 2014-04-01
+</li>
+<li class="postlistitem">
+ <a href="posts/2013-06-01-q-robinson-schensted-paper.html">A \(q\)-weighted Robinson-Schensted algorithm</a> - 2013-06-01
+</li>
+
+ </ul>
+ </div>
+ </div>
+ </body>
+</html>
diff --git a/site-from-md/posts/2013-06-01-q-robinson-schensted-paper.html b/site-from-md/posts/2013-06-01-q-robinson-schensted-paper.html
new file mode 100644
index 0000000..2ccaf87
--- /dev/null
+++ b/site-from-md/posts/2013-06-01-q-robinson-schensted-paper.html
@@ -0,0 +1,52 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>A \(q\)-weighted Robinson-Schensted algorithm</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="../blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="../index.html">About</a><a href="../postlist.html">All posts</a><a href="../blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <h2> A \(q\)-weighted Robinson-Schensted algorithm </h2>
+ <p>Posted on 2013-06-01</p>
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<p>In <a href="https://projecteuclid.org/euclid.ejp/1465064320">this paper</a> with <a href="http://www.bristol.ac.uk/maths/people/neil-m-oconnell/">Neil</a> we construct a \(q\)-version of the Robinson-Schensted algorithm with column insertion. Like the <a href="http://en.wikipedia.org/wiki/Robinson–Schensted_correspondence">usual RS correspondence</a> with column insertion, this algorithm can take words as input. Unlike the usual RS algorithm, the output is a set of weighted pairs of semistandard and standard Young tableaux \((P,Q)\) with the same shape. The weights are rational functions of an indeterminate \(q\).</p>
+<p>If \(q\in[0,1]\), the algorithm can be considered a randomised RS algorithm, with 0 and 1 being two interesting cases. When \(q\to0\) it reduces to the usual RS algorithm, while when \(q\to1\), with proper scaling, it should converge to the directed random polymer model in <a href="http://arxiv.org/abs/0910.0069">(O’Connell 2012)</a>. When the input word \(w\) is a random walk:</p>
+<p>\begin{align*}\mathbb P(w=v)=\prod_{i=1}^na_{v_i},\qquad\sum_ja_j=1\end{align*}</p>
+<p>the shape of the output evolves as a Markov chain whose kernel is related to \(q\)-Whittaker functions, which are Macdonald functions at \(t=0\) up to a factor.</p>
+</body>
+</html>
+
+ </div>
+ </div>
+ </body>
+</html>
diff --git a/site-from-md/posts/2014-04-01-q-robinson-schensted-symmetry-paper.html b/site-from-md/posts/2014-04-01-q-robinson-schensted-symmetry-paper.html
new file mode 100644
index 0000000..215183b
--- /dev/null
+++ b/site-from-md/posts/2014-04-01-q-robinson-schensted-symmetry-paper.html
@@ -0,0 +1,53 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>Symmetry property of \(q\)-weighted Robinson-Schensted algorithms and branching algorithms</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="../blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="../index.html">About</a><a href="../postlist.html">All posts</a><a href="../blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <h2> Symmetry property of \(q\)-weighted Robinson-Schensted algorithms and branching algorithms </h2>
+ <p>Posted on 2014-04-01</p>
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<p>In <a href="http://link.springer.com/article/10.1007/s10801-014-0505-x">this paper</a> a symmetry property analogous to the well-known symmetry property of the normal Robinson-Schensted algorithm has been shown for the \(q\)-weighted Robinson-Schensted algorithm. The proof uses a generalisation of the growth diagram approach introduced by Fomin. This approach, which uses “growth graphs”, can also be applied to a wider class of insertion algorithms which have a branching structure.</p>
+<figure>
+<img src="../assets/resources/1423graph.jpg" alt="Growth graph of q-RS for 1423" /><figcaption>Growth graph of q-RS for 1423</figcaption>
+</figure>
+<p>Above is the growth graph of the \(q\)-weighted Robinson-Schensted algorithm for the permutation \({1 2 3 4\choose1 4 2 3}\).</p>
+</body>
+</html>
+
+ </div>
+ </div>
+ </body>
+</html>
diff --git a/site-from-md/posts/2015-01-20-weighted-interpretation-super-catalan-numbers.html b/site-from-md/posts/2015-01-20-weighted-interpretation-super-catalan-numbers.html
new file mode 100644
index 0000000..3fcd94b
--- /dev/null
+++ b/site-from-md/posts/2015-01-20-weighted-interpretation-super-catalan-numbers.html
@@ -0,0 +1,52 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>AMS review of 'A weighted interpretation for the super Catalan numbers' by Allen and Gheorghiciuc</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="../blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="../index.html">About</a><a href="../postlist.html">All posts</a><a href="../blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <h2> AMS review of 'A weighted interpretation for the super Catalan numbers' by Allen and Gheorghiciuc </h2>
+ <p>Posted on 2015-01-20</p>
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<p>The super Catalan numbers are defined as $$ T(m,n) = \frac{(2m)!\,(2n)!}{2\,m!\,n!\,(m + n)!}. $$</p>
+<p>This paper has two main results. First, a combinatorial interpretation of the super Catalan numbers is given: $$ T(m,n) = P(m,n) - N(m,n) $$ where \(P(m,n)\) enumerates the 2-Motzkin paths whose \(m\)-th step begins at an even level (called \(m\)-positive paths) and \(N(m,n)\) those whose \(m\)-th step begins at an odd level (\(m\)-negative paths). The proof uses a recursive argument on the number of \(m\)-positive and -negative paths, based on a recursion of the super Catalan numbers appearing in [I. M. Gessel, J. Symbolic Comput. <strong>14</strong> (1992), no. 2-3, 179–194; <a href="http://www.ams.org/mathscinet/search/publdoc.html?r=1&amp;pg1=MR&amp;s1=1187230&amp;loc=fromrevtext">MR1187230</a>]: $$ 4T(m,n) = T(m+1, n) + T(m, n+1). $$ This result gives an expression for the super Catalan numbers in terms of numbers counting the so-called ballot paths. The latter are sometimes referred to as the generalised Catalan numbers forming the entries of the Catalan triangle.</p>
+<p>Based on the first result, the second result is a combinatorial interpretation of the super Catalan numbers \(T(2,n)\) in terms of counting certain Dyck paths. This is equivalent to a theorem in [I. M. Gessel and G. Xin, J. Integer Seq. <strong>8</strong> (2005), no. 2, Article 05.2.3, 13 pp.; <a href="http://www.ams.org/mathscinet/search/publdoc.html?r=1&amp;pg1=MR&amp;s1=2134162&amp;loc=fromrevtext">MR2134162</a>], which represents \(T(2,n)\) as a count of certain pairs of Dyck paths, and the equivalence is explained at the end of the paper by a bijection between the Dyck paths and the pairs of Dyck paths. The proof of the theorem itself is also done by constructing two bijections between Dyck paths satisfying certain conditions. All three bijections are formulated by locating, removing and adding steps.</p>
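<p>The definition and Gessel’s recursion \(4T(m,n) = T(m+1, n) + T(m, n+1)\) are easy to sanity-check numerically; a minimal Python sketch (the function name is mine, for illustration):</p>

```python
from math import factorial

def super_catalan(m, n):
    """Super Catalan number T(m, n) = (2m)! (2n)! / (2 m! n! (m+n)!),
    an integer for every (m, n) != (0, 0)."""
    num = factorial(2 * m) * factorial(2 * n)
    den = 2 * factorial(m) * factorial(n) * factorial(m + n)
    assert num % den == 0, (m, n)  # integrality check
    return num // den

# Check the recursion 4 T(m, n) = T(m+1, n) + T(m, n+1) on a small grid.
for m in range(5):
    for n in range(5):
        if (m, n) == (0, 0):
            continue
        assert 4 * super_catalan(m, n) == \
            super_catalan(m + 1, n) + super_catalan(m, n + 1)

print(super_catalan(2, 2))  # 3
```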
+<p>Copyright notice: This review is published at http://www.ams.org/mathscinet-getitem?mr=3275875, its copyright owned by the AMS.</p>
+</body>
+</html>
+
+ </div>
+ </div>
+ </body>
+</html>
diff --git a/site-from-md/posts/2015-04-01-unitary-double-products.html b/site-from-md/posts/2015-04-01-unitary-double-products.html
new file mode 100644
index 0000000..adbef65
--- /dev/null
+++ b/site-from-md/posts/2015-04-01-unitary-double-products.html
@@ -0,0 +1,49 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>Unitary causal quantum stochastic double products as universal interactions I</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="../blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="../index.html">About</a><a href="../postlist.html">All posts</a><a href="../blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <h2> Unitary causal quantum stochastic double products as universal interactions I </h2>
+ <p>Posted on 2015-04-01</p>
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<p>In <a href="http://www.actaphys.uj.edu.pl/findarticle?series=Reg&amp;vol=46&amp;page=1851">this paper</a> with <a href="http://homepages.lboro.ac.uk/~marh3/">Robin</a> we show the explicit formulae for a family of unitary triangular and rectangular double product integrals which can be described as second quantisations.</p>
+</body>
+</html>
+
+ </div>
+ </div>
+ </body>
+</html>
diff --git a/site-from-md/posts/2015-04-02-juggling-skill-tree.html b/site-from-md/posts/2015-04-02-juggling-skill-tree.html
new file mode 100644
index 0000000..273709e
--- /dev/null
+++ b/site-from-md/posts/2015-04-02-juggling-skill-tree.html
@@ -0,0 +1,52 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>jst</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="../blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="../index.html">About</a><a href="../postlist.html">All posts</a><a href="../blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <h2> jst </h2>
+ <p>Posted on 2015-04-02</p>
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<p>jst = juggling skill tree</p>
+<p>If you have ever played a computer role playing game, you may have noticed that the protagonist sometimes has a skill “tree” (most of the time it is actually a directed acyclic graph), where certain skills lead to others. For example, <a href="http://hydra-media.cursecdn.com/diablo.gamepedia.com/3/37/Sorceress_Skill_Trees_%28Diablo_II%29.png?version=b74b3d4097ef7ad4e26ebee0dcf33d01">here</a> is the skill tree of the sorceress in <a href="https://en.wikipedia.org/wiki/Diablo_II">Diablo II</a>.</p>
+<p>Now suppose our hero embarks on a quest to learn all the possible juggling patterns. Everyone would agree she should start with the cascade, the simplest nontrivial 3-ball pattern, but what comes afterwards? A few other patterns accessible to beginners are juggler’s tennis, two in one and even the reverse cascade, but what should one learn after that? The encyclopaedic <a href="http://libraryofjuggling.com/">Library of Juggling</a> serves as a good guide, as it records more than 160 patterns, some of which are very aesthetically appealing. On this website almost all the patterns have a “prerequisite” section, indicating what one should learn beforehand. I have therefore written a script using <a href="http://python.org">Python</a>, <a href="http://www.crummy.com/software/BeautifulSoup/">BeautifulSoup</a> and <a href="http://pygraphviz.github.io/">pygraphviz</a> to generate a jst, graded by difficulty (the leftmost column), from the Library of Juggling (click the image for the full size):</p>
+<p><a href="../assets/resources/juggling.png"><img src="../assets/resources/juggling.png" alt="The juggling skill tree" style="width:38em" /></a></p>
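<p>The core of the idea fits in a few lines: given a pattern-to-prerequisites map, emit a Graphviz description of the DAG. The sketch below uses a hand-written fragment of data (the prerequisite edges are illustrative, not copied verbatim from the Library of Juggling, and the real script scrapes them with BeautifulSoup):</p>

```python
# Illustrative fragment of prerequisite data: pattern -> prerequisites.
prereqs = {
    "Cascade": [],
    "Reverse Cascade": ["Cascade"],
    "Juggler's Tennis": ["Reverse Cascade"],
    "Two in One": ["Cascade"],
}

def to_dot(prereqs):
    """Emit a Graphviz DOT description of the skill DAG; feeding this
    to pygraphviz or the dot CLI renders the skill tree."""
    lines = ["digraph jst {"]
    for pattern, parents in prereqs.items():
        for parent in parents:
            lines.append(f'  "{parent}" -> "{pattern}";')
    lines.append("}")
    return "\n".join(lines)

print(to_dot(prereqs))
```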
+</body>
+</html>
+
+ </div>
+ </div>
+ </body>
+</html>
diff --git a/site-from-md/posts/2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.html b/site-from-md/posts/2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.html
new file mode 100644
index 0000000..3f47467
--- /dev/null
+++ b/site-from-md/posts/2015-05-30-infinite-binary-words-containing-repetitions-odd-periods.html
@@ -0,0 +1,69 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>AMS review of 'Infinite binary words containing repetitions of odd period' by Badkobeh and Crochemore</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="../blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="../index.html">About</a><a href="../postlist.html">All posts</a><a href="../blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <h2> AMS review of 'Infinite binary words containing repetitions of odd period' by Badkobeh and Crochemore </h2>
+ <p>Posted on 2015-05-30</p>
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<p>This paper is about the existence of pattern-avoiding infinite binary words, where the patterns are squares, cubes and \(3^+\)-powers. There are mainly two kinds of results: positive (existence of an infinite binary word avoiding a certain pattern) and negative (non-existence of such a word). Each positive result is proved by the construction of a word with finitely many squares and cubes, which are listed explicitly. First a synchronising (also known as comma-free) uniform morphism \(g: \Sigma_3^* \to \Sigma_2^*\) is constructed. Then an argument is given to show that the length of squares in the code \(g(w)\) for a squarefree \(w\) is bounded, hence all the squares can be obtained by examining all \(g(s)\) for \(s\) of bounded lengths. The argument resembles that of the proof of, e.g., Theorem 1, Lemma 2, Theorem 3 and Lemma 4 in [N. Rampersad, J. O. Shallit and M. Wang, Theoret. Comput. Sci. <strong>339</strong> (2005), no. 1, 19–34; <a href="http://www.ams.org/mathscinet/search/publdoc.html?r=1&amp;pg1=MR&amp;s1=2142071&amp;loc=fromrevtext">MR2142071</a>]. The negative results are proved by traversing all possible finite words satisfying the conditions.</p>
+<p>Let \(L(n_2, n_3, S)\) be the maximum length of a word with \(n_2\) distinct squares and \(n_3\) distinct cubes, such that the periods of the squares can take values only in \(S\), where \(n_2, n_3 \in \Bbb N \cup \{\infty, \omega\}\) and \(S \subset \Bbb N_+\). Here \(n_k = 0\) corresponds to \(k\)-free, \(n_k = \infty\) means no restriction on the number of distinct \(k\)-powers, and \(n_k = \omega\) means \(k^+\)-free.</p>
+<p>Below is a summary of the positive and negative results:</p>
+<ol type="1">
+<li><p>(Negative) \(L(\infty, \omega, 2 \Bbb N) &lt; \infty\): \(\nexists\) an infinite \(3^+\)-free binary word avoiding all squares of odd periods (Proposition 1).</p></li>
+<li><p>(Negative) \(L(\infty, 0, 2 \Bbb N + 1) \le 23\): \(\nexists\) an infinite 3-free binary word avoiding squares of even periods; the longest such word has length \(\le 23\) (Proposition 2).</p></li>
+<li><p>(Positive) \(L(\infty, \omega, 2 \Bbb N + 1) = \infty\): \(\exists\) an infinite \(3^+\)-free binary word avoiding squares of even periods (Theorem 1).</p></li>
+<li><p>(Positive) \(L(\infty, \omega, \{1, 3\}) = \infty\): \(\exists\) an infinite \(3^+\)-free binary word containing only squares of period 1 or 3 (Theorem 2).</p></li>
+<li><p>(Negative) \(L(6, 1, 2 \Bbb N + 1) = 57\): \(\nexists\) an infinite binary word avoiding squares of even periods and containing \(&lt; 7\) squares and \(&lt; 2\) cubes; the longest word containing 6 squares and 1 cube has length 57 (Proposition 6).</p></li>
+<li><p>(Positive) \(L(7, 1, 2 \Bbb N + 1) = \infty\): \(\exists\) an infinite \(3^+\)-free binary word avoiding squares of even periods and containing 1 cube and 7 squares (Theorem 3).</p></li>
+<li><p>(Positive) \(L(4, 2, 2 \Bbb N + 1) = \infty\): \(\exists\) an infinite \(3^+\)-free binary word avoiding squares of even periods and containing 2 cubes and 4 squares (Theorem 4).</p></li>
+</ol>
+<p>Copyright notice: this review is published at http://www.ams.org/mathscinet-getitem?mr=3313467, and its copyright is owned by the AMS.</p>
+</body>
+</html>
+
+ </div>
+ </div>
+ </body>
+</html>
diff --git a/site-from-md/posts/2015-07-01-causal-quantum-product-levy-area.html b/site-from-md/posts/2015-07-01-causal-quantum-product-levy-area.html
new file mode 100644
index 0000000..57b4fcd
--- /dev/null
+++ b/site-from-md/posts/2015-07-01-causal-quantum-product-levy-area.html
@@ -0,0 +1,51 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>On a causal quantum double product integral related to Lévy stochastic area.</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="../blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="../index.html">About</a><a href="../postlist.html">All posts</a><a href="../blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <h2> On a causal quantum double product integral related to Lévy stochastic area. </h2>
+ <p>Posted on 2015-07-01</p>
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/MathJax.js?config=TeX-AMS_CHTML-full" type="text/javascript"></script>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<p>In <a href="https://arxiv.org/abs/1506.04294">this paper</a> with <a href="http://homepages.lboro.ac.uk/~marh3/">Robin</a> we study the family of causal double product integrals \[ \prod_{a &lt; x &lt; y &lt; b}\left(1 + i{\lambda \over 2}(dP_x dQ_y - dQ_x dP_y) + i {\mu \over 2}(dP_x dP_y + dQ_x dQ_y)\right) \]</p>
+<p>where <span class="math inline">\(P\)</span> and <span class="math inline">\(Q\)</span> are the mutually noncommuting momentum and position Brownian motions of quantum stochastic calculus. The evaluation is motivated heuristically by approximating the continuous double product by a discrete product in which infinitesimals are replaced by finite increments. The latter is in turn approximated by the second quantisation of a discrete double product of rotation-like operators in different planes, due to a result in <a href="http://www.actaphys.uj.edu.pl/findarticle?series=Reg&amp;vol=46&amp;page=1851">(Hudson-Pei2015)</a>. The main problem solved in this paper is the explicit evaluation of the continuum limit <span class="math inline">\(W\)</span> of the latter, and showing that <span class="math inline">\(W\)</span> is a unitary operator. The kernel of <span class="math inline">\(W\)</span> is written in terms of Bessel functions, and the evaluation is achieved by working on a lattice path model and enumerating linear extensions of related partial orderings, where the enumeration turns out to be closely related to Dyck paths and generalisations of Catalan numbers.</p>
+</body>
+</html>
+
+ </div>
+ </div>
+ </body>
+</html>
diff --git a/site-from-md/posts/2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.html b/site-from-md/posts/2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.html
new file mode 100644
index 0000000..71ee1b9
--- /dev/null
+++ b/site-from-md/posts/2015-07-15-double-macdonald-polynomials-macdonald-superpolynomials.html
@@ -0,0 +1,61 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>AMS review of 'Double Macdonald polynomials as the stable limit of Macdonald superpolynomials' by Blondeau-Fournier, Lapointe and Mathieu</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="../blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="../index.html">About</a><a href="../postlist.html">All posts</a><a href="../blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <h2> AMS review of 'Double Macdonald polynomials as the stable limit of Macdonald superpolynomials' by Blondeau-Fournier, Lapointe and Mathieu </h2>
+ <p>Posted on 2015-07-15</p>
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<p>A Macdonald superpolynomial (introduced in [O. Blondeau-Fournier et al., Lett. Math. Phys. <span class="bf">101</span> (2012), no. 1, 27–47; <a href="http://www.ams.org/mathscinet/search/publdoc.html?pg1=MR&amp;s1=2935476&amp;loc=fromrevtext">MR2935476</a>; J. Comb. <span class="bf">3</span> (2012), no. 3, 495–561; <a href="http://www.ams.org/mathscinet/search/publdoc.html?pg1=MR&amp;s1=3029444&amp;loc=fromrevtext">MR3029444</a>]) in \(N\) Grassmannian variables indexed by a superpartition \(\Lambda\) is said to be stable if \({m (m + 1) \over 2} \ge |\Lambda|\) and \(N \ge |\Lambda| - {m (m - 3) \over 2}\), where \(m\) is the fermionic degree. A stable Macdonald superpolynomial (corresponding to a bisymmetric polynomial) is also called a double Macdonald polynomial (dMp). The main result of this paper is the factorisation of a dMp into plethysms of two classical Macdonald polynomials (Theorem 5). Based on this result, this paper</p>
+<ol type="1">
+<li><p>shows that the dMp has a unique decomposition into bisymmetric monomials;</p></li>
+<li><p>calculates the norm of the dMp;</p></li>
+<li><p>calculates the kernel of the Cauchy-Littlewood-type identity of the dMp;</p></li>
+<li><p>shows the specialisation of the aforementioned factorisation to the Jack, Hall-Littlewood and Schur cases. One of the three Schur specialisations, denoted as \(s_{\lambda, \mu}\), also appears in (7) and (9) below;</p></li>
+<li><p>defines the \(\omega\) -automorphism in this setting, which was used to prove an identity involving products of four Littlewood-Richardson coefficients;</p></li>
+<li><p>shows an explicit evaluation of the dMp motivated by the most general evaluation of the usual Macdonald polynomials;</p></li>
+<li><p>relates dMps with the representation theory of the hyperoctahedral group \(B_n\) via the double Kostka coefficients (which are defined as the entries of the transition matrix from the bisymmetric Schur functions \(s_{\lambda, \mu}\) to the modified dMps);</p></li>
+<li><p>shows that the double Kostka coefficients have the positivity and the symmetry property, and can be written as sums of products of the usual Kostka coefficients;</p></li>
+<li><p>defines an operator \(\nabla^B\) as an analogue of the nabla operator \(\nabla\) introduced in [F. Bergeron and A. M. Garsia, in <em>Algebraic methods and \(q\)-special functions</em> (Montréal, QC, 1996), 1–52, CRM Proc. Lecture Notes, 22, Amer. Math. Soc., Providence, RI, 1999; <a href="http://www.ams.org/mathscinet/search/publdoc.html?r=1&amp;pg1=MR&amp;s1=1726826&amp;loc=fromrevtext">MR1726826</a>]. The action of \(\nabla^B\) on the bisymmetric Schur function \(s_{\lambda, \mu}\) yields the dimension formula \((h + 1)^r\) for the corresponding representation of \(B_n\), where \(h\) and \(r\) are the Coxeter number and the rank of \(B_n\), in the same way that the action of \(\nabla\) on the \(n\)th elementary symmetric function leads to the same formula for the group of type \(A_n\).</p></li>
+</ol>
+<p>Copyright notice: this review is published at http://www.ams.org/mathscinet-getitem?mr=3306078, and its copyright is owned by the AMS.</p>
+</body>
+</html>
+
+ </div>
+ </div>
+ </body>
+</html>
diff --git a/site-from-md/posts/2016-10-13-q-robinson-schensted-knuth-polymer.html b/site-from-md/posts/2016-10-13-q-robinson-schensted-knuth-polymer.html
new file mode 100644
index 0000000..593bf6e
--- /dev/null
+++ b/site-from-md/posts/2016-10-13-q-robinson-schensted-knuth-polymer.html
@@ -0,0 +1,58 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>A \(q\)-Robinson-Schensted-Knuth algorithm and a \(q\)-polymer</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="../blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="../index.html">About</a><a href="../postlist.html">All posts</a><a href="../blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <h2> A \(q\)-Robinson-Schensted-Knuth algorithm and a \(q\)-polymer </h2>
+ <p>Posted on 2016-10-13</p>
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<p>(Latest update: 2017-01-12) In <a href="http://arxiv.org/abs/1504.00666">Matveev-Petrov 2016</a> a \(q\)-deformed Robinson-Schensted-Knuth algorithm (\(q\)RSK) was introduced. In this article we give reformulations of this algorithm in terms of the Noumi-Yamada description, growth diagrams and local moves. We show that the algorithm is symmetric, namely that the output tableau pair is swapped in the sense of distribution when the input matrix is transposed. We also formulate a \(q\)-polymer model based on the \(q\)RSK and prove the corresponding Burke property, which we use to show a strong law of large numbers for the partition function given stationary boundary conditions and \(q\)-geometric weights. We use the \(q\)-local moves to define a generalisation of the \(q\)RSK taking a Young-diagram-shaped array as input. We write down the joint distribution of partition functions in the space-like direction of the \(q\)-polymer in a \(q\)-geometric environment, formulate a \(q\)-version of the multilayer polynuclear growth model (\(q\)PNG) and write down the joint distribution of the \(q\)-polymer partition functions at a fixed time.</p>
+<p>This article is available at <a href="https://arxiv.org/abs/1610.03692">arXiv</a>. It seems to me that one difference between arXiv and Github is that on arXiv each preprint has only a few versions, whereas on Github many projects have a “dev” branch hosting continuous updates while the master branch is where the stable releases live.</p>
+<p><a href="%7B%7B%20site.url%20%7D%7D/assets/resources/qrsklatest.pdf">Here</a> is a “dev” version of the article, which I shall push to arXiv when it stabilises. Below is the changelog.</p>
+<ul>
+<li>2017-01-12: Typos and grammar, arXiv v2.</li>
+<li>2016-12-20: Added remarks on the geometric \(q\)-pushTASEP. Added remarks on the converse of the Burke property. Added natural language description of the \(q\)RSK. Fixed typos.</li>
+<li>2016-11-13: Fixed some typos in the proof of Theorem 3.</li>
+<li>2016-11-07: Fixed some typos. The \(q\)-Burke property is now stated in a more symmetric way, so is the law of large numbers Theorem 2.</li>
+<li>2016-10-20: Fixed a few typos. Updated some references. Added a reference: <a href="http://web.mit.edu/~shopkins/docs/rsk.pdf">a set of notes titled “RSK via local transformations”</a>, written by <a href="http://web.mit.edu/~shopkins/">Sam Hopkins</a> in 2014 as an expository article based on MIT combinatorics preseminar presentations by Alex Postnikov. It contains an idea (applying local moves to a general Young-diagram-shaped array in an order that matches a growth sequence of the underlying Young diagram) which I had thought I was the first to write down.</li>
+</ul>
+</body>
+</html>
+
+ </div>
+ </div>
+ </body>
+</html>
diff --git a/site-from-md/posts/2017-04-25-open_research_toywiki.html b/site-from-md/posts/2017-04-25-open_research_toywiki.html
new file mode 100644
index 0000000..26ed9a7
--- /dev/null
+++ b/site-from-md/posts/2017-04-25-open_research_toywiki.html
@@ -0,0 +1,53 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>Open mathematical research and launching toywiki</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="../blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="../index.html">About</a><a href="../postlist.html">All posts</a><a href="../blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <h2> Open mathematical research and launching toywiki </h2>
+ <p>Posted on 2017-04-25</p>
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<p>As an experimental project, I am launching toywiki.</p>
+<p>It hosts a collection of my research notes.</p>
+<p>It takes some ideas from the open source culture and applies them to mathematical research: 1. It uses a very permissive license (CC-BY-SA), so, for example, anyone can fork the project and make their own version if they have a different vision and want to build upon it. 2. All edits are done with maximum transparency, and discussions of any of the notes should also be as public as possible (e.g. on Github issues). 3. Anyone can suggest changes by opening issues and submitting pull requests.</p>
+<p>Here are the links: <a href="http://toywiki.xyz">toywiki</a> and <a href="https://github.com/ycpei/toywiki">github repo</a>.</p>
+<p>Feedback is welcome by email.</p>
+</body>
+</html>
+
+ </div>
+ </div>
+ </body>
+</html>
diff --git a/site-from-md/posts/2017-08-07-mathematical_bazaar.html b/site-from-md/posts/2017-08-07-mathematical_bazaar.html
new file mode 100644
index 0000000..651fe73
--- /dev/null
+++ b/site-from-md/posts/2017-08-07-mathematical_bazaar.html
@@ -0,0 +1,108 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>The Mathematical Bazaar</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="../blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="../index.html">About</a><a href="../postlist.html">All posts</a><a href="../blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <h2> The Mathematical Bazaar </h2>
+ <p>Posted on 2017-08-07</p>
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<nav id="TOC">
+<ul>
+<li><a href="#problems-of-academia">problems of academia</a></li>
+<li><a href="#open-source-collaborations-on-github">open source collaborations on Github</a></li>
+<li><a href="#open-research-in-mathematics">open research in mathematics</a></li>
+<li><a href="#related-readings">related readings</a></li>
+</ul>
+</nav>
+<p>In this essay I describe some problems in the academia of mathematics and propose an open source model, which I call open research in mathematics.</p>
+<p>This essay is a work in progress - comments and criticisms are welcome! <a href="#fn1" class="footnote-ref" id="fnref1"><sup>1</sup></a></p>
+<p>Before I start I should point out that</p>
+<ol type="1">
+<li>Open research is <em>not</em> open access. In fact the latter is a prerequisite to the former.</li>
+<li>I am not proposing to replace the current academic model with the open model - I know academia works well for many people and I am happy for them, but I think an open research community is long overdue since the wide adoption of the World Wide Web more than two decades ago. In fact, I fail to see why an open model cannot run in tandem with academia, just like open source and closed source software development coexist today.</li>
+</ol>
+<h2 id="problems-of-academia">problems of academia</h2>
+<p>Open source projects are characterised by publicly available source code as well as open invitations for public collaboration, whereas closed source projects do not make their source code accessible to the public. What about mathematical academia, then: is it open source or closed source? The answer is neither.</p>
+<p>Compared to some other scientific disciplines, mathematics does not require expensive equipment or resources to replicate results; compared to programming in the conventional software industry, mathematical findings are not meant to be commercial, as credits and reputation rather than money are the direct incentives (even though the former are commonly used to trade for the latter). It is also a custom and common belief that mathematical derivations and theorems shouldn't be patented. Because of this, mathematical research is an open source activity in the sense that proofs of new results are all available in papers, and thanks to open access (e.g. the arXiv preprint repository) most of the new mathematical knowledge is accessible for free.</p>
+<p>Then why, you may ask, do I claim that maths research is not open source? This is because 1. mathematical arguments are not easily replicable, and 2. mathematical research projects are mostly not open to public participation.</p>
+<p>Compared to computer programs, mathematical arguments are not written in an unambiguous language, and they are terse rather than maximally verbose (this is especially true in research papers, as journals encourage limiting the length of submissions), so the understanding of a proof depends on whether the reader is equipped with the right background knowledge, and the completeness of a proof is highly subjective. More generally speaking, computer programs are mostly portable because all machines with the correct configurations can understand and execute a piece of program, whereas humans depend on their environment, upbringing, resources etc. to have a brain ready to comprehend a proof that interests them. (These barriers are softer than the expensive equipment and resources in other scientific fields mentioned before, because it is all about having access to the right information.)</p>
+<p>On the other hand, as far as the pursuit of reputation and prestige (which can be used to trade for the scarce resource of research positions and grant money) goes, there is often little practical motivation for career mathematicians to explain their results to the public carefully. And so the weird reality of the mathematical academia is that it is not an uncommon practice to keep trade secrets in order to protect one's territory and maintain a monopoly. This is doable because as long as a paper passes the opaque and sometimes political peer review process and is accepted by a journal, it is considered work done, accepted by the whole academic community and adds to the reputation of the author(s). Just like in the software industry, trade secrets and monopoly hinder the development of research as a whole, as well as demoralise outsiders who are interested in participating in related research.</p>
+<p>Apart from trade secrets and territoriality, another reason for the nonexistence of an open research community is an elitist tradition in the mathematical academia, which goes as follows:</p>
+<ul>
+<li>Whoever is not good at mathematics or does not possess a degree in maths is not eligible to do research, or else they run high risks of being labelled a crackpot.</li>
+<li>Mistakes made by established mathematicians are more tolerable than those made by less established ones.</li>
+<li>Good mathematical writings should be deep, and expositions of non-original results are viewed as inferior work and do not add to (and in some cases may even damage) one's reputation.</li>
+</ul>
+<p>All these customs potentially discourage public participation in mathematical research, and I do not see them going away easily unless an open source community gains momentum.</p>
+<p>To solve the above problems, I propose an open source model of mathematical research, which has high levels of openness and transparency and also has some added benefits listed in the last section of this essay. This model tries to achieve two major goals:</p>
+<ul>
+<li>Open and public discussions and collaborations of mathematical research projects online</li>
+<li>Open review to validate results, where author name, reviewer name, comments and responses are all publicly available online.</li>
+</ul>
+<p>To this end, a Github model is fitting. Let me first describe how open source collaboration works on Github.</p>
+<h2 id="open-source-collaborations-on-github">open source collaborations on Github</h2>
+<p>On <a href="https://github.com">Github</a>, every project is publicly available in a repository (we do not consider private repos). The owner can update the project by "committing" changes, which include a message of what has been changed, the author of the changes and a timestamp. Each project has an issue tracker, which is basically a discussion forum about the project, where anyone can open an issue (start a discussion), and the owner of the project as well as the original poster of the issue can close it if it is resolved, e.g. bug fixed, feature added, or out of the scope of the project. Closing the issue is like ending the discussion, except that the thread is still open to more posts for anyone interested. People can react to each issue post, e.g. upvote, downvote, celebration, and importantly, all the reactions are public too, so you can find out who upvoted or downvoted your post.</p>
+<p>When one is interested in contributing code to a project, they fork it, i.e. make a copy of the project, and make the changes they like in the fork. Once they are happy with the changes, they submit a pull request to the original project. The owner of the original project may accept or reject the request, and they can comment on the code in the pull request, asking for clarification, pointing out problematic parts of the code, etc., and the author of the pull request can respond to the comments. Anyone, not just the owner, can participate in this review process, turning it into a public discussion. In fact, a pull request is a special kind of issue thread. Once the owner is happy with the pull request, they accept it and the changes are merged into the original project. The author of the changes will show up in the commit history of the original project, so they get the credit.</p>
+<p>As an alternative to forking, if one is interested in a project but has a different vision, or the maintainer has stopped working on it, they can clone it and make their own version. This is a more independent kind of fork, because there is no longer an intention to contribute back to the original project.</p>
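The fork-and-merge flow described above can be simulated locally with plain git, no Github account needed; this is only an illustrative sketch, and the repository, file and branch names below are invented:

```shell
set -e
tmp=$(mktemp -d); cd "$tmp"
git init -q proj && cd proj
git config user.email alice@example.com
git config user.name alice
echo 'hello' > notes.txt
git add notes.txt && git commit -q -m "initial commit"   # the owner's project
git checkout -q -b feature        # a contributor's fork, here a branch
echo 'world' >> notes.txt
git commit -qam "add feature"     # the contributor's proposed change
git checkout -q -                 # the owner returns to the main branch
git merge -q --no-edit feature    # the owner accepts the "pull request"
git log --format=%s -1            # the contributor's commit is now in history
```

On Github the "fork" and the "pull request" live in separate repositories and a web UI, but the underlying merge and the preserved authorship in the commit history are exactly what this local sketch shows.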
+<p>Moreover, on Github there is no way to send private messages, which forces people to interact publicly. If, say, you want someone to see and reply to your comment in an issue post or pull request, you simply mention them by <code>@someone</code>.</p>
+<h2 id="open-research-in-mathematics">open research in mathematics</h2>
+<p>All this points to a promising direction of open research. A maths project may have a wiki / collection of notes, the paper being written, computer programs implementing the results etc. The issue tracker can serve as a discussion forum about the project as well as a platform for open review (bugs are analogous to mistakes, enhancements are possible ways of improving the main results etc.), and anyone can make their own version of the project, and (optionally) contribute back by making pull requests, which will also be openly reviewed. One may want to add an extra "review this project" functionality, so that people can comment on the original project like they do in a pull request. This may or may not be necessary, as anyone can make comments or point out mistakes in the issue tracker.</p>
+<p>One may doubt this model due to concerns about credit, because work in progress is available to anyone. Well, since all the contributions are trackable in the project commit history and in public discussions in issues and pull request reviews, there is in fact <em>less</em> room for cheating than in the current academic model, where scooping can happen without any witnesses. What we need is a platform with a good amount of trust like arXiv, so that the open research community honours (and cannot ignore) the commit history, and the chance of mis-attribution is reduced to a minimum.</p>
+<p>Compared to the academic model, open research also has the following advantages:</p>
+<ul>
+<li>Anyone in the world with Internet access will have a chance to participate in research, whether or not they are affiliated with a university, have the financial means to attend conferences, or are colleagues of one of the handful of experts in a specific field.</li>
+<li>The problem of replicating / understanding maths results will be solved, as people help each other out. This will also remove the burden of answering queries about one's research. For example, say one has a project "Understanding the fancy results in [paper name]"; they write up some initial notes but get stuck understanding certain arguments. In this case they can simply post the questions on the issue tracker, and anyone who knows the answer, or just has a speculation, can participate in the discussion. In the end the problem may be resolved without bothering the authors of the paper, who may be too busy to answer.</li>
+<li>Similarly, the burden of peer review can also be shifted from a few appointed reviewers to the crowd.</li>
+</ul>
+<h2 id="related-readings">related readings</h2>
+<ul>
+<li><a href="http://www.catb.org/esr/writings/cathedral-bazaar/">The Cathedral and the Bazaar by Eric Raymond</a></li>
+<li><a href="http://michaelnielsen.org/blog/doing-science-online/">Doing science online by Michael Nielsen</a></li>
+<li><a href="https://gowers.wordpress.com/2009/01/27/is-massively-collaborative-mathematics-possible/">Is massively collaborative mathematics possible? by Timothy Gowers</a></li>
+</ul>
+<section class="footnotes">
+<hr />
+<ol>
+<li id="fn1"><p>Please send your comments to my email address - I am still looking for ways to add a comment functionality to this website.<a href="#fnref1" class="footnote-back">↩</a></p></li>
+</ol>
+</section>
+</body>
+</html>
+
+ </div>
+ </div>
+ </body>
+</html>
diff --git a/site-from-md/posts/2018-04-10-update-open-research.html b/site-from-md/posts/2018-04-10-update-open-research.html
new file mode 100644
index 0000000..d0ce675
--- /dev/null
+++ b/site-from-md/posts/2018-04-10-update-open-research.html
@@ -0,0 +1,104 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>Updates on open research</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script data-isso="/comments/"
+ data-isso-css="true"
+ data-isso-lang="en"
+ data-isso-reply-to-self="false"
+ data-isso-require-author="true"
+ data-isso-require-email="true"
+ data-isso-max-comments-top="10"
+ data-isso-max-comments-nested="5"
+ data-isso-reveal-on-click="5"
+ data-isso-avatar="true"
+ data-isso-avatar-bg="#f0f0f0"
+ data-isso-avatar-fg="#9abf88 #5698c4 #e279a3 #9163b6 ..."
+ data-isso-vote="true"
+ data-vote-levels=""
+ src="/comments/js/embed.min.js"></script>
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="../blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="../index.html">About</a><a href="../postlist.html">All posts</a><a href="../blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <h2> Updates on open research </h2>
+ <p>Posted on 2018-04-29 | <a href="/posts/2018-04-10-update-open-research.html#isso-thread">Comments</a> </p>
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<nav id="TOC">
+<ul>
+<li><a href="#freedom-and-community">Freedom and community</a></li>
+<li><a href="#tools-for-open-research">Tools for open research</a></li>
+<li><a href="#an-anecdote-from-the-workshop">An anecdote from the workshop</a></li>
+</ul>
+</nav>
+<p>It has been 9 months since I last wrote about open (maths) research. Since then two things happened which prompted me to write an update.</p>
+<p>As always I discuss open research only in mathematics, not because I think it should not be applied to other disciplines, but simply because I do not have experience nor sufficient interests in non-mathematical subjects.</p>
+<p>First, I read about Richard Stallman, the founder of the free software movement, in <a href="http://shop.oreilly.com/product/9780596002879.do">his biography by Sam Williams</a> and his own collection of essays <a href="https://shop.fsf.org/books-docs/free-software-free-society-selected-essays-richard-m-stallman-3rd-edition"><em>Free software, free society</em></a>, from which I learned a bit more about the context and philosophy of free software and its relation to that of open source software. For anyone interested in open research, I highly recommend having a look at these two books. I am also reading Levy’s <a href="http://www.stevenlevy.com/index.php/books/hackers">Hackers</a>, which documents the development of the hacker culture predating Stallman. I can see the connection of ideas from the hacker ethic to the free software philosophy and on to the open source philosophy. My guess is that the software world is fortunate to have had pioneers who advocated for various kinds of freedom and openness from the beginning, whereas in academia, which has a much longer history, credit protection has always been a bigger concern.</p>
+<p>Also a month ago I attended a workshop called <a href="https://www.perimeterinstitute.ca/conferences/open-research-rethinking-scientific-collaboration">Open research: rethinking scientific collaboration</a>. That was the first time I met a group of people (mostly physicists) who also want open research to happen, and we had some stimulating discussions. Many thanks to the organisers at Perimeter Institute for organising the event, and special thanks to <a href="https://www.perimeterinstitute.ca/people/matteo-smerlak">Matteo Smerlak</a> and <a href="https://www.perimeterinstitute.ca/people/ashley-milsted">Ashley Milsted</a> for invitation and hosting.</p>
+<p>From both of these I feel like I should write an updated post on open research.</p>
+<h3 id="freedom-and-community">Freedom and community</h3>
+<p>Ideals matter. Stallman’s struggles stemmed from the frustration of being denied access to source code (a frustration I have shared in academia, except that source code is replaced by maths knowledge), and revolved around two things that underlie the free software movement: freedom and community. That is, the freedom to use, modify and share a work, and by sharing, to help the community.</p>
+<p>Likewise, as for open research, apart from the utilitarian view that open research is more efficient / harder for credit theft, we should not ignore the ethical aspect that open research is right and fair. In particular, I think freedom and community can also serve as principles in open research. One way to make this argument more concrete is to describe what I feel are the central problems: NDAs (non-disclosure agreements) and reproducibility.</p>
+<p><strong>NDAs</strong>. It is assumed that when establishing a research collaboration, or even just having a discussion, all those involved own the joint work in progress, and no one has the freedom to disclose any information, e.g. intermediate results, without permission from all collaborators. In effect this amounts to signing an NDA. NDAs are harmful because they restrict people’s freedom to share information that could benefit their own or others’ research. Considering that, in contrast to the private sector, the primary goal of academia is knowledge rather than profit, NDAs in research are unacceptable.</p>
+<p><strong>Reproducibility</strong>. Research papers are not necessarily reproducible, even when they appear in peer-reviewed journals. This is because the peer-review process is opaque and the proofs in the papers may not be clear to everyone. To make things worse, there are no open channels for discussing the results in these papers, and one may have to rely on interacting with the small circle of the informed. One example is folk theorems. Another is the trade secrets required to decipher published works.</p>
+<p>I should clarify that freedom works both ways. One should have the freedom to disclose maths knowledge, but they should also be free to withhold any information that does not hamper the reproducibility of published works (e.g. results in ongoing research yet to be published), even though it may not be nice to do so when such information can help others with their research.</p>
+<p>Similar to the solution offered by the free software movement, we need a community that promotes and respects free flow of maths knowledge, in the spirit of the <a href="https://www.gnu.org/philosophy/">four essential freedoms</a>, a community that rejects NDAs and upholds reproducibility.</p>
+<p>Here are some ideas on how to tackle these two problems and build the community:</p>
+<ol type="1">
+<li>Free licensing. It solves the NDA problem - free licenses permit redistribution and modification of works, so if you adopt them in your joint work, then you have the freedom to modify and distribute the work. It also helps with reproducibility - if a paper is not clear, anyone can write their own version and publish it. Bonus points for the use of copyleft licenses like <a href="https://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Share-Alike</a> or the <a href="https://www.gnu.org/licenses/fdl.html">GNU Free Documentation License</a>.</li>
+<li>A forum for discussions of mathematics. It helps solve the reproducibility problem - public interaction may help quickly clarify problems. By the way, Math Overflow, being a Q&amp;A site, is not a forum in this sense.</li>
+<li>An infrastructure of mathematical knowledge. Like the GNU system, a mathematics encyclopedia under a copyleft license, maintained Github-style rather than Wikipedia-style by a “Free Mathematics Foundation”, drawing contributions from the public (inside or outside of academia). To begin with, crowd-source (again, Github-style) the proofs of, say, 1000 foundational theorems covered in the curriculum of a bachelor’s degree. Perhaps start by taking contributions from people with some credentials (e.g. a bachelor’s degree in maths) and then expand contribution permission to the public, or take advantage of an existing corpus under a free license, like Wikipedia.</li>
+<li>Citing with care: if a work is considered authoritative but you could not reproduce its results, whereas another paper which explains or discusses similar results makes the first paper understandable to you, give both papers due attribution (something like: see [1], but I could not reproduce the proof in [1], and the proofs in [2] helped clarify it). No one should be offended if you say you cannot reproduce something - there may be causes on both sides - and citing [2] is fairer and helps readers with a similar background.</li>
+</ol>
+<h3 id="tools-for-open-research">Tools for open research</h3>
+<p>The open research workshop revolved around how to lead academia towards a more open culture. There were discussions on open research tools, improving credit attributions, the peer-review process and the path to adoption.</p>
+<p>During the workshop many efforts for open research were mentioned, and afterwards I was made aware of more of them, like the following:</p>
+<ul>
+<li><a href="https://osf.io">OSF</a>, an online research platform. It has a clean and simple interface with commenting, wiki, citation generation, DOI generation, tags, license generation etc. Like Github it supports private and public repositories (but defaults to private), version control, with the ability to fork or bookmark a project.</li>
+<li><a href="https://scipost.org/">SciPost</a>, physics journals whose peer review reports and responses are public (peer-witnessed refereeing), and allows comments (post-publication evaluation). Like arXiv, it requires some academic credential (PhD or above) to register.</li>
+<li><a href="https://knowen.org/">Knowen</a>, a platform to organise knowledge in directed acyclic graphs. Could be useful for building the infrastructure of mathematical knowledge.</li>
+<li><a href="https://fermatslibrary.com/">Fermat’s Library</a>, the journal club website that crowd-annotates one notable paper per week, released a Chrome extension <a href="https://fermatslibrary.com/librarian">Librarian</a> that overlays a commenting interface on arXiv. As an example, Ian Goodfellow did an <a href="https://fermatslibrary.com/arxiv_comments?url=https://arxiv.org/pdf/1406.2661.pdf">AMA (ask me anything) on his GAN paper</a>.</li>
+<li><a href="https://polymathprojects.org/">The Polymath project</a>, the famous massive collaborative mathematical project. Not exactly new, the Polymath project is the only open maths research project that has gained some traction and recognition. However, it does not have many active projects (<a href="http://michaelnielsen.org/polymath1/index.php?title=Main_Page">currently only one active project</a>).</li>
+<li><a href="https://stacks.math.columbia.edu/">The Stacks Project</a>. I was made aware of this project by <a href="https://people.kth.se/~yitingl/">Yiting</a>. Its data is hosted on GitHub, accepts contributions via pull requests, and is licensed under the GNU Free Documentation License, ticking many boxes of the free and open source model.</li>
+</ul>
+<h3 id="an-anecdote-from-the-workshop">An anecdote from the workshop</h3>
+<p>In a conversation during the workshop, one of the participants called open science “normal science”, because reproducibility, open access, collaborations, and fair attributions are all what science is supposed to be, and practices like treating the readers as buyers rather than users should be called “bad science”, rather than “closed science”.</p>
+<p>To which an organiser replied: maybe we should rename the workshop “Not-bad science”.</p>
+</body>
+</html>
+
+ </div>
+ <section id="isso-thread"></section>
+ </div>
+ </body>
+</html>
diff --git a/site-from-md/posts/2018-06-03-automatic_differentiation.html b/site-from-md/posts/2018-06-03-automatic_differentiation.html
new file mode 100644
index 0000000..1f81337
--- /dev/null
+++ b/site-from-md/posts/2018-06-03-automatic_differentiation.html
@@ -0,0 +1,98 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>Automatic differentiation</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script data-isso="/comments/"
+ data-isso-css="true"
+ data-isso-lang="en"
+ data-isso-reply-to-self="false"
+ data-isso-require-author="true"
+ data-isso-require-email="true"
+ data-isso-max-comments-top="10"
+ data-isso-max-comments-nested="5"
+ data-isso-reveal-on-click="5"
+ data-isso-avatar="true"
+ data-isso-avatar-bg="#f0f0f0"
+ data-isso-avatar-fg="#9abf88 #5698c4 #e279a3 #9163b6 ..."
+ data-isso-vote="true"
+ data-vote-levels=""
+ src="/comments/js/embed.min.js"></script>
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="../blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="../index.html">About</a><a href="../postlist.html">All posts</a><a href="../blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <h2> Automatic differentiation </h2>
+ <p>Posted on 2018-06-03 | <a href="/posts/2018-06-03-automatic_differentiation.html#isso-thread">Comments</a> </p>
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/MathJax.js?config=TeX-AMS_CHTML-full" type="text/javascript"></script>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<p>This post serves as a note and explainer of autodiff. It is licensed under <a href="https://www.gnu.org/licenses/fdl.html">GNU FDL</a>.</p>
+<p>For my learning I benefited a lot from <a href="http://www.cs.toronto.edu/%7Ergrosse/courses/csc321_2018/slides/lec10.pdf">Toronto CSC321 slides</a> and the <a href="https://github.com/mattjj/autodidact/">autodidact</a> project, which is a pedagogical implementation of <a href="https://github.com/hips/autograd">Autograd</a>. That said, any mistakes in this note are mine (especially since some of the knowledge was obtained from interpreting slides!), and if you do spot any I would be grateful if you could let me know.</p>
+<p>Automatic differentiation (AD) is a way to compute derivatives. It does so by traversing through a computational graph using the chain rule.</p>
+<p>There are two modes: forward mode AD and reverse mode AD. They are roughly symmetric to each other, and understanding one of them makes it easy to understand the other.</p>
+<p>In the language of neural networks, one can say that the forward mode AD is used when one wants to compute the derivatives of functions at all layers with respect to input layer weights, whereas the reverse mode AD is used to compute the derivatives of output functions with respect to weights at all layers. Therefore reverse mode AD (rmAD) is the one to use for gradient descent, and the one we focus on in this post.</p>
+<p>Basically rmAD requires the computation to be sufficiently decomposed, so that in the computational graph, each node as a function of its parent nodes is an elementary function that the AD engine has knowledge about.</p>
+<p>For example, the Sigmoid activation <span class="math inline">\(a&#39; = \sigma(w a + b)\)</span> is quite simple, but it should be decomposed to simpler computations:</p>
+<ul>
+<li><span class="math inline">\(a&#39; = 1 / t_1\)</span></li>
+<li><span class="math inline">\(t_1 = 1 + t_2\)</span></li>
+<li><span class="math inline">\(t_2 = \exp(t_3)\)</span></li>
+<li><span class="math inline">\(t_3 = - t_4\)</span></li>
+<li><span class="math inline">\(t_4 = t_5 + b\)</span></li>
+<li><span class="math inline">\(t_5 = w a\)</span></li>
+</ul>
+<p>Thus the function <span class="math inline">\(a&#39;(a)\)</span> is decomposed into elementary operations like addition, subtraction, multiplication, taking the reciprocal, exponentiation, logarithm, etc., and the rmAD engine stores the Jacobians of these elementary operations.</p>
+<p>Since in neural networks we want to find derivatives of a single loss function <span class="math inline">\(L(x; \theta)\)</span>, we can omit <span class="math inline">\(L\)</span> when writing derivatives and denote, say <span class="math inline">\(\bar \theta_k := \partial_{\theta_k} L\)</span>.</p>
+<p>In implementations of rmAD, one can represent the Jacobian as a transformation <span class="math inline">\(j: (x \to y) \to (y, \bar y, x) \to \bar x\)</span>. <span class="math inline">\(j\)</span> is called the <em>Vector Jacobian Product</em> (VJP). For example, <span class="math inline">\(j(\exp)(y, \bar y, x) = y \bar y\)</span> since given <span class="math inline">\(y = \exp(x)\)</span>,</p>
+<p><span class="math inline">\(\partial_x L = \partial_x y \cdot \partial_y L = \partial_x \exp(x) \cdot \partial_y L = y \bar y\)</span></p>
+<p>as another example, <span class="math inline">\(j(+)(y, \bar y, x_1, x_2) = (\bar y, \bar y)\)</span> since given <span class="math inline">\(y = x_1 + x_2\)</span>, <span class="math inline">\(\bar{x_1} = \bar{x_2} = \bar y\)</span>.</p>
+<p>Similarly,</p>
+<ol type="1">
+<li><span class="math inline">\(j(/)(y, \bar y, x_1, x_2) = (\bar y / x_2, - \bar y x_1 / x_2^2)\)</span></li>
+<li><span class="math inline">\(j(\log)(y, \bar y, x) = \bar y / x\)</span></li>
+<li><span class="math inline">\(j((A, \beta) \mapsto A \beta)(y, \bar y, A, \beta) = (\bar y \otimes \beta, A^T \bar y)\)</span>.</li>
+<li>etc...</li>
+</ol>
+<p>In the third one, the function is a matrix <span class="math inline">\(A\)</span> multiplied on the right by a column vector <span class="math inline">\(\beta\)</span>, and <span class="math inline">\(\bar y \otimes \beta\)</span> is the tensor product which is a fancy way of writing <span class="math inline">\(\bar y \beta^T\)</span>. See <a href="https://github.com/mattjj/autodidact/blob/master/autograd/numpy/numpy_vjps.py">numpy_vjps.py</a> for the implementation in autodidact.</p>
+<p>So, given a node say <span class="math inline">\(y = y(x_1, x_2, ..., x_n)\)</span>, and given the value of <span class="math inline">\(y\)</span>, <span class="math inline">\(x_{1 : n}\)</span> and <span class="math inline">\(\bar y\)</span>, rmAD computes the values of <span class="math inline">\(\bar x_{1 : n}\)</span> by using the Jacobians.</p>
+<p>This is the gist of rmAD. It stores the values of each node in a forward pass, and computes the derivatives of each node exactly once in a backward pass.</p>
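+<p>As an illustration, here is a minimal sketch of such an engine in Python (my own toy code, not autodidact’s API), implementing the backward pass and the sigmoid decomposition above:</p>

```python
import math

class Var:
    """A node in the computational graph."""
    def __init__(self, value, parents=(), vjp=None):
        self.value = value      # forward-pass value
        self.parents = parents  # parent Vars
        self.vjp = vjp          # maps bar_y to the bar contributions of each parent
        self.grad = 0.0         # accumulated bar value

def add(a, b): return Var(a.value + b.value, (a, b), lambda bar: (bar, bar))
def mul(a, b): return Var(a.value * b.value, (a, b), lambda bar: (bar * b.value, bar * a.value))
def neg(a):    return Var(-a.value, (a,), lambda bar: (-bar,))
def exp(a):
    y = math.exp(a.value)
    return Var(y, (a,), lambda bar: (bar * y,))      # j(exp)(y, bar_y, x) = y bar_y
def recip(a):
    y = 1.0 / a.value
    return Var(y, (a,), lambda bar: (-bar * y * y,))

def backward(output):
    """Compute the bar value of every node, visiting each node exactly once."""
    order, seen = [], set()
    def topo(node):
        if id(node) not in seen:
            seen.add(id(node))
            for p in node.parents:
                topo(p)
            order.append(node)
    topo(output)
    output.grad = 1.0
    for node in reversed(order):                     # reverse topological order
        if node.vjp is not None:
            for parent, bar in zip(node.parents, node.vjp(node.grad)):
                parent.grad += bar

# the sigmoid example from above: a' = 1 / (1 + exp(-(w a + b)))
w, a, b, one = Var(2.0), Var(3.0), Var(-1.0), Var(1.0)
out = recip(add(one, exp(neg(add(mul(w, a), b)))))
backward(out)
```

+<p>After the backward pass, <code>a.grad</code> and <code>w.grad</code> agree with the analytic derivatives of the sigmoid.</p>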
+<p>It is a nice exercise to derive the backpropagation in the fully connected feedforward neural networks (e.g. <a href="http://neuralnetworksanddeeplearning.com/chap2.html#the_four_fundamental_equations_behind_backpropagation">the one for MNIST in Neural Networks and Deep Learning</a>) using rmAD.</p>
+<p>AD is an approach lying between the extremes of numerical approximation (e.g. finite difference) and symbolic evaluation. It uses exact formulas (VJPs) at each elementary operation, like symbolic evaluation, while evaluating each VJP numerically rather than lumping all the VJPs into an unwieldy symbolic formula.</p>
+<p>Things to look further into: the higher-order functional currying form <span class="math inline">\(j: (x \to y) \to (y, \bar y, x) \to \bar x\)</span> begs for a functional programming implementation.</p>
+</body>
+</html>
+
+ </div>
+ <section id="isso-thread"></section>
+ </div>
+ </body>
+</html>
diff --git a/site-from-md/posts/2018-12-02-lime-shapley.html b/site-from-md/posts/2018-12-02-lime-shapley.html
new file mode 100644
index 0000000..cd7903b
--- /dev/null
+++ b/site-from-md/posts/2018-12-02-lime-shapley.html
@@ -0,0 +1,202 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>Shapley, LIME and SHAP</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script data-isso="/comments/"
+ data-isso-css="true"
+ data-isso-lang="en"
+ data-isso-reply-to-self="false"
+ data-isso-require-author="true"
+ data-isso-require-email="true"
+ data-isso-max-comments-top="10"
+ data-isso-max-comments-nested="5"
+ data-isso-reveal-on-click="5"
+ data-isso-avatar="true"
+ data-isso-avatar-bg="#f0f0f0"
+ data-isso-avatar-fg="#9abf88 #5698c4 #e279a3 #9163b6 ..."
+ data-isso-vote="true"
+ data-vote-levels=""
+ src="/comments/js/embed.min.js"></script>
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="../blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="../index.html">About</a><a href="../postlist.html">All posts</a><a href="../blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <h2> Shapley, LIME and SHAP </h2>
+ <p>Posted on 2018-12-02 | <a href="/posts/2018-12-02-lime-shapley.html#isso-thread">Comments</a> </p>
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/MathJax.js?config=TeX-AMS_CHTML-full" type="text/javascript"></script>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<nav id="TOC">
+<ul>
+<li><a href="#shapley-values">Shapley values</a></li>
+<li><a href="#lime">LIME</a></li>
+<li><a href="#shapley-values-and-lime">Shapley values and LIME</a></li>
+<li><a href="#shap">SHAP</a></li>
+<li><a href="#evaluating-shap">Evaluating SHAP</a></li>
+<li><a href="#references">References</a></li>
+</ul>
+</nav>
+<p>In this post I explain LIME (Ribeiro et. al. 2016), the Shapley values (Shapley, 1953) and the SHAP values (Strumbelj-Kononenko, 2014; Lundberg-Lee, 2017).</p>
+<p><strong>Acknowledgement</strong>. Thanks to Josef Lindman Hörnlund for bringing the LIME and SHAP papers to my attention. The research was done while working at KTH mathematics department.</p>
+<p><em>If you are reading on a mobile device, you may need to “request desktop site” for the equations to be properly displayed. This post is licensed under CC BY-SA and GNU FDL.</em></p>
+<h2 id="shapley-values">Shapley values</h2>
+<p>A coalitional game <span class="math inline">\((v, N)\)</span> of <span class="math inline">\(n\)</span> players involves</p>
+<ul>
+<li>The set <span class="math inline">\(N = \{1, 2, ..., n\}\)</span> that represents the players.</li>
+<li>A function <span class="math inline">\(v: 2^N \to \mathbb R\)</span>, where <span class="math inline">\(v(S)\)</span> is the worth of coalition <span class="math inline">\(S \subset N\)</span>.</li>
+</ul>
+<p>The Shapley values <span class="math inline">\(\phi_i(v)\)</span> of such a game specify a fair way to distribute the total worth <span class="math inline">\(v(N)\)</span> to the players. They are defined as follows (in the following, for a set <span class="math inline">\(S \subset N\)</span>, we write <span class="math inline">\(s = |S|\)</span> for the number of elements of <span class="math inline">\(S\)</span>, and use the shorthands <span class="math inline">\(S - i := S \setminus \{i\}\)</span> and <span class="math inline">\(S + i := S \cup \{i\}\)</span>):</p>
+<p><span class="math display">\[\phi_i(v) = \sum_{S: i \in S} {(n - s)! (s - 1)! \over n!} (v(S) - v(S - i)).\]</span></p>
+<p>It is not hard to see that <span class="math inline">\(\phi_i(v)\)</span> can be viewed as an expectation:</p>
+<p><span class="math display">\[\phi_i(v) = \mathbb E_{S \sim \nu_i} (v(S) - v(S - i))\]</span></p>
+<p>where <span class="math inline">\(\nu_i(S) = n^{-1} {n - 1 \choose s - 1}^{-1} 1_{i \in S}\)</span>, that is, first pick the size <span class="math inline">\(s\)</span> uniformly from <span class="math inline">\(\{1, 2, ..., n\}\)</span>, then pick <span class="math inline">\(S\)</span> uniformly from the subsets of <span class="math inline">\(N\)</span> that have size <span class="math inline">\(s\)</span> and contain <span class="math inline">\(i\)</span>.</p>
+<p>The Shapley values satisfy some nice properties which are readily verified, including:</p>
+<ul>
+<li><strong>Efficiency</strong>. <span class="math inline">\(\sum_i \phi_i(v) = v(N) - v(\emptyset)\)</span>.</li>
+<li><strong>Symmetry</strong>. If for some <span class="math inline">\(i, j \in N\)</span>, for all <span class="math inline">\(S \subset N\)</span>, we have <span class="math inline">\(v(S + i) = v(S + j)\)</span>, then <span class="math inline">\(\phi_i(v) = \phi_j(v)\)</span>.</li>
+<li><strong>Null player</strong>. If for some <span class="math inline">\(i \in N\)</span>, for all <span class="math inline">\(S \subset N\)</span>, we have <span class="math inline">\(v(S + i) = v(S)\)</span>, then <span class="math inline">\(\phi_i(v) = 0\)</span>.</li>
+<li><strong>Linearity</strong>. <span class="math inline">\(\phi_i\)</span> is linear in games. That is <span class="math inline">\(\phi_i(v) + \phi_i(w) = \phi_i(v + w)\)</span>, where <span class="math inline">\(v + w\)</span> is defined by <span class="math inline">\((v + w)(S) := v(S) + w(S)\)</span>.</li>
+</ul>
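+<p>As a sanity check of the definition, the Shapley values of a small game can be computed by brute force. The following Python sketch (my own code, exponential in the number of players, for illustration only) evaluates the defining sum directly; the glove game used to exercise it is a standard textbook example, not from this post:</p>

```python
from itertools import combinations
from math import factorial

def shapley(v, n):
    """Shapley values of the game (v, N) by direct evaluation of the defining sum."""
    players = range(1, n + 1)
    phis = []
    for i in players:
        phi = 0.0
        for s in range(1, n + 1):
            for S in combinations(players, s):
                if i in S:
                    S = frozenset(S)
                    weight = factorial(n - s) * factorial(s - 1) / factorial(n)
                    phi += weight * (v(S) - v(S - {i}))
        phis.append(phi)
    return phis

# glove game: players 1 and 2 each own a left glove, player 3 a right glove;
# a coalition is worth 1 iff it can form a matching pair
v = lambda S: 1.0 if 3 in S and (1 in S or 2 in S) else 0.0
phi = shapley(v, 3)
```

+<p>The computed values are <span class="math inline">\((1/6, 1/6, 2/3)\)</span>, and one can check the Efficiency property directly: they sum to <span class="math inline">\(v(N) - v(\emptyset) = 1\)</span>.</p>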
+<p>In the literature, an added assumption <span class="math inline">\(v(\emptyset) = 0\)</span> is often given, in which case the Efficiency property is defined as <span class="math inline">\(\sum_i \phi_i(v) = v(N)\)</span>. Here I discard this assumption to avoid minor inconsistencies across different sources. For example, in the LIME paper, the local model is defined without an intercept, even though the underlying <span class="math inline">\(v(\emptyset)\)</span> may not be <span class="math inline">\(0\)</span>. In the SHAP paper, an intercept <span class="math inline">\(\phi_0 = v(\emptyset)\)</span> is added which fixes this problem when making connections to the Shapley values.</p>
+<p>Conversely, according to Strumbelj-Kononenko (2010), it was shown in Shapley's original paper (Shapley, 1953) that these four properties together with <span class="math inline">\(v(\emptyset) = 0\)</span> define the Shapley values uniquely.</p>
+<h2 id="lime">LIME</h2>
+<p>LIME (Ribeiro et. al. 2016) is a method that explains feature contributions of supervised learning models locally.</p>
+<p>Let <span class="math inline">\(f: X_1 \times X_2 \times ... \times X_n \to \mathbb R\)</span> be a function. We can think of <span class="math inline">\(f\)</span> as a model, where <span class="math inline">\(X_j\)</span> is the space of the <span class="math inline">\(j\)</span>th feature. For example, in a language model, <span class="math inline">\(X_j\)</span> may correspond to the count of the <span class="math inline">\(j\)</span>th word in the vocabulary, i.e. the bag-of-words model.</p>
+<p>The output may be something like housing price, or log-probability of something.</p>
+<p>LIME tries to assign a value to each feature <em>locally</em>. By locally, we mean that given a specific sample <span class="math inline">\(x \in X := \prod_{i = 1}^n X_i\)</span>, we want to fit a model around it.</p>
+<p>More specifically, let <span class="math inline">\(h_x: 2^N \to X\)</span> be a function defined by</p>
+<p><span class="math display">\[(h_x(S))_i =
+\begin{cases}
+x_i, &amp; \text{if }i \in S; \\
+0, &amp; \text{otherwise.}
+\end{cases}\]</span></p>
+<p>That is, <span class="math inline">\(h_x(S)\)</span> masks the features that are not in <span class="math inline">\(S\)</span>, or in other words, we are perturbing the sample <span class="math inline">\(x\)</span>. Specifically, <span class="math inline">\(h_x(N) = x\)</span>. Alternatively, the <span class="math inline">\(0\)</span> in the "otherwise" case can be replaced by some kind of default value (see the section titled SHAP in this post).</p>
+<p>For a set <span class="math inline">\(S \subset N\)</span>, let us write <span class="math inline">\(1_S \in \{0, 1\}^n\)</span> for the <span class="math inline">\(n\)</span>-bit vector whose <span class="math inline">\(k\)</span>th bit is <span class="math inline">\(1\)</span> if and only if <span class="math inline">\(k \in S\)</span>.</p>
+<p>Basically, LIME samples <span class="math inline">\(S_1, S_2, ..., S_m \subset N\)</span> to obtain a set of perturbed samples <span class="math inline">\(x_i = h_x(S_i)\)</span> in the <span class="math inline">\(X\)</span> space, and then fits a linear model <span class="math inline">\(g\)</span> using <span class="math inline">\(1_{S_i}\)</span> as the input samples and <span class="math inline">\(f(h_x(S_i))\)</span> as the output samples:</p>
+<p><strong>Problem</strong>(LIME). Find <span class="math inline">\(w = (w_1, w_2, ..., w_n)\)</span> that minimises</p>
+<p><span class="math display">\[\sum_i (w \cdot 1_{S_i} - f(h_x(S_i)))^2 \pi_x(h_x(S_i))\]</span></p>
+<p>where <span class="math inline">\(\pi_x(x&#39;)\)</span> is a function that penalises <span class="math inline">\(x&#39;\)</span>s that are far away from <span class="math inline">\(x\)</span>. In the LIME paper the Gaussian kernel was used:</p>
+<p><span class="math display">\[\pi_x(x&#39;) = \exp\left({- \|x - x&#39;\|^2 \over \sigma^2}\right).\]</span></p>
+<p>Then <span class="math inline">\(w_i\)</span> represents the importance of the <span class="math inline">\(i\)</span>th feature.</p>
+<p>The LIME model has a more general framework, but the specific model considered in the paper is the one described above, with a Lasso for feature selection.</p>
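+<p>A minimal sketch of this procedure in Python (my own code; it enumerates all subsets instead of sampling, masks to <span class="math inline">\(0\)</span> as above, and omits the Lasso step):</p>

```python
import numpy as np
from itertools import chain, combinations

def lime_weights(f, x, sigma=1.0):
    """Fit the local linear model around x; for small n we can afford all 2^n subsets."""
    n = len(x)
    subsets = chain.from_iterable(combinations(range(n), s) for s in range(n + 1))
    Z = np.array([[1.0 if i in S else 0.0 for i in range(n)] for S in subsets])
    X_pert = Z * x                                   # h_x(S): zero out features not in S
    y = np.array([f(xp) for xp in X_pert])           # f(h_x(S))
    pi = np.exp(-np.sum((X_pert - x) ** 2, axis=1) / sigma ** 2)   # Gaussian kernel
    sw = np.sqrt(pi)                                 # weighted least squares via row scaling
    w, *_ = np.linalg.lstsq(Z * sw[:, None], y * sw, rcond=None)
    return w

# sanity check: for a linear model f(z) = c . z, the local model fits exactly,
# so LIME recovers w_i = c_i x_i regardless of the kernel
c = np.array([1.0, 2.0, -1.0])
x = np.array([0.5, 1.0, 2.0])
w = lime_weights(lambda z: c @ z, x)
```

+<p>For nonlinear <span class="math inline">\(f\)</span> the recovered weights depend on the kernel width <span class="math inline">\(\sigma\)</span>, which is the point of the “locally” in LIME.</p>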
+<p><strong>Remark</strong>. One difference between our account here and the one in the LIME paper is that the dimension of the data space may differ from <span class="math inline">\(n\)</span> (see Section 3.1 of that paper). But in the case of text data, they do use bag-of-words (our <span class="math inline">\(X\)</span>) as an “intermediate” representation. So my understanding is that, in their context, there is an “original” data space (let’s call it <span class="math inline">\(X&#39;\)</span>), and a one-one correspondence between <span class="math inline">\(X&#39;\)</span> and <span class="math inline">\(X\)</span> (let’s call it <span class="math inline">\(r: X&#39; \to X\)</span>), so that given a sample <span class="math inline">\(x&#39; \in X&#39;\)</span>, we can compute the output of <span class="math inline">\(S\)</span> in the local model with <span class="math inline">\(f(r^{-1}(h_{r(x&#39;)}(S)))\)</span>. For example, when <span class="math inline">\(X\)</span> is the bag-of-words space, <span class="math inline">\(X&#39;\)</span> may be the word embedding vector space, so that <span class="math inline">\(r(x&#39;) = A^{-1} x&#39;\)</span>, where <span class="math inline">\(A\)</span> is the word embedding matrix. Therefore, without loss of generality, we assume the input space to be <span class="math inline">\(X\)</span>, which is of dimension <span class="math inline">\(n\)</span>.</p>
+<h2 id="shapley-values-and-lime">Shapley values and LIME</h2>
+<p>The connection between the Shapley values and LIME is noted in Lundberg-Lee (2017), but the underlying connection goes back to 1988 (Charnes et al.).</p>
+<p>To see the connection, we need to modify LIME a bit.</p>
+<p>First, we need to make LIME less efficient by considering <em>all</em> the <span class="math inline">\(2^n\)</span> subsets instead of the <span class="math inline">\(m\)</span> samples <span class="math inline">\(S_1, S_2, ..., S_{m}\)</span>.</p>
+<p>Then we need to relax the definition of <span class="math inline">\(\pi_x\)</span>. It no longer needs to penalise samples that are far away from <span class="math inline">\(x\)</span>. In fact, we will see later that the choice of <span class="math inline">\(\pi_x(x&#39;)\)</span> that yields the Shapley values is high when <span class="math inline">\(x&#39;\)</span> is very close or very far away from <span class="math inline">\(x\)</span>, and low otherwise. We further add the restriction that <span class="math inline">\(\pi_x(h_x(S))\)</span> only depends on the size of <span class="math inline">\(S\)</span>, thus we rewrite it as <span class="math inline">\(q(s)\)</span> instead.</p>
+<p>We also denote <span class="math inline">\(v(S) := f(h_x(S))\)</span> and <span class="math inline">\(w(S) = \sum_{i \in S} w_i\)</span>.</p>
+<p>Finally, we add the Efficiency property as a constraint: <span class="math inline">\(\sum_{i = 1}^n w_i = f(x) - f(h_x(\emptyset)) = v(N) - v(\emptyset)\)</span>.</p>
+<p>Then the problem becomes a weighted linear regression:</p>
+<p><strong>Problem</strong>. Minimise <span class="math inline">\(\sum_{S \subset N} (w(S) - v(S))^2 q(s)\)</span> over <span class="math inline">\(w\)</span> subject to <span class="math inline">\(w(N) = v(N) - v(\emptyset)\)</span>.</p>
+<p><strong>Claim</strong> (Charnes et al. 1988). The solution to this problem is</p>
+<p><span class="math display">\[w_i = {1 \over n} (v(N) - v(\emptyset)) + \left(\sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s)\right)^{-1} \sum_{S \subset N: i \in S} \left({n - s \over n} q(s) v(S) - {s - 1 \over n} q(s - 1) v(S - i)\right). \qquad (-1)\]</span></p>
+<p>Specifically, if we choose</p>
+<p><span class="math display">\[q(s) = c {n - 2 \choose s - 1}^{-1}\]</span></p>
+<p>for any constant <span class="math inline">\(c\)</span>, then <span class="math inline">\(w_i = \phi_i(v)\)</span> are the Shapley values.</p>
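As a sanity check of the Claim, the following sketch (a purely hypothetical game with n = 4) solves the constrained weighted regression with q(s) = C(n-2, s-1)^{-1} via its KKT system and compares the result with the direct Shapley formula:

```python
import itertools
import math

import numpy as np

rng = np.random.default_rng(0)
n = 4
N = frozenset(range(n))

# A random coalitional game v : 2^N -> R (purely hypothetical).
v = {frozenset(S): rng.normal()
     for r in range(n + 1) for S in itertools.combinations(range(n), r)}

def shapley(v, n):
    """Direct Shapley formula:
    phi_i = sum over S not containing i of |S|!(n-|S|-1)!/n! (v(S+i) - v(S))."""
    phi = np.zeros(n)
    for i in range(n):
        for S, val in v.items():
            if i in S:
                continue
            s = len(S)
            phi[i] += (math.factorial(s) * math.factorial(n - s - 1)
                       / math.factorial(n)) * (v[S | {i}] - val)
    return phi

# The weighted regression over the subsets with 1 <= s <= n - 1, using
# q(s) = 1 / C(n-2, s-1) (i.e. c = 1), constrained by w(N) = v(N) - v(empty).
subsets = [frozenset(S) for r in range(1, n)
           for S in itertools.combinations(range(n), r)]
Z = np.array([[1.0 if i in S else 0.0 for i in range(n)] for S in subsets])
q = np.array([1.0 / math.comb(n - 2, len(S) - 1) for S in subsets])
V = np.array([v[S] for S in subsets])

# KKT system of the constrained weighted least squares.
A = (Z.T * q) @ Z
b = Z.T @ (q * V)
KKT = np.block([[2 * A, np.ones((n, 1))], [np.ones((1, n)), np.zeros((1, 1))]])
rhs = np.append(2 * b, v[N] - v[frozenset()])
w = np.linalg.solve(KKT, rhs)[:n]

print(np.allclose(w, shapley(v, n)))  # the regression recovers the Shapley values
```

The subsets ∅ and N are omitted from the objective: under the constraint their terms are constants, which is also why q(0) and q(n) never matter.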
+<p><strong>Remark</strong>. Don't worry about this specific choice of <span class="math inline">\(q(s)\)</span> when <span class="math inline">\(s = 0\)</span> or <span class="math inline">\(n\)</span>, because <span class="math inline">\(q(0)\)</span> and <span class="math inline">\(q(n)\)</span> do not appear on the right hand side of (-1). Therefore they can be defined to be of any value. A common convention of the binomial coefficients is to set <span class="math inline">\({\ell \choose k} = 0\)</span> if <span class="math inline">\(k &lt; 0\)</span> or <span class="math inline">\(k &gt; \ell\)</span>, in which case <span class="math inline">\(q(0) = q(n) = \infty\)</span>.</p>
+<p>In Lundberg-Lee (2017), <span class="math inline">\(c\)</span> is chosen to be <span class="math inline">\(1 / n\)</span>, see Theorem 2 there.</p>
+<p>In Charnes et al. (1988), the <span class="math inline">\(w_i\)</span>s defined in (-1) are called the generalised Shapley values.</p>
+<p><strong>Proof</strong>. The Lagrangian is</p>
+<p><span class="math display">\[L(w, \lambda) = \sum_{S \subset N} (v(S) - w(S))^2 q(s) - \lambda(w(N) - v(N) + v(\emptyset)).\]</span></p>
+<p>Setting <span class="math inline">\(\partial_{w_i} L(w, \lambda) = 0\)</span>, we have</p>
+<p><span class="math display">\[{1 \over 2} \lambda = \sum_{S \subset N: i \in S} (w(S) - v(S)) q(s). \qquad (0)\]</span></p>
+<p>Summing (0) over <span class="math inline">\(i\)</span> and dividing by <span class="math inline">\(n\)</span>, we have</p>
+<p><span class="math display">\[{1 \over 2} \lambda = {1 \over n} \sum_i \sum_{S: i \in S} (w(S) q(s) - v(S) q(s)). \qquad (1)\]</span></p>
+<p>We examine each of the two terms on the right hand side.</p>
+<p>Counting the terms involving <span class="math inline">\(w_i\)</span> and <span class="math inline">\(w_j\)</span> for <span class="math inline">\(j \neq i\)</span>, and using <span class="math inline">\(w(N) = v(N) - v(\emptyset)\)</span> we have</p>
+<p><span class="math display">\[\begin{aligned}
+&amp;\sum_{S \subset N: i \in S} w(S) q(s) \\
+&amp;= \sum_{s = 1}^n {n - 1 \choose s - 1} q(s) w_i + \sum_{j \neq i}\sum_{s = 2}^n {n - 2 \choose s - 2} q(s) w_j \\
+&amp;= q(1) w_i + \sum_{s = 2}^n q(s) \left({n - 1 \choose s - 1} w_i + \sum_{j \neq i} {n - 2 \choose s - 2} w_j\right) \\
+&amp;= q(1) w_i + \sum_{s = 2}^n \left({n - 2 \choose s - 1} w_i + {n - 2 \choose s - 2} (v(N) - v(\emptyset))\right) q(s) \\
+&amp;= \sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s) w_i + \sum_{s = 2}^n {n - 2 \choose s - 2} q(s) (v(N) - v(\emptyset)). \qquad (2)
+\end{aligned}\]</span></p>
+<p>Summing (2) over <span class="math inline">\(i\)</span>, we obtain</p>
+<p><span class="math display">\[\begin{aligned}
+&amp;\sum_i \sum_{S: i \in S} w(S) q(s)\\
+&amp;= \sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s) (v(N) - v(\emptyset)) + \sum_{s = 2}^n n {n - 2 \choose s - 2} q(s) (v(N) - v(\emptyset))\\
+&amp;= \sum_{s = 1}^n s{n - 1 \choose s - 1} q(s) (v(N) - v(\emptyset)). \qquad (3)
+\end{aligned}\]</span></p>
+<p>For the second term in (1), we have</p>
+<p><span class="math display">\[\sum_i \sum_{S: i \in S} v(S) q(s) = \sum_{S \subset N} s v(S) q(s). \qquad (4)\]</span></p>
+<p>Plugging (3) and (4) into (1), we have</p>
+<p><span class="math display">\[{1 \over 2} \lambda = {1 \over n} \left(\sum_{S \subset N} s q(s) v(S) - \sum_{s = 1}^n s {n - 1 \choose s - 1} q(s) (v(N) - v(\emptyset))\right). \qquad (5)\]</span></p>
+<p>Plugging (2) and (5) into (0) and solving for <span class="math inline">\(w_i\)</span>, we have</p>
+<p><span class="math display">\[w_i = {1 \over n} (v(N) - v(\emptyset)) + \left(\sum_{s = 1}^{n - 1} {n - 2 \choose s - 1} q(s) \right)^{-1} \left( \sum_{S: i \in S} q(s) v(S) - {1 \over n} \sum_{S \subset N} s q(s) v(S) \right). \qquad (6)\]</span></p>
+<p>By splitting all subsets of <span class="math inline">\(N\)</span> into ones that contain <span class="math inline">\(i\)</span> and ones that do not and pair them up, we have</p>
+<p><span class="math display">\[\sum_{S \subset N} s q(s) v(S) = \sum_{S: i \in S} (s q(s) v(S) + (s - 1) q(s - 1) v(S - i)).\]</span></p>
+<p>Plugging this back into (6) we get the desired result. <span class="math inline">\(\square\)</span></p>
+<h2 id="shap">SHAP</h2>
+<p>The paper that coined the term "SHAP values" (Lundberg-Lee 2017) is not clear in its definition of the "SHAP values" and their relation to LIME, so the following is my interpretation of their interpretation model, which coincides with a model studied in Strumbelj-Kononenko 2014.</p>
+<p>Recall that we want to calculate feature contributions to a model <span class="math inline">\(f\)</span> at a sample <span class="math inline">\(x\)</span>.</p>
+<p>Let <span class="math inline">\(\mu\)</span> be a probability density function over the input space <span class="math inline">\(X = X_1 \times ... \times X_n\)</span>. A natural choice would be the density that generates the data, or one that approximates it (e.g. the empirical distribution).</p>
+<p>The feature contribution (SHAP value) is thus defined as the Shapley value <span class="math inline">\(\phi_i(v)\)</span>, where</p>
+<p><span class="math display">\[v(S) = \mathbb E_{z \sim \mu} (f(z) | z_S = x_S). \qquad (7)\]</span></p>
+<p>So it is a conditional expectation where <span class="math inline">\(z_i\)</span> is clamped for <span class="math inline">\(i \in S\)</span>. In fact, the definition of feature contributions in this form predates Lundberg-Lee 2017. For example, it can be found in Strumbelj-Kononenko 2014.</p>
+<p>One simplification is to assume the <span class="math inline">\(n\)</span> features are independent, thus <span class="math inline">\(\mu = \mu_1 \times \mu_2 \times ... \times \mu_n\)</span>. In this case, (7) becomes</p>
+<p><span class="math display">\[v(S) = \mathbb E_{z_{N \setminus S} \sim \mu_{N \setminus S}} f(x_S, z_{N \setminus S}). \qquad (8)\]</span></p>
+<p>For example, Strumbelj-Kononenko (2010) considers this scenario where <span class="math inline">\(\mu\)</span> is the uniform distribution over <span class="math inline">\(X\)</span>, see Definition 4 there.</p>
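Under the independence assumption, (8) can be estimated by plain Monte Carlo: clamp the features in S to x and average over background samples standing in for mu. A hypothetical sketch (the model f, the background data, and x are all made up):

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical black-box model, background data (standing in for mu), and sample x.
f = lambda Z: np.sin(Z[:, 0]) + Z[:, 1] * Z[:, 2]
background = rng.normal(size=(1000, 3))   # i.i.d. features, so (8) applies
x = np.array([0.5, -1.0, 2.0])            # the sample being explained

def v(S):
    """Monte Carlo estimate of (8): clamp features in S to x, draw the rest from mu."""
    Z = background.copy()
    idx = list(S)
    Z[:, idx] = x[idx]
    return f(Z).mean()

print(v({0, 1}))  # the value of the coalition {0, 1}
```

The resulting `v` can then be fed into any computation of Shapley values. Clamping all features gives back `f(x)` exactly, and clamping none gives the background average.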
+<p>A further simplification is model linearity, which means <span class="math inline">\(f\)</span> is linear. In this case, (8) becomes</p>
+<p><span class="math display">\[v(S) = f(x_S, \mathbb E_{\mu_{N \setminus S}} z_{N \setminus S}). \qquad (9)\]</span></p>
+<p>It is worth noting that to make the modified LIME model considered in the previous section fall under the linear SHAP framework (9), we need to make two further specialisations, the first is rather cosmetic: we need to change the definition of <span class="math inline">\(h_x(S)\)</span> to</p>
+<p><span class="math display">\[(h_x(S))_i =
+\begin{cases}
+x_i, &amp; \text{if }i \in S; \\
+\mathbb E_{\mu_i} z_i, &amp; \text{otherwise.}
+\end{cases}\]</span></p>
+<p>But we also need to boldly assume the original <span class="math inline">\(f\)</span> to be linear, which, in my view, defeats the purpose of interpretability, because linear models are interpretable by themselves.</p>
+<p>One may argue that perhaps we do not need linearity to define <span class="math inline">\(v(S)\)</span> as in (9). If we do so, however, then (9) loses mathematical meaning. A bigger question is: how effective is SHAP? An even bigger question: in general, how do we evaluate models of interpretation?</p>
+<h2 id="evaluating-shap">Evaluating SHAP</h2>
+<p>The quest of the SHAP paper can be decoupled into two independent components: showing the niceties of Shapley values and choosing the coalitional game <span class="math inline">\(v\)</span>.</p>
+<p>The SHAP paper argues that Shapley values <span class="math inline">\(\phi_i(v)\)</span> are a good measurement because they are the only values satisfying some nice properties, including the Efficiency property mentioned at the beginning of the post, invariance under permutation, and monotonicity; see the paragraph below Theorem 1 there, which refers to Theorem 2 of Young (1985).</p>
+<p>Indeed, both efficiency (the “additive feature attribution methods” in the paper) and monotonicity are meaningful when considering <span class="math inline">\(\phi_i(v)\)</span> as the feature contribution of the <span class="math inline">\(i\)</span>th feature.</p>
+<p>The question is thus reduced to the second component: what constitutes a nice choice of <span class="math inline">\(v\)</span>?</p>
+<p>The SHAP paper answers this question with three options of increasing simplification: (7)(8)(9) in the previous section of this post (corresponding to (9)(11)(12) in the paper). They are intuitive, but it will be interesting to see more concrete (or even mathematical) justifications of such choices.</p>
+<h2 id="references">References</h2>
+<ul>
+<li>Charnes, A., B. Golany, M. Keane, and J. Rousseau. “Extremal Principle Solutions of Games in Characteristic Function Form: Core, Chebychev and Shapley Value Generalizations.” In Econometrics of Planning and Efficiency, edited by Jati K. Sengupta and Gopal K. Kadekodi, 123–33. Dordrecht: Springer Netherlands, 1988. <a href="https://doi.org/10.1007/978-94-009-3677-5_7" class="uri">https://doi.org/10.1007/978-94-009-3677-5_7</a>.</li>
+<li>Lundberg, Scott, and Su-In Lee. “A Unified Approach to Interpreting Model Predictions.” ArXiv:1705.07874 [Cs, Stat], May 22, 2017. <a href="http://arxiv.org/abs/1705.07874" class="uri">http://arxiv.org/abs/1705.07874</a>.</li>
+<li>Ribeiro, Marco Tulio, Sameer Singh, and Carlos Guestrin. “‘Why Should I Trust You?’: Explaining the Predictions of Any Classifier.” ArXiv:1602.04938 [Cs, Stat], February 16, 2016. <a href="http://arxiv.org/abs/1602.04938" class="uri">http://arxiv.org/abs/1602.04938</a>.</li>
+<li>Shapley, L. S. “17. A Value for n-Person Games.” In Contributions to the Theory of Games (AM-28), Volume II, Vol. 2. Princeton: Princeton University Press, 1953. <a href="https://doi.org/10.1515/9781400881970-018" class="uri">https://doi.org/10.1515/9781400881970-018</a>.</li>
+<li>Strumbelj, Erik, and Igor Kononenko. “An Efficient Explanation of Individual Classifications Using Game Theory.” J. Mach. Learn. Res. 11 (March 2010): 1–18.</li>
+<li>Strumbelj, Erik, and Igor Kononenko. “Explaining Prediction Models and Individual Predictions with Feature Contributions.” Knowledge and Information Systems 41, no. 3 (December 2014): 647–65. <a href="https://doi.org/10.1007/s10115-013-0679-x" class="uri">https://doi.org/10.1007/s10115-013-0679-x</a>.</li>
+<li>Young, H. P. “Monotonic Solutions of Cooperative Games.” International Journal of Game Theory 14, no. 2 (June 1, 1985): 65–72. <a href="https://doi.org/10.1007/BF01769885" class="uri">https://doi.org/10.1007/BF01769885</a>.</li>
+</ul>
+</body>
+</html>
+
+ </div>
+ <section id="isso-thread"></section>
+ </div>
+ </body>
+</html>
diff --git a/site-from-md/posts/2019-01-03-discriminant-analysis.html b/site-from-md/posts/2019-01-03-discriminant-analysis.html
new file mode 100644
index 0000000..c28df60
--- /dev/null
+++ b/site-from-md/posts/2019-01-03-discriminant-analysis.html
@@ -0,0 +1,177 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>Discriminant analysis</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script data-isso="/comments/"
+ data-isso-css="true"
+ data-isso-lang="en"
+ data-isso-reply-to-self="false"
+ data-isso-require-author="true"
+ data-isso-require-email="true"
+ data-isso-max-comments-top="10"
+ data-isso-max-comments-nested="5"
+ data-isso-reveal-on-click="5"
+ data-isso-avatar="true"
+ data-isso-avatar-bg="#f0f0f0"
+ data-isso-avatar-fg="#9abf88 #5698c4 #e279a3 #9163b6 ..."
+ data-isso-vote="true"
+ data-vote-levels=""
+ src="/comments/js/embed.min.js"></script>
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="../blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="../index.html">About</a><a href="../postlist.html">All posts</a><a href="../blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <h2> Discriminant analysis </h2>
+ <p>Posted on 2019-01-03 | <a href="/posts/2019-01-03-discriminant-analysis.html#isso-thread">Comments</a> </p>
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/MathJax.js?config=TeX-AMS_CHTML-full" type="text/javascript"></script>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<nav id="TOC">
+<ul>
+<li><a href="#theory">Theory</a><ul>
+<li><a href="#qda">QDA</a></li>
+<li><a href="#vanilla-lda">Vanilla LDA</a></li>
+<li><a href="#nearest-neighbour-classifier">Nearest neighbour classifier</a></li>
+<li><a href="#dimensionality-reduction">Dimensionality reduction</a></li>
+<li><a href="#fisher-discriminant-analysis">Fisher discriminant analysis</a></li>
+<li><a href="#linear-model">Linear model</a></li>
+</ul></li>
+<li><a href="#implementation">Implementation</a><ul>
+<li><a href="#fun-facts-about-lda">Fun facts about LDA</a></li>
+</ul></li>
+</ul>
+</nav>
+<p>In this post I talk about the theory and implementation of linear and quadratic discriminant analysis, classical methods in statistical learning.</p>
+<p><strong>Acknowledgement</strong>. Various sources were of great help to my understanding of the subject, including Chapter 4 of <a href="https://web.stanford.edu/~hastie/ElemStatLearn/">The Elements of Statistical Learning</a>, <a href="http://cs229.stanford.edu/notes/cs229-notes2.pdf">Stanford CS229 Lecture notes</a>, and <a href="https://github.com/scikit-learn/scikit-learn/blob/7389dba/sklearn/discriminant_analysis.py">the scikit-learn code</a>. Research was done while working at KTH mathematics department.</p>
+<p><em>If you are reading on a mobile device, you may need to “request desktop site” for the equations to be properly displayed. This post is licensed under CC BY-SA and GNU FDL.</em></p>
+<h2 id="theory">Theory</h2>
+<p>Quadratic discriminant analysis (QDA) is a classical classification algorithm. It assumes that the data is generated by Gaussian distributions, where each class has its own mean and covariance.</p>
+<p><span class="math display">\[(x | y = i) \sim N(\mu_i, \Sigma_i).\]</span></p>
+<p>It also assumes a categorical class prior:</p>
+<p><span class="math display">\[\mathbb P(y = i) = \pi_i\]</span></p>
+<p>The log-likelihood is thus</p>
+<p><span class="math display">\[\begin{aligned}
+\log \mathbb P(y = i | x) &amp;= \log \mathbb P(x | y = i) + \log \mathbb P(y = i) + C\\
+&amp;= - {1 \over 2} \log \det \Sigma_i - {1 \over 2} (x - \mu_i)^T \Sigma_i^{-1} (x - \mu_i) + \log \pi_i + C&#39;, \qquad (0)
+\end{aligned}\]</span></p>
+<p>where <span class="math inline">\(C\)</span> and <span class="math inline">\(C&#39;\)</span> are constants.</p>
+<p>Thus the prediction is done by taking argmax of the above formula.</p>
+<p>In training, let <span class="math inline">\(X\)</span>, <span class="math inline">\(y\)</span> be the input data, where <span class="math inline">\(X\)</span> is of shape <span class="math inline">\(m \times n\)</span>, and <span class="math inline">\(y\)</span> of shape <span class="math inline">\(m\)</span>. We adopt the convention that each row of <span class="math inline">\(X\)</span> is a sample <span class="math inline">\(x^{(i)T}\)</span>. So there are <span class="math inline">\(m\)</span> samples and <span class="math inline">\(n\)</span> features. Denote by <span class="math inline">\(m_i = \#\{j: y_j = i\}\)</span> the number of samples in class <span class="math inline">\(i\)</span>. Let <span class="math inline">\(n_c\)</span> be the number of classes.</p>
+<p>We estimate <span class="math inline">\(\mu_i\)</span> by the sample means, and <span class="math inline">\(\pi_i\)</span> by the frequencies:</p>
+<p><span class="math display">\[\begin{aligned}
+\mu_i &amp;:= {1 \over m_i} \sum_{j: y_j = i} x^{(j)}, \\
+\pi_i &amp;:= \mathbb P(y = i) = {m_i \over m}.
+\end{aligned}\]</span></p>
+<p>Linear discriminant analysis (LDA) is a specialisation of QDA: it assumes all classes share the same covariance, i.e. <span class="math inline">\(\Sigma_i = \Sigma\)</span> for all <span class="math inline">\(i\)</span>.</p>
+<p>Gaussian Naive Bayes is a different specialisation of QDA: it assumes that all <span class="math inline">\(\Sigma_i\)</span> are diagonal, since all the features are assumed to be independent.</p>
+<h3 id="qda">QDA</h3>
+<p>We look at QDA.</p>
+<p>We estimate <span class="math inline">\(\Sigma_i\)</span> by the sample covariance:</p>
+<p><span class="math display">\[\begin{aligned}
+\Sigma_i &amp;= {1 \over m_i - 1} \sum_{j: y_j = i} \hat x^{(j)} \hat x^{(j)T}.
+\end{aligned}\]</span></p>
+<p>where <span class="math inline">\(\hat x^{(j)} = x^{(j)} - \mu_{y_j}\)</span> are the centred <span class="math inline">\(x^{(j)}\)</span>. Plugging this into (0) we are done.</p>
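The estimates above translate directly into code. Here is a minimal, hypothetical NumPy sketch of QDA training and prediction via (0), favouring clarity over performance:

```python
import numpy as np

def fit_qda(X, y):
    """Estimate (mu_i, Sigma_i, pi_i) from the training data."""
    classes = np.unique(y)
    mus, sigmas, pis = [], [], []
    for c in classes:
        Xc = X[y == c]
        mu = Xc.mean(axis=0)
        R = Xc - mu
        mus.append(mu)
        sigmas.append(R.T @ R / (len(Xc) - 1))   # Bessel-corrected covariance
        pis.append(len(Xc) / len(X))
    return classes, np.array(mus), np.array(sigmas), np.array(pis)

def predict_qda(X, classes, mus, sigmas, pis):
    """Argmax over classes of the log-posterior (0)."""
    scores = []
    for mu, sigma, pi in zip(mus, sigmas, pis):
        diff = X - mu
        _, logdet = np.linalg.slogdet(sigma)
        # np.linalg.inv raises LinAlgError on a singular Sigma_i.
        maha = np.einsum('ij,jk,ik->i', diff, np.linalg.inv(sigma), diff)
        scores.append(-0.5 * logdet - 0.5 * maha + np.log(pi))
    return classes[np.argmax(scores, axis=0)]

# Two well-separated Gaussian blobs: training accuracy should be essentially 1.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.repeat([0, 1], 50)
pred = predict_qda(X, *fit_qda(X, y))
print((pred == y).mean())
```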
+<p>There are two problems that can break the algorithm. First, if one of the <span class="math inline">\(m_i\)</span> is <span class="math inline">\(1\)</span>, then <span class="math inline">\(\Sigma_i\)</span> is ill-defined. Second, one of <span class="math inline">\(\Sigma_i\)</span>'s might be singular.</p>
+<p>In either case, there is no way around it, and the implementation should throw an exception.</p>
+<p>This won't be a problem for LDA, though, unless there is only one sample per class.</p>
+<h3 id="vanilla-lda">Vanilla LDA</h3>
+<p>Now let us look at LDA.</p>
+<p>Since all classes share the same covariance, we estimate <span class="math inline">\(\Sigma\)</span> using the pooled sample covariance</p>
+<p><span class="math display">\[\begin{aligned}
+\Sigma &amp;= {1 \over m - n_c} \sum_j \hat x^{(j)} \hat x^{(j)T},
+\end{aligned}\]</span></p>
+<p>where <span class="math inline">\(\hat x^{(j)} = x^{(j)} - \mu_{y_j}\)</span> and <span class="math inline">\({1 \over m - n_c}\)</span> comes from <a href="https://en.wikipedia.org/wiki/Bessel%27s_correction">Bessel's Correction</a>.</p>
+<p>Let us write down the decision function (0). We can remove the first term on the right hand side, since all <span class="math inline">\(\Sigma_i\)</span> are the same, and we only care about argmax of that equation. Thus it becomes</p>
+<p><span class="math display">\[- {1 \over 2} (x - \mu_i)^T \Sigma^{-1} (x - \mu_i) + \log\pi_i. \qquad (1)\]</span></p>
+<p>Notice that we just avoided the problem of computing <span class="math inline">\(\log \det \Sigma\)</span> when <span class="math inline">\(\Sigma\)</span> is singular.</p>
+<p>But how about <span class="math inline">\(\Sigma^{-1}\)</span>?</p>
+<p>We sidestep this problem by using the pseudoinverse of <span class="math inline">\(\Sigma\)</span> instead. This can be seen as applying a linear transformation to <span class="math inline">\(X\)</span> to turn its covariance matrix into the identity, and thus the model becomes a sort of nearest neighbour classifier.</p>
+<h3 id="nearest-neighbour-classifier">Nearest neighbour classifier</h3>
+<p>More specifically, we want to transform the first term of (0) to a norm to get a classifier based on nearest neighbour modulo <span class="math inline">\(\log \pi_i\)</span>:</p>
+<p><span class="math display">\[- {1 \over 2} \|A(x - \mu_i)\|^2 + \log\pi_i\]</span></p>
+<p>To compute <span class="math inline">\(A\)</span>, we denote</p>
+<p><span class="math display">\[X_c = X - M,\]</span></p>
+<p>where the <span class="math inline">\(i\)</span>th row of <span class="math inline">\(M\)</span> is <span class="math inline">\(\mu_{y_i}^T\)</span>, the mean of the class <span class="math inline">\(x_i\)</span> belongs to, so that <span class="math inline">\(\Sigma = {1 \over m - n_c} X_c^T X_c\)</span>.</p>
+<p>Let</p>
+<p><span class="math display">\[{1 \over \sqrt{m - n_c}} X_c = U_x \Sigma_x V_x^T\]</span></p>
+<p>be the SVD of <span class="math inline">\({1 \over \sqrt{m - n_c}}X_c\)</span>. Let <span class="math inline">\(D_x = \text{diag} (s_1, ..., s_r)\)</span> be the diagonal matrix with all the nonzero singular values, and rewrite <span class="math inline">\(V_x\)</span> as an <span class="math inline">\(n \times r\)</span> matrix consisting of the first <span class="math inline">\(r\)</span> columns of <span class="math inline">\(V_x\)</span>.</p>
+<p>Then with an abuse of notation, the pseudoinverse of <span class="math inline">\(\Sigma\)</span> is</p>
+<p><span class="math display">\[\Sigma^{-1} = V_x D_x^{-2} V_x^T.\]</span></p>
+<p>So we just need to make <span class="math inline">\(A = D_x^{-1} V_x^T\)</span>. When it comes to prediction, just transform <span class="math inline">\(x\)</span> with <span class="math inline">\(A\)</span>, and find the nearest centroid <span class="math inline">\(A \mu_i\)</span> (again, modulo <span class="math inline">\(\log \pi_i\)</span>) and label the input with <span class="math inline">\(i\)</span>.</p>
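A hypothetical sketch of this sphered nearest-centroid formulation (function names are made up; this is not scikit-learn's implementation):

```python
import numpy as np

def fit_lda_sphered(X, y):
    """Nearest-centroid LDA: compute the sphering transform A = D_x^{-1} V_x^T."""
    classes = np.unique(y)
    mus = np.array([X[y == c].mean(axis=0) for c in classes])
    pis = np.array([(y == c).mean() for c in classes])
    Xc = X - mus[np.searchsorted(classes, y)]      # centre by each sample's class mean
    m, n_c = len(X), len(classes)
    _, s, Vt = np.linalg.svd(Xc / np.sqrt(m - n_c), full_matrices=False)
    r = int((s > 1e-12).sum())                     # keep only nonzero singular values
    A = Vt[:r] / s[:r, None]                       # A = D_x^{-1} V_x^T
    return classes, A, mus, pis

def predict_lda_sphered(X, classes, A, mus, pis):
    """Nearest transformed centroid, modulo log pi_i, as in (1)."""
    d2 = (((X @ A.T)[:, None, :] - (mus @ A.T)[None, :, :]) ** 2).sum(-1)
    return classes[(-0.5 * d2 + np.log(pis)).argmax(axis=1)]

# Demo on two well-separated classes.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
y = np.repeat([0, 1], 50)
classes, A, mus, pis = fit_lda_sphered(X, y)
pred = predict_lda_sphered(X, classes, A, mus, pis)
print((pred == y).mean())
```

Discarding the zero singular values is exactly the pseudoinverse trick: the transform only acts on the column space of <code>Xc</code>.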
+<h3 id="dimensionality-reduction">Dimensionality reduction</h3>
+<p>We can further simplify the prediction by dimensionality reduction. Assume <span class="math inline">\(n_c \le n\)</span>. Then the centroids span an affine space of dimension <span class="math inline">\(p\)</span> which is at most <span class="math inline">\(n_c - 1\)</span>. So what we can do is to project both the transformed sample <span class="math inline">\(Ax\)</span> and centroids <span class="math inline">\(A\mu_i\)</span> to the linear subspace parallel to the affine space, and do the nearest neighbour classification there.</p>
+<p>So we can perform SVD on the matrix <span class="math inline">\((M - \bar x) V_x D_x^{-1}\)</span> where <span class="math inline">\(\bar x\)</span>, a row vector, is the sample mean of all data i.e. average of rows of <span class="math inline">\(X\)</span>:</p>
+<p><span class="math display">\[(M - \bar x) V_x D_x^{-1} = U_m \Sigma_m V_m^T.\]</span></p>
+<p>Again, we let <span class="math inline">\(V_m\)</span> be the <span class="math inline">\(r \times p\)</span> matrix obtained by keeping the first <span class="math inline">\(p\)</span> columns of <span class="math inline">\(V_m\)</span>.</p>
+<p>The projection operator is thus <span class="math inline">\(V_m\)</span>. And so the final transformation is <span class="math inline">\(V_m^T D_x^{-1} V_x^T\)</span>.</p>
+<p>There is no reason to stop here, and we can set <span class="math inline">\(p\)</span> even smaller, which will result in a lossy compression / regularisation equivalent to doing <a href="https://en.wikipedia.org/wiki/Principal_component_analysis">principal component analysis</a> on <span class="math inline">\((M - \bar x) V_x D_x^{-1}\)</span>.</p>
+<p>Note that as of 2019-01-04, in the <a href="https://github.com/scikit-learn/scikit-learn/blob/master/sklearn/discriminant_analysis.py">scikit-learn implementation of LDA</a>, the prediction is done without any lossy compression, even if the parameter <code>n_components</code> is set to be smaller than the dimension of the affine space spanned by the centroids. In other words, the prediction does not change regardless of <code>n_components</code>.</p>
+<h3 id="fisher-discriminant-analysis">Fisher discriminant analysis</h3>
+<p>The Fisher discriminant analysis involves finding an <span class="math inline">\(n\)</span>-dimensional vector <span class="math inline">\(a\)</span> that maximises between-class covariance with respect to within-class covariance:</p>
+<p><span class="math display">\[{a^T M_c^T M_c a \over a^T X_c^T X_c a},\]</span></p>
+<p>where <span class="math inline">\(M_c = M - \bar x\)</span> is the centred sample mean matrix.</p>
+<p>As it turns out, this is (almost) equivalent to the derivation above, modulo a constant. In particular, <span class="math inline">\(a = c V_x D_x^{-1} V_m\)</span> where <span class="math inline">\(p = 1\)</span> for arbitrary constant <span class="math inline">\(c\)</span>.</p>
+<p>To see this, we can first multiply the denominator with a constant <span class="math inline">\({1 \over m - n_c}\)</span> so that the matrix in the denominator becomes the covariance estimate <span class="math inline">\(\Sigma\)</span>.</p>
+<p>We decompose <span class="math inline">\(a\)</span>: <span class="math inline">\(a = V_x D_x^{-1} b + \tilde V_x \tilde b\)</span>, where <span class="math inline">\(\tilde V_x\)</span> consists of column vectors orthogonal to the column space of <span class="math inline">\(V_x\)</span>.</p>
+<p>We ignore the second term in the decomposition. In other words, we only consider <span class="math inline">\(a\)</span> in the column space of <span class="math inline">\(V_x\)</span>.</p>
+<p>Then the problem is to find an <span class="math inline">\(r\)</span>-dimensional vector <span class="math inline">\(b\)</span> to maximise</p>
+<p><span class="math display">\[{b^T (M_c V_x D_x^{-1})^T (M_c V_x D_x^{-1}) b \over b^T b}.\]</span></p>
+<p>This is the problem of principal component analysis, and so <span class="math inline">\(b\)</span> is the first column of <span class="math inline">\(V_m\)</span>.</p>
+<p>Therefore, the solution to Fisher discriminant analysis is <span class="math inline">\(a = c V_x D_x^{-1} V_m\)</span> with <span class="math inline">\(p = 1\)</span>.</p>
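This near-equivalence can be checked numerically. With hypothetical well-conditioned data (names made up), the direction from the SVD construction with p = 1 matches the maximiser of the Rayleigh quotient up to scale:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_c, per = 3, 3, 20
centres = rng.normal(scale=3.0, size=(n_c, n))
X = np.vstack([rng.normal(c, 1.0, size=(per, n)) for c in centres])
y = np.repeat(np.arange(n_c), per)

mus = np.array([X[y == c].mean(axis=0) for c in range(n_c)])
M = mus[y]
Xc, Mc = X - M, M - X.mean(axis=0)
m = len(X)

# Route 1: the SVD construction of the post with p = 1.
_, s, Vxt = np.linalg.svd(Xc / np.sqrt(m - n_c), full_matrices=False)
Vx, Dx_inv = Vxt.T, np.diag(1.0 / s)
_, _, Vmt = np.linalg.svd(Mc @ Vx @ Dx_inv, full_matrices=False)
a_svd = Vx @ Dx_inv @ Vmt[0]              # a = V_x D_x^{-1} (first column of V_m)

# Route 2: maximise the Rayleigh quotient directly, i.e. take the top
# eigenvector of (Xc^T Xc)^{-1} Mc^T Mc.
W = np.linalg.solve(Xc.T @ Xc, Mc.T @ Mc)
vals, vecs = np.linalg.eig(W)
a_eig = vecs[:, np.argmax(vals.real)].real

cos = a_svd @ a_eig / (np.linalg.norm(a_svd) * np.linalg.norm(a_eig))
print(abs(cos))                           # close to 1: same direction up to scale
```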
+<h3 id="linear-model">Linear model</h3>
+<p>The model is called linear discriminant analysis because it is a linear model. To see this, let <span class="math inline">\(B = V_m^T D_x^{-1} V_x^T\)</span> be the matrix of transformation. Now we are comparing</p>
+<p><span class="math display">\[- {1 \over 2} \| B x - B \mu_k\|^2 + \log \pi_k\]</span></p>
+<p>across all <span class="math inline">\(k\)</span>s. Expanding the norm and removing the common term <span class="math inline">\(\|B x\|^2\)</span>, we see a linear form:</p>
+<p><span class="math display">\[\mu_k^T B^T B x - {1 \over 2} \|B \mu_k\|^2 + \log\pi_k\]</span></p>
+<p>So the prediction for <span class="math inline">\(X_{\text{new}}\)</span> is</p>
+<p><span class="math display">\[\text{argmax}_{\text{axis}=0} \left(K B^T B X_{\text{new}}^T - {1 \over 2} \|K B^T\|_{\text{axis}=1}^2 + \log \pi\right)\]</span></p>
+<p>where the <span class="math inline">\(k\)</span>th row of <span class="math inline">\(K\)</span> is <span class="math inline">\(\mu_k^T\)</span>; thus the decision boundaries are linear.</p>
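The linearity can be verified concretely: with an arbitrary (hypothetical) transformation B, centroid matrix K, and priors, the distance form and the linear form always select the same class:

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 3, 4                               # hypothetical feature / class counts
B = rng.normal(size=(2, n))               # some transformation matrix
K = rng.normal(size=(k, n))               # row j is the centroid mu_j^T
log_pi = np.log(np.full(k, 1.0 / k))
X_new = rng.normal(size=(5, n))

BX, BK = X_new @ B.T, K @ B.T

# Distance form: -1/2 ||B x - B mu_j||^2 + log pi_j
dist_scores = -0.5 * ((BX[:, None, :] - BK[None, :, :]) ** 2).sum(-1) + log_pi

# Linear form: mu_j^T B^T B x - 1/2 ||B mu_j||^2 + log pi_j
lin_scores = X_new @ (K @ B.T @ B).T - 0.5 * (BK ** 2).sum(1) + log_pi

# The two differ only by the per-sample constant -1/2 ||B x||^2,
# so they always pick the same class.
print((dist_scores.argmax(1) == lin_scores.argmax(1)).all())
```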
+<p>This is how scikit-learn implements LDA, by inheriting from <code>LinearClassifierMixin</code> and redirecting the classification there.</p>
+<h2 id="implementation">Implementation</h2>
+<p>This is where things get interesting. How do I validate my understanding of the theory? By implementing and testing the algorithm.</p>
+<p>I try to implement it as close as possible to the natural language / mathematical descriptions of the model, which means clarity over performance.</p>
+<p>How about testing? Numerical experiments are harder to test than combinatorial / discrete algorithms in general because the output is less verifiable by hand. My shortcut solution to this problem is to test against output from the scikit-learn package.</p>
+<p>It turned out to be harder than expected, as I had to dig into the code of scikit-learn whenever the outputs did not match. Their code is quite well-written though.</p>
+<p>The result is <a href="https://github.com/ycpei/machine-learning/tree/master/discriminant-analysis">here</a>.</p>
+<h3 id="fun-facts-about-lda">Fun facts about LDA</h3>
+<p>One property that can be used to test the LDA implementation is the fact that the scatter matrix <span class="math inline">\(B(X - \bar x)^T (X - \bar x) B^T\)</span> of the transformed centred sample is diagonal.</p>
+<p>This can be derived by using another fun fact that the sum of the in-class scatter matrix and the between-class scatter matrix is the sample scatter matrix:</p>
+<p><span class="math display">\[X_c^T X_c + M_c^T M_c = (X - \bar x)^T (X - \bar x) = (X_c + M_c)^T (X_c + M_c).\]</span></p>
+<p>The verification is not very hard and left as an exercise.</p>
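For instance, the scatter decomposition can be checked numerically with hypothetical data (the cross terms vanish because within each class the residuals sum to zero):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(30, 4))
y = np.repeat([0, 1, 2], 10)              # three hypothetical classes

mus = np.array([X[y == c].mean(axis=0) for c in range(3)])
M = mus[y]                                # row j is the mean of sample j's class
xbar = X.mean(axis=0)
Xc, Mc = X - M, M - xbar                  # within-class and between-class parts

# In-class scatter + between-class scatter = total scatter.
total = (X - xbar).T @ (X - xbar)
print(np.allclose(Xc.T @ Xc + Mc.T @ Mc, total))
```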
+</body>
+</html>
+
+ </div>
+ <section id="isso-thread"></section>
+ </div>
+ </body>
+</html>
diff --git a/site-from-md/posts/2019-02-14-raise-your-elbo.html b/site-from-md/posts/2019-02-14-raise-your-elbo.html
new file mode 100644
index 0000000..a40ede8
--- /dev/null
+++ b/site-from-md/posts/2019-02-14-raise-your-elbo.html
@@ -0,0 +1,562 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>Raise your ELBO</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script data-isso="/comments/"
+ data-isso-css="true"
+ data-isso-lang="en"
+ data-isso-reply-to-self="false"
+ data-isso-require-author="true"
+ data-isso-require-email="true"
+ data-isso-max-comments-top="10"
+ data-isso-max-comments-nested="5"
+ data-isso-reveal-on-click="5"
+ data-isso-avatar="true"
+ data-isso-avatar-bg="#f0f0f0"
+ data-isso-avatar-fg="#9abf88 #5698c4 #e279a3 #9163b6 ..."
+ data-isso-vote="true"
+ data-vote-levels=""
+ src="/comments/js/embed.min.js"></script>
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="../blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="../index.html">About</a><a href="../postlist.html">All posts</a><a href="../blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <h2> Raise your ELBO </h2>
+ <p>Posted on 2019-02-14 | <a href="/posts/2019-02-14-raise-your-elbo.html#isso-thread">Comments</a> </p>
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/MathJax.js?config=TeX-AMS_CHTML-full" type="text/javascript"></script>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<nav id="TOC">
+<ul>
+<li><a href="#kl-divergence-and-elbo">KL divergence and ELBO</a></li>
+<li><a href="#em">EM</a><ul>
+<li><a href="#gmm">GMM</a></li>
+<li><a href="#smm">SMM</a></li>
+<li><a href="#plsa">pLSA</a></li>
+<li><a href="#hmm">HMM</a></li>
+</ul></li>
+<li><a href="#fully-bayesian-em-mfa">Fully Bayesian EM / MFA</a><ul>
+<li><a href="#application-to-mixture-models">Application to mixture models</a></li>
+<li><a href="#fully-bayesian-gmm">Fully Bayesian GMM</a></li>
+<li><a href="#lda">LDA</a></li>
+<li><a href="#dpmm">DPMM</a></li>
+</ul></li>
+<li><a href="#svi">SVI</a></li>
+<li><a href="#aevb">AEVB</a><ul>
+<li><a href="#vae">VAE</a></li>
+<li><a href="#fully-bayesian-aevb">Fully Bayesian AEVB</a></li>
+</ul></li>
+<li><a href="#references">References</a></li>
+</ul>
+</nav>
+<p>In this post I give an introduction to variational inference, which is about maximising the evidence lower bound (ELBO).</p>
+<p>I use a top-down approach, starting with the KL divergence and the ELBO, to lay the mathematical framework of all the models in this post.</p>
+<p>Then I define mixture models and the EM algorithm, with the Gaussian mixture model (GMM), probabilistic latent semantic analysis (pLSA) and the hidden Markov model (HMM) as examples.</p>
+<p>After that I present the fully Bayesian version of EM, also known as mean field approximation (MFA), and apply it to fully Bayesian mixture models, with fully Bayesian GMM (also known as variational GMM), latent Dirichlet allocation (LDA) and Dirichlet process mixture model (DPMM) as examples.</p>
+<p>Then I explain stochastic variational inference, a modification of EM and MFA to improve efficiency.</p>
+<p>Finally I talk about autoencoding variational Bayes (AEVB), a Monte-Carlo + neural network approach to raising the ELBO, exemplified by the variational autoencoder (VAE). I also show its fully Bayesian version.</p>
+<p><strong>Acknowledgement</strong>. The following texts and resources were illuminating during the writing of this post: the Stanford CS228 notes (<a href="https://ermongroup.github.io/cs228-notes/inference/variational/">1</a>,<a href="https://ermongroup.github.io/cs228-notes/learning/latent/">2</a>), the <a href="https://www.cs.tau.ac.il/~rshamir/algmb/presentations/EM-BW-Ron-16%20.pdf">Tel Aviv Algorithms in Molecular Biology slides</a> (clear explanations of the connection between EM and Baum-Welch), Chapter 10 of <a href="https://www.springer.com/us/book/9780387310732">Bishop's book</a> (brilliant introduction to variational GMM), Section 2.5 of <a href="http://cs.brown.edu/~sudderth/papers/sudderthPhD.pdf">Sudderth's thesis</a> and <a href="https://metacademy.org">metacademy</a>. Also thanks to Josef Lindman Hörnlund for discussions. The research was done while working at KTH mathematics department.</p>
+<p><em>If you are reading on a mobile device, you may need to "request desktop site" for the equations to be properly displayed. This post is licensed under CC BY-SA and GNU FDL.</em></p>
+<h2 id="kl-divergence-and-elbo">KL divergence and ELBO</h2>
+<p>Let <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> be two probability measures. The Kullback-Leibler (KL) divergence is defined as</p>
+<p><span class="math display">\[D(q||p) = E_q \log{q \over p}.\]</span></p>
+<p>It achieves minimum <span class="math inline">\(0\)</span> when <span class="math inline">\(p = q\)</span>.</p>
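Both properties are easy to check numerically for discrete distributions (a sketch with made-up probability vectors):

```python
import numpy as np

def kl(q, p):
    """D(q || p) = E_q log(q / p) for discrete distributions q, p > 0."""
    q, p = np.asarray(q, float), np.asarray(p, float)
    return float(np.sum(q * np.log(q / p)))

q = [0.5, 0.3, 0.2]
p = [0.4, 0.4, 0.2]
assert kl(q, p) >= 0            # nonnegativity (Gibbs' inequality)
assert abs(kl(q, q)) < 1e-12    # minimum 0 when p = q
```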
+<p>If <span class="math inline">\(p\)</span> can be further written as</p>
+<p><span class="math display">\[p(x) = {w(x) \over Z}, \qquad (0)\]</span></p>
+<p>where <span class="math inline">\(Z\)</span> is a normaliser, then</p>
+<p><span class="math display">\[\log Z = D(q||p) + L(w, q), \qquad(1)\]</span></p>
+<p>where <span class="math inline">\(L(w, q)\)</span> is called the evidence lower bound (ELBO), defined by</p>
+<p><span class="math display">\[L(w, q) = E_q \log{w \over q}. \qquad (1.25)\]</span></p>
+<p>From (1), we see that to minimise the nonnegative term <span class="math inline">\(D(q || p)\)</span>, one can maximise the ELBO.</p>
+<p>To this end, we can simply discard <span class="math inline">\(D(q || p)\)</span> in (1) and obtain:</p>
+<p><span class="math display">\[\log Z \ge L(w, q) \qquad (1.3)\]</span></p>
+<p>and keep in mind that the inequality becomes an equality when <span class="math inline">\(q = {w \over Z}\)</span>.</p>
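Identity (1) and inequality (1.3) can be verified directly on a toy discrete example (an illustration with an arbitrary unnormalised target and variational distribution):

```python
import numpy as np

# unnormalised discrete target w with normaliser Z, cf. equation (0)
w = np.array([2.0, 1.0, 1.0])
Z = w.sum()
p = w / Z
q = np.array([0.5, 0.25, 0.25])   # an arbitrary variational distribution

D = np.sum(q * np.log(q / p))     # D(q || p)
L = np.sum(q * np.log(w / q))     # ELBO L(w, q)
assert np.isclose(np.log(Z), D + L)   # identity (1)
assert np.log(Z) >= L                 # inequality (1.3)
```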
+<p>It is time to define the task of variational inference (VI), also known as variational Bayes (VB).</p>
+<p><strong>Definition</strong>. Variational inference is concerned with maximising the ELBO <span class="math inline">\(L(w, q)\)</span>.</p>
+<p>There are mainly two versions of VI, the half Bayesian and the fully Bayesian cases. Half Bayesian VI, to which expectation-maximisation algorithms (EM) apply, instantiates (1.3) with</p>
+<p><span class="math display">\[\begin{aligned}
+Z &amp;= p(x; \theta)\\
+w &amp;= p(x, z; \theta)\\
+q &amp;= q(z)
+\end{aligned}\]</span></p>
+<p>and the dummy variable <span class="math inline">\(x\)</span> in Equation (0) is substituted with <span class="math inline">\(z\)</span>.</p>
+<p>Fully Bayesian VI, often just called VI, has the following instantiations:</p>
+<p><span class="math display">\[\begin{aligned}
+Z &amp;= p(x) \\
+w &amp;= p(x, z, \theta) \\
+q &amp;= q(z, \theta)
+\end{aligned}\]</span></p>
+<p>and <span class="math inline">\(x\)</span> in Equation (0) is substituted with <span class="math inline">\((z, \theta)\)</span>.</p>
+<p>In both cases <span class="math inline">\(\theta\)</span> are parameters and <span class="math inline">\(z\)</span> are latent variables.</p>
+<p><strong>Remark on the naming of things</strong>. The term "variational" comes from the fact that we perform calculus of variations: maximise some functional (<span class="math inline">\(L(w, q)\)</span>) over a set of functions (<span class="math inline">\(q\)</span>). Note, however, that most VI / VB algorithms do not involve any techniques from the calculus of variations, but only use Jensen's inequality / the fact that <span class="math inline">\(D(q||p)\)</span> reaches its minimum when <span class="math inline">\(p = q\)</span>. By this reasoning, EM is also a kind of VI, even though in the literature VI often refers to its fully Bayesian version.</p>
+<h2 id="em">EM</h2>
+<p>To illustrate the EM algorithms, we first define the mixture model.</p>
+<p><strong>Definition (mixture model)</strong>. Given dataset <span class="math inline">\(x_{1 : m}\)</span>, we assume the data has some underlying latent variable <span class="math inline">\(z_{1 : m}\)</span> that may take a value from a finite set <span class="math inline">\(\{1, 2, ..., n_z\}\)</span>. Let <span class="math inline">\(p(z_{i}; \pi)\)</span> be categorically distributed according to the probability vector <span class="math inline">\(\pi\)</span>. That is, <span class="math inline">\(p(z_{i} = k; \pi) = \pi_k\)</span>. Also assume <span class="math inline">\(p(x_{i} | z_{i} = k; \eta) = p(x_{i}; \eta_k)\)</span>. Find <span class="math inline">\(\theta = (\pi, \eta)\)</span> that maximises the likelihood <span class="math inline">\(p(x_{1 : m}; \theta)\)</span>.</p>
+<p>Represented as a DAG (a.k.a. the plate notation), the model looks like this:</p>
+<p><img src="/assets/resources/mixture-model.png" style="width:250px" /></p>
+<p>where the boxes labelled <span class="math inline">\(m\)</span> mean repetition <span class="math inline">\(m\)</span> times, since there are <span class="math inline">\(m\)</span> independent pairs of <span class="math inline">\((x, z)\)</span>, and the same goes for <span class="math inline">\(\eta\)</span>.</p>
+<p>The direct maximisation</p>
+<p><span class="math display">\[\max_\theta \sum_i \log p(x_{i}; \theta) = \max_\theta \sum_i \log \int p(x_{i} | z_i; \theta) p(z_i; \theta) dz_i\]</span></p>
+<p>is hard because of the integral in the log.</p>
+<p>We can fit this problem in (1.3) by having <span class="math inline">\(Z = p(x_{1 : m}; \theta)\)</span> and <span class="math inline">\(w = p(z_{1 : m}, x_{1 : m}; \theta)\)</span>. The plan is to update <span class="math inline">\(\theta\)</span> repeatedly so that <span class="math inline">\(L(p(z, x; \theta_t), q(z))\)</span> is non-decreasing over time <span class="math inline">\(t\)</span>.</p>
+<p>Equation (1.3) at time <span class="math inline">\(t\)</span> for the <span class="math inline">\(i\)</span>th datapoint is</p>
+<p><span class="math display">\[\log p(x_{i}; \theta_t) \ge L(p(z_i, x_{i}; \theta_t), q(z_i)) \qquad (2)\]</span></p>
+<p>Each timestep consists of two steps, the E-step and the M-step.</p>
+<p>At E-step, we set</p>
+<p><span class="math display">\[q(z_{i}) = p(z_{i}|x_{i}; \theta_t), \]</span></p>
+<p>to turn the inequality into equality. We denote <span class="math inline">\(r_{ik} = q(z_i = k)\)</span> and call them responsibilities, so the posterior <span class="math inline">\(q(z_i)\)</span> is a categorical distribution with parameter <span class="math inline">\(r_i = r_{i, 1 : n_z}\)</span>.</p>
+<p>At M-step, we maximise <span class="math inline">\(\sum_i L(p(x_{i}, z_{i}; \theta), q(z_{i}))\)</span> over <span class="math inline">\(\theta\)</span>:</p>
+<p><span class="math display">\[\begin{aligned}
+\theta_{t + 1} &amp;= \text{argmax}_\theta \sum_i L(p(x_{i}, z_{i}; \theta), p(z_{i} | x_{i}; \theta_t)) \\
+&amp;= \text{argmax}_\theta \sum_i \mathbb E_{p(z_{i} | x_{i}; \theta_t)} \log p(x_{i}, z_{i}; \theta) \qquad (2.3)
+\end{aligned}\]</span></p>
+<p>So <span class="math inline">\(\sum_i L(p(x_{i}, z_{i}; \theta), q(z_i))\)</span> is non-decreasing at both the E-step and the M-step.</p>
+<p>We can see from this derivation that EM is half-Bayesian. The E-step is Bayesian because it computes the posterior of the latent variables, and the M-step is frequentist because it performs maximum likelihood estimation of <span class="math inline">\(\theta\)</span>.</p>
+<p>It is clear that the ELBO sum converges, as it is nondecreasing with an upper bound, but it is not clear whether the sum converges to the correct value, i.e. <span class="math inline">\(\max_\theta p(x_{1 : m}; \theta)\)</span>. In fact, EM is known to get stuck in local maxima sometimes.</p>
+<p>A different way of describing EM, which will be useful for the hidden Markov model, is:</p>
+<ul>
+<li><p>At E-step, one writes down the formula <span class="math display">\[\sum_i \mathbb E_{p(z_i | x_{i}; \theta_t)} \log p(x_{i}, z_i; \theta). \qquad (2.5)\]</span></p></li>
+<li><p>At M-step, one finds <span class="math inline">\(\theta_{t + 1}\)</span> to be the <span class="math inline">\(\theta\)</span> that maximises the above formula.</p></li>
+</ul>
+<h3 id="gmm">GMM</h3>
+<p>Gaussian mixture model (GMM) is an example of mixture models.</p>
+<p>The space of the data is <span class="math inline">\(\mathbb R^n\)</span>. We use the hypothesis that the data is Gaussian conditioned on the latent variable:</p>
+<p><span class="math display">\[(x_i; \eta_k) \sim N(\mu_k, \Sigma_k),\]</span></p>
+<p>so we write <span class="math inline">\(\eta_k = (\mu_k, \Sigma_k)\)</span>.</p>
+<p>During E-step, the <span class="math inline">\(q(z_i)\)</span> can be directly computed using Bayes’ theorem:</p>
+<p><span class="math display">\[r_{ik} = q(z_i = k) = \mathbb P(z_i = k | x_{i}; \theta_t)
+= {g_{\mu_{t, k}, \Sigma_{t, k}} (x_{i}) \pi_{t, k} \over \sum_{j = 1 : n_z} g_{\mu_{t, j}, \Sigma_{t, j}} (x_{i}) \pi_{t, j}},\]</span></p>
+<p>where <span class="math inline">\(g_{\mu, \Sigma} (x) = (2 \pi)^{- n / 2} (\det \Sigma)^{-1 / 2} \exp(- {1 \over 2} (x - \mu)^T \Sigma^{-1} (x - \mu))\)</span> is the pdf of the Gaussian distribution <span class="math inline">\(N(\mu, \Sigma)\)</span>.</p>
+<p>During M-step, we need to compute</p>
+<p><span class="math display">\[\text{argmax}_{\Sigma, \mu, \pi} \sum_{i = 1 : m} \sum_{k = 1 : n_z} r_{ik} \log (g_{\mu_k, \Sigma_k}(x_{i}) \pi_k).\]</span></p>
+<p>This is similar to the quadratic discriminant analysis, and the solution is</p>
+<p><span class="math display">\[\begin{aligned}
+\pi_{k} &amp;= {1 \over m} \sum_{i = 1 : m} r_{ik}, \\
+\mu_{k} &amp;= {\sum_i r_{ik} x_{i} \over \sum_i r_{ik}}, \\
+\Sigma_{k} &amp;= {\sum_i r_{ik} (x_{i} - \mu_{k}) (x_{i} - \mu_{k})^T \over \sum_i r_{ik}}.
+\end{aligned}\]</span></p>
+<p><strong>Remark</strong>. The k-means algorithm is the <span class="math inline">\(\epsilon \to 0\)</span> limit of the GMM with constraints <span class="math inline">\(\Sigma_k = \epsilon I\)</span>. See Section 9.3.2 of Bishop 2006 for derivation. It is also briefly mentioned there that a variant in this setting where the covariance matrix is not restricted to <span class="math inline">\(\epsilon I\)</span> is called elliptical k-means algorithm.</p>
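The E-step and M-step above can be sketched in a few lines of numpy (a minimal illustration rather than a robust implementation; the initialisation scheme here is an arbitrary choice, and the small ridge on the covariances guards against degeneracy):

```python
import numpy as np
from scipy.stats import multivariate_normal

def gmm_em(X, n_z, n_iter=50):
    """Minimal EM for a Gaussian mixture; returns (pi, mu, Sigma, r)."""
    m, n = X.shape
    pi = np.full(n_z, 1 / n_z)
    mu = np.linspace(X.min(axis=0), X.max(axis=0), n_z)  # crude init
    Sigma = np.stack([np.cov(X.T) + 1e-6 * np.eye(n)] * n_z)
    for _ in range(n_iter):
        # E-step: responsibilities r_ik proportional to g_{mu_k, Sigma_k}(x_i) pi_k
        r = np.stack([pi[k] * multivariate_normal.pdf(X, mu[k], Sigma[k])
                      for k in range(n_z)], axis=1)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: the closed-form updates for pi_k, mu_k, Sigma_k above
        Nk = r.sum(axis=0)
        pi = Nk / m
        mu = (r.T @ X) / Nk[:, None]
        for k in range(n_z):
            d = X - mu[k]
            Sigma[k] = (r[:, k, None] * d).T @ d / Nk[k] + 1e-6 * np.eye(n)
    return pi, mu, Sigma, r
```

On two well-separated clusters this recovers the cluster means and roughly equal mixing weights.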
+<h3 id="smm">SMM</h3>
+<p>As a transition to the next models to study, let us consider a simpler mixture model obtained by making one modification to GMM: change <span class="math inline">\((x; \eta_k) \sim N(\mu_k, \Sigma_k)\)</span> to <span class="math inline">\(\mathbb P(x = w; \eta_k) = \eta_{kw}\)</span> where <span class="math inline">\(\eta\)</span> is a stochastic matrix and <span class="math inline">\(w\)</span> is an arbitrary element of the space for <span class="math inline">\(x\)</span>. So now the spaces for both <span class="math inline">\(x\)</span> and <span class="math inline">\(z\)</span> are finite. We call this model the simple mixture model (SMM).</p>
+<p>As in GMM, at E-step <span class="math inline">\(r_{ik}\)</span> can be explicitly computed using Bayes' theorem.</p>
+<p>It is not hard to write down the solution to the M-step in this case:</p>
+<p><span class="math display">\[\begin{aligned}
+\pi_{k} &amp;= {1 \over m} \sum_i r_{ik}, \qquad (2.7)\\
+\eta_{k, w} &amp;= {\sum_i r_{ik} 1_{x_i = w} \over \sum_i r_{ik}}. \qquad (2.8)
+\end{aligned}\]</span></p>
+<p>where <span class="math inline">\(1_{x_i = w}\)</span> is the <a href="https://en.wikipedia.org/wiki/Indicator_function">indicator function</a>, and evaluates to <span class="math inline">\(1\)</span> if <span class="math inline">\(x_i = w\)</span> and <span class="math inline">\(0\)</span> otherwise.</p>
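Updates (2.7) and (2.8) translate directly into code; the following is a sketch under an arbitrary random initialisation of <span class="math inline">\(\eta\)</span>:

```python
import numpy as np

def smm_em(x, n_z, n_x, n_iter=100, seed=0):
    """Minimal EM for the SMM; x is a 1-D int array over {0, ..., n_x - 1}."""
    rng = np.random.default_rng(seed)
    pi = np.full(n_z, 1 / n_z)
    eta = rng.dirichlet(np.ones(n_x), size=n_z)   # rows sum to 1
    for _ in range(n_iter):
        # E-step: r_ik proportional to pi_k eta[k, x_i], by Bayes' theorem
        r = pi * eta[:, x].T
        r /= r.sum(axis=1, keepdims=True)
        # M-step: updates (2.7) and (2.8)
        pi = r.mean(axis=0)
        eta = np.stack([r[x == w].sum(axis=0) for w in range(n_x)], axis=1)
        eta /= eta.sum(axis=1, keepdims=True)
    return pi, eta
```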
+<p>Two trivial variants of the SMM are the two versions of probabilistic latent semantic analysis (pLSA), which we call pLSA1 and pLSA2.</p>
+<p>The model pLSA1 is a probabilistic version of latent semantic analysis, which is basically a simple matrix factorisation model in collaborative filtering, whereas pLSA2 has a fully Bayesian version called latent Dirichlet allocation (LDA), not to be confused with the other LDA (linear discriminant analysis).</p>
+<h3 id="plsa">pLSA</h3>
+<p>The pLSA model (Hofmann 2000) is a mixture model, where the dataset is now pairs <span class="math inline">\((d_i, x_i)_{i = 1 : m}\)</span>. In natural language processing, <span class="math inline">\(x\)</span> are words and <span class="math inline">\(d\)</span> are documents, and a pair <span class="math inline">\((d, x)\)</span> represents an occurrence of word <span class="math inline">\(x\)</span> in document <span class="math inline">\(d\)</span>.</p>
+<p>For each datapoint <span class="math inline">\((d_{i}, x_{i})\)</span>,</p>
+<p><span class="math display">\[\begin{aligned}
+p(d_i, x_i; \theta) &amp;= \sum_{z_i} p(z_i; \theta) p(d_i | z_i; \theta) p(x_i | z_i; \theta) \qquad (2.91)\\
+&amp;= p(d_i; \theta) \sum_z p(x_i | z_i; \theta) p (z_i | d_i; \theta) \qquad (2.92).
+\end{aligned}\]</span></p>
+<p>Of the two formulations, (2.91) corresponds to pLSA type 1, and (2.92) corresponds to type 2.</p>
+<h4 id="plsa1">pLSA1</h4>
+<p>The pLSA1 model (Hofmann 2000) is basically SMM with <span class="math inline">\(x_i\)</span> substituted with <span class="math inline">\((d_i, x_i)\)</span>, which conditioned on <span class="math inline">\(z_i\)</span> are independently categorically distributed:</p>
+<p><span class="math display">\[p(d_i = u, x_i = w | z_i = k; \theta) = p(d_i ; \xi_k) p(x_i; \eta_k) = \xi_{ku} \eta_{kw}.\]</span></p>
+<p>The model can be illustrated in the plate notations:</p>
+<p><img src="/assets/resources/plsa1.png" style="width:350px" /></p>
+<p>So the solution of the M-step is</p>
+<p><span class="math display">\[\begin{aligned}
+\pi_{k} &amp;= {1 \over m} \sum_i r_{ik} \\
+\xi_{k, u} &amp;= {\sum_i r_{ik} 1_{d_{i} = u} \over \sum_i r_{ik}} \\
+\eta_{k, w} &amp;= {\sum_i r_{ik} 1_{x_{i} = w} \over \sum_i r_{ik}}.
+\end{aligned}\]</span></p>
+<p><strong>Remark</strong>. pLSA1 is the probabilistic version of LSA, also known as matrix factorisation.</p>
+<p>Let <span class="math inline">\(n_d\)</span> and <span class="math inline">\(n_x\)</span> be the number of values <span class="math inline">\(d_i\)</span> and <span class="math inline">\(x_i\)</span> can take.</p>
+<p><strong>Problem</strong> (LSA). Let <span class="math inline">\(R\)</span> be a <span class="math inline">\(n_d \times n_x\)</span> matrix, fix <span class="math inline">\(s \le \min\{n_d, n_x\}\)</span>. Find <span class="math inline">\(n_d \times s\)</span> matrix <span class="math inline">\(D\)</span> and <span class="math inline">\(n_x \times s\)</span> matrix <span class="math inline">\(X\)</span> that minimises</p>
+<p><span class="math display">\[J(D, X) = \|R - D X^T\|_F.\]</span></p>
+<p>where <span class="math inline">\(\|\cdot\|_F\)</span> is the Frobenius norm.</p>
+<p><strong>Claim</strong>. Let <span class="math inline">\(R = U \Sigma V^T\)</span> be the SVD of <span class="math inline">\(R\)</span>, then the solution to the above problem is <span class="math inline">\(D = U_s \Sigma_s^{{1 \over 2}}\)</span> and <span class="math inline">\(X = V_s \Sigma_s^{{1 \over 2}}\)</span>, where <span class="math inline">\(U_s\)</span> (resp. <span class="math inline">\(V_s\)</span>) is the matrix of the first <span class="math inline">\(s\)</span> columns of <span class="math inline">\(U\)</span> (resp. <span class="math inline">\(V\)</span>) and <span class="math inline">\(\Sigma_s\)</span> is the <span class="math inline">\(s \times s\)</span> submatrix of <span class="math inline">\(\Sigma\)</span>.</p>
+<p>One can compare pLSA1 with LSA. Both procedures produce embeddings of <span class="math inline">\(d\)</span> and <span class="math inline">\(x\)</span>: in pLSA we obtain <span class="math inline">\(n_z\)</span> dimensional embeddings <span class="math inline">\(\xi_{\cdot, u}\)</span> and <span class="math inline">\(\eta_{\cdot, w}\)</span>, whereas in LSA we obtain <span class="math inline">\(s\)</span> dimensional embeddings <span class="math inline">\(D_{u, \cdot}\)</span> and <span class="math inline">\(X_{w, \cdot}\)</span>.</p>
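The claim above is the Eckart–Young theorem in disguise; a quick numerical check (with an arbitrary random matrix) uses the fact that the optimal Frobenius error equals the energy of the discarded singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
R = rng.normal(size=(6, 5))
s = 2
U, S, Vt = np.linalg.svd(R)
D = U[:, :s] * np.sqrt(S[:s])   # D = U_s Sigma_s^{1/2}
X = Vt[:s].T * np.sqrt(S[:s])   # X = V_s Sigma_s^{1/2}
# the optimal rank-s Frobenius error is sqrt(sum of discarded sigma_i^2)
assert np.isclose(np.linalg.norm(R - D @ X.T), np.sqrt(np.sum(S[s:] ** 2)))
```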
+<h4 id="plsa2">pLSA2</h4>
+<p>Let us turn to pLSA2 (Hofmann 2004), corresponding to (2.92). We rewrite it as</p>
+<p><span class="math display">\[p(x_i | d_i; \theta) = \sum_{z_i} p(x_i | z_i; \theta) p(z_i | d_i; \theta).\]</span></p>
+<p>To simplify notations, we collect all the <span class="math inline">\(x_i\)</span>s with the corresponding <span class="math inline">\(d_i\)</span> equal to 1 (suppose there are <span class="math inline">\(m_1\)</span> of them), and write them as <span class="math inline">\((x_{1, j})_{j = 1 : m_1}\)</span>. In the same fashion we construct <span class="math inline">\(x_{2, 1 : m_2}, x_{3, 1 : m_3}, ... x_{n_d, 1 : m_{n_d}}\)</span>. Similarly, we relabel the corresponding <span class="math inline">\(d_i\)</span> and <span class="math inline">\(z_i\)</span> accordingly.</p>
+<p>With almost no loss of generality, we assume all <span class="math inline">\(m_\ell\)</span>s are equal and write them as <span class="math inline">\(m\)</span>.</p>
+<p>Now the model becomes</p>
+<p><span class="math display">\[p(x_{\ell, i} | d_{\ell, i} = \ell; \theta) = \sum_k p(x_{\ell, i} | z_{\ell, i} = k; \theta) p(z_{\ell, i} = k | d_{\ell, i} = \ell; \theta).\]</span></p>
+<p>Since we have regrouped the <span class="math inline">\(x\)</span>’s and <span class="math inline">\(z\)</span>’s whose indices record the values of the <span class="math inline">\(d\)</span>’s, we can remove the <span class="math inline">\(d\)</span>’s from the equation altogether:</p>
+<p><span class="math display">\[p(x_{\ell, i}; \theta) = \sum_k p(x_{\ell, i} | z_{\ell, i} = k; \theta) p(z_{\ell, i} = k; \theta).\]</span></p>
+<p>It is effectively a modification of SMM by making <span class="math inline">\(n_d\)</span> copies of <span class="math inline">\(\pi\)</span>. More specifically the parameters are <span class="math inline">\(\theta = (\pi_{1 : n_d, 1 : n_z}, \eta_{1 : n_z, 1 : n_x})\)</span>, where we model <span class="math inline">\((z | d = \ell) \sim \text{Cat}(\pi_{\ell, \cdot})\)</span> and, as in pLSA1, <span class="math inline">\((x | z = k) \sim \text{Cat}(\eta_{k, \cdot})\)</span>.</p>
+<p>Illustrated in the plate notations, pLSA2 is:</p>
+<p><img src="/assets/resources/plsa2.png" style="width:350px" /></p>
+<p>The computation is basically adding an index <span class="math inline">\(\ell\)</span> to the computation of SMM wherever applicable.</p>
+<p>The update at the E-step is</p>
+<p><span class="math display">\[r_{\ell i k} = p(z_{\ell i} = k | x_{\ell i}; \theta) \propto \pi_{\ell k} \eta_{k, x_{\ell i}}.\]</span></p>
+<p>And at the M-step</p>
+<p><span class="math display">\[\begin{aligned}
+\pi_{\ell k} &amp;= {1 \over m} \sum_i r_{\ell i k} \\
+\eta_{k w} &amp;= {\sum_{\ell, i} r_{\ell i k} 1_{x_{\ell i} = w} \over \sum_{\ell, i} r_{\ell i k}}.
+\end{aligned}\]</span></p>
+<h3 id="hmm">HMM</h3>
+<p>The hidden Markov model (HMM) is a sequential version of SMM, in the same sense that recurrent neural networks are sequential versions of feed-forward neural networks.</p>
+<p>HMM is an example where the posterior <span class="math inline">\(p(z_i | x_i; \theta)\)</span> is not easy to compute, and one has to utilise properties of the underlying Bayesian network to get around it.</p>
+<p>Now each sample is a sequence <span class="math inline">\(x_i = (x_{ij})_{j = 1 : T}\)</span>, and so are the latent variables <span class="math inline">\(z_i = (z_{ij})_{j = 1 : T}\)</span>.</p>
+<p>The latent variables are assumed to form a Markov chain with transition matrix <span class="math inline">\((\xi_{k \ell})_{k \ell}\)</span>, and <span class="math inline">\(x_{ij}\)</span> is completely dependent on <span class="math inline">\(z_{ij}\)</span>:</p>
+<p><span class="math display">\[\begin{aligned}
+p(z_{ij} | z_{i, j - 1}) &amp;= \xi_{z_{i, j - 1}, z_{ij}},\\
+p(x_{ij} | z_{ij}) &amp;= \eta_{z_{ij}, x_{ij}}.
+\end{aligned}\]</span></p>
+<p>Also, the distribution of <span class="math inline">\(z_{i1}\)</span> is again categorical with parameter <span class="math inline">\(\pi\)</span>:</p>
+<p><span class="math display">\[p(z_{i1}) = \pi_{z_{i1}}\]</span></p>
+<p>So the parameters are <span class="math inline">\(\theta = (\pi, \xi, \eta)\)</span>. And HMM can be shown in plate notations as:</p>
+<p><img src="/assets/resources/hmm.png" style="width:350px" /></p>
+<p>Now we apply EM to HMM, which is called the <a href="https://en.wikipedia.org/wiki/Baum%E2%80%93Welch_algorithm">Baum-Welch algorithm</a>. Unlike the previous examples, it is too messy to compute <span class="math inline">\(p(z_i | x_{i}; \theta)\)</span>, so during the E-step we instead write down formula (2.5) directly in the hope of simplifying it:</p>
+<p><span class="math display">\[\begin{aligned}
+\mathbb E_{p(z_i | x_i; \theta_t)} \log p(x_i, z_i; \theta) &amp;=\mathbb E_{p(z_i | x_i; \theta_t)} \left(\log \pi_{z_{i1}} + \sum_{j = 2 : T} \log \xi_{z_{i, j - 1}, z_{ij}} + \sum_{j = 1 : T} \log \eta_{z_{ij}, x_{ij}}\right). \qquad (3)
+\end{aligned}\]</span></p>
+<p>Let us compute the summand in the second term:</p>
+<p><span class="math display">\[\begin{aligned}
+\mathbb E_{p(z_i | x_{i}; \theta_t)} \log \xi_{z_{i, j - 1}, z_{ij}} &amp;= \sum_{k, \ell} (\log \xi_{k, \ell}) \mathbb E_{p(z_{i} | x_{i}; \theta_t)} 1_{z_{i, j - 1} = k, z_{i, j} = \ell} \\
+&amp;= \sum_{k, \ell} p(z_{i, j - 1} = k, z_{ij} = \ell | x_{i}; \theta_t) \log \xi_{k, \ell}. \qquad (4)
+\end{aligned}\]</span></p>
+<p>Similarly, one can write down the first term and the summand in the third term to obtain</p>
+<p><span class="math display">\[\begin{aligned}
+\mathbb E_{p(z_i | x_{i}; \theta_t)} \log \pi_{z_{i1}} &amp;= \sum_k p(z_{i1} = k | x_{i}; \theta_t) \log \pi_k, \qquad (5) \\
+\mathbb E_{p(z_i | x_{i}; \theta_t)} \log \eta_{z_{i, j}, x_{i, j}} &amp;= \sum_{k, w} 1_{x_{ij} = w} p(z_{i, j} = k | x_i; \theta_t) \log \eta_{k, w}. \qquad (6)
+\end{aligned}\]</span></p>
+<p>Plugging (4), (5) and (6) back into (3) and summing over <span class="math inline">\(j\)</span>, we obtain the formula to maximise over <span class="math inline">\(\theta\)</span>:</p>
+<p><span class="math display">\[\sum_k \sum_i r_{i1k} \log \pi_k + \sum_{k, \ell} \sum_{j = 2 : T, i} s_{ijk\ell} \log \xi_{k, \ell} + \sum_{k, w} \sum_{j = 1 : T, i} r_{ijk} 1_{x_{ij} = w} \log \eta_{k, w},\]</span></p>
+<p>where</p>
+<p><span class="math display">\[\begin{aligned}
+r_{ijk} &amp;:= p(z_{ij} = k | x_{i}; \theta_t), \\
+s_{ijk\ell} &amp;:= p(z_{i, j - 1} = k, z_{ij} = \ell | x_{i}; \theta_t).
+\end{aligned}\]</span></p>
+<p>Now we proceed to the M-step. Since each of the <span class="math inline">\(\pi_k, \xi_{k, \ell}, \eta_{k, w}\)</span> is nicely confined in the inner sum of each term, together with the constraint <span class="math inline">\(\sum_k \pi_k = \sum_\ell \xi_{k, \ell} = \sum_w \eta_{k, w} = 1\)</span> it is not hard to find the argmax at time <span class="math inline">\(t + 1\)</span> (the same way one finds the MLE for any categorical distribution):</p>
+<p><span class="math display">\[\begin{aligned}
+\pi_{k} &amp;= {1 \over m} \sum_i r_{i1k}, \qquad (6.1) \\
+\xi_{k, \ell} &amp;= {\sum_{j = 2 : T, i} s_{ijk\ell} \over \sum_{j = 1 : T - 1, i} r_{ijk}}, \qquad(6.2) \\
+\eta_{k, w} &amp;= {\sum_{ij} 1_{x_{ij} = w} r_{ijk} \over \sum_{ij} r_{ijk}}. \qquad(6.3)
+\end{aligned}\]</span></p>
+<p>Note that (6.1)(6.3) are almost identical to (2.7)(2.8). This makes sense as the only modification HMM makes over SMM is the added dependencies between the latent variables.</p>
+<p>What remains is to compute <span class="math inline">\(r\)</span> and <span class="math inline">\(s\)</span>.</p>
+<p>This is done by using the forward and backward procedures, which take advantage of the conditional independence / topology of the underlying Bayesian network. It is out of the scope of this post, but for the sake of completeness I include it here.</p>
+<p>Let</p>
+<p><span class="math display">\[\begin{aligned}
+\alpha_k(i, j) &amp;:= p(x_{i, 1 : j}, z_{ij} = k; \theta_t), \\
+\beta_k(i, j) &amp;:= p(x_{i, j + 1 : T} | z_{ij} = k; \theta_t).
+\end{aligned}\]</span></p>
+<p>They can be computed recursively as</p>
+<p><span class="math display">\[\begin{aligned}
+\alpha_k(i, j) &amp;= \begin{cases}
+\eta_{k, x_{i1}} \pi_k, &amp; j = 1; \\
+\eta_{k, x_{ij}} \sum_\ell \alpha_\ell(i, j - 1) \xi_{\ell k}, &amp; j \ge 2.
+\end{cases}\\
+\beta_k(i, j) &amp;= \begin{cases}
+1, &amp; j = T;\\
+\sum_\ell \xi_{k\ell} \beta_\ell(i, j + 1) \eta_{\ell, x_{i, j + 1}}, &amp; j &lt; T.
+\end{cases}
+\end{aligned}\]</span></p>
+<p>Then</p>
+<p><span class="math display">\[\begin{aligned}
+p(z_{ij} = k, x_{i}; \theta_t) &amp;= \alpha_k(i, j) \beta_k(i, j), \qquad (7)\\
+p(x_{i}; \theta_t) &amp;= \sum_k \alpha_k(i, j) \beta_k(i, j), \quad \forall j = 1 : T, \qquad (8)\\
+p(z_{i, j - 1} = k, z_{i, j} = \ell, x_{i}; \theta_t) &amp;= \alpha_k(i, j - 1) \xi_{k\ell} \beta_\ell(i, j) \eta_{\ell, x_{i, j}}. \qquad (9)
+\end{aligned}\]</span></p>
+<p>And this yields <span class="math inline">\(r_{ijk}\)</span> and <span class="math inline">\(s_{ijk\ell}\)</span> since they can be computed as <span class="math inline">\({(7) \over (8)}\)</span> and <span class="math inline">\({(9) \over (8)}\)</span> respectively.</p>
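The two recursions can be sketched in numpy for a single sequence, dropping the sample index <span class="math inline">\(i\)</span> (a minimal illustration rather than a numerically robust implementation, which would rescale <span class="math inline">\(\alpha\)</span> and <span class="math inline">\(\beta\)</span> or work in log space to avoid underflow on long sequences):

```python
import numpy as np

def forward_backward(x, pi, xi, eta):
    """Forward-backward for one observed sequence x (1-D int array).

    pi:  (n_z,)      initial distribution
    xi:  (n_z, n_z)  transition matrix, xi[k, l] = p(z_j = l | z_{j-1} = k)
    eta: (n_z, n_x)  emission matrix,   eta[k, w] = p(x_j = w | z_j = k)
    Returns r[j, k], s[j, k, l] (for the pair (z_j, z_{j+1})) and p(x).
    """
    T, n_z = len(x), len(pi)
    alpha = np.zeros((T, n_z))
    beta = np.zeros((T, n_z))
    alpha[0] = pi * eta[:, x[0]]
    for j in range(1, T):                # alpha_k(j) = eta_{k,x_j} sum_l alpha_l(j-1) xi_{lk}
        alpha[j] = eta[:, x[j]] * (alpha[j - 1] @ xi)
    beta[-1] = 1.0
    for j in range(T - 2, -1, -1):       # beta_k(j) = sum_l xi_{kl} eta_{l,x_{j+1}} beta_l(j+1)
        beta[j] = xi @ (eta[:, x[j + 1]] * beta[j + 1])
    evidence = alpha[-1].sum()           # p(x; theta), eq. (8)
    r = alpha * beta / evidence          # eq. (7) / (8)
    s = (alpha[:-1, :, None] * xi[None] *
         (eta[:, x[1:]].T * beta[1:])[:, None, :]) / evidence  # eq. (9) / (8)
    return r, s, evidence
```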
+<h2 id="fully-bayesian-em-mfa">Fully Bayesian EM / MFA</h2>
+<p>Let us now venture into the fully Bayesian realm.</p>
+<p>In EM we aim to maximise the ELBO</p>
+<p><span class="math display">\[\int q(z) \log {p(x, z; \theta) \over q(z)} dz\]</span></p>
+<p>alternately over <span class="math inline">\(q\)</span> and <span class="math inline">\(\theta\)</span>. As mentioned before, the E-step of maximising over <span class="math inline">\(q\)</span> is Bayesian, in that it computes the posterior of <span class="math inline">\(z\)</span>, whereas the M-step of maximising over <span class="math inline">\(\theta\)</span> is maximum likelihood and frequentist.</p>
+<p>The fully Bayesian EM makes the M-step Bayesian by making <span class="math inline">\(\theta\)</span> a random variable, so the ELBO becomes</p>
+<p><span class="math display">\[L(p(x, z, \theta), q(z, \theta)) = \int q(z, \theta) \log {p(x, z, \theta) \over q(z, \theta)} dz d\theta\]</span></p>
+<p>We further assume <span class="math inline">\(q\)</span> can be factorised into distributions on <span class="math inline">\(z\)</span> and <span class="math inline">\(\theta\)</span>: <span class="math inline">\(q(z, \theta) = q_1(z) q_2(\theta)\)</span>. So the above formula is rewritten as</p>
+<p><span class="math display">\[L(p(x, z, \theta), q(z, \theta)) = \int q_1(z) q_2(\theta) \log {p(x, z, \theta) \over q_1(z) q_2(\theta)} dz d\theta\]</span></p>
+<p>To find argmax over <span class="math inline">\(q_1\)</span>, we rewrite</p>
+<p><span class="math display">\[\begin{aligned}
+L(p(x, z, \theta), q(z, \theta)) &amp;= \int q_1(z) \left(\int q_2(\theta) \log p(x, z, \theta) d\theta\right) dz - \int q_1(z) \log q_1(z) dz - \int q_2(\theta) \log q_2(\theta) d\theta \\&amp;= - D(q_1(z) || p_x(z)) + C,
+\end{aligned}\]</span></p>
+<p>where <span class="math inline">\(p_x\)</span> is a density in <span class="math inline">\(z\)</span> with</p>
+<p><span class="math display">\[\log p_x(z) = \mathbb E_{q_2(\theta)} \log p(x, z, \theta) + C.\]</span></p>
+<p>So the <span class="math inline">\(q_1\)</span> that maximises the ELBO is <span class="math inline">\(q_1^* = p_x\)</span>.</p>
+<p>Similarly, the optimal <span class="math inline">\(q_2\)</span> is such that</p>
+<p><span class="math display">\[\log q_2^*(\theta) = \mathbb E_{q_1(z)} \log p(x, z, \theta) + C.\]</span></p>
+<p>The fully Bayesian EM thus alternately evaluates <span class="math inline">\(q_1^*\)</span> (E-step) and <span class="math inline">\(q_2^*\)</span> (M-step).</p>
+<p>It is also called mean field approximation (MFA), and can be easily generalised to models with more than two groups of latent variables; see e.g. Section 10.1 of Bishop 2006.</p>
+<h3 id="application-to-mixture-models">Application to mixture models</h3>
+<p><strong>Definition (Fully Bayesian mixture model)</strong>. The relations between <span class="math inline">\(\pi\)</span>, <span class="math inline">\(\eta\)</span>, <span class="math inline">\(x\)</span>, <span class="math inline">\(z\)</span> are the same as in the definition of mixture models. Furthermore, we assume the distribution of <span class="math inline">\((x | \eta_k)\)</span> belongs to the <a href="https://en.wikipedia.org/wiki/Exponential_family">exponential family</a> (the definition of the exponential family is briefly touched on at the end of this section). But now both <span class="math inline">\(\pi\)</span> and <span class="math inline">\(\eta\)</span> are random variables. Let the prior distribution <span class="math inline">\(p(\pi)\)</span> be Dirichlet with parameter <span class="math inline">\((\alpha, \alpha, ..., \alpha)\)</span>, and let the prior <span class="math inline">\(p(\eta_k)\)</span> be the conjugate prior of <span class="math inline">\((x | \eta_k)\)</span>, with parameter <span class="math inline">\(\beta\)</span>; we will see later in this section that the posterior <span class="math inline">\(q(\eta_k)\)</span> then belongs to the same family as <span class="math inline">\(p(\eta_k)\)</span>. Represented in plate notation, a fully Bayesian mixture model looks like:</p>
+<p><img src="/assets/resources/fully-bayesian-mm.png" style="width:450px" /></p>
+<p>Given this structure we can write down the mean-field approximation:</p>
+<p><span class="math display">\[\log q(z) = \mathbb E_{q(\eta)q(\pi)} (\log p(x | z, \eta) + \log p(z | \pi)) + C.\]</span></p>
+<p>Both sides can be factored into per-sample expressions, giving us</p>
+<p><span class="math display">\[\log q(z_i) = \mathbb E_{q(\eta)} \log p(x_i | z_i, \eta) + \mathbb E_{q(\pi)} \log p(z_i | \pi) + C\]</span></p>
+<p>Therefore</p>
+<p><span class="math display">\[\log r_{ik} = \log q(z_i = k) = \mathbb E_{q(\eta_k)} \log p(x_i | \eta_k) + \mathbb E_{q(\pi)} \log \pi_k + C. \qquad (9.1)\]</span></p>
+<p>So the posterior of each <span class="math inline">\(z_i\)</span> is categorical regardless of the <span class="math inline">\(p\)</span>s and <span class="math inline">\(q\)</span>s.</p>
+<p>Computing the posterior of <span class="math inline">\(\pi\)</span> and <span class="math inline">\(\eta\)</span>:</p>
+<p><span class="math display">\[\log q(\pi) + \log q(\eta) = \log p(\pi) + \log p(\eta) + \sum_i \mathbb E_{q(z_i)} \log p(x_i | z_i, \eta) + \sum_i \mathbb E_{q(z_i)} \log p(z_i | \pi) + C.\]</span></p>
+<p>So we can separate the terms involving <span class="math inline">\(\pi\)</span> and those involving <span class="math inline">\(\eta\)</span>. First compute the posterior of <span class="math inline">\(\pi\)</span>:</p>
+<p><span class="math display">\[\log q(\pi) = \log p(\pi) + \sum_i \mathbb E_{q(z_i)} \log p(z_i | \pi) + C = \log p(\pi) + \sum_i \sum_k r_{ik} \log \pi_k + C.\]</span></p>
+<p>The right hand side is the log of a Dirichlet density modulo the constant <span class="math inline">\(C\)</span>, from which we can update the posterior parameter <span class="math inline">\(\phi^\pi\)</span>:</p>
+<p><span class="math display">\[\phi^\pi_k = \alpha + \sum_i r_{ik}. \qquad (9.3)\]</span></p>
+<p>Similarly we can obtain the posterior of <span class="math inline">\(\eta\)</span>:</p>
+<p><span class="math display">\[\log q(\eta) = \log p(\eta) + \sum_i \sum_k r_{ik} \log p(x_i | \eta_k) + C.\]</span></p>
+<p>Again we can factor the terms with respect to <span class="math inline">\(k\)</span> and get:</p>
+<p><span class="math display">\[\log q(\eta_k) = \log p(\eta_k) + \sum_i r_{ik} \log p(x_i | \eta_k) + C. \qquad (9.5)\]</span></p>
+<p>Here we can see why conjugate prior works. Mathematically, given a probability distribution <span class="math inline">\(p(x | \theta)\)</span>, the distribution <span class="math inline">\(p(\theta)\)</span> is called conjugate prior of <span class="math inline">\(p(x | \theta)\)</span> if <span class="math inline">\(\log p(\theta) + \log p(x | \theta)\)</span> has the same form as <span class="math inline">\(\log p(\theta)\)</span>.</p>
+<p>For example, the conjugate prior for the exponential family <span class="math inline">\(p(x | \theta) = h(x) \exp(\theta \cdot T(x) - A(\theta))\)</span> where <span class="math inline">\(T\)</span>, <span class="math inline">\(A\)</span> and <span class="math inline">\(h\)</span> are some functions is <span class="math inline">\(p(\theta; \chi, \nu) \propto \exp(\chi \cdot \theta - \nu A(\theta))\)</span>.</p>
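+<p>As a quick numerical check of this conjugacy, here is a Python sketch (with arbitrarily chosen numbers) for the Beta-Bernoulli pair, one instance of the exponential-family statement above: a Beta prior times a Bernoulli likelihood is again a Beta density up to a constant.</p>

```python
import numpy as np
from scipy import stats

# Bernoulli likelihood p(x | theta) = theta^x (1 - theta)^(1 - x) with a
# Beta(a, b) prior on theta: log prior + log likelihood is again the log
# of a Beta density, so the posterior is Beta(a + #ones, b + #zeros).
rng = np.random.default_rng(0)
a, b = 2.0, 3.0                      # arbitrary prior parameters
x = rng.binomial(1, 0.7, size=100)   # synthetic observations

a_post = a + x.sum()
b_post = b + (1 - x).sum()

# On a grid of theta, the two sides differ only by a constant in theta.
theta = np.linspace(0.01, 0.99, 50)
lhs = stats.beta.logpdf(theta, a, b) \
    + x.sum() * np.log(theta) + (1 - x).sum() * np.log(1 - theta)
rhs = stats.beta.logpdf(theta, a_post, b_post)
assert np.allclose(lhs - rhs, (lhs - rhs)[0])
```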
+<p>What we need here differs slightly from the conjugate-prior setting because of the coefficients <span class="math inline">\(r_{ik}\)</span>, but the computation carries over to the conjugate priors of the exponential family (try it yourself and you will see). That is, if <span class="math inline">\(p(x_i | \eta_k)\)</span> belongs to the exponential family</p>
+<p><span class="math display">\[p(x_i | \eta_k) = h(x) \exp(\eta_k \cdot T(x) - A(\eta_k))\]</span></p>
+<p>and if <span class="math inline">\(p(\eta_k)\)</span> is the conjugate prior of <span class="math inline">\(p(x_i | \eta_k)\)</span></p>
+<p><span class="math display">\[p(\eta_k) \propto \exp(\chi \cdot \eta_k - \nu A(\eta_k))\]</span></p>
+<p>then <span class="math inline">\(q(\eta_k)\)</span> has the same form as <span class="math inline">\(p(\eta_k)\)</span>, and from (9.5) we can compute the updates of <span class="math inline">\(\phi^{\eta_k}\)</span>:</p>
+<p><span class="math display">\[\begin{aligned}
+\phi^{\eta_k}_1 &amp;= \chi + \sum_i r_{ik} T(x_i), \qquad (9.7) \\
+\phi^{\eta_k}_2 &amp;= \nu + \sum_i r_{ik}. \qquad (9.9)
+\end{aligned}\]</span></p>
+<p>So the mean field approximation for the fully Bayesian mixture model is the alternate iteration of (9.1) (E-step) and (9.3)(9.7)(9.9) (M-step) until convergence.</p>
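+<p>For concreteness, here is a minimal numpy/scipy sketch of this alternation for a mixture of Bernoullis with Beta conjugate priors; the variable names, sizes and synthetic data are illustrative choices, not from any of the cited papers.</p>

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(1)
m, K = 200, 2
# Synthetic binary data from two Bernoulli components.
z_true = rng.integers(0, K, m)
x = rng.binomial(1, np.where(z_true == 0, 0.2, 0.9))

alpha, beta = 1.0, 1.0               # Dirichlet and Beta hyperparameters
phi_pi = np.full(K, alpha)           # posterior Dirichlet parameters
a = rng.uniform(1, 2, K)             # posterior Beta parameters (symmetry broken)
b = rng.uniform(1, 2, K)

for _ in range(50):
    # E-step, eq (9.1): expected log densities under the current posteriors.
    e_log_pi = digamma(phi_pi) - digamma(phi_pi.sum())
    e_log_eta = digamma(a) - digamma(a + b)       # E_q log eta_k
    e_log_1meta = digamma(b) - digamma(a + b)     # E_q log (1 - eta_k)
    log_r = e_log_pi + np.outer(x, e_log_eta) + np.outer(1 - x, e_log_1meta)
    log_r -= log_r.max(axis=1, keepdims=True)
    r = np.exp(log_r)
    r /= r.sum(axis=1, keepdims=True)
    # M-step, eqs (9.3), (9.7), (9.9): conjugate parameter updates.
    phi_pi = alpha + r.sum(axis=0)
    a = beta + r.T @ x
    b = beta + r.T @ (1 - x)
```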
+<h3 id="fully-bayesian-gmm">Fully Bayesian GMM</h3>
+<p>A typical example of fully Bayesian mixture models is the fully Bayesian Gaussian mixture model (Attias 2000, also called variational GMM in the literature). It is obtained by applying to the GMM the same modification that turns the vanilla mixture model into the fully Bayesian mixture model.</p>
+<p>More specifically:</p>
+<ul>
+<li><span class="math inline">\(p(z_{i}) = \text{Cat}(\pi)\)</span> as in vanilla GMM</li>
+<li><span class="math inline">\(p(\pi) = \text{Dir}(\alpha, \alpha, ..., \alpha)\)</span> follows a Dirichlet distribution, the conjugate prior to the parameters of the categorical distribution.</li>
+<li><span class="math inline">\(p(x_i | z_i = k) = p(x_i | \eta_k) = N(\mu_{k}, \Sigma_{k})\)</span> as in vanilla GMM</li>
+<li><span class="math inline">\(p(\mu_k, \Sigma_k) = \text{NIW} (\mu_0, \lambda, \Psi, \nu)\)</span> is the normal-inverse-Wishart distribution, the conjugate prior to the mean and covariance matrix of the Gaussian distribution.</li>
+</ul>
+<p>The E-step and M-step can be computed using (9.1) and (9.3)(9.7)(9.9) in the previous section. The details of the computation can be found in Section 10.2 of Bishop 2006 or in Attias 2000.</p>
+<h3 id="lda">LDA</h3>
+<p>As the second example of fully Bayesian mixture models, Latent Dirichlet allocation (LDA) (Blei-Ng-Jordan 2003) is the fully Bayesian version of pLSA2, with the following plate notations:</p>
+<p><img src="/assets/resources/lda.png" style="width:450px" /></p>
+<p>This is the smoothed version of the model in the paper.</p>
+<p>More specifically, on the basis of pLSA2, we add prior distributions to <span class="math inline">\(\eta\)</span> and <span class="math inline">\(\pi\)</span>:</p>
+<p><span class="math display">\[\begin{aligned}
+p(\eta_k) &amp;= \text{Dir} (\beta, ..., \beta), \qquad k = 1 : n_z \\
+p(\pi_\ell) &amp;= \text{Dir} (\alpha, ..., \alpha), \qquad \ell = 1 : n_d \\
+\end{aligned}\]</span></p>
+<p>And as before, the prior of <span class="math inline">\(z\)</span> is</p>
+<p><span class="math display">\[p(z_{\ell, i}) = \text{Cat} (\pi_\ell), \qquad \ell = 1 : n_d, i = 1 : m\]</span></p>
+<p>We also denote the posterior distributions:</p>
+<p><span class="math display">\[\begin{aligned}
+q(\eta_k) &amp;= \text{Dir} (\phi^{\eta_k}), \qquad k = 1 : n_z \\
+q(\pi_\ell) &amp;= \text{Dir} (\phi^{\pi_\ell}), \qquad \ell = 1 : n_d \\
+q(z_{\ell, i}) &amp;= \text{Cat} (r_{\ell, i}), \qquad \ell = 1 : n_d, i = 1 : m
+\end{aligned}\]</span></p>
+<p>As before, in the E-step we update <span class="math inline">\(r\)</span>, and in the M-step we update <span class="math inline">\(\phi^\pi\)</span> and <span class="math inline">\(\phi^\eta\)</span>.</p>
+<p>But in the LDA paper, one treats the optimisation over <span class="math inline">\(r\)</span>, <span class="math inline">\(\phi^\pi\)</span> and <span class="math inline">\(\phi^\eta\)</span> as the E-step, and treats <span class="math inline">\(\alpha\)</span> and <span class="math inline">\(\beta\)</span> as parameters, which are optimised over at the M-step. This makes it more akin to the classical EM where the E-step is Bayesian and the M-step is maximum likelihood. It is also more complicated, so we do not take that view here.</p>
+<p>Plugging in (9.1) we obtain the updates at E-step</p>
+<p><span class="math display">\[r_{\ell i k} \propto \exp(\psi(\phi^{\pi_\ell}_k) + \psi(\phi^{\eta_k}_{x_{\ell i}}) - \psi(\sum_w \phi^{\eta_k}_w)), \qquad (10)\]</span></p>
+<p>where <span class="math inline">\(\psi\)</span> is the digamma function. Similarly, plugging in (9.3)(9.7)(9.9), at M-step, we update the posterior of <span class="math inline">\(\pi\)</span> and <span class="math inline">\(\eta\)</span>:</p>
+<p><span class="math display">\[\begin{aligned}
+\phi^{\pi_\ell}_k &amp;= \alpha + \sum_i r_{\ell i k}, \qquad (11)\\
+\phi^{\eta_k}_w &amp;= \beta + \sum_{\ell, i} r_{\ell i k} 1_{x_{\ell i} = w}. \qquad (12)
+\end{aligned}\]</span></p>
+<p>So the algorithm iterates over (10) and (11)(12) until convergence.</p>
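+<p>A compact numpy sketch of this iteration (the shapes, hyperparameters and synthetic corpus are arbitrary illustrative choices; each document is an array of word ids):</p>

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(2)
n_d, m, n_z, n_x = 5, 40, 3, 20              # docs, words/doc, topics, vocab
docs = rng.integers(0, n_x, size=(n_d, m))   # word ids x_{l,i}

alpha, beta = 0.1, 0.1
phi_pi = rng.uniform(0.5, 1.5, (n_d, n_z))   # posterior Dirichlet for pi_l
phi_eta = rng.uniform(0.5, 1.5, (n_z, n_x))  # posterior Dirichlet for eta_k

for _ in range(30):
    # E-step, eq (10): responsibilities r_{l,i,k} via digamma functions.
    e_log_pi = digamma(phi_pi) - digamma(phi_pi.sum(1, keepdims=True))
    e_log_eta = digamma(phi_eta) - digamma(phi_eta.sum(1, keepdims=True))
    log_r = e_log_pi[:, None, :] + e_log_eta.T[docs]   # shape (n_d, m, n_z)
    r = np.exp(log_r - log_r.max(2, keepdims=True))
    r /= r.sum(2, keepdims=True)
    # M-step, eqs (11) and (12): Dirichlet parameter updates.
    phi_pi = alpha + r.sum(1)
    phi_eta = beta + np.einsum('lik,liw->kw', r, np.eye(n_x)[docs])
```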
+<h3 id="dpmm">DPMM</h3>
+<p>The Dirichlet process mixture model (DPMM) is like the fully Bayesian mixture model except <span class="math inline">\(n_z = \infty\)</span>, i.e. <span class="math inline">\(z\)</span> can take any positive integer value.</p>
+<p>The probability of <span class="math inline">\(z_i = k\)</span> is defined using the so-called stick-breaking process: let <span class="math inline">\(v_i \sim \text{Beta} (\alpha, \beta)\)</span> be i.i.d. Beta-distributed random variables, then</p>
+<p><span class="math display">\[\mathbb P(z_i = k | v_{1:\infty}) = (1 - v_1) (1 - v_2) ... (1 - v_{k - 1}) v_k.\]</span></p>
+<p>So <span class="math inline">\(v\)</span> plays a similar role to <span class="math inline">\(\pi\)</span> in the previous models.</p>
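+<p>A short sketch of sampling the stick-breaking weights (truncated at a large finite level purely for illustration; the process itself is infinite): each <span class="math inline">\(v_k\)</span> takes a fraction of the stick left over by the previous breaks.</p>

```python
import numpy as np

rng = np.random.default_rng(3)
alpha, beta = 1.0, 5.0

# Draw stick-breaking proportions v_k ~ Beta(alpha, beta) and form
# P(z = k) = (1 - v_1)...(1 - v_{k-1}) v_k.
K = 1000                      # truncation level, for illustration only
v = rng.beta(alpha, beta, K)
remaining = np.concatenate([[1.0], np.cumprod(1 - v)[:-1]])
p = remaining * v

assert np.all(p >= 0)
# With enough breaks the weights nearly exhaust the probability mass.
assert p.sum() > 0.99
```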
+<p>As before, we have that the distribution of <span class="math inline">\(x\)</span> belongs to the exponential family:</p>
+<p><span class="math display">\[p(x | z = k, \eta) = p(x | \eta_k) = h(x) \exp(\eta_k \cdot T(x) - A(\eta_k))\]</span></p>
+<p>so the prior of <span class="math inline">\(\eta_k\)</span> is</p>
+<p><span class="math display">\[p(\eta_k) \propto \exp(\chi \cdot \eta_k - \nu A(\eta_k)).\]</span></p>
+<p>Because of the infinities we can't directly apply the formulas in the general fully Bayesian mixture models. So let us carefully derive the whole thing again.</p>
+<p>As before, we can write down the ELBO:</p>
+<p><span class="math display">\[L(p(x, z, \theta), q(z, \theta)) = \mathbb E_{q(\theta)} \log {p(\theta) \over q(\theta)} + \mathbb E_{q(\theta) q(z)} \log {p(x, z | \theta) \over q(z)}.\]</span></p>
+<p>Both terms are infinite series:</p>
+<p><span class="math display">\[L(p, q) = \sum_{k = 1 : \infty} \mathbb E_{q(\theta_k)} \log {p(\theta_k) \over q(\theta_k)} + \sum_{i = 1 : m} \sum_{k = 1 : \infty} q(z_i = k) \mathbb E_{q(\theta)} \log {p(x_i, z_i = k | \theta) \over q(z_i = k)}.\]</span></p>
+<p>There are several ways to deal with the infinities. One is to fix some level <span class="math inline">\(T &gt; 0\)</span> and set <span class="math inline">\(v_T = 1\)</span> almost surely (Blei-Jordan 2006). This effectively turns the model into a finite one, and both terms become finite sums over <span class="math inline">\(k = 1 : T\)</span>.</p>
+<p>Another workaround (Kurihara-Welling-Vlassis 2007) is also a kind of truncation, but less heavy-handed: setting the posterior <span class="math inline">\(q(\theta) = q(\eta) q(v)\)</span> to be:</p>
+<p><span class="math display">\[q(\theta) = q(\theta_{1 : T}) p(\theta_{T + 1 : \infty}) =: q(\theta_{\le T}) p(\theta_{&gt; T}).\]</span></p>
+<p>That is, tie the posterior after <span class="math inline">\(T\)</span> to the prior. This effectively turns the first term in the ELBO to a finite sum over <span class="math inline">\(k = 1 : T\)</span>, while keeping the second sum an infinite series:</p>
+<p><span class="math display">\[L(p, q) = \sum_{k = 1 : T} \mathbb E_{q(\theta_k)} \log {p(\theta_k) \over q(\theta_k)} + \sum_i \sum_{k = 1 : \infty} q(z_i = k) \mathbb E_{q(\theta)} \log {p(x_i, z_i = k | \theta) \over q(z_i = k)}. \qquad (13)\]</span></p>
+<p>The plate notation of this model is:</p>
+<p><img src="/assets/resources/dpmm.png" style="width:450px" /></p>
+<p>As it turns out, the infinities can be tamed in this case.</p>
+<p>As before, the optimal <span class="math inline">\(q(z_i)\)</span> is computed as</p>
+<p><span class="math display">\[r_{ik} = q(z_i = k) = s_{ik} / S_i\]</span></p>
+<p>where</p>
+<p><span class="math display">\[\begin{aligned}
+s_{ik} &amp;= \exp(\mathbb E_{q(\theta)} \log p(x_i, z_i = k | \theta)) \\
+S_i &amp;= \sum_{k = 1 : \infty} s_{ik}.
+\end{aligned}\]</span></p>
+<p>Plugging this back to (13) we have</p>
+<p><span class="math display">\[\begin{aligned}
+\sum_{k = 1 : \infty} r_{ik} &amp;\mathbb E_{q(\theta)} \log {p(x_i, z_i = k | \theta) \over r_{ik}} \\
+&amp;= \sum_{k = 1 : \infty} r_{ik} \mathbb E_{q(\theta)} (\log p(x_i, z_i = k | \theta) - \mathbb E_{q(\theta)} \log p(x_i, z_i = k | \theta) + \log S_i) = \log S_i.
+\end{aligned}\]</span></p>
+<p>So it all rests upon <span class="math inline">\(S_i\)</span> being finite.</p>
+<p>For <span class="math inline">\(k \le T + 1\)</span>, we compute the quantity <span class="math inline">\(s_{ik}\)</span> directly. For <span class="math inline">\(k &gt; T\)</span>, it is not hard to show that</p>
+<p><span class="math display">\[s_{ik} = s_{i, T + 1} \exp((k - T - 1) \mathbb E_{p(w)} \log (1 - w)),\]</span></p>
+<p>where <span class="math inline">\(w\)</span> is a random variable with same distribution as <span class="math inline">\(p(v_k)\)</span>, i.e. <span class="math inline">\(\text{Beta}(\alpha, \beta)\)</span>.</p>
+<p>Hence</p>
+<p><span class="math display">\[S_i = \sum_{k = 1 : T} s_{ik} + {s_{i, T + 1} \over 1 - \exp(\psi(\beta) - \psi(\alpha + \beta))}\]</span></p>
+<p>is indeed finite. Similarly we also obtain</p>
+<p><span class="math display">\[q(z_i &gt; k) = S^{-1} \left(\sum_{\ell = k + 1 : T} s_\ell + {s_{i, T + 1} \over 1 - \exp(\psi(\beta) - \psi(\alpha + \beta))}\right), k \le T \qquad (14)\]</span></p>
+<p>Now let us compute the posterior of <span class="math inline">\(\theta_{\le T}\)</span>. In the following we exchange the integrals without justifying them (c.f. Fubini's Theorem). Equation (13) can be rewritten as</p>
+<p><span class="math display">\[L(p, q) = \mathbb E_{q(\theta_{\le T})} \left(\log p(\theta_{\le T}) + \sum_i \mathbb E_{q(z_i) p(\theta_{&gt; T})} \log {p(x_i, z_i | \theta) \over q(z_i)} - \log q(\theta_{\le T})\right).\]</span></p>
+<p>Note that unlike the derivation of the mean-field approximation, we keep the <span class="math inline">\(- \mathbb E_{q(z)} \log q(z)\)</span> term even though we are only interested in <span class="math inline">\(\theta\)</span> at this stage. This is again due to the problem of infinities: as in the computation of <span class="math inline">\(S\)</span>, we would like to cancel out some undesirable unbounded terms using <span class="math inline">\(q(z)\)</span>. We now have</p>
+<p><span class="math display">\[\log q(\theta_{\le T}) = \log p(\theta_{\le T}) + \sum_i \mathbb E_{q(z_i) p(\theta_{&gt; T})} \log {p(x_i, z_i | \theta) \over q(z_i)} + C.\]</span></p>
+<p>By plugging in <span class="math inline">\(q(z = k)\)</span> we obtain</p>
+<p><span class="math display">\[\log q(\theta_{\le T}) = \log p(\theta_{\le T}) + \sum_{k = 1 : \infty} q(z_i = k) \left(\mathbb E_{p(\theta_{&gt; T})} \log {p(x_i, z_i = k | \theta) \over q(z_i = k)} - \mathbb E_{q(\theta)} \log {p(x_i, z_i = k | \theta) \over q(z_i = k)}\right) + C.\]</span></p>
+<p>Again, we separate the <span class="math inline">\(v_k\)</span>'s and the <span class="math inline">\(\eta_k\)</span>'s to obtain</p>
+<p><span class="math display">\[\log q(v_{\le T}) = \log p(v_{\le T}) + \sum_i \sum_k q(z_i = k) \left(\mathbb E_{p(v_{&gt; T})} \log p(z_i = k | v) - \mathbb E_{q(v)} \log p(z_i = k | v)\right) + C.\]</span></p>
+<p>Denote by <span class="math inline">\(D_k\)</span> the difference between the two expectations on the right hand side. It is easy to show that</p>
+<p><span class="math display">\[D_k = \begin{cases}
+\log(1 - v_1) ... (1 - v_{k - 1}) v_k - \mathbb E_{q(v)} \log (1 - v_1) ... (1 - v_{k - 1}) v_k &amp; k \le T\\
+\log(1 - v_1) ... (1 - v_T) - \mathbb E_{q(v)} \log (1 - v_1) ... (1 - v_T) &amp; k &gt; T
+\end{cases}\]</span></p>
+<p>so <span class="math inline">\(D_k\)</span> is bounded. With this we can derive the update for <span class="math inline">\(\phi^{v, 1}\)</span> and <span class="math inline">\(\phi^{v, 2}\)</span>:</p>
+<p><span class="math display">\[\begin{aligned}
+\phi^{v, 1}_k &amp;= \alpha + \sum_i q(z_i = k) \\
+\phi^{v, 2}_k &amp;= \beta + \sum_i q(z_i &gt; k),
+\end{aligned}\]</span></p>
+<p>where <span class="math inline">\(q(z_i &gt; k)\)</span> can be computed as in (14).</p>
+<p>When it comes to <span class="math inline">\(\eta\)</span>, we have</p>
+<p><span class="math display">\[\log q(\eta_{\le T}) = \log p(\eta_{\le T}) + \sum_i \sum_{k = 1 : \infty} q(z_i = k) (\mathbb E_{p(\eta_k)} \log p(x_i | \eta_k) - \mathbb E_{q(\eta_k)} \log p(x_i | \eta_k)).\]</span></p>
+<p>Since <span class="math inline">\(q(\eta_k) = p(\eta_k)\)</span> for <span class="math inline">\(k &gt; T\)</span>, the inner sum on the right hand side is a finite sum over <span class="math inline">\(k = 1 : T\)</span>. By factorising <span class="math inline">\(q(\eta_{\le T})\)</span> and <span class="math inline">\(p(\eta_{\le T})\)</span>, we have</p>
+<p><span class="math display">\[\log q(\eta_k) = \log p(\eta_k) + \sum_i q(z_i = k) \log p(x_i | \eta_k) + C,\]</span></p>
+<p>which gives us</p>
+<p><span class="math display">\[\begin{aligned}
+\phi^{\eta, 1}_k &amp;= \chi + \sum_i q(z_i = k) T(x_i) \\
+\phi^{\eta, 2}_k &amp;= \nu + \sum_i q(z_i = k).
+\end{aligned}\]</span></p>
+<h2 id="svi">SVI</h2>
+<p>In variational inference, the computation of some parameters is more expensive than that of others.</p>
+<p>For example, the computation of the M-step is often much more expensive than that of the E-step:</p>
+<ul>
+<li>In the vanilla mixture models with the EM algorithm, the update of <span class="math inline">\(\theta\)</span> requires the computation of <span class="math inline">\(r_{ik}\)</span> for all <span class="math inline">\(i = 1 : m\)</span> (see Eq (2.3)).</li>
+<li>In the fully Bayesian mixture model with mean field approximation, the updates of <span class="math inline">\(\phi^\pi\)</span> and <span class="math inline">\(\phi^\eta\)</span> require the computation of a sum over all samples (see Eq (9.3)(9.7)(9.9)).</li>
+</ul>
+<p>Similarly, in pLSA2 (resp. LDA), the updates of <span class="math inline">\(\eta_k\)</span> (resp. <span class="math inline">\(\phi^{\eta_k}\)</span>) require a sum over <span class="math inline">\(\ell = 1 : n_d\)</span>, whereas the updates of other parameters do not.</p>
+<p>In these cases, the parameters that require more computation are called global, and the others local.</p>
+<p>Stochastic variational inference (SVI, Hoffman-Blei-Wang-Paisley 2012) addresses this problem in the same way as stochastic gradient descent improves efficiency of gradient descent.</p>
+<p>Each time, SVI picks a sample, updates the corresponding local parameters, and computes the update of the global parameters as if all the <span class="math inline">\(m\)</span> samples were identical to the picked one. Finally it incorporates this global parameter value into previous computations of the global parameters by means of an exponential moving average.</p>
+<p>As an example, here's SVI applied to LDA:</p>
+<ol type="1">
+<li>Set <span class="math inline">\(t = 1\)</span>.</li>
+<li>Pick <span class="math inline">\(\ell\)</span> uniformly from <span class="math inline">\(\{1, 2, ..., n_d\}\)</span>.</li>
+<li>Repeat until convergence:
+<ol type="1">
+<li>Compute <span class="math inline">\((r_{\ell i k})_{i = 1 : m, k = 1 : n_z}\)</span> using (10).</li>
+<li>Compute <span class="math inline">\((\phi^{\pi_\ell}_k)_{k = 1 : n_z}\)</span> using (11).</li>
+</ol></li>
+<li><p>Compute <span class="math inline">\((\tilde \phi^{\eta_k}_w)_{k = 1 : n_z, w = 1 : n_x}\)</span> using the following formula (compare with (12)) <span class="math display">\[\tilde \phi^{\eta_k}_w = \beta + n_d \sum_{i} r_{\ell i k} 1_{x_{\ell i} = w}\]</span></p></li>
+<li><p>Update the exponential moving average <span class="math inline">\((\phi^{\eta_k}_w)_{k = 1 : n_z, w = 1 : n_x}\)</span>: <span class="math display">\[\phi^{\eta_k}_w = (1 - \rho_t) \phi^{\eta_k}_w + \rho_t \tilde \phi^{\eta_k}_w\]</span></p></li>
+<li><p>Increment <span class="math inline">\(t\)</span> and go back to Step 2.</p></li>
+</ol>
+<p>In the original paper, <span class="math inline">\(\rho_t\)</span> needs to satisfy some conditions that guarantee convergence of the global parameters:</p>
+<p><span class="math display">\[\begin{aligned}
+\sum_t \rho_t = \infty \\
+\sum_t \rho_t^2 &lt; \infty
+\end{aligned}\]</span></p>
+<p>and the choice made there is</p>
+<p><span class="math display">\[\rho_t = (t + \tau)^{-\kappa}\]</span></p>
+<p>for some <span class="math inline">\(\kappa \in (.5, 1]\)</span> and <span class="math inline">\(\tau \ge 0\)</span>.</p>
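+<p>Putting the steps above together, a numpy sketch of SVI for LDA (all sizes, hyperparameters and the inner-loop iteration count are arbitrary illustrative choices):</p>

```python
import numpy as np
from scipy.special import digamma

rng = np.random.default_rng(4)
n_d, m, n_z, n_x = 100, 50, 4, 30
docs = rng.integers(0, n_x, (n_d, m))
alpha, beta = 0.1, 0.1
tau, kappa = 1.0, 0.7
phi_eta = rng.uniform(0.5, 1.5, (n_z, n_x))   # global parameters

for t in range(1, 51):
    l = rng.integers(n_d)                     # step 2: pick a document
    doc = docs[l]
    phi_pi = np.full(n_z, alpha + m / n_z)    # local parameters for document l
    for _ in range(20):                       # step 3: local coordinate ascent
        e_log_pi = digamma(phi_pi) - digamma(phi_pi.sum())
        e_log_eta = digamma(phi_eta) - digamma(phi_eta.sum(1, keepdims=True))
        log_r = e_log_pi + e_log_eta[:, doc].T          # eq (10), shape (m, n_z)
        r = np.exp(log_r - log_r.max(1, keepdims=True))
        r /= r.sum(1, keepdims=True)
        phi_pi = alpha + r.sum(0)                       # eq (11)
    # step 4: global candidate, as if all n_d documents equalled document l
    phi_eta_tilde = beta + n_d * np.einsum('ik,iw->kw', r, np.eye(n_x)[doc])
    # step 5: exponential moving average with step size rho_t
    rho = (t + tau) ** -kappa
    phi_eta = (1 - rho) * phi_eta + rho * phi_eta_tilde
```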
+<h2 id="aevb">AEVB</h2>
+<p>SVI adds to variational inference stochastic updates similar to stochastic gradient descent. Why not just use neural networks with stochastic gradient descent while we are at it? Autoencoding variational Bayes (AEVB) (Kingma-Welling 2013) is such an algorithm.</p>
+<p>Let's look back to the original problem of maximising the ELBO:</p>
+<p><span class="math display">\[\max_{\theta, q} \sum_{i = 1 : m} L(p(x_i | z_i; \theta) p(z_i; \theta), q(z_i))\]</span></p>
+<p>Since for any given <span class="math inline">\(\theta\)</span>, the optimal <span class="math inline">\(q(z_i)\)</span> is the posterior <span class="math inline">\(p(z_i | x_i; \theta)\)</span>, the problem reduces to</p>
+<p><span class="math display">\[\max_{\theta} \sum_i L(p(x_i | z_i; \theta) p(z_i; \theta), p(z_i | x_i; \theta))\]</span></p>
+<p>Let us assume <span class="math inline">\(p(z_i; \theta) = p(z_i)\)</span> is independent of <span class="math inline">\(\theta\)</span> to simplify the problem. In the old mixture models, we have <span class="math inline">\(p(x_i | z_i; \theta) = p(x_i; \eta_{z_i})\)</span>, which we can generalise to <span class="math inline">\(p(x_i; f(\theta, z_i))\)</span> for some function <span class="math inline">\(f\)</span>. Using Bayes' theorem we can also write down <span class="math inline">\(p(z_i | x_i; \theta) = q(z_i; g(\theta, x_i))\)</span> for some function <span class="math inline">\(g\)</span>. So the problem becomes</p>
+<p><span class="math display">\[\max_{\theta} \sum_i L(p(x_i; f(\theta, z_i)) p(z_i), q(z_i; g(\theta, x_i)))\]</span></p>
+<p>In some cases <span class="math inline">\(g\)</span> can be hard to write down or compute. AEVB addresses this problem by replacing <span class="math inline">\(g(\theta, x_i)\)</span> with a neural network <span class="math inline">\(g_\phi(x_i)\)</span> with input <span class="math inline">\(x_i\)</span> and some separate parameters <span class="math inline">\(\phi\)</span>. It also replaces <span class="math inline">\(f(\theta, z_i)\)</span> with a neural network <span class="math inline">\(f_\theta(z_i)\)</span> with input <span class="math inline">\(z_i\)</span> and parameters <span class="math inline">\(\theta\)</span>. And now the problem becomes</p>
+<p><span class="math display">\[\max_{\theta, \phi} \sum_i L(p(x_i; f_\theta(z_i)) p(z_i), q(z_i; g_\phi(x_i))).\]</span></p>
+<p>The objective function can be written as</p>
+<p><span class="math display">\[\sum_i \mathbb E_{q(z_i; g_\phi(x_i))} \log p(x_i; f_\theta(z_i)) - D(q(z_i; g_\phi(x_i)) || p(z_i)).\]</span></p>
+<p>The first term is called the negative reconstruction error, like the <span class="math inline">\(- \|\text{decoder}(\text{encoder}(x)) - x\|\)</span> in autoencoders, which is where the "autoencoder" in the name comes from.</p>
+<p>The second term is a regularisation term that penalises a posterior <span class="math inline">\(q(z_i)\)</span> that is very different from the prior <span class="math inline">\(p(z_i)\)</span>. We assume this term can be computed analytically.</p>
+<p>So only the first term requires computing.</p>
+<p>We can approximate the sum over <span class="math inline">\(i\)</span> in a similar fashion as SVI: pick <span class="math inline">\(j\)</span> uniformly at random from <span class="math inline">\(\{1 ... m\}\)</span> and treat the whole dataset as <span class="math inline">\(m\)</span> replicates of <span class="math inline">\(x_j\)</span>, and approximate the expectation using Monte-Carlo:</p>
+<p><span class="math display">\[U(x, \theta, \phi) := \sum_i \mathbb E_{q(z_i; g_\phi(x_i))} \log p(x_i; f_\theta(z_i)) \approx m \mathbb E_{q(z_j; g_\phi(x_j))} \log p(x_j; f_\theta(z_j)) \approx {m \over L} \sum_{\ell = 1}^L \log p(x_j; f_\theta(z_{j, \ell})),\]</span></p>
+<p>where each <span class="math inline">\(z_{j, \ell}\)</span> is sampled from <span class="math inline">\(q(z_j; g_\phi(x_j))\)</span>.</p>
+<p>But then it is not easy to approximate the gradient over <span class="math inline">\(\phi\)</span>. One can use the log trick as in policy gradients, but it has the problem of high variance. In policy gradients this is overcome by using baseline subtractions. In the AEVB paper it is tackled with the reparameterisation trick.</p>
+<p>Assume there exists a transformation <span class="math inline">\(T_\phi\)</span> and a random variable <span class="math inline">\(\epsilon\)</span> with distribution independent of <span class="math inline">\(\phi\)</span> or <span class="math inline">\(\theta\)</span>, such that <span class="math inline">\(T_\phi(x_i, \epsilon)\)</span> has distribution <span class="math inline">\(q(z_i; g_\phi(x_i))\)</span>. In this case we can rewrite <span class="math inline">\(U(x, \phi, \theta)\)</span> as</p>
+<p><span class="math display">\[\sum_i \mathbb E_{\epsilon \sim p(\epsilon)} \log p(x_i; f_\theta(T_\phi(x_i, \epsilon))).\]</span></p>
+<p>This way one can use Monte-Carlo to approximate <span class="math inline">\(\nabla_\phi U(x, \phi, \theta)\)</span>:</p>
+<p><span class="math display">\[\nabla_\phi U(x, \phi, \theta) \approx {m \over L} \sum_{\ell = 1 : L} \nabla_\phi \log p(x_j; f_\theta(T_\phi(x_j, \epsilon_\ell))),\]</span></p>
+<p>where each <span class="math inline">\(\epsilon_{\ell}\)</span> is sampled from <span class="math inline">\(p(\epsilon)\)</span>. The approximation of <span class="math inline">\(U(x, \phi, \theta)\)</span> itself can be done similarly.</p>
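+<p>A tiny numpy illustration of the reparameterisation trick: for a Gaussian <span class="math inline">\(q\)</span> and the toy objective <span class="math inline">\(\mathbb E_q z^2 = \mu^2 + \sigma^2\)</span> (an objective chosen here purely for illustration), the Monte-Carlo reparameterised gradient with respect to <span class="math inline">\(\mu\)</span> should be close to the exact value <span class="math inline">\(2\mu\)</span>:</p>

```python
import numpy as np

rng = np.random.default_rng(5)
mu, sigma = 1.5, 0.8    # parameters of q(z) = N(mu, sigma^2)

# Reparameterise z = mu + sigma * eps with eps ~ N(0, 1); the gradient of
# the per-sample objective z^2 with respect to mu is 2 * (mu + sigma * eps),
# and averaging over samples estimates the exact gradient 2 * mu.
L = 200000
eps = rng.standard_normal(L)
grad_mc = np.mean(2 * (mu + sigma * eps))

assert abs(grad_mc - 2 * mu) < 0.05
```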
+<h3 id="vae">VAE</h3>
+<p>As an example of AEVB, the paper introduces variational autoencoder (VAE), with the following instantiations:</p>
+<ul>
+<li>The prior <span class="math inline">\(p(z_i) = N(0, I)\)</span> is standard normal, thus independent of <span class="math inline">\(\theta\)</span>.</li>
+<li>The distribution <span class="math inline">\(p(x_i; \eta)\)</span> is either Gaussian or categorical.</li>
+<li>The distribution <span class="math inline">\(q(z_i; \mu, \Sigma)\)</span> is Gaussian with diagonal covariance matrix. So <span class="math inline">\(g_\phi(x_i) = (\mu_\phi(x_i), \text{diag}(\sigma^2_\phi(x_i)_{1 : d}))\)</span>. Thus in the reparameterisation trick <span class="math inline">\(\epsilon \sim N(0, I)\)</span> and <span class="math inline">\(T_\phi(x_i, \epsilon) = \epsilon \odot \sigma_\phi(x_i) + \mu_\phi(x_i)\)</span>, where <span class="math inline">\(\odot\)</span> is elementwise multiplication.</li>
+<li>The KL divergence can be easily computed analytically as <span class="math inline">\(- D(q(z_i; g_\phi(x_i)) || p(z_i)) = {d \over 2} + \sum_{j = 1 : d} \log\sigma_\phi(x_i)_j - {1 \over 2} \sum_{j = 1 : d} (\mu_\phi(x_i)_j^2 + \sigma_\phi(x_i)_j^2)\)</span>.</li>
+</ul>
+<p>With this, one can use backprop to maximise the ELBO.</p>
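+<p>The analytic KL expression in the last bullet can be checked against a Monte-Carlo estimate of <span class="math inline">\(\mathbb E_q (\log p(z) - \log q(z))\)</span>; a numpy sketch with arbitrarily chosen <span class="math inline">\(\mu\)</span> and <span class="math inline">\(\sigma\)</span>:</p>

```python
import numpy as np

rng = np.random.default_rng(6)
d = 3
mu = np.array([0.5, -1.0, 0.2])
sigma = np.array([0.9, 1.3, 0.6])

# Analytic -D(q || p) for q = N(mu, diag(sigma^2)) and p = N(0, I).
neg_kl = d / 2 + np.log(sigma).sum() \
    - 0.5 * ((mu ** 2).sum() + (sigma ** 2).sum())

# Monte-Carlo estimate of E_q [log p(z) - log q(z)].
L = 400000
z = mu + sigma * rng.standard_normal((L, d))
log_q = -0.5 * ((((z - mu) / sigma) ** 2 + np.log(2 * np.pi)).sum(1)) \
    - np.log(sigma).sum()
log_p = -0.5 * ((z ** 2 + np.log(2 * np.pi)).sum(1))
assert abs((log_p - log_q).mean() - neg_kl) < 0.05
```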
+<h3 id="fully-bayesian-aevb">Fully Bayesian AEVB</h3>
+<p>Let us now turn to the fully Bayesian version of AEVB. Again, we first recall the ELBO of the fully Bayesian mixture models:</p>
+<p><span class="math display">\[L(p(x, z, \pi, \eta; \alpha, \beta), q(z, \pi, \eta; r, \phi)) = L(p(x | z, \eta) p(z | \pi) p(\pi; \alpha) p(\eta; \beta), q(z; r) q(\eta; \phi^\eta) q(\pi; \phi^\pi)).\]</span></p>
+<p>We write <span class="math inline">\(\theta = (\pi, \eta)\)</span>, rewrite <span class="math inline">\(\alpha := (\alpha, \beta)\)</span>, <span class="math inline">\(\phi := r\)</span>, and <span class="math inline">\(\gamma := (\phi^\eta, \phi^\pi)\)</span>. Furthermore, as in the half-Bayesian version we assume <span class="math inline">\(p(z | \theta) = p(z)\)</span>, i.e. <span class="math inline">\(z\)</span> does not depend on <span class="math inline">\(\theta\)</span>. Similarly we also assume <span class="math inline">\(p(\theta; \alpha) = p(\theta)\)</span>. Now we have</p>
+<p><span class="math display">\[L(p(x, z, \theta; \alpha), q(z, \theta; \phi, \gamma)) = L(p(x | z, \theta) p(z) p(\theta), q(z; \phi) q(\theta; \gamma)).\]</span></p>
+<p>And the objective is to maximise it over <span class="math inline">\(\phi\)</span> and <span class="math inline">\(\gamma\)</span>. We no longer maximise over <span class="math inline">\(\theta\)</span>, because it is now a random variable, like <span class="math inline">\(z\)</span>. Now let us transform it to a neural network model, as in the half-Bayesian case:</p>
+<p><span class="math display">\[L\left(\left(\prod_{i = 1 : m} p(x_i; f_\theta(z_i))\right) \left(\prod_{i = 1 : m} p(z_i) \right) p(\theta), \left(\prod_{i = 1 : m} q(z_i; g_\phi(x_i))\right) q(\theta; h_\gamma(x))\right),\]</span></p>
+<p>where <span class="math inline">\(f_\theta\)</span>, <span class="math inline">\(g_\phi\)</span> and <span class="math inline">\(h_\gamma\)</span> are neural networks. Again, by separating out KL-divergence terms, the above formula becomes</p>
+<p><span class="math display">\[\sum_i \mathbb E_{q(\theta; h_\gamma(x))q(z_i; g_\phi(x_i))} \log p(x_i; f_\theta(z_i)) - \sum_i D(q(z_i; g_\phi(x_i)) || p(z_i)) - D(q(\theta; h_\gamma(x)) || p(\theta)).\]</span></p>
+<p>Again, we assume the latter two terms can be computed analytically. Using the reparameterisation trick, we write</p>
+<p><span class="math display">\[\begin{aligned}
+\theta &amp;= R_\gamma(\zeta, x) \\
+z_i &amp;= T_\phi(\epsilon, x_i)
+\end{aligned}\]</span></p>
+<p>for some transformations <span class="math inline">\(R_\gamma\)</span> and <span class="math inline">\(T_\phi\)</span> and random variables <span class="math inline">\(\zeta\)</span> and <span class="math inline">\(\epsilon\)</span> so that the output has the desired distributions.</p>
+<p>Then the first term can be written as</p>
+<p><span class="math display">\[\mathbb E_{\zeta, \epsilon} \log p(x_i; f_{R_\gamma(\zeta, x)} (T_\phi(\epsilon, x_i))),\]</span></p>
+<p>so that the gradients can be computed accordingly.</p>
+<p>Again, one may use Monte-Carlo to approximate this expectation.</p>
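+<p>As a toy illustration of these two steps (a minimal numpy sketch, not the model in this post: it assumes a one-dimensional Gaussian <span class="math inline">\(q\)</span> and a decoder with <span class="math inline">\(p(x; f(z)) = N(x; z, 1)\)</span>), we can reparameterise and then Monte-Carlo the first term:</p>

```python
import numpy as np

rng = np.random.default_rng(0)

def T(mu, sigma, eps):
    """Gaussian reparameterisation: z = mu + sigma * eps with eps ~ N(0, 1)."""
    return mu + sigma * eps

def log_p(x, z):
    """Toy decoder log-likelihood: log N(x; z, 1), up to an additive constant."""
    return -0.5 * (x - z) ** 2

# Monte-Carlo estimate of E_{q(z)} log p(x; f(z)) with q = N(mu, sigma^2)
mu, sigma, x = 0.3, 1.2, 1.0
eps = rng.standard_normal(100_000)
estimate = log_p(x, T(mu, sigma, eps)).mean()

# closed form for this toy case: -((x - mu)^2 + sigma^2) / 2
exact = -0.5 * ((x - mu) ** 2 + sigma ** 2)
print(estimate, exact)
```

+<p>In the actual model, <span class="math inline">\(\mu\)</span> and <span class="math inline">\(\sigma\)</span> would be outputs of the encoder networks, and the gradient flows through <code>T</code>.</p>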
+<h2 id="references">References</h2>
+<ul>
+<li>Attias, Hagai. "A variational Bayesian framework for graphical models." In Advances in neural information processing systems, pp. 209-215. 2000.</li>
+<li>Bishop, Christopher M. Pattern recognition and machine learning. Springer. 2006.</li>
+<li>Blei, David M., and Michael I. Jordan. “Variational Inference for Dirichlet Process Mixtures.” Bayesian Analysis 1, no. 1 (March 2006): 121–43. <a href="https://doi.org/10.1214/06-BA104" class="uri">https://doi.org/10.1214/06-BA104</a>.</li>
+<li>Blei, David M., Andrew Y. Ng, and Michael I. Jordan. “Latent Dirichlet Allocation.” Journal of Machine Learning Research 3, no. Jan (2003): 993–1022.</li>
+<li>Hofmann, Thomas. “Latent Semantic Models for Collaborative Filtering.” ACM Transactions on Information Systems 22, no. 1 (January 1, 2004): 89–115. <a href="https://doi.org/10.1145/963770.963774" class="uri">https://doi.org/10.1145/963770.963774</a>.</li>
+<li>Hofmann, Thomas. "Learning the similarity of documents: An information-geometric approach to document retrieval and categorization." In Advances in neural information processing systems, pp. 914-920. 2000.</li>
+<li>Hoffman, Matt, David M. Blei, Chong Wang, and John Paisley. “Stochastic Variational Inference.” ArXiv:1206.7051 [Cs, Stat], June 29, 2012. <a href="http://arxiv.org/abs/1206.7051" class="uri">http://arxiv.org/abs/1206.7051</a>.</li>
+<li>Kingma, Diederik P., and Max Welling. “Auto-Encoding Variational Bayes.” ArXiv:1312.6114 [Cs, Stat], December 20, 2013. <a href="http://arxiv.org/abs/1312.6114" class="uri">http://arxiv.org/abs/1312.6114</a>.</li>
+<li>Kurihara, Kenichi, Max Welling, and Nikos Vlassis. "Accelerated variational Dirichlet process mixtures." In Advances in neural information processing systems, pp. 761-768. 2007.</li>
+<li>Sudderth, Erik Blaine. "Graphical models for visual object recognition and tracking." PhD diss., Massachusetts Institute of Technology, 2006.</li>
+</ul>
+</body>
+</html>
+
+ </div>
+ <section id="isso-thread"></section>
+ </div>
+ </body>
+</html>
diff --git a/site-from-md/posts/2019-03-13-a-tail-of-two-densities.html b/site-from-md/posts/2019-03-13-a-tail-of-two-densities.html
new file mode 100644
index 0000000..8f6a108
--- /dev/null
+++ b/site-from-md/posts/2019-03-13-a-tail-of-two-densities.html
@@ -0,0 +1,542 @@
+<!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <meta name="dcterms.date" content="2019-03-13" />
+ <title>A Tail of Two Densities</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/MathJax.js?config=TeX-AMS_CHTML-full" type="text/javascript"></script>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<header id="title-block-header">
+<h1 class="title">A Tail of Two Densities</h1>
+<p class="date">2019-03-13</p>
+</header>
+<nav id="TOC">
+<ul>
+<li><a href="#the-gist-of-differential-privacy">The gist of differential privacy</a></li>
+<li><a href="#epsilon-dp"><span class="math inline">\(\epsilon\)</span>-dp</a></li>
+<li><a href="#approximate-differential-privacy">Approximate differential privacy</a><ul>
+<li><a href="#indistinguishability">Indistinguishability</a></li>
+<li><a href="#back-to-approximate-differential-privacy">Back to approximate differential privacy</a></li>
+</ul></li>
+<li><a href="#composition-theorems">Composition theorems</a></li>
+<li><a href="#subsampling">Subsampling</a></li>
+<li><a href="#references">References</a></li>
+</ul>
+</nav>
+<p>This is Part 1 of a two-part post where I give an introduction to differential privacy, which is a study of tail bounds of the divergence between probability measures, with the end goal of applying it to stochastic gradient descent.</p>
+<p>I start with the definition of <span class="math inline">\(\epsilon\)</span>-differential privacy (corresponding to the max divergence), followed by <span class="math inline">\((\epsilon, \delta)\)</span>-differential privacy (a.k.a. approximate differential privacy, corresponding to the <span class="math inline">\(\delta\)</span>-approximate max divergence). I show a characterisation of <span class="math inline">\((\epsilon, \delta)\)</span>-differential privacy as conditioned <span class="math inline">\(\epsilon\)</span>-differential privacy. As examples, I illustrate <span class="math inline">\(\epsilon\)</span>-dp with the Laplace mechanism and, using some common tail bounds, approximate dp with the Gaussian mechanism.</p>
+<p>Then I continue to show the effect of combinatorial and sequential compositions of randomised queries (called mechanisms) on privacy by stating and proving the composition theorems for differential privacy, as well as the effect of mixing mechanisms, by presenting the subsampling theorem (a.k.a. amplification theorem).</p>
+<p>In <a href="/posts/2019-03-14-great-but-manageable-expectations.html">Part 2</a>, I discuss Rényi differential privacy, which corresponds to the Rényi divergence and studies the moment generating functions of the divergence between probability measures in order to derive tail bounds.</p>
+<p>Like in Part 1, I prove a composition theorem and a subsampling theorem.</p>
+<p>I also attempt to reproduce a seemingly better moment bound for the Gaussian mechanism with subsampling, with one intermediate step which I am not able to prove.</p>
+<p>After that I explain the Tensorflow implementation of differential privacy in its <a href="https://github.com/tensorflow/privacy/tree/master/privacy">Privacy</a> module, which focuses on the differentially private stochastic gradient descent algorithm (DP-SGD).</p>
+<p>Finally I use the results from both Part 1 and Part 2 to obtain some privacy guarantees for composed subsampling queries in general, and for DP-SGD in particular. I also compare these privacy guarantees.</p>
+<p><strong>Acknowledgement</strong>. I would like to thank <a href="https://stockholm.ai">Stockholm AI</a> for introducing me to the subject of differential privacy. Thanks to (in chronological order) Reynaldo Boulogne, Martin Abedi, Ilya Mironov, Kurt Johansson, Mark Bun, Salil Vadhan, Jonathan Ullman, Yuanyuan Xu and Yiting Li for communication and discussions. The research was done while working at <a href="https://www.kth.se/en/sci/institutioner/math">KTH Department of Mathematics</a>.</p>
+<p><em>If you are confused by any notations, ask me or try <a href="/notations.html">this</a>. This post (including both Part 1 and Part 2) is licensed under <a href="https://creativecommons.org/licenses/by-sa/4.0/">CC BY-SA</a> and <a href="https://www.gnu.org/licenses/fdl.html">GNU FDL</a>.</em></p>
+<h2 id="the-gist-of-differential-privacy">The gist of differential privacy</h2>
+<p>If you only have one minute, here is what differential privacy is about:</p>
+<p>Let <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> be two probability densities. We define the <em>divergence variable</em> of <span class="math inline">\((p, q)\)</span> to be</p>
+<p><span class="math display">\[L(p || q) := \log {p(\xi) \over q(\xi)}\]</span></p>
+<p>where <span class="math inline">\(\xi\)</span> is a random variable distributed according to <span class="math inline">\(p\)</span>.</p>
+<p>Roughly speaking, differential privacy is the study of the tail bound of <span class="math inline">\(L(p || q)\)</span>: for certain <span class="math inline">\(p\)</span>s and <span class="math inline">\(q\)</span>s, and for <span class="math inline">\(\epsilon &gt; 0\)</span>, find <span class="math inline">\(\delta(\epsilon)\)</span> such that</p>
+<p><span class="math display">\[\mathbb P(L(p || q) &gt; \epsilon) &lt; \delta(\epsilon),\]</span></p>
+<p>where <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> are the laws of the outputs of a randomised function on two very similar inputs. Moreover, to make matters even simpler, only three situations need to be considered:</p>
+<ol type="1">
+<li>(General case) <span class="math inline">\(q\)</span> is in the form of <span class="math inline">\(q(y) = p(y + \Delta)\)</span> for some bounded constant <span class="math inline">\(\Delta\)</span>.</li>
+<li>(Compositions) <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> are combinatorial or sequential compositions of some simpler <span class="math inline">\(p_i\)</span>’s and <span class="math inline">\(q_i\)</span>’s respectively.</li>
+<li>(Subsampling) <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> are mixtures / averages of some simpler <span class="math inline">\(p_i\)</span>’s and <span class="math inline">\(q_i\)</span>’s respectively.</li>
+</ol>
+<p>In applications, the inputs are databases and the randomised functions are queries with added noise, and the tail bounds give privacy guarantees. When it comes to gradient descent, the input is the training dataset, the query updates the parameters, and privacy is achieved by adding noise to the gradients.</p>
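+<p>As a preview of what such a tail bound looks like (a minimal sketch with illustrative numbers, shifting a Gaussian as in Situation 1): for <span class="math inline">\(p = N(0, \sigma^2)\)</span> and <span class="math inline">\(q = N(\Delta, \sigma^2)\)</span>, the divergence variable <span class="math inline">\(L(p || q)\)</span> is itself Gaussian, so its tail can be both estimated by simulation and computed exactly:</p>

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

# p = N(0, sigma^2), q = N(delta, sigma^2); with xi ~ p, the divergence variable
# L(p || q) = log p(xi) / q(xi) = (delta^2 - 2 * delta * xi) / (2 * sigma^2)
# is Gaussian with mean delta^2 / (2 sigma^2) and variance delta^2 / sigma^2.
sigma, delta, eps = 1.0, 1.0, 1.5

xi = sigma * rng.standard_normal(1_000_000)
L = (delta**2 - 2 * delta * xi) / (2 * sigma**2)
tail_mc = (L > eps).mean()

# exact tail P(L > eps) from the Gaussian law of L
mean, sd = delta**2 / (2 * sigma**2), delta / sigma
tail_exact = 0.5 * (1 - erf((eps - mean) / (sd * sqrt(2))))
print(tail_mc, tail_exact)
```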
+<p>Now if you have an hour...</p>
+<h2 id="epsilon-dp"><span class="math inline">\(\epsilon\)</span>-dp</h2>
+<p><strong>Definition (Mechanisms)</strong>. Let <span class="math inline">\(X\)</span> be a space with a metric <span class="math inline">\(d: X \times X \to \mathbb N\)</span>. A <em>mechanism</em> <span class="math inline">\(M\)</span> is a function that takes <span class="math inline">\(x \in X\)</span> as input and outputs a random variable on some space <span class="math inline">\(Y\)</span>.</p>
+<p>In this post, <span class="math inline">\(X = Z^m\)</span> is the space of datasets of <span class="math inline">\(m\)</span> rows for some integer <span class="math inline">\(m\)</span>, where each item resides in <span class="math inline">\(Z\)</span>. In this case the distance <span class="math inline">\(d(x, x&#39;) := \#\{i: x_i \neq x&#39;_i\}\)</span> is the number of rows that differ between <span class="math inline">\(x\)</span> and <span class="math inline">\(x&#39;\)</span>.</p>
+<p>Normally we have a query <span class="math inline">\(f: X \to Y\)</span>, and construct the mechanism <span class="math inline">\(M\)</span> from <span class="math inline">\(f\)</span> by adding noise:</p>
+<p><span class="math display">\[M(x) := f(x) + \text{noise}.\]</span></p>
+<p>Later, we will also consider mechanisms constructed from composition or mixture of other mechanisms.</p>
+<p>In this post <span class="math inline">\(Y = \mathbb R^d\)</span> for some <span class="math inline">\(d\)</span>.</p>
+<p><strong>Definition (Sensitivity)</strong>. Let <span class="math inline">\(f: X \to \mathbb R^d\)</span> be a function. The <em>sensitivity</em> <span class="math inline">\(S_f\)</span> of <span class="math inline">\(f\)</span> is defined as</p>
+<p><span class="math display">\[S_f := \sup_{x, x&#39; \in X: d(x, x&#39;) = 1} \|f(x) - f(x&#39;)\|_2,\]</span></p>
+<p>where <span class="math inline">\(\|y\|_2 = \sqrt{y_1^2 + ... + y_d^2}\)</span> is the <span class="math inline">\(\ell^2\)</span>-norm.</p>
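+<p>For instance (a toy brute-force check, assuming a counting query on binary rows), the sensitivity of <span class="math inline">\(f(x) = \#\{i: x_i = 1\}\)</span> is <span class="math inline">\(1\)</span>, since neighbouring datasets differ in exactly one row:</p>

```python
import itertools

# brute-force the sensitivity sup_{d(x, x') = 1} |f(x) - f(x')|
# for the counting query f(x) = #{i : x_i = 1} over X = {0, 1}^m
m = 4
f = lambda x: sum(x)

S_f = 0
for x in itertools.product([0, 1], repeat=m):
    for i in range(m):  # flip row i to get a neighbouring dataset
        xp = x[:i] + (1 - x[i],) + x[i + 1:]
        S_f = max(S_f, abs(f(x) - f(xp)))
print(S_f)
```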
+<p><strong>Definition (Differential Privacy)</strong>. A mechanism <span class="math inline">\(M\)</span> is called <span class="math inline">\(\epsilon\)</span><em>-differentially private</em> (<span class="math inline">\(\epsilon\)</span>-dp) if it satisfies the following condition: for all <span class="math inline">\(x, x&#39; \in X\)</span> with <span class="math inline">\(d(x, x&#39;) = 1\)</span>, and for all measurable sets <span class="math inline">\(S \subset \mathbb R^d\)</span>,</p>
+<p><span class="math display">\[\mathbb P(M(x) \in S) \le e^\epsilon \mathbb P(M(x&#39;) \in S). \qquad (1)\]</span></p>
+<p>An example of <span class="math inline">\(\epsilon\)</span>-dp mechanism is the Laplace mechanism.</p>
+<p><strong>Definition</strong>. The Laplace distribution over <span class="math inline">\(\mathbb R\)</span> with parameter <span class="math inline">\(b &gt; 0\)</span> has probability density function</p>
+<p><span class="math display">\[f_{\text{Lap}(b)}(x) = {1 \over 2 b} e^{- {|x| \over b}}.\]</span></p>
+<p><strong>Definition</strong>. Let <span class="math inline">\(d = 1\)</span>. The Laplace mechanism is defined by</p>
+<p><span class="math display">\[M(x) = f(x) + \text{Lap}(b).\]</span></p>
+<p><strong>Claim</strong>. The Laplace mechanism with</p>
+<p><span class="math display">\[b \ge \epsilon^{-1} S_f \qquad (1.5)\]</span></p>
+<p>is <span class="math inline">\(\epsilon\)</span>-dp.</p>
+<p><strong>Proof</strong>. Quite straightforward. Let <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> be the laws of <span class="math inline">\(M(x)\)</span> and <span class="math inline">\(M(x&#39;)\)</span> respectively.</p>
+<p><span class="math display">\[{p (y) \over q (y)} = {f_{\text{Lap}(b)} (y - f(x)) \over f_{\text{Lap}(b)} (y - f(x&#39;))} = \exp(b^{-1} (|y - f(x&#39;)| - |y - f(x)|))\]</span></p>
+<p>Using the triangle inequality <span class="math inline">\(|A| - |B| \le |A - B|\)</span> on the right-hand side, we have</p>
+<p><span class="math display">\[{p (y) \over q (y)} \le \exp(b^{-1} (|f(x) - f(x&#39;)|)) \le \exp(\epsilon)\]</span></p>
+<p>where in the last step we use the condition (1.5). <span class="math inline">\(\square\)</span></p>
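+<p>A quick numerical sanity check of this claim (a sketch with made-up values <span class="math inline">\(f(x) = 3\)</span>, <span class="math inline">\(f(x&#39;) = 4\)</span>, <span class="math inline">\(S_f = 1\)</span>): with <span class="math inline">\(b = S_f / \epsilon\)</span>, the density ratio never exceeds <span class="math inline">\(e^\epsilon\)</span>:</p>

```python
import numpy as np

def laplace_density(y, loc, b):
    # density of Lap(b) centred at loc
    return np.exp(-np.abs(y - loc) / b) / (2 * b)

eps, S_f = 0.5, 1.0  # hypothetical privacy budget and sensitivity
b = S_f / eps        # condition (1.5) with equality
fx, fxp = 3.0, 4.0   # f(x), f(x') for neighbouring datasets, |fx - fxp| <= S_f

ys = np.linspace(-20.0, 20.0, 2001)
ratio = laplace_density(ys, fx, b) / laplace_density(ys, fxp, b)
print(ratio.max(), np.exp(eps))  # the ratio is capped at e^eps
```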
+<h2 id="approximate-differential-privacy">Approximate differential privacy</h2>
+<p>Unfortunately, <span class="math inline">\(\epsilon\)</span>-dp does not apply to the most commonly used noise, the Gaussian noise. To fix this, we need to relax the definition a bit.</p>
+<p><strong>Definition</strong>. A mechanism <span class="math inline">\(M\)</span> is said to be <span class="math inline">\((\epsilon, \delta)\)</span><em>-differentially private</em> if for all <span class="math inline">\(x, x&#39; \in X\)</span> with <span class="math inline">\(d(x, x&#39;) = 1\)</span> and for all measurable sets <span class="math inline">\(S \subset \mathbb R^d\)</span>,</p>
+<p><span class="math display">\[\mathbb P(M(x) \in S) \le e^\epsilon \mathbb P(M(x&#39;) \in S) + \delta. \qquad (2)\]</span></p>
+<p>Immediately we see that the <span class="math inline">\((\epsilon, \delta)\)</span>-dp is meaningful only if <span class="math inline">\(\delta &lt; 1\)</span>.</p>
+<h3 id="indistinguishability">Indistinguishability</h3>
+<p>To understand <span class="math inline">\((\epsilon, \delta)\)</span>-dp, it is helpful to study <span class="math inline">\((\epsilon, \delta)\)</span>-indistinguishability.</p>
+<p><strong>Definition</strong>. Two probability measures <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> on the same space are called <span class="math inline">\((\epsilon, \delta)\)</span><em>-ind(istinguishable)</em> if for all measurable sets <span class="math inline">\(S\)</span>:</p>
+<p><span class="math display">\[\begin{aligned}
+p(S) \le e^\epsilon q(S) + \delta, \qquad (3) \\
+q(S) \le e^\epsilon p(S) + \delta. \qquad (4)
+\end{aligned}\]</span></p>
+<p>As before, we also say that random variables <span class="math inline">\(\xi\)</span> and <span class="math inline">\(\eta\)</span> are <span class="math inline">\((\epsilon, \delta)\)</span>-ind if their laws are <span class="math inline">\((\epsilon, \delta)\)</span>-ind. When <span class="math inline">\(\delta = 0\)</span>, we call it <span class="math inline">\(\epsilon\)</span>-ind.</p>
+<p>Immediately we have</p>
+<p><strong>Claim 0</strong>. <span class="math inline">\(M\)</span> is <span class="math inline">\((\epsilon, \delta)\)</span>-dp (resp. <span class="math inline">\(\epsilon\)</span>-dp) iff <span class="math inline">\(M(x)\)</span> and <span class="math inline">\(M(x&#39;)\)</span> are <span class="math inline">\((\epsilon, \delta)\)</span>-ind (resp. <span class="math inline">\(\epsilon\)</span>-ind) for all <span class="math inline">\(x\)</span> and <span class="math inline">\(x&#39;\)</span> with distance <span class="math inline">\(1\)</span>.</p>
+<p><strong>Definition (Divergence Variable)</strong>. Let <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> be two probability measures, and let <span class="math inline">\(\xi\)</span> be a random variable distributed according to <span class="math inline">\(p\)</span>. We define a random variable <span class="math inline">\(L(p || q)\)</span> by</p>
+<p><span class="math display">\[L(p || q) := \log {p(\xi) \over q(\xi)},\]</span></p>
+<p>and call it the <em>divergence variable</em> of <span class="math inline">\((p, q)\)</span>.</p>
+<p>One interesting and readily verifiable fact is</p>
+<p><span class="math display">\[\mathbb E L(p || q) = D(p || q)\]</span></p>
+<p>where <span class="math inline">\(D\)</span> is the KL-divergence.</p>
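+<p>This fact is easy to check numerically (a minimal sketch for two unit-variance Gaussians, where <span class="math inline">\(D(N(0, 1) || N(1, 1)) = 1 / 2\)</span>):</p>

```python
import numpy as np

rng = np.random.default_rng(0)

# p = N(0, 1), q = N(1, 1): the KL-divergence D(p || q) equals 1/2,
# and the mean of the divergence variable L(p || q) should agree with it
xi = rng.standard_normal(1_000_000)     # xi ~ p
L = -0.5 * xi**2 + 0.5 * (xi - 1.0)**2  # log p(xi) - log q(xi)
E_L = L.mean()
print(E_L)
```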
+<p><strong>Claim 1</strong>. If</p>
+<p><span class="math display">\[\begin{aligned}
+\mathbb P(L(p || q) \le \epsilon) &amp;\ge 1 - \delta, \qquad(5) \\
+\mathbb P(L(q || p) \le \epsilon) &amp;\ge 1 - \delta
+\end{aligned}\]</span></p>
+<p>then <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> are <span class="math inline">\((\epsilon, \delta)\)</span>-ind.</p>
+<p><strong>Proof</strong>. We verify (3), and (4) can be shown in the same way. Let <span class="math inline">\(A := \{y \in Y: \log {p(y) \over q(y)} &gt; \epsilon\}\)</span>, then by (5) we have</p>
+<p><span class="math display">\[p(A) \le \delta.\]</span></p>
+<p>So</p>
+<p><span class="math display">\[p(S) = p(S \cap A) + p(S \setminus A) \le \delta + e^\epsilon q(S \setminus A) \le \delta + e^\epsilon q(S).\]</span></p>
+<p><span class="math inline">\(\square\)</span></p>
+<p>This Claim translates differential privacy to the tail bound of divergence variables, and for the rest of this post all dp results are obtained by estimating this tail bound.</p>
+<p>In the following we discuss the converse of Claim 1. The discussions are rather technical, and readers can skip to the next subsection on first reading.</p>
+<p>The converse of Claim 1 is not true.</p>
+<p><strong>Claim 2</strong>. There exists <span class="math inline">\(\epsilon, \delta &gt; 0\)</span>, and <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> that are <span class="math inline">\((\epsilon, \delta)\)</span>-ind, such that</p>
+<p><span class="math display">\[\begin{aligned}
+\mathbb P(L(p || q) \le \epsilon) &amp;&lt; 1 - \delta, \\
+\mathbb P(L(q || p) \le \epsilon) &amp;&lt; 1 - \delta
+\end{aligned}\]</span></p>
+<p><strong>Proof</strong>. Here is an example. Let <span class="math inline">\(Y = \{0, 1\}\)</span>, and <span class="math inline">\(p(0) = q(1) = 2 / 5\)</span> and <span class="math inline">\(p(1) = q(0) = 3 / 5\)</span>. Then it is not hard to verify that <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> are <span class="math inline">\((\log {4 \over 3}, {1 \over 3})\)</span>-ind: just check (3) for all four possible <span class="math inline">\(S \subset Y\)</span>, and (4) holds by symmetry. On the other hand,</p>
+<p><span class="math display">\[\mathbb P(L(p || q) \le \log {4 \over 3}) = \mathbb P(L(q || p) \le \log {4 \over 3}) = {2 \over 5} &lt; {2 \over 3}.\]</span></p>
+<p><span class="math inline">\(\square\)</span></p>
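+<p>The arithmetic in this counterexample can be checked mechanically (a small sketch enumerating all four subsets of <span class="math inline">\(Y\)</span>):</p>

```python
from math import log

Y = [0, 1]
p = {0: 2 / 5, 1: 3 / 5}
q = {0: 3 / 5, 1: 2 / 5}
eps, delta = log(4 / 3), 1 / 3

def measure(mu, S):
    return sum(mu[y] for y in S)

# (eps, delta)-ind: conditions (3) and (4) hold on every subset of Y
subsets = [(), (0,), (1,), (0, 1)]
ind = all(measure(p, S) <= (4 / 3) * measure(q, S) + delta and
          measure(q, S) <= (4 / 3) * measure(p, S) + delta
          for S in subsets)

# but the tail condition of Claim 1 fails:
# L(p || q) <= eps only at y = 0, so P(L(p || q) <= eps) = p(0) = 2/5 < 1 - delta
tail = measure(p, [y for y in Y if log(p[y] / q[y]) <= eps])
print(ind, tail, 1 - delta)
```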
+<p>A weaker version of the converse of Claim 1 is true (Kasiviswanathan-Smith 2015), though:</p>
+<p><strong>Claim 3</strong>. Let <span class="math inline">\(\alpha &gt; 1\)</span>. If <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> are <span class="math inline">\((\epsilon, \delta)\)</span>-ind, then</p>
+<p><span class="math display">\[\mathbb P(L(p || q) &gt; \alpha \epsilon) &lt; {1 \over 1 - \exp((1 - \alpha) \epsilon)} \delta.\]</span></p>
+<p><strong>Proof</strong>. Define</p>
+<p><span class="math display">\[S = \{y: p(y) &gt; e^{\alpha \epsilon} q(y)\}.\]</span></p>
+<p>Then we have</p>
+<p><span class="math display">\[e^{\alpha \epsilon} q(S) &lt; p(S) \le e^\epsilon q(S) + \delta,\]</span></p>
+<p>where the first inequality is due to the definition of <span class="math inline">\(S\)</span>, and the second due to the <span class="math inline">\((\epsilon, \delta)\)</span>-ind. Therefore</p>
+<p><span class="math display">\[q(S) \le {\delta \over e^{\alpha \epsilon} - e^\epsilon}.\]</span></p>
+<p>Using the <span class="math inline">\((\epsilon, \delta)\)</span>-ind again we have</p>
+<p><span class="math display">\[p(S) \le e^\epsilon q(S) + \delta \le {1 \over 1 - e^{(1 - \alpha) \epsilon}} \delta.\]</span></p>
+<p><span class="math inline">\(\square\)</span></p>
+<p>This can be quite bad if <span class="math inline">\(\epsilon\)</span> is small.</p>
+<p>To prove the composition theorems in the next section, we need a condition better than that in Claim 1 so that we can go back and forth between indistinguishability and such condition. In other words, we need a <em>characterisation</em> of indistinguishability.</p>
+<p>Let us take a careful look at the condition in Claim 1 and call it <strong>C1</strong>:</p>
+<p><strong>C1</strong>. <span class="math inline">\(\mathbb P(L(p || q) \le \epsilon) \ge 1 - \delta\)</span> and <span class="math inline">\(\mathbb P(L(q || p) \le \epsilon) \ge 1 - \delta\)</span></p>
+<p>It is equivalent to</p>
+<p><strong>C2</strong>. there exist events <span class="math inline">\(A, B \subset Y\)</span> with probabilities <span class="math inline">\(p(A)\)</span> and <span class="math inline">\(q(B)\)</span> at least <span class="math inline">\(1 - \delta\)</span> such that <span class="math inline">\(\log p(y) - \log q(y) \le \epsilon\)</span> for all <span class="math inline">\(y \in A\)</span> and <span class="math inline">\(\log q(y) - \log p(y) \le \epsilon\)</span> for all <span class="math inline">\(y \in B\)</span>.</p>
+<p>A similar-looking condition to <strong>C2</strong> is the following:</p>
+<p><strong>C3</strong>. Let <span class="math inline">\(\Omega\)</span> be the <a href="https://en.wikipedia.org/wiki/Probability_space#Definition">underlying probability space</a>. There exist two events <span class="math inline">\(E, F \subset \Omega\)</span> with <span class="math inline">\(\mathbb P(E), \mathbb P(F) \ge 1 - \delta\)</span>, such that <span class="math inline">\(|\log p_{|E}(y) - \log q_{|F}(y)| \le \epsilon\)</span> for all <span class="math inline">\(y \in Y\)</span>.</p>
+<p>Here <span class="math inline">\(p_{|E}\)</span> (resp. <span class="math inline">\(q_{|F}\)</span>) is <span class="math inline">\(p\)</span> (resp. <span class="math inline">\(q\)</span>) conditioned on event <span class="math inline">\(E\)</span> (resp. <span class="math inline">\(F\)</span>).</p>
+<p><strong>Remark</strong>. Note that the events in <strong>C2</strong> and <strong>C3</strong> are in different spaces, and therefore we can not write <span class="math inline">\(p_{|E}(S)\)</span> as <span class="math inline">\(p(S | E)\)</span> or <span class="math inline">\(q_{|F}(S)\)</span> as <span class="math inline">\(q(S | F)\)</span>. In fact, if we let <span class="math inline">\(E\)</span> and <span class="math inline">\(F\)</span> in <strong>C3</strong> be subsets of <span class="math inline">\(Y\)</span> with <span class="math inline">\(p(E), q(F) \ge 1 - \delta\)</span> and assume <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> have the same supports, then <strong>C3</strong> degenerates to a stronger condition than <strong>C2</strong>. Indeed, in this case <span class="math inline">\(p_E(y) = p(y) 1_{y \in E}\)</span> and <span class="math inline">\(q_F(y) = q(y) 1_{y \in F}\)</span>, and so <span class="math inline">\(p_E(y) \le e^\epsilon q_F(y)\)</span> forces <span class="math inline">\(E \subset F\)</span>. We also obtain <span class="math inline">\(F \subset E\)</span> in the same way. This gives us <span class="math inline">\(E = F\)</span>, and <strong>C3</strong> becomes <strong>C2</strong> with <span class="math inline">\(A = B = E = F\)</span>.</p>
+<p>As it turns out, <strong>C3</strong> is the condition we need.</p>
+<p><strong>Claim 4</strong>. Two probability measures <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> are <span class="math inline">\((\epsilon, \delta)\)</span>-ind if and only if <strong>C3</strong> holds.</p>
+<p><strong>Proof</strong> (Murtagh-Vadhan 2018). The "if" direction is proved in the same way as Claim 1. Without loss of generality we may assume <span class="math inline">\(\mathbb P(E) = \mathbb P(F) \ge 1 - \delta\)</span>. To see this, suppose <span class="math inline">\(F\)</span> has higher probability than <span class="math inline">\(E\)</span>; then we can substitute <span class="math inline">\(F\)</span> with a subset of <span class="math inline">\(F\)</span> that has the same probability as <span class="math inline">\(E\)</span> (with a possible enlargement of the probability space).</p>
+<p>Let <span class="math inline">\(\xi \sim p\)</span> and <span class="math inline">\(\eta \sim q\)</span> be two independent random variables, then</p>
+<p><span class="math display">\[\begin{aligned}
+p(S) &amp;= \mathbb P(\xi \in S | E) \mathbb P(E) + \mathbb P(\xi \in S; E^c) \\
+&amp;\le e^\epsilon \mathbb P(\eta \in S | F) \mathbb P(E) + \delta \\
+&amp;= e^\epsilon \mathbb P(\eta \in S | F) \mathbb P(F) + \delta\\
+&amp;\le e^\epsilon q(S) + \delta.
+\end{aligned}\]</span></p>
+<p>The "only-if" direction is more involved.</p>
+<p>We construct events <span class="math inline">\(E\)</span> and <span class="math inline">\(F\)</span> by constructing functions <span class="math inline">\(e, f: Y \to [0, \infty)\)</span> satisfying the following conditions:</p>
+<ol type="1">
+<li><span class="math inline">\(0 \le e(y) \le p(y)\)</span> and <span class="math inline">\(0 \le f(y) \le q(y)\)</span> for all <span class="math inline">\(y \in Y\)</span>.</li>
+<li><span class="math inline">\(|\log e(y) - \log f(y)| \le \epsilon\)</span> for all <span class="math inline">\(y \in Y\)</span>.</li>
+<li><span class="math inline">\(e(Y), f(Y) \ge 1 - \delta\)</span>.</li>
+<li><span class="math inline">\(e(Y) = f(Y)\)</span>.</li>
+</ol>
+<p>Here for a set <span class="math inline">\(S \subset Y\)</span>, <span class="math inline">\(e(S) := \int_S e(y) dy\)</span>, and the same goes for <span class="math inline">\(f(S)\)</span>.</p>
+<p>Let <span class="math inline">\(\xi \sim p\)</span> and <span class="math inline">\(\eta \sim q\)</span>. Then we define <span class="math inline">\(E\)</span> and <span class="math inline">\(F\)</span> by</p>
+<p><span class="math display">\[\mathbb P(E | \xi = y) = e(y) / p(y) \\
+\mathbb P(F | \eta = y) = f(y) / q(y).\]</span></p>
+<p><strong>Remark inside proof</strong>. This can seem a bit confusing. Intuitively, we can think of it this way when <span class="math inline">\(Y\)</span> is finite: Recall a random variable on <span class="math inline">\(Y\)</span> is a function from the probability space <span class="math inline">\(\Omega\)</span> to <span class="math inline">\(Y\)</span>. Let event <span class="math inline">\(G_y \subset \Omega\)</span> be defined as <span class="math inline">\(G_y = \xi^{-1} (y)\)</span>. We cut <span class="math inline">\(G_y\)</span> into the disjoint union of <span class="math inline">\(E_y\)</span> and <span class="math inline">\(G_y \setminus E_y\)</span> such that <span class="math inline">\(\mathbb P(E_y) = e(y)\)</span>. Then <span class="math inline">\(E = \bigcup_{y \in Y} E_y\)</span>. So <span class="math inline">\(e(y)\)</span> can be seen as the "density" of <span class="math inline">\(E\)</span>.</p>
+<p>Indeed, given <span class="math inline">\(E\)</span> and <span class="math inline">\(F\)</span> defined this way, we have</p>
+<p><span class="math display">\[p_E(y) = {e(y) \over e(Y)} \le {\exp(\epsilon) f(y) \over e(Y)} = {\exp(\epsilon) f(y) \over f(Y)} = \exp(\epsilon) q_F(y).\]</span></p>
+<p>and</p>
+<p><span class="math display">\[\mathbb P(E) = \int \mathbb P(E | \xi = y) p(y) dy = e(Y) \ge 1 - \delta,\]</span></p>
+<p>and the same goes for <span class="math inline">\(\mathbb P(F)\)</span>.</p>
+<p>What remains is to construct <span class="math inline">\(e(y)\)</span> and <span class="math inline">\(f(y)\)</span> satisfying the four conditions.</p>
+<p>Like in the proof of Claim 1, let <span class="math inline">\(S, T \subset Y\)</span> be defined as</p>
+<p><span class="math display">\[\begin{aligned}
+S := \{y: p(y) &gt; \exp(\epsilon) q(y)\},\\
+T := \{y: q(y) &gt; \exp(\epsilon) p(y)\}.
+\end{aligned}\]</span></p>
+<p>Let</p>
+<p><span class="math display">\[\begin{aligned}
+e(y) &amp;:= \exp(\epsilon) q(y) 1_{y \in S} + p(y) 1_{y \notin S}\\
+f(y) &amp;:= \exp(\epsilon) p(y) 1_{y \in T} + q(y) 1_{y \notin T}. \qquad (6)
+\end{aligned}\]</span></p>
+<p>By checking them on the three disjoint subsets <span class="math inline">\(S\)</span>, <span class="math inline">\(T\)</span>, <span class="math inline">\((S \cup T)^c\)</span>, it is not hard to verify that the <span class="math inline">\(e(y)\)</span> and <span class="math inline">\(f(y)\)</span> constructed this way satisfy the first two conditions. They also satisfy the third condition:</p>
+<p><span class="math display">\[\begin{aligned}
+e(Y) &amp;= 1 - (p(S) - \exp(\epsilon) q(S)) \ge 1 - \delta, \\
+f(Y) &amp;= 1 - (q(T) - \exp(\epsilon) p(T)) \ge 1 - \delta.
+\end{aligned}\]</span></p>
+<p>If <span class="math inline">\(e(Y) = f(Y)\)</span> then we are done. Otherwise, without loss of generality, assume <span class="math inline">\(e(Y) &lt; f(Y)\)</span>; then all that remains is to reduce the value of <span class="math inline">\(f(y)\)</span> while preserving Conditions 1, 2 and 3, until <span class="math inline">\(f(Y) = e(Y)\)</span>.</p>
+<p>As it turns out, this can be achieved by reducing <span class="math inline">\(f(y)\)</span> on the set <span class="math inline">\(\{y \in Y: q(y) &gt; p(y)\}\)</span>. To see this, let us rename the <span class="math inline">\(f(y)\)</span> defined in (6) <span class="math inline">\(f_+(y)\)</span>, and construct <span class="math inline">\(f_-(y)\)</span> by</p>
+<p><span class="math display">\[f_-(y) := p(y) 1_{y \in T} + (q(y) \wedge p(y)) 1_{y \notin T}.\]</span></p>
+<p>It is not hard to show that <span class="math inline">\(e(y)\)</span> and <span class="math inline">\(f_-(y)\)</span> not only satisfy Conditions 1-3, but also that</p>
+<p><span class="math display">\[e(y) \ge f_-(y), \forall y \in Y,\]</span></p>
+<p>and thus <span class="math inline">\(e(Y) \ge f_-(Y)\)</span>. Therefore there exists an <span class="math inline">\(f\)</span> that interpolates between <span class="math inline">\(f_-\)</span> and <span class="math inline">\(f_+\)</span> with <span class="math inline">\(f(Y) = e(Y)\)</span>. <span class="math inline">\(\square\)</span></p>
+<p>To prove the adaptive composition theorem for approximate differential privacy, we need a similar claim (we use the index shorthand <span class="math inline">\(\xi_{&lt; i} = \xi_{1 : i - 1}\)</span>, and similarly for other notations):</p>
+<p><strong>Claim 5</strong>. Let <span class="math inline">\(\xi_{1 : i}\)</span> and <span class="math inline">\(\eta_{1 : i}\)</span> be random variables. Let</p>
+<p><span class="math display">\[\begin{aligned}
+p_i(S | y_{1 : i - 1}) := \mathbb P(\xi_i \in S | \xi_{1 : i - 1} = y_{1 : i - 1})\\
+q_i(S | y_{1 : i - 1}) := \mathbb P(\eta_i \in S | \eta_{1 : i - 1} = y_{1 : i - 1})
+\end{aligned}\]</span></p>
+<p>be the conditional laws of <span class="math inline">\(\xi_i | \xi_{&lt; i}\)</span> and <span class="math inline">\(\eta_i | \eta_{&lt; i}\)</span> respectively. Then the following are equivalent:</p>
+<ol type="1">
+<li>For any <span class="math inline">\(y_{&lt; i} \in Y^{i - 1}\)</span>, <span class="math inline">\(p_i(\cdot | y_{&lt; i})\)</span> and <span class="math inline">\(q_i(\cdot | y_{&lt; i})\)</span> are <span class="math inline">\((\epsilon, \delta)\)</span>-ind.</li>
+<li><p>There exist events <span class="math inline">\(E_i, F_i \subset \Omega\)</span> with <span class="math inline">\(\mathbb P(E_i | \xi_{&lt;i} = y_{&lt;i}) = \mathbb P(F_i | \eta_{&lt;i} = y_{&lt; i}) \ge 1 - \delta\)</span> for any <span class="math inline">\(y_{&lt; i}\)</span>, such that <span class="math inline">\(p_{i | E_i}(\cdot | y_{&lt; i})\)</span> and <span class="math inline">\(q_{i | F_i} (\cdot | y_{&lt; i})\)</span> are <span class="math inline">\(\epsilon\)</span>-ind for any <span class="math inline">\(y_{&lt; i}\)</span>, where <span class="math display">\[\begin{aligned}
+p_{i | E_i}(S | y_{1 : i - 1}) := \mathbb P(\xi_i \in S | E_i, \xi_{1 : i - 1} = y_{1 : i - 1})\\
+ q_{i | F_i}(S | y_{1 : i - 1}) := \mathbb P(\eta_i \in S | F_i, \eta_{1 : i - 1} = y_{1 : i - 1})
+\end{aligned}\]</span></p>
+<p>are <span class="math inline">\(p_i\)</span> and <span class="math inline">\(q_i\)</span> conditioned on <span class="math inline">\(E_i\)</span> and <span class="math inline">\(F_i\)</span> respectively.</p></li>
+</ol>
+<p><strong>Proof</strong>. Item 2 =&gt; Item 1: as in the Proof of Claim 4,</p>
+<p><span class="math display">\[\begin{aligned}
+p_i(S | y_{&lt; i}) &amp;= p_{i | E_i} (S | y_{&lt; i}) \mathbb P(E_i | \xi_{&lt; i} = y_{&lt; i}) + p_{i | E_i^c}(S | y_{&lt; i}) \mathbb P(E_i^c | \xi_{&lt; i} = y_{&lt; i}) \\
+&amp;\le p_{i | E_i} (S | y_{&lt; i}) \mathbb P(E_i | \xi_{&lt; i} = y_{&lt; i}) + \delta \\
+&amp;= p_{i | E_i} (S | y_{&lt; i}) \mathbb P(F_i | \xi_{&lt; i} = y_{&lt; i}) + \delta \\
+&amp;\le e^\epsilon q_{i | F_i} (S | y_{&lt; i}) \mathbb P(F_i | \xi_{&lt; i} = y_{&lt; i}) + \delta \\
+&amp;= e^\epsilon q_i (S | y_{&lt; i}) + \delta.
+\end{aligned}\]</span></p>
+<p>The other inequality, <span class="math inline">\(q_i(S | y_{&lt; i}) \le e^\epsilon p_i(S | y_{&lt; i}) + \delta\)</span>, can be shown in the same way.</p>
+<p>Item 1 =&gt; Item 2: as in the Proof of Claim 4 we construct <span class="math inline">\(e(y_{1 : i})\)</span> and <span class="math inline">\(f(y_{1 : i})\)</span> as "densities" of events <span class="math inline">\(E_i\)</span> and <span class="math inline">\(F_i\)</span>.</p>
+<p>Let</p>
+<p><span class="math display">\[\begin{aligned}
+e(y_{1 : i}) &amp;:= e^\epsilon q_i(y_i | y_{&lt; i}) 1_{y_i \in S_i(y_{&lt; i})} + p_i(y_i | y_{&lt; i}) 1_{y_i \notin S_i(y_{&lt; i})}\\
+f(y_{1 : i}) &amp;:= e^\epsilon p_i(y_i | y_{&lt; i}) 1_{y_i \in T_i(y_{&lt; i})} + q_i(y_i | y_{&lt; i}) 1_{y_i \notin T_i(y_{&lt; i})}\\
+\end{aligned}\]</span></p>
+<p>where</p>
+<p><span class="math display">\[\begin{aligned}
+S_i(y_{&lt; i}) = \{y_i \in Y: p_i(y_i | y_{&lt; i}) &gt; e^\epsilon q_i(y_i | y_{&lt; i})\}\\
+T_i(y_{&lt; i}) = \{y_i \in Y: q_i(y_i | y_{&lt; i}) &gt; e^\epsilon p_i(y_i | y_{&lt; i})\}.
+\end{aligned}\]</span></p>
+<p>Then <span class="math inline">\(E_i\)</span> and <span class="math inline">\(F_i\)</span> are defined as</p>
+<p><span class="math display">\[\begin{aligned}
+\mathbb P(E_i | \xi_{\le i} = y_{\le i}) &amp;= {e(y_{\le i}) \over p_i(y_i | y_{&lt; i})},\\
+\mathbb P(F_i | \eta_{\le i} = y_{\le i}) &amp;= {f(y_{\le i}) \over q_i(y_i | y_{&lt; i})}.
+\end{aligned}\]</span></p>
+<p>The rest of the proof is almost the same as the proof of Claim 4. <span class="math inline">\(\square\)</span></p>
+<h3 id="back-to-approximate-differential-privacy">Back to approximate differential privacy</h3>
+<p>By Claims 0 and 1 we have</p>
+<p><strong>Claim 6</strong>. If for all <span class="math inline">\(x, x&#39; \in X\)</span> with distance <span class="math inline">\(1\)</span></p>
+<p><span class="math display">\[\mathbb P(L(M(x) || M(x&#39;)) \le \epsilon) \ge 1 - \delta,\]</span></p>
+<p>then <span class="math inline">\(M\)</span> is <span class="math inline">\((\epsilon, \delta)\)</span>-dp.</p>
+<p>Note that in the literature the divergence variable <span class="math inline">\(L(M(x) || M(x&#39;))\)</span> is also called the <em>privacy loss</em>.</p>
+<p>By Claim 0 and Claim 4 we have</p>
+<p><strong>Claim 7</strong>. <span class="math inline">\(M\)</span> is <span class="math inline">\((\epsilon, \delta)\)</span>-dp if and only if for every <span class="math inline">\(x, x&#39; \in X\)</span> with distance <span class="math inline">\(1\)</span>, there exist events <span class="math inline">\(E, F \subset \Omega\)</span> with <span class="math inline">\(\mathbb P(E) = \mathbb P(F) \ge 1 - \delta\)</span>, <span class="math inline">\(M(x) | E\)</span> and <span class="math inline">\(M(x&#39;) | F\)</span> are <span class="math inline">\(\epsilon\)</span>-ind.</p>
+<p>We can further simplify the privacy loss <span class="math inline">\(L(M(x) || M(x&#39;))\)</span>, by observing the translational and scaling invariance of <span class="math inline">\(L(\cdot||\cdot)\)</span>:</p>
+<p><span class="math display">\[\begin{aligned}
+L(\xi || \eta) &amp;\overset{d}{=} L(\alpha \xi + \beta || \alpha \eta + \beta), \qquad \alpha \neq 0. \qquad (6.1)
+\end{aligned}\]</span></p>
+<p>With this and the definition</p>
+<p><span class="math display">\[M(x) = f(x) + \zeta\]</span></p>
+<p>for some random variable <span class="math inline">\(\zeta\)</span>, we have</p>
+<p><span class="math display">\[L(M(x) || M(x&#39;)) \overset{d}{=} L(\zeta || \zeta + f(x&#39;) - f(x)).\]</span></p>
+<p>Without loss of generality, we can consider <span class="math inline">\(f\)</span> with sensitivity <span class="math inline">\(1\)</span>, for</p>
+<p><span class="math display">\[L(f(x) + S_f \zeta || f(x&#39;) + S_f \zeta) \overset{d}{=} L(S_f^{-1} f(x) + \zeta || S_f^{-1} f(x&#39;) + \zeta)\]</span></p>
+<p>so for any noise <span class="math inline">\(\zeta\)</span> that achieves <span class="math inline">\((\epsilon, \delta)\)</span>-dp for a function with sensitivity <span class="math inline">\(1\)</span>, we obtain the same privacy guarantee for an arbitrary function with sensitivity <span class="math inline">\(S_f\)</span> by adding the noise <span class="math inline">\(S_f \zeta\)</span>.</p>
+<p>With Claim 6 we can show that the Gaussian mechanism is approximately differentially private. But first we need to define it.</p>
+<p><strong>Definition (Gaussian mechanism)</strong>. Given a query <span class="math inline">\(f: X \to Y\)</span>, the <em>Gaussian mechanism</em> <span class="math inline">\(M\)</span> adds a Gaussian noise to the query:</p>
+<p><span class="math display">\[M(x) = f(x) + N(0, \sigma^2 I).\]</span></p>
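+<p>A minimal sketch in code (the function and query names are our own), with the convention that the query returns a vector:</p>

```python
import numpy as np

def gaussian_mechanism(f, x, sigma, rng=None):
    # Release f(x) + N(0, sigma^2 I): independent Gaussian noise per coordinate.
    rng = rng if rng is not None else np.random.default_rng()
    y = np.asarray(f(x), dtype=float)
    return y + rng.normal(0.0, sigma, size=y.shape)

# Example: a noisy counting query (sensitivity 1) on a toy dataset.
noisy_count = gaussian_mechanism(lambda d: [float(sum(d))], [1, 0, 1, 1, 0], sigma=2.0)
```

+<p>Calibrating <span class="math inline">\(\sigma\)</span> to a given <span class="math inline">\((\epsilon, \delta)\)</span> is the subject of Claim 9 below.</p>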
+<p>Some tail bounds for the Gaussian distribution will be useful.</p>
+<p><strong>Claim 8 (Gaussian tail bounds)</strong>. Let <span class="math inline">\(\xi \sim N(0, 1)\)</span> be a standard normal distribution. Then for <span class="math inline">\(t &gt; 0\)</span></p>
+<p><span class="math display">\[\mathbb P(\xi &gt; t) &lt; {1 \over \sqrt{2 \pi} t} e^{- {t^2 \over 2}}, \qquad (6.3)\]</span></p>
+<p>and</p>
+<p><span class="math display">\[\mathbb P(\xi &gt; t) &lt; e^{- {t^2 \over 2}}. \qquad (6.5)\]</span></p>
+<p><strong>Proof</strong>. Both bounds are well known. The first can be proved using</p>
+<p><span class="math display">\[\int_t^\infty e^{- {y^2 \over 2}} dy &lt; \int_t^\infty {y \over t} e^{- {y^2 \over 2}} dy.\]</span></p>
+<p>The second is shown using Chernoff bound. For any random variable <span class="math inline">\(\xi\)</span>,</p>
+<p><span class="math display">\[\mathbb P(\xi &gt; t) &lt; {\mathbb E \exp(\lambda \xi) \over \exp(\lambda t)} = \exp(\kappa_\xi(\lambda) - \lambda t), \qquad (6.7)\]</span></p>
+<p>where <span class="math inline">\(\kappa_\xi(\lambda) = \log \mathbb E \exp(\lambda \xi)\)</span> is the cumulant of <span class="math inline">\(\xi\)</span>. Since (6.7) holds for any <span class="math inline">\(\lambda\)</span>, we can get the best bound by minimising <span class="math inline">\(\kappa_\xi(\lambda) - \lambda t\)</span> (a.k.a. the Legendre transformation). When <span class="math inline">\(\xi\)</span> is standard normal, we get (6.5). <span class="math inline">\(\square\)</span></p>
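+<p>Both bounds are easy to check numerically against the exact Gaussian tail <span class="math inline">\(\mathbb P(\xi &gt; t) = {1 \over 2} \mathrm{erfc}(t / \sqrt 2)\)</span>; a quick sketch:</p>

```python
import math

def gaussian_tail(t):
    # Exact tail P(xi > t) for xi ~ N(0, 1), via the complementary error function.
    return 0.5 * math.erfc(t / math.sqrt(2))

for t in [0.5, 1.0, 2.0, 4.0]:
    exact = gaussian_tail(t)
    mills = math.exp(-t * t / 2) / (math.sqrt(2 * math.pi) * t)  # bound (6.3)
    chernoff = math.exp(-t * t / 2)                              # bound (6.5)
    assert exact < mills and exact < chernoff
```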
+<p><strong>Remark</strong>. We will use the Chernoff bound extensively in the second part of this post when considering Rényi differential privacy.</p>
+<p><strong>Claim 9</strong>. The Gaussian mechanism on a query <span class="math inline">\(f\)</span> is <span class="math inline">\((\epsilon, \delta)\)</span>-dp, where</p>
+<p><span class="math display">\[\delta = \exp(- (\epsilon \sigma / S_f - (2 \sigma / S_f)^{-1})^2 / 2). \qquad (6.8)\]</span></p>
+<p>Conversely, to achieve <span class="math inline">\((\epsilon, \delta)\)</span>-dp, we may set</p>
+<p><span class="math display">\[\sigma &gt; \left(\epsilon^{-1} \sqrt{2 \log \delta^{-1}} + (2 \epsilon)^{- {1 \over 2}}\right) S_f \qquad (6.81)\]</span></p>
+<p>or</p>
+<p><span class="math display">\[\sigma &gt; (\epsilon^{-1} (1 \vee \sqrt{(\log (2 \pi)^{-1} \delta^{-2})_+}) + (2 \epsilon)^{- {1 \over 2}}) S_f \qquad (6.82)\]</span></p>
+<p>or</p>
+<p><span class="math display">\[\sigma &gt; \epsilon^{-1} \sqrt{\log e^\epsilon \delta^{-2}} S_f \qquad (6.83)\]</span></p>
+<p>or</p>
+<p><span class="math display">\[\sigma &gt; \epsilon^{-1} (\sqrt{1 + \epsilon} \vee \sqrt{(\log e^\epsilon (2 \pi)^{-1} \delta^{-2})_+}) S_f. \qquad (6.84)\]</span></p>
+<p><strong>Proof</strong>. As discussed before we only need to consider the case where <span class="math inline">\(S_f = 1\)</span>. Fix arbitrary <span class="math inline">\(x, x&#39; \in X\)</span> with <span class="math inline">\(d(x, x&#39;) = 1\)</span>. Let <span class="math inline">\(\zeta = (\zeta_1, ..., \zeta_d) \sim N(0, I_d)\)</span>.</p>
+<p>By Claim 6 it suffices to bound</p>
+<p><span class="math display">\[\mathbb P(L(M(x) || M(x&#39;)) &gt; \epsilon)\]</span></p>
+<p>We have by the linear invariance of <span class="math inline">\(L\)</span>,</p>
+<p><span class="math display">\[L(M(x) || M(x&#39;)) = L(f(x) + \sigma \zeta || f(x&#39;) + \sigma \zeta) \overset{d}{=} L(\zeta|| \zeta + \Delta / \sigma),\]</span></p>
+<p>where <span class="math inline">\(\Delta := f(x&#39;) - f(x)\)</span>.</p>
+<p>Plugging in the Gaussian density, we have</p>
+<p><span class="math display">\[L(M(x) || M(x&#39;)) \overset{d}{=} \sum_i {\Delta_i \over \sigma} \zeta_i + \sum_i {\Delta_i^2 \over 2 \sigma^2} \overset{d}{=} {\|\Delta\|_2 \over \sigma} \xi + {\|\Delta\|_2^2 \over 2 \sigma^2}.\]</span></p>
+<p>where <span class="math inline">\(\xi \sim N(0, 1)\)</span>.</p>
+<p>Hence</p>
+<p><span class="math display">\[\mathbb P(L(M(x) || M(x&#39;)) &gt; \epsilon) = \mathbb P(\xi &gt; {\sigma \over \|\Delta\|_2} \epsilon - {\|\Delta\|_2 \over 2 \sigma}).\]</span></p>
+<p>Since <span class="math inline">\(\|\Delta\|_2 \le S_f = 1\)</span>, we have</p>
+<p><span class="math display">\[\mathbb P(L(M(x) || M(x&#39;)) &gt; \epsilon) \le \mathbb P(\xi &gt; \sigma \epsilon - (2 \sigma)^{-1}).\]</span></p>
+<p>Thus the problem is reduced to the tail bound of a standard normal distribution, so we can use Claim 8. Note that we implicitly require <span class="math inline">\(\sigma &gt; (2 \epsilon)^{- 1 / 2}\)</span> here so that <span class="math inline">\(\sigma \epsilon - (2 \sigma)^{-1} &gt; 0\)</span> and we can use the tail bounds.</p>
+<p>Using (6.5) we have</p>
+<p><span class="math display">\[\mathbb P(L(M(x) || M(x&#39;)) &gt; \epsilon) &lt; \exp(- (\epsilon \sigma - (2 \sigma)^{-1})^2 / 2).\]</span></p>
+<p>This gives us (6.8).</p>
+<p>To bound the right hand by <span class="math inline">\(\delta\)</span>, we require</p>
+<p><span class="math display">\[\epsilon \sigma - {1 \over 2 \sigma} &gt; \sqrt{2 \log \delta^{-1}}. \qquad (6.91)\]</span></p>
+<p>Solving this inequality we have</p>
+<p><span class="math display">\[\sigma &gt; {\sqrt{2 \log \delta^{-1}} + \sqrt{2 \log \delta^{-1} + 2 \epsilon} \over 2 \epsilon}.\]</span></p>
+<p>Using <span class="math inline">\(\sqrt{2 \log \delta^{-1} + 2 \epsilon} \le \sqrt{2 \log \delta^{-1}} + \sqrt{2 \epsilon}\)</span>, we can achieve the above inequality by having</p>
+<p><span class="math display">\[\sigma &gt; \epsilon^{-1} \sqrt{2 \log \delta^{-1}} + (2 \epsilon)^{-{1 \over 2}}.\]</span></p>
+<p>This gives us (6.81).</p>
+<p>Alternatively, we can use the concavity of <span class="math inline">\(\sqrt{\cdot}\)</span>:</p>
+<p><span class="math display">\[(2 \epsilon)^{-1} (\sqrt{2 \log \delta^{-1}} + \sqrt{2 \log \delta^{-1} + 2 \epsilon}) \le \epsilon^{-1} \sqrt{\log e^\epsilon \delta^{-2}},\]</span></p>
+<p>which gives us (6.83).</p>
+<p>Back to the tail bound <span class="math inline">\(\mathbb P(\xi &gt; \sigma \epsilon - (2 \sigma)^{-1})\)</span>, if we use (6.3) instead, we need</p>
+<p><span class="math display">\[\log t + {t^2 \over 2} &gt; \log {(2 \pi)^{- 1 / 2} \delta^{-1}}\]</span></p>
+<p>where <span class="math inline">\(t = \epsilon \sigma - (2 \sigma)^{-1}\)</span>. This can be satisfied if</p>
+<p><span class="math display">\[\begin{aligned}
+t &amp;&gt; 1 \qquad (6.93)\\
+t &amp;&gt; \sqrt{\log (2 \pi)^{-1} \delta^{-2}}. \qquad (6.95)
+\end{aligned}\]</span></p>
+<p>We can solve both inequalities as before and obtain</p>
+<p><span class="math display">\[\sigma &gt; \epsilon^{-1} (1 \vee \sqrt{(\log (2 \pi)^{-1} \delta^{-2})_+}) + (2 \epsilon)^{- {1 \over 2}},\]</span></p>
+<p>or</p>
+<p><span class="math display">\[\sigma &gt; \epsilon^{-1}(\sqrt{1 + \epsilon} \vee \sqrt{(\log e^\epsilon (2 \pi)^{-1} \delta^{-2})_+}).\]</span></p>
+<p>This gives us (6.82) and (6.84). <span class="math inline">\(\square\)</span></p>
+<p>When <span class="math inline">\(\epsilon\)</span> is bounded by some <span class="math inline">\(\alpha\)</span>, by (6.83) and (6.84) we can require either</p>
+<p><span class="math display">\[\sigma &gt; \epsilon^{-1} (\sqrt{\log e^\alpha \delta^{-2}}) S_f\]</span></p>
+<p>or</p>
+<p><span class="math display">\[\sigma &gt; \epsilon^{-1} (\sqrt{1 + \alpha} \vee \sqrt{(\log (2 \pi)^{-1} e^\alpha \delta^{-2})_+}).\]</span></p>
+<p>The second bound is similar to and slightly better than the one in Theorem A.1 of Dwork-Roth 2013, where <span class="math inline">\(\alpha = 1\)</span>:</p>
+<p><span class="math display">\[\sigma &gt; \epsilon^{-1} \left({3 \over 2} \vee \sqrt{(2 \log {5 \over 4} \delta^{-1})_+}\right) S_f.\]</span></p>
+<p>Note that the lower bound of <span class="math inline">\({3 \over 2}\)</span> is implicitly required in the proof of Theorem A.1.</p>
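+<p>As a numerical sanity check (with illustrative values of <span class="math inline">\(\epsilon\)</span> and <span class="math inline">\(\delta\)</span>, and <span class="math inline">\(S_f = 1\)</span>), one can verify that any <span class="math inline">\(\sigma\)</span> above the calibration (6.81) drives the guarantee (6.8) below the target <span class="math inline">\(\delta\)</span>:</p>

```python
import math

def sigma_681(eps, delta, s_f=1.0):
    # Calibration (6.81): sigma > (eps^{-1} sqrt(2 log(1/delta)) + (2 eps)^{-1/2}) S_f.
    return (math.sqrt(2 * math.log(1 / delta)) / eps + 1 / math.sqrt(2 * eps)) * s_f

def delta_68(eps, sigma, s_f=1.0):
    # The delta guaranteed by (6.8) for a given sigma.
    t = eps * sigma / s_f - 1 / (2 * sigma / s_f)
    return math.exp(-t * t / 2)

for eps in [0.1, 0.5, 1.0]:
    for delta in [1e-3, 1e-6]:
        sigma = 1.000001 * sigma_681(eps, delta)  # anything above the bound works
        assert delta_68(eps, sigma) <= delta
```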
+<h2 id="composition-theorems">Composition theorems</h2>
+<p>So far we have seen how a mechanism made of a single query plus a noise can be proved to be differentially private. But we also need to understand the privacy guarantee when composing several mechanisms, either independently or adaptively. Let us first define the independent case:</p>
+<p><strong>Definition (Independent composition)</strong>. Let <span class="math inline">\(M_1, ..., M_k\)</span> be <span class="math inline">\(k\)</span> mechanisms with independent noises. The mechanism <span class="math inline">\(M = (M_1, ..., M_k)\)</span> is called the <em>independent composition</em> of <span class="math inline">\(M_{1 : k}\)</span>.</p>
+<p>To define the adaptive composition, let us motivate it with an example of gradient descent. Consider the loss function <span class="math inline">\(\ell(x; \theta)\)</span> of a neural network, where <span class="math inline">\(\theta\)</span> is the parameter and <span class="math inline">\(x\)</span> the input. Gradient descent updates the parameter <span class="math inline">\(\theta\)</span> at each time <span class="math inline">\(t\)</span>:</p>
+<p><span class="math display">\[\theta_{t} = \theta_{t - 1} - \alpha m^{-1} \sum_{i = 1 : m} \nabla_\theta \ell(x_i; \theta) |_{\theta = \theta_{t - 1}}.\]</span></p>
+<p>We may add privacy by adding noise <span class="math inline">\(\zeta_t\)</span> at each step:</p>
+<p><span class="math display">\[\theta_{t} = \theta_{t - 1} - \alpha m^{-1} \sum_{i = 1 : m} \nabla_\theta \ell(x_i; \theta) |_{\theta = \theta_{t - 1}} + \zeta_t. \qquad (6.97)\]</span></p>
+<p>Viewed as a sequence of mechanisms, we have that at each time <span class="math inline">\(t\)</span>, the mechanism <span class="math inline">\(M_t\)</span> takes input <span class="math inline">\(x\)</span>, and outputs <span class="math inline">\(\theta_t\)</span>. But <span class="math inline">\(M_t\)</span> also depends on the output of the previous mechanism <span class="math inline">\(M_{t - 1}\)</span>. To this end, we define the adaptive composition.</p>
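+<p>A minimal sketch of the noisy update (6.97) on a toy quadratic loss (the names and the loss are our own illustration; note also that (6.97) as written assumes the averaged gradient has bounded sensitivity, which in practice is enforced by clipping):</p>

```python
import numpy as np

def noisy_gradient_descent(grad, theta0, xs, alpha, sigma, steps, rng):
    # Iterate (6.97): theta_t = theta_{t-1} - alpha * (mean gradient) + zeta_t,
    # with zeta_t ~ N(0, sigma^2 I) independent across steps.
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        g = np.mean([grad(x, theta) for x in xs], axis=0)
        theta = theta - alpha * g + rng.normal(0.0, sigma, size=theta.shape)
    return theta

# Toy example: l(x; theta) = (theta - x)^2 / 2, whose noiseless iteration
# converges to the mean of the data.
rng = np.random.default_rng(0)
xs = [np.array([1.0]), np.array([3.0])]
theta = noisy_gradient_descent(lambda x, th: th - x, np.array([0.0]),
                               xs, alpha=0.5, sigma=0.0, steps=50, rng=rng)
```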
+<p><strong>Definition (Adaptive composition)</strong>. Let <span class="math inline">\(({M_i(y_{1 : i - 1})})_{i = 1 : k}\)</span> be <span class="math inline">\(k\)</span> mechanisms with independent noises, where <span class="math inline">\(M_1\)</span> has no parameter, <span class="math inline">\(M_2\)</span> has one parameter in <span class="math inline">\(Y\)</span>, <span class="math inline">\(M_3\)</span> has two parameters in <span class="math inline">\(Y\)</span> and so on. For <span class="math inline">\(x \in X\)</span>, define <span class="math inline">\(\xi_i\)</span> recursively by</p>
+<p><span class="math display">\[\begin{aligned}
+\xi_1 &amp;:= M_1(x)\\
+\xi_i &amp;:= M_i(\xi_1, \xi_2, ..., \xi_{i - 1}) (x).
+\end{aligned}\]</span></p>
+<p>The <em>adaptive composition</em> of <span class="math inline">\(M_{1 : k}\)</span> is defined by <span class="math inline">\(M(x) := (\xi_1, \xi_2, ..., \xi_k)\)</span>.</p>
+<p>The definition of adaptive composition may look a bit complicated, but the point is to describe <span class="math inline">\(k\)</span> mechanisms such that for each <span class="math inline">\(i\)</span>, the outputs of the first <span class="math inline">\(i - 1\)</span> mechanisms determine the <span class="math inline">\(i\)</span>th mechanism, as in the case of gradient descent.</p>
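+<p>The recursion in the definition can be sketched directly in code (a toy rendering with hypothetical interfaces: each mechanism is a function of the previous outputs, returning a randomized map of <span class="math inline">\(x\)</span>):</p>

```python
import numpy as np

def adaptive_composition(mechanisms, x, rng):
    # mechanisms[i], given the tuple of previous outputs y_{1:i-1},
    # returns the mechanism M_i(y_{1:i-1}); xi_i is then its output on x.
    ys = []
    for M in mechanisms:
        ys.append(M(tuple(ys))(x, rng))
    return tuple(ys)

# Degenerate example: two mechanisms that ignore the previous outputs,
# recovering an independent composition of noisy sums.
noisy_sum = lambda prev: (lambda x, rng: sum(x) + rng.normal(0.0, 1.0))
out = adaptive_composition([noisy_sum, noisy_sum], [1, 2, 3], np.random.default_rng(0))
```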
+<p>It is not hard to write down the differentially private gradient descent as an adaptive composition:</p>
+<p><span class="math display">\[M_t(\theta_{1 : t - 1})(x) = \theta_{t - 1} - \alpha m^{-1} \sum_{i = 1 : m} \nabla_\theta \ell(x_i; \theta) |_{\theta = \theta_{t - 1}} + \zeta_t.\]</span></p>
+<p>In Dwork-Rothblum-Vadhan 2010 (see also Dwork-Roth 2013) the adaptive composition is defined in a more general way, but the definition is based on the same principle, and proofs in this post on adaptive compositions carry over.</p>
+<p>It is not hard to see that the adaptive composition degenerates to independent composition when each <span class="math inline">\(M_i(y_{1 : i - 1})\)</span> evaluates to the same mechanism regardless of <span class="math inline">\(y_{1 : i - 1}\)</span>, in which case the <span class="math inline">\(\xi_i\)</span>s are independent.</p>
+<p>In the following, when discussing adaptive compositions, we sometimes omit the parameters for convenience when there is no risk of ambiguity, and write <span class="math inline">\(M_i(y_{1 : i - 1})\)</span> as <span class="math inline">\(M_i\)</span>, but keep in mind the dependence on the parameters.</p>
+<p>It is time to state and prove the composition theorems. In this section we consider <span class="math inline">\(2 \times 2 \times 2 = 8\)</span> cases, i.e. all combinations of three binary choices:</p>
+<ol type="1">
+<li>Composition of <span class="math inline">\(\epsilon\)</span>-dp or more generally <span class="math inline">\((\epsilon, \delta)\)</span>-dp mechanisms</li>
+<li>Composition of independent or more generally adaptive mechanisms</li>
+<li>Basic or advanced compositions</li>
+</ol>
+<p>Note that in the first two dimensions the second choice is more general than the first.</p>
+<p>The proofs of these composition theorems will be laid out as follows:</p>
+<ol type="1">
+<li>Claim 10 - Basic composition theorem for <span class="math inline">\((\epsilon, \delta)\)</span>-dp with adaptive mechanisms: by a direct proof with an induction argument</li>
+<li>Claim 14 - Advanced composition theorem for <span class="math inline">\(\epsilon\)</span>-dp with independent mechanisms: by factorising privacy loss and using Hoeffding's Inequality</li>
+<li>Claim 16 - Advanced composition theorem for <span class="math inline">\(\epsilon\)</span>-dp with adaptive mechanisms: by factorising privacy loss and using Azuma's Inequality</li>
+<li>Claims 17 and 18 - Advanced composition theorems for <span class="math inline">\((\epsilon, \delta)\)</span>-dp with independent / adaptive mechanisms: by using the characterisations of <span class="math inline">\((\epsilon, \delta)\)</span>-dp in Claims 4 and 5 as an approximation of <span class="math inline">\(\epsilon\)</span>-dp and then using the proofs in Items 2 and 3.</li>
+</ol>
+<p><strong>Claim 10 (Basic composition theorem).</strong> Let <span class="math inline">\(M_{1 : k}\)</span> be <span class="math inline">\(k\)</span> mechanisms with independent noises such that for each <span class="math inline">\(i\)</span> and <span class="math inline">\(y_{1 : i - 1}\)</span>, <span class="math inline">\(M_i(y_{1 : i - 1})\)</span> is <span class="math inline">\((\epsilon_i, \delta_i)\)</span>-dp. Then the adaptive composition of <span class="math inline">\(M_{1 : k}\)</span> is <span class="math inline">\((\sum_i \epsilon_i, \sum_i \delta_i)\)</span>-dp.</p>
+<p><strong>Proof (Dwork-Lei 2009, see also Dwork-Roth 2013 Appendix B.1)</strong>. Let <span class="math inline">\(x\)</span> and <span class="math inline">\(x&#39;\)</span> be neighbouring points in <span class="math inline">\(X\)</span>. Let <span class="math inline">\(M\)</span> be the adaptive composition of <span class="math inline">\(M_{1 : k}\)</span>. Define</p>
+<p><span class="math display">\[\xi_{1 : k} := M(x), \qquad \eta_{1 : k} := M(x&#39;).\]</span></p>
+<p>Let <span class="math inline">\(p^i\)</span> and <span class="math inline">\(q^i\)</span> be the laws of <span class="math inline">\((\xi_{1 : i})\)</span> and <span class="math inline">\((\eta_{1 : i})\)</span> respectively.</p>
+<p>Let <span class="math inline">\(S_1, ..., S_k \subset Y\)</span> and <span class="math inline">\(T_i := \prod_{j = 1 : i} S_j\)</span>. We use two tricks.</p>
+<ol type="1">
+<li><p>Since <span class="math inline">\(\xi_i | \xi_{&lt; i} = y_{&lt; i}\)</span> and <span class="math inline">\(\eta_i | \eta_{&lt; i} = y_{&lt; i}\)</span> are <span class="math inline">\((\epsilon_i, \delta_i)\)</span>-ind, and a probability is no greater than <span class="math inline">\(1\)</span>, <span class="math display">\[\begin{aligned}
+\mathbb P(\xi_i \in S_i | \xi_{&lt; i} = y_{&lt; i}) &amp;\le (e^{\epsilon_i} \mathbb P(\eta_i \in S_i | \eta_{&lt; i} = y_{&lt; i}) + \delta_i) \wedge 1 \\
+ &amp;\le (e^{\epsilon_i} \mathbb P(\eta_i \in S_i | \eta_{&lt; i} = y_{&lt; i}) + \delta_i) \wedge (1 + \delta_i) \\
+ &amp;= (e^{\epsilon_i} \mathbb P(\eta_i \in S_i | \eta_{&lt; i} = y_{&lt; i}) \wedge 1) + \delta_i
+\end{aligned}\]</span></p></li>
+<li><p>Given <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> that are <span class="math inline">\((\epsilon, \delta)\)</span>-ind, define <span class="math display">\[\mu(x) = (p(x) - e^\epsilon q(x))_+.\]</span></p>
+<p>We have <span class="math display">\[\mu(S) \le \delta, \qquad \forall S.\]</span></p>
+<p>In the following we define <span class="math inline">\(\mu^{i - 1} = (p^{i - 1} - e^{\epsilon_1 + ... + \epsilon_{i - 1}} q^{i - 1})_+\)</span> for the same purpose.</p></li>
+</ol>
+<p>We use an inductive argument to prove the theorem:</p>
+<p><span class="math display">\[\begin{aligned}
+\mathbb P(\xi_{\le i} \in T_i) &amp;= \int_{T_{i - 1}} \mathbb P(\xi_i \in S_i | \xi_{&lt; i} = y_{&lt; i}) p^{i - 1} (y_{&lt; i}) dy_{&lt; i} \\
+&amp;\le \int_{T_{i - 1}} (e^{\epsilon_i} \mathbb P(\eta_i \in S_i | \eta_{&lt; i} = y_{&lt; i}) \wedge 1) p^{i - 1}(y_{&lt; i}) dy_{&lt; i} + \delta_i\\
+&amp;\le \int_{T_{i - 1}} (e^{\epsilon_i} \mathbb P(\eta_i \in S_i | \eta_{&lt; i} = y_{&lt; i}) \wedge 1) (e^{\epsilon_1 + ... + \epsilon_{i - 1}} q^{i - 1}(y_{&lt; i}) + \mu^{i - 1} (y_{&lt; i})) dy_{&lt; i} + \delta_i\\
+&amp;\le \int_{T_{i - 1}} e^{\epsilon_i} \mathbb P(\eta_i \in S_i | \eta_{&lt; i} = y_{&lt; i}) e^{\epsilon_1 + ... + \epsilon_{i - 1}} q^{i - 1}(y_{&lt; i}) dy_{&lt; i} + \mu^{i - 1}(T_{i - 1}) + \delta_i\\
+&amp;\le e^{\epsilon_1 + ... + \epsilon_i} \mathbb P(\eta_{\le i} \in T_i) + \delta_1 + ... + \delta_{i - 1} + \delta_i.\\
+\end{aligned}\]</span></p>
+<p>In the second line we use Trick 1; in the third line we use the induction assumption; in the fourth line we multiply the first term in the first bracket with the first term in the second bracket, and the second term (i.e. <span class="math inline">\(1\)</span>) in the first bracket with the second term in the second bracket (i.e. the <span class="math inline">\(\mu\)</span> term); in the last line we use Trick 2.</p>
+<p>The base case <span class="math inline">\(i = 1\)</span> is true since <span class="math inline">\(M_1\)</span> is <span class="math inline">\((\epsilon_1, \delta_1)\)</span>-dp. <span class="math inline">\(\square\)</span></p>
+<p>To prove the advanced composition theorem, we start with some lemmas.</p>
+<p><strong>Claim 11</strong>. If <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> are <span class="math inline">\(\epsilon\)</span>-ind, then</p>
+<p><span class="math display">\[D(p || q) + D(q || p) \le \epsilon(e^\epsilon - 1).\]</span></p>
+<p><strong>Proof</strong>. Since <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> are <span class="math inline">\(\epsilon\)</span>-ind, we have <span class="math inline">\(|\log p(x) - \log q(x)| \le \epsilon\)</span> for all <span class="math inline">\(x\)</span>. Let <span class="math inline">\(S := \{x: p(x) &gt; q(x)\}\)</span>. Then we have on</p>
+<p><span class="math display">\[\begin{aligned}
+D(p || q) + D(q || p) &amp;= \int (p(x) - q(x)) (\log p(x) - \log q(x)) dx\\
+&amp;= \int_S (p(x) - q(x)) (\log p(x) - \log q(x)) dx + \int_{S^c} (q(x) - p(x)) (\log q(x) - \log p(x)) dx\\
+&amp;\le \epsilon(\int_S p(x) - q(x) dx + \int_{S^c} q(x) - p(x) dx)
+\end{aligned}\]</span></p>
+<p>Since on <span class="math inline">\(S\)</span> we have <span class="math inline">\(q(x) \le p(x) \le e^\epsilon q(x)\)</span>, and on <span class="math inline">\(S^c\)</span> we have <span class="math inline">\(p(x) \le q(x) \le e^\epsilon p(x)\)</span>, we obtain</p>
+<p><span class="math display">\[D(p || q) + D(q || p) \le \epsilon \int_S (1 - e^{-\epsilon}) p(x) dx + \epsilon \int_{S^c} (e^{\epsilon} - 1) p(x) dx \le \epsilon (e^{\epsilon} - 1),\]</span></p>
+<p>where in the last step we use <span class="math inline">\(e^\epsilon - 1 \ge 1 - e^{- \epsilon}\)</span> and <span class="math inline">\(p(S) + p(S^c) = 1\)</span>. <span class="math inline">\(\square\)</span></p>
+<p><strong>Claim 12</strong>. If <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> are <span class="math inline">\(\epsilon\)</span>-ind, then</p>
+<p><span class="math display">\[D(p || q) \vee D(q || p) \le a(\epsilon),\]</span></p>
+<p>where</p>
+<p><span class="math display">\[a(\epsilon) = \epsilon (e^\epsilon - 1) 1_{\epsilon \le \log 2} + \epsilon 1_{\epsilon &gt; \log 2} \le (\log 2)^{-1} \epsilon^2 1_{\epsilon \le \log 2} + \epsilon 1_{\epsilon &gt; \log 2}. \qquad (6.98)\]</span></p>
+<p><strong>Proof</strong>. Since <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> are <span class="math inline">\(\epsilon\)</span>-ind, we have</p>
+<p><span class="math display">\[D(p || q) = \mathbb E_{\xi \sim p} \log {p(\xi) \over q(\xi)} \le \max_y \log {p(y) \over q(y)} \le \epsilon.\]</span></p>
+<p>Since both divergences are nonnegative, Claim 11 also bounds each of them by <span class="math inline">\(\epsilon(e^\epsilon - 1)\)</span>. Taking the smaller of this quantity and the bound above (<span class="math inline">\(\epsilon\)</span>) on each side of <span class="math inline">\(\epsilon = \log 2\)</span>, we arrive at the conclusion. <span class="math inline">\(\square\)</span></p>
+<p><strong>Claim 13 (Hoeffding's Inequality)</strong>. Let <span class="math inline">\(L_i\)</span> be independent random variables with <span class="math inline">\(|L_i| \le b\)</span>, and let <span class="math inline">\(L = L_1 + ... + L_k\)</span>, then for <span class="math inline">\(t &gt; 0\)</span>,</p>
+<p><span class="math display">\[\mathbb P(L - \mathbb E L \ge t) \le \exp(- {t^2 \over 2 k b^2}).\]</span></p>
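+<p>A quick simulation (our own toy setup) illustrating the bound for sums of bounded i.i.d. variables:</p>

```python
import math
import numpy as np

rng = np.random.default_rng(42)
k, b, t, n_runs = 50, 1.0, 10.0, 20000

# n_runs independent copies of L = L_1 + ... + L_k with L_i uniform on [-b, b],
# so that E L = 0 and |L_i| <= b.
L = rng.uniform(-b, b, size=(n_runs, k)).sum(axis=1)
empirical_tail = np.mean(L >= t)
hoeffding_bound = math.exp(-t ** 2 / (2 * k * b ** 2))
assert empirical_tail <= hoeffding_bound
```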
+<p><strong>Claim 14 (Advanced Independent Composition Theorem)</strong> (<span class="math inline">\(\delta = 0\)</span>). Fix <span class="math inline">\(0 &lt; \beta &lt; 1\)</span>. Let <span class="math inline">\(M_1, ..., M_k\)</span> be <span class="math inline">\(\epsilon\)</span>-dp, then the independent composition <span class="math inline">\(M\)</span> of <span class="math inline">\(M_{1 : k}\)</span> is <span class="math inline">\((k a(\epsilon) + \sqrt{2 k \log \beta^{-1}} \epsilon, \beta)\)</span>-dp.</p>
+<p><strong>Remark</strong>. By (6.98) we know that <span class="math inline">\(k a(\epsilon) + \sqrt{2 k \log \beta^{-1}} \epsilon = \sqrt{2 k \log \beta^{-1}} \epsilon + k O(\epsilon^2)\)</span> when <span class="math inline">\(\epsilon\)</span> is sufficiently small, in which case the leading term is of order <span class="math inline">\(O(\sqrt k \epsilon)\)</span> and we save a <span class="math inline">\(\sqrt k\)</span> in the <span class="math inline">\(\epsilon\)</span>-part compared to the Basic Composition Theorem (Claim 10).</p>
+<p><strong>Remark</strong>. In practice one can try different choices of <span class="math inline">\(\beta\)</span> and settle with the one that gives the best privacy guarantee. See the discussions at the end of <a href="/posts/2019-03-14-great-but-manageable-expectations.html">Part 2 of this post</a>.</p>
+<p><strong>Proof</strong>. Let <span class="math inline">\(p_i\)</span>, <span class="math inline">\(q_i\)</span>, <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> be the laws of <span class="math inline">\(M_i(x)\)</span>, <span class="math inline">\(M_i(x&#39;)\)</span>, <span class="math inline">\(M(x)\)</span> and <span class="math inline">\(M(x&#39;)\)</span> respectively.</p>
+<p><span class="math display">\[\mathbb E L_i = D(p_i || q_i) \le a(\epsilon),\]</span></p>
+<p>where <span class="math inline">\(L_i := L(p_i || q_i)\)</span>, and the inequality is Claim 12. Due to <span class="math inline">\(\epsilon\)</span>-ind we also have</p>
+<p><span class="math display">\[|L_i| \le \epsilon.\]</span></p>
+<p>Therefore, by Hoeffding's Inequality,</p>
+<p><span class="math display">\[\mathbb P(L - k a(\epsilon) \ge t) \le \mathbb P(L - \mathbb E L \ge t) \le \exp(- t^2 / 2 k \epsilon^2),\]</span></p>
+<p>where <span class="math inline">\(L := \sum_i L_i = L(p || q)\)</span>.</p>
+<p>Plugging in <span class="math inline">\(t = \sqrt{2 k \epsilon^2 \log \beta^{-1}}\)</span>, we have</p>
+<p><span class="math display">\[\mathbb P(L(p || q) \le k a(\epsilon) + \sqrt{2 k \epsilon^2 \log \beta^{-1}}) \ge 1 - \beta.\]</span></p>
+<p>Similarly we also have</p>
+<p><span class="math display">\[\mathbb P(L(q || p) \le k a(\epsilon) + \sqrt{2 k \epsilon^2 \log \beta^{-1}}) \ge 1 - \beta.\]</span></p>
+<p>By Claim 1 we arrive at the conclusion. <span class="math inline">\(\square\)</span></p>
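To get a feel for the saving, here is a quick numerical sketch of Claim 14 against the Basic Composition Theorem. It assumes `a(eps) = eps * (e^eps - 1)` (the KL-divergence bound of Claim 12, whose exact definition is given earlier in the post), and the function names are mine:

```python
import math

def basic_eps(k, eps):
    # Basic Composition (Claim 10): the epsilon parts add up linearly.
    return k * eps

def advanced_eps(k, eps, beta):
    # Claim 14 with delta = 0: k a(eps) + sqrt(2 k log(1/beta)) eps,
    # assuming a(eps) = eps (e^eps - 1), a KL-divergence bound.
    a = eps * math.expm1(eps)
    return k * a + math.sqrt(2 * k * math.log(1 / beta)) * eps

k, eps, beta = 100, 0.1, 1e-5
adv, bas = advanced_eps(k, eps, beta), basic_eps(k, eps)
# adv is roughly 5.85 while bas is 10: the sqrt(k) saving kicks in.
```

For small `eps` the `k a(eps)` term is second order, so the leading term is the `sqrt(k) eps` one, as noted in the Remark after Claim 14.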
+<p><strong>Claim 15 (Azuma's Inequality)</strong>. Let <span class="math inline">\(X_{0 : k}\)</span> be a supermartingale. If <span class="math inline">\(|X_i - X_{i - 1}| \le b\)</span>, then</p>
+<p><span class="math display">\[\mathbb P(X_k - X_0 \ge t) \le \exp(- {t^2 \over 2 k b^2}).\]</span></p>
+<p>Azuma's Inequality implies a slightly weaker version of Hoeffding's Inequality. To see this, let <span class="math inline">\(L_{1 : k}\)</span> be independent variables with <span class="math inline">\(|L_i| \le b\)</span>. Let <span class="math inline">\(X_i = \sum_{j = 1 : i} L_j - \mathbb E L_j\)</span>. Then <span class="math inline">\(X_{0 : k}\)</span> is a martingale, and</p>
+<p><span class="math display">\[| X_i - X_{i - 1} | = | L_i - \mathbb E L_i | \le 2 b,\]</span></p>
+<p>since <span class="math inline">\(\|L_i\|_1 \le \|L_i\|_\infty\)</span>. Hence by Azuma's Inequality,</p>
+<p><span class="math display">\[\mathbb P(L - \mathbb E L \ge t) \le \exp(- {t^2 \over 8 k b^2}).\]</span></p>
+<p>Of course here we have made no assumption on <span class="math inline">\(\mathbb E L_i\)</span>. If instead we have some bound for the expectation, say <span class="math inline">\(|\mathbb E L_i| \le a\)</span>, then by the same derivation we have</p>
+<p><span class="math display">\[\mathbb P(L - \mathbb E L \ge t) \le \exp(- {t^2 \over 2 k (a + b)^2}).\]</span></p>
+<p>It is not hard to see that Azuma's Inequality is to Hoeffding's Inequality what adaptive composition is to independent composition. Indeed, we can use Azuma's Inequality to prove the Advanced Adaptive Composition Theorem for <span class="math inline">\(\delta = 0\)</span>.</p>
+<p><strong>Claim 16 (Advanced Adaptive Composition Theorem)</strong> (<span class="math inline">\(\delta = 0\)</span>). Let <span class="math inline">\(\beta &gt; 0\)</span>. Let <span class="math inline">\(M_{1 : k}\)</span> be <span class="math inline">\(k\)</span> mechanisms with independent noises such that for each <span class="math inline">\(i\)</span> and <span class="math inline">\(y_{1 : i}\)</span>, <span class="math inline">\(M_i(y_{1 : i})\)</span> is <span class="math inline">\((\epsilon, 0)\)</span>-dp. Then the adaptive composition of <span class="math inline">\(M_{1 : k}\)</span> is <span class="math inline">\((k a(\epsilon) + \sqrt{2 k \log \beta^{-1}} (\epsilon + a(\epsilon)), \beta)\)</span>-dp.</p>
+<p><strong>Proof</strong>. As before, let <span class="math inline">\(\xi_{1 : k} \overset{d}{=} M(x)\)</span> and <span class="math inline">\(\eta_{1 : k} \overset{d}{=} M(x&#39;)\)</span>, where <span class="math inline">\(M\)</span> is the adaptive composition of <span class="math inline">\(M_{1 : k}\)</span>. Let <span class="math inline">\(p_i\)</span> (resp. <span class="math inline">\(q_i\)</span>) be the law of <span class="math inline">\(\xi_i | \xi_{&lt; i}\)</span> (resp. <span class="math inline">\(\eta_i | \eta_{&lt; i}\)</span>). Let <span class="math inline">\(p^i\)</span> (resp. <span class="math inline">\(q^i\)</span>) be the law of <span class="math inline">\(\xi_{\le i}\)</span> (resp. <span class="math inline">\(\eta_{\le i}\)</span>). We want to construct a supermartingale <span class="math inline">\(X\)</span>. To this end, let</p>
+<p><span class="math display">\[X_i = \log {p^i(\xi_{\le i}) \over q^i(\xi_{\le i})} - i a(\epsilon) \]</span></p>
+<p>We show that <span class="math inline">\((X_i)\)</span> is a supermartingale:</p>
+<p><span class="math display">\[\begin{aligned}
+\mathbb E(X_i - X_{i - 1} | X_{i - 1}) &amp;= \mathbb E \left(\log {p_i (\xi_i | \xi_{&lt; i}) \over q_i (\xi_i | \xi_{&lt; i})} - a(\epsilon) | \log {p^{i - 1} (\xi_{&lt; i}) \over q^{i - 1} (\xi_{&lt; i})}\right) \\
+&amp;= \mathbb E \left( \mathbb E \left(\log {p_i (\xi_i | \xi_{&lt; i}) \over q_i (\xi_i | \xi_{&lt; i})} | \xi_{&lt; i}\right) | \log {p^{i - 1} (\xi_{&lt; i}) \over q^{i - 1} (\xi_{&lt; i})}\right) - a(\epsilon) \\
+&amp;= \mathbb E \left( D(p_i (\cdot | \xi_{&lt; i}) || q_i (\cdot | \xi_{&lt; i})) | \log {p^{i - 1} (\xi_{&lt; i}) \over q^{i - 1} (\xi_{&lt; i})}\right) - a(\epsilon) \\
+&amp;\le 0,
+\end{aligned}\]</span></p>
+<p>since by Claim 12 <span class="math inline">\(D(p_i(\cdot | y_{&lt; i}) || q_i(\cdot | y_{&lt; i})) \le a(\epsilon)\)</span> for all <span class="math inline">\(y_{&lt; i}\)</span>.</p>
+<p>Since</p>
+<p><span class="math display">\[| X_i - X_{i - 1} | = | \log {p_i(\xi_i | \xi_{&lt; i}) \over q_i(\xi_i | \xi_{&lt; i})} - a(\epsilon) | \le \epsilon + a(\epsilon),\]</span></p>
+<p>by Azuma's Inequality,</p>
+<p><span class="math display">\[\mathbb P(\log {p^k(\xi_{1 : k}) \over q^k(\xi_{1 : k})} \ge k a(\epsilon) + t) \le \exp(- {t^2 \over 2 k (\epsilon + a(\epsilon))^2}). \qquad(6.99)\]</span></p>
+<p>Letting <span class="math inline">\(t = \sqrt{2 k \log \beta^{-1}} (\epsilon + a(\epsilon))\)</span>, we are done. <span class="math inline">\(\square\)</span></p>
+<p><strong>Claim 17 (Advanced Independent Composition Theorem)</strong>. Fix <span class="math inline">\(0 &lt; \beta &lt; 1\)</span>. Let <span class="math inline">\(M_1, ..., M_k\)</span> be <span class="math inline">\((\epsilon, \delta)\)</span>-dp, then the independent composition <span class="math inline">\(M\)</span> of <span class="math inline">\(M_{1 : k}\)</span> is <span class="math inline">\((k a(\epsilon) + \sqrt{2 k \log \beta^{-1}} \epsilon, k \delta + \beta)\)</span>-dp.</p>
+<p><strong>Proof</strong>. By Claim 4, there exist events <span class="math inline">\(E_{1 : k}\)</span> and <span class="math inline">\(F_{1 : k}\)</span> such that</p>
+<ol type="1">
+<li>The laws <span class="math inline">\(p_{i | E_i}\)</span> and <span class="math inline">\(q_{i | F_i}\)</span> are <span class="math inline">\(\epsilon\)</span>-ind.</li>
+<li><span class="math inline">\(\mathbb P(E_i), \mathbb P(F_i) \ge 1 - \delta\)</span>.</li>
+</ol>
+<p>Let <span class="math inline">\(E := \bigcap E_i\)</span> and <span class="math inline">\(F := \bigcap F_i\)</span>, then they both have probability at least <span class="math inline">\(1 - k \delta\)</span>, and <span class="math inline">\(p_{i | E}\)</span> and <span class="math inline">\(q_{i | F}\)</span> are <span class="math inline">\(\epsilon\)</span>-ind.</p>
+<p>By Claim 14, <span class="math inline">\(p_{|E}\)</span> and <span class="math inline">\(q_{|F}\)</span> are <span class="math inline">\((\epsilon&#39; := k a(\epsilon) + \sqrt{2 k \epsilon^2 \log \beta^{-1}}, \beta)\)</span>-ind. Let us shrink the bigger event between <span class="math inline">\(E\)</span> and <span class="math inline">\(F\)</span> so that they have equal probabilities. Then</p>
+<p><span class="math display">\[\begin{aligned}
+p (S) &amp;\le p_{|E}(S) \mathbb P(E) + \mathbb P(E^c) \\
+&amp;\le (e^{\epsilon&#39;} q_{|F}(S) + \beta) \mathbb P(F) + k \delta\\
+&amp;\le e^{\epsilon&#39;} q(S) + \beta + k \delta.
+\end{aligned}\]</span></p>
+<p><span class="math inline">\(\square\)</span></p>
+<p><strong>Claim 18 (Advanced Adaptive Composition Theorem)</strong>. Fix <span class="math inline">\(0 &lt; \beta &lt; 1\)</span>. Let <span class="math inline">\(M_{1 : k}\)</span> be <span class="math inline">\(k\)</span> mechanisms with independent noises such that for each <span class="math inline">\(i\)</span> and <span class="math inline">\(y_{1 : i}\)</span>, <span class="math inline">\(M_i(y_{1 : i})\)</span> is <span class="math inline">\((\epsilon, \delta)\)</span>-dp. Then the adaptive composition of <span class="math inline">\(M_{1 : k}\)</span> is <span class="math inline">\((k a(\epsilon) + \sqrt{2 k \log \beta^{-1}} (\epsilon + a(\epsilon)), \beta + k \delta)\)</span>-dp.</p>
+<p><strong>Proof</strong>. By Claim 5, there exist events <span class="math inline">\(E_{1 : k}\)</span> and <span class="math inline">\(F_{1 : k}\)</span> such that</p>
+<ol type="1">
+<li>The laws <span class="math inline">\(p_{i | E_i}(\cdot | y_{&lt; i})\)</span> and <span class="math inline">\(q_{i | F_i}(\cdot | y_{&lt; i})\)</span> are <span class="math inline">\(\epsilon\)</span>-ind for all <span class="math inline">\(y_{&lt; i}\)</span>.</li>
+<li><span class="math inline">\(\mathbb P(E_i | y_{&lt; i}), \mathbb P(F_i | y_{&lt; i}) \ge 1 - \delta\)</span> for all <span class="math inline">\(y_{&lt; i}\)</span>.</li>
+</ol>
+<p>Let <span class="math inline">\(E := \bigcap E_i\)</span> and <span class="math inline">\(F := \bigcap F_i\)</span>, then they both have probability at least <span class="math inline">\(1 - k \delta\)</span>, and <span class="math inline">\(p_{i | E}(\cdot | y_{&lt; i})\)</span> and <span class="math inline">\(q_{i | F}(\cdot | y_{&lt; i})\)</span> are <span class="math inline">\(\epsilon\)</span>-ind.</p>
+<p>By Advanced Adaptive Composition Theorem (<span class="math inline">\(\delta = 0\)</span>), <span class="math inline">\(p_{|E}\)</span> and <span class="math inline">\(q_{|F}\)</span> are <span class="math inline">\((\epsilon&#39; := k a(\epsilon) + \sqrt{2 k \log \beta^{-1}} (\epsilon + a(\epsilon)), \beta)\)</span>-ind.</p>
+<p>The rest is the same as in the proof of Claim 17. <span class="math inline">\(\square\)</span></p>
+<h2 id="subsampling">Subsampling</h2>
+<p>Stochastic gradient descent is like gradient descent, but with random subsampling.</p>
+<p>Recall we have been considering databases in the space <span class="math inline">\(Z^m\)</span>. Let <span class="math inline">\(n &lt; m\)</span> be a positive integer, <span class="math inline">\(\mathcal I := \{I \subset [m]: |I| = n\}\)</span> be the set of subsets of <span class="math inline">\([m]\)</span> of size <span class="math inline">\(n\)</span>, and <span class="math inline">\(\gamma\)</span> a random subset sampled uniformly from <span class="math inline">\(\mathcal I\)</span>. Let <span class="math inline">\(r = {n \over m}\)</span>, which we call the subsampling rate. Then we may add a subsampling module to the noisy gradient descent algorithm (6.97) considered before:</p>
+<p><span class="math display">\[\theta_{t} = \theta_{t - 1} - \alpha n^{-1} \sum_{i \in \gamma} \nabla_\theta h_\theta(x_i) |_{\theta = \theta_{t - 1}} + \zeta_t. \qquad (7)\]</span></p>
+<p>It turns out subsampling has an amplification effect on privacy.</p>
+<p><strong>Claim 19 (Ullman 2017)</strong>. Fix <span class="math inline">\(r \in [0, 1]\)</span>. Let <span class="math inline">\(n \le m\)</span> be two nonnegative integers with <span class="math inline">\(n = r m\)</span>. Let <span class="math inline">\(N\)</span> be an <span class="math inline">\((\epsilon, \delta)\)</span>-dp mechanism on <span class="math inline">\(X^n\)</span>. Define mechanism <span class="math inline">\(M\)</span> on <span class="math inline">\(X^m\)</span> by</p>
+<p><span class="math display">\[M(x) = N(x_\gamma)\]</span></p>
+<p>Then <span class="math inline">\(M\)</span> is <span class="math inline">\((\log (1 + r(e^\epsilon - 1)), r \delta)\)</span>-dp.</p>
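To get a feel for the amplification, here is a small numerical sketch of the bound in Claim 19 (the function name is mine):

```python
import math

def subsampled_dp(eps, delta, r):
    # Claim 19: subsampling at rate r = n/m turns an (eps, delta)-dp
    # mechanism N into a (log(1 + r (e^eps - 1)), r delta)-dp mechanism M.
    return math.log1p(r * math.expm1(eps)), r * delta

eps_m, delta_m = subsampled_dp(1.0, 1e-6, 0.01)
# eps_m is roughly 0.017, far below the original eps = 1, and is itself
# bounded by r (e^eps - 1) since log(1 + x) is at most x.
```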
+<p><strong>Remark</strong>. Some seem to cite Kasiviswanathan-Lee-Nissim-Raskhodnikova-Smith 2005 for this result, but it is not clear to me how it appears there.</p>
+<p><strong>Proof</strong>. Let <span class="math inline">\(x, x&#39; \in X^m\)</span> be such that they differ by one row <span class="math inline">\(x_i \neq x_i&#39;\)</span>. Naturally we would like to consider the cases where the index <span class="math inline">\(i\)</span> is picked and the ones where it is not separately. Let <span class="math inline">\(\mathcal I_\in\)</span> and <span class="math inline">\(\mathcal I_\notin\)</span> be these two cases:</p>
+<p><span class="math display">\[\begin{aligned}
+\mathcal I_\in = \{J \in \mathcal I: i \in J\}\\
+\mathcal I_\notin = \{J \in \mathcal I: i \notin J\}\\
+\end{aligned}\]</span></p>
+<p>We will use these notations later. Let <span class="math inline">\(A\)</span> be the event <span class="math inline">\(\{\gamma \ni i\}\)</span>.</p>
+<p>Let <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> be the laws of <span class="math inline">\(M(x)\)</span> and <span class="math inline">\(M(x&#39;)\)</span> respectively. We collect some useful facts about them. First due to <span class="math inline">\(N\)</span> being <span class="math inline">\((\epsilon, \delta)\)</span>-dp,</p>
+<p><span class="math display">\[p_{|A}(S) \le e^\epsilon q_{|A}(S) + \delta.\]</span></p>
+<p>Also,</p>
+<p><span class="math display">\[p_{|A}(S) \le e^\epsilon p_{|A^c}(S) + \delta.\]</span></p>
+<p>To see this, note that being conditional laws, <span class="math inline">\(p_{|A}\)</span> and <span class="math inline">\(p_{|A^c}\)</span> are averages of laws over <span class="math inline">\(\mathcal I_\in\)</span> and <span class="math inline">\(\mathcal I_\notin\)</span> respectively:</p>
+<p><span class="math display">\[\begin{aligned}
+p_{|A}(S) = |\mathcal I_\in|^{-1} \sum_{I \in \mathcal I_\in} \mathbb P(N(x_I) \in S)\\
+p_{|A^c}(S) = |\mathcal I_\notin|^{-1} \sum_{J \in \mathcal I_\notin} \mathbb P(N(x_J) \in S).
+\end{aligned}\]</span></p>
+<p>Now we want to pair the <span class="math inline">\(I\)</span>'s in <span class="math inline">\(\mathcal I_\in\)</span> and <span class="math inline">\(J\)</span>'s in <span class="math inline">\(\mathcal I_\notin\)</span> so that they differ by one index only, which means <span class="math inline">\(d(x_I, x_J) = 1\)</span>. Formally, this means we want to consider the set:</p>
+<p><span class="math display">\[\mathcal D := \{(I, J) \in \mathcal I_\in \times \mathcal I_\notin: |I \cap J| = n - 1\}.\]</span></p>
+<p>We may observe by trying out some simple cases that every <span class="math inline">\(I \in \mathcal I_\in\)</span> is paired with <span class="math inline">\(m - n\)</span> elements in <span class="math inline">\(\mathcal I_\notin\)</span> (remove <span class="math inline">\(i\)</span> and add one of the <span class="math inline">\(m - n\)</span> indices outside <span class="math inline">\(I\)</span>), and every <span class="math inline">\(J \in \mathcal I_\notin\)</span> is paired with <span class="math inline">\(n\)</span> elements in <span class="math inline">\(\mathcal I_\in\)</span> (replace one of the <span class="math inline">\(n\)</span> indices of <span class="math inline">\(J\)</span> with <span class="math inline">\(i\)</span>). Therefore</p>
+<p><span class="math display">\[p_{|A}(S) = |\mathcal D|^{-1} \sum_{(I, J) \in \mathcal D} \mathbb P(N(x_I) \in S) \le |\mathcal D|^{-1} \sum_{(I, J) \in \mathcal D} (e^\epsilon \mathbb P(N(x_J) \in S) + \delta) = e^\epsilon p_{|A^c} (S) + \delta.\]</span></p>
+<p>Since <span class="math inline">\(\gamma\)</span> is uniformly distributed over the size-<span class="math inline">\(n\)</span> subsets of <span class="math inline">\([m]\)</span>, each index belongs to <span class="math inline">\(\gamma\)</span> with probability <span class="math inline">\(n / m = r\)</span>, so we have</p>
+<p><span class="math display">\[\mathbb P(A) = r.\]</span></p>
+<p>Let <span class="math inline">\(t \in [0, 1]\)</span> to be determined. We may write</p>
+<p><span class="math display">\[\begin{aligned}
+p(S) &amp;= r p_{|A} (S) + (1 - r) p_{|A^c} (S)\\
+&amp;\le r(t e^\epsilon q_{|A}(S) + (1 - t) e^\epsilon q_{|A^c}(S) + \delta) + (1 - r) q_{|A^c} (S)\\
+&amp;= rte^\epsilon q_{|A}(S) + (r(1 - t) e^\epsilon + (1 - r)) q_{|A^c} (S) + r \delta\\
+&amp;= te^\epsilon r q_{|A}(S) + \left({r \over 1 - r}(1 - t) e^\epsilon + 1\right) (1 - r) q_{|A^c} (S) + r \delta \\
+&amp;\le \left(t e^\epsilon \wedge \left({r \over 1 - r} (1 - t) e^\epsilon + 1\right)\right) q(S) + r \delta. \qquad (7.5)
+\end{aligned}\]</span></p>
+<p>We can see from the last line that the best bound we can get is when</p>
+<p><span class="math display">\[t e^\epsilon = {r \over 1 - r} (1 - t) e^\epsilon + 1.\]</span></p>
+<p>Solving this equation we obtain</p>
+<p><span class="math display">\[t = r + e^{- \epsilon} - r e^{- \epsilon}\]</span></p>
+<p>and plugging this in (7.5) we have</p>
+<p><span class="math display">\[p(S) \le (1 + r(e^\epsilon - 1)) q(S) + r \delta.\]</span></p>
+<p><span class="math inline">\(\square\)</span></p>
+<p>Since <span class="math inline">\(\log (1 + x) &lt; x\)</span> for <span class="math inline">\(x &gt; 0\)</span>, we can rewrite the conclusion of the Claim as <span class="math inline">\((r(e^\epsilon - 1), r \delta)\)</span>-dp. Furthermore, if <span class="math inline">\(\epsilon &lt; \alpha\)</span> for some <span class="math inline">\(\alpha\)</span>, we can rewrite it as <span class="math inline">\((r \alpha^{-1} (e^\alpha - 1) \epsilon, r \delta)\)</span>-dp or <span class="math inline">\((O(r \epsilon), r \delta)\)</span>-dp.</p>
+<p>Let <span class="math inline">\(\epsilon &lt; 1\)</span>. We see that if the mechanism <span class="math inline">\(N\)</span> is <span class="math inline">\((\epsilon, \delta)\)</span>-dp on <span class="math inline">\(Z^n\)</span>, then <span class="math inline">\(M\)</span> is <span class="math inline">\((2 r \epsilon, r \delta)\)</span>-dp, and if we run it over <span class="math inline">\(k / r\)</span> minibatches, then by the Advanced Adaptive Composition Theorem we obtain <span class="math inline">\((\sqrt{2 k r \log \beta^{-1}} \epsilon + 2 k r \epsilon^2, k \delta + \beta)\)</span>-dp.</p>
+<p>This is better than the privacy guarantee without subsampling, where we run over <span class="math inline">\(k\)</span> iterations and obtain <span class="math inline">\((\sqrt{2 k \log \beta^{-1}} \epsilon + 2 k \epsilon^2, k \delta + \beta)\)</span>-dp. So with subsampling we gain an extra factor of <span class="math inline">\(\sqrt r\)</span> in the <span class="math inline">\(\epsilon\)</span>-part of the privacy guarantee. But a smaller subsampling rate means a smaller minibatch size, which results in bigger variance, so there is a trade-off here.</p>
+<p>Finally we define the differentially private stochastic gradient descent (DP-SGD) with the Gaussian mechanism (Abadi-Chu-Goodfellow-McMahan-Mironov-Talwar-Zhang 2016), which is (7) with the noise specialised to Gaussian and an added clipping operation to bound the sensitivity of the query by a chosen <span class="math inline">\(C\)</span>:</p>
+<p><span class="math display">\[\theta_{t} = \theta_{t - 1} - \alpha \left(n^{-1} \sum_{i \in \gamma} \nabla_\theta \ell(x_i; \theta) |_{\theta = \theta_{t - 1}}\right)_{\text{Clipped at }C / 2} + N(0, \sigma^2 C^2 I),\]</span></p>
+<p>where</p>
+<p><span class="math display">\[y_{\text{Clipped at } \alpha} := y / (1 \vee {\|y\|_2 \over \alpha})\]</span></p>
+<p>is <span class="math inline">\(y\)</span> clipped to have norm at most <span class="math inline">\(\alpha\)</span>.</p>
+<p>Note that the clipping in DP-SGD is much stronger than making the query have sensitivity <span class="math inline">\(C\)</span>. It bounds the difference between the query results of two <em>arbitrary</em> inputs by <span class="math inline">\(C\)</span>, rather than just <em>neighbouring</em> inputs.</p>
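The clipping operator and one update step can be sketched as follows. This is a minimal illustration, not a reference implementation: `grad_fn` stands for a hypothetical per-example gradient of the loss, and the averaged gradient is clipped at `C / 2` as in the displayed formula above.

```python
import numpy as np

def clip_norm(y, alpha):
    # y clipped to have L2 norm at most alpha: y / (1 v ||y||_2 / alpha).
    return y / max(1.0, np.linalg.norm(y) / alpha)

def dp_sgd_step(theta, batch, grad_fn, lr, C, sigma, rng):
    # One DP-SGD update: average the per-example gradients over the
    # subsampled minibatch, clip the average at C / 2, then add
    # Gaussian noise N(0, sigma^2 C^2 I).
    n = len(batch)
    avg_grad = sum(grad_fn(x, theta) for x in batch) / n
    noise = rng.normal(0.0, sigma * C, size=theta.shape)
    return theta - lr * clip_norm(avg_grad, C / 2) + noise
```

With `sigma = 0` and the clipping inactive, this reduces to a plain minibatch gradient step.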
+<p>In <a href="/posts/2019-03-14-great-but-manageable-expectations.html">Part 2 of this post</a> we will use the tools developed above to discuss the privacy guarantee for DP-SGD, among other things.</p>
+<h2 id="references">References</h2>
+<ul>
+<li>Abadi, Martín, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. “Deep Learning with Differential Privacy.” Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security - CCS’16, 2016, 308–18. <a href="https://doi.org/10.1145/2976749.2978318" class="uri">https://doi.org/10.1145/2976749.2978318</a>.</li>
+<li>Dwork, Cynthia, and Aaron Roth. “The Algorithmic Foundations of Differential Privacy.” Foundations and Trends® in Theoretical Computer Science 9, no. 3–4 (2013): 211–407. <a href="https://doi.org/10.1561/0400000042" class="uri">https://doi.org/10.1561/0400000042</a>.</li>
+<li>Dwork, Cynthia, Guy N. Rothblum, and Salil Vadhan. “Boosting and Differential Privacy.” In 2010 IEEE 51st Annual Symposium on Foundations of Computer Science, 51–60. Las Vegas, NV, USA: IEEE, 2010. <a href="https://doi.org/10.1109/FOCS.2010.12" class="uri">https://doi.org/10.1109/FOCS.2010.12</a>.</li>
+<li>Kasiviswanathan, Shiva Prasad, Homin K. Lee, Kobbi Nissim, Sofya Raskhodnikova, and Adam Smith. “What Can We Learn Privately?” In 46th Annual IEEE Symposium on Foundations of Computer Science (FOCS’05). Pittsburgh, PA, USA: IEEE, 2005. <a href="https://doi.org/10.1109/SFCS.2005.1" class="uri">https://doi.org/10.1109/SFCS.2005.1</a>.</li>
+<li>Murtagh, Jack, and Salil Vadhan. “The Complexity of Computing the Optimal Composition of Differential Privacy.” In Theory of Cryptography, edited by Eyal Kushilevitz and Tal Malkin, 9562:157–75. Berlin, Heidelberg: Springer Berlin Heidelberg, 2016. <a href="https://doi.org/10.1007/978-3-662-49096-9_7" class="uri">https://doi.org/10.1007/978-3-662-49096-9_7</a>.</li>
+<li>Ullman, Jonathan. “Solution to CS7880 Homework 1.”, 2017. <a href="http://www.ccs.neu.edu/home/jullman/cs7880s17/HW1sol.pdf" class="uri">http://www.ccs.neu.edu/home/jullman/cs7880s17/HW1sol.pdf</a></li>
+<li>Vadhan, Salil. “The Complexity of Differential Privacy.” In Tutorials on the Foundations of Cryptography, edited by Yehuda Lindell, 347–450. Cham: Springer International Publishing, 2017. <a href="https://doi.org/10.1007/978-3-319-57048-8_7" class="uri">https://doi.org/10.1007/978-3-319-57048-8_7</a>.</li>
+</ul>
+</body>
+</html>
diff --git a/site-from-md/posts/2019-03-14-great-but-manageable-expectations.html b/site-from-md/posts/2019-03-14-great-but-manageable-expectations.html
new file mode 100644
index 0000000..e276d47
--- /dev/null
+++ b/site-from-md/posts/2019-03-14-great-but-manageable-expectations.html
@@ -0,0 +1,359 @@
+<!doctype html>
+<html lang="en">
+ <head>
+ <meta charset="utf-8">
+ <title>Great but Manageable Expectations</title>
+ <link rel="stylesheet" href="../assets/css/default.css" />
+ <script data-isso="/comments/"
+ data-isso-css="true"
+ data-isso-lang="en"
+ data-isso-reply-to-self="false"
+ data-isso-require-author="true"
+ data-isso-require-email="true"
+ data-isso-max-comments-top="10"
+ data-isso-max-comments-nested="5"
+ data-isso-reveal-on-click="5"
+ data-isso-avatar="true"
+ data-isso-avatar-bg="#f0f0f0"
+ data-isso-avatar-fg="#9abf88 #5698c4 #e279a3 #9163b6 ..."
+ data-isso-vote="true"
+ data-vote-levels=""
+ src="/comments/js/embed.min.js"></script>
+ <script src="https://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML" type="text/javascript"></script>
+ <script src="../assets/js/analytics.js" type="text/javascript"></script>
+ </head>
+ <body>
+ <header>
+ <span class="logo">
+ <a href="../blog.html">Yuchen's Blog</a>
+ </span>
+ <nav>
+ <a href="../index.html">About</a><a href="../postlist.html">All posts</a><a href="../blog-feed.xml">Feed</a>
+ </nav>
+ </header>
+
+ <div class="main">
+ <div class="bodyitem">
+ <h2> Great but Manageable Expectations </h2>
+ <p>Posted on 2019-03-14 | <a href="/posts/2019-03-14-great-but-manageable-expectations.html#isso-thread">Comments</a> </p>
+ <!DOCTYPE html>
+<html xmlns="http://www.w3.org/1999/xhtml" lang="" xml:lang="">
+<head>
+ <meta charset="utf-8" />
+ <meta name="generator" content="pandoc" />
+ <meta name="viewport" content="width=device-width, initial-scale=1.0, user-scalable=yes" />
+ <title>Untitled</title>
+ <style>
+ code{white-space: pre-wrap;}
+ span.smallcaps{font-variant: small-caps;}
+ span.underline{text-decoration: underline;}
+ div.column{display: inline-block; vertical-align: top; width: 50%;}
+ </style>
+ <script src="https://cdnjs.cloudflare.com/ajax/libs/mathjax/2.7.2/MathJax.js?config=TeX-AMS_CHTML-full" type="text/javascript"></script>
+ <!--[if lt IE 9]>
+ <script src="//cdnjs.cloudflare.com/ajax/libs/html5shiv/3.7.3/html5shiv-printshiv.min.js"></script>
+ <![endif]-->
+</head>
+<body>
+<nav id="TOC">
+<ul>
+<li><a href="#rényi-divergence-and-differential-privacy">Rényi divergence and differential privacy</a></li>
+<li><a href="#acgmmtz16">ACGMMTZ16</a></li>
+<li><a href="#tensorflow-implementation">Tensorflow implementation</a></li>
+<li><a href="#comparison-among-different-methods">Comparison among different methods</a></li>
+<li><a href="#further-questions">Further questions</a></li>
+<li><a href="#references">References</a></li>
+</ul>
+</nav>
+<p>This is Part 2 of a two-part blog post on differential privacy. Continuing from <a href="/posts/2019-03-13-a-tail-of-two-densities.html">Part 1</a>, I discuss Rényi differential privacy, which corresponds to the Rényi divergence, and study the moment generating functions of the divergence between probability measures in order to derive tail bounds.</p>
+<p>Like in Part 1, I prove a composition theorem and a subsampling theorem.</p>
+<p>I also attempt to reproduce a seemingly better moment bound for the Gaussian mechanism with subsampling, with one intermediate step which I am not able to prove.</p>
+<p>After that I explain the Tensorflow implementation of differential privacy in its <a href="https://github.com/tensorflow/privacy/tree/master/privacy">Privacy</a> module, which focuses on the differentially private stochastic gradient descent algorithm (DP-SGD).</p>
+<p>Finally I use the results from both Part 1 and Part 2 to obtain some privacy guarantees for composed subsampling queries in general, and for DP-SGD in particular. I also compare these privacy guarantees.</p>
+<p><em>If you are confused by any notations, ask me or try <a href="/notations.html">this</a>.</em></p>
+<h2 id="rényi-divergence-and-differential-privacy">Rényi divergence and differential privacy</h2>
+<p>Recall that in the proof of the Gaussian mechanism privacy guarantee (Claim 8) we used the Chernoff bound for the Gaussian noise. Why not use the Chernoff bound for the divergence variable / privacy loss directly, since the latter is closer to the core subject than the noise? This leads us to the study of the Rényi divergence.</p>
+<p>So far we have seen several notions of divergence used in differential privacy: the max divergence which is <span class="math inline">\(\epsilon\)</span>-ind in disguise:</p>
+<p><span class="math display">\[D_\infty(p || q) := \max_y \log {p(y) \over q(y)},\]</span></p>
+<p>the <span class="math inline">\(\delta\)</span>-approximate max divergence that defines the <span class="math inline">\((\epsilon, \delta)\)</span>-ind:</p>
+<p><span class="math display">\[D_\infty^\delta(p || q) := \max_y \log{p(y) - \delta \over q(y)},\]</span></p>
+<p>and the KL-divergence which is the expectation of the divergence variable:</p>
+<p><span class="math display">\[D(p || q) = \mathbb E L(p || q) = \int \log {p(y) \over q(y)} p(y) dy.\]</span></p>
+<p>The Rényi divergence is an interpolation between the max divergence and the KL-divergence, defined as the log moment generating function / cumulants of the divergence variable:</p>
+<p><span class="math display">\[D_\lambda(p || q) = (\lambda - 1)^{-1} \log \mathbb E \exp((\lambda - 1) L(p || q)) = (\lambda - 1)^{-1} \log \int {p(y)^\lambda \over q(y)^{\lambda - 1}} dx.\]</span></p>
+<p>Indeed, when <span class="math inline">\(\lambda \to \infty\)</span> we recover the max divergence, and when <span class="math inline">\(\lambda \to 1\)</span>, by recognising <span class="math inline">\(D_\lambda\)</span> as a derivative in <span class="math inline">\(\lambda\)</span> at <span class="math inline">\(\lambda = 1\)</span>, we recover the KL divergence. In this post we only consider <span class="math inline">\(\lambda &gt; 1\)</span>.</p>
+<p>Using the Rényi divergence we may define:</p>
+<p><strong>Definition (Rényi differential privacy)</strong> (Mironov 2017). A mechanism <span class="math inline">\(M\)</span> is <span class="math inline">\((\lambda, \rho)\)</span><em>-Rényi differentially private</em> (<span class="math inline">\((\lambda, \rho)\)</span>-rdp) if for all <span class="math inline">\(x\)</span> and <span class="math inline">\(x&#39;\)</span> with distance <span class="math inline">\(1\)</span>,</p>
+<p><span class="math display">\[D_\lambda(M(x) || M(x&#39;)) \le \rho.\]</span></p>
+<p>For convenience we also define two related notions, <span class="math inline">\(G_\lambda (f || g)\)</span> and <span class="math inline">\(\kappa_{f, g} (t)\)</span> for <span class="math inline">\(\lambda &gt; 1\)</span>, <span class="math inline">\(t &gt; 0\)</span> and positive functions <span class="math inline">\(f\)</span> and <span class="math inline">\(g\)</span>:</p>
+<p><span class="math display">\[G_\lambda(f || g) = \int f(y)^{\lambda} g(y)^{1 - \lambda} dy; \qquad \kappa_{f, g} (t) = \log G_{t + 1}(f || g).\]</span></p>
+<p>For probability densities <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span>, <span class="math inline">\(G_{t + 1}(p || q)\)</span> and <span class="math inline">\(\kappa_{p, q}(t)\)</span> are the moment generating function and the cumulant generating function of the divergence variable <span class="math inline">\(L(p || q)\)</span> evaluated at <span class="math inline">\(t\)</span>, and</p>
+<p><span class="math display">\[D_\lambda(p || q) = (\lambda - 1)^{-1} \kappa_{p, q}(\lambda - 1).\]</span></p>
+<p>In the following, whenever you see <span class="math inline">\(t\)</span>, think of it as <span class="math inline">\(\lambda - 1\)</span>.</p>
+<p><strong>Example 1 (RDP for the Gaussian mechanism)</strong>. Using the scaling and translation invariance of <span class="math inline">\(L\)</span> (6.1), we have that the divergence variable for two Gaussians with the same variance is</p>
+<p><span class="math display">\[L(N(\mu_1, \sigma^2 I) || N(\mu_2, \sigma^2 I)) \overset{d}{=} L(N(0, I) || N((\mu_2 - \mu_1) / \sigma, I)).\]</span></p>
+<p>With this we get</p>
+<p><span class="math display">\[D_\lambda(N(\mu_1, \sigma^2 I) || N(\mu_2, \sigma^2 I)) = {\lambda \|\mu_2 - \mu_1\|_2^2 \over 2 \sigma^2} = D_\lambda(N(\mu_2, \sigma^2 I) || N(\mu_1, \sigma^2 I)).\]</span></p>
+<p>Again due to the scaling invariance of <span class="math inline">\(L\)</span>, we only need to consider <span class="math inline">\(f\)</span> with sensitivity <span class="math inline">\(1\)</span>, see the discussion under (6.1). The Gaussian mechanism on query <span class="math inline">\(f\)</span> is thus <span class="math inline">\((\lambda, \lambda / 2 \sigma^2)\)</span>-rdp for any <span class="math inline">\(\lambda &gt; 1\)</span>.</p>
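The closed form for sensitivity <span class="math inline">\(1\)</span> is easy to check numerically. The following sketch integrates the definition of <span class="math inline">\(D_\lambda\)</span> on a grid for one-dimensional Gaussians; the grid bounds and step size are ad hoc choices of mine.

```python
import math

def renyi_gaussians(lam, mu1, mu2, sigma, lo=-30.0, hi=30.0, n=60001):
    # D_lambda(p || q) = (lam - 1)^{-1} log int p(y)^lam q(y)^(1 - lam) dy
    # for p = N(mu1, sigma^2) and q = N(mu2, sigma^2), via a Riemann sum.
    # The integrand is computed in log space to avoid overflow in the tails.
    h = (hi - lo) / (n - 1)
    log_z = math.log(sigma * math.sqrt(2 * math.pi))
    def log_pdf(y, mu):
        return -((y - mu) ** 2) / (2 * sigma ** 2) - log_z
    g = sum(math.exp(lam * log_pdf(lo + i * h, mu1)
                     + (1 - lam) * log_pdf(lo + i * h, mu2))
            for i in range(n)) * h
    return math.log(g) / (lam - 1)

lam, sigma = 4.0, 1.5
closed_form = lam / (2 * sigma ** 2)  # lambda ||mu_2 - mu_1||^2 / (2 sigma^2)
numeric = renyi_gaussians(lam, 0.0, 1.0, sigma)
# The two values agree to several decimal places.
```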
+<p>From the example of Gaussian mechanism, we see that the relation between <span class="math inline">\(\lambda\)</span> and <span class="math inline">\(\rho\)</span> is like that between <span class="math inline">\(\epsilon\)</span> and <span class="math inline">\(\delta\)</span>. Given <span class="math inline">\(\lambda\)</span> (resp. <span class="math inline">\(\rho\)</span>) and parameters like variance of the noise and the sensitivity of the query, we can write <span class="math inline">\(\rho = \rho(\lambda)\)</span> (resp. <span class="math inline">\(\lambda = \lambda(\rho)\)</span>).</p>
+<p>Using the Chernoff bound (6.7), we can bound the divergence variable:</p>
+<p><span class="math display">\[\mathbb P(L(p || q) \ge \epsilon) \le {\mathbb E \exp(t L(p || q)) \over \exp(t \epsilon)} = \exp (\kappa_{p, q}(t) - \epsilon t). \qquad (7.7)\]</span></p>
+<p>For a function <span class="math inline">\(f: I \to \mathbb R\)</span>, denote its Legendre transform by</p>
+<p><span class="math display">\[f^*(\epsilon) := \sup_{t \in I} (\epsilon t - f(t)).\]</span></p>
+<p>By taking infimum on the RHS of (7.7), we obtain</p>
+<p><strong>Claim 20</strong>. Two probability densities <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> are <span class="math inline">\((\epsilon, \exp(-\kappa_{p, q}^*(\epsilon)))\)</span>-ind.</p>
+<p>Given a mechanism <span class="math inline">\(M\)</span>, let <span class="math inline">\(\kappa_M(t)\)</span> denote an upper bound for the cumulant of its privacy loss:</p>
+<p><span class="math display">\[\log \mathbb E \exp(t L(M(x) || M(x&#39;))) \le \kappa_M(t), \qquad \forall x, x&#39;\text{ with } d(x, x&#39;) = 1.\]</span></p>
+<p>For example, we can set <span class="math inline">\(\kappa_M(t) = t \rho(t + 1)\)</span>. Using the same argument we have the following:</p>
+<p><strong>Claim 21</strong>. If <span class="math inline">\(M\)</span> is <span class="math inline">\((\lambda, \rho)\)</span>-rdp, then</p>
+<ol type="1">
+<li>it is also <span class="math inline">\((\epsilon, \exp((\lambda - 1) (\rho - \epsilon)))\)</span>-dp for any <span class="math inline">\(\epsilon \ge \rho\)</span>.</li>
+<li>Alternatively, <span class="math inline">\(M\)</span> is <span class="math inline">\((\epsilon, \exp(- \kappa_M^*(\epsilon)))\)</span>-dp for any <span class="math inline">\(\epsilon &gt; 0\)</span>.</li>
+<li>Alternatively, for any <span class="math inline">\(0 &lt; \delta \le 1\)</span>, <span class="math inline">\(M\)</span> is <span class="math inline">\((\rho + (\lambda - 1)^{-1} \log \delta^{-1}, \delta)\)</span>-dp.</li>
+</ol>
+<p><strong>Example 2 (Gaussian mechanism)</strong>. We can apply the above argument to the Gaussian mechanism on query <span class="math inline">\(f\)</span> and get:</p>
+<p><span class="math display">\[\delta \le \inf_{\lambda &gt; 1} \exp((\lambda - 1) ({\lambda \over 2 \sigma^2} - \epsilon))\]</span></p>
+<p>By assuming <span class="math inline">\(\sigma^2 &gt; (2 \epsilon)^{-1}\)</span> we have that the infimum is achieved when <span class="math inline">\(\lambda = (1 + 2 \epsilon \sigma^2) / 2\)</span> and</p>
+<p><span class="math display">\[\delta \le \exp(- ((2 \sigma)^{-1} - \epsilon \sigma)^2 / 2)\]</span></p>
+<p>which is the same result as (6.8), obtained using the Chernoff bound of the noise.</p>
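+<p>A quick numeric check (my own sketch; the <span class="math inline">\(\epsilon\)</span>, <span class="math inline">\(\sigma\)</span> values are my picks) that minimising the bound over <span class="math inline">\(\lambda\)</span> on a grid reproduces the closed form above:</p>

```python
import numpy as np

# Sketch: minimise exp((lam - 1)(lam / (2 sigma^2) - eps)) over lam on a grid
# and compare with the closed form exp(-((2 sigma)^{-1} - eps sigma)^2 / 2).
def delta_grid(eps, sigma):
    lams = np.linspace(1.0, 50.0, 500_001)
    return np.exp(((lams - 1) * (lams / (2 * sigma ** 2) - eps)).min())

def delta_closed(eps, sigma):
    return np.exp(-(1 / (2 * sigma) - eps * sigma) ** 2 / 2)

for eps, sigma in [(0.5, 2.0), (1.0, 3.0)]:
    assert sigma ** 2 > 1 / (2 * eps)   # condition for the interior optimum
    assert abs(delta_grid(eps, sigma) - delta_closed(eps, sigma)) < 1e-6
```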
+<p>However, as we will see later, compositions will yield different results from those obtained from methods in <a href="/posts/2019-03-13-a-tail-of-two-densities.html">Part 1</a> when considering Rényi dp.</p>
+<p><strong>Claim 22 (Moment Composition Theorem)</strong>. Let <span class="math inline">\(M\)</span> be the adaptive composition of <span class="math inline">\(M_{1 : k}\)</span>. Suppose for any <span class="math inline">\(y_{&lt; i}\)</span>, <span class="math inline">\(M_i(y_{&lt; i})\)</span> is <span class="math inline">\((\lambda, \rho)\)</span>-rdp. Then <span class="math inline">\(M\)</span> is <span class="math inline">\((\lambda, k\rho)\)</span>-rdp.</p>
+<p><strong>Proof</strong>. Rather straightforward. As before let <span class="math inline">\(p_i\)</span> and <span class="math inline">\(q_i\)</span> be the conditional laws of the adaptive composition of <span class="math inline">\(M_{1 : i}\)</span> at <span class="math inline">\(x\)</span> and <span class="math inline">\(x&#39;\)</span> respectively, and <span class="math inline">\(p^i\)</span> and <span class="math inline">\(q^i\)</span> be the joint laws of <span class="math inline">\(M_{1 : i}\)</span> at <span class="math inline">\(x\)</span> and <span class="math inline">\(x&#39;\)</span> respectively. Denote</p>
+<p><span class="math display">\[D_i = \mathbb E \exp((\lambda - 1)\log {p^i(\xi_{1 : i}) \over q^i(\xi_{1 : i})})\]</span></p>
+<p>Then</p>
+<p><span class="math display">\[\begin{aligned}
+D_i &amp;= \mathbb E\mathbb E \left(\exp((\lambda - 1)\log {p_i(\xi_i | \xi_{&lt; i}) \over q_i(\xi_i | \xi_{&lt; i})}) \exp((\lambda - 1)\log {p^{i - 1}(\xi_{&lt; i}) \over q^{i - 1}(\xi_{&lt; i})}) \big| \xi_{&lt; i}\right) \\
+&amp;= \mathbb E \mathbb E \left(\exp((\lambda - 1)\log {p_i(\xi_i | \xi_{&lt; i}) \over q_i(\xi_i | \xi_{&lt; i})}) | \xi_{&lt; i}\right) \exp\left((\lambda - 1)\log {p^{i - 1}(\xi_{&lt; i}) \over q^{i - 1}(\xi_{&lt; i})}\right)\\
+&amp;\le \mathbb E \exp((\lambda - 1) \rho) \exp\left((\lambda - 1)\log {p^{i - 1}(\xi_{&lt; i}) \over q^{i - 1}(\xi_{&lt; i})}\right)\\
+&amp;= \exp((\lambda - 1) \rho) D_{i - 1}.
+\end{aligned}\]</span></p>
+<p>Applying this recursively we have</p>
+<p><span class="math display">\[D_k \le \exp(k(\lambda - 1) \rho),\]</span></p>
+<p>and so</p>
+<p><span class="math display">\[(\lambda - 1)^{-1} \log \mathbb E \exp((\lambda - 1)\log {p^k(\xi_{1 : k}) \over q^k(\xi_{1 : k})}) = (\lambda - 1)^{-1} \log D_k \le k \rho.\]</span></p>
+<p>Since this holds for all <span class="math inline">\(x\)</span> and <span class="math inline">\(x&#39;\)</span>, we are done. <span class="math inline">\(\square\)</span></p>
+<p>This, together with the scaling property of the Legendre transform:</p>
+<p><span class="math display">\[(k f)^*(x) = k f^*(x / k)\]</span></p>
+<p>yields</p>
+<p><strong>Claim 23</strong>. The <span class="math inline">\(k\)</span>-fold adaptive composition of <span class="math inline">\((\lambda, \rho(\lambda))\)</span>-rdp mechanisms is <span class="math inline">\((\epsilon, \exp(- k \kappa^*(\epsilon / k)))\)</span>-dp, where <span class="math inline">\(\kappa(t) := t \rho(t + 1)\)</span>.</p>
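+<p>The scaling identity used above can be checked numerically; the test function <span class="math inline">\(f(t) = t^2 / 2\)</span> (whose Legendre transform is <span class="math inline">\(f^*(x) = x^2 / 2\)</span>) and the grid are my own choices for illustration:</p>

```python
import numpy as np

# Sketch: verify (k f)^*(x) = k f^*(x / k) on a grid for f(t) = t^2 / 2.
ts = np.linspace(0.0, 100.0, 1_000_001)

def legendre(f, x):
    # Legendre transform sup_t (x t - f(t)), approximated by a grid maximum
    return (x * ts - f(ts)).max()

f = lambda t: t ** 2 / 2
for k, x in [(3.0, 5.0), (7.0, 2.0)]:
    lhs = legendre(lambda t: k * f(t), x)
    rhs = k * legendre(f, x / k)
    assert abs(lhs - rhs) < 1e-6
    assert abs(lhs - k * (x / k) ** 2 / 2) < 1e-6   # closed form k f^*(x / k)
```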
+<p><strong>Example 3 (Gaussian mechanism)</strong>. We can apply the above claim to the Gaussian mechanism, but let us also do the computation manually to check that it gives the same result. Again, without loss of generality we assume <span class="math inline">\(S_f = 1\)</span>. If we apply the Moment Composition Theorem to an adaptive composition of Gaussian mechanisms on the same query, then since each <span class="math inline">\(M_i\)</span> is <span class="math inline">\((\lambda, (2 \sigma^2)^{-1} \lambda)\)</span>-rdp, the composition <span class="math inline">\(M\)</span> is <span class="math inline">\((\lambda, (2 \sigma^2)^{-1} k \lambda)\)</span>-rdp. Processing this using the Chernoff bound as in the previous example, we have</p>
+<p><span class="math display">\[\delta = \exp(- ((2 \sigma / \sqrt k)^{-1} - \epsilon \sigma / \sqrt k)^2 / 2).\]</span></p>
+<p>Substituting <span class="math inline">\(\sigma\)</span> with <span class="math inline">\(\sigma / \sqrt k\)</span> in (6.81), we conclude that if</p>
+<p><span class="math display">\[\sigma &gt; \sqrt k \left(\epsilon^{-1} \sqrt{2 \log \delta^{-1}} + (2 \epsilon)^{- {1 \over 2}}\right)\]</span></p>
+<p>then the composition <span class="math inline">\(M\)</span> is <span class="math inline">\((\epsilon, \delta)\)</span>-dp.</p>
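+<p>This sufficient condition on <span class="math inline">\(\sigma\)</span> can be checked numerically against the composed bound (a sketch of mine; the parameter values are arbitrary picks):</p>

```python
import numpy as np

# Sketch: check that sigma = sqrt(k) (eps^{-1} sqrt(2 log(1/delta)) + (2 eps)^{-1/2})
# makes the composed bound exp(-((2 sigma / sqrt(k))^{-1} - eps sigma / sqrt(k))^2 / 2)
# at most delta.
def sigma_sufficient(eps, delta, k):
    return np.sqrt(k) * (np.sqrt(2 * np.log(1 / delta)) / eps + 1 / np.sqrt(2 * eps))

def delta_composed(eps, sigma, k):
    s = sigma / np.sqrt(k)
    return np.exp(-(1 / (2 * s) - eps * s) ** 2 / 2)

for eps, delta, k in [(0.5, 1e-5, 100), (1.0, 1e-6, 10)]:
    sigma = sigma_sufficient(eps, delta, k)
    assert delta_composed(eps, sigma, k) <= delta
```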
+<p>As we will see in the discussions at the end of this post, this result is different from (and probably better than) the one obtained by using the Advanced Composition Theorem (Claim 18).</p>
+<p>We also have a subsampling theorem for the Rényi dp.</p>
+<p><strong>Claim 24</strong>. Fix <span class="math inline">\(r \in [0, 1]\)</span>. Let <span class="math inline">\(m \le n\)</span> be two nonnegative integers with <span class="math inline">\(m = r n\)</span>. Let <span class="math inline">\(N\)</span> be a <span class="math inline">\((\lambda, \rho)\)</span>-rdp mechanism on <span class="math inline">\(X^m\)</span>. Let <span class="math inline">\(\mathcal I := \{J \subset [n]: |J| = m\}\)</span> be the set of subsets of <span class="math inline">\([n]\)</span> of size <span class="math inline">\(m\)</span>. Define mechanism <span class="math inline">\(M\)</span> on <span class="math inline">\(X^n\)</span> by</p>
+<p><span class="math display">\[M(x) = N(x_\gamma)\]</span></p>
+<p>where <span class="math inline">\(\gamma\)</span> is sampled uniformly from <span class="math inline">\(\mathcal I\)</span>. Then <span class="math inline">\(M\)</span> is <span class="math inline">\((\lambda, {1 \over \lambda - 1} \log (1 + r(e^{(\lambda - 1) \rho} - 1)))\)</span>-rdp.</p>
+<p>To prove Claim 24, we need a useful lemma:</p>
+<p><strong>Claim 25</strong>. Let <span class="math inline">\(p_{1 : n}\)</span> and <span class="math inline">\(q_{1 : n}\)</span> be nonnegative reals, and <span class="math inline">\(\lambda &gt; 1\)</span>. Then</p>
+<p><span class="math display">\[{(\sum p_i)^\lambda \over (\sum q_i)^{\lambda - 1}} \le \sum_i {p_i^\lambda \over q_i^{\lambda - 1}}. \qquad (8)\]</span></p>
+<p><strong>Proof</strong>. Let</p>
+<p><span class="math display">\[r(i) := p_i / P, \qquad u(i) := q_i / Q\]</span></p>
+<p>where</p>
+<p><span class="math display">\[P := \sum p_i, \qquad Q := \sum q_i\]</span></p>
+<p>then <span class="math inline">\(r\)</span> and <span class="math inline">\(u\)</span> are probability mass functions. Plugging <span class="math inline">\(p_i = r(i) P\)</span> and <span class="math inline">\(q_i = u(i) Q\)</span> into the objective (8), it suffices to show</p>
+<p><span class="math display">\[1 \le \sum_i {r(i)^\lambda \over u(i)^{\lambda - 1}} = \mathbb E_{\xi \sim u} \left({r(\xi) \over u(\xi)}\right)^\lambda\]</span></p>
+<p>This is true due to Jensen's Inequality:</p>
+<p><span class="math display">\[\mathbb E_{\xi \sim u} \left({r(\xi) \over u(\xi)}\right)^\lambda \ge \left(\mathbb E_{\xi \sim u} {r(\xi) \over u(\xi)} \right)^\lambda = 1.\]</span></p>
+<p><span class="math inline">\(\square\)</span></p>
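+<p>Inequality (8) is easy to stress-test numerically on random nonnegative inputs (my own sketch, not part of the proof):</p>

```python
import numpy as np

# Sketch: randomly test (sum p_i)^lam / (sum q_i)^(lam-1) <= sum p_i^lam / q_i^(lam-1)
# for nonnegative p_i, q_i and lambda > 1.
rng = np.random.default_rng(0)
for _ in range(1000):
    lam = 1.0 + 4.0 * rng.random()      # lambda > 1
    p = rng.random(6)
    q = rng.random(6) + 1e-6            # keep the q_i strictly positive
    lhs = p.sum() ** lam / q.sum() ** (lam - 1)
    rhs = (p ** lam / q ** (lam - 1)).sum()
    assert lhs <= rhs * (1 + 1e-12)
```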
+<p><strong>Proof of Claim 24</strong>. Define <span class="math inline">\(\mathcal I\)</span> as before.</p>
+<p>Let <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> be the laws of <span class="math inline">\(M(x)\)</span> and <span class="math inline">\(M(x&#39;)\)</span> respectively. For any <span class="math inline">\(I \in \mathcal I\)</span>, let <span class="math inline">\(p_I\)</span> and <span class="math inline">\(q_I\)</span> be the laws of <span class="math inline">\(N(x_I)\)</span> and <span class="math inline">\(N(x_I&#39;)\)</span> respectively. Then we have</p>
+<p><span class="math display">\[\begin{aligned}
+p(y) &amp;= n^{-1} \sum_{I \in \mathcal I} p_I(y) \\
+q(y) &amp;= n^{-1} \sum_{I \in \mathcal I} q_I(y),
+\end{aligned}\]</span></p>
+<p>where, with a slight abuse of notation, <span class="math inline">\(n = |\mathcal I|\)</span> in the rest of this proof.</p>
+<p>The MGF of <span class="math inline">\(L(p || q)\)</span> is thus</p>
+<p><span class="math display">\[\mathbb E \exp((\lambda - 1) L(p || q)) = n^{-1} \int {(\sum_I p_I(y))^\lambda \over (\sum_I q_I(y))^{\lambda - 1}} dy \le n^{-1} \sum_I \int {p_I(y)^\lambda \over q_I(y)^{\lambda - 1}} dy \qquad (9)\]</span></p>
+<p>where in the last step we used Claim 25. As in the proof of Claim 19, we divide <span class="math inline">\(\mathcal I\)</span> into disjoint sets <span class="math inline">\(\mathcal I_\in\)</span> and <span class="math inline">\(\mathcal I_\notin\)</span>. Furthermore we denote by <span class="math inline">\(n_\in\)</span> and <span class="math inline">\(n_\notin\)</span> their cardinalities. Then the right hand side of (9) becomes</p>
+<p><span class="math display">\[n^{-1} \sum_{I \in \mathcal I_\in} \int {p_I(y)^\lambda \over q_I(y)^{\lambda - 1}} dy + n^{-1} \sum_{I \in \mathcal I_\notin} \int {p_I(y)^\lambda \over q_I(y)^{\lambda - 1}} dy\]</span></p>
+<p>The summands in the first term are the MGFs of <span class="math inline">\(L(p_I || q_I)\)</span>, and the summands in the second term are <span class="math inline">\(1\)</span>, so</p>
+<p><span class="math display">\[\begin{aligned}
+\mathbb E \exp((\lambda - 1) L(p || q)) &amp;\le n^{-1} \sum_{I \in \mathcal I_\in} \mathbb E \exp((\lambda - 1) L(p_I || q_I)) + (1 - r) \\
+&amp;\le n^{-1} \sum_{I \in \mathcal I_\in} \exp((\lambda - 1) D_\lambda(p_I || q_I)) + (1 - r) \\
+&amp;\le r \exp((\lambda - 1) \rho) + (1 - r).
+\end{aligned}\]</span></p>
+<p>Taking log and dividing by <span class="math inline">\((\lambda - 1)\)</span> on both sides we have</p>
+<p><span class="math display">\[D_\lambda(p || q) \le (\lambda - 1)^{-1} \log (1 + r(\exp((\lambda - 1) \rho) - 1)).\]</span></p>
+<p><span class="math inline">\(\square\)</span></p>
+<p>As before, we can rewrite the conclusion of Claim 24 using <span class="math inline">\(1 + z \le e^z\)</span> and obtain <span class="math inline">\((\lambda, (\lambda - 1)^{-1} r (e^{(\lambda - 1) \rho} - 1))\)</span>-rdp, which further gives <span class="math inline">\((\lambda, \alpha^{-1} (e^\alpha - 1) r \rho)\)</span>-rdp (or <span class="math inline">\((\lambda, O(r \rho))\)</span>-rdp) if <span class="math inline">\((\lambda - 1) \rho &lt; \alpha\)</span> for some <span class="math inline">\(\alpha\)</span>.</p>
+<p>It is not hard to see that the subsampling theorem for the moment method, though similar to its counterpart in the usual method, does not help here, due to the lack of an analogue of the Advanced Composition Theorem for the moments.</p>
+<p><strong>Example 4 (Gaussian mechanism)</strong>. Applying the moment subsampling theorem to the Gaussian mechanism, we obtain <span class="math inline">\((\lambda, O(r \lambda / \sigma^2))\)</span>-rdp for a subsampled Gaussian mechanism with rate <span class="math inline">\(r\)</span>. Abadi-Chu-Goodfellow-McMahan-Mironov-Talwar-Zhang 2016 (ACGMMTZ16 in the following), however, gains an extra <span class="math inline">\(r\)</span> in the bound given certain assumptions.</p>
+<h2 id="acgmmtz16">ACGMMTZ16</h2>
+<p>What follows is my understanding of this result. I call it a conjecture because there is a gap in the proof: I am able neither to reproduce their proof nor to prove it myself. This does not mean the result is false. On the contrary, I am inclined to believe it is true.</p>
+<p><strong>Claim 26</strong>. Assume Conjecture 1 (see below) is true. For a subsampled Gaussian mechanism with ratio <span class="math inline">\(r\)</span>, if <span class="math inline">\(r = O(\sigma^{-1})\)</span> and <span class="math inline">\(\lambda = O(\sigma^2)\)</span>, then we have <span class="math inline">\((\lambda, O(r^2 \lambda / \sigma^2))\)</span>-rdp.</p>
+<p>Wait, why is there a conjecture? Well, I have tried but not been able to prove the following, which is a hidden assumption in the original proof:</p>
+<p><strong>Conjecture 1</strong>. Let <span class="math inline">\(p_i\)</span>, <span class="math inline">\(q_i\)</span>, <span class="math inline">\(\mu_i\)</span>, <span class="math inline">\(\nu_i\)</span> be probability densities on the same space for <span class="math inline">\(i = 1 : n\)</span>. If <span class="math inline">\(D_\lambda(p_i || q_i) \le D_\lambda(\mu_i || \nu_i)\)</span> for all <span class="math inline">\(i\)</span>, then</p>
+<p><span class="math display">\[D_\lambda(n^{-1} \sum_i p_i || n^{-1} \sum_i q_i) \le D_\lambda(n^{-1} \sum_i \mu_i || n^{-1} \sum_i \nu_i).\]</span></p>
+<p>Basically, it is saying "if for each <span class="math inline">\(i\)</span>, <span class="math inline">\(p_i\)</span> and <span class="math inline">\(q_i\)</span> are closer to each other than <span class="math inline">\(\mu_i\)</span> and <span class="math inline">\(\nu_i\)</span>, then so are their averages over <span class="math inline">\(i\)</span>". So it is heuristically reasonable.</p>
+<p>This conjecture is equivalent to its special case when <span class="math inline">\(n = 2\)</span> by an induction argument (replacing one pair of densities at a time).</p>
+<p>Recall the definition of <span class="math inline">\(G_\lambda\)</span> under the definition of Rényi differential privacy. The following Claim will be useful.</p>
+<p><strong>Claim 27</strong>. Let <span class="math inline">\(\lambda\)</span> be a positive integer, then</p>
+<p><span class="math display">\[G_\lambda(r p + (1 - r) q || q) = \sum_{k = 0 : \lambda} {\lambda \choose k} r^k (1 - r)^{\lambda - k} G_k(p || q).\]</span></p>
+<p><strong>Proof</strong>. Quite straightforward, by expanding the numerator <span class="math inline">\((r p + (1 - r) q)^\lambda\)</span> using binomial expansion. <span class="math inline">\(\square\)</span></p>
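+<p>Since the identity in Claim 27 holds pointwise under the integral, it can be verified numerically for Gaussians (my own sketch; <span class="math inline">\(r\)</span>, <span class="math inline">\(\sigma\)</span> and <span class="math inline">\(\lambda\)</span> below are arbitrary picks):</p>

```python
import numpy as np
from math import comb

# Sketch: verify Claim 27 for mu_0 = N(0, sigma^2), mu_1 = N(1, sigma^2).
x = np.linspace(-25.0, 25.0, 500_001)
dx = x[1] - x[0]
sigma, r, lam = 2.0, 0.1, 6
mu0 = np.exp(-x ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)
mu1 = np.exp(-(x - 1) ** 2 / (2 * sigma ** 2)) / np.sqrt(2 * np.pi * sigma ** 2)

def G(lam_, p, q):
    # G_lambda(p || q) = int p^lambda / q^(lambda - 1), by grid quadrature
    return (p ** lam_ / q ** (lam_ - 1)).sum() * dx

lhs = G(lam, r * mu1 + (1 - r) * mu0, mu0)
rhs = sum(comb(lam, k) * r ** k * (1 - r) ** (lam - k) * G(k, mu1, mu0)
          for k in range(lam + 1))
assert abs(lhs - rhs) < 1e-9
```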
+<p><strong>Proof of Claim 26</strong>. Let <span class="math inline">\(M\)</span> be the Gaussian mechanism with subsampling rate <span class="math inline">\(r\)</span>, and <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> be the laws of <span class="math inline">\(M(x)\)</span> and <span class="math inline">\(M(x&#39;)\)</span> respectively, where <span class="math inline">\(d(x, x&#39;) = 1\)</span>. I will break the proof into two parts:</p>
+<ol type="1">
+<li>The MGF of the privacy loss <span class="math inline">\(L(p || q)\)</span> is bounded by that of <span class="math inline">\(L(r \mu_1 + (1 - r) \mu_0 || \mu_0)\)</span> where <span class="math inline">\(\mu_i = N(i, \sigma^2)\)</span>.</li>
+<li>If <span class="math inline">\(r \le c_1 \sigma^{-1}\)</span> and <span class="math inline">\(\lambda \le c_2 \sigma^2\)</span>, then there exists <span class="math inline">\(C = C(c_1, c_2)\)</span> such that <span class="math inline">\(G_\lambda (r \mu_1 + (1 - r) \mu_0 || \mu_0) \le C\)</span> (since <span class="math inline">\(O(r^2 \lambda^2 / \sigma^2) = O(1)\)</span>).</li>
+</ol>
+<p><strong>Remark in the proof</strong>. Note that the choice of <span class="math inline">\(c_1\)</span>, <span class="math inline">\(c_2\)</span> and the function <span class="math inline">\(C(c_1, c_2)\)</span> are important to the practicality and usefulness of Claim 26.</p>
+<p>Part 1 can be derived using Conjecture 1. We use the notations <span class="math inline">\(p_I\)</span> and <span class="math inline">\(q_I\)</span> for <span class="math inline">\(p\)</span> and <span class="math inline">\(q\)</span> conditioned on the subsampling index <span class="math inline">\(I\)</span>, just as in the proofs of the subsampling theorems (Claims 19 and 24). Then</p>
+<p><span class="math display">\[D_\lambda(q_I || p_I) = D_\lambda(p_I || q_I)
+\begin{cases}
+\le D_\lambda(\mu_0 || \mu_1) = D_\lambda(\mu_1 || \mu_0), &amp; I \in \mathcal I_\in\\
+= D_\lambda(\mu_0 || \mu_0) = D_\lambda(\mu_1 || \mu_1) = 0 &amp; I \in \mathcal I_\notin
+\end{cases}\]</span></p>
+<p>Since <span class="math inline">\(p = |\mathcal I|^{-1} \sum_{I \in \mathcal I} p_I\)</span> and <span class="math inline">\(q = |\mathcal I|^{-1} \sum_{I \in \mathcal I} q_I\)</span> and <span class="math inline">\(|\mathcal I_\in| = r |\mathcal I|\)</span>, by Conjecture 1, we have Part 1.</p>
+<p><strong>Remark in the proof</strong>. As we can see here, instead of trying to prove Conjecture 1, it suffices to prove a weaker version of it, by specialising on mixture of Gaussians, in order to have a Claim 26 without any conjectural assumptions. I have in fact posted the Conjecture on <a href="https://math.stackexchange.com/questions/3147963/an-inequality-related-to-the-renyi-divergence">Stackexchange</a>.</p>
+<p>Now let us verify Part 2.</p>
+<p>Using Claim 27 and Example 1, we have</p>
+<p><span class="math display">\[\begin{aligned}
+G_\lambda(r \mu_1 + (1 - r) \mu_0 || \mu_0) &amp;= \sum_{j = 0 : \lambda} {\lambda \choose j} r^j (1 - r)^{\lambda - j} G_j(\mu_1 || \mu_0)\\
+&amp;=\sum_{j = 0 : \lambda} {\lambda \choose j} r^j (1 - r)^{\lambda - j} \exp(j (j - 1) / 2 \sigma^2). \qquad (9.5)
+\end{aligned}\]</span></p>
+<p>Let <span class="math inline">\(n = \lceil \sigma^2 \rceil\)</span>. It suffices to show</p>
+<p><span class="math display">\[\sum_{j = 0 : n} {n \choose j} (c_1 n^{- 1 / 2})^j (1 - c_1 n^{- 1 / 2})^{n - j} \exp(c_2 j (j - 1) / 2 n) \le C\]</span></p>
+<p>Note that we can discard the linear term <span class="math inline">\(- c_2 j / (2 n)\)</span> in the exponent, since we want to bound the sum from above.</p>
+<p>We examine the asymptotics of this sum for large <span class="math inline">\(n\)</span>, treating it as an approximation to the integral of a function <span class="math inline">\(\phi: [0, 1] \to \mathbb R\)</span>. For <span class="math inline">\(j = x n\)</span>, where <span class="math inline">\(x \in (0, 1)\)</span>, <span class="math inline">\(\phi\)</span> is defined as follows (note we multiply the summand by <span class="math inline">\(n\)</span> to compensate for the uniform measure <span class="math inline">\(n^{-1}\)</span> on <span class="math inline">\(\{1, ..., n\}\)</span>):</p>
+<p><span class="math display">\[\begin{aligned}
+\phi_n(x) &amp;:= n {n \choose j} (c_1 n^{- 1 / 2})^j (1 - c_1 n^{- 1 / 2})^{n - j} \exp(c_2 j^2 / 2 n) \\
+&amp;= n {n \choose x n} (c_1 n^{- 1 / 2})^{x n} (1 - c_1 n^{- 1 / 2})^{(1 - x) n} \exp(c_2 x^2 n / 2)
+\end{aligned}\]</span></p>
+<p>Using Stirling's approximation</p>
+<p><span class="math display">\[n! \approx \sqrt{2 \pi n} n^n e^{- n},\]</span></p>
+<p>we can approximate the binomial coefficient:</p>
+<p><span class="math display">\[{n \choose x n} \approx (\sqrt{2 \pi x (1 - x)} x^{x n} (1 - x)^{(1 - x) n})^{-1}.\]</span></p>
+<p>We also approximate</p>
+<p><span class="math display">\[(1 - c_1 n^{- 1 / 2})^{(1 - x) n} \approx \exp(- c_1 \sqrt{n} (1 - x)).\]</span></p>
+<p>With these we have</p>
+<p><span class="math display">\[\phi_n(x) \approx {1 \over \sqrt{2 \pi x (1 - x)}} \exp\left(- {1 \over 2} x n \log n + (x \log c_1 - x \log x - (1 - x) \log (1 - x) + {1 \over 2} c_2 x^2) n + {1 \over 2} \log n\right).\]</span></p>
+<p>This vanishes as <span class="math inline">\(n \to \infty\)</span>, and since <span class="math inline">\(\phi_n(x)\)</span> is bounded above by the integrable function <span class="math inline">\({1 \over \sqrt{2 \pi x (1 - x)}}\)</span> (cf. the arcsine law), and below by <span class="math inline">\(0\)</span>, we may invoke the dominated convergence theorem to exchange the limit with the integral and get</p>
+<p><span class="math display">\[\begin{aligned}
+\lim_{n \to \infty} &amp;G_n (r \mu_1 + (1 - r) \mu_0 || \mu_0) \\
+&amp;\le \lim_{n \to \infty} \int \phi_n(x) dx = \int \lim_{n \to \infty} \phi_n(x) dx = 0.
+\end{aligned}\]</span></p>
+<p>Thus we have that the generating function of the divergence variable <span class="math inline">\(L(r \mu_1 + (1 - r) \mu_0 || \mu_0)\)</span> is bounded.</p>
+<p>Can this be true for better orders</p>
+<p><span class="math display">\[r \le c_1 \sigma^{- d_r},\qquad \lambda \le c_2 \sigma^{d_\lambda}\]</span></p>
+<p>for some <span class="math inline">\(d_r \in (0, 1]\)</span> and <span class="math inline">\(d_\lambda \in [2, \infty)\)</span>? If we follow the same approximation using these exponents, then letting <span class="math inline">\(n = c_2 \sigma^{d_\lambda}\)</span>,</p>
+<p><span class="math display">\[\begin{aligned}
+{n \choose j} &amp;r^j (1 - r)^{n - j} G_j(\mu_0 || \mu_1) \le \phi_n(x) \\
+&amp;\approx {1 \over \sqrt{2 \pi x (1 - x)}} \exp\left({1 \over 2} c_2^{2 \over d_\lambda} x^2 n^{2 - {2 \over d_\lambda}} - {d_r \over 2} x n \log n + (x \log c_1 - x \log x - (1 - x) \log (1 - x)) n + {1 \over 2} \log n\right).
+\end{aligned}\]</span></p>
+<p>So we see that to keep the divergence moments bounded it is possible to have any <span class="math inline">\(r = O(\sigma^{- d_r})\)</span> for <span class="math inline">\(d_r \in (0, 1)\)</span>, but relaxing <span class="math inline">\(\lambda\)</span> may not be safe.</p>
+<p>If we relax <span class="math inline">\(r\)</span>, then we get</p>
+<p><span class="math display">\[G_\lambda(r \mu_1 + (1 - r) \mu_0 || \mu_0) = O(r^{2 / d_r} \lambda^2 \sigma^{-2}) = O(1).\]</span></p>
+<p>Note that now the constant <span class="math inline">\(C\)</span> depends on <span class="math inline">\(d_r\)</span> as well. Numerical experiments seem to suggest that <span class="math inline">\(C\)</span> can increase quite rapidly as <span class="math inline">\(d_r\)</span> decreases from <span class="math inline">\(1\)</span>. <span class="math inline">\(\square\)</span></p>
+<p>In the following, for consistency, we retain <span class="math inline">\(k\)</span> as the number of epochs, and use <span class="math inline">\(T := k / r\)</span> to denote the number of compositions / steps / minibatches. With Claim 26 we have:</p>
+<p><strong>Claim 28</strong>. Assume Conjecture 1 is true. Let <span class="math inline">\(\epsilon, c_1, c_2 &gt; 0\)</span>, <span class="math inline">\(r \le c_1 \sigma^{-1}\)</span>, <span class="math inline">\(T = {c_2 \over 2 C(c_1, c_2)} \epsilon \sigma^2\)</span>. Then DP-SGD with subsampling rate <span class="math inline">\(r\)</span> and <span class="math inline">\(T\)</span> steps is <span class="math inline">\((\epsilon, \delta)\)</span>-dp for</p>
+<p><span class="math display">\[\delta = \exp(- {1 \over 2} c_2 \sigma^2 \epsilon).\]</span></p>
+<p>In other words, for</p>
+<p><span class="math display">\[\sigma \ge \sqrt{2 c_2^{-1}} \epsilon^{- {1 \over 2}} \sqrt{\log \delta^{-1}},\]</span></p>
+<p>we can achieve <span class="math inline">\((\epsilon, \delta)\)</span>-dp.</p>
+<p><strong>Proof</strong>. By Claim 26 and the Moment Composition Theorem (Claim 22), for <span class="math inline">\(\lambda = c_2 \sigma^2\)</span>, substituting <span class="math inline">\(T = {c_2 \over 2 C(c_1, c_2)} \epsilon \sigma^2\)</span>, we have</p>
+<p><span class="math display">\[\mathbb P(L(p || q) \ge \epsilon) \le \exp(T C(c_1, c_2) - \lambda \epsilon) = \exp\left(- {1 \over 2} c_2 \sigma^2 \epsilon\right).\]</span></p>
+<p><span class="math inline">\(\square\)</span></p>
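+<p>The algebra in Claim 28 (still conditional on Conjecture 1) can be checked numerically; the sufficient <span class="math inline">\(\sigma\)</span> plugged into the <span class="math inline">\(\delta\)</span> bound should recover <span class="math inline">\(\delta\)</span> exactly. The parameter values below are my own picks:</p>

```python
from math import exp, log, sqrt

# Sketch: sigma = sqrt(2 / c2) sqrt(log(1/delta)) / sqrt(eps) should make
# exp(-c2 sigma^2 eps / 2) equal to delta.
def sigma_claim28(eps, delta, c2):
    return sqrt(2 / c2) * sqrt(log(1 / delta)) / sqrt(eps)

for eps, delta, c2 in [(1.0, 1e-5, 1.0), (0.5, 1e-6, 2.0)]:
    sigma = sigma_claim28(eps, delta, c2)
    assert abs(exp(-0.5 * c2 * sigma ** 2 * eps) - delta) <= 1e-9 * delta
```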
+<p><strong>Remark</strong>. Claim 28 is my understanding / version of Theorem 1 in [ACGMMTZ16], by using the same proof technique. Here I quote the original version of theorem with notions and notations altered for consistency with this post:</p>
+<blockquote>
+<p>There exist constants <span class="math inline">\(c_1&#39;, c_2&#39; &gt; 0\)</span> so that for any <span class="math inline">\(\epsilon &lt; c_1&#39; r^2 T\)</span>, DP-SGD is <span class="math inline">\((\epsilon, \delta)\)</span>-differentially private for any <span class="math inline">\(\delta &gt; 0\)</span> if we choose</p>
+</blockquote>
+<p><span class="math display">\[\sigma \ge c_2&#39; {r \sqrt{T \log (1 / \delta)} \over \epsilon}. \qquad (10)\]</span></p>
+<p>I am however unable to reproduce this version, even assuming Claim 26 is true, for the following reasons:</p>
+<ol type="1">
+<li><p>In the proof in the paper, we have <span class="math inline">\(\epsilon = c_1&#39; r^2 T\)</span> instead of "less than" in the statement of the Theorem. If we change it to <span class="math inline">\(\epsilon &lt; c_1&#39; r^2 T\)</span> then the direction of the inequality becomes opposite to the direction we want to prove: <span class="math display">\[\exp(k C(c_1, c_2) - \lambda \epsilon) \ge ...\]</span></p></li>
+<li><p>The implicit condition <span class="math inline">\(r = O(\sigma^{-1})\)</span> of Claim 26, whose result is used in the proof of this theorem, is not mentioned in the statement of the theorem. The implication is that (10) becomes an ill-formed condition, as the right hand side also depends on <span class="math inline">\(\sigma\)</span>.</p></li>
+</ol>
+<h2 id="tensorflow-implementation">Tensorflow implementation</h2>
+<p>The DP-SGD is implemented in <a href="https://github.com/tensorflow/privacy">TensorFlow Privacy</a>. In the following I discuss the package in its current state (2019-03-11). It is divided into two parts: <a href="https://github.com/tensorflow/privacy/tree/master/privacy/optimizers"><code>optimizers</code></a>, which implements the actual differentially private algorithms, and <a href="https://github.com/tensorflow/privacy/tree/master/privacy/analysis"><code>analysis</code></a>, which computes the privacy guarantees.</p>
+<p>The <code>analysis</code> part implements a privacy ledger that "keeps a record of all queries executed over a given dataset for the purpose of computing privacy guarantees". On the other hand, all the computation is done in <a href="https://github.com/tensorflow/privacy/blob/7e2d796bdee9b60dce21a82a397eefda35b0ac10/privacy/analysis/rdp_accountant.py"><code>rdp_accountant.py</code></a>. At this moment, <code>rdp_accountant.py</code> only implements the computation of the privacy guarantees for DP-SGD with Gaussian mechanism. In the following I will briefly explain the code in this file.</p>
+<p>Some notational correspondences: their <code>alpha</code> is our <span class="math inline">\(\lambda\)</span>, their <code>q</code> is our <span class="math inline">\(r\)</span>, their <code>A_alpha</code> (in the comments) is our <span class="math inline">\(\kappa_{r N(1, \sigma^2) + (1 - r) N(0, \sigma^2), N(0, \sigma^2)} (\lambda - 1)\)</span>, at least when <span class="math inline">\(\lambda\)</span> is an integer.</p>
+<ul>
+<li>The function <code>_compute_log_a</code> presumably computes the cumulants <span class="math inline">\(\kappa_{r N(1, \sigma^2) + (1 - r) N(0, \sigma^2), N(0, \sigma^2)}(\lambda - 1)\)</span>. It calls <code>_compute_log_a_int</code> or <code>_compute_log_a_frac</code> depending on whether <span class="math inline">\(\lambda\)</span> is an integer.</li>
+<li>The function <code>_compute_log_a_int</code> computes the cumulant using (9.5).</li>
+<li>When <span class="math inline">\(\lambda\)</span> is not an integer, we can't use (9.5). I have yet to decode how <code>_compute_log_a_frac</code> computes the cumulant (or an upper bound of it) in this case.</li>
+<li>The function <code>_compute_delta</code> computes <span class="math inline">\(\delta\)</span>s for a list of <span class="math inline">\(\lambda\)</span>s and <span class="math inline">\(\kappa\)</span>s using Item 1 of Claim 21 and returns the smallest one, and the function <code>_compute_epsilon</code> computes <span class="math inline">\(\epsilon\)</span> using Item 3 of Claim 21 in the same way.</li>
+</ul>
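+<p>The integer-<span class="math inline">\(\lambda\)</span> cumulant computation via (9.5) can be sketched as follows (my own illustration in log space for numerical stability, not TF Privacy's actual code; the parameter values are arbitrary):</p>

```python
from math import comb, exp, log

# Sketch: cumulant (9.5) of a subsampled Gaussian for integer lambda,
# computed via a log-sum-exp over the binomial terms.
def log_moment_int(q, sigma, lam):
    log_terms = [
        log(comb(lam, k)) + k * log(q) + (lam - k) * log(1 - q)
        + k * (k - 1) / (2 * sigma ** 2)
        for k in range(lam + 1)
    ]
    m = max(log_terms)
    return m + log(sum(exp(t - m) for t in log_terms))

# the rdp of order lam is the log moment divided by (lam - 1)
rdp = log_moment_int(q=0.01, sigma=4.0, lam=32) / (32 - 1)
assert rdp > 0
```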
+<p>In <code>optimizers</code>, among other things, the DP-SGD with Gaussian mechanism is implemented in <code>dp_optimizer.py</code> and <code>gaussian_query.py</code>. See the definition of <code>DPGradientDescentGaussianOptimizer</code> in <code>dp_optimizer.py</code> and trace the calls therein.</p>
+<p>At this moment, the privacy guarantee computation part and the optimizer part are separated, with <code>rdp_accountant.py</code> called in <code>compute_dp_sgd_privacy.py</code> with user-supplied parameters. I think this is due to the lack of implementation in <code>rdp_accountant.py</code> of any non-DPSGD-with-Gaussian privacy guarantee computation. There is already <a href="https://github.com/tensorflow/privacy/issues/23">an issue on this</a>, so hopefully it won't be long before the privacy guarantees can be automatically computed given a DP-SGD instance.</p>
+<h2 id="comparison-among-different-methods">Comparison among different methods</h2>
+<p>So far we have seen three routes to compute the privacy guarantees for DP-SGD with the Gaussian mechanism:</p>
+<ol type="1">
+<li>Claim 9 (single Gaussian mechanism privacy guarantee) -&gt; Claim 19 (Subsampling theorem) -&gt; Claim 18 (Advanced Adaptive Composition Theorem)</li>
+<li>Example 1 (RDP for the Gaussian mechanism) -&gt; Claim 22 (Moment Composition Theorem) -&gt; Example 3 (Moment composition applied to the Gaussian mechanism)</li>
+<li>Claim 26 (RDP for Gaussian mechanism with specific magnitudes for subsampling rate) -&gt; Claim 28 (Moment Composition Theorem and translation to conventional DP)</li>
+</ol>
+<p>Which one is the best?</p>
+<p>To make a fair comparison, we may use one parameter as the metric and set all the others to be the same. For example, we can</p>
+<ol type="1">
+<li>Given the same <span class="math inline">\(\epsilon\)</span>, <span class="math inline">\(r\)</span> (in Route 1 and 3), <span class="math inline">\(k\)</span>, <span class="math inline">\(\sigma\)</span>, compare the <span class="math inline">\(\delta\)</span>s</li>
+<li>Given the same <span class="math inline">\(\epsilon\)</span>, <span class="math inline">\(r\)</span> (in Route 1 and 3), <span class="math inline">\(k\)</span>, <span class="math inline">\(\delta\)</span>, compare the <span class="math inline">\(\sigma\)</span>s</li>
+<li>Given the same <span class="math inline">\(\delta\)</span>, <span class="math inline">\(r\)</span> (in Route 1 and 3), <span class="math inline">\(k\)</span>, <span class="math inline">\(\sigma\)</span>, compare the <span class="math inline">\(\epsilon\)</span>s.</li>
+</ol>
+<p>I find the first one, where <span class="math inline">\(\delta\)</span> is used as the metric, to be the best. This is because we have the tightest bounds and the cleanest formulas when comparing the <span class="math inline">\(\delta\)</span>s. For example, the Azuma and Chernoff bounds are both expressed as bounds on <span class="math inline">\(\delta\)</span>. On the other hand, inverting these bounds costs either tightness (Claim 9, bounds on <span class="math inline">\(\sigma\)</span>) or simplicity of the formula (Claim 18, the Advanced Adaptive Composition Theorem, bounds on <span class="math inline">\(\epsilon\)</span>).</p>
+<p>So if we use <span class="math inline">\(\sigma\)</span> or <span class="math inline">\(\epsilon\)</span> as the metric, we either get a less fair comparison or have to use much more complicated formulas as the bounds.</p>
+<p>Let us first compare Route 1 and Route 2 without specialising to the Gaussian mechanism.</p>
+<p><strong>Disclaimer</strong>. What follows is a bit messy and has not been reviewed by anyone.</p>
+<p>Suppose each mechanism <span class="math inline">\(N_i\)</span> satisfies <span class="math inline">\((\epsilon&#39;, \delta(\epsilon&#39;))\)</span>-dp. Let <span class="math inline">\(\tilde \epsilon := \log (1 + r (e^{\epsilon&#39;} - 1))\)</span>; then the subsampled mechanism <span class="math inline">\(M_i(x) = N_i(x_\gamma)\)</span> is <span class="math inline">\((\tilde \epsilon, r \tilde \delta(\tilde \epsilon))\)</span>-dp, where</p>
+<p><span class="math display">\[\tilde \delta(\tilde \epsilon) = \delta(\log (r^{-1} (\exp(\tilde \epsilon) - 1) + 1))\]</span></p>
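+<p>As a sanity check, the subsampling amplification statement above can be sketched in code (a rough illustration only; <code>r</code> is the subsampling rate, and the function names are mine):</p>

```python
import math

def subsampled_dp(eps_prime, delta, r):
    """Privacy amplification by subsampling, as stated above:
    if N is (eps', delta)-dp, then running N on an r-subsample
    yields a (log(1 + r(e^{eps'} - 1)), r * delta)-dp mechanism."""
    eps_tilde = math.log(1 + r * (math.exp(eps_prime) - 1))
    return eps_tilde, r * delta

def eps_prime_from_tilde(eps_tilde, r):
    """The inverse map used in the definition of delta-tilde:
    eps' = log(r^{-1}(e^{eps_tilde} - 1) + 1)."""
    return math.log((math.exp(eps_tilde) - 1) / r + 1)
```

+<p>For <span class="math inline">\(r &lt; 1\)</span> the amplified <span class="math inline">\(\tilde\epsilon\)</span> is strictly smaller than <span class="math inline">\(\epsilon&#39;\)</span>, and the two maps are inverses of each other.</p>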
+<p>Using the Azuma bound in the proof of the Advanced Adaptive Composition Theorem (6.99):</p>
+<p><span class="math display">\[\mathbb P(L(p^k || q^k) \ge \epsilon) \le \exp(- {(\epsilon - r^{-1} k a(\tilde\epsilon))^2 \over 2 r^{-1} k (\tilde\epsilon + a(\tilde\epsilon))^2}).\]</span></p>
+<p>So we have the final bound for Route 1:</p>
+<p><span class="math display">\[\delta_1(\epsilon) = \min_{\tilde \epsilon: \epsilon &gt; r^{-1} k a(\tilde \epsilon)} \exp(- {(\epsilon - r^{-1} k a(\tilde\epsilon))^2 \over 2 r^{-1} k (\tilde\epsilon + a(\tilde\epsilon))^2}) + k \tilde \delta(\tilde \epsilon).\]</span></p>
+<p>As for Route 2, since we do not gain anything from subsampling in RDP, we do not subsample at all.</p>
+<p>By Claim 23, we have the bound for Route 2:</p>
+<p><span class="math display">\[\delta_2(\epsilon) = \exp(- k \kappa^* (\epsilon / k)).\]</span></p>
+<p>On one hand, one can compare <span class="math inline">\(\delta_1\)</span> and <span class="math inline">\(\delta_2\)</span> with numerical experiments. On the other hand, if we further specify <span class="math inline">\(\delta(\epsilon&#39;)\)</span> in Route 1 as the Chernoff bound for the cumulants of the divergence variable, i.e.</p>
+<p><span class="math display">\[\delta(\epsilon&#39;) = \exp(- \kappa^* (\epsilon&#39;)),\]</span></p>
+<p>we have</p>
+<p><span class="math display">\[\delta_1 (\epsilon) = \min_{\tilde \epsilon: a(\tilde \epsilon) &lt; r k^{-1} \epsilon} \exp(- {(\epsilon - r^{-1} k a(\tilde\epsilon))^2 \over 2 r^{-1} k (\tilde\epsilon + a(\tilde\epsilon))^2}) + k \exp(- \kappa^* (b(\tilde\epsilon))),\]</span></p>
+<p>where</p>
+<p><span class="math display">\[b(\tilde \epsilon) := \log (r^{-1} (\exp(\tilde \epsilon) - 1) + 1) \le r^{-1} \tilde\epsilon.\]</span></p>
+<p>We note that since <span class="math inline">\(a(\tilde \epsilon) = \tilde\epsilon(e^{\tilde \epsilon} - 1) 1_{\tilde\epsilon &lt; \log 2} + \tilde\epsilon 1_{\tilde\epsilon \ge \log 2}\)</span>, we may compare the two cases separately.</p>
+<p>Note that <span class="math inline">\(\kappa^*\)</span> is a monotonically increasing function, therefore</p>
+<p><span class="math display">\[\kappa^* (b(\tilde\epsilon)) \le \kappa^*(r^{-1} \tilde\epsilon).\]</span></p>
+<p>So for <span class="math inline">\(\tilde \epsilon \ge \log 2\)</span>, we have</p>
+<p><span class="math display">\[k \exp(- \kappa^*(b(\tilde\epsilon))) \ge k \exp(- \kappa^*(r^{-1} \tilde \epsilon)) \ge k \exp(- \kappa^*(k^{-1} \epsilon)) \ge \delta_2(\epsilon).\]</span></p>
+<p>For <span class="math inline">\(\tilde\epsilon &lt; \log 2\)</span>, it is harder to compare, as now</p>
+<p><span class="math display">\[k \exp(- \kappa^*(b(\tilde\epsilon))) \ge k \exp(- \kappa^*(\epsilon / \sqrt{r k})).\]</span></p>
+<p>It is tempting to believe that this should also be greater than <span class="math inline">\(\delta_2(\epsilon)\)</span>, but I cannot say for sure. At least in the special case of the Gaussian mechanism, we have</p>
+<p><span class="math display">\[k \exp(- \kappa^*(\epsilon / \sqrt{r k})) = k \exp(- (\sigma \sqrt{\epsilon / k r} - (2 \sigma)^{-1})^2) \ge \exp(- k ({\sigma \epsilon \over k} - (2 \sigma)^{-1})^2) = \delta_2(\epsilon)\]</span></p>
+<p>when <span class="math inline">\(\epsilon\)</span> is sufficiently small. However, we still need to consider the case where <span class="math inline">\(\epsilon\)</span> is not too small. Overall it seems most likely that Route 2 is superior to Route 1.</p>
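+<p>The numerical experiments mentioned above can be sketched as follows. This rough script uses the Gaussian <span class="math inline">\(\kappa^*\)</span> from the display above together with the definitions of <span class="math inline">\(a\)</span> and <span class="math inline">\(b\)</span>; the parameter values are arbitrary illustrative choices, and the grid minimisation over <span class="math inline">\(\tilde\epsilon\)</span> is crude:</p>

```python
import math

def a(e):
    # a(eps-tilde) as defined in the text
    return e * (math.exp(e) - 1) if e < math.log(2) else e

def b(e, r):
    # b(eps-tilde) = log(r^{-1}(e^{eps-tilde} - 1) + 1)
    return math.log((math.exp(e) - 1) / r + 1)

def kappa_star(x, sigma):
    # Gaussian Chernoff exponent used in the text
    return (sigma * x - 1 / (2 * sigma)) ** 2

def delta1(eps, r, k, sigma, grid=10**4):
    # minimise over candidate eps-tilde on a grid, keeping feasible ones
    best = float('inf')
    for i in range(1, grid):
        te = eps * i / grid
        if k / r * a(te) >= eps:
            continue  # infeasible: the Azuma exponent would be negative
        azuma = math.exp(-(eps - k / r * a(te)) ** 2
                         / (2 * k / r * (te + a(te)) ** 2))
        best = min(best, azuma + k * math.exp(-kappa_star(b(te, r), sigma)))
    return best

def delta2(eps, k, sigma):
    # delta_2(eps) = exp(-k * kappa*(eps / k))
    return math.exp(-k * kappa_star(eps / k, sigma))

print(delta1(1.0, 0.01, 10, 5.0), delta2(1.0, 10, 5.0))
```

+<p>Plugging in different values of <span class="math inline">\(\epsilon\)</span>, <span class="math inline">\(r\)</span>, <span class="math inline">\(k\)</span> and <span class="math inline">\(\sigma\)</span> gives a feel for which route wins in which regime.</p>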
+<p>So let us compare Route 2 with Route 3:</p>
+<p>Given the condition to obtain the Chernoff bound</p>
+<p><span class="math display">\[{\sigma \epsilon \over k} &gt; (2 \sigma)^{-1}\]</span></p>
+<p>we have</p>
+<p><span class="math display">\[\delta_2(\epsilon) &gt; \exp(- k (\sigma \epsilon / k)^2) = \exp(- \sigma^2 \epsilon^2 / k).\]</span></p>
+<p>For this to achieve the same bound</p>
+<p><span class="math display">\[\delta_3(\epsilon) = \exp\left(- {1 \over 2} c_2 \sigma^2 \epsilon\right)\]</span></p>
+<p>we need the exponents to satisfy <span class="math inline">\(\sigma^2 \epsilon^2 / k &gt; {1 \over 2} c_2 \sigma^2 \epsilon\)</span>, i.e. <span class="math inline">\(k &lt; {2 \epsilon \over c_2}\)</span>. Since <span class="math inline">\(k\)</span> is a positive integer, this is only possible if <span class="math inline">\(c_2\)</span> is small or <span class="math inline">\(\epsilon\)</span> is large.</p>
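+<p>This condition is easy to verify numerically (a toy check; the value of <span class="math inline">\(c_2\)</span> below is a hypothetical choice, since the text leaves it unspecified):</p>

```python
import math

sigma, eps, c2 = 5.0, 2.0, 0.5  # hypothetical illustrative values

def route2_lower(k):
    # the lower bound exp(-sigma^2 eps^2 / k) on delta_2(eps)
    return math.exp(-sigma**2 * eps**2 / k)

def delta3():
    # delta_3(eps) = exp(-(1/2) c_2 sigma^2 eps)
    return math.exp(-0.5 * c2 * sigma**2 * eps)

# A necessary condition for delta_2 <= delta_3 is that the lower bound
# on delta_2 lies below delta_3, which holds exactly when k < 2*eps/c2
# (= 8 with these values).
for k in (1, 4, 8, 16):
    print(k, route2_lower(k) < delta3(), k < 2 * eps / c2)
```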
+<p>So taken at face value, Route 3 seems to achieve the best results. However, it also comes with some implicit conditions that need to be satisfied. First, <span class="math inline">\(T\)</span> needs to be at least <span class="math inline">\(1\)</span>, meaning</p>
+<p><span class="math display">\[{c_2 \over C(c_1, c_2)} \epsilon \sigma^2 \ge 1.\]</span></p>
+<p>Second, <span class="math inline">\(k\)</span> needs to be at least <span class="math inline">\(1\)</span> as well, i.e.</p>
+<p><span class="math display">\[k = r T \ge {c_1 c_2 \over C(c_1, c_2)} \epsilon \sigma \ge 1.\]</span></p>
+<p>Both conditions rely on the magnitudes of <span class="math inline">\(\epsilon\)</span>, <span class="math inline">\(\sigma\)</span>, <span class="math inline">\(c_1\)</span>, <span class="math inline">\(c_2\)</span>, and the rate of growth of <span class="math inline">\(C(c_1, c_2)\)</span>. The biggest problem in this list is the last item, because knowing how fast <span class="math inline">\(C\)</span> grows would give us a better idea of the constraints on the parameters needed to achieve the result in Route 3.</p>
+<h2 id="further-questions">Further questions</h2>
+<p>Here is a list of what I think may be interesting topics or potential problems to look at, with no guarantee that they are all awesome untouched research problems:</p>
+<ol type="1">
+<li>Prove Conjecture 1</li>
+<li>Find a theoretically definitive answer whether the methods in Part 1 or Part 2 yield better privacy guarantees.</li>
+<li>Study the non-Gaussian cases, general or specific. Let <span class="math inline">\(p\)</span> be some probability density, what is the tail bound of <span class="math inline">\(L(p(y) || p(y + \alpha))\)</span> for <span class="math inline">\(|\alpha| \le 1\)</span>? Can you find anything better than Gaussian? For a start, perhaps the nice tables of Rényi divergence in Gil-Alajaji-Linder 2013 may be useful?</li>
+<li>Find out how useful Claim 26 is. Perhaps start with computing the constant <span class="math inline">\(C\)</span> numerically.</li>
+<li>Help with <a href="https://github.com/tensorflow/privacy/issues/23">the aforementioned issue</a> in the Tensorflow privacy package.</li>
+</ol>
+<h2 id="references">References</h2>
+<ul>
+<li>Abadi, Martín, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya Mironov, Kunal Talwar, and Li Zhang. “Deep Learning with Differential Privacy.” Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security - CCS’16, 2016, 308–18. <a href="https://doi.org/10.1145/2976749.2978318" class="uri">https://doi.org/10.1145/2976749.2978318</a>.</li>
+<li>Erven, Tim van, and Peter Harremoës. “Rényi Divergence and Kullback-Leibler Divergence.” IEEE Transactions on Information Theory 60, no. 7 (July 2014): 3797–3820. <a href="https://doi.org/10.1109/TIT.2014.2320500" class="uri">https://doi.org/10.1109/TIT.2014.2320500</a>.</li>
+<li>Gil, M., F. Alajaji, and T. Linder. “Rényi Divergence Measures for Commonly Used Univariate Continuous Distributions.” Information Sciences 249 (November 2013): 124–31. <a href="https://doi.org/10.1016/j.ins.2013.06.018" class="uri">https://doi.org/10.1016/j.ins.2013.06.018</a>.</li>
+<li>Mironov, Ilya. “Renyi Differential Privacy.” 2017 IEEE 30th Computer Security Foundations Symposium (CSF), August 2017, 263–75. <a href="https://doi.org/10.1109/CSF.2017.11" class="uri">https://doi.org/10.1109/CSF.2017.11</a>.</li>
+</ul>
+</body>
+</html>
+
+ </div>
+ <section id="isso-thread"></section>
+ </div>
+ </body>
+</html>