 posts/2019-03-13-a-tail-of-two-densities.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/posts/2019-03-13-a-tail-of-two-densities.md b/posts/2019-03-13-a-tail-of-two-densities.md
index 460364a..f3e409c 100644
--- a/posts/2019-03-13-a-tail-of-two-densities.md
+++ b/posts/2019-03-13-a-tail-of-two-densities.md
@@ -8,7 +8,7 @@ comments: true
 This is Part 1 of a two-part post where I give an introduction to
 differential privacy, which is a study of tail bounds of the divergence between
 probability measures, with the end goal of applying it to stochastic
-gradient descent.
+gradient descent.
 I start with the definition of $\epsilon$-differential privacy
 (corresponding to max divergence), followed by