From a197676de5a88fc53fd5a5fcff8d3778e7009294 Mon Sep 17 00:00:00 2001
From: Yuchen Pei
Date: Wed, 20 Mar 2019 14:00:12 +0100
Subject: reddit peer review

---
 posts/2019-03-13-a-tail-of-two-densities.md | 14 ++++++++++++--
 1 file changed, 12 insertions(+), 2 deletions(-)

diff --git a/posts/2019-03-13-a-tail-of-two-densities.md b/posts/2019-03-13-a-tail-of-two-densities.md
index dea9d1f..dffe015 100644
--- a/posts/2019-03-13-a-tail-of-two-densities.md
+++ b/posts/2019-03-13-a-tail-of-two-densities.md
@@ -7,7 +7,7 @@ comments: true
 
 This is Part 1 of a two-part post where I give an introduction to
 differential privacy, which is a study of tail bounds of the divergence between
-probability measures, with the end goal of applying it to stochastic
+two probability measures, with the end goal of applying it to stochastic
 gradient descent.
 
 I start with the definition of $\epsilon$-differential privacy
@@ -44,6 +44,12 @@ Finally I use the results from both Part 1 and Part 2 to obtain some
 privacy guarantees for composed subsampling queries in general, and for
 DP-SGD in particular. I also compare these privacy guarantees.
 
+This post focuses on the mathematics of differential privacy, and should be
+accessible to anyone with some knowledge of probability.
+For how the subject of this post relates to privacy,
+check out the [Wikipedia entry](https://en.wikipedia.org/wiki/Differential_privacy)
+or [Dwork-Roth 2013](https://www.cis.upenn.edu/~aaroth/privacybook.html).
+
 **Acknowledgement**. I would like to thank
 [Stockholm AI](https://stockholm.ai) for introducing me to the subject
 of differential privacy. Thanks to (in chronological order) Reynaldo
@@ -63,7 +69,7 @@ The gist of differential privacy
 
 If you only have one minute, here is what differential privacy is
 about: Let $p$ and $q$ be two probability densities, we define the *divergence
-variable* of $(p, q)$ to be
+variable*[^dv] of $(p, q)$ to be
 
 $$L(p || q) := \log {p(\xi) \over q(\xi)}$$
 
@@ -91,6 +97,10 @@ by adding noise to the gradients.
 
 Now if you have an hour\...
 
+[^dv]: For those who have read about differential privacy but never heard
+of the term "divergence variable": it is closely related to the notion of "privacy loss";
+see the paragraph under Claim 6 in [Back to approximate differential privacy](#back-to-approximate-differential-privacy).
+
 $\epsilon$-dp
 -------------
-- 
cgit v1.2.3
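
As a numerical aside to the one-minute gist quoted in the patch: the tail of the divergence variable can be estimated by simulation. Below is a minimal Python sketch, not from the post itself, using the assumed Gaussian-mechanism pair $p = N(0, \sigma^2)$ and $q = N(1, \sigma^2)$ (neighbouring outputs of a query with sensitivity 1) and illustrative parameters $\sigma = 4$ and $\epsilon = 0.5$. It samples $L(p || q)$ with $\xi$ drawn from $p$ and estimates the tail probability $P(L(p || q) > \epsilon)$, which, roughly speaking, plays the role of $\delta$ in $(\epsilon, \delta)$-dp.

```python
# Monte Carlo sketch of the divergence variable for the Gaussian mechanism.
# Assumed setup (illustrative, not from the post): p = N(0, sigma^2),
# q = N(1, sigma^2), i.e. neighbouring outputs of a sensitivity-1 query.
import math

import numpy as np

rng = np.random.default_rng(0)
sigma, eps, n = 4.0, 0.5, 1_000_000  # noise level, dp parameter, sample size

xi = rng.normal(0.0, sigma, size=n)          # xi ~ p
log_p = -xi ** 2 / (2 * sigma ** 2)          # log-densities up to the common
log_q = -(xi - 1.0) ** 2 / (2 * sigma ** 2)  # normalising constant (it cancels)
L = log_p - log_q                            # samples of L(p || q)

delta_hat = (L > eps).mean()                 # estimate of P(L(p || q) > eps)

# For this pair, L(p || q) is itself Gaussian, N(1 / (2 sigma^2), 1 / sigma^2),
# so the estimate can be checked against the exact Gaussian tail.
mean, sd = 1 / (2 * sigma ** 2), 1 / sigma
delta_exact = 0.5 * math.erfc((eps - mean) / (sd * math.sqrt(2)))

print(f"Monte Carlo: P(L > {eps}) ~ {delta_hat:.4f}")
print(f"exact tail:  P(L > {eps}) = {delta_exact:.4f}")
```

Shrinking this tail by increasing $\sigma$ is exactly the trade-off the post studies: a larger noise level gives a smaller $\delta$ for the same $\epsilon$, at the cost of a less accurate query.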