Diffstat (limited to 'posts/2019-03-14-great-but-manageable-expectations.org')
-rw-r--r--  posts/2019-03-14-great-but-manageable-expectations.org  10
1 file changed, 9 insertions(+), 1 deletion(-)
diff --git a/posts/2019-03-14-great-but-manageable-expectations.org b/posts/2019-03-14-great-but-manageable-expectations.org
index 6438090..68e757a 100644
--- a/posts/2019-03-14-great-but-manageable-expectations.org
+++ b/posts/2019-03-14-great-but-manageable-expectations.org
@@ -26,11 +26,12 @@ privacy guarantees for composed subsampling queries in general, and for
DP-SGD in particular. I also compare these privacy guarantees.
/If you are confused by any notations, ask me or try
-[[/notations.html][this]]./
+[[file:/notations.html][this]]./
** Rényi divergence and differential privacy
:PROPERTIES:
:CUSTOM_ID: rényi-divergence-and-differential-privacy
+ :ID: d1763dea-5e8f-4393-8f14-1d781147dcb5
:END:
Recall that in the proof of the Gaussian mechanism privacy guarantee (Claim 8) we
used the Chernoff bound for the Gaussian noise. Why not use the Chernoff
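For reference, the Chernoff bound for Gaussian noise invoked here is the standard
moment-generating-function argument (stated generically, not in the post's exact
notation): for \(X \sim N(0, \sigma^2)\) and \(t > 0\),

\[
\mathbb P(X > t) \le \inf_{\lambda > 0} e^{-\lambda t}\, \mathbb E\, e^{\lambda X}
= \inf_{\lambda > 0} e^{-\lambda t + \lambda^2 \sigma^2 / 2}
= e^{-t^2 / (2 \sigma^2)},
\]

with the infimum attained at \(\lambda = t / \sigma^2\).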
@@ -161,6 +162,7 @@ considering Rényi dp.
*** Moment Composition
:PROPERTIES:
:CUSTOM_ID: moment-composition
+ :ID: d5e94e5a-236d-4c41-96a4-4a93341f249a
:END:
*Claim 22 (Moment Composition Theorem)*. Let \(M\) be the adaptive
composition of \(M_{1 : k}\). Suppose for any \(y_{< i}\), \(M_i(y_{< i})\) is
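The statement of Claim 22 continues past this hunk. As background, the additivity
that moment composition rests on is the following well-known fact (stated
generically rather than as a reconstruction of the post's exact claim): if each
\(M_i\) has a log moment bound \(\alpha_{M_i}(\lambda) \le \epsilon_i\) at a fixed
order \(\lambda\), then the adaptive composition \(M\) satisfies

\[
\alpha_M(\lambda) \le \sum_{i = 1}^k \alpha_{M_i}(\lambda) \le \sum_{i = 1}^k \epsilon_i,
\]

i.e. the moment bounds add across the \(k\) steps, which is what yields tighter
guarantees than the Advanced Composition Theorem mentioned in the next hunk.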
@@ -228,6 +230,7 @@ the Advanced Composition Theorem (Claim 18).
*** Subsampling
:PROPERTIES:
:CUSTOM_ID: subsampling
+ :ID: 25cd27ac-fcb6-462f-9a3b-da861124d7b2
:END:
We also have a subsampling theorem for the Rényi dp.
@@ -330,6 +333,7 @@ assumptions.
** ACGMMTZ16
:PROPERTIES:
:CUSTOM_ID: acgmmtz16
+ :ID: 8b85cce3-01ad-4404-80c0-b73076d183a9
:END:
What follows is my understanding of this result. I call it a conjecture
because there is a gap: I am not able to reproduce their proof or
@@ -597,6 +601,7 @@ true, for the following reasons:
** Tensorflow implementation
:PROPERTIES:
:CUSTOM_ID: tensorflow-implementation
+ :ID: f856ad67-4f78-46b4-8c98-fda07a0dc670
:END:
The DP-SGD is implemented in
[[https://github.com/tensorflow/privacy][TensorFlow Privacy]]. In the
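For readers who want to reproduce the kind of privacy accounting this section
discusses, TensorFlow Privacy ships a helper that reports an \((\epsilon, \delta)\)
guarantee for a DP-SGD configuration. A minimal sketch follows; the import path of
=compute_dp_sgd_privacy= has moved between library versions and the training
parameters below are purely hypothetical, so treat both as assumptions to check
against your installed version.

#+BEGIN_SRC python
# Sketch: query the accountant-style epsilon for a DP-SGD run.
# Assumes tensorflow-privacy is installed; the import path may differ by version.
from tensorflow_privacy.privacy.analysis.compute_dp_sgd_privacy import (
    compute_dp_sgd_privacy,
)

# Hypothetical setup: 60000 training examples, batches of 256,
# Gaussian noise multiplier 1.1, 60 epochs, target delta = 1e-5.
eps, opt_order = compute_dp_sgd_privacy(
    n=60000,
    batch_size=256,
    noise_multiplier=1.1,
    epochs=60,
    delta=1e-5,
)
print(f"epsilon = {eps:.2f} at Renyi order {opt_order}")
#+END_SRC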
@@ -650,6 +655,7 @@ automatically computed given a DP-SGD instance.
** Comparison among different methods
:PROPERTIES:
:CUSTOM_ID: comparison-among-different-methods
+ :ID: 30502f53-d9ba-48ea-868a-dd4db995a6d4
:END:
So far we have seen three routes to compute the privacy guarantees for
DP-SGD with the Gaussian mechanism:
@@ -795,6 +801,7 @@ achieve the result in Route 3.
** Further questions
:PROPERTIES:
:CUSTOM_ID: further-questions
+ :ID: 277e8a8c-cc34-4ba9-84fb-d8950f6dc9de
:END:
Here is a list of what I think may be interesting topics or potential
problems to look at, with no guarantee that they are all awesome
@@ -816,6 +823,7 @@ untouched research problems:
** References
:PROPERTIES:
:CUSTOM_ID: references
+ :ID: 708aa715-dc2c-49ac-b7bb-f85ac168d8b3
:END:
- Abadi, Martín, Andy Chu, Ian Goodfellow, H. Brendan McMahan, Ilya