| author | Yuchen Pei <me@ypei.me> | 2020-05-17 18:56:13 +0200 |
|---|---|---|
| committer | Yuchen Pei <me@ypei.me> | 2020-05-17 18:56:13 +0200 |
| commit | 5750cdc9c333aa58786661707169a10a865b7e27 (patch) | |
| tree | 6cabde2edd20e57f61e2c37dd6b9ab2059efb15b | |
| parent | 9f14879ad52c2bdbbe38e7f2a125a7fa42dea5f0 (diff) | |
minor typo fix.
| -rw-r--r-- | posts/2019-03-14-great-but-manageable-expectations.md | 2 |
|---|---|---|

1 file changed, 1 insertion, 1 deletion
```diff
diff --git a/posts/2019-03-14-great-but-manageable-expectations.md b/posts/2019-03-14-great-but-manageable-expectations.md
index b52a97c..2520954 100644
--- a/posts/2019-03-14-great-but-manageable-expectations.md
+++ b/posts/2019-03-14-great-but-manageable-expectations.md
@@ -56,7 +56,7 @@ The Rényi divergence is an interpolation between the max divergence and
 the KL-divergence, defined as the log moment generating function /
 cumulants of the divergence variable:
 
-$$D_\lambda(p || q) = (\lambda - 1)^{-1} \log \mathbb E \exp((\lambda - 1) L(p || q)) = (\lambda - 1)^{-1} \log \int {p(y)^\lambda \over q(y)^{\lambda - 1}} dx.$$
+$$D_\lambda(p || q) = (\lambda - 1)^{-1} \log \mathbb E \exp((\lambda - 1) L(p || q)) = (\lambda - 1)^{-1} \log \int {p(y)^\lambda \over q(y)^{\lambda - 1}} dy.$$
 
 Indeed, when $\lambda \to \infty$ we recover the max divergence, and
 when $\lambda \to 1$, by recognising $D_\lambda$ as a derivative in
```
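The change above only corrects the integration variable ($dx \to dy$): the integrand is a function of $y$, so the integral must be over $y$. As a quick sanity check of the corrected formula, here is a minimal sketch (not part of the repository) using a discrete analogue, where the integral becomes a sum over the support; the function names and the example distributions `p`, `q` are my own choices, not from the post:

```python
import numpy as np

def renyi_divergence(p, q, lam):
    # Discrete analogue of the corrected formula:
    # D_lambda(p||q) = (lam-1)^{-1} log sum_y p(y)^lam / q(y)^{lam-1}
    return np.log(np.sum(p**lam / q**(lam - 1))) / (lam - 1)

def kl_divergence(p, q):
    # KL divergence, the lam -> 1 limit of the Renyi divergence
    return np.sum(p * np.log(p / q))

p = np.array([0.2, 0.5, 0.3])
q = np.array([0.4, 0.4, 0.2])

# lam close to 1: should be close to the KL divergence
print(renyi_divergence(p, q, 1.001), kl_divergence(p, q))

# large lam: should approach the max divergence max_y log(p(y)/q(y))
print(renyi_divergence(p, q, 200.0), np.max(np.log(p / q)))
```

Both limiting behaviours stated in the diff's context lines (max divergence as $\lambda \to \infty$, KL as $\lambda \to 1$) can be observed numerically this way.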
