author     Yuchen Pei <me@ypei.me>    2019-02-27 10:43:13 +0100
committer  Yuchen Pei <me@ypei.me>    2019-02-27 10:43:13 +0100
commit     120655e775d8400edbc3683d27003bf3e64c58d4 (patch)
tree       3c0ee05b7e46dd02da5f89afb7a5d85edbe6392b /posts
parent     4b8a9b0e358fa19be11a109ea414f95fd99f9be2 (diff)
fixed a typo
Diffstat (limited to 'posts')
-rw-r--r--  posts/2019-02-14-raise-your-elbo.md  4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/posts/2019-02-14-raise-your-elbo.md b/posts/2019-02-14-raise-your-elbo.md
index 14c8784..4080d0b 100644
--- a/posts/2019-02-14-raise-your-elbo.md
+++ b/posts/2019-02-14-raise-your-elbo.md
@@ -12,7 +12,7 @@ I use a top-down approach, starting with the KL divergence and the ELBO,
to lay the mathematical framework of all the models in this post.
Then I define mixture models and the EM algorithm, with Gaussian mixture
-model (GMM), probabilistic latent semantic analysis (pLSA) the hidden
+model (GMM), probabilistic latent semantic analysis (pLSA) and the hidden
markov model (HMM) as examples.
After that I present the fully Bayesian version of EM, also known as
@@ -420,7 +420,7 @@ $p(z_i | x_{i}; \theta)$, so during the E-step we instead write down
formula (2.5) directly in hope of simplifying it:
$$\begin{aligned}
-\mathbb E_{p(z_i | x_i; \theta_t)} \log p(x_i, z_i; \theta_t) &=\mathbb E_{p(z_i | x_i; \theta_t)} \left(\log \pi_{z_{i1}} + \sum_{j = 2 : T} \log a_{z_{i, j - 1}, z_{ij}} + \sum_{j = 1 : T} \log b_{z_{ij}, x_{ij}}\right). \qquad (3)
+\mathbb E_{p(z_i | x_i; \theta_t)} \log p(x_i, z_i; \theta_t) &=\mathbb E_{p(z_i | x_i; \theta_t)} \left(\log \pi_{z_{i1}} + \sum_{j = 2 : T} \log \xi_{z_{i, j - 1}, z_{ij}} + \sum_{j = 1 : T} \log \eta_{z_{ij}, x_{ij}}\right). \qquad (3)
\end{aligned}$$
Let us compute the summand in second term:
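For reference, formula (3) in the changed hunk can be evaluated numerically once the E-step posteriors are known. Below is a minimal sketch, assuming the single and pairwise posteriors (e.g. from the forward-backward algorithm) are given as arrays; the function name `expected_complete_loglik`, the argument names, and the array shapes are illustrative assumptions, not part of the post.

```python
# A minimal sketch, assuming the E-step posteriors are already available
# (e.g. from the forward-backward algorithm); names and shapes are
# illustrative, not from the post.
import numpy as np

def expected_complete_loglik(x, pi, xi, eta, gamma, pair):
    """Evaluate formula (3) for one observed sequence x of length T.

    pi:    (K,)        initial state distribution, pi_k
    xi:    (K, K)      transition probabilities, xi_{k, l}
    eta:   (K, V)      emission probabilities, eta_{k, v}
    gamma: (T, K)      gamma[j, k]   = p(z_j = k | x; theta_t)
    pair:  (T-1, K, K) pair[j, k, l] = p(z_j = k, z_{j+1} = l | x; theta_t)
    """
    T = len(x)
    # E log pi_{z_1}
    initial = np.sum(gamma[0] * np.log(pi))
    # sum over transitions of E log xi_{z_j, z_{j+1}}
    transition = sum(np.sum(pair[j] * np.log(xi)) for j in range(T - 1))
    # sum over positions of E log eta_{z_j, x_j}
    emission = sum(np.sum(gamma[j] * np.log(eta[:, x[j]])) for j in range(T))
    return initial + transition + emission
```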