From 8b458071bd26c034dbd1add7fa3667af7899ecb5 Mon Sep 17 00:00:00 2001
From: Yuchen Pei
Date: Thu, 24 Jun 2021 17:58:50 +1000
Subject: Moved things in assets/resources out to assets/

---
 posts/2014-04-01-q-robinson-schensted-symmetry-paper.md  |  2 +-
 posts/2014-04-01-q-robinson-schensted-symmetry-paper.org |  2 +-
 posts/2015-04-02-juggling-skill-tree.md                  |  2 +-
 posts/2016-10-13-q-robinson-schensted-knuth-polymer.md   |  2 +-
 posts/2016-10-13-q-robinson-schensted-knuth-polymer.org  |  2 +-
 posts/2019-02-14-raise-your-elbo.md                      | 14 +++++++-------
 posts/2019-02-14-raise-your-elbo.org                     | 14 +++++++-------
 7 files changed, 19 insertions(+), 19 deletions(-)

(limited to 'posts')

diff --git a/posts/2014-04-01-q-robinson-schensted-symmetry-paper.md b/posts/2014-04-01-q-robinson-schensted-symmetry-paper.md
index 38874bb..a0432c8 100644
--- a/posts/2014-04-01-q-robinson-schensted-symmetry-paper.md
+++ b/posts/2014-04-01-q-robinson-schensted-symmetry-paper.md
@@ -13,7 +13,7 @@ generalisation of the growth diagram approach introduced by Fomin. This
 approach, which uses "growth graphs", can also be applied to a wider
 class of insertion algorithms which have a branching structure.
 
-![Growth graph of q-RS for 1423](../assets/resources/1423graph.jpg)
+![Growth graph of q-RS for 1423](../assets/1423graph.jpg)
 
 Above is the growth graph of the \\(q\\)-weighted Robinson-Schensted
 algorithm for the permutation \\({1 2 3 4\\choose1 4 2 3}\\).
diff --git a/posts/2014-04-01-q-robinson-schensted-symmetry-paper.org b/posts/2014-04-01-q-robinson-schensted-symmetry-paper.org
index b1c967d..7411a5d 100644
--- a/posts/2014-04-01-q-robinson-schensted-symmetry-paper.org
+++ b/posts/2014-04-01-q-robinson-schensted-symmetry-paper.org
@@ -10,7 +10,7 @@ approach, which uses "growth graphs", can also be applied to a wider
 class of insertion algorithms which have a branching structure.
 
 #+caption: Growth graph of q-RS for 1423
-[[../assets/resources/1423graph.jpg]]
+[[../assets/1423graph.jpg]]
 
 Above is the growth graph of the \(q\)-weighted Robinson-Schensted
 algorithm for the permutation \({1 2 3 4\choose1 4 2 3}\).
diff --git a/posts/2015-04-02-juggling-skill-tree.md b/posts/2015-04-02-juggling-skill-tree.md
index 11eb377..7708988 100644
--- a/posts/2015-04-02-juggling-skill-tree.md
+++ b/posts/2015-04-02-juggling-skill-tree.md
@@ -30,4 +30,4 @@ have therefore written a script using [Python](http://python.org),
 difficulties, which is the leftmost column) from the Library of
 Juggling (click the image for the full size):
 
-The juggling skill tree
+The juggling skill tree
diff --git a/posts/2016-10-13-q-robinson-schensted-knuth-polymer.md b/posts/2016-10-13-q-robinson-schensted-knuth-polymer.md
index 4d31e37..4a5f793 100644
--- a/posts/2016-10-13-q-robinson-schensted-knuth-polymer.md
+++ b/posts/2016-10-13-q-robinson-schensted-knuth-polymer.md
@@ -12,7 +12,7 @@ This article is available at [arXiv](https://arxiv.org/abs/1610.03692).
 
 It seems to me that one difference between arXiv and Github is that on arXiv each preprint has a few versions only. In Github many projects have a "dev" branch hosting continuous updates, whereas the master branch is where the stable releases live.
 
-[Here]({{ site.url }}/assets/resources/qrsklatest.pdf) is a "dev" version of the article, which I shall push to arXiv when it stabilises. Below is the changelog.
+[Here]({{ site.url }}/assets/qrsklatest.pdf) is a "dev" version of the article, which I shall push to arXiv when it stabilises. Below is the changelog.
 
 * 2017-01-12: Typos and grammar, arXiv v2.
 * 2016-12-20: Added remarks on the geometric \\(q\\)-pushTASEP. Added remarks on the converse of the Burke property. Added natural language description of the \\(q\\)RSK. Fixed typos.
diff --git a/posts/2016-10-13-q-robinson-schensted-knuth-polymer.org b/posts/2016-10-13-q-robinson-schensted-knuth-polymer.org
index 93da639..206a4ab 100644
--- a/posts/2016-10-13-q-robinson-schensted-knuth-polymer.org
+++ b/posts/2016-10-13-q-robinson-schensted-knuth-polymer.org
@@ -28,7 +28,7 @@ few versions only. In Github many projects have a "dev" branch hosting
 continuous updates, whereas the master branch is where the stable
 releases live.
 
-[[file:%7B%7B%20site.url%20%7D%7D/assets/resources/qrsklatest.pdf][Here]]
+[[file:/assets/qrsklatest.pdf][Here]]
 is a "dev" version of the article, which I shall push to arXiv when it
 stabilises. Below is the changelog.
diff --git a/posts/2019-02-14-raise-your-elbo.md b/posts/2019-02-14-raise-your-elbo.md
index 4080d0b..36bd364 100644
--- a/posts/2019-02-14-raise-your-elbo.md
+++ b/posts/2019-02-14-raise-your-elbo.md
@@ -138,7 +138,7 @@ $p(x_{1 : m}; \theta)$.
 Represented as a DAG (a.k.a. the plate notations), the model looks like
 this:
 
-![](/assets/resources/mixture-model.png){style="width:250px"}
+![](/assets/mixture-model.png){style="width:250px"}
 
 where the boxes with $m$ mean repetition for $m$ times, since there are
 $m$ independent pairs of $(x, z)$, and the same goes for $\eta$.
@@ -298,7 +298,7 @@ $$p(d_i = u, x_i = w | z_i = k; \theta) = p(d_i ; \xi_k) p(x_i; \eta_k) = \xi_{k
 
 The model can be illustrated in the plate notations:
 
-![](/assets/resources/plsa1.png){style="width:350px"}
+![](/assets/plsa1.png){style="width:350px"}
 
 So the solution of the M-step is
@@ -365,7 +365,7 @@ pLSA1, $(x | z = k) \sim \text{Cat}(\eta_{k, \cdot})$.
 
 Illustrated in the plate notations, pLSA2 is:
 
-![](/assets/resources/plsa2.png){style="width:350px"}
+![](/assets/plsa2.png){style="width:350px"}
 
 The computation is basically adding an index $\ell$ to the computation
 of SMM wherever applicable.
@@ -411,7 +411,7 @@ $$p(z_{i1}) = \pi_{z_{i1}}$$
 
 So the parameters are $\theta = (\pi, \xi, \eta)$. And HMM can be shown
 in plate notations as:
 
-![](/assets/resources/hmm.png){style="width:350px"}
+![](/assets/hmm.png){style="width:350px"}
 
 Now we apply EM to HMM, which is called the [Baum-Welch
 algorithm](https://en.wikipedia.org/wiki/Baum%E2%80%93Welch_algorithm).
@@ -569,7 +569,7 @@ later in this section that the posterior $q(\eta_k)$ belongs to the
 same family as $p(\eta_k)$. Represented in plate notations, a fully
 Bayesian mixture model looks like:
 
-![](/assets/resources/fully-bayesian-mm.png){style="width:450px"}
+![](/assets/fully-bayesian-mm.png){style="width:450px"}
 
 Given this structure we can write down the mean-field approximation:
@@ -672,7 +672,7 @@ As the second example of fully Bayesian mixture models, Latent
 Dirichlet allocation (LDA) (Blei-Ng-Jordan 2003) is the fully Bayesian
 version of pLSA2, with the following plate notations:
 
-![](/assets/resources/lda.png){style="width:450px"}
+![](/assets/lda.png){style="width:450px"}
 
 It is the smoothed version in the paper.
@@ -782,7 +782,7 @@ $$L(p, q) = \sum_{k = 1 : T} \mathbb E_{q(\theta_k)} \log {p(\theta_k) \over q(\
 
 The plate notation of this model is:
 
-![](/assets/resources/dpmm.png){style="width:450px"}
+![](/assets/dpmm.png){style="width:450px"}
 
 As it turns out, the infinities can be tamed in this case.
diff --git a/posts/2019-02-14-raise-your-elbo.org b/posts/2019-02-14-raise-your-elbo.org
index f0de7d1..9cc79a7 100644
--- a/posts/2019-02-14-raise-your-elbo.org
+++ b/posts/2019-02-14-raise-your-elbo.org
@@ -137,7 +137,7 @@ $p(x_{1 : m}; \theta)$.
 Represented as a DAG (a.k.a. the plate notations), the model looks
 like this:
 
-[[/assets/resources/mixture-model.png]]
+[[/assets/mixture-model.png]]
 
 where the boxes with $m$ mean repetition for $m$ times, since there are
 $m$ independent pairs of $(x, z)$, and the same goes for $\eta$.
@@ -309,7 +309,7 @@ $$p(d_i = u, x_i = w | z_i = k; \theta) = p(d_i ; \xi_k) p(x_i; \eta_k) = \xi_{k
 
 The model can be illustrated in the plate notations:
 
-[[/assets/resources/plsa1.png]]
+[[/assets/plsa1.png]]
 
 So the solution of the M-step is
@@ -380,7 +380,7 @@ pLSA1, $(x | z = k) \sim \text{Cat}(\eta_{k, \cdot})$.
 
 Illustrated in the plate notations, pLSA2 is:
 
-[[/assets/resources/plsa2.png]]
+[[/assets/plsa2.png]]
 
 The computation is basically adding an index $\ell$ to the computation
 of SMM wherever applicable.
@@ -429,7 +429,7 @@ $$p(z_{i1}) = \pi_{z_{i1}}$$
 
 So the parameters are $\theta = (\pi, \xi, \eta)$. And HMM can be shown
 in plate notations as:
 
-[[/assets/resources/hmm.png]]
+[[/assets/hmm.png]]
 
 Now we apply EM to HMM, which is called the
 [[https://en.wikipedia.org/wiki/Baum%E2%80%93Welch_algorithm][Baum-Welch
 algorithm]].
@@ -592,7 +592,7 @@ later in this section that the posterior $q(\eta_k)$ belongs to the
 same family as $p(\eta_k)$. Represented in plate notations, a fully
 Bayesian mixture model looks like:
 
-[[/assets/resources/fully-bayesian-mm.png]]
+[[/assets/fully-bayesian-mm.png]]
 
 Given this structure we can write down the mean-field approximation:
@@ -701,7 +701,7 @@ As the second example of fully Bayesian mixture models, Latent
 Dirichlet allocation (LDA) (Blei-Ng-Jordan 2003) is the fully Bayesian
 version of pLSA2, with the following plate notations:
 
-[[/assets/resources/lda.png]]
+[[/assets/lda.png]]
 
 It is the smoothed version in the paper.
@@ -813,7 +813,7 @@ $$L(p, q) = \sum_{k = 1 : T} \mathbb E_{q(\theta_k)} \log {p(\theta_k) \over q(\
 
 The plate notation of this model is:
 
-[[/assets/resources/dpmm.png]]
+[[/assets/dpmm.png]]
 
 As it turns out, the infinities can be tamed in this case.
--
cgit v1.2.3
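The change above is mechanical: every asset moves up one level out of assets/resources/, and every reference under posts/ drops the resources/ segment. Below is a minimal sketch of how such a migration could be scripted, in Python since that is the scripting language the posts themselves mention. It is not the script actually used for this commit: the run-from-repository-root assumption and the literal "assets/resources/" substring are my assumptions, it uses shutil.move rather than git mv, and it deliberately skips one-off cleanups such as the dropped {{ site.url }} prefix visible in the .org diff above.

#!/usr/bin/env python3
# Sketch of the migration in this commit, not the author's actual script.
# Assumes it runs from the repository root and that every reference uses
# the literal substring "assets/resources/".
from pathlib import Path
import shutil

repo = Path(".")
resources = repo / "assets" / "resources"

# 1. Move every asset one level up, out of assets/resources/.
if resources.is_dir():
    for asset in resources.iterdir():
        shutil.move(str(asset), str(repo / "assets" / asset.name))

# 2. Rewrite references in both the Markdown and Org sources.
for post in (repo / "posts").iterdir():
    if post.suffix in {".md", ".org"}:
        text = post.read_text(encoding="utf-8")
        new_text = text.replace("assets/resources/", "assets/")
        if new_text != text:
            post.write_text(new_text, encoding="utf-8")

After running it, the now-empty assets/resources/ directory can be removed and the result committed, reproducing the bulk of the path changes recorded in the diff.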