-rw-r--r--  engine/engine.py        | 4 +---
-rw-r--r--  site/microblog-feed.xml | 2 +-
-rw-r--r--  site/microblog.html     | 2 +-
3 files changed, 3 insertions, 5 deletions
diff --git a/engine/engine.py b/engine/engine.py
index 91532fd..7cb45e9 100644
--- a/engine/engine.py
+++ b/engine/engine.py
@@ -59,9 +59,7 @@ def main():
     with open(templatesdir + 'barepost.html') as f:
         template = f.read()
     #headposts is the list of the first few posts, to be displayed on blog.html
-    headposts = {'body': ''}
-    for post in posts[:homepostnum]:
-        headposts['body'] += combine(post, template)['body']
+    headposts = {'body' : ''.join([combine(post, template)['body'] for post in posts[:homepostnum]])}
     with open(templatesdir + 'blog.html') as f:
         template = f.read()
     headposts = combine(headposts, template)
diff --git a/site/microblog-feed.xml b/site/microblog-feed.xml
index d06d09e..a6578bc 100644
--- a/site/microblog-feed.xml
+++ b/site/microblog-feed.xml
@@ -19,7 +19,7 @@
 </author>
 <content type="html"><h3 id="some-notes-on-rnn-fsm-fa-tm-and-utm">Some notes on RNN, FSM / FA, TM and UTM</h3>
 <p>Related to <a href="#neural-turing-machine">a previous micropost</a>.</p>
-<p><a href="http://www.cs.toronto.edu/~rgrosse/csc321/lec9.pdf">These slides from Toronto</a> is a nice introduction to RNN (recurrent neural network) from a computational point of view. It states that RNN can simulate any FSM (finite state machine, a.k.a. finite automata abbr. FA) with a toy example computing the parity of a binary string.</p>
+<p><a href="http://www.cs.toronto.edu/~rgrosse/csc321/lec9.pdf">These slides from Toronto</a> are a nice introduction to RNN (recurrent neural network) from a computational point of view. It states that RNN can simulate any FSM (finite state machine, a.k.a. finite automata abbr. FA) with a toy example computing the parity of a binary string.</p>
 <p><a href="http://www.deeplearningbook.org/contents/rnn.html">Goodfellow et. al.’s book</a> (see page 372 and 374) goes one step further, stating that RNN with a hidden-to-hidden layer can simulate Turing machines, and not only that, but also the <em>universal</em> Turing machine abbr. UTM (the book referenced <a href="https://www.sciencedirect.com/science/article/pii/S0022000085710136">Siegelmann-Sontag</a>), a property not shared by the weaker network where the hidden-to-hidden layer is replaced by an output-to-hidden layer (page 376).</p>
 <p>By the way, the RNN with a hidden-to-hidden layer has the same architecture as the so-called linear dynamical system mentioned in <a href="https://www.coursera.org/learn/neural-networks/lecture/Fpa7y/modeling-sequences-a-brief-overview">Hinton’s video</a>.</p>
 <p>From what I have learned, the universality of RNN and feedforward networks are therefore due to different arguments, the former coming from Turing machines and the latter from an analytical view of approximation by step functions.</p>
diff --git a/site/microblog.html b/site/microblog.html
index 33e017f..c551725 100644
--- a/site/microblog.html
+++ b/site/microblog.html
@@ -22,7 +22,7 @@
 <span id=rnn-fsm><p><a href="#rnn-fsm">2018-05-11</a></p></span>
 <h3 id="some-notes-on-rnn-fsm-fa-tm-and-utm">Some notes on RNN, FSM / FA, TM and UTM</h3>
 <p>Related to <a href="#neural-turing-machine">a previous micropost</a>.</p>
-<p><a href="http://www.cs.toronto.edu/~rgrosse/csc321/lec9.pdf">These slides from Toronto</a> is a nice introduction to RNN (recurrent neural network) from a computational point of view. It states that RNN can simulate any FSM (finite state machine, a.k.a. finite automata abbr. FA) with a toy example computing the parity of a binary string.</p>
+<p><a href="http://www.cs.toronto.edu/~rgrosse/csc321/lec9.pdf">These slides from Toronto</a> are a nice introduction to RNN (recurrent neural network) from a computational point of view. It states that RNN can simulate any FSM (finite state machine, a.k.a. finite automata abbr. FA) with a toy example computing the parity of a binary string.</p>
 <p><a href="http://www.deeplearningbook.org/contents/rnn.html">Goodfellow et. al.’s book</a> (see page 372 and 374) goes one step further, stating that RNN with a hidden-to-hidden layer can simulate Turing machines, and not only that, but also the <em>universal</em> Turing machine abbr. UTM (the book referenced <a href="https://www.sciencedirect.com/science/article/pii/S0022000085710136">Siegelmann-Sontag</a>), a property not shared by the weaker network where the hidden-to-hidden layer is replaced by an output-to-hidden layer (page 376).</p>
 <p>By the way, the RNN with a hidden-to-hidden layer has the same architecture as the so-called linear dynamical system mentioned in <a href="https://www.coursera.org/learn/neural-networks/lecture/Fpa7y/modeling-sequences-a-brief-overview">Hinton’s video</a>.</p>
 <p>From what I have learned, the universality of RNN and feedforward networks are therefore due to different arguments, the former coming from Turing machines and the latter from an analytical view of approximation by step functions.</p>
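The engine.py change replaces an accumulating loop with a single ''.join over a list comprehension. A minimal sketch of why the two forms are equivalent, using a stand-in combine() and made-up posts/template data since the real helpers are not part of this diff:

```python
def combine(post, template):
    # Hypothetical stand-in: the real combine() merges a post dict into a
    # template; here it just wraps the post body in the template string.
    return {'body': template.format(body=post['body'])}

posts = [{'body': 'first'}, {'body': 'second'}, {'body': 'third'}]
template = '<article>{body}</article>'
homepostnum = 2

# Before the diff: accumulate with repeated string concatenation.
headposts_loop = {'body': ''}
for post in posts[:homepostnum]:
    headposts_loop['body'] += combine(post, template)['body']

# After the diff: build every rendered body, then join once.
headposts_join = {'body': ''.join([combine(post, template)['body']
                                   for post in posts[:homepostnum]])}

assert headposts_loop == headposts_join
print(headposts_join['body'])  # <article>first</article><article>second</article>
```

Beyond compactness, ''.join allocates the result once instead of rebuilding the string on every += iteration, which matters as the number of head posts grows.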
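The micropost quoted in both diffs mentions a toy example: an RNN simulating the parity FSM for binary strings. A hand-wired sketch of that idea with threshold units (my own construction, not taken from the Toronto slides), where the hidden state carries the FSM state and the weights are set by hand rather than learned:

```python
def step(z):
    # Hard-threshold activation, the classic unit for FSM constructions.
    return 1 if z > 0 else 0

def parity_rnn(bits):
    h = 0  # hidden state: parity of the bits seen so far (the FSM state)
    for x in bits:
        a = step(h + x - 1.5)          # a = h AND x
        h = step(h + x - 2 * a - 0.5)  # h = h XOR x, i.e. (h OR x) minus (h AND x)
    return h

print(parity_rnn([1, 0, 1, 1]))  # 1 (three ones: odd parity)
print(parity_rnn([1, 1, 0, 0]))  # 0 (two ones: even parity)
```

XOR is not computable by a single threshold unit, so the recurrence needs the auxiliary unit a; this mirrors the general point that a fixed-size recurrent state with nonlinear units can track any finite automaton's state.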