From 08a326a3a2fc423ce498b0d80edecb7e596f1f28 Mon Sep 17 00:00:00 2001
From: Yuchen Pei
Date: Fri, 11 May 2018 16:16:45 +0200
Subject: mionr edit

---
 microposts/rnn-fsm.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

(limited to 'microposts/rnn-fsm.md')

diff --git a/microposts/rnn-fsm.md b/microposts/rnn-fsm.md
index a1e1315..032adc6 100644
--- a/microposts/rnn-fsm.md
+++ b/microposts/rnn-fsm.md
@@ -5,7 +5,7 @@ date: 2018-05-11
 
 Related to [a previous micropost](#neural-turing-machine).
 
-[The slides from Toronto](http://www.cs.toronto.edu/~rgrosse/csc321/lec9.pdf) is a nice introduction to RNN (recurrent neural network) from a computational point of view. It states that RNN can simulate any FSM (finite state machine, a.k.a. finite automata abbr. FA) with a toy example computing the parity of a binary string.
+[These slides from Toronto](http://www.cs.toronto.edu/~rgrosse/csc321/lec9.pdf) is a nice introduction to RNN (recurrent neural network) from a computational point of view. It states that RNN can simulate any FSM (finite state machine, a.k.a. finite automata abbr. FA) with a toy example computing the parity of a binary string.
 
 [Goodfellow et. al.'s book](http://www.deeplearningbook.org/contents/rnn.html) (see page 372 and 374) goes one step further, stating that RNN with a hidden-to-hidden layer can simulate Turing machines, and not only that, but also the *universal* Turing machine abbr. UTM (the book referenced [Siegelmann-Sontag](https://www.sciencedirect.com/science/article/pii/S0022000085710136)), a property not shared by the weaker network where the hidden-to-hidden layer is replaced by an output-to-hidden layer (page 376).
 
-- 
cgit v1.2.3
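
To make the parity toy example in the patched paragraph concrete, here is a minimal sketch of an RNN simulating that two-state FSM. The weights are hand-picked and the two-hidden-unit OR/AND wiring is my own illustrative construction, not necessarily the one used in the slides:

```python
import numpy as np

def step(z):
    """Hard-threshold activation: 1 where z > 0, else 0."""
    return (np.asarray(z) > 0).astype(float)

# Hand-picked weights (illustrative, not taken from the slides).
# The two hidden units compute OR and AND of the current bit x and
# the previous output y, so the new output is
#   OR(x, y) AND NOT AND(x, y) = XOR(x, y),
# i.e. the fed-back output carries the running parity -- a two-state FSM.
W = np.array([1.0, 1.0])    # input -> hidden
U = np.array([1.0, 1.0])    # previous output -> hidden
b = np.array([-0.5, -1.5])  # hidden biases: OR and AND thresholds
V = np.array([1.0, -1.0])   # hidden -> output
c = -0.5                    # output bias

def parity(bits):
    y = 0.0                          # FSM state: parity seen so far
    for x in bits:
        h = step(W * x + U * y + b)  # hidden layer: [OR(x, y), AND(x, y)]
        y = float(step(V @ h + c))   # output: XOR(x, previous y)
    return int(y)

print(parity([1, 0, 1, 1]))  # 1 (three ones: odd parity)
print(parity([1, 1, 0, 0]))  # 0 (two ones: even parity)
```

Note that this particular wiring only feeds the *output* back into the hidden layer, i.e. the weaker output-to-hidden recurrence from page 376: it suffices for an FSM like the parity automaton, whose output is its entire state, even though per Goodfellow et al. it cannot simulate a universal Turing machine.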