author    Yuchen Pei <me@ypei.me>  2018-05-11 22:29:13 +0200
committer Yuchen Pei <me@ypei.me>  2018-05-11 22:29:13 +0200
commit    54d7212b5ac5947d4b15180408337bbc027764d5 (patch)
tree      d4cd05430e743e3ecb51fd09a3e126e8a3622435 /microposts
parent    db786e35abb644d83f78c21e8c4f10e1d6568a5e (diff)
fixed a typo
Diffstat (limited to 'microposts')
-rw-r--r--  microposts/rnn-fsm.md | 2
1 file changed, 1 insertion, 1 deletion
diff --git a/microposts/rnn-fsm.md b/microposts/rnn-fsm.md
index 032adc6..61b500f 100644
--- a/microposts/rnn-fsm.md
+++ b/microposts/rnn-fsm.md
@@ -5,7 +5,7 @@ date: 2018-05-11
 Related to [a previous micropost](#neural-turing-machine).
-[These slides from Toronto](http://www.cs.toronto.edu/~rgrosse/csc321/lec9.pdf) is a nice introduction to RNN (recurrent neural network) from a computational point of view. It states that RNN can simulate any FSM (finite state machine, a.k.a. finite automata abbr. FA) with a toy example computing the parity of a binary string.
+[These slides from Toronto](http://www.cs.toronto.edu/~rgrosse/csc321/lec9.pdf) are a nice introduction to RNN (recurrent neural network) from a computational point of view. It states that RNN can simulate any FSM (finite state machine, a.k.a. finite automata abbr. FA) with a toy example computing the parity of a binary string.
 [Goodfellow et. al.'s book](http://www.deeplearningbook.org/contents/rnn.html) (see page 372 and 374) goes one step further, stating that RNN with a hidden-to-hidden layer can simulate Turing machines, and not only that, but also the *universal* Turing machine abbr. UTM (the book referenced [Siegelmann-Sontag](https://www.sciencedirect.com/science/article/pii/S0022000085710136)), a property not shared by the weaker network where the hidden-to-hidden layer is replaced by an output-to-hidden layer (page 376).
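
The parity example in the patched text above can be made concrete. Below is a minimal sketch of an RNN with hard-threshold units whose one-dimensional hidden state tracks the running parity of a binary string, i.e. the transition computes `state' = XOR(state, input)`. The hand-set weights `W1`, `b1`, `W2`, `b2`, the `step` activation, and the `parity` helper are illustrative assumptions, not the construction given in the cited slides.

```python
import numpy as np

def step(z):
    """Hard-threshold activation: elementwise 1.0 if z > 0, else 0.0."""
    return (z > 0).astype(float)

# Hand-wired toy transition (illustrative, not from the cited slides).
# The intermediate layer z has two units, OR and AND of (state, input);
# the new state is then XOR = OR AND (NOT AND).
W1 = np.array([[1.0, 1.0],    # OR unit:  fires if state + x > 0.5
               [1.0, 1.0]])   # AND unit: fires if state + x > 1.5
b1 = np.array([-0.5, -1.5])
W2 = np.array([[1.0, -1.0]])  # XOR: fires if OR - AND > 0.5
b2 = np.array([-0.5])

def parity(bits):
    """Return the parity (XOR of all bits) of a binary string, e.g. '0101'."""
    h = np.zeros(1)  # hidden state: running parity, initially even
    for ch in bits:
        x = float(ch)
        z = step(W1 @ np.array([h[0], x]) + b1)  # gate layer
        h = step(W2 @ z + b2)                    # new parity state
    return int(h[0])

assert parity("0101") == 0
assert parity("1101") == 1
```

Since XOR is not linearly separable, the state update needs the intermediate layer of two threshold units before the new state can be read off; a single-layer threshold transition could not track parity, which is part of why hidden units matter in the FSM-simulation argument.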