From db786e35abb644d83f78c21e8c4f10e1d6568a5e Mon Sep 17 00:00:00 2001
From: Yuchen Pei
Date: Fri, 11 May 2018 17:10:58 +0200
Subject: changed layout of blog.html

- cut post length on blog.html to synopsis, defaulted to one paragraph long
- edited engine accordingly
---
 site/microblog.html | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

(limited to 'site/microblog.html')

diff --git a/site/microblog.html b/site/microblog.html
index 8d3ba5a..33e017f 100644
--- a/site/microblog.html
+++ b/site/microblog.html
@@ -21,8 +21,8 @@

2018-05-11

Some notes on RNN, FSM / FA, TM and UTM

-

Related to a previous micropost.

-

The slides from Toronto is a nice introduction to RNN (recurrent neural network) from a computational point of view. It states that RNN can simulate any FSM (finite state machine, a.k.a. finite automata abbr. FA) with a toy example computing the parity of a binary string.

+

Related to a previous micropost.

+

These slides from Toronto are a nice introduction to RNNs (recurrent neural networks) from a computational point of view. They state that an RNN can simulate any FSM (finite state machine, a.k.a. finite automaton, abbr. FA), with a toy example computing the parity of a binary string.
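As a concrete illustration of that FSM point (my own sketch in Python, not taken from the slides, with the weights set by hand rather than learned), a single-bit recurrent state updated by two threshold units is enough to track the parity of a binary string:

```python
def step(z):
    """Hard-threshold activation: 1 if the pre-activation is positive, else 0."""
    return 1.0 if z > 0 else 0.0

def parity_rnn(bits):
    """Hand-wired threshold RNN computing the parity of a binary string.

    The hidden state h is a single bit holding the running parity
    (mirroring the two states of the parity FSM); each update computes
    h XOR x_t out of an OR unit and an AND unit.
    """
    h = 0.0  # start state: even parity
    for x in bits:
        or_unit = step(x + h - 0.5)         # 1 if x or h is 1
        and_unit = step(x + h - 1.5)        # 1 if both x and h are 1
        h = step(or_unit - and_unit - 0.5)  # OR minus AND, thresholded: XOR
    return int(h)  # 1 means an odd number of ones

if __name__ == "__main__":
    for s in ["", "1", "1101", "1111"]:
        print(repr(s), "->", parity_rnn([float(c) for c in s]))
```

A trained RNN would have to learn such weights rather than have them fixed by hand, but the construction shows why two states of a parity FSM fit into a recurrent hidden state.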

Goodfellow et al.’s book (see pages 372 and 374) goes one step further, stating that an RNN with a hidden-to-hidden layer can simulate not only Turing machines but also the universal Turing machine, abbr. UTM (the book references Siegelmann-Sontag), a property not shared by the weaker network in which the hidden-to-hidden layer is replaced by an output-to-hidden layer (page 376).

By the way, the RNN with a hidden-to-hidden layer has the same architecture as the so-called linear dynamical system mentioned in Hinton’s video.
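Spelled out (my own minimal sketch; the tanh nonlinearity is an arbitrary choice, and the noise terms of the full linear dynamical system are omitted), the two updates share the same wiring:

```python
import numpy as np

def rnn_step(h, x, W_hh, W_xh, b):
    """Hidden-to-hidden RNN update: nonlinear recurrence on the hidden state."""
    return np.tanh(W_hh @ h + W_xh @ x + b)

def lds_step(h, x, A, B):
    """Linear dynamical system update: the same wiring without the nonlinearity."""
    return A @ h + B @ x
```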

From what I have learned, the universality of RNNs and that of feedforward networks therefore rest on different arguments: the former comes from Turing machines, the latter from an analytical view of approximation by step functions.
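To make the contrast concrete, here is my own one-dimensional sketch of the step-function argument (assuming hard-threshold activations; the target function and grid size are arbitrary choices): a one-hidden-layer network whose hidden units are Heaviside steps approximates a function by a sum of indicator bumps.

```python
import math

def step(z):
    """Heaviside step activation."""
    return 1.0 if z > 0 else 0.0

def step_net(f, lo, hi, n):
    """One-hidden-layer network of step units approximating f on [lo, hi].

    The hidden layer holds 2 * n step units arranged as n indicator
    'bumps' step(x - a_i) - step(x - a_{i+1}); the output weights are
    the values of f at the midpoints of the n intervals.
    """
    width = (hi - lo) / n
    edges = [lo + i * width for i in range(n + 1)]
    heights = [f(lo + (i + 0.5) * width) for i in range(n)]

    def net(x):
        return sum(h * (step(x - edges[i]) - step(x - edges[i + 1]))
                   for i, h in enumerate(heights))

    return net

if __name__ == "__main__":
    net = step_net(math.sin, 0.0, math.pi, 100)
    for x in [0.5, 1.0, 2.0, 3.0]:
        print(x, round(net(x), 3), round(math.sin(x), 3))
```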

-- cgit v1.2.3