-rw-r--r-- | microposts/math-writing-decoupling.md | 2 |
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/microposts/math-writing-decoupling.md b/microposts/math-writing-decoupling.md
index 6fbe50c..e765b71 100644
--- a/microposts/math-writing-decoupling.md
+++ b/microposts/math-writing-decoupling.md
@@ -7,4 +7,4 @@ One way to write readable mathematics is to decouple concepts. One idea is the f
 
 It's a bit like the UNIX philosophy, where you have a basic system of modules like IO, memory management, graphics etc, and modify / improve each module individually (H/t [NAND2Tetris](http://nand2tetris.org/)).
 
-The book [Neural networks and deep learning](http://neuralnetworksanddeeplearning.com/) by Michael Nielsen is an example of such an approach. It begins the journey with a very simple neural net with one hidden layer, no regularisation, and sigmoid activations. It then analyses each component individually, including cost functions, the backpropagation algorithm, the activation functions, regularisation and the overall architecture (from fully connected to CNN), and improves the toy example incrementally. Over the course of the book, the accuracy of the MNIST example grows from 95.42% to 99.63%.
+The book [Neural networks and deep learning](http://neuralnetworksanddeeplearning.com/) by Michael Nielsen is an example of such an approach. It begins the journey with a very simple neural net with one hidden layer, no regularisation, and sigmoid activations. It then analyses each component individually, including cost functions, the backpropagation algorithm, the activation functions, regularisation and the overall architecture (from fully connected to CNN), and improves the toy example incrementally. Over the course of the book, the accuracy of the MNIST example grows from 95.42% to 99.67%.
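The baseline the post refers to (one hidden layer, sigmoid activations, no regularisation) could be sketched roughly as follows. This is a minimal illustrative sketch, not Nielsen's actual code; the class name, layer sizes, and random initialisation scheme are assumptions for the example:

```python
import numpy as np


def sigmoid(z):
    # Element-wise logistic sigmoid, the activation used in the baseline net.
    return 1.0 / (1.0 + np.exp(-z))


class TinyNet:
    """A feedforward net with sigmoid activations and no regularisation.

    Illustrative only -- layer sizes and naming are not from Nielsen's book.
    """

    def __init__(self, sizes, rng=None):
        # sizes, e.g. [784, 30, 10] for MNIST: input, one hidden layer, output.
        rng = rng or np.random.default_rng(0)
        self.weights = [
            rng.standard_normal((y, x)) for x, y in zip(sizes[:-1], sizes[1:])
        ]
        self.biases = [rng.standard_normal((y, 1)) for y in sizes[1:]]

    def feedforward(self, a):
        # Apply each layer's affine map followed by the sigmoid.
        for w, b in zip(self.weights, self.biases):
            a = sigmoid(w @ a + b)
        return a


net = TinyNet([784, 30, 10])
out = net.feedforward(np.zeros((784, 1)))
print(out.shape)  # (10, 1)
```

Each component here (cost function, activation, regularisation, architecture) can then be swapped or improved independently, which is the decoupling the post describes.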