From 147a19e84a743f1379f05bf2f444143b4afd7bd6 Mon Sep 17 00:00:00 2001
From: Yuchen Pei
Date: Fri, 18 Jun 2021 12:58:44 +1000
Subject: Updated.

---
 microposts/math-writing-decoupling.org | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+)
 create mode 100644 microposts/math-writing-decoupling.org

diff --git a/microposts/math-writing-decoupling.org b/microposts/math-writing-decoupling.org
new file mode 100644
index 0000000..3ccb9d1
--- /dev/null
+++ b/microposts/math-writing-decoupling.org
@@ -0,0 +1,26 @@
+#+title: math-writing-decoupling
+
+#+date: <2018-05-10>
+
+One way to write readable mathematics is to decouple concepts. One
+idea is the following template: first write a toy example with all
+the important components present, then analyse each component
+individually, elaborating how (perhaps more complex) variations of
+that component can extend the toy example and induce more complex or
+powerful versions of it. Through such incremental development, one
+should be able to arrive at results in cutting-edge research after a
+pleasant journey.
+
+It's a bit like the UNIX philosophy, where you have a basic system of
+modules like IO, memory management, graphics, etc., and modify or
+improve each module individually (H/t [[http://nand2tetris.org/][NAND2Tetris]]).
+
+The book [[http://neuralnetworksanddeeplearning.com/][Neural Networks
+and Deep Learning]] by Michael Nielsen is an example of such an
+approach. It begins the journey with a very simple neural net with
+one hidden layer, no regularisation, and sigmoid activations. It then
+analyses each component individually, including the cost function,
+the backpropagation algorithm, the activation functions,
+regularisation, and the overall architecture (from fully connected to
+CNN), improving the toy example incrementally. Over the course of the
+book, the accuracy of the example on MNIST grows from 95.42% to
+99.67%.
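+
+As a concrete illustration, here is a minimal sketch of the kind of
+toy network the book starts from: one hidden layer, sigmoid
+activations, no regularisation. This is not the book's actual code;
+the class name and layer sizes are made up for illustration, and it
+assumes NumPy is available.
+
+#+begin_src python
+import numpy as np
+
+def sigmoid(z):
+    return 1.0 / (1.0 + np.exp(-z))
+
+class ToyNet:
+    """A tiny fully connected net with sigmoid activations throughout
+    and no regularisation, e.g. sizes = (784, 30, 10) for MNIST."""
+
+    def __init__(self, sizes=(784, 30, 10)):
+        # One weight matrix and one bias vector per layer transition.
+        self.weights = [np.random.randn(m, n)
+                        for n, m in zip(sizes[:-1], sizes[1:])]
+        self.biases = [np.random.randn(m, 1) for m in sizes[1:]]
+
+    def feedforward(self, a):
+        # Propagate a column vector through each layer in turn.
+        for w, b in zip(self.weights, self.biases):
+            a = sigmoid(w @ a + b)
+        return a
+
+net = ToyNet()
+x = np.random.rand(784, 1)       # a stand-in for an MNIST image
+print(net.feedforward(x).shape)  # (10, 1): one score per digit
+#+end_src
+
+Each later chapter of the book can then be read as swapping out one
+piece of such a sketch (the cost, the activations, the architecture)
+while everything else stays fixed.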