author    Yuchen Pei <me@ypei.me>    2018-05-10 10:10:57 +0200
committer Yuchen Pei <me@ypei.me>    2018-05-10 10:10:57 +0200
commit    0e8e006ea3f511a8e2e86e0f38ac9b1ab9847b8c (patch)
tree      273551cb5051ff7543ba3df2d9d4e249cec6cf93 /microposts
parent    3419d521d3301fc66ec0eb61fb000a16fcfdb4b6 (diff)
Diffstat (limited to 'microposts')
-rw-r--r--    microposts/math-writing-decoupling.md    47
1 file changed, 47 insertions(+), 0 deletions(-)
diff --git a/microposts/math-writing-decoupling.md b/microposts/math-writing-decoupling.md
new file mode 100644
index 0000000..6a7b438
--- /dev/null
+++ b/microposts/math-writing-decoupling.md
@@ -0,0 +1,47 @@
+---
+2018-05-10
+---
+### Writing readable mathematics like writing an operating system
+
+One way to write readable mathematics is to decouple concepts. One possible template is the following: first write a toy example with all the important components present, then analyse each component individually, elaborating on how (perhaps more complex) variations of the component can extend the toy example and yield more complex or powerful results. Through such incremental development, the reader should be able to arrive at any result in cutting-edge research after a pleasant journey.
+
+It's a bit like the UNIX philosophy, where you have a basic system of modules like IO, memory management, graphics etc., and you modify or improve each module individually (H/t [NAND2Tetris](http://nand2tetris.org/)).
+
+The book [Neural Networks and Deep Learning](http://neuralnetworksanddeeplearning.com/) by Michael Nielsen is an example of this approach. It begins the journey with a very simple neural net with one hidden layer, no regularisation and sigmoid activations. It then analyses each component individually, including the cost function, the backpropagation algorithm, the activation functions, regularisation and the overall architecture (from fully connected to CNN), and improves the toy example incrementally. Over the course of the book, the accuracy on MNIST grows from 95.42% to 99.63%.
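+
+To make the template concrete: below is a minimal sketch of such a starting point, a one-hidden-layer sigmoid network with quadratic cost trained by plain SGD. This is not the book's actual code; the `ToyNet` class and its hyperparameters are only illustrative.
+
+```python
+# A minimal sketch (not Nielsen's code): one hidden layer, sigmoid
+# activations, quadratic cost, plain SGD. Each component below (cost,
+# activation, initialisation, architecture) is a candidate for the
+# incremental improvements described above.
+import numpy as np
+
+def sigmoid(z):
+    return 1.0 / (1.0 + np.exp(-z))
+
+class ToyNet:
+    def __init__(self, n_in=784, n_hidden=30, n_out=10):
+        # naive Gaussian initialisation, a choice that later chapters improve on
+        self.W1 = np.random.randn(n_hidden, n_in)
+        self.b1 = np.random.randn(n_hidden, 1)
+        self.W2 = np.random.randn(n_out, n_hidden)
+        self.b2 = np.random.randn(n_out, 1)
+
+    def forward(self, x):
+        a1 = sigmoid(self.W1 @ x + self.b1)
+        return sigmoid(self.W2 @ a1 + self.b2), a1
+
+    def sgd_step(self, x, y, lr=3.0):
+        # backpropagation for the quadratic cost C = |a - y|^2 / 2
+        a2, a1 = self.forward(x)
+        d2 = (a2 - y) * a2 * (1 - a2)          # output-layer error
+        d1 = (self.W2.T @ d2) * a1 * (1 - a1)  # hidden-layer error
+        self.W2 -= lr * d2 @ a1.T
+        self.b2 -= lr * d2
+        self.W1 -= lr * d1 @ x.T
+        self.b1 -= lr * d1
+```
+
+Swapping the quadratic cost for cross-entropy, for instance, only changes the output-layer error to `a2 - y`, exactly the kind of local, decoupled improvement the template calls for.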