Diffstat (limited to 'microposts/math-writing-decoupling.org')
 microposts/math-writing-decoupling.org | 26 ++++++++++++++++++++++++++
 1 file changed, 26 insertions(+), 0 deletions(-)
diff --git a/microposts/math-writing-decoupling.org b/microposts/math-writing-decoupling.org
new file mode 100644
index 0000000..3ccb9d1
--- /dev/null
+++ b/microposts/math-writing-decoupling.org
@@ -0,0 +1,26 @@
+#+title: math-writing-decoupling
+
+#+date: <2018-05-10>
+
+One way to write readable mathematics is to decouple concepts. One
+possible template: first write a toy example in which all the
+important components are present, then analyse each component
+individually and show how (perhaps more complex) variations of that
+component extend the toy example into more complex or powerful
+versions. Through such incremental development, one should be able to
+arrive at any cutting-edge research result after a pleasant journey.
+
+It's a bit like the UNIX philosophy, where you have a basic system of
+modules (IO, memory management, graphics, etc.) and modify or improve
+each module individually (H/t [[http://nand2tetris.org/][NAND2Tetris]]).
+
+The book [[http://neuralnetworksanddeeplearning.com/][Neural Networks
+and Deep Learning]] by Michael Nielsen is an example of such an
+approach. It begins the journey with a very simple neural net with one
+hidden layer, no regularisation, and sigmoid activations. It then
+analyses each component individually, including the cost function, the
+backpropagation algorithm, the activation functions, regularisation,
+and the overall architecture (from fully connected to CNN), and
+improves the toy example incrementally. Over the course of the book,
+the accuracy on MNIST grows from 95.42% to 99.67%.
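+
+As a sketch of what such a toy example might look like in code (this is
+not code from the book; the XOR data, layer sizes, learning rate and
+epoch count below are illustrative choices): a one-hidden-layer sigmoid
+network with a quadratic cost, trained by backpropagation and plain
+gradient descent.
+
+#+begin_src python
+import numpy as np
+
+def sigmoid(z):
+    return 1.0 / (1.0 + np.exp(-z))
+
+def sigmoid_prime(z):
+    s = sigmoid(z)
+    return s * (1.0 - s)
+
+rng = np.random.default_rng(0)
+
+# Toy data: XOR, 2 inputs -> 1 output.
+X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
+y = np.array([[0], [1], [1], [0]], dtype=float)
+
+# One hidden layer of 4 sigmoid units, no regularisation.
+W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)
+W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)
+
+lr = 2.0
+for epoch in range(10000):
+    # Forward pass.
+    z1 = X @ W1 + b1; a1 = sigmoid(z1)
+    z2 = a1 @ W2 + b2; a2 = sigmoid(z2)
+
+    # Backward pass for the quadratic cost C = 1/2 * ||a2 - y||^2.
+    delta2 = (a2 - y) * sigmoid_prime(z2)
+    delta1 = (delta2 @ W2.T) * sigmoid_prime(z1)
+
+    # Plain gradient-descent update, averaged over the batch.
+    W2 -= lr * a1.T @ delta2 / len(X); b2 -= lr * delta2.mean(axis=0)
+    W1 -= lr * X.T @ delta1 / len(X);  b1 -= lr * delta1.mean(axis=0)
+
+# Outputs should approach [[0], [1], [1], [0]]; exact progress depends
+# on the random initialisation.
+print(np.round(a2, 2))
+#+end_src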