author    Yuchen Pei <me@ypei.me>    2018-05-08 22:20:41 +0200
committer Yuchen Pei <me@ypei.me>    2018-05-08 22:20:41 +0200
commit    9b15030d9e410a94382616334bfce3db302ec76a (patch)
tree      fb24e19e6adfc8a17e89442b414eac82a39d21b9 /microposts
parent    a390726282c269eff3ef2f7f56141d924c71124c (diff)
added an mpost
Diffstat (limited to 'microposts')
-rw-r--r--  microposts/neural-nets-regularization.md | 8
1 file changed, 8 insertions(+), 0 deletions(-)
diff --git a/microposts/neural-nets-regularization.md b/microposts/neural-nets-regularization.md
new file mode 100644
index 0000000..9f2866d
--- /dev/null
+++ b/microposts/neural-nets-regularization.md
@@ -0,0 +1,8 @@
+---
+date: 2018-05-08
+---
+> no-one has yet developed an entirely convincing theoretical explanation for why regularization helps networks generalize. Indeed, researchers continue to write papers where they try different approaches to regularization, compare them to see which works better, and attempt to understand why different approaches work better or worse. And so you can view regularization as something of a kludge. While it often helps, we don't have an entirely satisfactory systematic understanding of what's going on, merely incomplete heuristics and rules of thumb.
+>
+> There's a deeper set of issues here, issues which go to the heart of science. It's the question of how we generalize. Regularization may give us a computational magic wand that helps our networks generalize better, but it doesn't give us a principled understanding of how generalization works, nor of what the best approach is.
+
+Michael Nielsen, [Neural networks and deep learning](http://neuralnetworksanddeeplearning.com/chap3.html#why_does_regularization_help_reduce_overfitting)