From 147a19e84a743f1379f05bf2f444143b4afd7bd6 Mon Sep 17 00:00:00 2001
From: Yuchen Pei
Date: Fri, 18 Jun 2021 12:58:44 +1000
Subject: Updated.

---
 microposts/neural-nets-regularization.org | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)
 create mode 100644 microposts/neural-nets-regularization.org

diff --git a/microposts/neural-nets-regularization.org b/microposts/neural-nets-regularization.org
new file mode 100644
index 0000000..f92feb6
--- /dev/null
+++ b/microposts/neural-nets-regularization.org
@@ -0,0 +1,25 @@
+#+title: neural-nets-regularization
+
+#+date: <2018-05-08>
+
+#+begin_quote
+  no-one has yet developed an entirely convincing theoretical
+  explanation for why regularization helps networks generalize. Indeed,
+  researchers continue to write papers where they try different
+  approaches to regularization, compare them to see which works better,
+  and attempt to understand why different approaches work better or
+  worse. And so you can view regularization as something of a kludge.
+  While it often helps, we don't have an entirely satisfactory
+  systematic understanding of what's going on, merely incomplete
+  heuristics and rules of thumb.
+
+  There's a deeper set of issues here, issues which go to the heart of
+  science. It's the question of how we generalize. Regularization may
+  give us a computational magic wand that helps our networks generalize
+  better, but it doesn't give us a principled understanding of how
+  generalization works, nor of what the best approach is.
+#+end_quote
+
+Michael Nielsen,
+[[http://neuralnetworksanddeeplearning.com/chap3.html#why_does_regularization_help_reduce_overfitting][Neural
+networks and deep learning]]
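For context, a concrete instance of the "computational magic wand" the
quote refers to: the linked chapter introduces L2 regularization
(weight decay), where the cost becomes C = C_0 + (λ/2n) Σ_w w², so each
gradient-descent step rescales every weight by (1 − ηλ/n) before the
usual update. Below is a minimal NumPy sketch of that cost and update;
the function and variable names are illustrative, not the network2.py
API from the book.

#+begin_src python
import numpy as np

def l2_cost(base_cost, weights, lam, n):
    """C = C0 + (lam / 2n) * sum of squared weights (Nielsen, ch. 3)."""
    return base_cost + (lam / (2 * n)) * sum(np.sum(w ** 2) for w in weights)

def l2_weight_update(w, grad_c0, eta, lam, n):
    """One gradient-descent step: w -> (1 - eta*lam/n) * w - eta * dC0/dw.
    The (1 - eta*lam/n) factor is the "weight decay" that nudges the
    network toward small weights."""
    return (1 - eta * lam / n) * w - eta * grad_c0

# Toy usage: one decayed update on a random weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3))
grad = rng.normal(size=(3, 3))
w_new = l2_weight_update(w, grad, eta=0.5, lam=5.0, n=50000)
print(np.abs(w).mean(), np.abs(w_new).mean())
#+end_src

The decay factor biases learning toward small-weight, "simpler"
hypotheses, which is the usual heuristic story for why it generalizes
better; as the quote stresses, that story remains a rule of thumb
rather than a principled theory.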