Diffstat (limited to 'microposts/neural-nets-regularization.org')
 microposts/neural-nets-regularization.org | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+), 0 deletions(-)
diff --git a/microposts/neural-nets-regularization.org b/microposts/neural-nets-regularization.org
new file mode 100644
index 0000000..f92feb6
--- /dev/null
+++ b/microposts/neural-nets-regularization.org
@@ -0,0 +1,25 @@
+#+title: neural-nets-regularization
+
+#+date: <2018-05-08>
+
+#+begin_quote
+  no-one has yet developed an entirely convincing theoretical
+  explanation for why regularization helps networks generalize. Indeed,
+  researchers continue to write papers where they try different
+  approaches to regularization, compare them to see which works better,
+  and attempt to understand why different approaches work better or
+  worse. And so you can view regularization as something of a kludge.
+  While it often helps, we don't have an entirely satisfactory
+  systematic understanding of what's going on, merely incomplete
+  heuristics and rules of thumb.
+
+  There's a deeper set of issues here, issues which go to the heart of
+  science. It's the question of how we generalize. Regularization may
+  give us a computational magic wand that helps our networks generalize
+  better, but it doesn't give us a principled understanding of how
+  generalization works, nor of what the best approach is.
+#+end_quote
+
+Michael Nielsen,
+[[http://neuralnetworksanddeeplearning.com/chap3.html#why_does_regularization_help_reduce_overfitting][Neural
+networks and deep learning]]
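The "regularization" the quoted passage refers to is, in the linked chapter, L2 weight decay: a penalty on the squared weights added to the loss, which shrinks weights toward zero at each gradient step. A minimal sketch in plain Python (the function names and the scalar-weight formulation are illustrative, not from the quoted source):

```python
# Sketch of L2 ("weight decay") regularization, the technique discussed
# in the quoted chapter. Names and hyperparameter values are illustrative.

def l2_regularized_loss(data_loss, weights, lam, n):
    """Unregularized loss plus the L2 penalty (lam / 2n) * sum(w^2)."""
    penalty = (lam / (2.0 * n)) * sum(w * w for w in weights)
    return data_loss + penalty

def l2_gradient_step(w, dloss_dw, lam, n, eta):
    """One gradient step on a single weight with weight decay applied:
    w <- (1 - eta * lam / n) * w - eta * dL/dw."""
    return (1.0 - eta * lam / n) * w - eta * dloss_dw
```

The `(1 - eta * lam / n)` factor is why the penalty is called "weight decay": each step multiplicatively shrinks the weight before applying the data gradient, which empirically improves generalization even though, as the quote notes, a fully principled explanation is still lacking.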