author    Yuchen Pei <me@ypei.me>    2018-05-15 13:57:56 +0200
committer Yuchen Pei <me@ypei.me>    2018-05-15 13:57:56 +0200
commit    433a7eb2d0ce356eaf4df1ea18d9c5fa633f3907 (patch)
tree      a172b100139bac6deea570167f6278f0fce1bb41
parent    a81cc6772f544ae974fb497c86b67ffff38ba136 (diff)
minor edit
-rw-r--r--  microposts/random-forests.md  2
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/microposts/random-forests.md b/microposts/random-forests.md
index de2757c..93bc704 100644
--- a/microposts/random-forests.md
+++ b/microposts/random-forests.md
@@ -9,6 +9,6 @@ date: 2018-05-15
1. The term "predictors" in statistical learning = "features" in machine learning.
2. The main idea of random forests, dropping predictors for individual trees and aggregating by majority vote or averaging, is the same as the idea of dropout in neural networks, where a proportion of neurons in the hidden layers are dropped temporarily during different minibatches of training, effectively averaging over an ensemble of subnetworks. Both tricks are used as regularisations, i.e. to reduce the variance. The only difference is that in random forests, all but roughly the square root of the total number of features are dropped, whereas the dropout ratio in neural networks is usually a half.
-By the way, here a comparison between statistical learning and machine learning from the slides of the Statistical Learning course:
+By the way, here's a comparison between statistical learning and machine learning from the slides of the Statistical Learning course:
<a href="../assets/resources/sl-vs-ml.png"><img src="../assets/resources/sl-vs-ml.png" alt="SL vs ML" style="width:38em" /></a>
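
For concreteness, here is a minimal sketch of the two regularisation tricks the post compares, assuming scikit-learn and PyTorch (neither is named in the post; the estimator count and layer sizes are made up for illustration):

```python
# Hedged sketch: contrasting feature dropping in random forests with dropout
# in neural networks. Assumes scikit-learn and PyTorch; not from the post.
from sklearn.ensemble import RandomForestClassifier
import torch.nn as nn

# Random forest: each tree's splits consider only sqrt(n_features) candidate
# predictors, i.e. all but a square-root-sized subset of features are dropped.
rf = RandomForestClassifier(n_estimators=100, max_features="sqrt")

# Neural network: dropout temporarily zeroes roughly half the hidden units on
# each minibatch, effectively averaging over an ensemble of subnetworks.
mlp = nn.Sequential(
    nn.Linear(64, 128),
    nn.ReLU(),
    nn.Dropout(p=0.5),  # the usual dropout ratio of one half
    nn.Linear(128, 2),
)
```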