#+title: random-forests

#+date: <2018-05-15>

[[https://lagunita.stanford.edu/courses/HumanitiesSciences/StatLearning/Winter2016/info][Stanford
Lagunita's statistical learning course]] has some excellent lectures on
random forests. It starts with explanations of decision trees, followed
by bagged trees and random forests, and ends with boosting. From these
lectures it seems that:

1. The term "predictors" in statistical learning = "features" in machine
   learning.
2. The main idea of random forests, dropping a random subset of
   predictors for each individual tree and aggregating by majority vote
   or averaging, is the same as the idea of dropout in neural networks,
   where a proportion of neurons in the hidden layers is dropped
   temporarily during different minibatches of training, effectively
   averaging over an ensemble of subnetworks. Both tricks are used as
   regularisations, i.e. to reduce the variance. The only difference is
   that in random forests all but roughly the square root of the total
   number of predictors are dropped at each split, whereas the dropout
   ratio in neural networks is usually a half. A sketch of the two
   tricks side by side follows this list.
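
Here is a minimal sketch of the two tricks side by side, using
scikit-learn and PyTorch (neither library is part of the course); the
dataset, number of trees and layer sizes are made up for illustration.

#+begin_src python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
import torch.nn as nn

# A toy classification problem with 16 predictors.
X, y = make_classification(n_samples=200, n_features=16, random_state=0)

# Random forest: each split considers a random subset of sqrt(16) = 4
# predictors; the 500 trees are aggregated by majority vote.
forest = RandomForestClassifier(n_estimators=500, max_features="sqrt",
                                random_state=0)
forest.fit(X, y)

# Dropout: during training each hidden unit is dropped with probability
# 0.5, effectively averaging over an ensemble of subnetworks.
net = nn.Sequential(
    nn.Linear(16, 64),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(64, 2),
)
#+end_src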

By the way, here's a comparison between statistical learning and machine
learning from the slides of the Statistical Learning course: