From 433a7eb2d0ce356eaf4df1ea18d9c5fa633f3907 Mon Sep 17 00:00:00 2001
From: Yuchen Pei
Date: Tue, 15 May 2018 13:57:56 +0200
Subject: minor edit

---
 microposts/random-forests.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/microposts/random-forests.md b/microposts/random-forests.md
index de2757c..93bc704 100644
--- a/microposts/random-forests.md
+++ b/microposts/random-forests.md
@@ -9,6 +9,6 @@ date: 2018-05-15
 1. The term "predictors" in statistical learning = "features" in machine learning.
 2. The main idea of random forests, dropping predictors for individual trees and aggregating by majority vote or averaging, is the same as the idea of dropout in neural networks, where a proportion of neurons in the hidden layers is dropped temporarily during different minibatches of training, effectively averaging over an ensemble of subnetworks. Both tricks are used for regularisation, i.e. to reduce the variance. The only difference is that in random forests, all but roughly the square root of the total number of features are dropped for each tree, whereas the dropout ratio in neural networks is usually a half.
 
-By the way, here a comparison between statistical learning and machine learning from the slides of the Statistcal Learning course:
+By the way, here's a comparison between statistical learning and machine learning from the slides of the Statistical Learning course:
 
 SL vs ML
-- 
cgit v1.2.3
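
As an aside to the micropost being patched: below is a minimal NumPy sketch of the two "dropping" tricks it compares, per-tree feature subsampling with the square-root rule versus dropout at rate one half. The seed, feature count, and layer width are illustrative assumptions, not taken from the post.

```python
import numpy as np

rng = np.random.default_rng(0)  # illustrative seed

# --- Random forests: each tree sees only ~sqrt(p) of the p predictors. ---
p = 16                               # total number of predictors (assumed)
n_trees = 5                          # assumed forest size
k = int(np.sqrt(p))                  # features kept per tree (square-root rule)
for t in range(n_trees):
    kept = rng.choice(p, size=k, replace=False)
    print(f"tree {t}: trains on features {sorted(kept)}")
# At prediction time the trees are aggregated by majority vote
# (classification) or by averaging (regression).

# --- Dropout: each minibatch drops a random half of the hidden units. ---
n_hidden = 8                         # assumed hidden-layer width
drop_rate = 0.5                      # the "usually a half" from the post
h = rng.standard_normal(n_hidden)    # activations of one hidden layer
mask = rng.random(n_hidden) >= drop_rate
h_dropped = h * mask / (1.0 - drop_rate)  # inverted dropout: rescale survivors
print("dropout mask:", mask.astype(int))
# Averaged over many minibatches, this approximates an ensemble of
# subnetworks, analogous to averaging over the trees above.
```

The rescaling by 1/(1 - drop_rate) is the common "inverted dropout" convention, which keeps the expected activation unchanged so no adjustment is needed at test time; it is one standard choice, not something the micropost specifies.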