Chapter 6: Exercise 4
========================================================
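For reference (this formula is not part of the original answer; it assumes the ridge-penalized least squares criterion given in the exercise statement), the coefficient estimates minimize

$$
\sum_{i=1}^{n}\left(y_i - \beta_0 - \sum_{j=1}^{p}\beta_j x_{ij}\right)^2 + \lambda\sum_{j=1}^{p}\beta_j^2,
$$

so $\lambda = 0$ recovers ordinary least squares and increasing $\lambda$ shrinks the $\beta_j$ toward $0$.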
### a
(iii) Steadily increase: As $\lambda$ increases from $0$, the $\beta_j$ shrink from their least squares estimates toward $0$. The least squares (OLS) coefficients minimize training RSS by construction, so any shrinkage away from them can only raise it; training RSS therefore increases steadily as the $\beta_j$ are pushed toward $0$ (see the simulation sketch after part (b)).
### b
(ii) Decrease initially, and then eventually start increasing in a U shape: When $\lambda = 0$, the $\beta_j$ take their least squares values, the model fits the training data as closely as possible, and any overfitting carries over to a high test RSS. As $\lambda$ increases, the $\beta_j$ shrink toward zero and some of that overfitting is removed, so test RSS initially decreases. Eventually, as the $\beta_j$ approach $0$, the model becomes too simple and test RSS starts increasing again.
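A small simulation (not part of the original answer; the data and settings below are made up and it assumes the `glmnet` package is installed) illustrates parts (a) and (b): with $p$ close to $n$ the least squares fit overfits, training RSS rises monotonically in $\lambda$, and test RSS traces a rough U shape.

```{r}
# Illustrative only: track training and test RSS for ridge regression
# (alpha = 0 in glmnet) over a grid of lambda values.
library(glmnet)
set.seed(1)
n <- 50; p <- 45                              # p close to n, so OLS overfits
beta.true <- rnorm(p, sd = 0.5)
x <- matrix(rnorm(n * p), n, p)
y <- drop(x %*% beta.true) + rnorm(n, sd = 2)
x.test <- matrix(rnorm(n * p), n, p)
y.test <- drop(x.test %*% beta.true) + rnorm(n, sd = 2)

lambdas <- 10^seq(3, -3, length = 100)
fit <- glmnet(x, y, alpha = 0, lambda = lambdas)
train.rss <- colSums((y - predict(fit, x))^2)
test.rss  <- colSums((y.test - predict(fit, x.test))^2)

plot(log10(fit$lambda), train.rss, type = "l", col = "blue",
     ylim = range(train.rss, test.rss),
     xlab = "log10(lambda)", ylab = "RSS")
lines(log10(fit$lambda), test.rss, col = "red")
legend("bottomright", legend = c("train", "test"), col = c("blue", "red"), lty = 1)
```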
### c
(iv) Steadily decreases: When $\lambda = 0$, the $\beta_j$ take their least squares values, which depend heavily on the particular training sample, so variance is high. As $\lambda$ increases, the $\beta_j$ shrink and the model becomes simpler, reducing variance. In the limit as $\lambda \to \infty$, all $\beta_j$ are driven to zero, the model predicts a constant, and the variance vanishes (see the sketch after part (d)).
### d
(iii) Steadily increases: When $\lambda = 0$, the $\beta_j$ take their least squares values, which have the least bias. As $\lambda$ increases, the $\beta_j$ shrink toward zero and the model fits the training data less closely, so bias increases. In the limit as $\lambda \to \infty$, the model predicts a constant and the bias is at its maximum.
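A rough companion sketch for parts (c) and (d) (again illustrative, with made-up settings, and assuming `glmnet` is available): refit the ridge model on many simulated training sets, record the prediction at one fixed test point, and compute its variance and squared bias at each $\lambda$. Plotting these against $\log(\lambda)$ typically shows the variance falling and the squared bias rising as $\lambda$ grows.

```{r}
# Illustrative only: Monte Carlo estimate of prediction variance and squared
# bias at a fixed test point, as a function of the ridge penalty lambda.
library(glmnet)
set.seed(2)
n <- 50; p <- 45
beta.true <- rnorm(p, sd = 0.5)
x0 <- rnorm(p)                         # a fixed test point
f0 <- sum(x0 * beta.true)              # its true mean response
lambdas <- 10^seq(3, -3, length = 50)

preds <- replicate(200, {
  x <- matrix(rnorm(n * p), n, p)
  y <- drop(x %*% beta.true) + rnorm(n, sd = 2)
  fit <- glmnet(x, y, alpha = 0, lambda = lambdas)
  drop(predict(fit, matrix(x0, nrow = 1)))   # one prediction per lambda
})                                           # preds: nlambda x 200 matrix

pred.var <- apply(preds, 1, var)             # variance of the prediction at each lambda
bias.sq  <- (rowMeans(preds) - f0)^2         # squared bias at each lambda
```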
### e
(v) Remains constant: By definition, the irreducible error does not depend on the model, so it remains constant regardless of the choice of $\lambda$.