% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/linear_reg_brulee.R
\name{details_linear_reg_brulee}
\alias{details_linear_reg_brulee}
\title{Linear regression via brulee}
\description{
\code{\link[brulee:brulee_linear_reg]{brulee::brulee_linear_reg()}} uses ordinary least squares to fit models with
numeric outcomes.
}
\details{
For this engine, there is a single mode: regression
\subsection{Tuning Parameters}{
This model has 2 tuning parameters:
\itemize{
\item \code{penalty}: Amount of Regularization (type: double, default: 0.001)
\item \code{mixture}: Proportion of Lasso Penalty (type: double, default: 0.0)
}
The use of the L1 penalty (a.k.a. the lasso penalty) does \emph{not} force
parameters to be strictly zero (as it does in packages such as glmnet).
The zeroing out of parameters is a specific feature of the optimization
method used in those packages.
Other engine arguments of interest (see the sketch after this list):
\itemize{
\item \code{optimizer()}: The optimization method. See
\code{\link[brulee:brulee_linear_reg]{brulee::brulee_linear_reg()}}.
\item \code{epochs()}: An integer for the number of passes through the training
set.
\item \code{learn_rate()}: A number used to accelerate the gradient descent
process.
\item \code{momentum()}: A number used to incorporate historical gradient
information during optimization (\code{optimizer = "SGD"} only).
\item \code{batch_size()}: An integer for the number of training set points in
each batch.
\item \code{stop_iter()}: A non-negative integer for how many iterations with no
improvement before stopping (default: 5L).
}
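A minimal sketch of passing these engine arguments through
\code{set_engine()} (the argument values shown are illustrative
assumptions, not recommended settings):
\if{html}{\out{<div class="sourceCode r">}}\preformatted{library(parsnip)

# penalty and mixture are main arguments; the remaining arguments are
# passed on to brulee::brulee_linear_reg() through set_engine()
linear_reg(penalty = 0.001, mixture = 0) \%>\%
  set_engine("brulee",
             optimizer = "SGD",  # assumed choice of optimizer
             learn_rate = 0.01,  # assumed value, for illustration
             epochs = 100L,
             momentum = 0.9,
             stop_iter = 5L) \%>\%
  translate()
}\if{html}{\out{</div>}}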
}
\subsection{Translation from parsnip to the original package (regression)}{
\if{html}{\out{<div class="sourceCode r">}}\preformatted{linear_reg(penalty = double(1)) \%>\%
set_engine("brulee") \%>\%
translate()
}\if{html}{\out{</div>}}
\if{html}{\out{<div class="sourceCode">}}\preformatted{## Linear Regression Model Specification (regression)
##
## Main Arguments:
## penalty = double(1)
##
## Computational engine: brulee
##
## Model fit template:
## brulee::brulee_linear_reg(x = missing_arg(), y = missing_arg(),
## penalty = double(1))
}\if{html}{\out{</div>}}
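As a usage sketch, a specification like this can be fit directly with
the formula interface (this assumes the brulee and torch packages are
installed; the data set and penalty value are illustrative only):
\if{html}{\out{<div class="sourceCode r">}}\preformatted{library(parsnip)

brulee_fit <-
  linear_reg(penalty = 0.01) \%>\%
  set_engine("brulee") \%>\%
  fit(mpg ~ ., data = mtcars)

predict(brulee_fit, new_data = mtcars[1:5, ])
}\if{html}{\out{</div>}}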
}
\subsection{Preprocessing requirements}{
Factor/categorical predictors need to be converted to numeric values
(e.g., dummy or indicator variables) for this engine. When using the
formula method via \code{\link[=fit.model_spec]{fit()}}, parsnip will
convert factor columns to indicators.
Predictors should have the same scale. One way to achieve this is to
center and scale each predictor so that it has mean zero and a
variance of one.
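A minimal sketch of these preprocessing steps using a recipes-based
workflow (the recipe steps and data set are illustrative assumptions,
not requirements of the engine):
\if{html}{\out{<div class="sourceCode r">}}\preformatted{library(parsnip)
library(recipes)
library(workflows)

rec <-
  recipe(mpg ~ ., data = mtcars) \%>\%
  # step_dummy(all_nominal_predictors()) would create indicator columns
  # here if the data contained factor predictors
  step_normalize(all_numeric_predictors())

workflow() \%>\%
  add_recipe(rec) \%>\%
  add_model(linear_reg(penalty = 0.001) \%>\% set_engine("brulee")) \%>\%
  fit(data = mtcars)
}\if{html}{\out{</div>}}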
}
\subsection{Case weights}{
The underlying model implementation does not allow for case weights.
}
\subsection{References}{
\itemize{
\item Kuhn, M, and K Johnson. 2013. \emph{Applied Predictive Modeling}. Springer.
}
}
}
\keyword{internal}