% Generated by roxygen2: do not edit by hand
% Please edit documentation in R/boost_tree_spark.R
\name{details_boost_tree_spark}
\alias{details_boost_tree_spark}
\title{Boosted trees via Spark}
\description{
\code{\link[sparklyr:ml_gradient_boosted_trees]{sparklyr::ml_gradient_boosted_trees()}} creates a series of decision trees
forming an ensemble. Each tree depends on the results of previous trees.
All trees in the ensemble are combined to produce a final prediction.
}
\details{
For this engine, there are two modes: classification and regression.
Note that multiclass classification is not yet supported; only binary
classification is available.
\subsection{Tuning Parameters}{
This model has 7 tuning parameters:
\itemize{
\item \code{tree_depth}: Tree Depth (type: integer, default: 5L)
\item \code{trees}: # Trees (type: integer, default: 20L)
\item \code{learn_rate}: Learning Rate (type: double, default: 0.1)
\item \code{mtry}: # Randomly Selected Predictors (type: integer, default: see
below)
\item \code{min_n}: Minimal Node Size (type: integer, default: 1L)
\item \code{loss_reduction}: Minimum Loss Reduction (type: double, default: 0.0)
\item \code{sample_size}: # Observations Sampled (type: double, default: 1.0)
}
The \code{mtry} parameter is related to the number of predictors. The default
depends on the model mode. For classification, the square root of the
number of predictors is used and for regression, one third of the
predictors are sampled.
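These parameters can be set directly in \code{boost_tree()} rather than
left at their engine defaults. A minimal sketch (the specific values are
illustrative, not recommendations):

\if{html}{\out{<div class="sourceCode r">}}\preformatted{library(parsnip)

# illustrative values only; tune these for your data
boost_tree(trees = 100, tree_depth = 4, learn_rate = 0.05, mtry = 3) \%>\%
  set_engine("spark") \%>\%
  set_mode("classification")
}\if{html}{\out{</div>}}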
}
\subsection{Translation from parsnip to the original package (regression)}{
\if{html}{\out{<div class="sourceCode r">}}\preformatted{boost_tree(
mtry = integer(), trees = integer(), min_n = integer(), tree_depth = integer(),
learn_rate = numeric(), loss_reduction = numeric(), sample_size = numeric()
) \%>\%
set_engine("spark") \%>\%
set_mode("regression") \%>\%
translate()
}\if{html}{\out{</div>}}
\if{html}{\out{<div class="sourceCode">}}\preformatted{## Boosted Tree Model Specification (regression)
##
## Main Arguments:
## mtry = integer()
## trees = integer()
## min_n = integer()
## tree_depth = integer()
## learn_rate = numeric()
## loss_reduction = numeric()
## sample_size = numeric()
##
## Computational engine: spark
##
## Model fit template:
## sparklyr::ml_gradient_boosted_trees(x = missing_arg(), formula = missing_arg(),
## type = "regression", feature_subset_strategy = integer(),
## max_iter = integer(), min_instances_per_node = min_rows(integer(0),
## x), max_depth = integer(), step_size = numeric(), min_info_gain = numeric(),
## subsampling_rate = numeric(), seed = sample.int(10^5, 1))
}\if{html}{\out{</div>}}
}
\subsection{Translation from parsnip to the original package (classification)}{
\if{html}{\out{<div class="sourceCode r">}}\preformatted{boost_tree(
mtry = integer(), trees = integer(), min_n = integer(), tree_depth = integer(),
learn_rate = numeric(), loss_reduction = numeric(), sample_size = numeric()
) \%>\%
set_engine("spark") \%>\%
set_mode("classification") \%>\%
translate()
}\if{html}{\out{</div>}}
\if{html}{\out{<div class="sourceCode">}}\preformatted{## Boosted Tree Model Specification (classification)
##
## Main Arguments:
## mtry = integer()
## trees = integer()
## min_n = integer()
## tree_depth = integer()
## learn_rate = numeric()
## loss_reduction = numeric()
## sample_size = numeric()
##
## Computational engine: spark
##
## Model fit template:
## sparklyr::ml_gradient_boosted_trees(x = missing_arg(), formula = missing_arg(),
## type = "classification", feature_subset_strategy = integer(),
## max_iter = integer(), min_instances_per_node = min_rows(integer(0),
## x), max_depth = integer(), step_size = numeric(), min_info_gain = numeric(),
## subsampling_rate = numeric(), seed = sample.int(10^5, 1))
}\if{html}{\out{</div>}}
}
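As a usage sketch, assuming a local Spark installation, a connection
\code{sc}, and \code{mtcars} copied to Spark as \code{cars_tbl} (the
object names are hypothetical):

\if{html}{\out{<div class="sourceCode r">}}\preformatted{library(sparklyr)
library(parsnip)

sc <- spark_connect(master = "local")  # assumes Spark is installed locally
cars_tbl <- copy_to(sc, mtcars)        # hypothetical example data

gbt_fit <- boost_tree(trees = 20) \%>\%
  set_engine("spark") \%>\%
  set_mode("regression") \%>\%
  fit(mpg ~ ., data = cars_tbl)        # only the formula interface is supported

predict(gbt_fit, cars_tbl)             # predictions come back as a Spark table
}\if{html}{\out{</div>}}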
\subsection{Preprocessing requirements}{
This engine does not require any special encoding of the predictors.
Categorical predictors can be partitioned into groups of factor levels
(e.g. \verb{\{a, c\}} vs \verb{\{b, d\}}) when splitting at a node. Dummy variables
are not required for this model.
}
\subsection{Case weights}{
This model can utilize case weights during model fitting. To use them,
see the documentation in \link{case_weights} and the examples
on \code{tidymodels.org}.
The \code{fit()} and \code{fit_xy()} functions have arguments called
\code{case_weights} that expect vectors of case weights.
Note that, for the \code{"spark"} engine, the \code{case_weights}
argument value should instead be a character string naming the column
that contains the numeric case weights.
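A minimal sketch, assuming a Spark table \code{cars_tbl} that already
contains a numeric weight column named \code{wts} (both names
hypothetical):

\if{html}{\out{<div class="sourceCode r">}}\preformatted{spec <- boost_tree() \%>\%
  set_engine("spark") \%>\%
  set_mode("regression")

# for the "spark" engine, pass the weight column's name as a string
fit(spec, mpg ~ cyl + disp, data = cars_tbl, case_weights = "wts")
}\if{html}{\out{</div>}}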
}
\subsection{Other details}{
For models created using the \code{"spark"} engine, there are several things
to consider.
\itemize{
\item Only the formula interface via \code{fit()} is available; using
\code{fit_xy()} will generate an error.
\item The predictions will always be in a Spark table format. The names will
be the same as documented but without the dots.
\item There is no equivalent to factor columns in Spark tables so class
predictions are returned as character columns.
\item To retain the model object for a new R session (via \code{save()}),
the \code{model$fit} element of the parsnip object should be serialized
via \code{ml_save(object$fit)} and separately saved to disk. In a new
session, the object can be reloaded and reattached to the parsnip
object (see the sketch after this list).
}
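A save-and-restore sketch, assuming a fitted parsnip object
\code{gbt_fit} and a hypothetical path \code{"gbt_model"}:

\if{html}{\out{<div class="sourceCode r">}}\preformatted{library(sparklyr)

# current session: save the Spark model and the parsnip wrapper separately
ml_save(gbt_fit$fit, path = "gbt_model")
save(gbt_fit, file = "gbt_fit.RData")

# new session: reconnect, reload both pieces, and reattach
sc <- spark_connect(master = "local")
load("gbt_fit.RData")
gbt_fit$fit <- ml_load(sc, path = "gbt_model")
}\if{html}{\out{</div>}}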
}
\subsection{References}{
\itemize{
\item Luraschi, J, K Kuo, and E Ruiz. 2019. \emph{Mastering Spark with R}.
O’Reilly Media.
\item Kuhn, M, and K Johnson. 2013. \emph{Applied Predictive Modeling}. Springer.
}
}
}
\keyword{internal}