---
layout: global
title: Basic Statistics - RDD-based API
displayTitle: Basic Statistics - RDD-based API
license: |
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements.  See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License.  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
---
* Table of contents
{:toc}
`\[
\newcommand{\R}{\mathbb{R}}
\newcommand{\E}{\mathbb{E}}
\newcommand{\x}{\mathbf{x}}
\newcommand{\y}{\mathbf{y}}
\newcommand{\wv}{\mathbf{w}}
\newcommand{\av}{\mathbf{\alpha}}
\newcommand{\bv}{\mathbf{b}}
\newcommand{\N}{\mathbb{N}}
\newcommand{\id}{\mathbf{I}}
\newcommand{\ind}{\mathbf{1}}
\newcommand{\0}{\mathbf{0}}
\newcommand{\unit}{\mathbf{e}}
\newcommand{\one}{\mathbf{1}}
\newcommand{\zero}{\mathbf{0}}
\]`
## Summary statistics

We provide column summary statistics for `RDD[Vector]` through the function `colStats`
available in `Statistics`.

`colStats()` returns an instance of `MultivariateStatisticalSummary`,
which contains the column-wise max, min, mean, variance, and number of nonzeros, as well as the
total count.

Refer to the `MultivariateStatisticalSummary` Scala docs for details on the API.
{% include_example scala/org/apache/spark/examples/mllib/SummaryStatisticsExample.scala %}
Refer to the `MultivariateStatisticalSummary` Java docs for details on the API.
{% include_example java/org/apache/spark/examples/mllib/JavaSummaryStatisticsExample.java %}
Refer to the `MultivariateStatisticalSummary` Python docs for more details on the API.
{% include_example python/mllib/summary_statistics_example.py %}
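For a quick feel of the call pattern, here is a minimal sketch; the `SparkContext` named `sc` and the sample vectors are assumptions for illustration, not part of the bundled examples above:

{% highlight scala %}
import org.apache.spark.mllib.linalg.Vectors
import org.apache.spark.mllib.stat.{MultivariateStatisticalSummary, Statistics}

// A small made-up dataset of dense vectors; `sc` is an existing SparkContext.
val observations = sc.parallelize(Seq(
  Vectors.dense(1.0, 10.0, 100.0),
  Vectors.dense(2.0, 20.0, 200.0),
  Vectors.dense(3.0, 30.0, 300.0)
))

// Compute column summary statistics.
val summary: MultivariateStatisticalSummary = Statistics.colStats(observations)
println(summary.mean)         // a dense vector containing the mean value for each column
println(summary.variance)     // column-wise variance
println(summary.numNonzeros)  // number of nonzeros in each column
{% endhighlight %}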
## Correlations

Calculating the correlation between two series of data is a common operation in statistics. In `spark.mllib`
we provide the flexibility to calculate pairwise correlations among many series. The supported
correlation methods are currently Pearson's and Spearman's correlation.
Refer to the `Statistics` Scala docs for details on the API.
{% include_example scala/org/apache/spark/examples/mllib/CorrelationsExample.scala %}
Refer to the `Statistics` Java docs for details on the API.
{% include_example java/org/apache/spark/examples/mllib/JavaCorrelationsExample.java %}
Refer to the `Statistics` Python docs for more details on the API.
{% include_example python/mllib/correlations_example.py %}
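As a minimal sketch of both call patterns (the `SparkContext` `sc` and the data values are assumptions for illustration), `Statistics.corr` accepts either two `RDD[Double]`s or an `RDD[Vector]`:

{% highlight scala %}
import org.apache.spark.mllib.linalg._
import org.apache.spark.mllib.stat.Statistics
import org.apache.spark.rdd.RDD

// Two series; they must have the same cardinality (and compatible partitioning).
val seriesX: RDD[Double] = sc.parallelize(Array(1.0, 2.0, 3.0, 3.0, 5.0))
val seriesY: RDD[Double] = sc.parallelize(Array(11.0, 22.0, 33.0, 33.0, 555.0))

// Pearson correlation between the two series; pass "spearman" for Spearman's.
val correlation: Double = Statistics.corr(seriesX, seriesY, "pearson")

// Pairwise correlations among the columns of an RDD[Vector],
// returned as a correlation Matrix.
val data: RDD[Vector] = sc.parallelize(Seq(
  Vectors.dense(1.0, 10.0, 100.0),
  Vectors.dense(2.0, 20.0, 200.0),
  Vectors.dense(5.0, 33.0, 366.0)
))
val correlMatrix: Matrix = Statistics.corr(data, "pearson")
{% endhighlight %}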
## Stratified sampling

Unlike the other statistics functions, which reside in `spark.mllib`, the stratified sampling methods
`sampleByKey` and `sampleByKeyExact` can be performed on RDDs of key-value pairs. For stratified
sampling, the keys can be thought of as a label and the value as a specific attribute. For example,
the key can be man or woman, or document ids, and the respective values can be the list of ages
of the people in the population or the list of words in the documents. The `sampleByKey` method
will flip a coin to decide whether an observation will be sampled or not, and therefore requires one
pass over the data and provides an *expected* sample size. `sampleByKeyExact` requires significantly
more resources than the per-stratum simple random sampling used in `sampleByKey`, but will provide
the exact sample size with 99.99% confidence.
{% include_example scala/org/apache/spark/examples/mllib/StratifiedSamplingExample.scala %}
{% include_example java/org/apache/spark/examples/mllib/JavaStratifiedSamplingExample.java %}
Note: `sampleByKeyExact()` is currently not supported in Python.
{% include_example python/mllib/stratified_sampling_example.py %}
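A minimal sketch of the two methods (the `SparkContext` `sc`, the keys, and the per-stratum fractions are made up for illustration):

{% highlight scala %}
// An RDD of key-value pairs; keys identify the strata.
val data = sc.parallelize(
  Seq((1, 'a'), (1, 'b'), (2, 'c'), (2, 'd'), (2, 'e'), (3, 'f')))

// Desired sampling fraction for each key (stratum).
val fractions = Map(1 -> 0.1, 2 -> 0.6, 3 -> 0.3)

// Approximate sample: a single pass, expected sample size per stratum.
val approxSample = data.sampleByKey(withReplacement = false, fractions)

// Exact sample: additional passes, exact sample size per stratum.
val exactSample = data.sampleByKeyExact(withReplacement = false, fractions)
{% endhighlight %}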
## Hypothesis testing

Hypothesis testing is a powerful tool in statistics to determine whether a result is statistically
significant, i.e. whether it occurred by chance or not. `spark.mllib` currently supports Pearson's
chi-squared ( $\chi^2$ ) tests for goodness of fit and independence. The input data types determine
whether the goodness of fit or the independence test is conducted. The goodness of fit test requires
an input type of `Vector`, whereas the independence test requires a `Matrix` as input.

`spark.mllib` also supports the input type `RDD[LabeledPoint]` to enable feature selection via chi-squared
independence tests.

Refer to the `ChiSqTestResult` Scala docs for details on the API.
{% include_example scala/org/apache/spark/examples/mllib/HypothesisTestingExample.scala %}
Refer to the `ChiSqTestResult` Java docs for details on the API.
{% include_example java/org/apache/spark/examples/mllib/JavaHypothesisTestingExample.java %}
Refer to the `Statistics` Python docs for more details on the API.
{% include_example python/mllib/hypothesis_testing_example.py %}
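A minimal sketch of how the input type selects the test (the observed frequencies and the contingency table are made-up values for illustration):

{% highlight scala %}
import org.apache.spark.mllib.linalg.{Matrices, Vectors}
import org.apache.spark.mllib.stat.Statistics

// Goodness-of-fit test on a vector of observed frequencies.
// Without a second argument, the expected distribution is assumed uniform.
val vec = Vectors.dense(0.1, 0.15, 0.2, 0.3, 0.25)
val goodnessOfFitTestResult = Statistics.chiSqTest(vec)
println(goodnessOfFitTestResult) // test statistic, degrees of freedom, p-value, ...

// Independence test on a contingency matrix (3 rows, 2 columns, column-major values).
val mat = Matrices.dense(3, 2, Array(1.0, 3.0, 5.0, 2.0, 4.0, 6.0))
val independenceTestResult = Statistics.chiSqTest(mat)
println(independenceTestResult)
{% endhighlight %}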
Additionally, `spark.mllib` provides a 1-sample, 2-sided implementation of the Kolmogorov-Smirnov (KS) test
for equality of probability distributions. By providing the name of a theoretical distribution
(currently only the normal distribution is supported) and its parameters, or a function to
calculate the cumulative distribution according to a given theoretical distribution, the user can
test the null hypothesis that their sample is drawn from that distribution. In the case that the
user tests against the normal distribution (`distName="norm"`) but does not provide distribution
parameters, the test initializes to the standard normal distribution and logs an appropriate
message.
Refer to the `Statistics` Scala docs for details on the API.
{% include_example scala/org/apache/spark/examples/mllib/HypothesisTestingKolmogorovSmirnovTestExample.scala %}
Refer to the `Statistics` Java docs for details on the API.
{% include_example java/org/apache/spark/examples/mllib/JavaHypothesisTestingKolmogorovSmirnovTestExample.java %}
Refer to the `Statistics` Python docs for more details on the API.
{% include_example python/mllib/hypothesis_testing_kolmogorov_smirnov_test_example.py %}
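A minimal sketch of the named-distribution form (the `SparkContext` `sc`, the sample values, and the mean/standard-deviation parameters are assumptions for illustration):

{% highlight scala %}
import org.apache.spark.mllib.stat.Statistics
import org.apache.spark.rdd.RDD

// A small made-up sample; `sc` is an existing SparkContext.
val data: RDD[Double] = sc.parallelize(Seq(0.1, 0.15, 0.2, 0.3, 0.25))

// Test against a normal distribution with mean 0.0 and standard deviation 1.0.
// Omitting the parameters after "norm" defaults to the standard normal.
val testResult = Statistics.kolmogorovSmirnovTest(data, "norm", 0.0, 1.0)
println(testResult) // test statistic, p-value, and null-hypothesis summary
{% endhighlight %}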
### Streaming Significance Testing

`spark.mllib` provides online implementations of some tests to support use cases
like A/B testing. These tests may be performed on a Spark Streaming
`DStream[(Boolean, Double)]` where the first element of each tuple
indicates control group (`false`) or treatment group (`true`) and the
second element is the value of an observation.
Streaming significance testing supports the following parameters (see the configuration sketch after the examples below):

* `peacePeriod` - The number of initial data points from the stream to ignore, used to mitigate novelty effects.
* `windowSize` - The number of past batches to perform hypothesis testing over. Setting to `0` will perform cumulative processing using all prior batches.
{% include_example scala/org/apache/spark/examples/mllib/StreamingTestExample.scala %}
{% include_example java/org/apache/spark/examples/mllib/JavaStreamingTestExample.java %}
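The following is a configuration sketch only, not a full streaming program: `data` stands for the input `DStream` described above, and the exact element type accepted by `registerStream` varies across Spark versions, so treat this as an assumption-laden outline rather than a complete example:

{% highlight scala %}
import org.apache.spark.mllib.stat.test.StreamingTest

// Configure the streaming test with the parameters described above.
val streamingTest = new StreamingTest()
  .setPeacePeriod(10)     // ignore the first 10 data points from the stream
  .setWindowSize(0)       // 0 = cumulative testing over all batches seen so far
  .setTestMethod("welch") // e.g. Welch's t-test

// Register the input stream and print the test results as they arrive.
val out = streamingTest.registerStream(data)
out.print()
{% endhighlight %}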
## Random data generation

Random data generation is useful for randomized algorithms, prototyping, and performance testing.
`spark.mllib` supports generating random RDDs with i.i.d. values drawn from a given distribution:
uniform, standard normal, or Poisson.

Refer to the `RandomRDDs` Scala docs for details on the API.
{% highlight scala %}
import org.apache.spark.SparkContext
import org.apache.spark.mllib.random.RandomRDDs._

val sc: SparkContext = ...

// Generate a random double RDD that contains 1 million i.i.d. values drawn from the
// standard normal distribution `N(0, 1)`, evenly distributed in 10 partitions.
val u = normalRDD(sc, 1000000L, 10)
// Apply a transform to get a random double RDD following `N(1, 4)`.
val v = u.map(x => 1.0 + 2.0 * x)
{% endhighlight %}
Refer to the `RandomRDDs` Java docs for details on the API.
{% highlight java %}
import org.apache.spark.api.java.JavaDoubleRDD;
import org.apache.spark.api.java.JavaSparkContext;
import static org.apache.spark.mllib.random.RandomRDDs.*;

JavaSparkContext jsc = ...

// Generate a random double RDD that contains 1 million i.i.d. values drawn from the
// standard normal distribution `N(0, 1)`, evenly distributed in 10 partitions.
JavaDoubleRDD u = normalJavaRDD(jsc, 1000000L, 10);
// Apply a transform to get a random double RDD following `N(1, 4)`.
JavaDoubleRDD v = u.mapToDouble(x -> 1.0 + 2.0 * x);
{% endhighlight %}
Refer to the `RandomRDDs` Python docs for more details on the API.
{% highlight python %}
from pyspark.mllib.random import RandomRDDs

sc = ...  # SparkContext

# Generate a random double RDD that contains 1 million i.i.d. values drawn from the
# standard normal distribution `N(0, 1)`, evenly distributed in 10 partitions.
u = RandomRDDs.normalRDD(sc, 1000000, 10)
# Apply a transform to get a random double RDD following `N(1, 4)`.
v = u.map(lambda x: 1.0 + 2.0 * x)
{% endhighlight %}
## Kernel density estimation

Kernel density estimation is a technique useful for visualizing empirical probability distributions
without requiring assumptions about the particular distribution that the observed samples are drawn
from. It computes an estimate of the probability density function of a random variable, evaluated at
a given set of points. It achieves this estimate by expressing the PDF of the empirical distribution
at a particular point as the mean of PDFs of normal distributions centered around each of the samples.
Refer to the `KernelDensity` Scala docs for details on the API.
{% include_example scala/org/apache/spark/examples/mllib/KernelDensityEstimationExample.scala %}
Refer to the `KernelDensity` Java docs for details on the API.
{% include_example java/org/apache/spark/examples/mllib/JavaKernelDensityEstimationExample.java %}
Refer to the `KernelDensity` Python docs for more details on the API.
{% include_example python/mllib/kernel_density_estimation_example.py %}
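A minimal sketch (the `SparkContext` `sc`, the sample values, the bandwidth, and the evaluation points are made up for illustration):

{% highlight scala %}
import org.apache.spark.mllib.stat.KernelDensity
import org.apache.spark.rdd.RDD

// A small made-up sample; `sc` is an existing SparkContext.
val data: RDD[Double] = sc.parallelize(Seq(1.0, 1.5, 3.0, 4.0, 5.0))

// Construct a density estimator from the sample, with a standard deviation
// (bandwidth) for the Gaussian kernels placed on each sample point.
val kd = new KernelDensity()
  .setSample(data)
  .setBandwidth(3.0)

// Evaluate the estimated density at the given points.
val densities = kd.estimate(Array(-1.0, 2.0, 5.0))
{% endhighlight %}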