[SPARK-20192][SPARKR][DOC] SparkR migration guide to 2.2.0
## What changes were proposed in this pull request?

Updating R Programming Guide

## How was this patch tested?

manually

Author: Felix Cheung <[email protected]>

Closes apache#17816 from felixcheung/r22relnote.
felixcheung authored and Felix Cheung committed May 2, 2017
1 parent 943a684 commit d20a976
8 changes: 8 additions & 0 deletions docs/sparkr.md
@@ -644,3 +644,11 @@ You can inspect the search path in R with [`search()`](https://stat.ethz.ch/R-ma
## Upgrading to SparkR 2.1.0

- `join` no longer performs Cartesian Product by default, use `crossJoin` instead.

## Upgrading to SparkR 2.2.0

- A `numPartitions` parameter has been added to `createDataFrame` and `as.DataFrame`. When splitting the data, the calculation of partition boundaries now matches the Scala implementation.
- The method `createExternalTable` has been deprecated in favor of `createTable`. Either method can be called to create an external or managed table. Additional catalog methods have also been added.
- By default, `derby.log` is now saved to `tempdir()`. It is created when the SparkSession is instantiated with `enableHiveSupport` set to `TRUE`.
- `spark.lda` was not setting the optimizer correctly. It has been corrected.
- Several model summary outputs have been updated to return `coefficients` as a `matrix`; this includes `spark.logit`, `spark.kmeans`, and `spark.glm`. The model summary output for `spark.gaussianMixture` now includes the log-likelihood as `loglik`.
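The `numPartitions` parameter and the `createTable` replacement can be sketched as follows. This is a minimal illustration, assuming a local Spark 2.2.0+ installation with `SPARK_HOME` set; the table name and `path` are hypothetical examples.

```r
library(SparkR)

# Start (or reuse) a SparkR session; requires a local Spark installation.
sparkR.session()

# numPartitions (added in 2.2.0) controls how the local data is split
# when it is distributed as a SparkDataFrame.
df <- createDataFrame(mtcars, numPartitions = 4)

# createTable replaces the deprecated createExternalTable. With a `path`
# argument it creates an external table; without one, a managed table.
# (The path below is illustrative.)
people <- createTable("people",
                      path = "examples/src/main/resources/people.json",
                      source = "json")
```

Both calls return a `SparkDataFrame`, so existing downstream code continues to work unchanged.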
