SPARK-1307 [DOCS] Don't use term 'standalone' to refer to a Spark Application

HT to Diana, just proposing an implementation of her suggestion, which I rather agreed with. Is there a second/third for the motion?

Refer to "self-contained" rather than "standalone" apps to avoid confusion with standalone deployment mode. And fix placement of reference to this in MLlib docs.

Author: Sean Owen <[email protected]>

Closes apache#2787 from srowen/SPARK-1307 and squashes the following commits:

b5b82e2 [Sean Owen] Refer to "self-contained" rather than "standalone" apps to avoid confusion with standalone deployment mode. And fix placement of reference to this in MLlib docs.
srowen authored and mengxr committed Oct 15, 2014
1 parent 66af8e2 commit 18ab6bd
Showing 5 changed files with 37 additions and 36 deletions.
docs/mllib-clustering.md (14 changes: 7 additions & 7 deletions)
@@ -69,7 +69,7 @@ println("Within Set Sum of Squared Errors = " + WSSSE)
All of MLlib's methods use Java-friendly types, so you can import and call them there the same
way you do in Scala. The only caveat is that the methods take Scala RDD objects, while the
Spark Java API uses a separate `JavaRDD` class. You can convert a Java RDD to a Scala one by
-calling `.rdd()` on your `JavaRDD` object. A standalone application example
+calling `.rdd()` on your `JavaRDD` object. A self-contained application example
that is equivalent to the provided example in Scala is given below:

{% highlight java %}
@@ -113,12 +113,6 @@ public class KMeansExample {
}
}
{% endhighlight %}

-In order to run the above standalone application, follow the instructions
-provided in the [Standalone
-Applications](quick-start.html#standalone-applications) section of the Spark
-quick-start guide. Be sure to also include *spark-mllib* to your build file as
-a dependency.
</div>

<div data-lang="python" markdown="1">
@@ -153,3 +147,9 @@ print("Within Set Sum of Squared Error = " + str(WSSSE))
</div>

</div>

+In order to run the above application, follow the instructions
+provided in the [Self-Contained Applications](quick-start.html#self-contained-applications)
+section of the Spark
+Quick Start guide. Be sure to also include *spark-mllib* to your build file as
+a dependency.
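
A note for readers of this diff: the one Java-specific wrinkle these docs keep repeating is that MLlib's methods take Scala `RDD`s while Java code holds `JavaRDD`s, with `JavaRDD.rdd()` as the bridge. Below is a minimal sketch of that conversion around k-means; the class name and toy vectors are illustrative, not part of the commit.

{% highlight java %}
// Illustrative only: a tiny k-means run showing the JavaRDD -> RDD bridge
// described in the hunk above. The class name and data are made up.
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.clustering.KMeans;
import org.apache.spark.mllib.clustering.KMeansModel;
import org.apache.spark.mllib.linalg.Vector;
import org.apache.spark.mllib.linalg.Vectors;

public class RddConversionSketch {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("RddConversionSketch");
    JavaSparkContext sc = new JavaSparkContext(conf);

    // Build a JavaRDD of feature vectors on the Java side.
    JavaRDD<Vector> points = sc.parallelize(Arrays.asList(
        Vectors.dense(0.0, 0.0), Vectors.dense(1.0, 1.0),
        Vectors.dense(9.0, 8.0), Vectors.dense(8.0, 9.0)));

    // KMeans.train takes a Scala RDD<Vector>, so convert with .rdd().
    KMeansModel model = KMeans.train(points.rdd(), 2, 20);
    for (Vector center : model.clusterCenters()) {
      System.out.println("Cluster center: " + center);
    }

    sc.stop();
  }
}
{% endhighlight %}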
docs/mllib-collaborative-filtering.md (14 changes: 7 additions & 7 deletions)
@@ -110,7 +110,7 @@ val model = ALS.trainImplicit(ratings, rank, numIterations, alpha)
All of MLlib's methods use Java-friendly types, so you can import and call them there the same
way you do in Scala. The only caveat is that the methods take Scala RDD objects, while the
Spark Java API uses a separate `JavaRDD` class. You can convert a Java RDD to a Scala one by
-calling `.rdd()` on your `JavaRDD` object. A standalone application example
+calling `.rdd()` on your `JavaRDD` object. A self-contained application example
that is equivalent to the provided example in Scala is given below:

{% highlight java %}
@@ -184,12 +184,6 @@ public class CollaborativeFiltering {
}
}
{% endhighlight %}

-In order to run the above standalone application, follow the instructions
-provided in the [Standalone
-Applications](quick-start.html#standalone-applications) section of the Spark
-quick-start guide. Be sure to also include *spark-mllib* to your build file as
-a dependency.
</div>

<div data-lang="python" markdown="1">
@@ -229,6 +223,12 @@ model = ALS.trainImplicit(ratings, rank, numIterations, alpha = 0.01)

</div>

+In order to run the above application, follow the instructions
+provided in the [Self-Contained Applications](quick-start.html#self-contained-applications)
+section of the Spark
+Quick Start guide. Be sure to also include *spark-mllib* to your build file as
+a dependency.

## Tutorial

The [training exercises](https://databricks-training.s3.amazonaws.com/index.html) from the Spark Summit 2014 include a hands-on tutorial for
docs/mllib-dimensionality-reduction.md (17 changes: 9 additions & 8 deletions)
@@ -121,9 +121,9 @@ public class SVD {
The same code applies to `IndexedRowMatrix` if `U` is defined as an
`IndexedRowMatrix`.

-In order to run the above standalone application, follow the instructions
-provided in the [Standalone
-Applications](quick-start.html#standalone-applications) section of the Spark
+In order to run the above application, follow the instructions
+provided in the [Self-Contained
+Applications](quick-start.html#self-contained-applications) section of the Spark
quick-start guide. Be sure to also include *spark-mllib* to your build file as
a dependency.

@@ -200,10 +200,11 @@ public class PCA {
}
{% endhighlight %}

-In order to run the above standalone application, follow the instructions
-provided in the [Standalone
-Applications](quick-start.html#standalone-applications) section of the Spark
-quick-start guide. Be sure to also include *spark-mllib* to your build file as
-a dependency.
</div>
</div>

+In order to run the above application, follow the instructions
+provided in the [Self-Contained Applications](quick-start.html#self-contained-applications)
+section of the Spark
+quick-start guide. Be sure to also include *spark-mllib* to your build file as
+a dependency.
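
Each of these relocated paragraphs ends with the same reminder: add *spark-mllib* to the build file. For the Maven track of the Quick Start that is one extra dependency block; the coordinates below are a sketch, and the version and Scala suffix are assumptions that must match your own Spark build (Spark 1.1.0 on Scala 2.10 was current around the time of this commit).

{% highlight xml %}
<!-- Sketch only: the version and _2.10 suffix must match your Spark build. -->
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-mllib_2.10</artifactId>
  <version>1.1.0</version>
</dependency>
{% endhighlight %}

With SBT the equivalent is a single line, `libraryDependencies += "org.apache.spark" %% "spark-mllib" % "1.1.0"`.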
docs/mllib-linear-methods.md (20 changes: 10 additions & 10 deletions)
@@ -247,7 +247,7 @@ val modelL1 = svmAlg.run(training)
All of MLlib's methods use Java-friendly types, so you can import and call them there the same
way you do in Scala. The only caveat is that the methods take Scala RDD objects, while the
Spark Java API uses a separate `JavaRDD` class. You can convert a Java RDD to a Scala one by
-calling `.rdd()` on your `JavaRDD` object. A standalone application example
+calling `.rdd()` on your `JavaRDD` object. A self-contained application example
that is equivalent to the provided example in Scala is given below:

{% highlight java %}
@@ -323,9 +323,9 @@ svmAlg.optimizer()
final SVMModel modelL1 = svmAlg.run(training.rdd());
{% endhighlight %}

-In order to run the above standalone application, follow the instructions
-provided in the [Standalone
-Applications](quick-start.html#standalone-applications) section of the Spark
+In order to run the above application, follow the instructions
+provided in the [Self-Contained
+Applications](quick-start.html#self-contained-applications) section of the Spark
quick-start guide. Be sure to also include *spark-mllib* to your build file as
a dependency.
</div>
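
The context lines in the hunk above catch only the tail of the docs' L1-regularized SVM example (`svmAlg.optimizer()` through `svmAlg.run(training.rdd())`). Below is a self-contained sketch of that pattern; the class name and toy dataset are illustrative stand-ins for the docs' real training data.

{% highlight java %}
// Illustrative sketch of configuring the optimizer for L1 regularization,
// completing the pattern whose tail appears in the hunk above.
import java.util.Arrays;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.mllib.classification.SVMModel;
import org.apache.spark.mllib.classification.SVMWithSGD;
import org.apache.spark.mllib.linalg.Vectors;
import org.apache.spark.mllib.optimization.L1Updater;
import org.apache.spark.mllib.regression.LabeledPoint;

public class SVMWithL1Sketch {
  public static void main(String[] args) {
    SparkConf conf = new SparkConf().setAppName("SVMWithL1Sketch");
    JavaSparkContext sc = new JavaSparkContext(conf);

    // Toy training set; real code would load data instead.
    JavaRDD<LabeledPoint> training = sc.parallelize(Arrays.asList(
        new LabeledPoint(1.0, Vectors.dense(1.0, 2.0)),
        new LabeledPoint(0.0, Vectors.dense(-1.0, -2.0))));

    // Configure the optimizer directly to swap in L1 regularization.
    SVMWithSGD svmAlg = new SVMWithSGD();
    svmAlg.optimizer()
        .setNumIterations(200)
        .setRegParam(0.1)
        .setUpdater(new L1Updater());
    final SVMModel modelL1 = svmAlg.run(training.rdd());

    System.out.println("Weights: " + modelL1.weights());
    sc.stop();
  }
}
{% endhighlight %}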
@@ -482,12 +482,6 @@ public class LinearRegression {
}
}
{% endhighlight %}

-In order to run the above standalone application, follow the instructions
-provided in the [Standalone
-Applications](quick-start.html#standalone-applications) section of the Spark
-quick-start guide. Be sure to also include *spark-mllib* to your build file as
-a dependency.
</div>

<div data-lang="python" markdown="1">
@@ -519,6 +513,12 @@ print("Mean Squared Error = " + str(MSE))
</div>
</div>

+In order to run the above application, follow the instructions
+provided in the [Self-Contained Applications](quick-start.html#self-contained-applications)
+section of the Spark
+quick-start guide. Be sure to also include *spark-mllib* to your build file as
+a dependency.

## Streaming linear regression

When data arrive in a streaming fashion, it is useful to fit regression models online,
docs/quick-start.md (8 changes: 4 additions & 4 deletions)
@@ -8,7 +8,7 @@ title: Quick Start

This tutorial provides a quick introduction to using Spark. We will first introduce the API through Spark's
interactive shell (in Python or Scala),
-then show how to write standalone applications in Java, Scala, and Python.
+then show how to write applications in Java, Scala, and Python.
See the [programming guide](programming-guide.html) for a more complete reference.

To follow along with this guide, first download a packaged release of Spark from the
@@ -215,8 +215,8 @@ a cluster, as described in the [programming guide](programming-guide.html#initia
</div>
</div>

-# Standalone Applications
-Now say we wanted to write a standalone application using the Spark API. We will walk through a
+# Self-Contained Applications
+Now say we wanted to write a self-contained application using the Spark API. We will walk through a
simple application in both Scala (with SBT), Java (with Maven), and Python.

<div class="codetabs">
@@ -387,7 +387,7 @@ Lines with a: 46, Lines with b: 23
</div>
<div data-lang="python" markdown="1">

-Now we will show how to write a standalone application using the Python API (PySpark).
+Now we will show how to write an application using the Python API (PySpark).

As an example, we'll create a simple Spark application, `SimpleApp.py`:

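The section being renamed here is the Quick Start's walkthrough of exactly such an application. For reference, below is a Java sketch in the spirit of the guide's `SimpleApp` (the `YOUR_SPARK_HOME` path is a placeholder; the "Lines with a: 46, Lines with b: 23" in the hunk header above is the guide's sample output for this kind of count).

{% highlight java %}
/* A sketch of a self-contained Spark application in the spirit of the
 * Quick Start walkthrough; the input path is a placeholder. */
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;

public class SimpleApp {
  public static void main(String[] args) {
    String logFile = "YOUR_SPARK_HOME/README.md"; // any text file on your system
    SparkConf conf = new SparkConf().setAppName("Simple Application");
    JavaSparkContext sc = new JavaSparkContext(conf);
    JavaRDD<String> logData = sc.textFile(logFile).cache();

    // Count lines containing "a" and "b" respectively.
    long numAs = logData.filter(new Function<String, Boolean>() {
      public Boolean call(String s) { return s.contains("a"); }
    }).count();

    long numBs = logData.filter(new Function<String, Boolean>() {
      public Boolean call(String s) { return s.contains("b"); }
    }).count();

    System.out.println("Lines with a: " + numAs + ", lines with b: " + numBs);
    sc.stop();
  }
}
{% endhighlight %}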
