Add support for Spark 2.2.0-hadoop2.7
GezimSejdiu committed Jul 13, 2017
1 parent 63f186d commit 3d047fd
Showing 9 changed files with 14 additions and 13 deletions.
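The summary above is one mechanical change repeated across nine files: every `2.1.1-hadoop2.7` tag becomes `2.2.0-hadoop2.7`. A hedged sketch of applying the same bump with GNU sed (the scratch file below is a stand-in for the repo's real Dockerfiles, not part of this commit):

```shell
# Hypothetical: replay this commit's tag bump over a scratch copy.
# In the real repo the same sed would run over the nine changed files.
tmp=$(mktemp -d)
printf 'FROM bde2020/spark-base:2.1.1-hadoop2.7\n' > "$tmp/Dockerfile"
sed -i 's/2\.1\.1-hadoop2\.7/2.2.0-hadoop2.7/g' "$tmp/Dockerfile"
bumped=$(cat "$tmp/Dockerfile")
echo "$bumped"
rm -rf "$tmp"
```

Running the substitution in one pass is what keeps the base, master, worker, submit, and template images on the same Spark/Hadoop pair.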
11 changes: 6 additions & 5 deletions README.md
@@ -5,6 +5,7 @@ Docker images to:
* Build Spark applications in Java, Scala or Python to run on a Spark cluster

Currently supported versions:
+* Spark 2.2.0 for Hadoop 2.7+ with OpenJDK 8
* Spark 2.1.1 for Hadoop 2.7+ with OpenJDK 8
* Spark 2.1.0 for Hadoop 2.7+ with OpenJDK 8
* Spark 2.0.2 for Hadoop 2.7+ with OpenJDK 8
@@ -19,7 +20,7 @@ Currently supported versions:
Add the following services to your `docker-compose.yml` to integrate a Spark master and Spark worker in [your BDE pipeline](https://github.com/big-data-europe/app-bde-pipeline):
```
spark-master:
-  image: bde2020/spark-master:2.1.1-hadoop2.7
+  image: bde2020/spark-master:2.2.0-hadoop2.7
container_name: spark-master
ports:
- "8080:8080"
@@ -28,7 +29,7 @@ spark-master:
- INIT_DAEMON_STEP=setup_spark
- "constraint:node==<yourmasternode>"
spark-worker-1:
-  image: bde2020/spark-worker:2.1.1-hadoop2.7
+  image: bde2020/spark-worker:2.2.0-hadoop2.7
container_name: spark-worker-1
depends_on:
- spark-master
@@ -38,7 +39,7 @@ spark-worker-1:
- "SPARK_MASTER=spark://spark-master:7077"
- "constraint:node==<yourmasternode>"
spark-worker-2:
-  image: bde2020/spark-worker:2.1.1-hadoop2.7
+  image: bde2020/spark-worker:2.2.0-hadoop2.7
container_name: spark-worker-2
depends_on:
- spark-master
@@ -54,12 +55,12 @@ Make sure to fill in the `INIT_DAEMON_STEP` as configured in your pipeline.
### Spark Master
To start a Spark master:

-    docker run --name spark-master -h spark-master -e ENABLE_INIT_DAEMON=false -d bde2020/spark-master:2.1.1-hadoop2.7
+    docker run --name spark-master -h spark-master -e ENABLE_INIT_DAEMON=false -d bde2020/spark-master:2.2.0-hadoop2.7

### Spark Worker
To start a Spark worker:

-    docker run --name spark-worker-1 --link spark-master:spark-master -e ENABLE_INIT_DAEMON=false -d bde2020/spark-worker:2.1.1-hadoop2.7
+    docker run --name spark-worker-1 --link spark-master:spark-master -e ENABLE_INIT_DAEMON=false -d bde2020/spark-worker:2.2.0-hadoop2.7

## Launch a Spark application
Building and running your Spark application on top of the Spark cluster is as simple as extending a template Docker image. Check the template's README for further documentation.
2 changes: 1 addition & 1 deletion base/Dockerfile
@@ -7,7 +7,7 @@ ENV ENABLE_INIT_DAEMON true
ENV INIT_DAEMON_BASE_URI http://identifier/init-daemon
ENV INIT_DAEMON_STEP spark_master_init

-ENV SPARK_VERSION=2.1.1
+ENV SPARK_VERSION=2.2.0
ENV HADOOP_VERSION=2.7

COPY wait-for-step.sh /
2 changes: 1 addition & 1 deletion master/Dockerfile
@@ -1,4 +1,4 @@
-FROM bde2020/spark-base:2.1.1-hadoop2.7
+FROM bde2020/spark-base:2.2.0-hadoop2.7

MAINTAINER Erika Pauwels <[email protected]>
MAINTAINER Gezim Sejdiu <[email protected]>
2 changes: 1 addition & 1 deletion submit/Dockerfile
@@ -1,4 +1,4 @@
-FROM bde2020/spark-base:2.1.1-hadoop2.7
+FROM bde2020/spark-base:2.2.0-hadoop2.7

MAINTAINER Erika Pauwels <[email protected]>
MAINTAINER Gezim Sejdiu <[email protected]>
2 changes: 1 addition & 1 deletion template/java/Dockerfile
@@ -1,4 +1,4 @@
-FROM bde2020/spark-submit:2.1.1-hadoop2.7
+FROM bde2020/spark-submit:2.2.0-hadoop2.7

MAINTAINER Erika Pauwels <[email protected]>
MAINTAINER Gezim Sejdiu <[email protected]>
2 changes: 1 addition & 1 deletion template/java/README.md
@@ -34,7 +34,7 @@ If you overwrite the template's `CMD` in your Dockerfile, make sure to execute t

#### Example Dockerfile
```
-FROM bde2020/spark-java-template:2.1.1-hadoop2.7
+FROM bde2020/spark-java-template:2.2.0-hadoop2.7
MAINTAINER Erika Pauwels <[email protected]>
2 changes: 1 addition & 1 deletion template/python/Dockerfile
@@ -1,4 +1,4 @@
-FROM bde2020/spark-submit:2.1.1-hadoop2.7
+FROM bde2020/spark-submit:2.2.0-hadoop2.7

MAINTAINER Cecile Tonglet <[email protected]>

2 changes: 1 addition & 1 deletion template/python/README.md
@@ -30,7 +30,7 @@ If you overwrite the template's `CMD` in your Dockerfile, make sure to execute t

#### Example Dockerfile
```
-FROM bde2020/spark-python-template:2.1.1-hadoop2.7
+FROM bde2020/spark-python-template:2.2.0-hadoop2.7
MAINTAINER You <[email protected]>
2 changes: 1 addition & 1 deletion worker/Dockerfile
@@ -1,4 +1,4 @@
-FROM bde2020/spark-base:2.1.1-hadoop2.7
+FROM bde2020/spark-base:2.2.0-hadoop2.7

MAINTAINER Erika Pauwels <[email protected]>
MAINTAINER Gezim Sejdiu <[email protected]>
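With all nine files bumped, a quick consistency check confirms no stale tag survives anywhere in the tree; a hedged sketch (the scratch files below stand in for the repo's real Dockerfiles and READMEs):

```shell
# Hypothetical post-bump check: fail if any file still carries the
# old 2.1.1-hadoop2.7 tag after this commit is applied.
tmp=$(mktemp -d)
printf 'FROM bde2020/spark-base:2.2.0-hadoop2.7\n' > "$tmp/Dockerfile"
printf 'image: bde2020/spark-master:2.2.0-hadoop2.7\n' > "$tmp/README.md"
if grep -rq '2\.1\.1-hadoop2\.7' "$tmp"; then
  result="stale tag found"
else
  result="all files on the new tag"
fi
echo "$result"
rm -rf "$tmp"
```

A mixed pair of tags would give a worker image built against one Spark version talking to a master on another, so a grep like this is a cheap guard before tagging a release.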
