
Installing the prebuilt distribution

Let's download the prebuilt Spark distribution and install it. Later, we will also build a version from source. The download is straightforward: the download page is at http://spark.apache.org/downloads.html. Select the options as shown in the following screenshot:

We will use wget from the command line, but you can also download the archive directly from your browser:

cd /opt
sudo wget http://www-us.apache.org/dist/spark/spark-2.0.0/spark-2.0.0-bin-hadoop2.7.tgz

We are downloading the prebuilt version for Apache Hadoop 2.7 from one of the possible mirrors. We could have easily downloaded other prebuilt versions as well, as shown in the following screenshot:

To uncompress it, execute the following command:

sudo tar xvf spark-2.0.0-bin-hadoop2.7.tgz
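Once the archive is extracted, it can be convenient (though the steps in this section do not require it) to point `SPARK_HOME` at the install directory and add its `bin` folder to `PATH`, so that tools such as `spark-shell` and `spark-submit` can be run without typing the full path. The path below assumes the `/opt` location used earlier; adjust it if you unpacked elsewhere:

```shell
# Assumed install location from the extraction step above
export SPARK_HOME=/opt/spark-2.0.0-bin-hadoop2.7
# Put the Spark binaries on the PATH for this shell session
export PATH="$SPARK_HOME/bin:$PATH"
echo "$SPARK_HOME"
```

To make this permanent, you would typically append the two `export` lines to your shell profile (for example `~/.bashrc`).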

To test the installation, run the following command:

/opt/spark-2.0.0-bin-hadoop2.7/bin/run-example SparkPi 10

It will fire up the Spark stack and compute an approximation of Pi. The result will be as shown in the following screenshot:
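Under the hood, the SparkPi example estimates Pi with a Monte Carlo method: it samples random points in the unit square and counts the fraction that land inside the quarter circle. A minimal standalone Python sketch of the same idea (without Spark; the sample count and function name here are illustrative, not part of the Spark example) looks like this:

```python
import random

def estimate_pi(num_samples, seed=42):
    """Monte Carlo estimate of Pi: 4 times the fraction of random
    points in the unit square that fall inside the quarter circle."""
    rng = random.Random(seed)
    inside = 0
    for _ in range(num_samples):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / num_samples

print(estimate_pi(100000))
```

The `10` argument passed to SparkPi plays a similar role: it controls how many partitions of samples Spark distributes across the cluster, so a larger value yields a more accurate estimate.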
