
How to do it...

Since we do not need to compile any sources, installing from the binaries takes only four steps (see the following source code):

  1. Download the precompiled binaries from Spark's website.
  2. Unpack the archive.
  3. Move the unpacked binaries to their final destination.
  4. Create the necessary environment variables.

The skeleton for our code looks as follows (see the Chapter01/installFromBinary.sh file):

#!/bin/bash
# Shell script for installing Spark from binaries

#
# PySpark Cookbook
# Author: Tomasz Drabas, Denny Lee
# Version: 0.1
# Date: 12/2/2017
_spark_binary="http://mirrors.ocf.berkeley.edu/apache/spark/spark-2.3.1/spark-2.3.1-bin-hadoop2.7.tgz"
_spark_archive=$( echo "$_spark_binary" | awk -F '/' '{print $NF}' )
_spark_dir=$( echo "${_spark_archive%.*}" )
_spark_destination="/opt/spark"
...
checkOS
printHeader
downloadThePackage
unpack
moveTheBinaries
setSparkEnvironmentVariables
cleanUp
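The variable assignments at the top of the skeleton do the name-parsing work for the helper functions whose bodies are elided above (the full implementations live in the Chapter01/installFromBinary.sh file). The following sketch shows how those assignments derive the archive and directory names from the download URL, plus a hypothetical version of setSparkEnvironmentVariables: the function name comes from the skeleton, but its body here is an assumption, written against a caller-supplied profile file rather than hard-coding ~/.bashrc.

```shell
#!/bin/bash
# How the skeleton's variables are derived from the download URL.
_spark_binary="http://mirrors.ocf.berkeley.edu/apache/spark/spark-2.3.1/spark-2.3.1-bin-hadoop2.7.tgz"

# awk splits the URL on '/'; $NF is the last field, i.e. the archive filename.
_spark_archive=$( echo "$_spark_binary" | awk -F '/' '{print $NF}' )

# ${var%.*} strips the shortest trailing '.*' match, removing the .tgz suffix.
_spark_dir="${_spark_archive%.*}"

echo "$_spark_archive"   # spark-2.3.1-bin-hadoop2.7.tgz
echo "$_spark_dir"       # spark-2.3.1-bin-hadoop2.7

# Hypothetical sketch of setSparkEnvironmentVariables: appends SPARK_HOME and
# PATH exports to a profile file passed in by the caller (e.g. ~/.bashrc).
setSparkEnvironmentVariables() {
  local profile="$1"
  {
    echo "export SPARK_HOME=/opt/spark"
    echo 'export PATH=$SPARK_HOME/bin:$PATH'
  } >> "$profile"
}
```

After the exports are sourced (for example, with source ~/.bashrc), the pyspark and spark-submit commands become available on the PATH.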