Spark Cookbook
Rishi Yadav
Building the Spark source code with Maven
Installing Spark from binaries works fine in most cases. For advanced cases such as the following (among others), compiling from the source code is a better option:
- Compiling for a specific Hadoop version
- Adding the Hive integration
- Adding the YARN integration
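In each of these cases, only the Maven profile and version flags change between builds. As a sketch, the same build targeting Hadoop 2.6 instead of 2.4 (assuming the hadoop-2.6 profile exists in the branch you check out) would look like this:
$ mvn -Pyarn -Phadoop-2.6 -Dhadoop.version=2.6.0 -Phive -DskipTests clean package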
Getting ready
The following are the prerequisites for this recipe to work:
- Java 1.6 or a later version
- Maven 3.x
How to do it...
The following are the steps to build the Spark source code with Maven:
- Increase MaxPermSize for the heap:
$ echo "export _JAVA_OPTIONS=\"-XX:MaxPermSize=1G\"" >> /home/hduser/.bashrc
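Since _JAVA_OPTIONS is only read by shells that have loaded the updated .bashrc, a quick check that the setting is active (a verification step, not part of the original recipe):
$ source /home/hduser/.bashrc
$ echo $_JAVA_OPTIONS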
- Open a new terminal window and download the Spark source code from GitHub:
$ wget https://github.com/apache/spark/archive/branch-1.4.zip
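If you prefer Git over a zip archive, a shallow clone of the same branch is an alternative (in that case the source tree is created as spark directly, so the unpack and rename steps below can be skipped):
$ git clone --depth 1 -b branch-1.4 https://github.com/apache/spark.git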
- Unpack the archive:
$ unzip branch-1.4.zip
- Move to the unpacked directory (the archive unpacks as spark-branch-1.4; rename it to spark so the later steps line up):
$ mv spark-branch-1.4 spark
$ cd spark
- Compile the sources with YARN enabled, Hadoop version 2.4, and Hive enabled, skipping the tests for faster compilation:
$ mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -DskipTests clean package
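If the build succeeds, the assembly JAR is placed under the assembly module. A quick sanity check (the exact file name depends on the Spark, Scala, and Hadoop versions; Scala 2.10 is assumed here, as that is what branch-1.4 builds against by default):
$ ls assembly/target/scala-2.10/spark-assembly-*.jar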
- Move back up to the parent directory, then move the conf folder to /etc so that it can be pointed to by a symbolic link later:
$ cd ..
$ sudo mv spark/conf /etc/spark
- Move the spark directory to /opt as it's an add-on software package:
$ sudo mv spark /opt/infoobjects/spark
- Change the ownership of the spark home directory to root:
$ sudo chown -R root:root /opt/infoobjects/spark
- Change the permissions of the spark home directory to 0755 (user: rwx, group: r-x, world: r-x):
$ sudo chmod -R 755 /opt/infoobjects/spark
- Move to the spark home directory:
$ cd /opt/infoobjects/spark
- Create a symbolic link:
$ sudo ln -s /etc/spark conf
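A quick check that the link resolves to the relocated configuration (verification only, not part of the original recipe):
$ ls -l /opt/infoobjects/spark/conf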
- Put the Spark executable in the path by editing .bashrc:
$ echo "export PATH=$PATH:/opt/infoobjects/spark/bin" >> /home/hduser/.bashrc
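As before, the new PATH entry applies only to shells that have re-read .bashrc; to pick it up immediately and confirm the binaries are found (a verification step, not part of the original recipe):
$ source /home/hduser/.bashrc
$ which spark-shell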
- Create the log directory in /var:
$ sudo mkdir -p /var/log/spark
- Make hduser the owner of the Spark log directory:
$ sudo chown -R hduser:hduser /var/log/spark
- Create the Spark tmp directory:
$ mkdir /tmp/spark
- Configure Spark with the help of the following command lines:
$ cd /etc/spark
$ echo "export HADOOP_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop" >> spark-env.sh
$ echo "export YARN_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop" >> spark-env.sh
$ echo "export SPARK_LOG_DIR=/var/log/spark" >> spark-env.sh
$ echo "export SPARK_WORKER_DIR=/tmp/spark" >> spark-env.sh
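With the environment file in place, a short smoke test confirms that the freshly built binaries run (assuming the PATH update above has taken effect):
$ spark-submit --version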