
Building the Spark source code with Maven

Installing Spark using binaries works fine in most cases. For advanced use cases, such as the following (though not limited to these), compiling from the source code is a better option:

  • Compiling for a specific Hadoop version
  • Adding the Hive integration
  • Adding the YARN integration

Getting ready

The following are the prerequisites for this recipe to work:

  • Java 1.6 or a later version
  • Maven 3.x
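
Both prerequisites can be verified quickly before you start; a minimal check, assuming java and mvn are already on your PATH:

    $ java -version
    $ mvn -version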

How to do it...

The following are the steps to build the Spark source code with Maven:

  1. Increase MaxPermSize so that the JVM does not run out of permanent generation space during the build:
    $ echo "export _JAVA_OPTIONS=\"-XX:MaxPermSize=1G\"" >> /home/hduser/.bashrc
    
  2. Open a new terminal window and download the Spark source code from GitHub:
    $ wget https://github.com/apache/spark/archive/branch-1.4.zip
    
  3. Unpack the zip archive:
    $ unzip branch-1.4.zip
    
  4. The archive unpacks to a directory named spark-branch-1.4; rename it to spark and move into it:
    $ mv spark-branch-1.4 spark
    $ cd spark
    
  5. Compile the sources with these flags: YARN enabled, Hadoop version 2.4, Hive enabled, and tests skipped for faster compilation:
    $ mvn -Pyarn -Phadoop-2.4 -Dhadoop.version=2.4.0 -Phive -DskipTests clean package
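
If the build succeeds, Maven writes a Spark assembly JAR under assembly/target; the exact file name depends on the Spark and Hadoop versions, so treat the path below as illustrative:

    $ ls assembly/target/scala-2.10/spark-assembly-*.jar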
    
  6. Move back up one level and move the conf folder to /etc/spark so that it can later be pointed to by a symbolic link:
    $ cd ..
    $ sudo mv spark/conf /etc/spark
    
  7. Move the spark directory to /opt as it's an add-on software package:
    $ sudo mv spark /opt/infoobjects/spark
    
  8. Change the ownership of the spark home directory to root:
    $ sudo chown -R root:root /opt/infoobjects/spark
    
  9. Change the permissions of the spark home directory to 0755 (user: rwx, group: r-x, world: r-x):
    $ sudo chmod -R 755 /opt/infoobjects/spark
    
  10. Move to the spark home directory:
    $ cd /opt/infoobjects/spark
    
  11. Create a symbolic link to the configuration directory:
    $ sudo ln -s /etc/spark conf
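
You can confirm that the link resolves to the configuration directory:

    $ readlink conf
    /etc/spark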
    
  12. Put the Spark executables in the path by editing .bashrc (note the escaped \$PATH, so that the variable is expanded when .bashrc is sourced rather than when the echo runs):
    $ echo "export PATH=\$PATH:/opt/infoobjects/spark/bin" >> /home/hduser/.bashrc
    
  13. Create the log directory in /var:
    $ sudo mkdir -p /var/log/spark
    
  14. Make hduser the owner of the Spark log directory:
    $ sudo chown -R hduser:hduser /var/log/spark
    
  15. Create the Spark tmp directory:
    $ mkdir /tmp/spark
    
  16. Configure Spark by appending the following environment variables to spark-env.sh:
    $ cd /etc/spark
    $ echo "export HADOOP_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop" >> spark-env.sh
    $ echo "export YARN_CONF_DIR=/opt/infoobjects/hadoop/etc/hadoop" >> spark-env.sh
    $ echo "export SPARK_LOG_DIR=/var/log/spark" >> spark-env.sh
    $ echo "export SPARK_WORKER_DIR=/tmp/spark" >> spark-env.sh
    