
Running Spark examples

Spark comes with packaged examples for Java, Python, Scala, and R. We'll demonstrate how you can run a program provided in the examples directory.

As we only have a local installation, we'll run the SparkPi example locally on 4 cores. The examples are available on the Apache Spark GitHub page at http://bit.ly/28S1hDY. We've taken an excerpt from the example to explain how the SparkContext is initialized:

 val conf = new SparkConf().setAppName("Spark Pi")
 val spark = new SparkContext(conf)

The example comes packaged with the Spark binaries, and the code can also be downloaded from GitHub. Looking closely at the code, you'll see that we instantiate our own SparkContext object from a SparkConf object. The application name, Spark Pi, will appear in the Spark UI as a running application during the execution, and will help you track the status of your job. Remember, this is in stark contrast to the spark-shell, where a SparkContext is automatically instantiated and passed as a reference.
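For context, the rest of the example estimates Pi using a Monte Carlo simulation: it scatters random points over the unit square and counts how many land inside the unit circle. The following is a condensed, self-contained sketch of that logic (the object name and default slice count here are illustrative, not a verbatim copy of the shipped example):

 import org.apache.spark.{SparkConf, SparkContext}
 import scala.math.random

 object SparkPiSketch {
   def main(args: Array[String]): Unit = {
     val conf = new SparkConf().setAppName("Spark Pi")
     val spark = new SparkContext(conf)

     // Optional first argument controls the number of partitions (slices).
     val slices = if (args.length > 0) args(0).toInt else 2
     val n = 100000 * slices // total number of random samples

     // For each sample, draw a random point in [-1, 1] x [-1, 1] and
     // record a hit if it falls inside the unit circle.
     val count = spark.parallelize(1 to n, slices).map { _ =>
       val x = random * 2 - 1
       val y = random * 2 - 1
       if (x * x + y * y < 1) 1 else 0
     }.reduce(_ + _)

     // The circle-to-square area ratio is Pi/4, so Pi ~= 4 * hits / samples.
     println("Pi is roughly " + 4.0 * count / n)
     spark.stop()
   }
 }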

Let's run this example with the spark-submit script:
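The exact command depends on your Spark version and installation layout; assuming a Spark 2.x build under $SPARK_HOME, an invocation along the following lines runs SparkPi on 4 local cores (the examples jar name, and the trailing argument giving the number of slices, will vary with your distribution):

 $SPARK_HOME/bin/spark-submit \
   --class org.apache.spark.examples.SparkPi \
   --master local[4] \
   $SPARK_HOME/examples/jars/spark-examples_2.11-2.0.0.jar 100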

The output of the script spans multiple pages, so we'll skip the intermediate log messages and go straight to the part where the result is printed. Remember, in this case we are running SparkPi, which prints out an estimate of Pi. Here's the second part of the log:

Figure 1.17: Running Spark Pi example

So far we have seen the example in Scala. If you look at the Python version, you'll see that we just need to pass in the Python source file; we don't have to pass any JAR files, as we are not referencing any other code. As in the Scala example, we have to instantiate the SparkContext ourselves, unlike in the PySpark shell, which automatically provides you with a reference to the context object:

sc = SparkContext(appName="PythonPi")

Running the Python Spark Pi example is a bit different from the Scala example:
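Again assuming a Spark 2.x layout under $SPARK_HOME, a typical invocation points spark-submit at the bundled pi.py script instead of a class and a jar (the script path may differ slightly between versions):

 $SPARK_HOME/bin/spark-submit \
   --master local[4] \
   $SPARK_HOME/examples/src/main/python/pi.py 100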

As with the Scala example, the log of the Python Pi program spans multiple pages. We'll move directly to the part where the value of Pi is printed in the log:

Figure 1.18: Running Spark Pi Python example

Building your own programs

We have tested pre-compiled programs but, as discussed earlier in this chapter, you can create your own programs, use sbt or Maven to package the application, and run it with the spark-submit script; a minimal sketch of this packaging workflow follows below. In the later chapters of this book, we will use both the REPL environments and spark-submit for various code examples. For a complete code example, we'll build a recommendation system in Chapter 9, Building a Recommendation System, and predict customer churn in a telco environment in Chapter 10, Customer Churn Prediction. Both of these examples (though fictional) will help you understand the overall life cycle of a machine learning application.
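As a rough illustration of that workflow, here is a minimal build.sbt for a standalone Spark application; the project name, Scala version, and Spark version are placeholder assumptions you would adapt to your own setup:

 name := "my-spark-app"

 version := "1.0"

 scalaVersion := "2.11.8"

 // "provided" keeps Spark itself out of the packaged jar, since
 // spark-submit supplies the Spark classes at runtime.
 libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0" % "provided"

Running sbt package then produces a jar under target/scala-2.11/, which you can submit just like the bundled examples (the main class com.example.MyApp here is hypothetical):

 $SPARK_HOME/bin/spark-submit \
   --class com.example.MyApp \
   --master local[4] \
   target/scala-2.11/my-spark-app_2.11-1.0.jar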
