
Running Spark examples

Spark comes with packaged examples for Java, Python, Scala, and R. We'll demonstrate how you can run a program provided in the examples directory.

As we only have a local installation, we'll run the Spark Pi example locally on 4 cores. The examples are available at the Apache Spark GitHub page http://bit.ly/28S1hDY. We've taken an excerpt out of the example to explain how SparkContext is initialized:

val conf = new SparkConf().setAppName("Spark Pi")
val spark = new SparkContext(conf)

The example comes packaged with the Spark binaries, and the code can also be downloaded from GitHub. Looking closely at the code, you will notice that we instantiate our own SparkContext object from a SparkConf object. The application name Spark Pi will appear in the Spark UI as a running application during execution and will help you track the status of your job. Remember, this is in stark contrast to the spark-shell, where a SparkContext is automatically instantiated and passed as a reference.

Let's run this example with the spark-submit script:
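The exact invocation depends on your Spark version and installation layout; a representative command, assuming you run it from the Spark installation directory of a Spark 2.x binary distribution (the JAR name and version suffix will vary), looks like this:

./bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master local[4] \
  examples/jars/spark-examples_2.11-2.0.0.jar 100

Here --master local[4] runs the job locally on 4 cores, and the trailing argument is the number of partitions used to sample points for the Pi estimate.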

The log output spans multiple pages, so we will skip the intermediate steps and go straight to the part where the output is printed. Remember that in this case we are running Spark Pi, which prints an estimated value of Pi. Here's the second part of the log:

Figure 1.17: Running Spark Pi example

So far we have seen the example in Scala. For the Python version, we only need to pass in the Python source file; no JAR files are required, since we are not referencing any other compiled code. As in the Scala example, we instantiate the SparkContext ourselves, unlike the PySpark shell, which automatically provides a reference to the context object:

sc = SparkContext(appName="PythonPi")
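For context, the bundled pi.py estimates Pi with a Monte Carlo simulation. The following is a condensed sketch of that approach rather than the exact file contents; the sample count and partition count are illustrative:

from random import random
from operator import add
from pyspark import SparkContext

sc = SparkContext(appName="PythonPi")
partitions = 2
n = 100000 * partitions  # total number of random points to sample

def inside(_):
    # Draw a random point in the unit square and check whether it
    # falls inside the unit circle
    x, y = random() * 2 - 1, random() * 2 - 1
    return 1 if x ** 2 + y ** 2 <= 1 else 0

count = sc.parallelize(range(1, n + 1), partitions).map(inside).reduce(add)
print("Pi is roughly %f" % (4.0 * count / n))
sc.stop()

The fraction of sampled points that land inside the circle, multiplied by 4, converges to Pi.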

Running the Spark Pi example in Python is a bit different from the Scala example:
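Because the Python example is just a source file, you pass its path directly to spark-submit; a typical invocation (the location under examples/src/main/python follows the standard Spark layout and may differ in your distribution) looks like this:

./bin/spark-submit --master local[4] examples/src/main/python/pi.py 10

The optional trailing argument again controls the number of partitions.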

As with the Scala example, the log of the Python Pi program spans multiple pages. We'll move directly to the part where the value of Pi is printed in the log:

Figure 1.18: Running Spark Pi Python example

Building your own programs

We have run pre-compiled programs but, as discussed earlier in this chapter, you can create your own programs, package them with sbt or Maven, and run them using the spark-submit script. In later chapters of this book, we will use both the REPL environments and spark-submit for various code examples. For complete code examples, we'll build a recommendation system in Chapter 9, Building a Recommendation System, and predict customer churn in a telco environment in Chapter 10, Customer Churn Prediction. Both of these examples (though fictional) will help you understand the overall life cycle of a machine learning application.
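As an illustration of the packaging step (not code from this book, and with the project name and version numbers as assumptions), a minimal build.sbt for a standalone Spark application could look like this:

name := "my-spark-app"
version := "0.1"
scalaVersion := "2.11.8"

// Spark itself is marked "provided" because spark-submit supplies it at runtime
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0" % "provided"

Running sbt package then produces a JAR under target/scala-2.11/ that can be passed to spark-submit, just like the bundled examples JAR.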
