
Running Spark examples

Spark comes with packaged examples for Java, Python, Scala, and R. We'll demonstrate how you can run a program provided in the examples directory.

As we only have a local installation, we'll run the Spark Pi example locally on 4 cores. The examples are available at the Apache Spark GitHub page http://bit.ly/28S1hDY. Here's an excerpt from the example that shows how the SparkContext is initialized:

import org.apache.spark.{SparkConf, SparkContext}

val conf = new SparkConf().setAppName("Spark Pi")
val spark = new SparkContext(conf)

The example comes packaged with the Spark binaries, and the code can also be downloaded from GitHub. Looking closely at the code, you will notice that we instantiate our own SparkContext object from a SparkConf object. The application name Spark Pi will appear in the Spark UI as a running application during execution and will help you track the status of your job. Remember, this is in stark contrast to the spark-shell, where a SparkContext is automatically instantiated and passed as a reference.
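For context, the rest of the example estimates Pi by Monte Carlo sampling: it scatters random points over the unit square and counts the fraction that land inside the unit circle. The following is a simplified sketch of that computation against the SparkContext created above; the slice count and sample size here are illustrative values, not the exact ones used upstream:

import scala.math.random

val slices = 4                      // number of partitions (illustrative)
val n = 100000 * slices             // total number of sample points
val count = spark.parallelize(1 to n, slices).map { _ =>
  val x = random * 2 - 1            // random point in [-1, 1] x [-1, 1]
  val y = random * 2 - 1
  if (x * x + y * y < 1) 1 else 0   // 1 if the point falls inside the unit circle
}.reduce(_ + _)
println("Pi is roughly " + 4.0 * count / n)

Because the unit circle has area Pi and the enclosing square has area 4, four times the inside-the-circle fraction converges on Pi as the sample count grows.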

Let's run this example with the spark-submit script:
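A representative invocation is shown below; the flags are standard spark-submit options, but the examples JAR filename depends on your Spark and Scala versions, so adjust the path to match your installation. The trailing argument is the number of slices passed to SparkPi, and local[4] runs the job on 4 local cores:

$ ./bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master local[4] \
    examples/jars/spark-examples_2.11-2.0.0.jar 100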

The log output of the script spans multiple pages, so we will skip the intermediate output and go straight to the part where the result is printed. Remember that in this case we are running Spark Pi, which prints out an estimated value of Pi. Here's the second part of the log:

Figure 1.17: Running Spark Pi example

So far we have seen the example in Scala. If you look at the Python version of the same example, you will see that we just need to pass in the Python source file; we do not have to pass in any JAR files, as we are not referencing any other code. As in the Scala example, we have to instantiate the SparkContext directly, unlike the PySpark shell, which automatically provides you with a reference to the context object:

from pyspark import SparkContext
sc = SparkContext(appName="PythonPi")

Running the Python Spark Pi example is a bit different from the Scala example:
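A representative command is shown below; the path to pi.py follows the layout of a standard Spark distribution, so adjust it if your installation differs. As before, local[4] runs the job on 4 local cores:

$ ./bin/spark-submit --master local[4] examples/src/main/python/pi.py 100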

As with the Scala example, the log of the Python Pi program spans multiple pages. We'll move directly to the part where the value of Pi is printed in the log:

Figure 1.18: Running Spark Pi Python example

Building your own programs

We have tested pre-compiled programs, but, as discussed earlier in this chapter, you can create your own programs, use sbt or Maven to package the application, and run it with the spark-submit script. In later chapters of this book, we will use both the REPL environments and spark-submit for various code examples. For a complete code example, we'll build a recommendation system in Chapter 9, Building a Recommendation System, and predict customer churn in a telco environment in Chapter 10, Customer Churn Prediction. Both of these examples (though fictional) will help you understand the overall life cycle of a machine learning application.
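As a starting point, a minimal sbt build definition for a standalone Spark application might look like the following sketch; the project name and version numbers are illustrative, and you should match the Spark and Scala versions of your own installation:

name := "my-spark-app"            // illustrative project name
version := "0.1"
scalaVersion := "2.11.8"          // match the Scala version of your Spark build

// "provided" keeps Spark out of the packaged JAR; spark-submit supplies it at runtime
libraryDependencies += "org.apache.spark" %% "spark-core" % "2.0.0" % "provided"

Running sbt package then produces a JAR under target/ that you can launch with spark-submit, just as we did with the bundled examples.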
