Spark clusters
A Spark cluster is made up of two types of processes: a driver program and multiple executors. In local mode, all of these processes run within the same JVM; in a cluster, they usually run on separate nodes.
For example, a typical cluster that runs in Spark's standalone mode (that is, using Spark's built-in cluster manager) will have the following:
- A master node that runs the Spark standalone master process as well as the driver program
- A number of worker nodes, each running an executor process
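The same application code runs in either arrangement; only the master URL handed to SparkConf changes. The following is a minimal Scala sketch of this idea (the application name is illustrative, and the commented-out cluster URL is a placeholder for a real standalone master):

import org.apache.spark.{SparkConf, SparkContext}

object ClusterModeSketch {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setAppName("ClusterModeSketch")
      .setMaster("local[*]") // local mode: driver and executors share one JVM
    // .setMaster("spark://IP:PORT") // standalone cluster: executors run on the worker nodes
    val sc = new SparkContext(conf)

    // The map function is executed by the executors; collect() returns
    // the results to the driver program.
    val doubled = sc.parallelize(1 to 10).map(_ * 2).collect()
    println(doubled.mkString(", "))

    sc.stop()
  }
}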
While we will be using Spark's local standalone mode throughout this book to illustrate concepts and examples, the same Spark code that we write can be run on a Spark cluster. In the preceding example, if we run the code on a Spark standalone cluster, we could simply pass in the URL for the master node, as follows:
$ ./bin/spark-submit --master spark://IP:PORT \
  --class org.apache.spark.examples.SparkPi \
  ./examples/jars/spark-examples_2.11-2.0.0.jar 100
Here, IP is the IP address and PORT is the port of the Spark master. Passing this URL through the --master option tells Spark to run the program on the cluster where the Spark master process is running.
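For reference, the SparkPi example bundled with Spark is conceptually similar to the following sketch, which estimates pi by Monte Carlo sampling. The object name MySparkPi is hypothetical, but an application packaged like this into a jar could be submitted with a spark-submit command like the one above; the trailing 100 becomes args(0):

import org.apache.spark.{SparkConf, SparkContext}
import scala.math.random

object MySparkPi {
  def main(args: Array[String]): Unit = {
    // No master URL is set here; it is supplied by spark-submit's --master option.
    val conf = new SparkConf().setAppName("MySparkPi")
    val sc = new SparkContext(conf)

    // args(0) is the number of partitions (slices) to sample across.
    val slices = if (args.length > 0) args(0).toInt else 2
    val n = 100000 * slices

    // Sample random points in the unit square; the fraction that lands
    // inside the unit circle estimates pi / 4.
    val count = sc.parallelize(1 until n, slices).map { _ =>
      val x = random * 2 - 1
      val y = random * 2 - 1
      if (x * x + y * y <= 1) 1 else 0
    }.reduce(_ + _)

    println(s"Pi is roughly ${4.0 * count / n}")
    sc.stop()
  }
}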
A full treatment of Spark's cluster management and deployment is beyond the scope of this book. However, we will briefly teach you how to set up and use an Amazon EC2 cluster later in this chapter.
For an overview of Spark cluster-application deployment, take a look at the following links:
- http://spark.apache.org/docs/latest/cluster-overview.html
- http://spark.apache.org/docs/latest/submitting-applications.html