- Hands-On Big Data Analytics with PySpark
- Rudy Lai, Bartłomiej Potaczek
SparkConf
SparkConf allows us to configure a Spark application. It sets various Spark parameters as key-value pairs. Most of the time, you will create a SparkConf object with the SparkConf() constructor, which then loads values from the underlying spark.* Java system properties.
There are a few useful functions; for example, we can use the set() function to set a configuration property, setMaster() to set the master URL to connect to, setAppName() to set the application name, and setSparkHome() to set the path where Spark is installed on worker nodes.
You can learn more about SparkConf at https://spark.apache.org/docs/0.9.0/api/pyspark/pyspark.conf.SparkConf-class.html.