
  • Learning Apache Spark 2
  • Muhammad Asif Abbasi

Passing functions to Spark (Python)

Python provides a simple way to pass functions to Spark. The Spark programming guide available at spark.apache.org suggests there are three recommended ways to do this:

  • Lambda expressions are ideal for short functions that can be written as a single expression
  • Local defs inside the function calling into Spark for longer code
  • Top-level functions in a module
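As a quick illustration of the first option, the same lambda you would pass to `rdd.map()` in Spark can be exercised on a plain Python list. The sketch below uses a list and the built-in `map` as a stand-in for an RDD so it runs without a cluster; the equivalent Spark call is shown in the comment:

```python
lines = ["the quick brown fox", "jumps over the lazy dog"]

# In Spark this would be:
#   sc.parallelize(lines).map(lambda line: len(line.split(" "))).collect()
words_per_line = list(map(lambda line: len(line.split(" ")), lines))
print(words_per_line)  # [4, 5]
```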

While we have already used lambda functions in some of the previous examples, let's look at local definitions of functions. We can encapsulate our business logic, splitting lines into words and counting them, in two separate functions as shown below.

def splitter(lineOfText):
    words = lineOfText.split(" ")
    return len(words)

def aggregate(numWordsLine1, numWordsLineNext):
    totalWords = numWordsLine1 + numWordsLineNext
    return totalWords

Let's see the working code example:

Figure 2.15: Code example of Python word count (local definition of functions)
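Since the figure itself is not reproduced here, the following sketch shows how the two local definitions would typically be wired together. A plain list and `functools.reduce` stand in for the RDD so the logic runs stand-alone; in Spark the equivalent would be `sc.textFile(path).map(splitter).reduce(aggregate)`:

```python
from functools import reduce

def splitter(lineOfText):
    words = lineOfText.split(" ")
    return len(words)

def aggregate(numWordsLine1, numWordsLineNext):
    totalWords = numWordsLine1 + numWordsLineNext
    return totalWords

# Stand-in for an RDD of lines; in Spark: sc.textFile("words.txt")
lines = ["to be or not to be", "that is the question"]

# In Spark: lines_rdd.map(splitter).reduce(aggregate)
total = reduce(aggregate, map(splitter, lines))
print(total)  # 10
```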

Here's another way to implement this by defining the functions as a part of a UtilFunctions class, and referencing them within your map and reduce functions:

Figure 2.16: Code example of Python word count (Utility class)
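The class-based variant in the figure likely resembles the sketch below (only the class name `UtilFunctions` comes from the text; the rest is an assumption). Declaring the functions as static methods keeps them free of instance state, so only the functions themselves need to be shipped to the cluster. A plain list again stands in for the RDD:

```python
from functools import reduce

class UtilFunctions:
    @staticmethod
    def splitter(lineOfText):
        return len(lineOfText.split(" "))

    @staticmethod
    def aggregate(numWordsLine1, numWordsLineNext):
        return numWordsLine1 + numWordsLineNext

lines = ["to be or not to be", "that is the question"]

# In Spark: lines_rdd.map(UtilFunctions.splitter).reduce(UtilFunctions.aggregate)
total = reduce(UtilFunctions.aggregate, map(UtilFunctions.splitter, lines))
print(total)  # 10
```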

You may want to be a bit cheeky here and try adding a countWords() method to UtilFunctions, so that it takes an RDD as input and returns the total number of words. This approach has performance implications: because the method references the instance, the whole object must be serialized and sent to the cluster. Let's see how this can be implemented and the results in the following screenshot:

Figure 2.17: Code example of Python word count (Utility class - 2)

This can be avoided by copying the referenced data field into a local variable, rather than accessing it through the object.
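A minimal sketch of that pattern, assuming a hypothetical class carrying a configurable separator field: referencing `self.separator` inside the lambda would pull the whole object into the closure that Spark serializes and ships to executors, whereas copying it into a local variable first means only the string travels. The method body works on any iterable of lines so it runs stand-alone; the Spark calls are shown in the comments:

```python
class WordCounter:  # hypothetical class, for illustration only
    def __init__(self, separator=" "):
        self.separator = separator

    def count_words(self, lines):
        # Bad: the lambda would capture self, so the entire WordCounter
        # object gets serialized with every task:
        #   return lines_rdd.map(lambda l: len(l.split(self.separator))).sum()

        # Good: copy the field into a local variable first; only the
        # string ends up in the closure.
        sep = self.separator
        # In Spark: lines_rdd.map(lambda l: len(l.split(sep))).sum()
        return sum(len(line.split(sep)) for line in lines)

print(WordCounter().count_words(["to be or not to be", "that is the question"]))  # 10
```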

Now that we have had a look at how to pass functions to Spark, and have already looked at some of the transformations and actions in the previous examples, including map, flatMap, and reduce, let's look at the most common transformations and actions used in Spark. The list is not exhaustive, and you can find more examples in the Apache Spark documentation in the programming guide section (http://bit.ly/SparkProgrammingGuide). If you would like to get a comprehensive list of all the available functions, you might want to check the following API docs:

Table 2.1 - RDD and PairRDD API references
