
Counting the number of words in a file

Let's read the file people.txt located in $SPARK_HOME/examples/src/main/resources.
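For reference, this file ships with the Spark distribution and holds three comma-separated records (the collect output later in this section confirms this):

Michael, 29
Andy, 30
Justin, 19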

The textFile() method can be used to read the file as follows:
scala> val file = sc.textFile("/usr/local/spark/examples/src/main/resources/people.txt")
file: org.apache.spark.rdd.RDD[String] = /usr/local/spark/examples/src/main/resources/people.txt MapPartitionsRDD[1] at textFile at <console>:24

The next step is to flatten the contents of the file; that is, we will create an RDD by splitting each line on the delimiter ", " and flattening all of the resulting words into a single RDD of strings, as follows:

scala> val flattenFile = file.flatMap(s => s.split(", "))
flattenFile: org.apache.spark.rdd.RDD[String] = MapPartitionsRDD[5] at flatMap at <console>:26
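To see why flatMap() is the right choice here, compare it with map(), which would keep each line's words in a nested array instead of flattening them. The following check illustrates the difference (the res number will vary with your session):

scala> file.map(s => s.split(", ")).collect
res0: Array[Array[String]] = Array(Array(Michael, 29), Array(Andy, 30), Array(Justin, 19))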

The contents of the flattenFile RDD look as follows:

scala> flattenFile.collect
res5: Array[String] = Array(Michael, 29, Andy, 30, Justin, 19)

Now, we can count all the words in this RDD as follows:

scala> val count = flattenFile.count
count: Long = 6
scala> count
res2: Long = 6
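If you need a count per distinct word rather than a single total, a minimal sketch using the classic map() and reduceByKey() pattern follows; since every token in people.txt is unique, each pair in the result would carry a count of 1, and the ordering of the pairs may vary:

scala> flattenFile.map(word => (word, 1)).reduceByKey(_ + _).collect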

Whenever an action such as count is called, Spark creates a directed acyclic graph (DAG) to depict the lineage dependencies of each RDD. Spark provides the toDebugString() method to display such lineage dependencies of an RDD:

scala> flattenFile.toDebugString

Illustrative output for the lineage above is shown next; the exact RDD IDs, console line numbers, and the parallelism value in parentheses depend on your session and configuration:
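res6: String =
(2) MapPartitionsRDD[5] at flatMap at <console>:26 []
 |  /usr/local/spark/examples/src/main/resources/people.txt MapPartitionsRDD[1] at textFile at <console>:24 []
 |  /usr/local/spark/examples/src/main/resources/people.txt HadoopRDD[0] at textFile at <console>:24 []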

The indentation represents a shuffle boundary, while the number in parentheses indicates the parallelism level at each stage; in this simple lineage there is no shuffle, so all of the lines sit at the same level.
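As a side note, the parallelism reported in parentheses can be controlled when the file is read: textFile() accepts an optional minPartitions argument, and getNumPartitions reports the resulting partition count. A minimal sketch (the variable name file4 is just for illustration):

scala> val file4 = sc.textFile("/usr/local/spark/examples/src/main/resources/people.txt", 4)
scala> file4.getNumPartitions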

In this section, we became familiar with some Spark CLI concepts. In the next section, we will discuss the various components of a Spark job.
