
Why Hadoop plus Spark?

Apache Spark shines even brighter when it is combined with Hadoop. To understand why, let's take a look at the features of each framework.

Figure: Hadoop features

Figure: Spark features

When both frameworks are combined, we get the power of enterprise-grade applications with in-memory performance, as shown in Figure 2.11:

Figure 2.11: Spark applications on the Hadoop platform

Frequently asked questions about Spark

The following are questions that practitioners frequently ask about Spark:

  • My dataset does not fit in-memory. How can I use Spark?

    Spark's operators spill data to disk if it does not fit in memory, allowing Spark to run on data of any size. Likewise, cached datasets that do not fit in memory are either spilled to disk or recomputed on the fly when needed, depending on the RDD's storage level. By default, Spark recomputes the partitions that do not fit in memory. The storage level can be changed to MEMORY_AND_DISK to spill those partitions to disk instead (a PySpark sketch follows this list).

    Figure 2.12 shows the performance difference between a fully cached dataset and one stored on disk:

    Figure 2.12: Spark performance: Fully cached versus disk

  • How does fault recovery work in Spark?

    Spark's built-in fault tolerance, based on RDD lineage, automatically recovers from failures. Figure 2.13 shows the performance impact of a failure in the 6th iteration of a k-means algorithm (a lineage sketch also follows this list):

    Figure 2.13: Fault recovery performance
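
To make the answer to the first question concrete, the following is a minimal PySpark sketch of persisting an RDD with the MEMORY_AND_DISK storage level; the application name and input path are illustrative placeholders, not taken from the text:

from pyspark import SparkContext, StorageLevel

sc = SparkContext(appName="StorageLevelExample")

# Hypothetical input path; point this at a real dataset on HDFS or local disk
lines = sc.textFile("hdfs:///data/large_dataset.txt")

# The default storage level (MEMORY_ONLY) recomputes partitions that do not
# fit in memory; MEMORY_AND_DISK spills them to local disk instead
lines.persist(StorageLevel.MEMORY_AND_DISK)

print(lines.count())  # the first action materializes and caches the RDD
print(lines.count())  # later actions reuse the cached or spilled partitions

sc.stop()

Spilling to local disk is usually cheaper than recomputation when the lineage leading up to the cached RDD is expensive, which is why MEMORY_AND_DISK is a common choice for iterative workloads.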
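
To illustrate the lineage that fault recovery relies on, the following sketch builds a small RDD pipeline and prints its lineage graph with toDebugString(); the input path and transformations are hypothetical:

from pyspark import SparkContext

sc = SparkContext(appName="LineageExample")

# Hypothetical input; each transformation below adds one step to the lineage
raw = sc.textFile("hdfs:///data/points.txt")
parsed = raw.map(lambda line: [float(x) for x in line.split(",")])
points = parsed.filter(lambda point: len(point) == 2)

# toDebugString() returns the lineage graph that Spark replays to rebuild
# partitions lost to a failure (bytes in some PySpark versions, str in others)
lineage = points.toDebugString()
print(lineage.decode("utf-8") if isinstance(lineage, bytes) else lineage)

sc.stop()

If an executor is lost, only the partitions it held are rebuilt by replaying these steps, rather than rerunning the whole job.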
