
Native streaming versus micro-batch

Let's examine how stateful stream processing (as found in Apex and Flink) compares to the micro-batch approach of Apache Spark Streaming.

Let's look at the following diagram:

In the preceding diagram, the top shows an example of processing in Spark Streaming and the bottom shows the equivalent in Apex. Based on its underlying "stateless" batch architecture, Spark Streaming processes a stream by dividing it into small batches (micro-batches) that typically span from 500 ms to a few seconds. A new task is scheduled for every micro-batch. Once scheduled, the new task needs to be initialized. Such initialization can include opening connections to external resources, loading reference data needed for processing, and so on. Overall, this implies a per-task overhead that limits the micro-batch frequency and leads to a latency trade-off.
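The cost difference can be sketched with a toy example. The function names and the simulated initialization delay below are hypothetical; the point is only that the micro-batch model pays the setup cost once per batch, while a long-running native-streaming operator pays it once:

```python
import time

def init_task():
    # Hypothetical per-task setup: opening connections,
    # loading reference data, and so on.
    time.sleep(0.005)  # stand-in for the real initialization cost

def process_micro_batch(records):
    # Micro-batch model: a fresh task is scheduled and initialized
    # for EVERY batch before the actual work runs.
    init_task()
    return sum(records)

def run_native_stream(batches):
    # Native streaming model: one long-running task, initialized once,
    # then processing batch after batch.
    init_task()
    return [sum(batch) for batch in batches]
```

Shrinking the micro-batch interval to reduce latency multiplies how often `init_task` is paid, which is the trade-off described above.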

In classical batch processing, a task may last for the entire bounded input data set. Any computational state remains internal to the task, and fault tolerance typically requires no special consideration: whenever there is a failure, the task can simply restart from the beginning.

However, with unbounded data and streaming, a stateful operation like counting needs to maintain the current count, and that count must be transferred across task boundaries. As long as the state is small, this may be manageable. However, when transformations are applied to data with a large key cardinality, the state can easily grow to a size that makes it impractical to swap in and out (cost of serialization, I/O, and so on). Correct state management is hard to achieve without underlying platform support, especially when accuracy, consistency, and fault tolerance are important.
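To make the stateful case concrete, here is a hedged sketch of a streaming counter with checkpoint-based recovery. The class and method names are invented for illustration; a real platform such as Apex or Flink would persist the snapshot durably and manage recovery automatically:

```python
import copy

class StatefulCounter:
    """Counts keys over an unbounded stream; state outlives any one window."""

    def __init__(self):
        self.counts = {}
        self._snapshot = {}

    def process(self, key):
        # State keeps growing with key cardinality over an unbounded stream.
        self.counts[key] = self.counts.get(key, 0) + 1

    def checkpoint(self):
        # A real platform would write this snapshot to durable storage.
        self._snapshot = copy.deepcopy(self.counts)

    def recover(self):
        # After a failure, resume from the last consistent snapshot
        # instead of recounting from the beginning of the stream.
        self.counts = copy.deepcopy(self._snapshot)
```

With platform support, checkpointing and recovery happen transparently and consistently across operators; doing this by hand for large state is exactly the serialization and I/O burden described above.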
