
The MapReduce framework

MapReduce is a framework used to process large amounts of data in a Hadoop cluster. MapReduce uses YARN to schedule the mappers and reducers as tasks, running in containers. The MapReduce framework enables you to write distributed applications that process large amounts of data from a filesystem, such as the Hadoop Distributed File System (HDFS), in a reliable and fault-tolerant manner. To process data with the MapReduce framework, you create a job, which then runs on the framework to perform the required tasks. A MapReduce job usually works by splitting the input data across worker nodes and running the mapper tasks in parallel.

Any failures that occur during this process, whether at the HDFS level or in a mapper task, are handled automatically, making the framework fault-tolerant. Once the mappers have completed, the results are copied over the network to the machines running the reducer tasks.

An example of using a MapReduce job to count frequencies of words is shown in the following diagram:

MapReduce uses YARN as a resource manager, which is shown in the following diagram:

The term MapReduce actually refers to two separate and distinct tasks that Hadoop programs perform. The first is the map job, which takes a set of data and converts it into another set of data, where individual elements are broken down into tuples (key/value pairs).
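To make this concrete, the following is a minimal sketch of the map side of the word-count example from the earlier diagram, written against Hadoop's org.apache.hadoop.mapreduce API. The class name WordCountMapper is illustrative; the mapper breaks each input line into individual words and emits a (word, 1) tuple for each one:

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Mapper;

public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    protected void map(LongWritable offset, Text line, Context context)
            throws IOException, InterruptedException {
        // Break the input line into words and emit a (word, 1) tuple for each
        StringTokenizer tokenizer = new StringTokenizer(line.toString());
        while (tokenizer.hasMoreTokens()) {
            word.set(tokenizer.nextToken());
            context.write(word, ONE);
        }
    }
}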

The reduce job takes the output from a map as input and combines those data tuples into a smaller set of tuples. As the name MapReduce implies, the reduce job is always performed after the map job.
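Continuing the word-count sketch (WordCountReducer is again an illustrative name), a matching reducer receives each word together with all the 1s the mappers emitted for it, and combines them into a single (word, total) tuple:

import java.io.IOException;

import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Reducer;

public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    private final IntWritable total = new IntWritable();

    @Override
    protected void reduce(Text word, Iterable<IntWritable> counts, Context context)
            throws IOException, InterruptedException {
        // Sum all the counts emitted for this word by the mappers
        int sum = 0;
        for (IntWritable count : counts) {
            sum += count.get();
        }
        total.set(sum);
        context.write(word, total);
    }
}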

The input to a MapReduce job is a set of files in the data store that are spread across HDFS. In Hadoop, these files are split with an input format, which defines how to separate a file into input splits. An input split is a byte-oriented view of a chunk of the file, to be loaded by a map task. Each map task in Hadoop is broken into the following phases: record reader, mapper, combiner, and partitioner. The output of the map tasks, called the intermediate keys and values, is sent to the reducers. The reduce tasks are broken into the following phases: shuffle, sort, reducer, and output format. Optimally, the map tasks run on the nodes where the data resides. This way, the data typically does not have to move over the network and can be computed on the local machine.
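A driver ties these pieces together. The following is a minimal sketch (the class name WordCountDriver and the command-line input/output paths are assumptions) showing where the input format, combiner, and reducer from the phases above are configured; TextInputFormat is Hadoop's default line-oriented input format, which defines the input splits described earlier:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCountDriver {
    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "word count");
        job.setJarByClass(WordCountDriver.class);

        // The input format defines how files are separated into input splits
        job.setInputFormatClass(TextInputFormat.class);

        job.setMapperClass(WordCountMapper.class);
        // The reducer can double as a map-side combiner because summing is associative
        job.setCombinerClass(WordCountReducer.class);
        job.setReducerClass(WordCountReducer.class);

        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(IntWritable.class);

        FileInputFormat.addInputPath(job, new Path(args[0]));
        FileOutputFormat.setOutputPath(job, new Path(args[1]));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Because the reducer here simply sums integers, the same class can serve as the combiner, which reduces the amount of intermediate data shuffled over the network to the reducers.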

Throughout this chapter, we will look at different use cases and how to use a MapReduce job to produce the desired output; for this purpose, we will use a simple dataset.
