Running the WordCount program in a distributed cluster environment
This recipe describes how to run a MapReduce computation in a distributed Hadoop v2 cluster.
Getting ready
Start the Hadoop cluster by following the Setting up HDFS recipe or the Setting up Hadoop ecosystem in a distributed cluster environment using a Hadoop distribution recipe.
How to do it...
Now let's run the WordCount sample in the distributed Hadoop v2 setup:
Upload the wc-input directory from the source repository to the HDFS filesystem. Alternatively, you can upload any other set of text documents.
$ hdfs dfs -copyFromLocal wc-input .
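If you don't have the wc-input directory from the source repository at hand, a minimal substitute can be created locally before uploading; the file name and contents here are hypothetical placeholders:

```shell
# Create a small local wc-input directory to upload (hypothetical sample text)
mkdir -p wc-input
printf 'Hadoop is an elephant\nHadoop runs MapReduce\n' > wc-input/doc1.txt
ls wc-input
```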
Execute the WordCount example from the HADOOP_HOME directory:
$ hadoop jar hcb-c1-samples.jar \
chapter1.WordCount \
wc-input wc-output
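Note that MapReduce refuses to write to an output directory that already exists, so a rerun of the job needs the previous wc-output removed first. A sketch of that pre-flight cleanup, guarded so it is only attempted when an hdfs client is available:

```shell
# Remove a previous run's output directory before resubmitting the job.
# The guard is an assumption for machines without a Hadoop client installed;
# on a real cluster node you would simply run the hdfs command directly.
out_dir="wc-output"
if command -v hdfs >/dev/null 2>&1; then
  hdfs dfs -rm -r -f "$out_dir"   # requires a running cluster
fi
echo "cleaned: $out_dir"
```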
Run the following commands to list the output directory and then look at the results:
$ hdfs dfs -ls wc-output
$ hdfs dfs -cat wc-output/part*
When we submit a job, YARN schedules a MapReduce ApplicationMaster to coordinate and execute the computation. The ApplicationMaster requests the necessary resources from the ResourceManager and executes the MapReduce computation using the containers it receives in response to that request.
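Conceptually, the map, shuffle, and reduce phases that the ApplicationMaster coordinates mirror a classic shell pipeline. This local, single-machine analogue (not part of the Hadoop job) shows the same word-count dataflow:

```shell
# A local shell analogue of the WordCount dataflow (illustration only)
counts=$(echo "to be or not to be" |
  tr ' ' '\n' |   # "map": emit one word per line
  sort |          # "shuffle": group identical keys together
  uniq -c)        # "reduce": count the occurrences of each key
echo "$counts"
```

In the real job, each phase runs in parallel across containers on different nodes, with the shuffle moving map outputs to the reducers over the network.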
There's more...
You can also see the results of the WordCount application through the HDFS monitoring UI by visiting http://NAMENODE:50070.