Summary

In this chapter, we started by checking the prerequisites for installing Hadoop and configured Hadoop in pseudo-distributed mode. Then we got the Elasticsearch server up and running and covered its basic configuration. We learned how to install Elasticsearch plugins, imported the sample file for the WordCount example into HDFS, and successfully ran our first Hadoop MapReduce job that uses ES-Hadoop to get the data into Elasticsearch. Finally, we learned how to use the Head and Marvel plugins to explore documents in Elasticsearch.

With our environment and the required tools set up, and a basic understanding in place, we are all set for hands-on experience writing MapReduce jobs that use ES-Hadoop. In the next chapter, we will take a look at how the WordCount job is developed. We will also develop a couple of jobs for real-world scenarios that write and read data to and from HDFS and Elasticsearch.

主站蜘蛛池模板: 固阳县| 清徐县| 澜沧| 和硕县| 四会市| 米林县| 沽源县| 江城| 临海市| 修武县| 淮北市| 都昌县| 水城县| 青龙| 都安| 旺苍县| 陆丰市| 临沂市| 兴海县| 惠来县| 左权县| 阜城县| 防城港市| 陆丰市| 鄢陵县| 山阳县| 怀化市| 葫芦岛市| 时尚| 宜兰市| 华宁县| 大冶市| 乡宁县| 金平| 渑池县| 大埔县| 洛浦县| 西和县| 浦城县| 兴义市| 棋牌|