
Summary

In this chapter, we started by checking the prerequisites for installing Hadoop and then configured Hadoop in pseudo-distributed mode. Next, we got the Elasticsearch server up and running and covered its basic configuration, and we learned how to install Elasticsearch plugins. We imported the sample file for the WordCount example into HDFS and successfully ran our first Hadoop MapReduce job, which uses ES-Hadoop to push the data into Elasticsearch. Finally, we learned how to use the Head and Marvel plugins to explore documents in Elasticsearch.
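
To give a concrete feel for the pieces that came together in that first job, here is a minimal, illustrative sketch of a WordCount job that writes its results to Elasticsearch through ES-Hadoop's EsOutputFormat. The class names, the index name (wordcount/counts), and the Elasticsearch address are assumptions made for illustration only; the actual WordCount job from the chapter is developed step by step in the next chapter.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.MapWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
import org.elasticsearch.hadoop.mr.EsOutputFormat;

public class WordCountToEs {

    // Splits each input line into words and emits (word, 1).
    public static class TokenizerMapper
            extends Mapper<LongWritable, Text, Text, IntWritable> {
        private static final IntWritable ONE = new IntWritable(1);
        private final Text word = new Text();

        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            StringTokenizer tokens = new StringTokenizer(value.toString());
            while (tokens.hasMoreTokens()) {
                word.set(tokens.nextToken());
                context.write(word, ONE);
            }
        }
    }

    // Sums the counts per word and emits one MapWritable document per word;
    // EsOutputFormat indexes each emitted document into Elasticsearch.
    public static class SumReducer
            extends Reducer<Text, IntWritable, NullWritable, MapWritable> {
        @Override
        protected void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            int sum = 0;
            for (IntWritable value : values) {
                sum += value.get();
            }
            MapWritable doc = new MapWritable();
            doc.put(new Text("word"), key);
            doc.put(new Text("count"), new IntWritable(sum));
            context.write(NullWritable.get(), doc);
        }
    }

    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.set("es.nodes", "localhost:9200");      // assumed local Elasticsearch node
        conf.set("es.resource", "wordcount/counts"); // assumed target index/type
        // ES-Hadoop recommends disabling speculative execution when writing.
        conf.setBoolean("mapreduce.map.speculative", false);
        conf.setBoolean("mapreduce.reduce.speculative", false);

        Job job = Job.getInstance(conf, "wordcount-to-es");
        job.setJarByClass(WordCountToEs.class);
        job.setInputFormatClass(TextInputFormat.class);
        job.setOutputFormatClass(EsOutputFormat.class); // writes to Elasticsearch, not HDFS
        job.setMapperClass(TokenizerMapper.class);
        job.setReducerClass(SumReducer.class);
        job.setMapOutputKeyClass(Text.class);
        job.setMapOutputValueClass(IntWritable.class);
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(MapWritable.class);
        FileInputFormat.addInputPath(job, new Path(args[0])); // the sample file in HDFS
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}

Note that no output path is set: EsOutputFormat sends the reducer's MapWritable documents directly to the index named by es.resource instead of writing files to HDFS.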

With our environment and the required tools set up, and a basic understanding in place, we are ready for a hands-on look at how to write MapReduce jobs that use ES-Hadoop. In the next chapter, we will see how the WordCount job is developed. We will also develop a couple of jobs for real-world scenarios that write and read data to and from HDFS and Elasticsearch.
