
Hadoop streaming

In this recipe, we will look at how to execute jobs on a Hadoop cluster using scripts written in Bash or Python. It is not mandatory to use Java for writing MapReduce code; any language can be used by invoking the Hadoop streaming utility. Do not confuse this with real-time stream processing, which is different from what we will be discussing here.

Getting ready

To step through the recipes in this chapter, make sure you have a running cluster with HDFS and YARN set up correctly, as discussed in the previous chapters. This can be a single-node or a multinode cluster, as long as it is configured correctly.

It is not necessary to know Java to run MapReduce programs on Hadoop. Users can carry forward their existing scripting knowledge and use Bash or Python to run jobs on Hadoop.

How to do it...

  1. Connect to an edge node in the cluster and switch to user hadoop.
  2. The streaming JAR ships under the Hadoop installation directory, at /opt/cluster/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.7.2.jar.
  3. The map script of the wordcount example, written in Python, reads lines of text from standard input and, for every word, writes the word and the count 1, separated by a tab, to standard output.
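A minimal sketch of such a mapper is shown below; it emits the tab-separated word/count pairs that the reducer splits on (the map_line helper function is an illustrative structuring, not a required part of the script):

```python
#!/usr/bin/env python
# Word-count mapper for Hadoop streaming: reads lines of text from
# STDIN and writes one tab-separated 'word<TAB>1' pair per word to STDOUT.
import sys

def map_line(line):
    # split the line on whitespace and pair each word with a count of 1
    return [(word, 1) for word in line.strip().split()]

if __name__ == '__main__':
    # input comes from STDIN; the output feeds the shuffle/sort and the reducer
    for line in sys.stdin:
        for word, count in map_line(line):
            print('%s\t%s' % (word, count))
```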
  4. The reduce script is as shown next:
    #!/usr/bin/env python

    import sys

    current_word = None
    current_count = 0
    word = None

    # input comes from STDIN
    for line in sys.stdin:
        # remove leading and trailing whitespace
        line = line.strip()

        # parse the input we got from mapper.py
        word, count = line.split('\t', 1)

        # convert count (currently a string) to int
        try:
            count = int(count)
        except ValueError:
            # count was not a number, so silently
            # ignore/discard this line
            continue

        # this IF-switch only works because Hadoop sorts map output
        # by key (here: word) before it is passed to the reducer
        if current_word == word:
            current_count += count
        else:
            if current_word:
                # write result to STDOUT
                print('%s\t%s' % (current_word, current_count))
            current_count = count
            current_word = word

    # do not forget to output the last word if needed!
    if current_word == word:
        print('%s\t%s' % (current_word, current_count))
  5. The user can then submit the streaming job with the hadoop jar command, for example (the HDFS input and output paths here are illustrative):

     $ hadoop jar /opt/cluster/hadoop/share/hadoop/tools/lib/hadoop-streaming-2.7.2.jar \
         -input /user/hadoop/input -output /user/hadoop/output \
         -mapper mapper.py -reducer reducer.py \
         -file mapper.py -file reducer.py

How it works...

In this recipe, mapper.py and reducer.py are simple Python scripts that can be executed directly on the command line, without the need for Hadoop, as shown next. The sort step mimics the shuffle-and-sort phase that Hadoop performs between the map and reduce stages; the reducer depends on its input arriving grouped by key:

$ cat file | ./mapper.py | sort | ./reducer.py

Here, file is a simple text file. Pay attention to Python's indentation rules when troubleshooting these scripts.
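The behaviour of this local pipeline can be sketched entirely in Python; the sorted() call plays the role of Hadoop's shuffle-and-sort phase, which the reducer's consecutive-key logic depends on (the function names and sample input below are illustrative):

```python
# A plain-Python sketch of the mapper | sort | reducer pipeline.
# sorted() stands in for Hadoop's shuffle-and-sort phase, which groups
# identical keys together before they reach the reducer.

def run_mapper(lines):
    # like mapper.py: emit a (word, 1) pair for every word on every line
    pairs = []
    for line in lines:
        for word in line.strip().split():
            pairs.append((word, 1))
    return pairs

def run_reducer(sorted_pairs):
    # like reducer.py: sum counts over runs of consecutive identical words
    results = []
    current_word, current_count = None, 0
    for word, count in sorted_pairs:
        if current_word == word:
            current_count += count
        else:
            if current_word is not None:
                results.append((current_word, current_count))
            current_word, current_count = word, count
    if current_word is not None:
        results.append((current_word, current_count))
    return results

sample = ["the quick brown fox", "the lazy dog"]
print(run_reducer(sorted(run_mapper(sample))))
# without the sort, the two 'the' pairs would not be adjacent, and the
# reducer would emit two partial counts for 'the' instead of one total
```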

If users find it difficult to write the scripts or configurations, all of them are available on GitHub: https://github.com/netxillon/hadoop/tree/master/map_scripts
