Python MapReduce example

Run the job with the shell script run.sh:

#!/bin/bash
IN_DIR="/input/20140221"
OUT_DIR="/output/20140221"

HPHOME=$HADOOP_HOME/bin/
JAR_PACKAGE=$HADOOP_HOME/contrib/streaming/hadoop-streaming-0.20.2-cdh3u0.jar

MAP_FILE=$(pwd)/map.py
RED_FILE=$(pwd)/red.py

# Remove the output directory first; the job fails if it already exists.
${HPHOME}hadoop fs -rmr $OUT_DIR
${HPHOME}hadoop jar $JAR_PACKAGE \
        -numReduceTasks 1 \
        -jobconf mapred.job.queue.name=platform \
        -input $IN_DIR \
        -output $OUT_DIR \
        -file $MAP_FILE \
        -file $RED_FILE \
        -mapper $MAP_FILE \
        -reducer $RED_FILE \
        -inputformat SequenceFileAsTextInputFormat
exit $?

map.py

#!/usr/bin/env python

import sys

for line in sys.stdin:
    line = line.strip()
    if "str1" in line:
        # Input fields are delimited by '\01' (the control byte '\x01').
        item = line.split('\01')
        key = item[3]
        print('%s\t%s' % (key, 'str3'))
    elif "str2" in line:
        item = line.split('\01')
        key = item[2]
        value = item[5]
        print('%s\t%s\t%s' % (key, 'str3', value))
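The mapper's field extraction can be checked locally without Hadoop. The sketch below reimplements its parsing as a function and feeds it fabricated '\x01'-joined records; the field contents are made up for illustration only:

```python
# Sketch of map.py's field extraction on fabricated sample records.
# '\x01' is the same byte map.py writes as the octal escape '\01'.

def map_line(line):
    """Return the tab-separated record the mapper would emit, or None."""
    line = line.strip()
    if "str1" in line:
        item = line.split("\x01")
        return "%s\t%s" % (item[3], "str3")               # key only
    if "str2" in line:
        item = line.split("\x01")
        return "%s\t%s\t%s" % (item[2], "str3", item[5])  # key plus value
    return None

print(map_line("\x01".join(["a", "b", "c", "k1", "str1"])))
```

Feeding each stdin line through such a function keeps the parsing testable outside the streaming job.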
red.py

#!/usr/bin/env python

import sys

total_count = 0
total_error = 0
total_money = 0.0

for line in sys.stdin:
    line = line.strip()
    if 'str3' in line:
        item = line.split('\t')
        total_count += 1
        # Only records from the mapper's "str2" branch carry a money field;
        # guard against the two-field records emitted by the "str1" branch.
        if len(item) > 2:
            total_money += float(item[2])

# Print only once, after all input has been read.
print('%s %s %s' % (total_count, total_error, total_money))
For reference only.

By default, Hadoop Streaming uses '\t' to separate the key from the value; the first field is the key.
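That default can be mimicked with a split on the first tab. The helper below is illustration only (not part of the original scripts); Hadoop itself does this framing between the map and reduce stages:

```python
# Mimic Streaming's default framing: the key is everything before the
# first '\t'; the remainder (which may itself contain tabs) is the value.
def split_key_value(record):
    key, _, value = record.partition("\t")
    return key, value

print(split_key_value("k2\tstr3\t9.5"))
```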

By the time data reaches the reduce stage, all records with the same key are handled together by the same red.py process, so every line for a given key can be read consecutively. (In the Java API, all values for a key are delivered to a single reduce() call; in Python streaming you still read line by line, but lines sharing a key arrive grouped together.)
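One common way to exploit that ordering in Python is itertools.groupby. This is a hedged sketch of a per-key reducer, not part of the original red.py, which only keeps global totals:

```python
import itertools

def sum_by_key(sorted_lines):
    """Sum the third field per key, relying on Hadoop's sort placing
    identical keys on consecutive lines."""
    key_of = lambda line: line.split("\t")[0]
    totals = {}
    for key, group in itertools.groupby(sorted_lines, key=key_of):
        totals[key] = sum(float(line.split("\t")[2]) for line in group)
    return totals

print(sum_by_key(["k1\tstr3\t1.0", "k1\tstr3\t2.0", "k2\tstr3\t3.5"]))
```

groupby only batches *adjacent* equal keys, which is exactly the guarantee the shuffle/sort phase provides.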


