A simple Hadoop example: per-phone-number traffic statistics

I just finished debugging a MapReduce example that computes, for each user (phone number), the total upstream traffic, total downstream traffic, and total traffic. I ran into a few problems along the way and solved them one by one, so I'm writing them down while they're fresh.

Step 1: prepare the input text

I copied a snippet of sample data from the web into a txt file, named it phones.txt, and uploaded it to HDFS at /a/phones.txt. I also kept a local copy at D:\hadoopmaterial\phones.txt to make debugging easier.
1363157985066 13726230503 00-FD-07-A4-72-B8:CMCC 120.196.100.82 i02.c.aliimg.com 24 27 2481 24681 200
1363157995052 13826544101 5C-0E-8B-C7-F1-E0:CMCC 120.197.40.4 4 0 264 0 200
1363157991076 13926435656 20-10-7A-28-CC-0A:CMCC 120.196.100.99 2 4 132 1512 200
1363154400022 13926251106 5C-0E-8B-8B-B1-50:CMCC 120.197.40.4 4 0 240 0 200
1363157993044 18211575961 94-71-AC-CD-E6-18:CMCC-EASY 120.196.100.99 iface.qiyi.com 视频网站 15 12 1527 2106 200
1363157995074 84138413 5C-0E-8B-8C-E8-20:7DaysInn 120.197.40.4 122.72.52.12 20 16 4116 1432 200
1363157993055 13560439658 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 18 15 1116 954 200
1363157995033 15920133257 5C-0E-8B-C7-BA-20:CMCC 120.197.40.4 sug.so.360.cn 信息安全 20 20 3156 2936 200
1363157983019 13719199419 68-A1-B7-03-07-B1:CMCC-EASY 120.196.100.82 4 0 240 0 200
1363157984041 13660577991 5C-0E-8B-92-5C-20:CMCC-EASY 120.197.40.4 s19.cnzz.com 站点统计 24 9 6960 690 200
1363157973098 15013685858 5C-0E-8B-C7-F7-90:CMCC 120.197.40.4 rank.ie.sogou.com 搜索引擎 28 27 3659 3538 200
1363157986029 15989002119 E8-99-C4-4E-93-E0:CMCC-EASY 120.196.100.99 www.umeng.com 站点统计 3 3 1938 180 200
1363157992093 13560439658 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 15 9 918 4938 200
1363157986041 13480253104 5C-0E-8B-C7-FC-80:CMCC-EASY 120.197.40.4 3 3 180 180 200
1363157984040 13602846565 5C-0E-8B-8B-B6-00:CMCC 120.197.40.4 2052.flash2-http.qq.com 综合门户 15 12 1938 2910 200
1363157995093 13922314466 00-FD-07-A2-EC-BA:CMCC 120.196.100.82 img.qfc.cn 12 12 3008 3720 200
1363157982040 13502468823 5C-0A-5B-6A-0B-D4:CMCC-EASY 120.196.100.99 y0.ifengimg.com 综合门户 57 102 7335 110349 200
1363157986072 18320173382 84-25-DB-4F-10-1A:CMCC-EASY 120.196.100.99 input.shouji.sogou.com 搜索引擎 21 18 9531 2412 200
1363157990043 13925057413 00-1F-64-E1-E6-9A:CMCC 120.196.100.55 t3.baidu.com 搜索引擎 69 63 11058 48243 200
1363157988072 13760778710 00-FD-07-A4-7B-08:CMCC 120.196.100.82 2 2 120 120 200
1363157985066 13726238888 00-FD-07-A4-72-B8:CMCC 120.196.100.82 i02.c.aliimg.com 24 27 2481 24681 200
1363157993055 13560436666 C4-17-FE-BA-DE-D9:CMCC 120.196.100.99 18 15 1116 954 200
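Note that the lines do not all have the same number of columns: the URL and category fields are optional. The phone number is always the second field, while the upstream/downstream byte counts are best read relative to the end of the line. A small standalone sketch of the field layout (the class and method names are mine; it splits on whitespace for the demo, whereas the real file is tab-separated):

```java
public class LineLayoutDemo {
    // Returns { phone, upFlow, downFlow } for one log line.
    static String[] parse(String line) {
        String[] f = line.trim().split("\\s+"); // demo only; the real file splits on "\t"
        return new String[] { f[1], f[f.length - 3], f[f.length - 2] };
    }

    public static void main(String[] args) {
        // A line with the optional URL field, and one without:
        String withUrl = "1363157985066 13726230503 00-FD-07-A4-72-B8:CMCC 120.196.100.82 i02.c.aliimg.com 24 27 2481 24681 200";
        String noUrl = "1363157995052 13826544101 5C-0E-8B-C7-F1-E0:CMCC 120.197.40.4 4 0 264 0 200";
        System.out.println(String.join("\t", parse(withUrl))); // phone 13726230503, up 2481, down 24681
        System.out.println(String.join("\t", parse(noUrl)));   // phone 13826544101, up 264, down 0
    }
}
```

Indexing from the end is what keeps the short lines (no URL, no category) from putting the HTTP status code where a byte count should be.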

Step 2: write the code in Eclipse
//HdfsDao.java  utility class
public static Configuration config() {
    Configuration conf = new Configuration();
    return conf;
}

The newly created conf uses all default settings, which means Hadoop runs in local mode.

//FlowSumMR.java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class FlowSumMR {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration jobConf = HdfsDao.config();
        Job job = Job.getInstance(jobConf, "FlowSumMR");
        job.setJarByClass(FlowSumMR.class);
        job.setMapperClass(FlowSumMRMapper.class);
        job.setReducerClass(FlowSumMRReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(job, new Path("D:\\hadoopmaterial\\phones.txt"));
        FileOutputFormat.setOutputPath(job, new Path("D:\\hadoopmaterial\\phone_output"));
        boolean isDone = job.waitForCompletion(true);
        System.exit(isDone ? 0 : 1);
    }
}
//FlowSumMRMapper.java
public class FlowSumMRMapper extends Mapper<LongWritable, Text, Text, Text> {

    @Override
    protected void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        String[] split = value.toString().split("\t");
        // The phone number is always the second field. Because the URL and
        // category columns are optional, the line length varies, so the
        // up/down byte counts are taken relative to the end of the line.
        String outputKey = split[1];
        String outputValue = split[split.length - 3] + "\t" + split[split.length - 2];
        context.write(new Text(outputKey), new Text(outputValue));
    }
}
//FlowSumMRReducer.java
public class FlowSumMRReducer extends Reducer<Text, Text, Text, Text> {
    @Override
    protected void reduce(Text key, Iterable<Text> values, Context context)
            throws IOException, InterruptedException {
        int upFlow = 0;
        int downFlow = 0;
        for (Text value : values) {
            String[] split = value.toString().split("\t");
            upFlow += Integer.parseInt(split[0]);
            downFlow += Integer.parseInt(split[1]);
        }
        int sumFlow = upFlow + downFlow;
        context.write(key, new Text(upFlow + "\t" + downFlow + "\t" + sumFlow));
    }
}

Run the main method of FlowSumMR.java directly, and the job completes successfully in local mode.
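The reduce-side arithmetic is easy to sanity-check outside Hadoop. A minimal sketch (class and method names are mine) that mirrors the reducer's loop for the one phone number that appears twice in the sample data, 13560439658:

```java
import java.util.Arrays;
import java.util.List;

public class FlowAgg {
    // Mirrors the reducer: sums "up \t down" value strings for one phone number.
    static String sum(List<String> values) {
        int up = 0, down = 0;
        for (String v : values) {
            String[] parts = v.split("\t");
            up += Integer.parseInt(parts[0]);
            down += Integer.parseInt(parts[1]);
        }
        return up + "\t" + down + "\t" + (up + down);
    }

    public static void main(String[] args) {
        // 13560439658 appears twice in phones.txt: (1116, 954) and (918, 4938),
        // so its totals are up = 2034, down = 5892, sum = 7926.
        System.out.println("13560439658\t" + sum(Arrays.asList("1116\t954", "918\t4938")));
    }
}
```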

Step 3: submit the job to the remote MapReduce cluster

Download core-site.xml, hdfs-site.xml, mapred-site.xml, yarn-site.xml and log4j.properties from the Hadoop server or cluster and put them on the build path; I keep mine under a hadoop/ directory on the classpath, which is the path the addResource calls below refer to.



Modify the config method in HdfsDao.java:

public static Configuration config() {
    Configuration conf = new Configuration();
    conf.addResource("hadoop/core-site.xml");
    // talk to datanodes by hostname rather than by their (internal) IPs
    conf.set("dfs.client.use.datanode.hostname", "true");
    System.setProperty("HADOOP_USER_NAME", "hadoop");
    conf.addResource("hadoop/hdfs-site.xml");
    conf.addResource("hadoop/mapred-site.xml");
    conf.addResource("hadoop/yarn-site.xml");
    // submit the job to the remote YARN cluster instead of running locally
    conf.set("mapreduce.framework.name", "yarn");
    conf.set("yarn.resourcemanager.hostname", "master");
    return conf;
}

Note: conf.set("mapreduce.framework.name", "yarn") and conf.set("yarn.resourcemanager.hostname", "master") must both be set; without them the program simply hangs with no indication of why. I later found out that these two settings are what cause the job to be submitted to the remote YARN cluster.
The downloaded mapred-site.xml also needs a few changes:


    
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapred.remote.os</name>
        <value>Linux</value>
    </property>
    <property>
        <name>mapreduce.app-submission.cross-platform</name>
        <value>true</value>
    </property>
    <property>
        <name>mapreduce.application.classpath</name>
        <value>/usr/local/hadoop/etc/hadoop,
            /usr/local/hadoop/share/hadoop/common/*,
            /usr/local/hadoop/share/hadoop/common/lib/*,
            /usr/local/hadoop/share/hadoop/hdfs/*,
            /usr/local/hadoop/share/hadoop/hdfs/lib/*,
            /usr/local/hadoop/share/hadoop/mapreduce/*,
            /usr/local/hadoop/share/hadoop/mapreduce/lib/*,
            /usr/local/hadoop/share/hadoop/yarn/*,
            /usr/local/hadoop/share/hadoop/yarn/lib/*
        </value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
</configuration>

The same goes for yarn-site.xml:


    
    
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.local-dirs</name>
        <value>file:/usr/local/hadoop/tmp/yarn/nm</value>
    </property>
    <property>
        <name>yarn.application.classpath</name>
        <value>/usr/local/hadoop/etc/hadoop,
            /usr/local/hadoop/share/hadoop/common/*,
            /usr/local/hadoop/share/hadoop/common/lib/*,
            /usr/local/hadoop/share/hadoop/hdfs/*,
            /usr/local/hadoop/share/hadoop/hdfs/lib/*,
            /usr/local/hadoop/share/hadoop/mapreduce/*,
            /usr/local/hadoop/share/hadoop/mapreduce/lib/*,
            /usr/local/hadoop/share/hadoop/yarn/*,
            /usr/local/hadoop/share/hadoop/yarn/lib/*
        </value>
    </property>
</configuration>

The FlowSumMR class also needs a small change, pointing the input and output paths at HDFS:

public class FlowSumMR {
    public static void main(String[] args) throws IOException, ClassNotFoundException, InterruptedException {
        Configuration jobConf = HdfsDao.config();
        Job job = Job.getInstance(jobConf, "FlowSumMR");
        job.setJarByClass(FlowSumMR.class);
        job.setMapperClass(FlowSumMRMapper.class);
        job.setReducerClass(FlowSumMRReducer.class);
        job.setOutputKeyClass(Text.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.setInputPaths(job, new Path("/a/phones.txt"));
        FileOutputFormat.setOutputPath(job, new Path("/a/flow/output_sum"));
        boolean isDone = job.waitForCompletion(true);
        System.exit(isDone ? 0 : 1);
    }
}
Step 4: package and deploy

This example is managed with Maven; when packaging, remember to bundle the dependency jars. The relevant pom.xml build configuration:


        
            
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <configuration>
                <source>${java.version}</source>
                <target>${java.version}</target>
            </configuration>
        </plugin>
        <plugin>
            <artifactId>maven-assembly-plugin</artifactId>
            <configuration>
                <appendAssemblyId>false</appendAssemblyId>
                <descriptorRefs>
                    <descriptorRef>jar-with-dependencies</descriptorRef>
                </descriptorRefs>
                <archive>
                    <manifest>
                        <mainClass>com.jiangxl.hadoop.flowcount.FlowSumMR</mainClass>
                    </manifest>
                </archive>
            </configuration>
            <executions>
                <execution>
                    <id>make-assembly</id>
                    <phase>package</phase>
                    <goals>
                        <goal>assembly</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
    </plugins>
</build>

Upload the packaged jar to the Hadoop server and run it with hadoop jar ***.jar; this produces the correct result. Note that the output path given to FileOutputFormat.setOutputPath(job, new Path("/a/flow/output_sum")) must not already exist, otherwise the job fails.
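To sidestep that failure, one option is to delete the output directory from code before submitting. This is not in the original article's code; it is a sketch using the standard Hadoop FileSystem API (the helper class name is mine):

```java
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class OutputCleaner {
    // Deletes the job output directory if it already exists,
    // so a re-run does not fail with "output directory already exists".
    static void deleteIfExists(Configuration conf, String dir) throws IOException {
        FileSystem fs = FileSystem.get(conf);
        Path out = new Path(dir);
        if (fs.exists(out)) {
            fs.delete(out, true); // true = recursive
        }
    }
}
```

Calling OutputCleaner.deleteIfExists(jobConf, "/a/flow/output_sum") in main, before job.waitForCompletion(true), makes re-runs painless; be careful that the path really is a disposable output directory.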

Step 5: debugging locally against the remote YARN

With the code so far, running the main function locally throws a ClassNotFoundException. The fix is to add the following to FlowSumMR: jobConf.set("mapred.jar", "D:\\workspace-test\\WordCount\\target\\WordCount-0.0.1-SNAPSHOT.jar");
Note that this is the path of the jar built locally on Windows, not a path on the Linux server (it is only needed for local debugging; the real deployment does not need this setting).

Summary

Note that all of the addresses in this article are hostnames rather than raw IP addresses, and the namenode and datanodes also communicate with each other by hostname. The line conf.set("dfs.client.use.datanode.hostname", "true") makes the DFS client use hostnames as well; without it, HDFS reads misbehave.
The hostname-to-IP mappings must be configured in the hosts file on both Windows and Linux, and on Linux each machine's own hostname must also be set to the expected name.
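As a sketch, the hosts entries look like this on both sides (Windows: C:\Windows\System32\drivers\etc\hosts, Linux: /etc/hosts). The IP addresses and worker hostnames below are placeholders; only "master" is a hostname actually used in this article:

```
192.168.1.100  master
192.168.1.101  slave1
192.168.1.102  slave2
```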
Reference: https://blog.csdn.net/qq_19648191/article/details/56684268
