Fixing a Custom Serialization Error in MapReduce

By 袁润和
As a beginner, my first instinct on any execution error was to search for it online.
When I ran my MapReduce job, the output directory was created automatically, but the files in it were empty: no results at all. The answers I found online claimed the problem was a missing import of **import org.apache.hadoop.io.Text;**, but I checked my code and that import was already there. After more digging I finally found the real cause, and below I walk through how I solved it.

This is the error the job reported:

19/12/13 21:11:21 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
19/12/13 21:11:22 INFO input.FileInputFormat: Total input paths to process : 1
19/12/13 21:11:22 INFO mapreduce.JobSubmitter: number of splits:1
19/12/13 21:11:22 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1576300081125_0002
19/12/13 21:11:23 INFO impl.YarnClientImpl: Submitted application application_1576300081125_0002
19/12/13 21:11:23 INFO mapreduce.Job: The url to track the job: http://localhost:8088/proxy/application_1576300081125_0002/
19/12/13 21:11:23 INFO mapreduce.Job: Running job: job_1576300081125_0002
19/12/13 21:11:34 INFO mapreduce.Job: Job job_1576300081125_0002 running in uber mode : false
19/12/13 21:11:34 INFO mapreduce.Job:  map 0% reduce 0%
19/12/13 21:11:41 INFO mapreduce.Job:  map 100% reduce 0%
19/12/13 21:11:46 INFO mapreduce.Job: Task Id : attempt_1576300081125_0002_r_000000_0, Status : FAILED
Error: java.lang.RuntimeException: java.lang.NoSuchMethodException: phoneFlow.flowCount$flow.<init>()
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:66)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
	at org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKeyValue(ReduceContextImpl.java:146)
	at org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKey(ReduceContextImpl.java:121)
	at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.nextKey(WrappedReducer.java:302)
	at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:170)
	at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.NoSuchMethodException: phoneFlow.flowCount$flow.<init>()
	at java.lang.Class.getConstructor0(Class.java:3082)
	at java.lang.Class.getDeclaredConstructor(Class.java:2178)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:128)
	... 13 more

19/12/13 21:11:52 INFO mapreduce.Job: Task Id : attempt_1576300081125_0002_r_000000_1, Status : FAILED
Error: java.lang.RuntimeException: java.lang.NoSuchMethodException: phoneFlow.flowCount$flow.<init>()
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:66)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
	at org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKeyValue(ReduceContextImpl.java:146)
	at org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKey(ReduceContextImpl.java:121)
	at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.nextKey(WrappedReducer.java:302)
	at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:170)
	at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.NoSuchMethodException: phoneFlow.flowCount$flow.<init>()
	at java.lang.Class.getConstructor0(Class.java:3082)
	at java.lang.Class.getDeclaredConstructor(Class.java:2178)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:128)
	... 13 more

19/12/13 21:11:57 INFO mapreduce.Job: Task Id : attempt_1576300081125_0002_r_000000_2, Status : FAILED
Error: java.lang.RuntimeException: java.lang.NoSuchMethodException: phoneFlow.flowCount$flow.<init>()
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:134)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:66)
	at org.apache.hadoop.io.serializer.WritableSerialization$WritableDeserializer.deserialize(WritableSerialization.java:42)
	at org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKeyValue(ReduceContextImpl.java:146)
	at org.apache.hadoop.mapreduce.task.ReduceContextImpl.nextKey(ReduceContextImpl.java:121)
	at org.apache.hadoop.mapreduce.lib.reduce.WrappedReducer$Context.nextKey(WrappedReducer.java:302)
	at org.apache.hadoop.mapreduce.Reducer.run(Reducer.java:170)
	at org.apache.hadoop.mapred.ReduceTask.runNewReducer(ReduceTask.java:627)
	at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:389)
	at org.apache.hadoop.mapred.YarnChild$2.run(YarnChild.java:164)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1762)
	at org.apache.hadoop.mapred.YarnChild.main(YarnChild.java:158)
Caused by: java.lang.NoSuchMethodException: phoneFlow.flowCount$flow.<init>()
	at java.lang.Class.getConstructor0(Class.java:3082)
	at java.lang.Class.getDeclaredConstructor(Class.java:2178)
	at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:128)
	... 13 more

19/12/13 21:12:05 INFO mapreduce.Job:  map 100% reduce 100%
19/12/13 21:12:05 INFO mapreduce.Job: Job job_1576300081125_0002 failed with state FAILED due to: Task failed task_1576300081125_0002_r_000000
Job failed as tasks failed. failedMaps:0 failedReduces:1

19/12/13 21:12:05 INFO mapreduce.Job: Counters: 37
	File System Counters
		FILE: Number of bytes read=0
		FILE: Number of bytes written=122680
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=164
		HDFS: Number of bytes written=0
		HDFS: Number of read operations=3
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=0
	Job Counters 
		Failed reduce tasks=4
		Launched map tasks=1
		Launched reduce tasks=4
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=4912
		Total time spent by all reduces in occupied slots (ms)=12100
		Total time spent by all map tasks (ms)=4912
		Total time spent by all reduce tasks (ms)=12100
		Total vcore-milliseconds taken by all map tasks=4912
		Total vcore-milliseconds taken by all reduce tasks=12100
		Total megabyte-milliseconds taken by all map tasks=5029888
		Total megabyte-milliseconds taken by all reduce tasks=12390400
	Map-Reduce Framework
		Map input records=6
		Map output records=6
		Map output bytes=102
		Map output materialized bytes=120
		Input split bytes=93
		Combine input records=0
		Spilled Records=6
		Failed Shuffles=0
		Merged Map outputs=0
		GC time elapsed (ms)=91
		CPU time spent (ms)=1210
		Physical memory (bytes) snapshot=213499904
		Virtual memory (bytes) snapshot=2080657408
		Total committed heap usage (bytes)=158793728
	File Input Format Counters 
		Bytes Read=71

Below is my code. At first I had not written a no-argument constructor in my custom WritableComparable class, so the job kept failing; once I added the no-arg constructor, it ran successfully. The reason is that Hadoop creates key/value objects by reflection during deserialization, and that reflective instantiation requires a no-arg constructor.
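The stack trace points at `ReflectionUtils.newInstance` failing with `NoSuchMethodException ... <init>()`. Here is a minimal, Hadoop-free sketch (class and method names are my own, for illustration only) of what the framework does to your key class during deserialization, and why the no-arg constructor matters:

```java
import java.lang.reflect.Constructor;

// Illustration only: Score stands in for a custom Writable key class.
public class NoArgDemo {
	public static class Score {
		public int yuwen, shuxue, total;

		// Remove this no-arg constructor and createByReflection() below throws
		// NoSuchMethodException -- the same error seen in the job log above.
		public Score() {}

		public Score(int yw, int sx) { yuwen = yw; shuxue = sx; total = yw + sx; }
	}

	// Mirrors what Hadoop's ReflectionUtils.newInstance() does before it
	// calls readFields() on the freshly created, zero-initialized object.
	public static Score createByReflection() throws Exception {
		Constructor<Score> c = Score.class.getDeclaredConstructor();
		return c.newInstance();
	}

	public static void main(String[] args) throws Exception {
		System.out.println("total = " + createByReflection().total); // prints "total = 0"
	}
}
```

Note that a nested key class must also be declared `static`: a non-static inner class has no true no-arg constructor, because its synthesized constructor takes the enclosing instance as a hidden parameter.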

package grade;

import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;
import org.apache.hadoop.util.GenericOptionsParser;

/*
 Input data: student ID, Chinese score, math score
 1001 88 75
 1002 79 90
 1003 86 85
 1004 75 72
 1005 86 88
 1006 81 88
 */

// Requirement: sort by total score in ascending order; when totals are equal,
// sort by math score in ascending order.
// Map output is automatically sorted by key in ascending order.

public class selfSort {

	public static class score implements WritableComparable<score> {

		public int yuwen;   // Chinese score
		public int shuxue;  // math score
		public int total;

		// A no-arg constructor is required: Hadoop creates instances by
		// reflection during deserialization. Initialize the fields here.
		public score() {
			this.yuwen = 0;
			this.shuxue = 0;
			this.total = 0;
		}

		public score(int yw, int sx) {
			this.yuwen = yw;
			this.shuxue = sx;
			this.total = yw + sx;
		}

		@Override
		public void readFields(DataInput arg0) throws IOException {
			// Read the fields in exactly the same order they were written.
			this.yuwen = arg0.readInt();
			this.shuxue = arg0.readInt();
			this.total = arg0.readInt();
		}

		@Override
		public void write(DataOutput arg0) throws IOException {
			arg0.writeInt(this.yuwen);
			arg0.writeInt(this.shuxue);
			arg0.writeInt(this.total);
		}

		// Override toString() to control how this key is formatted in the output.
		@Override
		public String toString() {
			return this.yuwen + " " + this.shuxue + " " + this.total;
		}

		// Custom sort order: by math score when totals are equal, otherwise
		// by total score. This method is the key to the custom sorting.
		@Override
		public int compareTo(score arg0) {
			// Compare this object against the arg0 parameter.
			if (this.total == arg0.total) {
				// Totals are equal, so compare math scores.
				// This sorts ascending; for descending order, reverse the
				// operands: return arg0.shuxue - this.shuxue;
				return this.shuxue - arg0.shuxue;
			} else {
				// Totals differ; likewise, reverse the operands to sort descending.
				return this.total - arg0.total;
			}
		}

	}

	//mapper
	public static class sortMapper extends Mapper<Object, Text, score, Text> {
		public void map(Object key, Text value, Context context) throws IOException, InterruptedException {

			String str = value.toString();
			String[] ss = str.split(" ");
			String id = ss[0];
			String yw = ss[1];
			String sx = ss[2];

			int iyw = Integer.valueOf(yw);
			int isx = Integer.valueOf(sx);

			Text t = new Text();
			t.set(id);

			score s = new score(iyw, isx);

			// Map output is sorted by key in ascending order.
			// Emitting <studentId, score object> would sort by student ID, but
			// we need to sort by total score first, then by math score.
			// A complex ordering like this requires a custom key type, so the
			// output form is <score object, studentId>.
			context.write(s, t);
		}
	}

	/* mapper output
	 (75 72 147) 1004
	 (88 75 163) 1001
	 (81 88 169) 1006
	 (79 90 169) 1002
	 (86 85 171) 1003
	 (86 88 174) 1005
	 */

	//reducer
	/* reducer input
	 (75 72 147) [1004]
	 (88 75 163) [1001]
	 (81 88 169) [1006]
	 (79 90 169) [1002]
	 (86 85 171) [1003]
	 (86 88 174) [1005]
	 */

	// The reducer's output form is <studentId, score object>.

	/*
	 reducer output
	[1004] (75 72 147)
	[1001] (88 75 163)
	[1006] (81 88 169)
	[1002] (79 90 169)
	[1003] (86 85 171)
	[1005] (86 88 174)
	 */

	public static class sortReducer extends Reducer<score, Text, Text, score> {
		public void reduce(score key, Iterable<Text> values, Context context) throws IOException, InterruptedException {
			// values is an iterable, so loop over it to emit each student ID.
			for (Text p : values) {
				context.write(p, key);
			}
		}
	}

	//main
	public static void main(String[] args) throws Exception {
		Configuration conf = new Configuration();

		String[] otherArgs = new GenericOptionsParser(conf, args).getRemainingArgs();
		if (otherArgs.length != 2) {
			System.err.println("Usage: selfSort <in> <out>");
			System.exit(2);
		}
		Job job = Job.getInstance(conf, "self sort");
		job.setJarByClass(selfSort.class);
		// Because the map output types differ from the reduce output types,
		// the map output key/value classes must be set explicitly:
		// map output key
		job.setMapOutputKeyClass(score.class);
		// map output value
		job.setMapOutputValueClass(Text.class);

		job.setMapperClass(sortMapper.class);
		job.setReducerClass(sortReducer.class);
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(score.class);
		FileInputFormat.addInputPath(job, new Path(otherArgs[0]));
		FileOutputFormat.setOutputPath(job, new Path(otherArgs[1]));
		System.exit(job.waitForCompletion(true) ? 0 : 1);
	}
}
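To check the serialization and ordering logic without a cluster, here is a small Hadoop-free harness (my own sketch, not part of the job) that round-trips the three ints the way write()/readFields() do, and sorts the sample rows with the same compareTo() logic:

```java
import java.io.*;
import java.util.Arrays;

public class ScoreCheck {
	// Same field layout and ordering logic as the score class above.
	static class S implements Comparable<S> {
		int yuwen, shuxue, total;

		S() {}
		S(int yw, int sx) { yuwen = yw; shuxue = sx; total = yw + sx; }

		// Serialize the three ints, as score.write() does.
		void write(DataOutput out) throws IOException {
			out.writeInt(yuwen); out.writeInt(shuxue); out.writeInt(total);
		}
		// Deserialize in the same order, as score.readFields() does.
		void readFields(DataInput in) throws IOException {
			yuwen = in.readInt(); shuxue = in.readInt(); total = in.readInt();
		}
		public int compareTo(S o) {
			if (total == o.total) return shuxue - o.shuxue;
			return total - o.total;
		}
		public String toString() { return yuwen + " " + shuxue + " " + total; }
	}

	public static void main(String[] args) throws IOException {
		// Round trip: serialize then deserialize, as the shuffle would.
		S original = new S(88, 75);
		ByteArrayOutputStream buf = new ByteArrayOutputStream();
		original.write(new DataOutputStream(buf));
		S copy = new S();
		copy.readFields(new DataInputStream(new ByteArrayInputStream(buf.toByteArray())));
		System.out.println(copy); // prints "88 75 163"

		// Ordering check on the sample data: ascending total, math breaks ties.
		S[] rows = { new S(88, 75), new S(79, 90), new S(86, 85),
		             new S(75, 72), new S(86, 88), new S(81, 88) };
		Arrays.sort(rows);
		System.out.println(rows[0] + " | " + rows[rows.length - 1]); // prints "75 72 147 | 86 88 174"
	}
}
```

If the round-tripped fields come back garbled, the usual culprit is a mismatch between the write order in write() and the read order in readFields().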


Please notify the author before reposting.
