Distcp data from Hadoop 1.0 to Hadoop 2.0

Error:
java.io.IOException: Check-sum mismatch between hftp://zw-tvhadoop-master:50070/user/hive/warehouse/pvlog_depth_rcfile/dt=20140306/_logs/history/job_201401231451_357062_1394133462287_tvhadoop_mapred-extend.jar and hdfs://10.10.34.69:8020/user/hive/warehouse/pvlog_depth_rcfile/.distcp.tmp.attempt_local2087680281_0001_m_000000_0.
        at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.compareCheckSums(RetriableFileCopyCommand.java:152)
        at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doCopy(RetriableFileCopyCommand.java:108)
        at org.apache.hadoop.tools.mapred.RetriableFileCopyCommand.doExecute(RetriableFileCopyCommand.java:83)
        at org.apache.hadoop.tools.util.RetriableCommand.execute(RetriableCommand.java:87)
        at org.apache.hadoop.tools.mapred.CopyMapper.copyFileWithRetry(CopyMapper.java:258)
        at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:229)
        at org.apache.hadoop.tools.mapred.CopyMapper.map(CopyMapper.java:45)
        at org.apache.hadoop.mapreduce.Mapper.run(Mapper.java:145)
        at org.apache.hadoop.mapred.MapTask.runNewMapper(MapTask.java:763)
        at org.apache.hadoop.mapred.MapTask.run(MapTask.java:339)
        at org.apache.hadoop.mapred.LocalJobRunner$Job$MapTaskRunnable.run(LocalJobRunner.java:235)
        at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:441)
        at java.util.concurrent.FutureTask$Sync.innerRun(FutureTask.java:303)
        at java.util.concurrent.FutureTask.run(FutureTask.java:138)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:662)

Solution:
1. hadoop distcp -skipcrccheck -update hftp://zw-tvhadoop-master:50070/user/hive/warehouse/pvlog_depth_rcfile/dt=20140306 hdfs://10.10.34.69:8020/user/hive/warehouse/pvlog_depth_rcfile
2. Add the following to hdfs-site.xml:
       <property>
             <name>dfs.checksum.type</name>
             <value>CRC32</value>
         </property>
  Notes on data migration between different Hadoop versions:  
  a. Use the hftp protocol for the source.  
  b. When the destination Hadoop cluster runs a newer version, it is best to set both the -skipcrccheck and the -update option. -skipcrccheck skips the FileChecksum comparison, because the version upgrade may produce different checksum values.
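The notes above can be sketched as a small shell wrapper around the distcp invocation. The host names and paths here are placeholders (not the real clusters from the log); substitute your own source and destination:

```shell
# Placeholder endpoints -- replace with your own clusters.
# Source must use the hftp protocol (read-only, version-independent).
SRC="hftp://source-master:50070/user/hive/warehouse/pvlog_depth_rcfile/dt=20140306"
# Destination uses the normal hdfs:// protocol of the newer cluster.
DST="hdfs://dest-namenode:8020/user/hive/warehouse/pvlog_depth_rcfile"

# -update   : only copy files that are missing or changed at the destination
# -skipcrccheck : suppress the FileChecksum comparison that fails across versions
CMD="hadoop distcp -skipcrccheck -update $SRC $DST"

echo "$CMD"
```

Before deciding to skip CRC verification entirely, you can run `hadoop fs -checksum <path>` against the same file on both clusters to confirm that the mismatch really comes from differing checksum algorithms rather than a corrupted copy.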
