[sqoop][MySQL import into Hadoop] ipc.Client: Retrying connect to server: spark002/10.211.55.12:60587

Error 1:
17/03/29 11:18:53 INFO orm.CompilationManager: Writing jar file: /tmp/sqoop-root/compile/bde23e1bdcb658a74784c760aeca9881/fda_djml.jar
17/03/29 11:18:53 INFO manager.DirectMySQLManager: Beginning mysqldump fast path import
17/03/29 11:18:53 INFO mapreduce.ImportJobBase: Beginning import of fda_djml
17/03/29 11:18:53 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/03/29 11:18:53 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
17/03/29 11:18:54 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
17/03/29 11:18:54 INFO client.RMProxy: Connecting to ResourceManager at master/10.211.55.10:8032
17/03/29 11:18:56 INFO db.DBInputFormat: Using read commited transaction isolation
17/03/29 11:18:56 INFO db.DataDrivenDBInputFormat: BoundingValsQuery: SELECT MIN(`djguid`), MAX(`djguid`) FROM fda_djml
17/03/29 11:18:56 WARN db.TextSplitter: Generating splits for a textual index column.
17/03/29 11:18:56 WARN db.TextSplitter: If your database sorts in a case-insensitive order, this may result in a partial import or duplicate records.
17/03/29 11:18:56 WARN db.TextSplitter: You are strongly encouraged to choose an integral split column.
17/03/29 11:18:56 INFO mapreduce.JobSubmitter: number of splits:6
17/03/29 11:18:56 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1490572354639_0017
17/03/29 11:18:56 INFO impl.YarnClientImpl: Submitted application application_1490572354639_0017
17/03/29 11:18:57 INFO mapreduce.Job: The url to track the job: http://master:8088/proxy/application_1490572354639_0017/
17/03/29 11:18:57 INFO mapreduce.Job: Running job: job_1490572354639_0017
17/03/29 11:19:09 INFO mapreduce.Job: Job job_1490572354639_0017 running in uber mode : false
17/03/29 11:19:09 INFO mapreduce.Job:  map 0% reduce 0%
17/03/29 11:19:16 INFO mapreduce.Job:  map 33% reduce 0%
17/03/29 11:19:23 INFO mapreduce.Job:  map 100% reduce 0%
17/03/29 11:19:28 INFO ipc.Client: Retrying connect to server: spark002/10.211.55.12:60587. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
17/03/29 11:19:29 INFO ipc.Client: Retrying connect to server: spark002/10.211.55.12:60587. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
17/03/29 11:19:30 INFO ipc.Client: Retrying connect to server: spark002/10.211.55.12:60587. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=3, sleepTime=1000 MILLISECONDS)
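These retries are the job client trying to reach the (already finished) ApplicationMaster on spark002 at an ephemeral port; when it cannot fall back to the JobHistory server, it keeps retrying that dead endpoint. A quick sanity check, assuming the master:10020 / master:19888 addresses used in this cluster's mapred-site.xml:

```shell
# Run on the master node: is the JobHistory server process up at all?
jps | grep JobHistoryServer

# Run from a worker node (e.g. spark002): are the JobHistory RPC and
# web ports reachable? master:10020 / master:19888 are this cluster's
# configured values.
nc -z -v master 10020
nc -z -v master 19888
```

If the `jps` check shows no JobHistoryServer, or `nc` cannot connect, the configuration fix below applies.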

Error 2:

2017-03-29 11:19:22,038 ERROR [Thread-68] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Exception while unregistering 
java.lang.NullPointerException
	at org.apache.hadoop.mapreduce.v2.util.MRWebAppUtil.getApplicationWebURLOnJHSWithoutScheme(MRWebAppUtil.java:135)
	at org.apache.hadoop.mapreduce.v2.util.MRWebAppUtil.getApplicationWebURLOnJHSWithScheme(MRWebAppUtil.java:150)
	at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.doUnregistration(RMCommunicator.java:212)
	at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.unregister(RMCommunicator.java:182)
	at org.apache.hadoop.mapreduce.v2.app.rm.RMCommunicator.serviceStop(RMCommunicator.java:255)
	at org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator.serviceStop(RMContainerAllocator.java:272)
	at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
	at org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$ContainerAllocatorRouter.serviceStop(MRAppMaster.java:821)
	at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
	at org.apache.hadoop.service.ServiceOperations.stop(ServiceOperations.java:52)
	at org.apache.hadoop.service.ServiceOperations.stopQuietly(ServiceOperations.java:80)
	at org.apache.hadoop.service.CompositeService.stop(CompositeService.java:157)
	at org.apache.hadoop.service.CompositeService.serviceStop(CompositeService.java:131)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.serviceStop(MRAppMaster.java:1497)
	at org.apache.hadoop.service.AbstractService.stop(AbstractService.java:221)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.stop(MRAppMaster.java:1094)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster.shutDownJob(MRAppMaster.java:556)
	at org.apache.hadoop.mapreduce.v2.app.MRAppMaster$JobFinishEventHandler$1.run(MRAppMaster.java:603)
2017-03-29 11:19:22,040 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.rm.RMContainerAllocator: Final Stats: PendingReds:0 ScheduledMaps:0 ScheduledReds:0 AssignedMaps:4 AssignedReds:0 CompletedMaps:6 CompletedReds:0 ContAlloc:9 ContRel:3 HostLocal:0 RackLocal:0
2017-03-29 11:19:22,040 INFO [Thread-68] org.apache.hadoop.mapreduce.v2.app.MRAppMaster: Skipping cleaning up the staging dir. assuming AM will be retried.
2017-03-29 11:19:22,040 INFO [Thread-68] org.apache.hadoop.ipc.Server: Stopping server on 53611

Solution:

The NullPointerException above is thrown in MRWebAppUtil.getApplicationWebURLOnJHSWithoutScheme while the ApplicationMaster builds the JobHistory web URL, so a missing or wrong JobHistory configuration is the usual cause. Edit mapred-site.xml and make sure that
mapreduce.jobhistory.address and
mapreduce.jobhistory.webapp.address are set correctly, and add a
mapreduce.application.classpath entry:

<configuration>

    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>

    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>

    <property>
        <name>mapreduce.jobhistory.intermediate-done-dir</name>
        <value>/mr-history/tmp</value>
    </property>

    <property>
        <name>mapreduce.jobhistory.done-dir</name>
        <value>/mr-history/done</value>
    </property>

    <property>
        <name>mapreduce.application.classpath</name>
        <value>
            /opt/hadoop-2.5.2/etc/hadoop,
            /opt/hadoop-2.5.2/share/hadoop/common/*,
            /opt/hadoop-2.5.2/share/hadoop/common/lib/*,
            /opt/hadoop-2.5.2/share/hadoop/hdfs/*,
            /opt/hadoop-2.5.2/share/hadoop/hdfs/lib/*,
            /opt/hadoop-2.5.2/share/hadoop/mapreduce/*,
            /opt/hadoop-2.5.2/share/hadoop/mapreduce/lib/*,
            /opt/hadoop-2.5.2/share/hadoop/yarn/*,
            /opt/hadoop-2.5.2/share/hadoop/yarn/lib/*
        </value>
    </property>

</configuration>
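With the configuration distributed to every node, the JobHistory server must actually be running for the ApplicationMaster to unregister cleanly. A sketch of the remaining steps; the install path matches the /opt/hadoop-2.5.2 classpath above, while the MySQL connection details are hypothetical placeholders and the sqoop flags are reconstructed from the log (DirectMySQLManager, BoundingValsQuery on `djguid`), so they may differ from the original command:

```shell
# (Re)start the JobHistory server so master:10020 / master:19888 answer
/opt/hadoop-2.5.2/sbin/mr-jobhistory-daemon.sh stop historyserver
/opt/hadoop-2.5.2/sbin/mr-jobhistory-daemon.sh start historyserver

# Re-run the import; --direct and --split-by djguid correspond to the
# "mysqldump fast path" and MIN/MAX(`djguid`) lines in the log.
# <mysql-host>, <db>, <user> are placeholders, not values from this post.
sqoop import \
  --connect jdbc:mysql://<mysql-host>/<db> \
  --username <user> -P \
  --table fda_djml \
  --direct \
  --split-by djguid
```

Note the TextSplitter warning in the first log: if the table has an integral key, passing it to --split-by instead of the textual `djguid` avoids the risk of partial imports or duplicate records.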
