Problem 1:
The following message appears:
Java HotSpot(TM) Client VM warning: You have loaded library /usr/local/hadoop/lib/native/libhadoop.so.1.0.0 which might have disabled stack guard. The VM will try to fix the stack guard now.
It's highly recommended that you fix the library with 'execstack -c
Solution:
Edit the hadoop-env.sh file under $HADOOP_HOME/etc/hadoop
and add the following two lines:
export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
export HADOOP_OPTS="-Djava.library.path=${HADOOP_HOME}/lib"
Here ${HADOOP_HOME} must be written out as the actual installation path, for example:
export HADOOP_COMMON_LIB_NATIVE_DIR=/usr/local/hadoop-2.7.3/lib/native
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true -Djava.library.path=/usr/local/hadoop-2.7.3/lib"
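To check that the native library is actually picked up after restarting the daemons, Hadoop 2.x ships a checknative command (a quick sanity check, not part of the original fix):
# each entry should report a library path instead of "false"
hadoop checknative -a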
Problem 2:
The following message appears:
17/05/18 13:26:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Solution:
Edit the log4j.properties file under $HADOOP_HOME/etc/hadoop and add the following line:
log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR
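One way to append the property from the shell (editing the file by hand works just as well):
echo "log4j.logger.org.apache.hadoop.util.NativeCodeLoader=ERROR" >> $HADOOP_HOME/etc/hadoop/log4j.properties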
Problem 3: cannot view the cluster through the web UI
Even with the configuration files confirmed to be correct, the cluster status cannot be viewed in the browser.
Check whether YARN has been started; it can be started with start-yarn.sh. Then open http://Master:8088/cluster to view the cluster.
Open http://Master:50070 to check the NameNode and DataNode status and the state of HDFS.
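If the pages still do not open, it usually helps to check which daemons are actually running; a minimal check on the Master node with the standard scripts (not part of the original notes):
# start HDFS and YARN, then list the running Java daemons
start-dfs.sh
start-yarn.sh
jps    # the Master should show NameNode, SecondaryNameNode and ResourceManager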
Problem 4: java.io.IOException: No FileSystem for scheme: hdfs
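No fix is recorded in the notes for this one. A common cause is that the hadoop-hdfs jar, which registers the hdfs:// scheme, is missing from the project classpath; a sketch of the usual remedy, following the same project-lib layout as Problems 5 and 6 below (the target path is only a placeholder):
# copy the HDFS client jar next to the other Hadoop jars used by the project
cp $HADOOP_HOME/share/hadoop/hdfs/hadoop-hdfs-2.7.3.jar /path/to/project/lib/
If the project is instead packaged as a single shaded jar, the META-INF/services/org.apache.hadoop.fs.FileSystem entries of hadoop-common and hadoop-hdfs have to be merged, otherwise the hdfs provider is still not visible to the ServiceLoader.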
Problem 5:
WARNING: Cannot load filesystem
java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.hdfs.web.WebHdfsFileSystem could not be instantiated
at java.util.ServiceLoader.fail(ServiceLoader.java:232)
at java.util.ServiceLoader.access$100(ServiceLoader.java:185)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:384)
at java.util.ServiceLoader$LazyIterator.next(ServiceLoader.java:404)
at java.util.ServiceLoader$1.next(ServiceLoader.java:480)
at org.apache.hadoop.fs.FileSystem.loadFileSystems(FileSystem.java:2631)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2650)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:172)
at util.Utils.init(Utils.java:361)
at init.InitServlet.<init>
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.catalina.core.DefaultInstanceManager.newInstance(DefaultInstanceManager.java:120)
at org.apache.catalina.core.StandardWrapper.loadServlet(StandardWrapper.java:1041)
at org.apache.catalina.core.StandardWrapper.load(StandardWrapper.java:980)
at org.apache.catalina.core.StandardContext.loadOnStartup(StandardContext.java:4819)
at org.apache.catalina.core.StandardContext.startInternal(StandardContext.java:5129)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1425)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1415)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:941)
at org.apache.catalina.core.StandardHost.startInternal(StandardHost.java:839)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1425)
at org.apache.catalina.core.ContainerBase$StartChild.call(ContainerBase.java:1415)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at org.apache.tomcat.util.threads.InlineExecutorService.execute(InlineExecutorService.java:75)
at java.util.concurrent.AbstractExecutorService.submit(AbstractExecutorService.java:134)
at org.apache.catalina.core.ContainerBase.startInternal(ContainerBase.java:941)
at org.apache.catalina.core.StandardEngine.startInternal(StandardEngine.java:258)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.StandardService.startInternal(StandardService.java:422)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.core.StandardServer.startInternal(StandardServer.java:770)
at org.apache.catalina.util.LifecycleBase.start(LifecycleBase.java:183)
at org.apache.catalina.startup.Catalina.start(Catalina.java:657)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.catalina.startup.Bootstrap.start(Bootstrap.java:355)
at org.apache.catalina.startup.Bootstrap.main(Bootstrap.java:495)
Caused by: java.lang.NoClassDefFoundError: org/codehaus/jackson/map/ObjectMapper
at org.apache.hadoop.hdfs.web.WebHdfsFileSystem.
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at java.lang.Class.newInstance(Class.java:442)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
... 49 more
Caused by: java.lang.ClassNotFoundException: org.codehaus.jackson.map.ObjectMapper
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1269)
at org.apache.catalina.loader.WebappClassLoaderBase.loadClass(WebappClassLoaderBase.java:1104)
... 56 more
Jul 06, 2017 12:27:10 PM org.apache.hadoop.fs.FileSystem loadFileSystems
WARNING: Cannot load filesystem
java.util.ServiceConfigurationError: org.apache.hadoop.fs.FileSystem: Provider org.apache.hadoop.hdfs.web.SWebHdfsFileSystem could not be instantiated
(... identical ServiceLoader/Tomcat startup frames as in the previous warning, omitted ...)
Caused by: java.lang.NoClassDefFoundError: org.apache.hadoop.hdfs.web.WebHdfsFileSystem
at java.lang.Class.getDeclaredConstructors0(Native Method)
at java.lang.Class.privateGetDeclaredConstructors(Class.java:2671)
at java.lang.Class.getConstructor0(Class.java:3075)
at java.lang.Class.newInstance(Class.java:412)
at java.util.ServiceLoader$LazyIterator.nextService(ServiceLoader.java:380)
... 49 more
Solution: copy the following four jars from $HADOOP_HOME/share/hadoop/common/lib
jackson-core-asl-1.9.13.jar
jackson-jaxrs-1.9.13.jar
jackson-mapper-asl-1.9.13.jar
jackson-xc-1.9.13.jar
into the project's lib directory and add them to the build path (a copy sketch follows).
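A possible one-liner for the copy step (the project path is only a placeholder):
cp $HADOOP_HOME/share/hadoop/common/lib/jackson-{core-asl,jaxrs,mapper-asl,xc}-1.9.13.jar /path/to/project/lib/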
Problem 6:
java.lang.ClassNotFoundException: org.apache.htrace.SamplerBuilder
Solution: copy htrace-core-3.1.0-incubating.jar from $HADOOP_HOME/share/hadoop/common/lib into the project's lib directory in the same way.
Problem 7:
When starting the cluster, the following errors appear:
Desktop: Error: JAVA_HOME is not set and could not be found.
Server3: Error: JAVA_HOME is not set and could not be found.
Server1: Error: JAVA_HOME is not set and could not be found.
Server2: Error: JAVA_HOME is not set and could not be found.
Solution: first check that the Java environment variables are set correctly, then check the ${HADOOP_HOME}/etc/hadoop/hadoop-env.sh file.
It contains the line
export JAVA_HOME=${JAVA_HOME}
Change it to an explicit JDK path, for example
export JAVA_HOME=/usr/local/jdk1.8.0_111
Since every node reports the error, the file has to be fixed on each node (or the corrected file copied to all of them, as sketched below).
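One way to push the corrected file to every node, assuming the passwordless SSH already configured for the cluster and the same install path on all hosts (hostnames taken from the error messages above):
for h in Desktop Server1 Server2 Server3; do
  scp $HADOOP_HOME/etc/hadoop/hadoop-env.sh $h:$HADOOP_HOME/etc/hadoop/
done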
Problem 8:
The following lines appear repeatedly:
18/05/29 17:14:34 INFO mapred.ClientServiceDelegate: Application state is completed. FinalApplicationStatus=SUCCEEDED. Redirecting to job history server
18/05/29 17:14:35 INFO ipc.Client: Retrying connect to server: Desktop/192.168.244.3:10020. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/05/29 17:14:36 INFO ipc.Client: Retrying connect to server: Desktop/192.168.244.3:10020. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/05/29 17:14:37 INFO ipc.Client: Retrying connect to server: Desktop/192.168.244.3:10020. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
18/05/29 17:14:38 INFO ipc.Client: Retrying connect to server: Desktop/192.168.244.3:10020. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
Solution: configure and start the JobHistory server; running mr-jobhistory-daemon.sh start historyserver (on the node the clients are trying to reach, Desktop in the log above) resolves it.
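A quick way to confirm the daemon is up on that node (port 10020 is the default mapreduce.jobhistory.address):
mr-jobhistory-daemon.sh start historyserver
jps    # a JobHistoryServer process should now be listed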
Problem 9:
Starting namenodes on [Master]
Master: starting namenode, logging to /usr/local/bigdata/hadoop-2.7.3/logs/hadoop-root-namenode-Master.out
Slave3: bash: /usr/local/bigdata/hadoop-2.7.3/sbin/hadoop-daemon.sh: Permission denied
Slave2: bash: /usr/local/bigdata/hadoop-2.7.3/sbin/hadoop-daemon.sh: Permission denied
Slave1: bash: /usr/local/bigdata/hadoop-2.7.3/sbin/hadoop-daemon.sh: Permission denied
Solution:
(1) Check that passwordless SSH login is working;
(2) If passwordless login is fine, run the following on the nodes that report Permission denied (Slave1-3 here):
sudo chmod 777 /usr/local/bigdata/hadoop-2.7.3/* -R
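chmod 777 works but is very broad; if the cluster runs under a dedicated account, a narrower alternative (the hadoop user and group names are only examples) is:
# run on each slave that reports Permission denied
sudo chown -R hadoop:hadoop /usr/local/bigdata/hadoop-2.7.3
sudo chmod -R u+x /usr/local/bigdata/hadoop-2.7.3/bin /usr/local/bigdata/hadoop-2.7.3/sbin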