CDH version: 6.2.0
Hadoop version: 3.0.0
Cluster environment: Linux CentOS 7
① java.io.IOException: Port 9820 specified in URI hdfs://nameservice1:9820/user/root/.staging/job_1616116977910_0014/job.splitmetainfo but host 'nameservice1' is a logical (HA) namenode and does not use port information.
In my spare time I set up a Hadoop cluster with CDH 6.2.0, running on Linux virtual machines. A simple MapReduce job runs fine when launched directly on a cluster node, but when the same job is submitted from an IDE on Windows (IDEA in my case), it fails with:
java.io.IOException: Port 9820 specified in URI hdfs://nameservice1:9820/user/root/.staging/job_1616116977910_0014/job.splitmetainfo but host 'nameservice1' is a logical (HA) namenode and does not use port information.
From the error message, nameservice1 is a logical (HA) node, so it is not supposed to carry port 9820???
Troubleshooting: at first I suspected a mistake in core-site.xml. According to other people's blog posts, the port must not be written in the configuration, but my config files were pulled straight from the CDH cluster and contain no port at all. This is the actual configuration:
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://nameservice1</value>
</property>
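For context, nameservice1 is only a logical name. The client resolves it through the HA settings in hdfs-site.xml, which on CDH also come with the downloaded client configuration. They look roughly like the following; the host names below are placeholders, not my actual nodes:
<property>
    <name>dfs.nameservices</name>
    <value>nameservice1</value>
</property>
<property>
    <name>dfs.ha.namenodes.nameservice1</name>
    <value>namenode1,namenode2</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.nameservice1.namenode1</name>
    <value>nn-host-1:8020</value>
</property>
<property>
    <name>dfs.namenode.rpc-address.nameservice1.namenode2</name>
    <value>nn-host-2:8020</value>
</property>
<property>
    <name>dfs.client.failover.proxy.provider.nameservice1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>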
That was odd. I then went to look at the source code and found:
if (checkPort && ((AbstractNNFailoverProxyProvider) providerNN).useLogicalURI()) {
    int port = nameNodeUri.getPort();
    if (port > 0 && port != 9820) {
        throw new IOException("Port " + port + " specified in URI " + nameNodeUri
                + " but host '" + nameNodeUri.getHost()
                + "' is a logical (HA) namenode and does not use port information.");
    }
}
So the exception is thrown when the port taken from the NameNode URI is not 9820??? But the port I print out is clearly 9820???
Finally it occurred to me that I am running against CDH's Hadoop components. Could the problem be the dependencies in my POM? Here is my pom.xml:
<properties>
    <hadoop.version>3.0.0</hadoop.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    ...
Should I be using 3.0.0-cdh6.2.0 instead?
Sure enough, after changing it the error was gone. My guess at the reason: Apache Hadoop 3.0.0 changed the default NameNode RPC port to 9820 (later releases went back to 8020), while CDH 6 keeps 8020, so a client built against the vanilla 3.0.0 jars and a CDH 6.2.0 cluster disagree about the default port, and this HA port check blows up.
Updated dependencies in pom.xml, for example:
<properties>
    <hadoop.version>3.0.0-cdh6.2.0</hadoop.version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-client</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-common</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-hdfs</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-core</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-common</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
    <dependency>
        <groupId>org.apache.hadoop</groupId>
        <artifactId>hadoop-mapreduce-client-jobclient</artifactId>
        <version>${hadoop.version}</version>
    </dependency>
</dependencies>
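One note on the dependency change: the -cdh versions are not published to Maven Central, so as far as I know the Cloudera repository also has to be declared in pom.xml (or in settings.xml):
<repositories>
    <repository>
        <id>cloudera</id>
        <url>https://repository.cloudera.com/artifactory/cloudera-repos/</url>
    </repository>
</repositories>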
② Container exited with a non-zero exit code 1. Error file: prelaunch.err. Last 4096 bytes of prelaunch.err: /bin/bash: line 0: fg: no job control
Reference blog: https://www.jianshu.com/p/48658f8035a2?utm_campaign=maleskine&utm_content=note&utm_medium=seo_notes&utm_source=recommendation
Submitting a MapReduce job from IDEA to the remote Hadoop cluster on Linux failed with:
[2021-03-19 13:49:22.806]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
/bin/bash: line 0: fg: no job control
[2021-03-19 13:49:22.807]Container exited with a non-zero exit code 1. Error file: prelaunch.err.
Last 4096 bytes of prelaunch.err :
/bin/bash: line 0: fg: no job control
For more detailed output, check the application tracking page: http://master:8088/cluster/app/application_1616116977910_0015 Then click on links to logs of each attempt.
. Failing the application.
[INFO]-2021-03-19 14:49:29 223-[org.apache.hadoop.mapreduce.Job.monitorAndPrintJob(Job.java:1665)]-Counters: 0
The error says there is no job control. I am submitting the job from Windows to a Linux system, so could it be picking up the wrong launch information? From other blogs, this happens because it is a cross-platform submission, so the cross-platform property has to be enabled.
Option 1: in the plain MapReduce driver code, add the following line:
conf.set("mapreduce.app-submission.cross-platform", "true");
Option 2 (this one had no effect for me, emmm, maybe I did not configure it correctly): add the following property to mapred-site.xml:
<property>
    <name>mapreduce.app-submission.cross-platform</name>
    <value>true</value>
</property>
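For reference, with option 1 the property has to be set on the Configuration before the Job is created, because Job.getInstance(conf) takes a copy of the Configuration. A minimal driver sketch; the class name and job name here are placeholders, not this project's actual code:
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class RemoteSubmitDriver {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Enable cross-platform submission before creating the Job,
        // since Job.getInstance(conf) copies the Configuration.
        conf.set("mapreduce.app-submission.cross-platform", "true");

        Job job = Job.getInstance(conf, "remote-submit-demo");
        // ... set jar, mapper, reducer, input and output paths as usual ...
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}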
③ Error: java.lang.RuntimeException: java.lang.ClassNotFoundException: Class com.fahai.bigdata.dirver.calculate.ZylMapper not found
Submitting a MapReduce job from IDEA to the remote Hadoop cluster on Linux fails, complaining that my Mapper subclass cannot be found.
Fix: package the project into a jar, then point to it in the code:
conf.set("mapreduce.job.jar","C:\\Users\\DELL\\IdeaProjects\\fahai-hadoop-core\\target\\fahai-hadoop-core-0.0.1-SNAPSHOT.jar");