CentOS 6
Changing the hostname
[root@centos6 ~]$ hostname # check the current hostname
node1
[root@centos6 ~]$ vim /etc/sysconfig/network # edit the HOSTNAME line in the network file (takes effect after reboot)
[root@centos6 ~]$ cat /etc/sysconfig/network # verify the change
NETWORKING=yes
HOSTNAME=centos6-node1
[root@centos6 ~]$ hostname centos6-node1 # set the hostname for the current session (takes effect immediately)
[root@centos6 ~]$ vim /etc/hosts # edit the hosts file and map the new hostname to the loopback address
[root@centos6 ~]$ cat /etc/hosts # verify
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 centos6-node1
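The manual vim edit above can also be scripted. A minimal sketch, operating on a temporary copy rather than the real /etc/sysconfig/network so it is safe to run anywhere; on an actual CentOS 6 box you would point sed at the real file as root:

```shell
# Sketch: update the HOSTNAME line non-interactively with sed.
# A temp file stands in for /etc/sysconfig/network in this demo.
f=$(mktemp)
printf 'NETWORKING=yes\nHOSTNAME=node1\n' > "$f"
sed -i 's/^HOSTNAME=.*/HOSTNAME=centos6-node1/' "$f"
grep '^HOSTNAME=' "$f"   # HOSTNAME=centos6-node1
```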
CentOS 7
Changing the hostname
[root@centos7 ~]$ hostname # check
node1
[root@centos7 ~]$ hostnamectl set-hostname centos7-node1 # takes effect immediately and persists across reboots
[root@centos7 ~]$ hostname # check again
centos7-node1
[root@centos7 ~]$ vim /etc/hosts # edit the hosts file and map the new hostname to the loopback address
[root@centos7 ~]$ cat /etc/hosts # verify
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6 centos7-node1
This mapping works on the same principle as the hosts file on Windows (C:\Windows\System32\drivers\etc\hosts).
[root@master ~]$ sudo vim /etc/hosts
Append the following to the end of the file:
192.168.0.104 master
192.168.0.102 slave1
192.168.0.106 slave2
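After saving, the mappings can be checked without DNS. A quick sketch that validates the three entries against a temporary copy of the file (same hypothetical IPs as above), so it is safe to run anywhere:

```shell
# Sketch: append the cluster mappings and confirm each name maps to one IP.
# A temp file stands in for /etc/hosts in this demo.
hosts=$(mktemp)
cat >> "$hosts" <<'EOF'
192.168.0.104 master
192.168.0.102 slave1
192.168.0.106 slave2
EOF
for name in master slave1 slave2; do
    awk -v n="$name" '$2 == n { print n, "->", $1 }' "$hosts"
done
```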
# Check firewall status:
systemctl status firewalld
# Stop the firewall:
sudo systemctl stop firewalld
# Disable firewall autostart:
sudo systemctl disable firewalld.service
# Check firewall status:
[root@master software]# service firewalld status
Redirecting to /bin/systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-07-13 17:34:12 CST; 11min ago
     Docs: man:firewalld(1)
 Main PID: 562 (firewalld)
   CGroup: /system.slice/firewalld.service
           └─562 /usr/bin/python2 -Es /usr/sbin/firewalld --nofork --nopid
Jul 13 17:34:11 master systemd[1]: Starting firewalld - dynamic firewall daemon...
Jul 13 17:34:12 master systemd[1]: Started firewalld - dynamic firewall daemon.
# Stop the firewall:
[root@master software]# systemctl stop firewalld
# Disable firewall autostart:
[root@master software]# systemctl disable firewalld.service
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
# Check firewall status again:
[root@master software]# service firewalld status
Redirecting to /bin/systemctl status firewalld.service
● firewalld.service - firewalld - dynamic firewall daemon
   Loaded: loaded (/usr/lib/systemd/system/firewalld.service; disabled; vendor preset: enabled)
   Active: inactive (dead)
     Docs: man:firewalld(1)
Jul 13 17:34:11 master systemd[1]: Starting firewalld - dynamic firewall daemon...
Jul 13 17:34:12 master systemd[1]: Started firewalld - dynamic firewall daemon.
Jul 13 17:46:23 master systemd[1]: Stopping firewalld - dynamic firewall daemon...
Jul 13 17:46:23 master systemd[1]: Stopped firewalld - dynamic firewall daemon.
Check whether SSH is already installed:
[root@master software]# rpm -qa | grep ssh
openssh-server-7.4p1-21.el7.x86_64
openssh-clients-7.4p1-21.el7.x86_64
libssh2-1.8.0-3.el7.x86_64
openssh-7.4p1-21.el7.x86_64
If it is not installed, run the following (the packages are openssh-server and openssh-clients; there is no package named simply "ssh" in the CentOS repositories):
[root@master software]$ sudo yum -y install openssh-server openssh-clients
Check the sshd service status:
[root@master software]# service sshd status
Redirecting to /bin/systemctl status sshd.service
● sshd.service - OpenSSH server daemon
   Loaded: loaded (/usr/lib/systemd/system/sshd.service; enabled; vendor preset: enabled)
   Active: active (running) since Mon 2020-07-13 17:34:14 CST; 18min ago
     Docs: man:sshd(8)
           man:sshd_config(5)
 Main PID: 914 (sshd)
   CGroup: /system.slice/sshd.service
           └─914 /usr/sbin/sshd -D
Jul 13 17:34:13 master systemd[1]: Starting OpenSSH server daemon...
Jul 13 17:34:14 master sshd[914]: Server listening on 0.0.0.0 port 22.
Jul 13 17:34:14 master sshd[914]: Server listening on :: port 22.
Jul 13 17:34:14 master systemd[1]: Started OpenSSH server daemon.
Jul 13 17:44:25 master sshd[1448]: Accepted password for root from 192.168.0.101 port 50808 ssh2
[root@master software]#
ssh-keygen is the SSH tool for generating, managing, and converting authentication keys; it supports two key types, DSA and RSA.
Public/private key authentication lets servers communicate with each other without passwords.
Common ssh-keygen options:
-t: key type to generate; defaults to rsa for SSH protocol 2
-f: output file name; defaults to id_rsa (private key id_rsa, public key id_rsa.pub)
-P: old passphrase; empty means no passphrase (-P '')
-N: new passphrase; empty means no passphrase (-N '')
-b: key length in bits; RSA requires at least 768 bits and defaults to 2048; DSA keys must be exactly 1024 bits (per FIPS 186-2)
-C: comment to embed in the key
-R hostname: remove all keys belonging to hostname from known_hosts (created under ~/.ssh on first connection)
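These options can be combined so that no interactive prompts (the "press Enter three times" below) are needed. A sketch that writes into a temporary directory instead of the real ~/.ssh:

```shell
# Sketch: generate a 2048-bit RSA key pair with an empty passphrase,
# non-interactively, into a temp dir (real setup writes to /root/.ssh).
dir=$(mktemp -d)
ssh-keygen -q -t rsa -b 2048 -N '' -C 'root@master' -f "$dir/id_rsa"
ls "$dir"   # id_rsa and id_rsa.pub
```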
Generate the public/private key pair:
[root@master software]# ssh-keygen
# press Enter three times
[root@master software]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:0kJRHiY4Et0j0iHX19P55Gf7OPxbOfUnFh/w3/ty5oY root@master
The key's randomart image is:
+---[RSA 2048]----+
| o+o=o.+. . . |
| oo* +=..o o . |
| o o.o. . = |
| . . = o|
| o S .=o|
| o +B|
| .o=O|
| .E.X|
| @*|
+----[SHA256]-----+
[root@master software]#
Copy the public key into authorized_keys on each host that should allow passwordless login:
[root@master software]# scp /root/.ssh/id_rsa.pub slave1:/root/.ssh/authorized_keys
# push to slave1
[root@master software]# scp /root/.ssh/id_rsa.pub slave1:/root/.ssh/authorized_keys
The authenticity of host 'slave1 (192.168.0.102)' can't be established.
ECDSA key fingerprint is SHA256:GuWVSxRlZdj2Da8PAZg0AnYTN9TSPt8bv82ApaQjyfU.
ECDSA key fingerprint is MD5:db:26:91:1f:00:30:7c:cc:a6:10:05:de:46:21:58:0e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave1,192.168.0.102' (ECDSA) to the list of known hosts.
root@slave1's password:
id_rsa.pub 100% 393 594.5KB/s 00:00
[2]+ Stopped scp /root/.ssh/id_rsa.pub slave1:/root/.ssh/authorized_keys
[root@master software]#
# push to slave2
[root@master software]# scp /root/.ssh/id_rsa.pub slave2:/root/.ssh/authorized_keys
The authenticity of host 'slave2 (192.168.0.106)' can't be established.
ECDSA key fingerprint is SHA256:GuWVSxRlZdj2Da8PAZg0AnYTN9TSPt8bv82ApaQjyfU.
ECDSA key fingerprint is MD5:db:26:91:1f:00:30:7c:cc:a6:10:05:de:46:21:58:0e.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'slave2,192.168.0.106' (ECDSA) to the list of known hosts.
root@slave2's password:
id_rsa.pub 100% 393 462.5KB/s 00:00
[root@master software]# ssh slave2
Last login: Mon Jul 13 17:44:55 2020 from 192.168.0.101
[root@slave2 ~]# exit
logout
Connection to slave2 closed.
[root@master software]#
Test passwordless login:
[root@master software]# ssh slave1
Last login: Mon Jul 13 17:44:40 2020 from 192.168.0.101
[root@slave1 ~]# exit
logout
Connection to slave1 closed.
[root@master software]#
[root@master software]# ssh slave2
Last login: Mon Jul 13 17:44:40 2020 from 192.168.0.101
[root@slave2 ~]# exit
logout
Connection to slave2 closed.
[root@master software]#
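Note that `scp ... slaveN:/root/.ssh/authorized_keys` overwrites the whole file, wiping any keys already authorized on the target; ssh-copy-id, or appending with >>, is the usual safer alternative. A local sketch of the append-and-chmod step that ssh-copy-id performs (the file and key lines are placeholders, not real key material):

```shell
# Sketch: append a new public key to authorized_keys instead of overwriting it,
# then set the 600 permissions sshd expects. A temp file stands in for
# /root/.ssh/authorized_keys.
akeys=$(mktemp)
echo 'ssh-rsa EXISTING... root@other' > "$akeys"    # key that was already there
echo 'ssh-rsa NEWKEY... root@master' >> "$akeys"    # append, do not overwrite
chmod 600 "$akeys"
wc -l < "$akeys"   # 2 (both keys kept)
```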
Check whether NTP is installed:
[root@master software]# rpm -qa | grep ntp
ntpdate-4.2.6p5-29.el7.centos.x86_64
ntp-4.2.6p5-29.el7.centos.x86_64
If it is not installed, run:
[root@master software]# yum -y install ntp
[root@master software]# vim /etc/ntp.conf
# Allow hosts on the local network to query this server without restriction;
# change the template line "#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap" to:
restrict 192.168.0.0 mask 255.255.255.0 nomodify notrap
# Above the includefile line, add the local clock as a time source so this
# server can keep serving LAN clients:
server 127.127.1.0
fudge 127.127.1.0 stratum 10
includefile /etc/ntp/crypto/pw
# Do not use the public pool servers: comment them out
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
# server 0.centos.pool.ntp.org iburst
# server 1.centos.pool.ntp.org iburst
# server 2.centos.pool.ntp.org iburst
# server 3.centos.pool.ntp.org iburst
[root@master software]# vim /etc/sysconfig/ntpd
# Add the following (sync the hardware clock along with the system time)
OPTIONS="-u ntp:ntp -p /var/run/ntpd.pid -g"
SYNC_HWCLOCK=yes
[root@master software]# systemctl start ntpd
[root@master hadoop]# systemctl status ntpd
● ntpd.service - Network Time Service
   Loaded: loaded (/usr/lib/systemd/system/ntpd.service; enabled; vendor preset: disabled)
   Active: active (running) since Mon 2020-07-13 21:11:24 CST; 24s ago
  Process: 3982 ExecStart=/usr/sbin/ntpd -u ntp:ntp $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 3983 (ntpd)
   CGroup: /system.slice/ntpd.service
           └─3983 /usr/sbin/ntpd -u ntp:ntp -g
Jul 13 21:11:24 master ntpd[3983]: Listen and drop on 1 v6wildcard :: UDP 123
Jul 13 21:11:24 master ntpd[3983]: Listen normally on 2 lo 127.0.0.1 UDP 123
Jul 13 21:11:24 master ntpd[3983]: Listen normally on 3 ens33 192.168.0.104 UDP 123
Jul 13 21:11:24 master ntpd[3983]: Listen normally on 4 lo ::1 UDP 123
Jul 13 21:11:24 master ntpd[3983]: Listen normally on 5 ens33 fe80::5206:bab2:53a:9a51 UDP 123
Jul 13 21:11:24 master ntpd[3983]: Listening on routing socket on fd #22 for interface updates
Jul 13 21:11:24 master ntpd[3983]: 0.0.0.0 c016 06 restart
Jul 13 21:11:24 master ntpd[3983]: 0.0.0.0 c012 02 freq_set kernel 0.000 PPM
Jul 13 21:11:24 master ntpd[3983]: 0.0.0.0 c011 01 freq_not_set
Jul 13 21:11:25 master ntpd[3983]: 0.0.0.0 c514 04 freq_mode
[root@master hadoop]#
# Enable ntpd at boot
[root@master hadoop]# systemctl enable ntpd.service
# Test
[root@master hadoop]# ntpstat
synchronised to local net (127.127.1.0) at stratum 11
time correct to within 949 ms
polling server every 64 s
[root@master hadoop]#
On the slave nodes, stop ntpd and sync once from the master:
[root@slave1 hadoop]# systemctl stop ntpd
[root@slave1 software]# ntpdate master
13 Jul 21:18:21 ntpdate[2201]: adjust time server 192.168.0.104 offset -0.008088 sec
[root@slave1 software]# date
Mon Jul 13 21:18:52 CST 2020
On the other machines, configure a cron job that syncs with the time server every 10 minutes:
[root@slave1 software]# crontab -e
no crontab for root - using an empty one
crontab: installing new crontab
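The guide does not show the crontab entry itself; a typical line for a 10-minute sync (assuming ntpdate is installed at /usr/sbin/ntpdate, as on the machines above, and the server is reachable as master) would be:

```
*/10 * * * * /usr/sbin/ntpdate master
```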
System environment: CentOS 7
JDK version: jdk 1.8.0_171
The JDK installation package is already provided in the /root/software directory.
tar -zxvf jdk-8u171-linux-x64.tar.gz
vim /etc/profile
export JAVA_HOME=/root/software/jdk1.8.0_171 # Java installation directory
export PATH=$PATH:$JAVA_HOME/bin # append the JDK bin directory to the existing PATH
source /etc/profile
java -version
[root@master software]# java -version
java version "1.8.0_171"
Java(TM) SE Runtime Environment (build 1.8.0_171-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.171-b11, mixed mode)
[root@master software]#
Use cd to enter the /root/software directory, then extract the Hadoop 2.7.7 package with:
tar -zxvf hadoop-2.7.7.tar.gz
vim /etc/profile
# Hadoop installation directory
export HADOOP_HOME=/root/software/hadoop-2.7.7
# append Hadoop's bin and sbin directories to the existing PATH
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
source /etc/profile
hadoop version
[root@master software]# vi /etc/profile
[root@master software]# source /etc/profile
[root@master software]# hadoop version
Hadoop 2.7.7
Subversion Unknown -r c1aad84bd27cd79c3d1a7dd58202a8c3ee1ed3ac
Compiled by stevel on 2018-07-18T22:47Z
Compiled with protoc 2.5.0
From source with checksum 792e15d20b12c74bd6f19a1fb886490
This command was run using /root/software/hadoop-2.7.7/share/hadoop/common/hadoop-common-2.7.7.jar
[root@master software]#
Open the hadoop-env.sh file:
[root@master software]# vi /root/software/hadoop-2.7.7/etc/hadoop/hadoop-env.sh
Find the JAVA_HOME line and change it to the actual JDK installation path on this machine.
[root@master software]# echo $JAVA_HOME
/root/software/jdk1.8.0_171
[root@master software]# vi /root/software/hadoop-2.7.7/etc/hadoop/hadoop-env.sh
Open the core-site.xml file with the following command:
vi /root/software/hadoop-2.7.7/etc/hadoop/core-site.xml
Add the following configuration:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/root/software/hadoop-2.7.7/temp</value>
    </property>
</configuration>
Open the hdfs-site.xml file with the following command:
vim /root/software/hadoop-2.7.7/etc/hadoop/hdfs-site.xml
Configuration:
<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/root/software/hadoop-2.7.7/dfs/namenode_data</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/root/software/hadoop-2.7.7/dfs/datanode_data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
Open the yarn-env.sh file with the following command:
vi /root/software/hadoop-2.7.7/etc/hadoop/yarn-env.sh
Find the JAVA_HOME line, remove the leading #, and set its value to the actual JDK installation path on this machine.
# some Java parameters
export JAVA_HOME=/root/software/jdk1.8.0_171
This file does not exist by default in the $HADOOP_HOME/etc/hadoop/ directory; first copy the template and rename it to mapred-site.xml:
cp mapred-site.xml.template mapred-site.xml
Next, open the mapred-site.xml file to edit it:
vi /root/software/hadoop-2.7.7/etc/hadoop/mapred-site.xml
Configuration:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
Open the yarn-site.xml configuration file with the following command:
vim /root/software/hadoop-2.7.7/etc/hadoop/yarn-site.xml
Configuration:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
</configuration>
The masters file does not exist in the configuration directory by default; create it with vim and save the following content:
[root@master hadoop]# vi masters
master
In the slaves file, list the hostname or IP address of every slave node, one per line; the DataNode and NodeManager services will be started on every host listed.
[root@master hadoop]# vi slaves
master
slave1
slave2
Distribute the Hadoop installation directory to the other two servers; afterwards, configure the Hadoop environment variables on them as well.
# distribute to slave1
[root@master hadoop]# scp -r /root/software/hadoop-2.7.7 slave1:/root/software/
# distribute to slave2
[root@master hadoop]# scp -r /root/software/hadoop-2.7.7 slave2:/root/software/
Distribute the JDK to the other two servers; afterwards, configure the JDK environment variables on them as well.
# distribute to slave1
[root@master hadoop]# scp -r /root/software/jdk1.8.0_171 slave1:/root/software/
# distribute to slave2
[root@master hadoop]# scp -r /root/software/jdk1.8.0_171 slave2:/root/software/
Environment variables:
export JAVA_HOME=/root/software/jdk1.8.0_171 # Java installation directory
export PATH=$PATH:$JAVA_HOME/bin # append the JDK bin directory to the existing PATH
# Hadoop installation directory
export HADOOP_HOME=/root/software/hadoop-2.7.7
# append Hadoop's bin and sbin directories to the existing PATH
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
[root@master software]# hdfs namenode -format
Success indicator:
namenode_data has been successfully formatted.
20/07/13 19:25:46 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
20/07/13 19:25:46 INFO common.Storage: Storage directory /root/software/hadoop-2.7.7/dfs/namenode_data has been successfully formatted.
20/07/13 19:25:46 INFO namenode.FSImageFormatProtobuf: Saving image file /root/software/hadoop-2.7.7/dfs/namenode_data/current/fsimage.ckpt_0000000000000000000 using no compression
20/07/13 19:25:46 INFO namenode.FSImageFormatProtobuf: Image file /root/software/hadoop-2.7.7/dfs/namenode_data/current/fsimage.ckpt_0000000000000000000 of size 321 bytes saved in 0 seconds.
20/07/13 19:25:46 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
20/07/13 19:25:46 INFO util.ExitUtil: Exiting with status 0
20/07/13 19:25:46 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at master/192.168.0.104
************************************************************/
# start the HDFS daemons
[root@master ~]# start-dfs.sh
# start the YARN daemons
[root@master ~]# start-yarn.sh
# start the job history server
[root@master ~]# mr-jobhistory-daemon.sh start historyserver
DFS:
[root@master .ssh]# start-dfs.sh
Starting namenodes on [master]
master: starting namenode, logging to /root/software/hadoop-2.7.7/logs/hadoop-root-namenode-master.out
slave2: starting datanode, logging to /root/software/hadoop-2.7.7/logs/hadoop-root-datanode-slave2.out
master: starting datanode, logging to /root/software/hadoop-2.7.7/logs/hadoop-root-datanode-master.out
slave1: starting datanode, logging to /root/software/hadoop-2.7.7/logs/hadoop-root-datanode-slave1.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /root/software/hadoop-2.7.7/logs/hadoop-root-secondarynamenode-master.out
[root@master .ssh]# jps
3156 Jps
3047 SecondaryNameNode
2889 DataNode
2762 NameNode
[root@master .ssh]#
YARN:
[root@master .ssh]# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /root/software/hadoop-2.7.7/logs/yarn-root-resourcemanager-master.out
slave2: starting nodemanager, logging to /root/software/hadoop-2.7.7/logs/yarn-root-nodemanager-slave2.out
slave1: starting nodemanager, logging to /root/software/hadoop-2.7.7/logs/yarn-root-nodemanager-slave1.out
master: starting nodemanager, logging to /root/software/hadoop-2.7.7/logs/yarn-root-nodemanager-master.out
[root@master .ssh]# jps
3203 ResourceManager
3047 SecondaryNameNode
3367 NodeManager
2889 DataNode
3545 Jps
2762 NameNode
[root@master .ssh]#
[root@slave1 software]# jps
2064 Jps
1864 DataNode
1964 NodeManager
[root@slave1 software]#
[root@slave2 software]# jps
2067 Jps
1869 DataNode
1967 NodeManager
[root@slave2 software]#
With Hadoop installed and both HDFS and YARN starting normally, neither the YARN web UI (port 8088) nor the HDFS web UI (port 50070) could be opened, even though the firewall was off.
Solution:
Run:
netstat -nltp
If net-tools is not installed, the command fails:
[root@master hadoop]# netstat -nltp
-bash: netstat: command not found
Then install it:
# yum -y install net-tools
[root@master hadoop]# yum -y install net-tools
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: mirrors.aliyun.com
 * extras: mirrors.163.com
 * updates: mirrors.aliyun.com
Resolving Dependencies
--> Running transaction check
---> Package net-tools.x86_64.0.2.0-0.25.20131004git.el7 will be installed
--> Finished Dependency Resolution

Dependencies Resolved
===========================================================================================================
 Package           Arch             Version                               Repository            Size
===========================================================================================================
Installing:
 net-tools         x86_64           2.0-0.25.20131004git.el7              base                 306 k

Transaction Summary
===========================================================================================================
Install  1 Package

Total download size: 306 k
Installed size: 917 k
Downloading packages:
net-tools-2.0-0.25.20131004git.el7.x86_64.rpm                                  | 306 kB  00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
  Installing : net-tools-2.0-0.25.20131004git.el7.x86_64                                             1/1
  Verifying  : net-tools-2.0-0.25.20131004git.el7.x86_64                                             1/1

Installed:
  net-tools.x86_64 0:2.0-0.25.20131004git.el7

Complete!
Run netstat again to check the port status:
[root@master hadoop]# netstat -nltp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:50020 0.0.0.0:* LISTEN 2889/java
tcp 0 0 192.168.0.104:9000 0.0.0.0:* LISTEN 2762/java
tcp 0 0 0.0.0.0:50090 0.0.0.0:* LISTEN 3047/java
tcp 0 0 127.0.0.1:42319 0.0.0.0:* LISTEN 2889/java
tcp 0 0 0.0.0.0:50070 0.0.0.0:* LISTEN 2762/java
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN 914/sshd
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN 1164/master
tcp 0 0 0.0.0.0:50010 0.0.0.0:* LISTEN 2889/java
tcp 0 0 0.0.0.0:50075 0.0.0.0:* LISTEN 2889/java
tcp6 0 0 :::42140 :::* LISTEN 3367/java
tcp6 0 0 192.168.0.104:8030 :::* LISTEN 3203/java
tcp6 0 0 192.168.0.104:8031 :::* LISTEN 3203/java
tcp6 0 0 192.168.0.104:8032 :::* LISTEN 3203/java
tcp6 0 0 192.168.0.104:8033 :::* LISTEN 3203/java
tcp6 0 0 :::8040 :::* LISTEN 3367/java
tcp6 0 0 :::8042 :::* LISTEN 3367/java
tcp6 0 0 :::22 :::* LISTEN 914/sshd
tcp6 0 0 192.168.0.104:8088 :::* LISTEN 3203/java
tcp6 0 0 ::1:25 :::* LISTEN 1164/master
tcp6 0 0 :::13562 :::* LISTEN 3367/java
Port 50070 had no process listening on it, suggesting the NameNode web UI was not running properly.
Checking the hdfs-site.xml file showed it was empty, with no configuration.
Add the following configuration:
<property>
    <name>dfs.namenode.http-address</name>
    <value>master:50070</value>
</property>
Now the web UI on port 50070 can be opened, and three live DataNodes are visible.