Role assignment:
node01: NameNode
node02: SecondaryNameNode, DataNode
node03: DataNode
node04: DataNode
yum install ntp
ntpdate ntp1.aliyun.com
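A one-off ntpdate fixes the clock once; to keep all four nodes in sync you can run it periodically from cron. A minimal sketch (the 30-minute interval and the /usr/sbin/ntpdate path are assumptions; adjust for your distro):

```shell
# Assumed: ntpdate is installed at /usr/sbin/ntpdate (typical on CentOS);
# the 30-minute interval is arbitrary.
cron_line='*/30 * * * * /usr/sbin/ntpdate ntp1.aliyun.com'
echo "$cron_line"   # add this line to root's crontab (crontab -e) on every node
```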
Passwordless SSH (node01 can log in to every node):
node01->node01
node01->node02
node01->node03
node01->node04
① Run on all nodes
——Command: ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
② Run on node01: add node01's public key to the authorized_keys whitelist of every node
——Command:
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node01
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node02
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node03
ssh-copy-id -i ~/.ssh/id_rsa.pub root@node04
Note: node01~node04 are hostnames; IP addresses can be used instead.
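Once the keys are copied, the setup can be checked with a short loop; BatchMode makes ssh fail instead of prompting, so any node still asking for a password shows up immediately. A sketch (echo previews each command; remove the echo to actually run it):

```shell
# Verify passwordless login from node01 to every node in the role table.
for n in node01 node02 node03 node04; do
  cmd="ssh -o BatchMode=yes root@$n hostname"
  echo "$cmd"   # preview only; drop the echo to test the login for real
done
```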
yum remove *openjdk*
mkdir -p /usr/lib/jvm
tar -zxvf jdk-8u131-linux-x64.tar.gz -C /usr/lib/jvm/
vim /etc/profile
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_131
export PATH=$PATH:$JAVA_HOME/bin
source /etc/profile
or: . /etc/profile
java -version
java version "1.8.0_131"
Java(TM) SE Runtime Environment (build 1.8.0_131-b11)
Java HotSpot(TM) 64-Bit Server VM (build 25.131-b11, mixed mode)
hdfs-site.xml:
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node02:50090</value>
</property>
core-site.xml:
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://node01:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/var/abc/hadoop/cluster</value>
</property>
scp -r hadoop-2.6.5 root@node02:/opt/software/hadoop/
scp -r hadoop-2.6.5 root@node03:/opt/software/hadoop/
scp -r hadoop-2.6.5 root@node04:/opt/software/hadoop/
Note: the installation directory should be the same on every node.
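The three scp commands above differ only in the hostname, so a loop keeps the target path identical on every node. A sketch (echo previews each command; remove the echo to copy for real):

```shell
# Distribute the Hadoop directory from node01 to the other three nodes.
for n in node02 node03 node04; do
  cmd="scp -r hadoop-2.6.5 root@$n:/opt/software/hadoop/"
  echo "$cmd"   # preview only; drop the echo to execute
done
```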
stop-dfs.sh
hdfs namenode -format
Note: formatting the NameNode creates the /var/abc/hadoop/cluster directory. If an error forces a re-format, delete the /var/abc directory first and then format again.
start-dfs.sh
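After start-dfs.sh, the role table at the top of these notes predicts which Java processes each node should run, which can be checked with jps over ssh. A sketch (echo previews each command; remove the echo to query the nodes):

```shell
# Expected jps output per the role table:
#   node01 -> NameNode; node02 -> SecondaryNameNode, DataNode;
#   node03, node04 -> DataNode.
for n in node01 node02 node03 node04; do
  cmd="ssh root@$n jps"
  echo "$cmd"   # preview only; drop the echo to run the check
done
```

The NameNode web UI (http://node01:50070 in Hadoop 2.x) shows the same cluster state, including the number of live DataNodes.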
To be continued…