Development Environment and Tools
Hadoop environment
Cluster node configuration
IpAddr | HostName |
---|---|
10.211.55.111 | wpixel01 |
10.211.55.112 | wpixel02 |
10.211.55.113 | wpixel03 |
10.211.55.114 | wpixel04 |
HostName | NameNode | DataNode | JournalNode | Zookeeper |
---|---|---|---|---|
wpixel01 | √ | √ | √ | √ |
wpixel02 | √ | √ | √ | √ |
wpixel03 | √ | √ | √ | √ |
wpixel04 | √ | √ | | |
Configure static IPs (omitted)
Upload the JDK archive with WinSCP
[root@wpixel01 www]# ll
total 1
-rw-r--r--. 1 root root 181352138 Sep 7 2016 jdk-8u101-linux-x64.tar.gz
Extract the JDK archive
[root@wpixel01 www]# tar -zxvf jdk-8u101-linux-x64.tar.gz
Then configure the environment variables in /etc/profile
[root@wpixel01 www]# vi /etc/profile
Append the JDK directory at the end of profile and add its bin directory to PATH:
export JAVA_HOME=/home/www/jdk1.8.0_101
export PATH=$PATH:$JAVA_HOME/bin
Saving alone does not make the changes take effect; reload the file by running:
[root@wpixel01 www]# source /etc/profile
The JDK is now configured; verify it with java -version (jps should work as well):
[root@wpixel01 www]# java -version
java version "1.8.0_101"
Java(TM) SE Runtime Environment (build 1.8.0_101-b13)
Java HotSpot(TM) 64-Bit Server VM (build 25.101-b13, mixed mode)
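The two export lines can also be sanity-checked mechanically. A minimal sketch, assuming the JDK path used in this walkthrough; check_java_env is a hypothetical helper, not part of the JDK:

```shell
# Hypothetical helper: confirm JAVA_HOME points at the unpacked JDK and
# that its bin directory made it onto PATH after `source /etc/profile`.
check_java_env() {
  [ "$JAVA_HOME" = "/home/www/jdk1.8.0_101" ] || { echo "JAVA_HOME wrong"; return 1; }
  case ":$PATH:" in
    *":$JAVA_HOME/bin:"*) echo "java env OK" ;;
    *) echo "PATH missing JAVA_HOME/bin"; return 1 ;;
  esac
}
```

If either check fails, re-open /etc/profile and compare against the two export lines above.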
Map each node's IP address to its hostname in /etc/hosts:
[root@wpixel01 www]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.211.55.111 wpixel01
10.211.55.112 wpixel02
10.211.55.113 wpixel03
10.211.55.114 wpixel04
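Before this file gets copied to the other nodes later on, its entries can be verified mechanically. A sketch using a hypothetical check_hosts helper, run here against a temporary copy rather than the real /etc/hosts:

```shell
# Hypothetical helper: report whether a hosts file maps a hostname to
# the expected IP (IP in the first field, hostname on the same line).
check_hosts() {
  grep -q "^$2[[:space:]].*$3" "$1" && echo "OK $3" || echo "MISSING $3"
}

# Demo against a temporary file holding the four cluster entries.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
10.211.55.111 wpixel01
10.211.55.112 wpixel02
10.211.55.113 wpixel03
10.211.55.114 wpixel04
EOF

for i in 1 2 3 4; do
  check_hosts "$hosts_file" "10.211.55.11$i" "wpixel0$i"
done
rm -f "$hosts_file"
```

Point the helper at /etc/hosts itself to check the live file.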
CentOS 7.0 uses firewalld as its default firewall, so shutting firewalld off is all that is needed.
It is in the running state by default:
[root@wpixel01 www]# firewall-cmd --state
running
First stop firewalld:
[root@wpixel01 www]# systemctl stop firewalld.service
Then disable it from starting at boot:
[root@wpixel01 www]# systemctl disable firewalld.service
The firewall is now in the not running state:
[root@wpixel01 www]# firewall-cmd --state
not running
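The stop/disable pair has to run on every node. Once passwordless SSH (section 4 below) is in place, it can be looped; a sketch, where the DRY_RUN guard is an assumption of this sketch that prints the commands for review instead of running them:

```shell
# Sketch: stop and disable firewalld on every node over SSH.
# DRY_RUN=1 (the default here) only prints the commands.
fw_off_all() {
  for node in wpixel01 wpixel02 wpixel03 wpixel04; do
    for cmd in "systemctl stop firewalld.service" "systemctl disable firewalld.service"; do
      if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "ssh root@$node $cmd"
      else
        ssh "root@$node" $cmd
      fi
    done
  done
}
fw_off_all
```

Run it with DRY_RUN=0 once the printed commands look right.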
The four hosts (10.211.55.111 through 10.211.55.114) must be able to reach each other without password prompts, so every host generates a key pair and pushes its public key to the other three (configuring passwordless login to the local host itself is also recommended).
4.1 Generate the key pair (press Enter four times)
[root@wpixel01 www]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
e0:58:09:6d:78:79:77:02:57:9f:8f:dd:82:e5:25:ce root@wpixel01
The key's randomart image is:
+--[ RSA 2048]----+
| .o ...... |
| ..=...o .. . |
| o+. . o = .|
| + . * *.|
| . . S . E +|
| . |
| |
| |
| |
+-----------------+
When this completes, the private key id_rsa and the public key id_rsa.pub have been created under /root/.ssh:
[root@wpixel01 www]# ll /root/.ssh
total 8
-rw-------. 1 root root 1675 Jan 8 02:49 id_rsa
-rw-r--r--. 1 root root 395 Jan 8 02:49 id_rsa.pub
4.2 Copy the public key to the other hosts (needed on all four)
[root@wpixel01 www]# ssh-copy-id wpixel01
The authenticity of host 'wpixel01 (10.211.55.111)' can't be established.
ECDSA key fingerprint is e3:7c:14:8d:27:22:96:eb:62:fc:19:18:9e:f8:85:d8.
Are you sure you want to continue connecting (yes/no)? yes    (type yes)
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@wpixel01's password:    (enter the password)
Number of key(s) added: 1
Now try logging into the machine, with: "ssh 'wpixel01'"
and check to make sure that only the key(s) you wanted were added.
(Don't forget the other three:)
ssh-copy-id wpixel02
ssh-copy-id wpixel03
ssh-copy-id wpixel04
Now connecting from wpixel01 with ssh wpixel02 or ssh wpixel03 no longer asks for a password.
The other hosts need passwordless login too, so repeat steps 4.1 and 4.2 on each of them.
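The repeated ssh-copy-id calls can be scripted. A sketch, where the node list matches this cluster and the DRY_RUN guard is an assumption of the sketch (it prints rather than runs the commands):

```shell
# Sketch: push this host's public key to every node, itself included.
# DRY_RUN=1 (the default) only prints the commands for review.
copy_keys() {
  for node in wpixel01 wpixel02 wpixel03 wpixel04; do
    if [ "${DRY_RUN:-1}" = "1" ]; then
      echo "ssh-copy-id root@$node"
    else
      ssh-copy-id "root@$node"
    fi
  done
}
copy_keys
```

With DRY_RUN=0 each call still prompts interactively for the remote password, as shown in the transcript above.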
5.1 Send the hosts file to the other hosts (note that scp options such as -C must come before the source path, and -r is only needed for directories):
[root@wpixel01 www]# scp -C /etc/hosts root@wpixel02:/etc/
[root@wpixel01 www]# scp -C /etc/hosts root@wpixel03:/etc/
[root@wpixel01 www]# scp -C /etc/hosts root@wpixel04:/etc/
5.2 Send the JDK directory to the other hosts:
[root@wpixel01 www]# scp -r -C /home/www/jdk1.8.0_101/ root@wpixel02:/home/www/
[root@wpixel01 www]# scp -r -C /home/www/jdk1.8.0_101/ root@wpixel03:/home/www/
[root@wpixel01 www]# scp -r -C /home/www/jdk1.8.0_101/ root@wpixel04:/home/www/
5.3 Send the environment variables over as well:
[root@wpixel01 www]# scp -C /etc/profile root@wpixel02:/etc/
[root@wpixel01 www]# scp -C /etc/profile root@wpixel03:/etc/
[root@wpixel01 www]# scp -C /etc/profile root@wpixel04:/etc/
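Steps 5.1 through 5.3 repeat the same scp for each host and file, so they can be folded into one loop. A sketch using the paths from this walkthrough; the DRY_RUN guard is an assumption of the sketch that prints the commands instead of copying:

```shell
# Sketch: copy /etc/hosts, /etc/profile and the JDK tree to the other
# three nodes. DRY_RUN=1 (the default) only prints the scp commands.
distribute() {
  for node in wpixel02 wpixel03 wpixel04; do
    for path in /etc/hosts /etc/profile /home/www/jdk1.8.0_101; do
      dest="root@$node:$(dirname "$path")/"
      if [ "${DRY_RUN:-1}" = "1" ]; then
        echo "scp -r -C $path $dest"
      else
        scp -r -C "$path" "$dest"
      fi
    done
  done
}
distribute
```

Each file lands in the same directory it came from, which is what the per-file commands above do by hand.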
Then reload the configuration file on each of the other hosts:
[root@wpixel02 www]# source /etc/profile
With the base environment in place, the Hadoop environment itself can be installed.
Hadoop Federation + HA Setup (Part 2) – Hadoop Federation