Full walkthrough: cloning VMs and configuring Hadoop, Hive, ZooKeeper, and Kafka

1. Clean up installer leftovers
[root@hadoop201 ~]# rm -rf anaconda-ks.cfg install.log install.log.syslog

2. Clone the VM: right-click the VM → Manage → Clone
In the wizard:
(1) Next
(2) Clone from: the current state in the virtual machine
(3) Create a full clone
(4) Name the VM and choose a storage location
(5) Finish

3. Configure the IP address
(1) Fix the NIC mapping: vi /etc/udev/rules.d/70-persistent-net.rules
Delete the entry for the original eth0, then rename the cloned NIC's entry (eth1) to eth0:
SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:2e:99:a1", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"

MAC addresses of the clones:
208: 00:0c:29:2e:99:a1
209: 00:0c:29:78:4f:3f
(2) Edit the host IP: vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
HWADDR=00:0c:29:b9:4d:21
TYPE=Ethernet
UUID=73c15477-5d31-4956-8be7-2809e1d204db
ONBOOT=yes
NM_CONTROLLED=yes
BOOTPROTO=static

IPADDR=192.168.1.208
GATEWAY=192.168.1.3
DNS1=192.168.1.3
(3) Configure the hostname
|- Show the current hostname: hostname
hadoop206.cevent.com
|- Edit the network hostname config: vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=hadoop208.cevent.com
|- Edit the hosts file: vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.201 hadoop201.cevent.com
192.168.1.202 hadoop202.cevent.com
192.168.1.203 hadoop203.cevent.com
192.168.1.204 hadoop204.cevent.com
192.168.1.205 hadoop205.cevent.com
192.168.1.206 hadoop206.cevent.com
192.168.1.207 hadoop207.cevent.com
192.168.1.208 hadoop208.cevent.com
192.168.1.209 hadoop209.cevent.com

4. Reboot to apply the changes
(1) sync
(2) reboot

5. If login from Windows fails to resolve the hostnames, edit the Windows hosts file
Path: C:\Windows\System32\drivers\etc\hosts


# Copyright (c) 1993-2009 Microsoft Corp.
#
# This is a sample HOSTS file used by Microsoft TCP/IP for Windows.
#
# This file contains the mappings of IP addresses to host names. Each
# entry should be kept on an individual line. The IP address should
# be placed in the first column followed by the corresponding host name.
# The IP address and the host name should be separated by at least one
# space.
#
# Additionally, comments (such as these) may be inserted on individual
# lines or following the machine name denoted by a '#' symbol.
#
# For example:
#
#      102.54.94.97     rhino.acme.com          # source server
#       38.25.63.10     x.acme.com              # x client host
#
# localhost name resolution is handled within DNS itself.
#	127.0.0.1       localhost
#	::1             localhost

127.0.0.1 www.vmix.com
192.30.253.112 github.com
151.101.88.249 github.global.ssl.fastly.net
127.0.0.1 www.xmind.net
192.168.1.201 hadoop201.cevent.com
192.168.1.202 hadoop202.cevent.com
192.168.1.203 hadoop203.cevent.com
192.168.1.204 hadoop204.cevent.com
192.168.1.205 hadoop205.cevent.com
192.168.1.206 hadoop206.cevent.com
192.168.1.207 hadoop207.cevent.com
192.168.1.208 hadoop208.cevent.com
192.168.1.209 hadoop209.cevent.com
192.168.1.1 windows10.microdone.cn

6. SSH login error: hostname cannot be resolved
[cevent@hadoop207 ~]$ sudo ssh hadoop209.cevent.com
[sudo] password for cevent:
ssh: Could not resolve hostname hadoop209.cevent.com: Name or service not known

Fix: edit the hosts file: vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.201 hadoop201.cevent.com
192.168.1.202 hadoop202.cevent.com
192.168.1.203 hadoop203.cevent.com
192.168.1.204 hadoop204.cevent.com
192.168.1.205 hadoop205.cevent.com
192.168.1.206 hadoop206.cevent.com
192.168.1.207 hadoop207.cevent.com
192.168.1.208 hadoop208.cevent.com
192.168.1.209 hadoop209.cevent.com
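Since the nine cluster entries follow one fixed pattern, a short loop can generate them instead of typing each line by hand (a convenience sketch; append the output to /etc/hosts yourself, e.g. via `sudo tee -a /etc/hosts`):

```shell
# Print the hadoop201-209 host entries used throughout this walkthrough.
for i in $(seq 201 209); do
  echo "192.168.1.${i} hadoop${i}.cevent.com"
done
```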

7. Passwordless SSH login
(1) SSH into host A (first connection, password still required)
[cevent@hadoop207 .ssh]$ ssh hadoop208.cevent.com
The authenticity of host 'hadoop208.cevent.com (192.168.1.208)' can't be established.
RSA key fingerprint is fe:07:91:9c:00:5d:09:2c:48:bb:d5:53:9f:09:6c:34.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop208.cevent.com,192.168.1.208' (RSA) to the list of known hosts.
[email protected]'s password:
Last login: Mon Jun 15 16:06:58 2020 from 192.168.1.1
[cevent@hadoop208 ~]$ exit
logout
(2) SSH into host B
[cevent@hadoop207 ~]$ ssh hadoop209.cevent.com
The authenticity of host 'hadoop209.cevent.com (192.168.1.209)' can't be established.
RSA key fingerprint is fe:07:91:9c:00:5d:09:2c:48:bb:d5:53:9f:09:6c:34.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop209.cevent.com,192.168.1.209' (RSA) to the list of known hosts.
[email protected]'s password:
Last login: Mon Jun 15 17:01:11 2020 from 192.168.1.1
[cevent@hadoop209 ~]$ exit
logout
(3) Inspect the home and .ssh directories
[cevent@hadoop207 ~]$ cd
[cevent@hadoop207 ~]$ ls -al
total 188
drwx------. 26 cevent cevent  4096 Jun 15 17:03 .
drwxr-xr-x.  3 root   root    4096 Apr 30 09:15 ..
drwxrwxr-x.  2 cevent cevent  4096 May  7 10:43 .abrt
-rw-------.  1 cevent cevent 17232 Jun 15 17:05 .bash_history
-rw-r--r--.  1 cevent cevent    18 May 11  2016 .bash_logout
-rw-r--r--.  1 cevent cevent   176 May 11  2016 .bash_profile
-rw-r--r--.  1 cevent cevent   124 May 11  2016 .bashrc
drwxrwxr-x.  2 cevent cevent  4096 May  7 17:32 .beeline
drwxr-xr-x.  3 cevent cevent  4096 May  7 10:43 .cache
drwxr-xr-x.  5 cevent cevent  4096 May  7 10:43 .config
drwx------.  3 cevent cevent  4096 May  7 10:43 .dbus
-rw-------.  1 cevent cevent    16 May  7 10:43 .esd_auth
drwx------.  4 cevent cevent  4096 Jun 14 22:13 .gconf
drwx------.  2 cevent cevent  4096 Jun 14 23:14 .gconfd
drwxr-xr-x.  5 cevent cevent  4096 May  7 10:43 .gnome2
drwxrwxr-x.  3 cevent cevent  4096 May  7 10:43 .gnote
drwx------.  2 cevent cevent  4096 Jun 14 22:13 .gnupg
-rw-rw-r--.  1 cevent cevent   195 Jun 14 22:13 .gtk-bookmarks
drwx------.  2 cevent cevent  4096 May  7 10:43 .gvfs
-rw-rw-r--.  1 cevent cevent   589 Jun 12 13:42 .hivehistory
-rw-------.  1 cevent cevent   620 Jun 14 22:13 .ICEauthority
-rw-r--r--.  1 cevent cevent   874 Jun 14 23:14 .imsettings.log
drwxr-xr-x.  3 cevent cevent  4096 May  7 10:43 .local
drwxr-xr-x.  4 cevent cevent  4096 Mar 13 01:51 .mozilla
-rw-------.  1 cevent cevent   214 May  7 13:37 .mysql_history
drwxr-xr-x.  2 cevent cevent  4096 May  7 10:43 .nautilus
drwx------.  2 cevent cevent  4096 May  7 10:43 .pulse
-rw-------.  1 cevent cevent   256 May  7 10:43 .pulse-cookie
drwx------.  2 cevent cevent  4096 Apr 30 15:20 .ssh

[cevent@hadoop207 ~]$ cd .ssh/
[cevent@hadoop207 .ssh]$ ll
total 16
-rw-------. 1 cevent cevent  409 Apr 30 15:20 authorized_keys
-rw-------. 1 cevent cevent 1675 Apr 30 15:20 id_rsa
-rw-r--r--. 1 cevent cevent  409 Apr 30 15:20 id_rsa.pub
-rw-r--r--. 1 cevent cevent  832 Jun 15 17:08 known_hosts
(4) View the recorded host keys
[cevent@hadoop207 .ssh]$ vi known_hosts

(5) Generate an ssh key pair (pressing Enter at the file prompt keeps the default /home/cevent/.ssh/id_rsa; typing a name, as below, saves the pair as ./cevent and ./cevent.pub in the current directory instead)
[cevent@hadoop207 .ssh]$ ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/home/cevent/.ssh/id_rsa): cevent
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in cevent.
Your public key has been saved in cevent.pub.
The key fingerprint is:
1c:5a:1a:d2:e4:3b:fe:36:39:df:95:b3:75:85:0f:af [email protected]
The key's randomart image is:
+--[ RSA 2048]----+
| . |
| + |
| . + o |
| . B . . |
| = S o .|
| . . =.|
| . . + =|
| .= . . =.|
| ..+. . E |
+-----------------+
(6) Copy the public key to the other hosts with ssh-copy-id
[cevent@hadoop207 .ssh]$ ssh-copy-id hadoop208.cevent.com
[cevent@hadoop207 .ssh]$ ssh-copy-id hadoop209.cevent.com
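Under the hood, ssh-copy-id appends your public key to ~/.ssh/authorized_keys on the remote host and tightens the permissions. A purely local sketch of that effect, using a temp directory and a placeholder key string (no real hosts or keys involved):

```shell
# Simulate the remote side of ssh-copy-id in a throwaway directory.
home=$(mktemp -d)
mkdir -p "$home/.ssh"
chmod 700 "$home/.ssh"
# A placeholder key line; a real one comes from ~/.ssh/id_rsa.pub
echo "ssh-rsa AAAAB3Nza...fake [email protected]" >> "$home/.ssh/authorized_keys"
chmod 600 "$home/.ssh/authorized_keys"
ls -l "$home/.ssh/authorized_keys"
```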
(7) Test passwordless SSH
[cevent@hadoop207 .ssh]$ ssh hadoop208.cevent.com
Last login: Mon Jun 15 18:23:00 2020 from hadoop207.cevent.com
[cevent@hadoop208 ~]$ exit
logout
Connection to hadoop208.cevent.com closed.
[cevent@hadoop207 .ssh]$ ssh hadoop209.cevent.com
Last login: Mon Jun 15 17:08:11 2020 from hadoop207.cevent.com
[cevent@hadoop209 ~]$ exit
logout

8. Verify the passwordless setup on the other VMs
[cevent@hadoop208 ~]$ cd
[cevent@hadoop208 ~]$ ls -al
[cevent@hadoop208 ~]$ cd .ssh/
[cevent@hadoop208 .ssh]$ ll
total 8
-rw-------. 1 cevent cevent  818 Jun 15 18:24 authorized_keys
-rw-r--r--. 1 cevent cevent 1664 Jun 16 17:56 known_hosts
[cevent@hadoop208 .ssh]$ vi authorized_keys
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAyRWmouFLr4b6orfQrWqtlukJ1orkeYcqM6orH1JMOMo1BLYY/Bop2ZIAHahUxGgUdRlZ5mFkGHut1A7h/VdGLBdegyZXQwxwl6Cx67XIsQWRwUgZWXbujg+qV9irPE7WvxF5e3FvJGfbmh7boPk+q/Hsk6rgZ3k9qrEDx4vhv7eL+Ostt2D8tV4UbRReUNl3yI9bt4P/S7ARBpelFtB4p9drbGSxjtH0sNKBHiDAcAV+MOSLz21+WlYr2x58jAZc37UXi/qYfosgc+u5GJ88z/kEI+1YqXBX11FFiRWZcI2aiLweri5WtHH0hoEZGrTXBeShuWQzegx9/E0upPlsfw== [email protected]
ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA0Fe9XV0baD7RPiGkIf+ZMoMPOaCF445aAvaJdGt8wuegkxJjqPMTAop79xcA7AY/vFS7PjpllM162t/lVoCozGHK1iOfElObiLo6+pxBcwfVYnEUlzAz/L0Ngpss54Eb48xOq068gcKcDAZrNbdtxDkTgzHFttcWpB7j++gRXrfB9O9HxKcRObu16sBM8tLmLF4M+tvxTC/Ko/amnrOvi3/AyCtxH1sRumqUiu9buDJAFAgV1Y+s7CR7GORGIkDkmHr9e3O5UMpNXTgnfIaCPdNzn6qRTUM/Sb5KAkkMBb3MY5NgbOPDvFwkPlG8xcFS5Ua8Arq58n8kwa2dyy94kQ== [email protected]

9. Test file sync with rsync
(1) Create a test file
[cevent@hadoop207 module]$ vim rsync.txt
kakaxi
(2) Run the sync (-r recursive, -v verbose, -l copy symlinks as symlinks)
[cevent@hadoop207 module]$ rsync -rvl rsync.txt [email protected]:/opt/module/

10. xsync: sync a file to every host
(1) Create an xsync file under /usr/local/bin (and make it executable with chmod a+x xsync); contents:
#!/bin/bash
# 1. Get the number of arguments; exit if none were given
pcount=$#
if ((pcount==0)); then
  echo no args;
  exit;
fi

# 2. Get the file name: basename strips everything up to the last /
p1=$1
fname=$(basename "$p1")
echo fname=$fname

# 3. Resolve the parent directory to an absolute path
#    (cd -P follows symlinks; pwd prints the resolved path)
pdir=$(cd -P "$(dirname "$p1")"; pwd)
echo pdir=$pdir

# 4. Get the current user name
user=$(whoami)

# 5. Loop over the cluster hosts and sync
for ((host=207; host<210; host++)); do
  echo --------------- hadoop$host.cevent.com ----------------
  rsync -rvl "$pdir/$fname" "$user@hadoop$host.cevent.com:$pdir"
done
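The basename / dirname / `cd -P` combination that xsync uses to resolve the file name and its absolute parent directory can be tried on its own. A local sketch against a temp directory (hypothetical paths, no rsync involved):

```shell
dir=$(mktemp -d)
touch "$dir/demo.txt"
p1="$dir/./demo.txt"                   # a path with a redundant component
fname=$(basename "$p1")                # last path component -> demo.txt
pdir=$(cd -P "$(dirname "$p1")"; pwd)  # resolved absolute parent directory
echo "fname=$fname"
echo "pdir=$pdir"
```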

(2) Test the sync; by default the file is placed at the same path on each host
[cevent@hadoop207 module]$ vim xsync.txt
this is a xsync test!

[cevent@hadoop207 module]$ xsync xsync.txt
fname=xsync.txt
pdir=/opt/module
--------------- hadoop207.cevent.com ----------------
sending incremental file list

sent 32 bytes received 12 bytes 29.33 bytes/sec
total size is 23 speedup is 0.52
--------------- hadoop208.cevent.com ----------------
sending incremental file list
xsync.txt

sent 98 bytes received 31 bytes 258.00 bytes/sec
total size is 23 speedup is 0.18
--------------- hadoop209.cevent.com ----------------
sending incremental file list
xsync.txt

sent 98 bytes received 31 bytes 258.00 bytes/sec
total size is 23 speedup is 0.18

11. xcall: run a command on every host at once
(1) Create an xcall file under /usr/local/bin with the following content:
#!/bin/bash
# $# is the argument count, $@ is the full argument list
pcount=$#
if ((pcount==0)); then
  echo no args;
  exit;
fi

# Run the command locally first, then on every cluster host
echo -------------localhost.cevent.com----------
$@
for ((host=207; host<210; host++)); do
  echo ----------hadoop$host.cevent.com---------
  ssh hadoop$host.cevent.com $@
done
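The heart of xcall is forwarding `$@` (all script arguments) to ssh unchanged. A local stand-in that prints instead of ssh-ing shows the argument list traveling intact (`run_on` is a hypothetical helper for illustration):

```shell
run_on() {
  # stand-in for: ssh "$host" "$@" — echoes what would be executed remotely
  host=$1; shift
  echo "[$host] $*"
}
for host in 207 208 209; do
  run_on "hadoop${host}.cevent.com" ls /opt/module
done
```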

(2) Give the xcall script execute permission and test it
[cevent@hadoop207 bin]$ sudo chmod a+x xcall
[cevent@hadoop207 bin]$ xcall rm -rf /opt/module/rsync.txt
-------------localhost.cevent.com----------
----------hadoop207.cevent.com---------
----------hadoop208.cevent.com---------
----------hadoop209.cevent.com---------

12. core-site.xml configuration

[cevent@hadoop207 ~]$ cat /opt/module/hadoop-2.7.2/etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop207.cevent.com:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/module/hadoop-2.7.2/data/tmp</value>
    </property>
</configuration>

13. hdfs-site.xml configuration

[cevent@hadoop207 ~]$ cat /opt/module/hadoop-2.7.2/etc/hadoop/hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop207.cevent.com:50090</value>
    </property>
</configuration>

14. slaves configuration

[cevent@hadoop207 ~]$ vim /opt/module/hadoop-2.7.2/etc/hadoop/slaves
hadoop207.cevent.com
hadoop208.cevent.com
hadoop209.cevent.com

15. yarn-site.xml configuration

[cevent@hadoop207 ~]$ cat /opt/module/hadoop-2.7.2/etc/hadoop/yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>hadoop207.cevent.com</value>
    </property>
</configuration>

16. mapred-site.xml configuration

[cevent@hadoop207 ~]$ cat /opt/module/hadoop-2.7.2/etc/hadoop/mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>

17. Sync the configuration with xsync and verify with xcall
[cevent@hadoop207 module]$ xsync hadoop-2.7.2/
[cevent@hadoop207 module]$ xcall cat /opt/module/hadoop-2.7.2/etc/hadoop/slaves
-------------localhost.cevent.com----------
hadoop207.cevent.com
hadoop208.cevent.com
hadoop209.cevent.com
----------hadoop207.cevent.com---------
hadoop207.cevent.com
hadoop208.cevent.com
hadoop209.cevent.com
----------hadoop208.cevent.com---------
hadoop207.cevent.com
hadoop208.cevent.com
hadoop209.cevent.com
----------hadoop209.cevent.com---------
hadoop207.cevent.com
hadoop208.cevent.com
hadoop209.cevent.com

18. Install ZooKeeper
(1) Extract the archive
[cevent@hadoop207 soft]$ tar -zxvf zookeeper-3.4.10.tar.gz -C /opt/module/

[cevent@hadoop207 soft]$ cd /opt/module/zookeeper-3.4.10/
[cevent@hadoop207 zookeeper-3.4.10]$ ll
total 1592
drwxr-xr-x.  2 cevent cevent    4096 Mar 23  2017 bin
-rw-rw-r--.  1 cevent cevent   84725 Mar 23  2017 build.xml
drwxr-xr-x.  2 cevent cevent    4096 Jun 16 22:38 conf
drwxr-xr-x. 10 cevent cevent    4096 Mar 23  2017 contrib
drwxrwxr-x.  3 cevent cevent    4096 Jun 16 22:34 data
drwxr-xr-x.  2 cevent cevent    4096 Mar 23  2017 dist-maven
drwxr-xr-x.  6 cevent cevent    4096 Mar 23  2017 docs
-rw-rw-r--.  1 cevent cevent    1709 Mar 23  2017 ivysettings.xml
-rw-rw-r--.  1 cevent cevent    5691 Mar 23  2017 ivy.xml
drwxr-xr-x.  4 cevent cevent    4096 Mar 23  2017 lib
-rw-rw-r--.  1 cevent cevent   11938 Mar 23  2017 LICENSE.txt
-rw-rw-r--.  1 cevent cevent    3132 Mar 23  2017 NOTICE.txt
-rw-rw-r--.  1 cevent cevent    1770 Mar 23  2017 README_packaging.txt
-rw-rw-r--.  1 cevent cevent    1585 Mar 23  2017 README.txt
drwxr-xr-x.  5 cevent cevent    4096 Mar 23  2017 recipes
drwxr-xr-x.  8 cevent cevent    4096 Mar 23  2017 src
-rw-rw-r--.  1 cevent cevent 1456729 Mar 23  2017 zookeeper-3.4.10.jar
-rw-rw-r--.  1 cevent cevent     819 Mar 23  2017 zookeeper-3.4.10.jar.asc
-rw-rw-r--.  1 cevent cevent      33 Mar 23  2017 zookeeper-3.4.10.jar.md5
-rw-rw-r--.  1 cevent cevent      41 Mar 23  2017 zookeeper-3.4.10.jar.sha1
(2) Rename zoo_sample.cfg in /opt/module/zookeeper-3.4.10/conf to zoo.cfg
[cevent@hadoop207 zookeeper-3.4.10]$ mv conf/zoo_sample.cfg zoo.cfg
[cevent@hadoop207 zookeeper-3.4.10]$ mv zoo.cfg conf/

(3) Create zkData
[cevent@hadoop207 zookeeper-3.4.10]$ mkdir -p data/zkData
Resulting path: /opt/module/zookeeper-3.4.10/data/zkData

(4) Edit zoo.cfg
[cevent@hadoop207 zookeeper-3.4.10]$ vim conf/zoo.cfg

# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/module/zookeeper-3.4.10/data/zkData
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1

(5) Start the ZooKeeper server and client (edit myid first; each host gets its own id)
[cevent@hadoop207 zookeeper-3.4.10]$ vim data/zkData/myid
207    <- on hadoop207; write 208 on hadoop208 and 209 on hadoop209
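The three myid writes are easy to mix up across hosts; a sketch that only prints the command each host would run (paths taken from the steps above; nothing is executed remotely):

```shell
# One myid per host; each file contains only that host's own id.
zkdata=/opt/module/zookeeper-3.4.10/data/zkData
for id in 207 208 209; do
  echo "on hadoop${id}.cevent.com: echo ${id} > ${zkdata}/myid"
done
```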
[cevent@hadoop207 zookeeper-3.4.10]$ bin/zkServer.sh start
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[cevent@hadoop207 zookeeper-3.4.10]$ jps
6134 QuorumPeerMain
6157 Jps
[cevent@hadoop207 zookeeper-3.4.10]$ bin/zkCli.sh
Connecting to localhost:2181

(6) Troubleshooting: zkServer.sh status fails
[Error]
[cevent@hadoop207 zookeeper-3.4.10]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
Error contacting service. It is probably not running.

[Fix]
|- Turn off the firewall
Temporarily (takes effect immediately; the firewall comes back after a reboot):
service iptables stop
Permanently (takes effect after the next reboot):
chkconfig iptables off
Then re-extract zookeeper and, under data/zkData, create zookeeper_server.pid containing the server's process ID

[cevent@hadoop207 zookeeper-3.4.10]$ jps
5449 QuorumPeerMain
5763 Jps
[cevent@hadoop207 zookeeper-3.4.10]$ cd data/zkData/
[cevent@hadoop207 zkData]$ ll
total 4
drwxrwxr-x. 2 cevent cevent 4096 Jun 17 11:54 version-2
[cevent@hadoop207 zkData]$ vim zookeeper_server.pid
5449
[Startup succeeded]
[cevent@hadoop207 zookeeper-3.4.10]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: standalone

[zk: localhost:2181(CONNECTED) 1] ll
ZooKeeper -server host:port cmd args
connect host:port
get path [watch]
          ls path [watch]
set path data [version]
rmr path
delquota [-n|-b] path
quit
printwatches on|off
create [-s] [-e] path data acl
stat path [watch]
close
ls2 path [watch]
history
listquota path
setAcl path acl
getAcl path
sync path
redo cmdno
addauth scheme auth
delete path [version]
setquota -n|-b val path
[zk: localhost:2181(CONNECTED) 2] quit
Quitting...
2020-06-17 12:00:36,210 [myid:] - INFO [main:ZooKeeper@684] - Session: 0x172c06c50490000 closed
2020-06-17 12:00:36,211 [myid:] - INFO [main-EventThread:ClientCnxn$EventThread@519] - EventThread shut down for session: 0x172c06c50490000

(7) Configure the ZooKeeper cluster in zoo.cfg
[cevent@hadoop207 zookeeper-3.4.10]$ vim conf/zoo.cfg
Append the following at the end of the file:

##################cluster#################
server.207=hadoop207.cevent.com:2888:3888
server.208=hadoop208.cevent.com:2888:3888
server.209=hadoop209.cevent.com:2888:3888
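Each `server.N=host:2888:3888` line names a quorum port (2888) and a leader-election port (3888), and N must equal that host's myid. As a quick sanity check (a sketch over the three lines above): an ensemble of 3 needs a majority of 2 servers alive.

```shell
# Count the server.N lines and compute the majority quorum size.
cfg='server.207=hadoop207.cevent.com:2888:3888
server.208=hadoop208.cevent.com:2888:3888
server.209=hadoop209.cevent.com:2888:3888'
n=$(printf '%s\n' "$cfg" | grep -c '^server\.')
echo "ensemble=$n quorum=$(( n / 2 + 1 ))"
# → ensemble=3 quorum=2
```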

(8) Start Hadoop: dfs and yarn (optional for ZooKeeper itself; the ensemble just needs at least 2 of the 3 zk servers running)
[cevent@hadoop207 hadoop-2.7.2]$ sbin/start-dfs.sh
[cevent@hadoop207 hadoop-2.7.2]$ sbin/start-yarn.sh
(9) Start zookeeper on each host in turn; once all are up, check zkServer.sh status
[cevent@hadoop207 zookeeper-3.4.10]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: follower
[cevent@hadoop208 zookeeper-3.4.10]$ bin/zkServer.sh status
ZooKeeper JMX enabled by default
Using config: /opt/module/zookeeper-3.4.10/bin/../conf/zoo.cfg
Mode: leader
