I. Environment Preparation
1. Start three CentOS 7 virtual machines in VMware.
2. Connect to the three VMs with Xshell so that files can be uploaded conveniently.
II. Cluster Layout

Hostname | IP | Zookeeper | Nimbus | Supervisor
---|---|---|---|---
master | 10.6.6.1 | Yes | Yes | No
slave1 | 10.6.6.2 | Yes | No | Yes
slave2 | 10.6.6.3 | Yes | No | Yes
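Later steps address the machines by hostname (e.g. `root@slave1` in the scp commands, and the hostnames listed in storm.yaml), so every host must be able to resolve those names. A minimal `/etc/hosts` fragment, assuming the IPs in the table above and added on all three machines, would be:

```
10.6.6.1 master
10.6.6.2 slave1
10.6.6.3 slave2
```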
III. Setting Up the Java Environment
1. Upload the downloaded `jdk-8u161-linux-x64.tar.gz` to master with the `rz` command.
2. Create a `java` directory and extract the JDK into it:

```shell
mkdir /home/java
tar -zxvf jdk-8u161-linux-x64.tar.gz -C /home/java
```
3. Configure the environment variables. Run `vi /etc/profile` to open the profile file and append the following at the end:

```shell
export JAVA_HOME=/home/java/jdk1.8.0_161
export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export PATH=$PATH:$JAVA_HOME/bin
```

Then run `source /etc/profile` to make the variables take effect.
4. Verify the Java environment. Run `java -version`; if output like the following appears, the configuration succeeded:

```
Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)
```
5. Use `scp` to copy the Java environment to the other two hosts:

```shell
scp -r /home/java root@slave1:/home/
scp -r /home/java root@slave2:/home/
scp /etc/profile root@slave1:/etc/
scp /etc/profile root@slave2:/etc/
```

Likewise, run `source /etc/profile` on each of the other two hosts to apply the variables. The JDK environment is now set up.
IV. Setting Up the Zookeeper Cluster
1. Upload the downloaded `zookeeper-3.4.8.tar.gz` to master with the `rz` command.
2. Create a `software` directory and extract Zookeeper into it:

```shell
mkdir /root/software
tar -zxvf zookeeper-3.4.8.tar.gz -C /root/software
```
3. Edit the configuration. Change into Zookeeper's `conf` directory and create `zoo.cfg` from the sample file:

```shell
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg
```
```
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/root/software/zookeeper-3.4.8/data
dataLogDir=/root/software/zookeeper-3.4.8/dataLog
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=10.6.6.1:2888:3888
server.2=10.6.6.2:2888:3888
server.3=10.6.6.3:2888:3888
```
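For reference, each `server.N` entry follows the pattern below: N must match the host's `myid` value, the first port (2888 here) is used by followers to connect to the leader, and the second (3888) is used for leader election.

```
server.N=host:followerPort:electionPort
```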
Also create two directories, `data` and `dataLog`, under the Zookeeper directory to hold Zookeeper's `myid` file and its log files:

```shell
mkdir data
mkdir dataLog
```

Then change into Zookeeper's `data` directory and create the `myid` file:

```shell
vi myid
```

Enter the id of the current host (1 on master), then save and exit:

```
1
```
4. Use `scp` to copy the installation to the other two hosts:

```shell
scp -r zookeeper-3.4.8 root@slave1:/root/software/
scp -r zookeeper-3.4.8 root@slave2:/root/software/
```

Likewise, change the contents of the `myid` file in slave1's Zookeeper to 2, and in slave2's to 3.
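The id assignments simply mirror the `server.N` lines in zoo.cfg. As a sketch, the per-host value could be derived from the hostname; the `map_zk_id` helper below is hypothetical, not part of Zookeeper:

```shell
# Hypothetical helper: map a hostname to its Zookeeper id,
# matching the server.N lines in zoo.cfg.
map_zk_id() {
  case "$1" in
    master) echo 1 ;;
    slave1) echo 2 ;;
    slave2) echo 3 ;;
    *) return 1 ;;
  esac
}

# On each host, its own id would then be written with, e.g.:
# map_zk_id "$(hostname)" > /root/software/zookeeper-3.4.8/data/myid
```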
5.启动Zookeeper集群
# 首先关闭防火墙
systemctl stop firewalld
# 进入Zookeeper的bin目录下
cd /root/software/zookeeper-3.4.8/bin/
# 启动Zookeeper
./zkServer.sh start
会出现以下语句:
ZooKeeper JMX enabled by default
Using config: /root/software/zookeeper-3.4.8/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
在三台主机上都进行相同的操作后,在任意一台主机上输入以下语句:
./zkServer.sh status
如果集群启动成功会出现以下语句:
ZooKeeper JMX enabled by default
Using config: /root/software/zookeeper-3.4.8/bin/../conf/zoo.cfg
Mode: leader # 或者为Mode: follower
Zookeeper集群环境搭建完毕
V. Setting Up the Storm Cluster
1. Upload the downloaded `apache-storm-2.1.0.tar.gz` to master with the `rz` command.
2. Extract Storm into the `software` directory:

```shell
tar -zxvf apache-storm-2.1.0.tar.gz -C /root/software
```
3. Edit the configuration. Change into Storm's `conf` directory, run `vi storm.yaml` to open storm.yaml, and append the following at the end of the file:

```yaml
# The Zookeeper ensemble Storm should use
storm.zookeeper.servers:
    - "master"
    - "slave1"
    - "slave2"
# The server hosting the nimbus node
nimbus.seeds: ["master"]
# Local directory for Storm's own state
storm.local.dir: "/root/software/apache-storm-2.1.0/data"
# Worker ports on each supervisor node; each port is one slot,
# and each slot runs one worker
supervisor.slots.ports:
    - 6700
    - 6701
    - 6702
    - 6703
# Use port 9085 for the web UI
ui.port: 9085
```
4. Use `scp` to copy the installation to the other two hosts:

```shell
scp -r apache-storm-2.1.0 root@slave1:/root/software
scp -r apache-storm-2.1.0 root@slave2:/root/software
```
5. Start the Storm cluster. Change into Storm's `bin` directory.

On the master node, start Nimbus and the web UI:

```shell
./storm nimbus &
./storm ui &
```

On the two slave nodes, start Supervisor:

```shell
./storm supervisor &
```
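Backgrounding with `&` alone ties the daemons to the current shell, so they die when the Xshell session closes. As a sketch, `nohup` keeps them alive across logout; the `start_daemon` helper below is hypothetical and only prints the command to run (paste it, or pipe to `sh`), and the log paths are assumptions:

```shell
# Hypothetical helper: print the nohup command that launches a Storm
# daemon detached from the shell, logging to its own file.
STORM_BIN=/root/software/apache-storm-2.1.0/bin
start_daemon() {
  echo "nohup $STORM_BIN/storm $1 > /var/log/storm-$1.log 2>&1 &"
}

start_daemon nimbus      # run the printed command on master
start_daemon ui          # run the printed command on master
start_daemon supervisor  # run the printed command on each slave
```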
Open the Storm UI (http://10.6.6.1:9085, per the `ui.port` setting above) to check the cluster status. The Storm cluster is now up!