1) Prepare at least 3 PCs or virtual machines
2) Linux system: CentOS-7.0-1406-x86_64-DVD.iso
3) Java package: jdk-8u161-linux-x64.tar.gz
4) Hadoop package: hadoop-2.6.5.tar.gz
5) HBase package: hbase-1.2.1-bin.tar.gz
Hostname | IP
master   | 192.168.1.105
slave1   | 192.168.1.106
slave2   | 192.168.1.107
1) Boot from CentOS-7.0-1406-x86_64-DVD.iso and start the installation
2) Select "Install CentOS 7" and press Enter to continue
3) Choose the language; the default is English. Chinese is fine for a learning environment, but use English in production
4) Configure the network and hostname: set the hostname to master, enable the network interface, and configure a manual IPv4 address
To change the hostname after installation: hostnamectl set-hostname master (writing /proc/sys/kernel/hostname directly only lasts until reboot)
5) Choose the installation destination; select manual partitioning; choose standard partitions, click "Click here to create them automatically", click Done, and accept the changes
6) Set the root password; password: s
7) Reboot; installation is complete.
# ip addr
or
# ip link
# cd /etc/sysconfig/network-scripts # enter the network config directory
# find ifcfg-em* # locate the NIC config file, e.g. ifcfg-em1
# vi ifcfg-em1 # edit the NIC config file
or
# vi /etc/sysconfig/network-scripts/ifcfg-em1 # edit the NIC config file
Configuration:
BOOTPROTO=static # static for a fixed IP, dhcp for dynamic
ONBOOT=yes # bring the interface up at boot
IPADDR=192.168.1.105 # IP address
NETMASK=255.255.255.0 # subnet mask
GATEWAY=192.168.1.1
DNS1=144.144.144.144
# systemctl restart network.service # restart the network service
# vi /etc/hosts
Delete or comment out any entries that are not needed
Add:
192.168.1.105 master
192.168.1.106 slave1
192.168.1.107 slave2
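To verify name resolution and connectivity, a quick check (run from master, for example; each command should get replies):
# ping -c 1 slave1
# ping -c 1 slave2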
# systemctl status firewalld.service # check the firewall status
# systemctl stop firewalld.service # stop the firewall
# systemctl disable firewalld.service # keep the firewall from starting at boot
# yum install -y ntp # install the ntp service
# ntpdate cn.pool.ntp.org # sync time from the network
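ntpdate performs a one-off sync. To keep the clocks aligned over time, one option (an addition to the original steps; enabling ntpd is an alternative) is an hourly root cron job:
# crontab -e
Add:
0 * * * * /usr/sbin/ntpdate cn.pool.ntp.org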
Run java -version to check whether a JDK is already installed on the system. If one is installed but the version is unsuitable, uninstall it first:
First check with rpm -qa | grep java
Example output:
java-1.4.2-gcj-compat-1.4.2.0-40jpp.115
java-1.6.0-openjdk-1.6.0.0-1.7.b09.el5
Uninstall:
rpm -e --nodeps java-1.4.2-gcj-compat-1.4.2.0-40jpp.115
rpm -e --nodeps java-1.6.0-openjdk-1.6.0.0-1.7.b09.el5
Some other useful commands:
rpm -qa | grep gcj
rpm -qa | grep jdk
If the openjdk source cannot be found, you can uninstall this way instead:
yum -y remove java-1.4.2-gcj-compat-1.4.2.0-40jpp.115
yum -y remove java-1.6.0-openjdk-1.6.0.0-1.7.b09.el5
Upload the jdk-8u161-linux-x64.tar.gz package to root's home directory
# mkdir /usr/java
# tar -zxvf jdk-8u161-linux-x64.tar.gz -C /usr/java/
# rm -rf jdk-8u161-linux-x64.tar.gz
# scp -r /usr/java slave1:/usr
# scp -r /usr/java slave2:/usr
# vi /etc/profile
Add the following:
export JAVA_HOME=/usr/java/jdk1.8.0_161
export PATH=$JAVA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
# source /etc/profile # apply the changes
# java -version # check the java version
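If the JDK is set up correctly, the output should look roughly like this (build numbers may differ):
java version "1.8.0_161"
Java(TM) SE Runtime Environment (build 1.8.0_161-b12)
Java HotSpot(TM) 64-Bit Server VM (build 25.161-b12, mixed mode)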
Check the ssh service status on each host:
# systemctl status sshd.service # check the ssh service status
# yum install openssh-server openssh-clients # install ssh; skip if already installed
# systemctl start sshd.service # start the ssh service; skip if already running
Generate a key pair on each host:
# ssh-keygen -t rsa # generate a key pair (press Enter to accept the defaults)
On slave1:
# cp ~/.ssh/id_rsa.pub ~/.ssh/slave1.id_rsa.pub
# scp ~/.ssh/slave1.id_rsa.pub master:~/.ssh
On slave2:
# cp ~/.ssh/id_rsa.pub ~/.ssh/slave2.id_rsa.pub
# scp ~/.ssh/slave2.id_rsa.pub master:~/.ssh
On master:
# cd ~/.ssh
# cat id_rsa.pub >> authorized_keys
# cat slave1.id_rsa.pub >> authorized_keys
# cat slave2.id_rsa.pub >> authorized_keys
# scp authorized_keys slave1:~/.ssh
# scp authorized_keys slave2:~/.ssh
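To confirm passwordless login works, each of the following (run from master) should print the remote hostname without prompting for a password:
# ssh slave1 hostname
# ssh slave2 hostname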
Upload the hadoop-2.6.5.tar.gz package to root's home directory
# tar -zxvf hadoop-2.6.5.tar.gz -C /usr
# rm -rf hadoop-2.6.5.tar.gz
# mkdir /usr/hadoop-2.6.5/tmp
# mkdir /usr/hadoop-2.6.5/logs
# mkdir /usr/hadoop-2.6.5/hdf
# mkdir /usr/hadoop-2.6.5/hdf/data
# mkdir /usr/hadoop-2.6.5/hdf/name
# vi /usr/hadoop-2.6.5/etc/hadoop/hadoop-env.sh
Set JAVA_HOME:
export JAVA_HOME=/usr/java/jdk1.8.0_161
# vi /usr/hadoop-2.6.5/etc/hadoop/yarn-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_161
# vi /usr/hadoop-2.6.5/etc/hadoop/slaves
Configuration:
Delete: localhost
Add:
slave1
slave2
# vi /usr/hadoop-2.6.5/etc/hadoop/core-site.xml
Configuration:
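A minimal sketch, assuming the NameNode listens on master:9000 and uses the tmp directory created above (adjust values to your environment):
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/usr/hadoop-2.6.5/tmp</value>
  </property>
</configuration>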
# vi /usr/hadoop-2.6.5/etc/hadoop/hdfs-site.xml
Configuration:
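A minimal sketch, assuming the name/data directories created above and a replication factor of 2 (one copy per slave):
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/usr/hadoop-2.6.5/hdf/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/usr/hadoop-2.6.5/hdf/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>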
# cp /usr/hadoop-2.6.5/etc/hadoop/mapred-site.xml.template /usr/hadoop-2.6.5/etc/hadoop/mapred-site.xml
# vi /usr/hadoop-2.6.5/etc/hadoop/mapred-site.xml
Configuration:
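A minimal sketch that runs MapReduce on YARN:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>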
# vi /usr/hadoop-2.6.5/etc/hadoop/yarn-site.xml
Configuration:
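A minimal sketch, assuming the ResourceManager runs on master:
<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>master</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>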
# scp -r /usr/hadoop-2.6.5 slave1:/usr
# scp -r /usr/hadoop-2.6.5 slave2:/usr
# vi /etc/profile
Add the following:
export HADOOP_HOME=/usr/hadoop-2.6.5
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
export HADOOP_LOG_DIR=/usr/hadoop-2.6.5/logs
export YARN_LOG_DIR=$HADOOP_LOG_DIR
# source /etc/profile # apply the changes
# cd /usr/hadoop-2.6.5/sbin
# hdfs namenode -format
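If formatting succeeds, the output should include a line similar to this (timestamps and log prefixes will differ):
INFO common.Storage: Storage directory /usr/hadoop-2.6.5/hdf/name has been successfully formatted.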
Start the hadoop daemons:
# cd /usr/hadoop-2.6.5/sbin
# start-all.sh
Check that hadoop started via the web UIs:
http://192.168.1.105:50070
http://192.168.1.105:8088/cluster
Check the processes:
# jps
If the master host shows processes such as ResourceManager, SecondaryNameNode, and NameNode, the startup succeeded, for example:
2212 ResourceManager
2484 Jps
1917 NameNode
2078 SecondaryNameNode
If each slave host shows processes such as DataNode and NodeManager, the startup succeeded, for example:
17153 DataNode
17334 Jps
17241 NodeManager
# vi /etc/profile
export ZOOKEEPER_HOME=/usr/zookeeper-3.3.6
export PATH=$ZOOKEEPER_HOME/bin:$PATH
# source /etc/profile
1. Download zookeeper from the official site:
http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.3.6/
2. Set up zookeeper on all three nodes, matching the host table above:
master 192.168.1.105
slave1 192.168.1.106
slave2 192.168.1.107
3. Upload zookeeper-3.3.6.tar.gz to root's home directory on the master server and extract it:
tar -zxvf zookeeper-3.3.6.tar.gz -C /usr
4. Create a zookeeper-data directory under the zookeeper directory, and copy conf/zoo_sample.cfg to zoo.cfg:
mkdir /usr/zookeeper-3.3.6/zookeeper-data
cp /usr/zookeeper-3.3.6/conf/zoo_sample.cfg /usr/zookeeper-3.3.6/conf/zoo.cfg
5. Edit zoo.cfg:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/usr/zookeeper-3.3.6/zookeeper-data
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable autopurge feature
#autopurge.purgeInterval=1
server.0=master:2888:3888
server.1=slave1:2888:3888
server.2=slave2:2888:3888
6. Copy the zookeeper directory to the other two servers:
scp -r /usr/zookeeper-3.3.6 slave1:/usr
scp -r /usr/zookeeper-3.3.6 slave2:/usr
On each server, create a myid file in the zookeeper-data directory whose content matches that host's server.* entry: master's myid is 0, slave1's is 1, slave2's is 2. For example, to create the myid file on master:
cd /usr/zookeeper-3.3.6/zookeeper-data
vi myid
Content:
0
Save and exit.
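Equivalently, the file can be written with a single command on each host (use 1 on slave1 and 2 on slave2):
echo 0 > /usr/zookeeper-3.3.6/zookeeper-data/myid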
7. Start the ZooKeeper cluster by starting the service on each node (zkServer.sh lives in the bin directory):
cd /usr/zookeeper-3.3.6/bin
./zkServer.sh start
8. Check the state of the ZooKeeper cluster:
cd /usr/zookeeper-3.3.6/bin
./zkServer.sh status
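On a healthy three-node ensemble, one server reports itself as leader and the other two as followers; the status output ends with a line like:
Mode: follower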
Upload the hbase-1.2.1-bin.tar.gz package to root's home directory
# tar -zxvf hbase-1.2.1-bin.tar.gz -C /usr
# mkdir /usr/hbase-1.2.1/logs
# vi /etc/profile
Add the following (HBASE_HOME must match the path extracted above):
export HBASE_HOME=/usr/hbase-1.2.1
export PATH=$PATH:$HBASE_HOME/bin
Save and exit, then apply the changes:
# source /etc/profile
# vi /usr/hbase-1.2.1/conf/hbase-env.sh
Configuration:
export JAVA_HOME=/usr/java/jdk1.8.0_161
export HBASE_LOG_DIR=${HBASE_HOME}/logs
export HBASE_MANAGES_ZK=false
# vi /usr/hbase-1.2.1/conf/regionservers
Configuration:
Delete: localhost
Add:
slave1
slave2
# vi /usr/hbase-1.2.1/conf/hbase-site.xml
Configuration:
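A minimal sketch, assuming HBase stores its data in the HDFS set up above and uses the external ZooKeeper ensemble:
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://master:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>master,slave1,slave2</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>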
# scp -r /usr/hbase-1.2.1 slave1:/usr
# scp -r /usr/hbase-1.2.1 slave2:/usr
Start the hadoop and zookeeper clusters before starting hbase.
Start hbase:
# cd /usr/hbase-1.2.1/bin
# ./start-hbase.sh
# hbase shell
hbase(main):011:0> status
1 active master, 0 backup masters, 2 servers, 0 dead, 4.0000 average load
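A quick smoke test in the shell (the table name test and column family cf are arbitrary examples):
hbase(main):012:0> create 'test', 'cf'
hbase(main):013:0> put 'test', 'row1', 'cf:a', 'value1'
hbase(main):014:0> scan 'test'
hbase(main):015:0> disable 'test'
hbase(main):016:0> drop 'test'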
To start the whole stack, follow this order (zookeeper, then hadoop, then hbase):
# cd /usr/zookeeper-3.3.6/bin
# ./zkServer.sh start
# cd /usr/hadoop-2.6.5/sbin
# start-all.sh
# cd /usr/hbase-1.2.1/bin
# ./start-hbase.sh
To stop it, reverse the order (hbase, then hadoop, then zookeeper):
# cd /usr/hbase-1.2.1/bin
# ./stop-hbase.sh
# cd /usr/hadoop-2.6.5/sbin
# stop-all.sh
# cd /usr/zookeeper-3.3.6/bin
# ./zkServer.sh stop
If the services do not stop cleanly, the processes can be killed with the kill command.
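For example (the PID comes from the jps output and will differ on your machines):
# jps # find the PID of the stuck process, e.g. HMaster or DataNode
# kill -9 <pid>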