Deploying Hadoop + Hive + HBase for Big Data

1. Preparing the base environment
1.1 Prepare 4 servers; avoid installing a Chinese-language environment.
1.2 Give each server a static IP address.
1.3 The operating system is CentOS 6.5.
1.4 Software versions used:
    jdk-7u79-linux-x64.tar.gz
    zookeeper-3.4.6.tar.gz
    hadoop-2.5.1-x64.tar.gz
    apache-hive-1.2.1-bin.tar.gz
    hbase-0.98.15-hadoop2-bin.tar.gz

2. Base environment setup

2.1 Hostname mapping

vi /etc/hosts
192.168.100.11 node1
192.168.100.12 node2
192.168.100.13 node3
192.168.100.14 node4

2.2 Disable the firewall

iptables -F
iptables -X
iptables -Z
service iptables save
 vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
#     enforcing - SELinux security policy is enforced.
#     permissive - SELinux prints warnings instead of enforcing.
#     disabled - No SELinux policy is loaded.
SELINUX=permissive
reboot
2.3 Configure yum and install FTP

2.3.1 Configuration on node1

cd /etc/yum.repos.d/
rm -rf *
vi local.repo
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
mkdir /opt/centos
mount /dev/sr0 /opt/centos/
yum clean all
yum list
yum install vsftpd -y
vi /etc/vsftpd/vsftpd.conf
Add the line anon_root=/opt so that anonymous FTP is rooted at the /opt directory.
service vsftpd restart
chkconfig vsftpd on
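Because local.repo points at file:///opt/centos, yum fails quietly if the ISO is no longer mounted (for example after a reboot). A small sketch, with a hypothetical helper name `repo_ready`, that checks for the media's repodata/ directory before any yum command:

```shell
# repo_ready DIR — succeed only if DIR looks like mounted CentOS media,
# i.e. it contains the repodata/ directory that yum needs.
# Hypothetical helper for illustration, not part of yum itself.
repo_ready() {
    [ -d "$1/repodata" ]
}
```

A guard such as `repo_ready /opt/centos || mount /dev/sr0 /opt/centos` could then precede any yum invocation on node1.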

2.3.2 Configuration on the other nodes

cd /etc/yum.repos.d/
rm -rf *
vi local.repo
[centos]
name=centos
baseurl=ftp://192.168.100.11/centos
gpgcheck=0
enabled=1
yum clean all
yum list

2.4 Install openssh-clients

yum install openssh-clients -y
2.5 Install NTP (on all nodes)
yum install ntp -y
On node1: vi /etc/ntp.conf
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 0.centos.pool.ntp.org iburst
server 1.centos.pool.ntp.org iburst
server 2.centos.pool.ntp.org iburst
server 3.centos.pool.ntp.org iburst
server 127.127.1.0
fudge 127.127.1.0 stratum 10
service ntpd restart
chkconfig ntpd on
On the other nodes: ntpdate node1
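A one-shot ntpdate only syncs the clock once, and HBase in particular is sensitive to clock skew. One hedged option (the interval and the /usr/sbin/ntpdate path are assumptions; adjust for your system) is a cron entry on node2 through node4:

```
# /etc/crontab fragment for node2-node4: re-sync against node1 every
# 30 minutes so the clocks do not drift apart between reboots.
*/30 * * * * root /usr/sbin/ntpdate node1 >/dev/null 2>&1
```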
Role distribution across the Hadoop cluster (NN = NameNode, DN = DataNode,
JN = JournalNode, ZK = ZooKeeper, ZKFC = ZKFailoverController,
RS = ResourceManager):

        NN    DN    JN    ZK    ZKFC    RS
node1   1                 1     1
node2   1     1     1     1     1
node3         1     1     1             1
node4         1     1                   1
3. Preparing for the Hadoop deployment

3.1 Set up passwordless SSH

node1 and node2 each need passwordless SSH to all nodes:
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
cd /root/.ssh/
scp id_dsa.pub node2:/tmp/
On node2: cat /tmp/id_dsa.pub >> /root/.ssh/authorized_keys
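The scp/cat steps above append the key blindly, so rerunning the setup duplicates entries in authorized_keys. A sketch of an idempotent merge (`merge_pubkey` is a hypothetical helper name, not part of OpenSSH):

```shell
# merge_pubkey PUBKEY_FILE AUTH_FILE
# Append the public key to AUTH_FILE only if an identical entry is not
# already present, then tighten the file permissions sshd expects.
merge_pubkey() {
    pub=$1
    auth=$2
    mkdir -p "$(dirname "$auth")"
    touch "$auth"
    grep -qxF -- "$(cat "$pub")" "$auth" || cat "$pub" >> "$auth"
    chmod 600 "$auth"
}
```

On node2 this could replace the bare cat: `merge_pubkey /tmp/id_dsa.pub /root/.ssh/authorized_keys`.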

3.2 Deploy the JDK

mkdir /home/tools
cd /home/tools
scp jdk-7u79-linux-x64.tar.gz node4:/home/tools/   (repeat for the other nodes)
mkdir /usr/java
tar zxvf jdk-7u79-linux-x64.tar.gz -C /usr/java/.
vi /etc/profile
export JAVA_HOME=/usr/java/jdk1.7.0_79
export PATH=$PATH:$JAVA_HOME/bin
source /etc/profile
java -version

4. Install ZooKeeper (node1, node2, node3)

scp zookeeper-3.4.6.tar.gz node3:/home/tools/.
tar -zxvf zookeeper-3.4.6.tar.gz -C /home/.
vi /etc/profile 
export ZOOKEEPER_HOME=/home/zookeeper-3.4.6
export PATH=$PATH:$ZOOKEEPER_HOME/bin
source /etc/profile
cd /home/zookeeper-3.4.6/conf/
cp zoo_sample.cfg zoo.cfg
vi zoo.cfg 
dataDir=/opt/zookeeper
server.1=node1:2888:3888
server.2=node2:2888:3888
server.3=node3:2888:3888 
scp -r zookeeper-3.4.6/ node2:/home/
scp -r zookeeper-3.4.6/ node3:/home/
mkdir /opt/zookeeper
cd /opt/zookeeper
vi myid
In myid write 1 on node1, 2 on node2, and 3 on node3.
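Hand-typing a different myid on each node is error-prone. A sketch (`write_myid` is a hypothetical helper) that derives the id from the server.N lines already present in zoo.cfg, so the id can never disagree with the config:

```shell
# write_myid ZOO_CFG HOST DATA_DIR
# Look up HOST in the server.N=HOST:... lines of ZOO_CFG and write the
# matching N to DATA_DIR/myid. Fails if HOST is not listed in the config.
write_myid() {
    cfg=$1
    host=$2
    dir=$3
    id=$(sed -n "s/^server\.\([0-9][0-9]*\)=${host}:.*/\1/p" "$cfg")
    [ -n "$id" ] || return 1
    mkdir -p "$dir"
    echo "$id" > "$dir/myid"
}
```

On each ZooKeeper node this could be invoked as `write_myid /home/zookeeper-3.4.6/conf/zoo.cfg "$(hostname)" /opt/zookeeper`, assuming the hostname appears exactly as in zoo.cfg.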
zkServer.sh start
zkServer.sh status
5. Deploy Hadoop

5.1 Unpack and set environment variables

tar zxvf hadoop-2.5.1-x64.tar.gz -C /home/.
vi /etc/profile 
export HADOOP_HOME=/home/hadoop-2.5.1
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
source /etc/profile

5.2 Configure the Hadoop files

cd /home/hadoop-2.5.1/etc/hadoop
vi hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_79
vi hdfs-site.xml
<configuration>
    <property>
        <name>dfs.nameservices</name>
        <value>lyl</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.lyl</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.lyl.nn1</name>
        <value>node1:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.lyl.nn2</name>
        <value>node2:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.lyl.nn1</name>
        <value>node1:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.lyl.nn2</name>
        <value>node2:50070</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://node2:8485;node3:8485;node4:8485/lyl</value>
    </property>
    <property>
        <name>dfs.client.failover.proxy.provider.lyl</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_dsa</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/opt/journal/data</value>
    </property>
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
</configuration>
vi core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://lyl</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>node1:2181,node2:2181,node3:2181</value>
    </property>
</configuration>
cp mapred-site.xml.template mapred-site.xml
vi mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
vi yarn-site.xml
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>lylyear</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm1</name>
        <value>node3</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname.rm2</name>
        <value>node4</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>node1:2181,node2:2181,node3:2181</value>
    </property>
</configuration>
vi slaves   (lists the DataNode / NodeManager nodes)
node2
node3
node4
scp -r hadoop-2.5.1/ node2:/home/   (repeat for node3 and node4)

5.3 Start Hadoop

On node2, node3, node4: hadoop-daemon.sh start journalnode
On node1:
    hdfs namenode -format
    hadoop-daemon.sh start namenode
On node2: hdfs namenode -bootstrapStandby
On node1:
    hdfs zkfc -formatZK
    start-all.sh
On node3 and node4: yarn-daemon.sh start resourcemanager
Node assignment for the Hive deployment:

        MySQL    Hive
node1     1
node2              1
node3
node4

6. Deploy Hive (single-user mode)

6.1 Install MySQL on node1

yum install mysql-server
service mysqld start
chkconfig mysqld on
chkconfig --list mysqld

mysql
use mysql
GRANT ALL PRIVILEGES ON *.* TO 'root'@'%' IDENTIFIED BY '123456' WITH GRANT OPTION;
delete from user where host != '%';
flush privileges;
mysql -u root -p

6.2 Install Hive on node2

tar zxvf apache-hive-1.2.1-bin.tar.gz -C /home/
mv apache-hive-1.2.1-bin/ hive-1.2.1
vi /etc/profile
export HIVE_HOME=/home/hive-1.2.1
export PATH=$PATH:$HIVE_HOME/bin
source /etc/profile
cd /home/hive-1.2.1/conf/
cp hive-default.xml.template hive-site.xml
vi hive-site.xml
<configuration>
    <property>
        <name>hive.metastore.warehouse.dir</name>
        <value>/user/hive_remote/warehouse</value>
    </property>
    <property>
        <name>hive.metastore.local</name>
        <value>true</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionURL</name>
        <value>jdbc:mysql://node1/hive_remote?createDatabaseIfNotExist=true</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionDriverName</name>
        <value>com.mysql.jdbc.Driver</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionUserName</name>
        <value>root</value>
    </property>
    <property>
        <name>javax.jdo.option.ConnectionPassword</name>
        <value>123456</value>
    </property>
</configuration>

cd /home/hive-1.2.1/lib/
cp /home/tools/mysql-connector-java-5.1.32-bin.jar .
cd /home/hadoop-2.5.1/share/hadoop/yarn/lib
cp /home/hive-1.2.1/lib/jline-2.12.jar .
rm -rf jline-0.9.94.jar
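The jline swap above only works if exactly one jline jar is left in the YARN lib directory. A tiny guard to verify that before launching hive (`count_jars` is a hypothetical name, not part of Hadoop or Hive):

```shell
# count_jars DIR PREFIX — count the jars in DIR whose names start with
# PREFIX; prints 0 when none match.
count_jars() {
    ls "$1"/"$2"*.jar 2>/dev/null | wc -l
}
```

For example: `[ "$(count_jars /home/hadoop-2.5.1/share/hadoop/yarn/lib jline-)" -eq 1 ] || echo "jline version conflict"`.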
hive
Role distribution for the HBase deployment:

        ZK    Master    RegionServer
node1   1       1
node2   1                    1
node3   1       1            1
node4                        1

7. Deploy HBase (fully distributed)

7.1 Passwordless SSH from node3

On node3 (generate a key first with ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa):
scp ~/.ssh/id_dsa.pub node1:/tmp/
On node1: cat /tmp/id_dsa.pub >> /root/.ssh/authorized_keys
(repeat the scp/cat pair for the other nodes)

7.2 Unpack and install HBase

tar zxvf hbase-0.98.15-hadoop2-bin.tar.gz -C /home/.  
mv hbase-0.98.15-hadoop2/ hbase-0.98
vi /etc/profile
export HBASE_HOME=/home/hbase-0.98
export PATH=$PATH:$HBASE_HOME/bin
source /etc/profile

7.3 Configure the HBase files

cd /home/hbase-0.98/conf/
vi hbase-env.sh
export JAVA_HOME=/usr/java/jdk1.7.0_79
export HBASE_MANAGES_ZK=false
vi hbase-site.xml
<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://lyl/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>node1,node2,node3</value>
    </property>
</configuration>
vi regionservers   
node2
node3
node4
vi backup-masters
node1
cp /home/hadoop-2.5.1/etc/hadoop/hdfs-site.xml  .
scp -r hbase-0.98/ node1:/home/   (repeat for node2 and node4)

7.4 Start HBase (verify time synchronization first)

zkServer.sh start      (on node1, node2, node3)
start-all.sh           (on node1)
start-hbase.sh         (on node3)
hbase shell

