Installing Hadoop, HBase, and Hive on Linux

All installation packages: https://pan.baidu.com/s/1rakXuePNojIBh9oBs2qUBg (extraction code: ZJZT)

Preface: all packages in this guide are downloaded and extracted into /root/Tools. If you use a different directory, adjust the paths in the configuration accordingly. Versions: JDK 1.8.0_102, Hadoop 2.8.5, HBase 2.1.1, MySQL 8.0.13, Hive 2.3.4.

1. Install Java

# Extract
tar zxvf jdk-8u102-linux-x64.tar.gz
# Edit the environment variables
vim /etc/profile
# Add the Java variables
export JAVA_HOME=/root/Tools/jdk1.8.0_102
export PATH=$PATH:$JAVA_HOME/bin
# Apply the changes
source /etc/profile
# Check the Java version to confirm the installation
java -version

2. Edit /etc/hosts

vim /etc/hosts
# Add the following entry
172.16.0.11 hadoop01
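A quick sanity check that the hostname now resolves (optional, not part of the original steps):

ping -c 1 hadoop01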

3. Install Hadoop

# Extract
tar zxvf hadoop-2.8.5.tar.gz
# Edit the environment variables
vim /etc/profile
# Add the Hadoop variables
export HADOOP_HOME=/root/Tools/hadoop-2.8.5
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS=-Djava.library.path=$HADOOP_HOME/lib/native
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
# Apply the changes
source /etc/profile
Edit core-site.xml (the properties go inside the <configuration> element):

<property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop01:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/root/Tools/hadoop-2.8.5/tmp</value>
</property>

Edit hdfs-site.xml:

<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
</property>
<property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
    <description>Upper bound on the number of files a DataNode serves at one time; should be at least 4096.</description>
</property>
<property>
    <name>dfs.datanode.hostname</name>
    <value>42.194.136.210</value>
</property>
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/root/Tools/hadoop-2.8.5/datanode</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/root/Tools/hadoop-2.8.5/namenode</value>
</property>
<property>
    <name>dfs.support.append</name>
    <value>true</value>
</property>
<property>
    <name>dfs.client.block.write.replace-datanode-on-failure.policy</name>
    <value>NEVER</value>
</property>

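The directories referenced above (plus hadoop.tmp.dir from core-site.xml) are not always created automatically; a small sketch to create them up front, assuming the paths used throughout this guide:

mkdir -p /root/Tools/hadoop-2.8.5/tmp
mkdir -p /root/Tools/hadoop-2.8.5/datanode
mkdir -p /root/Tools/hadoop-2.8.5/namenode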
Edit hadoop-env.sh:
export JAVA_HOME=/root/Tools/jdk1.8.0_102
export HADOOP_CONF_DIR=/root/Tools/hadoop-2.8.5/etc/hadoop/
After editing, apply the changes:
source /root/Tools/hadoop-2.8.5/etc/hadoop/hadoop-env.sh
Edit yarn-site.xml:

<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>
<property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>86400</value>
</property>
Rename mapred-site.xml.template to mapred-site.xml, then edit it:

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>

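For reference, the rename is just a copy in the config directory (paths assume the layout used throughout this guide):

cd /root/Tools/hadoop-2.8.5/etc/hadoop
cp mapred-site.xml.template mapred-site.xml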
Start and stop Hadoop:
# Format the NameNode
hdfs namenode -format
/root/Tools/hadoop-2.8.5/sbin/start-all.sh
/root/Tools/hadoop-2.8.5/sbin/stop-all.sh
# Start/stop DFS only
/root/Tools/hadoop-2.8.5/sbin/start-dfs.sh
/root/Tools/hadoop-2.8.5/sbin/stop-dfs.sh
Check the processes with jps; if the daemons are listed, Hadoop is running.

Access the web UIs on port 8088 (YARN) and port 50070 (HDFS NameNode).
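Once the daemons are up, a minimal HDFS smoke test (the /smoketest path is made up for illustration):

hadoop fs -mkdir -p /smoketest
echo hello > /tmp/hello.txt
hadoop fs -put /tmp/hello.txt /smoketest/
hadoop fs -cat /smoketest/hello.txt
hadoop fs -rm -r /smoketest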

4. Install HBase
# Extract (the HBase binary tarball, not the Hadoop one)
tar zxvf hbase-2.1.1-bin.tar.gz
# Edit the environment variables
vim /etc/profile
# Add the HBase variables
export HBASE_HOME=/root/Tools/hbase-2.1.1
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin
# Apply the changes
source /etc/profile
Edit hbase-site.xml under hbase-2.1.1/conf:

<property>
    <name>hbase.rootdir</name>
    <value>hdfs://hadoop01:9000/hbase</value>
</property>
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop01</value>
</property>
<property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
</property>
<property>
    <name>hbase.unsafe.stream.capability.enforce</name>
    <value>false</value>
</property>
<property>
    <name>hbase.zookeeper.property.dataDir</name>
    <value>/root/Tools/hbase-2.1.1/zookeeper</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>1</value>
</property>
<property>
    <name>hbase.tmp.dir</name>
    <value>/root/Tools/hbase-2.1.1/hbasetmp</value>
</property>
<property>
    <name>hbase.master.info.port</name>
    <value>60010</value>
</property>


Edit hbase-env.sh under hbase-2.1.1/conf:
export JAVA_HOME=/root/Tools/jdk1.8.0_102
export HBASE_MANAGES_ZK=true
After editing, apply the changes:
source /root/Tools/hbase-2.1.1/conf/hbase-env.sh
Since everything runs on this single machine, regionservers is left unchanged for now.

If startup fails because of a duplicate slf4j-log4j12-1.7.25.jar, delete /root/Tools/hbase-2.1.1/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar.
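If you hit that error, the removal is a single command (path taken from the note above):

rm /root/Tools/hbase-2.1.1/lib/client-facing-thirdparty/slf4j-log4j12-1.7.25.jar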

Run HBase:
# Start components individually
/root/Tools/hbase-2.1.1/bin/hbase-daemons.sh start zookeeper
/root/Tools/hbase-2.1.1/bin/hbase-daemons.sh start master
/root/Tools/hbase-2.1.1/bin/hbase-daemons.sh start regionserver
# Or start everything at once
/root/Tools/hbase-2.1.1/bin/start-hbase.sh
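Once HBase is up, a minimal smoke test in the HBase shell (the table and column family names are made up for illustration):

hbase shell
# inside the shell:
create 'smoke_test', 'cf'
put 'smoke_test', 'row1', 'cf:msg', 'hello'
scan 'smoke_test'
disable 'smoke_test'
drop 'smoke_test'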
5. Install Hive
MySQL must be installed first; it conflicts with the preinstalled MariaDB.

Check the installed MariaDB packages:

rpm -qa | grep mariadb

Remove MariaDB:

rpm -e mariadb-libs-5.5.56-2.el7.x86_64 --nodeps

The MySQL RPMs depend on each other; install them in the order common, libs, client, server.
(If one of them fails to install, a dependency is missing: install it with yum and look up the specific error message.)

sudo rpm -ivh mysql-community-common-8.0.13-1.el7.x86_64.rpm
sudo rpm -ivh mysql-community-libs-8.0.13-1.el7.x86_64.rpm
sudo rpm -ivh mysql-community-client-8.0.13-1.el7.x86_64.rpm
sudo rpm -ivh mysql-community-server-8.0.13-1.el7.x86_64.rpm

Start MySQL:

sudo systemctl start mysqld

Set the root password. (Check the log file for the line "A temporary password is generated for root@localhost: ******" to find the initial password.)

cat /var/log/mysqld.log
## Log in to MySQL with the initial password
mysql -u root -p
Enter password: <initial password>
mysql>

Next, create a user and grant it privileges.
(MySQL statements must end with a semicolon.)

mysql> ALTER USER USER() IDENTIFIED BY 'your_password';
mysql> CREATE USER 'zjzt'@'hadoop01' IDENTIFIED BY 'your_password';
mysql> create database hive_metedata;
mysql> grant all privileges on *.* to 'zjzt'@'hadoop01';
mysql> flush privileges;
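A quick check that the new account works before moving on (use the password set above; hive_metedata should appear in the list):

mysql -u zjzt -h hadoop01 -p
mysql> show databases;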
Install Hive:
# Extract
tar -zxvf apache-hive-2.3.4-bin.tar.gz
# Edit the environment variables
vim /etc/profile
# Add the Hive variables
export HIVE_HOME=/root/Tools/apache-hive-2.3.4-bin
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HBASE_HOME/bin:$HIVE_HOME/bin
# Apply the changes
source /etc/profile
# Edit hive-env.sh
vim /root/Tools/apache-hive-2.3.4-bin/conf/hive-env.sh

Set HADOOP_HOME and the related environment variables:

export HADOOP_HOME=/root/Tools/hadoop-2.8.5
export HIVE_CONF_DIR=/root/Tools/apache-hive-2.3.4-bin/conf
export HIVE_AUX_JARS_PATH=/root/Tools/apache-hive-2.3.4-bin/lib
export JAVA_HOME=/root/Tools/jdk1.8.0_102

Edit hive-site.xml (partial changes):

Replace every occurrence of ${system:java.io.tmpdir} in the file with /root/Tools/apache-hive-2.3.4-bin/tmp (create this directory if it does not exist and give it read/write permissions), and replace every ${system:user.name} with zjzt; one way to script this is sketched below.
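A sketch of both substitutions with GNU sed (back up the file first; the single quotes keep the shell from expanding the ${...} literals):

sed -i 's#${system:java.io.tmpdir}#/root/Tools/apache-hive-2.3.4-bin/tmp#g' /root/Tools/apache-hive-2.3.4-bin/conf/hive-site.xml
sed -i 's#${system:user.name}#zjzt#g' /root/Tools/apache-hive-2.3.4-bin/conf/hive-site.xml
mkdir -p /root/Tools/apache-hive-2.3.4-bin/tmp && chmod 777 /root/Tools/apache-hive-2.3.4-bin/tmp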

Also place the mysql-connector JAR into /root/Tools/apache-hive-2.3.4-bin/lib.
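For example (the exact JAR name depends on the Connector/J version you downloaded; 8.0.13 matches the MySQL server used here, but treat the filename as an assumption):

cp mysql-connector-java-8.0.13.jar /root/Tools/apache-hive-2.3.4-bin/lib/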


<property>
    <name>hive.exec.local.scratchdir</name>
    <value>/root/Tools/apache-hive-2.3.4-bin/tmp/zjzt</value>
    <description>Local scratch space for Hive jobs</description>
</property>
<property>
    <name>hive.downloaded.resources.dir</name>
    <value>/root/Tools/apache-hive-2.3.4-bin/tmp/${hive.session.id}_resources</value>
    <description>Temporary local directory for added resources in the remote file system.</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>your_password</value>
    <description>password to use against metastore database</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://hadoop01:3306/hive_metedata?createDatabaseIfNotExist=true</value>
    <description>
        JDBC connect string for a JDBC metastore.
        To use SSL to encrypt/authenticate the connection, provide database-specific SSL flag in the connection URL.
        For example, jdbc:postgresql://myhost/db?ssl=true for postgres database.
    </description>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.cj.jdbc.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
</property>

Create the HDFS directories and grant permissions:

hadoop fs -mkdir -p /user/hive/
hadoop fs -mkdir -p /user/hive/warehouse
hadoop fs -chmod 777 /user/hive/
hadoop fs -chmod 777 /user/hive/warehouse

Initialize the Hive metastore, then start Hive.
(If this errors out with a duplicate log4j-slf4j-impl-2.6.2.jar, delete log4j-slf4j-impl-2.6.2.jar from the Hive folder; see the command below.)
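The deletion as a command, assuming the JAR sits under $HIVE_HOME/lib as in a stock Hive 2.3.4 layout:

rm /root/Tools/apache-hive-2.3.4-bin/lib/log4j-slf4j-impl-2.6.2.jar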

# Initialize the metastore schema
schematool -dbType mysql -initSchema
# Start Hive
hive
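After the Hive CLI starts, a minimal HiveQL smoke test (the table name is made up for illustration):

hive> create table smoke_test (id int, name string);
hive> insert into smoke_test values (1, 'hello');
hive> select * from smoke_test;
hive> drop table smoke_test;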

Startup complete; the installation is finished.

6. Configuration changes to integrate Hive with HBase

Edit hive-env.sh to add the HBase setting:
# Add the following line
export HBASE_HOME=/root/Tools/hbase-2.1.1
# Apply the changes
source /root/Tools/apache-hive-2.3.4-bin/conf/hive-env.sh
Edit hive-site.xml to point both Hive's ZooKeeper and HBase's ZooKeeper at the cluster:

<property>
    <name>hive.zookeeper.quorum</name>
    <value>hadoop01</value>
</property>
<property>
    <name>hbase.zookeeper.quorum</name>
    <value>hadoop01</value>
</property>
Copy hive-hbase-handler-2.3.4.jar from Hive's lib directory into HBase's lib directory.

If version conflicts show up, overwrite the JARs under Hive's lib with the corresponding JARs from HBase. If other JARs need to be referenced, point hive.aux.jars.path at them:

<property>
    <name>hive.aux.jars.path</name>
    <value>file:///root/Tools/XXX.jar</value>
</property>
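With the integration in place, a Hive table can be mapped onto an HBase table through the HBaseStorageHandler. A minimal sketch in the Hive CLI (the table name hbase_demo, column family cf, and column val are made up for illustration):

CREATE TABLE hbase_demo (key int, value string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf:val")
TBLPROPERTIES ("hbase.table.name" = "hbase_demo");

INSERT INTO hbase_demo VALUES (1, 'hello');
SELECT * FROM hbase_demo;

Rows written through Hive land in the underlying HBase table, so scan 'hbase_demo' in the HBase shell should show the same data.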
