Pseudo-Distributed Installation of Hadoop 2.8.0 + HBase 1.3.1 + Hive 1.2.1 + Kylin 2.0

Test environment: CentOS 6.5 + jdk1.8.0_131

1. Install Hadoop 2.8.0

1) Download hadoop-2.8.0.

2) Extract it to /opt/app/hadoop-2.8.0.

3) The pseudo-distributed configuration files are as follows (in pseudo-distributed mode, be sure to use localhost):

vi /opt/app/hadoop-2.8.0/etc/hadoop/core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/data</value>
        <description>namenode</description>
    </property>
</configuration>

vi /opt/app/hadoop-2.8.0/etc/hadoop/hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/hadoopdata/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/hadoopdata/data</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>localhost:50090</value>
    </property>
</configuration>

vi /opt/app/hadoop-2.8.0/etc/hadoop/yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>mapreduce.jobtracker.staging.root.dir</name>
        <value>/home/hadoop/data/mapred/staging</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.staging-dir</name>
        <value>/home/hadoop/data/mapred/staging</value>
    </property>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>localhost</value>
    </property>
</configuration>

vi /opt/app/hadoop-2.8.0/etc/hadoop/mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.reduce.java.opts</name>
        <value>-Xms2000m -Xmx4600m</value>
    </property>
    <property>
        <name>mapreduce.map.memory.mb</name>
        <value>5120</value>
    </property>
    <property>
        <name>mapreduce.reduce.input.buffer.percent</name>
        <value>0.5</value>
    </property>
    <property>
        <name>mapreduce.reduce.memory.mb</name>
        <value>2048</value>
    </property>
    <property>
        <name>mapred.tasktracker.reduce.tasks.maximum</name>
        <value>2</value>
    </property>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>localhost:10020</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.staging-dir</name>
        <value>/home/hadoop/data/mapred/staging</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.intermediate-done-dir</name>
        <value>${yarn.app.mapreduce.am.staging-dir}/history/done_intermediate</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.done-dir</name>
        <value>${yarn.app.mapreduce.am.staging-dir}/history/done</value>
    </property>
</configuration>


vi  /opt/app/hadoop-2.8.0/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/opt/app/jdk1.8.0_131
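One step this guide leaves implicit: the NameNode must be formatted once before HDFS is started for the first time. A sketch of that first start plus a quick sanity check, assuming the paths configured above:

```shell
# One-time only: formatting erases any existing HDFS metadata
/opt/app/hadoop-2.8.0/bin/hdfs namenode -format

# Bring up HDFS and YARN, then confirm the daemons are running
/opt/app/hadoop-2.8.0/sbin/start-dfs.sh
/opt/app/hadoop-2.8.0/sbin/start-yarn.sh
jps   # expect NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager
```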

2. Install HBase 1.3.1

1) Download hbase-1.3.1.

2) Extract it to /opt/app/hbase-1.3.1.

3) In pseudo-distributed mode, the configuration files are as follows (using the ZooKeeper bundled with HBase):

vi /opt/app/hbase-1.3.1/conf/hbase-site.xml

<configuration>
    <property>
        <name>hbase.rootdir</name>
        <value>hdfs://localhost:9000/hbase</value>
    </property>
    <property>
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <name>hbase.zookeeper.quorum</name>
        <value>localhost</value>
    </property>
</configuration>

vi /opt/app/hbase-1.3.1/conf/regionservers

localhost
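Once HBase is configured, a quick way to verify the pseudo-distributed setup (after HDFS is up) is to start HBase and ask for its status; a sketch:

```shell
/opt/app/hbase-1.3.1/bin/start-hbase.sh
# `status` should report one active master and one region server
echo "status" | /opt/app/hbase-1.3.1/bin/hbase shell
```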
3. Install MySQL

Installation steps omitted; see for example:

http://blog.csdn.net/cuker919/article/details/46481427

Things to note:

1) First uninstall the MySQL that ships with the OS.

2) Preferably install MySQL under /usr/local/mysql; otherwise you will run into a lot of unnecessary trouble.

3) Create a hive user so that Hive can use the MySQL database.
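Step 3 might look like the following, run as the MySQL root user. The hive/hive credentials here are assumptions that must match javax.jdo.option.ConnectionUserName and ConnectionPassword in hive-site.xml:

```shell
mysql -u root -p <<'SQL'
-- latin1 avoids index-length problems with older Hive metastore schemas
CREATE DATABASE hive DEFAULT CHARACTER SET latin1;
GRANT ALL PRIVILEGES ON hive.* TO 'hive'@'localhost' IDENTIFIED BY 'hive';
FLUSH PRIVILEGES;
SQL
```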

4. Install Hive 1.2.1

1) Download hive-1.2.1.

2) Extract it to /opt/app/apache-hive-1.2.1-bin/.

3) The main configuration is as follows (note: copy the MySQL JDBC driver jar into Hive's lib directory):

 vi /opt/app/apache-hive-1.2.1-bin/conf/

export JAVA_HOME=/opt/app/jdk1.8.0_131
export HIVE_HOME=/opt/app/apache-hive-1.2.1-bin/
export HBASE_HOME=/opt/app/hbase-1.3.1
export HIVE_AUX_JARS_PATH=/opt/app/apache-hive-1.2.1-bin/lib
export HIVE_CLASSPATH=/opt/app/apache-hive-1.2.1-bin/conf
Fragment: vi /opt/app/apache-hive-1.2.1-bin/conf/hive-site.xml

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>com.mysql.jdbc.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>hive</value>
  <description>username to use against metastore database</description>
</property>
<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>hive</value>
  <description>password to use against metastore database</description>
</property>
<property>
  <name>hive.exec.scratchdir</name>
  <value>/tmp/hive</value>
</property>
<property>
  <name>hive.exec.local.scratchdir</name>
  <value>/home/hive/iotmp</value>
</property>
<property>
  <name>hive.downloaded.resources.dir</name>
  <value>/home/hive/iotmp</value>
</property>
<property>
  <name>hive.metastore.warehouse.dir</name>
  <value>/user/hive/warehouse</value>
</property>
<property>
  <name>hive.querylog.location</name>
  <value>/home/hive/iotmp</value>
  <description>Location of Hive run time structured log file</description>
</property>
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://localhost:9083</value>
</property>
<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:mysql://localhost:3306/hive</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>
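With hive-site.xml in place, the metastore schema can be initialized in MySQL explicitly and the setup smoke-tested; a sketch (Hive 1.2 can also auto-create the schema on first use):

```shell
# Initialize the metastore tables in the `hive` MySQL database
/opt/app/apache-hive-1.2.1-bin/bin/schematool -dbType mysql -initSchema

# Quick check once the metastore service is running (started in the final section)
hive -e "show databases;"
```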
4) Upload Hive's lib directory to HDFS:

 hadoop fs -put /opt/app/apache-hive-1.2.1-bin/lib/* /opt/app/apache-hive-1.2.1-bin/lib/

5. Install Kylin 2.0

1) Download apache-kylin-2.0.0-bin.

2) Extract it to /opt/app/apache-kylin-2.0.0-bin/.

3) The main configuration changes are as follows:

vi /opt/app/apache-kylin-2.0.0-bin/bin/find-hive-dependency.sh

hive_conf_path=$HIVE_HOME/conf
hive_exec_path=$HIVE_HOME/lib/hive-exec-1.2.1.jar

vi /opt/app/apache-kylin-2.0.0-bin/bin/kylin.sh

Modify HBASE_CLASSPATH_PREFIX to include hive_dependency:

export HBASE_CLASSPATH_PREFIX=${KYLIN_HOME}/conf:${KYLIN_HOME}/lib/*:${KYLIN_HOME}/ext/*:${hive_dependency}:${HBASE_CLASSPATH_PREFIX}
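Before the first start, Kylin's bundled environment checker can confirm that the Hadoop, Hive, and HBase dependencies are visible to it:

```shell
/opt/app/apache-kylin-2.0.0-bin/bin/check-env.sh
```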
6. Configure /etc/profile

## set java
export JAVA_HOME=/opt/app/jdk1.8.0_131
PATH=$PATH:$JAVA_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:/opt/app/hadoop-2.8.0/etc/hadoop/mapred-site.xml
JRE_HOME=$JAVA_HOME/jre
export HADOOP_HOME=/opt/app/hadoop-2.8.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export PATH=$PATH:$HADOOP_HOME/lib
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export HIVE_HOME=/opt/app/apache-hive-1.2.1-bin
export HCAT_HOME=$HIVE_HOME/hcatalog
export HIVE_CONF=$HIVE_HOME/conf
PATH=$PATH:$HIVE_HOME/bin:$PATH
export HBASE_HOME=/opt/app/hbase-1.3.1
PATH=$PATH:$HBASE_HOME/bin:$PATH
#export HIVE_CONF=/opt/app/apache-hive-1.2.1-bin/conf
#PATH=$PATH:$HIVE_HOME/bin:$PATH
export KYLIN_HOME=/opt/app/apache-kylin-2.0.0-bin
PATH=$PATH:$KYLIN_HOME/bin:$PATH
#export KYLIN_HOME=/opt/app/kylin/
Run source /etc/profile to apply the changes.

7. Configure /etc/hosts

First check the hostname:

[root@CentOS65x64 mysql]# hostname
CentOS65x64.localdomain

Map the hostname (CentOS65x64.localdomain) to 127.0.0.1; otherwise, in pseudo-distributed mode, ZooKeeper may fail to start.
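For example, assuming the hostname above, the /etc/hosts entry would be:

```
127.0.0.1   localhost CentOS65x64.localdomain CentOS65x64
```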


8. Configuration complete; start everything

service mysql start
/opt/app/hadoop-2.8.0/sbin/start-all.sh
/opt/app/hadoop-2.8.0/sbin/mr-jobhistory-daemon.sh start historyserver
/opt/app/hbase-1.3.1/bin/start-hbase.sh
nohup hive --service metastore > /home/hive/metastore.log 2>&1 &
/opt/app/apache-kylin-2.0.0-bin/bin/kylin.sh start

Do not forget to start the Hive metastore service (the nohup line above); otherwise, when Kylin builds a cube, it will fail with an error that hive-meta-1.2.1.jar cannot be found.
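As a rough post-start check (the exact process list depends on the environment), jps should show the daemons of each component:

```shell
jps
# Expected (roughly):
#   NameNode, DataNode, SecondaryNameNode   - HDFS
#   ResourceManager, NodeManager            - YARN
#   JobHistoryServer                        - MapReduce history server
#   HMaster, HRegionServer, HQuorumPeer     - HBase + bundled ZooKeeper
#   RunJar                                  - Hive metastore
```

Kylin's web UI should then be reachable at http://localhost:7070/kylin (default account ADMIN / KYLIN).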


9. Shutdown

/opt/app/apache-kylin-2.0.0-bin/bin/kylin.sh stop
/opt/app/hadoop-2.8.0/sbin/stop-all.sh
/opt/app/hbase-1.3.1/bin/stop-hbase.sh

For the remaining processes, check them with jps and terminate them with kill -9 <pid>.
