Notes on installing the Hadoop stack

The original post was published on a personal site more than three years ago and has now been migrated here.

We installed 5.1.0 at the time; the latest release is now 5.4.2, but since online services run on this cluster we are holding off on upgrading for now.

CDH

Downloading and installing each Hadoop component separately (HDFS + YARN + HBase + ZooKeeper + Hive) is tedious, and a poorly chosen mix of versions can conflict. CDH ships pre-selected, compatible versions with complete installation and upgrade documentation, which is very convenient.

  • 5.1.0 component version information

  • 5.1.0 installation guide

  • Upgrade guide

Pre-installation setup

The official procedure breaks down into roughly the following three steps:

  • 1. Configure the CDH yum repository and install the packages via yum

  • 2. Configure the network / HDFS / YARN, etc.

  • 3. Install the remaining components, such as HBase / Hive

Configure the yum repository

wget http://archive.cloudera.com/cdh5/one-click-install/redhat/5/x86_64/cloudera-cdh-5-0.x86_64.rpm

sudo yum --nogpgcheck localinstall cloudera-cdh-5-0.x86_64.rpm  # install the rpm; this adds a Cloudera yum repo

yum clean all && yum makecache  # rebuild the yum cache

sudo rpm --import http://archive.cloudera.com/cdh5/redhat/5/x86_64/cdh/RPM-GPG-KEY-cloudera  # import the GPG signing key
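Once the rpm is installed, the new repository can be confirmed with a quick check (not part of the official steps):

yum repolist enabled | grep -i cloudera   # should show the Cloudera CDH repo added by the one-click-install rpm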

Possible problems:

1. Running yum may fail with an error like:

It's possible that the above module doesn't match the current version of Python, which is: 2.7.3 (default, May 19 2014, 15:04:50) [GCC 4.1.2 20080704 (Red Hat 4.1.2-46)]

Fix it by pointing yum at the Python version it was built against:

Edit the file: vim /usr/bin/yum

Change the shebang: #!/usr/bin/python  =>  #!/usr/bin/python2.4

2. The host command is missing; install bind-utils: yum install bind-utils
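A minimal non-interactive version of the shebang fix above, assuming the stock /usr/bin/yum header:

sudo sed -i '1s|^#!/usr/bin/python$|#!/usr/bin/python2.4|' /usr/bin/yum   # rewrite only the first line of the yum script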

Install the JDK

yum -y install unzip

curl -L -s get.jenv.io | bash

source /home/admin/.jenv/bin/jenv-init.sh

jenv install java 1.7.0_45

The JDK is installed under the USER account, but the CDH services run under their own dedicated accounts (for example hdfs), so they fail to find JAVA_HOME. Possible fixes:

  • Add to /etc/sudoers: Defaults env_keep+=JAVA_HOME

  • Point root's JAVA_HOME at the JDK under USER's home (the two commands below); USER's home directory also needs to be made traversable

  • Another option is to set JAVA_HOME in /etc/default/bigtop-utils (plus chmod 755 /home/USER); a sketch follows the commands below


export JAVA_HOME=/home/USER/.jenv/candidates/java/current

chmod 755 /home/USER/
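A minimal sketch of the bigtop-utils variant from the last bullet, using the same jenv path as above (adjust it if your JDK lives elsewhere; the restart line is only an example of picking up the change):

echo 'export JAVA_HOME=/home/USER/.jenv/candidates/java/current' | sudo tee -a /etc/default/bigtop-utils
sudo service hadoop-hdfs-namenode restart   # example: restart whichever CDH service was failing to find JAVA_HOME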

HDFS

Installation and configuration

NameNode and Client


sudo yum install hadoop-hdfs-namenode

sudo yum install hadoop-client

Install the DataNodes


Run on each DataNode machine:

sudo yum install hadoop-yarn-nodemanager hadoop-hdfs-datanode hadoop-mapreduce

Set up the Hadoop config files in their own directory

sudo cp -r /etc/hadoop/conf.empty /etc/hadoop/conf.my_cluster

sudo /usr/sbin/alternatives --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.my_cluster 50

sudo /usr/sbin/alternatives --set hadoop-conf /etc/hadoop/conf.my_cluster

sudo chmod -R 777 /etc/hadoop/conf.my_cluster

(alternatives --config java does not seem to take effect)
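To confirm which configuration directory is active after these steps (a quick check, not part of the official docs):

sudo /usr/sbin/alternatives --display hadoop-conf   # /etc/hadoop/conf.my_cluster should be listed as the current link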

Create the data directory (owner hdfs:hdfs, permissions 700):

On the DataNode: sudo mkdir -p /data/hadoop/hdfs/dn

sudo chown -R hdfs:hdfs /data/hadoop
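The two commands above create and chown the directory but do not apply the 700 permission mentioned; a small follow-up, assuming a freshly created, empty directory:

sudo chmod 700 /data/hadoop/hdfs/dn   # restrict the DataNode data dir to the hdfs user, matching the 700 noted above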

hadoop-env.sh

By default Hadoop gives both the NameNode and the DataNode a 1 GB heap; we raise them here:

export HADOOP_NAMENODE_OPTS="$HADOOP_NAMENODE_OPTS -Xmx3072m -verbose:gc -Xloggc:/var/log/hadoop-hdfs/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

export HADOOP_DATANODE_OPTS="$HADOOP_DATANODE_OPTS -Xmx2048m -verbose:gc -Xloggc:/var/log/hadoop-hdfs/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps"
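After restarting the daemons, the new heap settings can be verified on the running JVMs (a simple sanity check, not from the official docs):

ps -ef | grep -E 'NameNode|DataNode' | grep -o -- '-Xmx[0-9]*m'   # should print -Xmx3072m and -Xmx2048m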

core-site.xml



<property>
  <name>fs.defaultFS</name>
  <value>hdfs://cdhhadoop1:8020</value>
</property>

<property>
  <name>fs.trash.interval</name>
  <value>1440</value>
</property>

<property>
  <name>fs.trash.checkpoint.interval</name>
  <value>0</value>
</property>

<property>
  <name>io.compression.codecs</name>
  <value>org.apache.hadoop.io.compress.DefaultCodec,org.apache.hadoop.io.compress.GzipCodec,org.apache.hadoop.io.compress.BZip2Codec,org.apache.hadoop.io.compress.SnappyCodec</value>
</property>
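Once this core-site.xml is distributed to a client node and the NameNode is running, a quick sanity check (/tmp/trash-test is just a hypothetical path):

hadoop fs -ls /                   # resolves the root through hdfs://cdhhadoop1:8020
hadoop fs -touchz /tmp/trash-test
hadoop fs -rm /tmp/trash-test     # with fs.trash.interval=1440 the file is moved to .Trash instead of being deleted immediately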

Configure hdfs-site.xml



<property>
  <name>dfs.permissions.superusergroup</name>
  <value>admin</value>
</property>

<property>
  <name>dfs.replication</name>
  <value>2</value>
</property>
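With dfs.replication=2, a quick way to confirm newly written files actually get two replicas once the cluster is up (/tmp/replication-test is a hypothetical path):

sudo -u hdfs hdfs dfs -put /etc/hosts /tmp/replication-test
sudo -u hdfs hdfs fsck /tmp/replication-test -files -blocks   # the reported replication factor should be 2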



HBase

hbase-site.xml

<property>
  <name>hbase.regionserver.checksum.verify</name>
  <value>false</value>
  <description>
    If set to true, HBase will read data and then verify checksums for
    hfile blocks. Checksum verification inside HDFS will be switched off.
    If the hbase-checksum verification fails, then it will switch back to
    using HDFS checksums.
  </description>
</property>

<property>
  <name>hbase.hstore.checksum.algorithm</name>
  <value>NULL</value>
  <description>
    Name of an algorithm that is used to compute checksums. Possible values
    are NULL, CRC32, CRC32C.
  </description>
</property>

Start the services


service hbase-master start

service hbase-regionserver start

Testing


60010 is the master's web UI port: http://localhost:60010/master-status?filter=all

60030 is the RegionServer's port

Test whether the HBase cluster supports Snappy:

hbase org.apache.hadoop.hbase.util.CompressionTest hdfs://namenode:8020/benchmarks/hbase snappy

Access HBase through the hbase shell
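A minimal smoke test through the hbase shell (the table name smoke_test and column family cf are just examples):

hbase shell <<'EOF'
create 'smoke_test', 'cf'
put 'smoke_test', 'row1', 'cf:a', 'value1'
scan 'smoke_test'
disable 'smoke_test'
drop 'smoke_test'
EOF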

Hive

Installation and configuration

Install hive / hive-metastore / hive-server2


sudo yum install -y hive

sudo yum install -y hive-metastore

sudo yum install -y hive-server2

mysql-connector-java.jar


On the metastore machine, put mysql-connector-java.jar into /usr/lib/hive/lib/
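One way to get the jar there, assuming the distro package is available (the /usr/share/java path is where the RPM installs it; adjust if you downloaded the jar manually):

sudo yum install -y mysql-connector-java
sudo ln -s /usr/share/java/mysql-connector-java.jar /usr/lib/hive/lib/mysql-connector-java.jar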

Java heap configuration

We use a 3 GB heap.


The official documentation is wrong here; the actual config file is /etc/hive/conf/hive-env.sh:

if [ "$SERVICE" = "hiveserver2或者metastore" ]; then

  export HADOOP_OPTS="${HADOOP_OPTS} -Xmx3072m -Xms1024m -Xloggc:/var/log/hive/gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps"

fi

export HADOOP_HEAPSIZE=512
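After restarting the two services, the heap flag on the running processes can be double-checked (a simple sanity check, not from the official docs):

ps -ef | grep -E 'HiveServer2|HiveMetaStore' | grep -o -- '-Xmx[0-9]*m'   # should print -Xmx3072m for both JVMs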

Metastore configuration (config file: hive-site.xml)

Reference

Configure HDFS on the metastore host


First initialize the HDFS config, then copy the latest config over from the NameNode: scp /etc/hadoop/conf.my_cluster/hdfs-site.xml /etc/hadoop/conf.my_cluster/core-site.xml host:/etc/hadoop/conf.my_cluster/

hiveserver2 configuration (config file: /etc/hive/conf/hive-site.xml)

The main settings are the metastore address and the ZooKeeper quorum:


<property>
  <name>hive.support.concurrency</name>
  <description>Enable Hive's Table Lock Manager Service</description>
  <value>true</value>
</property>

<property>
  <name>hive.zookeeper.quorum</name>
  <description>Zookeeper quorum used by Hive's Table Lock Manager</description>
  <value>A,B,C</value>
</property>

<property>
  <name>hive.metastore.local</name>
  <value>false</value>
</property>

<property>
  <name>hive.metastore.uris</name>
  <value>thrift://xxxxx:9083</value>
</property>

Start the services


sudo /sbin/service hive-metastore start

sudo /sbin/service hive-server2 start

Testing

  • 1. /usr/lib/hive/bin/beeline

  • 2. !connect jdbc:hive2://localhost:10000 username password org.apache.hive.jdbc.HiveDriver

    or: !connect jdbc:hive2://10.241.52.161:10000 username password org.apache.hive.jdbc.HiveDriver

  • 3. show tables;
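The same connection can also be scripted non-interactively (host, username, and password are the placeholders from the steps above):

/usr/lib/hive/bin/beeline -u jdbc:hive2://localhost:10000 -n username -p password -e 'show tables;'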

Hive server-side logs are in /var/log/hive.

Hive shell (client) logs go to /tmp/admin/hive.log. An exception caused by a config mistake once went untraced for a long while because we could not find the log; it turned out the path is set by the log4j config under /etc/hive/conf.
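If the client log should go somewhere more predictable, the location is controlled by hive.log.dir in that log4j file (property names as in the stock CDH hive-log4j.properties; adjust if yours differ):

grep -n 'hive.log' /etc/hive/conf/hive-log4j.properties   # shows hive.log.dir and hive.log.file
# e.g. set hive.log.dir=/var/log/hive to keep client logs next to the server logs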

References

  • Platform base: HDFS + MR

  • Hive setup & testing

  • Architecture practice: from Hadoop to Spark
