Compiling hadoop-2.6.0-cdh5.7.0 from Source with Compression Support, plus Pseudo-Distributed Deployment

1. Requirements and Design

1.1 Requirements

A Hadoop cluster deployed straight from the stock hadoop-2.6.0-cdh5.7.0.tar.gz package does not support file compression, which is unacceptable in production. The Hadoop source therefore needs to be downloaded and recompiled with compression support.

1.2 Outline Design

Download the Hadoop source and build it with Maven so that it supports compression, then verify the compression support with a successful pseudo-distributed deployment.

2. Environment Requirements and Deployment Plan

2.1 Hardware Environment
One CentOS 6.x virtual machine

2.2 Software Environment

Component                            Baidu Netdisk link
hadoop-2.6.0-cdh5.7.0-src.tar.gz     https://pan.baidu.com/s/1uRMGIhLSL9QHT-Ee4F16jw  (extraction code: jb1d)
jdk-7u80-linux-x64.tar.gz            https://pan.baidu.com/s/1xSCQ8rjABVI-zDFQS5nCPA  (extraction code: lfze)
apache-maven-3.3.9-bin.tar.gz        https://pan.baidu.com/s/1ddkdkLW7r7ahFZmgACGkVw  (extraction code: fdfz)
protobuf-2.5.0.tar.gz                https://pan.baidu.com/s/1RSNZGd_ThwknMB3vDkEfhQ  (extraction code: hvc2)

Note:
1. The JDK used for compilation must be 1.7; JDK 1.8 causes the build to fail (a pitfall I hit myself).

3. Installing CentOS

Refer to "Installing CentOS 6.x in a VM and configuring the host and network".

4. Compiling Hadoop

4.1 Install the required dependencies

[root@hadoop001 ~]# yum install -y svn ncurses-devel
[root@hadoop001 ~]# yum install -y gcc gcc-c++ make cmake
[root@hadoop001 ~]# yum install -y openssl openssl-devel svn ncurses-devel zlib-devel libtool
[root@hadoop001 ~]# yum install -y snappy snappy-devel bzip2 bzip2-devel lzo lzo-devel lzop autoconf automake cmake 
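
An optional sanity check (a sketch, assuming a stock rpm-based CentOS 6.x) that the native codec libraries Hadoop links against actually landed:

[root@hadoop001 ~]# rpm -q snappy snappy-devel bzip2-devel zlib-devel lzo lzo-devel openssl-devel
# each package should print a name-version-release line rather than "is not installed"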

4.2 Add a user and upload the software

[root@hadoop001 ~]# yum install -y lrzsz
[root@hadoop001 ~]# useradd hadoop
[root@hadoop001 ~]# su - hadoop
[hadoop@hadoop001 ~]$ mkdir app soft source lib data maven_repo shell mysql
[hadoop@hadoop001 ~]$ cd soft/
[hadoop@hadoop001 soft]$ rz

[hadoop@hadoop001 soft]$ ll
total 202192
-rw-r--r--. 1 hadoop hadoop   8491533 Apr  7 11:25 apache-maven-3.3.9-bin.tar.gz
-rw-r--r--. 1 hadoop hadoop  42610549 Apr  6 16:55 hadoop-2.6.0-cdh5.7.0-src.tar.gz
-rw-r--r--. 1 hadoop hadoop 153530841 Apr  7 11:12 jdk-7u80-linux-x64.tar.gz
-rw-r--r--. 1 hadoop hadoop   2401901 Apr  7 11:31 protobuf-2.5.0.tar.gz

4.3 Install the JDK

  • Extract the archive. The install directory must be /usr/java, and remember to change the owner to root after extracting.

    [hadoop@hadoop001 soft]$ exit
    [root@hadoop001 ~]# mkdir /usr/java
    [root@hadoop001 ~]# tar -zxvf /home/hadoop/soft/jdk-7u80-linux-x64.tar.gz -C /usr/java
    [root@hadoop001 ~]# cd /usr/java/
    [root@hadoop001 java]# chown -R root:root jdk1.7.0_80

  • Add environment variables

     [root@hadoop001 jdk1.7.0_80]# vim /etc/profile 
     # Append the following two lines
     export JAVA_HOME=/usr/java/jdk1.7.0_80
     export PATH=$JAVA_HOME/bin:$PATH
     [root@hadoop001 jdk1.7.0_80]# source /etc/profile
     # Verify that Java was installed successfully
     [root@hadoop001 jdk1.7.0_80]# java -version
     java version "1.7.0_80"
     Java(TM) SE Runtime Environment (build 1.7.0_80-b15)
     Java HotSpot(TM) 64-Bit Server VM (build 24.80-b11, mixed mode)
    

4.4 Install Maven

  • Extract

    [root@hadoop001 ~]# su - hadoop
    [hadoop@hadoop001 ~]$ tar -zxvf ~/soft/apache-maven-3.3.9-bin.tar.gz -C ~/app/

  • Add environment variables
    # Edit the hadoop user's environment variables

     [hadoop@hadoop001 ~]$ vim ~/.bash_profile
     # Add or modify the following lines; MAVEN_OPTS sets the memory available to Maven, preventing build failures caused by too small a heap
     export MAVEN_HOME=/home/hadoop/app/apache-maven-3.3.9
     export MAVEN_OPTS="-Xms1024m -Xmx1024m"
     export PATH=$MAVEN_HOME/bin:$PATH
     [hadoop@hadoop001 ~]$ source ~/.bash_profile
     [hadoop@hadoop001 ~]$ which mvn
     ~/app/apache-maven-3.3.9/bin/mvn
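
     Because the build requires JDK 1.7 (see the note in 2.2), it is worth confirming that Maven resolves to it as well; mvn -version prints the Java version Maven will compile with:

     [hadoop@hadoop001 ~]$ mvn -version
     # expect Apache Maven 3.3.9 and Java version: 1.7.0_80; if 1.8.x shows up, fix JAVA_HOME first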
    
  • Configure Maven

     [hadoop@hadoop001 ~]$ vim ~/app/apache-maven-3.3.9/conf/settings.xml
     # Set Maven's local repository location; note that the stock settings.xml shows these
     # elements inside XML comments, so place yours outside the <!-- --> markers
     <localRepository>/home/hadoop/maven_repo/repo</localRepository>
     # Add the Aliyun mirror of Maven Central; it must sit between <mirrors> and </mirrors>
     <mirror>
          <id>nexus-aliyun</id>
          <mirrorOf>central</mirrorOf>
          <name>Nexus aliyun</name>
          <url>http://maven.aliyun.com/nexus/content/groups/public</url>
     </mirror>
    

(Optional) Preload jars into the local repository; on a slow network the first mvn build can take an extremely long time to download dependencies, or even fail.

# Link to the jar bundle
Link: https://pan.baidu.com/s/1vq4iVFqqyJNkYzg90bVrfg 
Extraction code: vugv 
# After downloading, upload it with rz and extract; mind the directory layout
[hadoop@hadoop001 maven_repo]$ rz
[hadoop@hadoop001 maven_repo]$ tar -zxvf repo.tar.gz 
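
A quick check (layout assumed from the bundle) that the extracted tree sits exactly where settings.xml's localRepository points:

[hadoop@hadoop001 maven_repo]$ ls ~/maven_repo/repo | head
# you should see top-level group directories such as org and com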

4.5 Install protobuf

  • Extract

     [hadoop@hadoop001 ~]$ tar -zxvf ~/soft/protobuf-2.5.0.tar.gz -C ~/app/
    
  • Build the software

     [hadoop@hadoop001 protobuf-2.5.0]$ cd ~/app/protobuf-2.5.0/
     # --prefix specifies the directory where the compiled software will be installed
     [hadoop@hadoop001 protobuf-2.5.0]$ ./configure  --prefix=/home/hadoop/app/protobuf-2.5.0
     # Compile and install
     [hadoop@hadoop001 protobuf-2.5.0]$ make
     [hadoop@hadoop001 protobuf-2.5.0]$ make install
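     # Optionally, protobuf ships its own test suite; running it is slow but a reasonable
     # sanity check of the toolchain (a suggestion, not required by the Hadoop build)
     [hadoop@hadoop001 protobuf-2.5.0]$ make check
     # look for an all-tests-passed summary before relying on this protoc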
    
  • Add environment variables

     [hadoop@hadoop001 protobuf-2.5.0]$ vim ~/.bash_profile
     # Append the following two lines; the bin directory only exists after the build and install
     export PROTOBUF_HOME=/home/hadoop/app/protobuf-2.5.0
     export PATH=$PROTOBUF_HOME/bin:$PATH
     [hadoop@hadoop001 protobuf-2.5.0]$ source ~/.bash_profile 
     # Verify it took effect; output of libprotoc 2.5.0 means success
     [hadoop@hadoop001 protobuf-2.5.0]$ protoc --version
     libprotoc 2.5.0
    

4.6 Compile Hadoop

  • Extract

     [hadoop@hadoop001 protobuf-2.5.0]$ tar -zxvf ~/soft/hadoop-2.6.0-cdh5.7.0-src.tar.gz -C ~/source/
    

    Compile Hadoop so that it supports native compression: mvn clean package -Pdist,native -DskipTests -Dtar

     # Change into the Hadoop source directory
     [hadoop@hadoop001 ~]$ cd ~/source/hadoop-2.6.0-cdh5.7.0/
     # Run the build; the first build downloads many dependency jars, so the duration depends
     # on network speed; be patient (my own run took 37:39, as shown below)
     [hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ mvn clean package -Pdist,native -DskipTests -Dtar
    
  • If the build fails with an error whose key message looks like the following (skip this step if there is no error):
    [FATAL] Non-resolvable parent POM for org.apache.hadoop:hadoop-main:2.6.0-cdh5.7.0: Could not transfer artifact com.cloudera.cdh:cdh-root:pom:5.7.0 from/to cdh.repo (https://repository.cloudera.com/artifactory/cloudera-repos): Remote host closed connectio
    # Analysis: the file https://repository.cloudera.com/artifactory/cloudera-repos/com/cloudera/cdh/cdh-root/5.7.0/cdh-root-5.7.0.pom could not be downloaded, even though the VM could ping the remote repository; puzzling.
    # Solution: change into the target directory inside the local repository and fetch the file with wget (see the sketch below), then rerun the build command; alternatively, perform the optional step in 4.4 and drop the required jars straight into the local repository.
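
    A sketch of that wget workaround, assuming the localRepository path configured in settings.xml above:

     [hadoop@hadoop001 ~]$ mkdir -p ~/maven_repo/repo/com/cloudera/cdh/cdh-root/5.7.0
     [hadoop@hadoop001 ~]$ cd ~/maven_repo/repo/com/cloudera/cdh/cdh-root/5.7.0
     [hadoop@hadoop001 5.7.0]$ wget https://repository.cloudera.com/artifactory/cloudera-repos/com/cloudera/cdh/cdh-root/5.7.0/cdh-root-5.7.0.pom
     # then rerun: mvn clean package -Pdist,native -DskipTests -Dtar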

  • Inspect the compiled package: hadoop-2.6.0-cdh5.7.0.tar.gz

     # A BUILD SUCCESS line means the build succeeded
     [INFO] Apache Hadoop Scheduler Load Simulator ............. SUCCESS [ 13.592 s]
     [INFO] Apache Hadoop Tools Dist ........................... SUCCESS [ 12.042 s]
     [INFO] Apache Hadoop Tools ................................ SUCCESS [  0.094 s]
     [INFO] Apache Hadoop Distribution ......................... SUCCESS [01:49 min]
     [INFO] ------------------------------------------------------------------------
     [INFO] BUILD SUCCESS
     [INFO] ------------------------------------------------------------------------
     [INFO] Total time: 37:39 min
     [INFO] Finished at: 2019-04-07T16:48:42+08:00
     [INFO] Final Memory: 200M/989M
     [INFO] ------------------------------------------------------------------------
     [hadoop@hadoop001 hadoop-2.6.0-cdh5.7.0]$ ll /home/hadoop/source/hadoop-2.6.0-cdh5.7.0/hadoop-dist/target/
     total 564036
     drwxrwxr-x. 2 hadoop hadoop      4096 Apr  7 16:46 antrun
     drwxrwxr-x. 3 hadoop hadoop      4096 Apr  7 16:46 classes
     -rw-rw-r--. 1 hadoop hadoop      1998 Apr  7 16:46 dist-layout-stitching.sh
     -rw-rw-r--. 1 hadoop hadoop       690 Apr  7 16:47 dist-tar-stitching.sh
     drwxrwxr-x. 9 hadoop hadoop      4096 Apr  7 16:47 hadoop-2.6.0-cdh5.7.0
     -rw-rw-r--. 1 hadoop hadoop 191880143 Apr  7 16:47 hadoop-2.6.0-cdh5.7.0.tar.gz
     -rw-rw-r--. 1 hadoop hadoop      7314 Apr  7 16:47 hadoop-dist-2.6.0-cdh5.7.0.jar
     -rw-rw-r--. 1 hadoop hadoop 385618309 Apr  7 16:48 hadoop-dist-2.6.0-cdh5.7.0-javadoc.jar
     -rw-rw-r--. 1 hadoop hadoop      4855 Apr  7 16:47 hadoop-dist-2.6.0-cdh5.7.0-sources.jar
     -rw-rw-r--. 1 hadoop hadoop      4855 Apr  7 16:47 hadoop-dist-2.6.0-cdh5.7.0-test-sources.jar
     drwxrwxr-x. 2 hadoop hadoop      4096 Apr  7 16:47 javadoc-bundle-options
     drwxrwxr-x. 2 hadoop hadoop      4096 Apr  7 16:47 maven-archiver
     drwxrwxr-x. 3 hadoop hadoop      4096 Apr  7 16:46 maven-shared-archive-resources
     drwxrwxr-x. 3 hadoop hadoop      4096 Apr  7 16:46 test-classes
     drwxrwxr-x. 2 hadoop hadoop      4096 Apr  7 16:46 test-dir
    

5. Pseudo-Distributed Deployment

5.1 Extract the package

[hadoop@hadoop001 ~]$ cp /home/hadoop/source/hadoop-2.6.0-cdh5.7.0/hadoop-dist/target/hadoop-2.6.0-cdh5.7.0.tar.gz /home/hadoop/soft/
[hadoop@hadoop001 ~]$ cd ~
[hadoop@hadoop001 ~]$ tar -xzvf ~/soft/hadoop-2.6.0-cdh5.7.0.tar.gz -C ~/app/

5.2 Configure environment variables

[hadoop@hadoop001 ~]$ vim ~/.bash_profile 
export HADOOP_HOME=/home/hadoop/app/hadoop-2.6.0-cdh5.7.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
[hadoop@hadoop001 ~]$ source ~/.bash_profile 
[hadoop@hadoop001 ~]$ which hadoop
~/app/hadoop-2.6.0-cdh5.7.0/bin/hadoop
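
A quick sanity check that the shell now resolves to the freshly compiled build:

[hadoop@hadoop001 ~]$ hadoop version
# the first line of output should read: Hadoop 2.6.0-cdh5.7.0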

5.3 Configure SSH

[hadoop@hadoop001 ~]$ rm -rf ~/.ssh
[hadoop@hadoop001 ~]$ ssh-keygen   # then press Enter three times to accept the defaults
[hadoop@hadoop001 ~]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
[hadoop@hadoop001 ~]$ chmod 600 ~/.ssh/authorized_keys
# Test that SSH works; the first ssh prompts whether to continue connecting, answer yes. On success the date is printed
[hadoop@hadoop001 ~]$ ssh hadoop001 date 

5.4 Edit the configuration files

  1. Edit hadoop-env.sh

    [hadoop@hadoop001 ~]$ vim ~/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/hadoop-env.sh 
    # Change JAVA_HOME to the absolute path of the JDK install
    export JAVA_HOME=/usr/java/jdk1.7.0_80
    # Change where Hadoop stores its daemon pid files; if left unchanged, the default is under /tmp
    export HADOOP_PID_DIR=/home/hadoop/data/tmp
    [hadoop@hadoop001 ~]$ mkdir -p ~/data/tmp
    
  2. Edit core-site.xml

    [hadoop@hadoop001 ~]$ vim ~/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/core-site.xml 
    # Add the following configuration
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop001:9000</value>
    </property>
    # The next property is mandatory: without it the metadata lives under the default
    # /tmp/hadoop-hadoop/dfs/name, which may be wiped and leave the NameNode unable to start
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/data/tmp/hadoop-${user.name}</value>
    </property>
    
  3. Edit hdfs-site.xml

    [hadoop@hadoop001 ~]$ vim ~/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/hdfs-site.xml 
    # Add the following configuration
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>hadoop001:50090</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.https-address</name>
        <value>hadoop001:50091</value>
    </property>
    
  4. Configure the DataNode host (slaves file)

    [hadoop@hadoop001 ~]$ vim ~/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/slaves
    # Add the following single line
    hadoop001
    
  5. Edit mapred-site.xml

    [hadoop@hadoop001 ~]$ cp ~/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/mapred-site.xml.template ~/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/mapred-site.xml
    [hadoop@hadoop001 ~]$ vim ~/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/mapred-site.xml

    # Add the following configuration
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    
  6. Edit yarn-site.xml

    [hadoop@hadoop001 ~]$ vim ~/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop/yarn-site.xml 
    # Add the following configuration
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>0.0.0.0:8098</value>
    </property>

5.5 Format the NameNode

[hadoop@hadoop001 ~]$ hdfs namenode -format
# If "has been successfully formatted" appears, the format succeeded
19/04/07 17:42:31 INFO namenode.FSImage: Allocated new BlockPoolId: BP-565897555-192.168.175.135-1554630151139
19/04/07 17:42:31 INFO common.Storage: Storage directory /tmp/hadoop-hadoop/dfs/name has been successfully formatted.
19/04/07 17:42:32 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
19/04/07 17:42:32 INFO util.ExitUtil: Exiting with status 0
19/04/07 17:42:32 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at hadoop001/192.168.175.135
************************************************************/

5.6 Start Hadoop

[hadoop@hadoop001 ~]$ start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh
Starting namenodes on [hadoop001]
hadoop001: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop001.out
hadoop001: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop001.out
Starting secondary namenodes [hadoop001]
hadoop001: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-secondarynamenode-hadoop001.out
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-resourcemanager-hadoop001.out
hadoop001: starting nodemanager, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/yarn-hadoop-nodemanager-hadoop001.out

# jps should list the five daemons (plus Jps itself)

	[hadoop@hadoop001 dfs]$ jps
	2176 NameNode
	2694 NodeManager
	2391 SecondaryNameNode
	2264 DataNode
	2601 ResourceManager
	3147 Jps
	[hadoop@hadoop001 dfs]$ 

6. Verifying Hadoop

6.1 HDFS verification

			Visit: http://192.168.175.135:50070

6.2 YARN verification

			Visit: http://192.168.175.135:8098
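
If no browser can reach the VM, a quick probe from the VM itself also works (hadoop001 is assumed to resolve via /etc/hosts):

[hadoop@hadoop001 ~]$ curl -s -o /dev/null -w "%{http_code}\n" http://hadoop001:50070
[hadoop@hadoop001 ~]$ curl -s -o /dev/null -w "%{http_code}\n" http://hadoop001:8098
# expect 200 from each if the NameNode and ResourceManager web UIs are up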

6.3 Check native compression support

# true means the codec is supported
[hadoop@hadoop001 ~]$ hadoop checknative
19/04/07 17:50:08 INFO bzip2.Bzip2Factory: Successfully loaded & initialized native-bzip2 library system-native
19/04/07 17:50:08 INFO zlib.ZlibFactory: Successfully loaded & initialized native-zlib library
Native library checking:
hadoop:  true /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/lib/native/libhadoop.so.1.0.0
zlib:    true /lib64/libz.so.1
snappy:  true /usr/lib64/libsnappy.so.1
lz4:     true revision:99
bzip2:   true /lib64/libbz2.so.1
openssl: true /usr/lib64/libcrypto.so
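
Beyond checknative, a hedged end-to-end smoke test (the input file and output path are illustrative) is to run the bundled wordcount example with output compression enabled and confirm the part files come out gzipped:

[hadoop@hadoop001 ~]$ echo "hello hadoop hello compression" > /tmp/words.txt
[hadoop@hadoop001 ~]$ hdfs dfs -mkdir -p /user/hadoop/input
[hadoop@hadoop001 ~]$ hdfs dfs -put /tmp/words.txt /user/hadoop/input/
[hadoop@hadoop001 ~]$ hadoop jar ~/app/hadoop-2.6.0-cdh5.7.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0-cdh5.7.0.jar \
    wordcount \
    -Dmapreduce.output.fileoutputformat.compress=true \
    -Dmapreduce.output.fileoutputformat.compress.codec=org.apache.hadoop.io.compress.GzipCodec \
    /user/hadoop/input /user/hadoop/output_gz
[hadoop@hadoop001 ~]$ hdfs dfs -ls /user/hadoop/output_gz
# the part-r-00000 file should end in .gz; hdfs dfs -text can decompress and display it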

Extension 1: What is protobuf?

Protobuf is a lightweight, efficient data-interchange format, similar in spirit to JSON: platform-independent, language-independent, and extensible, usable for communication protocols, data storage, and more.
Advantages:
Platform-independent, language-independent, extensible;
Friendly runtime libraries that are simple to use;
Fast parsing, roughly 20-100x faster than the equivalent XML;
Very compact serialized output, roughly 1/3 to 1/10 the size of XML.
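
As a minimal hands-on sketch (the message definition is illustrative), the protoc built in section 4.5 can generate Java bindings from a .proto file; Hadoop's own RPC messages are generated from .proto files in exactly this way, which is why the build requires protoc 2.5.0 on the PATH:

[hadoop@hadoop001 ~]$ cat > /tmp/person.proto <<'EOF'
message Person {
  required string name = 1;
  optional int32  age  = 2;
}
EOF
[hadoop@hadoop001 ~]$ protoc --proto_path=/tmp --java_out=/tmp /tmp/person.proto
[hadoop@hadoop001 ~]$ ls /tmp/Person.java   # the generated Java class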
