Big Data Platform Setup: Single-Node CDH Deployment

Reposted from https://blog.csdn.net/sinat_32176947/article/details/79591449?utm_source=blogxgwz3

 

I. Preparation Before Deployment
1. Check whether the JDK is installed; see the JDK installation and configuration guide (JDK 7 or JDK 8, and all nodes must use the same version).
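A quick way to verify is to run the following on every node and confirm they all report the same version (1.7.0_80 in the recommended combination below):

java -version

echo $JAVA_HOME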

2. Recommended compatible combination: CentOS 6.8 + CDH 5.11.x (x > 1) + MySQL 5.x + JDK 1.7u80

***CDH 5.13 and later ship Kafka 3.0 and no longer support JDK 1.7***

***CDH does not support IPv6; disable it beforehand***
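One common way to disable IPv6 on CentOS 6 (a sketch; run as root on every node before installing CDH) is via sysctl:

echo "net.ipv6.conf.all.disable_ipv6 = 1" >> /etc/sysctl.conf
echo "net.ipv6.conf.default.disable_ipv6 = 1" >> /etc/sysctl.conf
sysctl -p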

II. Online Installation
(Not recommended: installation is slow and the failure rate is high, for reasons you can probably guess.)

Reference documentation:

https://www.cloudera.com/documentation/enterprise/5-11-x/topics/cdh_qs_mrv1_pseudo.html

1. Download the CDH 5 one-click-install package and place it on the Linux system. Download link:

https://archive.cloudera.com/cdh5/one-click-install/redhat/6/x86_64/cloudera-cdh-5-0.x86_64.rpm

2. Install it with rpm

$ sudo yum --nogpgcheck localinstall cloudera-cdh-5-0.x86_64.rpm

3. (Optional) Import Cloudera's public GPG key

$ sudo rpm --import https://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera

4. Install pseudo-distributed Hadoop

MRv1

$ sudo yum install hadoop-0.20-conf-pseudo

YARN

$ sudo yum install hadoop-conf-pseudo

5. Start Hadoop and verify that it is working properly. The rpm -ql command below lists the files installed by the pseudo-distributed configuration package:

MRv1

$ rpm -ql hadoop-0.20-conf-pseudo

YARN

$ rpm -ql hadoop-conf-pseudo

6. Format the NameNode

$ sudo -u hdfs hdfs namenode -format

7. Start HDFS, then check http://localhost:50070/ to confirm it started correctly

for x in `cd /etc/init.d ; ls hadoop-hdfs-*` ; do sudo service $x start ; done

8. Create the directories required by the Hadoop processes

$ sudo /usr/lib/hadoop/libexec/init-hdfs.sh

9. Verify the HDFS file structure

 sudo -u hdfs hadoop fs -ls -R /

10. Start MapReduce / YARN

MRv1

for x in `cd /etc/init.d ; ls hadoop-0.20-mapreduce-*` ; do sudo service $x start ; done

YARN

$ sudo service hadoop-yarn-resourcemanager start

$ sudo service hadoop-yarn-nodemanager start

$ sudo service hadoop-mapreduce-historyserver start

11. Create a home directory in HDFS for the user who will submit jobs (replace <user> below with that Linux user name, e.g. joe in the example that follows)

$ sudo -u hdfs hadoop fs -mkdir -p /user/<user>

$ sudo -u hdfs hadoop fs -chown <user> /user/<user>

12. Run the MapReduce example

MRv1

Create the user's working directory:

sudo -u hdfs hadoop fs -mkdir -p /user/joe

sudo -u hdfs hadoop fs -chown joe /user/joe

Create an input directory and copy the XML config files into it as test data:

$ hadoop fs -mkdir input

$ hadoop fs -put /etc/hadoop/conf/*.xml input

$ hadoop fs -ls input

Run the Hadoop example job:

$ /usr/bin/hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar grep input output 'dfs[a-z.]+'

Check the output:

$ hadoop fs -ls output
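To look at the actual grep results rather than just the listing, the output file can be printed (a sketch; the part file name may differ):

$ hadoop fs -cat output/part-00000 | head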

YARN

Create the user's working directory:

sudo -u hdfs hadoop fs -mkdir -p /user/joe

sudo -u hdfs hadoop fs -chown joe /user/joe

Create an input directory and copy the XML config files into it as test data:

$ hadoop fs -mkdir input

$ hadoop fs -put /etc/hadoop/conf/*.xml input

$ hadoop fs -ls input

Set the environment variable:

$ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce

Run the Hadoop example job:

$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar grep input output23 'dfs[a-z.]+'

Check the output:

$ hadoop fs -ls output23
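As with the MRv1 run, the actual results can be printed (here the reducer output file is typically named part-r-00000):

$ hadoop fs -cat output23/part-r-00000 | head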


III. Offline Installation
Reference documentation:

http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.11.1/hadoop-project-dist/hadoop-common/SingleCluster.html

 

1. Configure a static IP (with VMware in NAT mode)

(1) On the host PC, open the properties of the virtual network adapter "VMware Network Adapter VMnet8" and set a static IP in the IPv4 settings (this IP must be in the same subnet as VMnet8's subnet IP in VMware's Virtual Network Editor).

(2) Change the virtual machine's IP

Add the gateway:

vi /etc/sysconfig/network

GATEWAY=192.168.159.2  (the same address as in the NAT settings of the Virtual Network Editor)

Set the IP address and netmask:

vi /etc/sysconfig/network-scripts/ifcfg-eth0

IPADDR=192.168.159.10  (choose the last octet yourself, anywhere in 10-255)

NETMASK=255.255.255.0

Restart the network interface:

service network restart    (or: ifdown eth0; ifup eth0)
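Putting the pieces together, ifcfg-eth0 for this setup might look roughly like the following (a sketch; the exact addresses depend on your VM, and the DNS1 line is included because a missing DNS server is one of the problems listed at the end of this article — with VMware NAT the gateway address usually also serves DNS):

DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.159.10
NETMASK=255.255.255.0
GATEWAY=192.168.159.2
DNS1=192.168.159.2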

 

***************************** Problem *****************************

When the virtual machine is cloned, the clone does not show interface eth0; it shows eth1 instead.

Look up the original machine's MAC address on the original machine:

vim /etc/udev/rules.d/70-persistent-net.rules  (run on the original machine)

Compare it with the MAC addresses on the new machine:

vim /etc/udev/rules.d/70-persistent-net.rules  (run on the new machine)

On the new machine there is a stale eth0 entry plus an eth1 entry; eth1 carries the clone's real MAC address. Delete the stale eth0 entry, rename eth1 to eth0, and note down the real MAC address.
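For reference, an entry in 70-persistent-net.rules looks roughly like this (a sketch with a made-up MAC address); the goal is to end up with a single line whose ATTR{address} is the clone's real MAC and whose NAME is eth0:

SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="00:0c:29:aa:bb:cc", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0"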

Update the MAC address in the eth0 config file:

vim /etc/sysconfig/network-scripts/ifcfg-eth0

HWADDR=<the MAC address noted above>

Restart the network service and reboot the machine:

service network restart

reboot
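After the reboot it is worth confirming that eth0 came up with the expected address and that the gateway is reachable:

ifconfig eth0
ping -c 3 192.168.159.2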

2. Download the Hadoop build that matches the CDH version; here hadoop-2.6.0-cdh5.11.1.tar.gz is used

Download link: http://archive.cloudera.com/cdh5/cdh/5/

Extract it under /home/hadoop:

tar -zxvf hadoop-2.6.0-cdh5.11.1.tar.gz -C /home/hadoop

 

3. Disable the firewall

service iptables stop

service ip6tables stop
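Note that service ... stop only turns the firewall off for the current session; for a throwaway test VM it is common to also disable it at boot time:

chkconfig iptables off
chkconfig ip6tables off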

 

4. Configure the global environment variables (append to the file)

vi /etc/profile

   # HADOOP_HOME must point to the directory the tarball was extracted to;
   # adjust if yours differs from the path below
   HADOOP_HOME=/home/hadoop/hadoop-2.6.0-cdh5.11.1
   JAVA_HOME=/opt/jdk1.7.0_80
   PATH=$JAVA_HOME/bin:$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
   CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
   export HADOOP_HOME JAVA_HOME PATH CLASSPATH

source /etc/profile
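A quick sanity check that the variables took effect (hadoop version should report the CDH build, 2.6.0-cdh5.11.1 here):

hadoop version
echo $HADOOP_HOME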

 

5. Configure passwordless SSH from this machine to itself

ssh-keygen -t rsa

cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

chmod 600 ~/.ssh/authorized_keys
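Logging in to localhost should now work without a password prompt:

ssh localhost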


 

6. Configure hadoop-env.sh (append to the file)

vim $HADOOP_HOME/etc/hadoop/hadoop-env.sh

    # JAVA_HOME here must point to the JDK actually installed on this machine
    # (the same one configured in /etc/profile above)
    export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-1.7.0.45.x86_64
    export HADOOP_COMMON_LIB_NATIVE_DIR=${HADOOP_HOME}/lib/native
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib:$HADOOP_HOME/lib/native"

 

7. Configure core-site.xml (add inside the <configuration> element)

vim $HADOOP_HOME/etc/hadoop/core-site.xml

   

    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost</value>
    </property>

 

8. Configure hdfs-site.xml (add inside the <configuration> element)

vim $HADOOP_HOME/etc/hadoop/hdfs-site.xml

    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/hdfs</value>
    </property>
    <property>
        <name>dfs.namenode.http-address</name>
        <value>localhost:50070</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>localhost:50090</value>
    </property>

 

9. Configure yarn-site.xml (add inside the <configuration> element)

vim $HADOOP_HOME/etc/hadoop/yarn-site.xml

    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>localhost</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>

 

10. Configure the slaves file. If the file already exists and its content is localhost, nothing needs to change; otherwise set it to localhost.

vim $HADOOP_HOME/etc/hadoop/slaves
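The write-up stops at the configuration files. As a hedged sketch of the usual next steps for this tarball layout (following the SingleCluster reference above), the NameNode is formatted once and the daemons are started from $HADOOP_HOME/sbin, after which the web UIs at http://localhost:50070 (HDFS) and http://localhost:8088 (YARN) should respond:

$HADOOP_HOME/bin/hdfs namenode -format
$HADOOP_HOME/sbin/start-dfs.sh
$HADOOP_HOME/sbin/start-yarn.sh
jps    (should list NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager)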

 

 

Problems encountered:

     1. The hosts file had not been modified
     2. A $ sign was left out in hadoop-env.sh
     3. Forgot to configure the virtual machines' DNS server, so the two machines could not reach the external network
     4. A copy-paste error in yarn-site.xml
     5. WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
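The last warning usually just means that the bundled native libraries do not match the platform and Hadoop fell back to its pure-Java implementations; it is harmless for a test setup. Which native libraries can actually be loaded can be checked with:

hadoop checknative -a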
