Hadoop 2.7.1 + HBase 1.1.2 Cluster Setup (2): Hadoop 2.7.1 Installation Preparation

(1) Building hadoop 2.7.1 from source           http://zilongzilong.iteye.com/blog/2246856

(2) hadoop 2.7.1 installation preparation       http://zilongzilong.iteye.com/blog/2253544

(3) hadoop 2.7.1 installation                   http://zilongzilong.iteye.com/blog/2245547

(4) hbase installation preparation              http://zilongzilong.iteye.com/blog/2254451

(5) hbase installation                          http://zilongzilong.iteye.com/blog/2254460

(6) snappy installation                         http://zilongzilong.iteye.com/blog/2254487

(7) Benchmarking hbase with Yahoo YCSB          http://zilongzilong.iteye.com/blog/2248863

(8) spring-hadoop in practice                   http://zilongzilong.iteye.com/blog/2254491

 

This article covers the following preparation steps:

       (1) Remove the operating system's ulimit -n limit

       (2) Disable the firewall

       (3) Configure the hostname

       (4) Configure /etc/hosts to map IP addresses to hostnames

       (5) Configure a static IP

       (6) Download and install the JDK (tarball) and configure environment variables

       (7) Create a user for the Hadoop installation and configure passwordless SSH login

 

1. Remove the operating system's ulimit -n (open file) limit

1) Edit /etc/security/limits.conf and append the following lines:

* soft nofile 102400

* hard nofile 409600

2) Edit /etc/pam.d/login and append the following line (note: on 64-bit systems the module may live under /lib64/security/, or you can simply write pam_limits.so without a path):

session required     /lib/security/pam_limits.so

3) Reboot the system for the changes to take effect
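
After rebooting (or starting a new login session), you can verify that the new limit is in effect. A minimal check, assuming the values above were applied:

# Per-process open-file limit for the current shell (should report 102400)
ulimit -n
# Soft and hard limits shown explicitly
ulimit -Sn
ulimit -Hn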

 

2. Disable the firewall

1) Takes effect after reboot

Enable on boot: chkconfig iptables on

Disable on boot: chkconfig iptables off

2) Takes effect immediately, but does not survive a reboot

Start: service iptables start

Stop: service iptables stop

3) vi /etc/selinux/config

# This file controls the state of SELinux on the system.

# SELINUX= can take one of these three values:

#     enforcing - SELinux security policy is enforced.

#     permissive - SELinux prints warnings instead of enforcing.

#     disabled - No SELinux policy is loaded.

SELINUX=enforcing  # comment this line out

SELINUX=disabled   # add this line

# SELINUXTYPE= can take one of three values:

#     targeted - Targeted processes are protected,

#     minimum - Modification of targeted policy. Only selected processes are protected.

#     mls - Multi Level Security protection.

#SELINUXTYPE=targeted  # commented out

4) Make the SELinux change take effect immediately

setenforce 0
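
To confirm that the firewall and SELinux are really off, the following checks can be used (on a CentOS 6-style system with iptables, as assumed in this article); getenforce should report Permissive after setenforce 0, and Disabled after a reboot with SELINUX=disabled:

# Firewall: the service should be stopped and disabled for all runlevels
service iptables status
chkconfig --list iptables
# SELinux: current mode and detailed status
getenforce
sestatus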

 

3. Configure the hostname

vi /etc/sysconfig/network

NETWORKING=yes

HOSTNAME=nmsc0
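
Editing /etc/sysconfig/network only takes effect after a reboot; to apply and verify the hostname immediately on the master (named nmsc0 here):

# Set the hostname for the running system
hostname nmsc0
# Verify
hostname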

 

4. Configure /etc/hosts to map IP addresses to hostnames

vi /etc/hosts

127.0.0.1               localhost.localdomain localhost

::1             localhost6.localdomain6 localhost6

192.168.181.66 nmsc0

192.168.88.21 nmsc1

192.168.88.22 nmsc2
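
The same entries should be present in /etc/hosts on every node. A quick check that each hostname resolves and is reachable:

# Send a single ping to each node by hostname
ping -c 1 nmsc0
ping -c 1 nmsc1
ping -c 1 nmsc2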

 

5. Configure a static IP

   If you are working in a company environment, ask your network administrator, since LAN addressing is usually planned centrally. For a personal test setup, refer to the following links:

              Setting a static IP on CentOS 6.5              http://my.oschina.net/allman90/blog/294847

              Renaming the network interface to eth0 on CentOS 7          http://my.oschina.net/allman90/blog/484704

 

6. Download and install the JDK (tarball) and configure environment variables

1) Check the default JDK

java -version

java version "1.4.2"

gij (GNU libgcj) version 4.1.2 20080704 (Red Hat 4.1.2-52)

Copyright (C) 2006 Free Software Foundation, Inc.

This is free software; see the source for copying conditions.  There is NO

warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.

2) Remove the default JDK

[root@nmsc0 ~]#  rpm -qa | grep gcj

libgcj-4.1.2-52.el5

java-1.4.2-gcj-compat-1.4.2.0-40jpp.115

libgcj-4.1.2-52.el5

[root@nmsc0 ~]#  yum -y remove java-1.4.2-gcj-compat-1.4.2.0-40jpp.115

Loaded plugins: katello, product-id, security, subscription-manager

Updating certificate-based repositories.

Unable to read consumer identity

Setting up Remove Process

Resolving Dependencies

--> Running transaction check

---> Package java-1.4.2-gcj-compat.x86_64 0:1.4.2.0-40jpp.115 set to be erased

--> Processing Dependency: java-gcj-compat >= 1.0.64 for package: gjdoc

--> Processing Dependency: java-gcj-compat >= 1.0.64 for package: gjdoc

--> Processing Dependency: java-gcj-compat for package: antlr

--> Processing Dependency: java-gcj-compat for package: antlr

--> Running transaction check

---> Package antlr.x86_64 0:2.7.6-4jpp.2 set to be erased

---> Package gjdoc.x86_64 0:0.7.7-12.el5 set to be erased

--> Finished Dependency Resolution

Dependencies Resolved

===================================================================================

 Package                             Arch                 Version                         Repository               Size

===================================================================================

Removing:

 java-1.4.2-gcj-compat               x86_64               1.4.2.0-40jpp.115               installed                441

Removing for dependencies:

 antlr                               x86_64               2.7.6-4jpp.2                    installed               3.2 M

 gjdoc                               x86_64               0.7.7-12.el5                    installed               2.2 M

 

Transaction Summary

===================================================================================

Remove        3 Package(s)

Reinstall     0 Package(s)

Downgrade     0 Package(s)

 

Downloading Packages:

Running rpm_check_debug

Running Transaction Test

Finished Transaction Test

Transaction Test Succeeded

Running Transaction

  Erasing        : java-1.4.2-gcj-compat                                                    1/3

  Erasing        : antlr                                                                    2/3

  Erasing        : gjdoc                                                                    3/3

Installed products updated.

 

Removed:

  java-1.4.2-gcj-compat.x86_64 0:1.4.2.0-40jpp.115

 

Dependency Removed:

  antlr.x86_64 0:2.7.6-4jpp.2                                                        gjdoc.x86_64 0:0.7.7-12.el5

 

Complete!

3) Download jdk-7u65-linux-x64.gz and place it at /opt/java/jdk-7u65-linux-x64.gz

4) Extract it with: tar -zxvf jdk-7u65-linux-x64.gz

5) Edit /etc/profile (vi /etc/profile) and append the following at the end of the file:

export JAVA_HOME=/opt/java/jdk1.7.0_65  

export CLASSPATH=.:$JAVA_HOME/jre/lib/rt.jar:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar 

export PATH=$PATH:$JAVA_HOME/bin  

6) Apply the configuration: source /etc/profile

7) Run java -version to check whether the JDK environment is configured correctly
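
A minimal sanity check after sourcing /etc/profile; the exact output will vary, but java should now resolve from /opt/java/jdk1.7.0_65 rather than the old GCJ runtime:

# Confirm JAVA_HOME and that java/javac come from the new JDK
echo $JAVA_HOME
which java
java -version
javac -version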

7. Create a user for the Hadoop installation and configure passwordless SSH login

1) Create a user for the Hadoop installation

    # Delete any existing hadoop user along with the /home/hadoop directory
    userdel -r hadoop

    # Create the hadoop user
    useradd hadoop

    # Set the hadoop user's password
    passwd hadoop
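
To confirm the account was created as expected (details will vary per system):

# Show the hadoop user's uid, gid and group membership
id hadoop
# Confirm the home directory exists and is owned by hadoop
ls -ld /home/hadoop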

 

2) Configure passwordless SSH login

    My three machines have the IPs 192.168.181.66, 192.168.88.21 and 192.168.88.22. The commands below were executed on 192.168.181.66:

# First switch to the hadoop user created above
su - hadoop
# Remove any previous passwordless-login setup for hadoop; run this on every node in the cluster
rm -r /home/hadoop/.ssh
# Generate an RSA key pair; run this on every node in the cluster
ssh-keygen -t rsa
# When logging in over ssh, the remote machine checks the client against the hadoop user's ~/.ssh/authorized_keys. Here /home/hadoop/.ssh/authorized_keys collects the public keys (each node's /home/hadoop/.ssh/id_rsa.pub); the commands below only need to be run on the master node to enable passwordless SSH between master and slave nodes
cd /home/hadoop/.ssh/
# First append the master node's public key to authorized_keys
cat id_rsa.pub>>authorized_keys
# Then append the slave nodes' public keys to authorized_keys
ssh [email protected] cat /home/hadoop/.ssh/id_rsa.pub>> authorized_keys
ssh [email protected] cat /home/hadoop/.ssh/id_rsa.pub>> authorized_keys
# Distribute the master node's authorized_keys to the slave nodes
scp -r /home/hadoop/.ssh/authorized_keys [email protected]:/home/hadoop/.ssh/
scp -r /home/hadoop/.ssh/authorized_keys [email protected]:/home/hadoop/.ssh/
# The permissions on /home/hadoop/.ssh/authorized_keys must be set to 600
chmod 600 /home/hadoop/.ssh/authorized_keys
# Log in to nmsc1 without a password
ssh nmsc1
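
Once authorized_keys has been distributed, passwordless login can be verified from the master in one pass (you may still be asked to confirm each host key on the very first connection); a small check assuming the three hostnames used above:

# Each command should print the remote hostname without prompting for a password
for h in nmsc0 nmsc1 nmsc2; do ssh hadoop@$h hostname; done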

 

3) Useful ssh service commands

# Check the OpenSSH version

ssh -V

# Check the OpenSSL version

openssl version -a

# Restart the ssh service

/etc/rc.d/init.d/sshd restart

# Log in to the remote machine nmsc1 over ssh

ssh nmsc1    # or: ssh hadoop@nmsc1

# Show verbose debug output when connecting to nmsc1 over ssh

ssh -v2 nmsc1

 
