Hadoop 2.2 / HBase 0.94.18 Cluster Installation

        Here we install a three-node cluster. Cluster IPs: 192.168.157.132-134

            master: 192.168.157.132

            slaves: 192.168.157.133-134

             HBase depends on ZooKeeper; install that separately.


1. Configure passwordless SSH login

     1> Install the SSH client on all three nodes (132, 133, 134):

       

 yum install openssh-clients

    2> Set up passwordless login on all three nodes (132, 133, 134):
  

      [root@localhost ~]# ssh-keygen -t rsa
       (press Enter at every prompt)
      [root@localhost ~]# cd /root/.ssh/
      [root@localhost .ssh]# cat id_rsa.pub >> authorized_keys
      [root@localhost .ssh]# scp authorized_keys [email protected]:/root/.ssh
      [root@localhost .ssh]# scp authorized_keys [email protected]:/root/.ssh
      [root@localhost .ssh]# scp authorized_keys [email protected]:/root/.ssh
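
     Note: scp overwrites authorized_keys on each target, so only this machine's key ends up there. If every node needs passwordless access to every other node, append each node's public key instead. A sketch using ssh-copy-id (also part of openssh-clients; run it on each node, once per peer):

      [root@localhost ~]# ssh-copy-id [email protected]   # appends this host's id_rsa.pub to the target's authorized_keys
      [root@localhost ~]# ssh-copy-id [email protected]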


     Then fix permissions on all three machines:

      [root@localhost ~]# chmod 700 .ssh/
      [root@localhost ~]# chmod 600 ~/.ssh/authorized_keys 
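
      To verify, run a command on a slave from the master; it should execute without a password prompt:

      [root@localhost ~]# ssh 192.168.157.133 date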



2. Change the hostnames

   1> Change it with a command (lost after reboot):
   

     [root@localhost .ssh]# hostname dev-157-132
     [root@localhost .ssh]# hostname
      dev-157-132
   2> Edit the config file (survives reboot):
   
  [root@localhost .ssh]# vim /etc/sysconfig/network
    NETWORKING=yes
    NETWORKING_IPV6=no
    HOSTNAME=dev-157-132
    GATEWAY=192.168.248.254

    3> Apply both steps on each of the three machines: dev-157-132, dev-157-133, dev-157-134
   4> Edit /etc/hosts on all three machines:
     
 [root@localhost .ssh]# vim /etc/hosts

        127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
        ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
       192.168.157.132 dev-157-132
       192.168.157.133 dev-157-133
       192.168.157.134 dev-157-134


3. Disable the firewall (all three machines)

   

   service iptables stop
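
   service iptables stop only lasts until the next reboot; on CentOS, also disable the service at boot so the firewall stays off:

   chkconfig iptables off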


4. Install the JDK (steps omitted)...

5. Install Hadoop 2.2.0

    

 [root@dev-157-132 servers]# tar -xf hadoop-2.2.0.tar.gz
 [root@dev-157-132 servers]# cd hadoop-2.2.0/etc/hadoop
  1> Edit hadoop-env.sh
  
 [root@dev-157-132 hadoop]# vim hadoop-env.sh
 export JAVA_HOME=/export/servers/jdk1.6.0_25   (set this to your JAVA_HOME)
     Leave the remaining settings at their defaults.

   2> Edit core-site.xml

    

 [root@dev-157-132 hadoop]# vim core-site.xml
     <configuration>
         <property>
             <name>fs.defaultFS</name>
             <value>hdfs://dev-157-132:9100</value>
         </property>
         <property>
             <name>hadoop.tmp.dir</name>
             <value>/export/servers/hadoop-2.2.0/data/hadoop_tmp</value>
         </property>
         <property>
             <name>io.native.lib.available</name>
             <value>true</value>
         </property>
     </configuration>

    3> Edit mapred-site.xml

      

   <configuration>
       <property>
           <name>mapreduce.framework.name</name>
           <value>yarn</value>
       </property>
   </configuration>
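
    Note that Hadoop 2.2 ships only mapred-site.xml.template; if the file is missing, create it from the template first:

 [root@dev-157-132 hadoop]# cp mapred-site.xml.template mapred-site.xml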

    4> Edit yarn-site.xml

  

[root@dev-157-132 hadoop]# vim yarn-site.xml
<configuration>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>dev-157-132:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>dev-157-132:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>dev-157-132:8032</value>
        <description>the host is the hostname of the ResourceManager and the port is the port on
          which the clients can talk to the Resource Manager.</description>
    </property>
</configuration>
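
    To actually run MapReduce jobs on YARN, the NodeManagers also need the shuffle auxiliary service; a commonly required addition to yarn-site.xml (property names from the stock Hadoop 2.2 distribution):

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>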
  5> Edit hdfs-site.xml

 [root@dev-157-132 hadoop]# vim hdfs-site.xml
 <configuration>
     <property>
         <name>dfs.namenode.name.dir</name>
         <value>file:/export/servers/hadoop-2.2.0/data/nn</value>
     </property>
     <property>
         <name>dfs.datanode.data.dir</name>
         <value>file:/export/servers/hadoop-2.2.0/data/dfs</value>
     </property>
     <property>
         <name>dfs.permissions</name>
         <value>false</value>
     </property>
 </configuration>
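
 With only two DataNodes, you may also want to lower the replication factor from its default of 3, e.g.:

     <property>
         <name>dfs.replication</name>
         <value>2</value>
     </property>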


6> Edit slaves

 

[root@dev-157-132 hadoop]# vim slaves
                dev-157-133
                dev-157-134

7> scp the installation to the slave machines

 

 scp -r hadoop-2.2.0  [email protected]:/export/servers
 scp -r hadoop-2.2.0  [email protected]:/export/servers
 

 8> Set environment variables on all three machines

 

 [root@dev-157-132 hadoop]# vim /etc/profile
export HADOOP_HOME=/export/servers/hadoop-2.2.0
export HADOOP_CONF_DIR=/export/servers/hadoop-2.2.0/etc/hadoop
export PATH=$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
 [root@dev-157-132 hadoop]# source /etc/profile

 9> Format HDFS on the master

    

hadoop namenode -format
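
    In Hadoop 2.x this command still works but is deprecated; the preferred form is:

hdfs namenode -format

    Format only once: reformatting generates a new NameNode clusterID, and DataNodes that registered against the old one will refuse to start until their data directories are cleared.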

6. Start Hadoop on the master

 

start-all.sh 
This script is Deprecated. Instead use start-dfs.sh and start-yarn.sh 
14/12/23 11:18:27 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
Starting namenodes on [dev-157-132] 
dev-157-132: starting namenode, logging to /export/servers/hadoop-2.2.0/logs/hadoop-root-namenode-dev-157-132.out 
dev-157-134: starting datanode, logging to /export/servers/hadoop-2.2.0/logs/hadoop-root-datanode-dev-157-134.out 
dev-157-133: starting datanode, logging to /export/servers/hadoop-2.2.0/logs/hadoop-root-datanode-dev-157-133.out 
Starting secondary namenodes [0.0.0.0] 
0.0.0.0: starting secondarynamenode, logging to /export/servers/hadoop-2.2.0/logs/hadoop-root-secondarynamenode-dev-157-132.out 
14/12/23 11:18:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable 
starting yarn daemons 
starting resourcemanager, logging to /export/servers/hadoop-2.2.0/logs/yarn-root-resourcemanager-dev-157-132.out 
dev-157-134: starting nodemanager, logging to /export/servers/hadoop-2.2.0/logs/yarn-root-nodemanager-dev-157-134.out 
dev-157-133: starting nodemanager, logging to /export/servers/hadoop-2.2.0/logs/yarn-root-nodemanager-dev-157-133.out
[root@dev-157-132 hbase-0.94.18-security]# jps 
8100 NameNode 
8973 Jps 
8269 SecondaryNameNode 
8416 ResourceManager
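
Each slave should show a DataNode and a NodeManager in jps. You can also confirm from the master that both DataNodes registered:

[root@dev-157-132 ~]# hdfs dfsadmin -report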

7. Install HBase

 [root@dev-157-132 servers]# tar -zxvf hbase-0.94.18-security.tar.gz
   [root@dev-157-132 servers]# cd hbase-0.94.18-security/conf/

1> Edit the configuration

 [root@dev-157-132 conf]# vim hbase-env.sh
    export JAVA_HOME=/export/servers/jdk1.6.0_25
    Configure heap sizes:
     export HBASE_MASTER_OPTS="-Xms512m -Xmx512m $HBASE_MASTER_OPTS"
     export HBASE_REGIONSERVER_OPTS="$HBASE_REGIONSERVER_OPTS $HBASE_JMX_BASE -Dcom.sun.management.jmxremote.port=10102 -Xmn256m -Xms256m -Xmx256m -XX:SurvivorRatio=4"
    export HBASE_MANAGES_ZK=false
    [root@dev-157-132 conf]# vim hbase-site.xml
    <configuration>
        <property>
            <name>hbase.rootdir</name>
            <value>hdfs://dev-157-132:9100/hbase</value>
        </property>
        <property>
            <name>hbase.cluster.distributed</name>
            <value>true</value>
        </property>
        <property>
            <name>hbase.tmp.dir</name>
            <value>/export/servers/hbase-0.94.18-security/data/tmp</value>
        </property>
        <property>
            <name>hbase.zookeeper.quorum</name>
            <value>ip,ip</value> <!-- your ZooKeeper hosts -->
        </property>
        <property>
            <name>hbase.zookeeper.property.clientPort</name>
            <value>2181</value>
        </property>
        <property>
            <name>hbase.regionserver.handler.count</name>
            <value>30</value>
        </property>
    </configuration>
 
[root@dev-157-132 conf]# vim regionservers 
        dev-157-133
        dev-157-134

 2> Copy to the other machines

 Before running scp, replace the Hadoop jars under HBase's lib/ directory with the jars from your installed Hadoop version; HBase 0.94 is built against Hadoop 1.x and will not talk to a Hadoop 2.2 HDFS otherwise. See the sketch below.
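
 A minimal sketch of the jar swap, assuming the stock hadoop-core-1.0.4.jar in the HBase tarball and the standard Hadoop 2.2 layout; the exact jar list depends on your build:

 [root@dev-157-132 servers]# rm hbase-0.94.18-security/lib/hadoop-core-*.jar
 [root@dev-157-132 servers]# cp hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar hbase-0.94.18-security/lib/
 [root@dev-157-132 servers]# cp hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar hbase-0.94.18-security/lib/
 [root@dev-157-132 servers]# cp hadoop-2.2.0/share/hadoop/common/lib/hadoop-auth-2.2.0.jar hbase-0.94.18-security/lib/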

 [root@dev-157-132 servers]# scp -r hbase-0.94.18-security [email protected]:/export/servers/
 [root@dev-157-132 servers]# scp -r hbase-0.94.18-security [email protected]:/export/servers/

 3> Set HBASE_HOME (all three machines)

 

[root@dev-157-132 hadoop]# vim /etc/profile
 export HBASE_HOME=/export/servers/hbase-0.94.18-security
[root@dev-157-132 hadoop]# source /etc/profile

8. Start HBase

[root@dev-157-132 servers]# ./hbase-0.94.18-security/bin/start-hbase.sh
starting master, logging to /export/servers/hbase-0.94.18-security/logs/hbase-root-master-dev-157-132.out
dev-157-134: starting regionserver, logging to /export/servers/hbase-0.94.18-security/bin/../logs/hbase-root-regionserver-dev-157-134.out
dev-157-133: starting regionserver, logging to /export/servers/hbase-0.94.18-security/bin/../logs/hbase-root-regionserver-dev-157-133.out
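
To verify, jps on the master should now also show an HMaster process, and each slave an HRegionServer; the shell gives a quick health check (the master web UI defaults to port 60010 in 0.94):

[root@dev-157-132 servers]# ./hbase-0.94.18-security/bin/hbase shell
hbase(main):001:0> status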
