Hadoop-2.7.2 + Zookeeper-3.4.6 Fully Distributed Environment Setup
I. Versions
Component | Version | Notes
JRE | java version "1.7.0_67" Java(TM) SE Runtime Environment (build 1.7.0_67-b01) Java HotSpot(TM) 64-Bit Server VM (build 24.65-b04, mixed mode) |
Hadoop | hadoop-2.7.2.tar.gz | Main distribution package
Zookeeper | zookeeper-3.4.6.tar.gz | Coordination service used for hot failover and for storing YARN state
II. Host Plan
IP | Host and installed software | Deployed modules | Processes
172.16.101.55 | sht-sgmhadoopnn-01 hadoop | NameNode, ResourceManager | NameNode, DFSZKFailoverController, ResourceManager
172.16.101.56 | sht-sgmhadoopnn-02 hadoop | NameNode, ResourceManager | NameNode, DFSZKFailoverController, ResourceManager
172.16.101.58 | sht-sgmhadoopdn-01 hadoop, zookeeper | DataNode, NodeManager, Zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain
172.16.101.59 | sht-sgmhadoopdn-02 hadoop, zookeeper | DataNode, NodeManager, Zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain
172.16.101.60 | sht-sgmhadoopdn-03 hadoop, zookeeper | DataNode, NodeManager, Zookeeper | DataNode, NodeManager, JournalNode, QuorumPeerMain
III. Directory Plan
Name | Path
$HADOOP_HOME | /hadoop/hadoop-2.7.2
Data | $HADOOP_HOME/data
Log | $HADOOP_HOME/logs
IV. Common Scripts and Commands
1. Start the cluster
start-dfs.sh
start-yarn.sh
2. Stop the cluster
stop-yarn.sh
stop-dfs.sh
3. Monitor the cluster
hdfs dfsadmin -report
4. Start/stop a single daemon
hadoop-daemon.sh start|stop namenode|datanode|journalnode
yarn-daemon.sh start|stop resourcemanager|nodemanager
(A combined start/stop sketch follows the reference link below.)
http://blog.chinaunix.net/uid-25723371-id-4943894.html
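These scripts assume the usual ordering: Zookeeper first, then HDFS, then YARN. A minimal end-to-end sketch for this cluster, using the hostnames planned above, would be:
# start, in order
/hadoop/zookeeper/bin/zkServer.sh start    # on sht-sgmhadoopdn-01/02/03
start-dfs.sh                               # on sht-sgmhadoopnn-01
start-yarn.sh                              # on sht-sgmhadoopnn-01
yarn-daemon.sh start resourcemanager       # on sht-sgmhadoopnn-02 (standby RM)
# stop, in reverse order
yarn-daemon.sh stop resourcemanager        # on sht-sgmhadoopnn-02
stop-yarn.sh                               # on sht-sgmhadoopnn-01
stop-dfs.sh                                # on sht-sgmhadoopnn-01
/hadoop/zookeeper/bin/zkServer.sh stop     # on sht-sgmhadoopdn-01/02/03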
V. Environment Preparation
1. Set the IP address (all 5 hosts)
[root@sht-sgmhadoopnn-01 ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
BOOTPROTO="static"
DNS1="172.16.101.63"
DNS2="172.16.101.64"
GATEWAY="172.16.101.1"
HWADDR="00:50:56:82:50:1E"
IPADDR="172.16.101.55"
NETMASK="255.255.255.0"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="257c075f-6c6a-47ef-a025-e625367cbd9c"
Run: service network restart
Verify: ifconfig
2. Stop the firewall (all 5 hosts)
Run: service iptables stop
Verify: service iptables status
3. Disable the firewall at boot (all 5 hosts)
Run: chkconfig iptables off
Verify: chkconfig --list | grep iptables
4. Set the hostname (all 5 hosts)
Run:
(1) hostname sht-sgmhadoopnn-01
(2) vi /etc/sysconfig/network
[root@sht-sgmhadoopnn-01 ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=sht-sgmhadoopnn-01.telenav.cn
GATEWAY=172.16.101.1
5. Bind IPs to hostnames (all 5 hosts)
[root@sht-sgmhadoopnn-01 ~]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.101.55 sht-sgmhadoopnn-01.telenav.cn sht-sgmhadoopnn-01
172.16.101.56 sht-sgmhadoopnn-02.telenav.cn sht-sgmhadoopnn-02
172.16.101.58 sht-sgmhadoopdn-01.telenav.cn sht-sgmhadoopdn-01
172.16.101.59 sht-sgmhadoopdn-02.telenav.cn sht-sgmhadoopdn-02
172.16.101.60 sht-sgmhadoopdn-03.telenav.cn sht-sgmhadoopdn-03
Verify: ping sht-sgmhadoopnn-01
6. Set up passwordless SSH among the 5 machines
http://blog.itpub.net/30089851/viewspace-1992210/
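The linked post walks through this in detail; a minimal sketch, assuming root is used throughout (the fencing configuration later expects /root/.ssh/id_rsa), is to generate a key on each host and push it to all five hosts:
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
ssh-copy-id root@sht-sgmhadoopnn-01
ssh-copy-id root@sht-sgmhadoopnn-02
ssh-copy-id root@sht-sgmhadoopdn-01
ssh-copy-id root@sht-sgmhadoopdn-02
ssh-copy-id root@sht-sgmhadoopdn-03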
7. Install the JDK (all 5 hosts)
(1) Run:
[root@sht-sgmhadoopnn-01 ~]# cd /usr/java
[root@sht-sgmhadoopnn-01 java]# cp /tmp/jdk-7u67-linux-x64.gz ./
[root@sht-sgmhadoopnn-01 java]# tar -xzvf jdk-7u67-linux-x64.gz
(2) vi /etc/profile and append the following:
export JAVA_HOME=/usr/java/jdk1.7.0_67
export HADOOP_HOME=/hadoop/hadoop-2.7.2
export ZOOKEEPER_HOME=/hadoop/zookeeper
export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$ZOOKEEPER_HOME/bin:$PATH
# HADOOP_HOME and ZOOKEEPER_HOME are set up front here
# the machines used in this exercise already have jdk1.7.0_67-cloudera installed
(3) Run: source /etc/profile
(4) Verify: java -version
8. Create the base directory (all 5 hosts)
mkdir /hadoop
VI. Install Zookeeper
sht-sgmhadoopdn-01/02/03
1. Download and extract zookeeper-3.4.6.tar.gz
[root@sht-sgmhadoopdn-01 tmp]# wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
[root@sht-sgmhadoopdn-02 tmp]# wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
[root@sht-sgmhadoopdn-03 tmp]# wget http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
[root@sht-sgmhadoopdn-01 tmp]# tar -xvf zookeeper-3.4.6.tar.gz
[root@sht-sgmhadoopdn-02 tmp]# tar -xvf zookeeper-3.4.6.tar.gz
[root@sht-sgmhadoopdn-03 tmp]# tar -xvf zookeeper-3.4.6.tar.gz
[root@sht-sgmhadoopdn-01 tmp]# mv zookeeper-3.4.6 /hadoop/zookeeper
[root@sht-sgmhadoopdn-02 tmp]# mv zookeeper-3.4.6 /hadoop/zookeeper
[root@sht-sgmhadoopdn-03 tmp]# mv zookeeper-3.4.6 /hadoop/zookeeper
2. Edit the configuration
[root@sht-sgmhadoopdn-01 tmp]# cd /hadoop/zookeeper/conf
[root@sht-sgmhadoopdn-01 conf]# cp zoo_sample.cfg zoo.cfg
[root@sht-sgmhadoopdn-01 conf]# vi zoo.cfg
# change dataDir
dataDir=/hadoop/zookeeper/data
# add the following three lines
server.1=sht-sgmhadoopdn-01:2888:3888
server.2=sht-sgmhadoopdn-02:2888:3888
server.3=sht-sgmhadoopdn-03:2888:3888
[root@sht-sgmhadoopdn-01 conf]# cd ../
[root@sht-sgmhadoopdn-01 zookeeper]# mkdir data
[root@sht-sgmhadoopdn-01 zookeeper]# touch data/myid
[root@sht-sgmhadoopdn-01 zookeeper]# echo 1 > data/myid
[root@sht-sgmhadoopdn-01 zookeeper]# more data/myid
1
## sht-sgmhadoopdn-02/03: apply the same configuration; only the myid value differs, as below
[root@sht-sgmhadoopdn-02 zookeeper]# echo 2 > data/myid
[root@sht-sgmhadoopdn-03 zookeeper]# echo 3 > data/myid
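Instead of editing zoo.cfg by hand on all three nodes, the file can also be copied from sht-sgmhadoopdn-01 (a small shortcut, assuming the same /hadoop/zookeeper path on every node); only data/myid then needs to differ per node:
[root@sht-sgmhadoopdn-01 conf]# scp zoo.cfg root@sht-sgmhadoopdn-02:/hadoop/zookeeper/conf/
[root@sht-sgmhadoopdn-01 conf]# scp zoo.cfg root@sht-sgmhadoopdn-03:/hadoop/zookeeper/conf/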
VII. Install Hadoop (HDFS HA + YARN HA)
# For steps 3-7: if you ssh in with SecureCRT and content pasted from Windows into Linux shows garbled Chinese, see http://www.cnblogs.com/qi09/archive/2013/02/05/2892922.html
1. Download and extract hadoop-2.7.2.tar.gz
[root@sht-sgmhadoopnn-01 tmp]# wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz
[root@sht-sgmhadoopnn-01 tmp]# tar -xzvf hadoop-2.7.2.tar.gz
[root@sht-sgmhadoopnn-01 tmp]# mv hadoop-2.7.2 /hadoop/
# the package only needs to be unpacked on sht-sgmhadoopnn-01; step 9 below distributes it to the other four hosts
2. Edit $HADOOP_HOME/etc/hadoop/hadoop-env.sh
export JAVA_HOME="/usr/java/jdk1.7.0_67-cloudera"
3. Edit $HADOOP_HOME/etc/hadoop/core-site.xml
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://mycluster</value>
    </property>
    <property>
        <name>dfs.permissions.superusergroup</name>
        <value>root</value>
    </property>
    <property>
        <name>fs.trash.checkpoint.interval</name>
        <value>0</value>
    </property>
    <property>
        <name>fs.trash.interval</name>
        <value>1440</value>
    </property>
</configuration>
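Once the configuration has been distributed in step 9, the values a client actually sees can be spot-checked with hdfs getconf; the expected output follows from the settings above:
hdfs getconf -confKey fs.defaultFS        # hdfs://mycluster
hdfs getconf -confKey fs.trash.interval   # 1440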
4. Edit $HADOOP_HOME/etc/hadoop/hdfs-site.xml
<configuration>
    <property>
        <name>dfs.webhdfs.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/hadoop/hadoop-2.7.2/data/dfs/name</value>
        <description>Local directory where the NameNode stores the name table (fsimage); adjust as needed</description>
    </property>
    <property>
        <name>dfs.namenode.edits.dir</name>
        <value>${dfs.namenode.name.dir}</value>
        <description>Local directory where the NameNode stores the transaction file (edits); adjust as needed</description>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/hadoop/hadoop-2.7.2/data/dfs/data</value>
        <description>Local directory where the DataNode stores blocks; adjust as needed</description>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <property>
        <name>dfs.blocksize</name>
        <value>268435456</value>
    </property>

    <!-- HDFS HA: nameservice and NameNodes -->
    <property>
        <name>dfs.nameservices</name>
        <value>mycluster</value>
    </property>
    <property>
        <name>dfs.ha.namenodes.mycluster</name>
        <value>nn1,nn2</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn1</name>
        <value>sht-sgmhadoopnn-01:8020</value>
    </property>
    <property>
        <name>dfs.namenode.rpc-address.mycluster.nn2</name>
        <value>sht-sgmhadoopnn-02:8020</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn1</name>
        <value>sht-sgmhadoopnn-01:50070</value>
    </property>
    <property>
        <name>dfs.namenode.http-address.mycluster.nn2</name>
        <value>sht-sgmhadoopnn-02:50070</value>
    </property>

    <!-- JournalNode addresses and shared edits -->
    <property>
        <name>dfs.journalnode.http-address</name>
        <value>0.0.0.0:8480</value>
    </property>
    <property>
        <name>dfs.journalnode.rpc-address</name>
        <value>0.0.0.0:8485</value>
    </property>
    <property>
        <name>dfs.namenode.shared.edits.dir</name>
        <value>qjournal://sht-sgmhadoopdn-01:8485;sht-sgmhadoopdn-02:8485;sht-sgmhadoopdn-03:8485/mycluster</value>
    </property>
    <property>
        <name>dfs.journalnode.edits.dir</name>
        <value>/hadoop/hadoop-2.7.2/data/dfs/jn</value>
    </property>

    <!-- client failover proxy -->
    <property>
        <name>dfs.client.failover.proxy.provider.mycluster</name>
        <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
    </property>

    <!-- fencing -->
    <property>
        <name>dfs.ha.fencing.methods</name>
        <value>sshfence</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.private-key-files</name>
        <value>/root/.ssh/id_rsa</value>
    </property>
    <property>
        <name>dfs.ha.fencing.ssh.connect-timeout</name>
        <value>30000</value>
    </property>

    <!-- automatic failover via Zookeeper -->
    <property>
        <name>dfs.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>ha.zookeeper.quorum</name>
        <value>sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181</value>
    </property>
    <property>
        <name>ha.zookeeper.session-timeout.ms</name>
        <value>2000</value>
    </property>
</configuration>
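Since dfs.ha.fencing.methods is sshfence using /root/.ssh/id_rsa, each NameNode host must be able to reach the other over SSH without a password (already covered by the key exchange in step V.6). A quick sanity check from sht-sgmhadoopnn-01, for example:
ssh -i /root/.ssh/id_rsa root@sht-sgmhadoopnn-02 hostname   # should print the remote hostname without prompting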
5. Edit $HADOOP_HOME/etc/hadoop/yarn-env.sh
#Yarn Daemon Options
#export YARN_RESOURCEMANAGER_OPTS
#export YARN_NODEMANAGER_OPTS
#export YARN_PROXYSERVER_OPTS
#export HADOOP_JOB_HISTORYSERVER_OPTS
#Yarn Logs
export YARN_LOG_DIR="/hadoop/hadoop-2.7.2/logs"
6. Edit $HADOOP_HOME/etc/hadoop/mapred-site.xml
[root@sht-sgmhadoopnn-01 hadoop]# cp mapred-site.xml.template mapred-site.xml
[root@sht-sgmhadoopnn-01 hadoop]# vi mapred-site.xml
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

    <!-- JobHistory Server -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>sht-sgmhadoopnn-01:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>sht-sgmhadoopnn-01:19888</value>
    </property>
</configuration>
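The two jobhistory addresses only matter if the JobHistory server is actually running; this walkthrough never starts it, but if the history web UI on port 19888 is wanted it can be launched on sht-sgmhadoopnn-01 with the stock sbin script:
[root@sht-sgmhadoopnn-01 sbin]# mr-jobhistory-daemon.sh start historyserver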
7. Edit $HADOOP_HOME/etc/hadoop/yarn-site.xml
<configuration>
    <!-- NodeManager -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <description>Address where the localizer IPC is.</description>
        <name>yarn.nodemanager.localizer.address</name>
        <value>0.0.0.0:23344</value>
    </property>
    <property>
        <description>NM Webapp address.</description>
        <name>yarn.nodemanager.webapp.address</name>
        <value>0.0.0.0:23999</value>
    </property>

    <!-- ResourceManager HA -->
    <property>
        <name>yarn.resourcemanager.connect.retry-interval.ms</name>
        <value>2000</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.resourcemanager.cluster-id</name>
        <value>yarn-cluster</value>
    </property>
    <property>
        <name>yarn.resourcemanager.ha.rm-ids</name>
        <value>rm1,rm2</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.recovery.enabled</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.scheduler.connection.wait.interval-ms</name>
        <value>5000</value>
    </property>

    <!-- RM state store on Zookeeper -->
    <property>
        <name>yarn.resourcemanager.store.class</name>
        <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk-address</name>
        <value>sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181</value>
    </property>
    <property>
        <name>yarn.resourcemanager.zk.state-store.address</name>
        <value>sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181</value>
    </property>

    <!-- rm1/rm2 RPC addresses -->
    <property>
        <name>yarn.resourcemanager.address.rm1</name>
        <value>sht-sgmhadoopnn-01:23140</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address.rm2</name>
        <value>sht-sgmhadoopnn-02:23140</value>
    </property>

    <!-- scheduler addresses -->
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm1</name>
        <value>sht-sgmhadoopnn-01:23130</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address.rm2</name>
        <value>sht-sgmhadoopnn-02:23130</value>
    </property>

    <!-- admin addresses -->
    <property>
        <name>yarn.resourcemanager.admin.address.rm1</name>
        <value>sht-sgmhadoopnn-01:23141</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address.rm2</name>
        <value>sht-sgmhadoopnn-02:23141</value>
    </property>

    <!-- resource-tracker addresses -->
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm1</name>
        <value>sht-sgmhadoopnn-01:23125</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address.rm2</name>
        <value>sht-sgmhadoopnn-02:23125</value>
    </property>

    <!-- web UI addresses -->
    <property>
        <name>yarn.resourcemanager.webapp.address.rm1</name>
        <value>sht-sgmhadoopnn-01:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address.rm2</name>
        <value>sht-sgmhadoopnn-02:8088</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.https.address.rm1</name>
        <value>sht-sgmhadoopnn-01:23189</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.https.address.rm2</name>
        <value>sht-sgmhadoopnn-02:23189</value>
    </property>
</configuration>
8. Edit slaves
[root@sht-sgmhadoopnn-01 hadoop]# vi slaves
sht-sgmhadoopdn-01
sht-sgmhadoopdn-02
sht-sgmhadoopdn-03
9. Distribute the Hadoop directory
[root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopnn-02:/hadoop
[root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopdn-01:/hadoop
[root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopdn-02:/hadoop
[root@sht-sgmhadoopnn-01 hadoop]# scp -r hadoop-2.7.2 root@sht-sgmhadoopdn-03:/hadoop
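Equivalently, the four copies can be collapsed into one loop run from sht-sgmhadoopnn-01 (same source and destinations as above):
for host in sht-sgmhadoopnn-02 sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03; do
    scp -r /hadoop/hadoop-2.7.2 root@${host}:/hadoop
done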
VIII. Start the Cluster
An alternative startup procedure: http://www.micmiu.com/bigdata/hadoop/hadoop2-cluster-ha-setup/
1. Start Zookeeper
command: ./zkServer.sh start|stop|status
[root@sht-sgmhadoopdn-01 bin]# ./zkServer.sh start
JMX enabled by default
Using config: /hadoop/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@sht-sgmhadoopdn-01 bin]# jps
2073 QuorumPeerMain
2106 Jps
[root@sht-sgmhadoopdn-02 bin]# ./zkServer.sh start
JMX enabled by default
Using config: /hadoop/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@sht-sgmhadoopdn-02 bin]# jps
2073 QuorumPeerMain
2106 Jps
[root@sht-sgmhadoopdn-03 bin]# ./zkServer.sh start
JMX enabled by default
Using config: /hadoop/zookeeper/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
[root@sht-sgmhadoopdn-03 bin]# jps
2073 QuorumPeerMain
2106 Jps
2. Start Hadoop (HDFS + YARN)
a. Before formatting, start the JournalNode process on the JournalNode hosts
[root@sht-sgmhadoopdn-01 ~]# cd /hadoop/hadoop-2.7.2/sbin
[root@sht-sgmhadoopdn-01 sbin]# hadoop-daemon.sh start journalnode
starting journalnode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-journalnode-sht-sgmhadoopdn-03.telenav.cn.out
[root@sht-sgmhadoopdn-03 sbin]# jps
16722 JournalNode
16775 Jps
15519 QuorumPeerMain
[root@sht-sgmhadoopdn-02 ~]# cd /hadoop/hadoop-2.7.2/sbin
[root@sht-sgmhadoopdn-02 sbin]# hadoop-daemon.sh start journalnode
starting journalnode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-journalnode-sht-sgmhadoopdn-03.telenav.cn.out
[root@sht-sgmhadoopdn-03 sbin]# jps
16722 JournalNode
16775 Jps
15519 QuorumPeerMain
[root@sht-sgmhadoopdn-03 ~]# cd /hadoop/hadoop-2.7.2/sbin
[root@sht-sgmhadoopdn-03 sbin]# hadoop-daemon.sh start journalnode
starting journalnode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-journalnode-sht-sgmhadoopdn-03.telenav.cn.out
[root@sht-sgmhadoopdn-03 sbin]# jps
16722 JournalNode
16775 Jps
15519 QuorumPeerMain
b. Format the NameNode
[root@sht-sgmhadoopnn-01 bin]# hadoop namenode -format
16/02/25 14:05:04 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = sht-sgmhadoopnn-01.telenav.cn/172.16.101.55
STARTUP_MSG: args = [-format]
STARTUP_MSG: version = 2.7.2
STARTUP_MSG: classpath =
……………..
………………
16/02/25 14:05:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
16/02/25 14:05:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
16/02/25 14:05:07 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension = 30000
16/02/25 14:05:07 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
16/02/25 14:05:07 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
16/02/25 14:05:07 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
16/02/25 14:05:07 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
16/02/25 14:05:07 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
16/02/25 14:05:07 INFO util.GSet: Computing capacity for map NameNodeRetryCache
16/02/25 14:05:07 INFO util.GSet: VM type = 64-bit
16/02/25 14:05:07 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
16/02/25 14:05:07 INFO util.GSet: capacity = 2^15 = 32768 entries
16/02/25 14:05:08 INFO namenode.FSImage: Allocated new BlockPoolId: BP-1182930464-172.16.101.55-1456380308394
16/02/25 14:05:08 INFO common.Storage: Storage directory /hadoop/hadoop-2.7.2/data/dfs/name has been successfully formatted.
16/02/25 14:05:08 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
16/02/25 14:05:08 INFO util.ExitUtil: Exiting with status 0
16/02/25 14:05:08 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at sht-sgmhadoopnn-01.telenav.cn/172.16.101.55
************************************************************/
c. Synchronize the NameNode metadata
# Sync the metadata from sht-sgmhadoopnn-01 to sht-sgmhadoopnn-02.
# This mainly covers dfs.namenode.name.dir and dfs.namenode.edits.dir; also make sure the shared edits directory (dfs.namenode.shared.edits.dir) contains all of the NameNode metadata.
[root@sht-sgmhadoopnn-01 hadoop-2.7.2]# pwd
/hadoop/hadoop-2.7.2
[root@sht-sgmhadoopnn-01 hadoop-2.7.2]# scp -r data/ root@sht-sgmhadoopnn-02:/hadoop/hadoop-2.7.2
seen_txid 100% 2 0.0KB/s 00:00
fsimage_0000000000000000000 100% 351 0.3KB/s 00:00
fsimage_0000000000000000000.md5 100% 62 0.1KB/s 00:00
VERSION 100% 205 0.2KB/s 00:00
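Copying data/ with scp works because the standby has not been started yet; the more common route (not used in this post) is to let sht-sgmhadoopnn-02 pull the metadata itself while the JournalNodes are running:
[root@sht-sgmhadoopnn-02 ~]# hdfs namenode -bootstrapStandby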
d. Initialize ZKFC
[root@sht-sgmhadoopnn-01 bin]# hdfs zkfc -formatZK
……………..
……………..
16/02/25 14:14:41 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
16/02/25 14:14:41 INFO zookeeper.ZooKeeper: Client environment:user.dir=/hadoop/hadoop-2.7.2/bin
16/02/25 14:14:41 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=sht-sgmhadoopdn-01:2181,sht-sgmhadoopdn-02:2181,sht-sgmhadoopdn-03:2181 sessionTimeout=2000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@5f4298a5
16/02/25 14:14:41 INFO zookeeper.ClientCnxn: Opening socket connection to server sht-sgmhadoopdn-01.telenav.cn/172.16.101.58:2181. Will not attempt to authenticate using SASL (unknown error)
16/02/25 14:14:41 INFO zookeeper.ClientCnxn: Socket connection established to sht-sgmhadoopdn-01.telenav.cn/172.16.101.58:2181, initiating session
16/02/25 14:14:42 INFO zookeeper.ClientCnxn: Session establishment complete on server sht-sgmhadoopdn-01.telenav.cn/172.16.101.58:2181, sessionid = 0x15316c965750000, negotiated timeout = 4000
16/02/25 14:14:42 INFO ha.ActiveStandbyElector: Session connected.
16/02/25 14:14:42 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
16/02/25 14:14:42 INFO zookeeper.ClientCnxn: EventThread shut down
16/02/25 14:14:42 INFO zookeeper.ZooKeeper: Session: 0x15316c965750000 closed
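The "Successfully created /hadoop-ha/mycluster in ZK" line can be double-checked from any Zookeeper node with the bundled CLI; it should show the mycluster znode:
[root@sht-sgmhadoopdn-01 bin]# ./zkCli.sh -server sht-sgmhadoopdn-01:2181
[zk: sht-sgmhadoopdn-01:2181(CONNECTED) 0] ls /hadoop-ha
[mycluster]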
e. Start HDFS
To start the cluster, run start-dfs.sh on sht-sgmhadoopnn-01
To stop the cluster, run stop-dfs.sh on sht-sgmhadoopnn-01
##### Cluster start ############
[root@sht-sgmhadoopnn-01 sbin]# start-dfs.sh
16/02/25 14:21:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [sht-sgmhadoopnn-01 sht-sgmhadoopnn-02]
sht-sgmhadoopnn-01: starting namenode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-namenode-sht-sgmhadoopnn-01.telenav.cn.out
sht-sgmhadoopnn-02: starting namenode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-namenode-sht-sgmhadoopnn-02.telenav.cn.out
sht-sgmhadoopdn-01: starting datanode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-sht-sgmhadoopdn-01.telenav.cn.out
sht-sgmhadoopdn-02: starting datanode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-sht-sgmhadoopdn-02.telenav.cn.out
sht-sgmhadoopdn-03: starting datanode, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-datanode-sht-sgmhadoopdn-03.telenav.cn.out
Starting journal nodes [sht-sgmhadoopdn-01 sht-sgmhadoopdn-02 sht-sgmhadoopdn-03]
sht-sgmhadoopdn-01: journalnode running as process 6348. Stop it first.
sht-sgmhadoopdn-03: journalnode running as process 16722. Stop it first.
sht-sgmhadoopdn-02: journalnode running as process 7197. Stop it first.
16/02/25 14:21:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting ZK Failover Controllers on NN hosts [sht-sgmhadoopnn-01 sht-sgmhadoopnn-02]
sht-sgmhadoopnn-01: starting zkfc, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-zkfc-sht-sgmhadoopnn-01.telenav.cn.out
sht-sgmhadoopnn-02: starting zkfc, logging to /hadoop/hadoop-2.7.2/logs/hadoop-root-zkfc-sht-sgmhadoopnn-02.telenav.cn.out
You have mail in /var/spool/mail/root
#### Per-daemon start ###########
NameNode (sht-sgmhadoopnn-01, sht-sgmhadoopnn-02):
hadoop-daemon.sh start namenode
DataNode (sht-sgmhadoopdn-01, sht-sgmhadoopdn-02, sht-sgmhadoopdn-03):
hadoop-daemon.sh start datanode
JournalNode (sht-sgmhadoopdn-01, sht-sgmhadoopdn-02, sht-sgmhadoopdn-03):
hadoop-daemon.sh start journalnode
ZKFC (sht-sgmhadoopnn-01, sht-sgmhadoopnn-02):
hadoop-daemon.sh start zkfc
f. Verify namenode, datanode, zkfc
1) Processes
[root@sht-sgmhadoopnn-01 sbin]# jps
12712 Jps
12593 DFSZKFailoverController
12278 NameNode
[root@sht-sgmhadoopnn-02 ~]# jps
29714 NameNode
29849 DFSZKFailoverController
30229 Jps
[root@sht-sgmhadoopdn-01 ~]# jps
6348 JournalNode
8775 Jps
559 QuorumPeerMain
8509 DataNode
[root@sht-sgmhadoopdn-02 ~]# jps
9430 Jps
9160 DataNode
7197 JournalNode
2073 QuorumPeerMain
[root@sht-sgmhadoopdn-03 ~]# jps
16722 JournalNode
17369 Jps
15519 QuorumPeerMain
17214 DataNode
2) Web UI
sht-sgmhadoopnn-01:
http://172.16.101.55:50070/
sht-sgmhadoopnn-02:
http://172.16.101.56:50070/
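Besides the web pages, the HA role of each NameNode can be queried with hdfs haadmin, using the nn1/nn2 ids defined in hdfs-site.xml (which one is active may differ from run to run):
hdfs haadmin -getServiceState nn1   # prints active or standby
hdfs haadmin -getServiceState nn2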
g. Start the YARN framework
##### Cluster start ############
1) Start YARN on sht-sgmhadoopnn-01; the script lives in $HADOOP_HOME/sbin
[root@sht-sgmhadoopnn-01 sbin]# start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-resourcemanager-sht-sgmhadoopnn-01.telenav.cn.out
sht-sgmhadoopdn-03: starting nodemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-sht-sgmhadoopdn-03.telenav.cn.out
sht-sgmhadoopdn-02: starting nodemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-sht-sgmhadoopdn-02.telenav.cn.out
sht-sgmhadoopdn-01: starting nodemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-nodemanager-sht-sgmhadoopdn-01.telenav.cn.out
2) Start the standby ResourceManager on sht-sgmhadoopnn-02
[root@sht-sgmhadoopnn-02 sbin]# yarn-daemon.sh start resourcemanager
starting resourcemanager, logging to /hadoop/hadoop-2.7.2/logs/yarn-root-resourcemanager-sht-sgmhadoopnn-02.telenav.cn.out
#### Per-daemon start ###########
1) ResourceManager (sht-sgmhadoopnn-01, sht-sgmhadoopnn-02)
yarn-daemon.sh start resourcemanager
2) NodeManager (sht-sgmhadoopdn-01, sht-sgmhadoopdn-02, sht-sgmhadoopdn-03)
yarn-daemon.sh start nodemanager
###### Shutdown #############
[root@sht-sgmhadoopnn-01 sbin]# stop-yarn.sh
# this stops the resourcemanager process on this namenode host and the nodemanager processes on the datanode hosts
[root@sht-sgmhadoopnn-02 sbin]# yarn-daemon.sh stop resourcemanager
h. Verify resourcemanager, nodemanager
1) Processes
[root@sht-sgmhadoopnn-01 sbin]# jps
13611 Jps
12593 DFSZKFailoverController
12278 NameNode
13384 ResourceManager
[root@sht-sgmhadoopnn-02 sbin]# jps
32265 ResourceManager
32304 Jps
29714 NameNode
29849 DFSZKFailoverController
[root@sht-sgmhadoopdn-01 ~]# jps
6348 JournalNode
559 QuorumPeerMain
8509 DataNode
10286 NodeManager
10423 Jps
[root@sht-sgmhadoopdn-02 ~]# jps
9160 DataNode
10909 NodeManager
11937 Jps
7197 JournalNode
2073 QuorumPeerMain
[root@sht-sgmhadoopdn-03 ~]# jps
18031 Jps
16722 JournalNode
17710 NodeManager
15519 QuorumPeerMain
17214 DataNode
2) Web UI
ResourceManager (Active): http://172.16.101.55:8088
ResourceManager (Standby): http://172.16.101.56:8088/cluster/cluster
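Likewise, yarn rmadmin reports which ResourceManager currently holds the active role, and a small example job makes a convenient end-to-end smoke test (the examples jar path assumes the stock 2.7.2 tarball layout):
yarn rmadmin -getServiceState rm1   # prints active or standby
yarn rmadmin -getServiceState rm2
hadoop jar $HADOOP_HOME/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.2.jar pi 5 10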
IX. Monitor the Cluster
[root@sht-sgmhadoopnn-01 ~]# hdfs dfsadmin -report
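A few more routine checks that complement dfsadmin -report (standard commands, not part of the original post):
hdfs dfsadmin -safemode get               # confirm the NameNodes have left safe mode
yarn node -list                           # list live NodeManagers
/hadoop/zookeeper/bin/zkServer.sh status  # leader/follower role of each Zookeeper node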
X. Packages and References
#http://archive-primary.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.5.2.tar.gz
#http://archive-primary.cloudera.com/cdh5/cdh/5/zookeeper-3.4.5-cdh5.5.2.tar.gz
hadoop: http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-2.7.2/hadoop-2.7.2.tar.gz
zookeeper: http://mirror.bit.edu.cn/apache/zookeeper/zookeeper-3.4.6/zookeeper-3.4.6.tar.gz
References:
Hadoop-2.3.0-cdh5.0.1 fully distributed setup (NameNode, ResourceManager HA):
http://blog.itpub.net/30089851/viewspace-1987620/
How to fix the error "The string "--" is not permitted within comments":
http://blog.csdn.net/free4294/article/details/38681095
Fixing garbled Chinese in a SecureCRT Linux terminal:
http://www.cnblogs.com/qi09/archive/2013/02/05/2892922.html
See also: http://blog.itpub.net/30089851/viewspace-1987620/
Reposted from: http://blog.itpub.net/30089851/viewspace-1994585/