Hadoop Upgrade (No HA): 2.2 to 2.6

Deploying 2.6.3

[hadoop@hadoop-master1 ~]$ tar zxvf hadoop-2.6.3.tar.gz

[hadoop@hadoop-master1 ~]$ cd hadoop-2.6.3/share/
[hadoop@hadoop-master1 share]$ rm -rf doc/

[hadoop@hadoop-master1 hadoop-2.6.3]$ rm -rf lib/native/*

# copy the four config files over to hadoop-2.6.3
[hadoop@hadoop-master1 hadoop-2.6.3]$ cd etc/hadoop/
[hadoop@hadoop-master1 hadoop]$ cp -f ~/hadoop-2.2.0/etc/hadoop/*-site.xml ./
[hadoop@hadoop-master1 hadoop]$ cp -f ~/hadoop-2.2.0/etc/hadoop/slaves ./

[hadoop@hadoop-master1 hadoop]$ cd
[hadoop@hadoop-master1 ~]$ for h in hadoop-master2 hadoop-slaver1 hadoop-slaver2 hadoop-slaver3 ; do rsync -vaz --delete --exclude=logs hadoop-2.6.3 $h:~/ ; done
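
Optionally, before moving on to the upgrade itself, you can confirm that every node actually received identical configuration files from the rsync. This is a minimal sanity-check sketch, not part of the original steps; it reuses the host names and paths from the commands above.

[hadoop@hadoop-master1 ~]$ cd ~/hadoop-2.6.3/etc/hadoop && md5sum *-site.xml slaves    # local checksums
[hadoop@hadoop-master1 hadoop]$ for h in hadoop-master2 hadoop-slaver1 hadoop-slaver2 hadoop-slaver3 ; do echo "== $h =="; ssh $h "cd ~/hadoop-2.6.3/etc/hadoop && md5sum *-site.xml slaves" ; done

If the checksums match on all five machines, the distribution step is done.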

Upgrade (the best way)

Simply start DFS with the -upgrade option. (Do not upgrade the secondarynamenode separately; starting DFS with -upgrade takes care of everything.)

[hadoop@hadoop-master1 hadoop-2.6.3]$ sbin/start-dfs.sh -upgrade

// neither 2.2 nor 2.6 has this command any more
// hadoop dfsadmin -upgradeProgress status
hadoop dfsadmin -finalizeUpgrade
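
Finalizing is irreversible: finalizeUpgrade removes the pre-upgrade metadata that HDFS keeps for a possible rollback to 2.2, so it is worth checking cluster health first. Below is a hedged sketch of such checks; the name-directory path /data/hadoop/dfs/name is only an assumed example, substitute your own dfs.namenode.name.dir value.

hdfs dfsadmin -report            # every DataNode has re-registered and capacity looks right
hdfs fsck / | tail -n 20         # filesystem reported HEALTHY, no missing or corrupt blocks
ls /data/hadoop/dfs/name         # assumed path: a "previous" directory next to "current" means the upgrade is still pending

Only after these checks pass should the finalizeUpgrade command above be run; once finalized, the "previous" directory is removed and rollback is no longer possible.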

Reference: [Hadoop: The Definitive Guide / Chapter 10. Administering Hadoop / Maintenance / Upgrades]
