Reposted from http://my.oschina.net/sansom/blog/185378
Installing Hadoop with Cloudera Manager
I'm not the sharpest; I ran into plenty of snags during the install, and the articles reposted on other sites never spell out the details, so below is a summary from my own hands-on run.
1. SSH into the machine that will host the management console, make sure the firewall and SELinux are disabled, then run cloudera-manager-installer.bin.
2. Edit the hosts file and copy it to every node:
vim /etc/hosts
## contents ————————————————
172.16.1.1x node1
172.16.1.2x node2
172.16.1.3x node3
127.0.0.1 localhost # this must be localhost, and it must be the first 127.0.0.1 entry in the file
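Since the agent is picky about that first 127.0.0.1 entry, a quick check after copying the file to each node can save a debugging round. A minimal sketch; `check_hosts` is a made-up helper, not part of any Cloudera tooling:

```shell
# Hypothetical sanity check: verify that the first 127.0.0.1 entry in a
# hosts file maps to "localhost", as required above.
check_hosts() {
  file="${1:-/etc/hosts}"
  first=$(awk '$1 == "127.0.0.1" { print $2; exit }' "$file")
  if [ "$first" = "localhost" ]; then
    echo "OK: first 127.0.0.1 entry is localhost"
  else
    echo "WARN: first 127.0.0.1 entry is '$first', expected localhost" >&2
    return 1
  fi
}
# check_hosts            # checks /etc/hosts on this node
```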
3. Open the management console at http://{{host}}:7180/
1) The account used to install the Hadoop components must have SSH access and root privileges.
2) Servers like ours allow key-based login only, so the chosen account must be given passwordless sudo before installing. Do the following on every node:
a. As root, make the file writable: chmod +w /etc/sudoers
b. vim /etc/sudoers and add a line like: nic ALL=(ALL) NOPASSWD: ALL
c. Remove the write permission again: chmod -w /etc/sudoers
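As an aside, a drop-in file under /etc/sudoers.d avoids toggling write permission on /etc/sudoers itself. This is only a sketch, assuming your sudo honors `#includedir /etc/sudoers.d`; `add_nopasswd_sudo` is a made-up name, and "nic" is the example account from step b:

```shell
# Hypothetical helper: grant an account passwordless sudo via a
# drop-in file rather than editing /etc/sudoers in place.
# The second argument exists only to make the function testable;
# in real use it defaults to /etc/sudoers.d. Run as root.
add_nopasswd_sudo() {
  user="$1"
  dir="${2:-/etc/sudoers.d}"
  dropin="$dir/cloudera-install"
  printf '%s ALL=(ALL) NOPASSWD: ALL\n' "$user" > "$dropin"
  chmod 0440 "$dropin"   # sudo requires restrictive permissions on drop-ins
}
# add_nopasswd_sudo nic
# visudo -cf /etc/sudoers.d/cloudera-install   # syntax-check before relying on it
```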
3) Give the account installing Hadoop read and execute permission on these files:
chmod +r /bin/mktemp
chmod +x /bin/mktemp
chmod +r /usr/bin/tee
chmod +x /usr/bin/tee
chmod +r /usr/bin/tr
chmod +x /usr/bin/tr
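The six chmod commands above collapse into one loop; a sketch with a made-up helper name (run as root):

```shell
# Grant read + execute on the binaries that Cloudera Manager's install
# scripts call; equivalent to the per-file chmod commands above.
grant_rx() {
  for f in "$@"; do
    chmod a+rx "$f" || return 1
  done
}
# grant_rx /bin/mktemp /usr/bin/tee /usr/bin/tr
```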
4) cd into any directory and wget the various Hadoop component packages (if the cloudera-manager machine will not run any Hadoop components itself, it needs neither these downloads nor the installs):
wget http://archive.cloudera.com/cm4/redhat/5/x86_64/cm/4.1.1/RPMS/x86_64/jdk-6u31-linux-amd64.rpm
wget http://archive.cloudera.com/cm4/redhat/5/x86_64/cm/4.1.2/RPMS/x86_64/cloudera-manager-agent-4.1.2-1.cm412.p0.428.x86_64.rpm
wget http://archive.cloudera.com/cm4/redhat/5/x86_64/cm/4.1.2/RPMS/x86_64/cloudera-manager-daemons-4.1.2-1.cm412.p0.428.x86_64.rpm
wget http://archive.cloudera.com/cdh4/redhat/5/x86_64/cdh/4/RPMS/noarch/bigtop-utils-0.4+359-1.cdh4.1.2.p0.34.el5.noarch.rpm
wget http://archive.cloudera.com/cdh4/redhat/5/x86_64/cdh/4/RPMS/x86_64/bigtop-jsvc-0.4+359-1.cdh4.1.2.p0.43.el5.x86_64.rpm
wget http://archive.cloudera.com/cdh4/redhat/5/x86_64/cdh/4/RPMS/noarch/bigtop-tomcat-0.4+359-1.cdh4.1.2.p0.38.el5.noarch.rpm
wget http://archive.cloudera.com/cdh4/redhat/5/x86_64/cdh/4/RPMS/noarch/flume-ng-1.2.0+122-1.cdh4.1.2.p0.7.el5.noarch.rpm
wget http://archive.cloudera.com/cdh4/redhat/5/x86_64/cdh/4/RPMS/noarch/oozie-3.2.0+126-1.cdh4.1.2.p0.10.el5.noarch.rpm -O oozie-3.2.0-cdh4.1.2.p0.10.el5.noarch.rpm
### Note: this is only a subset, and versions may differ. Look for the files you need under the paths above, or let cloudera-manager run its automatic install once, capture the whole process, and dig out the paths of the *.rpm packages it downloads. ###
### Why download by hand when there is an automatic installer? Because the moment anything fails during a cloudera-manager install (a permission problem, a download timeout, etc.), everything is rolled back, downloaded and installed files included.
That means relying on cloudera-manager forces a full re-download and re-install on every attempt. Some of the rpm packages are also large, and from our servers in China (unlike servers abroad) the downloads are slow and often fail outright, so it is easy to spend half a day installing and end up back at square one.
To do it once and for all, download the files locally first, then scp them up (I averaged about 90 KB/s, faster than downloading on the server directly; your network may differ), and finally scp them from one node to all the others. ###
### The required packages are listed below; find and download each one yourself. ###
###
hadoop-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm
hadoop-hdfs-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm
hadoop-httpfs-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm
hadoop-yarn-2.0.0.1.cdh4.1.2.p0.27.el5.x86_64.rpm
hadoop-mapreduce-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm
hadoop-0.20-mapreduce-0.20.21.cdh4.1.2.p0.24.el5.x86_64.rpm
hadoop-libhdfs-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm
hadoop-client-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm
hadoop-hdfs-fuse-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm
zookeeper-3.4.31.cdh4.1.2.p0.34.el5.noarch.rpm
hbase-0.92.1-cdh4.1.2.p0.24.el5.noarch.rpm
hive-0.9.0-cdh4.1.2.p0.21.el5.noarch.rpm
oozie-3.2.0-cdh4.1.2.p0.10.el5.noarch.rpm
oozie-client-3.2.0-cdh4.1.2.p0.10.el5.noarch.rpm
pig-0.10.01.cdh4.1.2.p0.24.el5.noarch.rpm
hue-common-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
hue-about-2.1.-cdh4.1.2.p0.9.el5.x86_64.rpm
hue-help-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
hue-filebrowser-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
hue-jobbrowser-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
hue-jobsub-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
hue-beeswax-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
hue-plugins-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
hue-proxy-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
hue-shell-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
hue-useradmin-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
sqoop-1.4.11.cdh4.1.2.p0.21.el5.noarch.rpm
###
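The "download locally, then fan out with scp" workflow described in the notes above can be sketched as follows; the helper name, node names, staging directory, and target path are all assumptions to adapt:

```shell
# Hypothetical helper: copy locally staged rpm packages to each node.
# Adjust the staging directory, target path, and node list to your cluster.
distribute_rpms() {
  src="$1"; shift
  for node in "$@"; do
    scp "$src"/*.rpm "$node:/opt/cdh-rpms/" || echo "copy to $node failed" >&2
  done
}
# distribute_rpms ./cdh-rpms node1 node2 node3
```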
5) Run the installation:
yum install cyrus-sasl-gssapi
rpm -ivh jdk-6u31-linux-amd64.rpm # oddly, this exact JDK is required; I had installed JDK 1.7 myself and set the environment variables, yet cloudera still claimed no JDK was found and downloaded its own. Maybe my configuration was off?
rpm -ivh cloudera-manager-agent-4.1.2-1.cm412.p0.428.x86_64.rpm
rpm -ivh cloudera-manager-daemons-4.1.2-1.cm412.p0.428.x86_64.rpm
rpm -ivh bigtop-utils-0.4+359-1.cdh4.1.2.p0.34.el5.noarch.rpm
rpm -ivh bigtop-jsvc-0.4-cdh4.1.2.p0.43.el5.x86_64.rpm
rpm -ivh bigtop-tomcat-0.4-cdh4.1.2.p0.38.el5.noarch.rpm
rpm -ivh flume-ng-1.2.0-cdh4.1.2.p0.7.el5.noarch.rpm
rpm -ivh hadoop-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm
rpm -ivh hadoop-hdfs-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm
rpm -ivh hadoop-httpfs-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm
rpm -ivh hadoop-yarn-2.0.0.1.cdh4.1.2.p0.27.el5.x86_64.rpm
rpm -ivh hadoop-mapreduce-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm
rpm -ivh hadoop-0.20-mapreduce-0.20.21.cdh4.1.2.p0.24.el5.x86_64.rpm
rpm -ivh hadoop-libhdfs-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm
rpm -ivh hadoop-client-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm
rpm -ivh hadoop-hdfs-fuse-2.0.0-cdh4.1.2.p0.27.el5.x86_64.rpm
rpm -ivh zookeeper-3.4.31.cdh4.1.2.p0.34.el5.noarch.rpm
rpm -ivh hbase-0.92.1-cdh4.1.2.p0.24.el5.noarch.rpm
rpm -ivh hive-0.9.0-cdh4.1.2.p0.21.el5.noarch.rpm
rpm -ivh oozie-3.2.0-cdh4.1.2.p0.10.el5.noarch.rpm
rpm -ivh oozie-client-3.2.0-cdh4.1.2.p0.10.el5.noarch.rpm
rpm -ivh pig-0.10.01.cdh4.1.2.p0.24.el5.noarch.rpm
rpm -ivh hue-common-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
rpm -ivh hue-about-2.1.-cdh4.1.2.p0.9.el5.x86_64.rpm
rpm -ivh hue-help-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
rpm -ivh hue-filebrowser-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
rpm -ivh hue-jobbrowser-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
rpm -ivh hue-jobsub-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
rpm -ivh hue-beeswax-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
rpm -ivh hue-oozie-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
rpm -ivh hue-plugins-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
rpm -ivh hue-proxy-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
rpm -ivh hue-shell-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
rpm -ivh hue-useradmin-2.1.0-cdh4.1.2.p0.9.el5.x86_64.rpm
rpm -ivh sqoop-1.4.11.cdh4.1.2.p0.21.el5.noarch.rpm
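Because of dependency-ordering problems, the rpm -ivh sequence above usually needs several passes; that repetition can be scripted. A sketch (`install_passes` is a made-up helper) that skips already-installed packages so failures resolve themselves on later passes:

```shell
# Sketch: retry installing a set of rpm files for a fixed number of
# passes, skipping packages that are already installed.
install_passes() {
  passes="$1"; shift
  p=1
  while [ "$p" -le "$passes" ]; do
    for pkg in "$@"; do
      # name of the package inside the file, e.g. "hadoop-hdfs"
      name=$(rpm -qp --qf '%{NAME}' "$pkg" 2>/dev/null) || name=""
      if ! rpm -q "$name" >/dev/null 2>&1; then
        rpm -ivh "$pkg" || true   # may fail this pass; retried next pass
      fi
    done
    p=$((p+1))
  done
}
# install_passes 5 ./*.rpm
```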
6) Ordering problems can leave some packages only partially installed; repeat the commands above 4-5 times until every package reports as already installed.
7) Run the "install" from cloudera-manager. This step basically just writes the configuration files and starts cloudera-scm-agent (this is the important part; it may report errors, and the causes are covered in section 4 below), so it finishes quickly.
8) Once it reports success, the machine is managed by cloudera-scm-agent and the installation is essentially complete. What remains is assigning Hadoop components and roles to nodes as needed from the cloudera-manager UI, e.g. choosing the namenode, datanodes, and so on.
That part is simple and convenient. But if, after configuring and starting everything, you change settings on some nodes (the MapReduce bits, the various Tracker roles, and the like), services may fail to start and never recover. I hit this but did not dig in deeply,
and some cases I could only solve by switching to another machine. (Hadoop has many dependencies and many settings, and the servers' own environments were messy too, which is why the official docs recommend clean machines.)
4. Additional notes on installation problems
1) cloudera-scm-agent fails to start: check the hostname; it must not contain underscores.
2) If the install or configuration went wrong and cloudera-scm-agent must be removed completely, then besides the rpm uninstall commands in 5) below, a few files have to be deleted by hand:
rm -rf /var/run/cloudera-scm-agent
rm -rf /var/log/cloudera-scm-agent
3) If the machine identifier shown in cloudera-manager keeps coming up with the wrong hostname (it should come from the hosts file) or the wrong IP, edit the agent startup script by hand:
vim /usr/sbin/cmf-agent
The relevant content looks roughly like this:
agent/src/cmf/agent.py
--hostname vip1 --ip_address 172.16.22.1
4) Restart the agent:
/etc/init.d/cloudera-scm-agent restart
5) Commands to uninstall everything:
rpm -e cloudera-manager-agent
rpm -e cloudera-manager-daemons
rpm -e sqoop
rpm -e pig
rpm -e oozie
rpm -e oozie-client
rpm -e flume-ng
rpm -e hadoop-hdfs-fuse
rpm -e hadoop-libhdfs
rpm -e hue
rpm -e hue-useradmin
rpm -e hue-about
rpm -e hue-oozie
rpm -e hue-beeswax
rpm -e hue-jobsub
rpm -e hue-jobbrowser
rpm -e hue-shell
rpm -e hue-proxy
rpm -e hue-plugins
rpm -e hue-filebrowser
rpm -e hue-help
rpm -e hue-common
rpm -e hive
rpm -e hadoop-client
rpm -e hadoop-0.20-mapreduce
rpm -e hadoop-mapreduce
rpm -e hadoop-yarn
rpm -e hadoop-httpfs
rpm -e hbase
rpm -e hadoop-hdfs
rpm -e bigtop-tomcat
rpm -e bigtop-jsvc
rpm -e hadoop
rpm -e zookeeper
rpm -e bigtop-utils
rm -f /etc/cloudera-scm-agent/config.ini.rpmsave
rm -rf /etc/hadoop
rm -rf /usr/lib/hadoop
rm -rf /etc/hadoop-httpfs/
rm -rf /usr/lib/hadoop-httpfs
rm -rf /etc/hive
rm -rf /usr/lib/hive
rm -rf /etc/hbase
rm -rf /usr/lib/hbase
rm -rf /usr/lib/hadoop-0.20-mapreduce/
rm -rf /usr/lib/hadoop-hdfs
rm -rf /usr/lib/hadoop-mapreduce
rm -rf /usr/lib/hadoop-yarn
rm -rf /usr/lib/zookeeper
rm -rf /etc/zookeeper
rm -rf /usr/lib/oozie
rm -rf /etc/oozie
rm -rf /usr/lib/bigtop-tomcat
rm -rf /usr/lib/flume-ng
rm -rf /etc/flume-ng
rm -rf /var/lib/hive
rm -rf /var/lib/oozie
rm -rf /var/lib/zookeeper
rm -rf /dfs # cloudera's default data path; check your actual configuration and don't delete blindly
rm -rf /data/dfs/ # also a cloudera default path; check your actual configuration and don't delete blindly
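The long rpm -e sequence above can be looped as well; keep the list order, since dependent packages must be removed before their dependencies. A sketch with a made-up helper name:

```shell
# Remove a list of packages in order, reporting absent ones
# instead of aborting.
remove_pkgs() {
  for p in "$@"; do
    rpm -e "$p" 2>/dev/null || echo "skipped (not installed?): $p"
  done
}
# remove_pkgs cloudera-manager-agent cloudera-manager-daemons sqoop pig oozie ...
```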
5. Testing.
1) Open the address configured as dfs.namenode.http-address in hdfs-site.xml in a browser to see the current state of the Hadoop cluster.
2) Log into any machine in the cluster and run "hadoop fs -ls /" to list files in HDFS, or "hadoop fs -put xxx /tmp" to store a file in it. (This of course requires adding $HADOOP_HOME/bin to the PATH environment variable.)
3) To access and operate on the cluster's files from a machine outside the cluster, download Hadoop and configure it the same way as above; nothing needs to be started. Then simply try hadoop fs, or hadoop fs -ls hdfs://xxxx:12345/. (Prerequisite: that machine can reach the cluster's namenode and datanodes directly.)