After spending some time wrestling with Hadoop deployment and management, I'm writing this series of blog posts to record the process.
To save you from the repetitive labor of deployment, I have already scripted the deployment steps. Just follow this post and run the scripts, and the whole environment will be essentially deployed. The deployment scripts live in a git repository on OSChina (http://git.oschina.net/snake1361222/hadoop_scripts).
Everything deployed in this post is based on Cloudera's CDH4. CDH4 is Cloudera's packaging of the Hadoop ecosystem as a set of yum packages; putting CDH4 into your own yum repository makes deploying a Hadoop environment dramatically simpler.
The deployment covered here includes a NameNode HA setup and solutions for managing Hadoop (synchronizing Hadoop configuration files, quick-deployment scripts, and so on).
Five machines in total make up the hardware environment, all running CentOS 6.4:
namenode & resourcemanager primary server: 192.168.1.1
namenode & resourcemanager standby server: 192.168.1.2
datanode & nodemanager servers: 192.168.1.100 192.168.1.101 192.168.1.102
zookeeper server cluster (used for automatic namenode HA failover): 192.168.1.100 192.168.1.101
jobhistory server (records mapreduce logs): 192.168.1.1
NFS for namenode HA: 192.168.1.100
wget http://archive.cloudera.com/cdh4/one-click-install/redhat/6/x86_64/cloudera-cdh-4-0.x86_64.rpm
sudo yum --nogpgcheck localinstall cloudera-cdh-4-0.x86_64.rpm
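If you want to confirm that the Cloudera repository was actually registered after installing the one-click rpm, a quick sanity check (my own addition, not part of the original scripts) is:

# list enabled yum repos and confirm a Cloudera entry is present
yum repolist | grep -i cloudera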
#!/bin/bash
yum -y install rpcbind nfs-utils
mkdir -p /data/nn_ha/
echo "/data/nn_ha *(rw,root_squash,all_squash,sync)" >> /etc/exports
/etc/init.d/rpcbind start
/etc/init.d/nfs start
chkconfig --level 234 rpcbind on
chkconfig --level 234 nfs on
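To quickly confirm the export is visible (an optional check I would add here, not part of the original script):

# should list /data/nn_ha and the clients allowed to mount it
showmount -e localhost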
yum -y install git
mkdir -p /opt/
cd /opt/
git clone http://git.oschina.net/snake1361222/hadoop_scripts.git
/etc/init.d/iptables stop
sh /opt/hadoop_scripts/deploy/AddHostname.sh
vim /opt/hadoop_scripts/deploy/config
# set the master server address, i.e. the primary namenode
master="192.168.1.1"
# set the nfs server address
nfsserver="192.168.1.100"
vim /opt/hadoop_scripts/share_data/resolv_host
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.1 nn.dg.hadoop.cn
192.168.1.2 nn2.dg.hadoop.cn
192.168.1.100 dn100.dg.hadoop.cn
192.168.1.101 dn101.dg.hadoop.cn
192.168.1.102 dn102.dg.hadoop.cn
sh /opt/hadoop_scripts/deploy/CreateNamenode.sh
PS: SaltStack is an open-source server management tool similar to Puppet but lighter weight. Here it is used to manage the Hadoop cluster and drive the datanodes. For details on SaltStack, see my post "SaltStack Deployment and Usage".
yum -y install salt salt-master
Set the listen IP:
    interface: 0.0.0.0
Worker thread pool:
    worker_threads: 5
Enable the job cache (the official docs say that with the cache enabled the master can handle around 5000 minions):
    job_cache: True
Enable automatic key acceptance:
    auto_accept: True
c. Start the service
/etc/init.d/salt-master start
chkconfig salt-master on
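A quick way to confirm the master is up (my own sanity check, not in the scripts): by default the salt master listens on TCP ports 4505 (publish) and 4506 (return).

# both ports should show a listening salt-master process
netstat -lntp | grep -E '4505|4506'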
<property>
  <name>dfs.namenode.rpc-address.mycluster.ns1</name>
  <value>nn.dg.hadoop.cn:8020</value>
  <description>RPC address of ns1</description>
</property>
<property>
  <name>dfs.namenode.rpc-address.mycluster.ns2</name>
  <value>nn2.dg.hadoop.cn:8020</value>
  <description>RPC address of ns2</description>
</property>
<property>
  <name>ha.zookeeper.quorum</name>
  <value>dn100.dg.hadoop.cn:2181,dn101.dg.hadoop.cn:2181,dn102.dg.hadoop.cn:2181</value>
  <description>list of ZooKeeper servers used for HA</description>
</property>
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>nn.dg.hadoop.cn:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>nn.dg.hadoop.cn:19888</value>
</property>
<property>
  <name>yarn.resourcemanager.resource-tracker.address</name>
  <value>nn.dg.hadoop.cn:8031</value>
</property>
<property>
  <name>yarn.resourcemanager.address</name>
  <value>nn.dg.hadoop.cn:8032</value>
</property>
<property>
  <name>yarn.resourcemanager.scheduler.address</name>
  <value>nn.dg.hadoop.cn:8030</value>
</property>
<property>
  <name>yarn.resourcemanager.admin.address</name>
  <value>nn.dg.hadoop.cn:8033</value>
</property>
/etc/init.d/iptables stop
mkdir -p /opt/hadoop_scripts
rsync -avz 192.168.1.1::hadoop_s /opt/hadoop_scripts
sh /opt/hadoop_scripts/deploy/CreateNamenode.sh
rsync -avz 192.168.1.1::hadoop_conf /etc/hadoop/conf
sh /opt/hadoop_scripts/deploy/salt_minion.sh
ZooKeeper is an open-source distributed coordination service; here it provides the automatic failover (auto-failover) capability for the namenode.
yum install zookeeper zookeeper-server
maxClientCnxns=50
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial synchronization phase can take
initLimit=10
# The number of ticks that can pass between sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored
dataDir=/var/lib/zookeeper
# the port at which the clients will connect
clientPort=2181
# list every machine in the zookeeper ensemble; this part is identical on all members
server.1=dn100.dg.hadoop.cn:2888:3888
server.2=dn101.dg.hadoop.cn:2888:3888
# e.g. the current machine is 192.168.1.100 (dn100.dg.hadoop.cn), which is server.1, so its id is 1:
echo "1" > /var/lib/zookeeper/myid
chown -R zookeeper.zookeeper /var/lib/zookeeper/
service zookeeper-server init
/etc/init.d/zookeeper-server start
chkconfig zookeeper-server on
# repeat the same steps on 192.168.1.101 (server.2, with id 2)
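Once both members are up, you can check each one with ZooKeeper's four-letter-word commands (an optional check of mine; assumes nc is installed):

# "imok" means the server is alive; "stat" shows whether it is leader or follower
echo ruok | nc dn100.dg.hadoop.cn 2181
echo stat | nc dn100.dg.hadoop.cn 2181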
/etc/init.d/iptables stop
mkdir -p /opt/hadoop_scripts
rsync -avz 192.168.1.1::hadoop_s /opt/hadoop_scripts
sh /opt/hadoop_scripts/deploy/AddHostname.sh
sh /opt/hadoop_scripts/deploy/CreateDatanode.sh
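Assuming the deployment script also brings up a salt minion on each datanode (the later salt "dn*" commands rely on this), it is worth confirming on the salt master (192.168.1.1) that every minion has registered; with auto_accept: True the keys should already be accepted:

salt-key -L              # list accepted and pending minion keys
salt -v "*" test.ping    # every node should answer True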
At this point the Hadoop cluster environment is fully deployed; now we start initializing the cluster.
sudo -u hdfs hdfs zkfc -formatZK
/etc/init.d/zookeeper-server start
/etc/init.d/hadoop-hdfs-zkfc start
# make sure the format is run as the hdfs user
sudo -u hdfs hadoop namenode -format
tar -zcvPf /tmp/namedir.tar.gz /data/hadoop/dfs/name/
nc -l 9999 < /tmp/namedir.tar.gz
wget 192.168.1.1:9999 -O /tmp/namedir.tar.gz
tar -zxvPf /tmp/namedir.tar.gz
/etc/init.d/hadoop-hdfs-namenode start
/etc/init.d/hadoop-yarn-resourcemanager start
http://192.168.1.1:9080
http://192.168.1.2:9080
# if the web UI shows both namenodes in standby state, the auto-failover configuration has not worked
# check the zkfc log (/var/log/hadoop-hdfs/hadoop-hdfs-zkfc-nn.dg.s.kingsoft.net.log)
# check the zookeeper cluster log (/var/log/zookeeper/zookeeper.log)
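You can also query the HA state from the command line (an extra check of mine; the service ids ns1/ns2 come from the hdfs-site.xml shown earlier):

# exactly one of these should report "active" and the other "standby"
sudo -u hdfs hdfs haadmin -getServiceState ns1
sudo -u hdfs hdfs haadmin -getServiceState ns2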
At this point the whole Hadoop deployment is done; now let's start the cluster and verify that it works.
# remember the saltstack management tool we set up earlier? Now it earns its keep. Log into the saltstack master (192.168.1.1) and run:
salt -v "dn*" cmd.run "/etc/init.d/hadoop-hdfs-datanode start"
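To confirm the datanodes have registered with the namenode (an extra check, run on the primary namenode):

# the "Live datanodes" section should list all three datanodes
sudo -u hdfs hdfs dfsadmin -report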
# create a tmp directory
sudo -u hdfs hdfs dfs -mkdir /tmp
# create a 10G file of zeroes, compute its MD5, and put it into hdfs
dd if=/dev/zero of=/data/test_10G_file bs=1G count=10
md5sum /data/test_10G_file
sudo -u hdfs hdfs dfs -put /data/test_10G_file /tmp
sudo -u hdfs hdfs dfs -ls /tmp
# now try stopping one datanode, pull the test file back out, and compute the MD5 again to see whether it still matches
sudo -u hdfs hdfs dfs -get /tmp/test_10G_file /tmp/
md5sum /tmp/test_10G_file
Besides HDFS for distributed big-data storage, Hadoop has an equally important component: distributed computation (MapReduce). Now let's start the MapReduce v2 (YARN) cluster.
/etc/init.d/hadoop-yarn-resourcemanager start
# again, log into the saltstack master and run:
salt -v "dn*" cmd.run "/etc/init.d/hadoop-yarn-nodemanager start"
# TestDFSIO tests HDFS read/write performance: write 10 files of 1G each
su - hdfs
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.0.0-cdh4.2.1-tests.jar TestDFSIO -write -nrFiles 10 -fileSize 1000
# Sort tests MapReduce
## generate data into the random-data directory
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar randomwriter random-data
## run the sort program
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar sort random-data sorted-data
## verify that sorted-data is actually sorted
hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-client-jobclient-2.0.0-cdh4.2.1-tests.jar testmapredsort -sortInput random-data -sortOutput sorted-data
vim /opt/hadoop_scripts/share_data/resolv_host
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.1.1 nn.dg.hadoop.cn
192.168.1.2 nn2.dg.hadoop.cn
192.168.1.100 dn100.dg.hadoop.cn
192.168.1.101 dn101.dg.hadoop.cn
192.168.1.102 dn102.dg.hadoop.cn
192.168.1.103 dn103.dg.hadoop.cn
mkdir -p /opt/hadoop_scripts
rsync -avz 192.168.1.1::hadoop_s /opt/hadoop_scripts
sh /opt/hadoop_scripts/deploy/CreateDatanode.sh
sh /opt/hadoop_scripts/deploy/AddHostname.sh
/etc/init.d/hadoop-hdfs-datanode start
/etc/init.d/hadoop-yarn-nodemanager start
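Back on the salt master / primary namenode you can confirm the new node is both managed by salt and registered with HDFS (an optional check; with auto_accept: True its key should already be accepted):

salt-key -L                          # dn103.dg.hadoop.cn should appear under accepted keys
salt -v "dn103*" test.ping
sudo -u hdfs hdfs dfsadmin -report   # the new node should show up under live datanodes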
Normally a Hadoop cluster maintains a single copy of the Hadoop configuration, which has to be distributed to every member of the cluster. The approach here is salt + rsync.
# edit the hadoop configuration under /etc/hadoop/conf/ on the primary namenode, then run the following command to sync it to every member of the cluster
sync_h_conf
# the script directory also needs maintaining, e.g. the hosts file /opt/hadoop_scripts/share_data/resolv_host; after editing it, run the following command to sync it to every member of the cluster
sync_h_script
# these two commands are actually aliases for salt commands that I defined myself; see /opt/hadoop_scripts/profile.d/hadoop.sh
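I have not reproduced the exact aliases here, but a minimal sketch of what they can look like, assuming the rsync modules hadoop_conf and hadoop_s exported from the master as used earlier in this post:

# push the master's hadoop config to every minion via the master's rsync daemon
alias sync_h_conf='salt -v "*" cmd.run "rsync -avz 192.168.1.1::hadoop_conf /etc/hadoop/conf"'
# push the script directory the same way
alias sync_h_script='salt -v "*" cmd.run "rsync -avz 192.168.1.1::hadoop_s /opt/hadoop_scripts"'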
The most common approach is monitoring with Ganglia and Nagios: Ganglia collects a large number of metrics and graphs them, while Nagios sends an alert when a metric crosses a threshold. I'll add documentation on Ganglia monitoring later.
In fact, Hadoop ships with an interface you can use to write your own monitoring programs, and it is quite simple: just request http://192.168.1.1:9080/jmx and the response is JSON with very detailed content. But getting the whole big JSON blob back on every query is wasteful; the interface also supports more fine-grained queries. For example, if I only want the operating-system information, I can call http://192.168.1.1:9080/jmx?qry=java.lang:type=OperatingSystem . The value following the qry parameter is the value of the "name" key inside the JSON.
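As a tiny illustration (my own example, using the same qry value as above), this fetches only the OperatingSystem bean and pretty-prints it; it assumes curl and python are available on the monitoring host:

# query only the OperatingSystem MBean from the namenode's JMX servlet
curl -s 'http://192.168.1.1:9080/jmx?qry=java.lang:type=OperatingSystem' | python -m json.tool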
I ran into plenty of pitfalls while working through this Hadoop cluster deployment, and I plan to write about the problems I hit in the next post. If you run into problems following this post, feel free to contact me and we can compare notes: QQ 83766787. Contributions to the deployment scripts are also very welcome; the git address is http://git.oschina.net/snake1361222/hadoop_scripts