Preparation: three virtual machines
Configure the master node:
1. Check the network interface currently in use
[root@localhost ~]# dmesg | grep -i eth
Bluetooth: BNEP (Ethernet Emulation) ver 1.3
eth0: no IPv6 routers present
dmesg prints the kernel boot messages; grep -i makes the search case-insensitive.
2. Check the current IP address and gateway
[root@localhost ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:0C:29:E0:C3:61
          inet addr:192.168.182.138  Bcast:192.168.182.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:fee0:c361/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:24092 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12373 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:35176927 (33.5 MiB)  TX bytes:672502 (656.7 KiB)
          Interrupt:19 Base address:0x2024

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:8 errors:0 dropped:0 overruns:0 frame:0
          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:480 (480.0 b)  TX bytes:480 (480.0 b)

Configure a static IP:
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE="eth0"
#BOOTPROTO="dhcp"
BOOTPROTO="static"
HWADDR="00:0C:29:E0:C3:61"
#IPV6INIT="yes"
IPV6INIT="no"
NM_CONTROLLED="yes"
ONBOOT="yes"
TYPE="Ethernet"
UUID="7039ffae-3334-4445-b571-a805eecd4a77"
IPADDR=192.168.182.100
NETMASK=255.255.255.0
GATEWAY=192.168.182.255
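After saving the file, the new static address does not take effect until the interface is restarted; on CentOS 6 this is typically done through the network service (output omitted):
[root@localhost ~]# service network restart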
Modify the hostname:
[root@localhost ~]# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=master
[root@localhost ~]# hostname master
[root@localhost ~]# hostname
master
Modify the hosts file:
[root@localhost ~]# vi /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.182.100 master
192.168.182.101 slave1
192.168.182.102 slave2

Disable the firewall (do not disable it in a production environment; add allow rules instead):
[root@localhost ~]# service iptables stop
iptables: Setting chains to policy ACCEPT: filter          [  OK  ]
iptables: Flushing firewall rules:                         [  OK  ]
iptables: Unloading modules:                               [  OK  ]
[root@localhost ~]# chkconfig iptables off

Disable SELinux:
[root@localhost ~]# vi /etc/selinux/config
Change the SELINUX value to: SELINUX=disabled
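The change in /etc/selinux/config only takes effect after a reboot; to switch SELinux to permissive mode immediately in the current session (an optional extra step, not in the original), setenforce can be used:
[root@localhost ~]# setenforce 0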
Configure the two slave nodes the same way as above, with IP addresses 192.168.182.101 and 192.168.182.102 and hostnames slave1 and slave2 respectively, matching the /etc/hosts entries.
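Once all three machines are configured, a quick check that hostname resolution and connectivity work from the master (an optional verification, not part of the original steps):
# ping -c 1 slave1
# ping -c 1 slave2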
Create a hadoop user (and set its password) on each of the three machines:
# useradd hadoop
# passwd hadoop
Run the following on each of the three machines:
[hadoop@master .ssh]$ ssh-keygen -t rsa
[hadoop@master .ssh]$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
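Press Enter at the passphrase prompts so the key has no passphrase, otherwise the login will not be password-free. A non-interactive equivalent (an alternative form, not used in the original) is:
[hadoop@master ~]$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa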
Copy the authorized_keys files generated on slave1 and slave2 to the master (run the first scp command below on slave1 and the second on slave2):
scp ~/.ssh/authorized_keys master:~/.ssh/authorized_keys1
scp ~/.ssh/authorized_keys master:~/.ssh/authorized_keys2

Merge the files on the master:
cat authorized_keys1 >> authorized_keys
cat authorized_keys2 >> authorized_keys
Copy the merged authorized_keys file from the master to slave1 and slave2:
scp ~/.ssh/authorized_keys slave1:~/.ssh/
scp ~/.ssh/authorized_keys slave2:~/.ssh/
On each node (in the ~/.ssh directory):
$ chmod 600 authorized_keys
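sshd is strict about permissions: if password-free login still prompts for a password, it is usually worth confirming that the .ssh directory itself is also locked down (an extra check, not listed in the original):
$ chmod 700 ~/.ssh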
Verify:
$ ssh slave2
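A fuller spot-check from the master (a sketch of what success looks like; the point is that no password prompt appears):
[hadoop@master ~]$ ssh slave1 hostname
slave1
[hadoop@master ~]$ ssh slave2 hostname
slave2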
Configure Hadoop
Upload hadoop-1.2.1-bin.tar.gz to the hadoop user's home directory, extract it, and confirm that the extracted files are owned by the hadoop user and group:
$ tar xzvf hadoop-1.2.1-bin.tar.gz
$ chown -R hadoop:hadoop hadoop-1.2.1
Edit ~/.bashrc on the master and append the following environment variables:
export JAVA_HOME=/usr/java/jdk1.6.0_45
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=$CLASSPATH:$JAVA_HOME/lib
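To make the variables available in the current shell without logging out and back in:
$ source ~/.bashrc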
$ scp .bashrc slave1:~
$ scp .bashrc slave2:~
Modify the hadoop-env.sh file:
[hadoop@master ~]$ vi hadoop-1.2.1/conf/hadoop-env.sh
Change:
export JAVA_HOME=/usr/java/jdk1.6.0_45
export HADOOP_HEAPSIZE=20
Modify core-site.xml:
[hadoop@master ~]$ vi hadoop-1.2.1/conf/core-site.xml
File contents:
<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
Hadoop will create /home/hadoop/tmp automatically.
Modify hdfs-site.xml:
[hadoop@master ~]$ vi hadoop-1.2.1/conf/hdfs-site.xml
File contents:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.data.dir</name>
    <value>/home/hadoop/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
</configuration>
Modify mapred-site.xml. File contents:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
</configuration>
Modify the masters file (hadoop-1.2.1/conf/masters):
master
Modify the slaves file (hadoop-1.2.1/conf/slaves):
slave1
slave2
Copy the configured Hadoop directory from the master node to slave1 and slave2:
$ scp -r ./hadoop-1.2.1 slave1:/home/hadoop
$ scp -r ./hadoop-1.2.1 slave2:/home/hadoop
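The original jumps straight to checking the daemons; before that, the NameNode has to be formatted (once) and the cluster started from the master. With the directory layout above, that step looks roughly like this:
[hadoop@master ~]$ cd hadoop-1.2.1
[hadoop@master hadoop-1.2.1]$ bin/hadoop namenode -format
[hadoop@master hadoop-1.2.1]$ bin/start-all.sh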
Use the jps command to check whether the daemons have started.
The master node should show: NameNode, SecondaryNameNode, JobTracker
The slave1 and slave2 nodes should show: DataNode, TaskTracker
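For reference, the check on the master looks something like the following (the process IDs are illustrative and will differ):
[hadoop@master ~]$ jps
2482 NameNode
2633 SecondaryNameNode
2711 JobTracker
2857 Jps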
Check the cluster status
1) [hadoop@master bin]$ hadoop dfsadmin -report
2) Via the web UIs:
http://192.168.182.100:50070/dfshealth.jsp
http://192.168.182.100:50030/jobtracker.jsp
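As an optional smoke test (not part of the original write-up), the example jobs bundled with Hadoop 1.2.1 can be used to run a small MapReduce job, for example the pi estimator with 2 maps and 10 samples per map:
[hadoop@master hadoop-1.2.1]$ bin/hadoop jar hadoop-examples-1.2.1.jar pi 2 10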