hadoop-1 runs the namenode, a datanode, the resourcemanager, and a nodemanager
hadoop-2 runs a datanode and a nodemanager
hadoop-3 runs a datanode and a nodemanager
On all three machines, edit the /etc/hosts configuration file: set the machine's own hostname (e.g. with hostnamectl set-hostname) and add entries for the other machines' hostnames.
[root@hadoop-1 hadoop]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
#::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.101.42 hadoop-1
192.168.101.43 hadoop-2
192.168.101.28 hadoop-3
On each of the three nodes, run ssh-keygen to generate a key pair, then run ssh-copy-id to copy the public key to the other nodes, so the nodes can reach each other over SSH without a password.
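The two steps above can be scripted roughly like this (the hostnames come from the /etc/hosts entries shown earlier; the `|| true` only keeps the loop going if a node is temporarily unreachable):

```shell
# Create an RSA key pair non-interactively, keeping any existing key.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa -q

# Push the public key to every node in the cluster (run this on each node);
# ssh-copy-id prompts once for each remote root password.
for host in hadoop-1 hadoop-2 hadoop-3; do
  ssh-copy-id "root@${host}" || true   # continue even if a host is unreachable
done
```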
Download the JDK 1.8 tarball and unpack it into the /usr/local/ directory.
Edit the /etc/profile configuration file and add the following lines:
[root@hadoop-1 hadoop]# cat /etc/profile
JAVA_HOME=/usr/local/jdk1.8/
JAVA_BIN=/usr/local/jdk1.8/bin
JRE_HOME=/usr/local/jdk1.8/jre
PATH=$PATH:$JAVA_BIN:$JRE_HOME/bin
CLASSPATH=$JRE_HOME/lib:$JAVA_HOME/lib:$JRE_HOME/lib/charsets.jar
export JAVA_HOME JAVA_BIN JRE_HOME PATH CLASSPATH
Run source /etc/profile so the modified configuration takes effect:
[root@hadoop-1 ~]# source /etc/profile
Download the Hadoop tarball from http://archive.cloudera.com/cdh5/cdh/5/hadoop-2.6.0-cdh5.7.0.tar.gz
Unpack the downloaded tarball into the /usr/local directory.
The main directories in the unpacked distribution are:
bin   executable client programs
etc   configuration files
sbin  scripts that start and stop the daemons
share jar files and documentation
After unpacking, edit the hadoop-env.sh configuration file and set JAVA_HOME:
[root@hadoop-1 hadoop]# cd /usr/local/hadoop-2.6.0-cdh5.7.0/etc/hadoop
[root@hadoop-1 hadoop]# cat hadoop-env.sh | grep JAVA_HOME | grep -v "#"
export JAVA_HOME=/usr/local/jdk1.8/
With hadoop-env.sh updated, add the Hadoop installation directory to the environment variables:
[root@hadoop-1 hadoop]# cat ~/.bash_profile
export HADOOP_HOME=/usr/local/hadoop-2.6.0-cdh5.7.0/
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
Reload the file so it takes effect:
[root@hadoop2 hadoop]# source !$
source ~/.bash_profile
1 The core-site.xml configuration file, which after editing looks like this:
[root@hadoop-2 hadoop]# cat core-site.xml
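For reference, a minimal core-site.xml consistent with the rest of this walkthrough might look like the following (the namenode hostname, port 8020, and the tmp directory are assumptions inferred from the surrounding text):

```xml
<configuration>
  <!-- default filesystem: the namenode runs on hadoop-1 (port 8020 assumed) -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop-1:8020</value>
  </property>
  <!-- base directory for HDFS working files; matches the mkdir commands below -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/data/hadoop/app/tmp</value>
  </property>
</configuration>
```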
2 The hdfs-site.xml configuration file
[root@hadoop-2 hadoop]# cat hdfs-site.xml
This configuration file sets how many replicas HDFS keeps of each data block, as well as which node runs the secondarynamenode; the latter is configured independently of the namenode settings.
Create the corresponding directories for the temporary files:
mkdir -p /data/hadoop/app/tmp/dfs/name
mkdir -p /data/hadoop/app/tmp/dfs/data
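An hdfs-site.xml matching the description above might look like this; the replica count of 3 matches the file listings later in this post, while the secondarynamenode address and the data directories are assumptions:

```xml
<configuration>
  <!-- keep three replicas of each block, one per datanode -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <!-- web endpoint of the secondarynamenode (host assumed from the jps output below) -->
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop-1:50090</value>
  </property>
  <!-- directories created above -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/data/hadoop/app/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/data/hadoop/app/tmp/dfs/data</value>
  </property>
</configuration>
```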
3 Edit the yarn-site.xml configuration file
[root@hadoop-2 hadoop]# cat yarn-site.xml
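A yarn-site.xml consistent with running the resourcemanager on hadoop-1 could be the following (both values are assumptions; mapreduce_shuffle is the standard auxiliary service MapReduce needs on YARN):

```xml
<configuration>
  <!-- shuffle service that MapReduce jobs need on every nodemanager -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <!-- the resourcemanager runs on hadoop-1, per the role list at the top -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop-1</value>
  </property>
</configuration>
```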
4 Edit the MapReduce configuration file
[root@hadoop-2 hadoop]# cat mapred-site.xml
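Since the contents are not shown above, note that the one setting mapred-site.xml usually needs in this kind of setup is to run MapReduce on YARN:

```xml
<configuration>
  <!-- submit MapReduce jobs to YARN instead of the local runner -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```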
5 Edit the slaves configuration file
[root@hadoop-2 hadoop]# cat slaves
hadoop-1
hadoop-2
hadoop-3
The remaining nodes use the same configuration as hadoop-1.
Format the namenode with the following command:
[root@hadoop-2 bin]# pwd
/usr/local/hadoop-2.6.0-cdh5.7.0/bin
[root@hadoop-2 bin]# ./hdfs namenode -format
Note: formatting the namenode more than once tends to leave mismatched clusterIDs between the name and data directories; the datanodes then fail to register, and their details no longer appear in the namenode web UI. To recover, stop the cluster, clear both directories, and format again.
Once the format succeeds, start the Hadoop cluster:
[root@hadoop-2 sbin]# ./start-all.sh
After startup, use jps to confirm that the expected daemons are running on each node:
Hadoop-1
[root@hadoop-1 current]# jps
25299 DataNode
25507 SecondaryNameNode
25797 NodeManager
25190 NameNode
25688 ResourceManager
5257 Jps
Hadoop-2
[root@hadoop-2 sbin]# jps
16584 NodeManager
16457 DataNode
27599 Jps
Hadoop-3
[root@hadoop-3 logs]# jps
7616 DataNode
18724 Jps
7742 NodeManager
If every node's daemons started correctly, open the namenode web UI on the master node at 192.168.101.42:50070 to inspect the cluster, and try a few HDFS operations:
[root@hadoop-1 ~]# hdfs dfs -ls /
19/01/07 09:14:23 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@hadoop-1 ~]# hdfs dfs -mkdir /data
19/01/07 09:14:50 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@hadoop-1 ~]# hdfs dfs -put ./test.sh /data
19/01/07 09:16:12 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
[root@hadoop-1 ~]# hdfs dfs -ls /
19/01/07 09:16:24 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
drwxr-xr-x - root supergroup 0 2019-01-07 09:16 /data
[root@hadoop-1 ~]# hdfs dfs -ls /data
19/01/07 09:16:38 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 root supergroup 0 2019-01-07 09:16 /data/test.sh
The other nodes in the cluster can access HDFS as well; the filesystem is shared across the cluster, so every node sees the same data.
Hadoop-2
[root@hadoop-2 ~]# hdfs dfs -ls /
19/01/07 09:17:46 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
drwxr-xr-x - root supergroup 0 2019-01-07 09:16 /data
[root@hadoop-2 ~]# hdfs dfs -ls /data
19/01/07 09:18:06 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 root supergroup 0 2019-01-07 09:16 /data/test.sh
Hadoop-3
[root@hadoop-3 ~]# hdfs dfs -ls /
19/01/07 09:17:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
drwxr-xr-x - root supergroup 0 2019-01-07 09:16 /data
[root@hadoop-3 ~]# hdfs dfs -ls /data
19/01/07 09:18:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 1 items
-rw-r--r-- 3 root supergroup 0 2019-01-07 09:16 /data/test.sh
Finally, run one of the example jobs that ship with Hadoop (the examples jar lives under share/hadoop/mapreduce in the install directory) to verify that YARN can track the job's execution status; if everything works, the output looks like the following: