一、Installing the cluster
A、Upload the Hadoop installation package
B、Plan the installation directory: /export/servers/hadoop-2.8.4
C、Unpack the installation package
D、Edit the configuration files under $HADOOP_HOME/etc/hadoop/
1、hadoop-env.sh
export JAVA_HOME=/export/servers/jdk1.8.0_11
2、core-site.xml
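The guide lists this file without its contents. A minimal sketch for the layout above (hadoop1 as the NameNode host; port 9000 and the tmp path are assumptions, adjust to taste):

```xml
<configuration>
  <!-- Default filesystem URI; hadoop1 is the NameNode host in this guide -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://hadoop1:9000</value>
  </property>
  <!-- Base directory for Hadoop's working files (assumed path) -->
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/export/servers/hadoop-2.8.4/tmp</value>
  </property>
</configuration>
```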
3、hdfs-site.xml
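Likewise, a minimal hdfs-site.xml sketch; with only two DataNodes (hadoop2, hadoop3) a replication factor of 2 is a reasonable choice, and the storage paths below are assumptions:

```xml
<configuration>
  <!-- Two DataNodes in this cluster, so replicate each block twice -->
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <!-- Local paths for NameNode metadata and DataNode blocks (assumed) -->
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/export/servers/hadoop-2.8.4/data/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/export/servers/hadoop-2.8.4/data/data</value>
  </property>
</configuration>
```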
4、yarn-site.xml
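A minimal yarn-site.xml sketch, assuming (as the startup logs below suggest) the ResourceManager runs on hadoop1:

```xml
<configuration>
  <!-- Host that runs the ResourceManager; hadoop1 in this guide -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop1</value>
  </property>
  <!-- Auxiliary shuffle service required by MapReduce on YARN -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
```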
5、mapred-site.xml
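In Hadoop 2.x the directory ships only mapred-site.xml.template, so copy it to mapred-site.xml first. A minimal sketch:

```xml
<configuration>
  <!-- Run MapReduce jobs on YARN rather than the legacy local runtime -->
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
</configuration>
```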
6、slaves
hadoop2
hadoop3
E、Distribute the installation directory to the other cluster nodes
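One way to sketch step E, assuming passwordless SSH to the worker nodes is already set up. The loop below is a dry run that only prints the copy commands; remove the leading `echo` to actually copy:

```shell
# Dry run: print one scp command per worker node listed in slaves.
# Drop the "echo" to perform the copy (assumes passwordless SSH as root).
for host in hadoop2 hadoop3; do
  echo scp -r /export/servers/hadoop-2.8.4 root@${host}:/export/servers/
done
```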
F、Initialize (format) HDFS on the NameNode host (hadoop1 in this example)
Run: # ./bin/hdfs namenode -format (the older ./bin/hadoop namenode -format still works in 2.x but is deprecated)
G、Start HDFS
Run: # ./sbin/start-dfs.sh
[root@hadoop1 hadoop-2.8.4]# ./sbin/start-dfs.sh
Starting namenodes on [hadoop1]
hadoop1: namenode running as process 2343. Stop it first.
hadoop2: starting datanode, logging to /export/servers/hadoop-2.8.4/logs/hadoop-root-datanode-hadoop2.out
hadoop3: starting datanode, logging to /export/servers/hadoop-2.8.4/logs/hadoop-root-datanode-hadoop3.out
hadoop4: ssh: connect to host hadoop4 port 22: No route to host 【I had listed hadoop4 in slaves but never distributed Hadoop to that node or brought it up, so it cannot be reached】
Starting secondary namenodes [hadoop1]
hadoop1: secondarynamenode running as process 2510. Stop it first. 【The SecondaryNameNode periodically checkpoints HDFS metadata (merging the edit log into the fsimage); it is at best a cold backup, not a hot standby】
H、Start YARN
Run: # ./sbin/start-yarn.sh
starting yarn daemons
resourcemanager running as process 2697. Stop it first.【The ResourceManager runs on whichever machine you execute this command on; it then starts a NodeManager on each host listed in slaves】
hadoop2: starting nodemanager, logging to /export/servers/hadoop-2.8.4/logs/yarn-root-nodemanager-hadoop2.out
hadoop3: starting nodemanager, logging to /export/servers/hadoop-2.8.4/logs/yarn-root-nodemanager-hadoop3.out
二、Testing