Hadoop Cluster Installation (Day 03)


Setting Up the Hadoop Distributed Environment

Upload the tarball and extract it (the uploaded package must be a Hadoop build that has been recompiled to support snappy compression):

cd /export/softwares/
tar -zxvf hadoop-2.6.0-cdh5.14.0.tar.gz -C ../servers/
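
To confirm the extraction landed in /export/servers and that this build really includes snappy (and, after installing openssl-devel below, openssl) support, hadoop checknative from the stock CLI prints a per-library table; run it once JAVA_HOME is set (see hadoop-env.sh below):

cd /export/servers/hadoop-2.6.0-cdh5.14.0
bin/hadoop checknative    # the snappy line should read: true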

Install openssl-devel online on all three machines:

yum -y install openssl-devel
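
To verify the install succeeded on each node:

rpm -q openssl-devel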

Edit the core-site.xml configuration file:

cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
vim core-site.xml

**Changes required:**

<configuration>
	<property>
		<name>fs.defaultFS</name>
		<value>hdfs://node01:8020</value>
	</property>
	<property>
		<name>hadoop.tmp.dir</name>
		<value>/export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/tempDatas</value>
	</property>
	<property>
		<name>io.file.buffer.size</name>
		<value>4096</value>
	</property>
	<property>
		<name>fs.trash.interval</name>
		<value>10080</value>
	</property>
</configuration>
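
fs.defaultFS makes node01:8020 the default filesystem for every HDFS URI, and fs.trash.interval keeps deleted files recoverable for 10080 minutes (7 days). Once the cluster is running, the effective values can be read back with the stock getconf tool:

hdfs getconf -confKey fs.defaultFS        # hdfs://node01:8020
hdfs getconf -confKey fs.trash.interval   # 10080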

Edit the hdfs-site.xml configuration file:

cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
vim hdfs-site.xml

**Changes required:**

<configuration>
	<property>
		<name>dfs.namenode.secondary.http-address</name>
		<value>node01:50090</value>
	</property>
	<property>
		<name>dfs.namenode.http-address</name>
		<value>node01:50070</value>
	</property>
	<property>
		<name>dfs.namenode.name.dir</name>
		<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/namenodeDatas</value>
	</property>
	<property>
		<name>dfs.datanode.data.dir</name>
		<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/datanodeDatas</value>
	</property>
	<property>
		<name>dfs.namenode.edits.dir</name>
		<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/edits</value>
	</property>
	<property>
		<name>dfs.namenode.checkpoint.dir</name>
		<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/snn/name</value>
	</property>
	<property>
		<name>dfs.namenode.checkpoint.edits.dir</name>
		<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/snn/edits</value>
	</property>
	<property>
		<name>dfs.replication</name>
		<value>2</value>
	</property>
	<property>
		<name>dfs.permissions</name>
		<value>false</value>
	</property>
	<property>
		<name>dfs.blocksize</name>
		<value>134217728</value>
	</property>
</configuration>
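
dfs.blocksize = 134217728 bytes is exactly 128 MB, so for example a 300 MB file is split into ceil(300 / 128) = 3 blocks, and with dfs.replication = 2 each block is stored on two DataNodes (6 block files in total). Once files are in HDFS, fsck reports blocks and replication per file; the path below is illustrative:

hdfs fsck /path/to/file -files -blocks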

Edit hadoop-env.sh:

cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
vim hadoop-env.sh


**Changes required:**
export JAVA_HOME=/export/servers/jdk1.8.0_141
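
A quick sanity check that the path really points at a JDK:

/export/servers/jdk1.8.0_141/bin/java -version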

Edit mapred-site.xml (if the distribution ships only mapred-site.xml.template, copy it to mapred-site.xml first):

cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
vim mapred-site.xml

**Changes required:**

<configuration>
	<property>
		<name>mapreduce.framework.name</name>
		<value>yarn</value>
	</property>
	<property>
		<name>mapreduce.job.ubertask.enable</name>
		<value>true</value>
	</property>
	<property>
		<name>mapreduce.jobhistory.address</name>
		<value>node01:10020</value>
	</property>
	<property>
		<name>mapreduce.jobhistory.webapp.address</name>
		<value>node01:19888</value>
	</property>
</configuration>
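
mapreduce.job.ubertask.enable lets YARN run sufficiently small jobs inside the ApplicationMaster's own JVM instead of allocating separate containers, cutting startup overhead; by default a job qualifies only if it needs at most 9 maps, at most 1 reduce, and its input fits within one block. Whether a given job ran in uber mode shows up in the JobHistory UI on node01:19888 or via the CLI (the job id below is illustrative):

mapred job -status job_1234567890123_0001    # look for the "Uber job" line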

Edit yarn-site.xml:

cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
vim yarn-site.xml

**Changes required:**

<configuration>
	<property>
		<name>yarn.resourcemanager.hostname</name>
		<value>node01</value>
	</property>
	<property>
		<name>yarn.nodemanager.aux-services</name>
		<value>mapreduce_shuffle</value>
	</property>
	<property>
		<name>yarn.log-aggregation-enable</name>
		<value>true</value>
	</property>
	<property>
		<name>yarn.log-aggregation.retain-seconds</name>
		<value>604800</value>
	</property>
</configuration>
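
With log aggregation enabled, NodeManagers upload each finished application's container logs to HDFS and retain them for 604800 seconds (7 days), so they can be fetched from any node after the containers are gone (the application id below is illustrative):

yarn logs -applicationId application_1234567890123_0001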

Edit the slaves file
Run on node01:

cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
vi slaves

**Changes required:**
node01
node02
node03
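
The slaves file tells the start-dfs.sh / start-yarn.sh scripts where to launch DataNodes and NodeManagers, so all three hostnames must resolve (e.g. via /etc/hosts) and be reachable over passwordless SSH from node01; a quick check:

for h in node01 node02 node03; do ssh $h hostname; done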

Create the data storage directories

Run on node01:

mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/tempDatas
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/namenodeDatas
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/datanodeDatas 
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/edits
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/snn/name
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/snn/edits
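
These paths must match the directories referenced in core-site.xml and hdfs-site.xml above; a quick way to confirm they all exist:

find /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas -type d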

Distribute the installation package

Run on node01:

cd /export/servers/
scp -r hadoop-2.6.0-cdh5.14.0/ node02:$PWD
scp -r hadoop-2.6.0-cdh5.14.0/ node03:$PWD
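
Because the hadoopDatas directories were created before copying, node02 and node03 need no separate setup; a spot check that the copy completed:

ssh node02 "ls /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop"
ssh node03 "ls /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop"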

Configure the Hadoop environment variables

Run on all three machines:

vi /etc/profile

**Changes required:**
export HADOOP_HOME=/export/servers/hadoop-2.6.0-cdh5.14.0
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH

After editing, reload the profile:

source /etc/profile
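
Verify that the shell now resolves the Hadoop binaries:

which hadoop     # /export/servers/hadoop-2.6.0-cdh5.14.0/bin/hadoop
hadoop version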

Start the cluster!

Format the NameNode before the first startup. Run this once, on node01 only; reformatting later would wipe the HDFS metadata:

cd /export/servers/hadoop-2.6.0-cdh5.14.0
bin/hdfs namenode -format
or
bin/hadoop namenode -format
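
A successful format logs a "has been successfully formatted" line and initializes the NameNode metadata directory, which can be inspected directly:

cat /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/namenodeDatas/current/VERSION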

Starting daemons node by node:

On the master node, start the HDFS NameNode:
hadoop-daemon.sh start namenode
On each slave node, start an HDFS DataNode:
hadoop-daemon.sh start datanode
On the master node, start the YARN ResourceManager:
yarn-daemon.sh start resourcemanager
On each slave node, start a YARN NodeManager:
yarn-daemon.sh start nodemanager
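
Alternatively, the bundled start-dfs.sh and start-yarn.sh scripts read the slaves file and start every daemon from node01 in one shot; since mapred-site.xml points the JobHistory server at node01, start it as well. jps then confirms which daemons are running on each node, and the web UIs are at http://node01:50070 (HDFS), http://node01:8088 (YARN, default port) and http://node01:19888 (JobHistory):

start-dfs.sh
start-yarn.sh
mr-jobhistory-daemon.sh start historyserver
jps    # on node01: NameNode, DataNode, ResourceManager, NodeManager, JobHistoryServer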
