Installing and Deploying the Big Data Ecosystem (Hadoop)

Preparation before installing Hadoop (on every node)

1. Install JDK 1.8

2. Set up passwordless SSH from the master to the worker (slave) nodes: ssh-keygen, then ssh-copy-id <ip or hostname>

3. Stop the firewall: service iptables stop; disable it permanently: chkconfig iptables off

4. Disable SELinux: vim /etc/selinux/config and change SELINUX=enforcing to SELINUX=disabled

5. Set the hostname: vim /etc/sysconfig/network

6. Map hostnames to IP addresses: vim /etc/hosts (a combined sketch of steps 2-6 follows below)
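A minimal sketch of steps 2-6, assuming a CentOS 6-style system (which the iptables/chkconfig commands above imply). The 192.168.52.x addresses are placeholders invented for illustration; only the hostnames node01-node03 come from this guide.

# On the master node only: generate a key and push it to every node
ssh-keygen -t rsa                  # accept the defaults at each prompt
ssh-copy-id node01                 # repeat for node02 and node03
# On every node: firewall, SELinux, hostname/IP mapping
service iptables stop
chkconfig iptables off
sed -i 's/SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config
cat >> /etc/hosts <<'EOF'
192.168.52.100 node01
192.168.52.110 node02
192.168.52.120 node03
EOF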

Installing Hadoop

1. Upload and extract the tarball
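For example, assuming the tarball was uploaded to /export/servers (the install path used throughout this guide) and carries the standard CDH name:

cd /export/servers
tar -zxvf hadoop-2.6.0-cdh5.14.0.tar.gz    # yields hadoop-2.6.0-cdh5.14.0/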

2. Configure the Hadoop environment variables
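A minimal sketch of /etc/profile.d/hadoop.sh, the file that step 7 below copies to the other nodes; only the install path comes from this guide, the variable layout is a common convention:

# /etc/profile.d/hadoop.sh
export HADOOP_HOME=/export/servers/hadoop-2.6.0-cdh5.14.0
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Apply it with source /etc/profile.d/hadoop.sh or by logging in again.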

3. Check which native libraries are supported

From the bin directory of the Hadoop installation, run:

./hadoop checknative

If the openssl entry reports false, install the OpenSSL development headers and re-run the check:

yum -y install openssl-devel

4. Edit Hadoop's core configuration files

In the etc/hadoop directory of the installation, edit each of the files below, adding the <property> elements inside its <configuration> block:

vim core-site.xml


<!-- NameNode RPC address (the default filesystem) -->
<property>
	<name>fs.defaultFS</name>
	<value>hdfs://node01:8020</value>
</property>
<!-- Base directory for Hadoop's temporary files -->
<property>
	<name>hadoop.tmp.dir</name>
	<value>/export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/tempDatas</value>
</property>
<!-- I/O buffer size in bytes -->
<property>
	<name>io.file.buffer.size</name>
	<value>4096</value>
</property>
<!-- Keep deleted files in the trash for 10080 minutes (7 days) -->
<property>
	<name>fs.trash.interval</name>
	<value>10080</value>
</property>

vim hdfs-site.xml

 

 
<!-- SecondaryNameNode HTTP address -->
<property>
	<name>dfs.namenode.secondary.http-address</name>
	<value>node01:50090</value>
</property>
<!-- NameNode web UI address -->
<property>
	<name>dfs.namenode.http-address</name>
	<value>node01:50070</value>
</property>
<!-- Where the NameNode stores its metadata (fsimage) -->
<property>
	<name>dfs.namenode.name.dir</name>
	<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/namenodeDatas</value>
</property>
<!-- Where DataNodes store block data -->
<property>
	<name>dfs.datanode.data.dir</name>
	<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/datanodeDatas</value>
</property>
<!-- NameNode edit log directory -->
<property>
	<name>dfs.namenode.edits.dir</name>
	<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/edits</value>
</property>
<!-- SecondaryNameNode checkpoint directories -->
<property>
	<name>dfs.namenode.checkpoint.dir</name>
	<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/snn/name</value>
</property>
<property>
	<name>dfs.namenode.checkpoint.edits.dir</name>
	<value>file:///export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/snn/edits</value>
</property>
<!-- Number of block replicas -->
<property>
	<name>dfs.replication</name>
	<value>2</value>
</property>
<!-- Disable HDFS permission checking -->
<property>
	<name>dfs.permissions</name>
	<value>false</value>
</property>
<!-- Block size: 134217728 bytes = 128 MB -->
<property>
	<name>dfs.blocksize</name>
	<value>134217728</value>
</property>

vim mapred-site.xml (first create it from the template: cp mapred-site.xml.template mapred-site.xml)


<!-- Run MapReduce on YARN -->
<property>
	<name>mapreduce.framework.name</name>
	<value>yarn</value>
</property>
<!-- Allow small jobs to run as "uber" tasks inside the ApplicationMaster JVM -->
<property>
	<name>mapreduce.job.ubertask.enable</name>
	<value>true</value>
</property>
<!-- JobHistory server RPC address -->
<property>
	<name>mapreduce.jobhistory.address</name>
	<value>node01:10020</value>
</property>
<!-- JobHistory server web UI -->
<property>
	<name>mapreduce.jobhistory.webapp.address</name>
	<value>node01:19888</value>
</property>

vim yarn-site.xml


<!-- ResourceManager runs on node01 -->
<property>
	<name>yarn.resourcemanager.hostname</name>
	<value>node01</value>
</property>
<!-- Enable the MapReduce shuffle service on NodeManagers -->
<property>
	<name>yarn.nodemanager.aux-services</name>
	<value>mapreduce_shuffle</value>
</property>

Run the following commands on the first machine to point Hadoop at the JDK:
cd /export/servers/hadoop-2.6.0-cdh5.14.0/etc/hadoop
vim hadoop-env.sh

export JAVA_HOME=/export/servers/jdk1.8.0_141

5. List the cluster's worker nodes

Edit the slaves file under etc/hadoop in the installation directory:
node01
node02
node03

Create the following directories on node01:
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/tempDatas
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/namenodeDatas
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/datanodeDatas 
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/edits
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/snn/name
mkdir -p /export/servers/hadoop-2.6.0-cdh5.14.0/hadoopDatas/dfs/nn/snn/edits

6. Distribute the installation to the other nodes. Run the scp commands from /export/servers on node01 so that $PWD resolves to the same path on the targets:

scp -r hadoop-2.6.0-cdh5.14.0 node02:$PWD

scp -r hadoop-2.6.0-cdh5.14.0 node03:$PWD

7. Configure the Hadoop environment variables on the other nodes

scp /etc/profile.d/hadoop.sh node02:/etc/profile.d/

scp /etc/profile.d/hadoop.sh node03:/etc/profile.d/

8. Format the cluster

From the bin directory of the installation, run the following to format the NameNode. Do this only once; reformatting an existing cluster destroys its HDFS metadata:

hdfs namenode -format

9. Start the cluster

From the sbin directory of the installation, run the following to start all HDFS and YARN daemons:

./start-all.sh
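To check that startup worked, a sketch of the usual verification steps (the jps process names are standard for these daemons; 8088 is YARN's default ResourceManager web port, which this guide does not override):

jps                # node01 should list NameNode, SecondaryNameNode, ResourceManager,
                   # DataNode and NodeManager; node02/node03 only the last two
hdfs dfs -ls /     # quick HDFS smoke test over RPC
# Web UIs: http://node01:50070 (HDFS) and http://node01:8088 (YARN).
# The JobHistory UI at http://node01:19888 needs the history server started
# separately: ./mr-jobhistory-daemon.sh start historyserver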
