Hadoop Installation: Standalone (Single Node)

Hadoop 1.x and 2.x can be deployed in three modes; this article covers the first of them, standalone.

Part 1: Standalone (single node)

A standalone install runs every service on a single machine, as follows:

Service            Server IP

NameNode           192.168.254.100
SecondaryNameNode  192.168.254.100
DataNode           192.168.254.100
ResourceManager    192.168.254.100
NodeManager        192.168.254.100

1) Download and install:

Install the JDK first; JDK installation is not covered here.

Download: http://archive.apache.org/dist/hadoop/common/hadoop-2.7.5/hadoop-2.7.5.tar.gz

Extract:

mkdir -p /export/softwares

mkdir -p /export/servers

cd /export/softwares

tar -zxvf hadoop-2.7.5.tar.gz -C ../servers/

2) Edit the configuration files:

2.1: Edit core-site.xml

cd /export/servers/hadoop-2.7.5/etc/hadoop

vim core-site.xml

<configuration>
  <!-- fs.default.name is deprecated in Hadoop 2.x; fs.defaultFS is the preferred key -->
  <property>
    <name>fs.default.name</name>
    <value>hdfs://192.168.254.100:8020</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/export/servers/hadoop-2.7.5/hadoopDatas/tempDatas</value>
  </property>
  <!-- I/O buffer size, in bytes -->
  <property>
    <name>io.file.buffer.size</name>
    <value>4096</value>
  </property>
  <!-- trash retention, in minutes -->
  <property>
    <name>fs.trash.interval</name>
    <value>10080</value>
  </property>
</configuration>
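A quick sanity check on the units: fs.trash.interval is expressed in minutes, so the value 10080 above gives a one-week trash retention window.

```shell
# fs.trash.interval is in minutes; confirm 10080 minutes equals 7 days.
minutes=10080
days=$((minutes / 60 / 24))
echo "trash retention: $days days"   # prints: trash retention: 7 days
```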

2.2: Edit hdfs-site.xml

cd /export/servers/hadoop-2.7.5/etc/hadoop

vim hdfs-site.xml

 

 

<configuration>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>node01:50090</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>node01:50070</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/namenodeDatas,file:///export/servers/hadoop-2.7.5/hadoopDatas/namenodeDatas2</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/datanodeDatas,file:///export/servers/hadoop-2.7.5/hadoopDatas/datanodeDatas2</value>
  </property>
  <property>
    <name>dfs.namenode.edits.dir</name>
    <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/nn/edits</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/snn/name</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.edits.dir</name>
    <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/dfs/snn/edits</value>
  </property>
  <!-- with only one DataNode, blocks cannot actually reach 3 replicas -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <!-- block size in bytes (128 MB) -->
  <property>
    <name>dfs.blocksize</name>
    <value>134217728</value>
  </property>
</configuration>
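The dfs.blocksize value is given in bytes; a quick check shows that 134217728 is the Hadoop 2.x default block size of 128 MB.

```shell
# dfs.blocksize is in bytes; 134217728 bytes = 128 MB, the 2.x default.
bytes=134217728
echo "block size: $((bytes / 1024 / 1024)) MB"   # prints: block size: 128 MB
```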

2.3: Edit hadoop-env.sh

cd /export/servers/hadoop-2.7.5/etc/hadoop

vim hadoop-env.sh

export JAVA_HOME=/export/servers/jdk1.8.0_141

2.4: Edit mapred-site.xml

cd /export/servers/hadoop-2.7.5/etc/hadoop

cp mapred-site.xml.template mapred-site.xml   # 2.7.5 ships only the template; copy it first

vim mapred-site.xml

<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.job.ubertask.enable</name>
    <value>true</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>192.168.254.100:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>192.168.254.100:19888</value>
  </property>
</configuration>

2.5: Edit yarn-site.xml

cd /export/servers/hadoop-2.7.5/etc/hadoop

vim yarn-site.xml

<configuration>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>node01</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <!-- retain aggregated logs for 7 days (in seconds) -->
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
  </property>
</configuration>

2.6: Edit mapred-env.sh

cd /export/servers/hadoop-2.7.5/etc/hadoop

vim mapred-env.sh

export JAVA_HOME=/export/servers/jdk1.8.0_141

2.7: Edit slaves

cd /export/servers/hadoop-2.7.5/etc/hadoop

vim slaves

localhost

3) Start the cluster

Starting the Hadoop cluster means starting two modules, HDFS and YARN. Note that the first time HDFS is started it must be formatted. Formatting is essentially cleanup and preparation work, because at that point HDFS does not yet physically exist.

Format command: hdfs namenode -format

Before starting, create the data directories:

cd  /export/servers/hadoop-2.7.5

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/tempDatas

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/namenodeDatas

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/namenodeDatas2

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/datanodeDatas

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/datanodeDatas2

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/nn/edits

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/snn/name

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/dfs/snn/edits
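The eight mkdir commands above can also be collapsed into one loop. This is just a convenience sketch, assuming the same /export/servers/hadoop-2.7.5 install path used throughout this article; override BASE to experiment somewhere harmless.

```shell
# Create all the data directories referenced by the config files in one loop.
# BASE defaults to this article's install path (an assumption; adjust to taste).
BASE="${BASE:-/export/servers/hadoop-2.7.5}"
for d in tempDatas namenodeDatas namenodeDatas2 datanodeDatas datanodeDatas2 \
         nn/edits snn/name dfs/snn/edits; do
  mkdir -p "$BASE/hadoopDatas/$d"
done
```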

Start commands:

cd  /export/servers/hadoop-2.7.5/

bin/hdfs namenode -format   # skip this if HDFS has already been formatted

sbin/start-dfs.sh

sbin/start-yarn.sh

sbin/mr-jobhistory-daemon.sh start historyserver

4) Web UIs:

http://192.168.254.100:50070/explorer.html#/  — browse HDFS

http://192.168.254.100:8088/cluster  — view the YARN cluster

http://192.168.254.100:19888/jobhistory  — view completed job history
