Hadoop Standalone Environment Setup

1. Prepare a server

192.168.100.100
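Because start-dfs.sh and start-yarn.sh (used in step 6) launch daemons over SSH even on a single machine, it is worth setting up passwordless SSH to the node itself first. A minimal sketch:

ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa          # key pair with no passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh localhost 'echo ok'                           # should print "ok" without a password prompt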

 

2. Install a JDK beforehand
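Hadoop 2.7.5 requires Java 7 or later. A minimal tarball-based sketch (the archive name and install path are assumptions; adjust them to your JDK):

mkdir -p /export/servers/
tar -zxvf jdk-8u241-linux-x64.tar.gz -C /export/servers/       # assumed archive name
echo 'export JAVA_HOME=/export/servers/jdk1.8.0_241' >> /etc/profile
echo 'export PATH=$PATH:$JAVA_HOME/bin' >> /etc/profile
source /etc/profile
java -version                                                  # verify the JDK is on the PATH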

 

3. Hadoop services to run (all on the single node)

NameNode            192.168.100.100

SecondaryNameNode   192.168.100.100

DataNode            192.168.100.100

ResourceManager     192.168.100.100

NodeManager         192.168.100.100
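The configuration files below address this node by the hostname node01 (an assumed hostname); if you keep that name, map it to the server's IP so the addresses resolve:

echo "192.168.100.100 node01" >> /etc/hosts       # make node01 resolve to this server
hostnamectl set-hostname node01                   # optional: set the machine's own hostname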

 

4. Download and extract Hadoop

http://archive.apache.org/dist/hadoop/common/hadoop-2.7.5/hadoop-2.7.5.tar.gz

Extract it to /export/servers/ (all paths below assume /export/servers/hadoop-2.7.5/).
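For example:

mkdir -p /export/servers
cd /export/servers
wget http://archive.apache.org/dist/hadoop/common/hadoop-2.7.5/hadoop-2.7.5.tar.gz
tar -zxvf hadoop-2.7.5.tar.gz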

 

5. Edit the configuration files

5.1  vim hadoop-2.7.5/etc/hadoop/core-site.xml

<configuration>
    <!-- Default filesystem URI (fs.defaultFS supersedes the deprecated fs.default.name) -->
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://192.168.100.100:8020</value>
    </property>
    <!-- Base directory for Hadoop's temporary files -->
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/export/servers/hadoop-2.7.5/hadoopDatas/tempDatas</value>
    </property>
    <!-- I/O buffer size in bytes -->
    <property>
        <name>io.file.buffer.size</name>
        <value>4096</value>
    </property>
    <!-- Keep trashed files for 10080 minutes (7 days) -->
    <property>
        <name>fs.trash.interval</name>
        <value>10080</value>
    </property>
</configuration>

5.2  vim hadoop-2.7.5/etc/hadoop/hdfs-site.xml

 

 

<configuration>
    <!-- SecondaryNameNode HTTP address (node01 must resolve to 192.168.100.100) -->
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node01:50090</value>
    </property>
    <!-- NameNode web UI address -->
    <property>
        <name>dfs.namenode.http-address</name>
        <value>node01:50070</value>
    </property>
    <!-- Local directories for the NameNode's fsimage -->
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/namenodeDatas,file:///export/servers/hadoop-2.7.5/hadoopDatas/namenodeDatas2</value>
    </property>
    <!-- Local directories for DataNode block storage -->
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/datanodeDatas,file:///export/servers/hadoop-2.7.5/hadoopDatas/datanodeDatas2</value>
    </property>
    <!-- NameNode edit log directory -->
    <property>
        <name>dfs.namenode.edits.dir</name>
        <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/nn/edits</value>
    </property>
    <!-- SecondaryNameNode checkpoint image and edits directories -->
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/snn/name</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.edits.dir</name>
        <value>file:///export/servers/hadoop-2.7.5/hadoopDatas/dfs/snn/edits</value>
    </property>
    <!-- Replication factor; with a single DataNode, 3 leaves blocks under-replicated (1 is typical for one node) -->
    <property>
        <name>dfs.replication</name>
        <value>3</value>
    </property>
    <!-- Disable HDFS permission checking -->
    <property>
        <name>dfs.permissions</name>
        <value>false</value>
    </property>
    <!-- Block size: 134217728 bytes = 128 MB -->
    <property>
        <name>dfs.blocksize</name>
        <value>134217728</value>
    </property>
</configuration>

 

5.3 vim hadoop-2.7.5/etc/hadoop/hadoop-env.sh

export JAVA_HOME=/path/to/jdk    # set to the JDK install path from step 2

 

5.4 vim hadoop-2.7.5/etc/hadoop/mapred-site.xml  (the distribution ships only mapred-site.xml.template; copy it to mapred-site.xml first)

<configuration>
    <!-- Run MapReduce on YARN -->
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <!-- Allow small jobs to run inside the ApplicationMaster JVM -->
    <property>
        <name>mapreduce.job.ubertask.enable</name>
        <value>true</value>
    </property>
    <!-- JobHistory server RPC and web UI addresses -->
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>node01:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>node01:19888</value>
    </property>
</configuration>

 

 

5.5 vim hadoop-2.7.5/etc/hadoop/yarn-site.xml

<configuration>
    <!-- ResourceManager host -->
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node01</value>
    </property>
    <!-- Shuffle service required by MapReduce -->
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <!-- Aggregate container logs and keep them for 604800 seconds (7 days) -->
    <property>
        <name>yarn.log-aggregation-enable</name>
        <value>true</value>
    </property>
    <property>
        <name>yarn.log-aggregation.retain-seconds</name>
        <value>604800</value>
    </property>
</configuration>

 

5.6  vim hadoop-2.7.5/etc/hadoop/mapred-env.sh

export JAVA_HOME=/path/to/jdk    # same JDK path as in hadoop-env.sh

 

5.7 vim hadoop-2.7.5/etc/hadoop/slaves

This file lists the hosts that run DataNode and NodeManager; on a single node the default entry is sufficient:

localhost

 

6. Start the services

 

 

Create the data directories (they must match the paths configured in core-site.xml and hdfs-site.xml):

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/tempDatas

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/namenodeDatas

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/namenodeDatas2

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/datanodeDatas

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/datanodeDatas2

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/nn/edits

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/snn/name

mkdir -p /export/servers/hadoop-2.7.5/hadoopDatas/dfs/snn/edits

 

Before the first start, HDFS must be formatted. Run the following from /export/servers/hadoop-2.7.5, and only once: reformatting wipes the NameNode's metadata.

bin/hdfs namenode -format

 

Start the daemons:

sbin/start-dfs.sh

sbin/start-yarn.sh

sbin/mr-jobhistory-daemon.sh start historyserver
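If everything came up, jps (shipped with the JDK) should show one JVM per service listed in step 3, plus the history server:

jps
# expected, in addition to Jps itself:
#   NameNode, DataNode, SecondaryNameNode,
#   ResourceManager, NodeManager, JobHistoryServer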

 

Verify through the web UIs:

HDFS:        http://192.168.100.100:50070

YARN:        http://192.168.100.100:8088

JobHistory:  http://192.168.100.100:19888
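As a final smoke test, a sketch using the examples jar bundled with the distribution to run a small job end to end (run from the Hadoop install directory; the jar path follows the 2.7.5 layout):

bin/hdfs dfs -mkdir -p /test                    # exercise HDFS
bin/hdfs dfs -ls /
bin/yarn jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.5.jar pi 2 5   # small MapReduce job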

 

 

 
