192.168.113.101 master
192.168.113.102 slaver1
192.168.113.103 slaver2
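These three mappings belong in /etc/hosts on every node. A small sketch that appends them; TARGET is a hypothetical variable so the script can be rehearsed against a scratch hosts.local file — point it at /etc/hosts on the real machines:

```shell
#!/bin/sh
# Append the cluster host mappings. TARGET is a stand-in so the script
# can be dry-run against a scratch file; use TARGET=/etc/hosts for real.
TARGET="${TARGET:-hosts.local}"
cat >> "$TARGET" <<'EOF'
192.168.113.101 master
192.168.113.102 slaver1
192.168.113.103 slaver2
EOF
echo "wrote host mappings to $TARGET"
```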
Upload the package to the target location:
cd /opt/
Extract it into the current directory:
tar -xvf hadoop-2.6.0-cdh5.16.1.tar.gz
This produces the directory:
hadoop-2.6.0-cdh5.16.1
Edit the configuration files:
cd /opt/hadoop-2.6.0-cdh5.16.1/etc/hadoop/
Configuration files that need changes:
hadoop-env.sh
yarn-env.sh
core-site.xml
hdfs-site.xml
mapred-site.xml
yarn-site.xml
slaves
In hadoop-env.sh:
export JAVA_HOME=/your/jdk/path/
In yarn-env.sh:
export JAVA_HOME=/your/jdk/path/
export HADOOP_YARN_USER=${HADOOP_YARN_USER:-yarn}
export YARN_CONF_DIR="/opt/hadoop-2.6.0-cdh5.16.1/etc/hadoop/"
In core-site.xml, inside <configuration>:
<property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9000</value>
</property>
<property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>file:/opt/hadoop-2.6.0-cdh5.16.1/hadoopData/tmpdir</value>
</property>
<property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>*</value>
</property>
<property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
</property>
Note: create the data directory first:
mkdir -p /opt/hadoop-2.6.0-cdh5.16.1/hadoopData/tmpdir
In hdfs-site.xml, inside <configuration>:
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:9001</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/opt/hadoop-2.6.0-cdh5.16.1/hadoopData/dfs/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/opt/hadoop-2.6.0-cdh5.16.1/hadoopData/dfs/data</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>
<property>
    <name>dfs.web.ugi</name>
    <value>supergroup</value>
</property>
Note: with only two DataNodes, a replication factor of 3 leaves every block under-replicated; 2 would match this cluster.
Note: create the data directories first:
mkdir -p /opt/hadoop-2.6.0-cdh5.16.1/hadoopData/dfs/name
mkdir -p /opt/hadoop-2.6.0-cdh5.16.1/hadoopData/dfs/data
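All three local directories referenced by core-site.xml and hdfs-site.xml can be created in one go. BASE is a hypothetical variable defaulting to the install root used above; override it to rehearse elsewhere:

```shell
#!/bin/sh
# Create the tmp, NameNode, and DataNode directories under one base path.
BASE="${BASE:-/opt/hadoop-2.6.0-cdh5.16.1}"
mkdir -p "$BASE/hadoopData/tmpdir" \
         "$BASE/hadoopData/dfs/name" \
         "$BASE/hadoopData/dfs/data"
ls "$BASE/hadoopData"
```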
In mapred-site.xml, inside <configuration> (if the distribution ships only mapred-site.xml.template, copy it to mapred-site.xml first):
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
</property>
In yarn-site.xml, inside <configuration>:
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
</property>
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
</property>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
</property>
<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
</property>
Configure the DataNode (data storage) nodes:
192.168.113.102 slaver1
192.168.113.103 slaver2
In the slaves file, list the worker hostnames, one per line:
slaver1
slaver2
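The slaves file can also be generated rather than edited by hand. A sketch; CONF is a hypothetical variable that defaults to the current directory for a dry run — point it at /opt/hadoop-2.6.0-cdh5.16.1/etc/hadoop on the master:

```shell
#!/bin/sh
# Write the worker hostnames into the slaves file, one per line.
CONF="${CONF:-.}"   # set CONF to the real etc/hadoop directory in production
printf '%s\n' slaver1 slaver2 > "$CONF/slaves"
cat "$CONF/slaves"
```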
Configure the environment variables:
vim /etc/profile
export HADOOP_HOME=/opt/hadoop-2.6.0-cdh5.16.1/
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
Copy the profile to the worker nodes:
scp /etc/profile root@slaver1:/etc/profile
scp /etc/profile root@slaver2:/etc/profile
Apply it on every node:
. /etc/profile
Copy the Hadoop installation to the worker nodes.
On the master node, run:
scp -r /opt/hadoop-2.6.0-cdh5.16.1 root@slaver1:/opt/hadoop-2.6.0-cdh5.16.1/
scp -r /opt/hadoop-2.6.0-cdh5.16.1 root@slaver2:/opt/hadoop-2.6.0-cdh5.16.1/
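The per-host copies above can be wrapped in one loop over the workers. A sketch assuming passwordless root SSH; RUN is a hypothetical dry-run switch that defaults to echo, so the commands are only printed — set RUN='' to actually execute them:

```shell
#!/bin/sh
# Preview (RUN=echo) or run (RUN='') the distribution to each worker.
RUN="${RUN:-echo}"
for host in slaver1 slaver2; do
  $RUN scp -r /opt/hadoop-2.6.0-cdh5.16.1 "root@$host:/opt/"
  $RUN scp /etc/profile "root@$host:/etc/profile"
done
```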
Format the NameNode on the master node:
hadoop namenode -format
(hdfs namenode -format is the non-deprecated equivalent.) The message "successfully formatted" in the output indicates success.
Start Hadoop:
start-all.sh
(start-all.sh is deprecated; running start-dfs.sh followed by start-yarn.sh does the same.)
Check the running processes:
jps
On the master node:
NameNode
SecondaryNameNode
ResourceManager
On the worker nodes:
DataNode
NodeManager
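The checklist above can be automated per node. A sketch; EXPECTED is a hypothetical variable listing the daemons for this node's role (swap in DataNode NodeManager on workers), and jps ships with the JDK:

```shell
#!/bin/sh
# Compare jps output against the daemons expected on this node.
# EXPECTED is a stand-in: set it per node role before running.
EXPECTED="${EXPECTED:-NameNode SecondaryNameNode ResourceManager}"
PROCS="$(jps 2>/dev/null || true)"
MISSING=""
for p in $EXPECTED; do
  if printf '%s\n' "$PROCS" | grep -qw "$p"; then
    echo "OK: $p"
  else
    MISSING="$MISSING $p"
  fi
done
[ -z "$MISSING" ] && echo "all daemons up" || echo "missing:$MISSING"
```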
If SSH listens on a non-default port, tell Hadoop which port to use:
vim /opt/hadoop-2.6.0-cdh5.16.1/etc/hadoop/hadoop-env.sh
Append at the end (substitute your own port number):
export HADOOP_SSH_OPTS="-p <your SSH port>"