Big Data: Hadoop HDFS fully distributed setup (detailed, verified steps)

1. Server planning

This cluster is built on top of the existing pseudo-distributed installation; for that part, see the pseudo-distributed setup guide.

| Node   | NN | SNN | DN |
|--------|----|-----|----|
| node01 | √  |     |    |
| node02 |    | √   | √  |
| node03 |    |     | √  |
| node04 |    |     | √  |

2. Infrastructure

Use jps to confirm JDK 1.8 is installed, check that the network is reachable between the nodes, and configure the hosts file identically on all four machines:

vim /etc/hosts
10.0.0.11 node01
10.0.0.12 node02
10.0.0.13 node03
10.0.0.14 node04
[root@node02 ~]# jps
1504 Jps
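The four entries above must be present on every node. A small sketch of an idempotent append, so re-running the setup does not duplicate lines (HOSTS_FILE is an illustration-only variable defaulting to a scratch file; on a real node point it at /etc/hosts and run as root):

```shell
# Idempotently add the four cluster entries to a hosts file.
# HOSTS_FILE is a placeholder default for illustration; on a real
# node, set HOSTS_FILE=/etc/hosts (and run as root).
HOSTS_FILE="${HOSTS_FILE:-./hosts.sketch}"
touch "$HOSTS_FILE"
while read -r ip name; do
    # only append the entry if the hostname is not already mapped
    grep -q "[[:space:]]${name}\$" "$HOSTS_FILE" || echo "$ip $name" >> "$HOSTS_FILE"
done <<'EOF'
10.0.0.11 node01
10.0.0.12 node02
10.0.0.13 node03
10.0.0.14 node04
EOF
```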

3. From pseudo-distributed to fully distributed

Passwordless SSH

On node01:

stop-dfs.sh
scp /root/.ssh/id_dsa.pub  node02:/root/.ssh/node01.pub
scp /root/.ssh/id_dsa.pub  node03:/root/.ssh/node01.pub
scp /root/.ssh/id_dsa.pub  node04:/root/.ssh/node01.pub

On node02 through node04, repeat the same steps:

ssh localhost        # log in once so ~/.ssh exists, then exit
cd ~/.ssh
cat node01.pub >> authorized_keys
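Note that `cat node01.pub >> authorized_keys` appends blindly, so running the setup twice duplicates the key. A duplicate-safe sketch of the same step (SSH_DIR and the key content are illustration-only placeholders; on node02..node04 this would operate on /root/.ssh and the real node01.pub):

```shell
# Duplicate-safe version of "cat node01.pub >> authorized_keys".
# SSH_DIR and the key below are illustration-only stand-ins;
# on a real worker node this targets /root/.ssh and the copied key.
SSH_DIR="${SSH_DIR:-./ssh.sketch}"
mkdir -p "$SSH_DIR" && chmod 700 "$SSH_DIR"
echo "ssh-dss AAAA...placeholder root@node01" > "$SSH_DIR/node01.pub"  # stand-in key
touch "$SSH_DIR/authorized_keys" && chmod 600 "$SSH_DIR/authorized_keys"
# append only if this exact key line is not already present
grep -qxF "$(cat "$SSH_DIR/node01.pub")" "$SSH_DIR/authorized_keys" \
    || cat "$SSH_DIR/node01.pub" >> "$SSH_DIR/authorized_keys"
```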

Configuration and deployment

On node01:

cd $HADOOP_HOME/etc/hadoop
vi core-site.xml   (no changes needed from the pseudo-distributed setup)

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://node01:9000</value>
    </property>
</configuration>

vi hdfs-site.xml

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/var/bigdata/hadoop/full/dfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/var/bigdata/hadoop/full/dfs/data</value>
    </property>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>node02:50090</value>
    </property>
    <property>
        <name>dfs.namenode.checkpoint.dir</name>
        <value>/var/bigdata/hadoop/full/dfs/secondary</value>
    </property>
</configuration>

vi slaves   (one DataNode hostname per line; this file is renamed "workers" in Hadoop 3.x)
node02
node03
node04

Distribute the Hadoop installation to the other three nodes:

cd /opt
scp -r ./bigdata/  node02:`pwd`
scp -r ./bigdata/  node03:`pwd`
scp -r ./bigdata/  node04:`pwd`

4. Format and start

On node01 only:

hdfs namenode -format
start-dfs.sh
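Re-running `hdfs namenode -format` after DataNodes have registered produces a clusterID mismatch and the DataNodes refuse to start, so the format step is worth guarding. A sketch (NAME_DIR mirrors dfs.namenode.name.dir from hdfs-site.xml above; the `command -v` guards are only there so the snippet degrades to a no-op where the Hadoop CLI is not on the PATH):

```shell
# Format only when the NameNode metadata dir is empty, so a re-run
# cannot wipe an existing namespace or change the clusterID.
NAME_DIR="${NAME_DIR:-/var/bigdata/hadoop/full/dfs/name}"
if [ -n "$(ls -A "$NAME_DIR" 2>/dev/null)" ]; then
    action="skip"       # dir already holds metadata: do not reformat
else
    action="format"
fi
echo "namenode format decision: $action"
# guards make the sketch a no-op where the hadoop CLI is absent
if [ "$action" = "format" ] && command -v hdfs >/dev/null 2>&1; then
    hdfs namenode -format
fi
command -v start-dfs.sh >/dev/null 2>&1 && start-dfs.sh || true
```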

5. Verification

Open the NameNode web UI and confirm that three live DataNodes are reported:

http://node01:50070/
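The same check can be scripted from the command line (50070 is the default NameNode HTTP port in Hadoop 2.x; the snippet simply reports the UI as unreachable where the cluster is not running or node01 does not resolve):

```shell
# Probe the NameNode web UI; fails quietly when the cluster is down.
if curl -sf --max-time 5 http://node01:50070/ >/dev/null 2>&1; then
    ui_status="up"
else
    ui_status="unreachable"
fi
echo "NameNode web UI: $ui_status"
```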
