Setting up a distributed Hadoop environment with hadoop-3.0.0.tar.gz

1. Use three CentOS 7 virtual machines

Disable the firewall on every node:

# systemctl stop firewalld
# systemctl disable firewalld

Hostnames and IP addresses:

node4111:172.16.18.59
node4113:172.16.18.62
node4114:172.16.18.63

Edit the hosts file on every host:
vim /etc/hosts

172.16.18.59 node4111
172.16.18.62 node4113
172.16.18.63 node4114

1.1 Hadoop role layout

node4111: NameNode / DataNode    ResourceManager / NodeManager
node4113: DataNode               NodeManager
node4114: DataNode               NodeManager

2. Set up passwordless SSH

Run on every host, pressing Enter at every prompt;
this generates a hidden .ssh directory under /root/:

ssh-keygen -t rsa

Then on node4111, run:

ssh-copy-id  -i /root/.ssh/id_rsa.pub node4111
ssh-copy-id  -i /root/.ssh/id_rsa.pub node4113
ssh-copy-id  -i /root/.ssh/id_rsa.pub node4114

Check the resulting file on each host:

cat /root/.ssh/authorized_keys
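To confirm that key-based login really works from node4111 to every node, a small check loop can help. This is a sketch: `check_ssh` is a hypothetical helper, and `BatchMode=yes` makes ssh fail instead of falling back to a password prompt.

```shell
# Hypothetical helper: succeed only if key-based login works without a prompt
check_ssh() {
  local node=$1
  if ssh -o BatchMode=yes -o ConnectTimeout=5 "root@$node" true 2>/dev/null; then
    echo "$node: OK"
  else
    echo "$node: FAILED"
  fi
}

for n in node4111 node4113 node4114; do
  check_ssh "$n"
done
```

Any `FAILED` line means the corresponding `ssh-copy-id` step above needs to be repeated for that host.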

3. Download the JDK and Hadoop

JDK download page

Hadoop download page

Software versions installed:

hadoop-3.0.0.tar.gz
jdk-10_linux-x64_bin.tar.gz

Create the directory had under /root/ and unpack both archives into it:

tar -zxvf hadoop-3.0.0.tar.gz -C had
tar -zxvf jdk-10_linux-x64_bin.tar.gz -C had

Configure the environment variables in /root/.bash_profile:

export JAVA_HOME=/root/had/jdk-10
export PATH=$JAVA_HOME/bin:$PATH

export HADOOP_HOME=/root/had/hadoop-3.0.0
export PATH=$HADOOP_HOME/bin:$PATH
export HADOOP_HDFS_HOME=/root/had/hadoop-3.0.0
export HADOOP_CONF_DIR=/root/had/hadoop-3.0.0/etc/hadoop

source ~/.bash_profile

Test:
# java    (should print usage output)
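A common slip in this step is writing `$HADOOP_HOME:/bin` instead of `$HADOOP_HOME/bin`, which silently leaves the Hadoop binaries off the search path. A quick sketch to verify both bin directories really ended up on PATH (`on_path` is a hypothetical helper):

```shell
# Hypothetical helper: check whether a directory is one component of $PATH
on_path() {
  case ":$PATH:" in
    *":$1:"*) return 0 ;;
    *)        return 1 ;;
  esac
}

on_path "$JAVA_HOME/bin"   && echo "JDK bin on PATH"    || echo "JDK bin missing"
on_path "$HADOOP_HOME/bin" && echo "Hadoop bin on PATH" || echo "Hadoop bin missing"
```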

4. Hadoop configuration and distribution

Hadoop cluster installation.
Hadoop was added to the environment variables above:

export HADOOP_HOME=/root/had/hadoop-3.0.0
export PATH=$HADOOP_HOME/bin:$PATH
source ~/.bash_profile

The configuration files are under:
/root/had/hadoop-3.0.0/etc/hadoop

1) vim hadoop-env.sh

Set:
export JAVA_HOME=/root/had/jdk-10

2) vim core-site.xml

<configuration>
        <property>
                <name>fs.defaultFS</name>
                <value>hdfs://node4111:9000</value>
        </property>
</configuration>

3) vim hdfs-site.xml
Create the directory tmp under /root/had.

<configuration>
    <property>
         <name>dfs.replication</name>
         <value>2</value>
    </property>

    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/root/had/tmp/dfs/name</value>
    </property>

    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/root/had/tmp/dfs/data</value>
    </property>
</configuration>
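The guide creates /root/had/tmp up front; the name/data subdirectories referenced in the config above can be pre-created the same way on each node. A sketch, with `make_dfs_dirs` as a hypothetical helper (the NameNode format in step 6 will also populate the name directory itself):

```shell
# Hypothetical helper: create the NameNode and DataNode storage dirs under a base
make_dfs_dirs() {
  mkdir -p "$1/dfs/name" "$1/dfs/data"
}

# As used by the config above:
# make_dfs_dirs /root/had/tmp
```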

4) vim yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>node4111</value>
    </property>

    <property>
        <name>yarn.nodemanager.env-whitelist</name>
        <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
    </property>

    <property>
        <name>yarn.nodemanager.vmem-check-enabled</name>
        <value>false</value>
    </property>

    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>49152</value>
    </property>

    <property>
        <name>yarn.scheduler.maximum-allocation-mb</name>
        <value>49152</value>
    </property>
</configuration>

5) vim mapred-site.xml

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>

    <property>
        <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
</configuration>

6) vim workers

node4111
node4113
node4114

5. Distribute the installation to the other two nodes

scp -r /root/had root@node4113:/root/
scp -r /root/had root@node4114:/root/
scp -r /root/.bash_profile root@node4113:/root/  
scp -r /root/.bash_profile root@node4114:/root/

On both nodes, run:

source .bash_profile
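The four scp commands above can be folded into one loop run from node4111. A sketch; `distribute` is a hypothetical helper, not part of Hadoop:

```shell
# Hypothetical helper: copy the install tree and the profile to each worker
distribute() {
  local node
  for node in "$@"; do
    scp -r /root/had "root@$node:/root/"
    scp /root/.bash_profile "root@$node:/root/"
  done
}

# distribute node4113 node4114
```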

6. HDFS NameNode format

Run only on node4111.

Make sure the directory /root/had/tmp is empty, then from
/root/had/hadoop-3.0.0/bin
run:
./hdfs namenode -format
The log should contain:
Storage directory /root/had/tmp/dfs/name has been successfully formatted

7. Start Hadoop

In /root/had/hadoop-3.0.0/etc/hadoop:

vim hadoop-env.sh

export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_JOURNALNODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root

export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root

Then from /root/had/hadoop-3.0.0/sbin run:
./start-all.sh

8. Verification

Run jps on each node to check the running daemons:

# jps

8065 DataNode
7908 NameNode
17101 Jps
16159 SecondaryNameNode
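The role layout from section 1.1 can also be checked programmatically by grepping the jps output for each expected daemon. A sketch; `has_daemons` is a hypothetical helper:

```shell
# Hypothetical helper: verify every named daemon appears in a jps listing
has_daemons() {
  local out=$1; shift
  local d
  for d in "$@"; do
    echo "$out" | grep -qw "$d" || { echo "missing: $d"; return 1; }
  done
  echo "all expected daemons running"
}

# On node4111:
# has_daemons "$(jps)" NameNode DataNode ResourceManager NodeManager
# On node4113 / node4114:
# has_daemons "$(jps)" DataNode NodeManager
```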

9. Hadoop web UI

Browse the NameNode web interface; by default it is served on node4111:
http://172.16.18.59:9870
# pwd
/root/had/hadoop-3.0.0/bin

# ./hdfs dfs -mkdir /user

# ./hdfs dfs -mkdir /input
# ./hdfs dfs -put /root/had/hadoop-3.0.0/etc/hadoop/*.xml /input

# ./hdfs dfs -ls /input
hdfs dfs -cat
hdfs dfs -text

