Setting Up a Hadoop Platform

Download the Image

Install the Virtual Machine

Use VMware Workstation.
Open the virtual machine, boot into the system, and configure the network.

Extract the JDK

1. sudo apt update
2. sudo apt upgrade
3. sudo apt autoremove
4. ftp 10.13.32.2
  • name: ftpuser
  • password: 123456
5. ls
  • get jdk-15.0.2_linux-x64_bin.tar.gz
6. Quit the FTP session (bye)
7. Create a new directory: sudo mkdir /usr/lib/jvm
8. Extract the JDK
  sudo tar -zxvf jdk-15.0.2_linux-x64_bin.tar.gz -C /usr/lib/jvm
9. Install vim
  sudo apt install vim
10. cd to the home directory and edit the .bashrc file
  vim .bashrc
  Add the following two lines:
  export JAVA_HOME=/usr/lib/jvm/jdk-15.0.2
  export PATH=${JAVA_HOME}/bin:$PATH
  Save and quit with :wq, then reload the file:
  source .bashrc
11. Verify that the JDK is installed
  java -version
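
If the JDK was unpacked and .bashrc was reloaded correctly, a quick check should look roughly like this (a sketch; the exact build string printed by java -version may differ):

  echo $JAVA_HOME     # expected: /usr/lib/jvm/jdk-15.0.2
  which java          # expected: /usr/lib/jvm/jdk-15.0.2/bin/java
  java -version       # expected to report version 15.0.2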

Extract Hadoop

  • Use ftp to fetch the Hadoop archive
    ftp 10.13.32.2
    username: ftpuser
    password: 123456
    get hadoop-3.2.2.tar.gz
  • Extract it: sudo tar -zxvf hadoop-3.2.2.tar.gz -C /usr/local
  • Give ownership to your user (the user name you chose when creating the virtual machine)
    sudo chown -R h123 /usr/local/hadoop-3.2.2
  • Check the Hadoop configuration directory
    ls /usr/local/hadoop-3.2.2/etc/hadoop
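
As a quick sanity check (not part of the original notes), the unpacked distribution can be exercised before any configuration:

  ls /usr/local/hadoop-3.2.2                  # bin, etc, sbin, share, ... should be listed
  /usr/local/hadoop-3.2.2/bin/hadoop version  # should print Hadoop 3.2.2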

Passwordless SSH Login

  • sudo apt install ssh

  • ssh localhost

  • ssh-keygen

  • ls .ssh

  • cat .ssh/id_rsa.pub >>.ssh/authorized_keys
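
Putting the steps above together, a minimal end-to-end sketch of the passwordless login setup looks like this (the chmod line is an addition here; some sshd configurations require it, others already have the right permissions):

  sudo apt install ssh
  ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa        # generate a key pair without a passphrase
  cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
  chmod 600 ~/.ssh/authorized_keys                # assumed permission fix, not in the original notes
  ssh localhost                                   # should now log in without prompting for a password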

Configuration

  • vim /usr/local/hadoop-3.2.2/etc/hadoop/hadoop-env.sh and set:
    export JAVA_HOME=/usr/lib/jvm/jdk-15.0.2
  • vim /usr/local/hadoop-3.2.2/etc/hadoop/core-site.xml

    <configuration>
        <property>
            <name>hadoop.tmp.dir</name>
            <value>file:/usr/local/hadoop-3.2.2/tmp</value>
        </property>
        <property>
            <name>fs.defaultFS</name>
            <value>hdfs://localhost:9000</value>
        </property>
    </configuration>

  • vim /usr/local/hadoop-3.2.2/etc/hadoop/hdfs-site.xml

    <configuration>
        <property>
            <name>dfs.namenode.name.dir</name>
            <value>file:/usr/local/hadoop-3.2.2/hadoop_data/hdfs/namenode</value>
        </property>
        <property>
            <name>dfs.datanode.data.dir</name>
            <value>file:/usr/local/hadoop-3.2.2/hadoop_data/hdfs/datanode</value>
        </property>
    </configuration>

  • vim /usr/local/hadoop-3.2.2/etc/hadoop/mapred-site.xml

    <configuration>
        <property>
            <name>mapreduce.framework.name</name>
            <value>yarn</value>
        </property>
        <property>
            <name>mapreduce.application.classpath</name>
            <value>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*:$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*</value>
        </property>
    </configuration>

  • vim /usr/local/hadoop-3.2.2/etc/hadoop/yarn-site.xml

    <configuration>
        <property>
            <name>yarn.nodemanager.aux-services</name>
            <value>mapreduce_shuffle</value>
        </property>
        <property>
            <name>yarn.nodemanager.env-whitelist</name>
            <value>JAVA_HOME,HADOOP_COMMON_HOME,HADOOP_HDFS_HOME,HADOOP_CONF_DIR,CLASSPATH_PREPEND_DISTCACHE,HADOOP_YARN_HOME,HADOOP_MAPRED_HOME</value>
        </property>
    </configuration>
  • vim .bashrc (Hadoop environment configuration)
    export HADOOP_HOME=/usr/local/hadoop-3.2.2
    export HADOOP_MAPRED_HOME=$HADOOP_HOME
    export CLASSPATH=$($HADOOP_HOME/bin/hadoop classpath):$CLASSPATH
    export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
    export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
    export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
  • source .bashrc
  • Format the NameNode: hdfs namenode -format (when the command finishes it should report "successfully formatted"; if it does not, one of the configuration files above is wrong)
  • Start HDFS
    start-dfs.sh (the matching command to stop the service is stop-dfs.sh)
  • Run jps; three Hadoop processes should be listed (NameNode, DataNode, SecondaryNameNode)
    If the DataNode process does not appear, the NameNode may have been formatted more than once.
    Remove the data directory with rm -r /usr/local/hadoop-3.2.2/hadoop_data, then format again.
  • Start YARN: start-yarn.sh
    jps should now also list ResourceManager and NodeManager.
  • The Hadoop platform is now set up; run a test job.
    cd to the home directory and run:
    hadoop jar /usr/local/hadoop-3.2.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.2.jar pi 5 10
    If the job fails with "Job failed", do the following:
    Run hadoop classpath on the command line and copy its output.
    Add that output as the value of the yarn.application.classpath property in yarn-site.xml: vim /usr/local/hadoop-3.2.2/etc/hadoop/yarn-site.xml
    (the value below is an example from a different install path; use the output of hadoop classpath on your own machine)
    <property>
      <name>yarn.application.classpath</name>
      <value>/opt/module/hadoop/etc/hadoop:/opt/module/hadoop/share/hadoop/common/lib/*:/opt/module/hadoop/share/hadoop/common/*:/opt/module/hadoop/share/hadoop/hdfs:/opt/module/hadoop/share/hadoop/hdfs/lib/*:/opt/module/hadoop/share/hadoop/hdfs/*:/opt/module/hadoop/share/hadoop/mapreduce/lib/*:/opt/module/hadoop/share/hadoop/mapreduce/*:/opt/module/hadoop/share/hadoop/yarn:/opt/module/hadoop/share/hadoop/yarn/lib/*:/opt/module/hadoop/share/hadoop/yarn/*</value>
    </property>
  Restart the services and the job should succeed; a short verification sketch follows below.
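
As a final check (a sketch assuming all of the daemons above started cleanly; the /user/h123 directory is only an example using the user created earlier), the pseudo-distributed cluster can be verified like this:

  jps                            # NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager should all be listed
  hdfs dfsadmin -report          # should report one live DataNode
  hdfs dfs -mkdir -p /user/h123  # example HDFS directory
  hdfs dfs -ls /user
  hadoop jar /usr/local/hadoop-3.2.2/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.2.2.jar pi 5 10

The NameNode web UI (http://localhost:9870) and the ResourceManager web UI (http://localhost:8088) can also be opened in a browser to confirm that HDFS and YARN are up.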
