Environment Setup (03): HDFS Core Configuration Files

1. Extract Hadoop

tar -zxvf hadoop-2.6.0-cdh5.7.0.tar.gz -C ~/app/

2. Configuration file location

/home/hadoop/app/hadoop-2.6.0-cdh5.7.0/etc/hadoop

3. hadoop-env.sh

export JAVA_HOME=/home/hadoop/app/jdk1.7.0_51
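Before going further, it is worth checking that JAVA_HOME actually points at a JDK, since the daemons will not start otherwise. A quick check, assuming the tutorial's install path (adjust to your own):

```shell
# Sanity check: JAVA_HOME must point at a real JDK.
# The path below follows this tutorial's layout; adjust it to your install.
JAVA_HOME=/home/hadoop/app/jdk1.7.0_51
if [ -x "$JAVA_HOME/bin/java" ]; then
    "$JAVA_HOME/bin/java" -version
else
    echo "Invalid JAVA_HOME: $JAVA_HOME"
fi
```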

4. core-site.xml

  • Create /home/hadoop/app/tmp in advance

    
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://hadoop001:8020</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/hadoop/app/tmp</value>
    </property>
</configuration>
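Since the directory pointed to by hadoop.tmp.dir is not created automatically, it should exist before the NameNode is formatted. A minimal sketch, assuming the path set in the core-site.xml above:

```shell
# hadoop.tmp.dir must exist before formatting the NameNode;
# the path matches the value configured in core-site.xml.
mkdir -p /home/hadoop/app/tmp
ls -ld /home/hadoop/app/tmp
```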
    

5. hdfs-site.xml


    
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>

6. Format HDFS

  • Do this only once;
bin/hdfs namenode -format
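One way to confirm the format succeeded is to look for the NameNode metadata directory that the format step creates under hadoop.tmp.dir; the path below assumes the core-site.xml above:

```shell
# A successful format creates dfs/name/current under hadoop.tmp.dir;
# the VERSION file there records clusterID, namespaceID, etc.
NAME_DIR=/home/hadoop/app/tmp/dfs/name/current
if [ -f "$NAME_DIR/VERSION" ]; then
    cat "$NAME_DIR/VERSION"
else
    echo "NameNode has not been formatted yet"
fi
```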

7. Start HDFS

[hadoop@hadoop001 sbin]$ ./start-dfs.sh 
18/07/10 05:42:56 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [hadoop001]
hadoop001: starting namenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-namenode-hadoop001.out
localhost: starting datanode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-datanode-hadoop001.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /home/hadoop/app/hadoop-2.6.0-cdh5.7.0/logs/hadoop-hadoop-secondarynamenode-hadoop001.out
18/07/10 05:43:13 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

8. Check the startup result

[hadoop@hadoop001 sbin]$ jps
154 NameNode
272 DataNode
429 SecondaryNameNode
642 Jps

Note:
The HDFS web UI is served on port 50070. Commit the current container as a new image, map that port out when you run it, and you can then reach the UI from the host at localhost:50070.
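The commit-and-remap step above can be sketched with the following docker commands; the container and image names here are hypothetical, so substitute your own:

```shell
# Save the configured container as an image (names are hypothetical),
# then re-run it with the HDFS web UI port published to the host.
docker commit hadoop001 hadoop-pseudo:v1
docker run -d --name hadoop001-hdfs -p 50070:50070 hadoop-pseudo:v1
# The NameNode web UI should now be reachable at http://localhost:50070
```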
