Newbie asking for help

This is from a Linux lab exercise we have just started.

[root@hadoop01 hadoop]# sbin/start-dfs.sh
Starting namenodes on [hadoop01]
hadoop01: starting namenode, logging to /usr/local/hadoop/logs/hadoop-root-namenode-hadoop01.out
localhost: mv: cannot stat '/usr/local/hadoop/logs/hadoop-root-datanode-hadoop01.out.2': No such file or directory
localhost: mv: cannot stat '/usr/local/hadoop/logs/hadoop-root-datanode-hadoop01.out.1': No such file or directory
hadoop01: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-hadoop01.out
localhost: mv: cannot stat '/usr/local/hadoop/logs/hadoop-root-datanode-hadoop01.out': No such file or directory
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-hadoop01.out
hadoop03: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-hadoop03.out
hadoop02: starting datanode, logging to /usr/local/hadoop/logs/hadoop-root-datanode-hadoop02.out
localhost: ulimit -a for user root
localhost: core file size (blocks, -c) 0
localhost: data seg size (kbytes, -d) unlimited
localhost: scheduling priority (-e) 0
localhost: file size (blocks, -f) unlimited
localhost: pending signals (-i) 7183
localhost: max locked memory (kbytes, -l) 64
localhost: max memory size (kbytes, -m) unlimited
localhost: open files (-n) 1024
localhost: pipe size (512 bytes, -p) 8
[root@hadoop01 hadoop]# sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-root-resourcemanager-hadoop01.out
hadoop02: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-hadoop02.out
hadoop03: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-hadoop03.out
localhost: mv: cannot stat '/usr/local/hadoop/logs/yarn-root-nodemanager-hadoop01.out.4': No such file or directory
hadoop01: mv: cannot stat '/usr/local/hadoop/logs/yarn-root-nodemanager-hadoop01.out.3': No such file or directory
localhost: mv: cannot stat '/usr/local/hadoop/logs/yarn-root-nodemanager-hadoop01.out.2': No such file or directory
hadoop01: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-hadoop01.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-root-nodemanager-hadoop01.out
localhost: ulimit -a
localhost: core file size (blocks, -c) 0
localhost: data seg size (kbytes, -d) unlimited
localhost: scheduling priority (-e) 0
localhost: file size (blocks, -f) unlimited
localhost: pending signals (-i) 7183
localhost: max locked memory (kbytes, -l) 64
localhost: max memory size (kbytes, -m) unlimited
localhost: open files (-n) 1024
localhost: pipe size (512 bytes, -p) 8
[root@hadoop01 hadoop]# jps
8352 Jps
8115 NodeManager
7545 NameNode
7678 DataNode
7966 ResourceManager
8110 NodeManager
[root@hadoop01 hadoop]# hadoop dfsadmin -report
DEPRECATED: Use of this script to execute hdfs command is deprecated.
Instead use the hdfs command for it.

Configured Capacity: 54716792832 (50.96 GB)
Present Capacity: 48401537700 (45.08 GB)
DFS Remaining: 48401035264 (45.08 GB)
DFS Used: 502436 (490.66 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0
Is this startup error caused by the datanode directory? (See the quick config check after the report below.)

Live datanodes (3):

Name: 172.16.248.203:50010 (hadoop03)
Hostname: hadoop03
Decommission Status : Normal
Configured Capacity: 18238930944 (16.99 GB)
DFS Used: 161106 (157.33 KB)
Non DFS Used: 2032806574 (1.89 GB)
DFS Remaining: 16205963264 (15.09 GB)
DFS Used%: 0.00%
DFS Remaining%: 88.85%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Apr 20 04:44:17 CST 2020

Name: 172.16.248.201:50010 (hadoop01)
Hostname: hadoop01
Decommission Status : Normal
Configured Capacity: 18238930944 (16.99 GB)
DFS Used: 180224 (176 KB)
Non DFS Used: 2249555968 (2.10 GB)
DFS Remaining: 15989194752 (14.89 GB)
DFS Used%: 0.00%
DFS Remaining%: 87.67%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Apr 20 04:44:15 CST 2020

Name: 172.16.248.202:50010 (hadoop02)
Hostname: hadoop02
Decommission Status : Normal
Configured Capacity: 18238930944 (16.99 GB)
DFS Used: 161106 (157.33 KB)
Non DFS Used: 2032892590 (1.89 GB)
DFS Remaining: 16205877248 (15.09 GB)
DFS Used%: 0.00%
DFS Remaining%: 88.85%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Mon Apr 20 04:44:17 CST 2020
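The mv errors above appear to come from the .out log rotation in the start scripts: both localhost and hadoop01 start a DataNode (and later a NodeManager) on this machine and rotate the same log file, which suggests the slaves file lists this node twice. A quick way to check that and the DataNode storage directory, assuming a standard Hadoop 2.x layout under /usr/local/hadoop (the paths and property names below are the usual defaults, not taken from this cluster's actual config):

cat /usr/local/hadoop/etc/hadoop/slaves        # if it lists both "localhost" and "hadoop01", the local DataNode is started twice
grep -A 2 dfs.datanode.data.dir /usr/local/hadoop/etc/hadoop/hdfs-site.xml   # where DataNodes store blocks
# if that property is unset, blocks go under hadoop.tmp.dir (typically /tmp/hadoop-root when running as root)
tail -n 50 /usr/local/hadoop/logs/hadoop-root-datanode-hadoop01.log          # the .log file (not .out) holds the real startup errors, if any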

The main problem is that when I upload the test.txt file I created to HDFS, it ends up with a size of 0.
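For reference, a minimal way to reproduce the upload and check the reported size, assuming the file is created locally on hadoop01 and /user/root is the target directory on HDFS (the target path is only an example):

echo "hello hdfs" > test.txt                  # create a small, non-empty local file
hdfs dfs -mkdir -p /user/root                 # make sure the target directory exists
hdfs dfs -put -f test.txt /user/root/         # upload, overwriting any earlier attempt
hdfs dfs -ls /user/root/test.txt              # the size column should match the local file, not 0
hdfs dfs -cat /user/root/test.txt             # should print "hello hdfs"

If -put fails partway (for example with a "could only be replicated to 0 nodes" error), the 0-byte entry left on HDFS is typically just the empty file created before the write failed; the DataNode .log files mentioned above are the first place to look.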
