How to delete a non-empty directory on HDFS (Hadoop)

Delete the /hbase directory under the HDFS root so that HBase can be reinstalled cleanly:

[hadoop3@master ~]$ hdfs dfs -rm -r -f /hbase
18/08/17 14:20:20 INFO fs.TrashPolicyDefault: Namenode trash configuration: Deletion interval = 0 minutes, Emptier interval = 0 minutes.
Deleted /hbase
[hadoop3@master ~]$
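
The log line above shows a trash deletion interval of 0 minutes, so the directory was removed outright. On clusters where the HDFS trash is enabled (a non-zero fs.trash.interval), -rm only moves the directory into the user's .Trash; the standard -skipTrash flag deletes it immediately. A minimal sketch:

hdfs dfs -rm -r -f -skipTrash /hbase    // bypass the trash and delete permanently
hdfs dfs -expunge                       // force-empty the current user's trash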

Other common HDFS commands

bin/hdfs dfs -mkdir /dajiangtai          // create a new HDFS directory
bin/hdfs dfs -rm -r -f /hbase            // delete the non-empty HDFS directory /hbase
bin/hdfs dfs -rm -r -f /windows          // delete the non-empty HDFS directory /windows
bin/hdfs dfs -put djt.txt /dajiangtai    // upload a local file to HDFS
bin/hdfs dfs -cat /dajiangtai/djt.txt    // view a file on HDFS
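
A few companion commands that are often used alongside these (all standard hdfs dfs options; the paths here are just illustrative):

bin/hdfs dfs -mkdir -p /dajiangtai/data    // create nested directories in one step
bin/hdfs dfs -get /dajiangtai/djt.txt .    // download an HDFS file to the local working directory
bin/hdfs dfs -du -h /dajiangtai            // show space used, in human-readable units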

// list the directories and files under the HDFS root directory

[hadoop3@master ~]$ hdfs dfs -ls /
Found 3 items
drwxr-xr-x   - hadoop3 supergroup          0 2018-08-17 14:04 /spark
drwxr-xr-x   - hadoop3 supergroup          0 2018-08-13 11:57 /user
drwxr-xr-x   - hadoop3 supergroup          0 2018-07-13 20:03 /windows
[hadoop3@master app]$ hdfs dfs -put hellospark /spark/
[hadoop3@master app]$ hdfs dfs -ls /spark
Found 1 items
-rw-r--r--   3 hadoop3 supergroup         36 2018-08-17 14:32 /spark/hellospark
[hadoop3@master app]$ 
[hadoop3@master app]$ hdfs dfs -cat /spark/hellospark
hello spark
hello world
hello spark![hadoop3@master app]$ 
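
(The prompt runs into the last line only because hellospark has no trailing newline.) To walk a directory tree in one command rather than one level at a time, -ls accepts the standard -R flag:

hdfs dfs -ls -R /spark    // recursively list everything under /spark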

Ways to view a file on HDFS:

hdfs dfs -cat /spark/hellospark
hdfs dfs -cat hdfs://192.168.10.201/spark/hellospark
hdfs dfs -cat hdfs://192.168.10.201:8020/spark/hellospark
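
All three forms point at the same file: a bare path is resolved against the default filesystem (fs.defaultFS), and 8020 is the NameNode's default RPC port in Hadoop 2.x, so a URI without a port falls back to it. To see the default filesystem URI configured for the cluster:

hdfs getconf -confKey fs.defaultFS    // e.g. hdfs://192.168.10.201:8020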

The full execution looks like this:

[hadoop3@master app]$ hdfs dfs -cat /spark/hellospark
hello spark
hello world
hello spark!

[hadoop3@master app]$ ls
hadoop  hadoop-2.7.6.tar.gz  hadoop303  hbase  hbase-2.0.1-bin.tar.gz  hellospark  jdk  scala  scala-2.11.6.tgz  spark  spark-2.3.1-bin-hadoop2.7.tgz  zookeeper

[hadoop3@master app]$ hdfs dfs -cat hdfs://192.168.10.200/spark/hellospark
cat: Operation category READ is not supported in state standby
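
The error above means 192.168.10.200 is currently the standby NameNode of an HA pair: a standby refuses client reads and writes, which is why the same path has to be fetched through the active node (192.168.10.201 below). Assuming an HA configuration with NameNode IDs nn1 and nn2 (hypothetical IDs; check dfs.ha.namenodes.<nameservice> for the real ones), the active node can be identified with:

hdfs haadmin -getServiceState nn1    // prints "active" or "standby"
hdfs haadmin -getServiceState nn2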

[hadoop3@master app]$ hdfs dfs -cat hdfs://192.168.10.201/spark/hellospark
hello spark
hello world
hello spark!

[hadoop3@master app]$ hdfs dfs -cat hdfs://192.168.10.201:8020/spark/hellospark
hello spark
hello world
hello spark!

[hadoop3@master app]$ hdfs dfs -cat hdfs://192.168.10.201:50070/spark/hellospark
cat: Failed on local exception: com.google.protobuf.InvalidProtocolBufferException: Protocol message end-group tag did not match expected tag.; Host Details : local host is: "master/192.168.10.200"; destination host is: "slave1":50070; 
[hadoop3@master app]$ 
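
The last command fails because 50070 is the NameNode's HTTP web UI port, not an RPC port: the client sends a protobuf RPC and gets an HTTP response back, hence the protocol-mismatch exception. To read the file over HTTP instead, WebHDFS works on that port, assuming dfs.webhdfs.enabled is true (the default in recent releases):

curl -sL "http://192.168.10.201:50070/webhdfs/v1/spark/hellospark?op=OPEN"    // -L follows the redirect to a datanode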
