The basic syntax is hadoop fs <specific command> or hdfs dfs <specific command>.
The hadoop fs form is generally preferred because it is short, easy to remember, and easy to understand.
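The two forms are interchangeable. As a quick sketch (assuming the cluster is already running), both of the following commands list the HDFS root directory:
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -ls /
[moxi@hadoop102 hadoop-3.1.3]$ hdfs dfs -ls /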
[moxi@hadoop102 hadoop-3.1.3]$ bin/hadoop fs
After running this, the following usage listing is displayed:
[-appendToFile <localsrc> ... <dst>]
[-cat [-ignoreCrc] <src> ...]
[-chgrp [-R] GROUP PATH...]
[-chmod [-R] <MODE[,MODE]... | OCTALMODE> PATH...]
[-chown [-R] [OWNER][:[GROUP]] PATH...]
[-copyFromLocal [-f] [-p] <localsrc> ... <dst>]
[-copyToLocal [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-count [-q] <path> ...]
[-cp [-f] [-p] <src> ... <dst>]
[-df [-h] [<path> ...]]
[-du [-s] [-h] <path> ...]
[-get [-p] [-ignoreCrc] [-crc] <src> ... <localdst>]
[-getmerge [-nl] <src> <localdst>]
[-help [cmd ...]]
[-ls [-d] [-h] [-R] [<path> ...]]
[-mkdir [-p] <path> ...]
[-moveFromLocal <localsrc> ... <dst>]
[-moveToLocal <src> <localdst>]
[-mv <src> ... <dst>]
[-put [-f] [-p] <localsrc> ... <dst>]
[-rm [-f] [-r|-R] [-skipTrash] <src> ...]
[-rmdir [--ignore-fail-on-non-empty] <dir> ...]
(1) Start the Hadoop cluster
[moxi@hadoop102 hadoop-3.1.3]$ sbin/start-dfs.sh
[moxi@hadoop103 hadoop-3.1.3]$ sbin/start-yarn.sh
(2) Using command help
For example, you may find the command you need in the usage listing above but not know how to use it; in that case, ask for its help text:
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -help mkdir
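If you only need the one-line syntax summary rather than the full description, hadoop fs also accepts -usage (a minor convenience, not shown in the listing above):
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -usage mkdir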
An upload copies a local file to HDFS; the file can be uploaded either by moving (cut) or by copying, so the approach is flexible.
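The examples below upload into the /sanguo directory, which must exist first; create it as a preparatory step (implied by the examples):
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -mkdir /sanguo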
(1) -moveFromLocal: cut a file from the local filesystem to HDFS
[moxi@hadoop102 hadoop-3.1.3]$ vim shuguo.txt
Enter the following content:
shuguo
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -moveFromLocal ./shuguo.txt /sanguo
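Because -moveFromLocal cuts the file, the local copy should be gone after the upload. A quick verification sketch:
[moxi@hadoop102 hadoop-3.1.3]$ ls shuguo.txt          # should report: No such file or directory
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -ls /sanguo  # the file now lives in HDFS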
(2) -copyFromLocal: copy a file from the local filesystem to HDFS
[moxi@hadoop102 hadoop-3.1.3]$ vim weiguo.txt
Enter the following content:
weiguo
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -copyFromLocal weiguo.txt /sanguo
(3) -put: equivalent to -copyFromLocal; commonly used in production because it is shorter
[moxi@hadoop102 hadoop-3.1.3]$ vim wuguo.txt
Enter the following content:
wuguo
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -put ./wuguo.txt /sanguo
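By default -put refuses to overwrite a file that already exists in HDFS; per the usage listing above, -f forces the overwrite. For example, re-running the same upload:
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -put -f ./wuguo.txt /sanguo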
(4) -appendToFile: append a local file to the end of a file that already exists in HDFS
[moxi@hadoop102 hadoop-3.1.3]$ vim liubei.txt
Enter the following content:
liubei
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -appendToFile liubei.txt /sanguo/shuguo.txt
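After the append, the target file contains both lines, which you can confirm with -cat (covered later in this section):
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -cat /sanguo/shuguo.txt
shuguo
liubei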
A download copies a file from HDFS to the local filesystem.
(1) -copyToLocal: copy from HDFS to the local filesystem
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -copyToLocal /sanguo/shuguo.txt ./
(2) -get: equivalent to -copyToLocal; in production, get is the more common habit
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -get /sanguo/shuguo.txt ./
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -get /sanguo/shuguo.txt ./shuguo2.txt
The second form renames the file to shuguo2.txt as it is downloaded.
(3) Downloading to a Windows machine
sz <filename>
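Note that sz is not an HDFS command: it comes from the Linux lrzsz package and only works in SSH clients that support the ZMODEM protocol (e.g. Xshell or SecureCRT). After pulling the file to the Linux host with -get as above, send it to Windows:
[moxi@hadoop102 hadoop-3.1.3]$ sz shuguo.txt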
(1) -ls: list directory contents
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -ls /sanguo
(2) -cat: display a file's contents
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -cat /sanguo/shuguo.txt
(3) -chgrp, -chmod, -chown: same usage as in the Linux filesystem; change a file's group, permissions, or owner
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -chmod 666 /sanguo/shuguo.txt
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -chown moxi:moxi /sanguo/shuguo.txt
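The effect can be verified with -ls: after the chmod 666 the permission column should read -rw-rw-rw-, and after the chown the owner and group columns should both show moxi:
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -ls /sanguo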
(4) -mkdir: create a directory
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -mkdir /jinguo
(5) -cp: copy a file within HDFS
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -cp /sanguo/shuguo.txt /jinguo
(6) -mv: move files within HDFS
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -mv /sanguo/wuguo.txt /jinguo
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -mv /sanguo/weiguo.txt /jinguo
(7) -tail: display the last 1 KB of a file
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -tail /jinguo/shuguo.txt
(8) -rm: delete a file
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -rm /sanguo/shuguo.txt
(9) -rm -r: recursively delete a directory and its contents
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -rm -r /sanguo
(10) -du: report the size of a directory
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -du -s -h /jinguo
Output: 27 81 /jinguo
Here 27 is the total size of the files, 81 is the disk space consumed across all replicas (27 × 3), and /jinguo is the directory inspected.
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -du -h /jinguo
Output:
14 42 /jinguo/shuguo.txt
7 21 /jinguo/weiguo.txt
6 18 /jinguo/wuguo.txt
(11) -setrep: set the replication factor of a file in HDFS
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -setrep 10 /jinguo/shuguo.txt
The replication factor set here is only recorded in the NameNode's metadata; whether that many replicas actually exist depends on the number of DataNodes. Since there are currently only 3 machines, there can be at most 3 replicas. The setting of 10 is merely recorded for now; the actual replica count will only reach 10 once the cluster grows to 10 DataNodes.
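To check the replication factor actually recorded for a file, look at the second column of the -ls output, or ask fsck for block-level detail (a verification sketch; the exact output format varies by Hadoop version):
[moxi@hadoop102 hadoop-3.1.3]$ hadoop fs -ls /jinguo/shuguo.txt
[moxi@hadoop102 hadoop-3.1.3]$ hdfs fsck /jinguo/shuguo.txt -files -blocks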