Upgrading Hadoop 2.6.0 to Hadoop 3.2.1, and Rolling Back

1、Background

We planned to upgrade Hadoop from 2.6.0-cdh5.16.1 to the open-source community release Hadoop 3.2.1.
This post lists the upgrade and rollback steps and records the problems we ran into.

2、Upgrade and issues

Steps to upgrade Hadoop 2.6.0 to 3.2.1

Stop the cluster and replace the installation package:
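A minimal sketch of stopping the cluster, assuming the standard sbin scripts (run from the Hadoop sbin directory on the appropriate master nodes):

./stop-yarn.sh
./stop-dfs.sh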

1. Start the journalnode cluster

./hadoop-daemon.sh start journalnode

2. Start the namenode with the upgrade flag

./hadoop-daemon.sh start namenode -upgrade

3. Start the datanodes

./hadoop-daemon.sh start datanode

4. Rebuild the standby namenode
hdfs namenode -bootstrapStandby

Then start the standby namenode:

./hadoop-daemon.sh start namenode

5. Start YARN and the JobHistory server
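A minimal sketch of the commands for this step, assuming the same sbin scripts as the steps above (in Hadoop 3.x, mapred --daemon start historyserver is the newer equivalent of the deprecated mr-jobhistory-daemon.sh):

./start-yarn.sh
./mr-jobhistory-daemon.sh start historyserver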

Problem summary

① Hive fails to start

Exception in thread "main" java.lang.NoSuchMethodError: com.google.common.base.Preconditions.checkArgument(ZLjava/lang/String;Ljava/lang/Object;)V
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1357)
	at org.apache.hadoop.conf.Configuration.set(Configuration.java:1338)
	at org.apache.hadoop.mapred.JobConf.setJar(JobConf.java:536)
	at org.apache.hadoop.mapred.JobConf.setJarByClass(JobConf.java:554)
	at org.apache.hadoop.mapred.JobConf.<init>(JobConf.java:448)
	at org.apache.hadoop.hive.conf.HiveConf.initialize(HiveConf.java:5141)
	at org.apache.hadoop.hive.conf.HiveConf.<init>(HiveConf.java:5099)
	at org.apache.hadoop.hive.common.LogUtils.initHiveLog4jCommon(LogUtils.java:97)
	at org.apache.hadoop.hive.common.LogUtils.initHiveLog4j(LogUtils.java:81)
	at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:699)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:236)

Solution: locate the guava jar under the Hadoop directory hadoop/share/hadoop/common/lib (here guava-27.0-jre.jar), copy it into the Hive lib directory, and delete the old guava jar from the Hive directory.
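A minimal sketch, assuming Hadoop lives under /usr/local/hadoop and Hive under /usr/local/hive (hypothetical paths; the old guava file name varies, so list it first):

ls /usr/local/hive/lib | grep guava                                                   # find the old guava jar
cp /usr/local/hadoop/share/hadoop/common/lib/guava-27.0-jre.jar /usr/local/hive/lib/
rm /usr/local/hive/lib/guava-14.0.1.jar                                               # hypothetical old version; remove whatever the ls showed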

② ORC-format table written by Presto 0.213, e.g. dim.dim_tb_random_d:

CREATE TABLE `dim.dim_tb_random_d`(
  `num` string COMMENT 'used to spread skewed join keys; num must not be modified')
COMMENT 'used to spread skewed join keys; do not drop or modify this table'
ROW FORMAT SERDE
  'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
  'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
  'hdfs://nameservice1/user/hive/warehouse/dim.db/dim_tb_random_d'

Solution: recreate the table with Spark or Hive; a sketch follows.
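A minimal sketch of recreating it via the Hive CLI, assuming the old data is still readable (the _bak table name is hypothetical; if the old ORC files cannot be read, create the table empty and repopulate it instead):

hive -e "
ALTER TABLE dim.dim_tb_random_d RENAME TO dim.dim_tb_random_d_bak;
CREATE TABLE dim.dim_tb_random_d
  COMMENT 'used to spread skewed join keys; do not drop or modify this table'
  STORED AS ORC
  AS SELECT num FROM dim.dim_tb_random_d_bak;
"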

③ Native snappy library not available

Failed with exception java.io.IOException:java.lang.RuntimeException: native snappy library not available: this version of libhadoop was built without snappy support.

Solution: rebuild the Hadoop distribution with snappy support:
mvn clean package -DskipTests -Pdist,native -Dtar -Dsnappy.lib=/usr/local/lib/lib -Dbundle.snappy
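After rebuilding, native library support can be verified with Hadoop's built-in checker, which reports whether snappy is loaded:

hadoop checknative -a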

④ ResourceManager fails to start

20/07/01 11:18:53 FATAL resourcemanager.ResourceManager---main: Error starting ResourceManager
org.apache.hadoop.yarn.webapp.WebAppException: Error starting http server
	at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:385)
	at org.apache.hadoop.yarn.webapp.WebApps$Builder.start(WebApps.java:432)
	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.startWepApp(ResourceManager.java:1231)
	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.serviceStart(ResourceManager.java:1340)
	at org.apache.hadoop.service.AbstractService.start(AbstractService.java:194)
	at org.apache.hadoop.yarn.server.resourcemanager.ResourceManager.main(ResourceManager.java:1535)
Caused by: java.io.FileNotFoundException: webapps/cluster not found in CLASSPATH
	at org.apache.hadoop.http.HttpServer2.getWebAppsPath(HttpServer2.java:1085)
	at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:536)
	at org.apache.hadoop.http.HttpServer2.<init>(HttpServer2.java:119)
	at org.apache.hadoop.http.HttpServer2$Builder.build(HttpServer2.java:433)
	at org.apache.hadoop.yarn.webapp.WebApps$Builder.build(WebApps.java:341)
	... 5 more

For a "webapps/xxxx not found in CLASSPATH" error:
check whether hadoop/share/hadoop/yarn/hadoop-yarn-common-3.2.1.jar is missing the xxxx directory.
Rebuild the package, or add the directory to the jar manually (a sketch follows).
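A minimal sketch of checking and patching the jar, assuming HADOOP_HOME points at the install directory and a directory containing webapps/cluster is available from the build tree (the classes path is hypothetical):

jar tf $HADOOP_HOME/share/hadoop/yarn/hadoop-yarn-common-3.2.1.jar | grep webapps/cluster   # no output means the directory is missing
cd /path/to/hadoop-yarn-common/target/classes                                               # hypothetical build-tree path containing webapps/cluster
jar uf $HADOOP_HOME/share/hadoop/yarn/hadoop-yarn-common-3.2.1.jar webapps/cluster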


⑤ Spark error: snappy native library not loaded

java.lang.UnsatisfiedLinkError: org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy()Z
	at org.apache.hadoop.util.NativeCodeLoader.buildSupportsSnappy(Native Method)
	at org.apache.hadoop.io.compress.SnappyCodec.checkNativeCodeLoaded(SnappyCodec.java:63)
	at org.apache.hadoop.io.compress.SnappyCodec.getDecompressorType(SnappyCodec.java:192)
	at org.apache.hadoop.io.compress.CodecPool.getDecompressor(CodecPool.java:176)
	at org.apache.hadoop.mapred.LineRecordReader.<init>(LineRecordReader.java:111)
	at org.apache.hadoop.mapred.TextInputFormat.getRecordReader(TextInputFormat.java:67)
	at org.apache.spark.rdd.HadoopRDD$$anon$1.liftedTree1$1(HadoopRDD.scala:267)
	at org.apache.spark.rdd.HadoopRDD$$anon$1.<init>(HadoopRDD.scala:266)
	at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:224)
	at org.apache.spark.rdd.HadoopRDD.compute(HadoopRDD.scala:95)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.UnionRDD.compute(UnionRDD.scala:105)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:52)
	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:324)
	at org.apache.spark.rdd.RDD.iterator(RDD.scala:288)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:55)
	at org.apache.spark.scheduler.Task.run(Task.scala:121)
	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)

Solution: specify the native snappy library path in spark-defaults.conf:
spark.executor.extraLibraryPath /usr/local/yunji/hadoop/lib/native
spark.driver.extraLibraryPath /usr/local/yunji/hadoop/lib/native
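The same settings can also be passed per job rather than globally; a sketch (the application jar name is hypothetical):

spark-submit \
  --conf spark.executor.extraLibraryPath=/usr/local/yunji/hadoop/lib/native \
  --conf spark.driver.extraLibraryPath=/usr/local/yunji/hadoop/lib/native \
  your-app.jar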

3、Rolling back Hadoop 3.2.1 to Hadoop 2.6.0

Rollback steps
1. Replace the installation package

Stop the Hadoop cluster and put back the old-version installation package.

2. journalnode

Start all of the journalnodes.

3. Roll back the namenode

Run: sh hadoop-daemon.sh start namenode -rollback

4. Restart the JNs

Restart all journalnodes.

5. Start the namenode

Replay the namenode edit log:

hadoop namenode -recover

Then start the namenode on the node where the rollback was performed:

sh hadoop-daemon.sh start namenode

6. Start all datanodes

sh hadoop-daemon.sh start datanode -rollback

7. Force a namenode to active

hdfs haadmin -transitionToActive --forceactive nn1

8. Start the standby

Delete the previous directory on the standby node (its location is sketched after this step), then run:

hdfs namenode -bootstrapStandby

sh hadoop-daemon.sh start namenode
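The previous directory sits under the namenode storage directory; a minimal sketch, assuming dfs.namenode.name.dir=/home/hdfs/hadoopdata/nn as in the logs below:

rm -rf /home/hdfs/hadoopdata/nn/previous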

Problem summary

① Layout version mismatch

2020-07-09 16:59:37,315 ERROR [main] org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.IncorrectVersionException: Unexpected version of storage directory /home/hdfs/hadoopdata/nn. Reported: -65. Expecting = -60.
	at org.apache.hadoop.hdfs.server.common.StorageInfo.setLayoutVersion(StorageInfo.java:179)
	at org.apache.hadoop.hdfs.server.common.StorageInfo.setFieldsFromProperties(StorageInfo.java:131)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.setFieldsFromProperties(NNStorage.java:635)
	at org.apache.hadoop.hdfs.server.namenode.NNStorage.readProperties(NNStorage.java:664)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:389)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:228)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1159)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:802)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)

Solution: run hadoop namenode -rollback.

② hadoop namenode -rollback fails

20/07/09 16:56:24 ERROR namenode.NameNode---main: Failed to start namenode.
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Could not check if roll back possible for one or more JournalNodes. 1 exceptions thrown:
10.0.117.208:8485: Call From txidc-bigdata-testcluster2/10.0.117.208 to txidc-bigdata-testcluster2:8485 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
	at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.canRollBack(QuorumJournalManager.java:573)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.canRollBackSharedLog(FSEditLog.java:1522)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.doRollback(FSImage.java:525)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.doRollback(NameNode.java:1266)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1507)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)
20/07/09 16:56:24 INFO util.ExitUtil---main: Exiting with status 1

Solution: start the journalnodes first.

③ Edit log replay fails

20/07/09 17:38:38 ERROR namenode.FSImage---main: Error replaying edit log at offset 0.  Expected transaction ID was 240441
org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream$PrematureEOFException: got premature end-of-file at txid 240432; expected file to go up to 240442
	at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:196)
	at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
	at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.skipUntil(EditLogInputStream.java:151)
	at org.apache.hadoop.hdfs.server.namenode.RedundantEditLogInputStream.nextOp(RedundantEditLogInputStream.java:178)
	at org.apache.hadoop.hdfs.server.namenode.EditLogInputStream.readOp(EditLogInputStream.java:85)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadEditRecords(FSEditLogLoader.java:188)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLogLoader.loadFSEdits(FSEditLogLoader.java:141)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadEdits(FSImage.java:903)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.loadFSImage(FSImage.java:756)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:324)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1159)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:802)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.doRecovery(NameNode.java:1437)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1531)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)

Solution: discard the inconsistent edit log segments with hadoop namenode -recover.

④ Transitioning the standby namenode to active fails
hdfs haadmin -transitionToActive nn1

2020-07-09 19:04:12,810 FATAL [IPC Server handler 105 on 8022] org.apache.hadoop.hdfs.server.namenode.FSEditLog: Error: recoverUnfinalizedSegments failed for required journal (JournalAndStream(mgr=QJM to [10.0.118.217:8485, 10.0.117.208:8485, 10.0.118.179:8485], stream=null))
org.apache.hadoop.hdfs.qjournal.client.QuorumException: Got too many exceptions to achieve quorum size 2/3. 3 exceptions thrown:
10.0.118.217:8485: Incompatible namespaceID for journal Storage Directory /mnt/vdc-11176G-0/dfs/jn/nameservicetest1: NameNode has nsId 647617129 but storage has nsId 0
	at org.apache.hadoop.hdfs.qjournal.server.JNStorage.checkConsistentNamespace(JNStorage.java:236)
	at org.apache.hadoop.hdfs.qjournal.server.Journal.newEpoch(Journal.java:300)
	at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.newEpoch(JournalNodeRpcServer.java:136)
	at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.newEpoch(QJournalProtocolServerSideTranslatorPB.java:133)
	at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25417)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2278)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2274)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2274)

Solution: make the nsId (namespaceID) consistent across the VERSION files.

10.0.118.217:8485: Incompatible clusterID for journal Storage Directory /mnt/vdc-11176G-0/dfs/jn/nameservicetest1: NameNode has clusterId 'CID-36438bb5-79bd-46e9-9c41-450a43b2f55a' but storage has clusterId 'CID-e31d97a8-dac1-4e18-bdf5-7bd731467e89'
	at org.apache.hadoop.hdfs.qjournal.server.JNStorage.checkConsistentNamespace(JNStorage.java:242)
	at org.apache.hadoop.hdfs.qjournal.server.Journal.newEpoch(Journal.java:300)
	at org.apache.hadoop.hdfs.qjournal.server.JournalNodeRpcServer.newEpoch(JournalNodeRpcServer.java:136)
	at org.apache.hadoop.hdfs.qjournal.protocolPB.QJournalProtocolServerSideTranslatorPB.newEpoch(QJournalProtocolServerSideTranslatorPB.java:133)
	at org.apache.hadoop.hdfs.qjournal.protocol.QJournalProtocolProtos$QJournalProtocolService$2.callBlockingMethod(QJournalProtocolProtos.java:25417)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2278)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2274)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2274)

	at org.apache.hadoop.hdfs.qjournal.client.QuorumException.create(QuorumException.java:81)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumCall.rethrowException(QuorumCall.java:286)
	at org.apache.hadoop.hdfs.qjournal.client.AsyncLoggerSet.waitForWriteQuorum(AsyncLoggerSet.java:142)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.createNewUniqueEpoch(QuorumJournalManager.java:195)
	at org.apache.hadoop.hdfs.qjournal.client.QuorumJournalManager.recoverUnfinalizedSegments(QuorumJournalManager.java:441)
	at org.apache.hadoop.hdfs.server.namenode.JournalSet$8.apply(JournalSet.java:624)
	at org.apache.hadoop.hdfs.server.namenode.JournalSet.mapJournalsAndReportErrors(JournalSet.java:393)
	at org.apache.hadoop.hdfs.server.namenode.JournalSet.recoverUnfinalizedSegments(JournalSet.java:621)
	at org.apache.hadoop.hdfs.server.namenode.FSEditLog.recoverUnclosedStreams(FSEditLog.java:1478)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startActiveServices(FSNamesystem.java:1300)
	at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.startActiveServices(NameNode.java:1771)
	at org.apache.hadoop.hdfs.server.namenode.ha.ActiveState.enterState(ActiveState.java:61)
	at org.apache.hadoop.hdfs.server.namenode.ha.HAState.setStateInternal(HAState.java:64)
	at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.setState(StandbyState.java:49)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.transitionToActive(NameNode.java:1644)
	at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.transitionToActive(NameNodeRpcServer.java:1530)
	at org.apache.hadoop.ha.protocolPB.HAServiceProtocolServerSideTranslatorPB.transitionToActive(HAServiceProtocolServerSideTranslatorPB.java:107)
	at org.apache.hadoop.ha.proto.HAServiceProtocolProtos$HAServiceProtocolService$2.callBlockingMethod(HAServiceProtocolProtos.java:4460)
	at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:617)
	at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1073)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2278)
	at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2274)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1924)
	at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2274)

Solution: as above, make the clusterId consistent across the VERSION files; a sketch of checking both IDs follows.
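A minimal sketch of comparing the IDs, using the storage directories from the logs above (adjust to your dfs.namenode.name.dir and dfs.journalnode.edits.dir); edit the journalnode VERSION so namespaceID and clusterID match the namenode's, then restart the journalnode:

grep -E 'namespaceID|clusterID' /home/hdfs/hadoopdata/nn/current/VERSION
grep -E 'namespaceID|clusterID' /mnt/vdc-11176G-0/dfs/jn/nameservicetest1/current/VERSION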

⑤ Storage directory does not exist or is not accessible

2020-07-09 19:38:46,026 ERROR [main] org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /home/hdfs/hadoopdata/nn is in an inconsistent state: storage directory does not exist or is not accessible.
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverStorageDirs(FSImage.java:377)
	at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:228)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:1159)
	at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:802)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:614)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:676)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:844)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:823)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1547)
	at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1615)

Solution: make sure the user that starts the namenode has permissions on every file under /home/hdfs/hadoopdata/nn.
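A minimal sketch, assuming the namenode runs as user hdfs (adjust the owner and group to your deployment):

chown -R hdfs:hdfs /home/hdfs/hadoopdata/nn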
