org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException

When using append on Hadoop-1.0.4 and Hadoop-2.2, the requirement was: append to a file, creating it first if it does not exist.

The exception:

Exception in thread "main" org.apache.hadoop.ipc.RemoteException: org.apache.hadoop.hdfs.protocol.AlreadyBeingCreatedException: failed to create file /huangq/dailyRolling/mommy-dailyRolling for DFSClient_-1456545217 on client 10.1.85.243 because current leaseholder is trying to recreate file.
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.recoverLeaseInternal(FSNamesystem.java:1374)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.startFileInternal(FSNamesystem.java:1246)
  at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.appendFile(FSNamesystem.java:1426)
  at org.apache.hadoop.hdfs.server.namenode.NameNode.append(NameNode.java:643)
  at sun.reflect.GeneratedMethodAccessor25.invoke(Unknown Source)
  at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
  at java.lang.reflect.Method.invoke(Method.java:597)

Code version 1 (the code that triggers the error above):

FileSystem fs = FileSystem.get(conf);
Path dstPath = new Path(dst);
if (!fs.exists(dstPath)) {
    fs.create(dstPath);   // BUG: the FSDataOutputStream returned by create() is never closed
}
FSDataOutputStream fsout = fs.append(dstPath);
BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(fsout));

Cause of the exception: create(Path f) returns an FSDataOutputStream, and that stream must be closed once the file has been created. Until it is closed, the client still holds the lease it acquired through create(), so when the same client immediately calls append(), the NameNode sees the current leaseholder trying to reopen the file and throws AlreadyBeingCreatedException.

Fix: close the FSDataOutputStream returned by create() before calling append().

FileSystem fs = FileSystem.get(conf);
Path dstPath = new Path(dst);
if (!fs.exists(dstPath)) {
    fs.create(dstPath).close();   // close the stream so the lease is released
}
FSDataOutputStream fsout = fs.append(dstPath);
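For completeness, here is a minimal self-contained sketch of the whole flow: create the file if it is missing, close the stream returned by create(), then append one line. The file path reuses the one from the error message above, while the class name HdfsAppendExample and the appended text are made up for the example; everything else is the standard org.apache.hadoop.fs API.

import java.io.BufferedWriter;
import java.io.OutputStreamWriter;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsAppendExample {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path dstPath = new Path("/huangq/dailyRolling/mommy-dailyRolling"); // path from the error above

        // Create the file first if needed, and close the stream returned by
        // create() so this client's lease is released before append().
        if (!fs.exists(dstPath)) {
            fs.create(dstPath).close();
        }

        // Append one line; closing the BufferedWriter also closes the
        // underlying FSDataOutputStream.
        FSDataOutputStream fsout = fs.append(dstPath);
        BufferedWriter bw = new BufferedWriter(new OutputStreamWriter(fsout));
        try {
            bw.write("one appended line"); // example content
            bw.newLine();
        } finally {
            bw.close();
        }
    }
}

Note that on the Hadoop 1.x line, append() typically also has to be enabled (dfs.support.append set to true in hdfs-site.xml) before it will work at all.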
 
   
Reference: http://www.cnblogs.com/byrhuangqiang/p/3926663.html