hdfs-

1. The corresponding metadata files are created in the directory specified by the dfs.namenode.name.dir property

a) current/VERSION

#Fri Dec 06 11:21:35 CST 2013
namespaceID=140437801
clusterID=CID-5a02557f-1977-44d6-b7bc-d124be2d0ba3
cTime=0
storageType=NAME_NODE
blockpoolID=BP-599437854-10.12.120.79-1386299958223
layoutVersion=-47
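
The VERSION file is an ordinary key=value properties file (the leading # line is a timestamp comment), so it can be inspected programmatically. A minimal sketch, assuming a hypothetical name directory path; the class name and path are illustrative only:

[code="java"]
import java.io.FileReader;
import java.util.Properties;

public class ReadVersionFile {
    public static void main(String[] args) throws Exception {
        // Example path only; substitute your own dfs.namenode.name.dir
        String path = "/data/dfs/name/current/VERSION";
        Properties props = new Properties();
        try (FileReader reader = new FileReader(path)) {
            props.load(reader); // parses key=value lines, skips the # comment
        }
        System.out.println("namespaceID   = " + props.getProperty("namespaceID"));
        System.out.println("clusterID     = " + props.getProperty("clusterID"));
        System.out.println("layoutVersion = " + props.getProperty("layoutVersion"));
    }
}
[/code]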


b) current/seen_txid
The current transaction ID (txid) is written here:

0
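
Since seen_txid holds nothing but the decimal txid, reading it back is trivial. A quick sketch; readSeenTxid is a hypothetical helper and the directory argument is an example:

[code="java"]
import java.nio.file.Files;
import java.nio.file.Paths;

// Hypothetical helper: parse the txid stored in current/seen_txid
long readSeenTxid(String nameDir) throws Exception {
    String content = new String(
        Files.readAllBytes(Paths.get(nameDir, "current/seen_txid"))).trim();
    return Long.parseLong(content); // e.g. 0 on a freshly formatted NameNode
}
[/code]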


FSImage$FSImageSaver writes the image to current/fsimage.ckpt and then renames it to fsimage_ followed by the 19-digit zero-padded txid, e.g. fsimage_0000000000000000000.
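
The two pieces of that step, the 19-digit zero-padded txid in the file name and the write-then-rename pattern, can be sketched as follows. This is a simplified illustration, not the actual FSImageSaver code; saveAndRename is a hypothetical helper:

[code="java"]
import java.io.File;
import java.io.IOException;

// Simplified sketch of the ckpt-then-rename pattern described above
void saveAndRename(File currentDir, long txid) throws IOException {
    File ckpt = new File(currentDir, "fsimage.ckpt");
    // ... write the image content into ckpt first ...

    // Final name: "fsimage_" + txid left-padded to 19 digits,
    // e.g. fsimage_0000000000000000000 for txid 0
    File fsimage = new File(currentDir, String.format("fsimage_%019d", txid));
    if (!ckpt.renameTo(fsimage)) {
        throw new IOException("rename of " + ckpt + " to " + fsimage + " failed");
    }
}
[/code]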
1. The file header

[code="java"]
out.writeInt(HdfsConstants.LAYOUT_VERSION);
out.writeInt(sourceNamesystem.unprotectedGetNamespaceInfo()
    .getNamespaceID());
out.writeLong(fsDir.rootDir.numItemsInTree()); // total number of files and directories in the filesystem
out.writeLong(sourceNamesystem.getGenerationStampV1());
out.writeLong(sourceNamesystem.getGenerationStampV2());
out.writeLong(sourceNamesystem.getGenerationStampAtblockIdSwitch());
out.writeLong(sourceNamesystem.getLastAllocatedBlockId());
out.writeLong(context.getTxId());
out.writeLong(sourceNamesystem.getLastInodeId());
[/code]
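
Reading the header back simply mirrors the writes above in the same order. A hedged sketch, assuming the stream is uncompressed and positioned at the start of the image; readHeader is an illustrative helper, not a Hadoop API:

[code="java"]
import java.io.DataInputStream;
import java.io.IOException;

// Sketch: mirror of the header writes above (field order must match exactly)
void readHeader(DataInputStream in) throws IOException {
    int layoutVersion = in.readInt();             // e.g. -47
    int namespaceID = in.readInt();
    long numItems = in.readLong();                // files + directories in the tree
    long genStampV1 = in.readLong();
    long genStampV2 = in.readLong();
    long genStampAtBlockIdSwitch = in.readLong();
    long lastAllocatedBlockId = in.readLong();
    long txid = in.readLong();
    long lastInodeId = in.readLong();
}
[/code]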


2. Next come the metadata entries for directories and files. The snippet below shows, for example, how the files still under construction (tracked by the LeaseManager) are written:

[code="java"]
Map<String, INodeFileUnderConstruction> nodes =
    leaseManager.getINodesUnderConstruction();
out.writeInt(nodes.size()); // write the size
for (Map.Entry<String, INodeFileUnderConstruction> entry
    : nodes.entrySet()) {
  FSImageSerialization.writeINodeUnderConstruction(
      out, entry.getValue(), entry.getKey());
}
[/code]

FSImageSerialization.writeINodeUnderConstruction then serializes each entry field by field:

[code="java"]
writeString(path, out);
out.writeLong(cons.getId());
out.writeShort(cons.getFileReplication());
out.writeLong(cons.getModificationTime());
out.writeLong(cons.getPreferredBlockSize());

writeBlocks(cons.getBlocks(), out);
cons.getPermissionStatus().write(out);

writeString(cons.getClientName(), out);
writeString(cons.getClientMachine(), out);

out.writeInt(0); // do not store locations of last block
[/code]
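
writeBlocks(cons.getBlocks(), out) follows the same count-plus-records pattern. A hedged sketch of the layout it produces, assuming each Block is serialized as three longs (block id, byte length, generation stamp); writeBlocksSketch is an illustrative stand-in, not the real method:

[code="java"]
import java.io.DataOutputStream;
import java.io.IOException;
import org.apache.hadoop.hdfs.protocol.Block;

// Sketch of the block-list layout, under the assumptions stated above
void writeBlocksSketch(Block[] blocks, DataOutputStream out) throws IOException {
    if (blocks == null) {
        out.writeInt(0);                        // no blocks allocated yet
        return;
    }
    out.writeInt(blocks.length);                // number of blocks
    for (Block b : blocks) {
        out.writeLong(b.getBlockId());
        out.writeLong(b.getNumBytes());         // current length in bytes
        out.writeLong(b.getGenerationStamp());
    }
}
[/code]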


References:
http://abloz.com/2013/01/08/hadoop-1-0-4-fsimage-file-format.html
http://blog.csdn.net/xhh198781/article/details/6904615
