HDFS data flow: writing a file

The client creates the file by calling create() on DistributedFileSystem (step 1 in Figure 3-3). DistributedFileSystem makes an RPC call to the namenode to create a new file in the filesystem’s namespace, with no blocks associated with it (step 2). The namenode performs various checks to make sure the file doesn’t already exist and that the client has the right permissions to create the file. If these checks pass, the namenode makes a record of the new file; otherwise, file creation fails and the client is thrown an IOException. The DistributedFileSystem returns an FSDataOutputStream for the client to start writing data to. Just as in the read case, FSDataOutputStream wraps a DFSOutputStream, which handles communication with the datanodes and namenode.
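
From the client’s point of view, this write path boils down to a few calls against the FileSystem API. The sketch below is a minimal example: the path is hypothetical, and the Configuration is assumed to point at an HDFS cluster so that FileSystem.get() returns a DistributedFileSystem.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HdfsWriteExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();      // reads core-site.xml/hdfs-site.xml
            FileSystem fs = FileSystem.get(conf);          // DistributedFileSystem for an hdfs:// default filesystem
            Path file = new Path("/user/tom/output.txt");  // hypothetical path

            // Steps 1-2: create() has the namenode record the new file (no blocks yet)
            // and returns an FSDataOutputStream wrapping a DFSOutputStream.
            FSDataOutputStream out = fs.create(file);

            // Step 3: bytes written here are split into packets internally.
            out.write("hello, hdfs\n".getBytes("UTF-8"));

            // Steps 6-7: close() flushes the remaining packets and tells the
            // namenode that the file is complete.
            out.close();
        }
    }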


As the client writes data (step 3), DFSOutputStream splits it into packets, which it writes to an internal queue, called the data queue. The data queue is consumed by the DataStreamer, whose responsibility it is to ask the namenode to allocate new blocks by picking a list of suitable datanodes to store the replicas. The list of datanodes forms a pipeline, and we’ll assume the replication level is 3, so there are three nodes in the pipeline. The DataStreamer streams the packets to the first datanode in the pipeline, which stores the packet and forwards it to the second datanode in the pipeline. Similarly, the second datanode stores the packet and forwards it to the third (and last) datanode in the pipeline (step 4).
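
Hadoop implements this inside DFSOutputStream, but the producer/consumer shape is easy to show in isolation. The sketch below is illustrative only, not the actual Hadoop code; Packet and DatanodeConnection are hypothetical stand-ins.

    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Illustrative sketch of the data queue: the client's write path enqueues
    // packets, and a separate streamer thread sends each one to the first
    // datanode, which stores it and forwards it down the pipeline.
    class DataQueueSketch {
        static class Packet { byte[] bytes; }                  // hypothetical packet type
        interface DatanodeConnection { void send(Packet p); }  // hypothetical connection

        private final BlockingQueue<Packet> dataQueue = new LinkedBlockingQueue<>();

        // Called from the client thread (step 3).
        void enqueue(Packet p) throws InterruptedException {
            dataQueue.put(p);
        }

        // Runs in the streamer thread (step 4); pipeline holds the datanodes
        // chosen by the namenode, three of them at replication level 3.
        void streamTo(List<DatanodeConnection> pipeline) throws InterruptedException {
            while (true) {
                Packet p = dataQueue.take();
                pipeline.get(0).send(p);   // the first datanode forwards downstream
            }
        }
    }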

 

DFSOutputStream also maintains an internal queue of packets that are waiting to be
acknowledged by datanodes, called the ack queue. A packet is removed from the ack
queue only when it has been acknowledged by all the datanodes in the pipeline (step 5).
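
Continuing the same simplified sketch (again, not Hadoop’s actual implementation), the ack queue can be modelled as a second queue that holds each packet from the moment it is sent until the whole pipeline has confirmed it; packets here are modelled as plain byte arrays.

    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    // Illustrative sketch of the ack queue.
    class AckQueueSketch {
        private static final int PIPELINE_SIZE = 3;   // replication level assumed above
        private final BlockingQueue<byte[]> ackQueue = new LinkedBlockingQueue<>();

        // Called by the streamer immediately after sending a packet (step 4).
        void packetSent(byte[] packet) throws InterruptedException {
            ackQueue.put(packet);
        }

        // Called once acknowledgments for the packet at the head of the queue
        // have arrived from all PIPELINE_SIZE datanodes (step 5); only then is
        // the packet discarded.
        void fullyAcknowledged() throws InterruptedException {
            ackQueue.take();
        }
    }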


If a datanode fails while data is being written to it, then the following actions are taken, which are transparent to the client writing the data. First, the pipeline is closed, and any packets in the ack queue are added to the front of the data queue so that datanodes that are downstream from the failed node will not miss any packets. The current block on the good datanodes is given a new identity, which is communicated to the namenode, so that the partial block on the failed datanode will be deleted if the failed datanode recovers later on. The failed datanode is removed from the pipeline, and the remainder of the block’s data is written to the two good datanodes in the pipeline. The namenode notices that the block is under-replicated, and it arranges for a further replica to be created on another node. Subsequent blocks are then treated as normal.
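
The recovery sequence can be sketched in the same style. This is a simplified illustration of the steps just described, not Hadoop’s pipeline-recovery code; the queue and pipeline types are hypothetical.

    import java.util.ArrayDeque;
    import java.util.Deque;
    import java.util.List;

    // Illustrative sketch of datanode-failure handling during a write.
    class PipelineRecoverySketch {
        private final Deque<byte[]> dataQueue = new ArrayDeque<>();
        private final Deque<byte[]> ackQueue = new ArrayDeque<>();

        void handleFailure(List<String> pipeline, String failedDatanode) {
            // Packets awaiting acknowledgment go back onto the front of the
            // data queue, oldest first, so downstream datanodes miss nothing.
            while (!ackQueue.isEmpty()) {
                dataQueue.addFirst(ackQueue.pollLast());
            }
            // Drop the failed datanode; the rest of the block is written to the
            // remaining good datanodes. (Giving the block a new identity and the
            // namenode's later re-replication are handled elsewhere.)
            pipeline.remove(failedDatanode);
        }
    }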

 

It’s possible, but unlikely, that multiple datanodes fail while a block is being written. As long as dfs.replication.min replicas (default one) are written, the write will succeed, and the block will be asynchronously replicated across the cluster until its target replication factor is reached (dfs.replication, which defaults to three).
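
These two properties are normally set in hdfs-site.xml; the snippet below shows them set programmatically purely for illustration, using the property names as they appear in this text (newer Hadoop releases rename dfs.replication.min).

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;

    public class ReplicationSettings {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            conf.setInt("dfs.replication", 3);       // target replication factor
            conf.setInt("dfs.replication.min", 1);   // replicas needed for a write to succeed
            FileSystem fs = FileSystem.get(conf);
            System.out.println("default replication: " + fs.getDefaultReplication());
        }
    }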


When the client has finished writing data, it calls close() on the stream (step 6). This action flushes all the remaining packets to the datanode pipeline and waits for acknowledgments before contacting the namenode to signal that the file is complete (step 7). The namenode already knows which blocks the file is made up of (via the DataStreamer asking for block allocations), so it only has to wait for blocks to be minimally replicated before returning successfully.
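
Once close() has returned, the file is complete and visible in the namespace, which can be checked with a status call; the path below is the hypothetical one from the earlier example.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileStatus;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class AfterClose {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());
            Path file = new Path("/user/tom/output.txt");   // hypothetical path
            // After close(), readers see the file at its full length.
            FileStatus status = fs.getFileStatus(file);
            System.out.println(file + " has length " + status.getLen());
        }
    }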
