HDFS

From the HDFS paper: The Hadoop Distributed File System (HDFS) is designed to store very large data sets reliably, and to stream those data sets at high bandwidth to user applications.

Assumptions

  1. Hardware failure is the norm rather than the exception.

Goals

  1. HDFS is designed more for batch processing than for interactive use by users. The emphasis is on high throughput of data access rather than low latency of data access.
  2. HDFS is tuned to support large files.
  3. The assumption is that it is often better to migrate the computation closer to where the data is located rather than moving the data to where the application is running.

NameNode and DataNode

  1. The existence of a single NameNode in a cluster greatly simplifies the architecture of the system. The NameNode is the arbitrator and repository for all HDFS metadata. The system is designed in such a way that user data never flows through the NameNode.
  2. The NameNode maintains the file system namespace. Any change to the file system namespace or its properties is recorded by the NameNode.

Data Replication

  1. The number of copies of a file is called the replication factor of that file. This information is stored by the NameNode.
  2. The block size and replication factor are configurable per file.
  3. All blocks in a file except the last block are the same size.
  4. An application can specify the number of replicas of a file.
  5. The NameNode makes all decisions regarding replication of blocks. It periodically receives a Heartbeat and a Blockreport from each of the DataNodes in the cluster. Receipt of a Heartbeat implies that the DataNode is functioning properly. A Blockreport contains a list of all blocks on a DataNode.
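The heartbeat/blockreport bookkeeping in point 5 can be sketched as follows. This is a toy model, not HDFS code; the class and method names are invented for illustration, and the real timeout is configurable.

```python
import time

class NameNodeReplicationState:
    """Toy sketch of a NameNode-like process tracking DataNode liveness
    via heartbeats and block locations via block reports. All names here
    are invented for illustration, not HDFS APIs."""

    def __init__(self, heartbeat_timeout=600.0):  # HDFS default is >10 min
        self.heartbeat_timeout = heartbeat_timeout
        self.last_heartbeat = {}    # datanode_id -> timestamp of last heartbeat
        self.block_locations = {}   # block_id -> set of datanode_ids

    def receive_heartbeat(self, datanode_id, now=None):
        # Receipt of a heartbeat implies the DataNode is functioning properly.
        self.last_heartbeat[datanode_id] = now if now is not None else time.time()

    def receive_blockreport(self, datanode_id, block_ids):
        # A block report lists ALL blocks on that DataNode, so first drop
        # any stale locations for this node, then record the new ones.
        for replicas in self.block_locations.values():
            replicas.discard(datanode_id)
        for block_id in block_ids:
            self.block_locations.setdefault(block_id, set()).add(datanode_id)

    def live_replicas(self, block_id, now=None):
        # A replica only counts if its DataNode has heartbeated recently.
        now = now if now is not None else time.time()
        return {dn for dn in self.block_locations.get(block_id, set())
                if now - self.last_heartbeat.get(dn, 0) < self.heartbeat_timeout}
```

From this state the NameNode can decide which blocks are under-replicated and schedule re-replication.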

Racks

  1. Large HDFS instances run on a cluster of computers that is commonly spread across many racks. Communication between two nodes in different racks has to go through switches.
  2. For the common case, when the replication factor is three, HDFS’s placement policy is to put one replica on one node in the local rack, another on a node in a different (remote) rack, and the last on a different node in that same remote rack.
  3. If there exists a replica on the same rack as the reader node, then that replica is preferred to satisfy the read request. If the HDFS cluster spans multiple data centers, then a replica that is resident in the local data center is preferred over any remote replica.
  4. On startup, the NameNode enters a special state called Safemode. Replication of data blocks does not occur when the NameNode is in the Safemode state.
  5. A block is considered safely replicated when the minimum number of replicas of that data block has checked in with the NameNode.
  6. The NameNode keeps an image of the entire file system namespace and file Blockmap in memory.
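The read preference in point 3 amounts to picking the replica with the smallest "network distance" from the reader. A minimal sketch with invented names (real HDFS derives distances from a configured network topology script):

```python
def pick_replica_for_read(replicas, reader_rack, reader_dc):
    """Toy sketch of HDFS's read preference: a replica on the reader's
    rack beats one merely in the same data center, which beats any
    remote replica. `replicas` is a list of (datanode, rack, dc)
    tuples; all names are illustrative, not HDFS code."""
    def distance(replica):
        _, rack, dc = replica
        if rack == reader_rack:
            return 0          # rack-local: cheapest read
        if dc == reader_dc:
            return 1          # same data center, different rack
        return 2              # remote data center
    return min(replicas, key=distance)
```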

The Persistence of File System Metadata

  1. The NameNode uses a transaction log called the EditLog to persistently record every change that occurs to file system metadata.
  2. The entire file system namespace, including the mapping of blocks to files and file system properties, is stored in a file called the FsImage. The FsImage is stored as a file in the NameNode’s local file system too.
  3. When the NameNode starts up, it reads the FsImage and EditLog from disk, applies all the transactions from the EditLog to the in-memory representation of the FsImage, and flushes out this new version into a new FsImage on disk. It can then truncate the old EditLog because its transactions have been applied to the persistent FsImage. This process is called a checkpoint.
  4. It is not optimal to create all local files in the same directory because the local file system might not be able to efficiently support a huge number of files in a single directory.

The Communication Protocols

  1. All HDFS communication protocols are layered on top of the TCP/IP protocol.
  2. By design, the NameNode never initiates any RPCs. Instead, it only responds to RPC requests issued by DataNodes or clients.

Robustness

  1. The three common types of failures are NameNode failures, DataNode failures and network partitions.
  2. The time-out to mark DataNodes dead is conservatively long (over 10 minutes by default) in order to avoid a replication storm caused by state flapping of DataNodes.
  3. A scheme might automatically move data from one DataNode to another if the free space on a DataNode falls below a certain threshold.
  4. When a client creates an HDFS file, it computes a checksum of each block of the file and stores these checksums in a separate hidden file in the same HDFS namespace. When a client retrieves file contents it verifies that the data it received from each DataNode matches the checksum stored in the associated checksum file. If not, then the client can opt to retrieve that block from another DataNode that has a replica of that block.
  5. The NameNode can be configured to maintain multiple copies of the FsImage and EditLog; updating them synchronously may degrade the rate of namespace transactions per second that the NameNode can support. However, this degradation is acceptable because even though HDFS applications are very data intensive in nature, they are not metadata intensive. When a NameNode restarts, it selects the latest consistent FsImage and EditLog to use.
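The client-side checksumming in point 4 can be sketched as follows. Note that real HDFS computes CRC checksums over small chunks (512 bytes by default); this toy sketch hashes whole blocks with SHA-256 purely for illustration.

```python
import hashlib

BLOCK_SIZE = 128 * 1024 * 1024  # a typical HDFS block size

def block_checksums(data, block_size=BLOCK_SIZE):
    """Toy sketch of write-time checksumming: hash each block of the
    file; the resulting list stands in for the hidden checksum file."""
    return [hashlib.sha256(data[i:i + block_size]).hexdigest()
            for i in range(0, len(data), block_size)]

def verify_block(block_data, expected_checksum):
    """On read, recompute and compare. On a mismatch, the client would
    fetch the block from another DataNode holding a replica."""
    return hashlib.sha256(block_data).hexdigest() == expected_checksum
```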

Data Organization

  1. HDFS supports write-once-read-many semantics on files. A typical block size used by HDFS is 128 MB. Thus, an HDFS file is chopped up into 128 MB chunks, and if possible, each chunk will reside on a different DataNode.
  2. While writing, a DataNode receives data in small portions, writes each portion to its local storage, and forwards it to the next DataNode in the replica list. Thus, the data is pipelined from one DataNode to the next.
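The chopping and pipelining described above can be sketched as follows. This is a toy model: the dicts stand in for DataNode storage, and real HDFS streams blocks in small packets rather than handing over whole blocks at once.

```python
def split_into_blocks(file_size, block_size=128 * 1024 * 1024):
    """Toy sketch of how a file is chopped into fixed-size blocks:
    returns (offset, length) pairs, where every block except possibly
    the last is exactly block_size bytes."""
    return [(off, min(block_size, file_size - off))
            for off in range(0, file_size, block_size)]

def pipeline_write(block, datanodes):
    """Toy sketch of the replication pipeline: the client hands the block
    to the first DataNode; each DataNode stores it locally and forwards
    it to the next one in the list (modeled recursively here)."""
    if not datanodes:
        return
    head, rest = datanodes[0], datanodes[1:]
    head["stored"] = block        # write to local storage
    pipeline_write(block, rest)   # forward downstream
```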
