Official MapReduce tutorial: http://hadoop.apache.org/docs/stable/hadoop-mapreduce-client/hadoop-mapreduce-client-core/MapReduceTutorial.html
Overview
Hadoop MapReduce is a software framework for easily writing applications which process vast amounts of data (multi-terabyte data-sets) in-parallel on large clusters (thousands of nodes) of commodity hardware in a reliable, fault-tolerant manner.
That is: MapReduce processes massive data sets in parallel on large clusters of commodity hardware, in a reliable, fault-tolerant way.
A MapReduce job usually splits the input data-set into independent chunks which are processed by the map tasks in a completely parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce tasks. Typically both the input and the output of the job are stored in a file-system. The framework takes care of scheduling tasks, monitoring them and re-executes the failed tasks.
That is: a MapReduce job usually splits the input data set into independent chunks, which the map tasks process fully in parallel. The framework sorts the map outputs and feeds them to the reduce tasks. Both the job's input and output are stored in a file system. The framework handles scheduling tasks, monitoring them, and re-running failed tasks.
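The split → map → sort/shuffle → reduce pipeline described above can be sketched as a toy, single-JVM simulation of word counting. This uses only plain Java collections; `WordCountFlow` and its method names are illustrative stand-ins for the data flow, not the Hadoop API:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

/** A toy, single-JVM illustration of the map -> sort/shuffle -> reduce flow. */
public class WordCountFlow {

    // "map": emit a (word, 1) pair for every word in one input record (a line)
    static List<Map.Entry<String, Integer>> map(String line) {
        List<Map.Entry<String, Integer>> pairs = new ArrayList<>();
        for (String word : line.split("\\s+")) {
            if (!word.isEmpty()) pairs.add(Map.entry(word, 1));
        }
        return pairs;
    }

    // "sort/shuffle": group map outputs by key; TreeMap keeps keys sorted,
    // mimicking the framework's sort of map outputs
    static Map<String, List<Integer>> shuffle(List<Map.Entry<String, Integer>> mapOutput) {
        Map<String, List<Integer>> grouped = new TreeMap<>();
        for (Map.Entry<String, Integer> p : mapOutput) {
            grouped.computeIfAbsent(p.getKey(), k -> new ArrayList<>()).add(p.getValue());
        }
        return grouped;
    }

    // "reduce": sum the grouped values for each key
    static Map<String, Integer> reduce(Map<String, List<Integer>> grouped) {
        Map<String, Integer> result = new TreeMap<>();
        grouped.forEach((word, ones) ->
            result.put(word, ones.stream().mapToInt(Integer::intValue).sum()));
        return result;
    }

    public static void main(String[] args) {
        List<Map.Entry<String, Integer>> mapped = new ArrayList<>();
        // each line stands in for one record handed to a map task
        mapped.addAll(map("hello world"));
        mapped.addAll(map("hello hadoop"));
        System.out.println(reduce(shuffle(mapped))); // {hadoop=1, hello=2, world=1}
    }
}
```

In a real cluster the same three phases run across many machines, with the framework moving the intermediate pairs between them.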
Typically the compute nodes and the storage nodes are the same, that is, the MapReduce framework and the Hadoop Distributed File System (see HDFS Architecture Guide) are running on the same set of nodes. This configuration allows the framework to effectively schedule tasks on the nodes where data is already present, resulting in very high aggregate bandwidth across the cluster.
That is: compute and storage nodes are usually the same, i.e. MapReduce and HDFS run on the same set of nodes. This lets the framework schedule tasks on the nodes where the data already resides, yielding very high aggregate bandwidth across the cluster.
The MapReduce framework consists of a single master ResourceManager, one slave NodeManager per cluster-node, and MRAppMaster per application (see YARN Architecture Guide).
That is: the MapReduce framework consists of a single master ResourceManager (RM), one slave NodeManager (NM) per cluster node, and one MRAppMaster (AM) per application.
Minimally, applications specify the input/output locations and supply map and reduce functions via implementations of appropriate interfaces and/or abstract-classes. These, and other job parameters, comprise the job configuration.
That is: an application specifies input/output locations and supplies its map and reduce functions by implementing the appropriate interfaces and/or abstract classes; these, plus other job parameters, make up the job configuration.
The Hadoop job client then submits the job (jar/executable etc.) and configuration to the ResourceManager which then assumes the responsibility of distributing the software/configuration to the slaves, scheduling tasks and monitoring them, providing status and diagnostic information to the job-client.
That is: the Hadoop job client submits the job and its configuration to the RM; the RM distributes the program and configuration to the slave nodes, schedules and monitors the tasks, and reports status and diagnostic information back to the client.
Inputs and Outputs
The MapReduce framework operates exclusively on <key, value> pairs, that is, the framework views the input to the job as a set of <key, value> pairs and produces a set of <key, value> pairs as the output of the job, conceivably of different types.
That is: the framework handles only key/value-pair data — a job's input is viewed as a set of key/value pairs, and its output is a set of key/value pairs, possibly of different types.
The key and value classes have to be serializable by the framework and hence need to implement the Writable interface. Additionally, the key classes have to implement the WritableComparable interface to facilitate sorting by the framework.
That is: keys and values are serialized by the framework and so must implement the Writable interface; key classes must additionally implement WritableComparable so the framework can sort them.
- Writable interface source
```java
package org.apache.hadoop.io;

import java.io.DataOutput;
import java.io.DataInput;
import java.io.IOException;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

/**
 * A serializable object which implements a simple, efficient, serialization
 * protocol, based on {@link DataInput} and {@link DataOutput}.
 *
 * <p>Any key or value type in the Hadoop Map-Reduce framework implements this
 * interface.</p>
 *
 * <p>Implementations typically implement a static <code>read(DataInput)</code>
 * method which constructs a new instance, calls {@link #readFields(DataInput)}
 * and returns the instance.</p>
 *
 * <p>Example:</p>
 * <pre>
 *     public class MyWritable implements Writable {
 *       // Some data
 *       private int counter;
 *       private long timestamp;
 *
 *       public void write(DataOutput out) throws IOException {
 *         out.writeInt(counter);
 *         out.writeLong(timestamp);
 *       }
 *
 *       public void readFields(DataInput in) throws IOException {
 *         counter = in.readInt();
 *         timestamp = in.readLong();
 *       }
 *
 *       public static MyWritable read(DataInput in) throws IOException {
 *         MyWritable w = new MyWritable();
 *         w.readFields(in);
 *         return w;
 *       }
 *     }
 * </pre>
 */
@InterfaceAudience.Public
@InterfaceStability.Stable
public interface Writable {
  /**
   * Serialize the fields of this object to <code>out</code>.
   *
   * @param out <code>DataOutput</code> to serialize this object into.
   * @throws IOException
   */
  void write(DataOutput out) throws IOException;

  /**
   * Deserialize the fields of this object from <code>in</code>.
   *
   * <p>For efficiency, implementations should attempt to re-use storage in the
   * existing object where possible.</p>
   *
   * @param in <code>DataInput</code> to deserialize this object from.
   * @throws IOException
   */
  void readFields(DataInput in) throws IOException;
}
```
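To see the write/readFields contract in action without a Hadoop dependency, here is a sketch: a local `Writable` interface mirroring the Hadoop one above (declared locally only so the example compiles stand-alone), with the Javadoc's `MyWritable` round-tripped through in-memory byte streams:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInput;
import java.io.DataInputStream;
import java.io.DataOutput;
import java.io.DataOutputStream;
import java.io.IOException;

// Local stand-in mirroring org.apache.hadoop.io.Writable, so this
// compiles without Hadoop on the classpath.
interface Writable {
    void write(DataOutput out) throws IOException;
    void readFields(DataInput in) throws IOException;
}

/** The MyWritable example from the Javadoc, round-tripped through byte streams. */
public class MyWritable implements Writable {
    private int counter;
    private long timestamp;

    public MyWritable() {}                       // no-arg constructor for read()
    public MyWritable(int counter, long timestamp) {
        this.counter = counter;
        this.timestamp = timestamp;
    }

    @Override
    public void write(DataOutput out) throws IOException {
        out.writeInt(counter);
        out.writeLong(timestamp);
    }

    @Override
    public void readFields(DataInput in) throws IOException {
        counter = in.readInt();                  // fields are read back in write() order
        timestamp = in.readLong();
    }

    public static MyWritable read(DataInput in) throws IOException {
        MyWritable w = new MyWritable();
        w.readFields(in);
        return w;
    }

    public int getCounter() { return counter; }
    public long getTimestamp() { return timestamp; }

    public static void main(String[] args) throws IOException {
        MyWritable original = new MyWritable(42, 1700000000L);
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        original.write(new DataOutputStream(bytes));                 // serialize
        MyWritable copy = MyWritable.read(
            new DataInputStream(new ByteArrayInputStream(bytes.toByteArray()))); // deserialize
        System.out.println(copy.getCounter() + " " + copy.getTimestamp()); // 42 1700000000
    }
}
```

The framework does essentially this when it ships intermediate pairs between map and reduce tasks, which is why readFields must consume exactly what write produced, in the same order.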
- WritableComparable interface source
```java
package org.apache.hadoop.io;

import org.apache.hadoop.classification.InterfaceAudience;
import org.apache.hadoop.classification.InterfaceStability;

/**
 * A {@link Writable} which is also {@link Comparable}.
 *
 * <p><code>WritableComparable</code>s can be compared to each other, typically
 * via <code>Comparator</code>s. Any type which is to be used as a
 * <code>key</code> in the Hadoop Map-Reduce framework should implement this
 * interface.</p>
 *
 * <p>Note that <code>hashCode()</code> is frequently used in Hadoop to partition
 * keys. It's important that your implementation of hashCode() returns the same
 * result across different instances of the JVM. Note also that the default
 * <code>hashCode()</code> implementation in <code>Object</code> does <b>not</b>
 * satisfy this property.</p>
 *
 * <p>Example:</p>
 * <pre>
 *     public class MyWritableComparable implements WritableComparable&lt;MyWritableComparable&gt; {
 *       // Some data
 *       private int counter;
 *       private long timestamp;
 *
 *       public void write(DataOutput out) throws IOException {
 *         out.writeInt(counter);
 *         out.writeLong(timestamp);
 *       }
 *
 *       public void readFields(DataInput in) throws IOException {
 *         counter = in.readInt();
 *         timestamp = in.readLong();
 *       }
 *
 *       public int compareTo(MyWritableComparable o) {
 *         int thisValue = this.counter;
 *         int thatValue = o.counter;
 *         return (thisValue &lt; thatValue ? -1 : (thisValue == thatValue ? 0 : 1));
 *       }
 *
 *       public int hashCode() {
 *         final int prime = 31;
 *         int result = 1;
 *         result = prime * result + counter;
 *         result = prime * result + (int) (timestamp ^ (timestamp &gt;&gt;&gt; 32));
 *         return result;
 *       }
 *     }
 * </pre>
 */
@InterfaceAudience.Public
@InterfaceStability.Stable
public interface WritableComparable<T> extends Writable, Comparable<T> {
}
```
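The sorting role of compareTo and the cross-JVM-stable hashCode can be illustrated stand-alone. As above, the two interfaces are declared locally only so the sketch compiles without Hadoop on the classpath; `IntKey` is a hypothetical key type in the shape of the Javadoc example:

```java
import java.io.DataInput;
import java.io.DataOutput;
import java.io.IOException;
import java.util.Arrays;
import java.util.List;

// Local stand-ins mirroring the Hadoop interfaces.
interface Writable {
    void write(DataOutput out) throws IOException;
    void readFields(DataInput in) throws IOException;
}
interface WritableComparable<T> extends Writable, Comparable<T> {}

/** A hypothetical key type: sortable, with a hashCode computed only from field values. */
public class IntKey implements WritableComparable<IntKey> {
    private int counter;

    public IntKey() {}
    public IntKey(int counter) { this.counter = counter; }

    @Override public void write(DataOutput out) throws IOException { out.writeInt(counter); }
    @Override public void readFields(DataInput in) throws IOException { counter = in.readInt(); }

    // This ordering is what drives the framework's sort of map-output keys.
    @Override public int compareTo(IntKey o) { return Integer.compare(counter, o.counter); }

    // Derived purely from field values, so it is identical across JVMs
    // (unlike the identity-based Object.hashCode()), which matters for partitioning.
    @Override public int hashCode() { return 31 + counter; }
    @Override public boolean equals(Object o) {
        return o instanceof IntKey && ((IntKey) o).counter == counter;
    }

    public int getCounter() { return counter; }

    public static void main(String[] args) {
        List<IntKey> keys = Arrays.asList(new IntKey(3), new IntKey(1), new IntKey(2));
        keys.sort(null); // natural ordering via compareTo, as the framework sorts keys
        keys.forEach(k -> System.out.print(k.getCounter() + " ")); // prints: 1 2 3
    }
}
```

The default partitioner routes a key by hashing it, so two map tasks on different machines must compute the same hash for equal keys — hence the Javadoc's warning about Object's default hashCode.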