This document describes how to install, configure and manage Hadoop clusters ranging from a few nodes to extremely large clusters with thousands of nodes.
To play with Hadoop, you may first want to install it on a single machine (see Single Node Setup).
Download a stable version of Hadoop from an Apache mirror.
Installing a Hadoop cluster typically involves unpacking the software on all the machines in the cluster or installing RPMs.
Typically one machine in the cluster is designated as the NameNode and another machine as the ResourceManager, exclusively. These are the masters.
The rest of the machines in the cluster act as both DataNode and NodeManager. These are the slaves.
The following sections describe how to configure a Hadoop cluster.
Configuration Files
Hadoop configuration is driven by two types of important configuration files:
Read-only default configuration: core-default.xml, hdfs-default.xml, yarn-default.xml and mapred-default.xml.
Site-specific configuration: conf/core-site.xml, conf/hdfs-site.xml, conf/yarn-site.xml and conf/mapred-site.xml.
Additionally, you can control the Hadoop scripts found in the bin directory of the distribution by setting site-specific values via conf/hadoop-env.sh and yarn-env.sh.
Site Configuration
To configure the Hadoop cluster you will need to configure the environment in which the Hadoop daemons execute as well as the configuration parameters for the Hadoop daemons.
The Hadoop daemons are NameNode/DataNode and ResourceManager/NodeManager.
Administrators should use the conf/hadoop-env.sh and conf/yarn-env.sh scripts to do site-specific customization of the Hadoop daemons' process environment.
At the very least you should specify JAVA_HOME so that it is correctly defined on each remote node.
In most cases you should also specify HADOOP_PID_DIR and HADOOP_SECURE_DN_PID_DIR to point to directories that can only be written to by the users that are going to run the Hadoop daemons. Otherwise there is the potential for a symlink attack.
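As a minimal sketch, those environment customizations might look like this in conf/hadoop-env.sh (the JDK path and PID directory below are placeholders, not prescribed values):
export JAVA_HOME=/usr/java/latest          # placeholder JDK location; must resolve on every node
export HADOOP_PID_DIR=/var/run/hadoop      # placeholder; writable only by the daemon user
export HADOOP_SECURE_DN_PID_DIR=/var/run/hadoop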
Administrators can configure individual daemons using the configuration options shown in the table below.
Daemon | Environment Variable
---|---
NameNode | HADOOP_NAMENODE_OPTS
DataNode | HADOOP_DATANODE_OPTS
Secondary NameNode | HADOOP_SECONDARYNAMENODE_OPTS
ResourceManager | YARN_RESOURCEMANAGER_OPTS
NodeManager | YARN_NODEMANAGER_OPTS
WebAppProxy | YARN_PROXYSERVER_OPTS
Map Reduce Job History Server | HADOOP_JOB_HISTORYSERVER_OPTS
For example, to configure the NameNode to use parallelGC, the following statement should be added in hadoop-env.sh:
export HADOOP_NAMENODE_OPTS="-XX:+UseParallelGC ${HADOOP_NAMENODE_OPTS}"
Other useful configuration parameters that you can customize include:
HADOOP_LOG_DIR / YARN_LOG_DIR - The directory where the daemons' log files are stored. They are automatically created if they don't exist.
HADOOP_HEAPSIZE / YARN_HEAPSIZE - The maximum amount of heap to use, in MB, e.g. if the variable is set to 1000 the heap will be set to 1000 MB. This is used to configure the heap size for the daemon. By default the value is 1000. If you want to configure the value separately for each daemon, you can use the variables below; an example override follows the table.
Daemon | Environment Variable
---|---
ResourceManager | YARN_RESOURCEMANAGER_HEAPSIZE
NodeManager | YARN_NODEMANAGER_HEAPSIZE
WebAppProxy | YARN_PROXYSERVER_HEAPSIZE
Map Reduce Job History Server | HADOOP_JOB_HISTORYSERVER_HEAPSIZE
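For instance, a per-daemon override might look like the following line in conf/yarn-env.sh (the 2000 MB figure is an arbitrary illustration):
export YARN_RESOURCEMANAGER_HEAPSIZE=2000   # assumed value; overrides YARN_HEAPSIZE for this daemon only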
This section deals with the important parameters to be specified in the following configuration files:
conf/core-site.xml
Parameter | Value | Notes
---|---|---
fs.defaultFS | NameNode URI | hdfs://host:port/
io.file.buffer.size | 131072 | Size of read/write buffer used in SequenceFiles.
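Put together, a minimal conf/core-site.xml might look like the following (nn.example.com and port 8020 are placeholders):
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <!-- placeholder NameNode URI -->
    <value>hdfs://nn.example.com:8020/</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <!-- 128 KB read/write buffer for SequenceFiles -->
    <value>131072</value>
  </property>
</configuration>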
conf/hdfs-site.xml
Configurations for NameNode:
Parameter | Value | Notes
---|---|---
dfs.namenode.name.dir | Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently. | If this is a comma-delimited list of directories then the name table is replicated in all of the directories, for redundancy.
dfs.namenode.hosts / dfs.namenode.hosts.exclude | List of permitted/excluded DataNodes. | If necessary, use these files to control the list of allowable datanodes.
dfs.blocksize | 268435456 | HDFS blocksize of 256MB for large file-systems.
dfs.namenode.handler.count | 100 | More NameNode server threads to handle RPCs from a large number of DataNodes.
Configurations for DataNode:
Parameter | Value | Notes
---|---|---
dfs.datanode.data.dir | Comma-separated list of paths on the local filesystem of a DataNode where it should store its blocks. | If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices.
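A sketch of conf/hdfs-site.xml combining the NameNode and DataNode parameters above (the /data paths are placeholders):
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <!-- placeholder dirs; the name table is replicated in both for redundancy -->
    <value>/data/1/dfs/nn,/data/2/dfs/nn</value>
  </property>
  <property>
    <name>dfs.blocksize</name>
    <!-- 256 MB blocks for large file-systems -->
    <value>268435456</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <!-- placeholder dirs, typically on different physical devices -->
    <value>/data/1/dfs/dn,/data/2/dfs/dn</value>
  </property>
</configuration>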
conf/yarn-site.xml
Configurations for ResourceManager and NodeManager:
Parameter | Value | Notes
---|---|---
yarn.acl.enable | true / false | Enable ACLs? Defaults to false.
yarn.admin.acl | Admin ACL | ACL to set admins on the cluster. ACLs are of the form comma-separated-users space comma-separated-groups. Defaults to the special value of *, which means anyone. The special value of just space means no one has access.
yarn.log-aggregation-enable | false | Configuration to enable or disable log aggregation.
Configurations for ResourceManager:
Parameter | Value | Notes
---|---|---
yarn.resourcemanager.address | ResourceManager host:port for clients to submit jobs. | host:port
yarn.resourcemanager.scheduler.address | ResourceManager host:port for ApplicationMasters to talk to the Scheduler to obtain resources. | host:port
yarn.resourcemanager.resource-tracker.address | ResourceManager host:port for NodeManagers. | host:port
yarn.resourcemanager.admin.address | ResourceManager host:port for administrative commands. | host:port
yarn.resourcemanager.webapp.address | ResourceManager web-ui host:port. | host:port
yarn.resourcemanager.scheduler.class | ResourceManager Scheduler class. | CapacityScheduler (recommended), FairScheduler (also recommended), or FifoScheduler
yarn.scheduler.minimum-allocation-mb | Minimum memory allocated to every container request at the Resource Manager. | In MBs
yarn.scheduler.maximum-allocation-mb | Maximum memory allocated to every container request at the Resource Manager. | In MBs
yarn.resourcemanager.nodes.include-path / yarn.resourcemanager.nodes.exclude-path | List of permitted/excluded NodeManagers. | If necessary, use these files to control the list of allowable NodeManagers.
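For example, the ResourceManager portion of conf/yarn-site.xml might read as follows (rm.example.com is a placeholder; the ports shown are the customary YARN defaults):
<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <!-- client job submission -->
    <value>rm.example.com:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <!-- ApplicationMasters obtain resources here -->
    <value>rm.example.com:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.class</name>
    <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.capacity.CapacityScheduler</value>
  </property>
</configuration>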
Configurations for NodeManager:
Parameter | Value | Notes
---|---|---
yarn.nodemanager.resource.memory-mb | Resource i.e. available physical memory, in MB, for a given NodeManager | Defines total available resources on the NodeManager to be made available to running containers
yarn.nodemanager.vmem-pmem-ratio | Maximum ratio by which virtual memory usage of tasks may exceed physical memory | The virtual memory usage of each task may exceed its physical memory limit by this ratio. The total amount of virtual memory used by tasks on the NodeManager may exceed its physical memory usage by this ratio.
yarn.nodemanager.local-dirs | Comma-separated list of paths on the local filesystem where intermediate data is written. | Multiple paths help spread disk i/o.
yarn.nodemanager.log-dirs | Comma-separated list of paths on the local filesystem where logs are written. | Multiple paths help spread disk i/o.
yarn.nodemanager.log.retain-seconds | 10800 | Default time (in seconds) to retain log files on the NodeManager. Only applicable if log-aggregation is disabled.
yarn.nodemanager.remote-app-log-dir | /logs | HDFS directory where the application logs are moved on application completion. Needs appropriate permissions. Only applicable if log-aggregation is enabled.
yarn.nodemanager.remote-app-log-dir-suffix | logs | Suffix appended to the remote log dir. Logs will be aggregated to ${yarn.nodemanager.remote-app-log-dir}/${user}/${thisParam}. Only applicable if log-aggregation is enabled.
yarn.nodemanager.aux-services | mapreduce_shuffle | Shuffle service that needs to be set for Map Reduce applications.
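And a NodeManager portion might look like this (the 8192 MB figure and the local paths are illustrative assumptions):
<configuration>
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <!-- assumed: 8 GB of physical memory offered to containers -->
    <value>8192</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <!-- placeholder paths; multiple disks spread i/o -->
    <value>/data/1/yarn/local,/data/2/yarn/local</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <!-- required shuffle service for MapReduce -->
    <value>mapreduce_shuffle</value>
  </property>
</configuration>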
Configurations for History Server:
Parameter | Value | Notes
---|---|---
yarn.log-aggregation.retain-seconds | -1 | How long to keep aggregation logs before deleting them. -1 disables. Be careful: setting this too small will spam the name node.
yarn.log-aggregation.retain-check-interval-seconds | -1 | Time between checks for aggregated log retention. If set to 0 or a negative value then the value is computed as one-tenth of the aggregated log retention time. Be careful: setting this too small will spam the name node.
conf/mapred-site.xml
Parameter | Value | Notes
---|---|---
mapreduce.framework.name | yarn | Execution framework set to Hadoop YARN.
mapreduce.map.memory.mb | 1536 | Larger resource limit for maps.
mapreduce.map.java.opts | -Xmx1024M | Larger heap-size for child jvms of maps.
mapreduce.reduce.memory.mb | 3072 | Larger resource limit for reduces.
mapreduce.reduce.java.opts | -Xmx2560M | Larger heap-size for child jvms of reduces.
mapreduce.task.io.sort.mb | 512 | Higher memory-limit while sorting data for efficiency.
mapreduce.task.io.sort.factor | 100 | More streams merged at once while sorting files.
mapreduce.reduce.shuffle.parallelcopies | 50 | Higher number of parallel copies run by reduces to fetch outputs from very large number of maps.
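A corresponding conf/mapred-site.xml fragment, using the values from the table above:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.map.memory.mb</name>
    <value>1536</value>
  </property>
  <property>
    <name>mapreduce.map.java.opts</name>
    <!-- child JVM heap must fit within mapreduce.map.memory.mb -->
    <value>-Xmx1024M</value>
  </property>
</configuration>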
Configurations for MapReduce JobHistory Server:
Parameter | Value | Notes
---|---|---
mapreduce.jobhistory.address | MapReduce JobHistory Server host:port | Default port is 10020.
mapreduce.jobhistory.webapp.address | MapReduce JobHistory Server Web UI host:port | Default port is 19888.
mapreduce.jobhistory.intermediate-done-dir | /mr-history/tmp | Directory where history files are written by MapReduce jobs.
mapreduce.jobhistory.done-dir | /mr-history/done | Directory where history files are managed by the MR JobHistory Server.
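As an illustration, the JobHistory entries might be set as follows (jhs.example.com is a placeholder; the ports are the defaults listed above):
<property>
  <name>mapreduce.jobhistory.address</name>
  <value>jhs.example.com:10020</value>
</property>
<property>
  <name>mapreduce.jobhistory.webapp.address</name>
  <value>jhs.example.com:19888</value>
</property>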
HDFS and the YARN components are rack-aware.
The NameNode and the ResourceManager obtain the rack information of the slaves in the cluster by invoking an API resolve in an administrator-configured module.
The API resolves the DNS name (also IP address) to a rack id.
The site-specific module to use can be configured via the configuration item topology.node.switch.mapping.impl. The default implementation of the same runs a script/command configured using topology.script.file.name. If topology.script.file.name is not set, the rack id /default-rack is returned for any passed IP address.
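A minimal sketch of such a topology script (the subnet-to-rack mapping below is invented for illustration; point topology.script.file.name at the script and make it executable):
#!/bin/bash
# Illustrative topology script: prints one rack id per argument (hostname or IP).
for node in "$@"; do
  case "$node" in
    10.1.1.*) printf '/rack1 ' ;;         # assumed subnet for rack 1
    10.1.2.*) printf '/rack2 ' ;;         # assumed subnet for rack 2
    *)        printf '/default-rack ' ;;  # fallback, mirroring Hadoop's default
  esac
done
echo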
Hadoop provides a mechanism by which administrators can configure the NodeManager to run an administrator-supplied script periodically to determine whether a node is healthy or not.
Administrators can determine if the node is in a healthy state by performing any checks of their choice in the script. If the script detects the node to be in an unhealthy state, it must print a line to standard output beginning with the string ERROR. The NodeManager spawns the script periodically and checks its output. If the script's output contains the string ERROR, as described above, the node's status is reported as unhealthy and the node is black-listed by the ResourceManager; no further tasks will be assigned to this node. However, the NodeManager continues to run the script, so if the node becomes healthy again it is removed from the ResourceManager's blacklist automatically. The node's health, along with the output of the script if it is unhealthy, is available to administrators in the ResourceManager web interface. The time since the node was healthy is also displayed in the web interface.
The following parameters can be used to control the node health monitoring script in conf/yarn-site.xml.
Parameter | Value | Notes
---|---|---
yarn.nodemanager.health-checker.script.path | Node health script | Script to check for node's health status.
yarn.nodemanager.health-checker.script.opts | Node health script options | Options for script to check for node's health status.
yarn.nodemanager.health-checker.script.interval-ms | Node health script interval | Time interval for running health script.
yarn.nodemanager.health-checker.script.timeout-ms | Node health script timeout interval | Timeout for health script execution.
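A minimal sketch of such a health script (the /data mount point and the 95% threshold are assumptions; the only contract is that an unhealthy node prints a line starting with ERROR):
#!/bin/bash
# Illustrative node health script: flags the node when the data disk is nearly full.
USAGE=$(df /data 2>/dev/null | awk 'NR==2 {gsub("%",""); print $5}')
if [ -n "$USAGE" ] && [ "$USAGE" -ge 95 ]; then
  echo "ERROR disk /data is ${USAGE}% full"   # leading ERROR marks the node unhealthy
else
  echo "OK"
fi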
The health checker script is not supposed to report ERROR just because some of the local disks go bad. The NodeManager is able to periodically check the health of its local disks (specifically nodemanager-local-dirs and nodemanager-log-dirs); once the threshold of bad directories set by the config property yarn.nodemanager.disk-health-checker.min-healthy-disks is reached, the whole node is marked unhealthy and this information is sent to the ResourceManager as well. The boot disk is either RAIDed or a failure in the boot disk is identified by the health checker script.
Typically you choose one machine in the cluster to act as the NameNode and one machine to act as the ResourceManager, exclusively. The rest of the machines act as both a DataNode and a NodeManager, and are referred to as slaves.
List all slave hostnames or IP addresses in your conf/slaves file, one per line.
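For example, a three-slave conf/slaves file might contain (hostnames are placeholders):
slave1.example.com
slave2.example.com
slave3.example.com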
Hadoop uses the Apache log4j via the Apache Commons Logging framework for logging. Edit the conf/log4j.properties file to customize the Hadoop daemons' logging configuration (log formats and so on).
Once all the necessary configuration is complete, distribute the files to the HADOOP_CONF_DIR directory on all the machines.
To start a Hadoop cluster you will need to start both the HDFS and YARN clusters.
Format a new distributed filesystem:
$ $HADOOP_PREFIX/bin/hdfs namenode -format <cluster_name>
Start HDFS with the following command, run on the designated NameNode:
$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start namenode
Run a script to start DataNodes on all slaves:
$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs start datanode
Start YARN with the following command, run on the designated ResourceManager:
$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start resourcemanager
Run a script to start NodeManagers on all slaves:
$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR start nodemanager
Start a standalone WebAppProxy server. If multiple servers are used with load balancing, it should be run on each of them:
$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh start proxyserver --config $HADOOP_CONF_DIR
Start the MapReduce JobHistory Server with the following command, run on the designated server:
$ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh start historyserver --config $HADOOP_CONF_DIR
Stop the NameNode with the following command, run on the designated NameNode:
$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop namenode
Run a script to stop DataNodes on all slaves:
$ $HADOOP_PREFIX/sbin/hadoop-daemon.sh --config $HADOOP_CONF_DIR --script hdfs stop datanode
Stop the ResourceManager with the following command, run on the designated ResourceManager:
$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop resourcemanager
Run a script to stop NodeManagers on all slaves:
$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh --config $HADOOP_CONF_DIR stop nodemanager
Stop the WebAppProxy server. If multiple servers are used with load balancing it should be run on each of them:
$ $HADOOP_YARN_HOME/sbin/yarn-daemon.sh stop proxyserver --config $HADOOP_CONF_DIR
Stop the MapReduce JobHistory Server with the following command, run on the designated server:
$ $HADOOP_PREFIX/sbin/mr-jobhistory-daemon.sh stop historyserver --config $HADOOP_CONF_DIR
This section deals with important parameters to be specified in order to run Hadoop in secure mode with strong authentication.
User Accounts for Hadoop Daemons
Ensure that the HDFS and YARN daemons run as different Unix users, e.g. hdfs and yarn. Also, ensure that the MapReduce JobHistory server runs as a different user, e.g. mapred.
It's recommended that they share a Unix group, e.g. hadoop.
User:Group | Daemons
---|---
hdfs:hadoop | NameNode, Secondary NameNode, Checkpoint Node, Backup Node, DataNode
yarn:hadoop | ResourceManager, NodeManager
mapred:hadoop | MapReduce JobHistory Server
The following table lists various paths on HDFS and the local filesystem (on all nodes) and the recommended permissions.
Filesystem | Path | User:Group | Permissions
---|---|---|---
local | dfs.namenode.name.dir | hdfs:hadoop | drwx------
local | dfs.datanode.data.dir | hdfs:hadoop | drwx------
local | $HADOOP_LOG_DIR | hdfs:hadoop | drwxrwxr-x
local | $YARN_LOG_DIR | yarn:hadoop | drwxrwxr-x
local | yarn.nodemanager.local-dirs | yarn:hadoop | drwxr-xr-x
local | yarn.nodemanager.log-dirs | yarn:hadoop | drwxr-xr-x
local | container-executor | root:hadoop | --Sr-s---
local | conf/container-executor.cfg | root:hadoop | r--------
hdfs | / | hdfs:hadoop | drwxr-xr-x
hdfs | /tmp | hdfs:hadoop | drwxrwxrwxt
hdfs | /user | hdfs:hadoop | drwxr-xr-x
hdfs | yarn.nodemanager.remote-app-log-dir | yarn:hadoop | drwxrwxrwxt
hdfs | mapreduce.jobhistory.intermediate-done-dir | mapred:hadoop | drwxrwxrwxt
hdfs | mapreduce.jobhistory.done-dir | mapred:hadoop | drwxr-x---
Kerberos Keytab files
The NameNode keytab file, on the NameNode host, should look like the following:
$ /usr/kerberos/bin/klist -e -k -t /etc/security/keytab/nn.service.keytab
Keytab name: FILE:/etc/security/keytab/nn.service.keytab
KVNO Timestamp Principal
4 07/18/11 21:08:09 nn/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 nn/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 nn/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
The Secondary NameNode keytab file, on that host, should look like the following:
$ /usr/kerberos/bin/klist -e -k -t /etc/security/keytab/sn.service.keytab
Keytab name: FILE:/etc/security/keytab/sn.service.keytab
KVNO Timestamp Principal
4 07/18/11 21:08:09 sn/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 sn/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 sn/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
The DataNode keytab file, on each host, should look like the following:
$ /usr/kerberos/bin/klist -e -k -t /etc/security/keytab/dn.service.keytab
Keytab name: FILE:/etc/security/keytab/dn.service.keytab
KVNO Timestamp Principal
4 07/18/11 21:08:09 dn/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 dn/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 dn/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
The ResourceManager keytab file, on the ResourceManager host, should look like the following:
$ /usr/kerberos/bin/klist -e -k -t /etc/security/keytab/rm.service.keytab
Keytab name: FILE:/etc/security/keytab/rm.service.keytab
KVNO Timestamp Principal
4 07/18/11 21:08:09 rm/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 rm/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 rm/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
The NodeManager keytab file, on each host, should look like the following:
$ /usr/kerberos/bin/klist -e -k -t /etc/security/keytab/nm.service.keytab
Keytab name: FILE:/etc/security/keytab/nm.service.keytab
KVNO Timestamp Principal
4 07/18/11 21:08:09 nm/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 nm/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 nm/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
The MapReduce JobHistory Server keytab file, on that host, should look like the following:
$ /usr/kerberos/bin/klist -e -k -t /etc/security/keytab/jhs.service.keytab
Keytab name: FILE:/etc/security/keytab/jhs.service.keytab
KVNO Timestamp Principal
4 07/18/11 21:08:09 jhs/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 jhs/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 jhs/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-256 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (AES-128 CTS mode with 96-bit SHA-1 HMAC)
4 07/18/11 21:08:09 host/full.qualified.domain.name@REALM.TLD (ArcFour with HMAC/md5)
Once the Hadoop cluster is up and running, check the web UIs of the components as described below:
Daemon | Web Interface | Notes
---|---|---
NameNode | http://nn_host:port/ | Default HTTP port is 50070.
ResourceManager | http://rm_host:port/ | Default HTTP port is 8088.
MapReduce JobHistory Server | http://jhs_host:port/ | Default HTTP port is 19888.