Hadoop Learning, Day 0: Configuring a Distributed Cluster (Translated)

  Prerequisites

Required Software

  1. Java™ 1.6.x, preferably from Sun, must be installed. (At least Java 1.6, and preferably Sun's JDK rather than OpenJDK.)
  2. ssh must be installed and sshd must be running, because the Hadoop scripts manage remote Hadoop daemons over ssh. (On Linux, ssh is usually already present; make sure sshd is started.)

The cluster machines must trust each other over SSH, so that logins work without a password. In fact you need this even before Hadoop runs, when copying the downloaded release to the other machines with scp; otherwise you type a password every time. See: http://lvdccyb.iteye.com/blog/1163686
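As a minimal sketch of that trust setup, assuming OpenSSH is installed (the slave hostname is a placeholder):

```shell
# Keys are generated into a scratch directory here so nothing under
# ~/.ssh is touched; on a real cluster you would use ~/.ssh/id_rsa.
KEYDIR=$(mktemp -d)
ssh-keygen -t rsa -N "" -f "$KEYDIR/id_rsa" -q
# Copy the public key to every slave ("slave1" is a placeholder name):
#   ssh-copy-id -i "$KEYDIR/id_rsa.pub" slave1
# After that, "ssh slave1" should log in without prompting.
```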

Installation

Typically one machine in the cluster is designated as the NameNode and another machine as the JobTracker, exclusively. These are the masters. The rest of the machines in the cluster act as both DataNode and TaskTracker. These are the slaves.

The root of the distribution is referred to as HADOOP_HOME. All machines in the cluster usually have the same HADOOP_HOME path.


 

Configuration

 

Configuring the Hadoop Daemons

This section deals with important parameters to be specified in the following: 
conf/core-site.xml:

 

fs.default.name: URI of the NameNode, e.g. hdfs://hostname/
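For instance, a minimal conf/core-site.xml might look like this (the hostname and port are placeholders for your own NameNode):

```xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <!-- "master:9000" is a placeholder for your NameNode host:port -->
    <value>hdfs://master:9000</value>
  </property>
</configuration>
```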

 


conf/hdfs-site.xml:

 

dfs.name.dir: Path on the local filesystem where the NameNode stores the namespace and transaction logs persistently. If this is a comma-delimited list of directories, the name table is replicated in all of them, for redundancy.

dfs.data.dir: Comma-separated list of paths on the local filesystem of a DataNode where it should store its blocks. If this is a comma-delimited list of directories, data is stored in all the named directories, typically on different devices.
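As a sketch, a redundant NameNode store and two data disks could be configured like this (all paths are placeholders):

```xml
<configuration>
  <property>
    <name>dfs.name.dir</name>
    <!-- two placeholder directories; the name table is replicated to both -->
    <value>/data/1/dfs/nn,/data/2/dfs/nn</value>
  </property>
  <property>
    <name>dfs.data.dir</name>
    <!-- placeholder paths, ideally on different physical devices -->
    <value>/data/1/dfs/dn,/data/2/dfs/dn</value>
  </property>
</configuration>
```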


conf/mapred-site.xml:

 

mapred.job.tracker: Host (or IP) and port of the JobTracker, as a host:port pair.

mapred.system.dir: Path on HDFS where the MapReduce framework stores system files, e.g. /hadoop/mapred/system/. This is in the default filesystem (HDFS) and must be accessible from both the server and client machines.

mapred.local.dir: Comma-separated list of paths on the local filesystem where temporary MapReduce data is written. Multiple paths help spread disk I/O.

mapred.tasktracker.{map|reduce}.tasks.maximum: The maximum number of map/reduce tasks run simultaneously on a given TaskTracker, set individually for each kind. Defaults to 2 (2 maps and 2 reduces); vary it depending on your hardware.

dfs.hosts / dfs.hosts.exclude: Lists of permitted/excluded DataNodes. If necessary, use these files to control the set of allowed DataNodes.

mapred.hosts / mapred.hosts.exclude: Lists of permitted/excluded TaskTrackers. If necessary, use these files to control the set of allowed TaskTrackers.

mapred.queue.names: Comma-separated list of queues to which jobs can be submitted. The MapReduce system always supports at least one queue named default, so this parameter's value should always contain the string default. Some job schedulers supported in Hadoop, such as the Capacity Scheduler, support multiple queues; if such a scheduler is in use, the list of configured queue names must be specified here. Once queues are defined, users can submit jobs to a queue via the mapred.job.queue.name property in the job configuration. There may be a separate, scheduler-managed configuration file for the properties of these queues; refer to the scheduler's documentation for details.

mapred.acls.enabled: Boolean specifying whether queue ACLs and job ACLs are checked when authorizing users for queue and job operations. If true, queue ACLs are checked when submitting and administering jobs, and job ACLs are checked when authorizing the viewing and modification of jobs. Queue ACLs are specified with parameters of the form mapred.queue.queue-name.acl-name, described below under mapred-queue-acls.xml. Job ACLs are described under Job Authorization.
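A minimal conf/mapred-site.xml covering the essential entry might look like this ("master" and the port are placeholders for your JobTracker):

```xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <!-- placeholder host:port for the JobTracker -->
    <value>master:9001</value>
  </property>
  <property>
    <name>mapred.tasktracker.map.tasks.maximum</name>
    <!-- 2 is the default; tune to your hardware -->
    <value>2</value>
  </property>
</configuration>
```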


conf/mapred-queue-acls.xml

 

mapred.queue.queue-name.acl-submit-job: List of users and groups that may submit jobs to the specified queue-name. Both lists are comma-separated names, and the two lists are separated by a blank. Example: user1,user2 group1,group2. To specify only a list of groups, start the value with a blank.

mapred.queue.queue-name.acl-administer-jobs: List of users and groups that may view job details, change the priority of, or kill jobs submitted to the specified queue-name. The format is the same as above: user1,user2 group1,group2, with a leading blank to specify only groups. Note that the owner of a job can always change the priority of or kill his/her own job, irrespective of the ACLs.
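For the built-in default queue, a sketch of conf/mapred-queue-acls.xml (the user and group names are placeholders):

```xml
<configuration>
  <property>
    <name>mapred.queue.default.acl-submit-job</name>
    <!-- placeholder names; note the single blank separating
         the user list from the group list -->
    <value>user1,user2 group1</value>
  </property>
  <property>
    <name>mapred.queue.default.acl-administer-jobs</name>
    <!-- leading blank: groups only, no individual users -->
    <value> admins</value>
  </property>
</configuration>
```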

Typically all the above parameters are marked as final to ensure that they cannot be overridden by user applications.

That is a lot of parameters, so I only set a few of them. The final configuration is as follows:

conf/hdfs-site.xml:

<configuration>
  <property>
    <name>dfs.support.append</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.datanode.max.xcievers</name>
    <value>4096</value>
  </property>
  <property>
    <name>dfs.block.size</name>
    <value>134217728</value>
  </property>
  <property>
    <name>dfs.namenode.handler.count</name>
    <value>40</value>
  </property>
</configuration>

  conf/hadoop-env.sh

Here only the JAVA_HOME entry was changed.
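A sketch of that change (the JDK path below is only a placeholder; substitute your own install location):

```shell
# In conf/hadoop-env.sh, uncomment and set JAVA_HOME, e.g.:
export JAVA_HOME=/usr/java/jdk1.6.0   # placeholder path
```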

conf/mapred-site.xml

 

 

<configuration>
  <property>
    <name>mapred.reduce.parallel.copies</name>
    <value>20</value>
  </property>
</configuration>

 

 

Hadoop Startup

 

To start a Hadoop cluster you will need to start both the HDFS and Map/Reduce cluster.

Format a new distributed filesystem:
$ bin/hadoop namenode -format

Start the HDFS with the following command, run on the designated NameNode:
$ bin/start-dfs.sh

The bin/start-dfs.sh script also consults the ${HADOOP_CONF_DIR}/slaves file on the NameNode and starts the DataNode daemon on all the listed slaves.

Start Map-Reduce with the following command, run on the designated JobTracker:
$ bin/start-mapred.sh

The bin/start-mapred.sh script also consults the ${HADOOP_CONF_DIR}/slaves file on the JobTracker and starts the TaskTracker daemon on all the listed slaves.
