Download the installation package

hadoop-2.2.0-cdh5.0.0-beta-1.tar.gz
Unpack

tar zxvf hadoop-2.2.0-cdh5.0.0-beta-1.tar.gz
ln -s /opt/hadoop-2.2.0-cdh5.0.0-beta-1 ~/hadoop

Unpack the tarball on every node.
Passwordless SSH in both directions

Configure /etc/hosts on every machine:

vi /etc/hosts
10.10.1.1 hadoop1
10.10.1.2 hadoop2
10.10.1.3 hadoop3

Set each machine's hostname with the hostname command, and make it permanent by changing the hostname entry in /etc/sysconfig/network.

Generate id_rsa.pub in the ~/.ssh directory and append it to the local ~/.ssh/authorized_keys:

ssh-keygen -q -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys

Verify the local login: ssh localhost

Copy the public key to the other machines and append it to their authorized_keys:

scp ~/.ssh/id_rsa.pub hadoop3:~/
cat ~/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
chmod 700 ~/.ssh

Repeat these steps on each of the other machines.
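The key-generation and permission steps above can be sketched end to end. This sketch runs against a scratch directory instead of the real ~/.ssh, so it is safe to repeat; on a real node the paths are the ~/.ssh ones shown above.

```shell
# Per-node key setup from the steps above, against a scratch directory.
SSH_DIR=$(mktemp -d)
# -N '' gives an empty passphrase, matching what the interactive run would do
ssh-keygen -q -t rsa -N '' -f "$SSH_DIR/id_rsa"
# append our own public key, exactly as for the real authorized_keys
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
# sshd refuses keys behind loose permissions, hence 700 / 600
chmod 700 "$SSH_DIR"
chmod 600 "$SSH_DIR/authorized_keys"
ls -l "$SSH_DIR"
```

The same permission rule applies on every node: if authorized_keys is group- or world-writable, sshd silently falls back to password authentication.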
Set environment variables

vi /etc/profile and add:

export JAVA_HOME=/opt/jdk1.7.0_51
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_DEV_HOME=/home/hadoop/hadoop
export PATH=$PATH:$HADOOP_DEV_HOME/bin
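A quick way to confirm the additions take effect is to source them and inspect the variables. A minimal sketch, writing the same lines to a scratch file instead of /etc/profile:

```shell
# Write the profile additions above to a scratch file and source it,
# then confirm the variables are visible in the current shell.
PROFILE=$(mktemp)
cat > "$PROFILE" <<'EOF'
export JAVA_HOME=/opt/jdk1.7.0_51
export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_DEV_HOME=/home/hadoop/hadoop
export PATH=$PATH:$HADOOP_DEV_HOME/bin
EOF
. "$PROFILE"
echo "JAVA_HOME=$JAVA_HOME"
echo "HADOOP_DEV_HOME=$HADOOP_DEV_HOME"
```

On the real machine, `source /etc/profile` (the step below) does the same thing for the system-wide file.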
Raise the process ulimit: vi /etc/security/limits.d/90-nproc.conf

*    soft    nproc    502400

Load the new profile:

source /etc/profile
Edit the Hadoop configuration

1. Create a sync script: vi hadoop/cp2slaves.sh

BASE_PATH=`dirname $0`
cd $BASE_PATH
echo `/bin/pwd`
scp etc/hadoop/* hadoop2:~/hadoop/etc/hadoop/
scp etc/hadoop/* hadoop3:~/hadoop/etc/hadoop/
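The same sync logic is easier to extend with a host loop. A sketch, using local scratch directories as stand-ins for the remote nodes so the copy can be exercised without ssh (hadoop2 and hadoop3 are the slave hostnames from this guide):

```shell
# Loop-based variant of cp2slaves.sh; on the real cluster the command
# per host would be:  scp etc/hadoop/* "$host":~/hadoop/etc/hadoop/
# Here each "host" is a local scratch directory so the copy runs anywhere.
SRC=$(mktemp -d)
echo "sample config" > "$SRC/core-site.xml"
for host in hadoop2 hadoop3; do
    dst=$(mktemp -d "/tmp/${host}.XXXXXX")   # stand-in for the remote dir
    cp "$SRC"/* "$dst"/
    echo "synced to $dst"
done
```

Adding a node then means adding one hostname to the loop rather than another scp line.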
2. In hadoop/etc/hadoop/hadoop-env.sh, set JAVA_HOME:

export JAVA_HOME=/opt/jdk

and set HADOOP_PID_DIR:

export HADOOP_PID_DIR=${HADOOP_LOG_DIR}
3. Create the Hadoop working directories:

mkdir -p /home/hadoop/tmp
mkdir -p /home/hadoop/hdfs/name
mkdir -p /home/hadoop/hdfs/data
mkdir -p /home/hadoop/hadoop-yarn

4. Edit hadoop/etc/hadoop/core-site.xml:

<property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
</property>
<property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop1:9000</value>
</property>
<property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
</property>
<property>
    <name>hadoop.proxyuser.root.hosts</name>
    <value>hadoop1</value>
</property>
<property>
    <name>hadoop.proxyuser.root.groups</name>
    <value>*</value>
</property>

5. Edit hadoop/etc/hadoop/mapred-site.xml (create it from the template first):

mv hadoop/etc/hadoop/mapred-site.xml.template hadoop/etc/hadoop/mapred-site.xml

<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop1:10020</value>
</property>
<property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop1:19888</value>
</property>
<property>
    <name>mapreduce.jobhistory.intermediate-done-dir</name>
    <value>/mr-history/tmp</value>
</property>
<property>
    <name>mapreduce.jobhistory.done-dir</name>
    <value>/mr-history/done</value>
</property>

6. Edit hadoop/etc/hadoop/hdfs-site.xml:

<property>
    <name>dfs.name.dir</name>
    <value>file:/home/hadoop/hdfs/name</value>
</property>
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop1:9001</value>
</property>
<property>
    <name>dfs.data.dir</name>
    <value>file:/home/hadoop/hdfs/data</value>
</property>
<property>
    <name>dfs.http.address</name>
    <value>hadoop1:9002</value>
</property>
<property>
    <name>dfs.replication</name>
    <value>3</value>
</property>
<property>
    <name>dfs.permissions</name>
    <value>false</value>
</property>

7. Edit hadoop/etc/hadoop/masters:

hadoop1

8. Edit hadoop/etc/hadoop/slaves:

hadoop1
hadoop2
hadoop3

9. Edit hadoop/etc/hadoop/yarn-site.xml:

<property>
    <name>yarn.resourcemanager.address</name>
    <value>hadoop1:8032</value>
</property>
<property>
    <description>The address of the scheduler interface.</description>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>hadoop1:8030</value>
</property>
<property>
    <description>The address of the RM web application.</description>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>hadoop1:8088</value>
</property>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>hadoop1:8031</value>
</property>
<property>
    <description>The address of the RM admin interface.</description>
    <name>yarn.resourcemanager.admin.address</name>
    <value>hadoop1:8033</value>
</property>
<property>
    <description>The hostname of the NM.</description>
    <name>yarn.nodemanager.hostname</name>
    <value>0.0.0.0</value>
</property>
<property>
    <description>The address of the container manager in the NM.</description>
    <name>yarn.nodemanager.address</name>
    <value>${yarn.nodemanager.hostname}:0</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
</property>
<property>
    <description>List of directories to store localized files in.</description>
    <name>yarn.nodemanager.local-dirs</name>
    <value>/home/hadoop/hadoop-yarn/cache/${user.name}/nm-local-dir</value>
</property>
<property>
    <description>Where to store container logs.</description>
    <name>yarn.nodemanager.log-dirs</name>
    <value>/home/hadoop/hadoop-yarn/containers</value>
</property>
<property>
    <name>yarn.nodemanager.remote-app-log-dir</name>
    <value>/hadoop/hadoop-yarn/apps</value>
</property>
<property>
    <name>yarn.app.mapreduce.am.staging-dir</name>
    <value>/hadoop/staging</value>
</property>
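Each *-site.xml file must wrap its <property> elements in a single <configuration> root, which the snippets above omit for brevity. A quick sanity check for a finished file, sketched against a scratch copy (the real files live under hadoop/etc/hadoop/):

```shell
# Build a minimal core-site.xml (one property from above) and verify the
# <property> tags are balanced before shipping the file to the cluster.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<?xml version="1.0"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop1:9000</value>
  </property>
</configuration>
EOF
opens=$(grep -c '<property>' "$CONF")
closes=$(grep -c '</property>' "$CONF")
[ "$opens" -eq "$closes" ] && echo "property tags balanced: $opens"
```

A malformed site file makes every daemon fail at startup with an XML parse error, so this is worth checking before the sync step below.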
10. Sync the configuration to the slaves:

sh hadoop/cp2slaves.sh
Format the NameNode

hadoop/bin/hdfs namenode -format
Start the cluster

hadoop/sbin/start-all.sh

(start-all.sh is deprecated in Hadoop 2; start-dfs.sh followed by start-yarn.sh does the same job.)
Test and verify

Open http://hadoop1:8088 in a browser, then:

hadoop/bin/hadoop fs -df -h
hadoop/bin/hadoop jar hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0-cdh5.0.0-beta-1.jar pi 5 10
Stop the cluster

hadoop/sbin/stop-all.sh