Hadoop can run on a single machine in pseudo-distributed mode, where each Hadoop daemon runs in a separate Java process.
- CentOS 5.8
- Hadoop 2.2.0
Creating a dedicated user is not strictly required, but from a security and operations standpoint it is recommended to run Hadoop under its own user:
- sudo groupadd hadoop
- sudo useradd -g hadoop hadoop
- sudo passwd hadoop
Switch to the hadoop user:
su hadoop
- ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
- cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
Test:
ssh localhost
If you can log in without entering a password, the setup succeeded.
If a password is still required, see the other post on this site, "Passwordless SSH Login", and fix the permissions of the relevant directories; a typical fix is shown below.
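The usual culprit is permissions on ~/.ssh that are too open for sshd to accept the key. Assuming the default key layout created above, tightening them looks like this:
- chmod 700 ~/.ssh
- chmod 600 ~/.ssh/authorized_keys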
Download hadoop-2.2.0.tar.gz from the official site:
tar -xvzf hadoop-2.2.0.tar.gz -C /var/
cd /var/hadoop-2.2.0/
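Extracting into /var/ normally requires root privileges, so if the tarball was unpacked with sudo you will want to hand the directory over to the hadoop user afterwards:
sudo chown -R hadoop:hadoop /var/hadoop-2.2.0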
vim ~/.bashrc
Add:
export HADOOP_PREFIX=/var/hadoop-2.2.0
export HADOOP_COMMON_HOME=$HADOOP_PREFIX
export HADOOP_HDFS_HOME=$HADOOP_PREFIX
export HADOOP_MAPRED_HOME=$HADOOP_PREFIX
export HADOOP_YARN_HOME=$HADOOP_PREFIX
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export PATH=$PATH:$HADOOP_PREFIX/bin:$HADOOP_PREFIX/sbin
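To make these variables take effect in the current shell (instead of waiting for a new login), reload the file:
source ~/.bashrc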
The Hadoop configuration files live under etc/hadoop in the installation directory by default.
vim hadoop-env.sh
The main thing to set here is JAVA_HOME; point it at the correct JDK location, for example as shown below.
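A sketch of what this line looks like; the JDK path below is only an illustration and must be replaced with the actual location on your machine:
export JAVA_HOME=/usr/java/jdk1.7.0_45   # example path only; point at your real JDK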
vim core-site.xml
- <configuration>
- <property>
- <name>fs.defaultFS</name>
- <value>hdfs://localhost</value>
- </property>
- </configuration>
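Since no port is given here, the NameNode listens on its default RPC port 8020, so this value is equivalent to hdfs://localhost:8020; add an explicit port to the value if you want a different one.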
vim hdfs-site.xml
- <configuration>
- <property>
- <name>dfs.datanode.data.dir</name>
- <value>file:///home/hadoop/hdfs/datanode</value>
- </property>
- <property>
- <name>dfs.namenode.name.dir</name>
- <value>file:///home/hadoop/hdfs/namenode</value>
- </property>
- <property>
- <name>dfs.namenode.checkpoint.dir</name>
- <value>file:///home/hadoop/hdfs/namesecondary</value>
- </property>
- <property>
- <name>dfs.replication</name>
- <value>1</value>
- </property>
- </configuration>
Hadoop will create these directories automatically.
vim yarn-site.xml
Add:
- <property>
- <name>yarn.nodemanager.aux-services</name>
- <value>mapreduce_shuffle</value>
- </property>
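Note that as of Hadoop 2.2.0 the value must be mapreduce_shuffle (with an underscore); the mapreduce.shuffle value used by earlier 2.x releases is no longer accepted and will keep the NodeManager from starting.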
mv mapred-site.xml.template mapred-site.xml
vim mapred-site.xml
Add:
- <property>
- <name>mapreduce.framework.name</name>
- <value>yarn</value>
- </property>
hdfs namenode -format
start-dfs.sh
start-yarn.sh
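To confirm that everything came up, list the running Java daemons with jps; in pseudo-distributed mode you should see NameNode, DataNode, SecondaryNameNode, ResourceManager and NodeManager, and the web UIs are at http://localhost:50070 (HDFS) and http://localhost:8088 (YARN). A quick smoke test is to browse HDFS and run one of the bundled example jobs:
jps
hdfs dfs -ls /
hadoop jar $HADOOP_PREFIX/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar pi 2 5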
If you pull in the Hadoop jars via Maven, pay close attention to the version of the Hadoop cluster: is it 1.x or 2.x?
Otherwise you will run into errors like "Server IPC version 7 cannot communicate with client version 4".
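If you are unsure which version the cluster is running, you can check it on any node where the hadoop binary is on the PATH:
hadoop version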
For a Hadoop 1.x cluster, add a dependency along these lines to pom.xml:
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-core</artifactId>
<version>1.2.0</version>
</dependency>
Note that this dependency is not complete; if you write MapReduce jobs or work with HDFS, you will need to pull in additional dependencies.
For Hadoop 2.x, add a dependency along these lines:
<dependency>
<groupId>org.apache.hadoop</groupId>
<artifactId>hadoop-client</artifactId>
<version>2.4.0</version>
</dependency>
This dependency pulls in essentially all of the related jars.