A Hadoop pseudo-distributed installation runs all of the distributed daemons on a single node; this is the recommended setup for a local development environment:
#Configure passwordless SSH login
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
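If ssh localhost still prompts for a password afterwards, the usual culprit is file permissions; a quick check, assuming the default ~/.ssh layout:
#sshd ignores authorized_keys that are group- or world-writable
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys
#should print OK without asking for a password
ssh localhost 'echo OK'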
#Create the directories
cd /Users/Administrator/hadoop
mkdir tmp
mkdir -p hdfs/name
mkdir hdfs/data
#Edit the configuration files
vi etc/hadoop/core-site.xml
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://localhost:9000</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:///Users/Administrator/hadoop/tmp</value>
</property>
</configuration>
vi etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.namenode.name.dir</name>
<value>file:///Users/Administrator/hadoop/hdfs/name</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<value>file:///Users/Administrator/hadoop/hdfs/data</value>
</property>
</configuration>
cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml
vi etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
vi etc/hadoop/yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
</configuration>
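If the startup scripts later complain that they cannot find the JDK, a common fix on the Mac is to set JAVA_HOME explicitly in etc/hadoop/hadoop-env.sh; a minimal sketch using the standard macOS helper:
#In etc/hadoop/hadoop-env.sh
export JAVA_HOME=$(/usr/libexec/java_home)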
#Format the NameNode (bin/hadoop namenode -format also works, but is deprecated in Hadoop 2.x)
bin/hdfs namenode -format
#Start the services
sbin/start-dfs.sh
sbin/start-yarn.sh
#Check the processes; seeing the following means the installation succeeded
MacBook-Pro:hadoop Administrator$ jps
696 DataNode
972 Jps
874 ResourceManager
780 SecondaryNameNode
632 NameNode
944 NodeManager
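Beyond jps, you can give HDFS and YARN a quick smoke test; a minimal run, assuming the examples jar bundled with the distribution (the exact jar name varies by version, hence the glob):
#Create a home directory in HDFS and list the root
bin/hdfs dfs -mkdir -p /user/Administrator
bin/hdfs dfs -ls /
#Run the bundled pi estimator on YARN (2 maps, 5 samples each)
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar pi 2 5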
Problems encountered:
1.Unable to load realm info from SCDynamicStore
Fix: add the following to hadoop-env.sh: export HADOOP_OPTS="-Djava.security.krb5.realm=OX.AC.UK -Djava.security.krb5.kdc=kdc0.ox.ac.uk:kdc1.ox.ac.uk"
2.Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
This is because the pre-built packages from the Hadoop site are generally 32-bit Linux builds and lack native libraries for the Mac; you can build them yourself by compiling Hadoop. Steps:
1) Install Maven
mkdir ~/applib/
tar zxf apache-maven-3.0.5.tar.gz -C ~/applib/
sudo vi /etc/profile and add:
export MAVEN_HOME=~/applib/apache-maven-3.0.5
export PATH=$PATH:$MAVEN_HOME/bin
Verify the installation:
mvn -version
2) Install protobuf
tar zxf protobuf-2.5.0.tar.gz
cd protobuf-2.5.0
./configure --prefix=/apps/dev/protobuf
make -j4
make install
Add protobuf to the environment variables:
export PATH=/apps/dev/protobuf/bin:$PATH
export DYLD_LIBRARY_PATH=/apps/dev/protobuf/lib
Verify the installation:
running protoc with no arguments should print "Missing input file."
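The Hadoop 2.3 build requires protoc 2.5.0 exactly and aborts on a version mismatch, so it is worth checking the version too:
protoc --version
#expected output: libprotoc 2.5.0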
3) Download the source package hadoop-2.3.0-cdh5.0.0-src.tar.gz
tar zxf hadoop-2.3.0-cdh5.0.0-src.tar.gz
cd hadoop-2.3.0-cdh5.0.0
mvn package -Pdist,native -DskipTests -Dtar
If the build fails to resolve some dependencies, the ** firewall may be preventing the dependency jars from being downloaded.
Fix: add a public mirror repository to Maven.
Go to the Maven installation directory, enter its conf subdirectory, and edit settings.xml, adding the following:
<mirrors>
<mirror>
<id>nexus-osc</id>
<mirrorOf>*</mirrorOf>
<name>Nexus osc</name>
<url>http://maven.oschina.net/content/groups/public/</url>
</mirror>
</mirrors>
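Note that <mirrors> must sit directly under the root <settings> element; a minimal settings.xml sketch:
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
<mirrors>
<mirror>
<id>nexus-osc</id>
<mirrorOf>*</mirrorOf>
<name>Nexus osc</name>
<url>http://maven.oschina.net/content/groups/public/</url>
</mirror>
</mirrors>
</settings>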
4) Install Homebrew
ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
5) Install the required build tools
brew install autoconf automake libtool cmake
6) Create a Classes directory under the JDK and symlink tools.jar into it:
sudo mkdir $JAVA_HOME/Classes
sudo ln -sf $JAVA_HOME/lib/tools.jar $JAVA_HOME/Classes/classes.jar
Otherwise the build fails with: Exception in thread "main" java.lang.AssertionError: Missing tools.jar at: /Library/Java/JavaVirtualMachines/jdk1.7.0_45.jdk/Contents/Home/Classes/classes.jar. Expression: file.exists()
7) The Hadoop native libraries target *nix platforms only; they are widely used on GNU/Linux, but Cygwin and Mac OS X are not supported out of the box. A patch for building the native libraries on Mac OS X has been published, and it must be applied first.
brew install wget
wget https://issues.apache.org/jira/secure/attachment/12617363/HADOOP-9648.v2.patch
patch -p1 < HADOOP-9648.v2.patch
ps: to roll the patch back, run: patch -RE -p1 < HADOOP-9648.v2.patch
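Before applying it for real, you can check that the patch applies cleanly against this source tree (--dry-run makes no changes):
patch -p1 --dry-run < HADOOP-9648.v2.patch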
8) brew install openssl is not enough: it installs only openssl itself, without the development headers (openssl-devel).
Install it through MacPorts instead; MacPorts, like brew, is a package manager, and its pkg installer can be downloaded from the MacPorts site.
sudo port -v selfupdate
sudo port install ncurses
sudo port install openssl
Otherwise the build fails with: Failed to execute goal org.apache.maven.plugins:maven-antrun-plugin:1.7:run (make) on project hadoop-pipes: An Ant BuildException has occured: exec returned: 1
9) Rerun mvn package -Pdist,native -DskipTests -Dtar (add -e -X to see debug output)
The built distribution ends up under hadoop-dist/target; copy the native libraries into your installation's native directory:
cp hadoop-dist/target/hadoop-2.3.0-cdh5.0.0/lib/native/* ~/hadoop/lib/native/
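To confirm you actually got Mac binaries, inspect the library architecture and rerun an HDFS command; the native-library warning from problem 2 should be gone (the glob is used because, depending on the build, the library may be named .so or .dylib):
file ~/hadoop/lib/native/libhadoop*
#expect a Mach-O 64-bit shared library, not an ELF one
~/hadoop/bin/hdfs dfs -ls /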
3. On the Mac, running a job jar fails with: Mkdirs failed to create /tmp/hadoop-Administrator/hadoop-unjar5349774432125303237/META-INF/license
This happens because the jar already contains a META-INF/LICENSE file, and on the Mac's case-insensitive file system the META-INF/license directory then cannot be created while unjarring.
Fix: zip -d yourjob.jar META-INF/LICENSE (yourjob.jar stands for your jar file)
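To confirm the entry is gone (yourjob.jar is again a placeholder):
unzip -l yourjob.jar | grep -i meta-inf/license
#no output means the conflicting LICENSE entry has been removed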
4.Directory /private/tmp/hadoop-Administrator/dfs/name is in an inconsistent state: storage directory does not exist or is not accessible
By default Hadoop keeps its temporary files and the NameNode/DataNode data under /tmp, which the Mac clears on reboot, so the services fail to start afterwards.
Fix: set hadoop.tmp.dir in core-site.xml and dfs.namenode.name.dir / dfs.datanode.data.dir in hdfs-site.xml to directories outside /tmp (as in the configuration above), then re-format the NameNode.
This is a record of the steps I actually went through.
The content is kept up to date on my personal WeChat public account, mangrendd.