Docker learning notes: setting up a 3-node Hadoop cluster

Prerequisites: Docker is already installed, and an Ubuntu image is available (if not, pull it from Docker Hub):

docker pull ubuntu:14.04
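
The steps below are all executed inside a container started from this image; a typical way to open an interactive shell in one is:

docker run -ti ubuntu:14.04 /bin/bash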


1. Install Java

sudo apt-get install software-properties-common python-software-properties
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java7-installer
java -version
Once Java is installed, this is a good point to commit the container and save the progress:

docker commit -m "java installed" containerId ubuntu:java
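
To double-check that the commit produced an image:

docker images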

2. Download and install Hadoop

mkdir -p soft/apache/hadoop && cd soft/apache/hadoop

wget http://mirrors.sonic.net/apache/hadoop/common/hadoop-2.6.0/hadoop-2.6.0.tar.gz

 tar xvzf hadoop-2.6.0.tar.gz

Edit ~/.bashrc and append the following:

export JAVA_HOME=/usr/lib/jvm/java-7-oracle
export HADOOP_HOME=/root/soft/apache/hadoop/hadoop-2.6.0
export HADOOP_CONFIG_HOME=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
source ~/.bashrc
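
A quick sanity check that the variables took effect (assuming the tarball really was unpacked to the path above):

hadoop version
echo $HADOOP_CONFIG_HOME
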
Now start configuring Hadoop:

  cd $HADOOP_HOME/ 
  mkdir tmp namenode datanode
  cd  $HADOOP_CONFIG_HOME/

Configure core-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
            <name>hadoop.tmp.dir</name>
            <value>/root/soft/apache/hadoop/hadoop-2.6.0/tmp</value>
            <description>A base for other temporary directories.</description>
    </property>

    <property>
            <name>fs.default.name</name>
            <value>hdfs://master:9000</value>
            <final>true</final>
            <description>The name of the default file system.  A URI whose
            scheme and authority determine the FileSystem implementation.  The
            uri's scheme determines the config property (fs.SCHEME.impl) naming
            the FileSystem implementation class.  The uri's authority is used to
            determine the host, port, etc. for a filesystem.</description>
    </property>
</configuration>

Configure hdfs-site.xml:

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
        <final>true</final>
        <description>Default block replication.
        The actual number of replications can be specified when the file is created.
        The default is used if replication is not specified in create time.
        </description>
    </property>

    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/root/soft/apache/hadoop/hadoop-2.6.0/namenode</value>
        <final>true</final>
    </property>

    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/root/soft/apache/hadoop/hadoop-2.6.0/datanode</value>
        <final>true</final>
    </property>
</configuration>
Configure mapred-site.xml:
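
In the Hadoop 2.6 distribution this file usually ships only as mapred-site.xml.template, so (assuming the default layout) copy it first:

cp mapred-site.xml.template mapred-site.xml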

<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>

<!-- Put site-specific property overrides in this file. -->

<configuration>
    <property>
        <name>mapred.job.tracker</name>
        <value>master:9001</value>
        <description>The host and port that the MapReduce job tracker runs
        at.  If "local", then jobs are run in-process as a single map
        and reduce task.
        </description>
    </property>
</configuration>
Edit hadoop-env.sh and set JAVA_HOME:

# The java implementation to use.
export JAVA_HOME=/usr/lib/jvm/java-7-oracle
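
One way to make this edit non-interactively (a sketch assuming the stock hadoop-env.sh, which already contains an export JAVA_HOME= line):

sed -i 's|^export JAVA_HOME=.*|export JAVA_HOME=/usr/lib/jvm/java-7-oracle|' $HADOOP_CONFIG_HOME/hadoop-env.sh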

Install SSH: apt-get install ssh

Generate SSH keys:

root@8ef06706f88d:/# cd ~/
root@8ef06706f88d:~# ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
root@8ef06706f88d:~# cd .ssh
root@8ef06706f88d:~/.ssh# cat id_rsa.pub >> authorized_keys
Afterwards, generate a key pair the same way in every container and copy the public keys between the containers.

For details on multi-node SSH configuration, see:

http://lidzh1109.blog.51cto.com/4820434/864473

For copying between Docker containers, my approach was: use the docker cp command to copy the master node's authorized_keys file to the host, then docker cp it from the host into slave1 and slave2.
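
A sketch of that round trip, assuming the three containers were started with the names master, slave1 and slave2, and that the Docker version in use supports copying files into a running container:

docker cp master:/root/.ssh/authorized_keys ./authorized_keys
docker cp ./authorized_keys slave1:/root/.ssh/authorized_keys
docker cp ./authorized_keys slave2:/root/.ssh/authorized_keys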

Also, SSH tends to act up here: it does not start automatically, so run service ssh start && service ssh status.

Commit the container's current state as an image named ubuntu:hadoop.
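
The command has the same shape as before (containerId is again a placeholder):

docker commit -m "hadoop installed" containerId ubuntu:hadoop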


Start three Docker containers from this image:

  docker run -ti -h master ubuntu:hadoop
  docker run -ti -h slave1 ubuntu:hadoop
  docker run -ti -h slave2 ubuntu:hadoop

Get the IP address of each of the three containers.
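
One way to read an address from the host, given a container ID or name:

docker inspect --format '{{ .NetworkSettings.IPAddress }}' containerId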

Edit /etc/hosts in each container:

172.17.0.2 master 
172.17.0.3 slave1
172.17.0.4 slave2 


cd $HADOOP_CONFIG_HOME 
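
Two steps that these notes gloss over are usually needed before the first start: listing the worker hostnames in the slaves file and formatting the namenode. A minimal sketch, run on master:

echo "slave1" > $HADOOP_CONFIG_HOME/slaves
echo "slave2" >> $HADOOP_CONFIG_HOME/slaves
hdfs namenode -format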

Start the cluster with start-dfs.sh (and start-yarn.sh if YARN daemons such as the ResourceManager are also wanted).


Check the running daemons with jps:
1223 Jps
992 SecondaryNameNode
813 NameNode
1140 ResourceManager
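
To confirm that the datanodes on slave1 and slave2 registered with the namenode (assuming HDFS came up cleanly):

hdfs dfsadmin -report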

Many of the steps here are written very briefly; for details see: http://my.oschina.net/u/1866821/blog/483243

OK! 

