Setting up a Hadoop 0.20.2 distributed system on Ubuntu 10.10

Test platform: Ubuntu 10.10, Hadoop 0.20.2, JDK 1.6

step 1. Install and configure ssh
Hadoop communicates between its daemons over ssh, so first set up passwordless login:
root$ apt-get install ssh
root$ ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
root$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
root$ ssh localhost

Once that is done, log in to confirm that no password is required (the first login asks you to confirm the host key, so press Enter; from the second login on you go straight into the system):
~$ ssh localhost
~$ exit
~$ ssh localhost
~$ exit
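If ssh localhost still prompts for a password, the usual culprit is file permissions: sshd refuses keys when ~/.ssh or authorized_keys is writable by anyone else. An optional quick fix:
~$ chmod 700 ~/.ssh
~$ chmod 600 ~/.ssh/authorized_keys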

step 2. Install Java

1. Download the Linux JDK; I used jdk-6u22-linux-i586.bin.
2. Put the file under /usr/lib/jvm/java (jvm and java are newly created directories).
3. Run the following commands:
cd /usr/lib/jvm/java
to enter that directory, then
sudo chmod u+x /usr/lib/jvm/java/jdk-6u22-linux-i586.bin
to make the .bin file executable, and finally run it:
sudo /usr/lib/jvm/java/jdk-6u22-linux-i586.bin
At the end the terminal shows:
Java(TM) SE Development Kit 6 successfully installed.

(output omitted)

Press Enter to continue.....

which confirms the installation is complete.
4. Run the following command to open the environment variable file:
sudo vi /etc/environment
and configure it as follows (note that PATH must include the JDK's bin directory, otherwise the shell cannot find java):
PATH="...:/usr/lib/jvm/java/jdk1.6.0_22/bin"
CLASSPATH=.:/usr/lib/jvm/java/jdk1.6.0_22/lib
JAVA_HOME=/usr/lib/jvm/java/jdk1.6.0_22
5. Even after the steps above, running java -version in a terminal still prompts:
The program 'java' can be found in the following packages:
* gcj-4.4-jre-headless
* gcj-4.5-jre-headless
* openjdk-6-jre-headless
Try: apt-get install <selected package>
So we still need to make our newly installed JDK the default. Run:
sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/java/jdk1.6.0_22/bin/java 300
sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/java/jdk1.6.0_22/bin/javac 300
to add our JDK to the java alternatives list, then run:
sudo update-alternatives --config java
and choose it as the system default.
Now typing java -version in a terminal should print:
java version "1.6.0_22"
Java(TM) SE Runtime Environment (build 1.6.0_22-b04)
Java HotSpot(TM) Server VM (build 17.1-b03, mixed mode)
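To double-check that both java and javac now resolve to the Sun JDK rather than gcj or OpenJDK, you can follow the alternatives symlinks (an optional sanity check):
readlink -f /usr/bin/java     # should print /usr/lib/jvm/java/jdk1.6.0_22/bin/java
javac -version                # should print javac 1.6.0_22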
step 3. Download and install Hadoop

Download Hadoop 0.20.2 and unpack it under /usr/local/hadoop/. The hadoop user referenced by the chown below is created in step 4, so create it first if needed.
•root$ tar zxvf hadoop-0.20.2.tar.gz
•root$ sudo chown -R hadoop:hadoop /usr/local/hadoop/hadoop-0.20.2
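If you still need the tarball, it can be fetched from the Apache archive (URL assumed from the usual archive layout; verify it before relying on it):
root$ wget http://archive.apache.org/dist/hadoop/core/hadoop-0.20.2/hadoop-0.20.2.tar.gz
root$ mkdir -p /usr/local/hadoop
root$ tar zxvf hadoop-0.20.2.tar.gz -C /usr/local/hadoop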

step 4. Configure hadoop-env.sh

•Enter the hadoop directory for further setup. The first file to edit is conf/hadoop-env.sh, where the JAVA_HOME, HADOOP_HOME, and PATH environment variables must be set (the XML configuration files follow in step 5).

Create the hadoop user:

$ sudo adduser hadoop
/usr/local/hadoop/hadoop-0.20.2$ vi conf/hadoop-env.sh
Paste in the following, using the JDK path from step 2 and the install path from step 3:
export JAVA_HOME=/usr/lib/jvm/java/jdk1.6.0_22
export HADOOP_HOME=/usr/local/hadoop/hadoop-0.20.2
export PATH=$PATH:$HADOOP_HOME/bin
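A quick way to confirm that hadoop-env.sh is picked up (optional check):
$ cd /usr/local/hadoop/hadoop-0.20.2
$ source conf/hadoop-env.sh
$ hadoop version     # should report Hadoop 0.20.2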

step 5. Edit the Hadoop configuration files

•Edit /usr/local/hadoop/hadoop-0.20.2/conf/core-site.xml
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hadoop/hadoop-${user.name}</value>
  </property>
</configuration>
•Edit /usr/local/hadoop/hadoop-0.20.2/conf/hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
•Edit /usr/local/hadoop/hadoop-0.20.2/conf/mapred-site.xml
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>localhost:9001</value>
  </property>
</configuration>
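A malformed XML file makes the daemons die at startup with a cryptic stack trace, so it can be worth checking that all three files are well-formed first (assumes xmllint from the libxml2-utils package):
$ cd /usr/local/hadoop/hadoop-0.20.2
$ xmllint --noout conf/core-site.xml conf/hdfs-site.xml conf/mapred-site.xml    # no output means all files parse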

step 6. Format HDFS

•With the single-machine test environment configured, we can bring up the Hadoop services; before the first start, the NameNode must be formatted:
•$ cd /usr/local/hadoop/hadoop-0.20.2
•$ source conf/hadoop-env.sh
•$ hadoop namenode -format


The output looks something like:
09/03/23 20:19:47 INFO dfs.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:  host = /localhost
STARTUP_MSG:  args = [-format]
STARTUP_MSG:  version = 0.20.3
STARTUP_MSG:  build = https://svn.apache.org/repos/asf/hadoop/core/branches/branch-0.20 -r 736250; compiled by 'ndaley' on Thu Jan 22 23:12:08 UTC 2009
************************************************************/
09/03/23 20:19:47 INFO fs.FSNamesystem: fsOwner=hadooper,hadooper
09/03/23 20:19:47 INFO fs.FSNamesystem: supergroup=supergroup
09/03/23 20:19:47 INFO fs.FSNamesystem: isPermissionEnabled=true
09/03/23 20:19:47 INFO dfs.Storage: Image file of size 82 saved in 0 seconds.
09/03/23 20:19:47 INFO dfs.Storage: Storage directory /tmp/hadoop-hadooper/dfs/name has been successfully formatted.
09/03/23 20:19:47 INFO dfs.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at /localhost
************************************************************/
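In Hadoop 0.20 the image is written to dfs/name under the hadoop.tmp.dir set in core-site.xml (the sample log above is from a machine that used the default /tmp/hadoop-<user> location). To confirm that the format succeeded with our settings:
$ ls /tmp/hadoop/hadoop-$(whoami)/dfs/name     # should contain a current/ subdirectory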

step 7. Start Hadoop

•Now use start-all.sh to start all the services: the NameNode, DataNode, SecondaryNameNode, JobTracker, and TaskTracker.
/usr/local/hadoop/hadoop-0.20.2$ bin/start-all.sh
The output looks like:
starting namenode, logging to /opt/hadoop/logs/hadoop-hadooper-namenode-vPro.out
localhost: starting datanode, logging to /opt/hadoop/logs/hadoop-hadooper-datanode-vPro.out
localhost: starting secondarynamenode, logging to /opt/hadoop/logs/hadoop-hadooper-secondarynamenode-vPro.out
starting jobtracker, logging to /opt/hadoop/logs/hadoop-hadooper-jobtracker-vPro.out
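The jps tool that ships with the JDK gives a quick health check; after a few seconds all five daemons should be listed (optional verification):
$ jps     # expect NameNode, DataNode, SecondaryNameNode, JobTracker and TaskTracker, plus Jps itself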

step 8. Test the installation

•Once everything is up, the following web pages show whether the services are running normally:
•http://localhost:50030/ - Hadoop administration interface (JobTracker)
•http://localhost:50060/ - Hadoop TaskTracker status
•http://localhost:50070/ - Hadoop DFS (NameNode) status
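To exercise the whole stack end to end, you can also run one of the example jobs bundled with the release, e.g. the pi estimator (jar name as shipped in the 0.20.2 tarball):
$ cd /usr/local/hadoop/hadoop-0.20.2
$ bin/hadoop jar hadoop-0.20.2-examples.jar pi 10 100    # runs a small MapReduce job and prints an estimate of pi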

step 9. Stop Hadoop

$ bin/stop-all.sh

Troubleshooting:

1. vi behaving oddly: Ubuntu installs the minimal vim-tiny by default, so install the full vim package:
sudo apt-get install vim
