On the master node (the first host), edit the hosts file from the command line:
vim /etc/hosts
My setup uses three cloud hosts. Append the following lines to the existing file:
ip1 master worker0 namenode
ip2 worker1 datanode1
ip3 worker2 datanode2
Here ipN stands for an available cluster IP: ip1 is the master node, while ip2 and ip3 are the worker nodes.
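A quick way to check that the names resolve (assuming ICMP is allowed between the cloud hosts); run it on the master now, and on the workers after the hosts file is copied over in a later step:
ping -c 1 worker1   #should resolve and reply using the entry added to /etc/hosts
ping -c 1 worker2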
Note that I am configuring everything as the root user, so the home directory below is /root.
If you are using a user named xxxx instead, the home directory would be /home/xxxx/.
#Run the following commands on the master node:
ssh-keygen -t rsa -P ''   #press Enter at every prompt until the key pair is generated
scp /root/.ssh/id_rsa.pub root@worker1:/root/.ssh/id_rsa.pub.master   #copy id_rsa.pub from the master to the worker and rename it id_rsa.pub.master
scp /root/.ssh/id_rsa.pub root@worker2:/root/.ssh/id_rsa.pub.master   #same as above; from here on workerN stands for worker1 and worker2
scp /etc/hosts root@workerN:/etc/hosts   #sync the hosts file so the machines can reach each other by host name
#Run the following commands on the corresponding hosts:
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys          #on the master host
cat /root/.ssh/id_rsa.pub.master >> /root/.ssh/authorized_keys   #on each workerN host
Now the master can log in to the other hosts without a password, so the startup scripts run on the master and any scp commands will no longer prompt for one.
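To confirm the passwordless login works, each of these should print the worker's hostname without asking for a password:
ssh worker1 hostname
ssh worker2 hostname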
Configure the Java environment on the master
#Download the JDK 1.8 rpm package
wget --no-check-certificate --no-cookies --header "Cookie: oraclelicense=accept-securebackup-cookie" http://download.oracle.com/otn-pub/java/jdk/8u112-b15/jdk-8u112-linux-x64.rpm
rpm -ivh jdk-8u112-linux-x64.rpm
#Add JAVA_HOME
vim /etc/profile
#Add the following lines:
#Java home
export JAVA_HOME=/usr/java/jdk1.8.0_112/
#Reload the configuration:
source /etc/profile   #a reboot works too, of course
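To verify the JDK installation and the JAVA_HOME setting:
java -version        #should report 1.8.0_112
echo $JAVA_HOME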
Configure the Java environment on the workerN hosts
#Copy the package with scp
scp jdk-8u112-linux-x64.rpm root@workerN:/root
#The remaining steps are the same as on the master node
Next, install Scala. On the master node:
#Download the Scala package:
wget -O "scala-2.12.1.rpm" "http://159.226.251.229/videoplayer/scala-2.12.1.rpm?ich_u_r_i=e43f9cc87710b8bba72b4c32577f60ea&ich_s_t_a_r_t=0&ich_e_n_d=0&ich_k_e_y=1745018917750263442428&ich_t_y_p_e=1&ich_d_i_s_k_i_d=1&ich_u_n_i_t=1"
#Install the rpm package:
rpm -ivh scala-2.12.1.rpm
#Add SCALA_HOME
vim /etc/profile
#Add the following:
#Scala Home
export SCALA_HOME=/usr/share/scala
#Reload the configuration
source /etc/profile
On the workerN nodes:
#Copy the package with scp
scp scala-2.12.1.rpm root@workerN:/root
#The remaining steps are the same as on the master node
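On each node you can verify the Scala installation with:
scala -version
echo $SCALA_HOME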
Download Hadoop on the master node:
wget http://www-eu.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
I like to keep software under /opt:
tar -xvf hadoop-2.7.3.tar.gz
mv hadoop-2.7.3 /opt
Edit /etc/profile again and add the following:
#Hadoop environment
export HADOOP_HOME=/opt/hadoop-2.7.3/
export PATH="$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH"
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export YARN_CONF_DIR=$HADOOP_HOME/etc/hadoop
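After saving the file, reload the profile and make sure the hadoop command is found:
source /etc/profile
hadoop version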
In $HADOOP_HOME/etc/hadoop/hadoop-env.sh, set JAVA_HOME as follows:
export JAVA_HOME=/usr/java/jdk1.8.0_112/
In $HADOOP_HOME/etc/hadoop/slaves, list the worker nodes:
worker1
worker2
In $HADOOP_HOME/etc/hadoop/core-site.xml:
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131072</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/opt/hadoop-2.7.3/tmp</value>
    </property>
</configuration>
In $HADOOP_HOME/etc/hadoop/hdfs-site.xml:
<configuration>
    <property>
        <name>dfs.namenode.secondary.http-address</name>
        <value>master:50090</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/opt/hadoop-2.7.3/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/opt/hadoop-2.7.3/hdfs/data</value>
    </property>
</configuration>
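The paths referenced above (hadoop.tmp.dir and the HDFS name/data directories) are normally created by Hadoop itself, but you can pre-create them if you prefer; a small sketch assuming the same paths as in the config:
mkdir -p /opt/hadoop-2.7.3/tmp                                        #matches hadoop.tmp.dir
mkdir -p /opt/hadoop-2.7.3/hdfs/name /opt/hadoop-2.7.3/hdfs/data      #matches the NameNode/DataNode dirs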
Copy the template to create the xml file:
cp mapred-site.xml.template mapred-site.xml
Contents of mapred-site.xml:
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master:19888</value>
    </property>
</configuration>
In $HADOOP_HOME/etc/hadoop/yarn-site.xml:
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>
At this point, the Hadoop setup on the master node is complete.
Before starting the cluster, we need to format the NameNode:
hadoop namenode -format
scp -r /opt/hadoop-2.7.3 root@workerN:/opt   #note: replace N with 1 or 2 here
The remaining setup on the workerN hosts (such as the /etc/profile entries) is the same as on the master.
wget -O "spark-2.1.0-bin-hadoop2.7.tgz" "http://159.226.251.230/videoplayer/spark-2.1.0-bin-hadoop2.7.tgz?ich_u_r_i=8d258cfd6421af60c998d108eae1ca4d&ich_s_t_a_r_t=0&ich_e_n_d=0&ich_k_e_y=1745018916751063032407&ich_t_y_p_e=1&ich_d_i_s_k_i_d=7&ich_u_n_i_t=1"
tar -xvf spark-2.1.0-bin-hadoop2.7.tgz mv spark-2.1.0-bin-hadoop2.7 /opt
Add the following to /etc/profile:
#Spark environment
export SPARK_HOME=/opt/spark-2.1.0-bin-hadoop2.7/
export PATH="$SPARK_HOME/bin:$PATH"
Then, in $SPARK_HOME/conf, copy the template:
cp spark-env.sh.template spark-env.sh
#Configure it as follows:
export SCALA_HOME=/usr/share/scala
export JAVA_HOME=/usr/java/jdk1.8.0_112/
export SPARK_MASTER_IP=master
export SPARK_WORKER_MEMORY=1g
export HADOOP_CONF_DIR=/opt/hadoop-2.7.3/etc/hadoop
cp slaves.template slaves
Configure it as follows:
master
worker1
worker2
Copy the configured Spark directory to the workerN nodes:
scp -r spark-2.1.0-bin-hadoop2.7 root@workerN:/opt   #run this from /opt; -r is required because it is a directory
On each worker, edit /etc/profile and add the Spark-related settings, just like on the master node.
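As an alternative to editing the file by hand on every worker, you could push the master's profile out with scp, the same way we synced /etc/hosts earlier (this assumes the master's and workers' /etc/profile are otherwise identical):
scp /etc/profile root@worker1:/etc/profile   #optional: overwrite the worker's profile with the master's copy
scp /etc/profile root@worker2:/etc/profile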
The cluster start-up script start-cluster.sh is as follows:
#!/bin/bash
echo -e "\033[31m ========Start The Cluster======== \033[0m"
echo -e "\033[31m Starting Hadoop Now !!! \033[0m"
/opt/hadoop-2.7.3/sbin/start-all.sh
echo -e "\033[31m Starting Spark Now !!! \033[0m"
/opt/spark-2.1.0-bin-hadoop2.7/sbin/start-all.sh
echo -e "\033[31m The Result Of The Command \"jps\" : \033[0m"
jps
echo -e "\033[31m ========END======== \033[0m"
(Screenshot of the start-up output omitted.)
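Besides checking jps, you can also poke the web UIs to confirm everything came up; the ports below are the Hadoop 2.7 and Spark standalone defaults (and match the yarn-site.xml above):
curl -s -o /dev/null http://master:50070 && echo "HDFS NameNode web UI is up"
curl -s -o /dev/null http://master:8088  && echo "YARN ResourceManager web UI is up"
curl -s -o /dev/null http://master:8080  && echo "Spark Master web UI is up"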
The cluster shutdown script stop-cluster.sh is as follows:
#!/bin/bash
echo -e "\033[31m ===== Stopping The Cluster ====== \033[0m"
echo -e "\033[31m Stopping Spark Now !!! \033[0m"
/opt/spark-2.1.0-bin-hadoop2.7/sbin/stop-all.sh
echo -e "\033[31m Stopping Hadoop Now !!! \033[0m"
/opt/hadoop-2.7.3/sbin/stop-all.sh
echo -e "\033[31m The Result Of The Command \"jps\" : \033[0m"
jps
echo -e "\033[31m ======END======== \033[0m"
(Screenshot of the shutdown output omitted.)
To test the cluster, I'll just use the simplest and most common example: WordCount.
The contents of the test source file (wordcount.txt) are:
Hello hadoop
hello spark
hello bigdata
Then run the following commands:
hadoop fs -mkdir -p /Hadoop/Input
hadoop fs -put wordcount.txt /Hadoop/Input
hadoop jar /opt/hadoop-2.7.3/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.3.jar wordcount /Hadoop/Input /Hadoop/Output
After the MapReduce job finishes, check the result:
hadoop fs -cat /Hadoop/Output/*
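With the three-line sample file above, the output should look roughly like this (WordCount is case-sensitive, so Hello and hello are counted separately):
Hello	1
bigdata	1
hadoop	1
hello	2
spark	1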
The Hadoop cluster is up and working!
To keep things simple, we'll use spark-shell to run a quick wordcount test of Spark.
Since we already uploaded the test source file to HDFS while testing Hadoop, we can simply reuse it here.
spark-shell
val file=sc.textFile("hdfs://master:9000/Hadoop/Input/wordcount.txt")
val rdd = file.flatMap(line => line.split(" ")).map(word => (word,1)).reduceByKey(_+_)
rdd.collect()
rdd.foreach(println)
To exit, use the following command:
:quit
And that wraps up this article.