docker
spark
1. Preparation
Three Docker containers running Ubuntu 14.04:
IP | Hostname | Cluster role | Login user |
---|---|---|---|
172.17.192.108 | hadoop1 | master/slave | tank |
172.17.192.123 | hadoop2 | slave | tank |
172.17.192.124 | hadoop3 | slave | tank |
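The write-up does not show how the three containers were created; a minimal sketch, assuming the stock ubuntu:14.04 image and Docker's default bridge network (container names, hostnames and the assigned IP addresses are assumptions and may differ from the table above):

```bash
# Create three long-running Ubuntu 14.04 containers for the cluster
docker run -dit --name hadoop1 -h hadoop1 ubuntu:14.04 /bin/bash
docker run -dit --name hadoop2 -h hadoop2 ubuntu:14.04 /bin/bash
docker run -dit --name hadoop3 -h hadoop3 ubuntu:14.04 /bin/bash

# Attach to a container to carry out the steps below
docker exec -it hadoop1 /bin/bash
```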
2. Install the JDK and configure environment variables
1) Extract the archive
sudo mkdir -p /usr/local/java
sudo tar -zxvf jdk-8u141-linux-x64.tar.gz -C /usr/local/java
2) Configure environment variables
- Open the file with vi
sudo vi /etc/profile
- Append the following environment variables at the end of profile
export JAVA_HOME=/usr/local/java/jdk1.8.0_141
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=$PATH:${JAVA_HOME}/bin
- Apply the changes
source /etc/profile
- Verify the Java installation
java -version
3. Install and configure Scala
1) Download the Scala package
wget https://downloads.lightbend.com/scala/2.12.7/scala-2.12.7.tgz
2) Extract the archive
tar -zxvf scala-2.12.7.tgz
3) Move it to /usr
sudo mv scala-2.12.7 /usr
4) Configure environment variables
vi /etc/profile
export SCALA_HOME=/usr/scala-2.12.7
export PATH=$SCALA_HOME/bin:$PATH
5) Save and reload the configuration
source /etc/profile
6) Verify the installation
scala -version
4. Configure passwordless SSH login
1) Generate an SSH key pair
ssh-keygen -t rsa
2) Append the public key to authorized_keys to enable passwordless login to the local machine
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
3) Test passwordless login to the local machine
ssh localhost
Note: the Docker containers can reach each other directly, so no firewall configuration is needed.
5. Install Hadoop
1) Extract the downloaded Hadoop archive
sudo mkdir -p /usr/local/hadoop
sudo tar -zxvf hadoop-2.7.3.tar.gz -C /usr/local/hadoop/
2) Configure core-site.xml
property | value |
---|---|
fs.default.name | hdfs://hadoop1:9000 |
hadoop.tmp.dir | /home/tank/hadoop/tmp |
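The tables in this section list only the property names and values; in the actual configuration files each pair sits inside a `<property>` element. A minimal sketch for core-site.xml (the same pattern applies to hdfs-site.xml, mapred-site.xml and yarn-site.xml; the heredoc is just one convenient way to write the file):

```bash
# Write the two core-site.xml properties from the table above
# into the Hadoop configuration directory.
cat > /usr/local/hadoop/hadoop-2.7.3/etc/hadoop/core-site.xml <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://hadoop1:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/tank/hadoop/tmp</value>
  </property>
</configuration>
EOF
```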
3) Configure hdfs-site.xml
property | value |
---|---|
dfs.namenode.secondary.http-address | hadoop1:50090 |
dfs.replication | 1 |
dfs.namenode.name.dir | /home/tank/hadoop/hdfs/name |
dfs.datanode.data.dir | /home/tank/hadoop/hdfs/data |
dfs.namenode.handler.count | 10 |
dfs.datanode.du.reserved | 10737418240 |
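The name, data, and tmp directories referenced above are local paths on each node; it usually helps to create them in advance with the right ownership (the tank user comes from the cluster table in section 1):

```bash
# On every node: create the local directories used by HDFS and hadoop.tmp.dir
sudo mkdir -p /home/tank/hadoop/tmp /home/tank/hadoop/hdfs/name /home/tank/hadoop/hdfs/data
sudo chown -R tank:tank /home/tank/hadoop
```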
4) Configure mapred-site.xml
property | value |
---|---|
mapred.child.java.opts | -Xmx1000m |
mapreduce.map.memory.mb | 1024 |
mapreduce.reduce.memory.mb | 1024 |
mapreduce.job.reduce.slowstart.completedmaps | 0.5 |
mapreduce.jobtracker.taskscheduler | org.apache.hadoop.mapred.JobQueueTaskScheduler |
mapreduce.map.maxattempts | 3 |
mapreduce.framework.name | yarn |
mapreduce.jobhistory.address | hadoop1:10020 |
mapreduce.jobhistory.webapp.address | hadoop1:19888 |
mapred.job.tracker | hadoop1:9001 |
5) Configure yarn-site.xml
property | value |
---|---|
yarn.resourcemanager.hostname | hadoop1 |
yarn.nodemanager.aux-services | mapreduce_shuffle |
yarn.nodemanager.resource.memory-mb | 8192 |
yarn.scheduler.minimum-allocation-mb | 1024 |
yarn.scheduler.maximum-allocation-mb | 8192 |
yarn.log-aggregation-enable | true |
6) Edit hadoop-env.sh and set the JDK path
export JAVA_HOME=/usr/local/java/jdk1.8.0_141
7) Add Hadoop environment variables
sudo vi /etc/profile
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.7.3
export PATH=$PATH:${HADOOP_HOME}/bin
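As with the JDK and Scala settings, the profile needs to be re-sourced before the new variables take effect; a quick check:

```bash
source /etc/profile
hadoop version   # should report Hadoop 2.7.3
```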
8) Repeat the preceding steps on every node in the cluster, and set up mutual passwordless SSH login between the nodes
- Edit /etc/hosts on every node and add the following entries:
172.17.192.108 hadoop1
172.17.192.123 hadoop2
172.17.192.124 hadoop3
- Send the master node's id_rsa.pub to every worker node, saving it as master.pub:
scp ~/.ssh/id_rsa.pub tank@hadoop2:~/.ssh/master.pub
scp ~/.ssh/id_rsa.pub tank@hadoop3:~/.ssh/master.pub
- Append master.pub to the authorized_keys file on each worker node; the end result is that the master can log in to every worker without a password (see the sketch below).
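A minimal sketch of that last step, run on each worker node (hadoop2 and hadoop3); running ssh-copy-id from the master is an equivalent shortcut:

```bash
# On hadoop2 and hadoop3: trust the master's public key
cat ~/.ssh/master.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys

# Alternative, run from the master instead of copying the key by hand:
# ssh-copy-id tank@hadoop2
# ssh-copy-id tank@hadoop3
```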
9) Configure the cluster's worker nodes
Edit the slaves file under $HADOOP_HOME/etc/hadoop and replace its contents with the following, so that all three machines participate as worker nodes:
hadoop1
hadoop2
hadoop3
10) Start the Hadoop cluster
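If this is the first time the cluster is brought up, the HDFS NameNode normally has to be formatted once before the daemons are started; run this on the master only:

```bash
cd $HADOOP_HOME
# One-time initialization of the metadata directory configured in hdfs-site.xml
bin/hdfs namenode -format
```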
cd $HADOOP_HOME
sbin/start-all.sh
11) Check the cluster status; running jps on the master should show:
jps
NodeManager
Jps
NameNode
ResourceManager
SecondaryNameNode
DataNode
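Besides jps, the stock Hadoop 2.x web UIs on the master make a quick sanity check (the ports below are Hadoop defaults, not something configured in this guide):

```bash
# Both should return HTTP 200 once the cluster is up
curl -s -o /dev/null -w "%{http_code}\n" http://hadoop1:50070   # HDFS NameNode UI
curl -s -o /dev/null -w "%{http_code}\n" http://hadoop1:8088    # YARN ResourceManager UI
```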
12) Start the JobHistory server
sbin/mr-jobhistory-daemon.sh start historyserver
jps
NodeManager
Jps
NameNode
ResourceManager
JobHistoryServer
SecondaryNameNode
DataNode
// Processes on the worker nodes
Jps
NodeManager
DataNode
6. Set up a fully distributed Spark 2.3.2 cluster
Perform all of the following steps on the master node (hadoop1).
1) Download the binary package spark-2.3.2-bin-hadoop2.7.tgz
2) Extract the package and move it to the target directory:
tar -zxvf spark-2.3.2-bin-hadoop2.7.tgz
mv spark-2.3.2-bin-hadoop2.7 /opt
3) Edit the configuration files
- /etc/profile
export SPARK_HOME=/opt/spark-2.3.2-bin-hadoop2.7/
export PATH=$PATH:$SPARK_HOME/bin
- Copy spark-env.sh.template to spark-env.sh
cp spark-env.sh.template spark-env.sh
- Edit $SPARK_HOME/conf/spark-env.sh and add the following:
export JAVA_HOME=/usr/local/java/jdk1.8.0_141
export SCALA_HOME=/usr/scala-2.12.7
export HADOOP_HOME=/usr/local/hadoop/hadoop-2.7.3
export HADOOP_CONF_DIR=/usr/local/hadoop/hadoop-2.7.3/etc/hadoop
export SPARK_MASTER_IP=172.17.192.108
export SPARK_MASTER_HOST=172.17.192.108
export SPARK_LOCAL_IP=172.17.192.108
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_HOME=/opt/spark-2.3.2-bin-hadoop2.7
export SPARK_DIST_CLASSPATH=$(/usr/local/hadoop/hadoop-2.7.3/bin/hadoop classpath)
4) Copy slaves.template to slaves and list the worker nodes (see the sketch below)
cp slaves.template slaves
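The original does not show what goes into the slaves file; assuming the Spark workers mirror the Hadoop slaves file from section 5, it would list one hostname per line:

```bash
# Assumed contents of $SPARK_HOME/conf/slaves (mirrors the Hadoop slaves file)
cat > $SPARK_HOME/conf/slaves <<'EOF'
hadoop1
hadoop2
hadoop3
EOF
```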
5) Modify the configuration on Slave1 and Slave2 (hadoop2 and hadoop3)
On hadoop2 and hadoop3, edit /etc/profile and add the Spark variables, following the same procedure as on the master.
On hadoop2 and hadoop3, edit $SPARK_HOME/conf/spark-env.sh and change export SPARK_LOCAL_IP=172.17.192.108 to the IP address of the node itself.
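The guide does not spell out how the Spark installation itself reaches the worker nodes; a hedged sketch using scp from the master (paths and user taken from the earlier steps):

```bash
# Copy the Spark directory from the master to each worker.
# Assumes the tank user can write to /opt on the workers; otherwise
# copy into the home directory first and sudo mv it into place.
scp -r /opt/spark-2.3.2-bin-hadoop2.7 tank@hadoop2:/opt/
scp -r /opt/spark-2.3.2-bin-hadoop2.7 tank@hadoop3:/opt/
```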
6) Start the Spark cluster from the master node
/opt/spark-2.3.2-bin-hadoop2.7/sbin/start-all.sh
7) Check whether the cluster started successfully
jps
In addition to the Hadoop processes, the master now shows:
Master
In addition to the Hadoop processes, each worker now shows:
Worker
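A common follow-up check is to submit the bundled SparkPi example against the standalone master; the examples jar name below assumes the stock Spark 2.3.2 distribution, which is built against Scala 2.11:

```bash
# Submit SparkPi to the standalone cluster; the driver should print "Pi is roughly ..."
$SPARK_HOME/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://172.17.192.108:7077 \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.3.2.jar 100
```

The Spark master web UI (port 8080 by default) should also list the registered workers and the finished application.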