Spark HA Cluster Setup

This tutorial builds on an existing Hadoop HA cluster.


Versions

software    version
OS          CentOS-7-x86_64-DVD-1810.iso
Hadoop      hadoop-2.8.4
Zookeeper   zookeeper-3.4.10
Spark       spark-2.4.3

Cluster role assignment

node      roles
master1   NameNode, DFSZKFailoverController (zkfc), ResourceManager, Master
master2   NameNode, DFSZKFailoverController (zkfc), ResourceManager, Master
node1     DataNode, NodeManager, JournalNode, QuorumPeerMain, Worker
node2     DataNode, NodeManager, JournalNode, QuorumPeerMain, Worker
node3     DataNode, NodeManager, JournalNode, QuorumPeerMain, Worker
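
All of these hostnames must resolve identically on every node. A minimal /etc/hosts sketch — the IP addresses below are placeholders, substitute your own:

# /etc/hosts (same on every node; IPs are example values)
192.168.1.11  master1
192.168.1.12  master2
192.168.1.21  node1
192.168.1.22  node2
192.168.1.23  node3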

Configure Spark [all]

Extract and rename

tar -xvf spark-2.4.3-bin-hadoop2.7.tgz -C /opt/spark/

cd /opt/spark
mv spark-2.4.3-bin-hadoop2.7 spark-2.4.3

Rename Spark's start-all.sh/stop-all.sh so they do not collide with Hadoop's scripts of the same name once $SPARK_HOME/sbin is on the PATH:

cd spark-2.4.3
mv sbin/start-all.sh sbin/start-spark-all.sh
mv sbin/stop-all.sh sbin/stop-spark-all.sh
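
The [all] tag means every node needs the same installation. One approach is to finish all of the configuration below on master1 and then copy the directory out — a sketch, assuming passwordless SSH between the nodes:

# push the fully configured installation to the remaining nodes
for host in master2 node1 node2 node3; do
  rsync -a /opt/spark/ ${host}:/opt/spark/
done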

Configure environment variables

vi ~/.bash_profile

export SPARK_HOME=/opt/spark/spark-2.4.3
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

source ~/.bash_profile
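
A quick sanity check that the variables took effect:

echo $SPARK_HOME
spark-submit --version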

Configure the Spark runtime environment

cp /opt/spark/spark-2.4.3/conf/spark-env.sh.template /opt/spark/spark-2.4.3/conf/spark-env.sh
vi /opt/spark/spark-2.4.3/conf/spark-env.sh

export JAVA_HOME=/opt/env/jdk1.8.0_121
export CLASSPATH=$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=1g
# ZooKeeper-based HA: the Masters use the ensemble for leader election and
# state recovery, so SPARK_MASTER_HOST is deliberately not set here.
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=node1:2181,node2:2181,node3:2181 -Dspark.deploy.zookeeper.dir=/spark/ha"
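
Before starting any Master, confirm the ZooKeeper ensemble is up — a quick check using ZooKeeper's four-letter commands, assuming nc is installed:

for host in node1 node2 node3; do
  echo ruok | nc $host 2181 && echo " <- $host"   # a healthy server answers "imok"
done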

Configure the Spark slaves file

Each line names a Worker host; start-spark-all.sh starts a Worker on each of them over SSH:

cp /opt/spark/spark-2.4.3/conf/slaves.template /opt/spark/spark-2.4.3/conf/slaves
vi /opt/spark/spark-2.4.3/conf/slaves

node1
node2
node3
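
Since the Workers are started over SSH, passwordless SSH from master1 to every host in slaves must already work — a quick check:

for host in node1 node2 node3; do
  ssh $host hostname   # should print each hostname without a password prompt
done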

Start the cluster [master1 master2]

On master1, start-spark-all.sh launches the local Master and the Workers listed in conf/slaves; on master2, start only the standby Master:

[hadoop@master1 spark-2.4.3]$ sbin/start-spark-all.sh 
starting org.apache.spark.deploy.master.Master, logging to /opt/spark/spark-2.4.3/logs/spark-hadoop-org.apache.spark.deploy.master.Master-1-master1.out
node2: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark/spark-2.4.3/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-node2.out
node3: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark/spark-2.4.3/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-node3.out
node1: starting org.apache.spark.deploy.worker.Worker, logging to /opt/spark/spark-2.4.3/logs/spark-hadoop-org.apache.spark.deploy.worker.Worker-1-node1.out
[hadoop@master2 spark-2.4.3]$ sbin/start-master.sh 
starting org.apache.spark.deploy.master.Master, logging to /opt/spark/spark-2.4.3/logs/spark-hadoop-org.apache.spark.deploy.master.Master-2-master2.out
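
Each Master reports its role on its web UI (port 8080 by default): master1 should show Status: ALIVE and master2 Status: STANDBY. The active Master also logs the election — a quick check on master1:

grep "elected leader" /opt/spark/spark-2.4.3/logs/spark-*Master*.out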

Verify [all]

[hadoop@master1 hadoop-2.8.4]$ jps
3808 Master
4390 ResourceManager
4281 DFSZKFailoverController
4474 Jps
3981 NameNode
[hadoop@node3 sbin]$ jps
3377 QuorumPeerMain
3793 NodeManager
3491 Worker
3589 DataNode
3673 JournalNode
3935 Jps
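
To confirm failover actually works, point an application at both Masters and kill the active one; the standby should take over (recovery typically takes a minute or two) and running applications reconnect. A sketch of the test:

# connect using both masters; the shell follows whichever is active
spark-shell --master spark://master1:7077,master2:7077

# in another terminal on master1, kill the active Master (pid from jps, e.g. 3808 above)
kill 3808

# master2's web UI should now report Status: ALIVE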
