Using keepalived for Carbon Thrift Server HA

Carbon Thrift Server HA

  • Carbon Thrift Server
    • Prerequisites
    • Startup
  • keepalived
    • Installation
    • Edit check_carbon.sh
    • keepalived.conf configuration
      • Master node
      • Backup node
  • Testing

Carbon Thrift Server

Prerequisites

  • A big data cluster environment
  • Spark 2.4.5
  • CarbonData 2.0.1
    For setup, see "HDP 2.6.5: replacing Spark with 2.4.5 and integrating CarbonData 2.0.1"

Startup

Start the Carbon Thrift Server on each of the two machines:

spark-submit \
    --master yarn \
    --deploy-mode client \
    --class org.apache.carbondata.spark.thriftserver.CarbonThriftServer \
    --num-executors 5 \
    --driver-memory 2G \
    --executor-memory 4G \
    --executor-cores 4 \
    /usr/hdp/2.6.5.0-292/spark2/carbonlib/apache-carbondata-2.0.1-bin-spark2.4.5-hadoop2.7.2.jar &

keepalived

Installation

yum install keepalived -y

Edit check_carbon.sh

vim /etc/keepalived/check_carbon.sh

#!/bin/bash
# Extract the state of the Carbon Thrift Server YARN application on this node
# (adjust the hostname pattern, fpc-test-lxl-1 here, for each node)
counter=$(yarn application -list 2>/dev/null | awk '/Carbon Thrift Server/ && /fpc-test-lxl-1/ {print $9}')
if [ "${counter}" != "RUNNING" ]; then
    # Thrift server is down: stop keepalived so the VIP fails over to the backup
    /etc/init.d/keepalived stop
fi
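Since the failover decision hinges entirely on that awk pipeline, it is worth sanity-checking the extraction offline before wiring the script into keepalived (and remember to make it executable with `chmod +x /etc/keepalived/check_carbon.sh`). The sample line below is fabricated and only approximates `yarn application -list` output; the column holding the state (`$9` here) can shift with the application name and Hadoop version, so verify it against real output on your cluster:

```shell
# Fabricated sample line (illustrative only -- real `yarn application -list`
# columns may differ; adjust $9 to match your cluster's output)
sample='application_1595_0001 Carbon Thrift Server SPARK root default N/A RUNNING fpc-test-lxl-1:4040'
state=$(printf '%s\n' "$sample" | awk '/Carbon Thrift Server/ && /fpc-test-lxl-1/ {print $9}')
echo "$state"   # prints RUNNING for this sample
```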

keepalived.conf configuration

Master node

vim /etc/keepalived/keepalived.conf

! Configuration File for keepalived
global_defs {
    notification_email {
        [email protected]
    }
    notification_email_from [email protected]
    smtp_server mail.qq.com
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
 
vrrp_script chk_carbondata {
#    script "killall -0 nginx"
    script "/etc/keepalived/check_carbon.sh" 
    interval 2 
    weight -5 
    fall 3  
    rise 2 
}
 
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    mcast_src_ip 10.20.10.129
    virtual_router_id 51
    priority 101
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.20.10.2
    }
    track_script {
       chk_carbondata 
    }
}
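A note on the numbers above: `priority 101` on the master versus `priority 100` on the backup, combined with `weight -5`, means that even if the check script merely returned a failure exit code (instead of stopping keepalived outright, as check_carbon.sh does), three consecutive failures (`fall 3`) would drop the master's effective priority below the backup's and trigger failover. A quick check of the arithmetic:

```shell
# Effective master priority while the vrrp_script check is failing
master_priority=101
backup_priority=100
weight=-5
effective=$((master_priority + weight))
echo "$effective"   # prints 96
[ "$effective" -lt "$backup_priority" ] && echo "backup node wins the VRRP election"
```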

Backup node

! Configuration File for keepalived
global_defs {
    notification_email {
        1003149427@qq.com
    }
    notification_email_from 1003149427@qq.com
    smtp_server mail.qq.com
    smtp_connect_timeout 30
    router_id LVS_DEVEL
}
 
vrrp_script chk_carbondata {
#    script "killall -0 nginx"
    script "/etc/keepalived/check_carbon.sh" 
    interval 2 
    weight -5 
    fall 3  
    rise 2 
}
 
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    mcast_src_ip 10.20.10.137
    virtual_router_id 51
    priority 100
    advert_int 2
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        10.20.10.2
    }
    track_script {
       chk_carbondata 
    }
}
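Two optional tweaks, sketched here as assumptions rather than tested config: newer keepalived releases refuse to run check scripts with lax permissions unless script security is configured, and by default a recovered master will preempt the VIP back, causing a second switchover. If you prefer the VIP to stay on the backup until it fails in turn, `nopreempt` can be added (it is only honored with `state BACKUP` on both nodes):

```
global_defs {
    ...
    enable_script_security   ! refuse scripts in paths writable by non-root users
    script_user root
}

vrrp_instance VI_1 {
    state BACKUP             ! nopreempt requires state BACKUP
    nopreempt
    ...
}
```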

Testing

cd $SPARK_HOME
./bin/beeline -u jdbc:hive2://10.20.10.2:10000 -n root

Stop the master node and test again; the connection through the VIP still works.
