ElasticJob Source Code Notes

1. Spring namespace handling: NamespaceHandlerSupport

 
RegNamespaceHandler -> ZookeeperBeanDefinitionParser.parseInternal -> ZookeeperRegistryCenter.init

JobNamespaceHandler -> SimpleJobBeanDefinitionParser -> AbstractJobBeanDefinitionParser.parseInternal -> SpringJobScheduler.init -> JobScheduler.init
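As a reminder of the mechanism, a NamespaceHandlerSupport subclass simply maps XML element names to bean definition parsers in its init() method. The following is a minimal sketch of that shape; the element name and the parser's constructor arguments are illustrative rather than copied from the ElasticJob source:

import org.springframework.beans.factory.xml.NamespaceHandlerSupport;

public class JobNamespaceHandler extends NamespaceHandlerSupport {

    @Override
    public void init() {
        // Each registered parser turns a <job:simple .../> element into a bean definition
        // (ultimately a SpringJobScheduler, per the call chain above).
        registerBeanDefinitionParser("simple", new SimpleJobBeanDefinitionParser());
    }
}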

2. ZookeeperRegistryCenter.init

CuratorFramework client = builder.build();  // build the ZooKeeper (Curator) client
client.start();   // start it so that node-change listeners can be registered later
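A minimal sketch of what building such a client typically looks like with the Curator API; the connect string, namespace, and retry settings below are placeholders rather than values taken from the ElasticJob configuration:

import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.retry.ExponentialBackoffRetry;

CuratorFrameworkFactory.Builder builder = CuratorFrameworkFactory.builder()
        .connectString("localhost:2181")                    // ZooKeeper server list (placeholder)
        .retryPolicy(new ExponentialBackoffRetry(1000, 3))  // retry policy for flaky connections
        .namespace("elastic-job-demo");                     // all job nodes live under this root
CuratorFramework client = builder.build();
client.start();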

3. The JobScheduler object

Judging from the Spring configuration, i.e., the namespace handler's call to AbstractJobBeanDefinitionParser.parseInternal, each job corresponds to one JobScheduler object.
Object fields:

  • SchedulerFacade schedulerFacade

    • ConfigurationService configService; 【config】Elastic distributed job configuration service.
      Main responsibilities: persisting the distributed job configuration and reading it back.
      When reading the configuration it can either use the local cache (Curator's TreeCache) or call client.getData() to fetch it straight from ZooKeeper.
    • LeaderService leaderService; 【leader/election/latch】【leader/election/instance】Leader node service.
      Main responsibilities: electing the leader (electLeader), removing the leader (removeLeader), checking whether the current node is the leader (isLeaderUntilBlock, isLeader), and checking whether a leader already exists (hasLeader).
    • ServerService serverService; 【servers】Job server service.
      persistOnline persists the local IP into the job's server info.
    • InstanceService instanceService; 【instances】Job running-instance service; one server may host several running instances.
    • ShardingService shardingService; 【sharding】Sharding service.
    • ExecutionService executionService; 【sharding/{item}/running】Job execution service; mainly records job start and completion info.
    • MonitorService monitorService;
    • ReconcileService reconcileService; Service that reconciles inconsistent distributed job state.
    • ListenerManager listenerManager; Manager of the registry-center listeners, including the leader-election, sharding, and failover listeners, among others.
  • JobFacade jobFacade

    • ConfigurationService configService
    • ShardingService shardingService
    • ExecutionContextService executionContextService
    • ExecutionService executionService
    • FailoverService failoverService
    • List<ElasticJobListener> elasticJobListeners
    • JobEventBus jobEventBus
  • LiteJobConfiguration liteJobConfig — the Lite job configuration, i.e., this job's configuration info

  • CoordinatorRegistryCenter regCenter — the registry center used to coordinate distributed services.

4. JobScheduler.init

        //@1
        LiteJobConfiguration liteJobConfigFromRegCenter = schedulerFacade.updateJobConfiguration(liteJobConfig);
        //@2
        JobRegistry.getInstance().setCurrentShardingTotalCount(liteJobConfigFromRegCenter.getJobName(), liteJobConfigFromRegCenter.getTypeConfig().getCoreConfig().getShardingTotalCount());
        //@3
        JobScheduleController jobScheduleController = new JobScheduleController(
                createScheduler(), createJobDetail(liteJobConfigFromRegCenter.getTypeConfig().getJobClass()), liteJobConfigFromRegCenter.getJobName());
        JobRegistry.getInstance().registerJob(liteJobConfigFromRegCenter.getJobName(), jobScheduleController, regCenter);
        //@4
        schedulerFacade.registerStartUpInfo(!liteJobConfigFromRegCenter.isDisabled());
        //@5
        jobScheduleController.scheduleJob(liteJobConfigFromRegCenter.getTypeConfig().getCoreConfig().getCron());

@1: Persist the job configuration.

// liteJobConfig is serialized as JSON and persisted at /{jobNamespace}/{jobName}/config
 jobNodeStorage.replaceJobNode(ConfigurationNode.ROOT, LiteJobConfigurationGsonFactory.toJson(liteJobConfig));

@2: Record the current total sharding count in JobRegistry.

currentShardingTotalCountMap.put(jobName, currentShardingTotalCount);

@3: Create the Quartz scheduler.

  • 1. createScheduler: initialize a StdSchedulerFactory with getBaseQuartzProperties(), then getScheduler() creates the QuartzScheduler:
factory.initialize(getBaseQuartzProperties());
  • 2. createJobDetail: the JobDetail is bound to LiteJob (we will look at LiteJob's execute method in section 5); a Quartz-only sketch follows this list.
 JobDetail result = JobBuilder.newJob(LiteJob.class).withIdentity(liteJobConfig.getJobName()).build();
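Using only standard Quartz API, the two calls above roughly amount to the sketch below; the property values are assumptions (the real getBaseQuartzProperties() also configures things like the misfire threshold and a shutdown-hook plugin):

import java.util.Properties;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.impl.StdSchedulerFactory;

Properties props = new Properties();
props.put("org.quartz.threadPool.class", org.quartz.simpl.SimpleThreadPool.class.getName());
props.put("org.quartz.threadPool.threadCount", "1");      // one worker thread per job scheduler (assumed)
props.put("org.quartz.scheduler.instanceName", "myJob");  // placeholder job name

StdSchedulerFactory factory = new StdSchedulerFactory();
factory.initialize(props);        // corresponds to factory.initialize(getBaseQuartzProperties())
Scheduler scheduler = factory.getScheduler();   // the QuartzScheduler handed to JobScheduleController

JobDetail jobDetail = JobBuilder.newJob(LiteJob.class).withIdentity("myJob").build();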

@4: registerStartUpInfo — the job is about to start!

        // 1. Start all job listeners
        listenerManager.startAllListeners();
        // 2. Elect the leader node
        leaderService.electLeader();
        // 3. Persist the server's online info
        serverService.persistOnline(enabled);
        // 4. Persist the running instance's online info
        instanceService.persistOnline();
        // 5. Set the resharding flag
        shardingService.setReshardingFlag();
        monitorService.listen();
        if (!reconcileService.isRunning()) {
            reconcileService.startAsync();
        }
  • 1. startAllListeners starts the listeners that watch node changes in ZooKeeper; these changes may trigger leader re-election or resharding.
   /**
     * Start all listeners.
     */
    public void startAllListeners() {
        // LeaderElectionJobListener: leader-election listener, watches the leader node LeaderNode.INSTANCE (e.g. the leader loses its ZooKeeper connection)
        // LeaderAbdicationJobListener: leader-abdication listener (the leader server has been disabled in the configuration)
        electionListenerManager.start();
        shardingListenerManager.start();
        failoverListenerManager.start();
        monitorExecutionListenerManager.start();
        shutdownListenerManager.start();
        triggerListenerManager.start();
        rescheduleListenerManager.start();
        guaranteeListenerManager.start();
        jobNodeStorage.addConnectionStateListener(regCenterConnectionStateListener);
    }
  • 2. The leader-election process
/**
     * Execute an operation on the leader node.
     * 
     * @param latchNode the job node name used for the distributed latch
     * @param callback the callback to execute once leadership is acquired
     */
    public void executeInLeader(final String latchNode, final LeaderExecutionCallback callback) {
        try (LeaderLatch latch = new LeaderLatch(getClient(), jobNodePath.getFullPath(latchNode))) {
            latch.start();
            latch.await();
            callback.execute();
        //CHECKSTYLE:OFF
        } catch (final Exception ex) {
        //CHECKSTYLE:ON
            handleException(ex);
        }
    }
  • The election uses org.apache.curator.framework.recipes.leader.LeaderLatch, provided by the open-source Curator framework.
    LeaderLatch takes two arguments:
    CuratorFramework client: the Curator framework client.
    latchPath: the lock node path, /{jobNamespace}/{jobname}/leader/election/latch.
    LeaderLatch.start essentially creates an ephemeral sequential node under the lock path; if that node has the smallest sequence number, await() returns, otherwise the latch watches its predecessor node and blocks. Once the distributed lock is acquired, the callback is executed.
    LeaderService$LeaderElectionExecutionCallback:
@RequiredArgsConstructor
    class LeaderElectionExecutionCallback implements LeaderExecutionCallback {
        @Override
        public void execute() {
            if (!hasLeader()) {
                jobNodeStorage.fillEphemeralJobNode(LeaderNode.INSTANCE, JobRegistry.getInstance().getJobInstance(jobName).getJobInstanceId());
            }
        }
    }
  • After the election lock is acquired, if the {jobNamespace}/{jobname}/leader/election/instance node does not exist, it is created as an ephemeral node whose content is "IP address@-@process ID";
    see the JobInstance constructor: jobInstanceId = IpUtils.getIp() + "@-@" + ManagementFactory.getRuntimeMXBean().getName().split("@")[0];

3. Persist the server's online info at {jobNamespace}/{jobname}/servers/{IP address}
4. Persist the running instance's online info at {jobNamespace}/{jobname}/instances/{IP address}@-@{process ID}
5. Set the resharding flag at {jobNamespace}/{jobname}/leader/sharding/necessary
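Expressed as raw Curator calls (using the client from section 2), steps 3-5 boil down to roughly the following writes; the namespace, job name, and node contents are placeholders, and this is only a sketch of the node layout described above, not the library's own code:

import java.lang.management.ManagementFactory;
import java.net.InetAddress;
import org.apache.zookeeper.CreateMode;

String ip = InetAddress.getLocalHost().getHostAddress();
String pid = ManagementFactory.getRuntimeMXBean().getName().split("@")[0];
String root = "/myNamespace/myJob";   // {jobNamespace}/{jobname}, placeholder values

// 3. servers/{IP}: persistent node for the job server
client.create().creatingParentsIfNeeded()
        .forPath(root + "/servers/" + ip, "".getBytes());
// 4. instances/{IP}@-@{PID}: ephemeral node, removed automatically when the process goes offline
client.create().creatingParentsIfNeeded().withMode(CreateMode.EPHEMERAL)
        .forPath(root + "/instances/" + ip + "@-@" + pid, "".getBytes());
// 5. leader/sharding/necessary: flag telling the next execution that resharding is required
client.create().creatingParentsIfNeeded()
        .forPath(root + "/leader/sharding/necessary", "".getBytes());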

@5: Start the scheduled job.

  scheduler.start();
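Before scheduler.start(), JobScheduleController.scheduleJob turns the cron expression into a Quartz trigger. A hedged sketch of that step in plain Quartz API, reusing the cron, jobDetail, and scheduler from the init code above (the trigger identity and misfire policy are assumptions):

import org.quartz.CronScheduleBuilder;
import org.quartz.CronTrigger;
import org.quartz.TriggerBuilder;

CronTrigger trigger = TriggerBuilder.newTrigger()
        .withIdentity("myJob")                               // placeholder trigger name
        .withSchedule(CronScheduleBuilder.cronSchedule(cron)
                .withMisfireHandlingInstructionDoNothing())  // misfires are handled by ElasticJob itself (assumed)
        .build();
scheduler.scheduleJob(jobDetail, trigger);                   // bind the LiteJob JobDetail to the cron trigger
scheduler.start();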

5. Job execution: LiteJob.execute

As analyzed above, during JobScheduler initialization the Quartz JobDetail is bound to LiteJob.

@Override
    public void execute(final JobExecutionContext context) throws JobExecutionException {
        JobExecutorFactory.getJobExecutor(elasticJob, jobFacade).execute();
    }
/**
     * Get the job executor.
     *
     * @param elasticJob the distributed elastic job
     * @param jobFacade the internal job facade service
     * @return the job executor
     */
    @SuppressWarnings("unchecked")
    public static AbstractElasticJobExecutor getJobExecutor(final ElasticJob elasticJob, final JobFacade jobFacade) {
        if (null == elasticJob) {
            return new ScriptJobExecutor(jobFacade);
        }
        if (elasticJob instanceof SimpleJob) {
            return new SimpleJobExecutor((SimpleJob) elasticJob, jobFacade);
        }
        if (elasticJob instanceof DataflowJob) {
            return new DataflowJobExecutor((DataflowJob) elasticJob, jobFacade);
        }
        throw new JobConfigurationException("Cannot support job type '%s'", elasticJob.getClass().getCanonicalName());
    }

The Spring configuration in this project uses the simple job type, so the call chain is SimpleJobExecutor.execute() -> AbstractElasticJobExecutor.execute():

/**
     * Execute the job.
     */
    public final void execute() {
        // 1. Check the execution environment
        try {
            jobFacade.checkJobExecutionEnvironment();
        } catch (final JobExecutionEnvironmentException cause) {
            jobExceptionHandler.handleException(jobName, cause);
        }
        // 2. Get the sharding info for the current job server
        ShardingContexts shardingContexts = jobFacade.getShardingContexts();
        if (shardingContexts.isAllowSendJobEvent()) {
            // Post a job status trace event.
            jobFacade.postJobStatusTraceEvent(shardingContexts.getTaskId(), State.TASK_STAGING, String.format("Job '%s' execute begin.", jobName));
        }
        // Mark the job as misfired if the previous run is still in progress.
        if (jobFacade.misfireIfRunning(shardingContexts.getShardingItemParameters().keySet())) {
            if (shardingContexts.isAllowSendJobEvent()) {
                // Post a job status trace event.
                jobFacade.postJobStatusTraceEvent(shardingContexts.getTaskId(), State.TASK_FINISHED, String.format(
                        "Previous job '%s' - shardingItems '%s' is still running, misfired job will start after previous job completed.", jobName, 
                        shardingContexts.getShardingItemParameters().keySet()));
            }
            return;
        }

        // Invoke the before-job-executed listeners
        try {
            jobFacade.beforeJobExecuted(shardingContexts);
            //CHECKSTYLE:OFF
        } catch (final Throwable cause) {
            //CHECKSTYLE:ON
            jobExceptionHandler.handleException(jobName, cause);
        }
        // 3. Execute the job
        execute(shardingContexts, JobExecutionEvent.ExecutionSource.NORMAL_TRIGGER);
        while (jobFacade.isExecuteMisfired(shardingContexts.getShardingItemParameters().keySet())) {
            jobFacade.clearMisfire(shardingContexts.getShardingItemParameters().keySet());
            execute(shardingContexts, JobExecutionEvent.ExecutionSource.MISFIRE);
        }
        // If failover is needed, perform job failover
        jobFacade.failoverIfNecessary();
        try {
            jobFacade.afterJobExecuted(shardingContexts);
            //CHECKSTYLE:OFF
        } catch (final Throwable cause) {
            //CHECKSTYLE:ON
            jobExceptionHandler.handleException(jobName, cause);
        }
    }

1. Check the execution environment
 /**
     * Check whether the time difference, in seconds, between this job server and the registry center is within the allowed range.
     * 
     * @throws JobExecutionEnvironmentException thrown when the time difference exceeds the allowed number of seconds
     */
    public void checkMaxTimeDiffSecondsTolerable() throws JobExecutionEnvironmentException {
        int maxTimeDiffSeconds =  load(true).getMaxTimeDiffSeconds();
        if (-1  == maxTimeDiffSeconds) {
            return;
        }
        long timeDiff = Math.abs(timeService.getCurrentMillis() - jobNodeStorage.getRegistryCenterTime());
        if (timeDiff > maxTimeDiffSeconds * 1000L) {
            throw new JobExecutionEnvironmentException(
                    "Time different between job server and register center exceed '%s' seconds, max time different is '%s' seconds.", timeDiff / 1000, maxTimeDiffSeconds);
        }
    }
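For the registry-center side of the comparison, getRegistryCenterTime has to ask ZooKeeper for its clock. One way to do that with Curator is to write a throwaway node and read back its modification time, which is stamped by the ZooKeeper server. The sketch below (using the client from section 2) only illustrates the idea; the node path is an assumption, not necessarily how ElasticJob implements it:

import org.apache.zookeeper.data.Stat;

String path = "/myNamespace/myJob/systemTime/current";   // assumed path
if (client.checkExists().forPath(path) == null) {
    client.create().creatingParentsIfNeeded().forPath(path, "".getBytes());
} else {
    client.setData().forPath(path, "".getBytes());        // touch the node so its mtime reflects the server clock
}
Stat stat = new Stat();
client.getData().storingStatIn(stat).forPath(path);
long registryCenterTime = stat.getMtime();                // compared against timeService.getCurrentMillis() above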

2. Get the sharding info for the current job server

@Override
    public ShardingContexts getShardingContexts() {
        boolean isFailover = configService.load(true).isFailover();
        if (isFailover) {
            List<Integer> failoverShardingItems = failoverService.getLocalFailoverItems();
            if (!failoverShardingItems.isEmpty()) {
                return executionContextService.getJobShardingContext(failoverShardingItems);
            }
        }
        // Perform sharding if necessary
        shardingService.shardingIfNecessary();
        List<Integer> shardingItems = shardingService.getLocalShardingItems();
        if (isFailover) {
            shardingItems.removeAll(failoverService.getLocalTakeOffItems());
        }
        shardingItems.removeAll(executionService.getDisabledItems(shardingItems));
        return executionContextService.getJobShardingContext(shardingItems);
    }

The sharding logic:

public void shardingIfNecessary() {
        // Get the currently available job instances; this reads the instances node from ZooKeeper directly
        List<JobInstance> availableJobInstances = instanceService.getAvailableJobInstances();
        // Sharding is only needed if {jobNamespace}/{jobname}/leader/sharding/necessary exists (it was created during the startup steps above)
        // and the list of available job instances is not empty
        if (!isNeedSharding() || availableJobInstances.isEmpty()) {
            return;
        }
        // If this node is not the leader, block until sharding completes, polling every 100 ms
        if (!leaderService.isLeaderUntilBlock()) {
            blockUntilShardingCompleted();
            return;
        }
        // Reaching this point means the current node is the leader
        // Check and wait for shards that are still executing: for every item, check whether /sharding/{item}/running exists (what if a job crashes halfway and the running node is never removed?)
        waitingOtherShardingItemCompleted();
        // Load the job configuration with fromCache=false, i.e., bypass the cache and read the latest config from ZooKeeper
        LiteJobConfiguration liteJobConfig = configService.load(false);
        int shardingTotalCount = liteJobConfig.getTypeConfig().getCoreConfig().getShardingTotalCount();
        log.debug("Job '{}' sharding begin.", jobName);
        // Create the ephemeral node /leader/sharding/processing to mark that sharding is in progress
        jobNodeStorage.fillEphemeralJobNode(ShardingNode.PROCESSING, "");
        // Reset the item nodes under sharding: remove each item's instance node and adjust the items, since the total sharding count may have increased or decreased
        resetShardingInfo(shardingTotalCount);
        // Get the job sharding strategy instance; here it is AverageAllocationJobShardingStrategy, the average allocation strategy
        /* Sharding strategy based on average allocation.
         * 
         * If the shards cannot be divided evenly, the leftover shards are appended one by one to the lower-numbered servers.
         * For example:
         * 1. 3 servers, 9 shards: 1=[0,1,2], 2=[3,4,5], 3=[6,7,8].
         * 2. 3 servers, 8 shards: 1=[0,1,6], 2=[2,3,7], 3=[4,5].
         * 3. 3 servers, 10 shards: 1=[0,1,2,9], 2=[3,4,5], 3=[6,7,8].
         */
        JobShardingStrategy jobShardingStrategy = JobShardingStrategyFactory.getStrategy(liteJobConfig.getJobShardingStrategyClass());
        // Perform the job sharding
        // and persist the sharding result to ZooKeeper, deleting /leader/sharding/processing and /leader/sharding/necessary (PersistShardingInfoTransactionExecutionCallback.execute),
        // all committed in a single transaction: curatorTransactionFinal.commit();
        jobNodeStorage.executeInTransaction(new PersistShardingInfoTransactionExecutionCallback(jobShardingStrategy.sharding(availableJobInstances, jobName, shardingTotalCount)));
        log.debug("Job '{}' sharding complete.", jobName);
    }
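To make the comment about AverageAllocationJobShardingStrategy concrete, here is a self-contained sketch of the average allocation rule; it uses plain strings instead of JobInstance objects and is a simplification, not the library's implementation:

import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

static Map<String, List<Integer>> averageSharding(final List<String> instances, final int shardingTotalCount) {
    Map<String, List<Integer>> result = new LinkedHashMap<>();
    int itemCountPerInstance = shardingTotalCount / instances.size();
    int cursor = 0;
    // First hand out contiguous blocks of size shardingTotalCount / instanceCount.
    for (String each : instances) {
        List<Integer> items = new ArrayList<>();
        for (int i = 0; i < itemCountPerInstance; i++) {
            items.add(cursor++);
        }
        result.put(each, items);
    }
    // Then append the leftover items one by one to the lowest-indexed instances.
    for (int i = 0; cursor + i < shardingTotalCount; i++) {
        result.get(instances.get(i)).add(cursor + i);
    }
    return result;
}
// averageSharding(Arrays.asList("a", "b", "c"), 8) -> {a=[0, 1, 6], b=[2, 3, 7], c=[4, 5]}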

3. Execute the job

// process-1
process(shardingContexts, executionSource);

process-1

private void process(final ShardingContexts shardingContexts, final JobExecutionEvent.ExecutionSource executionSource) {
        Collection<Integer> items = shardingContexts.getShardingItemParameters().keySet();
        // Only one sharding item
        if (1 == items.size()) {
            int item = shardingContexts.getShardingItemParameters().keySet().iterator().next();
            JobExecutionEvent jobExecutionEvent =  new JobExecutionEvent(shardingContexts.getTaskId(), jobName, executionSource, item);
            // process-2: executed immediately in the calling thread
            process(shardingContexts, item, jobExecutionEvent);
            return;
        }
        final CountDownLatch latch = new CountDownLatch(items.size());
        for (final int each : items) {
            final JobExecutionEvent jobExecutionEvent = new JobExecutionEvent(shardingContexts.getTaskId(), jobName, executionSource, each);
            if (executorService.isShutdown()) {
                return;
            }
            // Multiple sharding items are executed concurrently
            executorService.submit(new Runnable() {
                
                @Override
                public void run() {
                    try {
                        // process-2
                        process(shardingContexts, each, jobExecutionEvent);
                    } finally {
                        latch.countDown();
                    }
                }
            });
        }
        try {
            latch.await();
        } catch (final InterruptedException ex) {
            Thread.currentThread().interrupt();
        }
    }

process-2

// Execute the task for one sharding item
    private void process(final ShardingContexts shardingContexts, final int item, final JobExecutionEvent startEvent) {
        if (shardingContexts.isAllowSendJobEvent()) {
            jobFacade.postJobExecutionEvent(startEvent);
        }
        log.trace("Job '{}' executing, item is: '{}'.", jobName, item);
        JobExecutionEvent completeEvent;
        try {
            // process-3
            process(new ShardingContext(shardingContexts, item));
            completeEvent = startEvent.executionSuccess();
            log.trace("Job '{}' executed, item is: '{}'.", jobName, item);
            if (shardingContexts.isAllowSendJobEvent()) {
                jobFacade.postJobExecutionEvent(completeEvent);
            }
            // CHECKSTYLE:OFF
        } catch (final Throwable cause) {
            // CHECKSTYLE:ON
            completeEvent = startEvent.executionFailure(cause);
            jobFacade.postJobExecutionEvent(completeEvent);
            itemErrorMessages.put(item, ExceptionUtil.transform(cause));
            jobExceptionHandler.handleException(jobName, cause);
        }
    }

process-3
SimpleJobExecutor.process

@Override
  protected void process(final ShardingContext shardingContext) {
      simpleJob.execute(shardingContext);
  }
/**
* Simple distributed job interface.
* 
* @author zhangliang
*/
public interface SimpleJob extends ElasticJob {
  
  /**
   * Execute the job.
   *
   * @param shardingContext the sharding context
   */
  void execute(ShardingContext shardingContext);
}

At this point things should look familiar: the jobs defined in our own project simply implement this interface's execute method; a minimal example follows.
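For completeness, a minimal user-side job might look like the sketch below; the class and its body are invented for illustration, and the package names assume the elastic-job-lite 2.x coordinates (com.dangdang.ddframe.job):

import com.dangdang.ddframe.job.api.ShardingContext;
import com.dangdang.ddframe.job.api.simple.SimpleJob;

public class MyDemoJob implements SimpleJob {

    @Override
    public void execute(final ShardingContext shardingContext) {
        // shardingItem is the item index assigned to this instance by shardingIfNecessary above
        System.out.println(String.format("job=%s, item=%d, param=%s",
                shardingContext.getJobName(),
                shardingContext.getShardingItem(),
                shardingContext.getShardingParameter()));
    }
}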
