ElasticJob: Usage and Encapsulation

This post is a child post of "RabbitMQ基础组件封装—整体结构" (RabbitMQ base component encapsulation, overall structure).

I. Introduction

ElasticJob is a distributed scheduling framework, so it depends on a distributed coordination component; here ZooKeeper is used as that component (registry center / distributed lock). A note on ElasticJob's sharding concept: sharding splits a job's workload across instances. For example, with 5 rows in a table, a sharding total count of 5 means each of five machines processes one row, while a count of 1 means a single machine processes all five rows.
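As a quick illustration of the sharding idea, here is a minimal sketch of a simple job that reads its assigned shard from the ShardingContext (the class name ShardingDemoJob is hypothetical, and the example parameter values assume the shardingItemParameters used later in this post):

import com.dangdang.ddframe.job.api.ShardingContext;
import com.dangdang.ddframe.job.api.simple.SimpleJob;

public class ShardingDemoJob implements SimpleJob {

    @Override
    public void execute(ShardingContext shardingContext) {
        // With shardingTotalCount=5, each running instance is assigned one or more shard
        // items (0..4); a common pattern is to select data with "id % totalCount = item".
        int item = shardingContext.getShardingItem();          // e.g. 2
        String param = shardingContext.getShardingParameter(); // e.g. "changchun" for item 2
        System.out.println("Processing shard " + item + " (" + param + ") of "
                + shardingContext.getShardingTotalCount());
    }
}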

Three virtual machines, 192.168.11.111, 192.168.11.112 and 192.168.11.113, form the ZooKeeper cluster. For building the cluster, see the tutorial: CentOS 7.x 安装 ZooKeeper 并实现集群搭建 (installing ZooKeeper on CentOS 7.x and setting up a cluster).

ElasticJob supports the following job types (a short sketch of their programming model follows this list):

  • SimpleJob: a simple scheduled job (most commonly used)
  • DataflowJob: a dataflow job that fetches and processes data continuously (commonly used)
  • ScriptJob: a job that runs a script on schedule (rarely used)
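For orientation, here is a minimal sketch of what each type looks like in code (the class names DemoSimpleJob and DemoDataflowJob are hypothetical; fully configured versions follow in the sections below):

import java.util.Collections;
import java.util.List;

import com.dangdang.ddframe.job.api.ShardingContext;
import com.dangdang.ddframe.job.api.dataflow.DataflowJob;
import com.dangdang.ddframe.job.api.simple.SimpleJob;

// SimpleJob: a single callback per cron trigger.
class DemoSimpleJob implements SimpleJob {
    @Override
    public void execute(ShardingContext ctx) {
        // business logic for this shard
    }
}

// DataflowJob: fetch a batch, then process it.
class DemoDataflowJob implements DataflowJob<String> {
    @Override
    public List<String> fetchData(ShardingContext ctx) {
        return Collections.emptyList(); // an empty result ends a streaming round
    }

    @Override
    public void processData(ShardingContext ctx, List<String> data) {
        // handle the fetched batch
    }
}

// A ScriptJob needs no Java implementation; it runs the external command
// configured via scriptCommandLine (see the @ElasticJobConfig attributes in part III).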

II. Usage

1. Add the dependencies

        
	    
<dependency>
    <groupId>com.dangdang</groupId>
    <artifactId>elastic-job-lite-core</artifactId>
    <version>2.1.4</version>
</dependency>
<dependency>
    <groupId>com.dangdang</groupId>
    <artifactId>elastic-job-lite-spring</artifactId>
    <version>2.1.4</version>
</dependency>

2. Configuration file application.properties

server.port=8881


#elastic.job.zk.namespace=elastic-job
#elastic.job.zk.serverLists=192.168.11.111:2181,192.168.11.112:2181,192.168.11.113:2181
zookeeper.address=192.168.11.111:2181,192.168.11.112:2181,192.168.11.113:2181
zookeeper.namespace=elastic-job
zookeeper.connectionTimeout=10000
zookeeper.sessionTimeout=10000
zookeeper.maxRetries=3


simpleJob.cron=0/5 * * * * ?
#simpleJob.cron=00 03 21 * * ?
# number of shards
simpleJob.shardingTotalCount=5
simpleJob.shardingItemParameters=0=beijing,1=shanghai,2=changchun,3=changsha,4=hangzhou
simpleJob.jobParameter=source1=public,source2=private
simpleJob.failover=true
simpleJob.monitorExecution=true
simpleJob.monitorPort=8889
simpleJob.maxTimeDiffSeconds=-1
simpleJob.jobShardingStrategyClass=com.dangdang.ddframe.job.lite.api.strategy.impl.AverageAllocationJobShardingStrategy

#dataflowJob.cron=0/10 * * * * ?
#dataflowJob.shardingTotalCount=2
#dataflowJob.shardingItemParameters=0=Beijing,1=Shanghai

spring.datasource.url=jdbc:mysql://localhost:3306/elasticjob?useUnicode=true&characterEncoding=utf-8&verifyServerCertificate=false&useSSL=false&requireSSL=false
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.username=root
spring.datasource.password=root


Also create the database elasticjob that the configuration above points to.

3. Registering the ZooKeeper registry center

@Configuration
// Load this class only if zookeeper.address in the configuration file contains at least one ZK address
@ConditionalOnExpression("'${zookeeper.address}'.length() > 0")
public class RegistryCenterConfig {
	
	/**
	 * Register the registry center as a bean in the Spring container
	 * @return
	 */
	@Bean(initMethod = "init")
	public ZookeeperRegistryCenter registryCenter(@Value("${zookeeper.address}") final String serverLists, 
			@Value("${zookeeper.namespace}") final String namespace, 
			@Value("${zookeeper.connectionTimeout}") final int connectionTimeout, 
			@Value("${zookeeper.sessionTimeout}") final int sessionTimeout,
			@Value("${zookeeper.maxRetries}") final int maxRetries) {
		ZookeeperConfiguration zookeeperConfiguration = new ZookeeperConfiguration(serverLists, namespace);
		zookeeperConfiguration.setConnectionTimeoutMilliseconds(connectionTimeout);
		zookeeperConfiguration.setSessionTimeoutMilliseconds(sessionTimeout);
		zookeeperConfiguration.setMaxRetries(maxRetries);
		
		return new ZookeeperRegistryCenter(zookeeperConfiguration);
		
	}
}

The configuration items from application.properties are read and used to populate a ZookeeperConfiguration, which is then handed to a ZookeeperRegistryCenter; that registry center is returned and registered as a bean in the Spring container.

4. Job event tracing configuration

ElasticJob can trace job executions and record whether each run succeeded, along with related information, in the database tables job_execution_log and job_status_trace_log. This requires the following configuration, JobEventConfig.java:

@Configuration
public class JobEventConfig {

    @Autowired
    private DataSource dataSource;

    @Bean
    public JobEventConfiguration jobEventConfiguration() {
        return new JobEventRdbConfiguration(dataSource);
    }
}
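To confirm that tracing works, one quick check is to count the rows written to job_execution_log after a few runs. Below is a minimal sketch, assuming spring-jdbc is on the classpath (the class name JobTraceChecker is hypothetical):

import javax.sql.DataSource;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.stereotype.Component;

@Component
public class JobTraceChecker {

    private final JdbcTemplate jdbcTemplate;

    @Autowired
    public JobTraceChecker(DataSource dataSource) {
        this.jdbcTemplate = new JdbcTemplate(dataSource);
    }

    /** Counts the recorded job executions written by the event tracing configuration. */
    public long countExecutions() {
        Long count = jdbcTemplate.queryForObject(
                "SELECT COUNT(*) FROM job_execution_log", Long.class);
        return count == null ? 0 : count;
    }
}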

5. (Optional) Write a job listener to run custom logic before and after each job execution

SimpleJobListener.java

import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

import com.alibaba.fastjson.JSON;
import com.dangdang.ddframe.job.executor.ShardingContexts;
import com.dangdang.ddframe.job.lite.api.listener.ElasticJobListener;

public class SimpleJobListener implements ElasticJobListener {

	private static Logger LOGGER = LoggerFactory.getLogger(SimpleJobListener.class);
	
	@Override
	public void beforeJobExecuted(ShardingContexts shardingContexts) {
		LOGGER.info("----------------- before job execution: {}", JSON.toJSONString(shardingContexts));
	}

	@Override
	public void afterJobExecuted(ShardingContexts shardingContexts) {
		LOGGER.info("----------------- after job execution: {}", JSON.toJSONString(shardingContexts));
	}
	
}

6. Configuring and using the simple job type (SimpleJob)

(1) Configure the simple job

MySimpleJobConfig.java

@Configuration
public class MySimpleJobConfig {

	/**
	 * The field name registryCenter should match the bean method name registryCenter() in the RegistryCenterConfig configuration class, so that the correct bean is injected
	 */
	@Autowired
	private ZookeeperRegistryCenter registryCenter;
	
	@Autowired
	private JobEventConfiguration jobEventConfiguration;
	
	/**
	 * The actual job implementation containing the real business logic
	 * @return
	 */
	@Bean
	public SimpleJob simpleJob() {
		return new MySimpleJob();
	}
	
	/**
	 * Read the simpleJob.* items from the configuration file and create the SpringJobScheduler
	 * @param simpleJob the job implementation bean defined above
	 * @return
	 */
	@Bean(initMethod = "init")
	public JobScheduler simpleJobScheduler(final SimpleJob simpleJob,
			@Value("${simpleJob.cron}") final String cron,
			@Value("${simpleJob.shardingTotalCount}") final int shardingTotalCount,
			@Value("${simpleJob.shardingItemParameters}") final String shardingItemParameters,
			@Value("${simpleJob.jobParameter}") final String jobParameter,
			@Value("${simpleJob.failover}") final boolean failover,
			@Value("${simpleJob.monitorExecution}") final boolean monitorExecution,
			@Value("${simpleJob.monitorPort}") final int monitorPort,
			@Value("${simpleJob.maxTimeDiffSeconds}") final int maxTimeDiffSeconds,
			@Value("${simpleJob.jobShardingStrategyClass}") final String jobShardingStrategyClass) {
		
		return new SpringJobScheduler(simpleJob,
				registryCenter,
				getLiteJobConfiguration(simpleJob.getClass(),
						cron,
						shardingTotalCount,
						shardingItemParameters,
						jobParameter,
						failover,
						monitorExecution,
						monitorPort,
						maxTimeDiffSeconds,
						jobShardingStrategyClass),
				jobEventConfiguration,
				new SimpleJobListener());
		
	}
	
	
	private LiteJobConfiguration getLiteJobConfiguration(Class jobClass, String cron,
			int shardingTotalCount, String shardingItemParameters, String jobParameter, boolean failover,
			boolean monitorExecution, int monitorPort, int maxTimeDiffSeconds, String jobShardingStrategyClass) {

		JobCoreConfiguration jobCoreConfiguration = JobCoreConfiguration
				.newBuilder(jobClass.getName(), cron, shardingTotalCount)
				.misfire(true)
				.failover(failover)
				.jobParameter(jobParameter)
				.shardingItemParameters(shardingItemParameters)
				.build();
		
		SimpleJobConfiguration simpleJobConfiguration = new SimpleJobConfiguration(jobCoreConfiguration, jobClass.getCanonicalName());
		
		LiteJobConfiguration liteJobConfiguration = LiteJobConfiguration.newBuilder(simpleJobConfiguration)
				.jobShardingStrategyClass(jobShardingStrategyClass)
				.monitorExecution(monitorExecution)
				.monitorPort(monitorPort)
				.maxTimeDiffSeconds(maxTimeDiffSeconds)
				.overwrite(false)
				.build();
		
		return liteJobConfiguration;
	}
}

The simpleJobScheduler() method creates the scheduled job. It calls getLiteJobConfiguration(), which mainly does three things:

  • builds a JobCoreConfiguration from the items in the configuration file;
  • wraps that JobCoreConfiguration in a SimpleJobConfiguration, the configuration object for simple jobs;
  • combines the SimpleJobConfiguration with the remaining configuration items to build the LiteJobConfiguration.

(2) The concrete job implementation

MySimpleJob.java

@Component
public class MySimpleJob implements SimpleJob {

	
//	@JobTrace
	// shardingContext carries the sharding details and other values declared in the configuration file
    @Override
	public void execute(ShardingContext shardingContext) {

		System.err.println("---------	MySimpleJob started	---------");
//		
//		System.err.println(shardingContext.getJobName());
//		System.err.println(shardingContext.getJobParameter());
//		System.err.println(shardingContext.getShardingItem());
//		System.err.println(shardingContext.getShardingParameter());
//		System.err.println(shardingContext.getShardingTotalCount());
//		System.err.println("Current thread: ---------------" + Thread.currentThread().getName());
//		
//		System.err.println("---------- job finished ------");
	}
}

7. Configuring and using the dataflow job type (DataflowJob)

(1) Add the corresponding items to application.properties

dataflowJob.cron=0/10 * * * * ?
dataflowJob.shardingTotalCount=2
dataflowJob.shardingItemParameters=0=Beijing,1=Shanghai

Then the related configuration class, DataflowJobConfig.java:

@Configuration
public class DataflowJobConfig {
    
	@Autowired
    private ZookeeperRegistryCenter regCenter;
    
    @Autowired
    private JobEventConfiguration jobEventConfiguration;
    
    @Bean
    public DataflowJob dataflowJob() {
        return new SpringDataflowJob();
    }
    
    @Bean(initMethod = "init")
    public JobScheduler dataflowJobScheduler(final DataflowJob dataflowJob, @Value("${dataflowJob.cron}") final String cron,
                                             @Value("${dataflowJob.shardingTotalCount}") final int shardingTotalCount,
                                             @Value("${dataflowJob.shardingItemParameters}") final String shardingItemParameters) {
       
    	SpringJobScheduler springJobScheduler = new SpringJobScheduler(dataflowJob, regCenter, getLiteJobConfiguration(dataflowJob.getClass(), cron,
                shardingTotalCount, shardingItemParameters), jobEventConfiguration);
//    	springJobScheduler.init();
    	return springJobScheduler;
    }
    
    private LiteJobConfiguration getLiteJobConfiguration(final Class jobClass, final String cron, final int shardingTotalCount, final String shardingItemParameters) {
        return LiteJobConfiguration.newBuilder(
        		new DataflowJobConfiguration(JobCoreConfiguration.newBuilder(jobClass.getName(), cron, shardingTotalCount)
        		.shardingItemParameters(shardingItemParameters).build(), 
        		jobClass.getCanonicalName(),
        		false))	// streamingProcess: true means streaming mode, i.e. the job keeps fetching data continuously instead of firing only on the cron schedule; false means it runs once per cron trigger
        		.overwrite(false) // overwrite: decides whether the local configuration or the one stored in ZooKeeper wins; false means ZooKeeper's configuration takes precedence, true means the local configuration overwrites ZooKeeper's
        		.build();
    }
}

This mainly reads the configuration file, builds the dataflow job's configuration object DataflowJobConfiguration, and creates a SpringJobScheduler on top of it.

(2) The DataflowJob implementation

SpringDataflowJob.java

public class SpringDataflowJob implements DataflowJob<Foo> {
	
    private static final Logger LOGGER = LoggerFactory.getLogger(SpringDataflowJob.class);
    
    @Override
    public List<Foo> fetchData(final ShardingContext shardingContext) {
    	System.err.println("-------------- fetching data... --------------");
    	List<Foo> list = new ArrayList<>();
    	list.add(new Foo("001", "Zhang San"));
    	list.add(new Foo("002", "Li Si"));
    	return list;
    }
    
    @Override
    public void processData(final ShardingContext shardingContext, final List<Foo> data) {
    	System.err.println("-------------- processing data... --------------");
    }
}

A dataflow job runs in two steps: fetchData() first fetches the eligible data, and processData() then processes it.
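For a slightly fuller picture of that cycle, here is a hedged sketch reusing the Foo demo class from above (the class name LoggingDataflowJob is hypothetical); with streamingProcess enabled, the framework keeps alternating the two calls until fetchData() returns an empty list:

import java.util.Collections;
import java.util.List;

import com.dangdang.ddframe.job.api.ShardingContext;
import com.dangdang.ddframe.job.api.dataflow.DataflowJob;

public class LoggingDataflowJob implements DataflowJob<Foo> {

    @Override
    public List<Foo> fetchData(ShardingContext shardingContext) {
        // Typically: query the rows that belong to this shard and are not yet processed.
        // Returning an empty list ends the current round when streamingProcess=true.
        return Collections.emptyList();
    }

    @Override
    public void processData(ShardingContext shardingContext, List<Foo> data) {
        // Process the fetched batch; mark rows as processed so they are not fetched again.
        for (Foo foo : data) {
            System.err.println("Shard " + shardingContext.getShardingItem() + " processed: " + foo);
        }
    }
}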

III. Wrapping ElasticJob into an Annotation

1. Create a new project rabbit-task and add the ElasticJob dependencies to it

        
	    
<dependency>
    <groupId>com.dangdang</groupId>
    <artifactId>elastic-job-lite-core</artifactId>
    <version>2.1.4</version>
</dependency>
<dependency>
    <groupId>com.dangdang</groupId>
    <artifactId>elastic-job-lite-spring</artifactId>
    <version>2.1.4</version>
</dependency>

2. Implement auto-configuration

(1) First define a class for Spring's auto-configuration.

JobParserAutoConfigurartion.java 


@Configuration
// @ConditionalOnProperty: this class is loaded only if both configuration items "namespace" and "serverLists" with the prefix elastic.job.zk are present in the configuration file
@ConditionalOnProperty(prefix = "elastic.job.zk", name = {"namespace", "serverLists"}, matchIfMissing = false)
public class JobParserAutoConfigurartion {

}

(2) Then register the class for auto-configuration scanning: create a META-INF folder under src/main/resources and add a spring.factories file to it:

# Auto Configure
org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.didiok.rabbit.task.autoconfigure.JobParserAutoConfigurartion

3. Create a class that holds the configuration properties

JobZookeeperProperties.java

// Bind the configuration items prefixed with elastic.job.zk from the configuration file
@ConfigurationProperties(prefix = "elastic.job.zk")
public class JobZookeeperProperties {

	private String namespace;
	
	private String serverLists;
	
	private int maxRetries = 3;

	private int connectionTimeoutMilliseconds = 15000;
	
	private int sessionTimeoutMilliseconds = 60000;
	
	private int baseSleepTimeMilliseconds = 1000;
	
	private int maxSleepTimeMilliseconds = 3000;
	
	private String digest = "";
    
    // getters and setters omitted...
	
}

Use the JobZookeeperProperties class above in JobParserAutoConfigurartion.java:

@Configuration
// @ConditionalOnProperty: this class is loaded only if both configuration items "namespace" and "serverLists" with the prefix elastic.job.zk are present in the configuration file
@ConditionalOnProperty(prefix = "elastic.job.zk", name = {"namespace", "serverLists"}, matchIfMissing = false)
@EnableConfigurationProperties(JobZookeeperProperties.class)
public class JobParserAutoConfigurartion {

}

The only addition is the @EnableConfigurationProperties(JobZookeeperProperties.class) annotation; when JobParserAutoConfigurartion is loaded, the matching configuration items are bound to the fields of JobZookeeperProperties.

4. ZooKeeper registry center configuration

Initialize the ZooKeeper registry center in JobParserAutoConfigurartion.java:

    @Bean(initMethod = "init")
	public ZookeeperRegistryCenter zookeeperRegistryCenter(JobZookeeperProperties jobZookeeperProperties) {
		ZookeeperConfiguration zkConfig = new ZookeeperConfiguration(jobZookeeperProperties.getServerLists(),
				jobZookeeperProperties.getNamespace());
		zkConfig.setBaseSleepTimeMilliseconds(jobZookeeperProperties.getBaseSleepTimeMilliseconds());
		zkConfig.setMaxSleepTimeMilliseconds(jobZookeeperProperties.getMaxSleepTimeMilliseconds());
		zkConfig.setConnectionTimeoutMilliseconds(jobZookeeperProperties.getConnectionTimeoutMilliseconds());
		zkConfig.setSessionTimeoutMilliseconds(jobZookeeperProperties.getSessionTimeoutMilliseconds());
		zkConfig.setMaxRetries(jobZookeeperProperties.getMaxRetries());
		zkConfig.setDigest(jobZookeeperProperties.getDigest());
		log.info("Job registry center configuration initialized successfully, zkaddress : {}, namespace : {}", jobZookeeperProperties.getServerLists(), jobZookeeperProperties.getNamespace());
		return new ZookeeperRegistryCenter(zkConfig);
	}

In other words, the values held in JobZookeeperProperties are used to initialize the ZooKeeper registry center.

5. Module-style assembly

Module assembly means enabling ElasticJob with an @Enable* style annotation. When module assembly is used, the spring.factories auto-configuration registration from step 2 (2) above is no longer required.

EnableElasticJob.java

@Target(ElementType.TYPE) // @Target declares what the annotation can be applied to; ElementType.TYPE means classes, interfaces (including annotation types) and enums
@Retention(RetentionPolicy.RUNTIME) // RetentionPolicy.RUNTIME: the annotation is kept in the class file and is still available at runtime after the JVM loads the class
@Documented
@Inherited // An annotation meta-annotated with @Inherited is inherited by subclasses of the class it is applied to; without @Inherited it only applies to the annotated class itself
// Import the JobParserAutoConfigurartion class
@Import(JobParserAutoConfigurartion.class) // @Import with a plain class registers that class as a bean in the Spring container
public @interface EnableElasticJob {

}

With this in place, any class that wants to enable ElasticJob scheduling only needs to be annotated with @EnableElasticJob.

6. Wrap the ElasticJob configuration items in an annotation

ElasticJobConfig.java

@Target(ElementType.TYPE)
@Retention(RetentionPolicy.RUNTIME)
public @interface ElasticJobConfig {

	String name();	// the name of the elastic job
	
	String cron() default "";
	
	int shardingTotalCount() default 1;
	
	String shardingItemParameters() default "";
	
	String jobParameter() default "";
	
	boolean failover() default false;
	
	boolean misfire() default true;
	
	String description() default "";
	
	boolean overwrite() default false;
	
	boolean streamingProcess() default false;
	
	String scriptCommandLine() default "";
	
	boolean monitorExecution() default false;
	
	public int monitorPort() default -1;	//must

	public int maxTimeDiffSeconds() default -1;	//must

	public String jobShardingStrategyClass() default "";	//must

	public int reconcileIntervalMinutes() default 10;	//must

	public String eventTraceRdbDataSource() default "";	//must

	public String listener() default "";	//must

	public boolean disabled() default false;	//must

	public String distributedListener() default "";

	public long startedTimeoutMilliseconds() default Long.MAX_VALUE;	//must

	public long completedTimeoutMilliseconds() default Long.MAX_VALUE;		//must

	public String jobExceptionHandler() default "com.dangdang.ddframe.job.executor.handler.impl.DefaultJobExceptionHandler";

	public String executorServiceHandler() default "com.dangdang.ddframe.job.executor.handler.impl.DefaultExecutorServiceHandler";
	
}

These are the attributes defined on the @ElasticJobConfig annotation. Their names follow the properties documented by ElasticJob, and these particular attributes were chosen because they are needed in the configuration steps that follow.

Official ElasticJob 3.x documentation

Official ElasticJob 2.x documentation

7. Parse the @ElasticJobConfig annotation, i.e. use its attributes to build the job configuration

(1) First, a bare-bones version of ElasticJobConfParser.java

@Slf4j
// As an ApplicationListener, this class is invoked only after the application has finished starting
public class ElasticJobConfParser implements ApplicationListener<ApplicationReadyEvent> {

	private JobZookeeperProperties jobZookeeperProperties;
	
	private ZookeeperRegistryCenter zookeeperRegistryCenter;
	
	public ElasticJobConfParser(JobZookeeperProperties jobZookeeperProperties,
			ZookeeperRegistryCenter zookeeperRegistryCenter) {
		this.jobZookeeperProperties = jobZookeeperProperties;
		this.zookeeperRegistryCenter = zookeeperRegistryCenter;
	}

}

Then register this class as a Spring-managed bean; the following code goes into JobParserAutoConfigurartion.java:

    @Bean
	public ElasticJobConfParser elasticJobConfParser(JobZookeeperProperties jobZookeeperProperties, ZookeeperRegistryCenter zookeeperRegistryCenter) {
		return new ElasticJobConfParser(jobZookeeperProperties, zookeeperRegistryCenter);
	}
	

To recap the flow: when @EnableElasticJob is used, the ZooKeeper settings from the configuration file are loaded and used to initialize the ZookeeperRegistryCenter; after that, elasticJobConfParser() runs and creates the ElasticJobConfParser bean.
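For reference, here is a consolidated sketch of how the fragments from the steps above fit together in a single JobParserAutoConfigurartion class (it only assembles code already shown; the import packages follow elastic-job 2.x):

import org.springframework.boot.autoconfigure.condition.ConditionalOnProperty;
import org.springframework.boot.context.properties.EnableConfigurationProperties;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.dangdang.ddframe.job.reg.zookeeper.ZookeeperConfiguration;
import com.dangdang.ddframe.job.reg.zookeeper.ZookeeperRegistryCenter;

import lombok.extern.slf4j.Slf4j;

@Slf4j
@Configuration
@ConditionalOnProperty(prefix = "elastic.job.zk", name = {"namespace", "serverLists"}, matchIfMissing = false)
@EnableConfigurationProperties(JobZookeeperProperties.class)
public class JobParserAutoConfigurartion {

    // ZooKeeper registry center, built from the elastic.job.zk.* properties.
    @Bean(initMethod = "init")
    public ZookeeperRegistryCenter zookeeperRegistryCenter(JobZookeeperProperties props) {
        ZookeeperConfiguration zkConfig = new ZookeeperConfiguration(props.getServerLists(), props.getNamespace());
        zkConfig.setBaseSleepTimeMilliseconds(props.getBaseSleepTimeMilliseconds());
        zkConfig.setMaxSleepTimeMilliseconds(props.getMaxSleepTimeMilliseconds());
        zkConfig.setConnectionTimeoutMilliseconds(props.getConnectionTimeoutMilliseconds());
        zkConfig.setSessionTimeoutMilliseconds(props.getSessionTimeoutMilliseconds());
        zkConfig.setMaxRetries(props.getMaxRetries());
        zkConfig.setDigest(props.getDigest());
        log.info("Job registry center configured, zkaddress: {}, namespace: {}",
                props.getServerLists(), props.getNamespace());
        return new ZookeeperRegistryCenter(zkConfig);
    }

    // Parser that scans @ElasticJobConfig beans once the application is ready.
    @Bean
    public ElasticJobConfParser elasticJobConfParser(JobZookeeperProperties props,
            ZookeeperRegistryCenter zookeeperRegistryCenter) {
        return new ElasticJobConfParser(props, zookeeperRegistryCenter);
    }
}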

(2) The concrete parsing logic in ElasticJobConfParser.java

@Slf4j
public class ElasticJobConfParser implements ApplicationListener<ApplicationReadyEvent> {

	private JobZookeeperProperties jobZookeeperProperties;
	
	private ZookeeperRegistryCenter zookeeperRegistryCenter;
	
	public ElasticJobConfParser(JobZookeeperProperties jobZookeeperProperties,
			ZookeeperRegistryCenter zookeeperRegistryCenter) {
		this.jobZookeeperProperties = jobZookeeperProperties;
		this.zookeeperRegistryCenter = zookeeperRegistryCenter;
	}

	// onApplicationEvent runs after the application has fully started; only then are the declared @ElasticJobConfig annotations parsed
	@Override
	public void onApplicationEvent(ApplicationReadyEvent event) {
		try {
			ApplicationContext applicationContext = event.getApplicationContext();
			// Get all beans annotated with @ElasticJobConfig
			Map<String, Object> beanMap = applicationContext.getBeansWithAnnotation(ElasticJobConfig.class);
			for (Iterator<Object> it = beanMap.values().iterator(); it.hasNext();) {
				Object confBean = it.next();
				Class<?> clazz = confBean.getClass();
				// Class names containing "$" (e.g. proxies) are resolved back to the original class name
				if(clazz.getName().indexOf("$") > 0) {
					String className = clazz.getName();
					clazz = Class.forName(className.substring(0, className.indexOf("$")));
				}
				// Get the interface type, used to determine the job type
				/**
				 * getInterfaces()[0] only takes the first interface; a class may implement several, e.g. MySimpleJob implements SimpleJob, XxxJob, ...
				 * For simplicity the first interface is assumed to identify the job type; in production all interfaces should be iterated over
				 */
				String jobTypeName = clazz.getInterfaces()[0].getSimpleName();
				//	Read the configuration items declared in the @ElasticJobConfig annotation
				ElasticJobConfig conf = clazz.getAnnotation(ElasticJobConfig.class);
				
				String jobClass = clazz.getName();
				String jobName = this.jobZookeeperProperties.getNamespace() + "." + conf.name();
				String cron = conf.cron();
				String shardingItemParameters = conf.shardingItemParameters();
				String description = conf.description();
				String jobParameter = conf.jobParameter();
				String jobExceptionHandler = conf.jobExceptionHandler();
				String executorServiceHandler = conf.executorServiceHandler();

				String jobShardingStrategyClass = conf.jobShardingStrategyClass();
				String eventTraceRdbDataSource = conf.eventTraceRdbDataSource();
				String scriptCommandLine = conf.scriptCommandLine();

				boolean failover = conf.failover();
				boolean misfire = conf.misfire();
				boolean overwrite = conf.overwrite();
				boolean disabled = conf.disabled();
				boolean monitorExecution = conf.monitorExecution();
				boolean streamingProcess = conf.streamingProcess();

				int shardingTotalCount = conf.shardingTotalCount();
				int monitorPort = conf.monitorPort();
				int maxTimeDiffSeconds = conf.maxTimeDiffSeconds();
				int reconcileIntervalMinutes = conf.reconcileIntervalMinutes();				
				
				//	First instantiate Dangdang elastic-job's core configuration
				JobCoreConfiguration coreConfig = JobCoreConfiguration
						.newBuilder(jobName, cron, shardingTotalCount)
						.shardingItemParameters(shardingItemParameters)
						.description(description)
						.failover(failover)
						.jobParameter(jobParameter)
						.misfire(misfire)
						.jobProperties(JobProperties.JobPropertiesEnum.JOB_EXCEPTION_HANDLER.getKey(), jobExceptionHandler)
						.jobProperties(JobProperties.JobPropertiesEnum.EXECUTOR_SERVICE_HANDLER.getKey(), executorServiceHandler)
						.build();

				//	Which job type to create: SimpleJob / DataflowJob / ScriptJob
				JobTypeConfiguration typeConfig = null;
				if(ElasticJobTypeEnum.SIMPLE.getType().equals(jobTypeName)) {
					typeConfig = new SimpleJobConfiguration(coreConfig, jobClass);
				}
				
				if(ElasticJobTypeEnum.DATAFLOW.getType().equals(jobTypeName)) {
					typeConfig = new DataflowJobConfiguration(coreConfig, jobClass, streamingProcess);
				}
				
				if(ElasticJobTypeEnum.SCRIPT.getType().equals(jobTypeName)) {
					typeConfig = new ScriptJobConfiguration(coreConfig, scriptCommandLine);
				}
				
				// Build the LiteJobConfiguration
				LiteJobConfiguration jobConfig = LiteJobConfiguration
						.newBuilder(typeConfig)
						.overwrite(overwrite)
						.disabled(disabled)
						.monitorPort(monitorPort)
						.monitorExecution(monitorExecution)
						.maxTimeDiffSeconds(maxTimeDiffSeconds)
						.jobShardingStrategyClass(jobShardingStrategyClass)
						.reconcileIntervalMinutes(reconcileIntervalMinutes)
						.build();
				
				// Create a Spring bean definition with BeanDefinitionBuilder.rootBeanDefinition()
				BeanDefinitionBuilder factory = BeanDefinitionBuilder.rootBeanDefinition(SpringJobScheduler.class);
				factory.setInitMethodName("init");
				// Prototype scope (must not be a singleton)
				factory.setScope("prototype");
				
				//	1. Add a constructor argument, i.e. the actual job implementation bean
				if (!ElasticJobTypeEnum.SCRIPT.getType().equals(jobTypeName)) {
					factory.addConstructorArgValue(confBean);
				}
				//	2. Add the registry center
				factory.addConstructorArgValue(this.zookeeperRegistryCenter);
				//	3. Add the LiteJobConfiguration
				factory.addConstructorArgValue(jobConfig);

				//	4. If an eventTraceRdbDataSource is specified, add it as well
				if (StringUtils.hasText(eventTraceRdbDataSource)) {
					BeanDefinitionBuilder rdbFactory = BeanDefinitionBuilder.rootBeanDefinition(JobEventRdbConfiguration.class);
					rdbFactory.addConstructorArgReference(eventTraceRdbDataSource);
					factory.addConstructorArgValue(rdbFactory.getBeanDefinition());
				}
				
				//  5. Add the listeners
				List<BeanDefinition> elasticJobListeners = getTargetElasticJobListeners(conf);
				factory.addConstructorArgValue(elasticJobListeners);
				
				// 	Finally register the factory, i.e. the SpringJobScheduler, in the Spring container
				DefaultListableBeanFactory defaultListableBeanFactory = (DefaultListableBeanFactory) applicationContext.getAutowireCapableBeanFactory();

				String registerBeanName = conf.name() + "SpringJobScheduler";
				defaultListableBeanFactory.registerBeanDefinition(registerBeanName, factory.getBeanDefinition());
				SpringJobScheduler scheduler = (SpringJobScheduler)applicationContext.getBean(registerBeanName);
				// Start the elastic-job job
				scheduler.init();
				log.info("Started elastic-job job: " + jobName);
				// The code inside this for loop parses a single class annotated with @ElasticJobConfig
			}
			log.info("Total number of elastic-job jobs started: {}", beanMap.values().size());
			
		} catch (Exception e) {
			log.error("elasticjob failed to start, forcing system exit", e);
			System.exit(1);
		}
	}

	/**
	 * Build the listener bean definitions
	 * @param conf
	 * @return
	 */
	private List<BeanDefinition> getTargetElasticJobListeners(ElasticJobConfig conf) {
		List<BeanDefinition> result = new ManagedList<BeanDefinition>(2);
		// Ordinary (local) listener
		String listeners = conf.listener();
		if (StringUtils.hasText(listeners)) {
			BeanDefinitionBuilder factory = BeanDefinitionBuilder.rootBeanDefinition(listeners);
			factory.setScope("prototype");
			result.add(factory.getBeanDefinition());
		}

		// Distributed listener
		String distributedListeners = conf.distributedListener();
		long startedTimeoutMilliseconds = conf.startedTimeoutMilliseconds();
		long completedTimeoutMilliseconds = conf.completedTimeoutMilliseconds();

		if (StringUtils.hasText(distributedListeners)) {
			BeanDefinitionBuilder factory = BeanDefinitionBuilder.rootBeanDefinition(distributedListeners);
			factory.setScope("prototype");
			factory.addConstructorArgValue(Long.valueOf(startedTimeoutMilliseconds));
			factory.addConstructorArgValue(Long.valueOf(completedTimeoutMilliseconds));
			result.add(factory.getBeanDefinition());
		}
		return result;
	}

}

Because the class implements the ApplicationListener interface (for ApplicationReadyEvent), its onApplicationEvent() method runs only after the application has finished starting, which is when the declared @ElasticJobConfig annotations are parsed. The ElasticJobTypeEnum used in the code above is as follows:

public enum ElasticJobTypeEnum {

	SIMPLE("SimpleJob", "simple job"),
	DATAFLOW("DataflowJob", "dataflow job"),
	SCRIPT("ScriptJob", "script job");
	
	private String type;
	
	private String desc;
	
	private ElasticJobTypeEnum(String type, String desc) {
		this.type = type;
		this.desc = desc;
	}

	public String getType() {
		return type;
	}

	public void setType(String type) {
		this.type = type;
	}

	public String getDesc() {
		return desc;
	}

	public void setDesc(String desc) {
		this.desc = desc;
	}

}

8. Testing

(1) Create a new project es-job and add the dependency to its pom.xml:

        
<dependency>
    <groupId>com.didiok.base.rabbit</groupId>
    <artifactId>rabbit-task</artifactId>
    <version>0.0.1-SNAPSHOT</version>
</dependency>

(2) Add the @EnableElasticJob annotation to Application.java

@EnableElasticJob
@SpringBootApplication
@ComponentScan(basePackages = {"com.didiok.esjob.*"})
public class Application {

	public static void main(String[] args) {
		SpringApplication.run(Application.class, args);
	}
}

(3) Configuration file application.properties

server.port=8881


elastic.job.zk.namespace=elastic-job
elastic.job.zk.serverLists=192.168.11.111:2181,192.168.11.112:2181,192.168.11.113:2181

spring.datasource.url=jdbc:mysql://localhost:3306/elasticjob?useUnicode=true&characterEncoding=utf-8&verifyServerCertificate=false&useSSL=false&requireSSL=false
spring.datasource.driver-class-name=com.mysql.jdbc.Driver
spring.datasource.username=root
spring.datasource.password=root 

(4) Write a test job class

TestJob.java

@Component
@ElasticJobConfig(
			name = "com.bfxy.esjob.task.test.TestJob",
			cron = "0/5 * * * * ?",
			description = "test scheduled job",
			overwrite = true,
			shardingTotalCount = 5
		)
public class TestJob implements SimpleJob {

	@Override
	public void execute(ShardingContext shardingContext) {
		System.err.println("Executing Test job.");
	}

}

To turn a class into a scheduled job, all that is needed is to add the annotation

@ElasticJobConfig(
            name = "com.bfxy.esjob.task.test.TestJob",
            cron = "0/5 * * * * ?",
            description = "test scheduled job",
            overwrite = true,
            shardingTotalCount = 5
        )

and the class will be parsed and run as a scheduled job.
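Since the parser also recognizes the DataflowJob interface, the same annotation should be able to drive a dataflow job. Here is a hedged sketch (the class TestDataflowJob and the import package of @ElasticJobConfig are assumptions):

import java.util.Collections;
import java.util.List;

import org.springframework.stereotype.Component;

import com.dangdang.ddframe.job.api.ShardingContext;
import com.dangdang.ddframe.job.api.dataflow.DataflowJob;
import com.didiok.rabbit.task.annotation.ElasticJobConfig; // the annotation's package is an assumption

@Component
@ElasticJobConfig(
			name = "com.bfxy.esjob.task.test.TestDataflowJob",
			cron = "0/10 * * * * ?",
			shardingTotalCount = 2,
			overwrite = true,
			streamingProcess = false	// set to true for continuous streaming
		)
public class TestDataflowJob implements DataflowJob<String> {

	@Override
	public List<String> fetchData(ShardingContext shardingContext) {
		return Collections.emptyList();
	}

	@Override
	public void processData(ShardingContext shardingContext, List<String> data) {
		System.err.println("Processed " + data.size() + " items.");
	}
}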

Then start Application.java and check whether the job executes on schedule.
