There is a customer information table with a very large number of rows. Each record has to be sent through an external interface that can query at most 10 customers per call, and the parsed response has to be saved into another table.
The customer information table (cust_info):
id | cust_id | status | remark | input_time | update_time |
---|---|---|---|---|---|
1 | 20191111000001 | | | 2019-11-23 10:45:04 | 2019-11-23 10:45:04 |
2 | 20191111000002 | | | 2019-11-23 10:45:04 | 2019-11-23 10:45:04 |
3 | 20191111000003 | | | 2019-11-23 10:45:04 | 2019-11-23 10:45:04 |
4 | 20191111000004 | | | 2019-11-23 10:45:04 | 2019-11-23 10:45:04 |
5 | 20191111000005 | | | 2019-11-23 10:45:04 | 2019-11-23 10:45:04 |
The table that stores the parsed results (cust_detail):
id | cust_id | city | name | tel | age | input_time | update_time |
---|---|---|---|---|---|---|---|
1 | 20191111000001 | 北京 | 张三 | 1877778872 | 12 | 2019-11-23 10:45:04 | 2019-11-23 10:45:04 |
2 | 20191111000002 | 北京 | 张三 | 1877778872 | 12 | 2019-11-23 10:45:04 | 2019-11-23 10:45:04 |
3 | 20191111000003 | 北京 | 张三 | 1877778872 | 12 | 2019-11-23 10:45:04 | 2019-11-23 10:45:04 |
4 | 20191111000004 | 北京 | 张三 | 1877778872 | 12 | 2019-11-23 10:45:04 | 2019-11-23 10:45:04 |
5 | 20191111000005 | 北京 | 张三 | 1877778872 | 12 | 2019-11-23 10:45:04 | 2019-11-23 10:45:04 |
Approach:
Use a thread pool together with Redis. Each iteration fetches a fixed batch of 30 unprocessed rows from cust_info and splits them evenly into three chunks of 10, one per thread. Redis is used to track whether the workers have finished: a key stores the number of threads that were started, and each worker decrements it by one when it completes. The main thread loops until it sees the value reach 0, which means all workers are done; it then fetches the next 30 unprocessed rows and repeats the steps until everything is processed. After querying a customer, a worker sets that row's status in cust_info to 1 to mark it as processed.
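The coordination protocol above can be sketched in isolation. This is a minimal stand-in that replaces the Redis counter with an in-process AtomicInteger (the real code uses RedisUtil.incrBy and get); the class and method names are made up for illustration.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class CounterProtocolDemo {
    /** Runs one batch with the given number of workers and returns the final counter value. */
    static int runBatch(int threadCount) throws InterruptedException {
        AtomicInteger running = new AtomicInteger(threadCount); // stand-in for the Redis counter key
        ExecutorService pool = Executors.newFixedThreadPool(threadCount);
        for (int i = 0; i < threadCount; i++) {
            pool.execute(() -> {
                try {
                    // ... call the interface for 10 customers and save the result ...
                } finally {
                    running.decrementAndGet(); // worker finished: counter - 1
                }
            });
        }
        // The main thread polls until the counter is back to 0; only then
        // would the next 30 unprocessed rows be fetched.
        while (running.get() != 0) {
            Thread.sleep(50);
        }
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return running.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("counter after batch = " + runBatch(3));
    }
}
```

The drawback described below is visible here: the poll loop blocks the whole pipeline until the slowest of the three workers finishes.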
Drawbacks:
- It depends on Redis.
- Each iteration waits for all threads to finish before the next one starts, so the threads are not fully utilized.
Main files:
- CustThreadPoolExecutor.java: the thread pool
- CustQueryOneThread.java: the worker thread
- CustInfoOneServiceImpl.java: the business logic
The MyBatis XML used to fetch each batch:
<select id="selectCustInfoList" resultMap="BaseResultMap">
select
<include refid="Base_Column_List"/>
from cust_info
where status = 0 or status is null
</select>
CustThreadPoolExecutor.java
package com.lh.service.task;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.ThreadPoolExecutor;
import java.util.concurrent.TimeUnit;
public class CustThreadPoolExecutor {
    // Eagerly initialized singleton pool. The original version built the pool
    // lazily in init() without synchronization and duplicated the construction
    // in the private constructor; a static final field avoids both problems.
    private static final ThreadPoolExecutor pool = new ThreadPoolExecutor(
            3,                     // core pool size: one thread per chunk of 10
            6,                     // maximum pool size
            1000,                  // keep-alive for idle non-core threads
            TimeUnit.MILLISECONDS, // MICROSECONDS in the original was presumably a typo
            new ArrayBlockingQueue<>(30)
    );

    private CustThreadPoolExecutor() {
    }

    public static void execute(Runnable runnable) {
        pool.execute(runnable);
    }
}
CustQueryOneThread.java
package com.lh.service.task;
import com.alibaba.fastjson.JSON;
import com.lh.dao.sys.CustDetailMapper;
import com.lh.dao.sys.CustInfoMapper;
import com.lh.entity.CustDetail;
import com.lh.entity.CustInfo;
import com.lh.entity.enums.RedisKeyEnum;
import com.lh.utils.RedisUtil;
import com.lh.utils.SpringUtil;
import lombok.extern.slf4j.Slf4j;
import org.apache.ibatis.session.ExecutorType;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;
@Slf4j
public class CustQueryOneThread implements Runnable {
    private List<CustInfo> data;
    private Map<String, CustInfo> map;
    private SqlSessionFactory sqlSessionFactory;
    private RedisUtil redisUtil;

    public CustQueryOneThread(List<CustInfo> data) {
        this.data = data;
        this.map = new HashMap<>();
        this.sqlSessionFactory = SpringUtil.getBean(SqlSessionFactory.class);
        this.redisUtil = SpringUtil.getBean(RedisUtil.class);
    }

    @Override
    public void run() {
        try {
            // 1. Build the request parameters
            String custIds = this.before();
            // 2. Call the interface (mocked here for simplicity)
            String result = this.callInterface(custIds);
            // 3. Parse the response and save the records
            this.after(result);
        } catch (Exception e) {
            log.error("Failed to query customer info!", e);
        } finally {
            // Worker finished: decrement the counter in Redis
            redisUtil.incrBy(RedisKeyEnum.CUST_QUERY_THREAD_COUNT.getCode(), -1);
        }
    }

    /**
     * Builds the comma-separated customer-id parameter for the interface call.
     */
    private String before() {
        return data.stream().map(a -> {
            map.put(a.getCustId(), a);
            return a.getCustId();
        }).collect(Collectors.joining(","));
    }

    /**
     * Mocks the external interface and returns its JSON response.
     */
    private String callInterface(String request) {
        List<CustDetail> list = Arrays.stream(request.split(",")).map(a -> {
            CustDetail custDetail = new CustDetail();
            custDetail.setCustId(a);
            custDetail.setAge(12);
            custDetail.setCity("北京");
            custDetail.setTel("17788998880");
            return custDetail;
        }).collect(Collectors.toList());
        return JSON.toJSONString(list);
    }

    /**
     * Parses the interface response and saves the results in one batch session.
     */
    private void after(String result) {
        SqlSession session = sqlSessionFactory.openSession(ExecutorType.BATCH);
        try {
            CustDetailMapper custDetailMapper = session.getMapper(CustDetailMapper.class);
            CustInfoMapper custInfoMapper = session.getMapper(CustInfoMapper.class);
            List<CustDetail> list = JSON.parseArray(result, CustDetail.class);
            for (CustDetail custDetail : list) {
                custDetailMapper.insertSelective(custDetail);
                // Mark the source row as processed
                CustInfo custInfo = map.get(custDetail.getCustId());
                custInfo.setStatus(Byte.parseByte("1"));
                custInfo.setRemark("处理成功");
                custInfoMapper.updateByPrimaryKeySelective(custInfo);
            }
            session.commit();
            session.clearCache();
        } catch (Exception e) {
            log.error("Failed to save the interface response!", e);
            session.rollback();
        } finally {
            session.close();
        }
    }
}
CustInfoOneServiceImpl.java
package com.lh.service.impl;
import com.github.pagehelper.Page;
import com.github.pagehelper.PageHelper;
import com.lh.dao.sys.CustInfoMapper;
import com.lh.entity.CustInfo;
import com.lh.entity.enums.RedisKeyEnum;
import com.lh.service.CustInfoService;
import com.lh.service.task.CustQueryOneThread;
import com.lh.service.task.CustThreadPoolExecutor;
import com.lh.utils.RedisUtil;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;
import javax.annotation.Resource;
import java.util.List;
import java.util.stream.Collectors;
@Slf4j
@Service("custInfoOneServiceImpl")
public class CustInfoOneServiceImpl implements CustInfoService {
    private static final int PAGE_SIZE = 30;
    private static final int QUERY_SIZE = 10;

    @Resource
    private CustInfoMapper custInfoMapper;
    @Resource
    private RedisUtil redisUtil;

    @Override
    public void queryJob() {
        try {
            log.info("==============cust query task begin!=================");
            redisUtil.set(RedisKeyEnum.CUST_QUERY_THREAD_COUNT.getCode(), "0");
            while (true) {
                String s = redisUtil.get(RedisKeyEnum.CUST_QUERY_THREAD_COUNT.getCode());
                if ("0".equals(s)) {
                    // The previous batch is done: dispatch the next one
                    this.query();
                } else if ("-1".equals(s)) {
                    // No unprocessed rows were found: the job is finished
                    break;
                } else {
                    Thread.sleep(1000);
                }
            }
            log.info("==============cust query task end !=================");
        } catch (Exception e) {
            log.error("==============cust query task exception!=================", e);
        }
    }

    private void query() {
        // Always page 1: processed rows get status = 1 and drop out of the query
        Page page = PageHelper.startPage(1, PAGE_SIZE);
        List<CustInfo> list = custInfoMapper.selectCustInfoList();
        if (list == null || list.size() == 0) {
            redisUtil.set(RedisKeyEnum.CUST_QUERY_THREAD_COUNT.getCode(), "-1");
            return;
        }
        int size = list.size();
        int m = size / QUERY_SIZE;
        int n = size % QUERY_SIZE;
        int threadCount = m + (n > 0 ? 1 : 0);
        // Publish the number of workers before dispatching them
        redisUtil.set(RedisKeyEnum.CUST_QUERY_THREAD_COUNT.getCode(), String.valueOf(threadCount));
        for (int i = 0; i < m; i++) {
            List<CustInfo> collect = list.stream().skip(QUERY_SIZE * i).limit(QUERY_SIZE).collect(Collectors.toList());
            CustThreadPoolExecutor.execute(new CustQueryOneThread(collect));
        }
        if (n > 0) {
            // Leftover chunk of fewer than QUERY_SIZE rows
            List<CustInfo> collect = list.stream().skip(QUERY_SIZE * m).collect(Collectors.toList());
            CustThreadPoolExecutor.execute(new CustQueryOneThread(collect));
        }
    }
}
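The chunk-splitting arithmetic in query() (m full chunks of QUERY_SIZE rows plus one leftover chunk of n rows) can be isolated into a small helper. This is a hypothetical stand-alone version of that logic, not part of the project:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

public class PartitionDemo {
    /** Splits a list into chunks of at most chunkSize, mirroring the skip/limit logic in query(). */
    static <T> List<List<T>> partition(List<T> list, int chunkSize) {
        int m = list.size() / chunkSize;  // number of full chunks
        int n = list.size() % chunkSize;  // size of the trailing partial chunk
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < m; i++) {
            chunks.add(list.stream().skip((long) chunkSize * i).limit(chunkSize).collect(Collectors.toList()));
        }
        if (n > 0) {
            chunks.add(list.stream().skip((long) chunkSize * m).collect(Collectors.toList()));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Integer> ids = IntStream.rangeClosed(1, 25).boxed().collect(Collectors.toList());
        List<List<Integer>> chunks = partition(ids, 10);
        // 25 rows with chunkSize 10 produce chunks of 10, 10, and 5
        System.out.println(chunks.size() + " chunks, last one has " + chunks.get(chunks.size() - 1).size());
    }
}
```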
Approach:
Use a thread pool together with a CountDownLatch. Each iteration fetches a fixed batch of 30 unprocessed rows from cust_info and splits them evenly into three chunks of 10, one per thread. The CountDownLatch tracks completion: it is created with the number of threads started and passed to each worker, which calls countDown() when it finishes, while the main thread blocks in await() until every worker has counted down. Once all workers have called countDown(), the main thread continues into the next iteration, fetches the next 30 unprocessed rows, and repeats the steps until everything is processed. After querying a customer, a worker sets that row's status in cust_info to 1 to mark it as processed.
Advantage: no external dependency.
Drawback: each iteration waits for all threads to finish before the next one starts, so the threads are not fully utilized.
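The latch hand-off described above can be reduced to a few lines. This is a minimal sketch with made-up names; the counter increment stands in for the real worker body (interface call plus save):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class LatchDemo {
    /** Processes one "page" with the given number of workers; returns how many workers completed. */
    static int processPage(int threadCount) throws InterruptedException {
        CountDownLatch latch = new CountDownLatch(threadCount);
        AtomicInteger done = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(threadCount);
        for (int i = 0; i < threadCount; i++) {
            pool.execute(() -> {
                try {
                    done.incrementAndGet(); // ... query and save one chunk of 10 customers ...
                } finally {
                    latch.countDown();      // worker finished
                }
            });
        }
        latch.await();                      // main thread blocks until every worker counted down
        pool.shutdown();
        pool.awaitTermination(5, TimeUnit.SECONDS);
        return done.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(processPage(3) + " workers finished");
    }
}
```

Compared with approach one, the latch replaces the Redis counter and the polling loop, but the batch still ends at the pace of its slowest worker.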
Main files:
- CustQueryTwoThread.java: the worker thread
- CustInfoTwoServiceImpl.java: the business logic
The MyBatis XML used to fetch each batch:
<select id="selectCustInfoList" resultMap="BaseResultMap">
select
<include refid="Base_Column_List"/>
from cust_info
where status = 0 or status is null
</select>
<select id="selectCustInfoCount" resultType="java.lang.Integer">
select count(1) from cust_info
</select>
CustQueryTwoThread.java
package com.lh.service.task;
import com.alibaba.fastjson.JSON;
import com.lh.dao.sys.CustDetailMapper;
import com.lh.dao.sys.CustInfoMapper;
import com.lh.entity.CustDetail;
import com.lh.entity.CustInfo;
import com.lh.utils.SpringUtil;
import lombok.extern.slf4j.Slf4j;
import org.apache.ibatis.session.ExecutorType;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CountDownLatch;
import java.util.stream.Collectors;
@Slf4j
public class CustQueryTwoThread implements Runnable {
    private List<CustInfo> data;
    private Map<String, CustInfo> map;
    private CountDownLatch countDownLatch;
    private SqlSessionFactory sqlSessionFactory;

    public CustQueryTwoThread(List<CustInfo> data, CountDownLatch countDownLatch) {
        this.data = data;
        this.map = new HashMap<>();
        this.countDownLatch = countDownLatch;
        this.sqlSessionFactory = SpringUtil.getBean(SqlSessionFactory.class);
    }

    @Override
    public void run() {
        try {
            // 1. Build the request parameters
            String custIds = this.before();
            // 2. Call the interface (mocked here for simplicity)
            String result = this.callInterface(custIds);
            // 3. Parse the response and save the records
            this.after(result);
        } catch (Exception e) {
            log.error("Failed to query customer info!", e);
        } finally {
            // Worker finished: count down the latch
            countDownLatch.countDown();
        }
    }

    /**
     * Builds the comma-separated customer-id parameter for the interface call.
     */
    private String before() {
        return data.stream().map(a -> {
            map.put(a.getCustId(), a);
            return a.getCustId();
        }).collect(Collectors.joining(","));
    }

    /**
     * Mocks the external interface and returns its JSON response.
     */
    private String callInterface(String request) {
        List<CustDetail> list = Arrays.stream(request.split(",")).map(a -> {
            CustDetail custDetail = new CustDetail();
            custDetail.setCustId(a);
            custDetail.setAge(12);
            custDetail.setCity("北京");
            custDetail.setTel("17788998880");
            custDetail.setName("小花");
            return custDetail;
        }).collect(Collectors.toList());
        return JSON.toJSONString(list);
    }

    /**
     * Parses the interface response and saves the results in one batch session.
     */
    private void after(String result) {
        SqlSession session = sqlSessionFactory.openSession(ExecutorType.BATCH);
        try {
            CustDetailMapper custDetailMapper = session.getMapper(CustDetailMapper.class);
            CustInfoMapper custInfoMapper = session.getMapper(CustInfoMapper.class);
            List<CustDetail> list = JSON.parseArray(result, CustDetail.class);
            for (CustDetail custDetail : list) {
                custDetailMapper.insertSelective(custDetail);
                // Mark the source row as processed
                CustInfo custInfo = map.get(custDetail.getCustId());
                custInfo.setStatus(Byte.parseByte("1"));
                custInfo.setRemark("处理成功");
                custInfoMapper.updateByPrimaryKeySelective(custInfo);
            }
            session.commit();
            session.clearCache();
        } catch (Exception e) {
            log.error("Failed to save the interface response!", e);
            session.rollback();
        } finally {
            session.close();
        }
    }
}
CustInfoTwoServiceImpl.java
package com.lh.service.impl;
import com.github.pagehelper.Page;
import com.github.pagehelper.PageHelper;
import com.lh.dao.sys.CustInfoMapper;
import com.lh.entity.CustInfo;
import com.lh.service.CustInfoService;
import com.lh.service.task.CustQueryTwoThread;
import com.lh.service.task.CustThreadPoolExecutor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;
import javax.annotation.Resource;
import java.util.List;
import java.util.concurrent.CountDownLatch;
import java.util.stream.Collectors;
@Slf4j
@Service("custInfoTwoServiceImpl")
public class CustInfoTwoServiceImpl implements CustInfoService {
    private static final int PAGE_SIZE = 30;
    private static final int QUERY_SIZE = 10;

    @Resource
    private CustInfoMapper custInfoMapper;

    @Override
    public void queryJob() {
        try {
            log.info("==============cust query task begin!=================");
            int i = custInfoMapper.selectCustInfoCount();
            int pageCount = i / PAGE_SIZE + (i % PAGE_SIZE > 0 ? 1 : 0);
            int flag = 1;
            while (flag <= pageCount) {
                // Always page 1: processed rows get status = 1 and drop out of the query
                Page page = PageHelper.startPage(1, PAGE_SIZE);
                List<CustInfo> list = custInfoMapper.selectCustInfoList();
                if (list == null || list.size() == 0) {
                    break;
                }
                this.query(list);
                flag++;
            }
            log.info("==============cust query task end !=================");
        } catch (Exception e) {
            log.error("==============cust query task exception!=================", e);
        }
    }

    private void query(List<CustInfo> list) throws InterruptedException {
        int size = list.size();
        int m = size / QUERY_SIZE;
        int n = size % QUERY_SIZE;
        int threadCount = m + (n > 0 ? 1 : 0);
        CountDownLatch latch = new CountDownLatch(threadCount);
        for (int i = 0; i < m; i++) {
            List<CustInfo> collect = list.stream().skip(QUERY_SIZE * i).limit(QUERY_SIZE).collect(Collectors.toList());
            CustThreadPoolExecutor.execute(new CustQueryTwoThread(collect, latch));
        }
        if (n > 0) {
            List<CustInfo> collect = list.stream().skip(QUERY_SIZE * m).collect(Collectors.toList());
            CustThreadPoolExecutor.execute(new CustQueryTwoThread(collect, latch));
        }
        // The main thread waits until every worker has finished
        latch.await();
    }
}
Approach:
Use a thread pool together with a Semaphore. The Semaphore is initialized with 3 permits. The cust_info table is read in order of id, 30 rows at a time, and split into chunks of 10, one per thread. Before handing a chunk to a thread, the main thread must first acquire a permit (semaphore.acquire()); a worker is only started once a permit is available. When a worker finishes, it releases its permit, at which point the main thread can acquire it and hand the next prepared chunk to a new worker. After one batch has been dispatched, the next 30 rows are fetched, and the steps repeat until everything is processed.
Advantage: no external dependency, and as soon as one thread finishes a new one can start; there is no need to wait for all threads to finish first.
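The permit hand-off can be demonstrated in isolation. In this hypothetical sketch (names made up, sleep standing in for the interface call) the main thread acquires before each submission and each worker releases in finally, so the observed concurrency never exceeds the permit count:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class SemaphoreDemo {
    /** Submits `tasks` chunks with at most `permits` running at once; returns the peak concurrency seen. */
    static int run(int tasks, int permits) throws InterruptedException {
        Semaphore semaphore = new Semaphore(permits);
        AtomicInteger active = new AtomicInteger();
        AtomicInteger peak = new AtomicInteger();
        ExecutorService pool = Executors.newFixedThreadPool(tasks);
        for (int i = 0; i < tasks; i++) {
            semaphore.acquire();              // main thread blocks here when all permits are taken
            pool.execute(() -> {
                try {
                    int now = active.incrementAndGet();
                    peak.accumulateAndGet(now, Math::max);
                    Thread.sleep(50);         // ... query one chunk of 10 customers ...
                    active.decrementAndGet();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                } finally {
                    semaphore.release();      // free a permit so the next chunk can start
                }
            });
        }
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        return peak.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("peak concurrency = " + run(9, 3));
    }
}
```

Unlike the latch version, a new chunk is dispatched the moment any single worker releases its permit, which is exactly the advantage claimed above.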
Main files:
- CustQueryThreeThread.java: the worker thread
- CustInfoThreeServiceImpl.java: the business logic
The MyBatis XML used to fetch each batch:
<select id="selectList" resultMap="BaseResultMap">
select
<include refid="Base_Column_List"/>
from cust_info
<!-- ordered by id so that page numbers are stable, as the approach requires -->
order by id
</select>
<select id="selectCustInfoCount" resultType="java.lang.Integer">
select count(1) from cust_info
</select>
CustQueryThreeThread.java
package com.lh.service.task;
import com.alibaba.fastjson.JSON;
import com.lh.dao.sys.CustDetailMapper;
import com.lh.dao.sys.CustInfoMapper;
import com.lh.entity.CustDetail;
import com.lh.entity.CustInfo;
import com.lh.utils.SpringUtil;
import lombok.extern.slf4j.Slf4j;
import org.apache.ibatis.session.ExecutorType;
import org.apache.ibatis.session.SqlSession;
import org.apache.ibatis.session.SqlSessionFactory;
import java.util.Arrays;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.Semaphore;
import java.util.stream.Collectors;
@Slf4j
public class CustQueryThreeThread implements Runnable {
    private List<CustInfo> data;
    private Map<String, CustInfo> map;
    private Semaphore semaphore;
    private SqlSessionFactory sqlSessionFactory;

    public CustQueryThreeThread(List<CustInfo> data, Semaphore semaphore) {
        this.data = data;
        this.map = new HashMap<>();
        this.semaphore = semaphore;
        this.sqlSessionFactory = SpringUtil.getBean(SqlSessionFactory.class);
    }

    @Override
    public void run() {
        try {
            // 1. Build the request parameters
            String custIds = this.before();
            // 2. Call the interface (mocked; the sleep makes the permit hand-off observable)
            System.out.println("sleep ===== begin");
            Thread.sleep(10000);
            System.out.println("sleep ===== end");
            String result = this.callInterface(custIds);
            // 3. Parse the response and save the records
            this.after(result);
        } catch (Exception e) {
            log.error("Failed to query customer info!", e);
        } finally {
            // Worker finished: release the permit
            System.out.println("sleep ===== release ");
            semaphore.release();
        }
    }

    /**
     * Builds the comma-separated customer-id parameter for the interface call.
     */
    private String before() {
        return data.stream().map(a -> {
            map.put(a.getCustId(), a);
            return a.getCustId();
        }).collect(Collectors.joining(","));
    }

    /**
     * Mocks the external interface and returns its JSON response.
     */
    private String callInterface(String request) {
        List<CustDetail> list = Arrays.stream(request.split(",")).map(a -> {
            CustDetail custDetail = new CustDetail();
            custDetail.setCustId(a);
            custDetail.setAge(12);
            custDetail.setCity("北京");
            custDetail.setTel("17788998880");
            custDetail.setName("小花");
            return custDetail;
        }).collect(Collectors.toList());
        return JSON.toJSONString(list);
    }

    /**
     * Parses the interface response and saves the results in one batch session.
     */
    private void after(String result) {
        SqlSession session = sqlSessionFactory.openSession(ExecutorType.BATCH);
        try {
            CustDetailMapper custDetailMapper = session.getMapper(CustDetailMapper.class);
            CustInfoMapper custInfoMapper = session.getMapper(CustInfoMapper.class);
            List<CustDetail> list = JSON.parseArray(result, CustDetail.class);
            for (CustDetail custDetail : list) {
                custDetailMapper.insertSelective(custDetail);
                // Mark the source row as processed
                CustInfo custInfo = map.get(custDetail.getCustId());
                custInfo.setStatus(Byte.parseByte("1"));
                custInfo.setRemark("处理成功");
                custInfoMapper.updateByPrimaryKeySelective(custInfo);
            }
            session.commit();
            session.clearCache();
        } catch (Exception e) {
            log.error("Failed to save the interface response!", e);
            session.rollback();
        } finally {
            session.close();
        }
    }
}
CustInfoThreeServiceImpl.java
package com.lh.service.impl;
import com.github.pagehelper.Page;
import com.github.pagehelper.PageHelper;
import com.lh.dao.sys.CustInfoMapper;
import com.lh.dao.sys.CustQueryRecordMapper;
import com.lh.entity.CustInfo;
import com.lh.entity.CustQueryRecord;
import com.lh.service.CustInfoService;
import com.lh.service.task.CustQueryThreeThread;
import com.lh.service.task.CustThreadPoolExecutor;
import lombok.extern.slf4j.Slf4j;
import org.springframework.stereotype.Service;
import javax.annotation.Resource;
import java.text.SimpleDateFormat;
import java.util.Calendar;
import java.util.Date;
import java.util.List;
import java.util.concurrent.Semaphore;
import java.util.stream.Collectors;
@Slf4j
@Service("custInfoThreeServiceImpl")
public class CustInfoThreeServiceImpl implements CustInfoService {
    private static final int PAGE_SIZE = 30;
    private static final int QUERY_SIZE = 10;

    @Resource
    private CustInfoMapper custInfoMapper;
    @Resource
    private CustQueryRecordMapper custQueryRecordMapper;

    @Override
    public void queryJob() {
        try {
            log.info("==============cust query task begin!=================");
            Semaphore semaphore = new Semaphore(3);
            int i = custInfoMapper.selectCustInfoCount();
            int pageCount = i / PAGE_SIZE + (i % PAGE_SIZE > 0 ? 1 : 0);
            int flag = 1;
            while (flag <= pageCount) {
                // No status filter in selectList, so real paging by page number is required here
                Page page = PageHelper.startPage(flag, PAGE_SIZE);
                List<CustInfo> list = custInfoMapper.selectList();
                if (list == null || list.size() == 0) {
                    break;
                }
                this.query(list, semaphore);
                this.saveRecord(flag, list.size());
                flag++;
            }
            log.info("==============cust query task end !=================");
        } catch (Exception e) {
            log.error("==============cust query task exception!=================", e);
        }
    }

    /**
     * Splits the batch into chunks and hands each chunk to a worker as soon
     * as a permit is available.
     */
    private void query(List<CustInfo> list, Semaphore semaphore) throws InterruptedException {
        int size = list.size();
        int m = size / QUERY_SIZE;
        int n = size % QUERY_SIZE;
        for (int i = 0; i < m; i++) {
            List<CustInfo> collect = list.stream().skip(QUERY_SIZE * i).limit(QUERY_SIZE).collect(Collectors.toList());
            // Acquire a permit first; this blocks until a worker has released one
            semaphore.acquire();
            CustThreadPoolExecutor.execute(new CustQueryThreeThread(collect, semaphore));
        }
        if (n > 0) {
            List<CustInfo> collect = list.stream().skip(QUERY_SIZE * m).collect(Collectors.toList());
            // The original code did not acquire a permit for the leftover chunk,
            // which could briefly exceed the three-worker limit; acquire here too
            semaphore.acquire();
            CustThreadPoolExecutor.execute(new CustQueryThreeThread(collect, semaphore));
        }
    }

    /**
     * Records the dispatched page so that an aborted run can resume later.
     * Note that the workers for this page may still be running when the
     * record is written.
     */
    private void saveRecord(int page, int size) {
        SimpleDateFormat sdf = new SimpleDateFormat("yyyy-MM-dd");
        String busiDate = sdf.format(new Date());
        CustQueryRecord record = new CustQueryRecord();
        record.setRecordId(busiDate + "_" + page);
        record.setPage(page);
        record.setBusiDate(busiDate);
        record.setRemark(String.valueOf(size));
        custQueryRecordMapper.insertSelective(record);
    }
}
A query-record table was added for this approach: every successfully dispatched page is stored together with its page number, which makes restarting after a failure straightforward. At the start of the method, look up the maximum page for the current busi_date: if a record exists, start from that maximum plus 1; otherwise start from the first page (page = 1). Note that the maximum must be compared against pageCount (int pageCount = i / PAGE_SIZE + (i % PAGE_SIZE > 0 ? 1 : 0);): if it is greater than or equal to pageCount, return immediately. Sample cust_query_record data:
id | record_id | page | remark | busi_date | input_time | update_time |
---|---|---|---|---|---|---|
1 | 2019-11-23_1 | 1 | 30 | 2019-11-23 | 2019-11-23 19:09:12 | 2019-11-23 19:09:12 |
2 | 2019-11-23_2 | 2 | 30 | 2019-11-23 | 2019-11-23 19:09:22 | 2019-11-23 19:09:22 |
3 | 2019-11-23_3 | 3 | 30 | 2019-11-23 | 2019-11-23 19:09:32 | 2019-11-23 19:09:32 |
4 | 2019-11-23_4 | 4 | 10 | 2019-11-23 | 2019-11-23 19:09:42 | 2019-11-23 19:09:42 |
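The resume rule described above can be captured in a small helper. The method and parameter names here are made up for illustration; maxRecordedPage would come from a max(page) query on cust_query_record for the current busi_date:

```java
public class ResumeDemo {
    /**
     * Computes which page to start from after a restart, based on the largest
     * page already recorded for today's busi_date. Returns -1 when every page
     * has already been processed and the job can return immediately.
     */
    static int startPage(Integer maxRecordedPage, int pageCount) {
        if (maxRecordedPage == null) {
            return 1;                 // no record yet: start from the first page
        }
        if (maxRecordedPage >= pageCount) {
            return -1;                // all pages done: nothing left to do
        }
        return maxRecordedPage + 1;   // resume right after the last successful page
    }

    public static void main(String[] args) {
        System.out.println(startPage(null, 4)); // fresh run
        System.out.println(startPage(2, 4));    // crashed after page 2
        System.out.println(startPage(4, 4));    // everything already processed
    }
}
```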