Druid monitoring console URL: http://localhost:8080/druid/sql.html
To disable the Druid monitoring page:
spring.datasource.druid.filter.config.enabled=false
spring.datasource.druid.web-stat-filter.enabled=false
spring.datasource.druid.stat-view-servlet.enabled=false
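Conversely, if you want to keep the monitoring page, a minimal sketch of the opposite configuration (assuming the druid-spring-boot-starter property names; the login credentials and URL pattern below are illustrative):

# enable the built-in StatViewServlet (monitoring page)
spring.datasource.druid.stat-view-servlet.enabled=true
spring.datasource.druid.stat-view-servlet.url-pattern=/druid/*
# illustrative console credentials
spring.datasource.druid.stat-view-servlet.login-username=admin
spring.datasource.druid.stat-view-servlet.login-password=admin
# collect web request statistics
spring.datasource.druid.web-stat-filter.enabled=true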
Following the steps in the diagram above, here is a detailed walkthrough.
(1) Call DruidDataSourceFactory.createDataSource(properties) to initialize a DruidDataSource object. This only initializes the object; no connections are created yet. The properties argument holds the contents of the druid.properties configuration file.
public static DataSource createDataSource(Map properties) throws Exception {
    DruidDataSource dataSource = new DruidDataSource();
    config(dataSource, properties);
    return dataSource;
}
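For reference, a minimal usage sketch of this factory call (assuming a druid.properties file on the classpath using the factory's standard keys such as url, username, password and initialSize; the file name here is illustrative):

import java.io.InputStream;
import java.util.Properties;

import javax.sql.DataSource;

import com.alibaba.druid.pool.DruidDataSourceFactory;

public class DruidFactoryDemo {
    public static void main(String[] args) throws Exception {
        Properties properties = new Properties();
        // load druid.properties from the classpath (illustrative file name)
        try (InputStream in = DruidFactoryDemo.class
                .getClassLoader().getResourceAsStream("druid.properties")) {
            properties.load(in);
        }

        // only builds and configures the DruidDataSource;
        // no physical connections are opened at this point
        DataSource dataSource = DruidDataSourceFactory.createDataSource(properties);
        System.out.println("DataSource created: " + dataSource);
    }
}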
(2) Call DruidDataSource.getConnection() to obtain a connection; maxWait is the timeout configured in the properties file.
@Override
public DruidPooledConnection getConnection() throws SQLException {
    return getConnection(maxWait);
}
(3) init() creates the connection pool. The inited flag starts out as false, so the first call to getConnection() initializes initialSize connections; when a createScheduler is configured and asyncInit is enabled, the connections (socket channels) are created asynchronously on that thread pool, with submitCreateTask(true) submitting each creation task. The second fragment below is taken from init().
public DruidPooledConnection getConnection(long maxWaitMillis) throws SQLException {
    init();

    if (filters.size() > 0) {
        FilterChainImpl filterChain = new FilterChainImpl(this);
        return filterChain.dataSource_connect(this, maxWaitMillis);
    } else {
        return getConnectionDirect(maxWaitMillis);
    }
}

if (inited) {
    return;
}

if (createScheduler != null && asyncInit) {
    for (int i = 0; i < initialSize; ++i) {
        submitCreateTask(true);
    }
}
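A sketch of the pool-sizing settings involved in this step, configured programmatically (the values and connection settings are arbitrary; setAsyncInit and setCreateScheduler are assumed to be available in the Druid version being discussed):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;

import com.alibaba.druid.pool.DruidDataSource;

public class PoolInitDemo {
    public static void main(String[] args) throws Exception {
        DruidDataSource dataSource = new DruidDataSource();
        dataSource.setUrl("jdbc:mysql://localhost:3306/test"); // illustrative
        dataSource.setUsername("root");
        dataSource.setPassword("root");

        dataSource.setInitialSize(5);  // connections created on first init()
        dataSource.setMaxActive(20);   // upper bound on live connections

        // with a create scheduler and asyncInit, the initialSize connections
        // are created asynchronously via submitCreateTask(true)
        ScheduledExecutorService createScheduler = Executors.newScheduledThreadPool(2);
        dataSource.setCreateScheduler(createScheduler);
        dataSource.setAsyncInit(true);

        dataSource.init(); // or triggered lazily by the first getConnection()
    }
}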
(4) Call DruidDataSource.pollLast(long nanos), where the parameter is the remaining timeout in nanoseconds, to take the connection at the tail of the pool array. When the pool is empty (poolingCount == 0), emptySignal() submits a connection-creation task; if the number of live connections has already reached the maximum, creation fails, and as long as the timeout has not expired the loop repeats. Once a connection is returned to the pool via close(), it is placed back into the array and the tail connection can be taken and used. Otherwise, on timeout pollLast() returns null and the caller throws an exception.
private DruidConnectionHolder pollLast(long nanos) throws InterruptedException, SQLException {
    long estimate = nanos;

    for (;;) {
        if (poolingCount == 0) {
            emptySignal(); // send signal to CreateThread create connection

            if (failFast && isFailContinuous()) {
                throw new DataSourceNotAvailableException(createError);
            }

            if (estimate <= 0) {
                waitNanosLocal.set(nanos - estimate);
                return null;
            }

            notEmptyWaitThreadCount++;
            if (notEmptyWaitThreadCount > notEmptyWaitThreadPeak) {
                notEmptyWaitThreadPeak = notEmptyWaitThreadCount;
            }

            try {
                long startEstimate = estimate;
                estimate = notEmpty.awaitNanos(estimate); // signaled by recycle or creator
                notEmptyWaitCount++;
                notEmptyWaitNanos += (startEstimate - estimate);

                if (!enable) {
                    connectErrorCountUpdater.incrementAndGet(this);
                    if (disableException != null) {
                        throw disableException;
                    }

                    throw new DataSourceDisableException();
                }
            } catch (InterruptedException ie) {
                notEmpty.signal(); // propagate to non-interrupted thread
                notEmptySignalCount++;
                throw ie;
            } finally {
                notEmptyWaitThreadCount--;
            }
            if (poolingCount == 0) {
                // not timed out yet: continue the loop and try again
                if (estimate > 0) {
                    continue;
                }

                // timed out: return null and let the caller throw the exception
                waitNanosLocal.set(nanos - estimate);
                return null;
            }
        }

        // take the connection holder at the tail of the array
        decrementPoolingCount();
        DruidConnectionHolder last = connections[poolingCount];
        connections[poolingCount] = null;

        long waitNanos = nanos - estimate;
        last.setLastNotEmptyWaitNanos(waitNanos);

        return last;
    }
}
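To make the control flow easier to follow, here is a simplified, self-contained model of the same pattern: a LIFO array guarded by a lock, with a timed wait on a notEmpty condition. This is an illustration of the technique only, not Druid's actual code, which additionally tracks wait statistics, fail-fast state, and the creator-thread signalling shown above:

import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// simplified model of the pollLast()/putLast() pair
public class SimpleLifoPool<T> {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private final Object[] items;
    private int poolingCount;

    public SimpleLifoPool(int capacity) {
        this.items = new Object[capacity];
    }

    // "recycle" side, mirroring putLast(): append to the tail and signal waiters
    public boolean put(T item) {
        lock.lock();
        try {
            if (poolingCount >= items.length) {
                return false;
            }
            items[poolingCount++] = item;
            notEmpty.signal();
            return true;
        } finally {
            lock.unlock();
        }
    }

    // "borrow" side, mirroring pollLast(): take the tail element,
    // waiting up to maxWaitMillis; returns null on timeout
    @SuppressWarnings("unchecked")
    public T pollLast(long maxWaitMillis) throws InterruptedException {
        long nanos = TimeUnit.MILLISECONDS.toNanos(maxWaitMillis);
        lock.lock();
        try {
            for (;;) {
                if (poolingCount == 0) {
                    if (nanos <= 0) {
                        return null; // timed out; caller decides how to fail
                    }
                    nanos = notEmpty.awaitNanos(nanos); // woken by put()
                    continue;
                }
                T last = (T) items[--poolingCount]; // tail element, LIFO order
                items[poolingCount] = null;
                return last;
            }
        } finally {
            lock.unlock();
        }
    }
}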
(5) Connection.close() releases the connection (the implementing method is DruidPooledConnection.close()); instead of physically closing it, DruidDataSource.putLast() puts the connection back at the tail of the pool array.
boolean putLast(DruidConnectionHolder e, long lastActiveTimeMillis) {
    if (poolingCount >= maxActive || e.discard) {
        return false;
    }

    e.lastActiveTimeMillis = lastActiveTimeMillis;
    connections[poolingCount] = e;
    incrementPoolingCount();

    if (poolingCount > poolingPeak) {
        poolingPeak = poolingCount;
        poolingPeakTime = lastActiveTimeMillis;
    }

    notEmpty.signal();
    notEmptySignalCount++;

    return true;
}
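A usage sketch showing the effect of step (5): closing a pooled connection hands it back to the pool rather than closing the physical connection. The connection settings are illustrative, and getPoolingCount() is assumed here as Druid's counter of idle pooled connections:

import java.sql.Connection;

import com.alibaba.druid.pool.DruidDataSource;

public class RecycleDemo {
    public static void main(String[] args) throws Exception {
        DruidDataSource dataSource = new DruidDataSource();
        dataSource.setUrl("jdbc:mysql://localhost:3306/test"); // illustrative
        dataSource.setUsername("root");
        dataSource.setPassword("root");
        dataSource.setInitialSize(1);
        dataSource.setMaxActive(1);
        dataSource.setMaxWait(3000);

        try (Connection conn = dataSource.getConnection()) {
            // while borrowed, the single connection is not in the pool (expect 0)
            System.out.println("pooling while borrowed: " + dataSource.getPoolingCount());
        } // close() -> recycle() -> putLast(): the connection goes back to the tail

        // after close() the connection is available again (expect 1)
        System.out.println("pooling after close: " + dataSource.getPoolingCount());

        dataSource.close();
    }
}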
The above is my own understanding of the overall flow from reading the source code; feel free to point out anything that is inaccurate or missing.