Analysis of the Druid Connection Pool Implementation

This write-up sets out to answer a few questions:
1. Does the pool use a queue, and if so, is it bounded or unbounded?
2. What is the rejection policy?
3. How is the pool wired into Spring?
4. How does Spring obtain a database connection through the pool?

The project runs on Spring Boot and pulls in Druid via druid-spring-boot-starter, so the analysis starts from the starter.

druid-spring-boot-starter analysis

spring.factories

org.springframework.boot.autoconfigure.EnableAutoConfiguration=\
com.alibaba.druid.spring.boot.autoconfigure.DruidDataSourceAutoConfigure

DruidDataSourceAutoConfigure
This class imports a number of configurations. Going by the names, everything Stat-related concerns monitoring and can be skipped here. The interesting part is the DataSource bean: it takes effect only when no other DataSource bean exists (@ConditionalOnMissingBean) and returns the wrapper class DruidDataSourceWrapper.

@Configuration
@ConditionalOnClass({DruidDataSource.class})
@AutoConfigureBefore({DataSourceAutoConfiguration.class})
@EnableConfigurationProperties({DruidStatProperties.class, DataSourceProperties.class})
@Import({DruidSpringAopConfiguration.class, DruidStatViewServletConfiguration.class, DruidWebStatFilterConfiguration.class, DruidFilterConfiguration.class})
public class DruidDataSourceAutoConfigure {
    private static final Logger LOGGER = LoggerFactory.getLogger(DruidDataSourceAutoConfigure.class);

    public DruidDataSourceAutoConfigure() {
    }

    @Bean(
        initMethod = "init"
    )
    @ConditionalOnMissingBean
    public DataSource dataSource() {
        LOGGER.info("Init DruidDataSource");
        return new DruidDataSourceWrapper();
    }
}

Part of DruidDataSourceWrapper
It extends DruidDataSource, implements InitializingBean to fill in the core connection properties, and adds a number of filter-registration methods.

public void afterPropertiesSet() throws Exception {
        if (super.getUsername() == null) {
            super.setUsername(this.basicProperties.determineUsername());
        }

        if (super.getPassword() == null) {
            super.setPassword(this.basicProperties.determinePassword());
        }

        if (super.getUrl() == null) {
            super.setUrl(this.basicProperties.determineUrl());
        }

        if (super.getDriverClassName() == null) {
            super.setDriverClassName(this.basicProperties.getDriverClassName());
        }

    }

DruidDataSource
The core class. It extends the DruidAbstractDataSource template and implements DruidDataSourceMBean, ManagedDataSource, Referenceable, Closeable, Cloneable, ConnectionPoolDataSource, MBeanRegistration, among other interfaces.
Some fields worth noting:

    private volatile DruidConnectionHolder[] connections;
    private DruidConnectionHolder[] evictConnections;
    private DruidConnectionHolder[] keepAliveConnections;
    private DruidDataSource.CreateConnectionThread createConnectionThread;
    private DruidDataSource.DestroyConnectionThread destroyConnectionThread;
    private LogStatsThread   logStatsThread;
    public static ThreadLocal waitNanosLocal = new ThreadLocal();
    private final CountDownLatch initedLatch;
    private volatile boolean enable;

    protected static final AtomicLongFieldUpdater recycleErrorCountUpdater = AtomicLongFieldUpdater.newUpdater(DruidDataSource.class, "recycleErrorCount");
    protected static final AtomicLongFieldUpdater connectErrorCountUpdater = AtomicLongFieldUpdater.newUpdater(DruidDataSource.class, "connectErrorCount");
    protected static final AtomicLongFieldUpdater resetCountUpdater = AtomicLongFieldUpdater.newUpdater(DruidDataSource.class, "resetCount");

The first field, connections, clearly holds the database connections, so DruidConnectionHolder must be the class that wraps a single connection.
DruidConnectionHolder

  protected final DruidAbstractDataSource       dataSource;
  protected final long                          connectionId;
  protected final Connection                    conn;
  protected final List connectionEventListeners = new CopyOnWriteArrayList();
  protected volatile long                       lastActiveTimeMillis;
  protected volatile long                       lastValidTimeMillis;
  private long                                  useCount                 = 0;
  private long                                  lastNotEmptyWaitNanos;
  protected final boolean                       defaultAutoCommit;
  protected boolean                             discard                  = false;
  public DruidConnectionHolder(DruidAbstractDataSource dataSource, Connection conn, long connectNanoSpan,
                                 Map variables, Map globleVariables)
                                                                                                    throws SQLException{
        this.dataSource = dataSource;
        this.conn = conn;
        this.createNanoSpan = connectNanoSpan;
        this.variables = variables;
        this.globleVariables = globleVariables;

        this.connectTimeMillis = System.currentTimeMillis();
        this.lastActiveTimeMillis = connectTimeMillis;

        this.underlyingAutoCommit = conn.getAutoCommit();

        if (conn instanceof WrapperProxy) {
            this.connectionId = ((WrapperProxy) conn).getId();
        } else {
            this.connectionId = dataSource.createConnectionId();
        }

        {
            boolean initUnderlyHoldability = !holdabilityUnsupported;
            if (JdbcConstants.SYBASE.equals(dataSource.dbType) //
                || JdbcConstants.DB2.equals(dataSource.dbType) //
                || JdbcConstants.HIVE.equals(dataSource.dbType) //
                || JdbcConstants.ODPS.equals(dataSource.dbType) //
            ) {
                initUnderlyHoldability = false;
            }
            if (initUnderlyHoldability) {
                try {
                    this.underlyingHoldability = conn.getHoldability();
                } catch (UnsupportedOperationException e) {
                    holdabilityUnsupported = true;
                    LOG.warn("getHoldability unsupported", e);
                } catch (SQLFeatureNotSupportedException e) {
                    holdabilityUnsupported = true;
                    LOG.warn("getHoldability unsupported", e);
                } catch (SQLException e) {
                    // bug fixed for hive jdbc-driver
                    if ("Method not supported".equals(e.getMessage())) {
                        holdabilityUnsupported = true;
                    }
                    LOG.warn("getHoldability error", e);
                }
            }
        }

        this.underlyingReadOnly = conn.isReadOnly();
        try {
            this.underlyingTransactionIsolation = conn.getTransactionIsolation();
        } catch (SQLException e) {
            // compartible for alibaba corba
            if ("HY000".equals(e.getSQLState())
                    || "com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException".equals(e.getClass().getName())) {
                // skip
            } else {
                throw e;
            }
        }

        this.defaultHoldability = underlyingHoldability;
        this.defaultTransactionIsolation = underlyingTransactionIsolation;
        this.defaultAutoCommit = underlyingAutoCommit;
        this.defaultReadOnly = underlyingReadOnly;
    }

The constructor queries the database for the current transaction isolation level and records connectionId, autoCommit and connectTimeMillis; nothing else needs special attention. Back to the DruidDataSource class.

    private volatile DruidConnectionHolder[] connections;
    private DruidConnectionHolder[]          evictConnections;
    private DruidConnectionHolder[]          keepAliveConnections;

connections holds the idle pooled connections; evictConnections is a scratch array for connections selected for eviction; keepAliveConnections is a scratch array for connections queued for keep-alive checks.
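The connections array behaves as a LIFO stack guarded by poolingCount, which points one past the top element. A minimal sketch of that bookkeeping (class and method names are illustrative, not Druid's):

```java
// Minimal sketch of Druid's array-as-stack bookkeeping:
// poolingCount counts the idle connections and marks the stack top.
class HolderStack {
    private final Object[] connections;
    private int poolingCount; // number of idle pooled connections

    HolderStack(int maxActive) {
        this.connections = new Object[maxActive]; // sized to maxActive, as in init()
    }

    // Mirrors how put() stores a holder created by CreateConnectionThread.
    boolean putLast(Object holder) {
        if (poolingCount >= connections.length) {
            return false; // pool full; the caller closes the physical connection
        }
        connections[poolingCount++] = holder;
        return true;
    }

    // Mirrors takeLast()/pollLast(): the most recently returned connection is reused first.
    Object takeLast() {
        if (poolingCount == 0) {
            return null; // the real code awaits the notEmpty condition here
        }
        Object last = connections[--poolingCount];
        connections[poolingCount] = null;
        return last;
    }
}
```

The LIFO order means hot connections are reused first, leaving the oldest idle ones at the bottom of the array where shrink() can evict them.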

Next, DruidDataSource's init() method. At 200-odd lines it is fairly long:

public void init() throws SQLException {
        if (inited) {
            return;
        }

        final ReentrantLock lock = this.lock;
        try {
            lock.lockInterruptibly();
        } catch (InterruptedException e) {
            throw new SQLException("interrupt", e);
        }

        boolean init = false;
        try {
            if (inited) {
                return;
            }

            initStackTrace = Utils.toString(Thread.currentThread().getStackTrace());

            this.id = DruidDriver.createDataSourceId();
            if (this.id > 1) {
                long delta = (this.id - 1) * 100000;
                this.connectionIdSeedUpdater.addAndGet(this, delta);
                this.statementIdSeedUpdater.addAndGet(this, delta);
                this.resultSetIdSeedUpdater.addAndGet(this, delta);
                this.transactionIdSeedUpdater.addAndGet(this, delta);
            }

            if (this.jdbcUrl != null) {
                this.jdbcUrl = this.jdbcUrl.trim();
                initFromWrapDriverUrl();
            }

            for (Filter filter : filters) {
                filter.init(this);
            }

            if (this.dbType == null || this.dbType.length() == 0) {
                this.dbType = JdbcUtils.getDbType(jdbcUrl, null);
            }

            if (JdbcConstants.MYSQL.equals(this.dbType)
                    || JdbcConstants.MARIADB.equals(this.dbType)
                    || JdbcConstants.ALIYUN_ADS.equals(this.dbType)) {
                boolean cacheServerConfigurationSet = false;
                if (this.connectProperties.containsKey("cacheServerConfiguration")) {
                    cacheServerConfigurationSet = true;
                } else if (this.jdbcUrl.indexOf("cacheServerConfiguration") != -1) {
                    cacheServerConfigurationSet = true;
                }
                if (cacheServerConfigurationSet) {
                    this.connectProperties.put("cacheServerConfiguration", "true");
                }
            }

            if (maxActive <= 0) {
                throw new IllegalArgumentException("illegal maxActive " + maxActive);
            }

            if (maxActive < minIdle) {
                throw new IllegalArgumentException("illegal maxActive " + maxActive);
            }

            if (getInitialSize() > maxActive) {
                throw new IllegalArgumentException("illegal initialSize " + this.initialSize + ", maxActive " + maxActive);
            }

            if (timeBetweenLogStatsMillis > 0 && useGlobalDataSourceStat) {
                throw new IllegalArgumentException("timeBetweenLogStatsMillis not support useGlobalDataSourceStat=true");
            }

            if (maxEvictableIdleTimeMillis < minEvictableIdleTimeMillis) {
                throw new SQLException("maxEvictableIdleTimeMillis must be grater than minEvictableIdleTimeMillis");
            }

            if (this.driverClass != null) {
                this.driverClass = driverClass.trim();
            }

            initFromSPIServiceLoader();

            if (this.driver == null) {
                if (this.driverClass == null || this.driverClass.isEmpty()) {
                    this.driverClass = JdbcUtils.getDriverClassName(this.jdbcUrl);
                }

                if (MockDriver.class.getName().equals(driverClass)) {
                    driver = MockDriver.instance;
                } else {
                    driver = JdbcUtils.createDriver(driverClassLoader, driverClass);
                }
            } else {
                if (this.driverClass == null) {
                    this.driverClass = driver.getClass().getName();
                }
            }

            initCheck();

            initExceptionSorter();
            initValidConnectionChecker();
            validationQueryCheck();

            if (isUseGlobalDataSourceStat()) {
                dataSourceStat = JdbcDataSourceStat.getGlobal();
                if (dataSourceStat == null) {
                    dataSourceStat = new JdbcDataSourceStat("Global", "Global", this.dbType);
                    JdbcDataSourceStat.setGlobal(dataSourceStat);
                }
                if (dataSourceStat.getDbType() == null) {
                    dataSourceStat.setDbType(this.dbType);
                }
            } else {
                dataSourceStat = new JdbcDataSourceStat(this.name, this.jdbcUrl, this.dbType, this.connectProperties);
            }
            dataSourceStat.setResetStatEnable(this.resetStatEnable);

            connections = new DruidConnectionHolder[maxActive];
            evictConnections = new DruidConnectionHolder[maxActive];
            keepAliveConnections = new DruidConnectionHolder[maxActive];

            SQLException connectError = null;

            if (createScheduler != null) {
                for (int i = 0; i < initialSize; ++i) {
                    createTaskCount++;
                    CreateConnectionTask task = new CreateConnectionTask(true);
                    this.createSchedulerFuture = createScheduler.submit(task);
                }
            } else if (!asyncInit) {
                try {
                    // init connections
                    for (int i = 0; i < initialSize; ++i) {
                        PhysicalConnectionInfo pyConnectInfo = createPhysicalConnection();
                        DruidConnectionHolder holder = new DruidConnectionHolder(this, pyConnectInfo);
                        connections[poolingCount] = holder;
                        incrementPoolingCount();
                    }

                    if (poolingCount > 0) {
                        poolingPeak = poolingCount;
                        poolingPeakTime = System.currentTimeMillis();
                    }
                } catch (SQLException ex) {
                    LOG.error("init datasource error, url: " + this.getUrl(), ex);
                    connectError = ex;
                }
            }

            createAndLogThread();
            createAndStartCreatorThread();
            createAndStartDestroyThread();

            initedLatch.await();
            init = true;

            initedTime = new Date();
            registerMbean();

            if (connectError != null && poolingCount == 0) {
                throw connectError;
            }

            if (keepAlive) {
                // async fill to minIdle
                if (createScheduler != null) {
                    for (int i = 0; i < minIdle; ++i) {
                        createTaskCount++;
                        CreateConnectionTask task = new CreateConnectionTask(true);
                        this.createSchedulerFuture = createScheduler.submit(task);
                    }
                } else {
                    this.emptySignal();
                }
            }

        } catch (SQLException e) {
            LOG.error("{dataSource-" + this.getID() + "} init error", e);
            throw e;
        } catch (InterruptedException e) {
            throw new SQLException(e.getMessage(), e);
        } catch (RuntimeException e){
            LOG.error("{dataSource-" + this.getID() + "} init error", e);
            throw e;
        } catch (Error e){
            LOG.error("{dataSource-" + this.getID() + "} init error", e);
            throw e;

        } finally {
            inited = true;
            lock.unlock();

            if (init && LOG.isInfoEnabled()) {
                String msg = "{dataSource-" + this.getID();

                if (this.name != null && !this.name.isEmpty()) {
                    msg += ",";
                    msg += this.name;
                }

                msg += "} inited";

                LOG.info(msg);
            }
        }
    }

init() sets up the pool (the first pool gets id 1). poolingCount ends up equal to initialSize (2 in this setup), while connections, evictConnections and keepAliveConnections are all sized to maxActive. It then starts three threads, CreateConnectionThread, LogStatsThread and DestroyConnectionThread, all as daemon threads.
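The initedLatch.await() call in init() makes initialization block until the worker threads are actually running. A simplified sketch of that startup handshake (names borrowed from Druid; the worker loops are elided):

```java
import java.util.concurrent.CountDownLatch;

// Sketch of how init() waits for its worker threads to start.
// Druid's initedLatch is a CountDownLatch(2): one count per worker thread.
class PoolBootstrap {
    private final CountDownLatch initedLatch = new CountDownLatch(2);

    void start() throws InterruptedException {
        startDaemon("Druid-ConnectionPool-Create");
        startDaemon("Druid-ConnectionPool-Destroy");
        initedLatch.await(); // init() proceeds only after both threads are running
    }

    private void startDaemon(String name) {
        Thread t = new Thread(() -> {
            initedLatch.countDown(); // first statement of each worker's run()
            // ... worker loop would go here ...
        }, name);
        t.setDaemon(true); // daemon: does not keep the JVM alive on its own
        t.start();
    }
}
```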

PhysicalConnectionInfo pyConnectInfo = this.createPhysicalConnection();

Each call creates one physical database connection. Querying MySQL's PERFORMANCE_SCHEMA.threads afterwards shows two new rows, one per initial connection.



Now look at the JVM threads:

java -classpath "sa-jdi.jar" sun.jvm.hotspot.HSDB

HSDB confirms the threads were created.



CreateConnectionThread

          while(true) {
                PhysicalConnectionInfo connection;
                label338: {
                    while(true) {
                        try {
                            DruidDataSource.this.lock.lockInterruptibly();
                        } catch (InterruptedException var33) {
                            break;
                        }

                        long discardCount = DruidDataSource.this.discardCount;
                        boolean discardChanged = discardCount - lastDiscardCount > 0L;
                        lastDiscardCount = discardCount;

                        try {
                            boolean emptyWait = true;
                            if (DruidDataSource.this.createError != null && DruidDataSource.this.poolingCount == 0 && !discardChanged) {
                                emptyWait = false;
                            }

                            if (emptyWait && DruidDataSource.this.asyncInit && DruidDataSource.this.createCount < (long)DruidDataSource.this.initialSize) {
                                emptyWait = false;
                            }

                            if (emptyWait) {
                                if (DruidDataSource.this.poolingCount >= DruidDataSource.this.notEmptyWaitThreadCount && (!DruidDataSource.this.keepAlive || DruidDataSource.this.activeCount + DruidDataSource.this.poolingCount >= DruidDataSource.this.minIdle)) {
                                    DruidDataSource.this.empty.await();
                                }

                                if (DruidDataSource.this.activeCount + DruidDataSource.this.poolingCount >= DruidDataSource.this.maxActive) {
                                    DruidDataSource.this.empty.await();
                                    continue;
                                }
                            }
                        } catch (InterruptedException var31) {
                            DruidDataSource.this.lastCreateError = var31;
                            DruidDataSource.this.lastErrorTimeMillis = System.currentTimeMillis();
                            if (!DruidDataSource.this.closing) {
                                DruidDataSource.LOG.error("create connection Thread Interrupted, url: " + DruidDataSource.this.jdbcUrl, var31);
                            }
                            break;
                        } finally {
                            DruidDataSource.this.lock.unlock();
                        }

                        connection = null;

                        try {
                            connection = DruidDataSource.this.createPhysicalConnection();
                            break label338;
                        } catch (SQLException var28) {
                            DruidDataSource.LOG.error("create connection SQLException, url: " + DruidDataSource.this.jdbcUrl + ", errorCode " + var28.getErrorCode() + ", state " + var28.getSQLState(), var28);
                            ++errorCount;
                            if (errorCount <= DruidDataSource.this.connectionErrorRetryAttempts || DruidDataSource.this.timeBetweenConnectErrorMillis <= 0L) {
                                break label338;
                            }

                            DruidDataSource.this.setFailContinuous(true);
                            if (DruidDataSource.this.failFast) {
                                DruidDataSource.this.lock.lock();

                                try {
                                    DruidDataSource.this.notEmpty.signalAll();
                                } finally {
                                    DruidDataSource.this.lock.unlock();
                                }
                            }

                            if (!DruidDataSource.this.breakAfterAcquireFailure) {
                                try {
                                    Thread.sleep(DruidDataSource.this.timeBetweenConnectErrorMillis);
                                    break label338;
                                } catch (InterruptedException var27) {
                                }
                            }
                            break;
                        } catch (RuntimeException var29) {
                            DruidDataSource.LOG.error("create connection RuntimeException", var29);
                            DruidDataSource.this.setFailContinuous(true);
                        } catch (Error var30) {
                            DruidDataSource.LOG.error("create connection Error", var30);
                            DruidDataSource.this.setFailContinuous(true);
                            break;
                        }
                    }

                    return;
                }

                if (connection != null) {
                    boolean result = DruidDataSource.this.put(connection);
                    if (!result) {
                        JdbcUtils.close(connection.getPhysicalConnection());
                        DruidDataSource.LOG.info("put physical connection to pool failed.");
                    }

                    errorCount = 0;
                }
            }

This thread is responsible for creating database connections: it parks on the empty condition and wakes up whenever borrowers are waiting for a connection.
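The creator thread and borrowing threads coordinate through two Conditions on one ReentrantLock: a borrower signals empty when the pool runs dry, and the creator signals notEmpty after putting a connection back. A stripped-down sketch of that handshake (illustrative only, not Druid's code):

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Two-condition handshake between the creator thread and borrowers.
class TinyPool {
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition empty = lock.newCondition();    // creator waits here
    private final Condition notEmpty = lock.newCondition(); // borrowers wait here
    private final Deque<String> idle = new ArrayDeque<>();

    // Creator side: park until someone needs a connection, then produce.
    void creatorLoop(int produceOnce) throws InterruptedException {
        lock.lock();
        try {
            while (!idle.isEmpty()) {
                empty.await(); // nothing to do while connections are available
            }
            for (int i = 0; i < produceOnce; i++) {
                idle.push("conn-" + i);
                notEmpty.signal(); // wake one waiting borrower, as put() does
            }
        } finally {
            lock.unlock();
        }
    }

    // Borrower side: signal the creator, then wait (with timeout) for a connection.
    String borrow(long timeoutMs) throws InterruptedException {
        lock.lock();
        try {
            long nanos = TimeUnit.MILLISECONDS.toNanos(timeoutMs);
            while (idle.isEmpty()) {
                empty.signal(); // tell the creator the pool is dry
                if (nanos <= 0) return null; // timed out, like pollLast()
                nanos = notEmpty.awaitNanos(nanos);
            }
            return idle.pop();
        } finally {
            lock.unlock();
        }
    }
}
```

Because the creator only awaits while connections exist, a signal sent before it starts waiting is never lost: on waking (or on first entry) it re-checks the pool state under the lock.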
DestroyConnectionThread

        public void run() {
            initedLatch.countDown();

            for (;;) {
                // evict starting from the front of the array
                try {
                    if (closed) {
                        break;
                    }

                    if (timeBetweenEvictionRunsMillis > 0) {
                        Thread.sleep(timeBetweenEvictionRunsMillis);
                    } else {
                        Thread.sleep(1000); //
                    }

                    if (Thread.interrupted()) {
                        break;
                    }

                    destroyTask.run();
                } catch (InterruptedException e) {
                    break;
                }
            }
        }
  public class DestroyTask implements Runnable {

        @Override
        public void run() {
            shrink(true, keepAlive);

            if (isRemoveAbandoned()) {
                removeAbandoned();
            }
        }

    }
public void shrink(boolean checkTime, boolean keepAlive) {
    // evict connections that have been idle beyond the configured threshold
}

Connections held = idle (pooling) connections + active connections:

int allCount = this.poolingCount + this.activeCount;

At this point the initialization of the Druid pool is reasonably clear. Next, the path a thread takes to borrow a connection, starting from DruidDataSource#getConnectionDirect, which hands back a DruidPooledConnection.
DruidDataSource

    public DruidPooledConnection getConnectionDirect(long maxWaitMillis) throws SQLException {
        int notFullTimeoutRetryCnt = 0;

        DruidPooledConnection poolableConnection;
        while(true) {
            while(true) {
                try {
                    poolableConnection = this.getConnectionInternal(maxWaitMillis);
                    break;
                } catch (GetConnectionTimeoutException var19) {
                    // timed out without obtaining poolableConnection; exception handling elided
                }
            }

            if (this.testOnBorrow) {
                // ping the connection to test whether it is still usable
                boolean validate = this.testConnectionInternal(poolableConnection.holder, poolableConnection.conn);
                if (validate) {
                    break;
                }

                if (LOG.isDebugEnabled()) {
                    LOG.debug("skip not validate connection.");
                }

                Connection realConnection = poolableConnection.conn;
                this.discardConnection(realConnection);
            } else {
                Connection realConnection = poolableConnection.conn;
                if (poolableConnection.conn.isClosed()) {
                    this.discardConnection((Connection)null);
                } else {
                    if (!this.testWhileIdle) {
                        break;
                    }

                    long currentTimeMillis = System.currentTimeMillis();
                    long lastActiveTimeMillis = poolableConnection.holder.lastActiveTimeMillis;
                    long idleMillis = currentTimeMillis - lastActiveTimeMillis;
                    long timeBetweenEvictionRunsMillis = this.timeBetweenEvictionRunsMillis;
                    if (timeBetweenEvictionRunsMillis <= 0L) {
                        timeBetweenEvictionRunsMillis = 60000L;
                    }

                    if (idleMillis < timeBetweenEvictionRunsMillis && idleMillis >= 0L) {
                        break;
                    }

                    boolean validate = this.testConnectionInternal(poolableConnection.holder, poolableConnection.conn);
                    if (validate) {
                        break;
                    }

                    if (LOG.isDebugEnabled()) {
                        LOG.debug("skip not validate connection.");
                    }

                    this.discardConnection(realConnection);
                }
            }
        }

        if (this.removeAbandoned) {
            StackTraceElement[] stackTrace = Thread.currentThread().getStackTrace();
            poolableConnection.connectStackTrace = stackTrace;
            poolableConnection.setConnectedTimeNano();
            poolableConnection.traceEnable = true;
            this.activeConnectionLock.lock();

            try {
                this.activeConnections.put(poolableConnection, PRESENT);
            } finally {
                this.activeConnectionLock.unlock();
            }
        }

        if (!this.defaultAutoCommit) {
            poolableConnection.setAutoCommit(false);
        }

        return poolableConnection;
    }
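The testWhileIdle branch above only re-validates a connection whose idle time reaches timeBetweenEvictionRunsMillis (falling back to 60 s when unset). The decision reduces to a small predicate, extracted here for illustration:

```java
// Predicate behind the testWhileIdle branch of getConnectionDirect():
// validate only when the connection sat idle "too long", or the clock went backwards.
class IdleCheck {
    static boolean needsValidation(long currentTimeMillis,
                                   long lastActiveTimeMillis,
                                   long timeBetweenEvictionRunsMillis) {
        if (timeBetweenEvictionRunsMillis <= 0) {
            timeBetweenEvictionRunsMillis = 60_000L; // Druid's fallback value
        }
        long idleMillis = currentTimeMillis - lastActiveTimeMillis;
        // the original "idleMillis >= 0" guard catches system-clock rollback
        return idleMillis >= timeBetweenEvictionRunsMillis || idleMillis < 0;
    }
}
```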

The connection itself comes from poolableConnection = this.getConnectionInternal(maxWaitMillis):

    private DruidPooledConnection getConnectionInternal(long maxWait) throws SQLException {
        if (this.closed) {
            connectErrorCountUpdater.incrementAndGet(this);
            throw new DataSourceClosedException("dataSource already closed at " + new Date(this.closeTimeMillis));
        } else if (!this.enable) {
            connectErrorCountUpdater.incrementAndGet(this);
            throw new DataSourceDisableException();
        } else {
            long nanos = TimeUnit.MILLISECONDS.toNanos(maxWait);
            int maxWaitThreadCount = this.maxWaitThreadCount;
            boolean createDirect = false;

            DruidConnectionHolder holder;
            while(true) {
                if (createDirect) {
                    // create a new connection directly and update the
                    // stats fields under the lock; code elided
                }

                // acquire the lock interruptibly; code elided

                try {
                    if (/* number of waiting threads exceeds maxWaitThreadCount */) {
                        connectErrorCountUpdater.incrementAndGet(this);
                        throw new SQLException("maxWaitThreadCount " + maxWaitThreadCount + ", current wait Thread count " + this.lock.getQueueLength());
                    }

                    if (/* consecutive failures exceed the configured maximum */) {
                        // log the error; code elided

                        throw new SQLException(errorMsg, this.lastFatalError);
                    }

                    ++this.connectCount;
                    if (this.createScheduler != null && this.poolingCount == 0 && this.activeCount < this.maxActive && creatingCountUpdater.get(this) == 0 && this.createScheduler instanceof ScheduledThreadPoolExecutor) {
                        // not satisfied by default, because this.createScheduler is null unless configured
                        ScheduledThreadPoolExecutor executor = (ScheduledThreadPoolExecutor)this.createScheduler;
                        if (executor.getQueue().size() > 0) {
                            createDirect = true;
                            continue;
                        }
                    }
                    // with a timeout configured, do a timed poll (maxWait was converted to nanos above)
                    if (maxWait > 0L) {
                        holder = this.pollLast(nanos);
                    } else {
                        holder = this.takeLast();
                    }

                    if (holder != null) {
                        ++this.activeCount;
                        if (this.activeCount > this.activePeak) {
                            this.activePeak = this.activeCount;
                            this.activePeakTime = System.currentTimeMillis();
                        }
                    }
                    break;
                } catch (InterruptedException var24) {
                    connectErrorCountUpdater.incrementAndGet(this);
                    throw new SQLException(var24.getMessage(), var24);
                } catch (SQLException var25) {
                    connectErrorCountUpdater.incrementAndGet(this);
                    throw var25;
                } finally {
                    this.lock.unlock();
                }
            }
            if (holder == null) {
            // holder == null: the wait timed out; error handling elided
            } else {
                holder.incrementUseCount();
                DruidPooledConnection poolalbeConnection = new DruidPooledConnection(holder);
                return poolalbeConnection;
            }
        }
    }

holder = this.pollLast(nanos) is what actually fetches a DruidConnectionHolder:

    private DruidConnectionHolder pollLast(long nanos) throws InterruptedException, SQLException {
        long estimate = nanos;

        while(this.poolingCount == 0) {
            this.emptySignal();
            if (this.failFast && this.failContinuous.get()) {
                throw new DataSourceNotAvailableException(this.createError);
            }

            if (estimate <= 0L) {
                waitNanosLocal.set(nanos - estimate);
                return null;
            }

            ++this.notEmptyWaitThreadCount;
            if (this.notEmptyWaitThreadCount > this.notEmptyWaitThreadPeak) {
                this.notEmptyWaitThreadPeak = this.notEmptyWaitThreadCount;
            }

            try {
                long startEstimate = estimate;
                estimate = this.notEmpty.awaitNanos(estimate);
                ++this.notEmptyWaitCount;
                this.notEmptyWaitNanos += startEstimate - estimate;
                if (!this.enable) {
                    connectErrorCountUpdater.incrementAndGet(this);
                    throw new DataSourceDisableException();
                }
            } catch (InterruptedException var10) {
                this.notEmpty.signal();
                ++this.notEmptySignalCount;
                throw var10;
            } finally {
                --this.notEmptyWaitThreadCount;
            }

            if (this.poolingCount != 0) {
                break;
            }

            if (estimate <= 0L) {
                waitNanosLocal.set(nanos - estimate);
                return null;
            }
        }

        this.decrementPoolingCount();
        DruidConnectionHolder last = this.connections[this.poolingCount];
        this.connections[this.poolingCount] = null;
        long waitNanos = nanos - estimate;
        last.setLastNotEmptyWaitNanos(waitNanos);
        return last;
    }
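pollLast() relies on Condition.awaitNanos() returning the remaining wait time, so estimate shrinks across wakeups and the total wait never exceeds the original deadline. A small self-contained demonstration of that contract (not Druid code):

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// awaitNanos() returns (roughly) how much of the deadline is left,
// which is what lets pollLast() loop without extending the total wait.
class AwaitNanosDemo {
    static long timedWait(long timeoutMillis) throws InterruptedException {
        ReentrantLock lock = new ReentrantLock();
        Condition notEmpty = lock.newCondition();
        long estimate = TimeUnit.MILLISECONDS.toNanos(timeoutMillis);
        lock.lock();
        try {
            while (estimate > 0) {
                // nobody signals in this demo, so each pass runs the clock down
                estimate = notEmpty.awaitNanos(estimate);
            }
            return estimate; // <= 0 means the full timeout elapsed
        } finally {
            lock.unlock();
        }
    }
}
```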

The connection is acquired in a retry loop, but it is not a busy spin: each pass parks on the notEmpty condition until signalled or the deadline runs out. Notably, there is no waiting queue of borrowers here.
Checking the connection counts again: right after initialization there are two connections, both in Sleep state.


Initialization complete

Setting a breakpoint here and simulating three threads borrowing concurrently shows that the first two threads get connections immediately, while the third first comes up empty on its timed wait and only returns once CreateConnectionThread has produced a new connection.



To wrap up: the four questions from the beginning now have answers.
1. Does the pool use a queue, and is it bounded or unbounded?
There is no queue of waiting borrowers; a borrower retries in a loop, parking on a condition each pass, and gives up when the timeout expires.
2. What is the rejection policy?
Once the number of threads waiting on the lock exceeds maxWaitThreadCount, a SQLException is thrown immediately (provided maxWaitThreadCount is configured).
3. How is it wired into Spring?
Auto-configuration registers a DataSource bean backed by the DruidDataSourceWrapper wrapper class.
4. How does Spring obtain a database connection through the pool?
If an idle connection is available it is returned at once; otherwise the borrower waits for CreateConnectionThread to produce one, returning the connection on success or timing out otherwise.

One more question:
After use, how does a connection get returned to the DruidDataSource?
Under JPA the release goes through
NonContextualJdbcConnectionAccess

  public void releaseConnection(Connection connection) throws SQLException {
        try {
            this.listener.jdbcConnectionReleaseStart();
            this.connectionProvider.closeConnection(connection);
        } finally {
            this.listener.jdbcConnectionReleaseEnd();
        }

    }

Here this.connectionProvider.dataSource is the DruidDataSourceWrapper bean defined at the beginning.
DatasourceConnectionProviderImpl

    public void closeConnection(Connection connection) throws SQLException {
        connection.close();
    }

DruidPooledConnection

   public void close() throws SQLException {
        if (!this.disable) {
            DruidConnectionHolder holder = this.holder;
            if (holder == null) {
                if (this.dupCloseLogEnable) {
                    LOG.error("dup close");
                }

            } else {
                DruidAbstractDataSource dataSource = holder.getDataSource();
                boolean isSameThread = this.getOwnerThread() == Thread.currentThread();
                if (!isSameThread) {
                    dataSource.setAsyncCloseConnectionEnable(true);
                }

                if (dataSource.isAsyncCloseConnectionEnable()) {
                    this.syncClose();
                } else {
                    Iterator var4 = holder.getConnectionEventListeners().iterator();

                    while(var4.hasNext()) {
                        ConnectionEventListener listener = (ConnectionEventListener)var4.next();
                        listener.connectionClosed(new ConnectionEvent(this));
                    }

                    List filters = dataSource.getProxyFilters();
                    if (filters.size() > 0) {
                        FilterChainImpl filterChain = new FilterChainImpl(dataSource);
                        filterChain.dataSource_recycle(this);
                    } else {
                        this.recycle();
                    }

                    this.disable = true;
                }
            }
        }
    }

After this chain of calls, DruidDataSource's recycle method ultimately reclaims the connection.
DruidDataSource

    protected void recycle(DruidPooledConnection pooledConnection) throws SQLException {
        // return the connection holder to the pool; code elided
    }
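Because DruidPooledConnection.close() recycles rather than closes the physical connection, application code can use plain try-with-resources and the connection silently returns to the pool. A toy model of that proxy-close pattern (illustrative; the real classes carry far more state):

```java
import java.util.ArrayDeque;
import java.util.Deque;

// Toy model: the pooled wrapper's close() hands the underlying
// resource back to the pool instead of destroying it.
class ToyPool {
    final Deque<String> idle = new ArrayDeque<>();

    PooledConn borrow() {
        String physical = idle.isEmpty() ? "physical-conn" : idle.pop();
        return new PooledConn(this, physical);
    }

    static class PooledConn implements AutoCloseable {
        private final ToyPool pool;
        final String physical;
        private boolean disable; // mirrors DruidPooledConnection.disable

        PooledConn(ToyPool pool, String physical) {
            this.pool = pool;
            this.physical = physical;
        }

        @Override
        public void close() { // what connection.close() reaches via the wrapper
            if (!disable) {
                pool.idle.push(physical); // recycle(), not a real close
                disable = true;           // a second close() is a no-op ("dup close")
            }
        }
    }
}
```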
