Notes on Alibaba's Druid Database Connection Pool

Common database connection pools

  • DBCP (DataBase Connection Pool)
  • C3P0
  • Druid
    For a comparison of these connection pools, see: https://blog.csdn.net/qq_34359363/article/details/72763491

Validating connections in the pool

A connection held by the pool may already have been dropped by the database server while the pool still considers it valid; if a client is handed such a connection, it will fail with errors such as "connection lost", so the pool has to check the validity of its connections. Druid has built-in mechanisms for this: if the total number of connections falls below minIdle, new idle connections are created to bring the count back up to minIdle, and if a pooled connection has stayed idle for timeBetweenEvictionRunsMillis without being used, it is physically closed.

Some databases also impose their own timeouts (MySQL drops a connection after 8 hours of inactivity, i.e. a connection idle for more than 8 hours is closed automatically by the server), and network interruptions can likewise invalidate pooled connections. Setting testWhileIdle to true makes the pool periodically check whether its connections are still usable; broken connections are discarded or recreated, which goes a long way toward guaranteeing that a Connection obtained from the pool is valid. To guarantee absolute validity you can additionally set testOnBorrow to true (the connection is then validated every time it is borrowed), at the cost of some performance. By contrast, a DBCP (DataBase Connection Pool) pool in its default configuration does not test pooled connections at all: a connection may already be broken while DBCP still believes it is healthy, and an application that borrows such a connection is bound to get an error when it hits the database.
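
As an illustration, the sketch below sets these validity-related parameters on a DruidDataSource programmatically. The URL, credentials, and concrete values are placeholders rather than recommendations; adapt them to your environment and Druid version.

import com.alibaba.druid.pool.DruidDataSource;

public class PoolValidationConfig {
    public static DruidDataSource newDataSource() {
        DruidDataSource ds = new DruidDataSource();
        // placeholder connection settings
        ds.setUrl("jdbc:mysql://127.0.0.1:3306/test");
        ds.setUsername("root");
        ds.setPassword("secret");

        // pool sizing
        ds.setInitialSize(1);
        ds.setMinIdle(10);
        ds.setMaxActive(20);

        // validity checking: test idle connections in the background instead of on every borrow
        ds.setValidationQuery("SELECT 1");
        ds.setTestWhileIdle(true);
        ds.setTestOnBorrow(false);                    // true validates on every borrow, but costs performance
        ds.setTestOnReturn(false);
        ds.setTimeBetweenEvictionRunsMillis(60_000);  // how often idle connections are checked/evicted

        return ds;
    }
}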

Overall Druid class diagram

Figure: class diagram of Druid's core processing classes

Notes:

  • A filter runs before the corresponding method of a connection, statement, etc. is executed
  • The raw database connections, statements, etc. are all wrapped and proxied inside Druid
  • StatFilter collects statistics about the connections and statements that are executed and stores the results in JdbcDataSourceStat and JdbcSqlStat, which the monitoring pages later read (see the configuration sketch after this list)
  • Druid exposes monitoring and management information through MBeans; for an introduction to MBeans see https://www.jianshu.com/p/6aff920def9b
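
As referenced above, here is a minimal sketch of enabling the built-in StatFilter programmatically. The alias "stat" is Druid's standard shorthand for com.alibaba.druid.filter.stat.StatFilter; the URL and credentials are placeholders, and filters must be set before the pool is initialized.

import com.alibaba.druid.pool.DruidDataSource;

public class StatFilterConfig {
    public static DruidDataSource withStatFilter() throws Exception {
        DruidDataSource ds = new DruidDataSource();
        ds.setUrl("jdbc:mysql://127.0.0.1:3306/test");   // placeholder
        ds.setUsername("root");
        ds.setPassword("secret");

        // "stat" resolves to com.alibaba.druid.filter.stat.StatFilter; the filter is placed
        // in front of every connection/statement call and feeds JdbcDataSourceStat / JdbcSqlStat
        ds.setFilters("stat");

        // init() also calls registerMbean(), so the statistics become visible via JMX
        ds.init();
        return ds;
    }
}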

DruidDataSource.init method flow

The source of the init method (abridged):
public void init() throws SQLException {
        if (inited) {
            return;
        }

        initStackTrace = Utils.toString(Thread.currentThread().getStackTrace());
        this.id = DruidDriver.createDataSourceId();
        if (this.jdbcUrl != null) {
            this.jdbcUrl = this.jdbcUrl.trim();
            initFromWrapDriverUrl();
        }

        // initialize every configured filter against this data source
        for (Filter filter : filters) {
            filter.init(this);
        }

        if (this.dbType == null || this.dbType.length() == 0) {
            this.dbType = JdbcUtils.getDbType(jdbcUrl, null);
        }
        // load additional filters via the SPI ServiceLoader mechanism
        initFromSPIServiceLoader();
        initCheck();

        // pick vendor-specific ExceptionSorter and ValidConnectionChecker
        // (e.g. MySqlExceptionSorter / MySqlValidConnectionChecker for MySQL)
        initExceptionSorter();
        initValidConnectionChecker();
        validationQueryCheck();

        if (isUseGlobalDataSourceStat()) {
            dataSourceStat = JdbcDataSourceStat.getGlobal();
            if (dataSourceStat == null) {
                dataSourceStat = new JdbcDataSourceStat("Global", "Global", this.dbType);
                JdbcDataSourceStat.setGlobal(dataSourceStat);
            }
            if (dataSourceStat.getDbType() == null) {
                dataSourceStat.setDbType(this.dbType);
            }
        } else {
            dataSourceStat = new JdbcDataSourceStat(this.name, this.jdbcUrl, this.dbType, this.connectProperties);
        }
        dataSourceStat.setResetStatEnable(this.resetStatEnable);

        // the pool itself: holder arrays sized to maxActive
        connections = new DruidConnectionHolder[maxActive];
        evictConnections = new DruidConnectionHolder[maxActive];
        keepAliveConnections = new DruidConnectionHolder[maxActive];

        SQLException connectError = null;

        if (createScheduler != null) {
            // asynchronous initialization: submit initialSize connection-creation tasks to the scheduler
            for (int i = 0; i < initialSize; ++i) {
                createTaskCount++;
                CreateConnectionTask task = new CreateConnectionTask(true);
                this.createSchedulerFuture = createScheduler.submit(task);
            }
        } else if (!asyncInit) {
            try {
                // synchronous initialization: create initialSize physical connections up front
                for (int i = 0; i < initialSize; ++i) {
                    PhysicalConnectionInfo pyConnectInfo = createPhysicalConnection();
                    DruidConnectionHolder holder = new DruidConnectionHolder(this, pyConnectInfo);
                    connections[poolingCount] = holder;
                    incrementPoolingCount();
                }

                if (poolingCount > 0) {
                    poolingPeak = poolingCount;
                    poolingPeakTime = System.currentTimeMillis();
                }
            } catch (SQLException ex) {
                LOG.error("init datasource error, url: " + this.getUrl(), ex);
                connectError = ex;
            }
        }

        // start the background threads: stat logging, connection creator and connection destroyer
        createAndLogThread();
        createAndStartCreatorThread();
        createAndStartDestroyThread();

        // wait until the creator and destroyer threads have started
        initedLatch.await();
        // mark the data source as initialized (the full source sets inited inside a finally block)
        inited = true;

        initedTime = new Date();
        registerMbean();

        if (connectError != null && poolingCount == 0) {
            throw connectError;
        }

        if (keepAlive) {
            // async fill to minIdle
            if (createScheduler != null) {
                for (int i = 0; i < minIdle; ++i) {
                    createTaskCount++;
                    CreateConnectionTask task = new CreateConnectionTask(true);
                    this.createSchedulerFuture = createScheduler.submit(task);
                }
            } else {
                this.emptySignal();
            }
        }

    }
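
To relate the flow above to configuration, the sketch below sets the parameters that init() reads (initialSize, minIdle, maxActive, keepAlive) and triggers initialization explicitly. The values are illustrative, and setKeepAlive is only available in Druid versions that have the keepAlive branch shown above.

import com.alibaba.druid.pool.DruidDataSource;

public class InitFlowExample {
    public static void main(String[] args) throws Exception {
        DruidDataSource ds = new DruidDataSource();
        ds.setUrl("jdbc:mysql://127.0.0.1:3306/test");   // placeholder
        ds.setUsername("root");
        ds.setPassword("secret");

        ds.setInitialSize(5);    // drives the "for (int i = 0; i < initialSize; ++i)" loop above
        ds.setMinIdle(10);       // target used by the keepAlive branch at the end of init()
        ds.setMaxActive(20);     // size of the connections / evictConnections / keepAliveConnections arrays
        ds.setKeepAlive(true);   // enables the "async fill to minIdle" branch

        // init() creates the initial connections and starts the creator/destroyer threads;
        // it is also called implicitly by the first getConnection()
        ds.init();
        System.out.println("pooling count after init: " + ds.getPoolingCount());
        ds.close();
    }
}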

DruidDataSource.getConnection method flow

public DruidPooledConnection getConnection(long maxWaitMillis) throws SQLException {
        // lazily initializes the pool via the init() method shown above
        init();
        if (filters.size() > 0) {
            // filters are configured: obtain the connection through the filter chain
            FilterChainImpl filterChain = new FilterChainImpl(this);
            return filterChain.dataSource_connect(this, maxWaitMillis);
        } else {
            return getConnectionDirect(maxWaitMillis);
        }
    }
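
A minimal usage sketch, assuming dataSource is a configured DruidDataSource: getConnection() goes through the flow above and returns a DruidPooledConnection, and closing it merely returns the connection to the pool rather than closing the physical connection. The table name is a placeholder.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

public class BorrowConnectionExample {
    public static int countRows(DataSource dataSource) throws Exception {
        // getConnection() runs through the filter chain (or getConnectionDirect) shown above
        try (Connection conn = dataSource.getConnection();
             PreparedStatement ps = conn.prepareStatement("SELECT COUNT(*) FROM user");  // placeholder table
             ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getInt(1) : 0;
        }
        // closing conn returns the DruidPooledConnection to the pool
    }
}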

The class diagram for a single database connection is shown below:
Figure: Druid database connection objects

DruidPooledConnection.prepareStatement method flow

public PreparedStatement prepareStatement(String sql) throws SQLException {
        checkState();

        PreparedStatementHolder stmtHolder = null;
        PreparedStatementKey key = new PreparedStatementKey(sql, getCatalog(), MethodType.M1);

        boolean poolPreparedStatements = holder.isPoolPreparedStatements();

        if (poolPreparedStatements) {
            // PSCache enabled: try to reuse a cached PreparedStatement for the same SQL and catalog
            stmtHolder = holder.getStatementPool().get(key);
        }

        if (stmtHolder == null) {
            try {
                // not cached: delegate to ConnectionProxyImpl.prepareStatement (shown below)
                stmtHolder = new PreparedStatementHolder(key, conn.prepareStatement(sql));
                holder.getDataSource().incrementPreparedStatementCount();
            } catch (SQLException ex) {
                handleException(ex, sql);
            }
        }

        initStatement(stmtHolder);

        DruidPooledPreparedStatement rtnVal = new DruidPooledPreparedStatement(this, stmtHolder);

        holder.addTrace(rtnVal);

        return rtnVal;
    }
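
The poolPreparedStatements branch above is what drives the PSCacheAccessCount / PSCacheHitCount / PSCacheMissCount figures in the monitoring JSON later in this article. A minimal sketch of enabling the statement cache (the cache size is illustrative):

import com.alibaba.druid.pool.DruidDataSource;

public class PsCacheConfig {
    public static void enablePsCache(DruidDataSource ds) {
        // makes holder.isPoolPreparedStatements() in the code above return true
        ds.setPoolPreparedStatements(true);
        // upper bound on cached statements kept per connection (illustrative value)
        ds.setMaxPoolPreparedStatementPerConnectionSize(20);
    }
}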

ConnectionProxyImpl.prepareStatement method

 public PreparedStatement prepareStatement(String sql) throws SQLException {
        // every filter sees the call before it reaches the real driver Connection
        FilterChainImpl chain = createChain();
        PreparedStatement stmt = chain.connection_prepareStatement(this, sql);
        recycleFilterChain(chain);
        return stmt;
    }

The class diagram for a Statement is shown below:
Figure: class diagram for Druid statements

DruidPooledStatement.execute method flow

public final boolean execute(String sql) throws SQLException {
        checkOpen();

        incrementExecuteCount();
        transactionRecord(sql);

        try {
            // delegate to StatementProxyImpl.execute (shown below)
            return stmt.execute(sql);
        } catch (Throwable t) {
            errorCheck(t);

            throw checkException(t, sql);
        }
    }

StatementProxyImpl.execute method:

  public boolean execute(String sql) throws SQLException {
        updateCount = null;
        lastExecuteSql = sql;
        lastExecuteType = StatementExecuteType.Execute;
        lastExecuteStartNano = -1L;
        lastExecuteTimeNano = -1L;
        // run the filter chain; if no filter intercepts the call, FilterChainImpl ends up executing it on the raw statement
        FilterChainImpl chain = createChain();
        firstResultSet = chain.statement_execute(this, sql);
        recycleFilterChain(chain);
        return firstResultSet;
    }
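
Because every execution passes through the filter chain, a custom filter only needs to extend FilterAdapter and override the callback it cares about. The sketch below is illustrative (the class name and log output are made up); setProxyFilters is one way Druid accepts programmatically constructed filters, and it should be called before the pool is initialized.

import java.sql.SQLException;
import java.util.Collections;

import com.alibaba.druid.filter.Filter;
import com.alibaba.druid.filter.FilterAdapter;
import com.alibaba.druid.filter.FilterChain;
import com.alibaba.druid.pool.DruidDataSource;
import com.alibaba.druid.proxy.jdbc.StatementProxy;

// hypothetical filter that logs every SQL string passed to Statement.execute
public class SqlLoggingFilter extends FilterAdapter {
    @Override
    public boolean statement_execute(FilterChain chain, StatementProxy statement, String sql)
            throws SQLException {
        System.out.println("executing: " + sql);
        // pass the call down the chain, just like FilterChainImpl does when no filter intervenes
        return chain.statement_execute(statement, sql);
    }

    public static void register(DruidDataSource ds) {
        // register the programmatically constructed filter with the data source
        ds.setProxyFilters(Collections.<Filter>singletonList(new SqlLoggingFilter()));
    }
}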

Overall class diagram of the Druid monitoring display

Figure: overall class diagram of the Druid monitoring display

Overall flow of the Druid monitoring display

Figure: overall flow of the Druid monitoring display
If the requested URL ends in .html, the corresponding html file is read from the resource directory shown below:
Figure: directory containing the html resource files
Accessing http://localhost:8088/WebRoot/druid/datasource.html produces a page like the following:
Figure: datasource.html
Accessing http://localhost:8088/WebRoot/druid/datasource.json returns JSON like the following:

{
    "ResultCode": 1,
    "Content": [{
        "Identity": 1643256033,
        "Name": "DataSource-1643256033",
        "DbType": "mysql",
        "DriverClassName": "com.mysql.jdbc.Driver",
        "URL": "jdbc:mysql://192.168.1.204:3306/testyjk",
        "UserName": "root",
        "FilterClassNames": ["com.alibaba.druid.filter.stat.StatFilter"],
        "WaitThreadCount": 0,
        "NotEmptyWaitCount": 0,
        "NotEmptyWaitMillis": 0,
        "PoolingCount": 1,
        "PoolingPeak": 1,
        "PoolingPeakTime": "2018-05-28 14:45:34",
        "ActiveCount": 0,
        "ActivePeak": 1,
        "ActivePeakTime": "2018-05-28 14:45:34",
        "InitialSize": 1,
        "MinIdle": 10,
        "MaxActive": 20,
        "QueryTimeout": 0,
        "TransactionQueryTimeout": 0,
        "LoginTimeout": 0,
        "ValidConnectionCheckerClassName": "com.alibaba.druid.pool.vendor.MySqlValidConnectionChecker",
        "ExceptionSorterClassName": "com.alibaba.druid.pool.vendor.MySqlExceptionSorter",
        "TestOnBorrow": false,
        "TestOnReturn": false,
        "TestWhileIdle": true,
        "DefaultAutoCommit": true,
        "DefaultReadOnly": true,
        "DefaultTransactionIsolation": null,
        "LogicConnectCount": 20,
        "LogicCloseCount": 20,
        "LogicConnectErrorCount": 0,
        "PhysicalConnectCount": 1,
        "PhysicalCloseCount": 0,
        "PhysicalConnectErrorCount": 0,
        "ExecuteCount": 20,
        "ErrorCount": 8,
        "CommitCount": 0,
        "RollbackCount": 0,
        "PSCacheAccessCount": 4,
        "PSCacheHitCount": 3,
        "PSCacheMissCount": 1,
        "StartTransactionCount": 0,
        "TransactionHistogram": [0, 0, 0, 0, 0, 0],
        "ConnectionHoldTimeHistogram": [0, 0, 16, 4, 0, 0, 0, 0],
        "RemoveAbandoned": false,
        "ClobOpenCount": 0,
        "BlobOpenCount": 0
    }]
}
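
These pages and JSON endpoints are typically served by Druid's StatViewServlet, which reads the statistics collected by StatFilter; the /WebRoot/druid/... URLs above come from the author's demo application, so your mapping may differ. As one possible setup, the sketch below registers the servlet in a Spring Boot 2 application; the /druid/* mapping and the login credentials are illustrative.

import org.springframework.boot.web.servlet.ServletRegistrationBean;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

import com.alibaba.druid.support.http.StatViewServlet;

@Configuration
public class DruidMonitorConfig {
    @Bean
    public ServletRegistrationBean<StatViewServlet> druidStatViewServlet() {
        // StatViewServlet serves the monitoring html pages and json endpoints such as datasource.json
        ServletRegistrationBean<StatViewServlet> bean =
                new ServletRegistrationBean<>(new StatViewServlet(), "/druid/*");
        bean.addInitParameter("loginUsername", "admin");  // illustrative credentials
        bean.addInitParameter("loginPassword", "admin");
        return bean;
    }
}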

References

https://github.com/alibaba/druid
