Suppose the database has a table like this:
CREATE TABLE `blog` (
`bid` int(11) NOT NULL AUTO_INCREMENT,
`name` varchar(255) DEFAULT NULL,
`author_id` int(11) DEFAULT NULL,
PRIMARY KEY (`bid`)
) ENGINE=InnoDB AUTO_INCREMENT=3 DEFAULT CHARSET=utf8
Querying the database with raw JDBC looks like this:
@Test
public void testJdbc() throws IOException {
    Connection conn = null;
    Statement stmt = null;
    Blog blog = new Blog();
    try {
        // Register the JDBC driver
        Class.forName("com.mysql.jdbc.Driver");
        // Open a connection
        conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/mybatistest", "root", "lchadmin");
        // Create a statement and execute the query
        stmt = conn.createStatement();
        String sql = "SELECT bid, name, author_id FROM blog where bid = 1";
        ResultSet rs = stmt.executeQuery(sql);
        // Extract the values from the result set
        while (rs.next()) {
            Integer bid = rs.getInt("bid");
            String name = rs.getString("name");
            Integer authorId = rs.getInt("author_id");
            blog.setAuthorId(authorId);
            blog.setBid(bid);
            blog.setName(name);
        }
        System.out.println(blog);
        rs.close();
        stmt.close();
        conn.close();
    } catch (SQLException se) {
        se.printStackTrace();
    } catch (Exception e) {
        e.printStackTrace();
    } finally {
        // Release resources
        try {
            if (stmt != null) stmt.close();
        } catch (SQLException se2) {
        }
        try {
            if (conn != null) conn.close();
        } catch (SQLException se) {
            se.printStackTrace();
        }
    }
}
Querying just a single record takes the following six steps:
1. Register the JDBC driver
2. Open a connection
3. Create a Statement
4. Execute the SQL query
5. Extract the values from the ResultSet
6. Close the ResultSet, Statement and Connection
To address these pain points of operating a database through raw JDBC, the Commons DbUtils library appeared to simplify database access. DbUtils provides a utility class, QueryRunner, which wraps common CRUD operations. Given a configured DataSource, it is used like this:
// Assumes a configured javax.sql.DataSource named dataSource
QueryRunner queryRunner = new QueryRunner(dataSource);
String sql = "select * from blog";
List<BlogDto> list = queryRunner.query(sql, new BeanListHandler<>(BlogDto.class));
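The BlogDto used above is not shown in the original text; as a minimal sketch, assume it is a plain POJO with camelCase property names:
// Minimal BlogDto sketch (assumed, not shown in the original text).
// Note the camelCase property authorId versus the author_id column in the table.
public class BlogDto {
    private Integer bid;
    private String name;
    private Integer authorId;

    public Integer getBid() { return bid; }
    public void setBid(Integer bid) { this.bid = bid; }
    public String getName() { return name; }
    public void setName(String name) { this.name = name; }
    public Integer getAuthorId() { return authorId; }
    public void setAuthorId(Integer authorId) { this.authorId = authorId; }

    @Override
    public String toString() {
        return "BlogDto{bid=" + bid + ", name=" + name + ", authorId=" + authorId + "}";
    }
}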
So how is the automatic type conversion implemented? Step into the BeanListHandler class: it implements the ResultSetHandler interface and overrides the handle method:
public List<T> handle(ResultSet rs) throws SQLException {
    return this.convert.toBeanList(rs, this.type);
}
handle delegates to RowProcessor's toBeanList method. RowProcessor is an interface; its default implementation is org.apache.commons.dbutils.BasicRowProcessor#toBeanList, which in turn calls org.apache.commons.dbutils.BeanProcessor#toBeanList:
public <T> List<T> toBeanList(ResultSet rs, Class<? extends T> type) throws SQLException {
    List<T> results = new ArrayList();
    if (!rs.next()) {
        return results;
    } else {
        PropertyDescriptor[] props = this.propertyDescriptors(type);
        ResultSetMetaData rsmd = rs.getMetaData();
        int[] columnToProperty = this.mapColumnsToProperties(rsmd, props);
        do {
            results.add(this.createBean(rs, type, props, columnToProperty));
        } while(rs.next());
        return results;
    }
}
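The name matching happens inside mapColumnsToProperties. The following is a simplified sketch of the idea only, not the library's actual source: each column label is compared against the bean's property names, and columns with no match are recorded as -1 and later skipped.
// Simplified sketch of the idea behind BeanProcessor#mapColumnsToProperties
// (illustrative only, not the library's actual source code).
private static int[] mapColumnsToPropertiesSketch(ResultSetMetaData rsmd,
                                                  PropertyDescriptor[] props) throws SQLException {
    int[] columnToProperty = new int[rsmd.getColumnCount() + 1];
    Arrays.fill(columnToProperty, -1); // -1 means "no matching property"
    for (int col = 1; col <= rsmd.getColumnCount(); col++) {
        String columnName = rsmd.getColumnLabel(col);
        for (int i = 0; i < props.length; i++) {
            // "author_id" does not match "authorId", so such a column stays at -1
            if (columnName.equalsIgnoreCase(props[i].getName())) {
                columnToProperty[col] = i;
                break;
            }
        }
    }
    return columnToProperty;
}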
Stepping into the createBean method, we can see that it ultimately calls populateBean to map the result set onto the POJO:
private <T> T createBean(ResultSet rs, Class<T> type, PropertyDescriptor[] props, int[] columnToProperty) throws SQLException {
    T bean = this.newInstance(type);
    return this.populateBean(rs, bean, props, columnToProperty);
}

private <T> T populateBean(ResultSet rs, T bean, PropertyDescriptor[] props, int[] columnToProperty) throws SQLException {
    for(int i = 1; i < columnToProperty.length; ++i) {
        if (columnToProperty[i] != -1) {
            PropertyDescriptor prop = props[columnToProperty[i]];
            Class<?> propType = prop.getPropertyType();
            Object value = null;
            if (propType != null) {
                value = this.processColumn(rs, i, propType);
                if (value == null && propType.isPrimitive()) {
                    value = primitiveDefaults.get(propType);
                }
            }
            this.callSetter(bean, prop, value);
        }
    }
    return bean;
}
Through this for loop, the values in the ResultSet are filled into the properties of the target type. However, this utility has a problem: the author_id column in the database clearly has a value, yet the authorId property of the query result is null. The reason is that DbUtils does not resolve mismatches between database column names and POJO property names: its automatic mapping only works when the column name and the property name are identical.
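One common workaround, shown here as a sketch of my own rather than part of the original example, is to alias the mismatched column in the SQL so that its label matches the property name:
// Workaround sketch: alias author_id so the column label matches the POJO property authorId.
String sql = "SELECT bid, name, author_id AS authorId FROM blog";
List<BlogDto> list = queryRunner.query(sql, new BeanListHandler<>(BlogDto.class));
Newer versions of DbUtils also accept a BeanProcessor with a column-to-property override map, but either way the mapping must be configured by hand for every mismatch.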
JdbcTemplate is Spring's wrapper around raw JDBC. It encapsulates the core JDBC workflow, so the application only needs to provide the SQL and extract the results; a DataSource can be set when the template is initialized, which solves the resource-management problem.
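As a minimal wiring sketch, reusing the connection settings from the JDBC example above and assuming Spring's DriverManagerDataSource purely for illustration:
// Minimal wiring sketch: the DataSource is configured once and the template
// takes care of acquiring and releasing connections for every query.
DriverManagerDataSource dataSource = new DriverManagerDataSource();
dataSource.setDriverClassName("com.mysql.jdbc.Driver");
dataSource.setUrl("jdbc:mysql://localhost:3306/mybatistest");
dataSource.setUsername("root");
dataSource.setPassword("lchadmin");
JdbcTemplate jdbcTemplate = new JdbcTemplate(dataSource);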
For result-set handling, JdbcTemplate provides the RowMapper interface, which converts a result-set row into a Java object. A RowMapper is passed as a parameter to JdbcTemplate:
public class EmployeeRowMapper implements RowMapper<Employee> {
    @Override
    public Employee mapRow(ResultSet resultSet, int i) throws SQLException {
        Employee employee = new Employee();
        employee.setEmpId(resultSet.getInt("emp_id"));
        employee.setEmpName(resultSet.getString("emp_name"));
        employee.setGender(resultSet.getString("gender"));
        employee.setEmail(resultSet.getString("email"));
        return employee;
    }
}
To use it, just pass the SQL and an EmployeeRowMapper to the query method, and it returns the type we need:
List<Employee> list = jdbcTemplate.query(" select * from tbl_emp", new EmployeeRowMapper());
However, if the project has many tables and every table-to-POJO conversion needs its own RowMapper implementation class, the number of classes explodes. To map the columns of a row onto POJO properties automatically, two problems have to be solved: discovering the target class's fields generically through reflection instead of hard-coding setters, and translating underscore-style column names such as author_id into camelCase property names such as authorId. The generic BaseRowMapper below tackles both:
public class BaseRowMapper<T> implements RowMapper<T> {
    private Class<?> targetClazz;
    private HashMap<String, Field> fieldMap;

    public BaseRowMapper(Class<?> targetClazz) {
        this.targetClazz = targetClazz;
        fieldMap = new HashMap<>();
        Field[] fields = targetClazz.getDeclaredFields();
        for (Field field : fields) {
            fieldMap.put(field.getName(), field);
        }
    }

    @Override
    public T mapRow(ResultSet rs, int arg1) throws SQLException {
        T obj = null;
        try {
            obj = (T) targetClazz.newInstance();
            final ResultSetMetaData metaData = rs.getMetaData();
            int columnLength = metaData.getColumnCount();
            String columnName = null;
            for (int i = 1; i <= columnLength; i++) {
                columnName = metaData.getColumnName(i);
                Field field = fieldMap.get(camel(columnName));
                if (field == null) {
                    // No matching property for this column; skip it instead of failing
                    continue;
                }
                Class<?> fieldClazz = field.getType();
                field.setAccessible(true);
                // fieldClazz == Character.class || fieldClazz == char.class
                if (fieldClazz == int.class || fieldClazz == Integer.class) { // int
                    field.set(obj, rs.getInt(columnName));
                } else if (fieldClazz == boolean.class || fieldClazz == Boolean.class) { // boolean
                    field.set(obj, rs.getBoolean(columnName));
                } else if (fieldClazz == String.class) { // string
                    field.set(obj, rs.getString(columnName));
                } else if (fieldClazz == float.class || fieldClazz == Float.class) { // float
                    field.set(obj, rs.getFloat(columnName));
                } else if (fieldClazz == double.class || fieldClazz == Double.class) { // double
                    field.set(obj, rs.getDouble(columnName));
                } else if (fieldClazz == BigDecimal.class) { // bigdecimal
                    field.set(obj, rs.getBigDecimal(columnName));
                } else if (fieldClazz == short.class || fieldClazz == Short.class) { // short
                    field.set(obj, rs.getShort(columnName));
                } else if (fieldClazz == Date.class) { // date
                    field.set(obj, rs.getDate(columnName));
                } else if (fieldClazz == Timestamp.class) { // timestamp
                    field.set(obj, rs.getTimestamp(columnName));
                } else if (fieldClazz == Long.class || fieldClazz == long.class) { // long
                    field.set(obj, rs.getLong(columnName));
                }
                field.setAccessible(false);
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
        return obj;
    }
    /**
     * Convert an underscore-separated name to camelCase, e.g. author_id -> authorId
     * @param str the underscore-style name
     * @return the camelCase name
     */
    public static String camel(String str) {
        Pattern pattern = Pattern.compile("_(\\w)");
        Matcher matcher = pattern.matcher(str);
        StringBuffer sb = new StringBuffer(str);
        if (matcher.find()) {
            sb = new StringBuffer();
            matcher.appendReplacement(sb, matcher.group(1).toUpperCase());
            matcher.appendTail(sb);
        } else {
            return sb.toString();
        }
        // Recurse until no underscore remains
        return camel(sb.toString());
    }
}
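With this generic mapper, any table whose column names are the underscore form of the POJO's property names can be queried without writing a dedicated RowMapper. A usage sketch for the blog table from the beginning of this section (assuming the Blog class from the JDBC example):
// Usage sketch: camel("author_id") returns "authorId", so the column maps onto Blog.authorId.
List<Blog> blogs = jdbcTemplate.query(
        "select bid, name, author_id from blog",
        new BaseRowMapper<Blog>(Blog.class));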
Although these two utility libraries solve many of the problems of working with JDBC directly, they still have major shortcomings: the SQL is hard-coded inside the Java code, and the mapping from columns to object properties still has to be handled by hand, either by matching names exactly (DbUtils) or by writing RowMapper implementations (JdbcTemplate).
Every technology arises to solve a problem in a particular scenario. So what is ORM? ORM stands for Object-Relational Mapping: the object is the object in our program, and the relation is its relationship to the data in a relational database. An ORM framework solves the problem of mapping between objects and the relational database in both directions.