2021SC@SDUSC
This article is published simultaneously on my personal blog under the title "fastjson Source Code Analysis: Deserialization Special (Part 2)".
The previous installment, fastjson Source Code Analysis: Deserialization Special (Part 1), listed in detail all of the deserializer instances that fastjson registers for the various types, and we made the surprising observation that every one of those deserializer instances is also used on the serialization side.
In this installment I go one level deeper and walk through how several important deserializer instances (BooleanCodec, CharacterCodec, IntegerCodec, FloatCodec and JavaBeanDeserializer) use tokens internally, down to the lowest-level deserialization logic.
First, let me introduce the tokens that will appear throughout this article. fastjson walks the JSON string and classifies every character it sees: digits, ordinary characters, and characters that are structurally meaningful to JSON (such as { and [). Each category is represented by an int constant declared as public static final int. An int array then stores the token for each position, marking what every character of the JSON string means.
For example:
public final static int TRUE = 6;
While scanning the JSON string, a boolean value is marked at the corresponding position with TRUE (that is, the int value 6; the constant exists purely for readability).
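To make the idea concrete, here is a minimal sketch of peeking at a token through the public classes (my own example, not from the article's code; it assumes DefaultJSONParser scans the first token while being constructed, and it only relies on the TRUE constant mentioned above):
import com.alibaba.fastjson.parser.DefaultJSONParser;
import com.alibaba.fastjson.parser.JSONToken;

public class TokenPeek {
    public static void main(String[] args) {
        // build a parser over a tiny JSON document; its lexer classifies the input into tokens
        DefaultJSONParser parser = new DefaultJSONParser("true");
        int token = parser.lexer.token();            // the current (first) token
        System.out.println(token == JSONToken.TRUE); // expected: true, i.e. the constant 6
        parser.close();
    }
}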
Let us start by analysing how deserialization is implemented for the common basic types.
BooleanCodec deserialization
To deserialize a boolean value from a JSON string, the codec reads up to the first meaningful token, converts the character data at that position into a Boolean (which is the deserialization itself), and returns the resulting value.
The concrete code is shown below.
The source itself has no comments; to make it easier to follow along, I have added my own comments to the code.
//com.alibaba.fastjson.serializer.BooleanCodec.deserialze
public <T> T deserialze(DefaultJSONParser parser, Type clazz, Object fieldName) {
    final JSONLexer lexer = parser.lexer;

    Boolean boolObj;
    try {
        if (lexer.token() == JSONToken.TRUE) { // a TRUE token: pre-read the next token (read up to the next comma, same below)
            lexer.nextToken(JSONToken.COMMA);
            boolObj = Boolean.TRUE;
        } else if (lexer.token() == JSONToken.FALSE) { // a FALSE token: pre-read the next token
            lexer.nextToken(JSONToken.COMMA);
            boolObj = Boolean.FALSE;
        } else if (lexer.token() == JSONToken.LITERAL_INT) { // an integer token: pre-read the next token
            int intValue = lexer.intValue();
            lexer.nextToken(JSONToken.COMMA);

            if (intValue == 1) { // 1 means true, anything else means false
                boolObj = Boolean.TRUE;
            } else {
                boolObj = Boolean.FALSE;
            }
        } else {
            Object value = parser.parse();

            if (value == null) {
                return null;
            }

            boolObj = TypeUtils.castToBoolean(value); // handle the remaining cases, e.g. "Y" or "T" meaning true
        }
    } catch (Exception ex) { // any exception thrown while parsing means the deserialization failed
        throw new JSONException("parseBoolean error, field : " + fieldName, ex);
    }

    if (clazz == AtomicBoolean.class) { // the atomic wrapper type
        return (T) new AtomicBoolean(boolObj.booleanValue());
    }

    return (T) boolObj;
}
This method looks very simple, and it may well change how we think about deserialization.
Deserializing a boolean amounts to nothing more than walking the string once and piecing together, item by item, an instance of the type we need. It is a simple method, but it carries the most fundamental deserialization logic: the higher-level code only has to call these low-level methods to deserialize the different types.
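To see the same branches from the public API, here is a small usage sketch (the expected outputs in the comments follow the branches described above; they are my expectations rather than output captured from a specific fastjson version):
import com.alibaba.fastjson.JSON;
import java.util.concurrent.atomic.AtomicBoolean;

public class BooleanCodecDemo {
    public static void main(String[] args) {
        System.out.println(JSON.parseObject("true", Boolean.class));       // true  -> the JSONToken.TRUE branch
        System.out.println(JSON.parseObject("1", Boolean.class));          // true  -> the LITERAL_INT branch, 1 means true
        System.out.println(JSON.parseObject("0", Boolean.class));          // false -> the LITERAL_INT branch, anything but 1 is false
        System.out.println(JSON.parseObject("true", AtomicBoolean.class)); // true  -> wrapped into an AtomicBoolean
    }
}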
Next, applying the same approach, let us analyse the next deserializer instance, CharacterCodec.
CharacterCodec deserialization
//com.alibaba.fastjson.serializer.CharacterCodec.deserialze
public <T> T deserialze(DefaultJSONParser parser, Type clazz, Object fieldName) {
    Object value = parser.parse();        // use the tokens to obtain the sub-string to deserialize
    return value == null                  // is there anything to deserialize?
        ? null                            // no: return early and save the work
        : (T) TypeUtils.castToChar(value);
}
This method is very straightforward: it first checks whether there is anything to deserialize at all, and then hands the value over to the lower-level type conversion. The actual value parsing is delegated to parse(java.lang.Object), and the parsed string is then handled by taking its first character.
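A quick usage sketch of that behaviour (my own example; it assumes a single-character JSON string value, and the expected output is given in the comment):
import com.alibaba.fastjson.JSON;

public class CharacterCodecDemo {
    public static void main(String[] args) {
        // the JSON document is the string "A"; CharacterCodec delegates to parse() and TypeUtils.castToChar
        Character c = JSON.parseObject("\"A\"", Character.class);
        System.out.println(c); // expected: A
    }
}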
Let us take a closer look at the parser.parse() method.
public Object parse(Object fieldName) {
    final JSONLexer lexer = this.lexer;
    switch (lexer.token()) {
        /**
         * other tokens omitted (this is a shared method)
         */
        case LITERAL_STRING: // a string: parse its value
            String stringLiteral = lexer.stringVal();
            lexer.nextToken(JSONToken.COMMA); // read up to the next comma

            if (lexer.isEnabled(Feature.AllowISO8601DateFormat)) { // special handling for dates, according to the user's configuration
                JSONScanner iso8601Lexer = new JSONScanner(stringLiteral);
                try {
                    if (iso8601Lexer.scanISO8601DateIfMatch()) {
                        return iso8601Lexer.getCalendar().getTime();
                    }
                } finally {
                    iso8601Lexer.close();
                }
            }

            return stringLiteral;
        /**
         * other tokens omitted (as above)
         */
    }
}
In fact this method is very similar to the BooleanCodec deserialization above; the logic is essentially identical.
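The AllowISO8601DateFormat branch above can also be observed from the top-level API. A sketch (the returned types are my expectation of the behaviour described above; the concrete Date value depends on the default time zone):
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.parser.Feature;

public class Iso8601Demo {
    public static void main(String[] args) {
        // without the feature, a JSON string stays a String
        Object plain = JSON.parse("\"2021-11-01T12:00:00\"");
        // with the feature, the LITERAL_STRING branch tries scanISO8601DateIfMatch and returns a Date
        Object dated = JSON.parse("\"2021-11-01T12:00:00\"", Feature.AllowISO8601DateFormat);
        System.out.println(plain.getClass().getSimpleName()); // expected: String
        System.out.println(dated.getClass().getSimpleName()); // expected: Date
    }
}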
Let us look at the next one, the implementation of IntegerCodec deserialization.
IntegerCodec deserialization
//com.alibaba.fastjson.serializer.IntegerCodec.deserialze
public <T> T deserialze(DefaultJSONParser parser, Type clazz, Object fieldName) {
    final JSONLexer lexer = parser.lexer;

    final int token = lexer.token();
    if (token == JSONToken.NULL) { // a NULL value: return null directly
        lexer.nextToken(JSONToken.COMMA);
        return null;
    }

    Integer intObj;
    try {
        if (token == JSONToken.LITERAL_INT) { // an integer literal
            int val = lexer.intValue();
            lexer.nextToken(JSONToken.COMMA); // read up to the next comma
            intObj = Integer.valueOf(val);    // convert the value and keep it
        } else if (token == JSONToken.LITERAL_FLOAT) { // a floating-point literal: handle the decimal first, then keep it
            BigDecimal number = lexer.decimalValue();
            intObj = TypeUtils.intValue(number);
            lexer.nextToken(JSONToken.COMMA);
        } else {
            if (token == JSONToken.LBRACE) { // legacy behaviour, kept for compatibility with older versions
                JSONObject jsonObject = new JSONObject(true);
                parser.parseObject(jsonObject);
                intObj = TypeUtils.castToInt(jsonObject);
            } else { // fall back to the basic type conversion for all other cases
                Object value = parser.parse();
                intObj = TypeUtils.castToInt(value);
            }
        }
    } catch (Exception ex) {
        String message = "parseInt error";
        if (fieldName != null) {
            message += (", field : " + fieldName);
        }
        throw new JSONException(message, ex);
    }

    if (clazz == AtomicInteger.class) { // handle the atomic wrapper type
        return (T) new AtomicInteger(intObj.intValue());
    }

    return (T) intObj;
}
The handling here is much the same as above. I include it mainly to show the concrete conversion logic for a few more basic types, for reference.
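A corresponding usage sketch (my expectations in the comments; the last line exercises the LITERAL_FLOAT branch described above):
import com.alibaba.fastjson.JSON;
import java.util.concurrent.atomic.AtomicInteger;

public class IntegerCodecDemo {
    public static void main(String[] args) {
        System.out.println(JSON.parseObject("42", Integer.class));       // 42 -> the LITERAL_INT branch
        System.out.println(JSON.parseObject("42", AtomicInteger.class)); // 42 -> wrapped into an AtomicInteger
        System.out.println(JSON.parseObject("3.0", Integer.class));      // 3  -> the LITERAL_FLOAT branch, decimal narrowed to int
    }
}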
Let us look at the next one, FloatCodec.
FloatCodec deserialization
//com.alibaba.fastjson.serializer.FloatCodec.deserialze
public static <T> T deserialze(DefaultJSONParser parser) {
    final JSONLexer lexer = parser.lexer;

    if (lexer.token() == JSONToken.LITERAL_INT) { // an integer literal: read on to the next field
        String val = lexer.numberString();
        lexer.nextToken(JSONToken.COMMA);
        return (T) Float.valueOf(Float.parseFloat(val));
    }

    if (lexer.token() == JSONToken.LITERAL_FLOAT) { // a floating-point literal, same as above (the important case)
        float val = lexer.floatValue();
        lexer.nextToken(JSONToken.COMMA);
        return (T) Float.valueOf(val);
    }

    Object value = parser.parse();
    if (value == null) {
        return null;
    }

    return (T) TypeUtils.castToFloat(value);
}
The logic is the same as above.
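And for completeness, the same kind of sketch for FloatCodec (expected values in the comments):
import com.alibaba.fastjson.JSON;

public class FloatCodecDemo {
    public static void main(String[] args) {
        System.out.println(JSON.parseObject("1.5", Float.class));     // 1.5 -> the LITERAL_FLOAT branch
        System.out.println(JSON.parseObject("2", Float.class));       // 2.0 -> the LITERAL_INT branch, via Float.parseFloat
        System.out.println(JSON.parseObject("\"3.5\"", Float.class)); // 3.5 -> the generic parse() plus TypeUtils.castToFloat
    }
}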
In fact every data structure (here, a class) can be broken down field by field into basic data types. Once the deserialization of each basic type has been written, every field can be deserialized, and therefore the whole class can be deserialized.
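For example, with a hypothetical User bean (used here only for illustration), deserializing the object below reduces to deserializing one String field and one int field with exactly the codecs shown above:
import com.alibaba.fastjson.JSON;

public class UserDemo {
    // a hypothetical bean whose fields are plain basic types
    public static class User {
        public String name;
        public int age;
    }

    public static void main(String[] args) {
        User user = JSON.parseObject("{\"name\":\"Tom\",\"age\":18}", User.class);
        System.out.println(user.name + ", " + user.age); // expected: Tom, 18
    }
}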
Now for the most important part: JavaBeanDeserializer deserialization.
JavaBeanDeserializer deserialization
Let us first look at this class's constructor. It inspects the user's configuration and wires up a deserializer for every property, ready to be called later; it is the conductor of the whole deserialization process.
//com.alibaba.fastjson.parser.deserializer.JavaBeanDeserializer.JavaBeanDeserializer
public JavaBeanDeserializer(ParserConfig config, JavaBeanInfo beanInfo){
    /** the Java class being deserialized */
    this.clazz = beanInfo.clazz;
    this.beanInfo = beanInfo;

    ParserConfig.AutoTypeCheckHandler autoTypeCheckHandler = null; // handle the configuration
    if (beanInfo.jsonType != null && beanInfo.jsonType.autoTypeCheckHandler() != ParserConfig.AutoTypeCheckHandler.class) {
        try {
            autoTypeCheckHandler = beanInfo.jsonType.autoTypeCheckHandler().newInstance();
        } catch (Exception e) {
            // abandon the configured handler and fall back to the default configuration
        }
    }
    this.autoTypeCheckHandler = autoTypeCheckHandler;

    // create the storage for the field-related data
    Map<String, FieldDeserializer> alterNameFieldDeserializers = null;
    sortedFieldDeserializers = new FieldDeserializer[beanInfo.sortedFields.length];

    // create a deserializer for every sorted field; if a field has aliases, map each alias to that deserializer
    for (int i = 0, size = beanInfo.sortedFields.length; i < size; ++i) {
        FieldInfo fieldInfo = beanInfo.sortedFields[i];
        FieldDeserializer fieldDeserializer = config.createFieldDeserializer(config, beanInfo, fieldInfo);
        sortedFieldDeserializers[i] = fieldDeserializer;

        if (size > 128) {
            if (fieldDeserializerMap == null) {
                fieldDeserializerMap = new HashMap<String, FieldDeserializer>();
            }
            fieldDeserializerMap.put(fieldInfo.name, fieldDeserializer);
        }

        for (String name : fieldInfo.alternateNames) {
            if (alterNameFieldDeserializers == null) {
                alterNameFieldDeserializers = new HashMap<String, FieldDeserializer>();
            }
            alterNameFieldDeserializers.put(name, fieldDeserializer);
        }
    }
    this.alterNameFieldDeserializers = alterNameFieldDeserializers;

    fieldDeserializers = new FieldDeserializer[beanInfo.fields.length];
    for (int i = 0, size = beanInfo.fields.length; i < size; ++i) {
        FieldInfo fieldInfo = beanInfo.fields[i];
        // binary-search sortedFieldDeserializers for the deserializer already created for this field
        FieldDeserializer fieldDeserializer = getFieldDeserializer(fieldInfo.name);
        fieldDeserializers[i] = fieldDeserializer;
    }
}
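As a quick illustration of the alias map built above, here is a sketch that relies on the @JSONField alternateNames attribute (the Account class is hypothetical and only serves as an example):
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.annotation.JSONField;

public class AlternateNameDemo {
    public static class Account {
        // "user_name" ends up in alterNameFieldDeserializers and resolves to the same field deserializer as "userName"
        @JSONField(alternateNames = {"user_name"})
        public String userName;
    }

    public static void main(String[] args) {
        Account a = JSON.parseObject("{\"user_name\":\"Tom\"}", Account.class);
        System.out.println(a.userName); // expected: Tom
    }
}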
The constructor above has prepared everything the concrete deserialization needs; next it is the deserialze method's turn to actually carry out the deserialization.
Warning: very long code ahead!
The snippet below is huge. It also contains my comments, but if you would rather not read it, you can skip straight to the analysis after the code.
protected <T> T deserialze(DefaultJSONParser parser, //
Type type, //
Object fieldName, //
Object object, //
int features, //
int[] setFlags) {
if (type == JSON.class || type == JSONObject.class) {
/** decide what to parse based on the current token type */
return (T) parser.parse();
}
final JSONLexerBase lexer = (JSONLexerBase) parser.lexer; // xxx
final ParserConfig config = parser.getConfig();
int token = lexer.token();
if (token == JSONToken.NULL) {
lexer.nextToken(JSONToken.COMMA); // parsed null: pre-read the next token and return
return null;
}
ParseContext context = parser.getContext();
if (object != null && context != null) {
context = context.parent;
}
ParseContext childContext = null;
try {
Map<String, Object> fieldValues = null;
if (token == JSONToken.RBRACE) {
lexer.nextToken(JSONToken.COMMA);
if (object == null) {
/** '}' means the object has ended: try to create an instance */
object = createInstance(parser, type);
}
return (T) object;
}
if (token == JSONToken.LBRACKET) {
final int mask = Feature.SupportArrayToBean.mask;
boolean isSupportArrayToBean = (beanInfo.parserFeatures & mask) != 0 //
|| lexer.isEnabled(Feature.SupportArrayToBean)
|| (features & mask) != 0
;
if (isSupportArrayToBean) {
// deserialize the array values into the object, writing the field values in sortedFieldDeserializers order
return deserialzeArrayMapping(parser, type, fieldName, object);
}
}
if (token != JSONToken.LBRACE && token != JSONToken.COMMA) {
if (lexer.isBlankInput()) {
return null;
}
if (token == JSONToken.LITERAL_STRING) {
String strVal = lexer.stringVal();
/** an empty string value: return null */
if (strVal.length() == 0) {
lexer.nextToken();
return null;
}
if (beanInfo.jsonType != null) {
/** probe whether this is one of the declared enum types */
for (Class<?> seeAlsoClass : beanInfo.jsonType.seeAlso()) {
if (Enum.class.isAssignableFrom(seeAlsoClass)) {
try {
Enum<?> e = Enum.valueOf((Class<Enum>) seeAlsoClass, strVal);
return (T) e;
} catch (IllegalArgumentException e) {
// skip
}
}
}
}
}
if (token == JSONToken.LBRACKET && lexer.getCurrent() == ']') {
/** an array containing zero elements */
lexer.next();
lexer.nextToken();
return null;
}
if (beanInfo.factoryMethod != null && beanInfo.fields.length == 1) {
try {
FieldInfo field = beanInfo.fields[0];
if (field.fieldClass == Integer.class) {
if (token == JSONToken.LITERAL_INT) {
int intValue = lexer.intValue();
lexer.nextToken();
return (T) createFactoryInstance(config, intValue);
}
} else if (field.fieldClass == String.class) {
if (token == JSONToken.LITERAL_STRING) {
String stringVal = lexer.stringVal();
lexer.nextToken();
return (T) createFactoryInstance(config, stringVal);
}
}
} catch (Exception ex) {
throw new JSONException(ex.getMessage(), ex);
}
}
StringBuilder buf = (new StringBuilder()) //
.append("syntax error, expect {, actual ") //
.append(lexer.tokenName()) //
.append(", pos ") //
.append(lexer.pos());
if (fieldName instanceof String) {
buf //
.append(", fieldName ") //
.append(fieldName);
}
buf.append(", fastjson-version ").append(JSON.VERSION);
throw new JSONException(buf.toString());
}
if (parser.resolveStatus == DefaultJSONParser.TypeNameRedirect) {
parser.resolveStatus = DefaultJSONParser.NONE;
}
String typeKey = beanInfo.typeKey;
for (int fieldIndex = 0, notMatchCount = 0;; fieldIndex++) {
String key = null;
FieldDeserializer fieldDeserializer = null;
FieldInfo fieldInfo = null;
Class<?> fieldClass = null;
JSONField fieldAnnotation = null;
boolean customDeserializer = false;
if (fieldIndex < sortedFieldDeserializers.length && notMatchCount < 16) {
/** check whether all fields have been processed yet */
fieldDeserializer = sortedFieldDeserializers[fieldIndex];
fieldInfo = fieldDeserializer.fieldInfo;
fieldClass = fieldInfo.fieldClass;
fieldAnnotation = fieldInfo.getAnnotation();
if (fieldAnnotation != null && fieldDeserializer instanceof DefaultFieldDeserializer) {
customDeserializer = ((DefaultFieldDeserializer) fieldDeserializer).customDeserilizer;
}
}
boolean matchField = false;
boolean valueParsed = false;
Object fieldValue = null;
if (fieldDeserializer != null) {
char[] name_chars = fieldInfo.name_chars;
if (customDeserializer && lexer.matchField(name_chars)) {
matchField = true;
} else if (fieldClass == int.class || fieldClass == Integer.class) {
/** scan an int value */
int intVal = lexer.scanFieldInt(name_chars);
if (intVal == 0 && lexer.matchStat == JSONLexer.VALUE_NULL) {
fieldValue = null;
} else {
fieldValue = intVal;
}
if (lexer.matchStat > 0) {
matchField = true;
valueParsed = true;
} else if (lexer.matchStat == JSONLexer.NOT_MATCH_NAME) {
notMatchCount++;
continue;
}
} else if (fieldClass == long.class || fieldClass == Long.class) {
/** scan a long value */
long longVal = lexer.scanFieldLong(name_chars);
if (longVal == 0 && lexer.matchStat == JSONLexer.VALUE_NULL) {
fieldValue = null;
} else {
fieldValue = longVal;
}
if (lexer.matchStat > 0) {
matchField = true;
valueParsed = true;
} else if (lexer.matchStat == JSONLexer.NOT_MATCH_NAME) {
notMatchCount++;
continue;
}
} else if (fieldClass == String.class) {
/** scan a String value */
fieldValue = lexer.scanFieldString(name_chars);
if (lexer.matchStat > 0) {
matchField = true;
valueParsed = true;
} else if (lexer.matchStat == JSONLexer.NOT_MATCH_NAME) {
notMatchCount++;
continue;
}
} else if (fieldClass == java.util.Date.class && fieldInfo.format == null) {
/** scan a Date value */
fieldValue = lexer.scanFieldDate(name_chars);
if (lexer.matchStat > 0) {
matchField = true;
valueParsed = true;
} else if (lexer.matchStat == JSONLexer.NOT_MATCH_NAME) {
notMatchCount++;
continue;
}
} else if (fieldClass == BigDecimal.class) {
/** scan a BigDecimal value */
fieldValue = lexer.scanFieldDecimal(name_chars);
if (lexer.matchStat > 0) {
matchField = true;
valueParsed = true;
} else if (lexer.matchStat == JSONLexer.NOT_MATCH_NAME) {
notMatchCount++;
continue;
}
} else if (fieldClass == BigInteger.class) {
/** scan a BigInteger value */
fieldValue = lexer.scanFieldBigInteger(name_chars);
if (lexer.matchStat > 0) {
matchField = true;
valueParsed = true;
} else if (lexer.matchStat == JSONLexer.NOT_MATCH_NAME) {
notMatchCount++;
continue;
}
} else if (fieldClass == boolean.class || fieldClass == Boolean.class) {
/** scan a boolean value */
boolean booleanVal = lexer.scanFieldBoolean(name_chars);
if (lexer.matchStat == JSONLexer.VALUE_NULL) {
fieldValue = null;
} else {
fieldValue = booleanVal;
}
if (lexer.matchStat > 0) {
matchField = true;
valueParsed = true;
} else if (lexer.matchStat == JSONLexer.NOT_MATCH_NAME) {
notMatchCount++;
continue;
}
} else if (fieldClass == float.class || fieldClass == Float.class) {
/** scan a float value */
float floatVal = lexer.scanFieldFloat(name_chars);
if (floatVal == 0 && lexer.matchStat == JSONLexer.VALUE_NULL) {
fieldValue = null;
} else {
fieldValue = floatVal;
}
if (lexer.matchStat > 0) {
matchField = true;
valueParsed = true;
} else if (lexer.matchStat == JSONLexer.NOT_MATCH_NAME) {
notMatchCount++;
continue;
}
} else if (fieldClass == double.class || fieldClass == Double.class) {
/** scan a double value */
double doubleVal = lexer.scanFieldDouble(name_chars);
if (doubleVal == 0 && lexer.matchStat == JSONLexer.VALUE_NULL) {
fieldValue = null;
} else {
fieldValue = doubleVal;
}
if (lexer.matchStat > 0) {
matchField = true;
valueParsed = true;
} else if (lexer.matchStat == JSONLexer.NOT_MATCH_NAME) {
notMatchCount++;
continue;
}
} else if (fieldClass.isEnum()
/** scan an enum value */
&& parser.getConfig().getDeserializer(fieldClass) instanceof EnumDeserializer
&& (fieldAnnotation == null || fieldAnnotation.deserializeUsing() == Void.class)
) {
if (fieldDeserializer instanceof DefaultFieldDeserializer) {
ObjectDeserializer fieldValueDeserilizer = ((DefaultFieldDeserializer) fieldDeserializer).fieldValueDeserilizer;
fieldValue = this.scanEnum(lexer, name_chars, fieldValueDeserilizer);
if (lexer.matchStat > 0) {
matchField = true;
valueParsed = true;
} else if (lexer.matchStat == JSONLexer.NOT_MATCH_NAME) {
notMatchCount++;
continue;
}
}
} else if (fieldClass == int[].class) {
/** scan an int[] value */
fieldValue = lexer.scanFieldIntArray(name_chars);
if (lexer.matchStat > 0) {
matchField = true;
valueParsed = true;
} else if (lexer.matchStat == JSONLexer.NOT_MATCH_NAME) {
notMatchCount++;
continue;
}
} else if (fieldClass == float[].class) {
/** scan a float[] value */
fieldValue = lexer.scanFieldFloatArray(name_chars);
if (lexer.matchStat > 0) {
matchField = true;
valueParsed = true;
} else if (lexer.matchStat == JSONLexer.NOT_MATCH_NAME) {
notMatchCount++;
continue;
}
} else if (fieldClass == float[][].class) {
fieldValue = lexer.scanFieldFloatArray2(name_chars);
if (lexer.matchStat > 0) {
matchField = true;
valueParsed = true;
} else if (lexer.matchStat == JSONLexer.NOT_MATCH_NAME) {
notMatchCount++;
continue;
}
} else if (lexer.matchField(name_chars)) {
matchField = true;
} else {
continue;
}
}
/** the json at the current position does not match the current field name */
if (!matchField) {
/** add the current field name to the symbol table */
key = lexer.scanSymbol(parser.symbolTable);
/** the current token is not a valid field identifier, e.g. a ',' or similar symbol */
if (key == null) {
token = lexer.token();
if (token == JSONToken.RBRACE) {
/** closing brace: pre-read the next token */
lexer.nextToken(JSONToken.COMMA);
break;
}
if (token == JSONToken.COMMA) {
if (lexer.isEnabled(Feature.AllowArbitraryCommas)) {
continue;
}
}
}
if ("$ref" == key && context != null) {
lexer.nextTokenWithColon(JSONToken.LITERAL_STRING);
token = lexer.token();
if (token == JSONToken.LITERAL_STRING) {
String ref = lexer.stringVal();
if ("@".equals(ref)) {
object = context.object;
} else if ("..".equals(ref)) {
ParseContext parentContext = context.parent;
if (parentContext.object != null) {
object = parentContext.object;
} else {
parser.addResolveTask(new ResolveTask(parentContext, ref));
parser.resolveStatus = DefaultJSONParser.NeedToResolve;
}
} else if ("$".equals(ref)) {
ParseContext rootContext = context;
while (rootContext.parent != null) {
rootContext = rootContext.parent;
}
if (rootContext.object != null) {
object = rootContext.object;
} else {
parser.addResolveTask(new ResolveTask(rootContext, ref));
parser.resolveStatus = DefaultJSONParser.NeedToResolve;
}
} else {
if (ref.indexOf('\\') > 0) {
StringBuilder buf = new StringBuilder();
for (int i = 0; i < ref.length(); ++i) {
char ch = ref.charAt(i);
if (ch == '\\') {
ch = ref.charAt(++i);
}
buf.append(ch);
}
ref = buf.toString();
}
Object refObj = parser.resolveReference(ref);
if (refObj != null) {
object = refObj;
} else {
parser.addResolveTask(new ResolveTask(context, ref));
parser.resolveStatus = DefaultJSONParser.NeedToResolve;
}
}
} else {
throw new JSONException("illegal ref, " + JSONToken.name(token));
}
lexer.nextToken(JSONToken.RBRACE);
if (lexer.token() != JSONToken.RBRACE) {
throw new JSONException("illegal ref");
}
lexer.nextToken(JSONToken.COMMA);
parser.setContext(context, object, fieldName);
return (T) object;
}
if ((typeKey != null && typeKey.equals(key))
|| JSON.DEFAULT_TYPE_KEY == key) {
lexer.nextTokenWithColon(JSONToken.LITERAL_STRING);
if (lexer.token() == JSONToken.LITERAL_STRING) {
String typeName = lexer.stringVal();
lexer.nextToken(JSONToken.COMMA);
if (typeName.equals(beanInfo.typeName)|| parser.isEnabled(Feature.IgnoreAutoType)) {
/** ignore parsing of the @type contained in the string */
if (lexer.token() == JSONToken.RBRACE) {
lexer.nextToken();
break;
}
continue;
}
/** look up a deserializer instance via the enum seeAlso declaration */
ObjectDeserializer deserializer = getSeeAlso(config, this.beanInfo, typeName);
Class<?> userType = null;
if (deserializer == null) {
/** no match: look up the deserializer associated with the class's generic or parameterized type */
Class<?> expectClass = TypeUtils.getClass(type);
if (autoTypeCheckHandler != null) {
userType = autoTypeCheckHandler.handler(typeName, expectClass, lexer.getFeatures());
}
if (userType == null) {
if (typeName.equals("java.util.HashMap") || typeName.equals("java.util.LinkedHashMap")) {
if (lexer.token() == JSONToken.RBRACE) {
lexer.nextToken();
break;
}
continue;
}
}
if (userType == null) {
userType = config.checkAutoType(typeName, expectClass, lexer.getFeatures());
}
deserializer = parser.getConfig().getDeserializer(userType);
}
Object typedObject = deserializer.deserialze(parser, userType, fieldName);
if (deserializer instanceof JavaBeanDeserializer) {
JavaBeanDeserializer javaBeanDeserializer = (JavaBeanDeserializer) deserializer;
if (typeKey != null) {
FieldDeserializer typeKeyFieldDeser = javaBeanDeserializer.getFieldDeserializer(typeKey);
if (typeKeyFieldDeser != null) {
typeKeyFieldDeser.setValue(typedObject, typeName);
}
}
}
return (T) typedObject;
} else {
throw new JSONException("syntax error");
}
}
}
if (object == null && fieldValues == null) {
/** create and initialise the object instance for the first time */
object = createInstance(parser, type);
if (object == null) {
fieldValues = new HashMap<String, Object>(this.fieldDeserializers.length);
}
childContext = parser.setContext(context, object, fieldName);
if (setFlags == null) {
setFlags = new int[(this.fieldDeserializers.length / 32) + 1];
}
}
if (matchField) {
if (!valueParsed) {
/** the json matches the current field name and the value has not been parsed yet,
 * so parse it directly with the deserializer instance associated with this field
 */
fieldDeserializer.parseField(parser, object, type, fieldValues);
} else {
if (object == null) {
/** the value has already been parsed: store it in the map */
fieldValues.put(fieldInfo.name, fieldValue);
} else if (fieldValue == null) {
/** the field value is null; skip the primitives int, long, float, double and boolean */
if (fieldClass != int.class //
&& fieldClass != long.class //
&& fieldClass != float.class //
&& fieldClass != double.class //
&& fieldClass != boolean.class //
) {
fieldDeserializer.setValue(object, fieldValue);
}
} else {
if (fieldClass == String.class
&& ((features & Feature.TrimStringFieldValue.mask) != 0
|| (beanInfo.parserFeatures & Feature.TrimStringFieldValue.mask) != 0
|| (fieldInfo.parserFeatures & Feature.TrimStringFieldValue.mask) != 0)) {
fieldValue = ((String) fieldValue).trim();
}
fieldDeserializer.setValue(object, fieldValue);
}
if (setFlags != null) {
int flagIndex = fieldIndex / 32;
int bitIndex = fieldIndex % 32;
setFlags[flagIndex] |= (1 << bitIndex);
}
if (lexer.matchStat == JSONLexer.END) {
break;
}
}
} else {
/** the field name does not match the json at this point (usually a different order,
 * extra fields or missing fields): look up a deserializer by key and parse with it
 */
boolean match = parseField(parser, key, object, type,
fieldValues == null ? new HashMap<String, Object>(this.fieldDeserializers.length) : fieldValues, setFlags);
if (!match) {
if (lexer.token() == JSONToken.RBRACE) {
/** closing brace reached: pre-read the next token and break out of the loop */
lexer.nextToken();
break;
}
continue;
} else if (lexer.token() == JSONToken.COLON) {
throw new JSONException("syntax error, unexpect token ':'");
}
}
if (lexer.token() == JSONToken.COMMA) {
continue;
}
if (lexer.token() == JSONToken.RBRACE) {
lexer.nextToken(JSONToken.COMMA);
break;
}
if (lexer.token() == JSONToken.IDENTIFIER || lexer.token() == JSONToken.ERROR) {
throw new JSONException("syntax error, unexpect token " + JSONToken.name(lexer.token()));
}
}
if (object == null) {
if (fieldValues == null) {
/** create and initialise the object instance for the first time */
object = createInstance(parser, type);
if (childContext == null) {
childContext = parser.setContext(context, object, fieldName);
}
return (T) object;
}
/** extract the constructor parameter names */
String[] paramNames = beanInfo.creatorConstructorParameters;
final Object[] params;
if (paramNames != null) {
params = new Object[paramNames.length];
for (int i = 0; i < paramNames.length; i++) {
String paramName = paramNames[i];
Object param = fieldValues.remove(paramName);
/** the parsed fields do not contain the current parameter name */
if (param == null) {
Type fieldType = beanInfo.creatorConstructorParameterTypes[i];
FieldInfo fieldInfo = beanInfo.fields[i];
/** probe the type and set its default value */
if (fieldType == byte.class) {
param = (byte) 0;
} else if (fieldType == short.class) {
param = (short) 0;
} else if (fieldType == int.class) {
param = 0;
} else if (fieldType == long.class) {
param = 0L;
} else if (fieldType == float.class) {
param = 0F;
} else if (fieldType == double.class) {
param = 0D;
} else if (fieldType == boolean.class) {
param = Boolean.FALSE;
} else if (fieldType == String.class
&& (fieldInfo.parserFeatures & Feature.InitStringFieldAsEmpty.mask) != 0) {
param = "";
}
} else {
if (beanInfo.creatorConstructorParameterTypes != null && i < beanInfo.creatorConstructorParameterTypes.length) {
Type paramType = beanInfo.creatorConstructorParameterTypes[i];
if (paramType instanceof Class) {
Class paramClass = (Class) paramType;
if (!paramClass.isInstance(param)) {
if (param instanceof List) {
List list = (List) param;
if (list.size() == 1) {
Object first = list.get(0);
if (paramClass.isInstance(first)) {
param = list.get(0);
}
}
}
}
}
}
}
params[i] = param;
}
} else {
/** probe the fields and initialise the constructor parameter defaults from them */
FieldInfo[] fieldInfoList = beanInfo.fields;
int size = fieldInfoList.length;
params = new Object[size];
for (int i = 0; i < size; ++i) {
FieldInfo fieldInfo = fieldInfoList[i];
Object param = fieldValues.get(fieldInfo.name);
if (param == null) {
Type fieldType = fieldInfo.fieldType;
if (fieldType == byte.class) {
param = (byte) 0;
} else if (fieldType == short.class) {
param = (short) 0;
} else if (fieldType == int.class) {
param = 0;
} else if (fieldType == long.class) {
param = 0L;
} else if (fieldType == float.class) {
param = 0F;
} else if (fieldType == double.class) {
param = 0D;
} else if (fieldType == boolean.class) {
param = Boolean.FALSE;
} else if (fieldType == String.class
&& (fieldInfo.parserFeatures & Feature.InitStringFieldAsEmpty.mask) != 0) {
param = "";
}
}
params[i] = param;
}
}
if (beanInfo.creatorConstructor != null) {
boolean hasNull = false;
if (beanInfo.kotlin) {
for (int i = 0; i < params.length; i++) {
if (params[i] == null && beanInfo.fields != null && i < beanInfo.fields.length) {
FieldInfo fieldInfo = beanInfo.fields[i];
if (fieldInfo.fieldClass == String.class) {
hasNull = true;
}
break;
}
}
}
try {
if (hasNull && beanInfo.kotlinDefaultConstructor != null) {
object = beanInfo.kotlinDefaultConstructor.newInstance(new Object[0]);
for (int i = 0; i < params.length; i++) {
final Object param = params[i];
if (param != null && beanInfo.fields != null && i < beanInfo.fields.length) {
FieldInfo fieldInfo = beanInfo.fields[i];
fieldInfo.set(object, param);
}
}
} else {
object = beanInfo.creatorConstructor.newInstance(params);
}
} catch (Exception e) {
throw new JSONException("create instance error, " + paramNames + ", "
+ beanInfo.creatorConstructor.toGenericString(), e);
}
if (paramNames != null) {
/** for the remaining fields, look up their deserializers and set the values */
for (Map.Entry<String, Object> entry : fieldValues.entrySet()) {
FieldDeserializer fieldDeserializer = getFieldDeserializer(entry.getKey());
if (fieldDeserializer != null) {
fieldDeserializer.setValue(object, entry.getValue());
}
}
}
} else if (beanInfo.factoryMethod != null) {
try {
object = beanInfo.factoryMethod.invoke(null, params);
} catch (Exception e) {
throw new JSONException("create factory method error, " + beanInfo.factoryMethod.toString(), e);
}
}
if (childContext != null) {
childContext.object = object;
}
}
/** check whether a post-processing buildMethod extension exists, and call it if so */
Method buildMethod = beanInfo.buildMethod;
if (buildMethod == null) {
return (T) object;
}
Object builtObj;
try {
builtObj = buildMethod.invoke(object);
} catch (Exception e) {
throw new JSONException("build object error", e);
}
return (T) builtObj;
} finally {
if (childContext != null) {
childContext.object = object;
}
parser.setContext(context);
}
}
This code really is long and tedious. What it actually does boils down to the following:
1. Handle the special cases up front: a NULL token returns null, an immediate } produces an empty instance, and a [ token is mapped onto the bean when SupportArrayToBean is enabled.
2. Loop over the sorted field deserializers and let the lexer scan typed values (int, long, String, Date, BigDecimal, BigInteger, boolean, float, double, enums and arrays) directly whenever the field name matches.
3. When the key does not match a known field, fall back to scanSymbol and handle the special keys $ref (object references) and the type key / @type (redirecting to another deserializer).
4. Create the target instance, either up front via createInstance or at the end through the creator constructor, the Kotlin default constructor or the factory method, filling in type defaults for missing primitive parameters.
5. Set every parsed value on the instance through the field deserializers, and finally invoke buildMethod if one is configured.
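One branch worth calling out separately is the array-to-bean mapping near the top of the method. A sketch of how it is triggered (the Point class is hypothetical, and I am assuming the array elements are written to the fields in sortedFieldDeserializers order, which for these two names coincides with the alphabetical order x, y):
import com.alibaba.fastjson.JSON;
import com.alibaba.fastjson.parser.Feature;

public class ArrayToBeanDemo {
    public static class Point {
        public int x;
        public int y;
    }

    public static void main(String[] args) {
        // the '[' token combined with Feature.SupportArrayToBean routes into deserialzeArrayMapping
        Point p = JSON.parseObject("[1,2]", Point.class, Feature.SupportArrayToBean);
        System.out.println(p.x + "," + p.y); // expected: 1,2
    }
}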
At this point the whole of object deserialization is complete: we have obtained an object with every known property set. Looking back over this journey, we started from the entry point and, almost without noticing it, went deep into the low-level implementation of fastjson deserialization, analysing its core code piece by piece. Along the way we saw not only the developers' rigorous logic but also a variety of coding techniques.
With object deserialization via parseObject now fully analysed, this is the last article of the special. Our journey does not end here, of course: in the next installment I will return to the entry points and introduce the next one, parseArray(), which turns a JSON string containing multiple JSON objects into an array for callers to use.
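As a tiny preview (reusing a hypothetical User bean like the one above):
import com.alibaba.fastjson.JSON;
import java.util.List;

public class ParseArrayPreview {
    public static class User {
        public String name;
        public int age;
    }

    public static void main(String[] args) {
        List<User> users = JSON.parseArray("[{\"name\":\"Tom\",\"age\":18},{\"name\":\"Jerry\",\"age\":20}]", User.class);
        System.out.println(users.size()); // expected: 2
    }
}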
Thank you all for reading, and thanks to the teachers for their guidance!