I have recently been working on a Spring Boot + Cassandra project. Being new to Cassandra, I ran into many thorny problems and spent a lot of time searching for information and tracking down root causes. Here I list some of the trickier issues for future review and study.
Table definitions used in this article:
CREATE TYPE cass_stdy.address (
addr text,
mail text
);
CREATE TABLE cass_stdy.user (
id uuid PRIMARY KEY,
addr list<frozen<address>>,
age int,
birthday timestamp,
d_addr frozen<address>,
name text,
score map<text, int>
);
(Problems during development required frequent application restarts, so to debug more efficiently I created a separate project and keyspace dedicated to reproducing them. The table structures therefore contain only test data, which also avoids any information-security concerns.)
1. Some small details of CQL syntax to watch out for
CQL syntax is similar to SQL, but because Cassandra is a distributed key-value store, CQL comes with many limitations and differences.
For the full CQL specification, see the official Cassandra documentation: The Cassandra Query Language (CQL)
1) Parentheses "()" in a WHERE clause may wrap only a single condition (usually it is best to omit the parentheses entirely, but statements assembled by some tools include them)
Wrong:
SELECT * FROM user WHERE ( id = cf49db81-7821-40e1-8502-871660c14571 AND name = 'MadMonkey' ) ALLOW FILTERING;
Right:
SELECT * FROM user WHERE ( id = cf49db81-7821-40e1-8502-871660c14571 ) AND ( name = 'MadMonkey' ) ALLOW FILTERING;
Tip: any CQL query that filters on non-primary-key columns must end with ALLOW FILTERING.
Note: if you build CQL with the AbstractSQL builder class, the generated WHERE clause has the form WHERE (id = ? AND name = ?), so multi-condition CQL assembled this way fails.
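If the builder's combined form cannot be avoided, one workaround is to split the generated clause yourself before executing it. A minimal sketch (a hypothetical helper, not part of any library):

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical helper: rewrite "WHERE (a = ? AND b = ?)" into the
// CQL-friendly "WHERE (a = ?) AND (b = ?)".
public class CqlWhereSplitter {
    public static String splitConditions(String where) {
        Matcher m = Pattern.compile("WHERE \\((.*)\\)").matcher(where);
        if (!m.find()) {
            return where; // nothing to rewrite
        }
        String[] parts = m.group(1).split(" AND ");
        return "WHERE (" + String.join(") AND (", parts) + ")";
    }
}
```

This naive split assumes top-level AND only; nested parentheses or OR would need a real parser.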
2) String literals may only use single quotes ', never double quotes "
Wrong:
SELECT * FROM user WHERE name = "MadMonkey" ALLOW FILTERING;
Right:
SELECT * FROM user WHERE name = 'MadMonkey' ALLOW FILTERING;
Note: SQL accepts both single and double quotes, but in CQL double quotes denote case-sensitive identifiers, not strings; CQL does not support backticks ` either.
3) For user-defined types (UDTs), field names in INSERT and UPDATE literals must not be quoted strings
Wrong:
INSERT INTO user (id, d_addr) VALUES (cf49db81-7821-40e1-8502-871660c14571, {'addr': 'SZ-GD-CN', 'mail': '518000'});
Right:
INSERT INTO user (id, d_addr) VALUES (cf49db81-7821-40e1-8502-871660c14571, { addr: 'SZ-GD-CN', mail: '518000'});
Note: a UDT definition is analogous to a table, so its fields behave like table columns. In INSERT and UPDATE statements they are written the same way as column names, not like a map or a SQL JSON type.
4) Value types are strictly checked: a column's type strictly constrains its values. SQL's implicit conversion of numeric strings to numbers does not exist in CQL; a uuid value cannot be a quoted UUID string, and a numeric value cannot be a numeric string.
Wrong:
INSERT INTO user (id, age) VALUES ('cf49db81-7821-40e1-8502-871660c14571', '13');
Right:
INSERT INTO user (id, age) VALUES (cf49db81-7821-40e1-8502-871660c14571, 13);
Tip: timestamp values, by contrast, must be strings, e.g. right: '2020-01-01', wrong: 2020-01-01.
2. Exceptions in the project
Wrapped by spring-data, cassandra-driver-core and cassandra-driver-mapping can deliver most of the functionality a SQL stack offers; for usage details, see the official Spring Data Cassandra reference documentation.
Below are some of the trickier exceptions I ran into recently, together with brief solutions.
The entity and CQL statement used in what follows:
@Table("user")
public class User{
@PrimaryKeyColumn(name = "id", type = PrimaryKeyType.PARTITIONED)
private UUID id;
@Column("name")
private String name;
@Column("addr")
private List<Address> addresses;
@Column("d_addr")
private Address def;
@UserDefinedType("address")
public static class Address {
@Column("addr")
private String addr;
@Column("mail")
private String mail;
/** getter and setter */
}
/** getter and setter */
}
UPDATE cass_stdy.user SET addr = addr + ? WHERE id = ? IF name = ?
The business scenario calls for create/read/update/delete operations on a collection of UDT values. To guard against concurrent modification I used an UPDATE ... IF statement; starting the project and executing the UPDATE IF produced the following problem:
1) com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation:
com.datastax.driver.core.exceptions.CodecNotFoundException: Codec not found for requested operation: [ANY <-> com.*****.sbstdy.entity.User$Address]
at com.datastax.driver.core.CodecRegistry.notFound(CodecRegistry.java:806) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.CodecRegistry.createCodec(CodecRegistry.java:649) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.CodecRegistry.findCodec(CodecRegistry.java:631) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.CodecRegistry.maybeCreateCodec(CodecRegistry.java:732) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.CodecRegistry.createCodec(CodecRegistry.java:648) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.CodecRegistry.findCodec(CodecRegistry.java:631) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.CodecRegistry.codecFor(CodecRegistry.java:476) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.SimpleStatement.convert(SimpleStatement.java:332) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.SimpleStatement.getValues(SimpleStatement.java:146) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.SessionManager.makeRequestMessage(SessionManager.java:600) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:142) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:58) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:45) ~[cassandra-driver-core-3.7.2.jar:na]
This exception means a type mapping is missing: just as MyBatis has its TypeHandler, Cassandra's mapping layer needs a TypeCodec to convert the UDT into a Cassandra type.
TypeCodec is an interface that by default converts a Java object into a ByteBuffer for the mapping layer to parse, so we need to write an object converter implementing the TypeCodec interface.
import com.datastax.oss.driver.api.core.ProtocolVersion;
import com.datastax.oss.driver.api.core.type.DataType;
import com.datastax.oss.driver.api.core.type.DataTypes;
import com.datastax.oss.driver.api.core.type.codec.TypeCodec;
import com.datastax.oss.driver.api.core.type.reflect.GenericType;
import com.google.gson.Gson;
import edu.umd.cs.findbugs.annotations.NonNull;
import edu.umd.cs.findbugs.annotations.Nullable;
import java.nio.ByteBuffer;
public class AddressTypeCodec implements TypeCodec<User.Address> {
@NonNull
@Override
public GenericType<User.Address> getJavaType() {
return GenericType.of(User.Address.class);
}
@NonNull
@Override
public DataType getCqlType() {
return DataTypes.TEXT;
}
@Nullable
@Override
public ByteBuffer encode(@Nullable User.Address value, @NonNull ProtocolVersion protocolVersion) {
return ByteBuffer.wrap(new Gson().toJson(value).getBytes());
}
@Nullable
@Override
public User.Address decode(@Nullable ByteBuffer bytes, @NonNull ProtocolVersion protocolVersion) {
return new Gson().fromJson(new String(bytes.array()), User.Address.class);
}
@NonNull
@Override
public String format(@Nullable User.Address value) {
return new Gson().toJson(value);
}
@Nullable
@Override
public User.Address parse(@Nullable String value) {
return new Gson().fromJson(value, User.Address.class);
}
}
Here I mapped Address to the text type (it turns out not to matter which target cqlType is declared here). cassandra-driver-core uses NIO's ByteBuffer when assembling CQL, which is why the encode and decode methods take and return ByteBuffer and the Java type. For the TypeCodec to take effect, it must also be registered in the configuration:
@Configuration
@EnableConfigurationProperties(CassandraProperties.class)
@EnableCassandraRepositories
public class CassandraConfiguration {
@Autowired
private CassandraProperties properties;
@Bean
public CqlSession cqlSession() {
List<InetSocketAddress> contactPoints = properties.getContactPoints().stream().map(address ->
new InetSocketAddress(address, properties.getPort())
).collect(Collectors.toList());
return CqlSession.builder()
.withAuthCredentials(properties.getUsername(), properties.getPassword())
.withKeyspace(properties.getKeyspaceName())
.addContactPoints(contactPoints)
.withLocalDatacenter(properties.getLocalDatacenter())
.addTypeCodecs(new AddressTypeCodec())
.build();
}
}
After restarting and executing again, the following exception appeared:
2) com.datastax.driver.core.exceptions.InvalidQueryException: Not enough bytes to read 0th field addr
com.datastax.driver.core.exceptions.InvalidQueryException: Not enough bytes to read 0th field addr
at com.datastax.driver.core.Responses$Error.asException(Responses.java:181) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.DefaultResultSetFuture.onSet(DefaultResultSetFuture.java:215) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.RequestHandler.setFinalResult(RequestHandler.java:236) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.RequestHandler.access$2600(RequestHandler.java:62) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.setFinalResult(RequestHandler.java:1005) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.RequestHandler$SpeculativeExecution.onSet(RequestHandler.java:808) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1240) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.Connection$Dispatcher.channelRead0(Connection.java:1158) ~[cassandra-driver-core-3.7.2.jar:na]
at io.netty.channel.SimpleChannelInboundHandler.channelRead(SimpleChannelInboundHandler.java:99) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.handler.timeout.IdleStateHandler.channelRead(IdleStateHandler.java:287) ~[netty-handler-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102) ~[netty-codec-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:326) ~[netty-codec-4.1.43.Final.jar:4.1.43.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:300) ~[netty-codec-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.ChannelInboundHandlerAdapter.channelRead(ChannelInboundHandlerAdapter.java:93) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at com.datastax.driver.core.InboundTrafficMeter.channelRead(InboundTrafficMeter.java:38) ~[cassandra-driver-core-3.7.2.jar:na]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:352) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1422) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:374) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:360) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:931) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:700) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:635) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:552) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:514) ~[netty-transport-4.1.43.Final.jar:4.1.43.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$6.run(SingleThreadEventExecutor.java:1050) ~[netty-common-4.1.43.Final.jar:4.1.43.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.43.Final.jar:4.1.43.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.43.Final.jar:4.1.43.Final]
at java.lang.Thread.run(Thread.java:748) [na:1.8.0_231]
I spent about a month on this one, combing Stack Overflow, CSDN and other forums, plus the Spring Data Cassandra and Cassandra documentation, without success. I finally tracked it down by reading the spring-data-cassandra source and comparing the underlying implementations of CrudRepository's save method and @Query proxy methods. The root cause: the ByteBuffer produced by a TypeCodec's encode (and consumed by decode) must follow a fixed layout, whereas mine simply serialized the object to a JSON string, turned that into a byte[], and wrapped it in a ByteBuffer. The driver's codec machinery is not that flexible, so I rewrote my custom AddressCodec following the driver's own UdtCodec source:
public class UdtCodec implements TypeCodec<UdtValue> {
private final UserDefinedType cqlType;
public UdtCodec(@NonNull UserDefinedType cqlType) {
this.cqlType = cqlType;
}
@NonNull
@Override
public GenericType<UdtValue> getJavaType() {
return GenericType.UDT_VALUE;
}
@NonNull
@Override
public DataType getCqlType() {
return cqlType;
}
@Override
public boolean accepts(@NonNull Object value) {
return value instanceof UdtValue && ((UdtValue) value).getType().equals(cqlType);
}
@Override
public boolean accepts(@NonNull Class<?> javaClass) {
return UdtValue.class.equals(javaClass);
}
@Nullable
@Override
public ByteBuffer encode(@Nullable UdtValue value, @NonNull ProtocolVersion protocolVersion) {
if (value == null) {
return null;
}
if (!value.getType().equals(cqlType)) {
throw new IllegalArgumentException(
String.format(
"Invalid user defined type, expected %s but got %s", cqlType, value.getType()));
}
// Encoding: each field as a [bytes] value ([bytes] = int length + contents, null is
// represented by -1)
int toAllocate = 0;
int size = cqlType.getFieldTypes().size();
for (int i = 0; i < size; i++) {
ByteBuffer field = value.getBytesUnsafe(i);
toAllocate += 4 + (field == null ? 0 : field.remaining());
}
ByteBuffer result = ByteBuffer.allocate(toAllocate);
for (int i = 0; i < value.size(); i++) {
ByteBuffer field = value.getBytesUnsafe(i);
if (field == null) {
result.putInt(-1);
} else {
result.putInt(field.remaining());
result.put(field.duplicate());
}
}
return (ByteBuffer) result.flip();
}
@Nullable
@Override
public UdtValue decode(@Nullable ByteBuffer bytes, @NonNull ProtocolVersion protocolVersion) {
if (bytes == null) {
return null;
}
// empty byte buffers will result in empty values
try {
ByteBuffer input = bytes.duplicate();
UdtValue value = cqlType.newValue();
int i = 0;
while (input.hasRemaining()) {
if (i > cqlType.getFieldTypes().size()) {
throw new IllegalArgumentException(
String.format(
"Too many fields in encoded UDT value, expected %d",
cqlType.getFieldTypes().size()));
}
int elementSize = input.getInt();
ByteBuffer element;
if (elementSize < 0) {
element = null;
} else {
element = input.slice();
element.limit(elementSize);
input.position(input.position() + elementSize);
}
value = value.setBytesUnsafe(i, element);
i += 1;
}
return value;
} catch (BufferUnderflowException e) {
throw new IllegalArgumentException("Not enough bytes to deserialize a UDT value", e);
}
}
@NonNull
@Override
public String format(@Nullable UdtValue value) {
// unrelated code omitted
}
@Nullable
@Override
public UdtValue parse(@Nullable String value) {
// unrelated code omitted
}
}
@Nullable
@Override
public ByteBuffer encode(@Nullable User.Address value, @NonNull ProtocolVersion protocolVersion) {
if (value == null) {
return null;
}
int toAllocate = 0;
ByteBuffer[] values = new ByteBuffer[2];
ByteBuffer field0 = ByteBuffer.wrap(value.getAddr().getBytes());
toAllocate += 4 + field0.remaining();
values[0] = field0;
ByteBuffer field1 = ByteBuffer.wrap(value.getMail().getBytes());
toAllocate += 4 + field1.remaining();
values[1] = field1;
ByteBuffer result = ByteBuffer.allocate(toAllocate);
for (ByteBuffer field : values) {
if (field == null) {
result.putInt(-1);
} else {
result.putInt(field.remaining());
result.put(field.duplicate());
}
}
return (ByteBuffer) result.flip();
}
@Nullable
@Override
public User.Address decode(@Nullable ByteBuffer bytes, @NonNull ProtocolVersion protocolVersion) {
if (bytes == null) {
return null;
}
// empty byte buffers will result in empty values
try {
ByteBuffer input = bytes.duplicate();
User.Address value = new User.Address();
int i = 0;
while (input.hasRemaining()) {
int elementSize = input.getInt();
ByteBuffer element;
if (elementSize < 0) {
element = null;
} else {
element = input.slice();
element.limit(elementSize);
input.position(input.position() + elementSize);
}
if (element != null) {
// copy only the slice's remaining bytes; element.array() would return the whole backing array
byte[] buf = new byte[element.remaining()];
element.get(buf);
if (i == 0)
value.setAddr(new String(buf));
else if (i == 1)
value.setMail(new String(buf));
}
i += 1;
}
return value;
} catch (BufferUnderflowException e) {
throw new IllegalArgumentException("Not enough bytes to deserialize a UDT value", e);
}
}
At this point this is only a temporary change: it hard-codes the Address fields addr and mail into the ByteBuffer, just to test whether the approach works.
cqlsh:cass_stdy> SELECT * FROM user;
id | addr | age | birthday | d_addr | name | score
--------------------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+------+----------+--------+----------+-------
f7702276-694a-47e1-92cf-312db4101353 | [{addr: 'SZ GD CN', mail: '[email protected]'}, {addr: 'GZ GD CN', mail: '[email protected]'}, {addr: 'SZ GD CN', mail: '[email protected]'}, {addr: 'GZ GD CN', mail: '[email protected]'}] | null | null | null | name0000 | null
(1 rows)
cqlsh:cass_stdy>
Querying the table confirms success.
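The [bytes] framing the driver expects (int length + contents, -1 for null) can be exercised with plain java.nio, independent of Cassandra. This stand-alone sketch mirrors the encode/decode logic above; the class and its String-varargs signature are illustrative, not driver API:

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

// Stand-alone demo of the UDT wire format:
// each field is framed as [int length][contents]; a null field is length -1.
public class UdtFraming {
    public static ByteBuffer encode(String... fields) {
        int toAllocate = 0;
        for (String f : fields) {
            toAllocate += 4 + (f == null ? 0 : f.getBytes(StandardCharsets.UTF_8).length);
        }
        ByteBuffer result = ByteBuffer.allocate(toAllocate);
        for (String f : fields) {
            if (f == null) {
                result.putInt(-1);
            } else {
                byte[] b = f.getBytes(StandardCharsets.UTF_8);
                result.putInt(b.length);
                result.put(b);
            }
        }
        result.flip();
        return result;
    }

    public static List<String> decode(ByteBuffer input) {
        List<String> fields = new ArrayList<>();
        while (input.hasRemaining()) {
            int len = input.getInt();
            if (len < 0) {
                fields.add(null);
            } else {
                byte[] b = new byte[len];
                input.get(b);
                fields.add(new String(b, StandardCharsets.UTF_8));
            }
        }
        return fields;
    }
}
```

A round trip through encode and decode returns the original field values, nulls included.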
In fact, a closer look shows that encode and decode merely pull the field values out and pack them into the ByteBuffer in order, following the framing rule, so the whole thing could be done via reflection instead of hard-coded per-field access. But that is a topic for another day.
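As a sketch of that reflection idea (a hypothetical helper, not the project's actual code; it assumes getDeclaredFields returns fields in declaration order, which HotSpot provides in practice but the JLS does not guarantee):

```java
import java.lang.reflect.Field;
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Hypothetical reflective encoder: frames every declared field of a POJO as
// [int length][bytes], null fields as -1, in field declaration order.
public class ReflectiveFraming {
    public static ByteBuffer encode(Object pojo) {
        try {
            Field[] fields = pojo.getClass().getDeclaredFields();
            byte[][] values = new byte[fields.length][];
            int toAllocate = 0;
            for (int i = 0; i < fields.length; i++) {
                fields[i].setAccessible(true);
                Object v = fields[i].get(pojo);
                values[i] = (v == null) ? null : v.toString().getBytes(StandardCharsets.UTF_8);
                toAllocate += 4 + (values[i] == null ? 0 : values[i].length);
            }
            ByteBuffer result = ByteBuffer.allocate(toAllocate);
            for (byte[] v : values) {
                if (v == null) {
                    result.putInt(-1);
                } else {
                    result.putInt(v.length);
                    result.put(v);
                }
            }
            result.flip();
            return result;
        } catch (IllegalAccessException e) {
            throw new RuntimeException(e);
        }
    }

    // Sample POJO standing in for User.Address.
    public static class Addr {
        public String addr;
        public String mail;
        public Addr(String addr, String mail) { this.addr = addr; this.mail = mail; }
    }
}
```

A production version would also need to respect the UDT's field order and CQL types rather than relying on Java declaration order.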
Appendix: an alternative approach:
CREATE TABLE cass_stdy.user (
id uuid PRIMARY KEY,
addr list<frozen<address>>,
age int,
d_addr frozen<address>,
mx list<frozen<map<text, text>>>,
name text
)
That is, store the UDT content as list<frozen<map<text, text>>> and convert Address to and from a Map via a MappingCodec:
import com.datastax.driver.core.TypeCodec;
import com.datastax.driver.extras.codecs.MappingCodec;
import com.google.common.collect.Maps;
import org.apache.commons.beanutils.BeanUtils;
import java.util.Map;
import java.util.stream.Collectors;
public class AddressCodec extends MappingCodec<User.Address, Map<String, String>> {
public AddressCodec() {
super(TypeCodec.map(TypeCodec.varchar(), TypeCodec.varchar()), User.Address.class);
}
@Override
protected User.Address deserialize(Map<String, String> map) {
User.Address addr = new User.Address();
try {
BeanUtils.populate(addr, map);
} catch (Exception e) {
e.printStackTrace();
}
return addr;
}
@Override
protected Map<String, String> serialize(User.Address o) {
Map<String, String> map = Maps.newHashMap();
try {
map.putAll(BeanUtils.describe(o)
.entrySet()
.stream()
// Cassandra map values cannot be null
.filter(entry -> null != entry.getValue())
.collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue)));
} catch (Exception e) {
e.printStackTrace();
}
return map;
}
}
Note that a Cassandra map cannot contain null values; otherwise the driver complains "Map values cannot be null":
java.lang.NullPointerException: Map values cannot be null
at com.datastax.driver.core.TypeCodec$AbstractMapCodec.serialize(TypeCodec.java:2086) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.TypeCodec$AbstractMapCodec.serialize(TypeCodec.java:1952) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.extras.codecs.MappingCodec.serialize(MappingCodec.java:56) ~[cassandra-driver-extras-3.9.0.jar:na]
at com.datastax.driver.core.TypeCodec$AbstractCollectionCodec.serialize(TypeCodec.java:1782) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.TypeCodec$AbstractCollectionCodec.serialize(TypeCodec.java:1756) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.SimpleStatement.convert(SimpleStatement.java:333) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.SimpleStatement.getValues(SimpleStatement.java:146) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.SessionManager.makeRequestMessage(SessionManager.java:600) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.SessionManager.executeAsync(SessionManager.java:142) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:58) ~[cassandra-driver-core-3.7.2.jar:na]
at com.datastax.driver.core.AbstractSession.execute(AbstractSession.java:45) ~[cassandra-driver-core-3.7.2.jar:na]
So null-valued Map.Entry items must be filtered out first. Also, to modify a value stored this way, you can only overwrite the map element at a given index of the list as a whole:
UPDATE user SET mx[0] = {'addr':'SZ GD CN'} WHERE id = cf49db81-7821-40e1-8502-871660c14571;
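The null-filtering step in serialize can be shown with plain collections; note that Collectors.toMap itself also throws a NullPointerException on null values, so the filter must come first:

```java
import java.util.Map;
import java.util.stream.Collectors;

// Demo: drop null-valued entries before handing a map to the driver,
// since Cassandra map values cannot be null.
public class NullValueFilter {
    public static Map<String, String> withoutNulls(Map<String, String> in) {
        return in.entrySet().stream()
                .filter(e -> e.getValue() != null)
                .collect(Collectors.toMap(Map.Entry::getKey, Map.Entry::getValue));
    }
}
```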
That is all for now. I will keep exploring Spring Boot + Cassandra problems and solutions, and update this article as I go; stay tuned.