Cassandra read or write timeout



Problem:
  Cassandra timeout during write query at consistency LOCAL_ONE (1 replica were required but only 0 acknowledged the write)


  Cassandra timeout during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded)




Exception in thread "main" com.datastax.driver.core.exceptions.ReadTimeoutException: Cassandra timeout during read query at consistency LOCAL_ONE (1 responses were required but only 0 replica responded)






Cause: the nodes are under heavy query load; while data compaction is running, data is temporarily inconsistent, and reads and writes time out.


Solutions:


1. In conf/cassandra.yaml, change:
read_request_timeout_in_ms: 60000
range_request_timeout_in_ms: 60000
write_request_timeout_in_ms: 40000
cas_contention_timeout_in_ms: 3000
truncate_request_timeout_in_ms: 60000
request_timeout_in_ms: 60000






# How long the coordinator should wait for read operations to complete
#read_request_timeout_in_ms: 5000
read_request_timeout_in_ms: 10000
# How long the coordinator should wait for seq or index scans to complete
range_request_timeout_in_ms: 10000
# How long the coordinator should wait for writes to complete
#write_request_timeout_in_ms: 2000
write_request_timeout_in_ms: 5000
# How long the coordinator should wait for counter writes to complete
counter_write_request_timeout_in_ms: 5000
# How long a coordinator should continue to retry a CAS operation
# that contends with other proposals for the same row
cas_contention_timeout_in_ms: 1000
# How long the coordinator should wait for truncates to complete
# (This can be much longer, because unless auto_snapshot is disabled
# we need to flush first so we can snapshot before removing the data.)
truncate_request_timeout_in_ms: 60000
# The default timeout for other, miscellaneous operations
request_timeout_in_ms: 10000
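Raising the server-side timeouts alone may not be enough, because the Java driver has its own per-request socket read timeout (12 seconds by default in driver 3.x) and will give up first if it is lower than the server-side values. A minimal sketch of raising the client-side timeout to stay above the server-side settings; the contact point is a placeholder, and this assumes DataStax Java driver 3.x:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.SocketOptions;

public class ClientTimeoutConfig {
    public static void main(String[] args) {
        // The client-side read timeout should exceed the largest
        // *_request_timeout_in_ms on the server, otherwise the driver
        // times out before the coordinator does.
        SocketOptions socketOptions = new SocketOptions()
                .setReadTimeoutMillis(65000)    // above truncate_request_timeout_in_ms (60000)
                .setConnectTimeoutMillis(10000);

        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")   // placeholder contact point
                .withSocketOptions(socketOptions)
                .build();
        // ... create a Session, run queries, then cluster.close()
    }
}
```

This is client configuration only; it does not make slow queries faster, it just keeps the driver from aborting requests that the server would still have answered.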






2. Change the driver connection-pool configuration:


//        poolingOptions
//                .setCoreConnectionsPerHost(HostDistance.LOCAL,  Integer.valueOf(ConnectPoolCoreConnectionsPerHost) )
//                .setMaxConnectionsPerHost( HostDistance.LOCAL, 140)
//                .setCoreConnectionsPerHost(HostDistance.REMOTE, 18)
//                .setMaxConnectionsPerHost( HostDistance.REMOTE, 56)
//                .setPoolTimeoutMillis(0)
//                .setHeartbeatIntervalSeconds(60);


Changed back to the default poolingOptions.
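Reverting to the defaults just means not setting any of the pooling knobs above, i.e. either omitting withPoolingOptions entirely or passing a fresh PoolingOptions object. A sketch, assuming driver 3.x (with native protocol v3+, the defaults are 1 connection per LOCAL host with up to 1024 simultaneous requests per connection):

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PoolingOptions;

public class DefaultPooling {
    public static void main(String[] args) {
        // A fresh PoolingOptions carries the driver defaults; setting
        // nothing on it is equivalent to omitting withPoolingOptions.
        PoolingOptions poolingOptions = new PoolingOptions();

        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")   // placeholder contact point
                .withPoolingOptions(poolingOptions)
                .build();
    }
}
```

Oversized pools (like the 140-connections-per-host setting that was removed) can themselves overload nodes, which is why falling back to the defaults helped here.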




ref: https://github.com/datastax/java-driver/tree/3.1.0/manual


     http://docs.datastax.com/en/cassandra/latest/cassandra/dml/dmlConfigConsistency.html


     http://docs.datastax.com/en/cassandra/2.1/cassandra/dml/architectureClientRequestsRead_c.html


     https://stackoverflow.com/questions/18101839/cassandra-frequent-read-write-timeouts


3. When a request is slow, have the coordinator send it to other replicas (speculative retry):


ALTER TABLE users WITH speculative_retry = '99percentile';


Change it to:
ALTER TABLE users WITH speculative_retry = '10ms';


ALTER TABLE gpsfullwithstate WITH speculative_retry = '10ms';




ref: http://docs.datastax.com/en/cassandra/2.1/cassandra/dml/architectureClientRequestsRead_c.html
    https://www.datastax.com/dev/blog/rapid-read-protection-in-cassandra-2-0-2


After a read is slower than expected, the request can be sent to other replicas:
This gives Cassandra maximum throughput, but at the cost of some fragility: if the replica to which the request is routed fails before responding, the request will time out.
Rapid read protection allows the coordinator to monitor the outstanding requests and send redundant requests to other replicas when the original is slower than expected.
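The speculative_retry table option above is server-side (the coordinator retries against other replicas). The Java driver 3.x can also speculate on the client side via a SpeculativeExecutionPolicy; a sketch, with illustrative delay and count values:

```java
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.policies.ConstantSpeculativeExecutionPolicy;

public class SpeculativeClient {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder()
                .addContactPoint("127.0.0.1")   // placeholder contact point
                // If no response arrives within 500 ms, send the same request
                // to the next host, up to 2 extra speculative executions.
                .withSpeculativeExecutionPolicy(
                        new ConstantSpeculativeExecutionPolicy(500, 2))
                .build();
    }
}
```

Note that client-side speculative executions are only safe for idempotent queries, since the same statement may be executed more than once.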


4. Configure a retry strategy in the client code, so that after a request fails the query can be sent again.
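A minimal, self-contained sketch of such an application-level retry loop. The RetryingQuery class and the backoff values are illustrative, not part of the driver's API (the driver also accepts a RetryPolicy via Cluster.builder().withRetryPolicy(...)):

```java
import java.util.concurrent.Callable;

public class RetryingQuery {
    // Run the query, retrying up to maxAttempts times with a simple
    // linear backoff between attempts; rethrow the last failure.
    public static <T> T withRetry(Callable<T> query, int maxAttempts) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return query.call();
            } catch (Exception e) {   // in real code, catch ReadTimeoutException etc.
                last = e;
                Thread.sleep(100L * attempt);
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        // Simulated query: "times out" twice, then succeeds.
        final int[] calls = {0};
        String result = withRetry(() -> {
            if (++calls[0] < 3) throw new RuntimeException("timeout");
            return "ok";
        }, 5);
        System.out.println(result + " after " + calls[0] + " attempts");
        // prints "ok after 3 attempts"
    }
}
```

Retrying a read is always safe; retrying a write is only safe when the statement is idempotent, so check that before wrapping writes this way.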




After a day of work the problem is mostly solved: single queries and batch queries no longer time out. However, when a single-key query spans a large time range and returns a large result set, a timeout still occasionally occurs (roughly 1 in 10 requests); this needs further optimization.


