Keyspace replication settings: {'class': 'SimpleStrategy', 'replication_factor': 1}
[root@edog1 apache-cassandra-2.0.9]# bin/nodetool status
Note: Ownership information does not include topology; for complete information, specify a keyspace
Datacenter: datacenter1
=======================
Status=Up/Down
|/ State=Normal/Leaving/Joining/Moving
-- Address Load Tokens Owns Host ID Rack
UN 192.168.1.245 1.98 TB 256 35.2% 3f5395be-b346-404e-8a72-cc6fda5716f8 rack1
UN 192.168.1.205 959.96 GB 256 31.1% b294b651-3403-4f6f-a253-b7c9c0f2caf5 rack1
UN 192.168.1.254 1.53 TB 256 33.7% 9adf54bc-8727-4463-a205-0a4cbcfa47da rack1
---------------------------------------------------------------------------------------------------------------------------
The error is as follows:
Traceback (most recent call last):
File "python_del_cassandra.py", line 21, in <module>
d_time = K_devdata.execute(" select dtime from devicestatus where did = '%s' and dtime > %s and dtime < %s order by dtime asc ;" % (devsid,start,end))
File "/usr/lib64/python2.6/site-packages/cassandra/cluster.py", line 1594, in execute
result = future.result(timeout)
File "/usr/lib64/python2.6/site-packages/cassandra/cluster.py", line 3296, in result
raise self._final_exception
cassandra.Unavailable: code=1000 [Unavailable exception] message="Cannot achieve consistency level ONE" info={'required_replicas': 1, 'alive_replicas': 0, 'consistency': 'ONE'}
The error above was raised during the SELECT.
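The info dict in the exception makes the failure explicit: consistency level ONE needs 1 live replica, but 0 were alive. A minimal sketch of the coordinator's pre-flight availability check (the helper names here are illustrative, not the driver's API):

```python
def required_replicas(consistency, rf):
    # Replicas required for the common consistency levels; rf = replication factor.
    if consistency == "ONE":
        return 1
    if consistency == "TWO":
        return 2
    if consistency == "QUORUM":
        return rf // 2 + 1
    if consistency == "ALL":
        return rf
    raise ValueError("unknown consistency level: %s" % consistency)

def check_available(consistency, rf, alive_replicas):
    """Mimic the coordinator's check that produced the error above.

    Returns None when the request can proceed, or an info dict shaped
    like the one inside the Unavailable exception when it cannot.
    """
    required = required_replicas(consistency, rf)
    if alive_replicas >= required:
        return None
    return {"required_replicas": required,
            "alive_replicas": alive_replicas,
            "consistency": consistency}
```

With RF = 1 and the row's single replica node down, `check_available("ONE", 1, 0)` yields exactly the info dict seen in the traceback.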
------------------------------------------------------------------------------------------------------------------------
Cause:
With replication_factor = 1, each row is stored on exactly one node, so if any node is DN (down), queries touching that node's data fail with this error. Raising replication_factor to 2 or higher avoids it, because another live replica can still serve the read.
replication_factor specifies how many copies of each row the cluster stores.
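Under SimpleStrategy, the RF copies of a partition land on consecutive distinct nodes walking clockwise around the token ring. A toy illustration of that placement (simplified ring, not the real partitioner):

```python
def simple_strategy_replicas(ring, partition_token, rf):
    """Pick replica nodes for a partition under SimpleStrategy (toy model).

    ring: list of (token, node) pairs sorted by token.
    The first replica owns the first token >= partition_token (wrapping
    around the ring); the remaining rf - 1 copies go to the next distinct
    nodes clockwise.
    """
    start = next((i for i, (t, _) in enumerate(ring) if t >= partition_token), 0)
    replicas = []
    for step in range(len(ring)):
        node = ring[(start + step) % len(ring)][1]
        if node not in replicas:
            replicas.append(node)
        if len(replicas) == rf:
            break
    return replicas
```

With rf = 1 the list has a single entry per partition, which is why one down node makes part of the data unreachable at every consistency level.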
The original English explanation:
To directly answer the question, replication factor (RF) controls the number of replicas of each data partition that exist in a cluster or data center (DC). In your case, you have 3 nodes and an RF of 1. That means that when a row is written to your cluster, it is stored on only 1 node. This also means that your cluster cannot withstand the failure of a single node.
In contrast, consider a RF of 3 on a 3 node cluster. Such a cluster could withstand the failure of 1 or 2 nodes, and still be able to support queries for all of its data.
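To apply the fix described above, the keyspace's replication factor can be raised and the existing rows streamed to their new replicas with a repair. A sketch, assuming the keyspace name `mykeyspace` as a placeholder for whichever keyspace holds the `devicestatus` table:

```shell
# Raise RF to 2 (keyspace name "mykeyspace" is a placeholder).
cqlsh -e "ALTER KEYSPACE mykeyspace WITH replication = \
  {'class': 'SimpleStrategy', 'replication_factor': 2};"

# Then run on every node so existing rows are copied to the new replicas:
bin/nodetool repair mykeyspace
```

Until the repair finishes, reads at consistency ONE may still miss rows whose only pre-existing copy sat on a down node.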