The Redis server itself provides no sharding mechanism, but we can implement it on the client side, and Jedis already ships with all the API we need. Below we walk through the three sharding APIs (ShardedJedis, ShardedJedisPool, ShardedJedisPipeline), each in two configurations: plain Java and Spring. First, though, a look at how sharding works inside Jedis.
I. Sharding and Consistent Hashing
The core idea of sharding is to "spread" data across different physical servers in a cluster according to some strategy. In essence it provides distributed storage for large data sets, and it underpins the high availability of the cluster. For example, 100 million records can be scattered over 5 servers according to each record's hashcode.
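To make the idea concrete, here is a minimal, dependency-free sketch of the simplest possible strategy: hash the key and take the remainder modulo the server count. This is plain modulo sharding, not the consistent hash Jedis uses; the class and key names are invented for illustration.

```java
import java.util.Arrays;

public class ModuloShardingSketch {
    // Pick a server index for a key by hashing and taking the remainder.
    // Math.floorMod guards against negative hashCode values.
    static int shardFor(String key, int serverCount) {
        return Math.floorMod(key.hashCode(), serverCount);
    }

    public static void main(String[] args) {
        int[] counts = new int[5];
        for (int i = 0; i < 100_000; i++) {
            counts[shardFor("user:" + i, 5)]++;
        }
        // Each of the 5 servers receives a share of the keys.
        System.out.println(Arrays.toString(counts));
    }
}
```

The weakness of modulo sharding is that changing the server count remaps almost every key, which is exactly the problem consistent hashing addresses.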
Sharding in Jedis is based on the "consistent hash" algorithm. The approach is very clear, and the code is essentially a textbook consistent-hash implementation. Let's walk through it:
1) Computing the hashcode (source from redis.clients.util.Hashing): by default Jedis derives the hash value from an MD5 digest, which seems to have become the de-facto standard:
```java
// ThreadLocal to avoid re-creating the MessageDigest (minor performance optimization)
public ThreadLocal<MessageDigest> md5Holder = new ThreadLocal<MessageDigest>();

public static final Hashing MD5 = new Hashing() {
    public long hash(String key) {
        return hash(SafeEncoder.encode(key));
    }

    public long hash(byte[] key) {
        try {
            if (md5Holder.get() == null) {
                md5Holder.set(MessageDigest.getInstance("MD5"));
            }
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("++++ no md5 algorythm found");
        }
        MessageDigest md5 = md5Holder.get();
        md5.reset();
        md5.update(key);
        byte[] bKey = md5.digest(); // the MD5 byte sequence
        // The first four bytes form the final 32-bit value. This keeps key
        // hashes "random"/well dispersed; if hash values cluster too densely,
        // consistent hashing suffers (especially with "virtual nodes" in play).
        long res = ((long) (bKey[3] & 0xFF) << 24)
                 | ((long) (bKey[2] & 0xFF) << 16)
                 | ((long) (bKey[1] & 0xFF) << 8)
                 | (long) (bKey[0] & 0xFF);
        return res;
    }
};
```
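For experimentation, the same computation can be reproduced in a self-contained class; the class name is invented, but the byte-packing mirrors the Jedis source above.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

public class Md5HashSketch {
    // Same computation as Jedis' Hashing.MD5: take the first four MD5
    // bytes (little-endian) and pack them into an unsigned 32-bit value.
    static long hash(byte[] key) {
        try {
            MessageDigest md5 = MessageDigest.getInstance("MD5");
            byte[] b = md5.digest(key);
            return ((long) (b[3] & 0xFF) << 24)
                 | ((long) (b[2] & 0xFF) << 16)
                 | ((long) (b[1] & 0xFF) << 8)
                 | (long) (b[0] & 0xFF);
        } catch (NoSuchAlgorithmException e) {
            throw new IllegalStateException("no MD5 algorithm found", e);
        }
    }

    public static void main(String[] args) {
        for (String k : new String[] {"k1", "k2", "user:1001"}) {
            System.out.println(k + " -> " + hash(k.getBytes(StandardCharsets.UTF_8)));
        }
    }
}
```

Note the result always lies in [0, 2^32), the full length of the hash ring.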
2) Building the nodes (redis.clients.util.Sharded):
```java
// The shards list gives the client every redis-server's configuration:
// ip, port, weight, name.
// - weight directly controls the proportion (density) of "virtual nodes":
//   the higher the weight, the more often the server is hit by the hash,
//   and the more data it stores.
// - name is the "node name"; when set, Jedis uses it as an input to the
//   virtual-node hash computation.
//
// Consistent hashing requires every "virtual node" to have a hash value, and
// each physical server can own multiple virtual nodes (at the API level).
// The number of virtual nodes = "interval step" * weight, and each server's
// virtual nodes are scattered across the global range by hashing.
// The global range is 2^32, arranged as a ring:
//   0 --> vnode1(:1230) --> vnode2(:2800) --> vnode3(:400000) --- 2^32 --> 0
// All virtual nodes are ordered by their hash value, so between any two
// adjacent virtual nodes there is a hash-value gap; that gap is the data
// range served by one of the two (the predecessor or the successor,
// depending on the implementation). Jedis routes a key to the first virtual
// node whose hash is >= the key's hash, so data hashing to "2000" is
// accepted by vnode2.
private void initialize(List<S> shards) {
    nodes = new TreeMap<Long, S>(); // virtual nodes, kept sorted in a binary (red-black) tree
    for (int i = 0; i != shards.size(); ++i) {
        final S shardInfo = shards.get(i);
        if (shardInfo.getName() == null)
            // When no "name" is set, "SHARD-<i>-NODE-<n>" is used as the
            // input to the virtual node's hash. The "interval step" is 160
            // (why 160 is unclear); the servers' virtual nodes end up
            // interleaved, though not necessarily perfectly evenly.
            for (int n = 0; n < 160 * shardInfo.getWeight(); n++) {
                nodes.put(this.algo.hash("SHARD-" + i + "-NODE-" + n), shardInfo);
            }
        else
            for (int n = 0; n < 160 * shardInfo.getWeight(); n++) {
                nodes.put(this.algo.hash(shardInfo.getName() + "*" + shardInfo.getWeight() + n), shardInfo);
            }
        resources.put(shardInfo, shardInfo.createResource());
    }
}
```
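The effect of weight on virtual-node density can be sketched without Jedis at all. In this dependency-free sketch, String.hashCode stands in for the MD5-based hash and the server names are invented, but the 160 * weight rule is the one from initialize() above.

```java
import java.util.TreeMap;

public class VirtualNodeSketch {
    // Build a ring where each server owns 160 * weight virtual nodes,
    // mirroring Sharded.initialize(). String.hashCode (masked to an
    // unsigned 32-bit value) stands in for the MD5-based hash.
    static TreeMap<Long, String> buildRing(String[] names, int[] weights) {
        TreeMap<Long, String> ring = new TreeMap<>();
        for (int i = 0; i < names.length; i++) {
            for (int n = 0; n < 160 * weights[i]; n++) {
                long h = ("SHARD-" + i + "-NODE-" + n).hashCode() & 0xFFFFFFFFL;
                ring.put(h, names[i]);
            }
        }
        return ring;
    }

    public static void main(String[] args) {
        TreeMap<Long, String> ring =
                buildRing(new String[] {"serverA", "serverB"}, new int[] {1, 2});
        // serverB (weight 2) owns twice as many virtual nodes as serverA,
        // so it is hit by roughly twice as many keys.
        System.out.println("total virtual nodes: " + ring.size());
    }
}
```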
3) Selecting a node:
```java
public R getShard(String key) {
    return resources.get(getShardInfo(key));
}

// here:
public S getShardInfo(byte[] key) {
    // Get the virtual nodes whose hash is >= the key's hash.
    SortedMap<Long, S> tail = nodes.tailMap(algo.hash(key));
    // If there are none, wrap around to the first node on the ring.
    if (tail.size() == 0) {
        return nodes.get(nodes.firstKey());
    }
    // Otherwise return the first virtual node satisfying (>= key).
    return tail.get(tail.firstKey());
}
```
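The lookup, including the wrap-around case, can be exercised on a toy ring built from the three virtual-node hash values in the ring example above; the class and method names here are invented for illustration.

```java
import java.util.SortedMap;
import java.util.TreeMap;

public class RingLookupSketch {
    // Mirrors Sharded.getShardInfo(): find the first virtual node at or
    // after the key's hash; wrap to the ring's first node when past the end.
    static String locate(TreeMap<Long, String> ring, long keyHash) {
        SortedMap<Long, String> tail = ring.tailMap(keyHash);
        if (tail.isEmpty()) {
            return ring.get(ring.firstKey()); // wrap around the ring
        }
        return tail.get(tail.firstKey());
    }

    public static void main(String[] args) {
        TreeMap<Long, String> ring = new TreeMap<>();
        ring.put(1230L, "vnode1");
        ring.put(2800L, "vnode2");
        ring.put(400000L, "vnode3");
        System.out.println(locate(ring, 2000L));   // vnode2 (first node >= 2000)
        System.out.println(locate(ring, 500000L)); // vnode1 (wrap-around)
    }
}
```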
4) Notes:
Jedis' default sharding scheme suits cache-only data, which is exactly the environment consistent hashing was designed for; it is less suitable when the data must be stored persistently.
For persistent data, however, we can use "strong hash" sharding instead, which requires overriding the Hashing algorithm (see the InnerHashing class in code example 1 below). Under a strong hash, if the physical server behind a virtual node fails, its data simply becomes inaccessible (for both reads and writes): failed servers are not removed from the virtual-node list.
If you override the Hashing algorithm in Jedis, you need to take care of several things: 1) whether the virtual-node hashes are reasonably uniform; 2) whether the data's hash values are distributed uniformly; 3) whether the virtual nodes are scattered evenly across the global range. A poor design can easily leave data unevenly distributed across servers, defeating the purpose of sharding in the first place.
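One way to sanity-check points 1-3 before deploying a custom Hashing is to route a sample of representative keys through a small ring and inspect the per-server counts. This sketch uses String.hashCode as a stand-in candidate hash and invented server names; substitute your own hash and real key patterns.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.SortedMap;
import java.util.TreeMap;

public class BalanceCheckSketch {
    // Route sampleSize representative keys through a two-server ring and
    // count hits per server. Heavy skew in the counts means the candidate
    // hash would defeat the point of sharding.
    static Map<String, Integer> distribute(int sampleSize) {
        TreeMap<Long, String> ring = new TreeMap<>();
        String[] servers = {"serverA", "serverB"};
        for (int i = 0; i < servers.length; i++) {
            for (int n = 0; n < 160; n++) {
                ring.put(("SHARD-" + i + "-NODE-" + n).hashCode() & 0xFFFFFFFFL, servers[i]);
            }
        }
        Map<String, Integer> counts = new HashMap<>();
        for (int i = 0; i < sampleSize; i++) {
            long h = ("order:" + i).hashCode() & 0xFFFFFFFFL; // candidate hash
            SortedMap<Long, String> tail = ring.tailMap(h);
            String server = tail.isEmpty() ? ring.get(ring.firstKey())
                                           : tail.get(tail.firstKey());
            counts.merge(server, 1, Integer::sum);
        }
        return counts;
    }

    public static void main(String[] args) {
        System.out.println(distribute(10_000));
    }
}
```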
II. ShardedJedis
1) Code example
```java
public static void main(String[] args) {
    // ip, port, timeout, weight
    JedisShardInfo si1 = new JedisShardInfo("127.0.0.1", 6379, 15000, 1);
    JedisShardInfo si2 = new JedisShardInfo("127.0.0.1", 6479, 15000, 1);
    List<JedisShardInfo> shards = new ArrayList<JedisShardInfo>();
    shards.add(si1);
    shards.add(si2);
    // Pass the hash algorithm explicitly. The default is the cache-only
    // consistent hash; InnerHashing below does "strong hash" sharding instead.
    ShardedJedis shardedJedis = new ShardedJedis(shards, new InnerHashing());
    shardedJedis.set("k1", "v1");
    System.out.println(shardedJedis.get("k1"));
}

// Overriding the hash algorithm is not usually advisable: Jedis' default
// consistent hash is already good. This one implements "strong hash" sharding.
static class InnerHashing implements Hashing {
    // When converting keys to bytes, always use the same charset as
    // hash(String) below, or the same key will hash inconsistently.
    static Charset charset = Charset.forName("utf-8");

    @Override
    public long hash(String key) {
        return hash(key.getBytes(charset));
    }

    @Override
    public long hash(byte[] key) {
        // HashCodeBuilder comes from Apache commons-lang.
        int hashcode = new HashCodeBuilder().append(key).toHashCode();
        return hashcode & 0x7FFFFFFF;
    }
}
```
2) In a Spring environment
```xml
<bean id="shardedJedis" class="redis.clients.jedis.ShardedJedis">
    <constructor-arg>
        <list>
            <bean class="redis.clients.jedis.JedisShardInfo">
                <constructor-arg value="127.0.0.1"></constructor-arg>
                <constructor-arg value="6379"></constructor-arg>
                <property name="password" value="0123456"></property>
            </bean>
            <bean class="redis.clients.jedis.JedisShardInfo">
                <constructor-arg value="127.0.0.1"></constructor-arg>
                <constructor-arg value="6479"></constructor-arg>
                <property name="password" value="0123456"></property>
            </bean>
        </list>
    </constructor-arg>
</bean>
```
```java
// resources/beans.xml
ClassPathXmlApplicationContext context =
        new ClassPathXmlApplicationContext("classpath:beans.xml");
ShardedJedis shardedJedis = (ShardedJedis) context.getBean("shardedJedis");
try {
    shardedJedis.set("k1", "v2");
    System.out.println(shardedJedis.get("k1"));
} catch (Exception e) {
    e.printStackTrace();
}
```
III. ShardedJedisPool
A sharding example built on a connection pool.
1) Code example
```java
JedisPoolConfig config = new JedisPoolConfig();
config.setMaxActive(32);
config.setMaxIdle(6);
config.setMinIdle(0);
config.setMaxWait(15000);
JedisShardInfo si1 = new JedisShardInfo("127.0.0.1", 6379, 15000, 1);
JedisShardInfo si2 = new JedisShardInfo("127.0.0.1", 6479, 15000, 1);
List<JedisShardInfo> shards = new ArrayList<JedisShardInfo>();
shards.add(si1);
shards.add(si2);
ShardedJedisPool sjp = new ShardedJedisPool(config, shards, new InnerHashing());
ShardedJedis shardedJedis = sjp.getResource();
try {
    System.out.println(shardedJedis.get("k1"));
} catch (Exception e) {
    e.printStackTrace();
} finally {
    sjp.returnResource(shardedJedis);
}
```
2) In a Spring environment
```xml
<bean id="jedisPoolConfig" class="redis.clients.jedis.JedisPoolConfig">
    <property name="maxActive" value="32"></property>
    <property name="maxIdle" value="6"></property>
    <property name="maxWait" value="15000"></property>
    <property name="minEvictableIdleTimeMillis" value="300000"></property>
    <property name="numTestsPerEvictionRun" value="3"></property>
    <property name="timeBetweenEvictionRunsMillis" value="60000"></property>
    <property name="whenExhaustedAction" value="1"></property>
</bean>

<bean id="shardedJedisPool" class="redis.clients.jedis.ShardedJedisPool" destroy-method="destroy">
    <constructor-arg ref="jedisPoolConfig"></constructor-arg>
    <constructor-arg>
        <list>
            <bean class="redis.clients.jedis.JedisShardInfo">
                <constructor-arg value="127.0.0.1"></constructor-arg>
                <constructor-arg value="6379"></constructor-arg>
                <property name="password" value="0123456"></property>
            </bean>
            <bean class="redis.clients.jedis.JedisShardInfo">
                <constructor-arg value="127.0.0.1"></constructor-arg>
                <constructor-arg value="6479"></constructor-arg>
                <property name="password" value="0123456"></property>
            </bean>
        </list>
    </constructor-arg>
</bean>
```
For more connection-pool configuration parameters, see: jedis connection pool.
```java
// resources/beans.xml
ClassPathXmlApplicationContext context =
        new ClassPathXmlApplicationContext("classpath:beans.xml");
ShardedJedisPool shardedJedisPool = (ShardedJedisPool) context.getBean("shardedJedisPool");
ShardedJedis shardedJedis = shardedJedisPool.getResource();
try {
    shardedJedis.set("k1", "v2");
    System.out.println(shardedJedis.get("k1"));
} catch (Exception e) {
    e.printStackTrace();
} finally {
    shardedJedisPool.returnResource(shardedJedis);
}
```
IV. ShardedJedisPipeline (other configuration as in the sections above)
```java
ClassPathXmlApplicationContext context =
        new ClassPathXmlApplicationContext("classpath:beans.xml");
ShardedJedisPool shardedJedisPool = (ShardedJedisPool) context.getBean("shardedJedisPool");
ShardedJedis shardedJedis = shardedJedisPool.getResource();
try {
    ShardedJedisPipeline shardedJedisPipeline = new ShardedJedisPipeline();
    shardedJedisPipeline.setShardedJedis(shardedJedis);
    shardedJedisPipeline.set("k1", "v1");
    shardedJedisPipeline.set("k3", "v3");
    shardedJedisPipeline.get("k3");
    List<Object> results = shardedJedisPipeline.syncAndReturnAll();
    for (Object result : results) {
        System.out.println(result.toString());
    }
} catch (Exception e) {
    e.printStackTrace();
} finally {
    shardedJedisPool.returnResource(shardedJedis);
}
```