Lucene/Solr Dev 3: Solr Cache and Load Balance

Warming up:

      Analysing the Solr source classes packaged in org.apache.solr.search, there are two cache implementations available for Solr: LRUCache, based on a synchronized LinkedHashMap, and FastLRUCache, based on a ConcurrentHashMap. FastLRUCache has faster gets and slower puts in single-threaded operation, and is therefore generally faster than LRUCache when the cache hit ratio is high (> 75%). In this post I will delve into Solr's caching as a whole, starting from the FastLRUCache FIFO strategy.
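      As a minimal sketch of the difference (assuming the SolrCache API used by the tests below; the method name compareCacheImplementations and the "q1" key are made up for illustration), both caches are configured through the same init(Map, Object, CacheRegenerator) call and are interchangeable from the caller's point of view; only the backing data structure and its locking behaviour differ:

public void compareCacheImplementations() {
	Map args = new HashMap();
	args.put("size", "100");

	SolrCache lru = new LRUCache();          // synchronized LinkedHashMap underneath
	lru.init(args, null, null);
	lru.setState(SolrCache.State.LIVE);

	SolrCache fastLru = new FastLRUCache();  // ConcurrentHashMap-based ConcurrentLRUCache underneath
	fastLru.init(args, null, null);
	fastLru.setState(SolrCache.State.LIVE);

	// identical usage from the caller's side; the trade-off is get vs put speed under concurrency
	lru.put("q1", "cached result");
	fastLru.put("q1", "cached result");
}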

FastLRUCache FIFO strategy:

     FIFO means first in, first out, which is a very common strategy in software design. I will go straight to an experiment to illustrate the FIFO behaviour.

     To make the experiment result more persuasive, we first add one line of code to Solr's org.apache.solr.common.util.ConcurrentLRUCache<K,V>. We add the code below

 

	  //add by kylin
	  System.out.println(key +  " " + map.keySet());

 

to ConcurrentLRUCache's put(K key, V val) method. It is just a print statement that outputs the key being put together with the keys currently held in the cache, so we can watch the cache contents change as entries are added.

Then we run the following test:

public void testFastLRUCacheFIFO() {
	FastLRUCache cache = new FastLRUCache();
	Map map = new HashMap();
	map.put("size", "3");	// maximum cache size: the key factor in this experiment
	CacheRegenerator regenerator = new CacheRegenerator() {
		public boolean regenerateItem(SolrIndexSearcher newSearcher,
				SolrCache newCache, SolrCache oldCache, Object oldKey,
				Object oldVal) throws IOException {
			// during autowarming, copy the old entry straight into the new cache
			newCache.put(oldKey, oldVal);
			return true;
		}
	};

	Object obj = cache.init(map, null, regenerator);
	cache.setState(SolrCache.State.LIVE);

	// put 9 entries into a cache of size 3 and watch which keys survive
	for (int i = 1; i < 10; i++) {
		cache.put(i, "" + i);
	}
}

 map.put("size", "3"), this is very imporant, is the key factor of  FastLRUCache FIFO, implement the code, and the result is:

 

1 [1]
2 [2, 1]
3 [2, 1, 3]
4 [3, 4]
5 [5, 3, 4]
6 [6, 5]
7 [6, 5, 7]
8 [7, 8]
9 [7, 8, 9]

 From the output we can see that the most recently added records are kept and the oldest ones are evicted, but this is not a strict one-in-one-out FIFO, because the number of evicted entries depends on the size you configured: evicted count = size * 10%, with a minimum of 2. For the run above, 3 * 10% = 0, and 0 < 2, so 2 entries are evicted at a time.
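 For reference, my reading of the FastLRUCache defaults (an assumption worth checking against the source of your Solr version) is that when only "size" is configured, the internal ConcurrentLRUCache is built with an upper water mark equal to size, a lower water mark of about 90% of size and an acceptable water mark of about 95% of size; a sweep then shrinks the cache back towards the lower water mark once a put pushes it past the upper one:

	// hypothetical illustration of the default water marks, not the Solr source itself
	int size = 3;
	int upperWaterMark      = size;                 // 3: a put beyond this triggers a sweep
	int lowerWaterMark      = (int) (size * 0.9);   // 2: the sweep target
	int acceptableWaterMark = (int) (size * 0.95);  // 2: the sweep may stop once below this

If that reading is right, a size-3 cache is swept back down to 2 entries, which matches the [3, 4], [6, 5] and [7, 8] snapshots printed above.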

 

Changing the code above, let us now look at the warming mechanism.

public void test() throws IOException {
	FastLRUCache cache = new FastLRUCache();
	Map map = new HashMap();
	map.put("size", "100");          // maximum cache size
	map.put("initialSize", "10");    // initial capacity of the backing map
	map.put("autowarmCount", "30");  // number of entries copied into a new cache on warming

	CacheRegenerator regenerator = new CacheRegenerator() {
		public boolean regenerateItem(SolrIndexSearcher newSearcher,
				SolrCache newCache, SolrCache oldCache, Object oldKey,
				Object oldVal) throws IOException {
			// during autowarming, copy the old entry straight into the new cache
			newCache.put(oldKey, oldVal);
			return true;
		}
	};

	Object obj = cache.init(map, null, regenerator);
	cache.setState(SolrCache.State.LIVE);

	// put 101 entries into a cache of size 100 to force one eviction sweep
	for (int i = 1; i < 102; i++) {
		cache.put(i, "" + i);
	}

	System.out.println(cache.get(10));   // evicted by the sweep
	System.out.println(cache.get(11));   // still cached

	// simulate a new searcher: create a second cache and warm it from the old one
	FastLRUCache cacheNew = new FastLRUCache();
	cacheNew.init(map, obj, regenerator);
	cacheNew.warm(null, cache);
	cacheNew.setState(SolrCache.State.LIVE);
	cache.close();
	cacheNew.put(103, "103");
	System.out.println(cacheNew.get(72));  // not among the 30 autowarmed entries
	System.out.println(cacheNew.get(73));  // autowarmed from the old cache
}

 Here is the output:

null
11
null
73

  Analysing the result:

    The size = 100, so 100 * 10% = 10 entries are evicted. When the 101st record is added, the 10 oldest records are removed, which is why get(10) prints null while get(11) prints 11.

    The autowarmCount = 30, so when a new searcher comes up, the new cache is warmed with the 30 most recently accessed entries from the old cache: 102 - 30 = 72, so getting key 72 from the new cache prints null, while getting key 73 prints the corresponding value.
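    Conceptually, the warming step can be pictured as the new cache asking the old cache's internal ConcurrentLRUCache for its most recently accessed entries (up to autowarmCount of them) and replaying each of them through the CacheRegenerator. The helper below only illustrates that flow under this assumption; it is a sketch, not the FastLRUCache.warm implementation:

// a sketch of the autowarming flow, reusing the regenerator from the test above
private void warmSketch(ConcurrentLRUCache oldInternalCache, SolrCache newCache,
		SolrCache oldCache, CacheRegenerator regenerator, int autowarmCount)
		throws IOException {
	Map latest = oldInternalCache.getLatestAccessedItems(autowarmCount);
	for (Object o : latest.entrySet()) {
		Map.Entry e = (Map.Entry) o;
		// null searcher, matching how the test above calls warm(null, cache)
		regenerator.regenerateItem(null, newCache, oldCache, e.getKey(), e.getValue());
	}
}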
