Hadoop configuration - DFSClient Packet dfs.write.packet.size

The DFSOutputStream methods that HBase calls most often are write and sync.

write appends data to the current Packet; sync forces a Packet to be generated and sent out, whether or not it is full.
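
A minimal sketch of this call pattern (illustrative, not HBase source), assuming the Hadoop 1.x-era FileSystem API in which FSDataOutputStream.sync() delegates to DFSOutputStream.sync() (renamed hflush() in later versions); the path and edit size below are made up:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WalWritePattern {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());
        FSDataOutputStream out = fs.create(new Path("/tmp/wal-demo")); // hypothetical path
        byte[] edit = new byte[110];   // a small key/value edit, ~110 bytes
        out.write(edit);               // buffered into the current Packet
        out.sync();                    // forces the current Packet out, full or not
        out.close();
    }
}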

Each DFSClient Packet allocates a new big byte array of 65557 bytes (a bit more than 64K: data + checksum + header). Because sync forces a Packet out, small key/values often leave the buffer holding only a few hundred, a few thousand, or tens of thousands of bytes; much of the time there is nowhere near 64K of data, so buffer utilization is low.
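
Where the 65557 comes from: a worked reconstruction following the computePacketChunkSize() logic in the DFSClient of that era (assumed values: io.bytes.per.checksum = 512, a 4-byte CRC32 checksum per chunk, and a 25-byte packet header including the data-length field):

public class PacketSizeMath {
    public static void main(String[] args) {
        int writePacketSize = 64 * 1024;                 // dfs.write.packet.size = 65536
        int bytesPerChecksum = 512;                      // io.bytes.per.checksum
        int checksumSize = 4;                            // CRC32
        int chunkSize = bytesPerChecksum + checksumSize; // 516 bytes per chunk
        int headerLen = 21 + 4;                          // packet header + data-length field
        int chunksPerPacket =
            Math.max((writePacketSize - headerLen + chunkSize - 1) / chunkSize, 1); // 127
        int packetSize = headerLen + chunkSize * chunksPerPacket;
        System.out.println(packetSize);                  // 25 + 516 * 127 = 65557
    }
}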

(1) One idea: use a Packet pool

(2) Reduce dfs.write.packet.size (default 64k)

 

First approach: stop newing a Packet each time; take one from a pool (allocating a new one only if the pool is empty) and return it after use, as in the sketch below. Test results were not great: GC frequency and total GC time both dropped substantially, but average RT (response time) got longer. Interestingly, many requests got faster while quite a few got slower, a severe polarization; on balance average RT rose, so TPS fell.
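
A minimal sketch of the pooling idea, assuming a simple unbounded ConcurrentLinkedQueue as the pool (illustrative only, not the patch that was actually tested):

import java.util.concurrent.ConcurrentLinkedQueue;

public class PacketBufferPool {
    private static final int PACKET_BUF_SIZE = 65557; // data + checksum + header
    private final ConcurrentLinkedQueue<byte[]> pool = new ConcurrentLinkedQueue<byte[]>();

    // Reuse a pooled buffer if one is available; otherwise allocate a new one.
    public byte[] take() {
        byte[] buf = pool.poll();
        return (buf != null) ? buf : new byte[PACKET_BUF_SIZE];
    }

    // Return the buffer after use; a long-lived pool gets promoted to the old generation.
    public void release(byte[] buf) {
        pool.offer(buf);
    }
}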

Before the change, GC ran frequently and each individual GC was short, but total GC time was long.

Reportedly, pooling is no longer optimal on modern JVMs: object allocation is already very fast, and a long-lived pool eventually gets promoted to the old generation; if the old generation then holds many references into the young generation, young GC becomes less efficient.

 

Second approach:

Tune dfs.write.packet.size down, e.g. to 32k or 16k.
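
For example, the client-side value can be lowered on the Configuration before opening the FileSystem (a sketch; the same key can also be set in hdfs-site.xml):

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;

public class LowerPacketSize {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setInt("dfs.write.packet.size", 32 * 1024); // default is 64 * 1024
        FileSystem fs = FileSystem.get(conf);            // streams created from fs use 32k packets
        fs.close();
    }
}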

The results were roughly as follows.

The test wrote key/value = 10/100 bytes, with the following RegionServer JVM options:

export HBASE_REGIONSERVER_OPTS="-Xms8g -Xmx8g -Xmn2g -XX:SurvivorRatio=16 -XX:+UseCMSInitiatingOccupancyOnly -XX:CMSInitiatingOccupancyFraction=75 -Xloggc:$HBASE_HOME/logs/gc-regionserver-`date +%Y%m%d-%H-%M`.log"

 

dfs.write.packet.size = 64k
 Writer Finished,Consume time is: 383 seconds
==================Summary==================
 Tables And Columns: hbasetest(10){caches:value1(100)}
----------Writer Statistics--------------
 Write Threads: 100
 Write Rows: 10000000
 Consume Time: 383s
 Requests: 10000000 Success: 100% (10000000) Error: 0% (0)
 Avg TPS: 26041 Max TPS: 44460 Min TPS: 3183
 Avg RT: 3ms
 RT <= 0: 0% 3483/10000000
 RT (0,1]: 12% 1273633/10000000
 RT (1,5]: 81% 8123190/10000000
 RT (5,10]: 1% 147321/10000000
 RT (10,50]: 3% 371064/10000000
 RT (50,100]: 0% 74896/10000000
 RT (100,500]: 0% 6400/10000000
 RT (500,1000]: 0% 13/10000000
 RT > 1000: 0% 0/10000000

  jstat -gcutil 19968 2s
  S0     S1     E      O      P     YGC    YGCT    FGC    FGCT     GCT
 58.44   0.00  54.47  15.66  99.85    154   4.191      0   0.000    4.191

dfs.write.packet.size = 32k

Writer Finished,Consume time is: 367 seconds
==================Summary==================
 Tables And Columns: hbasetest(10){caches:value1(100)}
----------Writer Statistics--------------
 Write Threads: 100
 Write Rows: 10000000
 Consume Time: 367s
 Requests: 10000000 Success: 100% (10000000) Error: 0% (0)
 Avg TPS: 27173 Max TPS: 45276 Min TPS: 968
 Avg RT: 3ms
 RT <= 0: 0% 7274/10000000
 RT (0,1]: 19% 1948293/10000000
 RT (1,5]: 73% 7350970/10000000
 RT (5,10]: 2% 259443/10000000
 RT (10,50]: 3% 371545/10000000
 RT (50,100]: 0% 56944/10000000
 RT (100,500]: 0% 5360/10000000
 RT (500,1000]: 0% 85/10000000
 RT > 1000: 0% 86/10000000
 
  S0     S1     E      O      P     YGC    YGCT    FGC    FGCT     GCT
  0.00  92.02  92.32  14.89  99.74     67   2.668      0   0.000    2.668

dfs.write.packet.size = 16k
 
Writer Finished,Consume time is: 364 seconds
==================Summary==================
 Tables And Columns: hbasetest(10){caches:value1(100)}
----------Writer Statistics--------------
 Write Threads: 100
 Write Rows: 10000000
 Consume Time: 364s
 Requests: 10000000 Success: 100% (10000000) Error: 0% (0)
 Avg TPS: 27397 Max TPS: 45309 Min TPS: 890
 Avg RT: 3ms
 RT <= 0: 0% 9291/10000000
 RT (0,1]: 21% 2118605/10000000
 RT (1,5]: 71% 7192119/10000000
 RT (5,10]: 2% 265516/10000000
 RT (10,50]: 3% 346697/10000000
 RT (50,100]: 0% 61084/10000000
 RT (100,500]: 0% 6590/10000000
 RT (500,1000]: 0% 15/10000000
 RT > 1000: 0% 83/10000000

  S0     S1     E      O      P     YGC    YGCT    FGC    FGCT     GCT
 53.45   0.00  77.52  15.31  99.24     50   2.295      0   0.000    2.295

 

Summary (avg time per young GC = YGCT / YGC):

dfs.write.packet.size   YGC (young GC count)   YGCT (total young GC time, s)   Avg time per GC (ms)
64k                     154                    4.191                           27.2
32k                      67                    2.668                           39.8
16k                      50                    2.295                           45.9

So smaller packets generate less garbage per forced sync: young GC runs far less often and total GC time drops, at the cost of somewhat longer individual pauses, while overall write time also improves slightly (383s, 367s, 364s).

 
