快学Big Data -- Redis (Part 11)

Redis Summary


Official site: http://redis.io/download

 

Overview

Redis is a high-performance database that stores data as key-value pairs. A value in Redis can hold many different types, and Redis can store a great deal of data, which has made it one of the most widely used non-relational (NoSQL) databases on the market.

Features of Redis

  1. Fast access, since the data is kept in memory
  2. Persistence mechanisms that periodically dump the data to disk
  3. Every update operation is logged, so after a failure the data can be recovered from the log
  4. Distributed storage is supported, greatly increasing capacity
  5. A rich set of storage structures. Exceptionally fast: roughly 110,000 SETs and 81,000 GETs per second.

Redis Data Types

 

 

   Redis represents every key and value with a redisObject. The data types are String, Hash, List, Set and Sorted Set, and the encodings include raw, int, ht, zipmap, linkedlist, ziplist and intset. The vm field of redisObject is only really allocated when Redis's virtual-memory feature is enabled, and that feature is off by default.

Keep the following points in mind when designing keys:

  1. Keys should not be too long, preferably no more than 1024 bytes; overly long keys waste memory and slow down lookups
  2. Keys should not be too short either, or their readability suffers
  3. Use a consistent naming convention for keys in a project, e.g. userId:name:sex

 

1-1)、String

A)、Common commands

set: set the string value of a key

get: get the value of a key

incr: increment the integer value of a key by one

decr: decrement the integer value of a key by one

mget: get the values of all the given keys

B)、Example

redis 127.0.0.1:6379> set baidu www.baidu.com

OK

redis 127.0.0.1:6379> get baidu

"www.baidu.com"

redis 127.0.0.1:6379> append baidu .link

(integer) 18

redis 127.0.0.1:6379> get baidu

"www.baidu.com.link"

redis 127.0.0.1:6379> set version 0

OK

redis 127.0.0.1:6379> incr version

(integer) 1

redis 127.0.0.1:6379> incr version

(integer) 2

redis 127.0.0.1:6379> get version

"2"

redis 127.0.0.1:6379> incrby versions 100

(integer) 100

redis 127.0.0.1:6379> get versions

"100"

redis 127.0.0.1:6379> type baidu

string

redis 127.0.0.1:6379> type version

string

redis 127.0.0.1:6379> rename baidu re-baidu

OK

redis 127.0.0.1:6379> get baidu

(nil)

redis 127.0.0.1:6379> get re-baidu

"www.baidu.com.link"

redis 127.0.0.1:6379> mget baidu

1) (nil)

redis 127.0.0.1:6379>

 

C)、Use cases

String is the most commonly used data type; any ordinary key/value storage falls into this category.

D)、Implementation

A String is stored inside Redis as a character string referenced by a redisObject. When commands such as INCR or DECR are applied, it is converted to a number for the computation, and the redisObject's encoding field becomes int.
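
As a minimal sketch of these String commands from Java, assuming the Jedis client used later in this article and a local server on the default port (the key names are just for illustration):

import redis.clients.jedis.Jedis;

public class StringExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        jedis.set("baidu", "www.baidu.com");        // SET
        jedis.append("baidu", ".link");             // APPEND
        System.out.println(jedis.get("baidu"));     // GET -> "www.baidu.com.link"
        jedis.set("version", "0");
        jedis.incr("version");                      // INCR -> 1
        jedis.incrBy("version", 100);               // INCRBY -> 101
        System.out.println(jedis.get("version"));   // "101"
        jedis.close();                              // close() requires Jedis 2.x
    }
}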

1-2)、Hash

A)、Common commands

hset: HSET key field value  Set the value of the given field in the hash

hget: HGET key field  Get the value of the given field

hmset: HMSET key field value [field value ...]  Set one or more fields of the hash at once

hmget: HMGET key field [field ...]  Get the values of one or more fields

hgetall: HGETALL key  Get all fields and values of the hash

hvals: HVALS key  Get all values in the hash

hlen: HLEN key  Get the number of fields in the hash

hexists: HEXISTS key field  Check whether the field exists in the hash

hdel: HDEL key field [field ...]  Delete one or more fields; nonexistent fields are ignored

B)、Example

redis 127.0.0.1:6379> HSET person name jack

(integer) 1

redis 127.0.0.1:6379> HSET person age 10

(integer) 1

redis 127.0.0.1:6379> HSET person sex female

(integer) 1

redis 127.0.0.1:6379> HGETALL person

1) "name"

2) "jack"

3) "age"

4) "10"

5) "sex"

6) "female"

redis 127.0.0.1:6379> HKEYS person

1) "name"

2) "age"

3) "sex"

redis 127.0.0.1:6379> HVALS person

1) "jack"

2) "10"

3) "female"

redis 127.0.0.1:6379> HDEL person name

(integer) 1

redis 127.0.0.1:6379> HGETALL person

1) "age"

2) "10"

3) "sex"

4) "female"

redis 127.0.0.1:6379> HMGET person name

1) (nil)

redis 127.0.0.1:6379> HLEN person

(integer) 2

 

C)、Use cases

A Hash is typically used to store object data, for example a user's name, sex, birthday and so on, which would otherwise be kept as plain key/value pairs. The alternatives compare as follows.

1-1)、Storing the object as one serialized value

 

The first approach uses the ID as the key and serializes the rest of the object into the value. Its drawbacks are the added cost of serialization/deserialization, and that modifying a single attribute requires fetching the whole object back; concurrent updates must also be protected, which drags in CAS and other complications.

 

1-2)、One key-value pair per attribute

 

The second approach stores as many key-value pairs as the user object has members, using user ID + attribute name as the unique key for each attribute. This avoids the serialization cost and the concurrency problem, but the user ID is stored repeatedly; with a lot of such data, the wasted memory is considerable.

1-3)、How a Redis Hash stores it

 

 

   The key is still the ID, and the value is a map whose keys are the object's attributes and whose values are the attribute values (Redis calls the inner map's keys fields). Data can therefore be read or modified directly via key (id) + field (attribute name), with no duplicated storage and none of the performance cost of serialization and deserialization; this solves both problems nicely.

    Note, however, that Redis also provides HGETALL, which returns every field at once; with a lot of data this traverses the entire map, and since Redis is single-threaded, a slow traversal delays the operations of other clients, so take care when querying this way.
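
If the hash is large, one way to avoid a blocking HGETALL is to walk it incrementally with HSCAN, which returns a small batch per call. A sketch with Jedis (assuming Jedis 2.8+ for the hscan API; the key name and batch size are illustrative):

import java.util.Map;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.ScanParams;
import redis.clients.jedis.ScanResult;

public class HashScanExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        String cursor = ScanParams.SCAN_POINTER_START;   // "0"
        do {
            // Fetch roughly 100 field/value pairs per round trip
            ScanResult<Map.Entry<String, String>> page =
                    jedis.hscan("user:1000", cursor, new ScanParams().count(100));
            for (Map.Entry<String, String> e : page.getResult()) {
                System.out.println(e.getKey() + " = " + e.getValue());
            }
            cursor = page.getStringCursor();
        } while (!ScanParams.SCAN_POINTER_START.equals(cursor));  // cursor "0" again means done
        jedis.close();
    }
}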

D)、Implementation

The value of a Redis Hash is internally a HashMap, with two representations depending on size: when there are few entries, Redis stores them compactly in something like a one-dimensional array instead of a real HashMap in order to save memory, and the redisObject's encoding is zipmap (ziplist in newer versions); once the data grows, it is converted automatically to a real HashMap and the encoding becomes ht.
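
A sketch of the id + field access pattern described above, using Jedis (the key layout user:<id> and the field names are made up for illustration):

import java.util.HashMap;
import java.util.Map;
import redis.clients.jedis.Jedis;

public class UserHashExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        // One hash per user: key = "user:" + id, fields = attributes
        Map<String, String> profile = new HashMap<String, String>();
        profile.put("name", "jack");
        profile.put("sex", "female");
        profile.put("birthday", "1990-01-01");
        jedis.hmset("user:1000", profile);                    // HMSET
        // Read or update a single attribute without touching the rest
        System.out.println(jedis.hget("user:1000", "name"));  // HGET
        jedis.hset("user:1000", "name", "rose");              // HSET
        jedis.close();
    }
}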

1-3)、List

A)、Common commands

lset: LSET key index value  Set the element at the given index

lrange: LRANGE key start stop  Get a range of elements from the list

rpush: RPUSH key value1 [value2]  Append one or more values to the list

rpushx: RPUSHX key value  Append a value only if the list already exists

lindex: LINDEX key index  Get the element at the given index

linsert: LINSERT key BEFORE|AFTER pivot value  Insert an element before or after another element in the list

llen: LLEN key  Get the length of the list

lpop: LPOP key  Remove and return the first element of the list

lrem: LREM key count value  Remove elements from the list

ltrim: LTRIM key start stop  Trim the list to the given range

B)、Example

redis 127.0.0.1:6379> LPUSH list redis

(integer) 1

redis 127.0.0.1:6379> LPUSH list redis1

(integer) 2

redis 127.0.0.1:6379> LPUSH list hello

(integer) 3

redis 127.0.0.1:6379> LPUSH list word

(integer) 4

redis 127.0.0.1:6379> LLEN list

(integer) 4

redis 127.0.0.1:6379> LRANGE list 0 3

1) "word"

2) "hello"

3) "redis1"

4) "redis"

redis 127.0.0.1:6379> LRANGE list 0 5

1) "word"

2) "hello"

3) "redis1"

4) "redis"

redis 127.0.0.1:6379> LPOP list

"word"

redis 127.0.0.1:6379> RPOP list

"redis"

redis 127.0.0.1:6379> LTRIM list 0 3

OK

redis 127.0.0.1:6379> LINDEX list 1

"redis1"

redis 127.0.0.1:6379>

 

C)、Use cases

Redis lists see heavy use. A list works as a message queue that preserves insertion order, so there is no need for a MySQL-style ORDER BY; LRANGE makes pagination easy to implement; and follow lists, fan lists and the like can all be built on the list structure.

D)、Implementation

A Redis list is implemented as a doubly linked list, so it supports reverse lookup and traversal, which makes it convenient to operate on at the cost of some extra memory. Many Redis internals, including the send buffer queue, use this same data structure.
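
A sketch of the queue and pagination uses mentioned above (Jedis; the key name is illustrative):

import java.util.List;
import redis.clients.jedis.Jedis;

public class ListExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        // Producer pushes onto the left ...
        jedis.lpush("news:latest", "item1", "item2", "item3");
        // ... consumer pops from the right, so items come out in FIFO order
        System.out.println(jedis.rpop("news:latest"));        // "item1"
        // Pagination: page n of size 10 is LRANGE key n*10 (n+1)*10-1
        List<String> page0 = jedis.lrange("news:latest", 0, 9);
        System.out.println(page0);
        jedis.close();
    }
}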

1-4)、Set

A)、Common commands

sunion: SUNION key [key ...]  Return the union of the given sets

srem: SREM key member [member ...]  Remove one or more members from the set; nonexistent members are ignored

spop: SPOP key [count]  Remove and return a random member of the set

smove: SMOVE source destination member  Move a member from one set to another

sinter: SINTER key [key ...]  Return the intersection of the given sets

sdiff: SDIFF key [key ...]  Return the members of the first set that are absent from the following sets

scard: SCARD key  Get the number of members in the set

sadd: SADD key member [member ...]  Add one or more members to the set

sscan: SSCAN key cursor [MATCH pattern] [COUNT count]  Incrementally iterate over the set's members

smembers: SMEMBERS key  Get all members of the set

sismember: SISMEMBER key member  Check whether the given value is a member of the set

sdiffstore: SDIFFSTORE destination key [key ...]  Like SDIFF, but store the result in the destination key

B)、Example

redis 127.0.0.1:6379> SADD myset "hello"

(integer) 1

redis 127.0.0.1:6379> SADD myset "word"

(integer) 1

redis 127.0.0.1:6379> SMEMBERS myset

1) "word"

2) "hello"

redis 127.0.0.1:6379> SADD myset "one"

(integer) 1

redis 127.0.0.1:6379> SISMEMBER myset "one"

(integer) 1

redis 127.0.0.1:6379> SISMEMBER myset "two"

(integer) 0

redis 127.0.0.1:6379> sadd friends:leto ghanima paul chani jessica

(integer) 4

redis 127.0.0.1:6379> sadd friends:duncan paul jessica alia

(integer) 3

redis 127.0.0.1:6379> sismember friends:leto jessica

(integer) 1

redis 127.0.0.1:6379> sismember friends:leto vladimir

(integer) 0

redis 127.0.0.1:6379> sinter friends:leto friends:duncan

1) "jessica"

2) "paul"

redis 127.0.0.1:6379> sinterstore friends:leto_duncan friends:leto friends:duncan

(integer) 2

redis 127.0.0.1:6379>

 

C)、Use cases

A Redis set offers list-like functionality externally; what makes it special is that members are automatically deduplicated. When you need to store a collection without duplicates, a set is a good choice, and it provides the important ability to test whether a member belongs to the set, which a list cannot do. Basic operations include add, remove, intersection, union and so on. Sets can also implement article tags, group-chat member lists, and the like.

D)、Implementation

A set is implemented internally as a HashMap whose values are always null; deduplication is done simply by hashing, which is also why a set can answer membership queries so quickly.
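
A sketch of the deduplication and membership operations with Jedis, mirroring the friends example above:

import java.util.Set;
import redis.clients.jedis.Jedis;

public class SetExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        jedis.sadd("friends:leto", "ghanima", "paul", "chani", "jessica");
        jedis.sadd("friends:duncan", "paul", "jessica", "alia");
        // Duplicates are ignored automatically
        jedis.sadd("friends:leto", "paul");                           // returns 0
        System.out.println(jedis.sismember("friends:leto", "paul"));  // true
        // Mutual friends = intersection of the two sets
        Set<String> mutual = jedis.sinter("friends:leto", "friends:duncan");
        System.out.println(mutual);                                   // [paul, jessica]
        jedis.close();
    }
}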

1-5)、Sorted Set

A)、Common commands

zadd: ZADD key score1 member1 [score2 member2 ...]  Add one or more members to the sorted set, or update the score of a member that already exists

zcard: ZCARD key  Get the number of members in the sorted set

zincrby: ZINCRBY key increment member  Increment the score of a member

zrange: ZRANGE key start stop [WITHSCORES]  Return a range of members by index, ordered from low to high score

zrangebylex: ZRANGEBYLEX key min max [LIMIT offset count]  Return a range of members by lexicographical order

zrangebyscore: ZRANGEBYSCORE key min max [WITHSCORES] [LIMIT offset count]  Return all members whose score lies between min and max inclusive, ordered by increasing score

zrank: ZRANK key member  Get the index (rank) of a member in the sorted set

zrem: ZREM key member [member ...]  Remove one or more members; nonexistent members are ignored

zscore: ZSCORE key member  Get the score associated with the given member

zscan: ZSCAN key cursor [MATCH pattern] [COUNT count]  Incrementally iterate over the members and their scores

B)、Example

redis 127.0.0.1:6379> ZADD dbs 100 redis

(integer) 1

redis 127.0.0.1:6379> ZADD dbs 98 memcache

(integer) 1

redis 127.0.0.1:6379> ZADD dbs 99 mongodb

(integer) 1

redis 127.0.0.1:6379> ZADD dbs 99 java

(integer) 1

redis 127.0.0.1:6379> ZCARD dbs

(integer) 4

redis 127.0.0.1:6379> ZCOUNT dbs 10 99

(integer) 3

redis 127.0.0.1:6379> ZRANK dbs java

(integer) 1

redis 127.0.0.1:6379> ZRANK dbs other

(nil)

redis 127.0.0.1:6379> ZRANGEBYSCORE dbs 98 100

1) "memcache"

2) "java"

3) "mongodb"

4) "redis"

 

C)、Use cases

Redis sorted sets are used much like sets; the difference is that a set is not automatically ordered, while a sorted set orders its members automatically by a user-supplied priority (the score). When you need a collection that is both ordered and free of duplicates, a sorted set is the right structure; for example, Twitter's public timeline can be stored with the publish time as the score, so reads come back already sorted by time.

D)、Implementation

Internally a sorted set uses a HashMap plus a skip list (SkipList) to keep the data both stored and ordered: the HashMap maps members to their scores, while the skip list holds all the members, sorted by the scores stored in the HashMap. This structure gives fairly high search efficiency and is relatively simple to implement.
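
A sketch of the timeline idea: store each post with its publish timestamp as the score, and read the newest entries back already sorted (Jedis; the key and member names are illustrative):

import java.util.Set;
import redis.clients.jedis.Jedis;

public class TimelineExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        // score = publish time in milliseconds
        jedis.zadd("timeline:public", 1500000000000L, "post:1");
        jedis.zadd("timeline:public", 1500000060000L, "post:2");
        jedis.zadd("timeline:public", 1500000120000L, "post:3");
        // Newest 10 posts, highest score (most recent) first
        Set<String> latest = jedis.zrevrange("timeline:public", 0, 9);
        System.out.println(latest);   // [post:3, post:2, post:1]
        jedis.close();
    }
}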

 

Skip lists in depth: http://blog.csdn.net/acceptedxukai/article/details/17333673

Installing Redis

1-1)、Installation

[root@hadoop1 redis]# tar -zxvf redis-3.0.7.tar.gz

[root@hadoop1 redis-3.0.7]# ls

00-RELEASENOTES  BUGS  CONTRIBUTING  COPYING  deps  INSTALL  Makefile  MANIFESTO  README  redis.conf  runtest  runtest-cluster  runtest-sentinel  sentinel.conf  src  tests  utils

[root@hadoop1 redis-3.0.7]# cd src/

 

Compile:

[root@hadoop1 src]# make

********

   LINK redis-check-dump

    CC redis-check-aof.o

    LINK redis-check-aof

 

Hint: It's a good idea to run 'make test' ;)

Output like the above means the build succeeded.

 

Install:

[root@hadoop1 src]# make  install

Hint: It's a good idea to run 'make test' ;)

 

    INSTALL install

    INSTALL install

    INSTALL install

    INSTALL install

INSTALL install

 

Once you see the Hint: It's a good idea to run 'make test' ;) line above, the installation is complete.

1-2)、Inspect the configuration file

[root@hadoop1 redis-2.6.16]# cat redis.conf

 

# Redis configuration file example

# Note on units: when memory size is needed, it is possible to specify

# it in the usual form of 1k 5GB 4M and so forth:

#

# 1k => 1000 bytes

# 1kb => 1024 bytes

# 1m => 1000000 bytes

# 1mb => 1024*1024 bytes

# 1g => 1000000000 bytes

# 1gb => 1024*1024*1024 bytes

#

# units are case insensitive so 1GB 1Gb 1gB are all the same.

 

# By default Redis does not run as a daemon. Use 'yes' if you need it.

# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.

# By default Redis does not run as a daemon; set this option to yes to enable it.

# When daemonized, Redis writes its pid to /var/run/redis.pid, which can be changed below.

# The main difference between daemon and non-daemon mode is whether Redis runs in the background.

daemonize no

 

# When running daemonized, Redis writes a pid file in /var/run/redis.pid by

# default. You can specify a custom pid file location here.

# When running as a daemon, Redis writes the pid file to /var/run/redis.pid by default.

# It can be placed elsewhere; when running multiple Redis instances, give each one its own pid file and port.

pidfile /var/run/redis.pid

 

# Accept connections on the specified port, default is 6379.

# If port 0 is specified Redis will not listen on a TCP socket.

# Client port

port 6379

 

# If you want you can bind a single interface, if the bind option is not

# specified all the interfaces will listen for incoming connections.

# The IP address Redis accepts requests on; if unset, requests on all interfaces are handled. Setting it is recommended in production.

# bind 127.0.0.1

 

# Specify the path for the unix socket that will be used to listen for

# incoming connections. There is no default, so Redis will not listen

# on a unix socket when not specified.

#

# unixsocket /tmp/redis.sock

# unixsocketperm 755

 

# Close the connection after a client is idle for N seconds (0 to disable)

# Idle timeout for client connections, in seconds; idle connections are closed after the timeout. 0 means never close.

timeout 0

 

# TCP keepalive.

#

# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence

# of communication. This is useful for two reasons:

#

# 1) Detect dead peers.

# 2) Take the connection alive from the point of view of network

#    equipment in the middle.

#

# On Linux, the specified value (in seconds) is the period used to send ACKs.

# Note that to close the connection the double of the time is needed.

# On other kernels the period depends on the kernel configuration.

#

# A reasonable value for this option is 60 seconds.

#

tcp-keepalive 0

 

# Specify the server verbosity level.

# This can be one of:

# debug (a lot of information, useful for development/testing)

# verbose (many rarely useful info, but not a mess like the debug level)

# notice (moderately verbose, what you want in production probably)

# warning (only very important / critical messages are logged)

# Log level; four possible values: debug, verbose, notice, warning:

# debug: very detailed output, suitable for development and testing

# verbose: many rarely useful messages, but cleaner than the debug level

# notice: moderately verbose, the right choice for production

# warning: only warnings and critical messages are logged

loglevel notice

 

# Specify the log file name. Also 'stdout' can be used to force

# Redis to log on the standard output. Note that if you use standard

# output for logging but daemonize, logs will be sent to /dev/null

# Logging destination; standard output by default. If Redis is configured to run as

# a daemon while logging to standard output, the logs end up in /dev/null.

logfile stdout

 

# To enable logging to the system logger, just set 'syslog-enabled' to yes,

# and optionally update the other syslog parameters to suit your needs.

# Setting 'syslog-enabled' to yes sends logs to the system logger; the default is no.

# syslog-enabled no

 

# Specify the syslog identity.

# The syslog identity; this option has no effect when 'syslog-enabled' is no.

# syslog-ident redis

 

# Specify the syslog facility. Must be USER or between LOCAL0-LOCAL7.

# The syslog facility; must be USER or LOCAL0-LOCAL7.

# syslog-facility local0

 

# Set the number of databases. The default database is DB 0, you can select

# a different one on a per-connection basis using SELECT where

# dbid is a number between 0 and 'databases'-1

# Number of databases; switch between them with the SELECT command. They are indexed 0 through databases-1 (0-15 by default).

databases 16

 

################################ SNAPSHOTTING  #################################

#

# Save the DB on disk:

#

#   save

#

#   Will save the DB if both the given number of seconds and the given

#   number of write operations against the DB occurred.

#

#   In the example below the behaviour will be to save:

#   after 900 sec (15 min) if at least 1 key changed

#   after 300 sec (5 min) if at least 10 keys changed

#   after 60 sec if at least 10000 keys changed

#

#   Note: you can disable saving at all commenting all the "save" lines.

#

#   It is also possible to remove all the previously configured save

#   points by adding a save directive with a single empty string argument

#   like in the following example:

#

#   save ""

#Snapshot (RDB) save policy: how often Redis saves the DB to disk.

#Save after 900 seconds if at least 1 key changed,

#after 300 seconds if at least 10 keys changed,

#after 60 seconds if at least 10000 keys changed.

 

save 900  1

save 300  10

save 60  10000

 

# By default Redis will stop accepting writes if RDB snapshots are enabled

# (at least one save point) and the latest background save failed.

# This will make the user aware (in an hard way) that data is not persisting

# on disk properly, otherwise chances are that no one will notice and some

# distater will happen.

#

# If the background saving process will start working again Redis will

# automatically allow writes again.

#

# However if you have setup your proper monitoring of the Redis server

# and persistence, you may want to disable this feature so that Redis will

# continue to work as usually even if there are problems with disk,

# permissions, and so forth.

# Stop accepting writes when the latest background save (bgsave) has failed

stop-writes-on-bgsave-error yes

 

# Compress string objects using LZF when dump .rdb databases?

# For default that's set to 'yes' as it's almost always a win.

# If you want to save some CPU in the saving child set it to 'no' but

# the dataset will likely be bigger if you have compressible values or keys.

# Whether to compress RDB snapshots. Redis uses LZF; disabling compression saves CPU time but makes the dump file much larger.

rdbcompression yes

 

# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.

# This makes the format more resistant to corruption but there is a performance

# hit to pay (around 10%) when saving and loading RDB files, so you can disable it

# for maximum performances.

#

# RDB files created with checksum disabled have a checksum of zero that will

# tell the loading code to skip the check.

#

rdbchecksum yes

 

# The filename where to dump the DB

# Filename of the RDB snapshot

dbfilename dump.rdb

 

# The working directory.

#

# The DB will be written inside this directory, with the filename specified

# above using the 'dbfilename' configuration directive.

#

# The Append Only File will also be created inside this directory.

#

# Note that you must specify a directory here, not a file name.

# Directory where the local database files are stored; note this must be a directory, not a file name.

dir ./

 

################################# REPLICATION #################################

 

# Master-Slave replication. Use slaveof to make a Redis instance a copy of

# another Redis server. Note that the configuration is local to the slave

# so for example it is possible to configure the slave to save the DB with a

# different interval, or to listen to another port, and so on.

#  Master-Slave replication: use slaveof to make this instance a copy (hot standby) of another Redis server. Note the setting is local to the slave, so a slave can be configured to save the DB at a different interval, listen on another port, and so on.

# slaveof

 

# If the master is password protected (using the "requirepass" configuration

# directive below) it is possible to tell the slave to authenticate before

# starting the replication synchronization process, otherwise the master will

# refuse the slave request.

# If the master is password protected, the slave must authenticate before the

# replication synchronization starts, otherwise the master rejects its request.

# masterauth

 

# When a slave loses its connection with the master, or when the replication

# is still in progress, the slave can act in two different ways:

#

# 1) if slave-serve-stale-data is set to 'yes' (the default) the slave will

#    still reply to client requests, possibly with out of date data, or the

#    data set may just be empty if this is the first synchronization.

#

# 2) if slave-serve-stale-data is set to 'no' the slave will reply with

#    an error "SYNC with master in progress" to all the kind of commands

#    but to INFO and SLAVEOF.

#

# When a slave loses its connection to the master, or replication is still in

# progress (the slave is not yet consistent with the master),

# the slave can respond to client requests in two ways:

# 1) if slave-serve-stale-data is set to 'yes' (the default), the slave keeps

#    answering client requests, possibly with stale data

# 2) if slave-serve-stale-data is set to 'no', the slave replies "SYNC with master

#    in progress" to every command except INFO and SLAVEOF.

slave-serve-stale-data yes

 

# You can configure a slave instance to accept writes or not. Writing against

# a slave instance may be useful to store some ephemeral data (because data

# written on a slave will be easily deleted after resync with the master) but

# may also cause problems if clients are writing to it because of a

# misconfiguration.

#

# Since Redis 2.6 by default slaves are read-only.

#

# Note: read only slaves are not designed to be exposed to untrusted clients

# on the internet. It's just a protection layer against misuse of the instance.

# Still a read only slave exports by default all the administrative commands

# such as CONFIG, DEBUG, and so forth. To a limited extend you can improve

# security of read only slaves using 'rename-command' to shadow all the

# administrative / dangerous commands.

slave-read-only yes

 

# Slaves send PINGs to server in a predefined interval. It's possible to change

# this interval with the repl_ping_slave_period option. The default value is 10

# seconds.

#

# repl-ping-slave-period 10

 

# The following option sets a timeout for both Bulk transfer I/O timeout and

# master data or ping response timeout. The default value is 60 seconds.

#

# It is important to make sure that this value is greater than the value

# specified for repl-ping-slave-period otherwise a timeout will be detected

# every time there is low traffic between the master and the slave.

#

# repl-timeout 60

 

# Disable TCP_NODELAY on the slave socket after SYNC?

#

# If you select "yes" Redis will use a smaller number of TCP packets and

# less bandwidth to send data to slaves. But this can add a delay for

# the data to appear on the slave side, up to 40 milliseconds with

# Linux kernels using a default configuration.

#

# If you select "no" the delay for data to appear on the slave side will

# be reduced but more bandwidth will be used for replication.

#

# By default we optimize for low latency, but in very high traffic conditions

# or when the master and slaves are many hops away, turning this to "yes" may

# be a good idea.

repl-disable-tcp-nodelay no

 

# The slave priority is an integer number published by Redis in the INFO output.

# It is used by Redis Sentinel in order to select a slave to promote into a

# master if the master is no longer working correctly.

#

# A slave with a low priority number is considered better for promotion, so

# for instance if there are three slaves with priority 10, 100, 25 Sentinel will

# pick the one wtih priority 10, that is the lowest.

#

# However a special priority of 0 marks the slave as not able to perform the

# role of master, so a slave with priority of 0 will never be selected by

# Redis Sentinel for promotion.

#

# By default the priority is 100.

slave-priority 100

 

################################## SECURITY ###################################

 

# Require clients to issue AUTH before processing any other

# commands.  This might be useful in environments in which you do not trust

# others with access to the host running redis-server.

#

# This should stay commented out for backward compatibility and because most

# people do not need auth (e.g. they run their own servers).

#

# Warning: since Redis is pretty fast an outside user can try up to

# 150k passwords per second against a good box. This means that you should

# use a very strong password otherwise it will be very easy to break.

#

# requirepass foobared

 

# Command renaming.

#

# It is possible to change the name of dangerous commands in a shared

# environment. For instance the CONFIG command may be renamed into something

# hard to guess so that it will still be available for internal-use tools

# but not available for general clients.

#

# Example:

#

# rename-command CONFIG b840fc02d524045429941cc15f59e41cb7be6c52

#

# It is also possible to completely kill a command by renaming it into

# an empty string:

#

# rename-command CONFIG ""

#

# Please note that changing the name of commands that are logged into the

# AOF file or transmitted to slaves may cause problems.

 

################################### LIMITS ####################################

 

# Set the max number of connected clients at the same time. By default

# this limit is set to 10000 clients, however if the Redis server is not

# able to configure the process file limit to allow for the specified limit

# the max number of allowed clients is set to the current file limit

# minus 32 (as Redis reserves a few file descriptors for internal uses).

#

# Once the limit is reached Redis will close all the new connections sending

# an error 'max number of clients reached'.

# Maximum number of simultaneous client connections (0 meant unlimited in older versions).

# Once the limit is reached, Redis stops accepting new connections and connecting clients receive an error.

# maxclients 10000

 

# Don't use more memory than the specified amount of bytes.

# When the memory limit is reached Redis will try to remove keys

# accordingly to the eviction policy selected (see maxmemmory-policy).

#

# If Redis can't remove keys according to the policy, or if the policy is

# set to 'noeviction', Redis will start to reply with errors to commands

# that would use more memory, like SET, LPUSH, and so on, and will continue

# to reply to read-only commands like GET.

#

# This option is usually useful when using Redis as an LRU cache, or to set

# an hard memory limit for an instance (using the 'noeviction' policy).

#

# WARNING: If you have slaves attached to an instance with maxmemory on,

# the size of the output buffers needed to feed the slaves are subtracted

# from the used memory count, so that network problems / resyncs will

# not trigger a loop where keys are evicted, and in turn the output

# buffer of slaves is full with DELs of keys evicted triggering the deletion

# of more keys, and so forth until the database is completely emptied.

#

# In short... if you have slaves attached it is suggested that you set a lower

# limit for maxmemory so that there is some free RAM on the system for slave

# output buffers (but this is not needed if the policy is 'noeviction').

# Maximum amount of memory Redis may use.

# When the limit is reached, Redis first tries to evict keys that have expired

# or are about to expire (keys with an expire set),

# deleting the keys closest to their expiry time first.

# If no such keys are left and another SET arrives, an error is returned;

# Redis then stops accepting write requests and serves only reads such as GET.

# maxmemory is mainly useful when treating Redis as a memcached-like cache.

#  maxmemory

 

# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory

# is reached. You can select among five behaviors:

#

# volatile-lru -> remove the key with an expire set using an LRU algorithm

# allkeys-lru -> remove any key accordingly to the LRU algorithm

# volatile-random -> remove a random key with an expire set

# allkeys-random -> remove a random key, any key

# volatile-ttl -> remove the key with the nearest expire time (minor TTL)

# noeviction -> don't expire at all, just return an error on write operations

#

# Note: with any of the above policies, Redis will return an error on write

#       operations, when there are not suitable keys for eviction.

#

#       At the date of writing this commands are: set setnx setex append

#       incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd

#       sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby

#       zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby

#       getset mset msetnx exec sort

#

# The default is:

#

# maxmemory-policy volatile-lru

 

# LRU and minimal TTL algorithms are not precise algorithms but approximated

# algorithms (in order to save memory), so you can select as well the sample

# size to check. For instance for default Redis will check three keys and

# pick the one that was used less recently, you can change the sample size

# using the following configuration directive.

#

# maxmemory-samples 3

 

############################## APPEND ONLY MODE ###############################

 

# By default Redis asynchronously dumps the dataset on disk. This mode is

# good enough in many applications, but an issue with the Redis process or

# a power outage may result into a few minutes of writes lost (depending on

# the configured save points).

#

# The Append Only File is an alternative persistence mode that provides

# much better durability. For instance using the default data fsync policy

# (see later in the config file) Redis can lose just one second of writes in a

# dramatic event like a server power outage, or a single write if something

# wrong with the Redis process itself happens, but the operating system is

# still running correctly.

#

# AOF and RDB persistence can be enabled at the same time without problems.

# If the AOF is enabled on startup Redis will load the AOF, that is the file

# with the better durability guarantees.

#

# Please check http://redis.io/topics/persistence for more information.

 

# By default Redis asynchronously dumps a snapshot of the dataset to disk after updates,

# but snapshots are expensive and should not run too often;

# they follow the save points configured above,

# so events like a power cut or a pulled plug can lose a fairly wide window of writes.

# Redis therefore provides a more efficient backup and disaster-recovery mode:

# with append-only mode enabled, Redis appends every write request to appendonly.aof,

# and on restart it replays that file to restore the previous state.

# Since appendonly.aof may grow very large, Redis supports the BGREWRITEAOF command to compact it.

appendonly no

 

# The name of the append only file (default: "appendonly.aof")

# appendfilename appendonly.aof

 

# The fsync() call tells the Operating System to actually write data on disk

# instead to wait for more data in the output buffer. Some OS will really flush

# data on disk, some other OS will just try to do it ASAP.

#

# Redis supports three different modes:

#

# no: don't fsync, just let the OS flush the data when it wants. Faster.

# always: fsync after every write to the append only log . Slow, Safest.

# everysec: fsync only one time every second. Compromise.

#

# The default is "everysec", as that's usually the right compromise between

# speed and data safety. It's up to you to understand if you can relax this to

# "no" that will let the operating system flush the output buffer when

# it wants, for better performances (but if you can live with the idea of

# some data loss consider the default persistence mode that's snapshotting),

# or on the contrary, use "always" that's very slow but a bit safer than

# everysec.

#

# More details please check the following article:

# http://antirez.com/post/redis-persistence-demystified.html

#

# If unsure, use "everysec".

# appendfsync always

 

# Calling fsync() tells the operating system to write the data to disk immediately.

# Redis supports 3 modes:

# no: don't fsync, just let the OS flush when it wants. Best performance.

# always: fsync after every write to the append-only log. Slow, but safest.

# everysec: fsync once every second. A compromise.

# The default is "everysec".

 

appendfsync everysec

# appendfsync no

 

# When the AOF fsync policy is set to always or everysec, and a background

# saving process (a background save or AOF log background rewriting) is

# performing a lot of I/O against the disk, in some Linux configurations

# Redis may block too long on the fsync() call. Note that there is no fix for

# this currently, as even performing fsync in a different thread will block

# our synchronous write(2) call.

#

# In order to mitigate this problem it's possible to use the following option

# that will prevent fsync() from being called in the main process while a

# BGSAVE or BGREWRITEAOF is in progress.

#

# This means that while another child is saving, the durability of Redis is

# the same as "appendfsync none". In practical terms, this means that it is

# possible to lose up to 30 seconds of log in the worst scenario (with the

# default Linux settings).

#

# If you have latency problems turn this to "yes". Otherwise leave it as

# "no" that is the safest pick from the point of view of durability.

# When the AOF fsync policy is set to always or everysec and a background saving

# process is doing heavy disk I/O, Redis may block too long on the fsync() call.

no-appendfsync-on-rewrite no

 

# Automatic rewrite of the append only file.

# Redis is able to automatically rewrite the log file implicitly calling

# BGREWRITEAOF when the AOF log size grows by the specified percentage.

#

# This is how it works: Redis remembers the size of the AOF file after the

# latest rewrite (if no rewrite has happened since the restart, the size of

# the AOF at startup is used).

#

# This base size is compared to the current size. If the current size is

# bigger than the specified percentage, the rewrite is triggered. Also

# you need to specify a minimal size for the AOF file to be rewritten, this

# is useful to avoid rewriting the AOF file even if the percentage increase

# is reached but it is still pretty small.

#

# Specify a percentage of zero in order to disable the automatic AOF

# rewrite feature.

 

#  Automatic rewrite of the append-only file.

# When the AOF log grows by the given percentage, Redis can rewrite it

# automatically by calling BGREWRITEAOF.

# How it works: Redis remembers the size of the AOF file after the last rewrite and

# compares it with the current size; if the current size exceeds it by the given

# percentage, a rewrite is triggered. A minimum AOF size must also be given, which

# avoids rewriting the file while it is still small even if the percentage is reached.

# Setting auto-aof-rewrite-percentage to 0 disables automatic AOF rewriting.

 

auto-aof-rewrite-percentage 100

auto-aof-rewrite-min-size 64mb

 

################################ LUA SCRIPTING  ###############################

 

# Max execution time of a Lua script in milliseconds.

#

# If the maximum execution time is reached Redis will log that a script is

# still in execution after the maximum allowed time and will start to

# reply to queries with an error.

#

# When a long running script exceed the maximum execution time only the

# SCRIPT KILL and SHUTDOWN NOSAVE commands are available. The first can be

# used to stop a script that did not yet called write commands. The second

# is the only way to shut down the server in the case a write commands was

# already issue by the script but the user don't want to wait for the natural

# termination of the script.

#

# Set it to 0 or a negative value for unlimited execution without warnings.

lua-time-limit 5000

 

################################## SLOW LOG ###################################

 

# The Redis Slow Log is a system to log queries that exceeded a specified

# execution time. The execution time does not include the I/O operations

# like talking with the client, sending the reply and so forth,

# but just the time needed to actually execute the command (this is the only

# stage of command execution where the thread is blocked and can not serve

# other requests in the meantime).

#

# You can configure the slow log with two parameters: one tells Redis

# what is the execution time, in microseconds, to exceed in order for the

# command to get logged, and the other parameter is the length of the

# slow log. When a new command is logged the oldest one is removed from the

# queue of logged commands.

 

# The following time is expressed in microseconds, so 1000000 is equivalent

# to one second. Note that a negative number disables the slow log, while

# a value of zero forces the logging of every command.

slowlog-log-slower-than 10000

 

# There is no limit to this length. Just be aware that it will consume memory.

# You can reclaim memory used by the slow log with SLOWLOG RESET.

slowlog-max-len 128

 

############################### ADVANCED CONFIG ###############################

 

# Hashes are encoded using a memory efficient data structure when they have a

# small number of entries, and the biggest entry does not exceed a given

# threshold. These thresholds can be configured using the following directives.

hash-max-ziplist-entries 512

hash-max-ziplist-value 64

 

# Similarly to hashes, small lists are also encoded in a special way in order

# to save a lot of space. The special representation is only used when

# you are under the following limits:

# Maximum number of entries for which a list keeps the compact, pointer-free encoding,

# and the maximum element size in bytes for that compact encoding.

list-max-ziplist-entries 512

list-max-ziplist-value 64

 

# Sets have a special encoding in just one case: when a set is composed

# of just strings that happens to be integers in radix 10 in the range

# of 64 bit signed integers.

# The following configuration setting sets the limit in the size of the

# set in order to use this special memory saving encoding.

# A set whose members are all integers is stored in a compact format when it has at most this many entries.

set-max-intset-entries 512

 

# Similarly to hashes and lists, sorted sets are also specially encoded in

# order to save a lot of space. This encoding is only used when the length and

# elements of a sorted set are below the following limits:

zset-max-ziplist-entries 128

zset-max-ziplist-value 64

 

# Active rehashing uses 1 millisecond every 100 milliseconds of CPU time in

# order to help rehashing the main Redis hash table (the one mapping top-level

# keys to values). The hash table implementation Redis uses (see dict.c)

# performs a lazy rehashing: the more operation you run into an hash table

# that is rehashing, the more rehashing "steps" are performed, so if the

# server is idle the rehashing is never complete and some more memory is used

# by the hash table.

#

# The default is to use this millisecond 10 times every second in order to

# active rehashing the main dictionaries, freeing memory when possible.

#

# If unsure:

# use "activerehashing no" if you have hard latency requirements and it is

# not a good thing in your environment that Redis can reply form time to time

# to queries with 2 milliseconds delay.

#

# use "activerehashing yes" if you don't have such hard requirements but

# want to free memory asap when possible.

activerehashing yes

 

# The client output buffer limits can be used to force disconnection of clients

# that are not reading data from the server fast enough for some reason (a

# common reason is that a Pub/Sub client can't consume messages as fast as the

# publisher can produce them).

#

# The limit can be set differently for the three different classes of clients:

#

# normal -> normal clients

# slave  -> slave clients and MONITOR clients

# pubsub -> clients subcribed to at least one pubsub channel or pattern

#

# The syntax of every client-output-buffer-limit directive is the following:

#

# client-output-buffer-limit

#

# A client is immediately disconnected once the hard limit is reached, or if

# the soft limit is reached and remains reached for the specified number of

# seconds (continuously).

# So for instance if the hard limit is 32 megabytes and the soft limit is

# 16 megabytes / 10 seconds, the client will get disconnected immediately

# if the size of the output buffers reach 32 megabytes, but will also get

# disconnected if the client reaches 16 megabytes and continuously overcomes

# the limit for 10 seconds.

#

# By default normal clients are not limited because they don't receive data

# without asking (in a push way), but just after a request, so only

# asynchronous clients may create a scenario where data is requested faster

# than it can read.

#

# Instead there is a default limit for pubsub and slave clients, since

# subscribers and slaves receive data in a push fashion.

#

# Both the hard or the soft limit can be disabled by setting them to zero.

client-output-buffer-limit normal 0 0 0

client-output-buffer-limit slave 256mb 64mb 60

client-output-buffer-limit pubsub 32mb 8mb 60

 

# Redis calls an internal function to perform many background tasks, like

# closing connections of clients in timeot, purging expired keys that are

# never requested, and so forth.

#

# Not all tasks are perforemd with the same frequency, but Redis checks for

# tasks to perform accordingly to the specified "hz" value.

#

# By default "hz" is set to 10. Raising the value will use more CPU when

# Redis is idle, but at the same time will make Redis more responsive when

# there are many keys expiring at the same time, and timeouts may be

# handled with more precision.

#

# The range is between 1 and 500, however a value over 100 is usually not

# a good idea. Most users should use the default of 10 and raise this up to

# 100 only in environments where very low latency is required.

hz 10

 

# When a child rewrites the AOF file, if the following option is enabled

# the file will be fsync-ed every 32 MB of data generated. This is useful

# in order to commit the file to the disk more incrementally and avoid

# big latency spikes.

aof-rewrite-incremental-fsync yes

 

################################## INCLUDES ###################################

 

# Include one or more other config files here.  This is useful if you

# have a standard template that goes to all Redis server but also need

# to customize a few per-server settings.  Include files can include

# other files, so use this wisely.

#

# include /path/to/local.conf

# include /path/to/other.conf

 

This long configuration file is reproduced in full because almost every setting Redis needs can be found in it; the annotated directives above deserve particular attention.

 


1-3)、Startup

Start the server:

 

[root@hadoop1 src]# ./redis-server

4065:C 22 Aug 13:13:03.290 # Warning: no config file specified, using the default config. In order to specify a config file use ./redis-server /path/to/redis.conf

4065:M 22 Aug 13:13:03.290 * Increased maximum number of open files to 10032 (it was originally set to 1024).

                _._                                                  

           _.-``__ ''-._                                             

      _.-``    `.  `_.  ''-._           Redis 3.0.7 (00000000/0) 64 bit

  .-`` .-```.  ```\/    _.,_ ''-._                                   

 (    '      ,       .-`  | `,    )     Running in standalone mode

 |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379

 |    `-._   `._    /     _.-'    |     PID: 4065

  `-._    `-._  `-./  _.-'    _.-'                                   

 |`-._`-._    `-.__.-'    _.-'_.-'|                                  

 |    `-._`-._        _.-'_.-'    |           http://redis.io        

  `-._    `-._`-.__.-'_.-'    _.-'                                   

 |`-._`-._    `-.__.-'    _.-'_.-'|                                  

 |    `-._`-._        _.-'_.-'    |                                  

  `-._    `-._`-.__.-'_.-'    _.-'                                   

      `-._    `-.__.-'    _.-'                                       

          `-._        _.-'                                           

              `-.__.-'                                               

 

4065:M 22 Aug 13:13:03.301 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.

4065:M 22 Aug 13:13:03.301 # Server started, Redis version 3.0.7

4065:M 22 Aug 13:13:03.302 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.

4065:M 22 Aug 13:13:03.302 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.

4065:M 22 Aug 13:13:03.302 * The server is now ready to accept connections on port 6379

2837:M 14 Oct 18:42:11.618 * DB loaded from disk: 0.011 seconds

7886:C 10 Feb 19:28:46.048 * DB saved on disk

7886:C 10 Feb 19:28:46.077 * RDB: 4 MB of memory used by copy-on-write

7847:M 10 Feb 19:28:46.132 * Background saving terminated with success

7847:M 10 Feb 19:33:49.556 * 100 changes in 300 seconds. Saving...

7847:M 10 Feb 19:33:49.559 * Background saving started by pid 7905

7905:C 10 Feb 19:33:49.600 * DB saved on disk

7905:C 10 Feb 19:33:49.601 * RDB: 4 MB of memory used by copy-on-write

7847:M 10 Feb 19:33:49.663 * Background saving terminated with success

7847:M 10 Feb 19:43:57.357 * 100 changes in 300 seconds. Saving...

7847:M 10 Feb 19:43:57.358 * Background saving started by pid 7979

 

Notice that data is written to disk at regular intervals, and the PID of the background-save child changes every time.

At startup you can also see Redis load the on-disk data back into memory to speed up queries.

1-4)、Startup modes

A)、Foreground

[root@hadoop1 bin]# redis-server

B)、Background

[root@hadoop1 bin]# redis-server > /dev/null 2>&1 &

 

Method 1:

1>/dev/null : redirect the program's standard output (file descriptor 1) to /dev/null

2>&1        : redirect the program's error output (file descriptor 2) to wherever descriptor 1 points

&           : run the program in the background

 

Method 2:

Edit the configuration file:

vi  redis.conf

and change one setting:

 

daemonize yes  # the default is no

 

After saving the file, start Redis with the ordinary command and it now also runs in the background:

[root@notrue-centos redis]# bin/redis-server  ../redis.conf

1-5)、Client connections

JAR download: http://pan.baidu.com/s/1eRW5pXw password: k2nx. Contact the author if the link no longer works.

A)、Connecting from Linux

1-1)、Connect to the default local instance

[root@hadoop1 src]# ./redis-cli

127.0.0.1:6379> set aas dff

OK

127.0.0.1:6379> get aas

"dff"

127.0.0.1:6379>

 

1-2)、Connect to another machine

[root@skycloud1 ~]# redis-cli -h 192.168.215.134  -p 7000

192.168.215.134:7000>

 

-h  the IP to connect to

-p  the port to connect to

B)、Connecting from Java

public class JedisClientTest {

public static void main(String[] args) {

// Create a Jedis client object (a client connection to Redis)

Jedis client = new Jedis("hadoop1", 6379);

// Check whether the server is reachable

String resp = client.ping();

System.out.println(resp);

}

}

 

If PONG comes back, the connection succeeded.

1-6)、Building a Redis 3.2.2 Cluster

A)、Prerequisites

redis-trib.rb needs Ruby, so install the Ruby environment first:

[root@skycloud1 ~]# yum -y  install zlib ruby rubygems

 

Install the Ruby redis gem:

[root@skycloud1 ~]# gem install redis

 

Upgrade RubyGems and the installed gems:

[root@skycloud1 ~]#gem update --system

[root@skycloud1 ~]#gem update

 

List the installed gems:

[root@skycloud1 ~]#gem list

B)、Install redis-3.2.2.tar.gz

Download: http://pan.baidu.com/s/1o82jvce password: gjnn. Contact the author if the link no longer works.

[root@hadoop2 opt]# tar -zxvf redis-3.2.2.tar.gz

[root@hadoop2 opt]# cd redis-3.2.2

[root@hadoop2 redis-3.2.2]# cd src/

 

Compile and install:

[root@hadoop2 src]# make && make install

C)、Set up the cluster environment

Create a separate directory for each instance's configuration:

[root@hadoop1 redis-3.2.2]# mkdir redis_cluster/

[root@hadoop1 redis_cluster]# mkdir 7000

[root@hadoop1 redis_cluster]# mkdir 7001

[root@hadoop1 redis_cluster]# mkdir 7002

 

Do the same on hadoop2:

[root@hadoop2 redis-3.2.2]# mkdir redis_cluster/

[root@hadoop2 redis_cluster]# mkdir 7003

[root@hadoop2 redis_cluster]# mkdir 7004

[root@hadoop2 redis_cluster]# mkdir 7005

 

D)、Edit the Redis configuration files

[root@hadoop1 7000]# vi redis.conf

port  7000

#Port: 7000, 7001 or 7002, one per instance

bind  192.168.215.156

#The default IP is 127.0.0.1; change it to an address the other nodes can reach, otherwise the ports cannot be accessed and the cluster cannot be created

daemonize    yes

#Run Redis in the background

pidfile  /var/run/redis_7000.pid

#One pid file per instance: redis_7000.pid, redis_7001.pid, redis_7002.pid

cluster-enabled  yes

#Enable cluster mode (uncomment this line)

cluster-config-file  nodes_7000.conf

#Cluster state file, generated automatically on first start: nodes_7000.conf, nodes_7001.conf, nodes_7002.conf

cluster-node-timeout  15000

#Request timeout, 15 seconds by default; adjust as needed

appendonly  yes

 

Adjust the corresponding values in the configuration file of each instance on each machine.

 

E)、Start the Redis service on every machine

 

[root@hadoop1 redis-3.2.2]# redis-server redis_cluster/7000/redis.conf

 

Start the remaining instances the same way.

 

F)、Check the processes and ports

[root@hadoop1 redis-3.2.2]# ps -ef|grep redis

root      11302      1  0 11:41 ?        00:00:03 redis-server 192.168.215.156:7000 [cluster]

root      11306      1  0 11:41 ?        00:00:03 redis-server 192.168.215.156:7001 [cluster]

root      11310      1  0 11:41 ?        00:00:03 redis-server 192.168.215.156:7002 [cluster]

root      12125   2558  0 12:00 pts/4    00:00:00 redis-cli -h 192.168.215.156 -c -p 7000

root      12135   5806  0 12:15 pts/0    00:00:00 grep redis

G)、Create the cluster

[root@hadoop1 redis-3.2.2]# redis-trib.rb create --replicas 1 192.168.215.156:7000 192.168.215.156:7001 192.168.215.156:7002 192.168.215.157:7003 192.168.215.157:7004 192.168.215.157:7005

>>> Creating cluster

>>> Performing hash slots allocation on 6 nodes...

Using 3 masters:

192.168.215.157:7003

192.168.215.156:7000

192.168.215.157:7004

Adding replica 192.168.215.156:7001 to 192.168.215.157:7003

Adding replica 192.168.215.157:7005 to 192.168.215.156:7000

Adding replica 192.168.215.156:7002 to 192.168.215.157:7004

M: 2a2dacffc3b39817331cffac19732996684c26dc 192.168.215.156:7000

   slots:5461-10922 (5462 slots) master

S: 4f8aef43b5be88f1b4fffa429b4b5c954a88b802 192.168.215.156:7001

   replicates 8c5c4e9a7ae3bcf6c492ba6182d3f6e9d33dd2fd

S: 281345a0ce5907d795f4f48a43a57d1de7efc9a7 192.168.215.156:7002

   replicates 2a578bf2906f6d408da7ee51007ec214016c1dac

M: 8c5c4e9a7ae3bcf6c492ba6182d3f6e9d33dd2fd 192.168.215.157:7003

   slots:0-5460 (5461 slots) master

M: 2a578bf2906f6d408da7ee51007ec214016c1dac 192.168.215.157:7004

   slots:10923-16383 (5461 slots) master

S: dd868e8f78d6b1f105fd6d8dac535fff2132204f 192.168.215.157:7005

   replicates 2a2dacffc3b39817331cffac19732996684c26dc

Can I set the above configuration? (type 'yes' to accept): yes

>>> Nodes configuration updated

>>> Assign a different config epoch to each node

>>> Sending CLUSTER MEET messages to join the cluster

Waiting for the cluster to join...

>>> Performing Cluster Check (using node 192.168.215.156:7000)

M: 2a2dacffc3b39817331cffac19732996684c26dc 192.168.215.156:7000

   slots:5461-10922 (5462 slots) master

   1 additional replica(s)

S: dd868e8f78d6b1f105fd6d8dac535fff2132204f 192.168.215.157:7005

   slots: (0 slots) slave

   replicates 2a2dacffc3b39817331cffac19732996684c26dc

S: 4f8aef43b5be88f1b4fffa429b4b5c954a88b802 192.168.215.156:7001

   slots: (0 slots) slave

   replicates 8c5c4e9a7ae3bcf6c492ba6182d3f6e9d33dd2fd

M: 2a578bf2906f6d408da7ee51007ec214016c1dac 192.168.215.157:7004

   slots:10923-16383 (5461 slots) master

   1 additional replica(s)

M: 8c5c4e9a7ae3bcf6c492ba6182d3f6e9d33dd2fd 192.168.215.157:7003

   slots:0-5460 (5461 slots) master

   1 additional replica(s)

S: 281345a0ce5907d795f4f48a43a57d1de7efc9a7 192.168.215.156:7002

   slots: (0 slots) slave

   replicates 2a578bf2906f6d408da7ee51007ec214016c1dac

[OK] All nodes agree about slots configuration.

>>> Check for open slots...

>>> Check slots coverage...

[OK] All 16384 slots covered.

 

Output like the above means the cluster was created successfully. The --replicas option sets the number of replicas per master (1 here); scale it in proportion to the number of machines available. For a detailed walkthrough of the creation process see: http://blog.csdn.net/xfg0218/article/details/60319219

 

H)、Test the cluster

Connect to the cluster with: redis-cli -h host -c -p port

 

[root@hadoop1 redis_cluster]# redis-cli -h 192.168.215.156 -c -p 7000

[root@hadoop2 redis-3.2.2]# redis-cli -h 192.168.215.157 -c  -p 7003

 

192.168.215.157:7003> SET "xiaozhang" "男"

-> Redirected to slot [6387] located at 192.168.215.156:7000

OK

 

192.168.215.156:7000> KEYS *

1) "xiaozhang"

192.168.215.156:7000> get "xiaozhang"

"\xe7\x94\xb7"

Common Redis Commands

List all keys:

127.0.0.1:6379> KEYS *

(empty list or set)

 

List keys with the prefix xiao:

127.0.0.1:6379> KEYS "xiao*"

(empty list or set)

 

Pick a random key:

127.0.0.1:6379> RANDOMKEY

(nil)

 

View the slow log:

127.0.0.1:6379> SLOWLOG get

(empty list or set)

127.0.0.1:6379> SLOWLOG get 10

(empty list or set)

 

View Redis server information:

127.0.0.1:6379> INFO

# Server

redis_version:3.2.3

redis_git_sha1:00000000

redis_git_dirty:0

redis_build_id:ab05c86184ac1d8a

redis_mode:standalone

os:Linux 2.6.32-431.el6.x86_64 x86_64

arch_bits:64

multiplexing_api:epoll

gcc_version:4.4.7

process_id:2437

run_id:e21aeaa38b0aa04663ea96b3d6fe6dd773c9e905

tcp_port:6379

uptime_in_seconds:193

uptime_in_days:0

hz:10

lru_clock:9564347

executable:/opt/redis-3.2.3/src/./redis-server

config_file:

 

*************************

 

For the full output see: http://blog.csdn.net/xfg0218/article/details/54813347

 

Watch what Redis is doing in real time:

127.0.0.1:6379> MONITOR

OK

 

Clear data:

127.0.0.1:6379> FLUSHALL

OK

192.168.215.156:7000> CLUSTER reset

OK

 

Count the keys:

127.0.0.1:6379> DBSIZE

(integer) 12344
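
The same administrative commands are also reachable from Jedis; a short sketch (note that KEYS scans the whole keyspace and is best avoided on large production datasets; the info(section) overload is assumed to exist in your Jedis version):

import redis.clients.jedis.Jedis;

public class AdminExample {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("127.0.0.1", 6379);
        System.out.println(jedis.dbSize());        // DBSIZE
        System.out.println(jedis.keys("xiao*"));   // KEYS xiao*
        System.out.println(jedis.info("server"));  // INFO server
        jedis.close();
    }
}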

 

 

 

Redis Bundled Tools

The following tools are found in the src directory under the Redis installation directory:

 

Tool                 Description

redis-server         The Redis server

redis-cli            Command-line client

redis-benchmark      Redis performance testing tool

redis-check-aof      AOF file repair tool

redis-check-dump     RDB file check tool

redis-sentinel       Sentinel server (2.8 and later)

Code Examples

1-1)、Connection utility

package com.otsuser.usualpassenger.redis;

 

import redis.clients.jedis.Jedis;

import redis.clients.jedis.JedisPool;

import redis.clients.jedis.JedisPoolConfig;

 

public class RedisConnectUtils {

 

// Redis server IP

private static String HOST = "127.0.0.1";

// Redis port

private static int PORT = 6379;

// Maximum number of connections in the pool; the default is 8.

// -1 means unlimited; once maxActive Jedis instances have been handed out, the pool state becomes exhausted.

private static int MAX_ACTIVE = 1024;

// Maximum number of idle Jedis instances kept in the pool; the default is also 8.

private static int MAX_IDLE = 200;

// Maximum time to wait for an available connection, in milliseconds; default -1, meaning never time out. If the wait is exceeded, a JedisConnectionException is thrown.

private static int MAX_WAIT = 10000;

 

// Whether to validate an instance when borrowing it; if true, every borrowed Jedis instance is usable.

private static boolean TEST_ON_BORROW = true;

private static JedisPool jedisPool = null;

 

/**

 * Initialize the Redis connection pool

 */

static {

try {

JedisPoolConfig config = new JedisPoolConfig();

config.setMaxTotal(MAX_ACTIVE);

config.setMaxIdle(MAX_IDLE);

config.setMaxWaitMillis(MAX_WAIT);

config.setTestOnBorrow(TEST_ON_BORROW);

jedisPool = new JedisPool(config, HOST, PORT);

} catch (Exception e) {

e.printStackTrace();

}

}

 

/**

 * Get a Jedis instance

 *

 * @return

 */

public synchronized static Jedis getJedis() {

try {

if (jedisPool != null) {

Jedis resource = jedisPool.getResource();

return resource;

} else {

return null;

}

} catch (Exception e) {

e.printStackTrace();

return null;

}

}

 

/**

 * Return a Jedis instance to the pool

 *

 * @param jedis

 */

public static void returnResource(final Jedis jedis) {

if (jedis != null) {

jedisPool.returnResourceObject(jedis);

}

}

 

}

1-2)、Using the Redis API

package com.otsuser.usualpassenger.redis;

 

import java.util.List;

import java.util.Map;

import java.util.Set;

 

import redis.clients.jedis.Jedis;

 

public class RedisApiClient {

 

/**

 * Check whether the connection works

 *

 * @return

 */

public static String ping() {

Jedis jedis = RedisConnectUtils.getJedis();

String str = jedis.ping();

RedisConnectUtils.returnResource(jedis);

return str;

}

 

/**

 * Delete by key (byte[])

 *

 * @param keys

 * @return Integer reply, specifically: an integer greater than 0 if one or

 *         more keys were removed 0 if none of the specified key existed

 */

public static Long del(byte[] key) {

Jedis jedis = RedisConnectUtils.getJedis();

Long returnResult = jedis.del(key);

RedisConnectUtils.returnResource(jedis);

return returnResult;

}

 

/**

 * Delete by key

 *

 * @param key

 */

public static void del(String key) {

Jedis jedis = RedisConnectUtils.getJedis();

jedis.del(key);

RedisConnectUtils.returnResource(jedis);

}

 

/**

 * Set key and value with a time-to-live (byte[])

 *

 * @param key

 * @param value

 * @param liveTime

 */

public static void set(byte[] key, byte[] value, int liveTime) {

Jedis jedis = RedisConnectUtils.getJedis();

jedis.set(key, value);

jedis.expire(key, liveTime);

RedisConnectUtils.returnResource(jedis);

}

 

/**

 * Set key and value with a time-to-live

 *

 * @param key

 * @param value

 * @param liveTime

 */

public void set(String key, String value, int liveTime) {

Jedis jedis = RedisConnectUtils.getJedis();

jedis.set(key, value);

jedis.expire(key, liveTime);

RedisConnectUtils.returnResource(jedis);

}

 

/**

 * Set key and value

 *

 * @param key

 * @param value

 */

public void set(String key, String value) {

Jedis jedis = RedisConnectUtils.getJedis();

jedis.set(key, value);

RedisConnectUtils.returnResource(jedis);

}

 

/**

 * Set key and value (byte[], serialized form)

 *

 * @param key

 * @param value

 */

public void set(byte[] key, byte[] value) {

Jedis jedis = RedisConnectUtils.getJedis();

jedis.set(key, value);

RedisConnectUtils.returnResource(jedis);

}

 

/**

 * Get a value (String)

 *

 * @param key

 * @return

 */

public static String get(String key) {

Jedis jedis = RedisConnectUtils.getJedis();

String value = jedis.get(key);

RedisConnectUtils.returnResource(jedis);

return value;

}

 

/**

 * Get a value (byte[], to be deserialized)

 *

 * @param key

 * @return

 */

public static byte[] get(byte[] key) {

Jedis jedis = RedisConnectUtils.getJedis();

byte[] value = jedis.get(key);

RedisConnectUtils.returnResource(jedis);

return value;

}

 

/**

 * Find keys matching a pattern

 *

 * @param pattern

 * @return

 */

public static Set keys(String pattern) {

Jedis jedis = RedisConnectUtils.getJedis();

Set value = jedis.keys(pattern);

RedisConnectUtils.returnResource(jedis);

return value;

}

 

/**

 * Check whether a key already exists

 *

 * @param key

 * @return

 */

public static boolean exists(String key) {

Jedis jedis = RedisConnectUtils.getJedis();

boolean value = jedis.exists(key);

RedisConnectUtils.returnResource(jedis);

return value;

}

 

/**

 * Push an element onto a list

 *

 * @param key

 * @param value

 */

public void lpush(String key, String value) {

Jedis jedis = RedisConnectUtils.getJedis();

jedis.lpush(key, value);

RedisConnectUtils.returnResource(jedis);

}

 

public void rpush(String key, String value) {

Jedis jedis = RedisConnectUtils.getJedis();

jedis.rpush(key, value);

RedisConnectUtils.returnResource(jedis);

}

 

/**

 * Length of the list

 *

 * @param key

 * @return

 */

public static Long llen(String key) {

Jedis jedis = RedisConnectUtils.getJedis();

Long len = jedis.llen(key);

RedisConnectUtils.returnResource(jedis);

return len;

}

 

/**

 * Get the value at the given index

 *

 * @param key

 * @param index

 * @return

 */

public static String lindex(String key, Long index) {

Jedis jedis = RedisConnectUtils.getJedis();

String str = jedis.lindex(key, index);

RedisConnectUtils.returnResource(jedis);

return str;

}

 

public static String lpop(String key) {

Jedis jedis = RedisConnectUtils.getJedis();

String str = jedis.lpop(key);

RedisConnectUtils.returnResource(jedis);

return str;

}

 

public static List lrange(String key, long start, long end) {

Jedis jedis = RedisConnectUtils.getJedis();

List str = jedis.lrange(key, start, end);

RedisConnectUtils.returnResource(jedis);

return str;

}

 

/**

 * @param key

 * @param field

 * @param value

 * @return If the field already exists, and the HSET just produced an update

 *         of the value, 0 is returned, otherwise if a new field is created

 *         1 is returned.

 */

public static Long hset(String key, String field, String value) {

Jedis jedis = RedisConnectUtils.getJedis();

Long alreadyExists = jedis.hset(key, field, value);

RedisConnectUtils.returnResource(jedis);

return alreadyExists;

}

 

/**

 * @param key

 * @param field

 * @param value

 * @return If the field already exists, and the HSET just produced an update

 *         of the value, 0 is returned, otherwise if a new field is created

 *         1 is returned.

 */

public static Long hset(byte[] key, byte[] field, byte[] value) {

Jedis jedis = RedisConnectUtils.getJedis();

Long alreadyExists = jedis.hset(key, field, value);

RedisConnectUtils.returnResource(jedis);

return alreadyExists;

}

 

/**

 * @param key

 * @param field

 * @return Bulk reply

 */

public static String hget(final String key, final String field) {

Jedis jedis = RedisConnectUtils.getJedis();

String str = jedis.hget(key, field);

RedisConnectUtils.returnResource(jedis);

return str;

}

 

/**

 * @param key

 * @param field

 * @return Bulk reply

 */

public static byte[] hget(final byte[] key, final byte[] field) {

Jedis jedis = RedisConnectUtils.getJedis();

byte[] bt = jedis.hget(key, field);

jedis.hgetAll(key);

RedisConnectUtils.returnResource(jedis);

return bt;

}

 

/**
 * @param key
 * @return all the fields and values contained in a hash
 */
public static Map<String, String> hgetAll(String key) {
    Jedis jedis = RedisConnectUtils.getJedis();
    Map<String, String> map = jedis.hgetAll(key);
    RedisConnectUtils.returnResource(jedis);
    return map;
}

/**
 * Binary-safe variant of hgetAll.
 *
 * @param key
 * @return all the fields and values contained in a hash
 */
public static Map<byte[], byte[]> hgetAll(byte[] key) {
    Jedis jedis = RedisConnectUtils.getJedis();
    Map<byte[], byte[]> map = jedis.hgetAll(key);
    RedisConnectUtils.returnResource(jedis);
    return map;
}

/**
 * @param key
 * @param fields
 * @return the number of fields that were present and removed; fields
 *         that do not exist are ignored
 */
public static Long hdel(final String key, final String... fields) {
    Jedis jedis = RedisConnectUtils.getJedis();
    Long removed = jedis.hdel(key, fields);
    RedisConnectUtils.returnResource(jedis);
    return removed;
}

/**
 * Binary-safe variant of hdel.
 *
 * @param key
 * @param fields
 * @return the number of fields that were present and removed
 */
public static Long hdel(final byte[] key, final byte[]... fields) {
    Jedis jedis = RedisConnectUtils.getJedis();
    Long removed = jedis.hdel(key, fields);
    RedisConnectUtils.returnResource(jedis);
    return removed;
}

/**
 * @param key
 * @param field
 * @return true if the hash stored at key contains the given field,
 *         false if the key is not found or the field is not present
 */
public static Boolean hexists(String key, final String field) {
    Jedis jedis = RedisConnectUtils.getJedis();
    Boolean result = jedis.hexists(key, field);
    RedisConnectUtils.returnResource(jedis);
    return result;
}

/**
 * Set several hash fields at once.
 *
 * @param key
 * @param hash
 * @return
 */
public static String hmset(final String key, final Map<String, String> hash) {
    Jedis jedis = RedisConnectUtils.getJedis();
    String str = jedis.hmset(key, hash);
    RedisConnectUtils.returnResource(jedis);
    return str;
}

/**
 * Get the values of one or more hash fields.
 *
 * @param key
 * @param fields
 * @return
 */
public static List<String> hmget(final String key, final String... fields) {
    Jedis jedis = RedisConnectUtils.getJedis();
    List<String> list = jedis.hmget(key, fields);
    RedisConnectUtils.returnResource(jedis);
    return list;
}

 

/**
 * Remove all keys from the current database (FLUSHDB, not FLUSHALL).
 *
 * @return
 */
public static String flushDB() {
    Jedis jedis = RedisConnectUtils.getJedis();
    String str = jedis.flushDB();
    RedisConnectUtils.returnResource(jedis);
    return str;
}

/**
 * Number of keys in the current database.
 */
public static long dbSize() {
    Jedis jedis = RedisConnectUtils.getJedis();
    long len = jedis.dbSize();
    RedisConnectUtils.returnResource(jedis);
    return len;
}

 

}
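
A quick usage sketch of the helpers above (a sketch, not part of the original class: it assumes the RedisConnectUtils pool configured earlier in this chapter and a reachable local Redis; the main method would live in the same utility class):

public static void main(String[] args) {
    hset("person", "name", "jack");
    hset("person", "sex", "female");
    System.out.println(hgetAll("person"));         // {name=jack, sex=female}

    lpush("mylist", "a");                          // list is now [a]
    rpush("mylist", "b");                          // list is now [a, b]
    System.out.println(lrange("mylist", 0, -1));   // [a, b]

    System.out.println(exists("person"));          // true
    System.out.println(dbSize());                  // key count in the current DB
}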

RedisStudio client connection tool

Link: http://pan.baidu.com/s/1gf9dknp   Password: hfyd. If you cannot download it, please contact the author.

 

The download contains two versions, an installer and a portable build. The portable one is recommended: it is simple and has a clean interface, shown below.

(screenshot: RedisStudio main interface)

Redis persistence mechanisms

Although Redis is an in-memory database, for safety it still offers persistence. Redis implements two mechanisms: RDB (Redis DataBase), which persists the data itself, and AOF (Append Only File), which persists the operations. Both are optional even though Redis provides them; if neither is used, Redis behaves much like memcache.

1-1)、Periodic snapshots (RDB)

The RDB approach persists Redis's data at a single point in time to disk; it is snapshot-style persistence.

While persisting, Redis first writes the data to a temporary file, and only after the whole persistence pass has finished does it replace the previous dump file with the temporary one. Thanks to this, a snapshot file is always complete and usable, so backups can be taken at any time.

For RDB, Redis forks a dedicated child process to do the persistence; the main process performs no disk I/O for it, which keeps Redis extremely fast.

If you need to restore a large dataset and are not very sensitive to losing the most recent writes, RDB is more efficient than AOF.
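
As a quick illustration (a sketch assuming a stock configuration of that era; your redis.conf may differ), the snapshot schedule can be read and changed at runtime, and a snapshot can be forced with BGSAVE. Each pair in the save value means "after N seconds, if at least M keys changed, dump"; for example 900 1 snapshots after 900 seconds if at least one key changed:

127.0.0.1:6379> CONFIG GET save
1) "save"
2) "900 1 300 10 60 10000"
127.0.0.1:6379> CONFIG SET save "900 1 300 10 60 10000"
OK
127.0.0.1:6379> BGSAVE
Background saving started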

RDB has clear advantages, but its drawback cannot be ignored: if you are very sensitive to data completeness, RDB is a poor fit, because even persisting every 5 minutes still loses up to roughly 5 minutes of data when Redis crashes. For that case Redis provides the other persistence mechanism, AOF.

1-2)、Statement-appending file (AOF)

The AOF (Append Only File) approach is similar in practice to MySQL's statement-based binlog: every command that changes Redis's in-memory data is appended to a log file, so the log file itself is Redis's persisted data.
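
AOF can likewise be enabled at runtime (a sketch; appendfsync everysec, one fsync per second, is the usual compromise between durability and speed), and BGREWRITEAOF rewrites the log in the background to shrink it:

127.0.0.1:6379> CONFIG SET appendonly yes
OK
127.0.0.1:6379> CONFIG SET appendfsync everysec
OK
127.0.0.1:6379> BGREWRITEAOF
Background append only file rewriting started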

 

The main drawback of AOF is that the appended log file can grow very large, and loading it on restart can be very slow: tens of gigabytes of AOF may take hours to load. The time is not spent reading the file from disk but re-executing every logged command in memory. Also, because every write command is logged, AOF lowers Redis's read/write performance somewhat.

1-3)、Virtual memory (vm)

The virtual memory mechanism kept all keys in memory while swapping rarely used values out to disk, so a dataset larger than RAM could still be served. In practice it proved complex and slow, and the author abandoned it; the feature was deprecated and later removed.

1-4)、Diskstore

Diskstore is the new implementation the author turned to after giving up on virtual memory, essentially a traditional B-tree style store. It is still experimental, and we will have to wait and see whether it becomes usable.

 

Redis transaction handling

In a database, what keeps a group of operations from leaving data half-applied is the transaction: on failure, the state should be restored to what it was before the whole group started. Redis transactions are built from four commands: MULTI, EXEC, DISCARD and WATCH.

MULTI: starts assembling a transaction

EXEC: executes the queued transaction

DISCARD: abandons the transaction

WATCH: watches one or more keys; if any of them is changed before the transaction executes, the transaction is aborted.

 

1-1)、Using MULTI

127.0.0.1:6379> set oneId 1

OK

127.0.0.1:6379> MULTI

OK

127.0.0.1:6379> INCR oneId

QUEUED

127.0.0.1:6379> INCR oneId

QUEUED

127.0.0.1:6379> ping  

QUEUED

127.0.0.1:6379> ping

QUEUED

127.0.0.1:6379> exec

1) (integer) 2

2) (integer) 3

3) PONG

4) PONG

127.0.0.1:6379> INCR oneId

(integer) 4

 

The example shows that MULTI opens the transaction queue: commands issued afterwards are placed into the queue, and when EXEC runs they are all submitted at once.

 

 

Regarding transaction execution: if AOF persistence is enabled, then once a transaction executes, its commands are written to disk in a single write. If a power failure or disk fault occurs while that write is in progress, only part of the transaction may have been persisted, leaving the AOF file incomplete. In that case the redis-check-aof tool can repair it: it removes the incomplete data and restores the integrity of the AOF file.
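
The repair itself is one command against the AOF file (appendonly.aof is the default file name; adjust the path for your installation). With --fix the tool reports how much trailing data it will truncate and asks for confirmation:

[root@hadoop3 src]# ./redis-check-aof --fix appendonly.aof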

 

1-2)、Using WATCH

127.0.0.1:6379> set username xiaozhang

OK

127.0.0.1:6379> WATCH username

OK

127.0.0.1:6379> set username xiaowang

OK

127.0.0.1:6379> MULTI

OK

127.0.0.1:6379> set username xiaoxiao

QUEUED

127.0.0.1:6379> get username

QUEUED

127.0.0.1:6379> EXEC

(nil)

 

WATCH achieves an optimistic-locking effect, i.e. CAS (check-and-set). WATCH monitors the value of a key, and can monitor several keys at once. It keeps watching even if no transaction has started yet, and once it finds that a watched key has been modified, EXEC returns nil, meaning the transaction could not run.

A value that changed before EXEC can be regarded as dirty data: any further operations based on it would be meaningless, so the transaction is simply not executed.
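
The same check-and-set pattern in Java with Jedis (a sketch; casSet is a hypothetical helper name, and it reuses the RedisConnectUtils pool from the utility class above). exec() returns null when a watched key was modified, so the caller knows to retry:

import java.util.List;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;

/** Hypothetical helper: set key to newValue only if nobody changes it in between. */
public static boolean casSet(String key, String newValue) {
    Jedis jedis = RedisConnectUtils.getJedis();
    try {
        jedis.watch(key);                 // start watching before any read
        Transaction t = jedis.multi();    // commands below are queued, not run yet
        t.set(key, newValue);
        List<Object> result = t.exec();   // null means a watched key changed
        return result != null;            // false => caller should retry
    } finally {
        RedisConnectUtils.returnResource(jedis);
    }
}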

 

Redis publish/subscribe

Redis's message subscribe/publish (pub/sub) is a messaging model: a Redis client can subscribe to any number of channels, and whenever a channel receives a message, every client subscribed to that channel receives it. The following demonstrates it:

1-1)、Subscriber window

[root@hadoop3 src]# ./redis-cli

127.0.0.1:6379> SUBSCRIBE message

Reading messages... (press Ctrl-C to quit)

1) "subscribe"

2) "message"

3) (integer) 1

1-2)、Publisher window

[root@hadoop3 src]# ./redis-cli

127.0.0.1:6379> PUBLISH message "new message"

(integer) 1

127.0.0.1:6379> PUBLISH message "new message1"

(integer) 1

 

1-3)、Subscriber window after publishing

127.0.0.1:6379> SUBSCRIBE message

Reading messages... (press Ctrl-C to quit)

1) "subscribe"

2) "message"

3) (integer) 1

1) "message"

2) "message"

3) "new message"

1) "message"

2) "message"

3) "new message1"

 

Redis performance testing

We use redis-benchmark, the tool that ships with Redis, for the tests.

1-1)、Viewing the help

[root@hadoop3 src]# ./redis-benchmark -h

Invalid option "-h" or option argument missing

 

Usage: redis-benchmark [-h <host>] [-p <port>] [-c <clients>] [-n <requests>] [-k <boolean>]

 -h <hostname>      Server hostname (default 127.0.0.1)
 -p <port>          Server port (default 6379)
 -s <socket>        Server socket (overrides host and port)
 -a <password>      Password for Redis Auth
 -c <clients>       Number of parallel connections (default 50)
 -n <requests>      Total number of requests (default 100000)
 -d <size>          Data size of SET/GET value in bytes (default 2)
 -dbnum <db>        SELECT the specified db number (default 0)
 -k <boolean>       1=keep alive 0=reconnect (default 1)
 -r <keyspacelen>   Use random keys for SET/GET/INCR, random values for SADD
  Using this option the benchmark will expand the string __rand_int__
  inside an argument with a 12 digits number in the specified range
  from 0 to keyspacelen-1. The substitution changes every time a command
  is executed. Default tests use this to hit random keys in the
  specified range.
 -P <numreq>        Pipeline <numreq> requests. Default 1 (no pipeline).
 -q                 Quiet. Just show query/sec values
 --csv              Output in CSV format
 -l                 Loop. Run the tests forever
 -t <tests>         Only run the comma separated list of tests. The test
                    names are the same as the ones produced as output.
 -I                 Idle mode. Just open N idle connections and wait.

Examples:

 Run the benchmark with the default configuration against 127.0.0.1:6379:
   $ redis-benchmark

 Use 20 parallel clients, for a total of 100k requests, against 192.168.1.1:
   $ redis-benchmark -h 192.168.1.1 -p 6379 -n 100000 -c 20

 Fill 127.0.0.1:6379 with about 1 million keys only using the SET test:
   $ redis-benchmark -t set -n 1000000 -r 100000000

 Benchmark 127.0.0.1:6379 for a few commands producing CSV output:
   $ redis-benchmark -t ping,set,get -n 100000 --csv

 Benchmark a specific command line:
   $ redis-benchmark -r 10000 -n 10000 eval 'return redis.call("ping")' 0

 Fill a list with 10000 random elements:
   $ redis-benchmark -r 10000 -n 10000 lpush mylist __rand_int__

 On user specified command lines __rand_int__ is replaced with a random integer
 with a range of values selected by the -r option.

1-2)、Examples

A)、Concurrency test

Benchmark the Redis server at localhost:6379 with 100 parallel connections and 1,000 requests:

[root@hadoop3 src]# ./redis-benchmark -h localhost -p 6379 -c 100 -n 1000

====== PING_INLINE ======

  1000 requests completed in 0.04 seconds

  100 parallel clients

  3 bytes payload

  keep alive: 1

****************

 

 

For a detailed explanation of the parameters, see:

http://blog.csdn.net/xfg0218/article/details/52825874

 

B)、Payload size test

Test read and write performance with 100-byte values:

[root@hadoop3 src]# ./redis-benchmark -h localhost -p 6379 -q -d 100

PING_INLINE: 31172.07 requests per second

PING_BULK: 33886.82 requests per second

SET: 30998.14 requests per second

GET: 30693.68 requests per second

INCR: 33079.72 requests per second

LPUSH: 29403.12 requests per second

LPOP: 32530.91 requests per second

SADD: 31377.47 requests per second

SPOP: 33134.53 requests per second

LPUSH (needed to benchmark LRANGE): 32393.91 requests per second

LRANGE_100 (first 100 elements): 12055.46 requests per second

LRANGE_300 (first 300 elements): 4528.58 requests per second

LRANGE_500 (first 450 elements): 2971.68 requests per second

LRANGE_600 (first 600 elements): 2184.22 requests per second

MSET (10 keys): 19361.08 requests per second

 

 

C)、Testing SET and LPUSH performance

[root@hadoop3 src]# ./redis-benchmark -t set,lpush -n 1000 -q

SET: 33333.34 requests per second

LPUSH: 33333.34 requests per second

 

-q shows only the benchmark results.

D)、Benchmarking a specific command

[root@hadoop3 src]# ./redis-benchmark -n 1000 -q script load "redis.call('set','username','xiaozhang')"

script load redis.call('set','username','xiaozhang'): 27777.78 requests per second

 

redis-trib.rb in detail

[root@hadoop1 src]# ./redis-trib.rb help

Usage: redis-trib <command> <options> <arguments ...>

  set-timeout     host:port milliseconds
  info            host:port
  check           host:port
  call            host:port command arg arg .. arg
  fix             host:port
                  --timeout <arg>
  help            (show this help)
  add-node        new_host:new_port existing_host:existing_port
                  --slave
                  --master-id <arg>
  rebalance       host:port
                  --timeout <arg>
                  --use-empty-masters
                  --threshold <arg>
                  --auto-weights
                  --pipeline <arg>
                  --weight <arg>
                  --simulate
  import          host:port
                  --from <arg>
                  --replace
                  --copy
  del-node        host:port node_id
  create          host1:port1 ... hostN:portN
                  --replicas <arg>
  reshard         host:port
                  --from <arg>
                  --timeout <arg>
                  --yes
                  --slots <arg>
                  --to <arg>
                  --pipeline <arg>

For check, fix, reshard, del-node, set-timeout you can specify the host and port of any working node in the cluster.

 

As the help shows, redis-trib.rb offers the following functions (a usage example follows the list):

1、create: create a cluster

2、check: check a cluster

3、info: show cluster information

4、fix: repair a cluster

5、reshard: migrate slots online

6、rebalance: balance the number of slots across the cluster nodes

7、add-node: add a new node to the cluster

8、del-node: remove a node from the cluster

9、set-timeout: set the heartbeat timeout between cluster nodes

10、call: run a command on all nodes of the cluster

11、import: import data from an external Redis into the cluster
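
For example, creating a six-node cluster with one replica per master and then verifying it looks like this (the 192.168.1.x addresses are placeholders for your own nodes):

[root@hadoop1 src]# ./redis-trib.rb create --replicas 1 192.168.1.1:7000 192.168.1.2:7000 192.168.1.3:7000 192.168.1.4:7000 192.168.1.5:7000 192.168.1.6:7000
[root@hadoop1 src]# ./redis-trib.rb check 192.168.1.1:7000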

 

 

For details on each function, see:

http://blog.csdn.net/xfg0218/article/details/56505216

or

http://blog.csdn.net/xfg0218/article/details/56678783
