Redis (Remote Dictionary Server) is an open-source, networked, in-memory (optionally persisted) log-structured key-value database written in ANSI C, with APIs for many languages. From March 15, 2010 its development was led by VMware; since May 2013 it has been sponsored by Pivotal.
~ Baidu Baike.
REmote DIctionary Server (Redis) is a key-value store written by Salvatore Sanfilippo.
~
Redis is an open-source, BSD-licensed, networked, in-memory (optionally persisted) log-structured key-value database written in ANSI C, with APIs for many languages.
~
It is often called a data-structure server, because values can be strings, hashes, lists, sets, sorted sets, and other types.
~ runoob.
Redis is an open source (BSD licensed) in-memory data-structure store, usable as a database, cache, and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, and geospatial indexes with radius queries. Redis has built-in replication, Lua scripting, LRU eviction, transactions, and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.
~ http://www.redis.cn/
Redis is an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. It supports data structures such as strings, hashes, lists, sets, sorted sets with range queries, bitmaps, hyperloglogs, geospatial indexes with radius queries and streams. Redis has built-in replication, Lua scripting, LRU eviction, transactions and different levels of on-disk persistence, and provides high availability via Redis Sentinel and automatic partitioning with Redis Cluster.
~https://redis.io/
Redis supports persistence: in-memory data can be saved to disk and loaded back for use after a restart.
Redis supports not only simple key-value data but also list, set, zset, hash, and other data structures.
Redis supports data backup, i.e. master-slave replication.
http://download.redis.io/releases/redis-4.0.11.tar.gz
geek@geek-PC:~/Downloads$ scp redis-4.0.11.tar.gz [email protected]:/root/geek/tools_my/
[email protected]'s password:
redis-4.0.11.tar.gz 100% 1699KB 36.9MB/s 00:00
Log in to the server, then extract and build with make:
[root@localhost tools_my]# tar -zxvf redis-4.0.11.tar.gz
[root@localhost tools_my]# cd redis-4.0.11
[root@localhost redis-4.0.11]# ls
00-RELEASENOTES deps README.md runtest-sentinel utils
BUGS INSTALL redis.conf sentinel.conf
CONTRIBUTING Makefile runtest src
COPYING MANIFESTO runtest-cluster tests
[root@localhost redis-4.0.11]# make
...
Hint: It's a good idea to run 'make test' ;)
make[1]: Leaving directory `/root/geek/tools_my/redis-4.0.11/src'
Best to skip make test here; it is involved (it needs Tcl installed).
[root@localhost redis-4.0.11]# make install
cd src && make install
make[1]: Entering directory `/root/geek/tools_my/redis-4.0.11/src'
CC Makefile.dep
make[1]: Leaving directory `/root/geek/tools_my/redis-4.0.11/src'
make[1]: Entering directory `/root/geek/tools_my/redis-4.0.11/src'
Hint: It's a good idea to run 'make test' ;)
INSTALL install
INSTALL install
INSTALL install
INSTALL install
INSTALL install
make[1]: Leaving directory `/root/geek/tools_my/redis-4.0.11/src'
[root@localhost ~]# cd /usr/local/bin/
[root@localhost bin]# ls
pcre-config pcretest redis-check-aof redis-cli redis-server
pcregrep redis-benchmark redis-check-rdb redis-sentinel
[root@localhost bin]# ll
total 35780
-rwxr-xr-x. 1 root root 2363 Feb 19 09:30 pcre-config
-rwxr-xr-x. 1 root root 90207 Feb 19 09:30 pcregrep
-rwxr-xr-x. 1 root root 186075 Feb 19 09:30 pcretest
-rwxr-xr-x. 1 root root 5599918 Mar 15 03:53 redis-benchmark
-rwxr-xr-x. 1 root root 8333544 Mar 15 03:53 redis-check-aof
-rwxr-xr-x. 1 root root 8333544 Mar 15 03:53 redis-check-rdb
-rwxr-xr-x. 1 root root 5740282 Mar 15 03:53 redis-cli
lrwxrwxrwx. 1 root root 12 Mar 15 03:53 redis-sentinel -> redis-server
-rwxr-xr-x. 1 root root 8333544 Mar 15 03:53 redis-server
[root@localhost redis-4.0.11]# cp redis.conf redis.conf.bak
[root@localhost redis-4.0.11]# vim redis.conf
################################# GENERAL #####################################

# By default Redis does not run as a daemon. Use 'yes' if you need it.
# Note that Redis will write a pid file in /var/run/redis.pid when daemonized.
# daemonize no
daemonize yes
[root@localhost redis-4.0.11]# /usr/local/bin/redis-server redis.conf
7189:C 15 Mar 05:34:17.025 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
7189:C 15 Mar 05:34:17.025 # Redis version=4.0.11, bits=64, commit=00000000, modified=0, pid=7189, just started
7189:C 15 Mar 05:34:17.025 # Configuration loaded
The default port is 6379.
[root@localhost redis-4.0.11]# /usr/local/bin/redis-cli
127.0.0.1:6379> ping
PONG
127.0.0.1:6379>
[root@localhost ~]# ps -ef | grep redis
root 7190 1 0 05:34 ? 00:00:00 /usr/local/bin/redis-server 127.0.0.1:6379
root 7214 3666 0 05:36 pts/0 00:00:00 /usr/local/bin/redis-cli
root 7232 7218 0 05:39 pts/1 00:00:00 grep redis
[root@localhost redis-4.0.11]# /usr/local/bin/redis-cli
127.0.0.1:6379> shutdown
not connected> exit
shutdown at the interactive prompt stops redis-server.
exit at the interactive prompt closes redis-cli.
[root@localhost ~]# ps -ef | grep redis
root 7275 7218 0 05:59 pts/1 00:00:00 grep redis
[root@localhost bin]# ll
total 35780
-rwxr-xr-x. 1 root root 2363 Feb 19 09:30 pcre-config
-rwxr-xr-x. 1 root root 90207 Feb 19 09:30 pcregrep
-rwxr-xr-x. 1 root root 186075 Feb 19 09:30 pcretest
-rwxr-xr-x. 1 root root 5599918 Mar 15 03:53 redis-benchmark
-rwxr-xr-x. 1 root root 8333544 Mar 15 03:53 redis-check-aof
-rwxr-xr-x. 1 root root 8333544 Mar 15 03:53 redis-check-rdb
-rwxr-xr-x. 1 root root 5740282 Mar 15 03:53 redis-cli
lrwxrwxrwx. 1 root root 12 Mar 15 03:53 redis-sentinel -> redis-server
-rwxr-xr-x. 1 root root 8333544 Mar 15 03:53 redis-server
[root@localhost bin]# pwd
/usr/local/bin
[root@localhost bin]# ./redis-benchmark
Redis handles client requests with a single-threaded model; read/write events are dispatched through a wrapper around epoll, so Redis's actual throughput depends entirely on the efficiency of the main thread.
epoll is the Linux kernel's improved mechanism for handling large numbers of file descriptors. It is an enhanced version of the multiplexed I/O interfaces select/poll, and it significantly improves CPU utilization when only a few of many concurrent connections are active.
[root@localhost redis-4.0.11]# vim redis.conf
# Set the number of databases. The default database is DB 0, you can select
# a different one on a per-connection basis using SELECT <dbid> where
# dbid is a number between 0 and 'databases'-1
databases 16
The select command switches databases.
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]>
dbsize shows the number of keys in the current database.
127.0.0.1:6379> DBSIZE
(integer) 4
127.0.0.1:6379> keys *
1) "myset:__rand_int__"
2) "key:__rand_int__"
3) "mylist"
4) "counter:__rand_int__"
flushdb empties the current database. flushall wipes every database.
Keypad trivia: on a phone keypad 6379 spells MERZ ——> Alessia Merz, the actress.
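The same database-level commands are one-liners from Java. A minimal Jedis sketch (Jedis is the client set up at the end of these notes; the host IP follows the later examples, and the class name is just for illustration):

~~~
import java.util.Set;

import redis.clients.jedis.Jedis;

public class DbCommandsDemo {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("192.168.223.129", 6379);
        jedis.select(1);                        // SELECT 1: switch to DB 1.
        System.out.println(jedis.dbSize());     // DBSIZE: key count of the current DB.
        Set<String> keys = jedis.keys("*");     // KEYS *
        System.out.println("keys = " + keys);
        jedis.flushDB();                        // flushdb: empties only the current DB.
        jedis.close();
    }
}
~~~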
http://redisdoc.com/
Strings can be understood as the same model as Memcached:
one key corresponds to one value.
Binary-safe ——> a value can contain any data, e.g. a jpg image or a serialized object.
The most basic Redis data type. A single string value in Redis can be at most 512 MB.
127.0.0.1:6379> get k1
"geek"
127.0.0.1:6379> append k1 666
(integer) 7
127.0.0.1:6379> get k1
"geek666"
127.0.0.1:6379> strlen k1
(integer) 7
127.0.0.1:6379>
127.0.0.1:6379> incr k2
(integer) 1
127.0.0.1:6379> incr k2
(integer) 2
127.0.0.1:6379> incr k2
(integer) 3
127.0.0.1:6379> incrby k2 2
(integer) 5
127.0.0.1:6379> incrby k2 2
(integer) 7
127.0.0.1:6379> incrby k2 2
(integer) 9
127.0.0.1:6379> get k1
"geek666"
127.0.0.1:6379> GETRANGE k1 0 -1
"geek666"
127.0.0.1:6379> GETRANGE k1 0 2
"gee"
127.0.0.1:6379> SETRANGE k1 0 xxx
(integer) 7
127.0.0.1:6379> get k1
"xxxk666"
127.0.0.1:6379> setex k3 10 v3
OK
127.0.0.1:6379> ttl k3
(integer) 8
127.0.0.1:6379> setnx k1 q
(integer) 0
127.0.0.1:6379> mset k1 v1 k2 v2 k3 v3
OK
127.0.0.1:6379> mget k1 k2 k3
1) "v1"
2) "v2"
3) "v3"
127.0.0.1:6379> mget k1 k2 k3 k4
1) "v1"
2) "v2"
3) "v3"
4) (nil)
127.0.0.1:6379> msetnx k3 v3 k4 v4
(integer) 0
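These string commands map one-to-one onto Jedis methods. A sketch against the Jedis 3.x API (key names mirror the session above):

~~~
import redis.clients.jedis.Jedis;

public class StringCommandsDemo {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("192.168.223.129", 6379);
        jedis.set("k1", "geek");
        jedis.append("k1", "666");                        // APPEND -> "geek666"
        System.out.println(jedis.strlen("k1"));           // 7
        jedis.incrBy("k2", 2);                            // INCRBY; a missing key starts at 0.
        System.out.println(jedis.getrange("k1", 0, 2));   // "gee"
        jedis.setex("k3", 10, "v3");                      // set with a 10-second TTL.
        System.out.println(jedis.ttl("k3"));
        System.out.println(jedis.setnx("k1", "q"));       // 0: k1 already exists.
        jedis.mset("k1", "v1", "k2", "v2", "k3", "v3");
        System.out.println(jedis.mget("k1", "k2", "k4")); // [v1, v2, null]
        jedis.close();
    }
}
~~~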
A Redis list is a simple list of strings, ordered by insertion.
Elements can be pushed onto the head (left) or the tail (right) of the list.
The underlying structure is a linked list.
127.0.0.1:6379> LPUSH list01 1 2 3 4 5
(integer) 5
127.0.0.1:6379> LRANGE list01 0 -1
1) "5"
2) "4"
3) "3"
4) "2"
5) "1"
127.0.0.1:6379> RPUSH list02 1 2 3 4 5
(integer) 5
127.0.0.1:6379> LRANGE list02 0 -1
1) "1"
2) "2"
3) "3"
4) "4"
5) "5"
127.0.0.1:6379> LRANGE list02 0 -1
1) "1"
2) "2"
3) "3"
4) "4"
5) "5"
6) "1"
7) "1"
8) "2"
9) "2"
10) "3"
11) "3"
12) "3"
13) "5"
14) "4"
15) "1"
127.0.0.1:6379> LREM list02 2 3
(integer) 2
127.0.0.1:6379> LRANGE list02 0 -1
1) "1"
2) "2"
3) "4"
4) "5"
5) "1"
6) "1"
7) "2"
8) "2"
9) "3"
10) "3"
11) "5"
12) "4"
13) "1"
127.0.0.1:6379>
127.0.0.1:6379> lpush list01 1 2 3 4 5 6 7 8
(integer) 8
127.0.0.1:6379> LTRIM list01 0 4
OK
127.0.0.1:6379> lrange list01 0 -1
1) "8"
2) "7"
3) "6"
4) "5"
5) "4"
127.0.0.1:6379>
127.0.0.1:6379> lset list01 1 6
OK
127.0.0.1:6379> LRANGE list01 0 -1
1) "5"
2) "6"
3) "3"
4) "2"
5) "1"
127.0.0.1:6379> LINSERT list01 before 6 Java
(integer) 6
127.0.0.1:6379> LRANGE list01 0 -1
1) "5"
2) "Java"
3) "6"
4) "3"
5) "2"
6) "1"
An unordered collection of String values, implemented via a hash table.
127.0.0.1:6379> sadd set01 1 1 2 2 3 3
(integer) 3
127.0.0.1:6379> SMEMBERS set01
1) "1"
2) "2"
3) "3"
127.0.0.1:6379> SISMEMBER set01 1
(integer) 1
127.0.0.1:6379> SISMEMBER set01 x
(integer) 0
127.0.0.1:6379> SCARD set01
(integer) 3
127.0.0.1:6379> SMEMBERS set01
1) "1"
2) "2"
3) "3"
127.0.0.1:6379> SREM set01 2
(integer) 1
127.0.0.1:6379> SMEMBERS set01
1) "1"
2) "3"
127.0.0.1:6379> sadd set01 1 2 3 4 5 6 7 8
(integer) 6
127.0.0.1:6379> SMEMBERS set01
1) "1"
2) "2"
3) "3"
4) "4"
5) "5"
6) "6"
7) "7"
8) "8"
127.0.0.1:6379> SRANDMEMBER set01 5
1) "6"
2) "1"
3) "2"
4) "5"
5) "3"
127.0.0.1:6379> spop set01
"4"
127.0.0.1:6379> SMEMBERS set01
1) "1"
2) "2"
3) "3"
4) "5"
5) "6"
6) "7"
7) "8"
127.0.0.1:6379> smove set01 set02 8
(integer) 1
127.0.0.1:6379> SMEMBERS set02
1) "8"
127.0.0.1:6379> SMEMBERS set01
1) "1"
2) "2"
3) "3"
4) "5"
5) "6"
6) "7"
127.0.0.1:6379> SADD set01 1 2 3 4 5
(integer) 5
127.0.0.1:6379> SADD set02 1 2 3 a b
(integer) 5
127.0.0.1:6379> SDIFF set01 set02
1) "4"
2) "5"
127.0.0.1:6379> SINTER set01 set02
1) "1"
2) "2"
3) "3"
127.0.0.1:6379> SUNION set01 set02
1) "2"
2) "1"
3) "4"
4) "b"
5) "a"
6) "5"
7) "3"
A Redis hash is a collection of field-value pairs.
It is similar to a Map in Java.
A Redis hash is a mapping of String fields to String values.
Hashes are especially well suited to storing objects.
127.0.0.1:6379> set str01 v1
OK
127.0.0.1:6379> get str01
"v1"
Compared with a flat string, a hash maps naturally onto JSON.
127.0.0.1:6379> hset user name geek
(integer) 1
127.0.0.1:6379> hget user name
"geek"
127.0.0.1:6379> HSET customer id 11 name zh3 age 25
(integer) 3
127.0.0.1:6379> HGET customer id
"11"
127.0.0.1:6379> HGET customer name
"zh3"
127.0.0.1:6379> HGETALL customer
1) "id"
2) "11"
3) "name"
4) "zh3"
5) "age"
6) "25"
127.0.0.1:6379> HLEN customer
(integer) 3
127.0.0.1:6379> HEXISTS customer id
(integer) 1
127.0.0.1:6379> HEXISTS customer add
(integer) 0
127.0.0.1:6379> hkeys customer
1) "id"
2) "name"
3) "age"
127.0.0.1:6379> HVALS customer
1) "11"
2) "zh3"
3) "25"
127.0.0.1:6379> HGET customer age
"25"
127.0.0.1:6379> HINCRBY customer age 2
(integer) 27
127.0.0.1:6379> hset customer score 91
(integer) 1
127.0.0.1:6379> HINCRBYFLOAT customer score 0.5
"91.5"
127.0.0.1:6379> hset customer age 18
(integer) 0
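A Jedis sketch for the hash commands; hmset takes a Java Map, which is what makes hashes comfortable for objects:

~~~
import java.util.HashMap;
import java.util.Map;

import redis.clients.jedis.Jedis;

public class HashCommandsDemo {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("192.168.223.129", 6379);
        Map<String, String> customer = new HashMap<>();
        customer.put("id", "11");
        customer.put("name", "zh3");
        customer.put("age", "25");
        jedis.hmset("customer", customer);                    // set several fields at once.
        System.out.println(jedis.hget("customer", "name"));   // "zh3"
        System.out.println(jedis.hgetAll("customer"));        // the whole map.
        System.out.println(jedis.hlen("customer"));           // 3 fields.
        System.out.println(jedis.hexists("customer", "add")); // false
        System.out.println(jedis.hkeys("customer"));          // field names.
        System.out.println(jedis.hvals("customer"));          // field values.
        jedis.hincrBy("customer", "age", 2);                  // 25 -> 27
        jedis.close();
    }
}
~~~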
A Redis zset, like a set, is a collection of String elements with no duplicates.
The difference is that every element is associated with a score of type double.
Redis uses the scores to keep the members sorted from smallest to largest.
zset members are unique, but scores may repeat.
127.0.0.1:6379> zadd zset01 60 v1 70 v2 80 v3 90 v4 100 v5
(integer) 5
127.0.0.1:6379> ZRANGE zset01 0 -1
1) "v1"
2) "v2"
3) "v3"
4) "v4"
5) "v5"
127.0.0.1:6379> ZRANGE zset01 0 -1 withscores
1) "v1"
2) "60"
3) "v2"
4) "70"
5) "v3"
6) "80"
7) "v4"
8) "90"
9) "v5"
10) "100"
127.0.0.1:6379> ZRANGEBYSCORE zset01 70 90
1) "v2"
2) "v3"
3) "v4"
A ( prefix makes the bound exclusive (not included):
127.0.0.1:6379> ZRANGEBYSCORE zset01 (70 90
1) "v3"
2) "v4"
limit 1 2: start from the second match and take 2.
127.0.0.1:6379> ZRANGEBYSCORE zset01 60 90 limit 1 2
1) "v2"
2) "v3"
127.0.0.1:6379> ZREM zset01 v5
(integer) 1
127.0.0.1:6379> ZRANGE zset01 0 -1
1) "v1"
2) "v2"
3) "v3"
4) "v4"
127.0.0.1:6379> ZCARD zset01
(integer) 4
127.0.0.1:6379> ZCOUNT zset01 60 80
(integer) 3
127.0.0.1:6379> ZRANK zset01 v4
(integer) 3
127.0.0.1:6379> ZSCORE zset01 v4
"90"
127.0.0.1:6379> ZREVRANGE zset01 0 -1
1) "v4"
2) "v3"
3) "v2"
4) "v1"
127.0.0.1:6379> set k1 v1
OK
127.0.0.1:6379> set k2 v2
OK
127.0.0.1:6379> exists k1
(integer) 1
127.0.0.1:6379> move k2 1
(integer) 1
127.0.0.1:6379> exists k2
(integer) 0
127.0.0.1:6379> select 1
OK
127.0.0.1:6379[1]> keys *
1) "k2"
A ttl of -1 means the key never expires; -2 means it has already expired.
127.0.0.1:6379> ttl k2
(integer) -1
127.0.0.1:6379> expire k2 10
(integer) 1
127.0.0.1:6379> ttl k2
(integer) 8
127.0.0.1:6379> ttl k2
(integer) -2
127.0.0.1:6379> get k1
"v1"
127.0.0.1:6379> set k1 geek
OK
127.0.0.1:6379> get k1
"geek"
127.0.0.1:6379>
127.0.0.1:6379> type k1
string
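The generic key commands from Jedis, same assumptions:

~~~
import redis.clients.jedis.Jedis;

public class KeyCommandsDemo {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("192.168.223.129", 6379);
        jedis.set("k1", "v1");
        jedis.set("k2", "v2");
        System.out.println(jedis.exists("k1")); // true
        jedis.move("k2", 1);                    // move k2 to DB 1.
        jedis.expire("k1", 10);                 // 10-second TTL.
        System.out.println(jedis.ttl("k1"));    // seconds left; -1 no TTL, -2 expired.
        System.out.println(jedis.type("k1"));   // "string"
        jedis.close();
    }
}
~~~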
Back up the config before editing (redis.conf ——> redis.conf.bak).
# Note on units: when memory size is needed, it is possible to specify
# it in the usual form of 1k 5GB 4M and so forth:
#
# 1k => 1000 bytes
# 1kb => 1024 bytes
# 1m => 1000000 bytes
# 1mb => 1024*1024 bytes
# 1g => 1000000000 bytes
# 1gb => 1024*1024*1024 bytes
#
# units are case insensitive so 1GB 1Gb 1gB are all the same.
Like Struts2 configuration, redis.conf acts as the master file and can include other config files.
################################## INCLUDES ###################################

# Include one or more other config files here. This is useful if you
# have a standard template that goes to all Redis servers but also need
# to customize a few per-server settings. Include files can include
# other files, so use this wisely.
#
# Notice option "include" won't be rewritten by command "CONFIG REWRITE"
# from admin or Redis Sentinel. Since Redis always uses the last processed
# line as value of a configuration directive, you'd better put includes
# at the beginning of this file to avoid overwriting config change at runtime.
#
# If instead you are interested in using includes to override configuration
# options, it is better to use include as the last line.
#
# include /path/to/local.conf
# include /path/to/other.conf
daemonize no —> yes
supervised no
pidfile /var/run/redis_6379.pid
port 6379
tcp-backlog
In a high-concurrency environment you need a high backlog value to avoid slow-client connection problems. Note that the Linux kernel silently truncates it to the value of /proc/sys/net/core/somaxconn, so raise both somaxconn and tcp_max_syn_backlog to get the desired effect.
# TCP listen() backlog.
#
# In high requests-per-second environments you need an high backlog in order
# to avoid slow clients connections issues. Note that the Linux kernel
# will silently truncate it to the value of /proc/sys/net/core/somaxconn so
# make sure to raise both the value of somaxconn and tcp_max_syn_backlog
# in order to get the desired effect.
tcp-backlog 511
################################## NETWORK #####################################
# By default, if no "bind" configuration directive is specified, Redis listens
# for connections from all the network interfaces available on the server.
# It is possible to listen to just one or multiple selected interfaces using
# the "bind" configuration directive, followed by one or more IP addresses.
#
# Examples:
#
# bind 192.168.1.100 10.0.0.1
# bind 127.0.0.1 ::1
#
# ~~~ WARNING ~~~ If the computer running Redis is directly exposed to the
# internet, binding to all the interfaces is dangerous and will expose the
# instance to everybody on the internet. So by default we uncomment the
# following bind directive, that will force Redis to listen only into
# the IPv4 loopback interface address (this means Redis will be able to
# accept connections only from clients running into the same computer it
# is running).
#
# IF YOU ARE SURE YOU WANT YOUR INSTANCE TO LISTEN TO ALL THE INTERFACES
# JUST COMMENT THE FOLLOWING LINE.
# ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
#bind 127.0.0.1
(Commented out here so that clients on other machines, such as the Jedis demos later on, can connect.)
# Close the connection after a client is idle for N seconds (0 to disable)
timeout 0
# TCP keepalive.
#
# If non-zero, use SO_KEEPALIVE to send TCP ACKs to clients in absence
# of communication. This is useful for two reasons:
#
# 1) Detect dead peers.
# 2) Take the connection alive from the point of view of network
# equipment in the middle.
#
# On Linux, the specified value (in seconds) is the period used to send ACKs.
# Note that to close the connection the double of the time is needed.
# On other kernels the period depends on the kernel configuration.
#
# A reasonable value for this option is 300 seconds, which is the new
# Redis default starting with Redis 3.2.1.
tcp-keepalive 300
loglevel: logging verbosity.
logfile: log file name.
[root@localhost redis-4.0.11]# redis-server redis.conf
1723:C 16 Mar 02:00:50.549 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
1723:C 16 Mar 02:00:50.549 # Redis version=4.0.11, bits=64, commit=00000000, modified=0, pid=1723, just started
1723:C 16 Mar 02:00:50.549 # Configuration loaded
[root@localhost redis-4.0.11]# redis-cli
127.0.0.1:6379> ping
PONG
127.0.0.1:6379> CONFIG GET dir
1) "dir"
2) "/usr/local/bin"
127.0.0.1:6379> CONFIG GET requirepass
1) "requirepass"
2) ""
Set a password, and ping no longer gets its PONG until you authenticate:
127.0.0.1:6379> CONFIG SET requirepass 123.
OK
127.0.0.1:6379> ping
(error) NOAUTH Authentication required.
127.0.0.1:6379> auth 123.
OK
127.0.0.1:6379> ping
PONG
Remove the password:
127.0.0.1:6379> CONFIG SET requirepass ""
OK
127.0.0.1:6379> ping
PONG
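From Jedis, auth must come before any other command once requirepass is set. A sketch (using the password "123." set above):

~~~
import redis.clients.jedis.Jedis;

public class AuthDemo {
    public static void main(String[] args) {
        Jedis jedis = new Jedis("192.168.223.129", 6379);
        jedis.auth("123.");                                  // AUTH, or every call fails with NOAUTH.
        System.out.println(jedis.ping());                    // PONG
        System.out.println(jedis.configGet("requirepass"));  // CONFIG GET requirepass
        jedis.configSet("requirepass", "");                  // remove the password again.
        jedis.close();
    }
}
~~~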
LRU ——> Least Recently Used.
volatile-lru ——> evict keys by LRU, but only keys with an expire set.
allkeys-lru ——> evict any key by LRU.
volatile-random ——> evict random keys, but only keys with an expire set.
allkeys-random ——> evict random keys.
volatile-ttl ——> evict the keys with the smallest TTL, i.e. the ones about to expire.
noeviction ——> never evict; just return an error on write operations.
# MAXMEMORY POLICY: how Redis will select what to remove when maxmemory
# is reached. You can select among five behaviors:
#
# volatile-lru -> Evict using approximated LRU among the keys with an expire set.
# allkeys-lru -> Evict any key using approximated LRU.
# volatile-lfu -> Evict using approximated LFU among the keys with an expire set.
# allkeys-lfu -> Evict any key using approximated LFU.
# volatile-random -> Remove a random key among the ones with an expire set.
# allkeys-random -> Remove a random key, any key.
# volatile-ttl -> Remove the key with the nearest expire time (minor TTL)
# noeviction -> Don't evict anything, just return an error on write operations.
#
# LRU means Least Recently Used
# LFU means Least Frequently Used
#
# Both LRU, LFU and volatile-ttl are implemented using approximated
# randomized algorithms.
#
# Note: with any of the above policies, Redis will return an error on write
# operations, when there are no suitable keys for eviction.
#
# At the date of writing these commands are: set setnx setex append
# incr decr rpush lpush rpushx lpushx linsert lset rpoplpush sadd
# sinter sinterstore sunion sunionstore sdiff sdiffstore zadd zincrby
# zunionstore zinterstore hset hsetnx hmset hincrby incrby decrby
# getset mset msetnx exec sort
#
# The default is:
#
# maxmemory-policy noeviction
Sample size. The LRU algorithm and the minimal-TTL algorithm are approximations, not exact algorithms, so the sample size is tunable.
By default Redis checks that many keys and evicts the least recently used among them.
# LRU, LFU and minimal TTL algorithms are not precise algorithms but approximated
# algorithms (in order to save memory), so you can tune it for speed or
# accuracy. For default Redis will check five keys and pick the one that was
# used less recently, you can change the sample size using the following
# configuration directive.
#
# The default of 5 produces good enough results. 10 Approximates very closely
# true LRU but costs more CPU. 3 is faster but not very accurate.
#
# maxmemory-samples 5
RDB writes a snapshot of the in-memory dataset to disk at the configured intervals.
Jargon: SnapShot ——> on recovery, the snapshot is read straight back into memory.
~
For persistence Redis forks a separate child process. The data is first written to a temporary file; when the snapshot is complete, the temporary file replaces the previous dump file.
During the whole process the main process performs no disk I/O at all, which guarantees very high performance.
~
If you need to recover a large dataset and are not very sensitive to the completeness of the recovered data, RDB is more efficient than AOF. RDB's drawback: the data written after the last snapshot can be lost.
fork() duplicates the current process: the new process starts with all the same data (variables, environment variables, program counter, ...) as the original, but it is a brand-new process running as a child of the original.
If the parent process is large, this can waste a lot of memory.
################################ SNAPSHOTTING ################################
#
# Save the DB on disk:
#
# save <seconds> <changes>
#
# Will save the DB if both the given number of seconds and the given
# number of write operations against the DB occurred.
#
# In the example below the behaviour will be to save:
# after 900 sec (15 min) if at least 1 key changed
# after 300 sec (5 min) if at least 10 keys changed
# after 60 sec if at least 10000 keys changed
#
# Note: you can disable saving completely by commenting out all "save" lines.
#
# It is also possible to remove all the previously configured save
# points by adding a save directive with a single empty string argument
# like in the following example:
#
# save ""
save 900 1
save 300 10
save 60 10000
The defaults:
10000 changes within 1 minute,
10 changes within 5 minutes,
1 change within 15 minutes.
Modify the config file:
save 120 10
i.e. 10 write operations within 2 minutes will generate dump.rdb.
Note: flushall also counts as writes; whenever the data changes more than 10 times within 120 s, a new dump.rdb is written, overwriting the previous one, so back up dump.rdb regularly.
The save command writes dump.rdb immediately. If a background save fails, the foreground stops accepting writes:
# By default Redis will stop accepting writes if RDB snapshots are enabled
# (at least one save point) and the latest background save failed.
# This will make the user aware (in a hard way) that data is not persisting
# on disk properly, otherwise chances are that no one will notice and some
# disaster will happen.
#
# If the background saving process will start working again Redis will
# automatically allow writes again.
#
# However if you have setup your proper monitoring of the Redis server
# and persistence, you may want to disable this feature so that Redis will
# continue to work as usual even if there are problems with disk,
# permissions, and so forth.
stop-writes-on-bgsave-error yes
Snapshots written to disk are compressed with the LZF algorithm, at some CPU cost.
# Compress string objects using LZF when dump .rdb databases?
# For default that's set to 'yes' as it's almost always a win.
# If you want to save some CPU in the saving child set it to 'no' but
# the dataset will likely be bigger if you have compressible values or keys.
rdbcompression yes
After a snapshot is stored, Redis validates it with a CRC64 checksum, adding roughly 10% overhead.
# Since version 5 of RDB a CRC64 checksum is placed at the end of the file.
# This makes the format more resistant to corruption but there is a performance
# hit to pay (around 10%) when saving and loading RDB files, so you can disable it
# for maximum performances.
#
# RDB files created with checksum disabled have a checksum of zero that will
# tell the loading code to skip the check.
rdbchecksum yes
# The filename where to dump the DB
dbfilename dump.rdb
# The working directory.
#
# The DB will be written inside this directory, with the filename specified
# above using the 'dbfilename' configuration directive.
#
# The Append Only File will also be created inside this directory.
#
# Note that you must specify a directory here, not a file name.
dir ./
- save: just saves; everything else is ignored and fully blocked.
- bgsave: Redis takes the snapshot asynchronously in the background,
while still responding to client requests.
The lastsave command returns the time of the last successful snapshot.
- flushall: also writes a dump.rdb, but an empty, meaningless one.
How to disable snapshotting:
in redis-cli,
config set save ""
[root@localhost redis-4.0.11]# ps -ef | grep redis
root 1838 1 0 02:40 ? 00:00:02 redis-server *:6379
root 1951 1846 0 03:25 pts/0 00:00:00 grep redis
[root@localhost redis-4.0.11]# lsof -i :6379
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
redis-ser 1838 root 6u IPv6 13836 0t0 TCP *:6379 (LISTEN)
redis-ser 1838 root 7u IPv4 13837 0t0 TCP *:6379 (LISTEN)
rdb loses the data written after the last snapshot.
↓ ↓ ↓
aof: loses at most about one second of data.
AOF logs every write operation: every write command Redis executes is appended to the file (reads are not recorded). The file is append-only and is never rewritten in place. On startup Redis reads this file to rebuild the data; in other words, on restart it replays the logged write commands from beginning to end to reconstruct the dataset.
Change appendonly no to yes.
############################## APPEND ONLY MODE ###############################
# By default Redis asynchronously dumps the dataset on disk. This mode is
# good enough in many applications, but an issue with the Redis process or
# a power outage may result into a few minutes of writes lost (depending on
# the configured save points).
#
# The Append Only File is an alternative persistence mode that provides
# much better durability. For instance using the default data fsync policy
# (see later in the config file) Redis can lose just one second of writes in a
# dramatic event like a server power outage, or a single write if something
# wrong with the Redis process itself happens, but the operating system is
# still running correctly.
#
# AOF and RDB persistence can be enabled at the same time without problems.
# If the AOF is enabled on startup Redis will load the AOF, that is the file
# with the better durability guarantees.
#
# Please check http://redis.io/topics/persistence for more information.
appendonly no
# The name of the append only file (default: "appendonly.aof")
appendfilename "appendonly.aof"
[root@localhost redis-4.0.11]# cat appendonly.aof
*2
$6
SELECT
$1
0
*3
$3
set
$2
k1
$2
v1
*3
$3
set
$2
k2
$2
v2
Note: FLUSHALL also counts as a write operation.
The default appendfsync is everysec.
# Redis supports three different modes:
#
# no: don't fsync, just let the OS flush the data when it wants. Faster.
# always: fsync after every write to the append only log. Slow, Safest.
# everysec: fsync only one time every second. Compromise.
#
always: synchronous persistence; every data change is immediately written to disk.
~ Worse performance, better data integrity.
everysec: the factory default. Asynchronous: fsync once per second, so if the machine dies within that second, that second of data is lost.
no-appendfsync-on-rewrite: whether appendfsync still applies during a rewrite. The default no is fine and keeps the data safe.
AOF only ever appends, so the file keeps growing. To avoid this, a rewrite mechanism was added:
when the AOF exceeds the configured threshold, Redis compacts its contents, keeping only the minimal set of commands that can rebuild the data; the bgrewriteaof command triggers this by hand.
When the AOF keeps growing too large, Redis forks a new child process to rewrite the file (again writing to a temporary file first and renaming it at the end). It walks the data in the child's memory, emitting a write command per record. The rewrite never reads the old AOF; it dumps the entire in-memory database as commands into a new AOF, much like taking a snapshot.
Redis records the AOF size at the last rewrite; with the default configuration, a rewrite triggers when the file has doubled since then and is larger than 64 MB.
If we mangle appendonly.aof (edit the file and sprinkle in some gibberish; its contents are just the write commands), the Redis server will not start.
[root@localhost redis-4.0.11]# vim appendonly.aof
[root@localhost redis-4.0.11]# redis-server redis.conf
[root@localhost ~]# redis-cli
Could not connect to Redis at 127.0.0.1:6379: Connection refused
Could not connect to Redis at 127.0.0.1:6379: Connection refused
not connected>
[root@localhost redis-4.0.11]# ps -ef | grep redis
root 2985 2894 0 11:40 pts/1 00:00:00 redis-cli
root 2987 2852 0 11:41 pts/0 00:00:00 grep redis
So redis-server is not running.
Usage:
[root@localhost bin]# ./redis-check-aof
Usage: ./redis-check-aof [--fix] <file.aof>
[root@localhost redis-4.0.11]# /usr/local/bin/redis-check-aof --fix appendonly.aof
0x 0: Expected prefix '*', got: 'a'
AOF analyzed: size=98, ok_up_to=0, diff=98
This will shrink the AOF from 98 bytes, with 98 bytes, to 0 bytes
Continue? [y/N]: y
Successfully truncated AOF
A transaction executes multiple commands at once; in essence it is a group of commands. All commands in a transaction are serialized and executed sequentially, with no other command interleaved; no queue-jumping.
https://redis.io/topics/transactions
Usage
A Redis transaction is entered using the MULTI command. The command always replies with OK. At this point the user can issue multiple commands. Instead of executing these commands, Redis will queue them. All the commands are executed once EXEC is called. Calling DISCARD instead will flush the transaction queue and will exit the transaction.
The following example increments keys foo and bar atomically.
- MULTI: marks the start of a transaction block.
- EXEC: executes all commands in the transaction block.
- DISCARD: cancels the transaction, abandoning every command queued in it.
- WATCH key [key ...]: watches one or more keys; if any of them is changed by another command before the transaction executes, the transaction is aborted.
- UNWATCH: cancels the WATCH on all keys.
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> set k1 v1
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> get k1
QUEUED
127.0.0.1:6379> set k3 v3
QUEUED
127.0.0.1:6379> EXEC
1) OK
2) OK
3) "v1"
4) OK
127.0.0.1:6379>
127.0.0.1:6379> set k1 v1
QUEUED
127.0.0.1:6379> set k2 v1
QUEUED
127.0.0.1:6379> DISCARD
OK
127.0.0.1:6379>
127.0.0.1:6379> multi
OK
127.0.0.1:6379> set k1 v1
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> setk3v3
(error) ERR unknown command `setk3v3`, with args beginning with:
127.0.0.1:6379> get k1
QUEUED
127.0.0.1:6379> EXEC
(error) EXECABORT Transaction discarded because of previous errors.
127.0.0.1:6379>
This time there is no (error) while the commands are queued;
the failure only shows up at EXEC, and the remaining commands still run:
127.0.0.1:6379> MULTI
OK
127.0.0.1:6379> incr k1
QUEUED
127.0.0.1:6379> set k2 v2
QUEUED
127.0.0.1:6379> set k3 v3
QUEUED
127.0.0.1:6379> set k5 v
QUEUED
127.0.0.1:6379> get k5
QUEUED
127.0.0.1:6379> EXEC
1) (error) ERR value is not an integer or out of range
2) OK
3) OK
4) OK
5) "v"
127.0.0.1:6379>
Initialize the credit card's available balance and debt.
No interference: WATCH first, then open MULTI,
so that both amount changes happen inside one transaction.
With interference: another client modifies the watched key and the transaction is aborted.
unwatch.
Once exec runs, every watch added before it is released.
- The watch command works like an optimistic lock. At commit time, if a key's value has been modified by another client, e.g. a list has been pushed/popped by someone else, the whole queued transaction is not executed.
- WATCH monitors one or more keys before the transaction executes; if any watched key changes after WATCH, the transaction run by EXEC is abandoned, and a Null multi-bulk reply is returned to tell the caller the transaction failed.
Pessimistic locking: assume someone will always modify the data, so lock before every read (a table lock).
Optimistic locking: assume nobody will modify the data, so don't lock; at update time, check whether the data changed (via a version-number mechanism). Optimistic locking suits read-heavy workloads and improves throughput.
To keep concurrency high without locking the whole table ——> row locks.
Add a version field to every row.
A and B both run update ... where version = 1.
Suppose A commits first: version becomes 2.
B, still updating, can no longer find a row where version = 1.
The submitted version must be greater than the row's current version for the update to apply.
Pessimistic: poor concurrency, good consistency.
Optimistic: good concurrency, weaker consistency.
Pub/sub is a message-communication pattern between processes:
the publisher (pub) sends messages and subscribers (sub) receive them.
Demo.
Client 1 subscribes to c1, c2, c3:
127.0.0.1:6379> SUBSCRIBE c1 c2 c3
Reading messages... (press Ctrl-C to quit)
1) "subscribe"
2) "c1"
3) (integer) 1
1) "subscribe"
2) "c2"
3) (integer) 2
1) "subscribe"
2) "c3"
3) (integer) 3
[root@localhost ~]# redis-cli
127.0.0.1:6379>
127.0.0.1:6379> PUBLISH c2 hello-redis
(integer) 1
127.0.0.1:6379>
1) "message"
2) "c2"
3) "hello-redis"
[root@localhost redis-4.0.11]# redis-cli
127.0.0.1:6379> PSUBSCRIBE news*
Reading messages... (press Ctrl-C to quit)
1) "psubscribe"
2) "news*"
3) (integer) 1
1) "pmessage"
2) "news*"
3) "news1"
4) "redis_news"
1) "pmessage"
2) "news*"
3) "news13"
4) "redis_news"
127.0.0.1:6379> PUBLISH news1 redis_news
(integer) 1
127.0.0.1:6379> PUBLISH news13 redis_news
(integer) 1
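The same demo from Jedis. subscribe() blocks its thread, so the subscriber gets its own thread; the one-second sleep is just a crude way to let the subscription register before publishing:

~~~
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPubSub;

public class PubSubDemo {
    public static void main(String[] args) throws InterruptedException {
        // Subscriber thread: subscribe() blocks until unsubscribed.
        new Thread(() -> {
            Jedis sub = new Jedis("192.168.223.129", 6379);
            sub.subscribe(new JedisPubSub() {
                @Override
                public void onMessage(String channel, String message) {
                    System.out.println(channel + " -> " + message);
                    unsubscribe();// let the demo terminate after one message.
                }
            }, "c2");
            sub.close();
        }).start();

        Thread.sleep(1000);// give the subscription time to register.
        Jedis pub = new Jedis("192.168.223.129", 6379);
        System.out.println(pub.publish("c2", "hello-redis"));// number of receivers.
        pub.close();
    }
}
~~~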
https://redis.io/topics/replication
Master-slave replication: when the master's data is updated, it is synchronized to slaves automatically, according to configuration and policy. The master is mainly for writes, the slaves mainly for reads.
Read/write splitting.
Disaster recovery.
Configure the slaves, not the master.
Slave configuration: slaveof <master-ip> <master-port>.
Config items:
port 6380
daemonize yes
pidfile /var/run/redis_6380.pid
logfile "6380.log"
dbfilename dump-6380.rdb
[root@localhost redis-4.0.11]# redis-server redis-6379.conf
[root@localhost redis-4.0.11]# redis-server redis-6380.conf
[root@localhost redis-4.0.11]# redis-server redis-6381.conf
[root@localhost redis-4.0.11]# redis-cli -p 6379
127.0.0.1:6379>
[root@localhost redis-4.0.11]# redis-cli -p 6380
127.0.0.1:6380>
[root@localhost redis-4.0.11]# redis-cli -p 6381
127.0.0.1:6381>
Check the status; all three are masters.
127.0.0.1:6379> info replication
# Replication
role:master
connected_slaves:0
master_replid:52255e3fe90866acc9384656b9cee1bf36e34e82
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
127.0.0.1:6380> SLAVEOF 127.0.0.1 6379
OK
127.0.0.1:6381> SLAVEOF 127.0.0.1 6379
OK
At this point all of the master's data is synced to the slaves (including data written before they attached).
127.0.0.1:6380> info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up
master_last_io_seconds_ago:5
master_sync_in_progress:0
slave_repl_offset:182
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:92c548584478de05760ccddc9a68eb9f18202174
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:182
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:182
127.0.0.1:6380>
127.0.0.1:6380> set k6 v6
(error) READONLY You can't write against a read only slave.
If the master dies, a slave stays a slave; it does not take over.
If a slave dies, on reconnecting it starts over as a standalone master (it needs slaveof 127.0.0.1 6379 again).
"Decentralization":
a slave can serve as the master of the next slave, effectively relieving the master's write pressure.
Re-pointing a slave midway clears its previous data and copies the newest data from the new master from scratch.
slaveof <new-master-ip> <new-master-port>.
127.0.0.1:6380> SLAVEOF 127.0.0.1 6379
OK Already connected to specified master
127.0.0.1:6380>
127.0.0.1:6381> SLAVEOF 127.0.0.1 6380
OK
127.0.0.1:6380> info replication
# Replication
role:slave
master_host:127.0.0.1
master_port:6379
master_link_status:up
master_last_io_seconds_ago:10
master_sync_in_progress:0
slave_repl_offset:1970
slave_priority:100
slave_read_only:1
connected_slaves:1
slave0:ip=127.0.0.1,port=6381,state=online,offset=1970,lag=0
master_replid:92c548584478de05760ccddc9a68eb9f18202174
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:1970
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:1970
127.0.0.1:6380>
slaveof no one turns a slave back into a master.
Sentinel is the automated version of that: it monitors the master in the background and, if the master fails, promotes a slave to master based on votes.
If 6379 dies, one of 6380 and 6381 is elected; when 6379 comes back, it is demoted to slave.
# sentinel monitor <master-name> <ip> <redis-port> <quorum>
#
# Tells Sentinel to monitor this master, and to consider it in O_DOWN
# (Objectively Down) state only if at least <quorum> sentinels agree.
#
# Note that whatever is the ODOWN quorum, a Sentinel will require to
# be elected by the majority of the known Sentinels in order to
# start a failover, so no failover can be performed in minority.
#
# Slaves are auto-discovered, so you don't need to specify slaves in
# any way. Sentinel itself will rewrite this configuration file adding
# the slaves using additional configuration options.
# Also note that the configuration file is rewritten when a
# slave is promoted to master.
#
# Note: master name should not include special characters or spaces.
# The valid charset is A-z 0-9 and the three characters ".-_".
#sentinel monitor mymaster 127.0.0.1 6379 2
sentinel monitor mymaster 127.0.0.1 6379 2
sentinel monitor <monitored-master-name (your choice)> 127.0.0.1 6379 2
// The trailing 2 is the quorum: after the master goes down, this many sentinels must agree (vote) before a slave is promoted to master.
[root@localhost redis-4.0.11]# redis-sentinel sentinel.conf
3940:X 16 Mar 19:17:00.845 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
3940:X 16 Mar 19:17:00.846 # Redis version=4.0.11, bits=64, commit=00000000, modified=0, pid=3940, just started
3940:X 16 Mar 19:17:00.846 # Configuration loaded
3940:X 16 Mar 19:17:00.847 * Increased maximum number of open files to 10032 (it was originally set to 1024).
_._
_.-``__ ''-._
_.-`` `. `_. ''-._ Redis 4.0.11 (00000000/0) 64 bit
.-`` .-```. ```\/ _.,_ ''-._
( ' , .-` | `, ) Running in sentinel mode
|`-._`-...-` __...-.``-._|'` _.-'| Port: 26379
| `-._ `._ / _.-' | PID: 3940
`-._ `-._ `-./ _.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' | http://redis.io
`-._ `-._`-.__.-'_.-' _.-'
|`-._`-._ `-.__.-' _.-'_.-'|
| `-._`-._ _.-'_.-' |
`-._ `-._`-.__.-'_.-' _.-'
`-._ `-.__.-' _.-'
`-._ _.-'
`-.__.-'
3940:X 16 Mar 19:17:00.849 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.
3940:X 16 Mar 19:17:00.859 # Sentinel ID is c1e5809a7c8bf1425bc09362b7fccc74383a6471
3940:X 16 Mar 19:17:00.859 # +monitor master mymaster 127.0.0.1 6379 quorum 1
3940:X 16 Mar 19:17:00.860 * +slave slave 127.0.0.1:6380 127.0.0.1 6380 @ mymaster 127.0.0.1 6379
3940:X 16 Mar 19:17:00.861 * +slave slave 127.0.0.1:6381 127.0.0.1 6381 @ mymaster 127.0.0.1 6379
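With Sentinel running, a client should ask Sentinel for the current master rather than hard-coding 6379. A hedged sketch with Jedis's JedisSentinelPool (master name mymaster and sentinel port 26379 match the setup above; the pool follows failovers to the new master automatically):

~~~
import java.util.HashSet;
import java.util.Set;

import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisSentinelPool;

public class SentinelClientDemo {
    public static void main(String[] args) {
        // Sentinel addresses as "host:port"; a single sentinel here, as above.
        Set<String> sentinels = new HashSet<>();
        sentinels.add("192.168.223.129:26379");
        JedisSentinelPool pool = new JedisSentinelPool("mymaster", sentinels);
        try (Jedis jedis = pool.getResource()) {
            jedis.set("k1", "v1");              // always lands on the current master.
            System.out.println(jedis.get("k1"));
        }
        pool.close();                           // release all pooled connections.
    }
}
~~~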
<dependencies>
    <dependency>
        <groupId>redis.clients</groupId>
        <artifactId>jedis</artifactId>
        <version>3.1.0</version>
    </dependency>
</dependencies>
package com.geek.jedisDemo;
import redis.clients.jedis.Jedis;
public class JedisDemo {
public static void main(String[] args) {
Jedis jedis = new Jedis("192.168.223.129");
System.out.println(jedis.ping());
}
}
~~~
PONG
Process finished with exit code 0
package com.geek.jedisDemo;
import redis.clients.jedis.Jedis;
import java.util.Set;
public class JedisDemo {
public static void main(String[] args) {
Jedis jedis = new Jedis("192.168.223.129");
// System.out.println(jedis.ping());
jedis.set("k1", "v1");
jedis.set("k2", "v2");
jedis.set("k3", "v3");
String k1 = jedis.get("k1");
System.out.println("k1 = " + k1);
Set<String> keys = jedis.keys("*");
System.out.println("keys = " + keys);
System.out.println(keys.size());
}
}
package com.geek.jedisDemo;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Transaction;
import java.util.List;
public class JedisTransaction {
public static void main(String[] args) {
Jedis jedis = new Jedis("192.168.223.129", 6379);
Transaction transaction = jedis.multi();
transaction.set("k4", "v4");
transaction.set("k5", "v5");
transaction.set("k6", "v6");
List<Object> exec = transaction.exec();
// transaction.discard();
System.out.println("exec = " + exec);
}
}
~~~
exec = [OK, OK, OK]
Process finished with exit code 0
package com.geek.jedisDemo;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.Response;
import redis.clients.jedis.Transaction;
public class TestTX {
public static void main(String[] args) throws InterruptedException {
TestTX testTX = new TestTX();
boolean retValue = testTX.transMethod();
System.out.println("~ ~ ~ ~ ~ ~ ~");
System.out.println("retValue = " + retValue);
System.out.println("~ ~ ~ ~ ~ ~ ~");
}
public boolean transMethod() throws InterruptedException {
Jedis jedis = new Jedis("192.168.223.129", 6379);
int balance;// available balance.
int debt;// debt owed.
int amtToSubtract = 10;// amount to deduct.
String balance1 = jedis.watch("balance");
// jedis.set("balance", "5");// simulate another client modifying the entry.
Thread.sleep(7000);
balance = Integer.parseInt(jedis.get("balance"));
if (balance < amtToSubtract) {
jedis.unwatch();
System.out.println("modified.");
return false;
} else {
System.out.println("~ ~ ~ ~ ~ ~ ~ transaction.");
Transaction transaction = jedis.multi();
Response<Long> longResponse = transaction.decrBy("balance", amtToSubtract);
System.out.println("longResponse = " + longResponse);
transaction.incrBy("debt", amtToSubtract);
transaction.exec();
balance = Integer.parseInt(jedis.get("balance"));
debt = Integer.parseInt(jedis.get("debt"));
System.out.println("balance = " + balance);
System.out.println("debt = " + debt);
return true;
}
}
}
package com.geek.jedisDemo;
import redis.clients.jedis.Jedis;
public class TestMS {
public static void main(String[] args) {
Jedis jedis_M = new Jedis("192.168.223.129", 6379);
Jedis jedis_S = new Jedis("192.168.223.129", 6380);
String slaveof = jedis_S.slaveof("192.168.223.129", 6379);
System.out.println("slaveof = " + slaveof);
jedis_M.set("name", "geek");
String name = jedis_S.get("name");
System.out.println("name = " + name);
}
}
The first get of name can come back null (replication has not completed yet).
package com.geek.jedisDemo;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
import redis.clients.jedis.JedisPoolConfig;
public class JedisPoolUtils {
private static volatile JedisPool jedisPool = null;
private JedisPoolUtils() {
}
public static JedisPool getJedisPoolInstance() {
if (null == jedisPool) {
synchronized (JedisPoolUtils.class) {// double-checked locking: pairs with the volatile field above.
if (null == jedisPool) {
JedisPoolConfig jedisPoolConfig = new JedisPoolConfig();
jedisPoolConfig.setMaxTotal(1000);
jedisPoolConfig.setMaxIdle(32);
jedisPoolConfig.setMaxWaitMillis(100 * 1000);
jedisPoolConfig.setTestOnBorrow(true);
jedisPool = new JedisPool(jedisPoolConfig, "192.168.223.129", 6379);
}
}
}
return jedisPool;
}
public static void release(JedisPool jedisPool, Jedis jedis) {
if (null != jedis) {
jedis.close();// in Jedis 3.x, close() returns a pooled connection to its pool.
}
}
}
package com.geek.jedisDemo;
import redis.clients.jedis.Jedis;
import redis.clients.jedis.JedisPool;
public class TestPool {
public static void main(String[] args) {
JedisPool jedisPool = JedisPoolUtils.getJedisPoolInstance();
JedisPool jedisPool2 = JedisPoolUtils.getJedisPoolInstance();
// System.out.println("jedisPool2 = " + jedisPool2);
Jedis jedis = null;
try {
jedis = jedisPool.getResource();
jedis.set("jp", "jp");
} catch (Exception e) {
e.printStackTrace();
} finally {
JedisPoolUtils.release(jedisPool, jedis);
}
}
}