The goal of Redis Virtual Memory (VM) is to swap infrequently-accessed data from RAM to disk, without drastically changing the performance characteristics of the database. This enables a single instance of Redis to support datasets that are larger than main memory.
Virtual Memory is a very important feature of most modern operating systems. However, for efficiency reasons, Redis does not use the OS-supplied VM facilities and instead implements its own system. The rationale is as follows:
There are a few main limitations of Redis Virtual Memory:
When Virtual Memory is enabled, Redis stores the last time that each object was accessed. Additionally, Redis maintains a swap file that is divided into pages of configurable size; the page allocation table is kept in memory, so each page of the swap file costs just 1 bit of actual RAM.
When Redis is out of memory and there is something to swap, a few objects are randomly sampled from the dataset. The object with the highest “swappability factor” is the one that is swapped to disk:
Swappability = Object.age * Logarithm(Object.used_memory)
Redis maintains a pool of I/O threads that are solely responsible for loading values from disk into RAM.
When a request arrives, the command is read and the list of keys is examined. If any of the keys have been swapped to disk, the client is temporarily suspended while an I/O job is enqueued. Finally, once all keys that are needed by a given client are loaded, then the client resumes execution of the command.
From a configuration perspective, the vm-max-memory setting can be used to set the maximum amount of memory that Redis can use before it swaps to disk.
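As a sketch, a VM-era configuration might look like the following (vm-max-memory is the setting discussed above; the other vm-* directives shown come from the sample redis.conf of the same era, and all values are purely illustrative):

# Enable Virtual Memory and choose where swapped values are written.
vm-enabled yes
vm-swap-file /tmp/redis.swap

# Use at most ~256 MB of RAM before values start being swapped to disk.
vm-max-memory 268435456

# Page size (in bytes) and page count: the swap file can hold
# vm-page-size * vm-pages bytes, and each page costs 1 bit of RAM.
vm-page-size 32
vm-pages 134217728

# Number of I/O threads used to move values between disk and RAM.
vm-max-threads 4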
For more detail, see Redis Virtual Memory: the Story and the Code.
Redis has native support for publish/subscribe.
In addition to supporting exact matches on channel names, it is also possible to subscribe to a pattern. In this way, subscribers do not need to know the exact names of all channels a priori, thereby increasing the flexibility of this messaging mechanism.
Although pub/sub may seem like an odd fit, Redis' internals are very well suited to this feature. Furthermore, pub/sub brings with it numerous advantages. In particular, it is highly convenient for the use cases of a large class of modern web applications and, with some creativity, can be used to compensate for the lack of native scripting support within Redis.
Imagine the scenario where a news-related site needs to update the cached copy of its home page every time that a new article is published.
The background cache worker process subscribes to all channels that begin with ‘new.article.’:
redis> PSUBSCRIBE new.article.*
The article publishing process creates a new technology article (in this example, this article has ID ‘1021’), adds the article’s ID to the set of all technology articles, and publishes the article’s ID to the ‘new.article.technology’ channel:
redis> MULTI
OK
redis> SET article.technology.1021 "In today's technology news, ..."
QUEUED
redis> SADD article.technology 1021
QUEUED
redis> PUBLISH new.article.technology 1021
QUEUED
redis> EXEC
1. OK
2. (integer) 1
3. (integer) 1
At this point, the background cache worker process will receive a message and know immediately that a new technology article was published, subsequently executing the appropriate callback to re-generate the home page.
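For reference, in the worker's redis-cli session (opened with the PSUBSCRIBE command shown earlier), the delivery arrives as a "pmessage" reply containing the matched pattern, the channel, and the payload, roughly as follows (exact formatting varies between redis-cli versions):

1. "pmessage"
2. "new.article.*"
3. "new.article.technology"
4. "1021"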
Redis is extremely flexible and highly usable in a number of different scenarios.
I see Redis definitely more as a flexible tool than as a solution specialized to solve a specific problem: his mixed soul of cache, store, and messaging server shows this very well.
Salvatore Sanfilippo
A small sampling of potential applications:
Caching (particularly for web applications) is likely Redis' most common use case. For details on configuring Redis as an LRU cache, see here.
Interestingly, despite memcached’s dominance in this area, plain key-value stores (i.e. those without support for data types like lists and sets) are at a disadvantage when acting as a web application cache.
For example, the resources returned from requests to web apps are typically composed of lists (lists of posts, lists of comments, lists of friends, etc.). With plain key-value stores, these lists are almost always stored as single units (“blobs”). This makes very common list-related operations (adding an element, fetching the first ten items, deleting the last item, and so on) very inefficient, because the entire list must be repeatedly serialized and deserialized within the application server. Furthermore, atomic updates of these lists are impossible without an additional mutual-exclusion mechanism. (Redis, with native support for lists, can perform these operations efficiently and atomically.)
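To sketch the contrast (the key name comments.article:1021 is invented for illustration), each of those list operations maps onto a single atomic Redis command that runs entirely on the server:

redis> RPUSH comments.article:1021 "First comment"
(integer) 1
redis> RPUSH comments.article:1021 "Second comment"
(integer) 2
redis> LRANGE comments.article:1021 0 9
1. "First comment"
2. "Second comment"
redis> RPOP comments.article:1021
"Second comment"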
This flexibility enables other cache-related advantages. For example:
One potential use for Redis is as a smarter replacement for memcached. A common challenge with caching systems is de-caching things based on dependencies - if a blog entry tagged with "redis" and "python" has its title updated, the cache for both the entry page and the "redis" and "python" tag pages needs to be cleared. Redis sets could be used to keep track of dependencies and hence take a much more finely grained approach to cache invalidation.
Simon Willison, Redis Tutorial
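A hedged sketch of that idea (all key names are invented for illustration): each tag keeps a set of the cache keys that depend on it, so invalidating a tag is a set lookup followed by a DEL of the affected cache entries.

redis> SADD deps.tag:redis cache:entry:42
(integer) 1
redis> SADD deps.tag:redis cache:tag:redis
(integer) 1
redis> SMEMBERS deps.tag:redis
1. "cache:entry:42"
2. "cache:tag:redis"
redis> DEL cache:entry:42 cache:tag:redis
(integer) 2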
This is a more specific type of (web application) caching than described above. Here, responses for certain types of dynamic requests are delivered directly to the requestor via the cache, bypassing the application server entirely. (See here for a more detailed treatment of the subject.)
With the HttpRedis module, the Nginx web server can serve certain requests directly from Redis.
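A rough sketch of such a configuration, assuming the module's redis_pass directive and $redis_key variable; the key scheme, addresses, and @app fallback are invented for illustration, and a missing key simply falls through to the application server:

location / {
    # Look the URI up in Redis; on a miss (404) fall back to the app.
    set $redis_key "cache:$uri";
    redis_pass 127.0.0.1:6379;
    default_type text/html;
    error_page 404 502 504 = @app;
}

location @app {
    proxy_pass http://127.0.0.1:8080;
}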
Redis provides a very effective set of primitives for multiple processes on a single machine (or multiple machines connected via a network) to share state and communicate via message passing.
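As a brief sketch (the queue name jobs and the payload are invented for illustration), one process can block on a list that acts as a message queue:

redis> BLPOP jobs 0

while a second process, on another connection, pushes a message onto it:

redis> RPUSH jobs resize-image:1021
(integer) 1

at which point the blocked BLPOP returns immediately with the queue name and the payload:

1. "jobs"
2. "resize-image:1021"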
Redis can be used to compute “views” for tables in relational (or other NoSQL) databases that are difficult to query effectively, due to factors such as schema design, index design, data volume, write volume, etc.
For example, given a relational table that is used in an append-only fashion, a daemon could periodically pull down rows that it has not yet processed and “explode” the data into Redis, building out a number of lists, sets, sorted sets, counters, etc. (This is, effectively, hand-rolled index generation.) A reporting script can then perform operations against these data structures to compute all of the desired metrics.
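For instance (all key names are invented for illustration), a single row from an append-only orders table might be exploded into a hash, a per-customer index set, and a daily counter:

redis> HMSET order:5000 customer_id 17 total 99
OK
redis> SADD orders.by_customer:17 5000
(integer) 1
redis> INCR orders.count:2010-12-01
(integer) 1

A reporting script can then answer a question like “how many orders were placed on December 1st?” with a single GET, without touching the relational table at all.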
Resque (and alternate implementations, like Pyres) leverages Redis' capabilities very extensively.
A number of other job systems / task queues (e.g. Celery and Octobot) also support Redis.
Redis can be used to implement a lock service. As described earlier, SETNX is a key element of this locking algorithm.
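A minimal sketch of the acquire/release cycle (the lock key name is invented for illustration; a production-grade version also needs a way to reclaim locks held by crashed clients, for example by storing an expiry timestamp as the lock's value):

redis> SETNX lock.home_page_refresh 1
(integer) 1
redis> SETNX lock.home_page_refresh 1
(integer) 0
redis> DEL lock.home_page_refresh
(integer) 1

The first SETNX succeeds and acquires the lock; subsequent attempts by any client return 0 until the holder releases the lock with DEL.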
There is no query optimizer. Redis provides extremely fast primitives, but overall query performance is highly dependent on how the user chooses to arrange the data.
The most important things to remember are:
As a direct consequence, data will almost always be duplicated in several places.
For example, imagine the scenario of using Redis to store a book database. An efficient data layout will include storing the details of each book (title, author, publisher, ISBN, genre, etc.) in a Redis hash.
In order to query the database to answer questions like “what other books did this book’s author write?”, the data layout should also include a number of manually-designed indexes. In this case, sets like the following should be built, each of which contains the ID numbers of all applicable books:
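For instance (the author, genre, and year values are invented for illustration):

redis> SADD books.by_author:john.doe 1001
(integer) 1
redis> SADD books.by_genre:nonfiction 1001
(integer) 1
redis> SADD books.by_year:2002 1001
(integer) 1

Answering “what other books did this author write?” then becomes a single SMEMBERS against books.by_author:john.doe, and compound questions can be served with SINTER over several of these sets.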
In this example, we have duplicated the ID number of each book across multiple disparate data structures. (More generally, we have de-normalized our data to optimize the speed of each query.)
Redis cannot automatically remove all instances of a book from all indexes when the book is deleted. The application developer should keep track of all sets that a book is in (using an additional set) so that clean-up can be performed efficiently.
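One hedged way to arrange that bookkeeping (key names again invented for illustration) is a per-book set listing every index the book was added to; deleting the book then means removing its ID from each listed set and dropping the tracking set itself:

redis> SADD book.indexes:1001 books.by_author:john.doe
(integer) 1
redis> SADD book.indexes:1001 books.by_genre:nonfiction
(integer) 1
redis> SREM books.by_author:john.doe 1001
(integer) 1
redis> SREM books.by_genre:nonfiction 1001
(integer) 1
redis> DEL book.indexes:1001
(integer) 1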
This type of data duplication is extremely common with non-relational data sets. For most systems, this necessitates running background workers that are responsible for constantly scanning the data set and repairing any inconsistencies that are detected.
Some other fantastic Redis-related resources include:
Simon Willison’s extremely comprehensive Redis Workshop/Tutorial
You should follow me on Twitter here.