Investigating local queuing: Redis, NSQ and LMDB

Systems designed for cloud services assume instances can die at any time, so they’re written to defend against this. It’s important to remember that cloud networks are also incredibly unreliable, often much less reliable than the instances themselves. When considering a design, keep in mind that a node can be partitioned from other services, possibly for long periods of time.

One obvious case here is logs (including stats and analytics events): we want to ensure delivery of logs, but we don’t want delivery to affect service operation.

There are lots of ways to handle this. Our original solution was to write logs to files, then forward them along with logstash. We did this for both bulk logs and analytics events. However, logstash was using considerable resources per node, so for analytics events we switched to a local Redis daemon plus a local python daemon (using gevent) that forwards them.

For short partitions a local Redis daemon with a worker is quite effective: delivery is quick and the queue stays empty. For long partitions (or an extended failure in a remote service) we’d continue serving requests, but at some point Redis would run out of memory and we’d start dropping events.
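Stripped down, the pattern is just a Redis list with a blocking pop. Here’s a minimal sketch of the idea using redis-py; the list name, forwarding hook, and error handling are illustrative, not our actual daemon:

```python
import json
import time

import redis

# Local Redis; writes are in-memory and fast.
r = redis.StrictRedis(host='127.0.0.1', port=6379)

def enqueue_event(event):
    # The service only does a local write, so delivery of the event
    # never blocks request handling.
    r.lpush('analytics-events', json.dumps(event))

def worker_loop(forward):
    # `forward` ships one event to the remote service. On failure we
    # requeue the event and back off, so events accumulate in Redis
    # rather than being lost during a partition.
    while True:
        _, raw = r.brpop('analytics-events')  # blocks until an event arrives
        try:
            forward(json.loads(raw))
        except Exception:
            r.rpush('analytics-events', raw)  # back onto the pop end for retry
            time.sleep(1)
```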

We’ve been really happy with the Redis-based solution. To date we haven’t had a partition event (either a network failure or a service failure) long enough for us to worry. However, we had a mishmash of solutions for handling analytics events (and partitioning) across our services, and we wanted a standard solution for the problem. We had the option of rolling the local Redis solution out to everything, or going with something a bit more robust.

We decided to investigate options that keep the queue in memory but can spill to disk when the data set grows past memory limits. I won’t go too far into the details of the investigation (sorry), but we eventually narrowed the choice to NSQ.

During this same period I had been solving another issue using LMDB, a memory-mapped database that’s both thread-safe and multiprocess-safe. I wondered if we could avoid running a daemon for the queue at all and simply have the processes push and pop from LMDB. Fewer daemons can mean less work and fewer possible failure points.

Before going too far into LMDB I also considered some other memory-mapped databases, but most (like LevelDB and its variants) explicitly state that they’re only thread-safe and shouldn’t be used across processes. BDB could have been a contender, but its license change to AGPL3 makes it a bit toxic.

Initial testing of LMDB was promising. Write speeds were more than adequate, even with writes being serialized across processes. Library support was generally good, even across languages. The major concern, however, was deadlocks across processes. LMDB claims to support multi-process concurrency, which is true, assuming perfect conditions.

With LMDB, reads are never blocked, but writes are serialized through a mutually exclusive lock at the database level. The lock is taken when a write transaction starts, and all other threads or processes that want to write block until it’s released. If a process starts a write transaction and exits uncleanly, any other process waiting on the lock will block indefinitely.
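A minimal sketch with the py-lmdb bindings makes the failure mode concrete (the path and key scheme are illustrative assumptions, not our test code):

```python
import os
import struct
import time

import lmdb

# Multiple processes can open the same path; LMDB coordinates them
# through a shared lock file.
env = lmdb.open('/var/lib/local-queue', map_size=2 ** 30)  # 1 GiB map

def push(event):
    # begin(write=True) takes the database-wide write lock. Every other
    # writer, thread or process, blocks here until this transaction
    # commits or aborts. If a process dies while holding the lock (and
    # the mutex isn't robust), those waiters block forever.
    key = struct.pack('>dI', time.time(), os.getpid())  # roughly ordered
    with env.begin(write=True) as txn:
        txn.put(key, event)

def peek_all():
    # Readers are never blocked: a read transaction sees a consistent
    # snapshot even while a writer holds the lock.
    with env.begin() as txn:
        return [(k, v) for k, v in txn.cursor()]
```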

LMDB’s development branch has added support for robust mutexes, which solves this problem; however, that support isn’t in a stable release yet, and I also can’t find any information about robust mutex support across containers, which we’d eventually need for this solution to work for us.

LMDB was a fun diversion and a good mental exercise, but it wasn’t an ideal solution here. After spending a couple of days exploring LMDB I moved on to a product we had been wanting to explore for a while: NSQ.

NSQ is a realtime distributed messaging platform. In our use case we’re only using it for local queuing, but it’s really well suited to that. Performance for our use case was more than adequate, and library support is reasonable. Even where no library exists, the write protocol is simple and can be either TCP or HTTP based. The python support is good if you’re using tornado, but the gevent story isn’t wonderful: there’s a fork of the bitly python library with gevent support, but it hasn’t been updated in a while, and it looks like it was meant as a temporary project to make the bitly library more generic, an effort that was never fully followed through.
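As an example of how simple the write side is, publishing a message over nsqd’s HTTP interface is a single POST to its /pub endpoint (4151 is nsqd’s default HTTP port; the topic name here is illustrative):

```python
import json

import requests

NSQD_HTTP = 'http://127.0.0.1:4151'

def publish(topic, event):
    # nsqd accepts the raw message body as the POST payload.
    resp = requests.post('%s/pub' % NSQD_HTTP,
                         params={'topic': topic},
                         data=json.dumps(event))
    resp.raise_for_status()

publish('analytics-events', {'type': 'pageview', 'path': '/'})
```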

On the consumer side we’re using the same in-house python daemon, adapted for NSQ via the forked nsq-py project. The fork met the needs of our use case, though it had issues with stale connections (which we’ve fixed).

The biggest benefit we’ve gained from the switch is that we can back off to disk during long partitions. Beyond that, it opens up a lot of options, since NSQ has a healthy suite of utilities. We could replace our custom python daemon with nsq_to_http; we could listen to a topic on multiple channels for a fast path (off to http) and a slow path (off to S3) for events; and we could forward from the local NSQ to centralized NSQs using nsq_to_nsq. Monitoring is also quite good: there’s a really helpful CLI utility, nsq_stat, for quick checks, and by default NSQ ships stats off to statsd.

NSQ doesn’t seem to have a robust method of restarting or reloading for configuration changes, but we’ll rarely need to restart the daemon. NSQ process restarts generally take less than a second, so for services pushing into NSQ we retry with backoff and accept the associated latency hit.
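The retry logic is a thin wrapper along these lines (the attempt count and delays are illustrative), reusing the publish function from the earlier sketch:

```python
import time

def publish_with_retry(topic, event, attempts=5, base_delay=0.1):
    # Retry with exponential backoff: 0.1s, 0.2s, 0.4s, ... This rides
    # out a sub-second nsqd restart at the cost of a little latency.
    for attempt in range(attempts):
        try:
            return publish(topic, event)
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```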
