Async Solr Queries in Python

I frequently hit the wall of needing to work asynchronously with Solr requests in Python. I’ll have some code that blocks on a Solr HTTP request, waits for it to complete, then executes a second request. Something like this code (we’re using the Requests library to do HTTP):

import requests

#Search 1
solrResp = requests.get('http://mysolr.com/solr/statedecoded/search?q=law')

for doc in solrResp.json()['response']['docs']:
    print(doc['catch_line'])

#Search 2
solrResp = requests.get('http://mysolr.com/solr/statedecoded/search?q=shoplifting')

for doc in solrResp.json()['response']['docs']:
    print(doc['catch_line'])

Being able to parallelize work is especially helpful with scripts that index documents into Solr. I need to scale my work up so that Solr, not network access, is the indexing bottleneck. (There’s a sketch of the parallel-indexing pattern near the end of this post.)

Unfortunately, Python isn’t exactly JavaScript or Go when it comes to asynchronous programming. But the gevent coroutine library can help us a bit with that. Under the hood, gevent uses the libevent library, which is built on top of the operating system’s native async calls (select, poll, and the like) and nicely packages up a lot of low-level async functionality.
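To see the coroutine model in action before we get to Solr, here’s a minimal sketch of two gevent tasks running concurrently:

import gevent

def task(taskId):
    # gevent.sleep yields control to the event loop rather than
    # blocking the whole process
    gevent.sleep(1)
    print("task %d done" % taskId)

# Both greenlets sleep at the same time, so this finishes in about
# one second instead of two
gevent.joinall([gevent.spawn(task, i) for i in range(2)])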

Working with gevent is fairly straightforward. One slight sticking point is gevent.monkey.patch_all(), which patches much of the standard library to cooperate better with gevent’s asynchrony. It sounds scary, but I have yet to have a problem with the monkey-patched implementations.
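One detail worth knowing: the usual pattern is to patch first and import everything else afterwards, so the rest of your code picks up the cooperative versions of the standard library. A minimal sketch:

from gevent import monkey
monkey.patch_all()  # replaces blocking stdlib calls (socket, ssl, time, ...)
                    # with versions that yield to gevent's event loop

import requests  # imported after patching, so its HTTP calls cooperate with gevent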

Without further ado, here’s how you use gevent to do parallel Solr requests:

from gevent import monkey
monkey.patch_all()  # patch the standard library before anything else uses it

import gevent
import requests


class Searcher(object):
    """ Simple wrapper for doing a search and collecting the
        results """
    def __init__(self, searchUrl):
        self.searchUrl = searchUrl

    def search(self):
        solrResp = requests.get(self.searchUrl)
        self.docs = solrResp.json()['response']['docs']


def searchMultiple(urls):
    """ Use gevent to execute the passed in urls;
        dump the results"""
    searchers = [Searcher(url) for url in urls]

    # Gather a handle for each task
    handles = []
    for searcher in searchers:
        handles.append(gevent.spawn(searcher.search))

    # Block until all work is done
    gevent.joinall(handles)

    # Dump the results
    for searcher in searchers:
        print "Search Results for %s" % searcher.searchUrl
        for doc in searcher.docs:
            print doc['catch_line']

searchUrls = ['http://mysolr.com/solr/statedecoded/search?q=law', 
              'http://mysolr.com/solr/statedecoded/search?q=shoplifting']

searchMultiple(searchUrls)
Lots more code, and not nearly as pretty as the equivalent JavaScript, but it gets the job done. The meat of the code is these lines:
# Gather a handle for each task
handles = []
for searcher in searchers:
    handles.append(gevent.spawn(searcher.search))

# Block until all work is done
gevent.joinall(handles)
We tell gevent to spawn searcher.search. This gives us a handle to the spawned task. We can then optionally wait for all the spawned tasks to complete, then dump the results.
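The same spawn/joinall pattern covers the indexing scenario mentioned at the start of the post. Here’s a minimal sketch, assuming a hypothetical /update endpoint on the same Solr instance and some stand-in document batches; in a real script the batches would come from your data source:

from gevent import monkey
monkey.patch_all()

import gevent
import requests

def indexBatch(updateUrl, docs):
    # POST one batch of documents to Solr's update handler
    resp = requests.post(updateUrl, json=docs, params={'commit': 'false'})
    resp.raise_for_status()

updateUrl = 'http://mysolr.com/solr/statedecoded/update'  # hypothetical endpoint
batches = [[{'id': '1', 'catch_line': 'Assault'}],
           [{'id': '2', 'catch_line': 'Larceny'}]]  # stand-in data

# One greenlet per batch: the HTTP requests overlap on the network
handles = [gevent.spawn(indexBatch, updateUrl, batch) for batch in batches]
gevent.joinall(handles)

Each greenlet holds an HTTP request open while the others run, so the network round trips overlap and Solr’s indexing throughput, not network latency, becomes the limiting factor.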

That’s about it! As always, comment if you have any thoughts or pointers. And let us know how we can help with any part of your Solr search application!
