At Grooveshark, Gearman is an integral part of our backend technology stack.
30-Second Introduction to Gearman
- Gearman is a simple, fast job queuing server.
- Gearman is an anagram of "manager". Gearman does not do any work itself; it just distributes jobs to workers.
- Clients, workers and servers can all be on different boxes.
- Jobs can be synchronous or asynchronous.
- Jobs can have priorities.
To learn more about Gearman, visit gearman.org and peruse the presentations. I got started with this intro (or one much like it), but there may be better presentations available now; I haven't checked.
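If you have never seen it, the pecl gearman extension makes the basic pattern very small. Here is a rough sketch of a client and a worker; the "reverse" function name and the payload are just placeholders, and depending on your extension version the synchronous call may be do() instead of doNormal():

    <?php
    // client.php: submit work to a Gearman job server
    $client = new GearmanClient();
    $client->addServer('127.0.0.1', 4730);

    // Synchronous: block until a worker returns the result
    $reversed = $client->doNormal('reverse', 'Hello Gearman');

    // Asynchronous: queue the job and return immediately with a handle
    $handle = $client->doBackground('reverse', 'Hello Gearman');

    <?php
    // worker.php: register a function and process jobs forever
    $worker = new GearmanWorker();
    $worker->addServer('127.0.0.1', 4730);
    $worker->addFunction('reverse', function (GearmanJob $job) {
        return strrev($job->workload());
    });
    while ($worker->work());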
The rest of this article will assume that you understand the basics of Gearman, including the terminology, and are looking for some use cases involving a real, live, high-traffic website.
Architecture
With some exceptions, our architecture with respect to Gearman is a bit unconventional. In a typical deployment, you would have a large set of Apache + PHP servers (at Grooveshark we call these "front end nodes", or FENs for short) communicating with a smaller set of Gearman job servers, and a set of workers that are each connected to all of the job servers. In our setup, we run a Gearman job server on each FEN, and jobs are submitted over localhost. That's because most of the jobs we submit are asynchronous, and we want the latency to be as low as possible so the FENs can fire off a job and get back to handling the user's request. We then have workers running on other boxes which connect to the Gearman servers on the FENs and process the jobs. Where the workers run depends on their purpose; for example, workers that insert data into a data store usually live on the same box as the data store, which again cuts down on network latency.

This architecture means that, in general, each FEN is isolated from the rest of the FENs, and the Gearman servers are not another potential point of failure, or even a source of slowdowns. The only way a Gearman server is unavailable is if the FEN itself is out of commission; the only way a Gearman server is running slow is if the whole FEN is running slow.
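In code terms, the only thing that changes from a conventional setup is who connects to whom. A rough sketch of the FEN side, with an invented job name and payload:

    <?php
    // On a FEN: submit asynchronously to the gearmand running on this same box.
    // doBackground() returns as soon as the local job server has queued the job,
    // so the user's request only pays for a quick round trip over localhost.
    $client = new GearmanClient();
    $client->addServer('127.0.0.1', 4730);
    $client->doBackground('update_song_stats', json_encode(array('songID' => 123)));

    // Workers live on other boxes and connect in to every FEN instead, e.g.:
    // $worker->addServer('fen01', 4730); $worker->addServer('fen02', 4730); ...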
Rate Limiting
One of the things that is really neat about this Gearman architecture, especially when used asynchronously, is that jobs that need to happen eventually, but not necessarily immediately, can easily be rate limited simply by controlling the number of workers that are running. For example, we recently migrated playlists from MySQL to MongoDB. Because many playlists have been abandoned over the years, we didn't want to blindly import all of them into MongoDB; instead, we import them from MySQL as they are accessed. Once the data is in MongoDB it is no longer needed in MySQL, so we would like to delete it to free up some memory. Deleting that data is by no means an urgent task, and we know that deletes of this nature cannot run in parallel; running more than one at a time just results in extra lock contention.
Our solution is to insert a job into the Gearman queue to delete a given playlist from MySQL. We then have a single worker connecting to all of the FENs, asking for playlist deletion jobs and running the deletes one at a time on the MySQL server. Not surprisingly, when we flipped the switch, deletion jobs came in much faster than they could be processed; at the peak we had a backlog of 800,000 deletion jobs waiting to be processed, and it took us about 2.5 weeks to get that number down to zero. During that time we had no DB hiccups, and server load stayed low.
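A stripped-down version of that deletion worker might look something like the following; the FEN hostnames, function name, and table layout are all invented for the example:

    <?php
    // The lone playlist-deletion worker, running on (or near) the MySQL server.
    // Because exactly one copy of this process runs, deletes happen one at a
    // time no matter how quickly the FENs queue them up.
    $pdo = new PDO('mysql:host=localhost;dbname=playlists', 'user', 'pass');

    $worker = new GearmanWorker();
    foreach (array('fen01', 'fen02', 'fen03') as $fen) { // one entry per FEN
        $worker->addServer($fen, 4730);
    }

    $worker->addFunction('delete_playlist', function (GearmanJob $job) use ($pdo) {
        $playlistID = (int) $job->workload();
        $stmt = $pdo->prepare('DELETE FROM Playlists WHERE PlaylistID = ?');
        $stmt->execute(array($playlistID));
    });

    while ($worker->work());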
Data Logging
We have certain high-volume data that must be logged, such as song plays for accounting purposes, and searches performed so we can improve our search algorithm. We need to be able to log this high-volume data in real time, without affecting the responsiveness of the site. Logging jobs are submitted asynchronously to Gearman over localhost. On our Hadoop cluster, we have multiple workers per FEN collecting and processing jobs as quickly as possible. Each worker connects to only one FEN; in fact, each FEN has about 12 workers dedicated to processing logging jobs. For a more in-depth explanation of why we went with this setup, see Lessons Learned below.
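Stripped way down, one of those logging workers looks roughly like this; the function name, hostname, and what happens to the payload are placeholders (in reality the data ends up in our Hadoop pipeline):

    <?php
    // One of roughly 12 logging workers dedicated to a single FEN. Another copy
    // of this script, pointed at a different host, runs for every other FEN.
    $worker = new GearmanWorker();
    $worker->addServer('fen01', 4730); // this worker only ever talks to fen01

    $worker->addFunction('log_song_play', function (GearmanJob $job) {
        $play = json_decode($job->workload(), true);
        // Hand the record off for processing; appending to a local spool file
        // stands in for the real Hadoop ingestion here.
        file_put_contents('/var/spool/song_plays.log', json_encode($play) . "\n", FILE_APPEND);
    });

    while ($worker->work());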
Backend API
We have some disjointed systems written in various languages that need to be able to interface with our core library, which is written in PHP. We considered making a simple API over HTTP, much like the one that powers the website, but decided that it was silly to pay all the overhead of HTTP for an internal API. Instead, a set of PHP workers handles the incoming messages from these systems and responds accordingly. This also provides some level of rate limiting, or control over how parallelized we want the processing to be. If a well-meaning junior developer writes some crazy piece of software with 2048 processes all trying to look up song information at once, we can rest assured that the database won't actually be swamped with that much concurrency, because it is capped at the number of workers we have running.
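A sketch of one such worker is below; get_song_info and the SongLibrary call are invented stand-ins for a call into our core PHP library, and the real payload format may differ:

    <?php
    // Backend API worker: other systems submit synchronous jobs and receive
    // the worker's return value as the response.
    $worker = new GearmanWorker();
    $worker->addServer('127.0.0.1', 4730);

    $worker->addFunction('get_song_info', function (GearmanJob $job) {
        $args = json_decode($job->workload(), true);
        $info = SongLibrary::getSongInfo($args['songID']); // hypothetical library call
        return json_encode($info);
    });

    while ($worker->work());

    // A client in any language with a Gearman library submits 'get_song_info'
    // synchronously and blocks until a worker replies. If we run 20 of these
    // workers, at most 20 lookups ever hit the database at once.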
Lessons Learned/Caveats
No technology is perfect, especially when you are using it in a way other than how it was intended to be used.
We found that Gearman workers (at least with the pecl gearman extension's implementation) connect to and process jobs on Gearman servers in a round-robin fashion, draining all jobs from one server before moving on to the next. That creates a few different headaches for us:
- If one server has a large backlog of jobs and workers get to it first, they will process those jobs exclusively until they are all done, leaving the other servers to end up with a huge backlog.
- If one server is unreachable, workers will wait however long the timeout is configured for, every time they run through the round-robin list. Even if the timeout is as low as 1 second, that is 1 second out of 20 during which the worker cannot process any jobs. In a high-volume logging situation, those jobs can add up quickly (see the timeout sketch after this list).
- Gearman doesn't give memory that was used for long queues back to the OS when it's done with it. It will reuse that memory, but if your normal Gearman memory needs are 60MB and an epic backlog caused by these interactions pushes it to 2GB, you won't get that memory back until Gearman is restarted.
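For reference, the knob the pecl extension exposes here is the worker timeout; below is a sketch, with an arbitrary 1000 ms value and invented hostnames, of the pattern we mean. Even with a small timeout, every pass through the list can still stall on the dead server:

    <?php
    $worker = new GearmanWorker();
    $worker->addServer('fen01', 4730);
    $worker->addServer('fen02', 4730); // suppose this one is unreachable
    $worker->setTimeout(1000);         // milliseconds

    $worker->addFunction('log_search', function (GearmanJob $job) {
        // ... process the search log entry ...
    });

    while ($worker->work() || $worker->returnCode() == GEARMAN_TIMEOUT) {
        if ($worker->returnCode() == GEARMAN_TIMEOUT) {
            // Nothing to do (or a server stalled); loop around and poll again
            continue;
        }
    }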
Our solution to these issues is, unless there is a strong need to rate limit the work, to just configure a separate worker for each FEN, so that if one FEN is having weird issues, it won't affect the others.
Our architecture, combined with the fact that each request from a user will go to a different FEN, means that we can't take advantage of one really cool Gearman feature: unique jobs. Unique jobs means that we could fire asynchronous jobs to prefetch data we know the client is going to ask for, and if the client asks for it before it is ready, we could have a synchronous request hook into the same job and wait for the response.
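For the curious, here is roughly what that would look like if all of a user's requests did hit the same job server; the function name and unique key are invented:

    <?php
    $client = new GearmanClient();
    $client->addServer('127.0.0.1', 4730);

    // Request 1: kick off a prefetch in the background, keyed by a unique ID
    $client->doBackground('prefetch_playlist', '12345', 'prefetch_playlist_12345');

    // Request 2, moments later: ask for the same work synchronously. Because the
    // unique key matches, Gearman coalesces the submissions and this call simply
    // waits for the job that is already queued or running.
    $playlist = $client->doNormal('prefetch_playlist', '12345', 'prefetch_playlist_12345');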
Talking to a Gearman server over localhost is not the fastest thing in the world. We considered using Gearman to handle geolocation lookups by IP address so we could provide localized concert recommendations, since those jobs could be asynchronous, but we found that submitting an asynchronous job to Gearman was an order of magnitude slower than doing the lookup directly with the geoip PHP extension once we compiled it with mmap support. Gearman was still insanely fast, but this goes to show that not everything is better off being processed through Gearman.
Wish List
From reading the above you can probably guess what our wish list is:
- Gearman should return memory to the OS when it is no longer needed. The argument against this is that if you don't want Gearman to use 2GB of memory, you can set a ulimit or take other measures to make sure you never get that far behind. That's fine, but in our case we would usually rather allow Gearman to use 2GB when it is needed; we'd just like to have that memory back when it's done!
- Workers should be better at balancing. Even if one server is far behind, it should not be able to monopolize a worker to the detriment of all the other job servers.
- Workers should be more aware of timeouts. Workers should have a way to remember when they failed to connect to a server and not try again for a configurable number of seconds. Or connections should be established to all job servers in a non-blocking manner, so that one timing out doesn’t affect the others.
- Servers should be capable of replication/aggregation. This is more of a want than a need, but sometimes it would be nice if one job server could be configured to pull jobs off of other job servers and pool them. That way jobs could be submitted over localhost on each FEN, but aggregated elsewhere so that one worker could process them in serial if rate limiting is desired, without potentially being slowed down by a malfunctioning FEN.
- Reduce latency for submitting asynchronous jobs locally. Submitting asynchronous jobs over localhost is fast, but it could probably be even faster. For example, I'd love to see how it would perform using Unix sockets.
Even with these niggles and wants, Gearman has been a great, reliable, and performant product that we can count on to help keep the site fast and responsive for our users.
Supervisord
When talking about Gearman, I would be remiss if I did not mention Supervisord, which we use to babysit all of our workers. Supervisord is a nice little Python utility you can use to daemonize any process, and it will handle things like redirecting stdout to a log file, auto-restarting the process if it fails, starting as many instances of the process as you specify, and automatically backing off if your process fails to start a specified number of times in a row. It also has an RPC interface so you can manage it remotely; for instance, if you notice a backlog of jobs piling up on one of your Gearman servers, you can tell supervisord to fire up another 20 workers.
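For a flavor of what that looks like, here is a typical (invented) worker entry from a supervisord config; bumping numprocs is all it takes to run more copies of a worker:

    [program:playlist_delete_worker]
    command=php /var/www/workers/playlist_delete_worker.php
    process_name=%(program_name)s_%(process_num)02d
    numprocs=1
    autostart=true
    autorestart=true
    startretries=5
    redirect_stderr=true
    stdout_logfile=/var/log/workers/playlist_delete_worker.log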