Brandon Rice

Software development with a focus on web technologies.

How the Fork Does Resque Work?!

Any reasonably sized Rails application will eventually find itself in need of background job processing. Sending users email, generating reports, and interacting with third-party APIs are just a few examples of secondary work that can ideally happen without slowing down web requests. There are several answers to the background job question for Rails, but the architectural differences usually boil down to a choice between threads or processes for concurrency. Resque is one that uses processes, and it’s the one we use at Optoro. Several work projects have given me a reason to familiarize myself with Resque’s internals, and as a result I routinely end up fielding teammates’ questions about how it works and helping them deploy things that use it. Here is a high-level look at how Resque does its magic.

1. Enqueue a Request

Once an operation that can run in the background has been identified, the Rails application needs to enqueue a request for it. The mechanism for doing this depends on the particular versions of Rails and Resque. On Rails 4.2+ using ActiveJob, the interface is MyJob.perform_later. On earlier versions of Rails using Resque 1.x, this is done with Resque.enqueue. Resque uses Redis behind the scenes, and adding a job to the queue actually means serializing some details about the job and inserting that information into Redis. The Rails application is finished with Resque at this point and returns to the web request-response cycle.
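
To make that concrete, here is roughly what enqueuing looks like in each case. This is a minimal sketch; MyJob is a stand-in for whatever job class you have defined.

# Rails 4.2+ with ActiveJob (configured to use the Resque queue adapter):
MyJob.perform_later

# Earlier Rails, talking to Resque directly:
Resque.enqueue(MyJob)

In the direct Resque case, the payload is a small JSON blob along the lines of {"class":"MyJob","args":[]} pushed onto a Redis list for the job’s queue (for example resque:queue:my_job_queue). ActiveJob wraps the job in its own serialized format before handing it to Resque, but the idea is the same.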

2. Start a Worker Process

A piece of information now sits in a Redis data structure representing the job to execute. Something needs to consume that queue, and it comes in the form of a completely separate Ruby process. This process is typically started by running rake resque:work from the application directory. The new process waits to consume information from the Redis queue, and then uses that information to identify and execute the background job.

When the worker process starts, a completely new copy of the entire Rails application is loaded into memory. One of the most common examples of a background job is sending an email using ActionMailer. That code lives inside the Rails application, and so the entire app must be loaded. However, an important thing to consider is that you don’t actually need Rails to run the job code. Rails is potentially a huge amount of overhead. If a job is enqueued using the MyJob class from Rails, then the only requirement for running that job is a MyJob class in the consuming process that listens on the same queue.

require 'resque'

class MyJob
  # Resque reads this instance variable to decide which queue the job belongs to.
  @queue = :my_job_queue

  def self.perform
    # do some stuff
  end
end

The code above knows nothing about Rails, but will happily consume a background job that was enqueued from a Rails application. Add a minimal Rakefile that pulls in resque/tasks, and you can run this worker with rake resque:work from a completely different application. This example uses Resque directly without the ActiveJob interface; using ActiveJob would mean adding it as a dependency of the worker as well. In general, the more domain knowledge needed to run the background job, the more dependencies the consumer process will share with the Rails application. Use this as inspiration to write several small applications – only one of which uses Rails – instead of one huge monolithic Rails app. This could be an intermediary step toward some sort of microservices-based solution.
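
For reference, a minimal Rakefile for that standalone worker might look something like the sketch below. The Redis location and the file name my_job.rb are assumptions for illustration.

# Rakefile for the standalone worker above
require 'resque'
require 'resque/tasks'      # defines the resque:work task
require_relative 'my_job'   # load the job class so the worker can run it

task 'resque:setup' do
  # Point the worker at the same Redis instance the Rails app enqueues into.
  Resque.redis = ENV.fetch('REDIS_URL', 'localhost:6379')
end

Running QUEUE=my_job_queue rake resque:work then starts a worker that pulls from the same queue the Rails application pushes to.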

3. Fork a Child Process

The Ruby process that sits and listens for jobs in Redis is not the process that ultimately runs the job code written in the perform method. It is the “master” process, and its only responsibility is to listen for jobs. When it receives one, it forks yet another process to run the code. This “child” process is managed entirely by its master; you never start it or interact with it through rake tasks yourself. When the child finishes running the job code, it exits and returns control to its master, which goes back to listening on Redis for its next job.
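
Conceptually, the master’s loop looks something like the sketch below. This is not Resque’s actual source, just a simplified illustration of the reserve/fork/wait cycle; reserve_job_from_redis is a hypothetical helper standing in for Resque’s real queue-polling code.

# Simplified, illustrative version of a Resque-style master loop.
loop do
  job = reserve_job_from_redis   # hypothetical helper: pop the next payload from Redis
  if job.nil?
    sleep 5                      # nothing queued; poll again shortly
    next
  end

  child_pid = Process.fork do
    job.perform                  # the child runs the actual job code
    exit!                        # and exits, releasing every byte it claimed
  end

  Process.wait(child_pid)        # the master blocks until the child finishes
end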

The advantage of this master-child process organization – and the advantage of Resque processes over threads – is the isolation of job code. Resque assumes that your code is flawed, and that it contains memory leaks or other errors that will cause abnormal behavior. Any memory claimed by the child process will be released when it exits. This eliminates the possibility of unmanaged memory growth over time. It also provides the master process with the ability to recover from any error in the child, no matter how severe. For example, if the child process needs to be terminated using kill -9, it will not affect the master’s ability to continue processing jobs from the Redis queue.

In earlier versions of Ruby, the main criticism of Resque was its potential to consume a lot of memory: creating new processes means creating a separate memory space for each one. Some of this overhead was mitigated with the release of Ruby 2.0 and its copy-on-write-friendly garbage collector. However, Resque will always require more memory than a solution that uses threads because the master process is not forked. It’s created manually using a rake task, and therefore must load whatever it needs into memory from the start. Of course, manually managing each worker process in a production application with a potentially large number of jobs quickly becomes untenable. Thankfully, we have pool managers for that.

4. Pools and Schedulers

One of the most widely used pool managers for Resque (and the one we use at Optoro) is resque-pool. This plugin provides a rake task that manages all of the workers normally started using rake resque:work. Earlier, I pointed out that each worker process requires its own copy of the application in memory. A pool can potentially alleviate these memory concerns. When the pool starts, it loads the entire application into memory. Then, it forks a process for each (master) worker. Once again, copy-on-write significantly reduces the amount of memory used by each forked process. The memory benefits combined with the convenience of process management make resque-pool (or some other pool solution) an easy win.
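
For a sense of what that looks like, resque-pool reads a small YAML file mapping queue names to worker counts. The queue names and counts below are made up for illustration.

# config/resque-pool.yml (illustrative queue names and counts)
my_job_queue: 2
mailers: 1
"reports,exports": 3    # one set of workers can listen on several queues

Starting the pool with its rake task loads the application once, then forks one master worker per entry above.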

The other tool worth considering as part of your Resque infrastructure is a scheduler. One of the most popular solutions is resque-scheduler. The scheduler is a very simple, cron-based application that inserts jobs into the Redis queue based on a configuration file. It has very few dependencies and doesn’t need the Rails app or the job code in memory. In fact, it doesn’t need any constant definitions at all if the job class names are passed as string arguments.
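
As a rough illustration, a resque-scheduler entry is just a cron expression plus a job description. The job name, queue, and schedule below are invented, but the shape follows the scheduler’s YAML configuration; note that the class is given as a string, so the scheduler never needs the constant loaded.

# Illustrative resque-scheduler entry
nightly_report:
  cron: "0 2 * * *"
  class: "ReportJob"      # a string, so no constant definition is required
  queue: reports
  args:
    report_type: "nightly"
  description: "Enqueue the nightly report job"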

Conclusion

It’s valuable to understand the tools you’re using, especially when they involve processes running outside the normal scope of the Rails application. Understanding leads to better architectural decisions, and the concepts that apply to Resque will certainly carry over to other background job solutions. The implementation is just details; the most important skill is learning how to think about the application in a different way. Go forth and enqueue.

If you enjoyed this post, please consider subscribing.
