Based on configurable thresholds for memory use and number of requests served, the unicorn-worker-killer gem will kill off a Unicorn worker. For those of you on Heroku this is valuable: the dying worker's requests go back into Heroku's random routing. While request count doesn't correlate exactly with memory size, it does help keep individual workers from becoming overloaded.
Add this to your Gemfile:
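The entry (assuming the standard gem name, `unicorn-worker-killer`, matching the `Unicorn::WorkerKiller` module used below) is:

```ruby
# Gemfile -- load unicorn-worker-killer alongside unicorn
gem 'unicorn'
gem 'unicorn-worker-killer'
```

Run `bundle install` afterwards.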
The gem's documentation suggests configuring the thresholds in your `config.ru` file. That can be cumbersome on Heroku if you need to test out different settings, since every change means a redeploy.
Thankfully, you can also control the thresholds via environment variables. In `config.ru`:
```ruby
require 'unicorn/worker_killer'

# Max requests per worker
max_request_min = (ENV['MAX_REQUEST_MIN'] || 3072).to_i
max_request_max = (ENV['MAX_REQUEST_MAX'] || 4096).to_i
use Unicorn::WorkerKiller::MaxRequests, max_request_min, max_request_max

# Max memory size (RSS) per worker, converted from MB to bytes
oom_min = (ENV['OOM_MIN'] || 192).to_i * (1024**2)
oom_max = (ENV['OOM_MAX'] || 256).to_i * (1024**2)
use Unicorn::WorkerKiller::Oom, oom_min, oom_max
```

Note the fallback pattern: `(ENV['X'] || default).to_i` rather than `ENV['X'].to_i || default`, because `nil.to_i` returns `0`, so the latter would silently set a threshold of zero when the variable is unset.
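Why a min and a max? The gem picks each worker's actual limit at random between the two bounds, so all the workers don't hit their limit and restart at the same moment. A minimal sketch of that selection (illustrative only, not the gem's internal code):

```ruby
# Illustrative: choose a per-worker request limit somewhere between
# the configured min and max (inclusive). Each worker draws its own
# value, so restarts are staggered across the pool.
max_request_min = 3072
max_request_max = 4096
limit = max_request_min + rand(max_request_max - max_request_min + 1)
```

The same randomization applies to the memory (OOM) bounds.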
Then run this on the Heroku command line:

```shell
heroku config:add OOM_MIN=192 OOM_MAX=256 MAX_REQUEST_MIN=3072 MAX_REQUEST_MAX=4096 -a unicon-ttm-sandbox
```

That adds the variables needed to control the thresholds. The example shows the defaults, though we found that dropping OOM_MAX to 216 worked best.
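For instance, applying the 216 MB ceiling mentioned above looks like this (the app name is our sandbox app; substitute your own):

```shell
# Lower the per-worker memory ceiling to 216 MB; unset variables
# keep the defaults hard-coded in config.ru
heroku config:add OOM_MAX=216 -a unicon-ttm-sandbox
```

Because the values live in config vars, you can keep adjusting them this way without redeploying.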