breaking into the unknown…


Subdomain on localhost Rails

Our product is in production and we decided to add a new subdomain which will target a different set of users. The end goal is to serve the same code base but render different CSS based on the subdomain.

Since our app is hosted on Heroku, the first thing I did was check the feasibility of two DNS entries pointing to the same app, so I raised a ticket with Heroku.

Heroku confirmed that it is quite easy to do, so we moved ahead with the development.

Here the first problem is to get two URLs pointing to the same app locally.

We all know and use the below URL locally:


We need another URL as below:


You can achieve this with the below steps:

1 – Login as an admin user on the terminal:

sudo su -

2 – Edit /etc/hosts file

nano /etc/hosts

The above command will open the /etc/hosts file for you in the terminal.

You will find a few lines there; the important one is:    localhost

This is the line which maps localhost to the IP

Add a new line below it:    MBEportal.localhost

This tells the system that the new name MBEportal.localhost should also map to the IP

Press ctrl + x to exit editing.

It will ask you to save before exiting. Press Y to save the change.

3 – Restart your rails server.

Now you can access your localhost at both the below URLs:


4 – Parse Subdomain in Controller

So, at this point we have simulated subdomain behavior, and both URLs hit the same code base as expected.

But when I go to a controller action and inspect the subdomain, it returns an empty array:

request.subdomains -> []

It looks like our server is treating http://mbeportal.localhost as a single domain.

This can be fixed by adding the below line in the config/environments/development.rb file:

config.action_dispatch.tld_length = 0

Restart your server again and this time you will get the subdomain from the URL:

request.subdomains -> ["mbeportal"]

Great! Now we can find the subdomain from the request object and thus execute different logic, layouts, or anything else specific to the particular subdomain.

Some other details which you can read from the request object are:

request.base_url -> http://mbeportal.localhost:3000 -> mbeportal.localhost
request.domain -> localhost




undefined method `eof?' for #<PhusionPassenger::Utils::TeeInput>

Below is one of the API methods in my new Rails application:

module Api
  module V1
    class VendorOnboardingController < Api::BaseController
      include Concerns::Api::V1::VendorOnboardingApipie

      def enrollment
        result =!)
        # ... remaining code

The API was working fine locally and on a server running thin as the application server. After testing we moved the code to a UAT server running Passenger as the application server, and the API blew up with the below error:

"undefined method `eof?' for #<PhusionPassenger::Utils::TeeInput:0x000056122fc69a20>"

After a lot of head-scratching and hit-and-trial changes, I realized that the culprit is request.body in the below line of code:

result =!)

Changing it as below fixed the issue:

result =!)

Understanding the Problem:
Rails communicates with the application server through Rack.

In fact Rack sits between all the frameworks (Rails, Sinatra, Rulers) and all the app servers (thin, unicorn, Rainbows, mongrel) as an adapter. Now if you look at the rack spec it says:

The input stream is an IO-like object which contains the raw HTTP POST data. When applicable, its external encoding must be "ASCII-8BIT" and it must be opened in binary mode, for Ruby 1.9 compatibility. The input stream must respond to gets, each, read and rewind.

Note that eof? is not in that list, so a spec-compliant input stream is free not to implement it.

So locally and with thin the request body was always coming through as a StringIO object (which happens to implement eof?), and the subsequent lines of code operating on it caused no problem; but Passenger hands over a TeeInput object, which implements only what the spec requires, so the code breaks.

Calling explicitly converts the request body to a StringIO object, and so fixed the problem.


fatal error: error writing to /tmp/ccAtSusl.s: No space left on device

Recently I got a frustrating error on one of our servers. I introduced a new gem, amatch, in my rails application. I did not see any issue installing the gem through bundler locally or on the dev and UAT server instances, but on QC the bundler failed with the below error:

Installing amatch 0.3.0 with native extensions
Gem::Ext::BuildError: ERROR: Failed to build gem native extension.

creating Makefile

make "DESTDIR=" clean

make "DESTDIR="
compiling amatch_ext.c
amatch_ext.c:1661:1: fatal error: error writing to /tmp/ccAtSusl.s: 
No space left on device
compilation terminated.
make: *** [amatch_ext.o] Error 1

make failed, exit code 2

Since the error clearly says No space left on device, I checked the space on the server.

$df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       35G  6.1G   27G  19% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            1.9G   12K  1.9G   1% /dev
tmpfs           375M  304K  375M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            1.9G     0  1.9G   0% /run/shm
none            100M  4.0K  100M   1% /run/user
overflow        1.0M     0  1.0M   0% /tmp

But I could see there is a lot of space on the disk.

Since the error says there is a problem writing to the tmp folder, I checked the permissions of the tmp folder, thinking maybe that is causing the problem.

$ ls -l / | grep tmp
drwxrwxrwt 3 root root 4096 Nov 19 13:51 tmp

But it shows all the read and write permissions.

Then I tried to remove all the content of the tmp folder, thinking maybe something there is taking all the space:

$ rm -fr /tmp/*

But again I got the same error.

After googling for some time on space distribution and partitioning on linux, I came to know about the overflow partition.

And there is the problem: you can see in the df -h output above that /tmp is mounted on overflow. This happens if your server keeps writing to the tmp folder and you have not given tmp a separate partition. When the system detects a partition or memory issue, it automatically mounts the tmp folder on an overflow partition with a size of 1 MB, so that things keep working. But there is no automatic reversal, i.e. bringing tmp back to the root partition.

This is likely what happened on our server over time. We consumed our disk space, the system moved the tmp folder to overflow, and later on we freed the space by deleting unused files, thus having the free space shown above in the df -h output. But the tmp folder was never moved back from the overflow partition and still has a size of 1 MB. Since amatch needs more than 1 MB, it fails with the space issue.

So the solution is simple: unmount the tmp folder and mount it again.

# umount /tmp
umount: /tmp: device is busy.
        (In some cases useful info about processes that use
         the device is found by lsof(8) or fuser(1))

You need to kill the processes using the tmp folder. Get the list of those processes with the below command:

# lsof /tmp
Passenger 6730 root   13rW  REG   0,23        0 14247683 /tmp/passenger

Kill the running process 6730 and try to unmount again:

# kill -9 6730
# umount /tmp

Mount the tmp folder again and check the disk space:

# mount -a
# df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/xvda1       35G  6.1G   27G  19% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
udev            1.9G   12K  1.9G   1% /dev
tmpfs           375M  308K  375M   1% /run
none            5.0M     0  5.0M   0% /run/lock
none            1.9G     0  1.9G   0% /run/shm
none            100M  4.0K  100M   1% /run/user

You can see that there is no separate overflow partition for the tmp folder, as it is now mounted on the root partition; thus it can use all the available space in the main partition.

This time amatch didn't report any issue and installed successfully.


To summarize:

  1. Check the permissions of the folder.
  2. Check the available space; free some space and see if that solves your problem.
  3. If tmp is mounted on the overflow partition, unmount and mount it again.



heroku timeout code H13

Recently, while going live with one of our financial domain projects on Heroku, we got this dreaded error:

at=error code=H13 desc="Connection closed without response"

The error occurred whenever a user tried to submit his loan application. Basically a user was never able to submit his application, as the process was getting killed because the response was taking too long to come back.

Some of the culprit lines in the logs look like below:

2015-06-08T19:10:34.297539+00:00 heroku[router]: at=error code=H13 desc="Connection closed without response" method=POST path="/loans/credit_request" request_id=0f4a7e2f-28a6-4542-a308-db3a9a72c0e6 fwd="" dyno=web.2 connect=0ms service=26007ms status=503 bytes=0
2015-06-08T19:10:34Z app[postgres.32439]: [MAUVE] could not receive data from client: Connection reset by peer
2015-06-08T19:10:34.277236+00:00 app[web.2]: E, [2015-06-08T19:10:34.277078 #3] ERROR -- : worker=0 PID:9 timeout (26s > 25s), killing

So basically Heroku is not to blame. It is right in killing a process if it sees no response coming in 25 seconds.

We know that our API is the culprit. This is how our whole application works:

-> A user comes to our application, built in rails.
-> He fills in the details of the loan he wants to apply for and submits it. The user sees a progress bar rotating.
-> On submit we capture the data and make an API call to another application written in java.
-> The API does a lot of processing on the data: it creates a citadel certificate, registers the user on OpenAm, triggers a few emails etc., and then sends back a response.
-> When the response is success we reload the page with a success message, or render the error.

It is the second last step where Heroku sees a problem. Heroku keeps waiting for the response, and when it sees that it is exceeding 25 seconds, it kills the existing process; so the user's page never gets refreshed and hangs forever with the progress bar rotating.

So we know the problem… but what is the solution?

The ideal solution is to improve our API code and make it return a response in, say, 15 seconds (ours takes between 25 and 30 seconds), but that is a long term plan. We are already live, and it is not that we have not worked on reducing the response time (it is a crime to make a user wait 30 seconds for a response), but since a lot of processing happens before sending the response, we can't do much.

So we decided to live with the current response time and fool Heroku into believing that the request-response cycle is working.

Here is the plan:
-> Push the API call to a background job.
-> Submit the form via ajax.
-> In the action which handles the submit, trigger the background job and render the job_id, the path to redirect to, and any other parameters you want to reuse.
-> On success of the ajax request, trigger another ajax call to a separate method which keeps checking the status of the job at a fixed interval, say every 3 seconds.
-> When the second ajax call sees that the status is complete, it reloads the page.

So now, although the user sees some increase in time due to the overhead of checking the status every 3 seconds, Heroku sees the request-response cycle working: every 3 seconds it sees a request coming in asking for the job status and a response going out with the current status: pending, queued, running etc.

So here is the code:

Add sidekiq to your Gemfile and run bundle install:

gem 'sidekiq'
gem 'sidekiq-status'


Add the below routes in config/routes.rb:

  resources :loans
  post 'credit_request' => 'loans#credit_request', :as => :credit_request
  get 'check_job_status' => 'loans#check_job_status', :as => :loan_status

Write the background job code in the lib folder, say lib/application.rb:

require 'rest_client'
require 'base64'

module LoanPath
  class Application
    include Sidekiq::Worker
    include Sidekiq::Status::Worker
    sidekiq_options :retry => false

    def perform(*args)
      # call the method named in the first argument, e.g. apply_credit
      background_task_output = send(args[0], args[1])
      lp_status_code = background_task_output[:lpcode].present? ? background_task_output[:lpcode] : ""
      lp_status_message = background_task_output[:lp_message].present? ? background_task_output[:lp_message] : ""
      lp_data = background_task_output[:lp_data].present? ? background_task_output[:lp_data] : ""
      store :lp_status_code => lp_status_code
      store :lp_status_message => lp_status_message
      store :lp_data => lp_data
      at 100, 100, background_task_output[:lp_message] if background_task_output[:lp_message].present?

    def apply_credit(detail)
      uri = APP_CONSTANTS["credit_request_endpoint"]
      payload = "whatever xml or other data you want to send"
      rest_resource =, {:user => "your username", :password => "xyz", :timeout => 60, :open_timeout => 60})
      credit_submit = payload, :content_type => "application/xml"
      {:lpcode => "LpValid", :lp_message => "Credit Request Submitted Successfully", :lp_data => credit_submit}
    rescue Exception => e
      error_message = "System encountered error, please try later"
      {:lpcode => "LpError", :lp_message => error_message, :lp_data => nil}

    def self.running_background_job(job_id)
      status = Sidekiq::Status::status(job_id)
      status_message = Sidekiq::Status::message(job_id)
      lp_status_code = Sidekiq::Status::get(job_id, :lp_status_code)
      lp_status_message = Sidekiq::Status::get(job_id, :lp_status_message)
      lp_data = Sidekiq::Status::get(job_id, :lp_data)
      {:status => status.to_s, :status_message => status_message,
       :lp_status_code => lp_status_code, :lp_status_message => lp_status_message,
       :lp_data => lp_data}

The loan view to fill in the details is as below:

<%= form_tag credit_request_path, :method => :post, :id => "credit-request" do %>
   your form fields
   <%= submit_tag 'Submit Credit Request', :id => 'apply-credit'%>
<% end %>

The controller code which handles the loan submit is as below:

def credit_request
  job_id = LoanPath::Application.perform_async("apply_credit", params)
  render :json => {:job_id => job_id, :current_url => loan_url(params[:id])}

def check_job_status
  job_status =[:job_id])
  @application_saved = "Error"
  if job_status[:lp_status_code].to_s == "LpValid"
    @application_saved = "Success"
    @message = "Credit Application Submitted Successfully."
  elsif job_status[:lp_status_code].to_s == "LpError"
    @message = job_status[:lp_status_message].present? ?
               job_status[:lp_status_message] :
               "System encountered error, please try later"
  render :json => {:status => job_status[:status], :redirect_to => params[:current_path], :message => @message, :application_status => @application_saved}

Now the most important part: the js code which submits the form through ajax and keeps checking the status is as below.

$('#apply-credit').click(function (e) {
    if ($('#accept_tc').is(':checked')) {
        var intervalId = '';
            url: "/loans/credit_request",
            data: $("#credit-request").serialize(),
            dataType: "json",
            type: "POST",
            success: function (job) {
                intervalId = setInterval(function () {
                    checkStatus(job, intervalId);
                }, 3000);
        return false;
    } else {
        alert('Please read and accept the terms, above, before submitting credit request.');
        return false;

function checkStatus(job, intervalId) {
        url: "/check_job_status",
        data: {
            job_id: job.job_id,
            current_path: job.current_url
        type: "get",
        dataType: "json",
        success: function (jobStatus) {
            if (jobStatus.status === 'complete' || jobStatus.status === "failed") {
                if (jobStatus.application_status === "Success") {
                    window.location = jobStatus.redirect_to;
                } else {

That's all! Now Heroku will not complain about the timeout error.


multiple project using sidekiq on same machine

In this post I have explained how to install, configure, start and stop sidekiq. You can start sidekiq from the terminal with the below command in your project root:

$ bundle exec sidekiq -e staging -d -L log/sidekiq.log

The -e option sets the environment (staging in this case), the -d option makes sidekiq start as a daemon, i.e. it will keep running even after you close the terminal, and the -L option sets the log file path. You can get the list of all available options with -h as below:

$ sidekiq -h
-c, --concurrency INT        processor threads to use
-d, --daemon                 Daemonize process
-e, --environment ENV        Application environment
-g, --tag TAG                Process tag for procline
-i, --index INT              unique process index on this machine
-p, --profile                Profile all code run by Sidekiq
-q, --queue QUEUE[,WEIGHT]…  Queues to process with optional weights
-r, --require [PATH|DIR]     Location of Rails application with workers or file to require
-t, --timeout NUM            Shutdown timeout
-v, --verbose                Print more verbose output
-C, --config PATH            path to YAML config file
-L, --logfile PATH           path to writable logfile
-P, --pidfile PATH           path to pidfile
-V, --version                Print version and exit
-h, --help                   Show help

O.K., so we know how to start sidekiq with all the possible options. But what happens if you start sidekiq on two or more projects on the same machine? In my case my staging server hosts two projects, and both of them need sidekiq as a background worker, so I moved to each project folder and started sidekiq on both:

ThirdPillar]# bundle exec sidekiq -e staging -d -L log/sidekiq.log
Barcelona]# bundle exec sidekiq -e staging -d -L log/sidekiq.log

Now I checked the running instances of sidekiq:

ps -ef | grep sidekiq
root      3823     1  1 03:03 ?  00:00:38 sidekiq 2.12.0 ThirdPillar [0 of 25 busy]
root      4116     1  1 03:04 ?  00:00:35 sidekiq 2.12.0 Barcelona [0 of 25 busy]

So I can see that sidekiq has started for both the projects. But when I tried to load my applications, all the pages which involve background processing started breaking; so sidekiq was not working. If I killed one sidekiq, the other started working. It is obvious that the multiple sidekiq daemons were conflicting with each other.

Now let us understand why there is a conflict:

=> sidekiq uses redis as its data store; basically it stores the background jobs in redis.

=> One machine usually runs a single redis server, but redis can have multiple databases on it.

=> Unless you provide a separate redis configuration for sidekiq, it will work with the default, i.e. it will try to use redis on the local machine itself and database number 0 on it.

So the problem is that both the sidekiq instances you started are using the same redis server and the same database number 0, causing the conflict, and neither of them works. So now we have two solutions:

Solution 1: configure each sidekiq to point to a different redis server.

Solution 2: configure both sidekiqs to use the same redis server available locally, but make them use different databases and also separate them with namespaces.

We will go for solution 2. Let us try something with redis on the console before changing the sidekiq configuration.

>  # no options passed
=> #<Redis client v3.0.4 for redis://>  # database number 0 being used
> 1)  # database 1 is passed
=> #<Redis client v3.0.4 for redis://>  # database number 1 being used
> 4)
=> #<Redis client v3.0.4 for redis://>  # database number 4 being used

Here you can see that your redis server is running at, and the number after the / is the database number.

So now we can introduce the below line in the config/initializers/sidekiq.rb file of each project:

config.redis = { :namespace => 'ThirdPillar', :url => 'redis://' }

config.redis = { :namespace => 'Barcelona', :url => 'redis://' }

The namespace can have any value, but as a convention I have kept the name the same as the folder name of the project. So Barcelona will use database 1 of the redis server at redis:// The modified configuration file for Barcelona looks as below.


require 'sidekiq'
require 'sidekiq-status'

Sidekiq.configure_client do |config|
  config.client_middleware do |chain|
    chain.add Sidekiq::Status::ClientMiddleware
  config.redis = { :namespace => 'Barcelona', :url => 'redis://' }

Sidekiq.configure_server do |config|
  config.server_middleware do |chain|
    chain.add Sidekiq::Status::ServerMiddleware, expiration: 30.minutes # default
  config.redis = { :namespace => 'Barcelona', :url => 'redis://' }

Make a similar change in the sidekiq.rb of the other project, with its own namespace and database number. Restart sidekiq on both the projects and both will start working 🙂



Upgrading ruby version with RVM

One of my applications is running on Heroku. Recently I got an email from Heroku explaining a possible threat in existing ruby versions. It says:

You are receiving this email because you run at least one Ruby (MRI) application on Heroku.
Early this morning, the Ruby project announced a security vulnerability in MRI 1.8.7, 1.9.2, 1.9.3, 2.0.0. The CVE identifier is CVE-2013-4164. Rubinius and JRuby are unaffected.
We believe this is limited to a denial of service vulnerability. Any Ruby application that parses JSON from an untrusted source can potentially be made to crash with little difficulty. There is also a slim theoretical possibility of a much more serious vulnerability, an Arbitrary Code Execution. We would like to stress that there are no known Proofs of Concept and this is purely theoretical, but can not be ruled out.
In response, we have released Ruby 1.8.7p375, 1.9.2p321, 1.9.3p484 and 2.0.0p353 which closes this attack vulnerability. Please upgrade as soon as possible .

The upgrade on Heroku takes place automatically when you deploy any change. To see what ruby version Heroku is using for your application, run the below command:

$ heroku run "ruby -v" -a APPNAME # it will show the current ruby

To upgrade the ruby version on heroku, just make an empty commit, so that Heroku triggers a new deploy and updates the version itself:

$ git commit --allow-empty -m "upgrade ruby version"
$ git push heroku master

Anyway, our main goal here is to upgrade ruby on any other server or local machine which is using RVM. You can do it with the below simple steps.

STEP 1 : check the current ruby used by your machine

$ ruby -v
ruby 1.9.3p125 (2012-02-16 revision 34643) [x86_64-linux]

O.K., so we are using 1.9.3p125, i.e. patch 125 of ruby, which is vulnerable as per the findings in the email above. We need to upgrade it to patch 484, which has the security fix.

STEP 2: Check the current ruby versions supported by your RVM.

$ rvm list known
# MRI Rubies

O.K., so it does not show patch 484, so you need to upgrade your RVM first.

STEP 3: Upgrade RVM to the current stable version.

$ rvm get stable

STEP 4: Again check the current ruby versions supported by your RVM.

$ rvm list known
# MRI Rubies

So now our RVM has the currently released patches for all the versions.

STEP 5: Upgrading the ruby version.

$ rvm upgrade 1.9.3-p125 1.9.3-p484 # upgrade the current version 1.9.3-p125 to 1.9.3-p484; in fact you can upgrade to any version
Are you sure you wish to upgrade from ruby-1.9.3-p125 to ruby-1.9.3-p484? (Y/n): # press Y

Are you sure you wish to MOVE gems from ruby-1.9.3-p125 to ruby-1.9.3-p484?
This will overwrite existing gems in ruby-1.9.3-p484 and remove them from ruby-1.9.3-p125 (Y/n): y # press Y
Moving gemsets…
Moving ruby-1.9.3-p125 to ruby-1.9.3-p484
Making gemset ruby-1.9.3-p484 pristine….

It may take 10 to 15 minutes depending on your connection.


=> Keep pressing Y whenever it asks you. Press n only if you want to configure something yourself, but I suggest going with the defaults, as they worked smoothly for me.

=> If you have installed passenger on the server with the passenger gem, you need to reinstall it, as your gemset location has changed from ruby-1.9.3-p125 to ruby-1.9.3-p484.




passenger.conf file location on nginx or apache

Recently, we observed a strange behaviour on the staging server (nginx + passenger) we had created for testing. We found that when a set of 10 or 15 users tried to access the home page one after the other, the first user took relatively more time (50 seconds against the average of 10 seconds). We got a clue about the problem here on stackoverflow.

PROBLEM: Passenger spawning a new Rails instance for some requests, causing high load time for those requests.

The problem is that our site remains idle for a long time, since we use it only for testing. The default passenger configuration for passenger_pool_idle_time allows an idle time of 300 seconds, after which passenger shuts the application process down. It restarts the rails instance when a new request comes in, so it reloads all the configuration, database.yml, initializers etc., which results in more loading time for the first visiting user. Since the Rails instance got restarted, subsequent users visiting the site see the normal time of 10 seconds.

SOLUTION: set passenger_pool_idle_time to 0.

This makes passenger never shut the application down, even if the server is idle for a long time. It costs you memory, as inactive processes never get killed.

So if you are aware of the traffic on your site, you can adjust it accordingly. Say one user visits your site every 10 minutes; you can then set the value to 2×10, i.e. 20 minutes:

passenger_pool_idle_time 1200 # the time should be an integer, in seconds

You can get the complete list of passenger configuration options here. Now the question is: where do we write all these available passenger options? The answer: all passenger configuration goes in the nginx.conf or apache.conf file itself.

Depending on the way you have installed apache or nginx, the respective conf file may be present at /etc/nginx/nginx.conf, /opt/nginx/conf/nginx.conf or /usr/local/nginx/conf/nginx.conf (the common install locations).

The path pattern remains the same for apache.conf also. So we have hold of the respective conf file. Now you can add the passenger related configuration in the below two ways.

METHOD 1: Add the passenger configuration in the nginx.conf file directly.

You can add passenger configuration in the http, server or location block. I have added it to the http block:

worker_processes  4;
events {
    worker_connections  1024;

worker_rlimit_nofile    1200;

http {
      passenger_root /usr/local/rvm/gems/ruby-1.9.3-p125@new_customer_portal/gems/passenger-3.0.17;
      passenger_ruby /usr/local/rvm/wrappers/ruby-1.9.3-p125@new_customer_portal/ruby;
      passenger_pool_idle_time 0;

      include       mime.types;
      default_type  application/octet-stream;
      underscores_in_headers on;

      sendfile        on;
      keepalive_timeout  65;

      server {
            listen       80;
            root "/var/www/";
            passenger_enabled on;
            rails_env staging;
            error_page   500 502 503 504  /50x.html;

            location = /50x.html {
                   root   html;

            index  index.html index.htm;
The first three passenger_* lines are the passenger configuration added to nginx.conf. But as your passenger configuration grows, it becomes cumbersome to maintain. It is better to write all the passenger configuration in a passenger.conf file and include it in the nginx.conf file. We will do that in the next method.

METHOD 2: Add the passenger configuration in a passenger.conf file and include it in the nginx.conf file.

Create a passenger.conf file in the same folder where the nginx.conf file is present, i.e. /etc/nginx/conf in my case (you may create it anywhere, but this is the convention), and add the passenger related configuration to it:

      passenger_root /usr/local/rvm/gems/ruby-1.9.3-p125@new_customer_portal/gems/passenger-3.0.17;
      passenger_ruby /usr/local/rvm/wrappers/ruby-1.9.3-p125@new_customer_portal/ruby;
      passenger_pool_idle_time 0;

Now include this file in the nginx.conf file. So the configuration from Method 1 will now look like:

worker_processes  4;
events {
    worker_connections  1024;

worker_rlimit_nofile    1200;

http {
      include /etc/nginx/conf/passenger.conf; # it loads the passenger.conf file

      include       mime.types;
      default_type  application/octet-stream;
      underscores_in_headers on;

      sendfile        on;
      keepalive_timeout  65;

      server {
            listen       80;
            root "/var/www/";
            passenger_enabled on;
            rails_env staging;
            error_page   500 502 503 504  /50x.html;

            location = /50x.html {
                   root   html;

            index  index.html index.htm;

REFERENCES: the passenger documentation, passenger with nginx, passenger with apache.