breaking into the unknown…


ArgumentError: invalid byte sequence in US-ASCII

I have automated my Rails deployment with Capistrano, and it was working fine. Then I added a new gem, useragent, to my Gemfile as below:

gem 'useragent', '0.2.3', :git => ""

I committed the code, and when I tried to deploy again, I got the error below:

2013-09-17 16:50:35 executing `bundle:install'
    [root@] executing command
 ** [out :: root@] ArgumentError: invalid byte sequence in US-ASCII
 ** [out :: root@] An error occurred while installing useragent (0.2.3),
 ** [out :: root@]  and Bundler cannot continue.
 ** [out :: root@] Make sure that `gem install useragent -v '0.2.3'`
    succeeds before bundling.

The issue is discussed at length here on the bundler issue tracker.

The reason is that, by default, Ruby evaluates files as US-ASCII. The useragent gem contains some non-ASCII characters, so it throws the error. It can be solved by setting the default encoding to UTF-8, which supports a much wider range of characters. The solution is to add the lines below at the top of the Gemfile:

if RUBY_VERSION =~ /1.9/ # assuming you're running Ruby ~1.9
  Encoding.default_external = Encoding::UTF_8
  Encoding.default_internal = Encoding::UTF_8
end
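A quick way to see what the snippet changes is to inspect Ruby's encoding defaults directly. A minimal sketch (run it in irb or as a standalone script):

```ruby
# On a machine whose locale is unset (common on bare deploy servers),
# default_external can come out as US-ASCII; print it to check.
puts Encoding.default_external

# The same assignments the Gemfile snippet makes:
Encoding.default_external = Encoding::UTF_8
Encoding.default_internal = Encoding::UTF_8

puts Encoding.default_external  # => UTF-8
```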


pagination missing rails_admin view

Recently, I integrated the current version of rails_admin (0.4.9) and found that everything worked fine except the pagination links, which did not appear on any of the rails_admin index views. I tried hard to google a solution, but failed, and finally posted my problem here on Stack Overflow. Since no answer was forthcoming, I tried to dig deeper into the problem myself. This is how I proceeded.

First things first: check the code that renders the rails_admin index view. My target file is rails_admin-0.4.9/app/views/rails_admin/main/index.html.haml of the rails_admin gem. On first look I spotted the problem: they have put the pagination inside an if/else condition, in this recent commit:

- if @objects.respond_to?(:total_count)
  ..... pagination code .....
- else
  ..... without pagination code .....


In fact, this change resulted in a known issue for rails_admin, reported here. The thread also has the solution. So what is happening due to the above change? The code checks whether the object has a total_count method defined on it or not. The gem owner introduced this change to prevent an “undefined method total_count” error. Unfortunately, it solves the error at the cost of skipping pagination. So now we know the cause. The solution is to define a total_count method on the object. The objects here are returned by the will_paginate gem, so we are going to add the total_count method to it through monkey patching. Monkey patching is nothing but adding new methods, or overriding existing ones, while maintaining the same hierarchy of classes and modules in your application. To solve our problem, we will monkey patch the WillPaginate module as below.

Create a file with any name, say will_paginate_patch.rb, in the /config/initializers folder of your app, and add the lines below to it:

if defined?(WillPaginate)
  module WillPaginate
    module ActiveRecord
      module RelationMethods
        def per(value = nil); => nil, :per_page => value); end
        def total_count(); count; end
      end
    end
    module CollectionMethods
      alias_method :num_pages, :total_pages
    end
  end
end

So, if you look at the /lib/will_paginate/active_record.rb file of the will_paginate gem, you can see that it already has per_page and count methods defined, but there is no total_count or per method in it. The above patch adds these methods and solves our problem.


cannot load such file — safe_yaml

Everything was working fine on the staging server and testing was complete, so we decided to move ahead with deploying our product to production. Heroku was chosen as our production server. We made some changes needed by Heroku, and some others for our own foreseen benefits. Some important changes were:

=> Added the pg and unicorn gems under the production group in our Gemfile

=> Added a Procfile in the root of our project to start our worker and unicorn server

=> Added some configuration files for the unicorn server

=> Some other changes in our environment files

The code was committed and pushed to git… Heroku was up and running… yepieeee

Next day… I logged in to the staging server, pulled the current code from git, and restarted the staging server… BOOM… the server crashed with the message below.

safe_yaml load error

The error message is self-explanatory: “safe_yaml file is not getting loaded”. When you start your application server, it tries to load all the gems and their dependency gems. Here, it is failing to load the safe_yaml gem. Well, the solution is simple then: just install the safe_yaml gem and it should work. But what the hell… the safe_yaml gem is already installed.
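What the server is hitting is an ordinary LoadError, and the same failure can be reproduced in isolation (the gem name below is deliberately fake):

```ruby
# require raises LoadError when no file for the given name can be found on
# the load path -- exactly the "cannot load such file" seen at boot.
begin
  require "some_gem_that_is_not_installed"
rescue LoadError => e
  puts e.message  # => cannot load such file -- some_gem_that_is_not_installed
end
```

So the question is not whether the gem is installed, but whether its files are on the load path the server is using.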

$ bundle show safe_yaml # it lists the path of the gem, so it is already installed

I struggled the whole day trying to trace the error back to the changes introduced the previous day, thinking the staging server was fine before yesterday. I read up and down the Procfile, the unicorn config and the pg gem for possible clues about the error… but nothing helped. O.K… so time to follow the old saying… 🙂

“If you get LOST in your journey….go back to the place you have started”

If you read the error backtrace properly, you will always get a clue. So first things first: what is the error message? The safe_yaml file is not loading. Well, that I knew from the beginning. What I was doing wrong in my approach was linking it to pg, unicorn and the Procfile added one day back, since everything was fine before that. But no, they do not need safe_yaml. If you look at the log, you will find that it is actually the rails_admin gem trying to load it, as the backtrace shows.

Check the path of rails_admin:

$ bundle show rails_admin

So, you can see that safe_yaml and rails_admin are installed at different locations, and rails_admin tries to load safe_yaml relative to its own path.
In production mode, we generally use the command below for bundle install, which pulls all the gems into vendor:

$ bundle install --deployment # this will store all gems in the vendor/bundle/ruby/version_no folder of your project. If you simply use bundle install, they will be installed in your rvm folder

The locations of the two gems differ due to the way rails_admin is included in the Gemfile and the way bundler installs them: when a gem is fetched from a gem source, it is installed to the gems folder, and if from some specific git repository, it gets installed to the bundler/gems folder.
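You can ask RubyGems directly where a gem was installed. A minimal sketch ("json" is just an example of a gem that ships with Ruby; compare the output for rails_admin and safe_yaml to see the mismatch):

```ruby
require "rubygems"

# full_gem_path is the on-disk location RubyGems/Bundler resolved for the gem.
spec = Gem::Specification.find_by_name("json")
puts spec.full_gem_path
```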

Now, since I wanted some jQuery-related fixes for rails_admin that were present in the git master branch but not yet published as a gem, I provided the git URL as the source of the gem:

gem 'rails_admin', :git => "" # the bundle install command will install rails_admin to the /bundler/gems folder of rvm (or the vendor folder of the project), and since safe_yaml is one of its dependencies, it will get installed to the gems folder of RVM.

SOLUTION: My problem was solved by removing the git source from rails_admin, letting it be fetched from the gem repository, so that both rails_admin and its dependency get installed in the same folder.

gem 'rails_admin'

Run bundle install, commit the Gemfile and Gemfile.lock, pull the code on the staging server, restart the server… huh… it's UP again


fatal: The remote end hung up unexpectedly in heroku

I have explained deploying a Rails app on Heroku in this post. I got the error below while trying to push the code to Heroku.

$ git push heroku master # command to push code to heroku, got below error message on console
Permission denied (publickey).
fatal: The remote end hung up unexpectedly

From the message, we can see that there is some problem with the public key: the public key of your system should be present in your Heroku account.

This key gets added when you log in to Heroku from your development machine for the first time, as below:
$ heroku login
Enter your Heroku credentials.
Password (typing will be hidden): **************
Could not find an existing public key.
Would you like to generate one? [Yn]
Generating new SSH public key.
Uploading ssh public key /Users/adam/.ssh/

Just press enter; the public key is needed to push your code later on. Here, a new public key is created and uploaded to your Heroku account. But in my case, since my system already had a public key, my terminal log looked like below.

$ heroku login
Enter your Heroku credentials.
Password (typing will be hidden): **************
Authentication successful.

PROBLEM: In my case, I already had a public key, but it was not uploaded to my Heroku account, hence the above error.

SOLUTION: upload your public key to your Heroku account.

It can be done with the command below:
$ heroku keys:add # it will upload your key as you can see in the terminal message
Found existing public key: /home/arun/.ssh/
Uploading SSH public key /home/arun/.ssh/… done

If you are not aware of a certain Heroku command, you can find the details with the steps below. In fact, I did not know the command to add the keys myself, but found it this way.

$ heroku # it will list all available commands
Usage: heroku COMMAND [--app APP] [command-specific-options]

Primary help topics, type "heroku help TOPIC" for more details:

  addons    #  manage addon resources
  apps      #  manage apps (create, destroy)
  auth      #  authentication (login, logout)
  config    #  manage app config vars
  domains   #  manage custom domains
  logs      #  display logs for an app
  ps        #  manage processes (dynos, workers)
  releases  #  manage app releases
  run       #  run one-off commands (console, rake)
  sharing   #  manage collaborators on an app

Additional topics:

  account      #  manage heroku account options
  certs        #  manage ssl endpoints for an app
  db           #  manage the database for an app
  drains       #  display syslog drains for an app
  fork         #  clone an existing app
  git          #  manage git for apps
  help         #  list commands and display help
  keys         #  manage authentication keys
  labs         #  manage optional features
  maintenance  #  manage maintenance mode for an app
  pg           #  manage heroku-postgresql databases
  pgbackups    #  manage backups of heroku postgresql databases
  plugins      #  manage plugins to the heroku gem
  stack        #  manage the stack for an app
  status       #  check status of heroku platform
  update       #  update the heroku client
  version      #  display version
Now, to learn more about any of the commands, say keys, we can get further detail with the --help option:

$ heroku keys --help # it will tell you all options available for keys and how to use them
Usage: heroku keys

display keys for the current user

 -l, --long  # display extended information for each key

Additional commands, type "heroku help COMMAND" for more details:

  keys:add [KEY]   #  add a key for the current user
  keys:clear       #  remove all authentication keys from the current user
  keys:remove KEY  #  remove a key from the current user


System start/stop links for /etc/init.d/nginx already exist

I have been trying to install nginx with Passenger. Things worked fine until I copied the nginx init.d script to /etc/init.d and tried to run update-rc.d; then I got this terrible message: System start/stop links for /etc/init.d/nginx already exist.

This is the update-rc.d command:

$ sudo /usr/sbin/update-rc.d -f nginx defaults
 System start/stop links for /etc/init.d/nginx already exist.

SOLUTION: remove the already existing links… simple 🙂

I know what is happening here. Actually, I tried to install nginx from source a few months back, but got into some problem and uninstalled it. Somehow, though, the system start/stop links for /etc/init.d/nginx remained there, and they are now causing the problem. Well then, how to remove the old links? As usual, to know more about any command, run it with the --help option and it will give you a good amount of information; see it yourself below.

$ sudo /usr/sbin/update-rc.d --help
usage: update-rc.d [-n] [-f] <basename> remove
       update-rc.d [-n] <basename> defaults [NN | SS KK]
       update-rc.d [-n] <basename> start|stop NN runlvl [runlvl] […] .
       update-rc.d [-n] <basename> disable|enable [S|2|3|4|5]
        -n: not really
        -f: force

So, the first line gives the syntax to remove any file from update-rc.d; in our case it is the nginx file:

$ sudo update-rc.d -f nginx remove # the -f option removes the links forcefully
 Removing any system startup links for /etc/init.d/nginx …

Now that we have gotten rid of the old nginx links, let us add the new ones:

$ sudo /usr/sbin/update-rc.d -f nginx defaults
 Adding system startup for /etc/init.d/nginx …
   /etc/rc0.d/K20nginx -> ../init.d/nginx
   /etc/rc1.d/K20nginx -> ../init.d/nginx
   /etc/rc6.d/K20nginx -> ../init.d/nginx
   /etc/rc2.d/S20nginx -> ../init.d/nginx
   /etc/rc3.d/S20nginx -> ../init.d/nginx
   /etc/rc4.d/S20nginx -> ../init.d/nginx
   /etc/rc5.d/S20nginx -> ../init.d/nginx

Well, it works fine… so we are done 🙂


requested resource (/openam/json/users/) is not available

I am experimenting with OpenAM (installation discussed here) as a solution to my single sign-on (SSO) need, when I got the error below while making API calls to its REST services. The details of all its REST services are available here.

The requested resource (/openam/json/users/) is not available

This is very strange, as I am making the calls exactly as specified in their documentation here. Some of the base URLs they mention as examples are as below: # for creating users # for updating a user called bjensen

The documentation expects JSON data in the payload.

So I am doing everything right, but strangely the error keeps occurring. From the error itself, I can figure out that the problem is with the REST API URL they have documented, i.e., the URL itself is not valid. But now what can I do? Maybe I am using an older version of OpenAM? But that is not the case, as I have downloaded and installed the latest version, 10.1.3.

I pulled my hair for hours before deciding to use their older API, available here.

Strangely, it worked. It specifies the REST URLs as below: # for user creation # for user update # for user deletion

SOLUTION: Use the old REST API of OpenAM instead of the current one. The legacy API is documented here. I think they released the documentation before releasing the version that supports it. So use the older API and it will work fine.


browser not loading any web page

Recently, I got into a strange problem: none of the browsers on my Ubuntu system (Firefox, Chrome, Internet Explorer) would load any web page; even would not load. I tried to ping google from the terminal, but still no result. This problem generally occurs if your internet connection is down. But my internet was up, as I could see the connection icon up there on the right-hand side of my screen; in fact, I could log in to Skype and chat with my friends. So, no doubt, the problem was not with the internet connection. Then what the hell was going on here…???

I uninstalled and reinstalled Firefox, but the issue remained the same. Looking back, I realised that everything was working when I left the office at night, so it must have had something to do with the VPN connection I was using last night; maybe it was the culprit. Finally, I got the solution after googling for hours.

SOLUTION: delete the /etc/resolv.conf file and create it again. Restarting the browser will solve the problem.

You'd better back up the file before deleting it, as you may need to look at it again if someone has made some custom change to it and wants you to restore things later.

arun@arun-yadav:~$ sudo su - # you need to be root to edit files in /etc
[sudo] password for arun: # your password
root@arun-yadav:~# cd /etc/ # move to the /etc folder
root@arun-yadav:/etc# cat resolv.conf # let us see the content of this file
root@arun-yadav:/etc# rm resolv.conf # delete the file
root@arun-yadav:/etc# touch resolv.conf # create the file again
root@arun-yadav:/etc# cat resolv.conf # the content after restarting the browser: it is empty

Recently, I was on a system with Ubuntu 15.1. There, resolv.conf is not a normal file but a symlink pointing to /run/resolvconf/resolv.conf, so unlike on older versions, after deleting the /etc/resolv.conf file we need to recreate it as a symlink pointing to /run/resolvconf/resolv.conf:

root@arun-yadav:/etc# ln -s  /run/resolvconf/resolv.conf resolv.conf

If the problem still exists, check the content of the /etc/hosts file. In my case, it looked as below.

# BEGIN hosts added by Network Connect
# END hosts added by Network Connect
       localhost       rorexpert

You can see at the top that these lines were added by Network Connect. They get deleted automatically when you log out of the VPN; since you shut down the system without logging out, or the VPN crashed, the lines above failed to be deleted. So delete them, so that the file looks as below:

       localhost       rorexpert

Now restart the network with the command below.

sudo /etc/init.d/networking restart

Close the browser and start it again. It will work now.


PROBLEM: Internet up, Skype working, but browser not loading any web page
The content of the resolv.conf file is added by Juniper, which provides the VPN (virtual private network) for the financial website I am working on. When I started Juniper and looked at the file again, I saw both entries again:
root@arun-yadav:/etc# cat resolv.conf

Now, I log out of Juniper and look at the file again:

root@arun-yadav:/etc# cat resolv.conf # it prints nothing

I read about the resolv.conf file here and tried to make some sense of the problem over which I had been pulling my hair. I do not know exactly, but can make out the explanation below.

When you request a page, say, the name server (often known as DNS) configured on your system resolves to its IP address. Generally, if nothing is specified in the resolv.conf file, the nameserver is assumed to be the one running on localhost at port 53. But if you define nameservers in the resolv.conf file, they will be used as the nameservers for your system. In fact, if you give multiple nameservers in resolv.conf, they will be tried in that order, i.e., if the first nameserver is not able to resolve the URL into an IP, the next nameserver in the list will try. So you should mention the most reliable nameserver at the top.

Below are my system's nameservers before logging in to the Juniper VPN and after logging in to it.

arun@arun-yadav:~/Projects/third-pillar$ nslookup localhost # not logged in to the VPN, so my DNS is on localhost at port 53

Non-authoritative answer:
Name:    localhost

arun@arun-yadav:~/Projects/third-pillar$ nslookup localhost # logged in to the VPN, so the DNS has changed

Non-authoritative answer:
Name:    localhost

Now, we have seen above that when we start Juniper it introduces a nameserver in the resolv.conf file, and when we log out, the entry gets removed. But last night I switched off the system without logging out of Juniper, so the entry remained there. In the morning, when I tried to access a web page, the system tried to use Juniper's DNS, which obviously would not work as I was not connected to it; nor could I connect, as the Juniper URL itself would not resolve. This resulted in a deadlock, which was removed only by deleting the /etc/resolv.conf file and recreating it.

Anyway… thank God it got sorted out 🙂