codedecoder

breaking into the unknown…



can’t mass-assign protected attributes

This is a built-in security feature of Rails; more detail is available here. First, let us understand mass assignment. In simple words, it means trying to assign values to multiple attributes at a time. This generally happens when you assign values to an object from params. Let us consider the example below.

Let our User model have the fields name, age, dob, city and role.

class User < ActiveRecord::Base # this is our model

end

class UsersController < ApplicationController

def new

@user = User.new

end

def create

@user = User.new(params[:user]) # here you are trying to assign all the attributes of the user from the matching values present in params, so this is treated as mass assignment

@user.age = params[:user][:age] # this is treated as simple assignment, as you are assigning it explicitly

@user.role = "admin" # this is also simple assignment

@user.save # this flow will fail with the mass assignment error for name, dob and city, as they were mass-assigned from params above

end

end

SOLUTION :

Whatever fields you want to expose to users, i.e. want to accept from users in the form of params, should be declared as attr_accessible in the model. Thus we will define the fields name, city, dob and age as attr_accessible in our User model. We will leave role out of this list, as we do not want the user to set the role. You can see that role in the create method is not taken from params but assigned directly.

class User < ActiveRecord::Base # this is our model
attr_accessible :name, :age, :city, :dob
end

NOTE : This is known as whitelisting of attributes, i.e. you list all the attributes which you think are safe to expose to users.

REASON BEHIND THE NEW APPROACH IN RAILS3:

The main purpose of preventing mass assignment is to stop a user from changing unauthorized fields. In the above case, if mass assignment protection were not supported by Rails, a user could have edited the form he was filling, added a new text field for role, passed the value admin in it and, without your knowledge, become an admin of your application. Thus, to prevent this, mass assignment security is part of Rails; only the way it is implemented differs between the older versions and the versions after Rails 3. The two approaches are:

BLACKLISTING OF ATTRIBUTES :

It is done with the attr_protected keyword, so we can write our above model code as

class User < ActiveRecord::Base # this is our model
attr_protected :role
end

It means that only this attribute is protected, i.e. can't be mass-assigned, while all others are open to the user. So it secures the role field, as it has to be assigned explicitly. The problem with this is that you often introduce new fields to your model, and if you forget to add one to the attr_protected list it becomes available to the user for tampering, however sensitive it may be. This flaw is corrected with whitelisting in Rails 3.

WHITE LISTING OF ATTRIBUTES :

It is achieved with the attr_accessible keyword, i.e. only the attributes in this list are open to the user; all others can't be mass-assigned. Thus, it is more secure: you keep adding the attributes which you feel are safe, while all others remain protected by default.
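To see this in action, here is a minimal Rails 3 console sketch, assuming the User model above and a strict mass assignment sanitizer (new Rails 3.2 apps enable this in development and test; with the default sanitizer, Rails only logs a warning and drops the protected attribute instead of raising):

user = User.new(:name => "Arun", :role => "admin") # role is not whitelisted
# => ActiveModel::MassAssignmentSecurity::Error: Can't mass-assign protected attributes: role

user = User.new(:name => "Arun") # fine, name is whitelisted
user.role = "admin" # also fine, explicit assignment bypasses the mass assignment check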




bundle exec in rails

Bundler maintains a consistent environment for ruby applications. It tracks an application's code and the rubygems it needs to run, so that an application will always have the exact gems (and versions) that it needs to run. This is the definition of Bundler available on its site. The site already has all the details in a concise way; I am just paraphrasing them here with my own observations.

First thing: it is a really good gem to manage all your gem dependencies as well as their consistency, so it is better to install it at the start of your project. If you are using RVM, it automatically installs it for you under the global gemset. The global gemset gets created for every ruby version you install. The default gems created within the global gemset are as below.

$ gem list # listing all the gems within the global gemset

*** LOCAL GEMS ***

bundler (1.1.5)
rake (0.9.2.2)
rubygems-bundler (1.0.3)
rvm (1.11.3.5)

Now, any gemset you create inherits all the gems within the global gemset, so basically you do not need to install them again. So let us say our bundler is working.
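As a quick check of this inheritance (assuming RVM is installed; the gemset name test is hypothetical):

$ rvm gemset create test # create a fresh gemset
$ rvm gemset use test # switch to it
$ gem list # bundler, rake etc. from the global gemset are already present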

So, what is the problem ..???

The problem is that if you have multiple versions of some executable (a gem binary used to run other files), like rake, rspec, cap etc., in your gemset, it will throw an error due to the conflict. The common one, for rake, is below:

$ rake db:migrate # any command with rake will fail with this error
rake aborted!
You have already activated rake 10.0.3, but your Gemfile requires rake 10.0.2

When I checked my gemset, I found multiple versions of rake, as below:

rake (10.0.3, 10.0.2, 0.9.2.2)
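The conflict comes from the lockfile: bundler has resolved rake to 10.0.2 for this project, while the shell picks up the newest rake (10.0.3) in the gemset. A hypothetical Gemfile producing this situation would look like:

source 'https://rubygems.org'
gem 'rails', '3.2.11' # example version; rails itself depends on rake
gem 'rake', '10.0.2' # the version recorded in Gemfile.lock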

SOLUTION :

1 -> prefix bundle exec to all your executable commands # it will run the version associated with your Gemfile, thus removing the conflict

$ bundle exec rake db:migrate # it will run rake-10.0.2 associated with your gemfile

2 -> remove the gem causing the conflict # if you do not want to prefix bundle exec to your executable commands like rake, rspec etc., remove the conflicting gem

$ gem uninstall rake -v 10.0.3

NOTE :

It is good practice to run all executable commands with the bundle exec prefix, which ensures that your command will always work. It is a must when you are writing deployment scripts with Capistrano, Mina etc., so that your deployment does not fail due to a conflicting executable.



Failed to build gem native extension

While running bundle install, many a time you encounter this error message: Failed to build gem native extension. What I understand from my observation is that these are the gems which have some dependency on your local resources. For example, when you install the Mysql gem, it does not really install Mysql for you but expects that the Mysql client and server are already installed on your system (you need to install them manually), and through its native extension it just configures your application with Mysql. To build these native extensions, the gem needs some predefined packages on your system which facilitate the build. So whenever you get this error, you just need to install the required package on your system. I will keep updating the dependencies for different gems whenever I come across them.

Below is the list of dependencies different gems need to build their native extensions, listed with their install commands.

1 -> Mysql

$ sudo apt-get install libmysqlclient-dev # will install libmysqlclient-dev

2 -> pg

$ sudo apt-get install libpq-dev # will install libpq-dev

3 -> gherkin

Here I did not install any dependency; installing the current version of the gherkin gem solved the problem. This is a known issue with the gherkin gem itself and has been fixed in the current release. More detail on this issue is available here.

4 -> therubyracer

When I ran bundle install, it tried to install therubyracer version 0.11.0 with a native extension, but failed to build it. Some details of a solution are available here on stackoverflow, but it didn't solve my problem, so I explicitly mentioned the older version 0.10.2 of therubyracer in my Gemfile, which does not need the native extension.

gem 'therubyracer' # removed this line, which installs the current version 0.11.0 and needs the native extension

gem 'therubyracer', '0.10.2' # added this line; it will install the older version which does not need the native extension, so it will work

5 -> passenger

You may get this error for passenger with below detail:

    /usr/bin/ruby extconf.rb
    mkmf.rb can't find header files for ruby at /usr/lib/ruby/ruby.h

So the ruby header files are missing here.

It can be solved by installing the ruby headers; they are present in different packages under different linux flavours.

for ubuntu use:

$ sudo apt-get install ruby-dev

for centos use:

$ sudo yum install ruby-devel

 6 -> nokogiri

The below packages are required for it: libxml2 and libxslt.

for ubuntu use:

$ sudo apt-get install libxml2-dev

$ sudo apt-get install libxslt-dev

for centos use:

$ sudo yum install libxml2-devel

$ sudo yum install libxslt-devel

 

7 -> pg

It gives an error like:

Can't find the 'libpq-fe.h' header

The pg dev packages below are required:

for ubuntu use:

$ sudo apt-get install libpq-dev

for centos use:

$ sudo yum install postgresql-devel

$ sudo yum install postgresql-libs

for mac use below:

$ brew install libpqxx

 

8 -> json

It throws the below error:

cannot load such file -- mkmf (LoadError)

The mkmf file is present in the ruby-dev package; installing it will solve the problem:

$ sudo apt-get install ruby-dev

9 -> capybara-webkit

For Ubuntu 14.04 and up, installing the below package will solve the problem:

$ sudo apt-get install libqt4-dev

For other OS see this link.



linux command with examples

I have been using Ubuntu for the last 5 years, doing a lot of things from the command line. Many of the day-to-day commands are now at my fingertips, but many more I have used are out of my mind and I have to google them back. So I have decided to note them down here for my future reference. I will keep adding to it whenever I get time or use a new command.

Note : 

A Linux command can take a number of options. These options can be passed in either shorthand or expanded form: with the shorthand form use a single hyphen (-) and with the expanded form use a double hyphen (--). Say, you can pass the help option to a command as -h or --help.

Some options do not need any argument, while others need an argument passed with them. You can pass all the options separately, but it is better to pass argument-less options grouped together and options with arguments separately. Say, $ mkdir -pv -m 777 myfolder is more concise than $ mkdir -p -v -m 777 myfolder. In the first case we have grouped -p and -v together as -pv. Since the -m option needs a further argument (here we have passed 777), we have kept it separate from -pv.

|, known as pipe, is very important for filtering results when used with grep. | basically passes the result of the first command as input to the next command. For example, ls | grep myfile will list only those files and folders which have myfile in their name. Here ls returns the list of all files and folders, which is passed to grep, which does the filtering.

1 -> history # will show you all the commands you have typed in the past

Syntax : history # it does not take any argument

Example:

$ history # will list all commands used in the past

$ history | grep netbeans # since we have piped the result of history to grep, it will return only those commands which contain the word netbeans in them

2 -> man # shows the complete detail of a command

syntax : man command_name

It is basically short for manual and will tell you details about the command you pass to it as an argument. It will explain what that particular command does, how it works, what options it takes and so on. Basically, if you remember the name of a command, you can get all its details with the man command.

Example

$ man man # it will tell about how to use man itself

$ man sudo # will tell you all about sudo command

$ man dpkg # will tell you all about dpkg command

3 -> help # shows the available options for a command

Syntax: command_name --help or command_name -h

Unlike man, it just tells you about the different options a command can take. Since most of the time we are more concerned with the available options than with the whole detail, it is used more often than the man command. Also, most software, say netbeans, comes with a help option which will tell you how to execute it. On the other hand, man works only with system commands.

NOTE : If you know the name of a command, you can get all its available options with -h or --help. Remember that for some commands -h does not work and you have to pass --help. For example, rm -h will not work; you have to use rm --help to know about rm options.

Example:

$ cd /netbeans-6.9.1/bin #move to netbeans bin directory

$ ./netbeans --help # you want to start netbeans and want to know the available options. It lists them all for you
General options:
  --help                          show this help
  --jdkhome <path>      path to Java(TM) 2 SDK, Standard Edition
  -J<jvm_option>        pass <jvm_option> to JVM

  --cp:p <classpath>    prepend <classpath> to classpath
  --cp:a <classpath>    append <classpath> to classpath

4 -> ls # Listing files and folders

Syntax : ls [options] # options are optional; if you do not pass any it will take the defaults. You can see the complete list of options with ls --help

Examples:

$ ls # will list all files and folders but not hidden ones

$ ls -a # will list hidden files and folders also

$ ls -B # will not list backup files ending with ~

$ ls -l # will do a complete listing with all details like owner, date, permissions etc., but will not show hidden files

$ ls -l -a -B # will do all the above

5 -> mkdir # creating a directory

Syntax: mkdir [options] directory_name # options are optional; if you do not pass any it will take the defaults. You can see the list of options with mkdir --help

Example:

$ mkdir myfolder

$ mkdir -p myfolder1/myfolder2/myfolder3 # the -p option tells it to create nested directories with a parent-child relation. Thus myfolder3 will be created in myfolder2 and so on

$ mkdir -p myfolder # the p option means parent: if the directory already exists, do not throw an error but use the existing directory as the parent

$ mkdir -pv -m 777 myfolder # it will not complain if the directory exists, will set the permissions to 777 and will also print each task it performs on the terminal

6 -> cd #changing directory

Syntax : cd directory_name # you can pass directory_name or .. or ~ to cd

Examples:

$ cd myfolder # takes you inside myfolder

$ cd .. # takes you up from the current folder to its parent folder

$ cd ~ # takes you out of every nested folder, back to your home directory

7 -> touch # it will create a file if not present; if the file is already present it will update its modification or access time depending on the argument you pass, but it will not overwrite the content

Syntax : touch [options] file_name # touch --help will show you all the available options. Without any option, you can create multiple files with a single touch command

Examples :

$ touch test1 #will create test1 file

$ touch test1 test2 test3 test4 # will create test2, test3, test4 as new files. Since test1 is already there, it will just update its access time

$ touch -a test1 # will change the access time of the test1 file

$ touch -m test1 # will change the modification time of the file test1

$ touch -am test1 # will change both the access and the modification time

$ touch -d '1 May 2005 10:22' test1 # with the -d option you can set any date you want, but maintain the format of the string: '1 May 2005 10:22' works, while '2005 1 May 10:22' will give an error. However, you can omit the time if you want

$ touch -d '1 May 2005' test1 # it will set only the date. If you do not pass the year it will take the current year, i.e. '1 May' will also work

$ touch -r test2 test1 # -r is the reference option, i.e. it will give test1 the same datetime as test2

8 -> rm # removes files or folders; by default it removes only files, so to remove a folder you need to pass additional options

Syntax: rm [options] file_or_directory_name # options are optional; if you do not pass any it will take the defaults. You can see the list of options with rm --help

Example :

$ rm my_file # will remove my_file

$ rm -f my_file # will not throw an error if my_file does not exist, i.e. it will ignore the removal

$ rm -fi my_file # will ignore a missing file and ask you to press Y before deleting, i.e. it becomes interactive

$ rm -r my_folder # the -r option must be passed to delete a folder, otherwise it will not be deleted

$ rm -rf my_folder # will delete the folder and will not complain if the folder does not exist

9 -> rmdir # it will delete empty directories

Syntax : rmdir [options] directory_name # it will delete all the empty directories whose names are passed to it. It will throw an error if a directory is not empty; for that case use rm as above

Examples:

$ rmdir dir1 dir2 dir3 # it will delete the empty directory dir1 dir2 dir3

$ rmdir -p dir1/dir2/dir3/dir4 # it will delete a parent also if deleting the child makes it empty: first dir4 will be deleted, then dir3, then dir2 and so on

10 -> mv # it will move a file or directory to another location. If the source and destination locations are the same, the file or directory gets renamed

Syntax: mv [options] source destination # if two files are provided as source and destination, the source will be renamed to the destination. If you pass more than two arguments and the last one is a directory, all the previous ones will be treated as sources and moved into it; if the last argument is not a directory, it will throw an error.

Examples :

$ mv myfile.txt myfile.rb # it will rename myfile.txt to myfile.rb, i.e. the name is changed.

$ mv -bvi myfile.txt myfile.rb # here we pass several options: b will back up the file before renaming, i will ask yes/no before renaming and v will print what is going on to the console

$ mv test1 test_folder myfile.rb /home/arun/my_folder # it will move the files test1 and myfile.rb and the folder test_folder to /home/arun/my_folder

$ mv * /home/arun/my_folder # it will move everything in the current folder to /home/arun/my_folder

11 -> grep # it is used to search for a pattern, specified by the user, in the given text or files, or in the block passed to it from another command through the pipe symbol |

Syntax: grep [options] pattern [files] # the complete list of options can be seen with grep --help. If you want to match more than one word, enclose the pattern within " "

$ grep "Linux is good" file1 file2 file3 # it will list all lines of these three files containing "Linux is good", each on a separate line, and each line will be preceded by the name of the file in which it appears. If you do not want the file name you can use the -h option as below

$ grep -h "Linux is good" file1 file2 file3 # it will not precede the matched lines with the file in which they are found

$ grep -i "Linux is good" file1 file2 file3 # it tells grep to ignore the case, i.e. "linux is good" will also match

$ grep "Linux is good" * # will search for the text "Linux is good" in all the text files in the current directory

$ grep -r "Linux is good" * # the -r option makes it search for the text "Linux is good" in all the text files recursively, in the current directory as well as all its child directories

$ grep -c "Linux is good" file1 file2 # it will not print each line but just return the count of matches in each file

$ grep -n "Linux is good" file1 file2 # it will also print the line number of each line containing the pattern in each file

$ grep -rni "Linux is good" * # will recursively search each directory, print the line numbers and ignore the case

$ ls -l | grep "foo" # it will list only those files and directories which contain the word foo

$ history | grep rails # will list only those commands from the history which contain the word rails

$ history | grep -in rails # will ignore case and print line numbers

12 -> less # can be used to see the content of a file. It displays large data in blocks and lets you scroll down; basically it keeps the screen at the first line and lets you scroll down. Without it, the screen would sit at the last line and you would have to scroll up to see the first line. You can exit by pressing q.

Syntax : less [options] file_name # you can see the complete list of options with less --help. Generally I use it to see the content of a file or a command's output from the top, and have not tried any options.

Examples:

$ less myfile.txt # it will show the content of the file from the top; scroll down to see more. Press q to exit

$ ps -ef | less # will show the result of the ps -ef command from the top

13 -> more # it can also be used to display the content of a file on the terminal.

syntax: more [options] file_name # by default it does not provide the scroll-down facility of less; it shows only as much content as fits on your screen. However, using different options you can set the number of lines you want to see, which will activate scrolling if the chosen lines do not fit on the screen

Examples:

$ more myfile # will show as much of the file's content as fits on the screen

$ more -100 +50 myfile # will show 100 lines of the file starting from line 50

$ more +/"Iam arun" -100 myfile # will show 100 lines of the file starting from the line which contains "Iam arun"

$ more -s -u -100 +50 myfile # s removes blank lines from the display and u removes underlines

$ ps -ef | more -10 +20 # will display 10 lines of the output of ps -ef starting from the 20th line

14 -> cat # it is used to create a file, read a file, or concatenate the content of many files

syntax : cat [options] [input_filename] [> or >>] [output_filename] # everything in [] is optional, i.e. cat can work without them

NOTE :

1 -> If you do not provide output_filename, the output will be printed on the terminal itself, i.e. the terminal is the standard output for the cat command

2 -> > and >> are both redirection operators, i.e. they write content to a file, but > will erase the file's existing content (if the file exists) before writing, while >> will append the new content to the existing content. So it is safer to use >> instead of >

Example:

$ cat >> test_file1 # it will create test_file1, and when you press enter it will take you to the next line where you can write to it. To exit, press Enter and then Ctrl+D
I have pressed enter on last line to reach here
again pressed eneter to come to next line and keep writing
All content Iam writing will get written to test_files1                                 
to exist writing we will press Enter followed by Ctrl D

$ cat test_file1 # it will show you the content of the file test_file1, so you can see on the terminal the content you have written
I have pressed enter on last line to reach here
again pressed eneter to come to next line and keep writing
All content Iam writing will get written to test_files1
to exist writing we will press Enter followed by Ctrl D

arun@arun-yadav:~$ cat -n test_file1 # the -n option will print the line numbers also
     1    I have pressed enter on last line to reach here
     2    again pressed eneter to come to next line and keep writing
     3    All content Iam writing will get written to test_files1
     4    to exist writing we will press Enter followed by Ctrl D

$ cat >> test_file2 # will create a new file
This is the second file # you added this line to test_file2

$ cat test_file1 test_file2 >> test_file3 # will create the file test_file3 and append the content of test_file1 and test_file2 to it

$ cat test_file3 # it shows the content of test_file3; you can see that this file contains the content of both files
I have pressed enter on last line to reach here
again pressed eneter to come to next line and keep writing
All content Iam writing will get written to test_files1
to exist writing we will press Enter followed by Ctrl D
This is the second file

15 -> echo # you can use it to write a string to a file or to the terminal itself

syntax: echo string [filename] # the file name is optional; if you do not pass it, the output will be printed to the terminal

Unlike cat, it is used to write a single string to a file or the terminal, i.e. you cannot write multiple lines as with the cat command

Example:

$ echo Iam Arun # it will print the output to terminal
Iam Arun
$ echo Iam Arun to test_file >> test_file # it will append the string to test_file; if test_file is not present, the >> operator will create it and then append to it
$ cat test_file # we will see the content of test_file
Iam Arun to test_file
$ echo Iam adding one more line >> test_file # one more line is appended
$ cat test_file # you can see both the line
Iam Arun to test_file
Iam adding one more line
$ echo this ">" operator is dangerous as it will erase and write > test_file # see the danger of using >: if the file exists, it will erase its content before writing
arun@arun-yadav:~$ cat test_file
this > operator is dangerous as it will erase and write # see, the earlier lines are erased and only a single line is present

16 -> pstree # will show all the running processes in a tree structure with their names and process ids

Syntax : pstree [options] [pid or username] # you can see the complete list of options with pstree --help. The argument is the user or process_id whose tree you want to see

Example:

$ pstree # will list all the running processes of all users

$ pstree root # will show all the processes running as the root user

$ pstree arun # will list all the processes running as the user arun

$ pstree -p arun # will print the PID, i.e. process id, along with the name

$ pstree -pn # will print the PID and also sort the output

$ pstree -pnh # the h option will highlight the current process

$ pstree -pnhu # the u option will also print the user owning each process

$ pstree -pnhul # the l option will not truncate the process detail if it is too long

17 -> pgrep # it will return the PID of a process

Syntax : pgrep process_name # it will return the PID of process_name. It is helpful if you know the process name; otherwise use pstree or ps to find the PID

Examples:

$ pgrep firefox # will return PID of firefox

$ pgrep ruby # will return PID of ruby

$ pgrep skype # will return PID of skype

18 -> ps # it means process status and returns the running processes with name and PID

Syntax :  ps [options] # you can see all the available options with ps – -help

Examples :

$ ps # will return currently active process

$ ps -e # will return all the running process

$ ps -ef # will return all running process with full detail

$ ps -ef | less # as the process list is very long, we have piped the output of ps -ef to the less command. It displays the result in small blocks and lets you scroll down to see more.

NOTE : If you want to kill some process through a script, you just need the PID to pass to the kill command. Say, to kill apache2 from a script you can use the below:

kill -9 $(ps -e | grep apache2 | awk '{print $1}')

19 -> pkill # will kill the process whose name is passed to it

Syntax: pkill process_name # it is helpful only if you know the name of the process you want to kill

Examples:

$ pkill ruby

$ pkill firefox

20 -> kill # will kill the process whose PID is passed to it

syntax : kill [signals] PID # there are around 60 signals which can be passed to kill, but you just have to remember 9, which is called SIGKILL; the default signal passed is 15, which is called SIGTERM. Actually, kill does not kill the process itself; it just passes the given signal to the process, which causes it to terminate, i.e. the logic of termination is in every process itself. Say, for example, when you close the firefox browser, a termination signal gets passed to the firefox process by the OS and it terminates. The kill command just gives you a way to pass a signal to a process from the terminal. Thus, by passing the proper signal with kill, you can control the state of a process, but mostly that is not needed: it is mostly used to close a hanged process. If a process hangs, you get its PID from the above pstree, pgrep or ps commands and then pass it to kill to terminate the hanged process.

NOTE: the complete list of signal available for kill can be listed with kill -l

Examples:

$ kill 485 # will terminate the process with PID 485. Here the default signal 15 gets passed; there is a chance that it fails to kill the process, say if it is making some system call

$ kill -9 485 # it is guaranteed that the process will be killed. If the process is making a system call, the signal will wait and kill it when it returns from the call

21 -> killall #will kill the process along with its child whose name passed to it. helpful if you know the process name

syntax: killall [options] process_name #will kill the process and all its child

NOTE: Killing by file name only works for executables  (i.e., runnable programs) that are kept open during execution

Examples:

$ killall nautilus #will kill the nautilus process

$ killall -e nautilus # it will kill only the process whose name exactly matches nautilus. This is helpful if your process name is more than 15 characters: since killall by default matches only up to that many, it would also kill every other process whose first 15 characters match. The -e option prevents this.

$ killall -ew nautilus # the w option tells it to wait till all the processes are killed. Otherwise killall checks every second by default and resends the signal if it finds some still running; but since some have already been killed, some signals go unanswered. Also, killall can kill every process, including other killall processes, except itself

22 -> pwd # will print the current working directory

23 -> clear # It will remove all previous command and output from the console

24 -> scp # you can take it as secure copy. The scp command is used to copy files from one server to another: from your local machine to some other machine or vice versa. You can also use it to copy files between two different machines from your system.

Below is the syntax:

CASE 1: Copy a file from your system to some other system:
scp  /path/to/local/file  username@hostname:/path/to/remote/file

CASE 2: Copy a file from another system to your system:

scp  username@hostname:/path/to/remote/file  /path/to/local/file

CASE 3: Copy a file from one system to another system, from your machine:

scp  username1@hostname1:/path/to/file  username2@hostname2:/path/to/other/file

NOTE : to copy a directory instead of a file in any of the above cases, use scp -r instead of just scp

NOTE : username@hostname is basically the url to which you ssh
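A concrete example (hypothetical paths, reusing the server details from the deployment post below):

$ scp -r ~/myapp kapil@192.168.173.53:/var/www/myapp # copies the local folder myapp to /var/www/myapp on the remote server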

25 -> lscpu

This will tell you information about your CPU. For example, you may want to know whether you have a 32-bit or a 64-bit processor (remember, many a time at download time you do not know whether you should download the 32- or the 64-bit version).

Example :

$ lscpu
Architecture:                          i686
CPU op-mode(s):                   32-bit, 64-bit
Byte Order:                           Little Endian
CPU(s):                                   4
On-line CPU(s) list:              0-3
Thread(s) per core:              2
Core(s) per socket:               2
Socket(s):                              1
Vendor ID:                          GenuineIntel
CPU family:                         6
Model:                                 42
Stepping:                             7
CPU MHz:                           3092.905
BogoMIPS:                         6185.90
Virtualization:                     VT-x
L1d cache:                           32K
L1i cache:                            32K
L2 cache:                             256K
L3 cache:                             3072K

The Architecture shows i686 (X86, i686 or i386 is the same), which means your OS is 32-bit. But the CPU op-mode shows that it can work in both 32- and 64-bit mode, i.e. your CPU is 64-bit. So if you want, you can install a 64-bit Ubuntu OS on it; but if the op-mode were only 32-bit, you couldn't.

NOTE :

X86, i686, or i386      means you are running a 32-bit kernel, i.e. OS.
X86_64 , amd64 , or X64 means you are running a 64-bit kernel, i.e. OS.

26 -> netstat

It displays information about all the active internet connections by default. If you pass the -a option it will display inactive connections also. It takes a number of options; you can find all the available options with the below command:

        # netstat -h

        -r, --route              display routing table
        -i, --interfaces         display interface table
        -g, --groups             display multicast group memberships
        -s, --statistics         display networking statistics (like SNMP)
        -M, --masquerade         display masqueraded connections

        -v, --verbose            be verbose
        -W, --wide               don't truncate IP addresses
        -n, --numeric            don't resolve names
        -N, --symbolic           resolve hardware names
        -e, --extend             display other/more information
        -p, --programs           display PID/Program name for sockets
        -c, --continuous         continuous listing

        -l, --listening          display listening server sockets
        -a, --all, --listening   display all sockets (default: connected)
        -o, --timers             display timers
        -F, --fib                display Forwarding Information Base (default)
        -C, --cache              display routing cache instead of FIB

You can use any of the above options

Example:

root@arun-desktop:~# netstat -ltnp
Active Internet connections (only servers)
Proto     Recv-Q     Send-Q         Local Address           Foreign Address     State       PID/Program name
tcp        0                        0            127.0.0.1:631           0.0.0.0:*               LISTEN      2322/cupsd      
tcp        0                        0            0.0.0.0:64283           0.0.0.0:*               LISTEN      3176/skype      
tcp        0                        0            0.0.0.0:902             0.0.0.0:*                  LISTEN      1135/vmware-authdla
tcp        0                        0            127.0.1.1:53            0.0.0.0:*               LISTEN      1460/dnsmasq    
tcp6       0                       0            ::1:631                         :::*                    LISTEN      2322/cupsd      
tcp6       0                       0            :::443                            :::*                    LISTEN      10916/apache2   
tcp6       0                       0            :::8140                          :::*                    LISTEN      10916/apache2   
tcp6       0                       0            :::80                               :::*                    LISTEN      10916/apache2

 

27 -> browsing folders as the root user

Sometimes you want to open a folder as the root user so that you can directly browse and edit a file. For example, say you want to edit the standalone.xml file of jboss. If you go to that file through the UI and open it, you can't edit it, as you browsed to it as a normal user and don't have the permission. In such cases you can browse the files as the root user with the below command.

root@arun-desktop:~# nautilus

arun@arun-desktop:~$ sudo nautilus # for non root user

28 -> Complete Listing

A complete list of Linux commands with detailed explanations is available here.

Reference :

http://www.linfo.org/pipes.html



rails deployment with rvm and capistrano

In my last post I explained deploying rails with mina. This time we will do it with Capistrano. If this post does not solve your problem, you can get more detail on Capistrano deployment here. For advanced learning you can look at the Capistrano wiki here. Anyway, we will proceed through the below steps.

STEP 1: Understanding your server

A server is just like any other machine: with restricted access, always up, and with more resources like RAM, hard disk etc. I will deploy to my friend Kapil's system as my target server. This machine is running ubuntu 12.04.

So, our server has below details
IP: 192.168.173.53
username: kapil
password: xyz4154
system name : kapil-handa # it basically represents the ip 192.168.173.53

Before moving to the next step, go through the below checklist on your server:

  1. You should be able to ssh to the server, i.e. kapil@192.168.173.53, from your terminal. See how to ssh to a remote machine here.
  2. Capistrano deploys your code from SVN or GIT. Whatever your source repository may be, it should be accessible from your server. For GIT it is explained here.
  3. If you are using RVM on the server, create a gemset, say myapp, to be used by our deployed code (see the commands just after this list). We could automate this by writing a few lines in deploy.rb in step 4, but since it is a one-time process it is better to create it manually on the server. Managing RVM is explained in this post; installing ruby and gemsets is explained here. Also, I am assuming that you are using RVM > 1.11.0. If you are using some other version of RVM, you had better refer to this document on rvm configuration with Capistrano.
  4. Create a database on which you will run your migrations after deployment. It is also a one-time process, so do it manually on the server running your database. Here we have a single server (Kapil's machine) for everything, so create it there itself.
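For reference, creating the gemset manually on the server looks like the below (assuming RVM and ruby-1.9.3-p194, the version used later in this post, are already installed):

$ rvm use ruby-1.9.3-p194 # select the ruby your app runs on
$ rvm gemset create myapp # create the gemset our deployed code will use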

STEP 2: Installing Capistrano on Your machine

You can install and configure Capistrano with below lines.

-> install the gems and run bundle

Add the below lines to the Gemfile:

gem 'capistrano'
gem 'rvm-capistrano' # this is required if you are using RVM on the server

$ bundle install # at project root on terminal

-> generate the default files

$ capify . # this will add the two files Capfile and deploy.rb for you, as you can see below
[add] writing './Capfile'
[add] writing './config/deploy.rb'
[done] capified!

-> take a look at the Capfile; not much to change there.

The ./Capfile has the below content:
load 'deploy' # it will load all the default Capistrano tasks for you; never remove this line unless you do not want to use the defaults and want to create your own recipes
load 'deploy/assets' # remove this line if you are not using Rails' asset pipeline
load 'config/deploy' # this will load your configuration settings; you add all your settings and scripts to this file

-> get familiar with the deploy.rb file

This file will contain our deployment script. By default it has some content, which is more of a guiding instruction for you. We will change all of it as per our need. On the safer side, back up the default file for future reference by renaming it deploy_backup.rb, and create a new deploy.rb file. We will write our deployment script to this file.

-> check out the Capistrano default methods

Capistrano provides a number of methods which you can use for deployment. You can see the list of methods provided by Capistrano with the below command:

$ cap -vT # it will list all the available methods with a short description, as below
cap deploy # Deploys your project.
cap deploy:assets:clean # Run the asset clean rake task.
cap deploy:assets:precompile # Run the asset precompilation rake task.
cap deploy:assets:symlink # [internal] This task will set up a symlink to …
cap deploy:check # Test deployment dependencies.
cap deploy:cleanup # Clean up old releases.
cap deploy:cold # Deploys and starts a `cold' application.
cap deploy:create_symlink # Updates the symlink to the most recently deplo…
cap deploy:finalize_update # [internal] Touches up the released code.
cap deploy:migrate # Run the migrate rake task.
cap deploy:migrations # Deploy and run pending migrations.
cap deploy:pending # Displays the commits since your last deploy.
cap deploy:pending:diff # Displays the `diff' since your last deploy.
cap deploy:restart # Blank task exists as a hook into which to inst…
cap deploy:rollback # Rolls back to a previous version and restarts.
cap deploy:rollback:cleanup # [internal] Removes the most recently deployed …
cap deploy:rollback:code # Rolls back to the previously deployed version.
cap deploy:rollback:revision # [internal] Points the current symlink at the p…
cap deploy:setup # Prepares one or more servers for deployment.
cap deploy:start # Blank task exists as a hook into which to inst…
cap deploy:stop # Blank task exists as a hook into which to inst…
cap deploy:symlink # Deprecated API.
cap deploy:update # Copies your project and updates the symlink.
cap deploy:update_code # Copies your project to the remote servers.
cap deploy:upload # Copy files to the currently deployed version.
cap deploy:web:disable # Present a maintenance page to visitors.
cap deploy:web:enable # Makes the application web-accessible again.
cap invoke # Invoke a single command on the remote servers.
cap shell # Begin an interactive Capistrano session.

NOTE : once you have written the basic required configuration as described in the next step, you can use any of the above commands to manage your deployment

STEP 4: writing deployment script

set :scm, :git # the repository type you are using; git is the default, so if you are using git this line is not needed
set :repository, 'https://github.com/trustarun/myapp.git' # your repository url
set :application, "myapp.staging.com" # name of your application
set :deploy_to, "/var/www/myapp.com" # path where your code will be deployed
set :use_sudo, false # if you do not set this to false, Capistrano will try to execute commands as a sudo user and will throw an error like "no tty present and no askpass program specified". Basically you need the default (true) when you do not have root access to the server and want to run commands as a sudo user; in that case you need some extra configuration, which is not covered in this post. Generally you have root access to your server, so don't worry much

# for us all three roles run on the same machine, so they have the same value
role :web, "kapil@192.168.173.53" # this is where your web server software runs, i.e. apache, nginx etc.
role :app, "kapil@192.168.173.53" # this is where your application layer runs, i.e. webrick, passenger, mongrel etc.
role :db, "kapil@192.168.173.53" , :primary => true # this is where your database server runs, i.e. mysql, oracle etc.

NOTE: These are the essential configuration variables you need to set for Capistrano to work. There are other variables too, which you can use as per your specific need. For example, if you do not want to access the git repository through the deploy key setting as described here, but through a username and password, you can set the below variables.

set :scm_username, "trustarun@yahoo.co.in" # this should be the username of the admin of the repository
set :scm_password, "happyarun8765" # this should be the password of the admin of the repository

The complete list of all the variable is available here

So, we are done with the most basic configuration. Let us see what we can achieve with this, and how we can improve it, in the next steps.

STEP 5: Setup the folder structure on server

Now, with the basic deployment script written in step 4, you are ready to see the power of Capistrano in managing your deployment. First we make the directory structure; run the below command on the terminal:
$ cap deploy:setup

After completion of this command, if you go to your server and look at the deploy_to path you set in step 4 (/var/www/myapp.com), you will find the below folder structure.

-> /current # a symlinked folder which points to the most recent folder in the releases directory
-> /releases # whenever you deploy your code, a new folder is created here, named with the date and time of the deployment; say, if you deployed on 6 Dec 2012 at 4:30, the folder name will be 201212060430. The current symlink is then changed to point to this directory
-> /shared # this folder contains shared data which stays consistent across releases. For example, you can put your database.yml file in this folder. Generally people do not commit the database.yml file to git, as it would expose the details to outside people; the good practice is to create the database.yml file on the server manually and later, through the deployment script, symlink config/database.yml to the database.yml in the shared folder
-> /shared/log # it will contain the log information
-> /shared/pids # it will contain the process ids of running processes, say nginx, ruby etc.
-> /shared/system # you can put system related information here

NOTE: If you have not committed your database.yml file to git for security reasons, then after running setup you must create the database.yml file, with all the required configuration, in the shared folder.

STEP 6: Deploy the code
Run the below command on the terminal:
$ cap deploy

If you have fulfilled the checklist in step 1, the above command will complete successfully. If you log in to the server you will find the current git code in the /var/www/myapp.com/current folder. So are we ready to start the server?? NO… because we have not done the below steps on the server, which we used to do in development:

-> rvm use ruby-1.9.3-p194 # assuming ruby-1.9.3-p194 is used by you
-> rvm gemset use myapp #assuming myapp is gemset you are using for this project
-> bundle install # assuming you have added some new gem to gemfile
-> rake db:migrate # assuming you have added some new migration which need to be migrated

So now we will modify our deployment script from step 4 to take care of these things. Also, I am assuming that database.yml is not committed to git for security reasons, so when Capistrano clones your git repository into /var/www/myapp.com/current, config/database.yml is not there (don't worry even if it is there); we will symlink it to the shared/database.yml file.

STEP 7: modifying the deployment script

require "rvm/capistrano" #This will add more method to cap list under rvm namespace. 
You can see them as usual by typing cap -vT on terminal

set :rvm_ruby_string, 'ruby-1.9.3-p194@myapp' # the name here is important.
here 'ruby-1.9.3-p194@myapp' means you want to use ruby ruby-1.9.3-p194 and 
gemset myapp. this line is must and responsible for configuring rvm with 
Capistrano on server
set :rvm_type, :system # if RVM is installed on server as non root user use
:user instead of :system

set :scm, :git
set :repository, 'https://github.com/trustarun/myapp.git'
set :application, "myapp.staging.com"
set :deploy_to, "/var/www/myapp.com"
set :use_sudo, false

role :web, "kapil@192.168.173.53"
role :app, "kapil@192.168.173.53"
role :db, "kapil@192.168.173.53" , :primary => true

namespace :deploy do

  desc 'Re-establish database.yml'
  task :set_database_symlink do
    run "rm -fr #{current_path}/config/database.yml && cd #{current_path}/config &&
        ln -nfs #{shared_path}/database.yml database.yml" 
  end

  desc 'run bundle install'
  task :install_bundle do
    run "cd #{current_path} && bundle install"
  end

  desc 'run the migration'
  task :migrate do
    run "cd #{current_path} && bundle exec rake db:migrate RAILS_ENV=staging --trace"
   end

end

before "deploy:setup", "rvm:install_ruby"
before "deploy:migrate", "deploy:set_database_symlink"
before "deploy:migrate", "deploy:install_bundle"
before "deploy:restart", "deploy:migrate"

NOTE: Capistrano provides before and after callbacks which you can use for chaining the different methods of your deployment script. See the sequence I have created here. The deploy:restart method is called after the deployment is done, i.e. after the cap deploy command passes successfully; it is an empty method provided by Capistrano into which we write our server-specific code. But before starting our server it is a must that our database is migrated, and for migrating the database it is a must that all the gems are installed and database.yml is symlinked properly. These things are taken care of by the way we have chained the methods.

STEP 8: The deployment commands
Now we have perfected our deployment script in STEP 7, so we are ready to deploy our app anytime with the below two commands:

$ cap deploy:setup # needs to be run only once, as it simply sets up the directory structure; running it a second time will do nothing unless you have changed deploy:setup
$ cap deploy # run it every time you want to deploy your current code to the server, after pushing it to git

STEP 9: Starting the server and running the app

I have explained configuring your app with nginx and passenger in this post. You need to restart the server manually after step 8. I think we can define a restart method in our script to do it automatically; I will update the post once I confirm it.
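For what it is worth, here is an untested sketch of such a restart task, assuming Passenger (which restarts the app when tmp/restart.txt is touched); treat it as a starting point, not a confirmed part of this setup:

namespace :deploy do
  desc 'Restart the application (Passenger picks up the touch on tmp/restart.txt)'
  task :restart, :roles => :app do
    run "touch #{current_path}/tmp/restart.txt" # hypothetical: only valid for Passenger-served apps
  end
end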



sh command in linux ubuntu

While trying out rails deployment with Capistrano, I saw sh used multiple times in the Capistrano terminal log. To get some detail on it, I searched Google and found very good documentation here. Technically, the sh command executes the Bourne shell, likely taking you to a $ prompt. The simplest definition I would give is that "the sh command executes terminal commands from any file with the .sh extension".

=> Example:

Create a file with .sh extension, say trying_out.sh

Add below lines to the file

#!/bin/sh # this is the default location, so you can omit this line, but mention it if your sh interpreter is located at some other place. As a good practice, always mention it

mkdir my_folder # it will create the my_folder directory
cd my_folder # will take you into the my_folder directory
mkdir my_notes # again, make a my_notes directory within the my_folder directory
cd my_notes # takes you into the my_notes folder
touch first_note.txt # create a first_note.txt file within the my_notes directory
echo "see your sh example is working" >> first_note.txt # add a line to the first_note.txt file
cat first_note.txt # print the content of first_note.txt on the terminal

Now go terminal and run the below command

arun@arun-yadav:~$ sh trying_out.sh
see your sh example is working # this is the result of last command in our trying_out.sh file

Now, if you check your folders, you will see all the newly created content

=> Running a .sh file

A .sh file is nothing but a shell script, used to install a given application or to perform other tasks under UNIX-like operating systems. When we download software, we often find an install.sh file which basically contains a set of commands to be executed on the terminal. We can run any .sh file in the following ways:

-> $ sh filename.sh

-> $ bash filename.sh # bash can also run sh scripts, so this will also work

-> $ ./filename.sh # this will work if filename.sh has the executable permission; you can grant it with the chmod +x filename.sh command if it is not already there. More detail on managing file permissions is available here

-> $ sudo bash filename.sh # the above will not work if the file needs to be executed by the root user; in that case use this command. Alternatively, you can log in as the root user with the su - command and use any of the above methods

=> sh options

You can provide a number of options to sh to suit your need. The complete list is available here. A sh command with all the options will look something like this:

sh [-a] [-c] [-C] [-e] [-E] [-f] [-h] [-i] [-I][-k] [-m] [-n] [-p] [-r] [-s] [-t] [-T] [-u] [-v] [-x] [ argument ]

An explanation of the most commonly used options is as below:

-c string #If the -c option is present, then commands are read from string. If there are arguments after the string, they are assigned to the positional parameters, starting with $0.

Example:

$ sh -c "mkdir my_folder && cd my_folder && touch my_notes.txt"

-C    #Don’t overwrite existing files with “>.”

-i # If the -i option is present, the shell is interactive.

-I # Make bash act as if it had been invoked as a login shell

-s #If the -s option is present, or if no arguments remain after option processing, then commands are read from the standard input. This option allows the positional parameters to be set when invoking an interactive shell.
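For example (a quick illustration of -s; the piped command is arbitrary):

$ echo "ls -l" | sh -s # commands are read from standard input, so this runs ls -l in a new shell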

[argument] # this is basically a string of command when you are using -c option or the value you pass, interactively by using -s option. If arguments remain after option processing, and neither the -c nor the -s option has been supplied, the first argument is assumed to be the name of a file containing shell commands. If bash is invoked in this fashion, $0 is set to the name of the file, and the positional parameters are set to the remaining arguments. Bash reads and executes commands from this file, then exits. Bash’s exit status is the exit status of the last command executed in the script. If no commands are executed, the exit status is 0. An attempt is first made to open the file in the current directory, and, if no file is found, then the shell searches the directories in PATH for the script.



what is API ??

API stands for application programming interface. A number of definitions and explanations are available on wiki here. But the simpler example I would give you is the 3-pin electric plug we all see in our homes. The plug is basically an interface to the electricity supply we are getting. We don't have to go into what goes on behind the plug, like how the circuit is made, how the current is circulated, how it is produced etc. The interface users, like fridge, tv, radio etc., simply connect to the plug and start getting the electricity. I have created a sample API in this post.

In the same way, in programming (system or software or hardware) we create an interface so that others can use it. In fact, someone has said that a program without an API is like a house without any door or window. Your application will become popular only by interacting with the outside world. Great examples of this are Facebook and Twitter. Facebook has provided a number of APIs to retrieve its feeds, do signin, signup etc. It is up to you how much functionality you want to expose to the outer world.

In web development, an API deals with handling requests from outside the application and returning a response to the requesting client. The request and response structure is predefined by the API developer. The response is generally data in xml or json format. The below lines I have taken from wiki, which add to my point:

When used in the context of web development, an API is typically defined as a set of Hypertext Transfer Protocol (HTTP) request messages, along with a definition of the structure of response messages, which is usually in an Extensible Markup Language (XML) or JavaScript Object Notation (JSON) format. While “Web API” is virtually a synonym for web service, the recent trend (so-called Web 2.0) has been moving away from Simple Object Access Protocol (SOAP) based services towards more direct Representational State Transfer (REST) style communications. Web APIs allow the combination of multiple services into new applications known as mashups.

Now I will give an example of an API from one of my projects, where I have used one.

We were working on digitising the documentation process for applying for a loan with a bank. Due to the security risk, the bank can't give us access to their database. But to implement our functionality we need all the documents associated with a loan, which we can put before the user to sign online. Thus our functionality will trigger once we have the documents.

The client proposed the below scheme

-> User will login to the Bank portal with credential registered with the bank

-> He will see the loan he has applied for and also the list of documents to be signed. When he clicks on the Sign button, he will be redirected to my site with his user_id and loan_id

-> Using the user_id and loan_id, my application will make an API call to, say, the below dummy url

https://my_finance.com/rest/loan/document

They are expecting an xml request, so the request body should have the below structure:

<document>
<name>form16.jpg</name>
<type>jpg/gif</type>
<content>hjhkkkllkl</content>
</document>

-> They also provided details of the security headers, i.e. username, password, secret token etc., which each request should contain in order to connect to their API
-> When the request is successful, we get all the document details in the response in XML format. They also told us the details of the response, i.e. at which xml nodes the required data will be present.
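To make this concrete, below is a minimal Ruby sketch of such a call using Net::HTTP. The endpoint is the dummy url above; the header name, credentials and xml fields are hypothetical placeholders, since the real ones were specific to the bank's API:

require 'net/http'
require 'uri'

# dummy endpoint from the example above
uri = URI.parse('https://my_finance.com/rest/loan/document')

http = Net::HTTP.new(uri.host, uri.port)
http.use_ssl = true

request = Net::HTTP::Post.new(uri.request_uri)
request['Content-Type'] = 'application/xml'
request['X-Auth-Token'] = 'secret-token' # placeholder for the security headers the bank defined

# hypothetical request body carrying the ids we got on redirect
request.body = "<document><user_id>1</user_id><loan_id>42</loan_id></document>"

response = http.request(request)
puts response.code # e.g. 200 on success
puts response.body # the XML payload with the document details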

Summary of Web API usage:

-> You should know the url to which you need to make the call

-> You should have the access token details used by the API to make a connection to it

-> Your request body should be in the format defined by the API and carry all the required parameters

-> You should be able to handle the returned response.