Upgrading from a CDH 5 Release Earlier than 5.2.0 to the Latest Release

Use the steps on this page to upgrade from a CDH 5 release earlier than CDH 5.2.0 to the latest release. This upgrade requires an HDFS metadata upgrade. (There is a troubleshooting section at the end in case you miss the relevant steps in the instructions that follow.)

Step 1: Prepare the Cluster for the Upgrade

  1. Put the NameNode into safe mode and save the fsimage:
    1. Put the NameNode (or active NameNode in an HA configuration) into safe mode:
      $ sudo -u hdfs hdfs dfsadmin -safemode enter
    2. Perform a saveNamespace operation:
      $ sudo -u hdfs hdfs dfsadmin -saveNamespace 

      This will result in a new fsimage being written out with no edit log entries.

    3. With the NameNode still in safe mode, shut down all services as instructed below.
  2. Shut down Hadoop services across your entire cluster by running the following command on every host in your cluster:
    $ for x in `cd /etc/init.d ; ls hadoop-*` ; do sudo service $x stop ; done
  3. As root, check each host to make sure that no processes are running as the hdfs, yarn, mapred, or httpfs users:
    # ps -aef | grep java
  4. Back up the HDFS metadata on the NameNode machine, as follows.
    1. Find the location of your dfs.name.dir (or dfs.namenode.name.dir); for example:
      $ grep -C1 dfs.name.dir /etc/hadoop/conf/hdfs-site.xml 
      <property>
        <name>dfs.name.dir</name>
        <value>/mnt/hadoop/hdfs/name</value>
      </property>
    2. Back up the directory. The path inside the <value> XML element is the path to your HDFS metadata. If you see a comma-separated list of paths, there is no need to back up all of them; they store the same data. Back up the first directory, for example, by using the following commands:
      $ cd /mnt/hadoop/hdfs/name
      # tar -cvf /root/nn_backup_data.tar .
      ./ 
      ./current/
      ./current/fsimage 
      ./current/fstime 
      ./current/VERSION 
      ./current/edits 
      ./image/ 
      ./image/fsimage
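
If you prefer to script the backup, the following is a minimal sketch (not part of the official procedure): it reads the first directory configured for dfs.namenode.name.dir with hdfs getconf, strips an optional file:// prefix, and archives it. It assumes the client configuration on the NameNode host matches the NameNode's own settings.

$ NAME_DIR=$(sudo -u hdfs hdfs getconf -confKey dfs.namenode.name.dir | cut -d, -f1 | sed 's|^file://||')
$ sudo tar -C "$NAME_DIR" -cvf /root/nn_backup_data.tar .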

Step 2: If Necessary, Download the CDH 5 "1-click" Package on Each of the Hosts in Your Cluster

Before you begin: Check whether you have the CDH 5 "1-click" repository installed.

  • On Red Hat/CentOS-compatible and SLES systems:
rpm -q cdh5-repository

If you are upgrading from CDH 5 Beta 1 or later, you should see:

cdh5-repository-1-0

In this case, skip to Step 3. If instead you see:

package cdh5-repository is not installed

proceed with these RHEL instructions or these SLES instructions.

  • On Ubuntu and Debian systems:
 dpkg -l | grep cdh5-repository

If the repository is installed, skip to Step 3; otherwise proceed with these instructions.
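
If you manage a mix of distributions, the two checks above can be combined into one snippet; this is a minimal sketch that simply runs whichever package-query tool is present on the host:

$ if command -v rpm >/dev/null 2>&1; then rpm -q cdh5-repository; else dpkg -l | grep cdh5-repository; fi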

Summary: If the CDH 5 "1-click" repository is not already installed on each host in the cluster, follow the instructions below for that host's operating system:

On Red Hat-compatible systems:

  1. Download the CDH 5 "1-click Install" package.

    Click the entry in the table below that matches your Red Hat or CentOS system, choose Save File, and save the file to a directory to which you have write access (for example, your home directory).

    OS Version                 Link
    Red Hat/CentOS/Oracle 5    Red Hat/CentOS/Oracle 5 link
    Red Hat/CentOS/Oracle 6    Red Hat/CentOS/Oracle 6 link
  2. Install the RPM:
    • Red Hat/CentOS/Oracle 5
      $ sudo yum --nogpgcheck localinstall cloudera-cdh-5-0.x86_64.rpm 
    • Red Hat/CentOS/Oracle 6
      $ sudo yum --nogpgcheck localinstall cloudera-cdh-5-0.x86_64.rpm

On SLES systems:

  1. Download the CDH 5 "1-click Install" package.

    Click this link, choose Save File, and save it to a directory to which you have write access (for example, your home directory).

  2. Install the RPM:
    $ sudo rpm -i cloudera-cdh-5-0.x86_64.rpm
  3. Update your system package index by running:
    $ sudo zypper refresh

On Ubuntu and Debian systems:

  1. Download the CDH 5 "1-click Install" package:
    OS Version    Link
    Wheezy        Wheezy link
    Precise       Precise link
    Trusty        Trusty link
  2. Install the package by doing one of the following:
    • Choose Open with in the download window to use the package manager.
    • Choose Save File, save the package to a directory to which you have write access (for example, your home directory), and install it from the command line. For example:
      $ sudo dpkg -i cdh5-repository_1.0_all.deb

Step 3: Upgrade the Packages on the Appropriate Hosts

Upgrade MRv1, YARN, or both, depending on what you intend to use.

Before installing MRv1 or YARN: if you have not already done so, optionally add the Cloudera Public GPG key to the repository configuration on each system in the cluster by running one of the following commands (a sketch for pushing the key to many hosts follows this list):

  • For Red Hat/CentOS/Oracle 5 systems:
    $ sudo rpm --import http://archive.cloudera.com/cdh5/redhat/5/x86_64/cdh/RPM-GPG-KEY-cloudera
  • For Red Hat/CentOS/Oracle 6 systems:
    $ sudo rpm --import http://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera
  • For all SLES systems:
    $ sudo rpm --import http://archive.cloudera.com/cdh5/sles/11/x86_64/cdh/RPM-GPG-KEY-cloudera
  • For Ubuntu Precise systems:
    $ curl -s http://archive.cloudera.com/cdh5/ubuntu/precise/amd64/cdh/archive.key | sudo apt-key add -
  • For Debian Wheezy systems:
    $ curl -s http://archive.cloudera.com/cdh5/debian/wheezy/amd64/cdh/archive.key | sudo apt-key add -
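
If you have many hosts, the key import can be pushed out over SSH. This is a sketch only: cluster_hosts.txt, passwordless SSH, and passwordless sudo are assumptions, and the Red Hat/CentOS 6 URL shown must be replaced with the one that matches each host's operating system.

$ for host in $(cat cluster_hosts.txt); do ssh "$host" 'sudo rpm --import http://archive.cloudera.com/cdh5/redhat/6/x86_64/cdh/RPM-GPG-KEY-cloudera'; done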

Step 3a: If you are using MRv1, upgrade the MRv1 packages on the appropriate hosts.

Skip this step if you are using YARN exclusively. Otherwise upgrade each type of daemon package on the appropriate hosts as follows:

  1. Install and deploy ZooKeeper:

    Follow instructions under ZooKeeper Installation.

  2. Install each type of daemon package on the appropriate system(s), as follows. (A quick post-install check appears after the list.)

    JobTracker host running:
      • Red Hat/CentOS compatible: $ sudo yum clean all; sudo yum install hadoop-0.20-mapreduce-jobtracker
      • SLES: $ sudo zypper clean --all; sudo zypper install hadoop-0.20-mapreduce-jobtracker
      • Ubuntu or Debian: $ sudo apt-get update; sudo apt-get install hadoop-0.20-mapreduce-jobtracker

    NameNode host running:
      • Red Hat/CentOS compatible: $ sudo yum clean all; sudo yum install hadoop-hdfs-namenode
      • SLES: $ sudo zypper clean --all; sudo zypper install hadoop-hdfs-namenode
      • Ubuntu or Debian: $ sudo apt-get update; sudo apt-get install hadoop-hdfs-namenode

    Secondary NameNode host (if used) running:
      • Red Hat/CentOS compatible: $ sudo yum clean all; sudo yum install hadoop-hdfs-secondarynamenode
      • SLES: $ sudo zypper clean --all; sudo zypper install hadoop-hdfs-secondarynamenode
      • Ubuntu or Debian: $ sudo apt-get update; sudo apt-get install hadoop-hdfs-secondarynamenode

    All cluster hosts except the JobTracker, NameNode, and Secondary (or Standby) NameNode hosts, running:
      • Red Hat/CentOS compatible: $ sudo yum clean all; sudo yum install hadoop-0.20-mapreduce-tasktracker hadoop-hdfs-datanode
      • SLES: $ sudo zypper clean --all; sudo zypper install hadoop-0.20-mapreduce-tasktracker hadoop-hdfs-datanode
      • Ubuntu or Debian: $ sudo apt-get update; sudo apt-get install hadoop-0.20-mapreduce-tasktracker hadoop-hdfs-datanode

    All client hosts, running:
      • Red Hat/CentOS compatible: $ sudo yum clean all; sudo yum install hadoop-client
      • SLES: $ sudo zypper clean --all; sudo zypper install hadoop-client
      • Ubuntu or Debian: $ sudo apt-get update; sudo apt-get install hadoop-client
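
After the installs complete, a quick sanity check is to query the package manager on each role host. The commands below are a sketch for Red Hat-compatible systems; on Ubuntu or Debian use dpkg -l | grep <package> instead.

$ rpm -q hadoop-0.20-mapreduce-jobtracker                        # on the JobTracker host
$ rpm -q hadoop-hdfs-namenode                                    # on the NameNode host
$ rpm -q hadoop-0.20-mapreduce-tasktracker hadoop-hdfs-datanode  # on worker hosts
$ rpm -q hadoop-client                                           # on client hosts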

Step 3b: If you are using YARN, upgrade the YARN packages on the appropriate hosts.

Skip this step if you are using MRv1 exclusively. Otherwise upgrade each type of daemon package on the appropriate hosts as follows:

  1. Install and deploy ZooKeeper:

    Follow instructions under ZooKeeper Installation.

  2. Install each type of daemon package on the appropriate system(s), as follows. (A version-consistency check appears after the list.)

    Resource Manager host (analogous to MRv1 JobTracker) running:
      • Red Hat/CentOS compatible: $ sudo yum clean all; sudo yum install hadoop-yarn-resourcemanager
      • SLES: $ sudo zypper clean --all; sudo zypper install hadoop-yarn-resourcemanager
      • Ubuntu or Debian: $ sudo apt-get update; sudo apt-get install hadoop-yarn-resourcemanager

    NameNode host running:
      • Red Hat/CentOS compatible: $ sudo yum clean all; sudo yum install hadoop-hdfs-namenode
      • SLES: $ sudo zypper clean --all; sudo zypper install hadoop-hdfs-namenode
      • Ubuntu or Debian: $ sudo apt-get update; sudo apt-get install hadoop-hdfs-namenode

    Secondary NameNode host (if used) running:
      • Red Hat/CentOS compatible: $ sudo yum clean all; sudo yum install hadoop-hdfs-secondarynamenode
      • SLES: $ sudo zypper clean --all; sudo zypper install hadoop-hdfs-secondarynamenode
      • Ubuntu or Debian: $ sudo apt-get update; sudo apt-get install hadoop-hdfs-secondarynamenode

    All cluster hosts except the Resource Manager (analogous to MRv1 TaskTrackers), running:
      • Red Hat/CentOS compatible: $ sudo yum clean all; sudo yum install hadoop-yarn-nodemanager hadoop-hdfs-datanode hadoop-mapreduce
      • SLES: $ sudo zypper clean --all; sudo zypper install hadoop-yarn-nodemanager hadoop-hdfs-datanode hadoop-mapreduce
      • Ubuntu or Debian: $ sudo apt-get update; sudo apt-get install hadoop-yarn-nodemanager hadoop-hdfs-datanode hadoop-mapreduce

    One host in the cluster running:
      • Red Hat/CentOS compatible: $ sudo yum clean all; sudo yum install hadoop-mapreduce-historyserver hadoop-yarn-proxyserver
      • SLES: $ sudo zypper clean --all; sudo zypper install hadoop-mapreduce-historyserver hadoop-yarn-proxyserver
      • Ubuntu or Debian: $ sudo apt-get update; sudo apt-get install hadoop-mapreduce-historyserver hadoop-yarn-proxyserver

    All client hosts, running:
      • Red Hat/CentOS compatible: $ sudo yum clean all; sudo yum install hadoop-client
      • SLES: $ sudo zypper clean --all; sudo zypper install hadoop-client
      • Ubuntu or Debian: $ sudo apt-get update; sudo apt-get install hadoop-client
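
Before restarting services, it is worth confirming that every host ended up on the same CDH 5 package version. The loop below is a sketch; cluster_hosts.txt, passwordless SSH, and the base hadoop package name are assumptions.

$ for host in $(cat cluster_hosts.txt); do echo "== $host =="; ssh "$host" 'rpm -q hadoop 2>/dev/null || dpkg -s hadoop 2>/dev/null | grep ^Version'; done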

Step 4: In an HA Deployment, Upgrade and Start the Journal Nodes

  1. Install the JournalNode daemons on each of the machines where they will run.

    To install JournalNode on Red Hat-compatible systems:

    $ sudo yum install hadoop-hdfs-journalnode

    To install JournalNode on Ubuntu and Debian systems:

    $ sudo apt-get install hadoop-hdfs-journalnode 

    To install JournalNode on SLES systems:

    $ sudo zypper install hadoop-hdfs-journalnode
  2. Start the JournalNode daemons on each of the machines where they will run:
    $ sudo service hadoop-hdfs-journalnode start 

Wait for the daemons to start before proceeding to the next step.
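
A quick way to confirm that a JournalNode is up before proceeding (a sketch; 8485 is the default dfs.journalnode.rpc-address port, so adjust it if you have changed that setting):

$ sudo jps | grep JournalNode
$ sudo netstat -lnpt | grep 8485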

Step 5: Upgrade the HDFS Metadata

Step 5a: Upgrade the metadata.

To upgrade the HDFS metadata, run the following command on the NameNode. If HA is enabled, do this on the active NameNode only, and make sure the JournalNodes have been upgraded to CDH 5 and are up and running before you run the command.
$ sudo service hadoop-hdfs-namenode upgrade
You can watch the progress of the upgrade by running:
$ sudo tail -f /var/log/hadoop-hdfs/hadoop-hdfs-namenode-<hostname>.log 
Look for a line that confirms the upgrade is complete, such as: /var/lib/hadoop-hdfs/cache/hadoop/dfs/<name> is complete
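
If you prefer not to tail the full log, grepping for upgrade-related messages works as well (a sketch; as above, the log file name includes the local hostname):
$ sudo grep -i upgrade /var/log/hadoop-hdfs/hadoop-hdfs-namenode-$(hostname).log | tail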

Step 5b: Do this step only in an HA deployment. Otherwise skip to starting up the DataNodes.

Wait for the NameNode to exit safe mode, and then restart the standby NameNode.

  • If Kerberos is enabled:
    $ kinit -kt /path/to/hdfs.keytab hdfs/<fully.qualified.domain.name@YOUR-REALM.COM> && hdfs namenode -bootstrapStandby
    $ sudo service hadoop-hdfs-namenode start
  • If Kerberos is not enabled:
    $ sudo -u hdfs hdfs namenode -bootstrapStandby
    $ sudo service hadoop-hdfs-namenode start
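
Once both NameNodes are running, you can confirm which is active and which is standby with hdfs haadmin; in this sketch, nn1 and nn2 stand in for your configured NameNode IDs:
$ sudo -u hdfs hdfs haadmin -getServiceState nn1
$ sudo -u hdfs hdfs haadmin -getServiceState nn2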

Step 5c: Start up the DataNodes:

On each DataNode:
$ sudo service hadoop-hdfs-datanode start
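
As the DataNodes come up and register with the NameNode, the dfsadmin report reflects them; the exact summary wording varies by release, but a quick check looks like this:
$ sudo -u hdfs hdfs dfsadmin -report | head -20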

Step 5d: Do this step only in a non-HA deployment. Otherwise skip to starting YARN or MRv1.

Wait for NameNode to exit safe mode, and then start the Secondary NameNode.

1. To check that the NameNode has exited safe mode, look for messages in the log file, or the NameNode's web interface, that say "...no longer in safe mode."

2. To start the Secondary NameNode, enter the following command on the Secondary NameNode host:
$ sudo service hadoop-hdfs-secondarynamenode start

3. To complete the cluster upgrade, follow the remaining steps below.
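
As an alternative to scanning the logs in step 1, dfsadmin reports the safe mode status directly:
$ sudo -u hdfs hdfs dfsadmin -safemode get
Safe mode is OFF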

Step 6: Start MapReduce (MRv1) or YARN

You are now ready to start and test MRv1 or YARN.

Step 6a: Start MapReduce (MRv1)

After you have verified HDFS is operating correctly, you are ready to start MapReduce. On each TaskTracker system:

$ sudo service hadoop-0.20-mapreduce-tasktracker start

On the JobTracker system:

$ sudo service hadoop-0.20-mapreduce-jobtracker start

Verify that the JobTracker and TaskTracker started properly.

$ sudo jps | grep Tracker

If the permissions of directories are not configured correctly, the JobTracker and TaskTracker processes start and immediately fail. If this happens, check the JobTracker and TaskTracker logs and set the permissions correctly.

Verify basic cluster operation for MRv1.

At this point your cluster is upgraded and ready to run jobs. Before running your production jobs, verify basic cluster operation by running an example from the Apache Hadoop web site.

1. Create a home directory on HDFS for the user who will be running the job (for example, joe):
$ sudo -u hdfs hadoop fs -mkdir -p /user/joe 
$ sudo -u hdfs hadoop fs -chown joe /user/joe

Do the following steps as the user joe.

2. Make a directory in HDFS called input and copy some XML files into it by running the following commands:
$ hadoop fs -mkdir input 
$ hadoop fs -put /etc/hadoop/conf/*.xml input 
$ hadoop fs -ls input 
Found 3 items: 
-rw-r--r-- 1 joe supergroup 1348 2012-02-13 12:21 input/core-site.xml
-rw-r--r-- 1 joe supergroup 1913 2012-02-13 12:21 input/hdfs-site.xml 
-rw-r--r-- 1 joe supergroup 1001 2012-02-13 12:21 input/mapred-site.xml
3. Run an example Hadoop job to grep with a regular expression in your input data.
$ /usr/bin/hadoop jar /usr/lib/hadoop-0.20-mapreduce/hadoop-examples.jar grep input output 'dfs[a-z.]+'
4. After the job completes, you can find the output in the HDFS directory named output because you specified that output directory to Hadoop.
$ hadoop fs -ls 
Found 2 items 
drwxr-xr-x - joe supergroup 0 2009-08-18 18:36 /user/joe/input 
drwxr-xr-x - joe supergroup 0 2009-08-18 18:38 /user/joe/output

You can see that there is a new directory called output.

5. List the output files.
$ hadoop fs -ls output 
Found 3 items 
drwxr-xr-x - joe supergroup 0 2009-02-25 10:33 /user/joe/output/_logs 
-rw-r--r-- 1 joe supergroup 1068 2009-02-25 10:33 /user/joe/output/part-00000 
-rw-r--r-- 1 joe supergroup 0 2009-02-25 10:33 /user/joe/output/_SUCCESS
6. Read the results in the output file; for example:
$ hadoop fs -cat output/part-00000 | head 
1 dfs.datanode.data.dir 
1 dfs.namenode.checkpoint.dir 
1 dfs.namenode.name.dir 
1 dfs.replication 
1 dfs.safemode.extension 
1 dfs.safemode.min.datanodes

You have now confirmed your cluster is successfully running CDH 5.

Step 6b: Start MapReduce with YARN

After you have verified HDFS is operating correctly, you are ready to start YARN. First, if you have not already done so, create directories and set the correct permissions.

Create a history directory and set permissions; for example:
$ sudo -u hdfs hadoop fs -mkdir -p /user/history 
$ sudo -u hdfs hadoop fs -chmod -R 1777 /user/history  
 $ sudo -u hdfs hadoop fs -chown yarn /user/history 
Create the /var/log/hadoop-yarn directory and set ownership:
$ sudo -u hdfs hadoop fs -mkdir -p /var/log/hadoop-yarn  
$ sudo -u hdfs hadoop fs -chown yarn:mapred /var/log/hadoop-yarn 

Verify the directory structure, ownership, and permissions:

$ sudo -u hdfs hadoop fs -ls -R / 
You should see:
drwxrwxrwt - hdfs supergroup 0 2012-04-19 14:31 /tmp  
drwxr-xr-x - hdfs supergroup 0 2012-05-31 10:26 /user  
drwxrwxrwt - yarn supergroup 0 2012-04-19 14:31 /user/history  
drwxr-xr-x - hdfs supergroup 0 2012-05-31 15:31 /var  
drwxr-xr-x - hdfs supergroup 0 2012-05-31 15:31 /var/log  
drwxr-xr-x - yarn mapred 0 2012-05-31 15:31 /var/log/hadoop-yarn 

To start YARN, start the ResourceManager and NodeManager services:

On the ResourceManager system:

$ sudo service hadoop-yarn-resourcemanager start 

On each NodeManager system (typically the same ones where DataNode service runs):

$ sudo service hadoop-yarn-nodemanager start 

To start the MapReduce JobHistory Server:

On the MapReduce JobHistory Server system:

$ sudo service hadoop-mapreduce-historyserver start 

For each user who will be submitting MapReduce jobs using MapReduce v2 (YARN), or running Pig, Hive, or Sqoop 1 in a YARN installation, set the HADOOP_MAPRED_HOME environment variable as follows:

$ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce 
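
To make the setting persistent for a given user, it can be appended to that user's shell profile (a sketch; adjust the file name for your shell):
$ echo 'export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce' >> ~/.bashrc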

Verify basic cluster operation for YARN.

At this point your cluster is upgraded and ready to run jobs. Before running your production jobs, verify basic cluster operation by running an example from the Apache Hadoop web site.

1. Create a home directory on HDFS for the user who will be running the job (for example, joe):
$ sudo -u hdfs hadoop fs -mkdir -p /user/joe 
$ sudo -u hdfs hadoop fs -chown joe /user/joe

Do the following steps as the user joe.

2. Make a directory in HDFS called input and copy some XML files into it by running the following commands:
$ hadoop fs -mkdir input 
$ hadoop fs -put /etc/hadoop/conf/*.xml input 
$ hadoop fs -ls input 
Found 3 items: 
-rw-r--r-- 1 joe supergroup 1348 2012-02-13 12:21 input/core-site.xml 
-rw-r--r-- 1 joe supergroup 1913 2012-02-13 12:21 input/hdfs-site.xml 
-rw-r--r-- 1 joe supergroup 1001 2012-02-13 12:21 input/mapred-site.xml
3. Set HADOOP_MAPRED_HOME for user joe:
$ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-mapreduce
4. Run an example Hadoop job to grep with a regular expression in your input data.
$ hadoop jar /usr/lib/hadoop-mapreduce/hadoop-mapreduce-examples.jar grep input output23 'dfs[a-z.]+'
After the job completes, you can find the output in the HDFS directory named output23 because you specified that output directory to Hadoop.
$ hadoop fs -ls 
Found 2 items 
drwxr-xr-x - joe supergroup 0 2009-08-18 18:36 /user/joe/input 
drwxr-xr-x - joe supergroup 0 2009-08-18 18:38 /user/joe/output23

You can see that there is a new directory called output23.

List the output files:
$ hadoop fs -ls output23 
Found 2 items 
-rw-r--r-- 1 joe supergroup 0 2009-02-25 10:33 /user/joe/output23/_SUCCESS 
-rw-r--r-- 1 joe supergroup 1068 2009-02-25 10:33 /user/joe/output23/part-r-00000
Read the results in the output file:
$ hadoop fs -cat output23/part-r-00000 | head 
1 dfs.safemode.min.datanodes 
1 dfs.safemode.extension 
1 dfs.replication 
1 dfs.permissions.enabled 
1 dfs.namenode.name.dir 
1 dfs.namenode.checkpoint.dir 
1 dfs.datanode.data.dir

You have now confirmed your cluster is successfully running CDH 5.

Step 7: Set the Sticky Bit

For security reasons, Cloudera strongly recommends that you set the sticky bit on directories if you have not already done so.

The sticky bit prevents anyone except the superuser, directory owner, or file owner from deleting or moving the files within a directory. (Setting the sticky bit for a file has no effect.) Do this for directories such as /tmp. (For instructions on creating /tmp and setting its permissions, see these instructions).
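
For example, assuming the /tmp directory already exists in HDFS, set its sticky bit with a mode of 1777:
$ sudo -u hdfs hadoop fs -chmod 1777 /tmp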

Step 8: Upgrade Components

CDH 5 Components

Step 9: Apply Configuration File Changes if Necessary

For example, if you have modified your zoo.cfg configuration file (/etc/zookeeper/zoo.cfg), the upgrade renames and preserves a copy of your modified zoo.cfg as /etc/zookeeper/zoo.cfg.rpmsave. If you have not already done so, you should now compare this to the new /etc/zookeeper/conf/zoo.cfg, resolve differences, and make any changes that should be carried forward (typically where you have changed property value defaults). Do this for each component you upgrade.
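
For example, to see what changed between your preserved file and the new default (using the paths from the example above):
$ diff /etc/zookeeper/zoo.cfg.rpmsave /etc/zookeeper/conf/zoo.cfg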

Step 10: Finalize the HDFS Metadata Upgrade

To finalize the HDFS metadata upgrade you began earlier in this procedure, proceed as follows:

  • Make sure you are satisfied that the CDH 5 upgrade has succeeded and everything is running smoothly. This could take a matter of days, or even weeks.
  • Finalize the HDFS metadata upgrade: use one of the following commands, depending on whether Kerberos is enabled (see Configuring Hadoop Security in CDH 5).
    • If Kerberos is enabled:
      $ kinit -kt /path/to/hdfs.keytab hdfs/<fully.qualified.domain.name@YOUR-REALM.COM> && hdfs dfsadmin -finalizeUpgrade
    • If Kerberos is not enabled:
      $ sudo -u hdfs hdfs dfsadmin -finalizeUpgrade
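
After finalizing, a quick health check can confirm HDFS is in good shape (a minimal sketch; on a Kerberos-enabled cluster, kinit as the hdfs principal first instead of using sudo -u hdfs, and note that fsck on a large cluster can take some time):
$ sudo -u hdfs hdfs dfsadmin -report | head
$ sudo -u hdfs hdfs fsck / | tail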

Troubleshooting: If You Missed the HDFS Metadata Upgrade Steps

If you skipped Step 5: Upgrade the HDFS Metadata, HDFS will not start; the metadata upgrade is required for all upgrades to CDH 5.2.0 and later from any earlier release. You will see errors such as the following:
2014-10-16 18:36:29,112 WARN org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Encountered exception loading fsimage
java.io.IOException: File system image contains an old layout version -55. An upgrade to version -59 is required.
Please restart NameNode with the "-rollingUpgrade started" option if a rolling upgrade is already started; or restart NameNode with the "-upgrade" option to start a new upgrade.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:231)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:994)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:726)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:529)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:751)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:735)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1410)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1476)
2014-10-16 18:36:29,126 INFO org.mortbay.log: Stopped HttpServer2$SelectChannelConnectorWithSafeStartup@0.0.0.0:50070
2014-10-16 18:36:29,127 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false. Rechecking.
2014-10-16 18:36:29,127 WARN org.apache.hadoop.http.HttpServer2: HttpServer Acceptor: isRunning is false
2014-10-16 18:36:29,127 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Stopping NameNode metrics system...
2014-10-16 18:36:29,128 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system stopped.
2014-10-16 18:36:29,128 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system shutdown complete.
2014-10-16 18:36:29,128 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.io.IOException: File system image contains an old layout version -55. An upgrade to version -59 is required.
Please restart NameNode with the "-rollingUpgrade started" option if a rolling upgrade is already started; or restart NameNode with the "-upgrade" option to start a new upgrade.
        at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:231)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFSImage(FSNamesystem.java:994)
        at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.loadFromDisk(FSNamesystem.java:726)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.loadNamesystem(NameNode.java:529)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:585)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:751)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:735)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1410)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1476)
2014-10-16 18:36:29,130 INFO org.apache.hadoop.util.ExitUtil: Exiting with status 1
2014-10-16 18:36:29,132 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: SHUTDOWN_MSG:
To recover, proceed as follows:
  1. Make sure you have completed all the preceding steps (Step 1: Prepare the Cluster for the Upgrade through Step 4: In an HA Deployment, Upgrade and Start the Journal Nodes; or Step 1: Prepare the Cluster for the Upgrade through Step 3: Upgrade the Packages on the Appropriate Hosts if this is not an HA deployment).
  2. Starting with Step 5: Upgrade the HDFS Metadata, complete all the remaining steps through Step 10: Finalize the HDFS Metadata Upgrade.