Upgrading from CDH 4 to CDH 5 Parcels

Required Role:

This topic describes how to upgrade a CDH 4 cluster to a CDH 5 cluster using the upgrade wizard, which installs CDH 5 parcels. Your CDH 4 cluster can be running either parcels or packages; in either case, you can use the cluster upgrade wizard to upgrade to CDH 5 parcels.

If you want to upgrade using CDH 5 packages, you can do so using a manual process. See Upgrading from CDH 4 Packages to CDH 5 Packages.

The steps to upgrade a CDH installation managed by Cloudera Manager using parcels are as follows.

  1. Before You Begin
  2. Stop All Services
  3. Perform Service-Specific Prerequisite Actions
  4. Remove CDH Packages
  5. Deactivate and Remove the GPL Extras Parcel
  6. Run the Upgrade Wizard
  7. Recover from Failed Steps
  8. Upgrade the GPL Extras Parcel
  9. Restart the Reports Manager Role
  10. Recompile Custom JARs
  11. Finalize the HDFS Metadata Upgrade
  12. Upgrade Wizard Actions

Before You Begin

  • Read the CDH 5 Release Notes.
  • Read the Cloudera Manager 5 Release Notes.
  • Ensure that the Cloudera Manager minor version is equal to or greater than the CDH minor version.
  • Make sure there are no Oozie workflows in RUNNING or SUSPENDED status; otherwise the Oozie database upgrade will fail and you will have to reinstall CDH 4 to complete or kill those workflows. One way to check from the command line is sketched after this list.
  • Recompile the classes in any custom JARs you have created in /var/lib/oozie.
  • Run the Host Inspector and fix every issue.
  • If using security, run the Security Inspector.
  • Run hdfs fsck / and hdfs dfsadmin -report and fix every issue.
  • If using HBase:
    • Run hbase hbck.
    • Before you can upgrade HBase from CDH 4 to CDH 5, your HFiles must be upgraded from the HFile v1 format to HFile v2, because CDH 5 no longer supports HFile v1. The procedure differs depending on whether you use Cloudera Manager or the command line, but the results are the same. The first step is to check the HFiles for instances of HFile v1 and mark them for upgrade to HFile v2, and to check for and report corrupted files or files with an unknown version, which must be removed manually. The next step is to rewrite the HFiles during the next major compaction. After the HFiles are upgraded, you can continue the cluster upgrade. After the upgrade is complete, you must recompile custom coprocessors and JARs. To check and upgrade the files:
      1. In the Cloudera Manager Admin Console, go to the HBase service and run Actions > Check HFile Version.
      2. Check the output of the command in the stderr log.
        Your output should be similar to the following:
        Tables Processed:
        hdfs://localhost:41020/myHBase/.META.
        hdfs://localhost:41020/myHBase/usertable
        hdfs://localhost:41020/myHBase/TestTable
        hdfs://localhost:41020/myHBase/t
        
        Count of HFileV1: 2
        HFileV1:
        hdfs://localhost:41020/myHBase/usertable/fa02dac1f38d03577bd0f7e666f12812/family/249450144068442524
        hdfs://localhost:41020/myHBase/usertable/ecdd3eaee2d2fcf8184ac025555bb2af/family/249450144068442512
        
        Count of corrupted files: 1
        Corrupted Files:
        hdfs://localhost:41020/myHBase/usertable/fa02dac1f38d03577bd0f7e666f12812/family/1
        Count of Regions with HFileV1: 2
        Regions to Major Compact:
        hdfs://localhost:41020/myHBase/usertable/fa02dac1f38d03577bd0f7e666f12812
        hdfs://localhost:41020/myHBase/usertable/ecdd3eaee2d2fcf8184ac025555bb2af
        In the example above, the script has detected two HFile v1 files, one corrupted file, and the regions that require a major compaction.
      3. Trigger a major compaction on each of the reported regions. This major compaction rewrites the files from HFile v1 to HFile v2 format. To run the major compaction, start HBase Shell and issue the major_compact command.
        $ bin/hbase shell
        hbase> major_compact 'usertable'
        You can also do this in a single step by using the echo shell built-in command.
        $ echo "major_compact 'usertable'" | bin/hbase shell
  • Review the upgrade procedure and reserve a maintenance window with enough time allotted to perform all the steps. For production clusters, Cloudera recommends allocating a maintenance window of up to a full day for the upgrade, depending on the number of hosts, your experience with Hadoop and Linux, and the particular hardware you are using.
  • To avoid a flood of alerts during the upgrade process, you can enable maintenance mode on your cluster before you start the upgrade. Maintenance mode suppresses email alerts and SNMP traps, but does not stop health checks or configuration validations. Be sure to exit maintenance mode when you have finished the upgrade to re-enable Cloudera Manager alerts.
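
The Oozie workflow check above can also be done with the Oozie command-line client. This is a minimal sketch, not the only way to check; the Oozie server URL (http://localhost:11000/oozie) is an assumed placeholder that you should replace with your own Oozie server address:

  $ oozie jobs -oozie http://localhost:11000/oozie -filter "status=RUNNING;status=SUSPENDED"  # placeholder URL; use your Oozie server

If this returns any workflows, let them complete or kill them before starting the upgrade.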

Stop All Services

  1. Stop the cluster.
    1. On the Home page, click the drop-down menu to the right of the cluster name and select Stop.
    2. Click Stop in the confirmation screen. The Command Details window shows the progress of stopping services.

      When All services successfully stopped appears, the task is complete and you can close the Command Details window.

  2. Stop the Cloudera Management Service:
    1. Do one of the following:
      • Select Clusters > Cloudera Management Service > Cloudera Management Service, then select Actions > Stop.
      • On the Home page, click the drop-down menu to the right of Cloudera Management Service and select Stop.
    2. Click Stop to confirm. The Command Details window shows the progress of stopping the roles.
    3. When Command completed with n/n successful subcommands appears, the task is complete. Click Close.

Perform Service-Specific Prerequisite Actions

  • Accumulo - if you have installed the Accumulo parcel, deactivate it following the instructions in Managing Parcels.
  • HDFS - Back up HDFS metadata on the NameNode:
    1. Stop the cluster. It is particularly important that the NameNode role process is not running so that you can make a consistent backup.
    2. Go to the HDFS service.
    3. Click the Configuration tab.
    4. In the Search field, search for "NameNode Data Directories". This locates the NameNode Data Directories property.
    5. From the command line on the NameNode host, back up the directory listed in the NameNode Data Directories property. If more than one directory is listed, you only need to back up one of them, because each directory is a complete copy. For example, if the data directory is /mnt/hadoop/hdfs/name, do the following as root (a quick way to verify the resulting archive is sketched after this list):
      # cd /mnt/hadoop/hdfs/name
      # tar -cvf /root/nn_backup_data.tar .

      You should see output like this:

      ./
      ./current/
      ./current/fsimage
      ./current/fstime
      ./current/VERSION
      ./current/edits
      ./image/
      ./image/fsimage
        Warning: If you see a file containing the word lock, the NameNode is probably still running. Repeat the preceding steps, starting by shutting down the CDH services.
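
To verify that the NameNode metadata backup archive was written correctly, you can list its contents. This is an optional sanity check; the archive path matches the example above:

  # tar -tvf /root/nn_backup_data.tar | head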

Remove CDH Packages

If your previous installation of CDH was done using packages, you must remove those packages on all hosts in the cluster being upgraded. This is always the case if you are running a CDH version earlier than CDH 4.1.3, because parcels were not available in those releases.
  1. Uninstall the CDH packages. On each host:
    RHEL:
      $ sudo yum remove bigtop-jsvc bigtop-utils bigtop-tomcat hue-common sqoop2-client hbase-solr-doc solr-doc
    SLES:
      $ sudo zypper remove bigtop-jsvc bigtop-utils bigtop-tomcat hue-common sqoop2-client hbase-solr-doc solr-doc
    Ubuntu or Debian:
      $ sudo apt-get purge bigtop-jsvc bigtop-utils bigtop-tomcat hue-common sqoop2-client hbase-solr-doc solr-doc
  2. Restart all the Cloudera Manager Agents to force an update of the installed binaries reported by the Agent. On each host:
    $ sudo service cloudera-scm-agent restart
  3. Run the Host Inspector to verify that the packages have been removed:
    1. Click the Hosts tab and then click the Host Inspector button.
    2. When the command completes, click Show Inspector Results.
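
As an optional command-line spot check (a sketch for RHEL-compatible hosts only; adjust the pattern to the packages you actually had installed), you can confirm that no CDH packages remain. The command should print nothing:

  $ rpm -qa | grep -Ei 'bigtop|hue-common|sqoop2-client|hbase-solr|solr-doc'  # RHEL family only; adjust pattern to your package set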

Deactivate and Remove the GPL Extras Parcel

If you are using LZO, deactivate and remove the CDH 4 GPL Extras parcel.

Run the Upgrade Wizard

  1. Log into the Cloudera Manager Admin console.
  2. From the Home tab Status page, click the drop-down menu next to the cluster name and select Upgrade Cluster. The Upgrade Wizard starts.
  3. If the option to pick between packages and parcels displays, click the Use Parcels radio button.
  4. In the Choose CDH Version (Parcels) field, select the CDH version. If there are no qualifying parcels, click the click here link to go to the Parcel Configuration Settings page where you can add the locations of parcel repositories. Click Continue.
  5. Read the notices for steps you must complete before upgrading, click the Yes, I ... checkboxes after completing the steps, and click Continue.
  6. Cloudera Manager checks that hosts have the correct software installed. Click Continue.
  7. The selected parcels are downloaded and distributed. Click Continue.
  8. The wizard advises that it will shut down the cluster to start the upgrade process. Click Continue.
  9. The Command Progress screen displays the result of the commands run by the wizard as it shuts down all services, activates the new parcel, upgrades services as necessary, deploys client configuration files, and restarts services. Click Continue.
  10. The Host Inspector runs and displays the CDH version running on the hosts. Click Continue.
  11. The wizard reports the result of the upgrade. Choose one of the following:
    • Leave OK, set up YARN and import existing configuration from my MapReduce service checked.
      1. Click Continue to proceed. Cloudera Manager stops the YARN service (if running) and its dependencies.
      2. Click Continue to proceed. The next page indicates some additional configuration required by YARN.
      3. Verify or modify the configurations and click Continue. The Switch Cluster to MR2 step proceeds.
      4. When all steps have completed, click Continue.
    • Uncheck OK, set up YARN and import existing configuration from my MapReduce service.
  12. Click Finish to return to the Home page.

Recover from Failed Steps

  Note: If you encounter errors during these steps:
  • If the converting configuration parameters step fails, Cloudera Manager rolls back all configurations to CDH 4. Fix any reported problems and retry the upgrade.
  • If the upgrade command fails at any point after the convert configuration step, there is no retry support in Cloudera Manager. You must first correct the error, then manually re-run the individual commands. You can view the remaining commands in the Recent Commands page.
  • If the HDFS upgrade metadata step fails, you cannot revert to CDH 4 unless you restore a backup of Cloudera Manager.
The actions performed by the upgrade wizard are listed in Upgrade Wizard Actions. If any of the steps in the Command Progress screen fails, complete the step as described in that section before proceeding.

Upgrade the GPL Extras Parcel

If you are using LZO:
  1. Install the CDH 5 GPL Extras parcel. See Installing GPL Extras.
  2. Reconfigure and restart services that use the parcel. See Configuring Services to Use the GPL Extras Parcel.

Restart the Reports Manager Role

  1. Do one of the following:
    • Select Clusters > Cloudera Management Service > Cloudera Management Service.
    • On the Status tab of the Home page, in the Cloudera Management Service table, click the Cloudera Management Service link.
  2. Click the Instances tab.
  3. Check the checkbox next to Reports Manager.
  4. Select Actions for Selected > Restart and then Restart to confirm.

Recompile Custom JARs

  • HBase - Before using any HBase applications that use coprocessors or custom JARs, recompile the JARs.
  • Oozie - Recompile all classes in the JARs in /var/lib/oozie and their dependencies.

Finalize the HDFS Metadata Upgrade

If upgrading from 5.0 or 5.1 to 5.2 or higher, after ensuring that the CDH 5 upgrade has succeeded and that everything is running smoothly, finalize the HDFS metadata upgrade. It is not unusual to wait days or even weeks before finalizing the upgrade.
  1. Go to the HDFS service.
  2. Click the Instances tab.
  3. Click the NameNode instance.
  4. Select Actions > Finalize Metadata Upgrade and click Finalize Metadata Upgrade to confirm.
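
The Finalize Metadata Upgrade action corresponds to HDFS's standard finalize operation. For reference only, and not as a substitute for the Cloudera Manager action, the equivalent command, run as the hdfs superuser, is:

  $ sudo -u hdfs hdfs dfsadmin -finalizeUpgrade  # reference only; prefer the Cloudera Manager action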

Upgrade Wizard Actions

Do the steps in this section only if the upgrade wizard reports a failure.

Upgrade HDFS Metadata

  1. Start the ZooKeeper service.
  2. Go to the HDFS service.
  3. Select Actions > Upgrade HDFS Metadata.

Upgrade the Hive Metastore Database

  1. Back up the Hive metastore database. A minimal example is sketched after these steps.
  2. Go to the Hive service.
  3. Select Actions > Upgrade Hive Metastore Database Schema and click Upgrade Hive Metastore Database Schema to confirm.
  4. If you have multiple instances of Hive, perform the upgrade on each metastore database.
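
For step 1, a minimal backup sketch assuming a MySQL metastore with the default database name metastore (adjust the user, host, database name, and output path for your deployment; PostgreSQL or Oracle metastores require their own dump tools):

  $ mysqldump -u root -p metastore > /root/hive_metastore_backup.sql  # assumes MySQL; adjust user, database, and path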

Upgrade Oozie

  1. Recompile all classes in the JARs in /var/lib/oozie and their dependencies with the new version of Oozie.
  2. Go to the Oozie service.
  3. Select Actions > Upgrade Database and click Upgrade Database to confirm.
  4. Start the Oozie service.
  5. Select Actions > Install Oozie ShareLib and click Install Oozie ShareLib to confirm.
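
To confirm that the ShareLib was installed, you can list it in HDFS. This is an optional check that assumes the default ShareLib location of /user/oozie/share/lib:

  $ sudo -u hdfs hadoop fs -ls /user/oozie/share/lib  # assumes the default ShareLib location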

Upgrade Sqoop

  1. Go to the Sqoop service.
  2. Select Actions > Upgrade Sqoop and click Upgrade Sqoop to confirm.

Start Cluster Services

  1. On the Home page, click the drop-down menu to the right of the cluster name and select Start.
  2. Click Start in the confirmation screen. The Command Details window shows the progress of starting services.

    When All services successfully started appears, the task is complete and you can close the Command Details window.

Deploy Client Configuration Files

  1. On the Home page, click the drop-down menu to the right of the cluster name and select Deploy Client Configuration.
  2. Click the Deploy Client Configuration button in the confirmation pop-up that appears.
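
After the client configurations are deployed, a quick spot check on any cluster host is to confirm that the Hadoop client now reports a CDH 5 version string (a sanity check only, not part of the wizard):

  $ hadoop version  # should report a CDH 5 version string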