This is the documentation for CDH 4.7.1. Documentation for other versions is available at Cloudera Documentation.

Cloudera Manager 4 and CDH 4 have reached End of Maintenance (EOM) on August 9, 2015. Cloudera will not support or provide patches for any of the Cloudera Manager 4 or CDH 4 releases after that date.

Configuring Other CDH Components to Use HDFS HA

You can use the HDFS High Availability NameNodes with other components of CDH, including HBase, Oozie, and Hive.

Configuring HBase to Use HDFS HA

To configure HBase to use HDFS HA, proceed as follows.

Step 1: Shut Down the HBase Cluster

To shut HBase down gracefully, stop the Thrift server and clients, then stop the cluster:

  1. Stop the Thrift server and clients:
    sudo service hbase-thrift stop
  2. Stop the cluster by shutting down the master and the region servers:
    • Use the following command on the master node:
      sudo service hbase-master stop
    • Use the following command on each node hosting a region server:
      sudo service hbase-regionserver stop

Step 2: Configure hbase.rootdir

Change the distributed file system URI in hbase-site.xml to the name specified in the dfs.nameservices property in hdfs-site.xml. Clients must also have access to the dfs.client.* settings in hdfs-site.xml to use HA properly.

For example, suppose the HDFS HA property dfs.nameservices is set to ha-nn in hdfs-site.xml. To configure HBase to use the HA NameNodes, specify that same value as part of your hbase-site.xml's hbase.rootdir value:

<!-- Configure HBase to use the HA NameNode nameservice -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://ha-nn/hbase</value>
</property>
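The relationship between the two files can be sanity-checked mechanically. The sketch below is self-contained for illustration only: it simulates minimal hdfs-site.xml and hbase-site.xml files in a temporary directory (hypothetical paths) and verifies that hbase.rootdir begins with the dfs.nameservices ID rather than a specific host:port.

```shell
# Self-contained sanity check (simulated config files in a temp dir):
# hbase.rootdir should start with the dfs.nameservices ID, not a host:port.
dir=$(mktemp -d)
cat > "$dir/hdfs-site.xml" <<'EOF'
<configuration>
  <property><name>dfs.nameservices</name><value>ha-nn</value></property>
</configuration>
EOF
cat > "$dir/hbase-site.xml" <<'EOF'
<configuration>
  <property><name>hbase.rootdir</name><value>hdfs://ha-nn/hbase</value></property>
</configuration>
EOF
# Extract the two values and compare the URI prefix.
ns=$(sed -n 's:.*<name>dfs.nameservices</name><value>\([^<]*\)</value>.*:\1:p' "$dir/hdfs-site.xml")
rootdir=$(sed -n 's:.*<name>hbase.rootdir</name><value>\([^<]*\)</value>.*:\1:p' "$dir/hbase-site.xml")
case "$rootdir" in
  "hdfs://$ns/"*) echo "OK: hbase.rootdir uses nameservice $ns" ;;
  *)              echo "MISMATCH: $rootdir does not use nameservice $ns" ;;
esac
```

On a real cluster you would run the same comparison against the deployed hdfs-site.xml and hbase-site.xml instead of the simulated files.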

Step 3: Clean up /hbase/splitlogs

  Note: If you fail to perform this step HBase may fail to start because it is trying to use the old copy of the namespace.

Do the following on the ZooKeeper node:

  1. Run the ZooKeeper command-line client:
    /usr/lib/zookeeper/bin/
  2. Check whether /hbase/splitlogs has any content:
    ls /hbase/splitlogs
  3. If the listing shows any content, remove it:
    rmr /hbase/splitlogs

Step 4: Restart HBase

  1. Start the HBase Master:
    sudo service hbase-master start
  2. Start each of the HBase Region Servers:
    sudo service hbase-regionserver start

HBase-HDFS HA Troubleshooting

Problem: HMasters fail to start.

Solution: Check for this error in the hmaster logs:

2012-05-17 12:21:28,929 FATAL master.HMaster ( - Unhandled exception. Starting shutdown.
java.lang.IllegalArgumentException: ha-nn
        at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(
        at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(

If so, verify that Hadoop's hdfs-site.xml and core-site.xml files are in your hbase/conf directory. This may be necessary if you put your configurations in non-standard places.
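One low-risk way to make those files visible to HBase is to symlink them from the Hadoop configuration directory. The sketch below is self-contained for illustration: it uses hypothetical directories under a temp path with empty stand-in files; on a real node the source is typically /etc/hadoop/conf, the target /etc/hbase/conf, and the commands need appropriate privileges.

```shell
# Illustration with a hypothetical temp-dir layout; on a real node substitute
# your actual Hadoop and HBase conf directories (commonly /etc/hadoop/conf
# and /etc/hbase/conf) and run with sudo if needed.
base=$(mktemp -d)
HADOOP_CONF="$base/hadoop-conf"
HBASE_CONF="$base/hbase-conf"
mkdir -p "$HADOOP_CONF" "$HBASE_CONF"
touch "$HADOOP_CONF/hdfs-site.xml" "$HADOOP_CONF/core-site.xml"   # stand-ins
# Symlink so HBase picks up dfs.nameservices and the dfs.client.* settings:
ln -sf "$HADOOP_CONF/hdfs-site.xml" "$HBASE_CONF/hdfs-site.xml"
ln -sf "$HADOOP_CONF/core-site.xml" "$HBASE_CONF/core-site.xml"
ls "$HBASE_CONF"
```

Symlinking (rather than copying) keeps the HBase copies from drifting out of date when the Hadoop configuration changes.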

Configuring Oozie to Use HDFS HA

To configure an Oozie workflow to use HDFS HA, use the HA HDFS URI instead of the NameNode URI in the <name-node> element of the workflow.


<action name="mr-node">
    <map-reduce>
        <job-tracker>${jobTracker}</job-tracker>
        <name-node>hdfs://ha-nn</name-node>
        ...
    </map-reduce>
</action>

where ha-nn is the value of dfs.nameservices in hdfs-site.xml.

Upgrading the Hive Metastore to Use HDFS HA

For CDH 4.1 and later, the Hive metastore can be configured to use HDFS High Availability. See Hive Installation.

To configure the Hive metastore to use HDFS HA, change the location records stored in the metastore to reflect the nameservice specified in the dfs.nameservices property, using the Hive metatool to list and update the locations.

  Note: Before attempting to upgrade the Hive metastore to use HDFS HA, shut down the metastore and back it up to a persistent store.

If you are unsure which version of Avro SerDe is used, use both the serdePropKey and tablePropKey arguments. For example:

$ metatool -listFSRoot
$ metatool -updateLocation hdfs://nameservice1 hdfs:// -tablePropKey avro.schema.url -serdePropKey schema.url
$ metatool -listFSRoot


  • hdfs:// identifies the old NameNode location being replaced.
  • hdfs://nameservice1 specifies the new location and should match the value of the dfs.nameservices property.
  • tablePropKey is a table property key whose value field may reference the HDFS NameNode location and hence may require an update. To update the Avro SerDe schema URL, specify avro.schema.url for this argument.
  • serdePropKey is a SerDe property key whose value field may reference the HDFS NameNode location and hence may require an update. To update the Haivvreo schema URL, specify schema.url for this argument.
  Note: The Hive MetaTool is a best effort service that tries to update as many Hive metastore records as possible. If it encounters an error during the update of a record, it skips to the next record.
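Conceptually, -updateLocation rewrites the scheme-and-authority prefix of each stored location while leaving the path untouched. The sketch below illustrates only that rewrite, not the real metatool; oldnn.example.com:8020 and the warehouse path are hypothetical examples.

```shell
# Not the real metatool: just the prefix rewrite it performs on each record.
# oldnn.example.com:8020 is a hypothetical pre-HA NameNode authority.
old="hdfs://oldnn.example.com:8020"
new="hdfs://nameservice1"        # must match the dfs.nameservices value
loc="$old/user/hive/warehouse/sales"
echo "$loc" | sed "s|^$old|$new|"
# -> hdfs://nameservice1/user/hive/warehouse/sales
```

Because only the prefix changes, tables keep their relative warehouse paths after the upgrade; records whose locations do not start with the old prefix are left alone.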
This page last updated August 5, 2015