This is the documentation for Cloudera 5.2.x.

Configuring Other CDH Components to Use HDFS HA

Configuring HBase to Use HDFS HA

Configuring HBase to Use HDFS HA Using the Command Line

To configure HBase to use HDFS HA, proceed as follows.

  1. Shut Down the HBase Cluster
  2. Configure hbase.rootdir
  3. Restart HBase
  4. HBase-HDFS HA Troubleshooting

Shut Down the HBase Cluster

  1. Stop the Thrift server and clients:
    sudo service hbase-thrift stop
  2. Stop the cluster by shutting down the Master and the RegionServers:
    • Use the following command on the Master host:
      sudo service hbase-master stop
    • Use the following command on each host hosting a RegionServer:
      sudo service hbase-regionserver stop

Configure hbase.rootdir

Change the distributed file system URI in hbase-site.xml to the name specified in the dfs.nameservices property in hdfs-site.xml. The clients must also have access to hdfs-site.xml's dfs.client.* settings to properly use HA.

For example, suppose the HDFS HA property dfs.nameservices is set to ha-nn in hdfs-site.xml. To configure HBase to use the HA NameNodes, specify that same value as part of your hbase-site.xml's hbase.rootdir value:

<!-- Configure HBase to use the HA NameNode nameservice -->
<property>
  <name>hbase.rootdir</name>
  <value>hdfs://ha-nn/hbase</value>
</property>
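For reference, the ha-nn nameservice itself is defined in hdfs-site.xml. A minimal sketch is shown below; the two-NameNode layout, hostnames, and port are illustrative placeholders, not values from this guide:

```xml
<!-- Illustrative hdfs-site.xml fragment defining the ha-nn nameservice.
     Hostnames and ports are placeholders; use your cluster's values. -->
<property>
  <name>dfs.nameservices</name>
  <value>ha-nn</value>
</property>
<property>
  <name>dfs.ha.namenodes.ha-nn</name>
  <value>nn1,nn2</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ha-nn.nn1</name>
  <value>nn1host.example.com:8020</value>
</property>
<property>
  <name>dfs.namenode.rpc-address.ha-nn.nn2</name>
  <value>nn2host.example.com:8020</value>
</property>
<property>
  <name>dfs.client.failover.proxy.provider.ha-nn</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
```

The dfs.client.failover.proxy.provider.* setting is one of the dfs.client.* properties that HBase clients must be able to read in order to resolve the nameservice URI.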

Restart HBase

  1. Start the HBase Master.
  2. Start each of the HBase RegionServers.

HBase-HDFS HA Troubleshooting

Problem: HMasters fail to start.

Solution: Check for this error in the HMaster log:

2012-05-17 12:21:28,929 FATAL master.HMaster (HMaster.java:abort(1317)) - Unhandled exception. Starting shutdown.
java.lang.IllegalArgumentException: java.net.UnknownHostException: ha-nn
        at org.apache.hadoop.security.SecurityUtil.buildTokenService(SecurityUtil.java:431)
        at org.apache.hadoop.hdfs.NameNodeProxies.createNonHAProxy(NameNodeProxies.java:161)
        at org.apache.hadoop.hdfs.NameNodeProxies.createProxy(NameNodeProxies.java:126)
...

If so, verify that Hadoop's hdfs-site.xml and core-site.xml files are in your hbase/conf directory. This may be necessary if you put your configurations in non-standard places.
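Alternatively, you can point HBase at the Hadoop configuration directory through its classpath instead of copying the files. A minimal sketch for hbase-env.sh, assuming the Hadoop configs live in /etc/hadoop/conf (a common CDH default, shown here as an illustrative path):

```
# hbase-env.sh (illustrative): put Hadoop's conf directory on HBase's
# classpath so hdfs-site.xml and core-site.xml are found at startup.
export HBASE_CLASSPATH=/etc/hadoop/conf
```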

Upgrading the Hive Metastore to Use HDFS HA

The Hive metastore can be configured to use HDFS high availability.

Using Cloudera Manager

  1. Go to the Hive service.
  2. Select Actions > Stop.
      Note: You may want to stop the Hue and Impala services first, if present, as they depend on the Hive service.
    Click Stop to confirm the command.
  3. Back up the Hive metastore database.
  4. Select Actions > Update Hive Metastore NameNodes and confirm the command.
  5. Select Actions > Start.
  6. Restart the Hue and Impala services if you stopped them prior to updating the metastore.

Using the Command Line

To configure the Hive metastore to use HDFS HA, change the metastore records that reference the old NameNode location so that they use the nameservice specified in the dfs.nameservices property. Use the Hive metatool to list the current locations and update them.

  Note: Before attempting to upgrade the Hive metastore to use HDFS HA, shut down the metastore and back it up to a persistent store.

If you are unsure which version of Avro SerDe is used, use both the serdePropKey and tablePropKey arguments. For example:

$ metatool -listFSRoot
hdfs://oldnamenode.com/user/hive/warehouse
$ metatool -updateLocation hdfs://nameservice1 hdfs://oldnamenode.com \
    -tablePropKey avro.schema.url -serdePropKey schema.url
$ metatool -listFSRoot
hdfs://nameservice1/user/hive/warehouse

where:

  • hdfs://oldnamenode.com/user/hive/warehouse identifies the NameNode location.
  • hdfs://nameservice1 specifies the new location and should match the value of the dfs.nameservices property.
  • tablePropKey is a table property key whose value field may reference the HDFS NameNode location and hence may require an update. To update the Avro SerDe schema URL, specify avro.schema.url for this argument.
  • serdePropKey is a SerDe property key whose value field may reference the HDFS NameNode location and hence may require an update. To update the Haivvreo schema URL, specify schema.url for this argument.
  Note: The Hive metatool is a best effort service that tries to update as many Hive metastore records as possible. If it encounters an error during the update of a record, it skips to the next record.

Configuring Hue to Work with HDFS HA

  1. Go to the HDFS service.
  2. Click the Instances tab.
  3. Click the Add Role Instances button.
  4. Click the textbox under the HttpFS role, select a host where you want to install the HttpFS role, and click Continue.
  5. After you are returned to the Instances page, select the new HttpFS role.
  6. Select Actions > Start.
  7. After the command has completed, go to the Hue service.
  8. Click the Configuration tab.
  9. Select the Service-Wide > HDFS Web Interface Role property.
  10. Select HttpFS instead of the NameNode role, and save your changes.
  11. Restart the Hue service.

Configuring Impala to Work with HDFS HA

  1. Complete the steps to reconfigure the Hive metastore database, as described in the preceding section. Impala shares the same underlying database with Hive, to manage metadata for databases, tables, and so on.
  2. Issue the INVALIDATE METADATA statement from an Impala shell. This one-time operation makes all Impala daemons across the cluster aware of the latest settings for the Hive metastore database. Alternatively, restart the Impala service.

Configuring Oozie to Use HDFS HA

To configure an Oozie workflow to use HDFS HA, use the HDFS nameservice instead of the NameNode URI in the <name-node> element of the workflow.

Example:

<action name="mr-node">
  <map-reduce>
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>hdfs://ha-nn</name-node>
    ...
  </map-reduce>
</action>

where ha-nn is the value of dfs.nameservices in hdfs-site.xml.
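Rather than hard-coding the nameservice, workflows commonly take it from a property so the same workflow definition can run against any cluster. A sketch, where the nameNode and jobTracker property names are conventional but the values shown are placeholders:

```
# job.properties (illustrative values)
nameNode=hdfs://ha-nn
jobTracker=jt-host.example.com:8032
```

The workflow's <name-node> element then references ${nameNode} instead of a literal URI, just as the example above references ${jobTracker}.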