
Changing Hostnames

Required Role:

  Important: The process described here requires Cloudera Manager and cluster downtime.
After you have installed Cloudera Manager and created a cluster, you may need to update the names of the hosts running the Cloudera Manager Server or cluster services. To update a deployment with new hostnames, follow these steps:
  1. Verify whether TLS/SSL certificates have been issued for any of the services, and create new TLS/SSL certificates in advance for any services protected by TLS/SSL. Review the Cloudera Manager and CDH documentation at Cloudera Documentation.
      Tip: Search for SSL and TLS in the documentation.
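      For example, you can check whether an existing certificate was issued for the old hostname; the certificate path below is a placeholder for wherever the service stores its certificate:

      $ openssl x509 -in /path/to/service-cert.pem -noout -subject -dates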
  2. Export the Cloudera Manager configuration using one of the following methods:
    • Open a browser and go to the URL http://cm_hostname:7180/api/api_version/cm/deployment, then save the displayed configuration.
    • From a terminal, type:

      $ curl -u admin:admin http://cm_hostname:7180/api/api_version/cm/deployment > cme-cm-export.json

      If TLS/SSL is enabled for Cloudera Manager, specify the -k switch so that curl does not validate the server certificate:

      $ curl -k -u admin:admin http://cm_hostname:7180/api/api_version/cm/deployment > cme-cm-export.json

    where cm_hostname is the name of the Cloudera Manager host and api_version is the API version that matches your Cloudera Manager release. For example, http://tcdn5-1.ent.cloudera.com:7180/api/v6/cm/deployment.
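    Optionally, confirm that the export is well-formed JSON before continuing; this sketch uses Python's built-in json.tool module:

      $ python -m json.tool cme-cm-export.json > /dev/null && echo "export OK"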
  3. Stop all services on the cluster.
  4. Stop the Cloudera Management Service.
  5. Stop the Cloudera Manager Server.
  6. Stop the Cloudera Manager Agents on the hosts whose hostnames are changing.
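    For the two preceding steps, on RHEL-compatible systems the standard init scripts can be used; run the first command on the Cloudera Manager Server host, and the second on each host whose hostname is changing:

      $ sudo service cloudera-scm-server stop
      $ sudo service cloudera-scm-agent stop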
  7. Back up the Cloudera Manager Server database using mysqldump, pg_dump, or another preferred backup utility. Store the backup in a safe location.
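    For example, with the embedded PostgreSQL database (the database name scm, user cloudera-scm, and port 7432 below are the embedded defaults; substitute your own values for an external database):

      $ pg_dump -h localhost -p 7432 -U cloudera-scm scm > cm-server-db-backup.sql

    Or, for an external MySQL database (the database name scm here is an assumption; use the name chosen when the database was created):

      $ mysqldump -u root -p scm > cm-server-db-backup.sql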
  8. Update names and principals:
    1. Update the target hosts using standard per-OS/name-service methods (/etc/hosts, DNS, /etc/sysconfig/network, the hostname command, and so on). Ensure that you remove the old hostname.
    2. If you are changing the hostname of the host running Cloudera Manager Server, do the following:
      1. Change the hostname per step 8.a.
      2. Update the Cloudera Manager hostname (the server_host property) in /etc/cloudera-scm-agent/config.ini on all Agents.
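        A minimal sketch of steps 8.a and 8.b on a RHEL 6-style host; new-host.example.com and new-cm-host.example.com are placeholder names:

        $ sudo hostname new-host.example.com
        $ sudo sed -i 's/^HOSTNAME=.*/HOSTNAME=new-host.example.com/' /etc/sysconfig/network
        $ sudo vi /etc/hosts    # map the host's IP address to the new name; remove the old name
        $ sudo sed -i 's/^server_host=.*/server_host=new-cm-host.example.com/' /etc/cloudera-scm-agent/config.ini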
    3. If the cluster is configured for Kerberos security, do the following:
      1. In the Cloudera Manager database, clear the merged_keytab value:
        • PostgreSQL
          update roles set merged_keytab=NULL;
        • MySQL
          update ROLES set MERGED_KEYTAB=NULL;
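          For example, against the embedded PostgreSQL database (scm, cloudera-scm, and port 7432 are the embedded defaults) or an external MySQL database (the scm database name is an assumption):

          $ psql -h localhost -p 7432 -U cloudera-scm scm -c "update roles set merged_keytab=NULL;"
          $ mysql -u root -p scm -e "update ROLES set MERGED_KEYTAB=NULL;"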
      2. Remove old hostname cluster service principals from the KDC database using one of the following:
        • Use the delprinc command within kadmin.local interactive shell.
        • From the command line:
          kadmin.local -q "listprincs" | grep -E "(HTTP|hbase|hdfs|hive|httpfs|hue|impala|mapred|solr|oozie|yarn|zookeeper)[^/]*/[^/]*@" > cluster-princ.txt

          Open cluster-princ.txt and remove any entries for principals that do not belong to cluster services. In particular, make sure that the default krbtgt principal, and any other principals you created or that Kerberos created by default, are not in the file; the next command deletes every principal listed. Then remove the old principals:

          $ for i in `cat cluster-princ.txt`; do yes yes | kadmin.local -q "delprinc $i"; done

      3. Start the Cloudera Manager database and Cloudera Manager Server.
      4. Start the Cloudera Manager Agents on the newly renamed hosts. The Agents should show a current heartbeat in Cloudera Manager.
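        For the two preceding steps, on RHEL-compatible systems (the cloudera-scm-server-db script applies only when the embedded database is in use):

        $ sudo service cloudera-scm-server-db start
        $ sudo service cloudera-scm-server start
        $ sudo service cloudera-scm-agent start    # on each renamed host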
      5. Within the Cloudera Manager Admin Console, recreate all the principals based on the new hostnames:
        1. Select Administration > Kerberos.
        2. Do one of the following:
          • If there are no principals listed, click the Generate Principals button.
          • If there are principals listed, click the top checkbox to select all principals and click the Regenerate button.
  9. If one of the hosts that was renamed has a NameNode configured with High Availability and automatic failover enabled, reconfigure the ZooKeeper failover controller znodes to reflect the new hostname.
      Warning:
    • Do not perform this step if you are also running JobTracker in a High Availability configuration, as clearing the hadoop-ha znode will negatively impact JobTracker HA.
    • All other services, and most importantly HDFS, should not be running.
    1. Start ZooKeeper services.
        Note: Make sure the ZooKeeper Failover Controller role is stopped within the HDFS service; start only the ZooKeeper Server role instances.
    2. On one of the hosts that has a ZooKeeper Server role, log in to the ZooKeeper CLI to delete the nameservice znode:
      • On a package-based installation zkCli.sh is found at: /usr/lib/zookeeper/bin/zkCli.sh
      • On a parcel-based installation zkCli.sh is found at: /opt/cloudera/parcels/CDH/lib/zookeeper/bin/zkCli.sh
      1. Verify that the HA znode exists: zkCli$ ls /hadoop-ha
      2. Delete the old znode: zkCli$ rmr /hadoop-ha/nameservice1
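      A sample session, assuming a ZooKeeper Server is listening on the default port 2181 on the local host and the nameservice is named nameservice1, as above:

      $ /usr/lib/zookeeper/bin/zkCli.sh -server localhost:2181
      [zk: localhost:2181(CONNECTED) 0] ls /hadoop-ha
      [zk: localhost:2181(CONNECTED) 1] rmr /hadoop-ha/nameservice1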
    3. In the Cloudera Manager Admin Console, go to the HDFS service.
    4. Click the Instances tab.
    5. Select Actions > Initialize High Availability State in ZooKeeper....
  10. For each of the Cloudera Management Service roles (Host Monitor, Service Monitor, Reports Manager, Activity Monitor, Navigator), go to the role's configuration page and update the Database Hostname property.
  11. Start all cluster services.
  12. Start the Cloudera Management Service.
  13. Deploy client configurations.
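    Client configurations can be deployed from the cluster's Actions menu, or through the API. A sketch using the same endpoint style as step 2, where cluster_name is the cluster's name as known to Cloudera Manager:

      $ curl -X POST -u admin:admin http://cm_hostname:7180/api/api_version/clusters/cluster_name/commands/deployClientConfig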