
Known Issues and Workarounds in Cloudera Manager 5

The following sections describe the current known issues in Cloudera Manager 5.

Typo in Sqoop DB path suffix (SqoopParams.DERBY_SUFFIX)

Sqoop 2 appears to lose data when upgrading to CDH 5.4. This is due to Cloudera Manager erroneously configuring the Derby path with "repositoy" instead of "repository".

Workaround:
  1. SSH into your Sqoop 2 server host and move the Derby database files to the misspelled location that Cloudera Manager configures, usually from /var/lib/sqoop2/repository to /var/lib/sqoop2/repositoy (see the example command after these steps).
  2. Run the Sqoop 2 database upgrade command using the Actions drop-down menu for Sqoop 2.
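For example, on a host using the default paths above, the move could be performed as follows (the paths are the defaults mentioned in step 1; adjust them to match your configuration):

sudo mv /var/lib/sqoop2/repository /var/lib/sqoop2/repositoy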

Automated Solr SSL configuration may fail silently

Cloudera Manager 5.4.1 offers simplified SSL configuration for Solr. This process uses a solrctl command to configure the urlScheme Solr cluster property. The solrctl command produces the same results as the Solr REST API call /solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=https. For example, the call might appear as: https://example.com:8983/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=https

Cloudera Manager automatically executes this command during Solr service startup. If this command fails, the Solr service startup continues without reporting errors, despite the resulting incorrect SSL configuration.

Workaround: If Solr service startup completes without properly configuring urlScheme, set the property manually by invoking the previously described Solr REST API call.
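For example, using the example host and port from above (substitute your actual Solr host), the property can be set with a call such as:

curl -k 'https://example.com:8983/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=https'

The -k option skips certificate verification; omit it if your certificate chain is trusted. On a Kerberos-enabled cluster, curl's --negotiate -u : options may also be required.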

Backup and Disaster Recovery replication does not set MapReduce Java options

Replication used for backup and disaster recovery relies on system-wide MapReduce memory options, and you cannot configure the options using the Advanced Configuration Snippet.

Removing the default value of a property fails

For example, if you go to Home > Administration > Settings, open the Automatically Downloaded Parcels property, and remove the default CDH value, the following error message displays: "Could not find config to delete with template name: parcel_autodownload_products".

Agent fails when retrieving log files with very long messages

When searching or retrieving large log files using the Agent, the Agent may consume nearly 100% CPU until it is restarted. This can also happen when the Collect Host Statistics command is issued.

For example, when the Hive hive.log.explain.output property is set to its default value of true, very large log messages containing EXPLAIN output can cause the Cloudera Manager Agent to hang or become unstable. In this case, the workaround is to set hive.log.explain.output to false.
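For example, a snippet such as the following, added to the Hive service's advanced configuration snippet for hive-site.xml, disables the property:

<property>
  <name>hive.log.explain.output</name>
  <value>false</value>
</property>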

New Sentry Synchronization Path Prefixes added in NameNode configuration are not enforced correctly

Any new path prefixes added in the NameNode configuration are not correctly enforced by Sentry. The ACLs are initially set correctly, but after some time they are reset to the old default.

Workaround: Set the following property in Sentry Service Advanced Configuration Snippet (Safety Valve) and Hive Metastore Server Advanced Configuration Snippet (Safety Valve) for hive-site.xml:
<property>
  <name>sentry.hdfs.integration.path.prefixes</name>
  <value>/user/hive/warehouse,ADDITIONAL_DATA_PATHS</value>
</property>
where ADDITIONAL_DATA_PATHS is a comma-separated list of HDFS paths where Hive data will be stored. The value should be the same value as sentry.authorization-provider.hdfs-path-prefixes set in the hdfs-site.xml on the NameNode.
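For example, if Hive data is also stored under /data/projects (a hypothetical path used here for illustration), the property would be:

<property>
  <name>sentry.hdfs.integration.path.prefixes</name>
  <value>/user/hive/warehouse,/data/projects</value>
</property>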

Kafka 1.2 CSD conflicts with CSD included in Cloudera Manager 5.4

If the Kafka CSD was installed in Cloudera Manager 5.3 or lower, the old version must be uninstalled; otherwise, it conflicts with the version of the Kafka CSD bundled with Cloudera Manager 5.4.

Workaround: Remove the Kafka 1.2 CSD before upgrading Cloudera Manager to 5.4:
  1. Determine the location of the CSD directory:
    1. Select Administration > Settings.
    2. Click the Custom Service Descriptors category.
    3. Retrieve the directory from the Local Descriptor Repository Path property.
  2. Delete the Kafka CSD from the directory.
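For example, assuming the default Local Descriptor Repository Path of /opt/cloudera/csd and an illustrative jar name (the actual file name in your directory may differ), the deletion might look like:

sudo rm /opt/cloudera/csd/KAFKA-1.2.0.jar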

Recommission host doesn't deploy client configurations

Recommissioning a host does not deploy client configurations to that host. This can leave client configurations pointing to the wrong locations, which can cause errors such as the NodeManager failing to start with "Failed to initialize container executor".

Workaround: Deploy client configurations first and then restart roles on the recommissioned host.

Hive on Spark is not supported in Cloudera Manager and CDH 5.4

You can configure Hive on Spark, but it is not recommended for production clusters.

CDH 5 requires JDK 1.7

JDK 1.6 is not supported on any CDH 5 release. Before CDH 5.4.0, however, CDH libraries were compatible with JDK 1.6. As of CDH 5.4.0, CDH libraries are no longer compatible with JDK 1.6, and applications using CDH libraries must use JDK 1.7.

In addition, you must upgrade your cluster to a supported version of JDK 1.7 before upgrading to CDH 5. See Upgrading to Oracle JDK 1.7 before Upgrading to CDH 5 for instructions.

Upgrade wizard incorrectly upgrades the Sentry DB

There is no Sentry database upgrade in 5.4, but the upgrade wizard indicates that there is. Running the upgrade command and taking the backup are harmless, but both steps are unnecessary.

Cloudera Manager doesn't correctly generate client configurations for services deployed using CSDs

HiveServer2 requires a Spark on YARN gateway on the same host in order for Hive on Spark to work. You must deploy Spark client configurations whenever the Spark configuration changes so that HiveServer2 picks up the change.

CSDs that depend on Spark receive an incomplete Spark client configuration. Note that Cloudera Manager does not ship with any such CSDs by default.

Workaround: Use /etc/spark/conf for Spark configuration, and ensure there is a Spark on YARN gateway on that host.

Cloudera Manager 5.3.1 upgrade fails if Spark standalone and Kerberos are configured

CDH upgrade fails if Kerberos is enabled and Spark standalone is installed, because Spark standalone does not work on a Kerberos-enabled cluster.

Workaround: Remove the Spark standalone service first, and then proceed with the upgrade.

KMS and Key Trustee ACLs do not work in Cloudera Manager 5.3

ACLs configured for the KMS (File) and KMS (Navigator Key Trustee) services do not work since these services do not receive the values for hadoop.security.group.mapping and related group mapping configuration properties.

Workaround:

KMS (File): Add all configuration properties starting with hadoop.security.group.mapping from the NameNode core-site.xml to the KMS (File) property, Key Management Server Advanced Configuration Snippet (Safety Valve) for core-site.xml.

KMS (Navigator Key Trustee): Add all configuration properties starting with hadoop.security.group.mapping from the NameNode core-site.xml to the KMS (Navigator Key Trustee) property, Key Management Server Proxy Advanced Configuration Snippet (Safety Valve) for core-site.xml.
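For example, if the NameNode core-site.xml uses LDAP group mapping, the properties copied into each safety valve might look like the following (the values here are illustrative; copy the actual values from your core-site.xml):

<property>
  <name>hadoop.security.group.mapping</name>
  <value>org.apache.hadoop.security.LdapGroupsMapping</value>
</property>
<property>
  <name>hadoop.security.group.mapping.ldap.url</name>
  <value>ldap://ldap.example.com:389</value>
</property>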

Exporting and importing Hue database sometimes times out after 90 seconds

Executing 'dump database' or 'load database' for Hue from Cloudera Manager returns "command aborted because of exception: Command timed-out after 90 seconds". The Hue database can be exported to JSON from within Cloudera Manager, but when the database is large, the export times out after 90 seconds.

Workaround: Ignore the timeout. The command should eventually succeed even though Cloudera Manager reports that it timed out.

Changing hostname of key trustee server requires editing the keytrustee.conf file

If you change the hostname of your primary or backup server, you will need to edit your keytrustee.conf file. This issue typically arises if you replace a primary or backup server with a server having a different hostname. If the same hostname is used on the new server, there will be no issues.

Workaround: Use the same hostname on the replacement server.

Hosts with Impala Llama roles must also have at least one YARN role

When integrated resource management is enabled for Impala, hosts running the Impala Llama roles must have at least one YARN role, because Llama requires the topology.py script from the YARN configuration. If this requirement is not met, Impala queries may fail and the Llama role logs may show errors such as:
Exception running /etc/hadoop/conf.cloudera.yarn/topology.py
java.io.IOException: Cannot run program "/etc/hadoop/conf.cloudera.yarn/topology.py"

Workaround: Add a YARN gateway role to each Llama host that does not already have at least one YARN role (of any type).

The high availability wizard does not verify that there is a running ZooKeeper service

If either of the following is true, the enable high-availability wizard fails:
  • ZooKeeper is present but not running, and the HDFS dependency on ZooKeeper is not set
  • ZooKeeper is absent
Workaround: Before enabling high availability, do the following:
  1. Create and start a ZooKeeper service if one doesn't exist.
  2. Go to the HDFS service.
  3. Click the Configuration tab.
  4. Select Scope > Service-Wide.
  5. Set the ZooKeeper Service property to the ZooKeeper service.
  6. Click Save Changes to commit the changes.

Cloudera Manager Installation Path A fails on RHEL 5.7 due to PostgreSQL conflict

On RHEL 5.7, cloudera-manager-installer.bin fails due to a PostgreSQL conflict if PostgreSQL 8.1 is already installed on your host.

Workaround: Remove PostgreSQL from host and rerun cloudera-manager-installer.bin.

Cloudera Management Service roles fail to start after upgrade to Cloudera Manager

If you enabled TLS security for the Cloudera Manager Admin Console before upgrading Cloudera Manager, the Cloudera Management Service roles will try to communicate with Cloudera Manager using TLS after the upgrade, and will fail to start unless the following SSL properties have been configured.

Hence, if the following property is enabled in Cloudera Manager, use the workaround below to allow the Cloudera Management Service roles to communicate with Cloudera Manager.

Use TLS Encryption for Admin Console: Select this option to enable TLS encryption between the Server and the user's web browser.

Workaround:

  1. Open the Cloudera Manager Admin Console and navigate to the Cloudera Management Service.
  2. Click Configuration.
  3. In the Search field, type SSL to show the SSL properties (found under the Service-Wide > Security category).
  4. Edit the following SSL properties according to your cluster configuration; see the keytool example after these steps to verify the truststore values.
    Table 1. Cloudera Management Service SSL Properties
    SSL Client Truststore File Location: Path to the client truststore file used in HTTPS communication. The contents of this truststore can be modified without restarting the Cloudera Management Service roles. By default, changes to its contents are picked up within ten seconds.
    SSL Client Truststore File Password: Password for the client truststore file.
  5. Click Save Changes.
  6. Restart the Cloudera Management Service. For more information, see HTTPS Communication in Cloudera Manager.
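To verify the truststore location and password entered in step 4, you can inspect the file with the JDK keytool utility (the path below is a placeholder for your truststore file):

keytool -list -keystore /path/to/truststore.jks

keytool prompts for the truststore password and lists the certificates the truststore contains.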

Spurious warning on Accumulo 1.6 gateway hosts

When using the Accumulo shell on a host with only an Accumulo 1.6 Service gateway role, users will receive a warning about failing to create the directory /var/log/accumulo. The shell works normally otherwise.

Workaround: The warning is safe to ignore.

Accumulo 1.6 service log aggregation and search does not work

Cloudera Manager log aggregation and search features are incompatible with the log formatting needed by the Accumulo Monitor. Attempting to use either the "Log Search" diagnostics feature or the log file link on an individual service role's summary page will return empty search results.

Severity: High

Workaround: Operators can use the Accumulo Monitor to see recent severe log messages. They can see recent log messages below the WARNING level via a given role's process page and can inspect full logs on individual hosts by looking in /var/log/accumulo.

Cloudera Manager incorrectly sizes Accumulo Tablet Server max heap size after 1.4.4-cdh4.5.0 to 1.6.0-cdh4.6.0 upgrade

Because the upgrade path from Accumulo 1.4.4-cdh4.5.0 to 1.6.0-cdh4.6.0 involves having both services installed simultaneously, Cloudera Manager concludes that worker hosts in the cluster are oversubscribed on memory and attempts to downsize the max heap size allowed for 1.6.0-cdh4.6.0 Tablet Servers.

Severity: High

Workaround: Manually verify that the Accumulo 1.6.0-cdh4.6.0 Tablet Server max heap size is large enough for your needs. Cloudera recommends you set this value to the sum of 1.4.4-cdh4.5.0 Tablet Server and Logger heap sizes.
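For example, if the 1.4.4-cdh4.5.0 Tablet Server max heap was 2 GB and the Logger max heap was 1 GB, set the 1.6.0-cdh4.6.0 Tablet Server max heap to 3 GB.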

Cluster CDH version not configured correctly for package installs

If you have installed CDH as a package, make sure after the installation that the cluster CDH version matches the package CDH version. If they do not match, Cloudera Manager incorrectly enables and disables service features based on the cluster's configured CDH version.

Workaround:

Manually configure the cluster CDH version to match the package CDH version. Click ClusterName > Actions > Configure CDH Version. In the dialog, Cloudera Manager displays the installed CDH version, and asks for confirmation to configure itself with the new version.

Accumulo installations using LZO do not indicate dependence on the GPL Extras parcel

Accumulo 1.6 installations that use LZO compression functionality do not indicate that LZO depends on the GPL Extras parcel. When Accumulo is configured to use LZO, Cloudera Manager has no way to track that the Accumulo service now relies on the GPL Extras parcel. This prevents Cloudera Manager from warning administrators before they remove the parcel while Accumulo still requires it for proper operation.

Workaround: Check your Accumulo 1.6 service for the configuration changes mentioned in the Cloudera documentation for using Accumulo with CDH prior to removing the GPL Extras parcel. If the parcel is mistakenly removed, reinstall it and restart the Accumulo 1.6 service.

Created pools are not preserved when Dynamic Resource Pools page is used to configure YARN or Impala

If the Dynamic Resource Pools page is used to configure YARN or Impala services in a cluster, the placement rules can specify that a pool be created on demand if it does not already exist. If changes are made to the configuration using this page, pools created as a result of such rules are not preserved across the configuration change.

Workaround: Submit the YARN application or Impala query as before, and the pool will be created on demand once again.

User should be prompted to add the AMON role when adding MapReduce to a CDH 5 cluster

When the MapReduce service is added to a CDH 5 cluster, the user is not asked to add the AMON role. Then, an error displays when the user tries to view MapReduce activities.

Workaround: Manually add the AMON role after adding the MapReduce service.

Enterprise license expiration alert not displayed until Cloudera Manager Server is restarted

When an enterprise license expires, the expiration notification banner is not displayed until the Cloudera Manager Server has been restarted. The enterprise features of Cloudera Manager are not affected by an expired license.

Workaround: None.

Cluster installation with CDH 4.1 and Impala fails

In Cloudera Manager 5.0, installing a new cluster through the wizard with CDH 4.1 and Impala fails with the following error message, "dfs.client.use.legacy.blockreader.local is not enabled."

Workaround: Perform one of the following:
  1. Use CDH 4.2 or higher, or
  2. Install all desired services except Impala in your initial cluster setup. From the home page, use the drop-down menu near the cluster name and select Configure CDH Version. Confirm the version, then add Impala.

Configurations for decommissioned roles not migrated from MapReduce to YARN

When the Import MapReduce Configuration wizard is used to import MapReduce configurations to YARN, decommissioned roles in the MapReduce service do not cause the corresponding imported roles to be marked as decommissioned in YARN.

Workaround: Delete or decommission the roles in YARN after running the import.

The HDFS command Roll Edits does not work in the UI when HDFS is federated

The HDFS command Roll Edits does not work in the Cloudera Manager UI when HDFS is federated because the command doesn't know which nameservice to use.

Workaround: Use the API, not the Cloudera Manager UI, to execute the Roll Edits command.
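For example, the command can be invoked through the Cloudera Manager REST API with a call such as the following, which lets you specify the nameservice. The endpoint name, API version, credentials, and cluster and service names shown here are illustrative; consult the API documentation for your Cloudera Manager version:

curl -u admin:admin -X POST -H 'Content-Type: application/json' -d '{ "nameservice": "nameservice1" }' 'http://cm-host:7180/api/v10/clusters/Cluster1/services/hdfs/commands/hdfsRollEdits'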

Cloudera Manager reports a confusing version number if you have oozie-client, but not oozie installed on a CDH 4.4 node

In CDH versions before 4.4, the metadata identifying Oozie was placed in the client package rather than the server package. Consequently, if the client package is not installed but the server package is, Cloudera Manager reports that Oozie is present but coming from CDH 3 instead of CDH 4.

Workaround: Either install the oozie-client package, or upgrade to at least CDH 4.4. Parcel-based installations are unaffected.
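For example, on a RHEL-compatible system the client package can be installed with:

sudo yum install oozie-client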

Cloudera Manager doesn't work with CDH 5.0.0 Beta 1

When you upgrade from Cloudera Manager 5.0.0 Beta 1 with CDH 5.0.0 Beta 1 to Cloudera Manager 5.0.0 Beta 2, Cloudera Manager won't work with CDH 5.0.0 Beta 1 and there's no notification of that fact.

Workaround: None. Do a new installation of CDH 5.0.0 Beta 2.

On CDH 4.1 secure clusters managed by Cloudera Manager 4.8.1 and higher, the Impala Catalog server needs advanced configuration snippet update

Impala queries fail on CDH 4.1 when the Hive "Bypass Hive Metastore Server" option is selected.

Workaround: Add the following to the Impala Catalog Server advanced configuration snippet for hive-site.xml, replacing Hive_Metastore_Server_Host with the hostname of your Hive Metastore Server:

<property>
  <name>hive.metastore.local</name>
  <value>false</value>
</property>
<property>
  <name>hive.metastore.uris</name>
  <value>thrift://Hive_Metastore_Server_Host:9083</value>
</property>

Rolling Upgrade to CDH 5 is not supported.

Rolling upgrade between CDH 4 and CDH 5 is not supported. Incompatibilities between major versions mean rolling restarts are not possible. In addition, rolling upgrade will not be supported from CDH 5.0.0 Beta 1 to any later releases, and may not be supported between any future beta versions of CDH 5 and the General Availability release of CDH 5.

Workaround: None.

Error reading .zip file created with the Collect Diagnostic Data command.

After collecting diagnostic data and using the Download Diagnostic Data button to download the created zip file to the local system, the zip file cannot be opened using the Firefox browser on a Macintosh. This is because the zip file is created as a Zip64 file, and the unzip utility included with Macs does not support Zip64. The unzip utility must be version 6.0 or later. You can determine the unzip version with unzip -v.

Workaround: Update the unzip utility to a version that supports Zip64.
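For example, running unzip -v prints the version in the first line of its output; a supported version reports something like the following (exact banner text varies by build):

unzip -v
UnZip 6.00 of 20 April 2009, by Info-ZIP. ...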

After JobTracker failover, complete jobs from the previous active JobTracker are not visible.

When a JobTracker failover occurs and a new JobTracker becomes active, the new JobTracker UI does not show the completed jobs from the previously active JobTracker (that is now the standby JobTracker). For these jobs the "Job Details" link does not work.

Severity: Med

Workaround: None.

After JobTracker failover, information about rerun jobs is not updated in Activity Monitor.

When a JobTracker failover occurs while there are running jobs, the jobs are restarted by the new active JobTracker by default. For the restarted jobs, the Activity Monitor does not update the following:
  1. The start time of the restarted job remains the start time of the original job.
  2. Any Map or Reduce task that finished before the failure is not updated with information about the corresponding task rerun by the new active JobTracker.

Severity: Med

Workaround: None.

Installing on AWS, you must use private EC2 hostnames.

When installing on AWS instances and adding hosts using their public names, the installation fails when the hosts fail to heartbeat.

Severity: Med

Workaround:
  1. Use the Back button in the wizard to return to the original screen, where it prompts for a license.
  2. Rerun the wizard, but choose "Use existing hosts" instead of searching for hosts. The hosts now show up with their internal EC2 names.
  3. Continue through the wizard; the installation should succeed.

If HDFS uses Quorum-based Storage without HA enabled, the SecondaryNameNode cannot checkpoint.

If HDFS is set up in non-HA mode but with Quorum-based Storage configured, dfs.namenode.edits.dir is automatically configured to the Quorum-based Storage URI. However, the SecondaryNameNode cannot currently read edits from a Quorum-based Storage URI, and will be unable to perform a checkpoint.

Severity: Medium

Workaround: Add the dfs.namenode.edits.dir property to the NameNode's advanced configuration snippet, setting its value to both the Quorum-based Storage URI and a local directory, and restart the NameNode. For example:

<property>
  <name>dfs.namenode.edits.dir</name>
  <value>qjournal://jn1HostName:8485;jn2HostName:8485;jn3HostName:8485/journalhdfs1,file:///dfs/edits</value>
</property>

Changing the rack configuration may temporarily cause mis-replicated blocks to be reported.

A rack re-configuration will cause HDFS to report mis-replicated blocks until HDFS rebalances the system, which may take some time. This is a normal side-effect of changing the configuration.

Severity: Low

Workaround: None

Cannot use '/' as a mount point with a Federated HDFS Nameservice.

A Federated HDFS Service doesn't support nested mount points, so it is impossible to mount anything at '/'. Because of this issue, the root directory will always be read-only, and any client application that requires a writeable root directory will fail.

Severity: Low

Workaround:
  1. In the CDH 4 HDFS Service > Configuration tab of the Cloudera Manager Admin Console, search for "nameservice".
  2. In the Mountpoints field, change the mount point from "/" to a list of mount points that are in the namespace that the Nameservice will manage. (You can enter this as a comma-separated list - for example, "/hbase, /tmp, /user" or by clicking the plus icon to add each mount point in its own field.) You can determine the list of mount points by running the command hadoop fs -ls / from the CLI on the NameNode host.
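For example, the listing might look like the following (directory names are illustrative):

hadoop fs -ls /
drwxr-xr-x   - hbase hbase           0 2015-05-01 12:00 /hbase
drwxrwxrwt   - hdfs  supergroup      0 2015-05-01 12:00 /tmp
drwxr-xr-x   - hdfs  supergroup      0 2015-05-01 12:00 /user

In this case, the Mountpoints field would contain "/hbase, /tmp, /user".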

Historical disk usage reports do not work with federated HDFS.

Severity: Low

Workaround: None.

(CDH 4 only) Activity monitoring does not work on YARN activities.

Activity monitoring is not supported for YARN in CDH 4.

Severity: Low

Workaround: None

HDFS monitoring configuration applies to all Nameservices

The monitoring configurations at the HDFS level apply to all Nameservices. So, if there are two federated Nameservices, it's not possible to disable a check on one but not the other. Likewise, it's not possible to have different thresholds for the two Nameservices.

Severity: Low

Workaround: None

Supported and Unsupported Replication Scenarios and Limitations

Restoring snapshot of a file to an empty directory does not overwrite the directory

Restoring a snapshot of an HDFS file to an empty HDFS directory (using the Restore As action) results in the file being restored inside the directory rather than overwriting the directory.

Workaround: None.

HDFS Snapshot appears to fail if policy specifies duplicate directories.

In an HDFS snapshot policy, if a directory is specified more than once, the snapshot appears to fail with an error message on the Snapshot page. However, in the HDFS Browser, the snapshot is shown as having been created successfully.

Severity: Low

Workaround: Remove the duplicate directory specification from the policy.

Hive replication fails if "Force Overwrite" is not set.

The Force Overwrite option, if checked, forces overwriting data in the target metastore if there are incompatible changes detected. For example, if the target metastore was modified and a new partition was added to a table, this option would force deletion of that partition, overwriting the table with the version found on the source. If the Force Overwrite option is not set, recurring replications may fail.

Severity: Med

Workaround: Set the Force Overwrite option.