The following sections describe fixed issues in each Cloudera Manager 5 release.
Fixed Issues in Cloudera Manager 5.0.2
Cloudera Manager Impala query monitoring does not work with Impala 1.3.1
Impala 1.3.1 contains changes to the runtime profile format that break the Cloudera Manager Query Monitoring feature. This leads to exceptions in the Cloudera Manager Service Monitor logs, and Impala queries no longer appear in the Cloudera Manager UI or API. The issue affects Cloudera Manager 5.0 and 4.6 - 4.8.2.
Workaround: None. To avoid the Service Monitor exceptions, turn off the Cloudera Manager Query Monitoring feature by setting the Query Monitoring Period to 0 seconds. Note that the Impala Daemons must be restarted after changing this setting, and the setting must be restored once the fix is deployed to turn query monitoring back on. Impala queries will then appear again in Cloudera Manager's Impala query monitoring feature. Fixed in Cloudera Manager 5.0.2.
Fixed Issues in Cloudera Manager 5.0.1
Sensitive configuration values exposed in Cloudera Manager
Certain configuration values that are stored in Cloudera Manager are considered sensitive, such as database passwords. These configuration values should be inaccessible to non-administrator users, and this is enforced in the Cloudera Manager Administration Console. However, these configuration values are not redacted when they are read through the API, possibly making them accessible to users who should not have such access.
Gateway role configurations not respected when deploying client configurations
Gateway configurations set for gateway role groups other than the default one or at the role level were not being respected.
Documentation reflects requirement to enable at least Level 1 encryption before enabling Kerberos authentication
The Cloudera Manager manual Configuring Hadoop Security with Cloudera Manager now indicates that before enabling Kerberos authentication you should first enable at least Level 1 encryption.
HDFS NFS gateway does not work on all Cloudera-supported platforms
The NFS gateway cannot be started on some Cloudera-supported platforms.
Workaround: None. Fixed in Cloudera Manager 5.0.1.
Replace YARN_HOME with HADOOP_YARN_HOME during upgrade
If yarn.application.classpath was set to a non-default value on a CDH 4 cluster, and that cluster is upgraded to CDH 5, the classpath is not updated to reflect that $YARN_HOME was replaced with $HADOOP_YARN_HOME. This will cause YARN jobs to fail.
Workaround: Reset yarn.application.classpath to the default, then re-apply your classpath customizations if needed. Fixed in Cloudera Manager 5.0.1.
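The substitution described above can be sketched as follows. This is illustrative only; the sample classpath value is a hypothetical placeholder, and your custom value will differ.

```shell
# Illustrative only: the variable rename the upgrade should have applied.
# Replace every $YARN_HOME reference with $HADOOP_YARN_HOME before
# re-applying your customizations in Cloudera Manager.
old_cp='$HADOOP_CONF_DIR,$YARN_HOME/*,$YARN_HOME/lib/*'
new_cp=$(printf '%s' "$old_cp" | sed 's/\$YARN_HOME/\$HADOOP_YARN_HOME/g')
echo "$new_cp"
```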
Insufficient password hashing in Cloudera Manager
In versions of Cloudera Manager earlier than 4.8.3 and earlier than 5.0.1, user passwords are hashed only once. Passwords should be hashed multiple times to increase the cost of dictionary-based attacks, in which an attacker tries many candidate passwords to find a match. The issue affects only user accounts stored in the Cloudera Manager database. User accounts that are managed externally (for example, with LDAP or Active Directory) are not affected.
In addition, because of this issue, Cloudera Manager 4.8.3 cannot be upgraded to Cloudera Manager 5.0.0. Cloudera Manager 4.8.3 must be upgraded to 5.0.1 or later.
Workaround: Upgrade to Cloudera Manager 5.0.1.
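The general idea behind the fix, password stretching, can be sketched as follows. This is illustrative only; Cloudera Manager's actual hashing scheme is not documented here. Repeating the hash N times multiplies the work an attacker must do per dictionary guess by roughly N.

```shell
# Illustrative sketch of password stretching (iterated hashing).
# A single hash round is cheap for an attacker to compute; iterating
# the hash 1000 times makes each candidate guess 1000x more expensive.
hash_once() { printf '%s' "$1" | sha256sum | awk '{print $1}'; }
single=$(hash_once 'example-password')
h="$single"
for i in $(seq 1 1000); do
  h=$(hash_once "$h")
done
echo "$h"
```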
Upgrade to Cloudera Manager 5.0.0 from SLES older than Service Pack 3 with PostgreSQL older than 8.4 fails
Upgrading to Cloudera Manager 5.0.0 from SUSE Linux Enterprise Server (SLES) older than Service Pack 3 will fail if the embedded PostgreSQL database is in use and the installed version of PostgreSQL is less than 8.4.
Workaround: Either migrate away from the embedded PostgreSQL database (use MySQL or Oracle) or upgrade PostgreSQL to 8.4 or greater. Fixed in Cloudera Manager 5.0.1.
MR1 to MR2 import fails on a secure cluster
When running the MR1 to MR2 import on a secure cluster, YARN jobs will fail to find container-executor.cfg.
Workaround: Restart YARN after the import. Fixed in Cloudera Manager 5.0.1.
After upgrade from CDH 4 to CDH 5, Oozie is missing workflow extension schemas
After an upgrade from CDH 4 to CDH 5, Oozie does not pick up the new workflow extension schemas automatically. Users need to update oozie.service.SchemaService.wf.ext.schemas manually to add the schemas introduced in CDH 5: shell-action-0.3.xsd, sqoop-action-0.4.xsd, distcp-action-0.2.xsd, oozie-sla-0.1.xsd, and oozie-sla-0.2.xsd. Note: Existing jobs are not affected by this bug; only new workflows that require the new schemas are.
Workaround: Add the new workflow extension schemas to Oozie manually by editing oozie.service.SchemaService.wf.ext.schemas. Fixed in Cloudera Manager 5.0.1.
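Composing the updated property value can be sketched as follows. The "existing" value below is a hypothetical placeholder; keep whatever your oozie.service.SchemaService.wf.ext.schemas property already contains and append the schemas named above.

```shell
# Illustrative only: building the new value to paste into
# oozie.service.SchemaService.wf.ext.schemas.
existing='hive-action-0.2.xsd,email-action-0.1.xsd'   # hypothetical current value
added='shell-action-0.3.xsd,sqoop-action-0.4.xsd,distcp-action-0.2.xsd,oozie-sla-0.1.xsd,oozie-sla-0.2.xsd'
combined="${existing},${added}"
echo "$combined"
```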
Fixed Issues in Cloudera Manager 5.0.0
Cannot Restore a Snapshot of a deleted HBase Table
If you take a snapshot of an HBase table, and then delete that table in HBase, you will not be able to restore the snapshot.
Workaround: Use the "Restore As" command to recreate the table in HBase. Fixed in Cloudera Manager 5.0.0.
Stop dependent HBase services before enabling HDFS Automatic Failover.
When enabling HDFS Automatic Failover, you need to first stop any dependent HBase services. The Automatic Failover configuration workflow restarts both NameNodes, which could cause HBase to become unavailable.
New schema extensions have been introduced for Oozie in CDH 4.1
In CDH 4.1, Oozie introduced new versions for Hive, Sqoop and workflow schema. To use them, you must add the new schema extensions to the Oozie SchemaService Workflow Extension Schemas configuration property in Cloudera Manager.
Workaround: In Cloudera Manager, do the following:
- Go to the CDH 4 Oozie service page.
- Go to the Configuration tab, View and Edit.
- Search for "Oozie Schema". This should show the Oozie SchemaService Workflow Extension Schemas property.
- Add the following to the Oozie SchemaService Workflow Extension Schemas property:
shell-action-0.2.xsd hive-action-0.3.xsd sqoop-action-0.3.xsd
- Save these changes.
YARN Resource Scheduler uses FairScheduler rather than FIFO.
Cloudera Manager 5.0.0 sets the default YARN Resource Scheduler to FairScheduler. If a cluster was previously running YARN with the FIFO scheduler, it is changed to FairScheduler the next time YARN restarts. The FairScheduler is supported only with CDH 4.2.1 and later; older clusters may hit failures and need to manually change the scheduler to FIFO or CapacityScheduler:
- Go to the YARN service Configuration page.
- Search for "scheduler.class".
- Click in the Value field and select the scheduler you want to use.
- Save your changes and restart YARN to update your configuration.
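The steps above can be checked in a deployed client configuration. The sketch below shows what the setting looks like in a rendered yarn-site.xml, using a sample file; the FairScheduler class name shown is the standard Hadoop implementation (the FIFO equivalent is org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler).

```shell
# Sketch: what the scheduler setting looks like in yarn-site.xml.
# A sample file is created here for illustration; on a real cluster,
# inspect the rendered config (for example, under /etc/hadoop/conf).
cat > /tmp/yarn-site-example.xml <<'EOF'
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fair.FairScheduler</value>
</property>
EOF
grep -c 'FairScheduler' /tmp/yarn-site-example.xml
```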
Resource Pools Summary is incorrect if time range is too large.
The Resource Pools Summary does not show correct information if the Time Range selector is set to show 6 hours or more.
When running the MR1 to MR2 import on a secure cluster, YARN jobs will fail to find container-executor.cfg
Workaround: Restart YARN after the import steps finish. This causes the file to be created under the YARN configuration path, and the jobs now work.
When upgrading to Cloudera Manager 5.0.0, the "Dynamic Resource Pools" page is not accessible
When upgrading to Cloudera Manager 5.0.0, users will not be able to directly access the "Dynamic Resource Pools" page. Instead, they will be presented with a dialog saying that the Fair Scheduler XML Advanced Configuration Snippet is set.
Workaround:
- Go to the YARN service.
- Go to the Configuration tab.
- Copy the value of the Fair Scheduler XML Advanced Configuration Snippet into a file.
- Clear the value of Fair Scheduler XML Advanced Configuration Snippet.
- Recreate the desired Fair Scheduler allocations in the Dynamic Resource Pools page, using the saved file for reference.
New Cloudera Enterprise licensing is not reflected in the wizard and license page
The AWS Cloud wizard fails to install Spark due to missing roles
Workaround: Do one of the following:
- Use the traditional install wizard.
- Open a new window, click the Spark service, click the Instances tab, click Add, and add all required roles to Spark. Once the roles are added successfully, click the Retry button on the First Run page in the wizard.
Spark on YARN requires manual configuration
Spark on YARN requires the following manual configuration to work correctly: modify the YARN Application Classpath by adding /etc/hadoop/conf, making it the very first entry.
Workaround: Add /etc/hadoop/conf as the first entry in the YARN Application classpath.
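The ordering requirement can be sketched as follows. This is illustrative only; the tail of the classpath shown is a hypothetical placeholder, and the real value is edited through the YARN Application Classpath property in Cloudera Manager.

```shell
# Illustrative only: /etc/hadoop/conf must be the very first entry
# in the YARN Application Classpath for Spark on YARN to work.
rest='$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/*,$HADOOP_YARN_HOME/*'   # hypothetical tail
yarn_app_classpath="/etc/hadoop/conf,${rest}"
echo "$yarn_app_classpath"
```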
Monitoring works with Solr and Sentry only after configuration updates
Cloudera Manager monitoring does not work out of the box with Solr and Sentry on Cloudera Manager 5. The Solr service is in Bad health, and all Solr Servers have a failing "Solr Server API Liveness" health check.
Workaround: Complete the configuration steps below:
- Create "HTTP" user and group on all machines in the cluster (with useradd 'HTTP' on RHEL-type systems).
- The instructions that follow assume there is no existing Solr Sentry policy file in use. In that case, first create the policy file under /tmp and then copy it to the location in HDFS that the Solr Servers check. If a Solr Sentry policy file is already in use, modify it instead to add the [groups] and [roles] entries for 'HTTP' shown below. Create a file (for example, /tmp/cm-authz-solr-sentry-policy.ini) with the following contents:
[groups]
HTTP = HTTP
[roles]
HTTP = collection = admin->action=query
- Copy this file to the location for the "Sentry Global Policy File" for Solr. The associated config name for this location is sentry.solr.provider.resource, and you can see the current value by navigating to the Sentry sub-category in the Service Wide configuration editing workflow in the Cloudera Manager UI. The default value for this entry is /user/solr/sentry/sentry-provider.ini. This refers to a path in HDFS.
- Check whether the parent directories already exist in HDFS:
sudo -u hdfs hadoop fs -ls /user
- You may need to create the appropriate parent directories if they are not present. For example:
sudo -u hdfs hadoop fs -mkdir /user/solr/sentry
- After ensuring the parent directory is present, copy the file created in step 2 to this location, as follows:
sudo -u hdfs hadoop fs -put /tmp/cm-authz-solr-sentry-policy.ini /user/solr/sentry/sentry-provider.ini
- Ensure that this file is owned/readable by the solr user (this is what the Solr Server runs as):
sudo -u hdfs hadoop fs -chown solr /user/solr/sentry/sentry-provider.ini
- Restart the Solr service. If both Kerberos and Sentry are being enabled for Solr, the MGMT services also need to be restarted. The Solr Server liveness health checks should clear up once SMON has had a chance to contact the servers and retrieve metrics.
Out-of-memory errors may occur when using the Reports Manager
Out-of-memory errors may occur when using the Cloudera Manager Reports Manager.
Workaround: Set the value of the "Java Heap Size of Reports Manager" property to at least the size of the HDFS filesystem image (fsimage) and restart the Reports Manager.
Applying license key using Internet Explorer 9 and Safari fails
Cloudera Manager is designed to work with IE 9 and above, and with Safari. However, the file upload widget used to upload a license does not currently work with IE 9 or Safari, so installing an enterprise license does not work in those browsers.
Workaround: Use another supported browser.
Fixed Issues in Cloudera Manager 5.0.0 Beta 2
The HDFS Canary Test is disabled for secured CDH 5 services.
Due to a bug in Hadoop's handling of multiple RPC clients with distinct configurations within a single process when Kerberos security is enabled, Cloudera Manager disables the HDFS canary test when security is enabled, to prevent interference with Cloudera Manager's MapReduce monitoring functionality.
Not all monitoring configurations are migrated from MR1 to MR2.
When MapReduce v1 configurations are imported for use by YARN (MR2), not all of the monitoring configuration values are currently migrated. Users may need to reconfigure custom values for properties such as thresholds.
Workaround: Manually reconfigure any missing property values.
"Access Denied" may appear for some features after adding a license or starting a trial.
After starting a 60-day trial or installing a license for Enterprise Edition, you may see an "access denied" message when attempting to access certain Enterprise Edition-only features such as the Reports Manager. You need to log out of the Admin Console and log back in to access these features.
Workaround: Log out of the Admin Console and log in again.
Hue must set impersonation on when using Impala with impersonation.
When using Impala with impersonation, the impersonation_enabled flag must be present and configured in the hue.ini file. If impersonation is enabled in Impala (that is, Impala is using Sentry), this flag must be set to true. If Impala is not using impersonation, it should be set to false (the default).
Workaround: Do the following:
- Go to the Hue Service Configuration Advanced Configuration Snippet for hue_safety_valve.ini, under the Hue service Configuration settings, Service-Wide > Advanced category.
- Add the following, then uncomment the impersonation_enabled setting and set it to True or False as appropriate:
#################################################################
# Settings to configure Impala
#################################################################
[impala]
....
# Turn on/off impersonation mechanism when talking to Impala
## impersonation_enabled=False
Cloudera Manager Server may fail to start when upgrading using a PostgreSQL database.
Upgrades to Cloudera Manager 5.0.0 using a PostgreSQL database may fail with an error such as:
ERROR [main:dbutil.JavaRunner@57] Exception while executing com.cloudera.cmf.model.migration.MigrateConfigRevisions java.lang.RuntimeException: java.sql.SQLException: Batch entry <xxx> insert into REVISIONS (REVISION_ID, OPTIMISTIC_LOCK_VERSION, USER_ID, TIMESTAMP, MESSAGE) values (...) was aborted. Call getNextException to see the cause.
Workaround: Run the following statement against the Cloudera Manager database:
alter table REVISIONS alter column MESSAGE type varchar(1048576);
After that, the Cloudera Manager Server should start up normally.
Fixed Issues in Cloudera Manager 5.0.0 Beta 1
After an upgrade from Cloudera Manager 4.6.3 to 4.7, Impala does not start.
After an upgrade from Cloudera Manager 4.6.3 to 4.7 when Navigator is used, Impala will fail to start because the Audit Log Directory property has not been set by the upgrade procedure.
Workaround: Manually set the property to /var/log/impalad/audit. See the Service Auditing Properties section of the Cloudera Navigator Installation and User Guide for more information.