This is the documentation for Cloudera 5.4.x. Documentation for other versions is available at Cloudera Documentation.

Issues Fixed in Cloudera Manager 5

Issues Fixed in Cloudera Manager 5.4.3

Improved handling of Impala query coordinator node metrics

For Impala queries that returned very few rows, Cloudera Manager could fail to report information such as HDFS I/O metrics on the Impala Query Monitoring and Query Detail pages. The discrepancy was typically small because those queries did very little work.

Performance issues when changing configurations on HDFS

Fixed a performance issue where HDFS configuration pages responded slowly.

Typo in Cloudera Manager metrics reference

The word “Concerning” was misspelled in many metrics reference pages.

Issues with Navigator field audit_log_max_file_size

The log4j appender changed from RollingFileAppender to RollingFileWithoutDeleteAppender.

The Isilon client configuration core-site.xml file does not contain proxy users

The parameters are available in the Cloudera Manager Admin Console, but the configurations are not emitted in the core-site.xml file.

Solr gateway role should not have a log4j.properties advanced configuration snippet

The Solr gateway role does not have a log4j.properties file.

The Cloudera Manager Agent force_start's hard stop commands did not set all invariants

This resulted in a NullPointerException (NPE) being reported in the Cloudera Manager logs when accessing active and recent command operations.

Configuration staleness icons appear to be enabled for users in read-only role

When moused over, the icons change to a hand indicating that they are active. However, users in the read-only role cannot act on changed configurations.

Setting yarn.resourcemanager.am.max-retries throws error

Observed when setting the Maximum Number of Attempts for MapReduce Jobs and then setting ApplicationMaster Maximum Attempts, which also sets yarn.resourcemanager.am.max-retries.

Cloudera Manager reports the wrong value for Impala bytes read from cache

Instead of the cached bytes value, Cloudera Manager reported the short-circuit bytes value.

Fixed cross-site scripting vulnerabilities

A variety of possible cross-site scripting vulnerabilities have been fixed.

Location of Number of rows drop-down changed

On pages where multiple rows display, the drop-down menu where users select the number of rows to display on a page now appears at the bottom of all lists.

Minimum allowed value change for YARN property

The Max Shuffle Connections property now allows a value of 0, which indicates no limit on the number of connections.

Upgrade error

A bug was fixed that prevented upgrades from CDH 4.7.1 to CDH 5.4.3.

Change to Parcels page

On the Parcels page, the first cluster in the list is now automatically selected by default.

All Password Input Fields do not allow autocomplete

All password input fields in Cloudera Manager now disable autocomplete.

TLS Keystore Configuration Error

It is no longer possible to delete the values of the Path to TLS Keystore File and Keystore Password properties and save them while the Use TLS Encryption for Admin Console property is enabled.

Host configuration properties and Agent restart messages

Some host configuration properties no longer incorrectly state that an Agent restart is required.

More detailed error messages for failed role migration

If there is a failure validating the NameNode or JournalNode data directories while migrating roles, Cloudera Manager now displays detailed error information, including error codes.

New property to configure Oozie shared library upload timeout

To prevent timeouts due to slow disks or networks, a new Oozie property, Oozie Upload ShareLib Command Timeout, has been added to set the timeout.

New Cluster-Wide Configuration Pages

The following new Cluster-Wide configuration pages have been added:
  • Databases
  • Local Data Directories
  • Local Data Files
  • Navigator Settings
  • Service Dependencies

To access these pages in Cloudera Manager, select Cluster > Cluster Name > Configuration.

Naming of Health Tests

The names of some Health Tests have changed to use consistent capitalization.

Impala Monitoring Queries for Per-node peak memory

Monitoring queries that report Impala per-node peak memory returned incorrect values when the actual peak memory was zero.

Enable Hive on Spark Property

The description of the Enable Hive on Spark property has been updated to remind the user that the Enable Spark on YARN property must also be selected.

Role Trigger property in Flume

Setting a value for the Flume Role Triggers property no longer causes validation warnings.

Restart of Service Monitor leaves files that can fill the disk

Restarts of the Service Monitor no longer leave extraneous copies of files that unnecessarily take up disk space.

HiveServer 2 properties omit Java options

Setting any of the following properties no longer causes Java options to be omitted:
  • Allow URIs in Database Policy File
  • HiveServer2 TLS/SSL Certificate Trust Store File
  • HiveServer2 TLS/SSL Certificate Trust Store Password

CDH Parcel distribution reports HTTP 503 errors

Cloudera Manager no longer displays HTTP 503 errors during distribution of the CDH parcel to a large cluster.

Diagnostic bundle reports incorrect status for SELinux

Diagnostic bundles sometimes reported SELinux as disabled when it was actually enabled. The bundle now reports the correct status.

Hue configuration warnings do not link to correct page

On the Cloudera Manager page that displays Hue configuration issues, the links now take the user to the correct page where the user can correct the configuration.

Date display in Cloudera Manager log viewer

The month and date have been added before the time value in logs displayed in Cloudera Manager.

Disabling Hive Metastore Canary Test

When you disable the Hive Metastore health test by deselecting the Hive Metastore Canary Health property, the Hive Canary is now also disabled.

Agent failure when TLS 1.0 is disabled

If TLS 1.0 is disabled, the Agent now tries to negotiate the connection using TLS 1.1 or TLS 1.2.

Slowness when displaying details of a stale configuration

The details page now displays more quickly when a user clicks on the Stale Configuration icon.

Slowness observed when accessing replication page in Cloudera Manager

When you accessed the replication page in Cloudera Manager, the page responded slowly due to a large number of replication history records. The number of displayed historical records has been changed from 100 to 20.

Log Searches for Cloudera Manager Server

Searching the Cloudera Manager Server logs now works as expected.

Failed TLS Configuration and Cloudera Manager Restart

If the TLS configuration has errors, Cloudera Manager now falls back to non-TLS operation when restarting.

New headers added

New headers have been added to Cloudera Manager HTTP responses to protect against vulnerabilities.

Hive Logging property restored

The Enable Explain Logging (hive.log.explain.output) property was removed in an earlier release and is now included in the configurations.

Hive Metastore Update NameNodes Command

A 150 second timeout was removed from the Update Hive Metastore NameNodes command to prevent timeouts on deployments that use Hive extensively.

Kafka Parcel Installation

Cloudera Manager now correctly detects the Kafka version for parcel installation.

Agent restart failure

In a condition where an Agent restart was required due to a Hive configuration change and a subsequent disk failure, the Agent now restarts as expected.

Error message wording

Some Cloudera Manager error messages referred to Cloudera Manager as “CM”. These messages now use the full name “Cloudera Manager”.

Oozie metrics failures

Retrieval of Oozie metrics sometimes failed due to timeout issues; these issues are now resolved.

NameNode Role Migration Failures

When a NameNode role migration fails due to the destination role data directories being non-empty or having incorrect permissions, you no longer need to complete the migration manually. An error message displays and you can now correct the problem and re-run the command.

AWS S3 HBase configuration property renamed to Amazon S3

Several configuration properties for HBase have been renamed from AWS S3 to Amazon S3, in order to use the correct product name.

NodeManager Host Resources page display for the NodeManager Recovery Directory

The NodeManager Recovery Directory now displays on the NodeManager host resources page.

Host Inspector page now includes link to Show Inspector Results

The Host Inspector page now displays a link to a page that displays detailed results.

Initialization Script Improvements

The Cloudera Manager Agent initialization script now checks correctly for running processes.

Default Value for Hue parameter changed

The default value for the Hue cherrypy_server_threads property has been changed from 10 to 50.

Express Installation Wizard Package Installation Page CDH Version

The Express Installation Wizard Package installation page no longer allows the user to proceed without selecting a CDH version.

Host Component page display

The Host Component page now displays the package version for the KMS Trustee Key Provider.

Installation Wizard hangs during package installation

The Installation Wizard hung during a CDH package installation and the status displayed as “Acquiring Installation Lock”. A bug was fixed where the Agent incorrectly failed to release a lock until the Agent was restarted.

Minimum allocation violation not caught by Cloudera Manager

NodeManager did not start because Cloudera Manager did not correctly validate memory and CPU settings against their minimum values.

Impala core dump directories are now configurable

Three new properties that specify the location of core dump directories have been added to the Impala configurations:
  • Catalog Server Core Dump Directory
  • Impala Daemon Core Dump Directory
  • StateStore Core Dump Directory

Typo in Sqoop DB path suffix (SqoopParams.DERBY_SUFFIX)

Sqoop 2 appears to lose data when upgrading to CDH 5.4. This is due to Cloudera Manager erroneously configuring the Derby path with "repositoy" instead of "repository". The correct path name is now used.

Agent fails when retrieving log files with very long messages

When searching or retrieving large log files using the Agent, the Agent no longer consumes nearly 100% CPU until it is restarted. This could also happen when the collect host statistics command was issued.

Automated Solr SSL configuration may fail silently

Cloudera Manager 5.4.1 offers simplified SSL configuration for Solr. This process uses a solrctl command to configure the urlScheme Solr cluster property. The solrctl command produces the same results as the Solr REST API call /solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=https. For example, the call might appear as: https://example.com:8983/solr/admin/collections?action=CLUSTERPROP&name=urlScheme&val=https

Cloudera Manager automatically executes this command during Solr service startup. If this command fails, the Solr service startup now reports an error.
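For reference, the Collections API call shown above can be composed as in this sketch (the host and port are placeholders for your Solr server):

```python
# Build the Solr Collections API request that sets the urlScheme
# cluster property to https, matching the call shown above.
from urllib.parse import urlencode

def cluster_prop_url(base, name, value):
    """Return the CLUSTERPROP URL for the given cluster property."""
    query = urlencode({"action": "CLUSTERPROP", "name": name, "val": value})
    return f"{base}/solr/admin/collections?{query}"

# Placeholder host:port; substitute your own Solr server address.
url = cluster_prop_url("https://example.com:8983", "urlScheme", "https")
print(url)
```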

Removing the default value of a property fails

For example, when you access the Automatically Downloaded Parcels property on the following page: Home > Administration > Settings and remove the default CDH value, the following error message displays: "Could not find config to delete with template name: parcel_autodownload_products". This error has been fixed.

Issues Fixed in Cloudera Manager 5.4.1

distcp default configuration memory settings overwrite MapReduce settings

Replication used for backup and disaster recovery did not correctly set the MapReduce Java options, and you could not configure them. In release 5.4.1, Cloudera Manager uses the MapReduce gateway configuration to determine the Java options for replication jobs. Replication job settings cannot be configured independently of the MapReduce gateway configuration. See Backup and Disaster Recovery replication does not set MapReduce Java options.

Oozie high availability plug-in is now configured by Cloudera Manager

In CDH 5.4.0, Oozie added a new HA plugin that allows all of the Oozie servers to synchronize their Job ID assignments and prevent collisions. Cloudera Manager 5.4.0 did not configure this new plugin; Cloudera Manager 5.4.1 now does so.

HDFS read throughput Impala query monitoring property is misleading

The hbase_bytes_read_per_second and hdfs_bytes_read_per_second Impala query properties have been renamed to hbase_scanner_average_bytes_read_per_second and hdfs_scanner_average_bytes_read_per_second to more accurately reflect that these properties return the average throughput of the query's HBase and HDFS scanner threads, respectively. The previous names and descriptions indicated that these properties were the query's total HBase and HDFS throughput, which was not accurate.

Enabling wildcarding in a secure environment causes NameNode to fail to start

In a secure cluster, if you use a wildcard for the NameNode's RPC or HTTP bind address, the NameNode fails to start. For example, dfs.namenode.http-address must be a real, routable address and port, not 0.0.0.0:port. In Cloudera Manager, the "Bind NameNode to Wildcard Address" property must not be enabled. This should affect you only if you are running a secure cluster and your NameNode needs to bind to multiple local addresses.

Bug: HDFS-4448

Severity: Medium

Workaround: Disable the "Bind NameNode to Wildcard Address" property found on the Configuration tab for the NameNode role group.
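The constraint described above can be sketched as a simple check (illustrative only, not Cloudera Manager's validation code):

```python
# In a secure cluster the NameNode HTTP address must be a real,
# routable host:port, not the wildcard 0.0.0.0.
def is_valid_secure_bind_address(address):
    """True if the address is a non-wildcard host:port pair."""
    host, _, port = address.rpartition(":")
    return bool(host) and host != "0.0.0.0" and port.isdigit()

assert not is_valid_secure_bind_address("0.0.0.0:50070")
assert is_valid_secure_bind_address("namenode01.example.com:50070")
```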

Support for adding Hue with high availability

The Express and Add Service wizards now allow users to define multiple Hue service roles. If Kerberos is enabled, a co-located KT Renewer role is automatically added for each Hue server row.

Parameter validation fails with more than one Hue role

When you add a second Hue role to a cluster, the error message "Failed parameter validation" displays.

Cross-site scripting vulnerabilities

Various cross-site scripting vulnerabilities were fixed.

Clicking the "Revert to default" icon stores the default value as a user-defined value in the new configuration pages

Cloudera Manager 5.4.1 fixes an issue in which saving an empty configuration value causes the value to be replaced by the default value. The empty value is now saved instead of the default value.

Spurious validation warning and missing validations when multiple Hue Server roles are present

When multiple Hue Server roles are created for a single Hue Service, Cloudera Manager displays a spurious validation warning for Hue with the label "Failed parameter validation." The Cloudera Manager Server log may also contain exception messages of the form:
2015-03-30 17:15:45,077 WARN ActionablesProvider-0:com.cloudera.cmf.service.ServiceModelValidatorImpl:
Parameter validation failed java.lang.IllegalArgumentException: There is more than one role with roletype: HUE_SERVER [...] {
These messages do not correspond to actual validation warnings and can be ignored. However, some validations normally performed are skipped when this spurious warning is generated, and should be done manually. Specifically, if Hue's authentication mechanism is set to LDAP, the following configuration should be validated:
  1. The Hue LDAP URL property must be set.
  2. For CDH 4.4 and lower, set one (but not both) of the following two Hue properties: NT Domain or LDAP Username Pattern.
  3. For CDH 4.5 and higher, if the Hue property Use Search Bind Authentication is selected, exactly one of the two Hue properties NT Domain and LDAP Username Pattern must be set, as described in step 2 above.
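The manual checks above can be sketched as a small helper (the parameter names and version comparison are illustrative, not Cloudera Manager API identifiers):

```python
# Sketch of the skipped Hue LDAP validations described above.
def validate_hue_ldap(ldap_url, nt_domain, ldap_username_pattern,
                      use_search_bind_authentication, cdh_version):
    """Return a list of validation errors for Hue LDAP settings."""
    errors = []
    if not ldap_url:
        errors.append("Hue LDAP URL must be set")
    # For CDH 4.4 and lower, or CDH 4.5+ with Use Search Bind
    # Authentication selected, exactly one of NT Domain and
    # LDAP Username Pattern must be set.
    exactly_one_required = (cdh_version <= (4, 4)
                            or use_search_bind_authentication)
    if exactly_one_required and bool(nt_domain) == bool(ldap_username_pattern):
        errors.append("Set exactly one of NT Domain or LDAP Username Pattern")
    return errors

assert validate_hue_ldap("ldap://ad.example.com", "EXAMPLE", None,
                         True, (4, 5)) == []
```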

Logging of command unavailable message improved

When a command is unavailable, the error messages are now more descriptive.

Client configuration logs no longer deleted by the Agent

If the Agent fails to deploy a new client configuration, the client log file is no longer deleted by the agent. The Agent saves the log file and appends new log entries to the saved log file.

HDFS role migration requires certain HDFS roles to be running

Before using the Migrate Roles wizard to migrate HDFS roles, you must ensure that the following HDFS roles are running as described:

  • A majority of the JournalNodes in the JournalNode quorum must be running. With a quorum size of three JournalNodes, for example, at least two JournalNodes must be running. The JournalNode on the source host need not be running, as long as a majority of all JournalNodes are running.
  • When migrating a NameNode and co-located Failover Controller, the other Failover Controller (that is, the one that is not on the source host) must be running. This is true whether or not a co-located JournalNode is being migrated as well, in addition to the NameNode and Failover Controller.
  • When migrating a JournalNode by itself, at least one NameNode / Failover Controller co-located pair must be running.
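The majority requirement in the first bullet can be expressed as a one-line check (a sketch, not Cloudera Manager code):

```python
# A strict majority of the JournalNode quorum must be running
# before migrating roles.
def journalnode_quorum_ok(running, quorum_size):
    """True if the running JournalNodes form a strict majority."""
    return running > quorum_size // 2

assert journalnode_quorum_ok(2, 3)      # 2 of 3 is a majority
assert not journalnode_quorum_ok(1, 3)  # 1 of 3 is not
```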

HDFS role migration requires automatic failover to be enabled

Migration of HDFS NameNode, JournalNode, and Failover Controller roles through the Migrate Roles wizard is only supported when HDFS automatic failover is enabled. Otherwise, it causes a state in which both NameNodes are in standby mode.

HDFS/Hive replication fails when replicating to target cluster that runs CDH 4 and has Kerberos enabled

Workaround: None.

Issues Fixed in Cloudera Manager 5.4.0

Proxy Configuration in Single User Mode is Fixed

In single user mode, all services use the same user to proxy other users in an insecure cluster; this is the user that runs all the CDH processes on the cluster. To restrict that user so that it can proxy other users from only certain hosts and only certain groups, configure the YARN Proxy User Hosts and YARN Proxy User Groups properties in the HDFS service. These settings supersede all other proxy user configurations in single user mode.

The Parcels page allows access to the patch release notes

Clicking the icon with an "i" in a blue circle next to a parcel shows the release notes.

Monitoring Fails on Impala Catalog Server with SSL Enabled

When enabling SSL for Impala web servers (webserver_certificate_file), Cloudera Manager doesn't emit use_ssl in the cloudera-monitor.properties file for the Catalog Server. Other services (Impala Daemon and StateStore) are configured correctly. This causes monitoring to fail for the Catalog Server even though it is working as expected.

Issues Fixed in Cloudera Manager 5.3.3

hive.metastore.client.socket.timeout default value changed to 60

The default value of the hive.metastore.client.socket.timeout property has changed to 60 seconds.

SSL Enablement property name changes

The property hadoop.ssl.enabled is deprecated. Cloudera Manager has been updated to use either dfs.http.policy or yarn.http.policy properties instead.
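A sketch of the replacement: the deprecated boolean maps onto the newer policy properties (the HTTP_ONLY/HTTPS_ONLY values come from the Hadoop HTTP policy settings; this is illustrative, not Cloudera Manager's exact logic):

```python
# Translate the deprecated hadoop.ssl.enabled boolean into the
# policy value used by dfs.http.policy / yarn.http.policy.
def http_policy(ssl_enabled):
    """Return the HTTP policy value for the given SSL setting."""
    return "HTTPS_ONLY" if ssl_enabled else "HTTP_ONLY"

assert http_policy(True) == "HTTPS_ONLY"
assert http_policy(False) == "HTTP_ONLY"
```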

Changing the Service Monitor Client Config Overrides property requires restart

Cloudera Manager no longer requires you to restart your cluster after changing the Service Monitor Client Config Overrides property for a service.

Cluster name changed from specified name to "cluster" after upgrade

After updating to a new release, Cloudera Manager replaces the specified cluster name with cluster. Cloudera Manager now uses the correct cluster name.

Configuration without host_id in upgrade DDL causes upgrade problems

A client configuration row in the database DDL did not set host_id, causing upgrade problems. Cloudera Manager now catches this condition before upgrading.

hive.log.explain.output property is hidden

The property hive.log.explain.output is known to cause instability of Cloudera Manager Agents in some specific circumstances, especially when Hive queries generate extremely large EXPLAIN output. Therefore, the property has been hidden from the Cloudera Manager configuration screens. You can still configure the property through advanced configuration snippets.

Slow staleness calculation can lead to ZooKeeper data loss when new servers are added

In Cloudera Manager 5.x, starting new ZooKeeper Servers shortly after adding them can cause ZooKeeper data loss when the number of new servers exceeds the number of old servers.

Spark and Spark (standalone) services fail to start if you upgrade to CDH 5.2.x parcels from an older CDH package

Spark and Spark standalone services fail to start if you upgrade to CDH 5.2.x parcels from an older CDH package.

Workaround: After upgrading the rest of the services, uninstall the old CDH packages, and then start the Spark service.

Deploy client configuration across cluster after upgrade from Cloudera Manager 4.x to 5.3

Following a 4.x -> 5.3 upgrade, you must deploy client configuration across the entire cluster before deleting any gateway roles, any services, or any hosts. Otherwise the existing 4.x client configurations may be left registered and orphaned on the hosts where they were deployed, requiring you to manually intervene to delete them.

Oozie health bad when Oozie is HA, cluster is kerberized, and Cloudera Manager and CDH are upgraded

Oozie health will go bad if high availability is enabled in a kerberized cluster with Cloudera Manager 5.0 and CDH 5.0 and Cloudera Manager and CDH are then upgraded to 5.1 or higher.

Workaround: Disable Oozie HA and then re-enable HA again.

HDFS/Hive replication fails when replicating to target cluster that runs CDH 4.0 and has Kerberos enabled

Workaround: None.

Issues Fixed in Cloudera Manager 5.3.2

The Review Changes page sometimes hangs

The Review Changes page hangs due to the inability to handle the "File missing" scenario.

High volume of TGT events against AD server with "bad token" messages

A fix has been made to how Kerberos credential caching is handled by management services, reducing the number of Kerberos Ticket Granting Ticket (TGT) requests from the cluster to a KDC. Previously, this issue manifested as a high volume of "Bad Token" messages in KDC logging and caused unnecessary re-authentication by management services.

Accumulo missing kinit when running with Kerberos

Cloudera Manager was unable to run Accumulo when the hostname command did not return the FQDN of the host.

HiveServer2 leaks threads when using impersonation

For CDH 5.3 and higher, Cloudera Manager will configure HiveServer2 to use the HDFS cache even when impersonation is on. For earlier CDH, there were bugs with the cache when impersonation was in use, so it is still disabled.

Deploying client configurations fails if there are dead hosts present in the cluster

If there are hosts in the cluster where the Cloudera Manager agent heartbeat is not working, then deploying client configurations doesn't work. Starting with Cloudera Manager 5.3.2, such hosts are ignored while deploying client configurations. When the issues with the host are fixed, Cloudera Manager will show those hosts as having stale client configurations, at which point you can redeploy them.

Health test monitors free space available on the wrong filesystem

The Cloudera Manager Health Test to monitor free space available for the Cloudera Manager Agent's process directory monitors space on the wrong filesystem. It should monitor the tmpfs that the Cloudera Manager Agent creates, but instead monitors the Cloudera Manager Agent working directory.

Starting ZooKeeper Servers from Service or Instance page fails

Stopped ZooKeeper servers cannot be started from the Service or Instance page, but only from the Role page of the server using the start action for the role.

Flume Metrics page doesn't render agent metrics

Starting in Cloudera Manager 5.3, some or all Flume component data was missing from the Flume Metrics Details page.

Broken link to help pages on Chart Builder page

The help icon (question mark) on the Chart Builder page returns a 404 error.

Import MapReduce configurations to YARN now handles NodeManager vcores and memory

Running the wizard to import MapReduce configurations to YARN will now populate yarn.nodemanager.resource.cpu-vcores and yarn.nodemanager.resource.memory-mb correctly based on equivalent MapReduce configuration.

Issues Fixed in Cloudera Manager 5.3.1

Deploy client configuration across cluster after upgrade from Cloudera Manager 4.x to 5.3

Following a 4.x -> 5.3 upgrade, you must deploy client configuration across the entire cluster before deleting any gateway roles, any services, or any hosts. Otherwise the existing 4.x client configurations may be left registered and orphaned on the hosts where they were deployed, requiring you to manually intervene to delete them.

Oozie health bad when Oozie is HA, cluster is kerberized, and Cloudera Manager and CDH are upgraded

Oozie health will go bad if high availability is enabled in a kerberized cluster with Cloudera Manager 5.0 and CDH 5.0 and Cloudera Manager and CDH are then upgraded to 5.1 or higher.

Workaround: Disable Oozie HA and then re-enable HA again.

Deploy client configuration no longer fails after 60 seconds

When configuring a gateway role on a host that already contains a role of the same type—for example, an HDFS gateway on a DataNode—the deploy client configuration command no longer fails after 60 seconds.

service cloudera-scm-server force_start now works

After deleting services, the Cloudera Manager Server log no longer contains foreign key constraint failure exceptions

When using Isilon, Cloudera Manager now sets mapred_submit_replication correctly

When EMC Isilon storage is used, there are no DataNodes, so the usual requirement that mapred_submit_replication be less than or equal to the number of DataNodes cannot be satisfied. Cloudera Manager now does the following when setting mapred_submit_replication:

  • If using HDFS, sets to a minimum of 1 and issues a warning when greater than the number of DataNodes
  • If using Isilon, sets to 1 and does not check against the number of DataNodes
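The rule in the bullets above can be sketched as follows (illustrative, not Cloudera Manager's implementation):

```python
# Choose mapred_submit_replication per the HDFS vs. Isilon rules.
def submit_replication(configured, num_datanodes, using_isilon):
    """Return (value, warning) for mapred_submit_replication."""
    if using_isilon:
        # Isilon has no DataNodes, so pin the value to 1 with no check.
        return 1, None
    value = max(1, configured)  # enforce a minimum of 1 on HDFS
    warning = None
    if value > num_datanodes:
        warning = "mapred_submit_replication exceeds the number of DataNodes"
    return value, warning

assert submit_replication(10, 3, using_isilon=True) == (1, None)
```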

The Cloudera Manager Agent now sets the file descriptor ulimit correctly on Ubuntu

During upgrade, bootstrapping the standby NameNode step no longer fails with standby NameNode connection refused when connecting to active NameNode

Deploy krb5.conf now also deploys it on hosts with Cloudera Management Service roles

Cloudera Manager allows upgrades to unknown CDH maintenance releases

Cloudera Manager 5.3.0 supports any CDH release less than or equal to 5.3, even if the release did not exist when Cloudera Manager 5.3.0 was released. For packages, you cannot currently use the upgrade wizard to upgrade to such a release. This release adds a custom CDH field for the package case, where you can type in a version that did not exist at the time of the Cloudera Manager release.

impalad memory limit units error in EnableLlamaRMCommand

The EnableLlamaRMCommand sets the value of the impalad memory limit to equal the NM container memory value. But the latter is in MB, and the former is in bytes. Previously, the command did not perform the conversion; this has been fixed.
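The missing unit conversion described above amounts to the following (an illustrative sketch):

```python
# NodeManager container memory is configured in MB, while the impalad
# memory limit is in bytes; the fix converts between the two.
MB = 1024 * 1024

def impalad_mem_limit_bytes(nm_container_memory_mb):
    """Convert the NodeManager container memory (MB) to bytes."""
    return nm_container_memory_mb * MB

assert impalad_mem_limit_bytes(8192) == 8192 * 1024 * 1024
```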

Running MapReduce v2 jobs are now visible using the Application Master view

In the Application view, selecting Application Master for a MRv2 job previously resulted in no action.

Deleting services no longer results in foreign key constraint exceptions

The Cloudera Manager Server log previously showed several foreign key constraint exceptions that were associated with deleted services. This has been fixed.

HiveServer2 keystore and LDAP group mapping passwords are no longer exposed in client configuration files

The HiveServer2 keystore password and LDAP group mapping passwords were emitted into the client configuration files. This exposed the passwords in plain text in a world-readable file. This has been fixed.

A cross-site scripting vulnerability in Cloudera Management Service web UIs fixed

The high availability wizard now sets the HDFS dependency on ZooKeeper

Workaround: Before enabling high availability, do the following:
  1. Create and start a ZooKeeper service if one does not exist.
  2. Go to the HDFS service.
  3. Click the Configuration tab.
  4. Select HDFS Service-Wide.
  5. Select Category > Main.
  6. Locate the ZooKeeper Service property or search for it by typing its name in the Search box. Select the ZooKeeper service you created.

    If more than one role group applies to this configuration, edit the value for the appropriate role group. See Modifying Configuration Properties.

  7. Click Save Changes to commit the changes.

BDR no longer assumes superuser is common if clusters have the same realm

If the source and destination clusters are in the same Kerberos realm, Cloudera Manager assumed that the superuser of the destination cluster was also the superuser on the source cluster. However, HDFS can be configured so that this is not the case.

Issues Fixed in Cloudera Manager 5.3.0

Setting the default umask in HDFS fails in new configuration layout

Setting the default umask in the HDFS configuration section to 002 in the new configuration layout displays an error: "Could not parse: Default Umask : Could not parse parameter 'dfs_umaskmode'. Was expecting an octal value with a leading 0. Input: 2", preventing the change from being submitted.

Workaround: Submit the change using the classic configuration layout.

Spark and Spark (standalone) services fail to start if you upgrade to CDH 5.2.x parcels from an older CDH package

Spark and Spark standalone services fail to start if you upgrade to CDH 5.2.x parcels from an older CDH package.

Workaround: After upgrading the rest of the services, uninstall the old CDH packages, and then start the Spark service.

Fixed MapReduce Usage by User reports when using an Oracle database backend

Enabling Integrated Resource Management for Impala sets Impala Daemon Memory Limit Incorrectly

The Enable Integrated Resource Management command for Impala (available from the Actions pull-down menu on the Impala service page) sets the Impala Daemon Memory Limit to an unusably small value. This can cause Impala queries to fail.

Workaround 1: Upgrade to Cloudera Manager 5.3.

Workaround 2:
  1. Run the Enable Integrated Resource Management wizard up to the Restart Cluster step. Do not click Restart Now.
  2. Click on the leave this wizard link to exit the wizard without restarting the cluster.
  3. Go to the YARN service page. Click Configuration, expand the category NodeManager Default Group, and click Resource Management.
  4. Note the value of the Container Memory property.
  5. Go to the Impala service page and click Configuration. Type impala daemon memory limit into the search box.
  6. Set the value of the Impala Daemon Memory Limit property to the value noted in step 4 above.
  7. Restart the cluster.

Rolling restart and upgrade of Oozie fails if there is a single Oozie server

Rolling restart and upgrade of Oozie fails if there is only a single Oozie server. Cloudera Manager will show the error message "There is already a pending command on this role."

Workaround: If you have a single Oozie server, do a normal restart.

Allow "Started but crashed" processes to be restarted by a Start command

In Cloudera Manager 5.3, it is now possible to restart a crashed process with the Start command and not just the Restart command.

Add dependency from Agent to Daemons package to yum

In Cloudera Manager 5.3, an explicit dependency has been added from the Agent package to the Daemons package so that upgrading Cloudera Manager 5.2.0 or later to Cloudera Manager 5.3 causes the agent to be upgraded as well. Previously, the Cloudera Manager installer always installed both packages, but this is now enforced at the package dependency level as well.

Issues Fixed in Cloudera Manager 5.2.5

Slow staleness calculation can lead to ZooKeeper data loss when new servers are added

In Cloudera Manager 5, starting new ZooKeeper Servers shortly after adding them can cause ZooKeeper data loss when the number of new servers exceeds the number of old servers.

Permissions set incorrectly on YARN Keytab files

Permissions on YARN Keytab files for NodeManager were set incorrectly to allow read access to any user.
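The corrected behavior can be sketched as follows; the temporary file stands in for an illustrative yarn.keytab path, since a keytab should be owner-readable only (mode 600) and never world-readable.

```python
import os
import stat
import tempfile

# Create a stand-in keytab file and apply the restrictive mode the fix enforces.
fd, keytab = tempfile.mkstemp(suffix=".keytab")
os.close(fd)
os.chmod(keytab, 0o600)

mode = stat.S_IMODE(os.stat(keytab).st_mode)
world_readable = bool(mode & stat.S_IROTH)
print(oct(mode), world_readable)  # owner read/write only; not world-readable
os.unlink(keytab)
```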

Issues Fixed in Cloudera Manager 5.2.2

Impalad memory limit units error in EnableLlamaRMCommand has been fixed

The EnableLlamaRMCommand sets the value of the impalad memory limit to equal the NM container memory value. But the latter is in MB, and the former is in bytes. Previously, the command did not perform the conversion; this has been fixed.
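The unit conversion the fixed command now performs can be sketched as below; the container size is an illustrative value, not a CDH default.

```python
# NodeManager Container Memory is configured in MiB, while the Impala Daemon
# Memory Limit is stored in bytes, so the value must be scaled by 1024 * 1024.
container_memory_mib = 8192       # illustrative NM container memory
impala_mem_limit_bytes = container_memory_mib * 1024 * 1024
print(impala_mem_limit_bytes)     # 8589934592
```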

Fixed MapReduce Usage by User reports when using an Oracle database backend

HiveServer2 keystore and LDAP group mapping passwords are no longer exposed in client configuration files

The HiveServer2 keystore password and LDAP group mapping passwords were emitted into the client configuration files. This exposed the passwords in plain text in a world-readable file. This has been fixed.

Running MapReduce v2 jobs are now visible using the Application Master view

In the Application view, selecting Application Master for a MRv2 job previously resulted in no action.

Deleting services no longer results in foreign key constraint exceptions

The Cloudera Manager Server log previously showed several foreign key constraint exceptions that were associated with deleted services. This has been fixed.

Issues Fixed in Cloudera Manager 5.2.1

“POODLE” vulnerability on SSL/TLS enabled ports

The POODLE (Padding Oracle On Downgraded Legacy Encryption) attack takes advantage of a cryptographic flaw in the obsolete SSLv3 protocol, after first forcing the use of that protocol. The only solution is to disable SSLv3 entirely. This requires changes across a wide variety of components of CDH and Cloudera Manager in 5.2.0 and all earlier versions. Cloudera Manager 5.2.1 provides these changes for Cloudera Manager 5.2.0 deployments. All Cloudera Manager 5.2.0 users should upgrade to 5.2.1 as soon as possible. For more information, see the Cloudera Security Bulletin.

Can use the log4j advanced configuration snippet to override the default audit logging configuration even if not using Navigator

In Cloudera Manager 5.2.0 only, it was not possible to use the log4j advanced configuration snippet to override the default audit logging configuration when Navigator was not being used.

Cloudera Manager now collects metrics for CDH 5.0 DataNodes and NameNodes

Previously, a number of NameNode and DataNode charts showed no data, and a number of NameNode and DataNode health checks showed unknown results. Metric collection for CDH 5.1 roles was unaffected.


The Reports Manager and Event Server Thrift servers no longer crash on HTTP requests

HTTP queries against the Reports Manager and Event Server Thrift servers previously caused them to crash with an out-of-memory exception.

Replication commands now use the correct JAVA_HOME if an override has been provided for it

ZooKeeper connection leaks from HBase clients in Service Monitor have been fixed

When a parcel is activated, user home directories are now created with umask 022 instead of using the "useradd" default 077

Issues Fixed in Cloudera Manager 5.2.0

Bug in openssl-1.0.1e-15 disrupts SSL communication between Cloudera Manager Agents and CDH services

This issue was observed in SSL-enabled clusters running CentOS 6.4 and 6.5, where the Cloudera Manager Agent failed when trying to communicate with CDH services. See the upstream openssl bug report for details.

Workaround: Upgrade to openssl-1.0.1e-16.el6_5.7.x86_64.

Alternatives database points to client configurations of deleted service

Previously, if you created a service, deployed its client configurations, and then deleted that service, the client configurations remained in the alternatives database, possibly with a high priority, until cleaned up manually. Now, for a given "alternatives path" (for example, /etc/hadoop/conf), if there exist both "live" client configurations (those that would be pushed out by deploying client configurations for active services) and "orphaned" client configurations (those whose service has been deleted), the orphaned ones are removed from the alternatives database. In other words, to trigger cleanup of client configurations associated with a deleted service, you must create a service to replace it.

The YARN property ApplicationMaster Max Retries has no effect in CDH 5

The issue arises because yarn.resourcemanager.am.max-retries was replaced with yarn.resourcemanager.am.max-attempts.

Workaround:
  1. Add the following to ResourceManager Advanced Configuration Snippet for yarn-site.xml, replacing MAX_ATTEMPTS with the desired maximum number of attempts:
    <property>
    <name>yarn.resourcemanager.am.max-attempts</name><value>MAX_ATTEMPTS</value>
    </property>
  2. Restart the ResourceManager(s) to pick up the change.
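Before pasting the snippet into the safety valve, it can help to confirm it is well-formed XML; the sketch below does so with MAX_ATTEMPTS replaced by an illustrative value of 3.

```python
import xml.etree.ElementTree as ET

# Parse the advanced configuration snippet to confirm it is well-formed.
snippet = (
    "<property>"
    "<name>yarn.resourcemanager.am.max-attempts</name>"
    "<value>3</value>"   # illustrative MAX_ATTEMPTS value
    "</property>"
)
prop = ET.fromstring(snippet)
print(prop.find("name").text, prop.find("value").text)
```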

The Spark History Server does not start when Kerberos authentication is enabled.

The Spark History Server does not start when managed by a Cloudera Manager 5.1 instance when Kerberos authentication is enabled.

Workaround:
  1. Go to the Spark service.
  2. Expand the Service-Wide > Advanced category.
  3. Add the following configuration to the History Server Environment Advanced Configuration Snippet property:
    SPARK_HISTORY_OPTS=-Dspark.history.kerberos.enabled=true \
    -Dspark.history.kerberos.principal=principal \
    -Dspark.history.kerberos.keytab=keytab
where principal is the name of the Kerberos principal to use for the History Server, and keytab is the path to the principal's keytab file on the local filesystem of the host running the History Server.

Hive replication issue with TLS enabled

Hive replication will fail when the source Cloudera Manager instance has TLS enabled, even though the required certificates have been added to the target Cloudera Manager's trust store.

Workaround: Add the required Certificate Authority or self-signed certificates to the default Java trust store, which is typically a copy of the cacerts file named jssecacerts in the $JAVA_HOME/jre/lib/security/ path of your installed JDK. Use keytool to import your private CA certificates into the jssecacerts file.

The Spark Upload Jar command fails in a secure cluster

The Spark Upload Jar command fails in a secure cluster.

Workaround: To run Spark on YARN, manually upload the Spark assembly jar to HDFS /user/spark/share/lib. The Spark assembly jar is located on the local filesystem, typically in /usr/lib/spark/assembly/lib or /opt/cloudera/parcels/CDH/lib/spark/assembly/lib.

Clients of the JobHistory Server Admin Interface Require Advanced Configuration Snippet

Clients of the JobHistory server administrative interface, such as the mapred hsadmin tool, may fail to connect to the server when run on hosts other than the one where the JobHistory server is running.

Workaround: Add the following to both the MapReduce Client Advanced Configuration Snippet for mapred-site.xml and the Cluster-wide Advanced Configuration Snippet for core-site.xml, replacing JOBHISTORY_SERVER_HOST with the hostname of your JobHistory server:
<property>
<name>mapreduce.history.admin.address</name>
<value>JOBHISTORY_SERVER_HOST:10033</value>
</property>

Fixed Issues in Cloudera Manager 5.1.5

Slow staleness calculation can lead to ZooKeeper data loss when new servers are added

In Cloudera Manager 5, starting new ZooKeeper Servers shortly after adding them can cause ZooKeeper data loss when the number of new servers exceeds the number of old servers.

Permissions set incorrectly on YARN Keytab files

Permissions on YARN Keytab files for NodeManager were set incorrectly to allow read access to any user.

Fixed Issues in Cloudera Manager 5.1.4

“POODLE” vulnerability on SSL/TLS enabled ports

The POODLE (Padding Oracle On Downgraded Legacy Encryption) attack takes advantage of a cryptographic flaw in the obsolete SSLv3 protocol, after first forcing the use of that protocol. The only solution is to disable SSLv3 entirely. This requires changes across a wide variety of components of CDH and Cloudera Manager. Cloudera Manager 5.1.4 provides these changes for Cloudera Manager 5.1.x deployments. All Cloudera Manager 5.1.x users should upgrade to 5.1.4 as soon as possible. For more information, see the Cloudera Security Bulletin.

Issues Fixed in Cloudera Manager 5.1.3

Improved speed and heap usage when deleting hosts on cluster with long history

Speed and heap usage have been improved when deleting hosts on clusters that have been running for a long time.

When there are multiple clusters, each cluster's topology files and validation for legal topology is limited to hosts in that cluster

When there are multiple clusters, each cluster's topology files and validation for legal topology is limited to hosts in that cluster. Most commands will now fail up front if the cluster's topology is invalid.

The size of the statement cache has been reduced for Oracle databases

For users of Oracle databases, the size of the statement cache has been reduced to help with memory consumption.

Improvements to memory usage of "cluster diagnostics collection" for large clusters.

Memory usage of "cluster diagnostics collection" has been improved for large clusters.

Issues Fixed in Cloudera Manager 5.1.2

If a NodeManager that is used as ApplicationMaster is decommissioned, YARN jobs will hang

Jobs can hang on NodeManager decommission due to a race condition when continuous scheduling is enabled.

Workaround:
  1. Go to the YARN service.
  2. Expand the ResourceManager Default Group > Resource Management category.
  3. Uncheck the Enable Fair Scheduler Continuous Scheduling checkbox.
  4. Click Save Changes to commit the changes.
  5. Restart the YARN service.

Could not find a healthy host with CDH 5 on it to create HiveServer2 error during upgrade

When upgrading from CDH 4 to CDH 5, if no parcel is active then the error message "Could not find a healthy host with CDH5 on it to create HiveServer2" displays. This can happen when transitioning from packages to parcels, or if you explicitly deactivate the CDH 4 parcel (which is not necessary) before upgrade.

Workaround: Wait 30 seconds and retry the upgrade.

AWS installation wizard requires Java 7u45 to be installed on Cloudera Manager Server host

Cloudera Manager 5.1 installs Java 7u55 by default. However, the AWS installation wizard does not work with Java 7u55 due to a bug in the jClouds version packaged with Cloudera Manager.

Workaround:
  1. Stop the Cloudera Manager Server.
    $ sudo service cloudera-scm-server stop
  2. Uninstall Java 7u55 from the Cloudera Manager Server host.
  3. Install Java 7u45 (which you can download from http://www.oracle.com/technetwork/java/javase/downloads/java-archive-downloads-javase7-521261.html#jdk-7u45-oth-JPR) on the Cloudera Manager Server host.
  4. Start the Cloudera Manager Server.
    $ sudo service cloudera-scm-server start
  5. Run the AWS installation wizard.
  Note: Due to a bug in Java 7u45 (http://bugs.java.com/bugdatabase/view_bug.do?bug_id=8014618), SSL connections between the Cloudera Manager Server and Cloudera Manager Agents and between the Cloudera Management Service and CDH processes break intermittently. If you do not have SSL enabled on your cluster, there is no impact.

The YARN property ApplicationMaster Max Retries has no effect in CDH 5

The issue arises because yarn.resourcemanager.am.max-retries was replaced with yarn.resourcemanager.am.max-attempts.

Workaround:
  1. Add the following to ResourceManager Advanced Configuration Snippet for yarn-site.xml, replacing MAX_ATTEMPTS with the desired maximum number of attempts:
    <property>
    <name>yarn.resourcemanager.am.max-attempts</name><value>MAX_ATTEMPTS</value>
    </property>
  2. Restart the ResourceManager(s) to pick up the change.

(BDR) Replications can be affected by other replications or commands running at the same time

Replications can be affected by other replications or commands running at the same time, causing replications to fail unexpectedly or even be silently skipped sometimes. When this occurs, a StaleObjectException is logged to the Cloudera Manager logs. This is known to occur even with as few as four replications starting at the same time.

Issues Fixed in Cloudera Manager 5.1.1

Checking "Install Java Unlimited Strength Encryption Policy Files" During Add Cluster or Add/Upgrade Host Wizard on RPM based distributions if JDK 7 or above is pre-installed will cause Cloudera Manager and CDH to fail

If you have manually installed Oracle's official JDK 7 or 8 RPM on one or more hosts, and you check the Install Java Unlimited Strength Encryption Policy Files checkbox in the Add Cluster or Add Host wizard when installing Cloudera Manager on those hosts, or when upgrading Cloudera Manager to 5.1, Cloudera Manager installs JDK 6 policy files, which prevent any Java program from running against that JDK. In addition, Cloudera Manager and CDH choose that JDK as the default to run against, so Cloudera Manager and CDH fail to start, logging the following message: Caused by: java.lang.SecurityException: The jurisdiction policy files are not signed by a trusted signer!

Workaround: Do not select the Install Java Unlimited Strength Encryption Policy Files checkbox during the aforementioned wizards. Instead download and install them manually, following the instructions on Oracle's website.
  Note: To return to the default limited strength files, reinstall the original Oracle rpm:
  • yum - yum reinstall jdk
  • zypper - zypper in -f jdk
  • rpm - rpm -iv --replacepkgs filename, where filename is jdk-7u65-linux-x64.rpm or jdk-8u11-linux-x64.rpm

Issues Fixed in Cloudera Manager 5.1.0

  Important: Cloudera Manager 5.1.0 is no longer available for download from the Cloudera website or from archive.cloudera.com due to the JCE policy file issue described in the Issues Fixed in Cloudera Manager 5.1.1 section of the Release Notes. The download URL at archive.cloudera.com for Cloudera Manager 5.1.0 now forwards to Cloudera Manager 5.1.1 for the RPM-based distributions for Linux RHEL and SLES.

Changes to property for yarn.nodemanager.remote-app-log-dir are not included in the JobHistory Server yarn-site.xml and Gateway yarn-site.xml

When "Remote App Log Directory" is changed in the YARN configuration, the property yarn.nodemanager.remote-app-log-dir is not included in the JobHistory Server yarn-site.xml and Gateway yarn-site.xml.

Workaround: Set JobHistory Server Advanced Configuration Snippet (Safety Valve) for yarn-site.xml and YARN Client Advanced Configuration Snippet (Safety Valve) for yarn-site.xml to:
<property>
<name>yarn.nodemanager.remote-app-log-dir</name>
<value>/path/to/logs</value>
</property>

Secure CDH 4.1 clusters can't have Hue and Impala share the same Hive

In a secure CDH 4.1 cluster, Hue and Impala cannot share the same Hive instance. If "Bypass Hive Metastore Server" is disabled on the Hive service, Hue will not be able to talk to Hive. Conversely, if "Bypass Hive Metastore Server" is enabled on the Hive service, Impala will have a validation error.

Severity: High

Workaround: Upgrade to CDH 4.2.

The command history has an option to select the number of commands, but doesn't always return the number you request

Workaround: None.

Hue doesn't support YARN ResourceManager High Availability

Workaround: Configure the Hue Server to point to the active ResourceManager:
  1. Go to the Hue service.
  2. Click the Configuration tab.
  3. Select Scope > Hue or Hue Service-Wide.
  4. Select Category > Advanced.
  5. Locate the Hue Server Advanced Configuration Snippet (Safety Valve) for hue_safety_valve_server.ini property or search for it by typing its name in the Search box.
  6. In the Hue Server Advanced Configuration Snippet for hue_safety_valve_server.ini field, add the following:
    [hadoop]
    [[ yarn_clusters ]]
    [[[default]]]
    resourcemanager_host=<hostname of active ResourceManager>
    resourcemanager_api_url=http://<hostname of active resource manager>:<web port of active resource manager>
    proxy_api_url=http://<hostname of active resource manager>:<web port of active resource manager>
    The default web port of Resource Manager is 8088.
  7. Click Save Changes to have these configurations take effect.
  8. Restart the Hue service.

Cloudera Manager does not support encrypted shuffle.

Encrypted shuffle has been introduced in CDH 4.1, but it is not currently possible to enable it through Cloudera Manager.

Severity: Medium

Workaround: None.

Hive CLI does not work in CDH 4 when "Bypass Hive Metastore Server" is enabled

Hive CLI does not work in CDH 4 when "Bypass Hive Metastore Server" is enabled.

Workaround: Configure Hive and disable the "Bypass Hive Metastore Server" option.

Alternatively, you can use an approach that prevents the "Hive Auxiliary JARs Directory" property from working but enables basic Hive commands. Add the following to "Gateway Client Environment Advanced Configuration Snippet for hive-env.sh," then re-deploy the Hive client configuration:

HIVE_AUX_JARS_PATH=""
AUX_CLASSPATH=/usr/share/java/mysql-connector-java.jar:/usr/share/java/oracle-connector-java.jar:$(find /usr/share/cmf/lib/postgresql-jdbc.jar 2> /dev/null | tail -n 1)

Incorrect Absolute Path to topology.py in Downloaded YARN Client Configuration

The downloaded client configuration for YARN includes the topology.py script. The location of this script is given by the net.topology.script.file.name property in core-site.xml. But the core-site.xml file downloaded with the client configuration has an incorrect absolute path to /etc/hadoop/... for topology.py. This can cause clients that run against this configuration to fail (including Spark clients run in yarn-client mode, as well as YARN clients).

Workaround: Edit core-site.xml to change the value of the net.topology.script.file.name property to the path where the downloaded copy of topology.py is located. This property must be set to an absolute path.
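The edit above can be sketched programmatically; both paths below are illustrative stand-ins for the packaged default and the location of your downloaded copy of topology.py.

```python
import xml.etree.ElementTree as ET

# A minimal core-site.xml fragment with the incorrect absolute path.
core_site = """<configuration>
  <property>
    <name>net.topology.script.file.name</name>
    <value>/etc/hadoop/conf.cloudera.yarn/topology.py</value>
  </property>
</configuration>"""

root = ET.fromstring(core_site)
for prop in root.findall("property"):
    if prop.find("name").text == "net.topology.script.file.name":
        # Replace with the absolute path of the downloaded topology.py (illustrative).
        prop.find("value").text = "/home/alice/yarn-clientconfig/topology.py"

print(root.find("./property/value").text)
```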

search_bind_authentication for Hue is not included in .ini file

When search_bind_authentication is set to false, Cloudera Manager does not include it in hue.ini.

Workaround: Add the following to the Hue Service Advanced Configuration Snippet (Safety Valve) for hue_safety_valve.ini:
[desktop]
[[ldap]]
search_bind_authentication=false

Erroneous warning displayed on the HBase configuration page on CDH 4.1 in Cloudera Manager 5.0.0

An erroneous "Failed parameter validation" warning is displayed on the HBase configuration page on CDH 4.1 in Cloudera Manager 5.0.0

Severity: Low

Workaround: Use CDH 4.2 or higher, or ignore the warning.

Host recommissioning and decommissioning should occur independently

In large clusters, when problems appear with a host or role, administrators may choose to decommission the host or role to fix it and then recommission the host or role to put it back in production. Decommissioning, especially host decommissioning, is slow, hence the importance of parallelization, so that host recommissioning can be initiated before decommissioning is done.

Fixed Issues in Cloudera Manager 5.0.6

Slow staleness calculation can lead to ZooKeeper data loss when new servers are added

In Cloudera Manager 5, starting new ZooKeeper Servers shortly after adding them can cause ZooKeeper data loss when the number of new servers exceeds the number of old servers.

Fixed Issues in Cloudera Manager 5.0.5

“POODLE” vulnerability on SSL/TLS enabled ports

The POODLE (Padding Oracle On Downgraded Legacy Encryption) attack takes advantage of a cryptographic flaw in the obsolete SSLv3 protocol, after first forcing the use of that protocol. The only solution is to disable SSLv3 entirely. This requires changes across a wide variety of components of CDH and Cloudera Manager. Cloudera Manager 5.0.5 provides these changes for Cloudera Manager 5.0.x deployments. All Cloudera Manager 5.0.x users should upgrade to 5.0.5 as soon as possible. For more information, see the Cloudera Security Bulletin.

Issues Fixed in Cloudera Manager 5.0.2

Cloudera Manager Impala Query Monitoring does not work with Impala 1.3.1

Impala 1.3.1 contains changes to the runtime profile format that break the Cloudera Manager Query Monitoring feature. This leads to exceptions in the Cloudera Manager Service Monitor logs, and Impala queries no longer appear in the Cloudera Manager UI or API. The issue affects Cloudera Manager 5.0 and 4.6 - 4.8.2.

Workaround: None. The issue will be fixed in Cloudera Manager 4.8.3 and Cloudera Manager 5.0.1. To avoid the Service Monitor exceptions, turn off the Cloudera Manager Query Monitoring feature by going to Impala Daemon > Monitoring and setting the Query Monitoring Period to 0 seconds. Note that the Impala Daemons must be restarted when changing this setting, and the setting must be restored once the fix is deployed to turn the query monitoring feature back on. Impala queries will then appear again in Cloudera Manager’s Impala query monitoring feature.

Issues Fixed in Cloudera Manager 5.0.1

Upgrade from Cloudera Manager 5.0.0 beta 1 or beta 2 to Cloudera Manager 5.0.0 requires assistance from Cloudera Support

Contact Cloudera Support before upgrading from Cloudera Manager 5.0.0 beta 1 or beta 2 to Cloudera Manager 5.0.0.

Workaround: Contact Cloudera Support.

Failure of HDFS Replication between clusters with YARN

HDFS replication between clusters in different Kerberos realms fails when using YARN if the target cluster is CDH 5.

Workaround: Use MapReduce (MRv1) instead of YARN.

If installing CDH 4 packages, the Impala 1.3.0 option does not work because Impala 1.3 is not yet released for CDH 4.

If installing CDH 4 packages, the Impala 1.3.0 option listed in the install wizard does not work because Impala 1.3.0 is not yet released for CDH 4.

Workaround: Install using parcels (where the unreleased version of Impala does not appear), or select a different version of Impala when installing with packages.

When updating dynamic resource pools, Cloudera Manager updates roles but may fail to update role information displayed in the UI

When updating dynamic resource pools, Cloudera Manager automatically refreshes the affected roles, but they sometimes get marked incorrectly as running with outdated configurations and requiring a refresh.

Workaround: Invoke the Refresh Cluster command from the cluster actions drop-down menu.

Upgrade of secure cluster requires installation of JCE policy files

When upgrading a secure cluster via Cloudera Manager, the upgrade initially fails due to the JDK not having Java Cryptography Extension (JCE) unlimited strength policy files. This is because Cloudera Manager installs a copy of the Java 7 JDK during the upgrade, which does not include the unlimited strength policy files. To ensure that unlimited strength functionality continues to work, install the unlimited strength JCE policy files immediately after completing the Cloudera Manager Upgrade Wizard and before taking any other actions in Cloudera Manager.

Workaround: Install the unlimited strength JCE policy files immediately after completing the Cloudera Manager Upgrade Wizard and before taking any other action in Cloudera Manager.

The Details page for MapReduce jobs displays the wrong id for YARN-based replications

The Details link for MapReduce jobs is wrong for YARN-based replications.

Workaround: Find the job id in the link and then go to the YARN Applications page and look for the job there.

Reset non-default HDFS File Block Storage Location Timeout value after upgrade from CDH 4 to CDH 5

During an upgrade from CDH 4 to CDH 5, if the HDFS File Block Storage Locations Timeout was previously set to a custom value, it will now be set to 10 seconds or the custom value, whichever is higher. This is required for Impala to start in CDH 5, and any value under 10 seconds is now a validation error. This configuration is only emitted for Impala and no services should be adversely impacted.

Workaround: None.
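The upgrade rule described above amounts to clamping any custom timeout to a 10-second floor, which can be sketched as:

```python
# A custom HDFS File Block Storage Locations Timeout is raised to at least
# 10 seconds during the CDH 4 to CDH 5 upgrade; larger values are kept.
def upgraded_timeout(custom_seconds):
    return max(custom_seconds, 10)

print(upgraded_timeout(3), upgraded_timeout(45))   # 10 45
```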

HDFS NFS gateway works only on RHEL and similar systems

Because of a bug in native versions of portmap/rpcbind, the HDFS NFS gateway does not work out of the box on SLES, Ubuntu, or Debian systems if you install CDH from the command-line, using packages. It does work on supported versions of RHEL-compatible systems on which rpcbind-0.2.0-10.el6 or later is installed, and it does work if you use Cloudera Manager to install CDH, or if you start the gateway as root.

Bug: 731542 (Red Hat), 823364 (SLES), 594880 (Debian)

Severity: High

Workarounds and caveats:
  • On Red Hat and similar systems, make sure rpcbind-0.2.0-10.el6 or later is installed.
  • On SLES, Debian, and Ubuntu systems, do one of the following:
    • Install CDH using Cloudera Manager; or
    • As of CDH 5.1, start the NFS gateway as root; or
    • Start the NFS gateway without using packages; or
    • You can use the gateway by running rpcbind in insecure mode, using the -i option, but keep in mind that this allows anyone from a remote host to bind to the portmap.

Sensitive configuration values exposed in Cloudera Manager

Certain configuration values that are stored in Cloudera Manager are considered sensitive, such as database passwords. These configuration values should be inaccessible to non-administrator users, and this is enforced in the Cloudera Manager Administration Console. However, these configuration values are not redacted when they are read through the API, possibly making them accessible to users who should not have such access.

Gateway role configurations not respected when deploying client configurations

Gateway configurations set for gateway role groups other than the default one or at the role level were not being respected.

Documentation reflects requirement to enable at least Level 1 encryption before enabling Kerberos authentication

Cloudera Security now indicates that before enabling Kerberos authentication you should first enable at least Level 1 encryption.

HDFS NFS gateway does not work on all Cloudera-supported platforms

The NFS gateway cannot be started on some Cloudera-supported platforms.

Workaround: None. Fixed in Cloudera Manager 5.0.1.

Replace YARN_HOME with HADOOP_YARN_HOME during upgrade

If yarn.application.classpath was set to a non-default value on a CDH 4 cluster, and that cluster is upgraded to CDH 5, the classpath is not updated to reflect that $YARN_HOME was replaced with $HADOOP_YARN_HOME. This will cause YARN jobs to fail.

Workaround: Reset yarn.application.classpath to the default, then re-apply your classpath customizations if needed.
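If you prefer to fix a customized value directly rather than reset it, the rename amounts to a string substitution; the classpath value below is illustrative, not a CDH default.

```python
# Every $YARN_HOME reference in yarn.application.classpath must become
# $HADOOP_YARN_HOME in CDH 5.
cdh4_value = "$HADOOP_CONF_DIR,$YARN_HOME/*,$YARN_HOME/lib/*"
cdh5_value = cdh4_value.replace("$YARN_HOME", "$HADOOP_YARN_HOME")
print(cdh5_value)
```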

Insufficient password hashing in Cloudera Manager

In versions of Cloudera Manager earlier than 4.8.3 and earlier than 5.0.1, user passwords are only hashed once. Passwords should be hashed multiple times to increase the cost of dictionary based attacks, where an attacker tries many candidate passwords to find a match. The issue only affects user accounts that are stored in the Cloudera Manager database. User accounts that are managed externally (for example, with LDAP or Active Directory) are not affected.

In addition, because of this issue, Cloudera Manager 4.8.3 cannot be upgraded to Cloudera Manager 5.0.0. Cloudera Manager 4.8.3 must be upgraded to 5.0.1 or later.

Workaround: Upgrade to Cloudera Manager 5.0.1.
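The reasoning above can be illustrated generically: each extra hashing round multiplies the work an attacker must do per candidate password. This is a generic sketch, not Cloudera Manager's actual hashing scheme.

```python
import hashlib

def hash_password(password, salt, rounds):
    # Iterated SHA-256 for illustration; each round adds per-guess cost
    # for a dictionary attacker.
    digest = (salt + password).encode()
    for _ in range(rounds):
        digest = hashlib.sha256(digest).digest()
    return digest.hex()

weak = hash_password("secret", "salt", 1)
strong = hash_password("secret", "salt", 100_000)
print(weak != strong)   # different digests, vastly different attack cost
```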

Upgrade to Cloudera Manager 5.0.0 from SLES older than Service Pack 3 with PostgreSQL older than 8.4 fails

Upgrading to Cloudera Manager 5.0.0 from SUSE Linux Enterprise Server (SLES) older than Service Pack 3 will fail if the embedded PostgreSQL database is in use and the installed version of PostgreSQL is less than 8.4.

Workaround: Either migrate away from the embedded PostgreSQL database (use MySQL or Oracle) or upgrade PostgreSQL to 8.4 or greater.

MR1 to MR2 import fails on a secure cluster

When running the MR1 to MR2 import on a secure cluster, YARN jobs will fail to find container-executor.cfg.

Workaround: Restart YARN after the import.

After upgrade from CDH 4 to CDH 5, Oozie is missing workflow extension schemas

After an upgrade from CDH 4 to CDH 5, Oozie does not pick up the new workflow extension schemas automatically. Users need to update oozie.service.SchemaService.wf.ext.schemas manually and add the schemas added in CDH 5: shell-action-0.3.xsd, sqoop-action-0.4.xsd, distcp-action-0.2.xsd, oozie-sla-0.1.xsd, and oozie-sla-0.2.xsd. Note: None of the existing jobs are affected by this bug; only new workflows that require the new schemas.

Workaround: Add the new workflow extension schemas to Oozie manually by editing oozie.service.SchemaService.wf.ext.schemas.
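The manual edit amounts to appending the new schemas to the existing comma-separated value; the existing value shown below is illustrative.

```python
# Append the CDH 5 extension schemas to the existing
# oozie.service.SchemaService.wf.ext.schemas value.
existing = "shell-action-0.2.xsd,hive-action-0.3.xsd,sqoop-action-0.3.xsd"
cdh5_schemas = [
    "shell-action-0.3.xsd",
    "sqoop-action-0.4.xsd",
    "distcp-action-0.2.xsd",
    "oozie-sla-0.1.xsd",
    "oozie-sla-0.2.xsd",
]
merged = ",".join(existing.split(",") + cdh5_schemas)
print(merged)
```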

Issues Fixed in Cloudera Manager 5.0.0

HDFS replication does not work from CDH 5 to CDH 4 with different realms

HDFS replication does not work from CDH 5 to CDH 4 with different realms, because authentication fails for services in a non-default realm through the WebHdfs API due to a JDK bug. This has been fixed in JDK 6u34 (b03) and in JDK 7.

Workaround: Use JDK 7 or upgrade JDK6 to at least version u34.

The Sqoop Upgrade command in Cloudera Manager may report success even when the upgrade fails

Workaround: Do one of the following to verify that the upgrade did not fail:
  • Click the Sqoop service, then the Instances tab. Click the Sqoop server role, then the Commands tab. Click the stdout link and scan for the Sqoop Upgrade command.
  • On the All Recent Commands page, select the stdout link for the latest Sqoop Upgrade command.

Cannot restore a snapshot of a deleted HBase table

If you take a snapshot of an HBase table, and then delete that table in HBase, you will not be able to restore the snapshot.

Severity: Medium

Workaround: Use the "Restore As" command to recreate the table in HBase.

Stop dependent HBase services before enabling HDFS Automatic Failover.

When enabling HDFS Automatic Failover, you need to first stop any dependent HBase services. The Automatic Failover configuration workflow restarts both NameNodes, which could cause HBase to become unavailable.

Severity: Medium

New schema extensions have been introduced for Oozie in CDH 4.1

In CDH 4.1, Oozie introduced new versions for Hive, Sqoop and workflow schema. To use them, you must add the new schema extensions to the Oozie SchemaService Workflow Extension Schemas configuration property in Cloudera Manager.

Severity: Low

Workaround: In Cloudera Manager, do the following:

  1. Go to the CDH 4 Oozie service page.
  2. Go to the Configuration tab, View and Edit.
  3. Search for "Oozie Schema". This should show the Oozie SchemaService Workflow Extension Schemas property.
  4. Add the following to the Oozie SchemaService Workflow Extension Schemas property:
    shell-action-0.2.xsd 
    hive-action-0.3.xsd 
    sqoop-action-0.3.xsd
  5. Save these changes.

YARN Resource Scheduler uses FairScheduler rather than FIFO.

Cloudera Manager 5.0.0 sets the default YARN Resource Scheduler to FairScheduler. If a cluster was previously running YARN with the FIFO scheduler, it will be changed to FairScheduler the next time YARN restarts. FairScheduler is only supported with CDH 4.2.1 and later; older clusters may hit failures and need to manually change the scheduler to FIFO or CapacityScheduler.

Severity: Medium

Workaround: For clusters running CDH 4 prior to CDH 4.2.1:
  1. Go to the YARN service Configuration page.
  2. Search for "scheduler.class".
  3. Click in the Value field and select the scheduler you want to use.
  4. Save your changes and restart YARN to update your configurations.
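The scheduler.class setting corresponds to yarn.resourcemanager.scheduler.class in yarn-site.xml. As a sketch, pinning a pre-CDH 4.2.1 cluster to the FIFO scheduler would emit something like the following (class name as in upstream Hadoop; verify it against your CDH version):

```
<property>
  <name>yarn.resourcemanager.scheduler.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.scheduler.fifo.FifoScheduler</value>
</property>
```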

Resource Pools Summary is incorrect if time range is too large.

The Resource Pools Summary does not show correct information if the Time Range selector is set to show 6 hours or more.

Severity: Medium

Workaround: None.

When running the MR1 to MR2 import on a secure cluster, YARN jobs will fail to find container-executor.cfg

Workaround: Restart YARN after the import steps finish. This causes the file to be created under the YARN configuration path, and jobs then run successfully.

When upgrading to Cloudera Manager 5.0.0, the "Dynamic Resource Pools" page is not accessible

When upgrading to Cloudera Manager 5.0.0, users will not be able to directly access the "Dynamic Resource Pools" page. Instead, they will be presented with a dialog saying that the Fair Scheduler XML Advanced Configuration Snippet is set.

Workaround:
  1. Go to the YARN service.
  2. Click the Configuration tab.
  3. Select Scope > Resource Manager or YARN Service-Wide.
  4. Select Category > Advanced.
  5. Locate the Fair Scheduler XML Advanced Configuration Snippet property or search for it by typing its name in the Search box.
  6. Copy the value of the Fair Scheduler XML Advanced Configuration Snippet into a file.
  7. Clear the value of Fair Scheduler XML Advanced Configuration Snippet.
  8. Recreate the desired Fair Scheduler allocations in the Dynamic Resource Pools page, using the saved file for reference.
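For step 8, the saved snippet is a Fair Scheduler allocations XML file. A minimal sketch of what such a file might contain (pool names and resource values here are purely illustrative):

```
<allocations>
  <queue name="production">
    <minResources>1024 mb, 1 vcores</minResources>
    <weight>2.0</weight>
  </queue>
  <queue name="adhoc">
    <weight>1.0</weight>
  </queue>
</allocations>
```

Each queue element corresponds to a pool you would recreate in the Dynamic Resource Pools page.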

New Cloudera Enterprise licensing is not reflected in the wizard and license page

Workaround: None.

The AWS Cloud wizard fails to install Spark due to missing roles

Workaround: Do one of the following:
  • Use the Installation wizard.
  • Open a new window, click the Spark service, click the Instances tab, click Add, and add all required roles to Spark. Once the roles are added successfully, click the Retry button in the Installation wizard.

Spark on YARN requires manual configuration

Spark on YARN requires a manual configuration change to work correctly: the YARN Application Classpath must be modified so that /etc/hadoop/conf is the very first entry.

Workaround: Add /etc/hadoop/conf as the first entry in the YARN Application classpath.
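With the change applied, the YARN Application Classpath (yarn.application.classpath) begins with /etc/hadoop/conf, followed by the stock Hadoop entries. The entries after the first are the upstream defaults, shown here as a sketch; they may differ in your deployment:

```
/etc/hadoop/conf,
$HADOOP_CONF_DIR,
$HADOOP_COMMON_HOME/*,$HADOOP_COMMON_HOME/lib/*,
$HADOOP_HDFS_HOME/*,$HADOOP_HDFS_HOME/lib/*,
$HADOOP_YARN_HOME/*,$HADOOP_YARN_HOME/lib/*
```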

Monitoring works with Solr and Sentry only after configuration updates

Cloudera Manager monitoring does not work out of the box with Solr and Sentry on Cloudera Manager 5. The Solr service is in Bad health, and all Solr Servers have a failing "Solr Server API Liveness" health check.

Severity: Medium

Workaround: Complete the configuration steps below:

  1. Create an "HTTP" user and group on all machines in the cluster (with useradd 'HTTP' on RHEL-type systems).
  2. The instructions that follow assume there is no existing Solr Sentry policy file in use; in that case, first create the policy file under /tmp and then copy it to the appropriate location in HDFS that the Solr Servers check. If a Solr Sentry policy file is already in use, instead modify it to add the following [groups] and [roles] entries for 'HTTP'. Create a file (for example, /tmp/cm-authz-solr-sentry-policy.ini) with the following contents:
    [groups]
    HTTP = HTTP
    [roles]
    HTTP = collection = admin->action=query
  3. Copy this file to the location for the "Sentry Global Policy File" for Solr. The associated config name for this location is sentry.solr.provider.resource, and you can see the current value by navigating to the Sentry sub-category in the Service Wide configuration editing workflow in the Cloudera Manager UI. The default value for this entry is /user/solr/sentry/sentry-provider.ini. This refers to a path in HDFS.
  4. Check if you have entries in HDFS for the parent(s) directory:
    sudo -u hdfs hadoop fs -ls /user
  5. You may need to create the appropriate parent directories if they are not present. For example:
    sudo -u hdfs hadoop fs -mkdir /user/solr/sentry
  6. After ensuring the parent directory is present, copy the file created in step 2 to this location, as follows:
    sudo -u hdfs hadoop fs -put /tmp/cm-authz-solr-sentry-policy.ini /user/solr/sentry/sentry-provider.ini
  7. Ensure that this file is owned/readable by the solr user (this is what the Solr Server runs as):
    sudo -u hdfs hadoop fs -chown solr /user/solr/sentry/sentry-provider.ini
  8. Restart the Solr service. If both Kerberos and Sentry are being enabled for Solr, the MGMT services also need to be restarted. The Solr Server liveness health checks should clear up once SMON has had a chance to contact the servers and retrieve metrics.

Out-of-memory errors may occur when using the Reports Manager

Out-of-memory errors may occur when using the Cloudera Manager Reports Manager.

Workaround: Set the value of the "Java Heap Size of Reports Manager" property to at least the size of the HDFS filesystem image (fsimage) and restart the Reports Manager.
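As a rough sizing sketch (the helper function and the 25% headroom factor below are illustrative assumptions, not Cloudera guidance; the rule in the workaround is simply heap at least equal to the fsimage size):

```python
# Illustrative heap-sizing helper for the Reports Manager: the heap should
# be at least as large as the HDFS fsimage. The 25% headroom factor is an
# assumption for this sketch, not an official recommendation.
def min_reports_manager_heap_bytes(fsimage_bytes, headroom=1.25):
    """Suggest a minimum heap: fsimage size plus headroom."""
    return int(fsimage_bytes * headroom)

# Example: a 2 GiB fsimage suggests roughly a 2.5 GiB heap.
print(min_reports_manager_heap_bytes(2 * 1024**3))
```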

Applying license key using Internet Explorer 9 and Safari fails

Cloudera Manager is designed to work with IE 9 and above and with Safari. However, the file upload widget used to upload a license does not currently work with IE 9 or Safari, so installing an enterprise license fails in those browsers.

Workaround: Use another supported browser.

Issues Fixed in Cloudera Manager 5.0.0 Beta 2

The Sqoop Upgrade command in Cloudera Manager may report success even when the upgrade fails

Workaround: Do one of the following:
  • Check the Sqoop server role's command output:
    1. Click the Sqoop service and then the Instances tab.
    2. Click the Sqoop server role, then the Commands tab.
    3. Click the stdout link and scan for the Sqoop Upgrade command.
  • On the All Recent Commands page, click the stdout link for the latest Sqoop Upgrade command.
Verify that the upgrade did not fail.

The HDFS Canary Test is disabled for secured CDH 5 services.

Due to a bug in Hadoop's handling of multiple RPC clients with distinct configurations in a single process when Kerberos security is enabled, Cloudera Manager disables the HDFS canary test on secure clusters to prevent interference with its MapReduce monitoring functionality.

Severity: Medium

Workaround: None

Not all monitoring configurations are migrated from MR1 to MR2.

When MapReduce v1 configurations are imported for use by YARN (MR2), not all of the monitoring configuration values are currently migrated. Users may need to reconfigure custom values for properties such as thresholds.

Severity: Medium

Workaround: Manually reconfigure any missing property values.

"Access Denied" may appear for some features after adding a license or starting a trial.

After starting a 60-day trial or installing a license for Enterprise Edition, you may see an "access denied" message when attempting to access certain Enterprise Edition-only features such as the Reports Manager. You need to log out of the Admin Console and log back in to access these features.

Severity: Low

Workaround: Log out of the Admin Console and log in again.

Hue must set impersonation on when using Impala with impersonation.

When using Impala with impersonation, the impersonation_enabled flag must be present and configured in the hue.ini file. If impersonation is enabled in Impala (that is, Impala is using Sentry), this flag must be set to true. If Impala is not using impersonation, it should be set to false (the default).

Workaround: Set the advanced configuration snippet value for hue.ini as follows:
  1. Go to the Hue Service Configuration Advanced Configuration Snippet for hue_safety_valve.ini under the Hue service Configuration settings, Service-Wide > Advanced category.
  2. Add the following, then uncomment the setting and set the value True or False as appropriate:
    #################################################################
    # Settings to configure Impala
    #################################################################
    
    [impala]
      ....
      # Turn on/off impersonation mechanism when talking to Impala
      ## impersonation_enabled=False

Cloudera Manager Server may fail to start when upgrading using a PostgreSQL database.

If you're upgrading to Cloudera Manager 5.0.0 beta 1 and you're using a PostgreSQL database, the Cloudera Manager Server may fail to start with a message similar to the following:
ERROR [main:dbutil.JavaRunner@57] Exception while executing 
com.cloudera.cmf.model.migration.MigrateConfigRevisions 
java.lang.RuntimeException: java.sql.SQLException: Batch entry <xxx> insert into REVISIONS 
(REVISION_ID, OPTIMISTIC_LOCK_VERSION, USER_ID, TIMESTAMP, MESSAGE) values (...) 
was aborted. Call getNextException to see the cause.
Workaround: Use psql to connect directly to the server's database and issue the following SQL command:
alter table REVISIONS alter column MESSAGE type varchar(1048576);
After that, your Cloudera Manager server should start up normally.

Issues Fixed in Cloudera Manager 5.0.0 Beta 1

After an upgrade from Cloudera Manager 4.6.3 to 4.7, Impala does not start.

After an upgrade from Cloudera Manager 4.6.3 to 4.7 when Navigator is used, Impala will fail to start because the Audit Log Directory property has not been set by the upgrade procedure.

Severity: Low

Workaround: Manually set the property to /var/log/impalad/audit. See Service Auditing Properties for more information.