CDH Issues

Cross Site Scripting Vulnerabilities in Cloudera Manager

Multiple cross-site scripting (XSS) vulnerabilities in the Cloudera Manager UI before version 5.4.3 allow remote attackers to inject arbitrary web script or HTML via unspecified vectors. Authentication to Cloudera Manager is required to exploit these vulnerabilities.

Products affected: Cloudera Manager

Releases affected: All releases prior to 5.4.3

Users affected: All Cloudera Manager users

Date/time of detection: May 8th, 2015

Severity: (Low/Medium/High) Medium

Impact: Allows unauthorized modification.

CVE: CVE-2015-4457

Immediate action required: Upgrade to Cloudera Manager 5.4.3.

Addressed in release/refresh/patch: Cloudera Manager 5.4.3

Key Trustee Server Passive Not Storing Keys Synchronously

Under normal operations, the active Key Trustee Server will reject key creation requests when a passive server is down. However, due to lack of synchronous replication on the active server, the following scenario can occur, causing key loss:

  • The passive server goes down.
  • An encryption zone is created, generating a new key that is stored only on the active server.
  • The active server suffers a catastrophic, unrecoverable failure before the passive server is restored.
  • The passive server is restored and promoted to active without the new key.
  • A new passive server is created.

This scenario is an extreme edge case, but the consequence of losing an encryption key can be severe. The immediate action documented below addresses this issue by turning on synchronous replication.

Products affected: Key Trustee Server 5.4.0

Releases affected: Key Trustee Server 5.4.0

Users affected: Users using Key Trustee 5.4.0 in High Availability mode.

Date/time of detection: Friday, May 29, 2015

Severity: (Low/Medium/High) High

Impact: Key data loss.

CVE: CVE-2015-4166

Immediate action required: Apply workaround documented below:
  1. Execute the following commands on the active server:
    $ echo synchronous_standby_names=ztslave >>/var/lib/keytrustee/db/postgres.conf.include
    $ echo synchronous_commit=on >>/var/lib/keytrustee/db/postgres.conf.include

    The commands above assume a default path to the database. If you modified the default database location, replace /var/lib/keytrustee/db with the modified path.

  2. Restart the active database server using one of the following methods:
    1. Using packages: on the active Key Trustee Server, restart the database server with the following commands.
      $ sudo -u keytrustee ktadmin db --stop --pg-rootdir /var/lib/keytrustee/db
      $ sudo -u keytrustee ktadmin db --start --pg-rootdir /var/lib/keytrustee/db --background

      The commands above assume a default path to the database. If you modified the default database location, replace /var/lib/keytrustee/db with the modified path.

    2. Using Cloudera Manager

      Restart the active database role, following the instructions in Starting, Stopping, and Restarting Role Instances.
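
To confirm that synchronous replication is in effect after the restart, you can query the replication status from the active server's PostgreSQL instance. This is a sketch: it assumes local psql access as the keytrustee user, a database named keytrustee, and that the passive server registers with the application_name ztslave; adjust connection options for your deployment.

    $ sudo -u keytrustee psql -d keytrustee -c "SELECT application_name, state, sync_state FROM pg_stat_replication;"

The passive server should be listed with sync_state reported as sync rather than async.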

Addressed in release/refresh/patch: Key Trustee Server 5.4.3

Cloudera Navigator Vulnerable to the POODLE Attack

Cloudera Navigator 2.2.0 through 2.2.3, 2.3.0, and 2.3.1 include SSL/TLS support; however, SSLv3 protocol support, which is vulnerable to the POODLE attack, was erroneously left enabled.

This vulnerability affects only those installations of Cloudera Navigator that are configured to use SSL/TLS.
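
To check whether an SSL/TLS-enabled Navigator instance still accepts SSLv3, you can attempt an SSLv3-only handshake with openssl. This is a sketch: the hostname and port below are placeholders for your Navigator web endpoint, and the client's openssl build must itself still support SSLv3 for the test to be meaningful.

    $ openssl s_client -connect navigator.example.com:7187 -ssl3 < /dev/null

If the handshake completes and a server certificate is printed, SSLv3 is enabled and the instance is vulnerable; a handshake failure indicates SSLv3 is disabled.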

Products affected: Cloudera Navigator

Releases affected:
  Cloudera Navigator   Corresponding Cloudera Manager
  2.2.0                5.3.0
  2.2.1                5.3.1
  2.2.2                5.3.2
  2.2.3                5.3.3
  2.3.0                5.4.0
  2.3.1                5.4.1

Users affected: All web users and API clients of Cloudera Navigator when SSL/TLS is enabled.

Date/time of detection:

Severity: (Low/Medium/High) Medium

Impact: Allows unauthorized disclosure of information; allows component impersonation.

CVE: CVE-2015-4078

Immediate action required:
  • Cloudera Navigator 2.2.x (packaged with Cloudera Manager 5.3.x): upgrade to Cloudera Navigator 2.2.4 (packaged with Cloudera Manager 5.3.4) or later.
  • Cloudera Navigator 2.3.x (packaged with Cloudera Manager 5.4.x): upgrade to Cloudera Navigator 2.3.3 (packaged with Cloudera Manager 5.4.3) or later.
  • Please note: if you are upgrading from Cloudera Navigator 2.2.x to 2.3.3 or later (that is, upgrading from Cloudera Manager 5.3.x to 5.4.3 or later) and are impacted by this issue, you must remove the Advanced Configuration (safety valve) SSL settings and reconfigure SSL using the new configuration, as specified at:

    http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/sg_nav_ssl.html.

Addressed in release/refresh/patch:

  • Cloudera Navigator 2.2.4 (packaged with Cloudera Manager 5.3.4)

  • Cloudera Navigator 2.3.3 (packaged with Cloudera Manager 5.4.3)

HBase Metadata in ZooKeeper Can Lack Proper Authorization Controls

In certain circumstances, HBase does not properly set up access control in ZooKeeper. As a result, any user can modify this metadata and perform attacks, including denial of service, or cause data loss in a replica cluster. Clusters configured using Cloudera Manager are not vulnerable.

Products affected: HBase

Releases affected: All CDH 4 and CDH 5 versions prior to 4.7.2, 5.0.7, 5.1.6, 5.2.6, 5.3.4, 5.4.3

Users affected: HBase users with security set up to use Kerberos

Date/time of detection: May 15, 2015

Severity: (Low/Medium/High) High

Impact: An attacker could cause potential data loss in a replica cluster, or denial of service.

CVE: CVE-2015-1836

Immediate action required: To determine whether your cluster is affected by this problem, open a ZooKeeper shell using hbase zkcli and check the permissions on the /hbase znode using getAcl /hbase.

If the output shows 'world,'anyone with cdrwa permissions, any unauthenticated user can delete or modify HBase znodes.
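
For reference, a session on a vulnerable cluster might look like the following (output abbreviated; exact prompt and formatting vary by ZooKeeper version):

    $ hbase zkcli
    getAcl /hbase
    'world,'anyone
    : cdrwa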

To manually fix the problem:
  1. Change the configuration to use hbase.zookeeper.client.keytab.file on Master and RegionServers.
    • Edit hbase-site.xml (which should be in /etc/hbase/) and add:
      <property>
        <name>hbase.zookeeper.client.keytab.file</name>
        <value>hbase.keytab</value>
      </property>
  2. Do a rolling restart of HBase (Master and RegionServers), and wait until it has completed.
  3. To manually fix the ACLs, from a zkcli session running as the hbase user, set the ACLs so that world has only read access and sasl/hbase has cdrwa. A verification example follows the command list below.

    (Some znodes in the list may not be present in your setup; ignore any 'node not found' errors.)

    $ hbase zkcli
    setAcl /hbase world:anyone:r,sasl:hbase:cdrwa
    setAcl /hbase/backup-masters sasl:hbase:cdrwa
    setAcl /hbase/draining sasl:hbase:cdrwa
    setAcl /hbase/flush-table-proc sasl:hbase:cdrwa
    setAcl /hbase/hbaseid world:anyone:r,sasl:hbase:cdrwa
    setAcl /hbase/master world:anyone:r,sasl:hbase:cdrwa
    setAcl /hbase/meta-region-server world:anyone:r,sasl:hbase:cdrwa
    setAcl /hbase/namespace sasl:hbase:cdrwa
    setAcl /hbase/online-snapshot sasl:hbase:cdrwa
    setAcl /hbase/region-in-transition sasl:hbase:cdrwa
    setAcl /hbase/recovering-regions sasl:hbase:cdrwa
    setAcl /hbase/replication sasl:hbase:cdrwa
    setAcl /hbase/rs sasl:hbase:cdrwa
    setAcl /hbase/running sasl:hbase:cdrwa
    setAcl /hbase/splitWAL sasl:hbase:cdrwa
    setAcl /hbase/table sasl:hbase:cdrwa
    setAcl /hbase/table-lock sasl:hbase:cdrwa
    setAcl /hbase/tokenauth sasl:hbase:cdrwa
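
    After applying the ACLs, you can re-run getAcl /hbase from the same zkcli session to confirm the fix; the expected result is read-only access for world and full access for the sasl-authenticated hbase principal:

    getAcl /hbase
    'world,'anyone
    : r
    'sasl,'hbase
    : cdrwa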

Release-based solutions are currently under investigation.

Addressed in release/refresh/patch: An update will be provided when solutions are in place.

For more updates on this issue, see the corresponding Knowledge article:

TSB 2015-65: HBase Metadata in ZooKeeper can lack proper Authorization Controls

HiveServer2 LDAP Provider May Allow Authentication with Blank Password

Hive may allow a user to authenticate without entering a password, depending on the order in which classes are loaded.

Specifically, Hive's SaslPlainServerFactory checks passwords, but the same class provided in Hadoop does not. Therefore, if the Hadoop class is loaded first, users can authenticate with HiveServer2 without specifying a password.
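
A quick way to test whether a HiveServer2 instance is affected is to attempt a Beeline connection with an empty password. This is a sketch: the host, port, and user below are placeholders.

    $ beeline -u "jdbc:hive2://hs2.example.com:10000/default" -n someuser -p ""

If the connection succeeds even though LDAP authentication is enabled, the server is accepting blank passwords and is affected.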

Products affected: Hive

Releases affected:

  • CDH 5.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.5

  • CDH 5.1, 5.1.2, 5.1.3, 5.1.4

  • CDH 5.2, 5.2.1, 5.2.3, 5.2.4

  • CDH 5.3, 5.3.1, 5.3.2

  • CDH 5.4.1, 5.4.2, 5.4.3

      Note: CDH 5.4.0 is not affected by this issue.

Users affected: All users using Hive with LDAP authentication.

Date/time of detection: March 11, 2015

Severity: (Low/Medium/High) High

Impact: A malicious user may be able to authenticate with HiveServer2 without specifying a password.

CVE: CVE-2015-1772

Immediate action required: Upgrade to CDH 5.4.4, 5.3.3, 5.2.5, 5.1.5, or 5.0.6

Addressed in release/refresh/patch: CDH 5.4.4, 5.3.3, 5.2.5, 5.1.5, and 5.0.6

For more updates on this issue, see the corresponding Knowledge article:

HiveServer2 LDAP Provider may Allow Authentication with Blank Password

Critical security-related files in the YARN NodeManager configuration directories can be accessed by any user

When Cloudera Manager starts a YARN NodeManager, it makes all files in its configuration directory (typically /var/run/cloudera-scm-agent/process) readable by all users. This includes the file containing the Kerberos keytabs (yarn.keytab) and the file containing passwords for the SSL keystore (ssl-server.xml).

Global read permissions must be removed from the NodeManager's security-related files.
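
To check whether a NodeManager host is exposed, inspect the permissions on the security-related files in the most recent NodeManager process directory. This is a sketch: the path below is the typical Cloudera Manager default, and the directory glob is illustrative.

    $ ls -l /var/run/cloudera-scm-agent/process/*-NODEMANAGER/yarn.keytab

World-readable permissions (for example, -rw-r--r--) indicate the vulnerable behavior; after the fix, the keytab should be readable by its owner only.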

Products affected: Cloudera Manager

Releases affected: All releases of Cloudera Manager 4.0 and higher.

Users affected: Customers who are using YARN in environments where Kerberos or SSL is enabled.

Date/time of detection: March 8, 2015

Severity (Low/Medium/High): High

Impact: Any user who can log in to a host where the YARN NodeManager is running can get access to the keytab file, use it to authenticate to the cluster, and perform unauthorized operations. If SSL is enabled, the user can also decrypt data transmitted over the network.

CVE: CVE-2015-2263

Immediate action required:
  1. If you are running YARN with Kerberos/SSL with Cloudera Manager 5.x, upgrade to the maintenance release with the security fix. If you are running YARN with Kerberos with Cloudera Manager 4.x, upgrade to any Cloudera Manager 5.x release with the security fix.
  2. Delete all “yarn” and “HTTP” principals from KDC/Active Directory. After deleting them, regenerate them using Cloudera Manager.
  3. Regenerate SSL keystores that you are using with the YARN service, using a new password.

ETA for resolution: Patches are available immediately with the release of this TSB.

Addressed in release/refresh/patch: Cloudera Manager releases 5.0.6, 5.1.5, 5.2.5, 5.3.3, and 5.4.0 have the fix for this bug.

For further updates on this issue, see the corresponding Knowledge article:

Critical Security related Files in the YARN NodeManager Configuration Directories can be Accessed by any User

Some DataNode admin commands do not check if the caller is an HDFS admin

Three HDFS admin commands (refreshNamenodes, deleteBlockPool, and shutdownDatanode) lack proper privilege checks in Apache Hadoop 0.23.x prior to 0.23.11 and 2.x prior to 2.4.1. This allows arbitrary users to make DataNodes unnecessarily refresh their federated NameNode configurations, delete inactive block pools, or shut down. The shutdownDatanode command was introduced in 2.4.0; refreshNamenodes and deleteBlockPool were added in 0.23.0. Note that deleteBlockPool does not remove any underlying data from the affected DataNodes, so this vulnerability cannot cause data loss, although cluster operations can be severely disrupted.
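
For context, these operations are exposed through the hdfs dfsadmin command-line tool, for example (the DataNode hostname, IPC port, and block pool ID below are placeholders):

    $ hdfs dfsadmin -refreshNamenodes dn1.example.com:50020
    $ hdfs dfsadmin -deleteBlockPool dn1.example.com:50020 BP-1234-example
    $ hdfs dfsadmin -shutdownDatanode dn1.example.com:50020

Prior to the fix, any authenticated user could run these against a DataNode; with the fix, they require HDFS administrator privileges.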

Products affected:
  • Hadoop HDFS
Releases affected:
  • CDH 5.0.0 and CDH 5.0.1
Users affected:
  • All users running an HDFS cluster configured with Kerberos security
Date/time of detection:
  • April 30, 2014
Severity: Medium

Impact: Through HDFS admin command-line tools, non-admin users can shut down DataNodes or force them to perform unnecessary operations.

CVE: CVE-2014-0229

Immediate action required: Upgrade to CDH 5.0.2 or higher.

Job History Server does not enforce ACLs when web authentication is enabled

The Job History Server does not enforce job ACLs when web authentication is enabled. This means that any user can see details of all jobs. This only affects users who are using MRv2/YARN with HTTP authentication enabled.

Products affected:
  • Hadoop
Releases affected:
  • All versions of CDH 4.5.x up to 4.5.0
  • All versions of CDH 4.4.x up to 4.4.0
  • All versions of CDH 4.3.x up to 4.3.1
  • All versions of CDH 4.2.x up to 4.2.2
  • All versions of CDH 4.1.x up to 4.1.5
  • All versions of CDH 4.0.x
  • CDH 5.0.0 Beta 1
Users affected:
  • Users of YARN who have web authentication enabled.

Date/time of detection: October 14, 2013

Severity: Low
  Note: YARN is an experimental feature in CDH 4; it is no longer experimental in CDH 5.

Impact: Low

CVE: CVE-2013-6446

Immediate action required:
  • None, if you are not using MRv2/YARN with HTTP authentication.
  • If you are using MRv2/YARN with HTTP authentication, upgrade to CDH 4.6.0 or CDH 5.0.0 Beta 2, or contact Cloudera for a patch.

ETA for resolution: Fixed in CDH 5.0.0 Beta 2 released on 2/10/2014 and CDH 4.6.0 released on 2/27/2014.

Addressed in release/refresh/patch: CDH 4.6.0 and CDH 5.0.0 Beta 2.

Verification:

This vulnerability affects the Job History Server Web Services; it does not affect the Job History Server Web UI.

  Important:

The vulnerability is exposed only when the Job History Server HTTP endpoint is configured with an authentication filter (such as Hadoop's built-in AuthenticationFilter or a custom filter) that populates HttpServletRequest.getRemoteUser(), which is propagated to the Job History Server. This configuration is independent of whether the Hadoop cluster is configured with Kerberos security.

To verify that the vulnerability has been fixed, do the following steps:
  1. Create two non-admin users: 'A' and 'B'
  2. Submit a MapReduce job as user 'A'. For example:
    $ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar pi 2 2
  3. From the output of the above submission, copy the job ID, for example: job_1389847214537_0001
  4. With a browser logged in to the Job History Server Web UI as user 'B', access the following URL:

    http://<JHS_HOST>:19888/ws/v1/history/mapreduce/jobs/job_1389847214537_0001

If the vulnerability has been fixed, you should get an HTTP 401 (Unauthorized) response; if the vulnerability has not been fixed, you should get XML output with basic information about the job.
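
The same check can be scripted with curl using SPNEGO authentication. This is a sketch: it assumes a valid Kerberos ticket for user 'B', and the host and job ID are placeholders taken from the steps above.

    $ curl --negotiate -u : "http://jhs.example.com:19888/ws/v1/history/mapreduce/jobs/job_1389847214537_0001"

A fixed server returns the 401 response for user 'B'; an unfixed server returns the job details.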

Apache Hadoop and Apache HBase "Man-in-the-Middle" Vulnerability

The Apache Hadoop and HBase RPC protocols are intended to provide bi-directional authentication between clients and servers. However, a malicious server or network attacker can unilaterally disable these authentication checks. This allows for potential reduction in the configured quality of protection of the RPC traffic, and privilege escalation if authentication credentials are passed over RPC.

Products affected:
  • Hadoop
  • HBase
Releases affected:
  • All versions of CDH 4.3.x prior to 4.3.1
  • All versions of CDH 4.2.x prior to 4.2.2
  • All versions of CDH 4.1.x prior to 4.1.5
  • All versions of CDH 4.0.x
Users affected:
  • Users of HDFS who have enabled Hadoop Kerberos security features and HDFS data encryption features.
  • Users of MapReduce or YARN who have enabled Hadoop Kerberos security features.
  • Users of HBase who have enabled HBase Kerberos security features and who run HBase co-located on a cluster with MapReduce or YARN.

Date/time of detection: June 10th, 2013

Severity: Severe

Impact:

RPC traffic from Hadoop clients, potentially including authentication credentials, may be intercepted by any user who can submit jobs to Hadoop. RPC traffic from HBase clients to Region Servers may be intercepted by any user who can submit jobs to Hadoop.

CVE: CVE-2013-2192 (Hadoop) and CVE-2013-2193 (HBase)

Immediate action required:
  • Users of CDH 4.3.0 should immediately upgrade to CDH 4.3.1 or later.

  • Users of CDH 4.2.x should immediately upgrade to CDH 4.2.2 or later.

  • Users of CDH 4.1.x should immediately upgrade to CDH 4.1.5 or later.

ETA for resolution: August 23, 2013

Addressed in release/refresh/patch: CDH 4.1.5, CDH 4.2.2, and CDH 4.3.1.

Verification:

To verify that you are not affected by this vulnerability, ensure that you are running a CDH version at or above the versions listed above. To do so, proceed as follows.

On all cluster nodes, run one of the following commands:
  • On RPM-based systems (RHEL, SLES):
    rpm -qi hadoop | grep -i version
  • On Debian-based systems:
    dpkg -s hadoop | grep -i version
Make sure that the version listed is greater than or equal to 4.1.5 for the 4.1.x line, 4.2.2 for the 4.2.x line, and 4.3.1 for the 4.3.x line. The 4.4.x and later lines of releases are unaffected.

Several types of authentication tokens use a secret key of insufficient length

Products Affected: HDFS, MapReduce, YARN, Hive, HBase

Releases Affected: CDH4.0.x and all CDH3 versions between CDH3 Beta 3 and CDH3u5 refresh 1, if you use MapReduce, HDFS, HBase, or YARN.

Users Affected: Users who have enabled Hadoop Kerberos security features.

Date/Time of Announcement: 10/12/2012 2:00pm PDT (upstream)

Verification: Verified upstream

Severity: High

Impact: Malicious users may crack the secret keys used to sign security tokens, granting access to modify data stored in HDFS, HBase, or Hive without authorization. HDFS Transport Encryption may also be brute-forced.

Mechanism: This vulnerability impacts a piece of security infrastructure in Hadoop Common, which affects the security of authentication tokens used by HDFS, MapReduce, YARN, HBase, and Hive.

Several components in Hadoop issue authentication tokens to clients in order to authenticate and authorize later access to a secured resource. These tokens consist of an identifier and a signature generated using the well-known HMAC scheme. The HMAC algorithm is based on a secret key shared between multiple server-side components.

For example, the HDFS NameNode issues block access tokens, which authorize a client to access a particular block with either read or write access. These tokens are then verified using a rotating secret key, which is shared between the NameNode and DataNodes. Similarly, MapReduce issues job-specific tokens, which allow reducer tasks to retrieve map output. HBase similarly issues authentication tokens to MapReduce tasks, allowing those tasks to access HBase data. Hive uses the same token scheme to authenticate access from MapReduce tasks to the Hive metastore.

The HMAC scheme relies on a shared secret key unknown to the client. In currently released versions of Hadoop, this key was created with an insufficient length (20 bits, a key space of only 2^20, about one million candidates), which allows an attacker to obtain the secret key by brute force. This may allow an attacker to perform several actions without authorization, including accessing other users' data.
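
As an illustration of why 20 bits is insufficient (this shows the general HMAC scheme, not Hadoop's actual token format): an attacker holding one valid token can test every one of the roughly one million candidate keys offline until one reproduces the observed signature, which takes only seconds on commodity hardware.

    # Illustration only: compute an HMAC-SHA1 signature over a token identifier
    # with one candidate key; a brute-force loop repeats this for all ~1M keys.
    $ echo -n "token-identifier-bytes" | openssl dgst -sha1 -hmac "candidate-key"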

Immediate action required: If Security is enabled, upgrade to the latest CDH release.

ETA for resolution: As of 10/12/2012, this is patched in CDH4.1.0 and CDH3u5 refresh 2. Both are available now.

Addressed in release/refresh/patch: CDH4.1.0 and CDH3u5 refresh 2

Details: CDH Downloads

DataNode Client Authentication Disabled After NameNode Restart or HA Enable

Products affected: HDFS

Releases affected: CDH 4.0.0

Users affected: Users of HDFS who have enabled HDFS Kerberos security features.

Date vulnerability discovered: June 26, 2012

Date vulnerability analysis and validation complete: June 29, 2012

Severity: Severe

Impact: Malicious clients may gain write access to data for which they have read-only permission, or gain read access to any data blocks whose IDs they can determine.

Mechanism: When Hadoop security features are enabled, clients authenticate to DataNodes using BlockTokens issued by the NameNode to the client. The DataNodes are able to verify the validity of a BlockToken, and will reject BlockTokens that were not issued by the NameNode. The DataNode determines whether or not it should check for BlockTokens when it registers with the NameNode.

Due to a bug in the DataNode/NameNode registration process, a DataNode which registers more than once for the same block pool will conclude that it thereafter no longer needs to check for BlockTokens sent by clients. That is, the client will continue to send BlockTokens as part of its communication with DataNodes, but the DataNodes will not check the validity of the tokens. A DataNode will register more than once for the same block pool whenever the NameNode restarts, or when HA is enabled.

Immediate action required:

  1. Understand the vulnerability introduced by restarting the NameNode, or enabling HA.
  2. Upgrade to CDH 4.0.1 as soon as it becomes available.

Resolution: July 6, 2012

Addressed in release/refresh/patch: CDH 4.0.1. This release addresses the vulnerability identified by CVE-2012-3376.

Verification: On the NameNode run one of the following:

  • yum list hadoop-hdfs-namenode on RPM-based systems
  • dpkg -l | grep hadoop-hdfs-namenode on Debian-based systems
  • zypper info hadoop-hdfs-namenode for SLES11

On all DataNodes run one of the following:

  • yum list hadoop-hdfs-datanode on RPM-based systems
  • dpkg -l | grep hadoop-hdfs-datanode on Debian-based systems
  • zypper info hadoop-hdfs-datanode for SLES11

The reported version should be >= 2.0.0+91-1.cdh4.0.1

MapReduce with Security

Products affected: MapReduce

Releases affected: Hadoop 1.0.1 and earlier, Hadoop 0.23, CDH3u0 through CDH3u2, and CDH3u3 containing the hadoop-0.20-sbin package at version 0.20.2+923.195 or below.

Users affected: Users who have enabled Hadoop Kerberos/MapReduce security features.

Severity: Critical

Impact: Vulnerability allows an authenticated malicious user to impersonate any other user on the cluster.

Immediate action required: Upgrade the hadoop-0.20-sbin package to version 0.20.2+923.197 or higher on all TaskTrackers to address the vulnerability. Note that upgrading hadoop-0.20-sbin will also upgrade several related (but unchanged) Hadoop packages. If you are using Cloudera Manager version 3.7.3 or earlier, you must also upgrade to Cloudera Manager 3.7.4 or later before you can successfully run jobs with Kerberos enabled after upgrading the hadoop-0.20-sbin package.
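
On RPM-based systems, the package upgrade might look like the following (a sketch; it assumes the Cloudera repository is already configured, and the equivalent apt-get or zypper command applies on other platforms):

    $ sudo yum upgrade hadoop-0.20-sbin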

Resolution: 3/21/2012

Addressed in release/refresh/patch: hadoop-0.20-sbin package, version 0.20.2+923.197. This release addresses the vulnerability identified by CVE-2012-1574.

Remediation verification: On all TaskTrackers run one of the following:

  • yum list hadoop-0.20-sbin on RPM-based systems
  • dpkg -l | grep hadoop-0.20-sbin on Debian-based systems
  • zypper info hadoop-0.20-sbin for SLES11

The reported version should be >= 0.20.2+923.197.

If you are a Cloudera Enterprise customer and have further questions or need assistance, log a ticket with Cloudera Support through http://support.cloudera.com.