Cloudera Security Bulletins

Cloudera Enterprise

This section lists security bulletins for vulnerabilities that potentially affect the entire Cloudera Enterprise product suite. Bulletins specific to a component, such as Cloudera Manager, Impala, or Spark, can be found in the sections that follow.

Potentially Sensitive Information in Cloudera Diagnostic Support Bundles

Cloudera Manager transmits certain diagnostic data (or "bundles") to Cloudera. These diagnostic bundles are used by the Cloudera support team to reproduce, debug, and address technical issues for customers.

Cloudera support discovered that potentially sensitive data may be included in diagnostic bundles and transmitted to Cloudera. Cloudera does not use this sensitive data for any purpose.

Cloudera has modified Cloudera Manager so that known sensitive data is redacted from the bundles before transmission to Cloudera. Work is in progress in Cloudera CDH components to remove logging and output of known potentially sensitive properties and configurations.
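Pattern-based redaction of this kind can be illustrated with a small sketch. The log line, property name, and sed pattern below are assumptions for illustration only, not Cloudera Manager's actual redaction rules:

```shell
# Hypothetical example: mask a password-like property value in a log line
# before it is included in a diagnostic bundle. Real redaction rules are
# configurable in Cloudera Manager; this pattern is illustrative only.
LINE='ldap_bind_pw=S3cret! host=db1.example.com'
REDACTED=$(printf '%s' "$LINE" | sed -E 's/(password|_pw)=[^ ]*/\1=REDACTED/g')
echo "$REDACTED"
```

The sensitive value is replaced before transmission while the rest of the line stays intact for debugging.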

See Cloudera Manager Release Notes, specifically, What's New in Cloudera Manager 5.9.0 for more information (scroll to Diagnostic Bundles). Also see Sensitive Data Redaction in the Cloudera Security Guide for more information about bundles and redaction.

Cloudera strives to establish and follow best practices for the protection of customer information. Cloudera continually reviews and improves security practices, infrastructure, and data-handling policies.

Products affected: Cloudera CDH and Enterprise Editions

Releases affected: All Cloudera CDH and Enterprise Edition releases lower than 5.9.0

Users affected: All users

Date/time of detection: June 20th, 2016

Severity (Low/Medium/High): Medium

Impact: Possible logging and transmission of sensitive data

CVE: CVE-2016-5724

Immediate action required: Upgrade to Cloudera CDH and Enterprise Editions 5.9

Addressed in release/refresh/patch: Cloudera CDH and Enterprise Editions 5.9 and higher

For updates about this issue, see the Cloudera Knowledge article, TSB 2016-166: Potentially Sensitive Information in Cloudera Diagnostic Support Bundles.

Apache Commons Collections Deserialization Vulnerability

Cloudera has learned of a potential security vulnerability in a third-party library called the Apache Commons Collections. This library is used in products distributed and supported by Cloudera (“Cloudera Products”), including core Apache Hadoop. The Apache Commons Collections library is also in widespread use beyond the Hadoop ecosystem. At this time, no specific attack vector for this vulnerability has been identified as present in Cloudera Products.

Out of an abundance of caution, we are currently incorporating a version of the Apache Commons Collections library containing a fix into the Cloudera Products. In most cases, this requires coordination with projects in the Apache community. One example of this work is tracked by HADOOP-12577.

The Apache Commons Collections potential security vulnerability is titled “Arbitrary remote code execution with InvokerTransformer” and is tracked by COLLECTIONS-580. MITRE has not issued a CVE, but related CVE-2015-4852 has been filed for the vulnerability. CERT has issued Vulnerability Note #576313 for this issue.

Cloudera Products affected: Cloudera Manager, Cloudera Navigator, Cloudera Director, CDH

Releases affected:CDH 5.5.0, CDH 5.4.8 and lower, Cloudera Manager 5.5.0, Cloudera Manager 5.4.8 and lower, Cloudera Navigator 2.4.0, Cloudera Navigator 2.3.8 and lower, Director 1.5.1 and lower

Users affected: All

Date/time of detection: Nov 7, 2015

Severity (Low/Medium/High): High

Impact: This potential vulnerability might enable an attacker to run arbitrary code from a remote machine without requiring authentication.

Immediate action required: Upgrade to the latest suitable version containing this fix when it is available.

Addressed in release/refresh/patch: Beginning with CDH 5.5.1, 5.4.9, and 5.3.9, Cloudera Manager 5.5.1, 5.4.9, and 5.3.9, Cloudera Navigator 2.4.1, 2.3.9 and 2.2.9, and Director 1.5.2, the new Apache Commons Collections library version is included in all Cloudera products.

Heartbleed Vulnerability in OpenSSL

The Heartbleed vulnerability is a serious vulnerability in OpenSSL as described at http://heartbleed.com/ (OpenSSL TLS heartbeat read overrun, CVE-2014-0160). Cloudera products do not ship with OpenSSL, but some components use this library. Customers using OpenSSL with Cloudera products need to update their OpenSSL library to one that doesn’t contain the vulnerability.

Products affected:
  • All versions of OpenSSL 1.0.1 prior to 1.0.1g
Components affected:
  • Hadoop Pipes uses OpenSSL.
  • If SSL encryption is enabled for Impala's RPC implementation (by setting --ssl_server_certificate). This applies to any of the three Impala daemon processes: impalad, catalogd, and statestored.
  • If HTTPS is enabled for Impala's debug web server pages (by setting --webserver_certificate_file). This applies to any of the three Impala daemon processes: impalad, catalogd, and statestored.
  • If HTTPS is used with Hue.
  • Cloudera Manager agents use OpenSSL when TLS is turned on.
Users affected:
  • All users of the above scenarios.

Severity: High (If using the scenarios above)

CVE: CVE-2014-0160

Immediate action required:
  • Ensure your Linux distribution version does not have the vulnerability.
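A quick first check is to inspect the OpenSSL version on each host. Note that distributions often backport the fix without changing the version string, so the package changelog is the authoritative check; package and changelog locations vary by distribution:

```shell
# Report the runtime OpenSSL version; unpatched 1.0.1 through 1.0.1f builds
# are vulnerable, and 1.0.1g or later contains the fix.
VER=$(openssl version 2>/dev/null || echo "openssl not installed")
echo "$VER"

# Distribution packages may carry the fix without bumping the version;
# on RPM-based systems, confirm via the package changelog:
#   rpm -q --changelog openssl | grep -i CVE-2014-0160
```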

“POODLE” Vulnerability on SSL/TLS enabled ports

The POODLE (Padding Oracle On Downgraded Legacy Encryption) attack, announced by Bodo Möller, Thai Duong, and Krzysztof Kotowicz at Google, forces the use of the obsolete SSLv3 protocol and then exploits a cryptographic flaw in SSLv3. The result is that an attacker on the same network as the victim can potentially decrypt parts of an otherwise encrypted channel.

SSLv3 has been obsolete, and known to have vulnerabilities, for many years now, but its retirement has been slow because of backward-compatibility concerns. SSLv3 has in the meantime been replaced by TLSv1, TLSv1.1, and TLSv1.2. Under normal circumstances, the strongest protocol version that both sides support is negotiated at the start of the connection. However, an attacker can introduce errors into this negotiation and force a fallback to the weakest protocol version -- SSLv3.

The only solution to the POODLE attack is to completely disable SSLv3. This requires changes across a wide variety of components of CDH, and in Cloudera Manager.

Products affected: Cloudera Manager and CDH.

Releases affected: All CDH and Cloudera Manager versions earlier than the versions listed below:
  • Cloudera Manager and CDH 5.2.1
  • Cloudera Manager and CDH 5.1.4
  • Cloudera Manager and CDH 5.0.5
  • CDH 4.7.1
  • Cloudera Manager 4.8.5

Users affected: All users

Date and time of detection: October 14th, 2014.

Severity (Low/Medium/High): Medium. NIST rates the severity at 4.3 out of 10.

Impact: Allows unauthorized disclosure of information; allows component impersonation.

CVE: CVE-2014-3566

Immediate action required: Upgrade CDH and Cloudera Manager as follows:
  • If you are running Cloudera Manager and CDH 5.2.0, upgrade to Cloudera Manager and CDH 5.2.1
  • If you are running Cloudera Manager and CDH 5.1.0 through 5.1.3, upgrade to Cloudera Manager and CDH 5.1.4
  • If you are running Cloudera Manager and CDH 5.0.0 through 5.0.4, upgrade to Cloudera Manager and CDH 5.0.5
  • If you are running a CDH version earlier than 4.7.1, upgrade to CDH 4.7.1
  • If you are running a Cloudera Manager version earlier than 4.8.5, upgrade to Cloudera Manager 4.8.5
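After upgrading, it is worth probing TLS-enabled ports to confirm that SSLv3 handshakes are refused. The host and port below are placeholders (7183 is Cloudera Manager's TLS web port); also note that newer OpenSSL builds no longer support the -ssl3 option at all, in which case the handshake cannot even be attempted:

```shell
# Attempt an SSLv3-only handshake; a patched service negotiates no cipher,
# so s_client reports "Cipher is (NONE)" or fails outright.
HOSTPORT=cm-host.example.com:7183   # placeholder: substitute your service
if openssl s_client -connect "$HOSTPORT" -ssl3 </dev/null 2>/dev/null \
     | grep -q 'Cipher is [^(]'; then
  RESULT="SSLv3 still accepted: vulnerable"
else
  RESULT="SSLv3 handshake refused (or host unreachable)"
fi
echo "$RESULT"
```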

Cloudera Data Science Workbench

This section lists the security bulletins that have been released for Cloudera Data Science Workbench.

Privilege Escalation and Database Exposure in Cloudera Data Science Workbench

Several web application vulnerabilities allow malicious authenticated Cloudera Data Science Workbench (CDSW) users to escalate privileges in CDSW. In combination, such users can exploit these vulnerabilities to gain root access to CDSW nodes, gain access to the CDSW database which includes Kerberos keytabs of CDSW users and bcrypt hashed passwords, and obtain other privileged information such as session tokens, invitations tokens, and environmental variables.

Products affected: Cloudera Data Science Workbench

Releases affected: Cloudera Data Science Workbench 1.0.0, 1.0.1, 1.1.0, 1.1.1

Users affected: All users of Cloudera Data Science Workbench 1.0.0, 1.0.1, 1.1.0, 1.1.1

Date/time of detection: September 1, 2017

Detected by: NCC Group

Severity (Low/Medium/High): High

Impact: Privilege escalation and database exposure.

CVE: CVE-2017-15536

Immediate action required: Upgrade to the latest version of Cloudera Data Science Workbench.

Addressed in release/refresh/patch: Cloudera Data Science Workbench 1.2.0 or higher.

Apache Hadoop

No security exposure due to CVE-2017-3162 for Cloudera Hadoop clusters

Information only. No action required. Out of an abundance of caution, CVE-2017-3162 was filed by the Apache Hadoop community to document that the servlet through which HDFS clients (in the CDH 5.x code base) browse the HDFS namespace accepts the NameNode address as a query parameter without validating it.

This benign exposure was discovered independently by Cloudera (as well as other members of the Hadoop community) during routine static source code analysis. It is considered benign because there are no known attack vectors for this vulnerability.

Products affected: N/A

Releases affected: CDH 5.x and prior.

Users affected: None

Severity (Low/Medium/High): None

Impact: No impact to Cloudera customers or others running Hadoop clusters.

CVE: CVE-2017-3162

Immediate action required: No action required.

Addressed in release/refresh/patch: Not applicable.

Cross-site scripting exposure (CVE-2017-3161) not an issue for Cloudera Hadoop

Information only: No action required. A vulnerability recently uncovered by the wider security community had already been caught and resolved by Cloudera.

Products affected: Hadoop

Releases affected: CDH prior to 5.2.6; specifically, the HDFS web UI would have been exposed to this vulnerability.

Users affected: None

Severity (Low/Medium/High): N/A

Impact: No impact to Cloudera customers or others running Hadoop clusters.

CVE: CVE-2017-3161

Immediate action required: No action required.

Addressed in release/refresh/patch: The vulnerability described by CVE-2017-3161 was previously caught and patched in the CDH code base back to 5.2.x. The following Cloudera Hadoop clusters are safe from this vulnerability.
  • CDH5.2.6
  • CDH5.3.4, CDH5.3.5, CDH5.3.6, CDH5.3.8, CDH5.3.9, CDH5.3.10
  • CDH5.4.3, CDH5.4.4, CDH5.4.5, CDH5.4.7, CDH5.4.8, CDH5.4.9, CDH5.4.10, CDH5.4.11
  • CDH5.5.0 and all higher releases

Apache YARN NodeManager Password Exposure

The YARN NodeManager in Apache Hadoop may leak the password for its credential store. This credential store is created by Cloudera Manager and contains sensitive information used by the NodeManager. Any container launched by that NodeManager can gain access to the password that protects the credential store.

Examples of sensitive information inside the credential store include a keystore password and an LDAP bind user password.

The credential store is also protected by Unix file permissions. When managed by Cloudera Manager, the credential store is readable only by the yarn user and the hadoop group. As a result, the scope of this leak is mitigated, making this a Low severity issue.
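On a NodeManager host, those file permissions can be verified from the shell. The path and the *.jceks glob below are assumptions about a typical Cloudera Manager layout; adjust them for your deployment:

```shell
# Locate the NodeManager credential store and report its mode and ownership;
# expect something like "640 yarn:hadoop" (path and glob are assumptions).
STORE=$(ls /var/run/cloudera-scm-agent/process/*NODEMANAGER*/*.jceks 2>/dev/null | head -1)
if [ -n "$STORE" ]; then
  OUT=$(stat -c '%a %U:%G %n' "$STORE")
else
  OUT="no credential store found on this host"
fi
echo "$OUT"
```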

Products affected: YARN

Releases affected:
  • CDH 5.4.0, 5.4.1, 5.4.2, 5.4.3, 5.4.4, 5.4.5, 5.4.7, 5.4.8, 5.4.9, 5.4.10
  • CDH 5.5.0, 5.5.1, 5.5.2, 5.5.3, 5.5.4
  • CDH 5.6.0, 5.6.1
  • CDH 5.7.0, 5.7.1, 5.7.2
  • CDH 5.8.0, 5.8.1

Users affected: Cloudera Manager users who configure YARN to connect to external services (such as LDAP) that require a password, or who have enabled TLS for YARN.

Date/time of detection: March 15, 2016

Detected by: Robert Kanter

Severity (Low/Medium/High): Low (The credential store itself has restrictive permissions.)

Impact: Potential sensitive data exposure

CVE: CVE-2016-3086

Immediate action required: Upgrade to one of the fixed releases listed below, or a higher release.

Addressed in release/refresh/patch: CDH 5.4.11, CDH 5.5.5, CDH 5.6.2, CDH 5.7.3, CDH 5.8.2

Short-Circuit Read Vulnerability

In HDFS short-circuit reads, a local user on an HDFS DataNode may be able to create a block token that grants unauthorized read access to arbitrary blocks by guessing certain fields in the token.

Products affected: HDFS

Releases affected:
  • CDH 5.0.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.5, 5.0.6
  • CDH 5.1.0, 5.1.2, 5.1.3, 5.1.4, 5.1.5
  • CDH 5.2.0, 5.2.1, 5.2.3, 5.2.4, 5.2.5, 5.2.6
  • CDH 5.3.0, 5.3.2, 5.3.3, 5.3.4, 5.3.5, 5.3.6, 5.3.8, 5.3.9, 5.3.10
  • CDH 5.4.0, 5.4.1, 5.4.3, 5.4.4, 5.4.5, 5.4.7, 5.4.8, 5.4.9, 5.4.10, 5.4.11
  • CDH 5.5.0, 5.5.1, 5.5.2, 5.5.4, 5.5.5, 5.5.6
  • CDH 5.6.0, 5.6.1

Users affected: All HDFS users

Detected by: This issue was reported by Kihwal Lee of Yahoo Inc.

Severity (Low/Medium/High): Medium

Impact: A local user may be able to gain unauthorized read access to block data.

CVE: CVE-2016-5001

Immediate action required: Upgrade to a fixed version.

Addressed in release/refresh/patch: 5.7.0 and higher, 5.8.0 and higher, 5.9.0 and higher.

For the latest update on this issue see the corresponding Knowledge article:

TSB 2016-157: Short-Circuit Read Vulnerability

Apache Hadoop Privilege Escalation Vulnerability

A remote user who can authenticate with the HDFS NameNode can possibly run arbitrary commands as the hdfs user.

See CVE-2016-5393 Apache Hadoop Privilege escalation vulnerability

Products affected: HDFS and YARN

Releases affected:
  • CDH 5.0.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.5, 5.0.6
  • CDH 5.1.0, 5.1.2, 5.1.3, 5.1.4, 5.1.5
  • CDH 5.2.0, 5.2.1, 5.2.3, 5.2.4, 5.2.5, 5.2.6
  • CDH 5.3.0, 5.3.2, 5.3.3, 5.3.4, 5.3.5, 5.3.6, 5.3.8, 5.3.9, 5.3.10
  • CDH 5.4.0, 5.4.1, 5.4.3, 5.4.4, 5.4.5, 5.4.7, 5.4.8, 5.4.9, 5.4.10
  • CDH 5.5.0, 5.5.1, 5.5.2, 5.5.4
  • CDH 5.6.0, 5.6.1
  • CDH 5.7.0, 5.7.1, 5.7.2
  • CDH 5.8.0

Users affected: All

Date/time of detection: July 26th, 2016

Severity (Low/Medium/High): High

Impact: A remote user who can authenticate with the HDFS NameNode can possibly run arbitrary commands with the same privileges as the HDFS service.

This vulnerability is critical because it is easy to exploit and compromises system-wide security: a remote user can potentially run any arbitrary command as the hdfs user, bypassing all Hadoop security. There is no mitigation for this vulnerability.

CVE: CVE-2016-5393

Immediate action required: Upgrade immediately.

Addressed in release/refresh/patch: CDH 5.4.11, CDH 5.5.5, CDH 5.7.3, CDH 5.8.2, CDH 5.9.0 and higher.

Encrypted MapReduce spill data on the local file system is vulnerable to unauthorized disclosure

MapReduce spills intermediate data to the local disk. The encryption key used to encrypt this spill data is stored in clear text on the local filesystem along with the encrypted data itself. A malicious user with access to the file with these credentials can load the tokens from the file, read the key, and then decrypt the spill data.

See the upstream announcement on the Mitre site.

Products affected: MapReduce

Releases affected:
  • CDH 5.2.0, 5.2.1, 5.2.3, 5.2.4, 5.2.5, 5.2.6
  • CDH 5.3.0, 5.3.2, 5.3.3, 5.3.4, 5.3.5, 5.3.6, 5.3.8, 5.3.9
  • CDH 5.4.0, 5.4.1, 5.4.3, 5.4.4, 5.4.5, 5.4.7, 5.4.8, 5.4.9
  • CDH 5.5.0, 5.5.1, 5.5.2

Users affected: Users who have enabled encryption of MapReduce intermediate/spilled data to the local filesystem

Severity (Low/Medium/High): High

CVE: CVE-2015-1776

Addressed in release/refresh/patch: CDH 5.3.10, CDH 5.4.10, CDH 5.5.4; CDH 5.6.0 and higher

Immediate action required: Upgrade to one of the above releases if you use spill data encryption. Note that with this fix, ApplicationMaster failures are no longer tolerated when spill data is encrypted; after upgrading, an individual MapReduce job can fail if its ApplicationMaster goes down.

Critical Security Related Files in YARN NodeManager Configuration Directories Accessible to Any User

When Cloudera Manager starts a YARN NodeManager, it makes all files in its configuration directory (typically /var/run/cloudera-scm-agent/process) readable by all users. This includes the file containing the Kerberos keytabs (yarn.keytab) and the file containing passwords for the SSL keystore (ssl-server.xml).

Global read permissions must be removed on the NodeManager’s security-related files.
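A host can be audited for this problem with a one-off find over the NodeManager's current process directory (the path is the typical Cloudera Manager default mentioned above):

```shell
# List security-related files in the newest NodeManager process directory
# that are readable by "other" (-perm -004 matches world-readable files).
DIR=$(ls -dt /var/run/cloudera-scm-agent/process/*NODEMANAGER* 2>/dev/null | head -1)
if [ -n "$DIR" ]; then
  FOUND=$(find "$DIR" \( -name yarn.keytab -o -name ssl-server.xml \) -perm -004 2>/dev/null)
  MSG=${FOUND:-"no world-readable security files under $DIR"}
else
  MSG="no NodeManager process directory on this host"
fi
echo "$MSG"
```

Any file the audit lists should stop being world-readable after the fix is applied and credentials are regenerated.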

Products affected: Cloudera Manager

Releases affected: All releases of Cloudera Manager 4.0 and higher.

Users affected: Customers who are using YARN in environments where Kerberos or SSL is enabled.

Date/time of detection: March 8, 2015

Severity (Low/Medium/High): High

Impact: Any user who can log in to a host where the YARN NodeManager is running can get access to the keytab file, use it to authenticate to the cluster, and perform unauthorized operations. If SSL is enabled, the user can also decrypt data transmitted over the network.

CVE: CVE-2015-2263

Immediate action required:
  1. If you are running YARN with Kerberos/SSL with Cloudera Manager 5.x, upgrade to the maintenance release with the security fix. If you are running YARN with Kerberos with Cloudera Manager 4.x, upgrade to any Cloudera Manager 5.x release with the security fix.
  2. Delete all “yarn” and “HTTP” principals from KDC/Active Directory. After deleting them, regenerate them using Cloudera Manager.
  3. Regenerate SSL keystores that you are using with the YARN service, using a new password.

ETA for resolution: Patches are available immediately with the release of this TSB.

Addressed in release/refresh/patch: Cloudera Manager releases 5.0.6, 5.1.5, 5.2.5, 5.3.3, and 5.4.0 have the fix for this bug.

For further updates on this issue see the corresponding Knowledge article:

Critical Security related Files in the YARN NodeManager Configuration Directories can be Accessed by any User

Apache Hadoop Distributed Cache Vulnerability

The Distributed Cache Vulnerability allows a malicious cluster user to expose private files owned by the user running the YARN NodeManager process. The malicious user can create a public tar archive containing a symbolic link to a local file on the host running the YARN NodeManager process.

Products affected: YARN in CDH 5.

Releases affected: All CDH and Cloudera Manager versions earlier than the versions listed below:
  • Cloudera Manager and CDH 5.2.1
  • Cloudera Manager and CDH 5.1.4
  • Cloudera Manager and CDH 5.0.5

Users affected: Users running the YARN NodeManager daemon with Kerberos authentication.

Severity (Low/Medium/High): High.

Impact: Allows unauthorized disclosure of information.

CVE: CVE-2014-3627

Immediate action required: Upgrade CDH and Cloudera Manager as follows:
  • If you are running Cloudera Manager and CDH 5.2.0, upgrade to Cloudera Manager and CDH 5.2.1
  • If you are running Cloudera Manager and CDH 5.1.0 through 5.1.3, upgrade to Cloudera Manager and CDH 5.1.4
  • If you are running Cloudera Manager and CDH 5.0.0 through 5.0.4, upgrade to Cloudera Manager and CDH 5.0.5

Some DataNode Admin Commands Do Not Check If Caller Is An HDFS Admin

Three HDFS admin commands—refreshNamenodes, deleteBlockPool, and shutdownDatanode—lack proper privilege checks in Apache Hadoop 0.23.x prior to 0.23.11 and 2.x prior to 2.4.1, allowing arbitrary users to make DataNodes unnecessarily refresh their federated NameNode configs, delete inactive block pools, or shut down. The shutdownDatanode command was first introduced in 2.4.0 and refreshNamenodes and deleteBlockPool were added in 0.23.0. The deleteBlockPool command does not actually remove any underlying data from affected DataNodes, so there is no data loss possibility due to this vulnerability, although cluster operations can be severely disrupted.

Products affected:
  • Hadoop HDFS
Releases affected:
  • CDH 5.0.0 and CDH 5.0.1
Users affected:
  • All users running an HDFS cluster configured with Kerberos security
Date/time of detection:
  • April 30, 2014
Severity: Medium

Impact: Through HDFS admin command-line tools, non-admin users can shut down DataNodes or force them to perform unnecessary operations.

CVE: CVE-2014-0229

Immediate action required: Upgrade to CDH 5.0.2 or higher.

JobHistory Server Does Not Enforce ACLs When Web Authentication is Enabled

The JobHistory Server does not enforce job ACLs when web authentication is enabled. This means that any user can see details of all jobs. This only affects users who are using MRv2/YARN with HTTP authentication enabled.

Products affected:
  • Hadoop
Releases affected:
  • All versions of CDH 4.5.x up to 4.5.0
  • All versions of CDH 4.4.x up to 4.4.0
  • All versions of CDH 4.3.x up to 4.3.1
  • All versions of CDH 4.2.x up to 4.2.2
  • All versions of CDH 4.1.x up to 4.1.5
  • All versions of CDH 4.0.x
  • CDH 5.0.0 Beta 1
Users affected:
  • Users of YARN who have web authentication enabled.

Date/time of detection: October 14, 2013

Severity: Low

Impact: Low

CVE: CVE-2013-6446

Immediate action required:
  • None, if you are not using MRv2/YARN with HTTP authentication.
  • If you are using MRv2/YARN with HTTP authentication, upgrade to CDH 4.6.0 or CDH 5.0.0 Beta 2 or contact Cloudera for a patch.

ETA for resolution: Fixed in CDH 5.0.0 Beta 2 released on 2/10/2014 and CDH 4.6.0 released on 2/27/2014.

Addressed in release/refresh/patch: CDH 4.6.0 and CDH 5.0.0 Beta 2.

Verification:

This vulnerability affects the JobHistory Server Web Services; it does not affect the JobHistory Server Web UI.

To verify that the vulnerability has been fixed, do the following steps:
  1. Create two non-admin users: 'A' and 'B'
  2. Submit a MapReduce job as user 'A'. For example:
    $ hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.0.0-SNAPSHOT.jar pi 2 2
  3. From the output of the above submission, copy the job ID, for example: job_1389847214537_0001
  4. With a browser logged in to the JobHistory Server Web UI as user 'B', access the following URL:

    http://<JHS_HOST>:19888/ws/v1/history/mapreduce/jobs/job_1389847214537_0001

If the vulnerability has been fixed, you should get an HTTP UNAUTHORIZED response; if the vulnerability has not been fixed, you should get an XML output with basic information about the job.
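The same check can be scripted with curl instead of a browser; the --negotiate flag performs SPNEGO authentication with the current Kerberos ticket, so obtain a ticket as user 'B' first (e.g. run kinit B). The host name below is a placeholder:

```shell
# Request user A's job details while authenticated as user B.
# 401 means job ACLs are enforced (fixed); 200 with job XML means vulnerable.
JHS_HOST=jhs.example.com   # placeholder: your JobHistory Server host
CODE=$(curl --negotiate -u : -s -o /dev/null -w '%{http_code}' \
  "http://${JHS_HOST}:19888/ws/v1/history/mapreduce/jobs/job_1389847214537_0001") || CODE=000
echo "HTTP $CODE"
```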

Apache Hadoop and Apache HBase "Man-in-the-Middle" Vulnerability

The Apache Hadoop and HBase RPC protocols are intended to provide bi-directional authentication between clients and servers. However, a malicious server or network attacker can unilaterally disable these authentication checks. This allows for potential reduction in the configured quality of protection of the RPC traffic, and privilege escalation if authentication credentials are passed over RPC.

Products affected:
  • Hadoop
  • HBase
Releases affected:
  • All versions of CDH 4.3.x prior to 4.3.1
  • All versions of CDH 4.2.x prior to 4.2.2
  • All versions of CDH 4.1.x prior to 4.1.5
  • All versions of CDH 4.0.x
Users affected:
  • Users of HDFS who have enabled Hadoop Kerberos security features and HDFS data encryption features.
  • Users of MapReduce or YARN who have enabled Hadoop Kerberos security features.
  • Users of HBase who have enabled HBase Kerberos security features and who run HBase co-located on a cluster with MapReduce or YARN.

Date/time of detection: June 10th, 2013

Severity: Severe

Impact:

RPC traffic from Hadoop clients, potentially including authentication credentials, may be intercepted by any user who can submit jobs to Hadoop. RPC traffic from HBase clients to Region Servers may be intercepted by any user who can submit jobs to Hadoop.

CVE: CVE-2013-2192 (Hadoop) and CVE-2013-2193 (HBase)

Immediate action required:
  • Users of CDH 4.3.0 should immediately upgrade to CDH 4.3.1 or higher.

  • Users of CDH 4.2.x should immediately upgrade to CDH 4.2.2 or higher.

  • Users of CDH 4.1.x should immediately upgrade to CDH 4.1.5 or higher.

ETA for resolution: August 23, 2013

Addressed in release/refresh/patch: CDH 4.1.5, CDH 4.2.2, and CDH 4.3.1.

Verification:

To verify that you are not affected by this vulnerability, ensure that you are running a version of CDH at or higher than the aforementioned versions. To verify that this is true, proceed as follows.

On all of the cluster hosts, run one of the following commands:
  • On RPM-based systems (RHEL, SLES): rpm -qi hadoop | grep -i version
  • On Debian-based systems: dpkg -s hadoop | grep -i version
Make sure that the version listed is greater than or equal to 4.1.5 for the 4.1.x line, 4.2.2 for the 4.2.x line, and 4.3.1 for the 4.3.x line. The 4.4.x and higher lines of releases are unaffected.

DataNode Client Authentication Disabled After NameNode Restart or HA Enable

Products affected: HDFS

Releases affected: CDH 4.0.0

Users affected: Users of HDFS who have enabled HDFS Kerberos security features.

Date vulnerability discovered: June 26, 2012

Date vulnerability analysis and validation complete: June 29, 2012

Severity: Severe

Impact: Malicious clients may gain write access to data for which they have read-only permission, or gain read access to any data blocks whose IDs they can determine.

Mechanism: When Hadoop security features are enabled, clients authenticate to DataNodes using BlockTokens issued by the NameNode to the client. The DataNodes are able to verify the validity of a BlockToken, and will reject BlockTokens that were not issued by the NameNode. The DataNode determines whether or not it should check for BlockTokens when it registers with the NameNode.

Due to a bug in the DataNode/NameNode registration process, a DataNode which registers more than once for the same block pool will conclude that it thereafter no longer needs to check for BlockTokens sent by clients. That is, the client will continue to send BlockTokens as part of its communication with DataNodes, but the DataNodes will not check the validity of the tokens. A DataNode will register more than once for the same block pool whenever the NameNode restarts, or when HA is enabled.

Immediate action required:

  1. Understand the vulnerability introduced by restarting the NameNode, or enabling HA.
  2. Upgrade to CDH 4.0.1 as soon as it becomes available.

Resolution: July 6, 2012

Addressed in release/refresh/patch: CDH 4.0.1. This release addresses the vulnerability identified by CVE-2012-3376.

Verification: On the NameNode run one of the following:

  • yum list hadoop-hdfs-namenode on RPM-based systems
  • dpkg -l | grep hadoop-hdfs-namenode on Debian-based systems
  • zypper info hadoop-hdfs-namenode for SLES11

On all DataNodes run one of the following:

  • yum list hadoop-hdfs-datanode on RPM-based systems
  • dpkg -l | grep hadoop-hdfs-datanode on Debian-based systems
  • zypper info hadoop-hdfs-datanode for SLES11

The reported version should be >= 2.0.0+91-1.cdh4.0.1

Several Authentication Token Types Use Secret Key of Insufficient Length

Products Affected: HDFS, MapReduce, YARN, Hive, HBase

Releases Affected: CDH4.0.x, and all CDH3 versions between CDH3 Beta 3 and CDH3u5 refresh 1, if you use MapReduce, HDFS, HBase, or YARN.

Users Affected: Users who have enabled Hadoop Kerberos security features.

Date/Time of Announcement: 10/12/2012 2:00pm PDT (upstream)

Verification: Verified upstream

Severity: High

Impact: Malicious users may crack the secret keys used to sign security tokens, granting access to modify data stored in HDFS, HBase, or Hive without authorization. HDFS Transport Encryption may also be brute-forced.

Mechanism: This vulnerability impacts a piece of security infrastructure in Hadoop Common, which affects the security of authentication tokens used by HDFS, MapReduce, YARN, HBase, and Hive.

Several components in Hadoop issue authentication tokens to clients in order to authenticate and authorize later access to a secured resource. These tokens consist of an identifier and a signature generated using the well-known HMAC scheme. The HMAC algorithm is based on a secret key shared between multiple server-side components.

For example, the HDFS NameNode issues block access tokens, which authorize a client to access a particular block with either read or write access. These tokens are then verified using a rotating secret key, which is shared between the NameNode and DataNodes. Similarly, MapReduce issues job-specific tokens, which allow reducer tasks to retrieve map output. HBase similarly issues authentication tokens to MapReduce tasks, allowing those tasks to access HBase data. Hive uses the same token scheme to authenticate access from MapReduce tasks to the Hive metastore.

The HMAC scheme relies on a shared secret key unknown to the client. In currently released versions of Hadoop, this key was created with an insufficient length (20 bits), which allows an attacker to obtain the secret key by brute force. This may allow an attacker to perform several actions without authorization, including accessing other users' data.
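The weakness can be illustrated with openssl's HMAC support. The token fields below are hypothetical, and for speed this sketch brute-forces an 8-bit key; the principle is identical for a 20-bit key, which allows only about one million candidates:

```shell
# A token is an identifier plus HMAC(secret, identifier). If the secret is
# short enough, an attacker who observes one valid (identifier, signature)
# pair can enumerate every candidate key offline until the signature matches.
TOKEN_ID="blockId=42;user=alice;access=READ"   # hypothetical token identifier
SECRET=$(printf '%02x' 173)                    # the server's (too short) key
SIG=$(printf '%s' "$TOKEN_ID" | openssl dgst -sha1 -hmac "$SECRET" | awk '{print $NF}')

for k in $(seq 0 255); do
  CAND=$(printf '%02x' "$k")
  TRY=$(printf '%s' "$TOKEN_ID" | openssl dgst -sha1 -hmac "$CAND" | awk '{print $NF}')
  if [ "$TRY" = "$SIG" ]; then
    echo "recovered key: $CAND"
    break
  fi
done
```

Once the key is recovered, the attacker can sign arbitrary token identifiers, which is why the remediation is a longer secret key rather than a change to the HMAC scheme itself.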

Immediate action required: If Security is enabled, upgrade to the latest CDH release.

ETA for resolution: As of 10/12/2012, this is patched in CDH4.1.0 and CDH3u5 refresh 2. Both are available now.

Addressed in release/refresh/patch: CDH4.1.0 and CDH3u5 refresh 2

Details: CDH Downloads

MapReduce with Security

Products affected: MapReduce

Releases affected: Hadoop 1.0.1 and below, Hadoop 0.23, CDH3u0-CDH3u2, CDH3u3 containing the hadoop-0.20-sbin package, version 0.20.2+923.195 and below.

Users affected: Users who have enabled Hadoop Kerberos/MapReduce security features.

Severity: Critical

Impact: Vulnerability allows an authenticated malicious user to impersonate any other user on the cluster.

Immediate action required: Upgrade the hadoop-0.20-sbin package to version 0.20.2+923.197 or higher on all TaskTrackers to address the vulnerability. Upgrading hadoop-0.20-sbin causes an upgrade of several related (but unchanged) hadoop packages. If using Cloudera Manager versions 3.7.3 and below, you will also need to upgrade to Cloudera Manager 3.7.4 or higher before you can successfully run jobs with Kerberos enabled after upgrading the hadoop-0.20-sbin package.

Resolution: 3/21/2012

Addressed in release/refresh/patch: hadoop-0.20-sbin package, version 0.20.2+923.197. This release addresses the vulnerability identified by CVE-2012-1574.

Remediation verification: On all TaskTrackers run one of the following:

  • yum list hadoop-0.20-sbin on RPM-based systems
  • dpkg -l | grep hadoop-0.20-sbin on Debian-based systems
  • zypper info hadoop-0.20-sbin for SLES11

The reported version should be >= 0.20.2+923.197.

If you are a Cloudera Enterprise customer and have further questions or need assistance, log a ticket with Cloudera Support through http://support.cloudera.com.

Apache HBase

This section lists the security bulletins that have been released for Apache HBase.

HBase Metadata in ZooKeeper Can Lack Proper Authorization Controls

In certain circumstances, HBase does not properly set up access control in ZooKeeper. As a result, any user can modify this metadata and perform attacks, including denial of service, or cause data loss in a replica cluster. Clusters configured using Cloudera Manager are not vulnerable.

Products affected: HBase

Releases affected: All CDH 4 and CDH 5 versions prior to 4.7.2, 5.0.7, 5.1.6, 5.2.6, 5.3.4, 5.4.3

Users affected: HBase users with security set up to use Kerberos

Date/time of detection: May 15, 2015

Severity (Low/Medium/High): High

Impact: An attacker could cause potential data loss in a replica cluster, or denial of service.

CVE: CVE-2015-1836

Immediate action required: To determine if your cluster is affected by this problem, open a ZooKeeper shell using hbase zkcli and check the permission on the /hbase znode, using getAcl /hbase.

If the output reads 'world,'anyone: cdrwa, any unauthenticated user can delete or modify HBase znodes.
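The check can be scripted against the getAcl output. The helper below is a sketch (not from the bulletin); it assumes the zkcli getAcl format that prints each ACL id on one line and its permissions on the next, so line pairs are joined before matching:

```shell
# Illustrative helper: reads getAcl output on stdin and succeeds when
# the anonymous "world" scheme holds full cdrwa permissions. getAcl
# prints each ACL id and its permissions on consecutive lines, so the
# pairs are joined with paste before matching.
acl_vulnerable() {
  paste - - | grep -q "'world,'anyone.*cdrwa"
}

# Usage sketch against a live cluster:
#   hbase zkcli getAcl /hbase 2>/dev/null | acl_vulnerable \
#     && echo "open ACL on /hbase"
```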

To manually fix the problem:
  1. Change the configuration to use hbase.zookeeper.client.keytab.file on Master and RegionServers.
    • Edit hbase-site.xml (which should be in /etc/hbase/) and add:
      <property>
        <name>hbase.zookeeper.client.keytab.file</name>
        <value>hbase.keytab</value>
      </property>
  2. Do a rolling restart of HBase (Master and RegionServers), and wait until it has completed.
  3. To manually fix the ACLs, from a zkcli session running as the hbase user, set the ACLs so that world has only read access (where applicable) and sasl/hbase has cdrwa.

    (Some znodes in the list might not be present in your setup, so ignore the "node not found" exceptions.)

    $ hbase zkcli
    setAcl /hbase world:anyone:r,sasl:hbase:cdrwa
    setAcl /hbase/backup-masters sasl:hbase:cdrwa
    setAcl /hbase/draining sasl:hbase:cdrwa
    setAcl /hbase/flush-table-proc sasl:hbase:cdrwa
    setAcl /hbase/hbaseid world:anyone:r,sasl:hbase:cdrwa
    setAcl /hbase/master world:anyone:r,sasl:hbase:cdrwa
    setAcl /hbase/meta-region-server world:anyone:r,sasl:hbase:cdrwa
    setAcl /hbase/namespace sasl:hbase:cdrwa
    setAcl /hbase/online-snapshot sasl:hbase:cdrwa
    setAcl /hbase/region-in-transition sasl:hbase:cdrwa
    setAcl /hbase/recovering-regions sasl:hbase:cdrwa
    setAcl /hbase/replication sasl:hbase:cdrwa
    setAcl /hbase/rs sasl:hbase:cdrwa
    setAcl /hbase/running sasl:hbase:cdrwa
    setAcl /hbase/splitWAL sasl:hbase:cdrwa
    setAcl /hbase/table sasl:hbase:cdrwa
    setAcl /hbase/table-lock sasl:hbase:cdrwa
    setAcl /hbase/tokenauth sasl:hbase:cdrwa
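The sasl-only znodes above can also be handled in a loop. This is an optional sketch; the generator function is illustrative and assumes hbase zkcli accepts commands on standard input:

```shell
# Illustrative generator for the sasl-only setAcl commands. Znodes that
# also grant world read access (/hbase, hbaseid, master,
# meta-region-server) still need the longer ACLs shown above.
gen_setacl() {
  for znode in backup-masters draining flush-table-proc namespace \
      online-snapshot region-in-transition recovering-regions \
      replication rs running splitWAL table table-lock tokenauth; do
    echo "setAcl /hbase/$znode sasl:hbase:cdrwa"
  done
}

# Usage sketch: gen_setacl | hbase zkcli
```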

Release-based solutions are currently under investigation.

Addressed in release/refresh/patch: An update will be provided when solutions are in place.

For more updates on this issue, see the corresponding Knowledge article:

TSB 2015-65: HBase Metadata in ZooKeeper can lack proper Authorization Controls

Apache Hadoop and Apache HBase "Man-in-the-Middle" Vulnerability

The Apache Hadoop and HBase RPC protocols are intended to provide bi-directional authentication between clients and servers. However, a malicious server or network attacker can unilaterally disable these authentication checks. This allows for potential reduction in the configured quality of protection of the RPC traffic, and privilege escalation if authentication credentials are passed over RPC.

Products affected:
  • Hadoop
  • HBase
Releases affected:
  • All versions of CDH 4.3.x prior to 4.3.1
  • All versions of CDH 4.2.x prior to 4.2.2
  • All versions of CDH 4.1.x prior to 4.1.5
  • All versions of CDH 4.0.x
Users affected:
  • Users of HDFS who have enabled Hadoop Kerberos security features and HDFS data encryption features.
  • Users of MapReduce or YARN who have enabled Hadoop Kerberos security features.
  • Users of HBase who have enabled HBase Kerberos security features and who run HBase co-located on a cluster with MapReduce or YARN.

Date/time of detection: June 10th, 2013

Severity: Severe

Impact:

RPC traffic from Hadoop clients, potentially including authentication credentials, may be intercepted by any user who can submit jobs to Hadoop. RPC traffic from HBase clients to Region Servers may be intercepted by any user who can submit jobs to Hadoop.

CVE: CVE-2013-2192 (Hadoop) and CVE-2013-2193 (HBase)

Immediate action required:
  • Users of CDH 4.3.0 should immediately upgrade to CDH 4.3.1 or higher.

  • Users of CDH 4.2.x should immediately upgrade to CDH 4.2.2 or higher.

  • Users of CDH 4.1.x should immediately upgrade to CDH 4.1.5 or higher.

ETA for resolution: August 23, 2013

Addressed in release/refresh/patch: CDH 4.1.5, CDH 4.2.2, and CDH 4.3.1.

Verification:

To verify that you are not affected by this vulnerability, confirm that you are running a CDH version at or above the versions listed above, as follows.

On all of the cluster hosts, run one of the following commands:
  • On RPM-based systems (RHEL, SLES):
    rpm -qi hadoop | grep -i version
  • On Debian-based systems:
    dpkg -s hadoop | grep -i version
Make sure that the version listed is greater than or equal to 4.1.5 for the 4.1.x line, 4.2.2 for the 4.2.x line, and 4.3.1 for the 4.3.x line. The 4.4.x and higher lines of releases are unaffected.
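The line-to-minimum mapping can be expressed as a small helper. This is a sketch; the function name is illustrative:

```shell
# Illustrative mapping from a reported CDH version to the minimum fixed
# release on its line, per this bulletin.
min_fixed() {
  case "$1" in
    4.0.*) echo "upgrade required (no fixed 4.0.x release)" ;;
    4.1.*) echo "4.1.5" ;;
    4.2.*) echo "4.2.2" ;;
    4.3.*) echo "4.3.1" ;;
    *)     echo "unaffected" ;;
  esac
}
```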

Apache Hive

Apache Hive SSL Vulnerability Bug Disclosure

If you use Cloudera Hive JDBC drivers to connect your applications with HiveServer2, you are not affected if:
  • SSL is not turned on, or
  • SSL is turned on but only non-self-signed certificates are used.

If neither of the above statements describes your deployment, please read on.

In CDH 5.2 and later releases, CVE-2016-3083, the Apache Hive SSL vulnerability, impacts applications and tools that use:

  • Apache JDBC driver with SSL enabled, or
  • Cloudera Hive JDBC drivers with self-signed certificates and SSL enabled

    The certificate must be self-signed. Deployments using a certificate signed by a trusted (or untrusted) Certificate Authority (CA) are not impacted by this vulnerability.

Cloudera does not recommend the use of self-signed certificates.

CVE-2016-3083 is fixed by HIVE-13390 and is documented in the Apache community as follows:

"Apache Hive (JDBC + HiveServer2) implements SSL for plain TCP and HTTP connections (it supports both transport modes). While validating the server's certificate during the connection setup, the client doesn't seem to be verifying the common name attribute of the certificate. In this way, if a JDBC client sends an SSL request to server abc.example.com, and the server responds with a valid certificate (certified by CA) but issued to xyz.example.com, the client will accept that as a valid certificate and the SSL handshake will go through."

This means that it would be possible to set up a man-in-the-middle attack to intercept all SSL-protected JDBC communication.

CDH Hive users have the option of deploying either the Apache Hive JDBC driver or the Cloudera Hive JDBC driver that is distributed by Cloudera for use with their JDBC applications. Traditionally, Cloudera has strongly recommended use of the Cloudera Hive JDBC driver — and offers limited support for the Apache Hive JDBC driver. The JDBC jars in the CLASSPATH environment variable can be examined to determine which JDBC driver is in use. If the hive-jdbc-1.1.0-cdh<CDH_VERSION>.jar is included in the CLASSPATH, the Apache JDBC driver is being used. If the HiveJDBC4.jar or the HiveJDBC41.jar is in the CLASSPATH, that indicates the Cloudera Hive JDBC driver is being used.
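The CLASSPATH inspection described above can be reduced to a small classifier. This is a sketch (the helper name is illustrative; the jar names are the ones cited above):

```shell
# Illustrative classifier based on the jar names cited above:
# HiveJDBC4.jar / HiveJDBC41.jar indicate the Cloudera driver, and
# hive-jdbc-<version>.jar indicates the Apache driver.
driver_in_use() {
  case "$1" in
    *HiveJDBC4*.jar*)  echo "Cloudera Hive JDBC driver" ;;
    *hive-jdbc-*.jar*) echo "Apache Hive JDBC driver" ;;
    *)                 echo "unknown" ;;
  esac
}

driver_in_use "/opt/jars/HiveJDBC41.jar:/opt/jars/log4j.jar"  # prints "Cloudera Hive JDBC driver"
```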

JDBC drivers can also be used in an embedded mode. For example, when connecting to HiveServer2 by way of tools such as Beeline, the JDBC Client is invoked internally over the Thrift API. The JDBC driver in use by Beeline can also be determined by examining the driver version information printed after the connection is established.

If the output shows:

  • Hive JDBC (version 1.1.0-cdh<CDH_VERSION>), the Apache JDBC driver is being used.
  • Driver: HiveJDBC (version 02.05.18.1050), the Cloudera Hive JDBC Driver is being used.
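The same classification can be applied to the driver-version line Beeline prints after connecting. A sketch, with an illustrative helper name and the banner strings cited above:

```shell
# Illustrative classifier for the driver-version line Beeline prints
# after a connection is established.
classify_banner() {
  case "$1" in
    *"Driver: HiveJDBC"*)   echo "Cloudera Hive JDBC driver" ;;
    *"Hive JDBC (version"*) echo "Apache Hive JDBC driver" ;;
    *)                      echo "unknown" ;;
  esac
}
```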
Enabling SSL is controlled by the following configuration parameter in Hive:

hive.server2.use.SSL=true

Or by the deprecated version of the configuration parameter:

hive.server2.enable.SSL=true

This information can be used to decide whether a tool or application is impacted by this vulnerability.
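Whether a deployment has SSL on can be spot-checked by grepping hive-site.xml for either parameter. A sketch; the helper is illustrative and assumes the <name> and <value> elements sit on adjacent lines, as in typical Cloudera-managed configs:

```shell
# Illustrative check: succeeds when either SSL parameter (current or
# deprecated name) is set to true. Assumes <name> and <value> appear
# on adjacent lines in the file.
ssl_enabled() {
  grep -A1 -E 'hive\.server2\.(use|enable)\.SSL' "$1" \
    | grep -qi '<value>true</value>'
}

# Usage sketch (path is an example):
#   ssl_enabled /etc/hive/conf/hive-site.xml && echo "SSL on"
```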

For Cloudera Hive JDBC drivers with self-signed certificates and SSL enabled: Generate non-self-signed certificates according to the following documentation: https://www.cloudera.com/documentation/enterprise/latest/topics/cm_sg_create_deploy_certs.html

For Apache JDBC drivers with SSL enabled: You can switch to the Cloudera Hive JDBC driver. Note that the Cloudera Hive JDBC driver only displays query results and skips informational messages such as those logged by MapReduce jobs (which are invoked as part of executing the JDBC command). For example:

INFO : Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 1
INFO : 2017-06-06 14:19:41,115 Stage-1 map = 0%, reduce = 0%
INFO : 2017-06-06 14:19:48,427 Stage-1 map = 100%, reduce = 0%, Cumulative CPU 1.87 sec
INFO : 2017-06-06 14:19:55,845 Stage-1 map = 100%, reduce = 100%, Cumulative CPU 3.75 sec
INFO : MapReduce Total cumulative CPU time: 3 seconds 750 msec
INFO : Ended Job = job_1496750846200_0001
INFO : MapReduce Jobs Launched: 
INFO : Stage-Stage-1: Map: 1 Reduce: 1 Cumulative CPU: 3.75 sec HDFS Read: 31539 HDFS Write: 3 SUCCESS
INFO : Total MapReduce CPU Time Spent: 3 seconds 750 msec
      

The steps required to switch to using the Cloudera Hive JDBC driver for Beeline are:

  1. Download the latest Cloudera Hive JDBC driver from: https://www.cloudera.com/downloads/connectors/hive/jdbc/2-5-18.html
  2. Unzip the archive:

    unzip Cloudera_HiveJDBC41_2.5.18.1050.zip
              
  3. Add the HiveJDBC41.jar to the beginning of the CLASSPATH:

    export HIVE_CLASSPATH=/root/HiveJDBC41.jar:$HIVE_CLASSPATH
              
  4. Execute Beeline, but change the connection URL according to the Cloudera driver documentation at the following location: http://www.cloudera.com/documentation/other/connectors/hive-jdbc/latest/Cloudera-JDBC-Driver-for-Apache-Hive-Install-Guide.pdf
  5. Confirm the change by checking the driver version when connecting to HiveServer2 with Beeline:
    Connecting to jdbc:hive2://<HOST>:10000
    Connected to: Apache Hive (version 1.1.0-cdh<CDH_VERSION>)
    Driver: HiveJDBC (version 02.05.18.1050)
              
  6. The following error message, if displayed, can be ignored:
    Error: [Cloudera] [JDBC] (11975) Unsupported transaction isolation
    level: 4. (state=HY000,code=11975)
              

For other third-party tools and applications, replace the Apache JDBC driver as follows:

  1. Add the HiveJDBC41.jar to the beginning of the CLASSPATH for the application:

    export CLASSPATH=/root/HiveJDBC41.jar:$CLASSPATH
              
  2. Change the JDBC connection URL according to the Cloudera driver documentation located at:

    http://www.cloudera.com/documentation/other/connectors/hive-jdbc/latest/Cloudera-JDBC-Driver-for-Apache-Hive-Install-Guide.pdf

Products affected: Hive

Releases affected:
  • CDH 5.2.0, 5.2.1, 5.2.3, 5.2.4, 5.2.5, 5.2.6
  • CDH 5.3.0, 5.3.1, 5.3.2, 5.3.3, 5.3.4, 5.3.5, 5.3.6, 5.3.8, 5.3.9, 5.3.10
  • CDH 5.4.0, 5.4.1, 5.4.2, 5.4.3, 5.4.4, 5.4.5, 5.4.7, 5.4.8, 5.4.9, 5.4.10, 5.4.11
  • CDH 5.5.0, 5.5.1, 5.5.2, 5.5.4, 5.5.5, 5.5.6
  • CDH 5.6.0, 5.6.1
  • CDH 5.7.0, 5.7.1, 5.7.2, 5.7.3, 5.7.4, 5.7.5, 5.7.6
  • CDH 5.8.0, 5.8.2, 5.8.3, 5.8.4, 5.8.5
  • CDH 5.9.0, 5.9.1, 5.9.2
  • CDH 5.10.0, 5.10.1
  • CDH 5.11.0

Users affected: JDBC (Apache Hive JDBC driver using SSL or Cloudera Hive JDBC driver with self-signed certificates) and HiveServer2 users

Detected by: Branden Crawford from Inteco Systems Limited

Severity (Low/Medium/High): Medium

Impact: As discussed above.

CVE: CVE-2016-3083

Immediate action required:
  • For non-Beeline clients (including third-party tools or applications): If Apache Hive JDBC drivers are being used, switch to Cloudera JDBC drivers (and use externally signed CA certs as always recommended for production use).
  • For Beeline (or Beeline-based clients, e.g. Oozie): Update Beeline’s embedded Apache JDBC driver to Cloudera JDBC driver as shown above. Alternatively, if these JDBC-based clients are invoked within a CDH cluster, upgrade the cluster to a release where the issue has been addressed.

Addressed in release/refresh/patch: CDH 5.11.1 and later

For the latest update on this issue, see the corresponding Knowledge article:

TSB 2017-238: Hive SSL vulnerability bug disclosure

Hive built-in functions “reflect”, “reflect2”, and “java_method” not blocked by default in Sentry

Sentry does not block the execution of Hive built-in functions “reflect”, “reflect2”, and “java_method” by default in some CDH versions. These functions allow the execution of arbitrary user code, which is a security issue.

This issue is documented in SENTRY-960.

Products affected: Hive, Sentry

Releases affected:

CDH 5.4.0, CDH 5.4.1, CDH 5.4.2, CDH 5.4.3, CDH 5.4.4, CDH 5.4.5, CDH 5.4.6, CDH 5.4.7, CDH 5.4.8, CDH 5.5.0, CDH 5.5.1

Users affected: Users running Sentry with Hive.

Date/time of detection: November 13, 2015

Severity (Low/Medium/High): High

Impact: This potential vulnerability may enable an authenticated user to execute arbitrary code as a Hive superuser.

CVE: CVE-2016-0760

Immediate action required: Explicitly add the following to the blacklist property in hive-site.xml of Hive Server2:

  <property>
    <name>hive.server2.builtin.udf.blacklist</name>
    <value>reflect,reflect2,java_method</value>
  </property>
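After HiveServer2 restarts with the new value, one quick way to confirm the comma-separated list covers all three functions is a whole-entry match. A sketch; the helper name is illustrative:

```shell
# Illustrative check: succeeds only when the blacklist value contains
# all three function names as whole comma-separated entries (so that
# "reflect2" alone does not count as covering "reflect").
blacklist_covers() {
  for f in reflect reflect2 java_method; do
    case ",$1," in
      *",$f,"*) ;;
      *) return 1 ;;
    esac
  done
}
```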
 

Addressed in release/refresh/patch: CDH 5.4.9, CDH 5.5.2, CDH 5.6.0 and higher

HiveServer2 LDAP Provider May Allow Authentication with Blank Password

Hive may allow a user to authenticate without entering a password, depending on the order in which classes are loaded.

Specifically, Hive's SaslPlainServerFactory checks passwords, but the same class provided in Hadoop does not. Therefore, if the Hadoop class is loaded first, users can authenticate with HiveServer2 without specifying the password.

Products affected: Hive

Releases affected:

  • CDH 5.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.5

  • CDH 5.1, 5.1.2, 5.1.3, 5.1.4

  • CDH 5.2, 5.2.1, 5.2.3, 5.2.4

  • CDH 5.3, 5.3.1, 5.3.2

  • CDH 5.4.1, 5.4.2, 5.4.3

Users affected: All users using Hive with LDAP authentication.

Date/time of detection: March 11, 2015

Severity (Low/Medium/High): High

Impact: A malicious user may be able to authenticate with HiveServer2 without specifying a password.

CVE: CVE-2015-1772

Immediate action required: Upgrade to CDH 5.4.4, 5.3.3, 5.2.5, 5.1.5, or 5.0.6

Addressed in release/refresh/patch: CDH 5.4.4, 5.3.3, 5.2.5, 5.1.5, or 5.0.6

For more updates on this issue, see the corresponding Knowledge article:

HiveServer2 LDAP Provider may Allow Authentication with Blank Password

Hue

This section lists the security bulletins that have been released for Hue.

Access control issue on /desktop/api endpoints on Hue

Hue, as shipped with the affected releases listed below, allows remote attackers to enumerate user accounts via a request to desktop/api/users/autocomplete.
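One way to check for exposure is to request the endpoint without a session. A sketch; the URL builder is illustrative, and 8888 is Hue's default port (adjust for your deployment):

```shell
# Illustrative URL builder for the endpoint named in this bulletin;
# 8888 is Hue's default web UI port.
autocomplete_url() {
  printf 'http://%s:%s/desktop/api/users/autocomplete' "$1" "${2:-8888}"
}

# Usage sketch: an unauthenticated 200 response carrying user names
# indicates the host is exposed.
#   curl -s -o /dev/null -w '%{http_code}\n' "$(autocomplete_url hue.example.com)"
```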

Products affected: Hue

Releases affected:
  • CDH 5.0.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.5, 5.0.6
  • CDH 5.1.0, 5.1.2, 5.1.3, 5.1.4, 5.1.5
  • CDH 5.2.0, 5.2.1, 5.2.3, 5.2.4, 5.2.5, 5.2.6
  • CDH 5.3.0, 5.3.1, 5.3.2, 5.3.3, 5.3.4, 5.3.5, 5.3.7, 5.3.8, 5.3.9, 5.3.10
  • CDH 5.4.0, 5.4.1, 5.4.2, 5.4.3, 5.4.4, 5.4.5, 5.4.7, 5.4.8, 5.4.9, 5.4.10, 5.4.11
  • CDH 5.5.0, 5.5.1, 5.5.2, 5.5.3, 5.5.4, 5.5.5, 5.5.6
  • CDH 5.6.0, 5.6.1
  • CDH 5.7.0, 5.7.1, 5.7.3, 5.7.4, 5.7.5, 5.7.6
  • CDH 5.8.0, 5.8.1, 5.8.2
  • CDH 5.9.0

Users affected: All Cloudera Hue users

Date/time of detection: May 20, 2016

Severity (Low/Medium/High): Medium

Impact: An attacker can leverage this issue to harvest valid user accounts and attempt to use the accounts in brute-force attacks.

CVE: CVE-2016-4947

Immediate action required: Upgrade to any of the following releases, which resolve this issue.

Addressed in release/refresh/patch:
  • CDH 5.8.3 and higher
  • CDH 5.9.1 and higher
  • CDH 5.10.0 and higher

Hue Document Privilege Escalation

A user with read-only access to a document in Hue can grant themselves write access to that document, and change that document's privileges for other users. If the document is a Hive, Impala, or Oozie job, the user can inject arbitrary code that runs with the permissions of the next user who runs the job.

Products affected: Hue

Releases affected: CDH 5.0.0, CDH 5.0.1, CDH 5.0.2, CDH 5.0.3, CDH 5.0.4, CDH 5.0.5, CDH 5.0.6, CDH 5.1.0, CDH 5.1.2, CDH 5.1.3, CDH 5.1.4, CDH 5.1.5, CDH 5.2.0, CDH 5.2.1, CDH 5.2.3, CDH 5.2.4, CDH 5.2.5, CDH 5.2.6, CDH 5.3.0, CDH 5.3.2, CDH 5.3.3, CDH 5.3.4, CDH 5.3.5, CDH 5.3.6, CDH 5.3.8, CDH 5.3.9, CDH 5.4.0, CDH 5.4.1, CDH 5.4.3, CDH 5.4.4, CDH 5.4.5, CDH 5.4.7, CDH 5.4.8

Users affected: Customers using Hue

Date/time of detection: October 9, 2015

Severity (Low/Medium/High): Medium

Impact: Malicious users may be able to run arbitrary code with the permissions of another user.

CVE: CVE-2015-7831

Immediate action required: Upgrade to CDH 5.4.9 or CDH 5.5.0.

Addressed in release/refresh/patch: CDH 5.4.9 or CDH 5.5.0

Apache Impala

Impala Statestore exposes plaintext data with SSL/TLS enabled

During a security analysis, Cloudera found that despite TLS being enabled for “internal” Impala ports, the Statestore thrift port did not actually use TLS. This gap would allow an adversary with network access to eavesdrop and potentially modify the packets going to and coming from that port.

Products affected: Impala

Releases affected:

  • CDH 5.7 and lower
  • CDH 5.8.0, 5.8.1, 5.8.2, 5.8.3, 5.8.4
  • CDH 5.9.0, 5.9.1, 5.9.2
  • CDH 5.10.0, 5.10.1
  • CDH 5.11.0

Users affected: Deployments that use “internal” TLS (TLS between Impala daemons).

Date/time of detection: April 27, 2017

Detected by: Cloudera

Severity (Low/Medium/High): High

Impact: Data on the wire may be intercepted by a malicious server.

CVE: CVE-2017-5652

Immediate action required: Affected customers should upgrade to the latest maintenance version containing the fix.

Addressed in release/refresh/patch: CDH 5.8.5, CDH 5.9.3, CDH 5.10.2, CDH 5.11.1, CDH 5.12.0

Malicious server can cause Impala server to skip authentication checks

A malicious server which impersonates an Impala service (either Impala daemon, Catalog Server or Statestore) can cause a client (Impala daemon or Statestore) to skip its authentication checks when Kerberos is enabled. That malicious server may then intercept sensitive data intended for the Impala service.

Products affected: Impala

Releases affected:
  • CDH 5.7 and lower
  • CDH 5.8.0, 5.8.1, 5.8.2, 5.8.3, 5.8.4
  • CDH 5.9.0, 5.9.1
  • CDH 5.10.0

Users affected: Deployments that use Kerberos, but not TLS, for authentication between Impala daemons. Deployments that use TLS to secure communication between services are not affected by the same issue.

Date/time of detection: February 27, 2017

Detected by: Cloudera

Severity (Low/Medium/High): Medium

Impact: Data on the wire may be intercepted by a malicious server.

CVE: CVE-2017-5640

Immediate action required: Affected customers should upgrade to the latest version, or enable TLS for connections between Impala services.

Addressed in release/refresh/patch: CDH 5.8.5, CDH 5.9.2, CDH 5.10.1, CDH 5.11.0.

Read Access to Impala Views in queries with WHERE-clause Subqueries

Impala bypasses Sentry authorization for views if the query or the view itself contains a subquery in any WHERE clause. This gives read access to the views to any user that would otherwise have insufficient privileges.

The underlying base tables of views are unaffected. Queries that do not have subqueries in the WHERE clause are unaffected (unless the view itself contains such a subquery).

Other operations, like accessing the view definition or altering the view, are unaffected.

Products affected: Impala

Releases affected:
  • CDH 5.2.0 and higher
  • CDH 5.3.0 and higher
  • CDH 5.4.0 and higher
  • CDH 5.5.0 and higher
  • CDH 5.6.0, 5.6.1
  • CDH 5.7.0, 5.7.1, 5.7.2
  • CDH 5.8.0

Users affected: Users who run Impala + Sentry and use views

Date/time of detection: July 26, 2016

Severity (Low/Medium/High): High

Impact: Users can bypass Sentry authorization for Impala views.

CVE: CVE-2016-6605

Immediate action required: Upgrade to a CDH version containing the fix.

Addressed in release/refresh/patch: CDH 5.9.0 and higher, CDH 5.8.2 and higher, CDH 5.7.3 and higher

For the latest update on this issue see the corresponding Knowledge article:

Read Access to Impala Views in the Presence of WHERE-clause Subqueries

Impala issued REVOKE ALL ON SERVER does not revoke all privileges

For Impala users that use Sentry for authorization, issuing a REVOKE ALL ON SERVER FROM <ROLE> statement does not remove all server-level privileges from the <ROLE>. Specifically, Sentry fails to revoke privileges that were issued to <ROLE> through a GRANT ALL ON SERVER TO <ROLE> statement. All other privileges are revoked, but <ROLE> still has ALL privileges at SERVER scope after the REVOKE ALL ON SERVER statement has been executed. The privileges are shown in the output of a SHOW GRANT statement.

Products affected: Impala, Sentry

Releases affected:

CDH 5.5.0, CDH 5.5.1, CDH 5.5.2, CDH 5.5.4

CDH 5.6.0, CDH 5.6.1

CDH 5.7.0

Users affected: Customers who use Sentry authorization in Impala

Date/time of detection: April 25, 2016

Severity (Low/Medium/High): Medium

Impact: Inability to revoke ALL privileges at SERVER scope from a specific role using Impala if they have been granted through a GRANT ALL ON SERVER statement.

CVE: CVE-2016-4572

Immediate action required: If the affected role has ALL privileges on SERVER, you can remove these privileges by dropping and re-creating the role. Alternatively, upgrade to 5.7.1, or 5.8.0 or higher.

Addressed in release/refresh/patch: CDH 5.7.1, CDH 5.8.0 and higher.

Impala does not authorize authenticated Kerberos users who access internal APIs

In an Impala deployment secured with Kerberos, a malicious authenticated user can create a program that bypasses Impala and Sentry authorization mechanisms to issue internal API calls directly. That user can then query tables to which they should not have access, or alter table metadata.

Products affected: Impala

Releases affected: All versions of CDH 5, except for those indicated in the ‘Addressed in release/refresh/patch’ section below.

Users affected: All users of Impala and Sentry with Kerberos enabled.

Date/time of detection: February 4, 2016

Severity (Low/Medium/High): High

CVE: CVE-2016-3131

Immediate action required: Upgrade to most recent maintenance release.

Addressed in release/refresh/patch: CDH 5.3.10 and higher, 5.4.10 and higher, 5.5.4 and higher, 5.6.1 and higher, 5.7.0 and higher

Cloudera Manager

Privilege Escalation in Cloudera Manager

Under certain circumstances, a read-only Cloudera Manager user can discover the usernames of other users and elevate the privileges of another user. A user cannot elevate their own privilege.

Products affected: Cloudera Manager

Releases affected:

  • Cloudera Manager 5.0.0, 5.0.1, 5.0.2, 5.0.5, 5.0.6, 5.0.7
  • Cloudera Manager 5.1.0, 5.1.1, 5.1.2, 5.1.3, 5.1.4, 5.1.5, 5.1.6
  • Cloudera Manager 5.2.0, 5.2.1, 5.2.2, 5.2.4, 5.2.5, 5.2.6, 5.2.7
  • Cloudera Manager 5.3.0, 5.3.1, 5.3.2, 5.3.3, 5.3.4, 5.3.6, 5.3.7, 5.3.8, 5.3.9, 5.3.10
  • Cloudera Manager 5.4.0, 5.4.1, 5.4.3, 5.4.5, 5.4.6, 5.4.7, 5.4.8, 5.4.9, 5.4.10
  • Cloudera Manager 5.5.0, 5.5.1, 5.5.2, 5.5.3, 5.5.4, 5.5.6
  • Cloudera Manager 5.6.0, 5.6.1
  • Cloudera Manager 5.7.0, 5.7.1, 5.7.2, 5.7.4, 5.7.5
  • Cloudera Manager 5.8.0, 5.8.1, 5.8.3, 5.8.4
  • Cloudera Manager 5.9.0, 5.9.1
  • Cloudera Manager 5.10.0

Users affected: All Cloudera Manager users.

Severity (Low/Medium/High): High

CVE: CVE-2017-7399

Immediate action required: Upgrade Cloudera Manager to 5.8.5, 5.9.2, 5.10.1, 5.11.0 or higher.

Addressed in release/refresh/patch: Cloudera Manager 5.8.5, 5.9.2, 5.10.1, 5.11.0 or higher.

Sensitive data of processes managed by Cloudera Manager are not secured by file permissions

Products affected: Cloudera Manager

Releases affected: 5.9.2, 5.10.1, 5.11.0

Users affected: All users of Cloudera Manager on 5.9.2, 5.10.1, 5.11.0

Severity (Low/Medium/High): High

Impact: Sensitive data (such as passwords) might be exposed to users with direct access to cluster hosts due to overly permissive local file system permissions for certain files created by Cloudera Manager.

The password is also visible in the Cloudera Manager Admin Console in the configuration files for the Spark History Server process.

CVE: CVE-2017-9327

Immediate action required: Upgrade Cloudera Manager to 5.9.3, 5.10.2, 5.11.1, 5.12.0 or higher

Addressed in release/refresh/patch: Cloudera Manager 5.9.3, 5.10.2, 5.11.1, 5.12.0 or higher

Local Script Injection Vulnerability In Cloudera Manager

There is a script injection vulnerability in Cloudera Manager’s help search box. The user of Cloudera Manager can enter a script but there is no way for an attacker to inject a script externally. Furthermore, the script entered into the search box has to actually return valid search results for the script to execute.

Products affected: Cloudera Manager

Releases affected:
  • Cloudera Manager 5.0.0, 5.0.1, 5.0.2, 5.0.5, 5.0.6, 5.0.7
  • Cloudera Manager 5.1.0, 5.1.1, 5.1.2, 5.1.3, 5.1.4, 5.1.5, 5.1.6
  • Cloudera Manager 5.2.0, 5.2.1, 5.2.2, 5.2.4, 5.2.5, 5.2.6, 5.2.7
  • Cloudera Manager 5.3.0, 5.3.1, 5.3.2, 5.3.3, 5.3.4, 5.3.6, 5.3.7, 5.3.8, 5.3.9, 5.3.10
  • Cloudera Manager 5.4.0, 5.4.1, 5.4.3, 5.4.5, 5.4.6, 5.4.7, 5.4.8, 5.4.9, 5.4.10
  • Cloudera Manager 5.5.0, 5.5.1, 5.5.2, 5.5.3, 5.5.4, 5.5.6
  • Cloudera Manager 5.6.0, 5.6.1
  • Cloudera Manager 5.7.0, 5.7.1, 5.7.2, 5.7.4, 5.7.5
  • Cloudera Manager 5.8.0, 5.8.1, 5.8.2, 5.8.3
  • Cloudera Manager 5.9.0

Users affected: All Cloudera Manager users

Date/time of detection: November 10th, 2016

Severity (Low/Medium/High): Low

Impact: Possible override of client-side JavaScript controls.

CVE: CVE-2016-9271

Immediate action required: Upgrade to one of the releases below

Addressed in release/refresh/patch:
  • Cloudera Manager 5.7.6 and higher
  • Cloudera Manager 5.8.4 and higher
  • Cloudera Manager 5.9.1 and higher
  • Cloudera Manager 5.10.0 and higher

Cross Site Scripting (XSS) Vulnerability in Cloudera Manager

Several pages in the Cloudera Manager UI are vulnerable to an XSS attack.

Products affected: Cloudera Manager

Releases affected: All versions of Cloudera Manager 5 except for those indicated in the ‘Addressed in release/refresh/patch’ section below.

Users affected: All customers who use Cloudera Manager.

Date/time of detection: May 19, 2016

Detected by: Solucom Advisory

Severity (Low/Medium/High): High

Impact: An XSS vulnerability can be used by an attacker to perform malicious actions. One probable form of attack is to steal the credentials for a victim's Cloudera Manager account.

CVE: CVE-2016-4948

Immediate action required: Upgrade Cloudera Manager to version 5.7.2 or higher or 5.8.x

Addressed in release/refresh/patch: Cloudera Manager 5.7.2 and higher and 5.8.x.

Sensitive Data Exposed in Plain-Text Readable Files

Cloudera Manager Agent stores configuration information in various configuration files that are world-readable. Some of this configuration information may involve sensitive user data, including credential values used for authentication with other services. These files are located in /var/run/cloudera-scm-agent/supervisor/include on every host. Cloudera Manager passes information such as credentials to the Hadoop processes it manages via environment variables, which are written to configuration files in this directory.

Additionally, the response from Cloudera Manager Server to heartbeat messages sent by the Cloudera Manager Agent is stored in a world-readable file (/var/lib/cloudera-scm-agent/response.avro) on every host. This file may contain sensitive data.

These files and directories have been restricted to being readable only by the user running Cloudera Manager Agent, which by default is root.
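On releases without the fix, the exposure can be spot-checked with find's permission predicate. A sketch; the helper name is illustrative, and the paths are the ones named above:

```shell
# Illustrative helper: succeeds when the given path is world-readable
# (the "others read" bit, octal 004, is set).
world_readable() {
  [ -n "$(find "$1" -maxdepth 0 -perm -004 2>/dev/null)" ]
}

# Usage sketch on a cluster host (paths from this bulletin):
#   world_readable /var/lib/cloudera-scm-agent/response.avro && echo "exposed"
```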

Products affected: Cloudera Manager

Releases affected: All versions of Cloudera Manager 5, except for those indicated in the Addressed in release/refresh/patch section below.

Users affected: All users of Cloudera Manager using the releases affected above.

Date/time of detection: March 16, 2016

Severity (Low/Medium/High): High

Impact: An unauthorized user that gains access to an affected system may be able to leverage that access to subsequently authenticate with other services.

CVE: CVE-2016-3192

Immediate action required:

  • Upgrade Cloudera Manager to one of the maintenance releases indicated below.
  • Regenerate Kerberos principals used by all the services in the cluster.
  • Regenerate SSL keystores used by all the services in the cluster, with a new password.
  • If you are using a version of Cloudera Manager lower than 5.5.0, change the database passwords for all the CDH services, wherever applicable.

Addressed in release/refresh/patch: Cloudera Manager 5.5.4 and higher, 5.6.1 and higher, 5.7.1 and higher

Sensitive Information in Cloudera Manager Diagnostic Support Bundles

Cloudera Manager is designed to transmit certain diagnostic data (or "bundles") to Cloudera. These diagnostic bundles are used by the Cloudera support team to reproduce, debug, and address technical issues for our customers. Cloudera internally discovered a potential vulnerability in this feature, which could cause any sensitive data stored as "advanced configuration snippets (ACS)" (formerly called "safety valves") to be included in diagnostic bundles and transmitted to Cloudera. Notwithstanding any possible transmission, such sensitive data is not used by Cloudera for any purpose.

Cloudera has taken the following actions:

  1. Modified Cloudera Manager so that it no longer transmits advanced configuration snippets containing the sensitive data, and
  2. Modified Cloudera Manager SSL configuration to increase the protection level of the encrypted communication.

Cloudera strives to follow and also help establish best practices for the protection of customer information. In this effort, we continually review and improve our security practices, infrastructure, and data handling policies.

Products affected: Cloudera Manager

Releases affected:
  • All Cloudera Manager releases prior to 4.8.6
  • Cloudera Manager 5.0.x prior to Cloudera Manager 5.0.7
  • Cloudera Manager 5.1.x prior to Cloudera Manager 5.1.6
  • Cloudera Manager 5.2.x prior to Cloudera Manager 5.2.7
  • Cloudera Manager 5.3.x prior to Cloudera Manager 5.3.7
  • Cloudera Manager 5.4.x prior to Cloudera Manager 5.4.6

Users affected: Users storing sensitive data in advanced configuration snippets

Severity: High

Impact: Possible transmission of sensitive data

CVE: CVE-2015-6495

Immediate Action Required: Upgrade Cloudera Manager to one of the releases listed below.

ETA for resolution: September 1st, 2015

Addressed in release/refresh/patch:
  • Cloudera Manager 4.8.6
  • Cloudera Manager 5.0.7
  • Cloudera Manager 5.1.6
  • Cloudera Manager 5.2.7
  • Cloudera Manager 5.3.7
  • Cloudera Manager 5.4.6

Cross Site Scripting Vulnerabilities in Cloudera Manager

Multiple cross-site scripting (XSS) vulnerabilities in the Cloudera Manager UI before version 5.4.3 allow remote attackers to inject arbitrary web script or HTML using unspecified vectors. Authentication to Cloudera Manager is required to exploit these vulnerabilities.

Products affected: Cloudera Manager

Releases affected: All releases prior to 5.4.3

Users affected: All Cloudera Manager users

Date/time of detection: May 8th, 2015

Severity (Low/Medium/High): Medium

Impact: Allows unauthorized modification.

CVE: CVE-2015-4457

Immediate action required: Upgrade to Cloudera Manager 5.4.3.

Addressed in release/refresh/patch: Cloudera Manager 5.4.3

Critical Security Related Files in YARN NodeManager Configuration Directories Accessible to Any User

When Cloudera Manager starts a YARN NodeManager, it makes all files in its configuration directory (typically /var/run/cloudera-scm-agent/process) readable by all users. This includes the file containing the Kerberos keytabs (yarn.keytab) and the file containing passwords for the SSL keystore (ssl-server.xml).

Global read permissions must be removed on the NodeManager’s security-related files.

Products affected: Cloudera Manager

Releases affected: All releases of Cloudera Manager 4.0 and higher.

Users affected: Customers who are using YARN in environments where Kerberos or SSL is enabled.

Date/time of detection: March 8, 2015

Severity (Low/Medium/High): High

Impact: Any user who can log in to a host where the YARN NodeManager is running can get access to the keytab file, use it to authenticate to the cluster, and perform unauthorized operations. If SSL is enabled, the user can also decrypt data transmitted over the network.

CVE: CVE-2015-2263

Immediate action required:
  1. If you are running YARN with Kerberos/SSL with Cloudera Manager 5.x, upgrade to the maintenance release with the security fix. If you are running YARN with Kerberos with Cloudera Manager 4.x, upgrade to any Cloudera Manager 5.x release with the security fix.
  2. Delete all “yarn” and “HTTP” principals from KDC/Active Directory. After deleting them, regenerate them using Cloudera Manager.
  3. Regenerate SSL keystores that you are using with the YARN service, using a new password.
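The fix removes global read permissions from these files; a quick audit of a process configuration directory can flag any file still readable by "other". A minimal sketch in Python (the file names and modes below are illustrative samples, not read from a live cluster):

```python
import stat

def world_readable(mode: int) -> bool:
    """Return True if the file mode grants read access to 'other' users."""
    return bool(mode & stat.S_IROTH)

def find_exposed(entries):
    """Given (filename, mode) pairs, return names of world-readable files.

    In a real audit the modes would come from os.stat() on files under
    /var/run/cloudera-scm-agent/process; sample modes are used here.
    """
    return [name for name, mode in entries if world_readable(mode)]

# Sample listing: security files should be 0o600, not 0o644.
sample = [("yarn.keytab", 0o644),
          ("ssl-server.xml", 0o644),
          ("log4j.properties", 0o600)]
exposed = find_exposed(sample)
```

After the fixed release is applied, an equivalent check against the live directory should report no security-related files as exposed.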

ETA for resolution: Patches are available immediately with the release of this TSB.

Addressed in release/refresh/patch: Cloudera Manager releases 5.0.6, 5.1.5, 5.2.5, 5.3.3, and 5.4.0 have the fix for this bug.

For further updates on this issue see the corresponding Knowledge article:

Critical Security related Files in the YARN NodeManager Configuration Directories can be Accessed by any User

Cloudera Manager exposes sensitive data

In the Cloudera Manager 5.2 release, the LDAP bind password was erroneously marked such that it would be written to the world-readable files in /etc/hadoop, in addition to the more private files in /var/run. Thus, any user on any host of a Cloudera Manager managed cluster could read the LDAP bind password.

The fix to this issue removes the LDAP bind password from the files in /etc/hadoop; it is only written to configuration files in /var/run. Those files are owned by and only readable by the appropriate service.

Cloudera Manager writes configuration parameters to several locations. Each service gets every parameter that it requires in a directory in /var/run, and the files in those directories are not world-readable. Clients (for example, the “hdfs” command) obtain their configuration parameters from files in /etc/hadoop. The files in /etc/hadoop are world-readable. Cloudera Manager keeps track of where each configuration parameter is to be written so as to expose each parameter only in the location where it is required.
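After upgrading, a scan of the world-readable client configuration files can confirm that no sensitive values remain in /etc/hadoop. The sketch below is illustrative: the marker list is an example rather than an exhaustive set, and the sample document is abbreviated (hadoop.security.group.mapping.ldap.bind.password is the standard Hadoop LDAP bind password property).

```python
import xml.etree.ElementTree as ET

# Hadoop-style configuration files list <property><name>/<value> pairs.
# Property names matching these substrings are treated as sensitive.
SENSITIVE_MARKERS = ("password", "secret")

def exposed_properties(config_xml: str):
    """Return names of sensitive-looking properties present in the document."""
    root = ET.fromstring(config_xml)
    hits = []
    for prop in root.iter("property"):
        name = prop.findtext("name", default="")
        if any(marker in name.lower() for marker in SENSITIVE_MARKERS):
            hits.append(name)
    return hits

sample = """<configuration>
  <property><name>hadoop.security.group.mapping.ldap.bind.password</name><value>hunter2</value></property>
  <property><name>fs.defaultFS</name><value>hdfs://nn:8020</value></property>
</configuration>"""
hits = exposed_properties(sample)
```

Any hit in a file under /etc/hadoop indicates a value that belongs only in the service-private files under /var/run.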

Products affected: Cloudera Manager

Releases affected: Cloudera Manager 5.2.0, Cloudera Manager 5.2.1, Cloudera Manager 5.3.0

Users Affected: All users

Date/time of detection: December 30, 2014

Severity: High

Impact: Exposure of sensitive data

CVE: CVE-2014-8733

Immediate action required: Upgrade to Cloudera Manager 5.2.2 or higher, or Cloudera Manager 5.3.1 or higher.

Sensitive configuration values exposed in Cloudera Manager

Certain configuration values that are stored in Cloudera Manager are considered "sensitive", such as database passwords. These configuration values are expected to be inaccessible to non-admin users, and this is enforced in the Cloudera Manager Admin Console. However, these configuration values are not redacted when reading them through the API, possibly making them accessible to users who should not have such access.

Products affected: Cloudera Manager

Releases affected: Cloudera Manager 4.8.2 and lower, Cloudera Manager 5.0.0

Users Affected: Cloudera Manager installations with non-admin users

Date/time of detection: May 7, 2014

Severity: High

Impact: Through the API only, non-admin users can access potentially sensitive configuration information

CVE: CVE-2014-0220

Immediate action required: Upgrade to Cloudera Manager 4.8.3 or Cloudera Manager 5.0.1 or disable non-admin users if you do not want them to have this access.

ETA for resolution: May 13, 2014

Addressed in release/refresh/patch: Cloudera Manager 4.8.3 and Cloudera Manager 5.0.1

Cloudera Manager installs taskcontroller.cfg in insecure mode

Products affected: Cloudera Manager and Service and Configuration Manager

Releases affected: Cloudera Manager 3.7.0-3.7.4, Service and Configuration Manager 3.5 (in certain cases)

Users affected: Users on multi-user systems who have not enabled Hadoop Kerberos features. Users using the Hadoop security features are not affected.

Severity: Critical

Impact: Vulnerability allows a malicious user to impersonate other users on the systems running the Hadoop cluster.

Immediate action required: Upgrade to Cloudera Manager 3.7.5 and subsequently restart the MapReduce service.

Workarounds are available; any one of them is sufficient.

  • For CM 3.7.x (Enterprise Edition), edit the configuration "Minimum user ID for job submission" to a number higher than any UID on the system. 65535 is the largest value that Cloudera Manager accepts, and is typically sufficient. Restart the MapReduce service. To find the current maximum UID on your system, run:
getent passwd | awk -F: '{ if ($3 > max) { max = $3; name = $1 } } END { print name, max }' 
  • For CM 3.7.x Free Edition, remove the file /usr/lib/hadoop-0.20/sbin/Linux-amd64-64/task-controller. This file is part of the hadoop-0.20-sbin package and is re-installed by upgrades.
  • For SCM 3.5, if the cluster has been run in both secure and non-secure configurations, remove /etc/hadoop/conf/taskcontroller.cfg from all TaskTrackers. Repeat this in the future if you reconfigure the cluster from a Kerberized to a non-Kerberized configuration.

Resolution: Mar 27, 2012

Addressed in release/refresh/patch: Cloudera Manager 3.7.5

Verification: Verify that, in non-secure clusters, /etc/hadoop/conf/taskcontroller.cfg is unconfigured on all TaskTrackers. (A file with only lines starting with # is unconfigured.)
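The "only lines starting with #" rule for an unconfigured file can be expressed as a small check; a sketch (the sample file contents are illustrative):

```python
def is_unconfigured(text: str) -> bool:
    """True when every non-blank line starts with '#', i.e. the file sets nothing."""
    return all(line.startswith("#")
               for line in text.splitlines() if line.strip())

# A fully commented taskcontroller.cfg is unconfigured; any active
# key=value line means the file is still configured.
commented = "# min.user.id=1000\n# banned.users=mapred\n"
active = "mapred.local.dir=/data/mapred/local\n"
```

Running this against /etc/hadoop/conf/taskcontroller.cfg on each TaskTracker confirms the non-secure state described above.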

If you are a Cloudera Enterprise customer and have further questions or need assistance, log a ticket with Cloudera Support through http://support.cloudera.com.

Two links in the Cloudera Manager Admin Console allow read-only access to arbitrary files on managed hosts

Products affected: Cloudera Manager

Releases affected: Cloudera Manager 3.7.0 through 3.7.6, 4.0.0 (beta), and 4.0.1 (GA)

Users affected: All Cloudera Manager Users

Date vulnerability discovered: June 6, 2012

Date vulnerability analysis and validation complete: June 15, 2012

Severity: Medium

Impact: Any user, including non-admin users, logged in to the Cloudera Manager Admin Console can access any file on any host managed by Cloudera Manager.

Immediate action required:

Solution:

Upgrade to Cloudera Manager or Cloudera Manager Free Edition, version 3.7.7 or higher, or version 4.0.2 or higher.

Workaround:

If an immediate upgrade is not possible, disable non-admin user access to Cloudera Manager to limit the vulnerability to Cloudera Manager admins.

Resolution: June 25th

Addressed in release/refresh/patch: Cloudera Manager or Cloudera Manager Free Edition 3.7.7 or higher and 4.0.2 or higher.

Verification: Check the Cloudera Manager version number in Help > About.

If you are a Cloudera Enterprise customer and have further questions or need assistance, log a ticket with Cloudera Support at http://support.cloudera.com.

Cloudera Navigator

This section lists the security bulletins that have been released for Cloudera Navigator.

Cloudera Navigator Vulnerable to the POODLE Attack

Cloudera Navigator 2.2.0 through 2.2.3, 2.3.0, and 2.3.1 include SSL/TLS support; however, support for the SSLv3 protocol, which is vulnerable to the POODLE (CVE-2014-3566) attack, was erroneously not removed.

This vulnerability affects only those installations of Cloudera Navigator that are configured to use SSL/TLS.

Products affected: Cloudera Navigator

Releases affected:
  • Cloudera Navigator 2.2.0 (packaged with Cloudera Manager 5.3.0)
  • Cloudera Navigator 2.2.1 (packaged with Cloudera Manager 5.3.1)
  • Cloudera Navigator 2.2.2 (packaged with Cloudera Manager 5.3.2)
  • Cloudera Navigator 2.2.3 (packaged with Cloudera Manager 5.3.3)
  • Cloudera Navigator 2.3.0 (packaged with Cloudera Manager 5.4.0)
  • Cloudera Navigator 2.3.1 (packaged with Cloudera Manager 5.4.1)

Users affected: All web users and API clients of Cloudera Navigator when SSL/TLS is enabled.

Date/time of detection:

Severity (Low/Medium/High): Medium

Impact: Allows unauthorized disclosure of information; allows component impersonation.

CVE: CVE-2015-4078

Immediate action required:
  • Cloudera Navigator 2.2.x (packaged with Cloudera Manager 5.3.x): upgrade to Cloudera Navigator 2.2.4 (packaged with Cloudera Manager 5.3.4) or higher.
  • Cloudera Navigator 2.3.x (packaged with Cloudera Manager 5.4.x): upgrade to Cloudera Navigator 2.3.3 (packaged with Cloudera Manager 5.4.3) or higher.
  • Please note: if you are upgrading from Cloudera Navigator 2.2.x to 2.3.3 or higher (that is, upgrading from Cloudera Manager 5.3.x to 5.4.3 or higher) and are impacted by this issue, you must remove the Advanced Configuration (safety valve) SSL settings and reconfigure SSL using the new configuration, as specified at:

    http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/sg_nav_ssl.html.

Addressed in release/refresh/patch:

  • Cloudera Navigator 2.2.4 (packaged with Cloudera Manager 5.3.4)

  • Cloudera Navigator 2.3.3 (packaged with Cloudera Manager 5.4.3)

Cloudera Search

Sample solrconfig.xml file for enabling Solr/Sentry Authorization is missing critical attribute

The solrconfig.xml.secure sample configuration file provided with CDH, if used to create solrconfig.xml, does not enforce Sentry authorization on the request URI /update/json/docs because it is missing a required attribute.

Products affected: Solr (if Sentry enabled)

Releases affected:
  • CDH 5.8 and lower
  • CDH 5.9.2 and lower
  • CDH 5.10.1 and lower
  • CDH 5.11.1 and lower

Users affected: Those who are using Sentry authorization with Cloudera Search, who have used the provided sample configuration, and who have not set the attribute described below in their solrconfig.xml file.

Date/time of detection: May 18, 2017

Detected by: István Farkas, Hrishikesh Gadre

Severity (Low/Medium/High): High

Impact: Unauthorized users using the request URI /update/json/docs may insert, update, or delete documents.

CVE: CVE-2017-9325

Immediate action required: Every solrconfig.xml of a collection protected by Sentry must be updated in ZooKeeper.

The line:
<updateRequestProcessorChain name="updateIndexAuthorization">
must be replaced with:
<updateRequestProcessorChain name="updateIndexAuthorization" default="true">

After updating the configuration in ZooKeeper, the collections must be reloaded.
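The edit itself is a one-attribute change. As an illustration, the following sketch patches the chain element of a solrconfig.xml document in memory; the sample document is heavily abbreviated, and uploading the result back to ZooKeeper and reloading the collection must still be done separately:

```python
import xml.etree.ElementTree as ET

def ensure_default_chain(solrconfig_xml: str,
                         chain_name: str = "updateIndexAuthorization") -> str:
    """Add default="true" to the named updateRequestProcessorChain if missing."""
    root = ET.fromstring(solrconfig_xml)
    for chain in root.iter("updateRequestProcessorChain"):
        if chain.get("name") == chain_name and chain.get("default") != "true":
            chain.set("default", "true")
    return ET.tostring(root, encoding="unicode")

# Abbreviated stand-in for a real solrconfig.xml.
before = '<config><updateRequestProcessorChain name="updateIndexAuthorization"/></config>'
after = ensure_default_chain(before)
```

A real solrconfig.xml contains many other elements; only the named chain is touched, so the rest of the document serializes unchanged.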

Addressed in release/refresh/patch: The following releases will contain the fixed sample configuration file:
  • CDH 5.9.3 and higher
  • CDH 5.10.2 and higher
  • CDH 5.11.2 and higher
  • CDH 5.12.0 and higher

    Upgrading will only correct the sample configuration file. The fix mentioned above will still need to be applied on the affected cluster.

Apache Solr ReplicationHandler Path Traversal Attack

When using the Index Replication feature, Solr nodes can pull index files from a master/leader node through an HTTP API that accepts a file name. However, Solr did not validate the file name, so it was possible to craft a request involving path traversal that exposed any file readable by the Solr server process. Solr servers using Kerberos authentication are at lower risk, since only authenticated users can gain direct HTTP access.

See SOLR-10031 for details. Here is the relevant public announcement.

Products affected: Cloudera Search

Releases affected:
  • CDH 5.0.0, 5.0.1, 5.0.2, 5.0.3, 5.0.4, 5.0.5, 5.0.6
  • CDH 5.1.0, 5.1.2, 5.1.3, 5.1.4, 5.1.5
  • CDH 5.2.0, 5.2.1, 5.2.3, 5.2.4, 5.2.5, 5.2.6
  • CDH 5.3.0, 5.3.2, 5.3.3, 5.3.4, 5.3.5, 5.3.6, 5.3.8, 5.3.9, 5.3.10
  • CDH 5.4.0, 5.4.1, 5.4.3, 5.4.4, 5.4.5, 5.4.7, 5.4.8, 5.4.9, 5.4.10, 5.4.11
  • CDH 5.5.0, 5.5.1, 5.5.2, 5.5.4, 5.5.5, 5.5.6
  • CDH 5.6.0, 5.6.1
  • CDH 5.7.0, 5.7.1, 5.7.2, 5.7.3, 5.7.4, 5.7.5
  • CDH 5.8.0, 5.8.1, 5.8.2, 5.8.3
  • CDH 5.9.0, 5.9.1
  • CDH 5.10.0

Users affected: All users using Cloudera Search

Date/time of detection: January 25, 2017

Detected by: Hrishikesh Gadre (Cloudera Inc.)

Severity (Low/Medium/High): Medium

Impact: Moderate. This vulnerability will allow an authenticated remote user to read arbitrary files as the solr user.

CVE: CVE-2017-3163

Immediate action required:

Upgrade to a release that addresses this issue. Also consider enabling Kerberos authentication and TLS for Solr.

Addressed in release/refresh/patch:
  • CDH 5.7.6, 5.8.4, 5.9.2, 5.10.1, 5.11.0 (and higher releases).

For the latest update on this issue see the corresponding Knowledge article:

TSB 2017-222: Apache Solr ReplicationHandler path traversal attack

Solr Queries by document id can bypass Sentry document-level security via the RealTimeGetHandler

Solr RealTimeGet queries with the id or ids parameters are not checked by Sentry document-level security in versions prior to CDH 5.7.0. The id or ids parameters must exactly match document ids (wildcards are not supported), and the document ids are not otherwise visible to users who are denied access by document-level security. However, a user with internal knowledge of the document id structure, or who can guess document ids, can access unauthorized documents. This issue is documented in SENTRY-989.

Products affected: Cloudera Search

Releases affected: All versions of CDH 5, except for those indicated in the Addressed in release/refresh/patch section below.

Users affected: Cloudera Search users implementing document-level security

Date/time of detection: December 17, 2015

Severity (Low/Medium/High): Medium

CVE: CVE-2016-6353

Immediate action required: Upgrade to CDH 5.7.0 or higher.

Addressed in release/refresh/patch: CDH 5.7.0 and higher.

Apache Sentry

Sample solrconfig.xml file for enabling Solr/Sentry Authorization is missing critical attribute

The solrconfig.xml.secure sample configuration file provided with CDH, if used to create solrconfig.xml, does not enforce Sentry authorization on the request URI /update/json/docs because it is missing a required attribute.

Products affected: Solr (if Sentry enabled)

Releases affected:
  • CDH 5.8 and lower
  • CDH 5.9.2 and lower
  • CDH 5.10.1 and lower
  • CDH 5.11.1 and lower

Users affected: Those who are using Sentry authorization with Cloudera Search, who have used the provided sample configuration, and who have not set the attribute described below in their solrconfig.xml file.

Date/time of detection: May 18, 2017

Detected by: István Farkas, Hrishikesh Gadre

Severity (Low/Medium/High): High

Impact: Unauthorized users using the request URI /update/json/docs may insert, update, or delete documents.

CVE: CVE-2017-9325

Immediate action required: Every solrconfig.xml of a collection protected by Sentry must be updated in ZooKeeper.

The line:
<updateRequestProcessorChain name="updateIndexAuthorization">
must be replaced with:
<updateRequestProcessorChain name="updateIndexAuthorization" default="true">

After updating the configuration in ZooKeeper, the collections must be reloaded.

Addressed in release/refresh/patch: The following releases will contain the fixed sample configuration file:
  • CDH 5.9.3 and higher
  • CDH 5.10.2 and higher
  • CDH 5.11.2 and higher
  • CDH 5.12.0 and higher

    Upgrading will only correct the sample configuration file. The fix mentioned above will still need to be applied on the affected cluster.

Impala issued REVOKE ALL ON SERVER does not revoke all privileges

For Impala users that use Sentry for authorization, issuing a REVOKE ALL ON SERVER FROM <ROLE> statement does not remove all server-level privileges from the <ROLE>. Specifically, Sentry fails to revoke privileges that were issued to <ROLE> through a GRANT ALL ON SERVER TO <ROLE> statement. All other privileges are revoked, but <ROLE> still has ALL privileges at SERVER scope after the REVOKE ALL ON SERVER statement has been executed. The privileges are shown in the output of a SHOW GRANT statement.

Products affected: Impala, Sentry

Releases affected:

  • CDH 5.5.0, 5.5.1, 5.5.2, 5.5.4
  • CDH 5.6.0, 5.6.1
  • CDH 5.7.0

Users affected: Customers who use Sentry authorization in Impala

Date/time of detection: April 25, 2016

Severity (Low/Medium/High): Medium

Impact: Inability to revoke ALL SERVER privileges from a specific role using Impala if they have been granted through a GRANT ALL SERVER statement.

CVE: CVE-2016-4572

Immediate action required: If the affected role has ALL privileges on SERVER, you can remove these privileges by dropping and re-creating the role. Alternatively, upgrade to CDH 5.7.1, or to CDH 5.8.0 or higher.

Addressed in release/refresh/patch: CDH 5.7.1, CDH 5.8.0 and higher.

Hive built-in functions “reflect”, “reflect2”, and “java_method” not blocked by default in Sentry

Sentry does not block the execution of Hive built-in functions “reflect”, “reflect2”, and “java_method” by default in some CDH versions. These functions allow the execution of arbitrary user code, which is a security issue.

This issue is documented in SENTRY-960.

Products affected: Hive, Sentry

Releases affected:

CDH 5.4.0, CDH 5.4.1, CDH 5.4.2, CDH 5.4.3, CDH 5.4.4, CDH 5.4.5, CDH 5.4.6, CDH 5.4.7, CDH 5.4.8, CDH 5.5.0, CDH 5.5.1

Users affected: Users running Sentry with Hive.

Date/time of detection: November 13, 2015

Severity (Low/Medium/High): High

Impact: This potential vulnerability may enable an authenticated user to execute arbitrary code as a Hive superuser.

CVE: CVE-2016-0760

Immediate action required: Explicitly add the following to the blacklist property in the hive-site.xml of HiveServer2:

  <property>
    <name>hive.server2.builtin.udf.blacklist</name>
    <value>reflect,reflect2,java_method</value>
  </property>

Addressed in release/refresh/patch: CDH 5.4.9, CDH 5.5.2, CDH 5.6.0 and higher

Apache Spark

This section lists the security bulletins that have been released for Apache Spark.

Unsafe deserialization in Apache Spark launcher API

In Apache Spark versions 1.6.0 through 2.1.1, the launcher API performs unsafe deserialization of data received on its socket. This makes applications launched programmatically with the SparkLauncher#startApplication() API potentially vulnerable to arbitrary code execution by an attacker with access to any user account on the local machine. It does not affect applications run by spark-submit or spark-shell. The attacker would be able to execute code as the user that ran the Spark application. Users are encouraged to update to Spark version 2.2.0 or later.

Products affected: Cloudera Distribution of Apache Spark 2 and Spark in CDH.

Releases affected:

  • CDH: 5.9.0-5.9.2, 5.10.0-5.10.1, 5.11.0-5.11.1
  • Cloudera's Distribution of Apache Spark: 2.0 Release 1, 2.0 Release 2, 2.1 Release 1

Users affected: All

Date/time of detection: June 1, 2017

Detected by: Aditya Sharad (Semmle)

Severity (Low/Medium/High): Medium

Impact: Privilege escalation to the user who ran the Spark application.

CVE: CVE-2017-12612

Immediate action required: Affected customers should upgrade to the latest maintenance release containing the fix:

  • Spark 1.x users: Update to CDH 5.9.3, 5.10.2, 5.11.2, or 5.12.0 or later
  • Spark 2.x users: Update to Cloudera's Distribution of Apache Spark 2.1 Release 2 or later, or 2.2 Release 1 or later
  • Or, discontinue use of the programmatic launcher API.
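For those discontinuing the launcher API, invoking spark-submit as a child process avoids the vulnerable socket protocol. A minimal sketch of building such a command line (the JAR path, class name, master URL, and arguments are hypothetical placeholders):

```python
def build_spark_submit(app_jar, main_class, master, *app_args):
    """Assemble a spark-submit argv list for use with subprocess."""
    cmd = ["spark-submit", "--class", main_class, "--master", master, app_jar]
    cmd.extend(app_args)
    return cmd

# Hypothetical application; the resulting list could be passed to
# subprocess.Popen(cmd) on a gateway host with Spark installed.
cmd = build_spark_submit("/opt/jobs/etl.jar", "com.example.Etl",
                         "yarn", "--date", "2017-06-01")
```

Because the child process is started through the normal spark-submit entry point, no launcher socket is opened on the local machine.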

Addressed in release/refresh/patch:

  • CDH: 5.9.3, 5.10.2, 5.11.2, 5.12.0
  • Cloudera's Distribution of Apache Spark: 2.1 Release 2, 2.2 Release 1

Keystore password for Spark History Server not properly secured

Products affected: Cloudera Manager, Spark

Releases affected: 5.11.0

Users affected: All users with TLS enabled for the Spark History Server.

Date/time of detection: April 18, 2017

Severity (Low/Medium/High): Medium

Impact: The keystore password for the Spark History Server is exposed in a world-readable file on the machine running the Spark History Server. The keystore file itself is not exposed.

The password is also visible in the Cloudera Manager Admin Console in the configuration files for the Spark History Server process.

CVE: CVE-2017-9326

Immediate action required: Upgrade to Cloudera Manager 5.11.1.

Addressed in release/refresh/patch: 5.11.1 or higher.

For the latest update on this issue see the Cloudera Knowledge article, TSB 2017-237: Keystore password for the Spark History Server not properly secured.

Cloudera Distribution of Apache Spark 2

This section lists the security bulletins that have been released for Cloudera Distribution of Apache Spark 2.

Unsafe deserialization in Apache Spark launcher API

In Apache Spark versions 1.6.0 through 2.1.1, the launcher API performs unsafe deserialization of data received on its socket. This makes applications launched programmatically with the SparkLauncher#startApplication() API potentially vulnerable to arbitrary code execution by an attacker with access to any user account on the local machine. It does not affect applications run by spark-submit or spark-shell. The attacker would be able to execute code as the user that ran the Spark application. Users are encouraged to update to Spark version 2.2.0 or later.

Products affected: Cloudera Distribution of Apache Spark 2 and Spark in CDH.

Releases affected:

  • CDH: 5.9.0-5.9.2, 5.10.0-5.10.1, 5.11.0-5.11.1
  • Cloudera's Distribution of Apache Spark: 2.0 Release 1, 2.0 Release 2, 2.1 Release 1

Users affected: All

Date/time of detection: June 1, 2017

Detected by: Aditya Sharad (Semmle)

Severity (Low/Medium/High): Medium

Impact: Privilege escalation to the user who ran the Spark application.

CVE: CVE-2017-12612

Immediate action required: Affected customers should upgrade to the latest maintenance release containing the fix:

  • Spark 1.x users: Update to CDH 5.9.3, 5.10.2, 5.11.2, or 5.12.0 or later
  • Spark 2.x users: Update to Cloudera's Distribution of Apache Spark 2.1 Release 2 or later, or 2.2 Release 1 or later
  • Or, discontinue use of the programmatic launcher API.

Addressed in release/refresh/patch:

  • CDH: 5.9.3, 5.10.2, 5.11.2, 5.12.0
  • Cloudera's Distribution of Apache Spark: 2.1 Release 2, 2.2 Release 1

Apache ZooKeeper

This section lists the security bulletins that have been released for Apache ZooKeeper.

Buffer Overflow Vulnerability in ZooKeeper C Command-Line Interface (CLI)

Products affected: ZooKeeper

Releases affected: All CDH 5.x versions lower than CDH 5.9.

Users affected: ZooKeeper users using the C CLI

Date/time of detection: September 21, 2016

Severity (Low/Medium/High): Low

Impact: The ZooKeeper C client shells cli_st and cli_mt have a buffer overflow vulnerability associated with parsing of the input command when using the cmd:<cmd> batch mode syntax. If the command string exceeds 1024 characters, a buffer overflow occurs. There is no known compromise that takes advantage of this vulnerability, and if security is enabled, the attacker is limited by client-level security constraints.
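A wrapper script invoking the C CLI could reject over-long batch commands before they reach the vulnerable buffer. An illustrative guard (this is not the actual client code, and the exact limit semantics are assumed from the description above):

```python
# The C client shells overflow when the command passed via the cmd:<cmd>
# batch syntax exceeds a fixed 1024-character buffer.
MAX_CMD = 1024

def safe_batch_arg(arg: str) -> str:
    """Pass through a CLI argument, rejecting over-long cmd: batch commands."""
    if arg.startswith("cmd:") and len(arg) - len("cmd:") > MAX_CMD:
        raise ValueError("batch command exceeds %d characters" % MAX_CMD)
    return arg

ok = safe_batch_arg("cmd:ls /")
```

The supported remediation remains the Java CLI; this guard only illustrates the trigger condition.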

CVE: CVE-2016-5017

Immediate action required: Use the fully featured/supported Java CLI rather than the C CLI. This can be accomplished by executing the zookeeper-client command on hosts running the ZooKeeper server role.

Addressed in release/refresh/patch: CDH 5.9.0