Long-term component architecture
As the main curator of open standards in Hadoop, Cloudera has a track record of bringing new open source solutions into its platform (such as Apache Spark, Apache HBase, and Apache Parquet) that are eventually adopted by the community at large. Because these components are standards, you can build long-term architectures on them with confidence.
With the exception of DSSD support, Cloudera Enterprise 5.6.0 is identical to CDH 5.5.2/Cloudera Manager 5.5.3. If you do not need DSSD support and are already running the latest 5.5.x release, you do not need to upgrade.
- System Requirements
- What's New
- Supported Operating Systems
- Supported Databases
- Supported JDK Versions
- Supported Browsers
- Supported Internet Protocol
- Supported Transport Layer Security Versions
Supported Operating Systems
Note: All CDH and Cloudera Manager hosts that make up a logical cluster need to run on the same major OS release to be covered by Cloudera Support.
CDH 5 provides 64-bit packages for RHEL-compatible, SLES, Ubuntu, and Debian systems as listed below.
| Operating System | Version | Packages |
|---|---|---|
| Red Hat Enterprise Linux (RHEL)-compatible | | |
| RHEL (+ SELinux mode in available versions) | 5.7 | 64-bit |
| CentOS (+ SELinux mode in available versions) | 5.7 | 64-bit |
| Oracle Enterprise Linux (OEL) with Unbreakable Enterprise Kernel (UEK) | 5.7 (UEK R2) | 64-bit |
| | 6.4 (UEK R2) | 64-bit |
| | 6.5 (UEK R2, UEK R3) | 64-bit |
| | 6.6 (UEK R3) | 64-bit |
| | 6.7 (UEK R3) | 64-bit |
| SUSE Linux Enterprise Server (SLES) | 11 with Service Pack 2 | 64-bit |
| | 11 with Service Pack 3 | 64-bit |
| | 11 with Service Pack 4 | 64-bit |
| Ubuntu | Precise 12.04 - Long-Term Support (LTS) | 64-bit |
| | Trusty 14.04 - Long-Term Support (LTS) | 64-bit |
| Debian | Wheezy 7.0, 7.1, and 7.8 | 64-bit |
Important: Cloudera supports RHEL 7 with the following limitations:
- Only RHEL 7.2 and 7.1 are supported. RHEL 7.0 is not supported.
- RHEL 7.1 is only supported with CDH 5.5 and higher.
- RHEL 7.2 is only supported with CDH 5.7 and higher.
- Only new installations of RHEL 7.2 and 7.1 are supported by Cloudera. For upgrades to RHEL 7.1 or 7.2, contact your OS vendor and see Does Red Hat support upgrades between major versions of Red Hat Enterprise Linux?
- Cloudera Enterprise is supported on platforms with Security-Enhanced Linux (SELinux) enabled. Cloudera is not responsible for policy support nor policy enforcement. If you experience issues with SELinux, contact your OS provider.
- CDH 5.8 DataNode hosts with EMC® DSSD™ D5™ are supported on RHEL 6.6, 7.1, and 7.2.
Supported Databases

| Component | MariaDB | MySQL | SQLite | PostgreSQL | Oracle | Derby (see Note 5) |
|---|---|---|---|---|---|---|
| Oozie | 5.5 | 5.1, 5.5, 5.6, 5.7 | – | 8.1, 8.3, 8.4, 9.1, 9.2, 9.3, 9.4 (see Note 3) | | |
| Flume | – | – | – | – | – | Default (for the JDBC Channel only) |
| Hue | 5.5 | 5.1, 5.5, 5.6, 5.7 (see Note 6) | Default | 8.1, 8.3, 8.4, 9.1, 9.2, 9.3, 9.4 (see Note 3) | | |
| Hive/Impala | 5.5 | 5.1, 5.5, 5.6, 5.7 (see Note 1) | – | 8.1, 8.3, 8.4, 9.1, 9.2, 9.3, 9.4 (see Note 3) | | |
| Sentry | 5.5 | 5.1, 5.5, 5.6, 5.7 (see Note 1) | – | 8.1, 8.3, 8.4, 9.1, 9.2, 9.3, 9.4 (see Note 3) | | |
| Sqoop 1 | 5.5 | See Note 4 | – | See Note 4 | See Note 4 | – |
1. MySQL 5.5 is supported on CDH 5.1. MySQL 5.6 is supported on CDH 5.1 and higher. The InnoDB storage engine must be enabled in the MySQL server.
2. Cloudera Manager installation fails if GTID-based replication is enabled in MySQL.
3. PostgreSQL 9.2 is supported on CDH 5.1 and higher. PostgreSQL 9.3 is supported on CDH 5.2 and higher. PostgreSQL 9.4 is supported on CDH 5.5 and higher.
4. For purposes of transferring data only, Sqoop 1 supports MySQL 5.0 and above, PostgreSQL 8.4 and above, Oracle 10.2 and above, Teradata 13.10 and above, and Netezza TwinFin 5.0 and above. The Sqoop metastore works only with HSQLDB (1.8.0 and higher 1.x versions; the metastore does not work with any HSQLDB 2.x versions).
5. Derby is supported as shown in the table, but not always recommended. See the pages for individual components in the Cloudera Installation and Upgrade guide for recommendations.
6. CDH 5 Hue requires the default MySQL version of the operating system on which it is installed, which is usually MySQL 5.1, 5.5, or 5.6.
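As an illustration only (this is not a Cloudera tool), the support matrix above can be transcribed into a lookup so a deployment script can sanity-check a planned backing database before installation. The function name and data structure here are hypothetical, and the table is transcribed as shown above:

```python
# Database support matrix transcribed from the table above (sketch only).
_PG = {"8.1", "8.3", "8.4", "9.1", "9.2", "9.3", "9.4"}
_MYSQL = {"5.1", "5.5", "5.6", "5.7"}

SUPPORTED_DBS = {
    "Oozie":       {"MariaDB": {"5.5"}, "MySQL": _MYSQL, "PostgreSQL": _PG},
    "Hue":         {"MariaDB": {"5.5"}, "MySQL": _MYSQL,
                    "SQLite": {"Default"}, "PostgreSQL": _PG},
    "Hive/Impala": {"MariaDB": {"5.5"}, "MySQL": _MYSQL, "PostgreSQL": _PG},
    "Sentry":      {"MariaDB": {"5.5"}, "MySQL": _MYSQL, "PostgreSQL": _PG},
}

def is_supported(component, db, version):
    """Return True if the matrix above lists `version` of `db` for `component`."""
    return version in SUPPORTED_DBS.get(component, {}).get(db, set())
```

Note that this check encodes only the table itself; CDH-version gating (Notes 1 and 3) would still need to be applied separately.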
Supported JDK Versions
A supported minor JDK release will remain supported throughout a Cloudera major release lifecycle, from the time of its addition forward, unless specifically excluded.
Warning: JDK 1.8u40 and JDK 1.8u60 are excluded from support. Also, the Oozie Web Console returns a 500 error when the Oozie server runs on JDK 8u75 or higher.
Running CDH nodes within the same cluster on different JDK releases is not supported. The JDK release must match across the cluster, down to the patch level:
- All nodes in your cluster must run the same Oracle JDK version.
- All services must be deployed on the same Oracle JDK version.
The Cloudera Manager repository is packaged with a specific Oracle JDK (for example, JDK 1.7.0_67), which can be installed automatically during a new installation or an upgrade.
For a full list of supported JDK versions, see CDH and Cloudera Manager Supported JDK Versions.
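The cluster-wide consistency requirement above lends itself to a simple pre-flight check. The following sketch is hypothetical (not a Cloudera utility): it assumes you have collected the first line of `java -version` output from each host, and flags both mixed releases and the excluded JDK 1.8.0_40/1.8.0_60 builds:

```python
import re

# Excluded releases per the warning above (1.8u40 and 1.8u60); the
# underscore form matches how `java -version` prints the update number.
EXCLUDED = {"1.8.0_40", "1.8.0_60"}
BANNER = re.compile(r'java version "([^"]+)"')

def check_cluster_jdks(banners):
    """banners: {hostname: first line of `java -version` output}.
    Returns a list of human-readable problems; empty means consistent."""
    versions = {}
    for host, banner in banners.items():
        m = BANNER.search(banner)
        versions[host] = m.group(1) if m else None
    problems = []
    if len(set(versions.values())) > 1:
        problems.append("mixed JDK releases: %s" % sorted(versions.items()))
    for host, version in sorted(versions.items()):
        if version in EXCLUDED:
            problems.append("%s runs excluded release %s" % (host, version))
    return problems
```

A host inventory could feed this from `ssh <host> java -version 2>&1 | head -1`, but how you gather the banners is up to your tooling.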
Supported Browsers

- Safari (not supported on Windows)
- Internet Explorer

Hue might render in older versions and in other browsers, but not all of its features are guaranteed to work.
Supported Internet Protocol
CDH requires IPv4. IPv6 is not supported.
See also Configuring Network Names.
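Because CDH requires IPv4, a pre-flight check that each cluster hostname resolves to an IPv4 address can catch misconfigured DNS early. This is an illustrative sketch, assuming hostnames come from your own inventory:

```python
import socket

def resolves_to_ipv4(hostname):
    """True if `hostname` resolves to at least one IPv4 address.
    AF_INET restricts resolution to IPv4; an IPv6-only name fails."""
    try:
        socket.getaddrinfo(hostname, None, socket.AF_INET)
        return True
    except socket.gaierror:
        return False
```

Running this against every host in the cluster before installation surfaces IPv6-only entries that would otherwise fail later.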
Multihoming CDH or Cloudera Manager is not supported outside specifically certified Cloudera partner appliances. Cloudera finds that current Hadoop architectures combined with modern network infrastructures and security practices remove the need for multihoming. Multihoming, however, is beneficial internally in appliance form factors to take advantage of high-bandwidth InfiniBand interconnects.
Although some subareas of the product may work with unsupported custom multihoming configurations, there are known issues with multihoming. In addition, unknown issues may arise because multihoming is not covered by our test matrix outside the Cloudera-certified partner appliances.
Supported Transport Layer Security Versions
The following components support the indicated versions of Transport Layer Security (TLS):

| Component | Role | Name | Port | Version |
|---|---|---|---|---|
| Flume | | Avro Source/Sink | | TLS 1.2 |
| Flume | | Flume HTTP Source/Sink | | TLS 1.2 |
| HBase | Master | HBase Master Web UI Port | 60010 | TLS 1.2 |
| HDFS | NameNode | Secure NameNode Web UI Port | 50470 | TLS 1.2 |
| HDFS | Secondary NameNode | Secure Secondary NameNode Web UI Port | 50495 | TLS 1.2 |
| HDFS | HttpFS | REST Port | 14000 | TLS 1.1, TLS 1.2 |
| Hive | HiveServer2 | HiveServer2 Port | 10000 | TLS 1.2 |
| Hue | Hue Server | Hue HTTP Port | 8888 | TLS 1.2 |
| Cloudera Impala | Impala Daemon | Impala Daemon Beeswax Port | 21000 | TLS 1.2 |
| Cloudera Impala | Impala Daemon | Impala Daemon HiveServer2 Port | 21050 | TLS 1.2 |
| Cloudera Impala | Impala Daemon | Impala Daemon Backend Port | 22000 | TLS 1.2 |
| Cloudera Impala | Impala Daemon | Impala Daemon HTTP Server Port | 25000 | TLS 1.2 |
| Cloudera Impala | Impala StateStore | StateStore Service Port | 24000 | TLS 1.2 |
| Cloudera Impala | Impala StateStore | StateStore HTTP Server Port | 25010 | TLS 1.2 |
| Cloudera Impala | Impala Catalog Server | Catalog Server HTTP Server Port | 25020 | TLS 1.2 |
| Cloudera Impala | Impala Catalog Server | Catalog Server Service Port | 26000 | TLS 1.2 |
| Oozie | Oozie Server | Oozie HTTPS Port | 11443 | TLS 1.1, TLS 1.2 |
| Solr | Solr Server | Solr HTTP Port | 8983 | TLS 1.1, TLS 1.2 |
| Solr | Solr Server | Solr HTTPS Port | 8985 | TLS 1.1, TLS 1.2 |
| YARN | ResourceManager | ResourceManager Web Application HTTP Port | 8090 | TLS 1.2 |
| YARN | JobHistory Server | MRv1 JobHistory Web Application HTTP Port | 19890 | TLS 1.2 |
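To confirm that a TLS-enabled service actually negotiates one of the versions listed above, you can probe the port directly. This is a hypothetical verification sketch, not part of the product; hostnames and ports in the usage note are examples only:

```python
import socket
import ssl

_TLS_ORDER = ["TLSv1", "TLSv1.1", "TLSv1.2", "TLSv1.3"]

def meets_minimum(negotiated, required):
    """Compare TLS version strings as returned by ssl.SSLSocket.version()."""
    return _TLS_ORDER.index(negotiated) >= _TLS_ORDER.index(required)

def negotiated_tls_version(host, port, timeout=5.0):
    """Handshake with host:port and return the negotiated version, e.g. 'TLSv1.2'."""
    context = ssl.create_default_context()
    # Cluster certificates are often signed by an internal CA; for a
    # version probe only, skip verification (never do this for real traffic).
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE
    with socket.create_connection((host, port), timeout=timeout) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return tls.version()
```

For example, `meets_minimum(negotiated_tls_version("hue-host.example.com", 8888), "TLSv1.2")` would check a Hue Server against the table's TLS 1.2 entry (note that `ssl` reports versions as `TLSv1.2`, not `TLS 1.2`).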
Upstream Issues Fixed
The following upstream issues are fixed in CDH 5.8.3:
- FLUME-2797 - Use SourceCounter for SyslogTcpSource
- FLUME-2844 - SpillableMemoryChannel must start ChannelCounter
- HADOOP-12548 - Read s3a credentials from a Credential Provider
- HADOOP-13353 - LdapGroupsMapping getPassward should not return null when IOException throws
- HADOOP-13526 - Add detailed logging in KMS for the authentication failure of proxy user
- HADOOP-13558 - UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket
- HADOOP-13579 - Fix source-level compatibility after HADOOP-11252
- HADOOP-13638 - KMS should set UGI's Configuration object properly
- HDFS-7415 - Move FSNameSystem.resolvePath() to FSDirectory
- HDFS-7420 - Delegate permission checks to FSDirectory
- HDFS-7463 - Simplify FSNamesystem#getBlockLocationsUpdateTimes
- HDFS-7478 - Move org.apache.hadoop.hdfs.server.namenode.NNConf to FSNamesystem
- HDFS-7517 - Remove redundant non-null checks in FSNamesystem#getBlockLocations
- HDFS-8224 - Schedule a block for scanning if its metadata file is corrupt
- HDFS-8269 - getBlockLocations() does not resolve the .reserved path and generates incorrect edit logs when updating the atime
- HDFS-9601 - NNThroughputBenchmark.BlockReportStats should handle NotReplicatedYetException on adding block.
- HDFS-9781 - FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
- HDFS-10641 - TestBlockManager#testBlockReportQueueing fails intermittently
- HDFS-10879 - TestEncryptionZonesWithKMS#testReadWrite fails intermittently
- HDFS-10962 - TestRequestHedgingProxyProvider fails intermittently
- HDFS-10963 - Reduce log level when network topology cannot find enough datanodes
- MAPREDUCE-6628 - Potential memory leak in CryptoOutputStream
- MAPREDUCE-6641 - TestTaskAttempt fails in trunk
- MAPREDUCE-6718 - Add progress log to JHS during startup
- MAPREDUCE-6771 - RMContainerAllocator sends container diagnostics event after corresponding completion event
- YARN-4940 - yarn node -list -all fails if RM starts with decommissioned node
- HBASE-15856 - Addendum Fix UnknownHostException import in MetaTableLocator
- HBASE-15856 - Do not cache unresolved addresses for connections
- HBASE-16294 - hbck reporting "No HDFS region dir found" for replicas
- HBASE-16699 - Overflows in AverageIntervalRateLimiter's refill() and getWaitInterval()
- HBASE-16767 - Mob compaction needs to clean up files in /hbase/mobdir/.tmp and /hbase/mobdir/.tmp/.bulkload when running into IO exceptions
- HIVE-9570 - Investigate test failure on union_view.q
- HIVE-10965 - Direct SQL for stats fails in 0-column case
- HIVE-12083 - HIVE-10965 introduces thrift error if partNames or colNames are empty
- HIVE-12475 - Parquet schema evolution within array<struct<>> does not work
- HIVE-12785 - View with union type and UDF to the struct is broken
- HIVE-13058 - Add session and operation_log directory deletion messages
- HIVE-13198 - Authorization issues with cascading views
- HIVE-13237 - Select parquet struct field with upper case throws NPE
- HIVE-13620 - Merge llap branch work to master
- HIVE-13625 - Hive Prepared Statement when executed with escape characters in parameter fails
- HIVE-13645 - Beeline needs null-guard around hiveVars and hiveConfVars read
- HIVE-14296 - Session count is not decremented when HS2 clients do not shutdown cleanly
- HIVE-14383 - SparkClientImpl should pass principal and keytab to spark-submit instead of calling kinit explicitly
- HIVE-14715 - Hive throws NumberFormatException with query with Null value
- HIVE-14743 - ArrayIndexOutOfBoundsException - HBASE-backed views' query with JOINs
- HIVE-14784 - Operation logs are disabled automatically if the parent directory does not exist.
- HIVE-14805 - Subquery inside a view will have the object in the subquery as the direct input
- HUE-4064 - Format creation and update date on the table details popover
- HUE-4138 - Last modified time of a saved query is not in the correct timezone
- HUE-4141 - Graph breaks for external workflows when there is more than one kill node
- HUE-4804 - Download function of HTML widget breaks the display
- HUE-4809 - Add trustore parameters only if SSL is turned on
- HUE-4809 - Only add trustore paths when they are actually existing
- HUE-4810 - Fix tests by setting data to valid JSON type
- HUE-4871 - An unprivileged user can enumerate users
- HUE-4891 - An unprivileged user can list document items
- HUE-4916 - Truncate last name to 30 chars on ldap import
- HUE-4968 - Remove access to /oozie/import_wokflow when v2 is enabled
- HUE-4994 - Consider default path for decision nodes in dashboard graph
- HUE-5041 - Hue export large file to HDFS does not work on non-default database
- IMPALA-1619 - Support 64-bit allocations
- IMPALA-3687 - Prefer Avro field name during schema reconciliation
- IMPALA-3751 - Fix clang build errors and warnings
- IMPALA-4135 - Thrift threaded server times-out connections during high load
- IMPALA-4170 - Fix identifier quoting in COMPUTE INCREMENTAL STATS
- IMPALA-4180 - Synchronize accesses to RuntimeState::reader_contexts_
- IMPALA-4196 - Cross compile bit-byte-functions
- IMPALA-4237 - Fix materialization of 4-byte decimals in data source scan node
- OOZIE-1814 - Oozie should mask any passwords in logs and REST interfaces
- SOLR-9310 - PeerSync fails on a node restart due to IndexFingerPrint mismatch
- SPARK-12009 - Avoid reallocating YARN container when driver wants to stop all Executors
- SPARK-12392 - Optimize a location order of broadcast blocks by considering preferred local hosts
- SPARK-12941 - Spark-SQL JDBC Oracle dialect fails to map string datatypes to Oracle VARCHAR datatype mapping
- SPARK-12941 - Spark-SQL JDBC Oracle dialect fails to map string datatypes to Oracle VARCHAR datatype
- SPARK-13328 - Poor read performance for broadcast variables with dynamic resource allocation
- SPARK-16625 - General data types to be mapped to Oracle
- SPARK-16711 - YarnShuffleService doesn't re-init properly on YARN rolling upgrade
- SPARK-17171 - DAG will list all partitions in the graph
- SPARK-17433 - YarnShuffleService doesn't handle moving credentials levelDb
- SPARK-17611 - Make shuffle service test really test authentication
- SPARK-17644 - Do not add failedStages when abortStage for fetch failure
- SPARK-17696 - Partial backport of to branch-1.6.
- SQOOP-3021 - ClassWriter fails if a column name contains a backslash character