Long-term component architecture
As the main curator of open standards in Hadoop, Cloudera has a track record of bringing new open source solutions into its platform (such as Apache Spark, Apache HBase, and Apache Parquet) that are eventually adopted by the community at large. Because these components become standards, you can build long-term architecture on them with confidence.
With the exception of DSSD support, Cloudera Enterprise 5.6.0 is identical to CDH 5.5.2/Cloudera Manager 5.5.3. If you do not need DSSD support and are already using the latest 5.5.x release, you do not need to upgrade.
- System Requirements
- What's New
- Supported Operating Systems
- Supported Databases
- Supported JDK Versions
- Supported Internet Protocol
Supported Operating Systems
CDH 5 provides packages for Red Hat-compatible, SLES, Ubuntu, and Debian systems, as described below.
|Operating System|Version|Packages|
|---|---|---|
|Red Hat Enterprise Linux (RHEL)-compatible| | |
|Red Hat Enterprise Linux|5.7|64-bit|
| |6.4 in SE Linux mode|64-bit|
|Oracle Linux with default kernel and Unbreakable Enterprise Kernel|5.6 (UEK R2)|64-bit|
| |6.4 (UEK R2)|64-bit|
| |6.5 (UEK R2, UEK R3)|64-bit|
|SUSE Linux Enterprise Server (SLES)|11 with Service Pack 2 or later|64-bit|
|Ubuntu|Precise (12.04) - Long-Term Support (LTS)|64-bit|
| |Trusty (14.04) - Long-Term Support (LTS)|64-bit|
|Debian|Wheezy (7.0, 7.1)|64-bit|
- CDH 5 provides only 64-bit packages.
- Cloudera has received reports that our RPMs work well on Fedora, but we have not tested this.
- If you are using an operating system that is not supported by Cloudera packages, you can also download source tarballs from Downloads.
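Before installing packages, it can help to confirm that a host actually meets the requirements above. The following is a minimal sketch using only standard Linux tools (no Cloudera-specific commands); it checks for a 64-bit architecture and reports the distribution so you can match it against the table:

```shell
# CDH 5 ships 64-bit packages only, so check the machine architecture first.
arch="$(uname -m)"
case "$arch" in
  x86_64) echo "64-bit architecture detected: $arch" ;;
  *)      echo "WARNING: $arch is not 64-bit x86; CDH 5 packages will not install" ;;
esac

# /etc/os-release identifies the distribution and version on modern
# RHEL-compatible, SLES, Ubuntu, and Debian systems.
if [ -r /etc/os-release ]; then
  . /etc/os-release
  echo "Distribution: $NAME $VERSION_ID"
fi
```

Compare the reported distribution and version against the supported-OS table before selecting a package repository; repository setup itself is covered in the Cloudera Installation and Upgrade guide.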
|Component|MySQL|SQLite|PostgreSQL|Oracle|Derby (see Note 4)|
|---|---|---|---|---|---|
|Oozie|5.5, 5.6|-|8.4, 9.1, 9.2, 9.3 (see Note 2)| | |
|Flume|-|-|-|-|Default (for the JDBC Channel only)|
| |See Note 1|Default|8.4, 9.1, 9.2, 9.3 (see Note 2)|See Note 1| |
| | |-|8.4, 9.1, 9.2, 9.3 (see Note 2)|See Note 1| |
| | |-|8.4, 9.1, 9.2, 9.3 (see Note 2)| | |
|Sqoop 1|See Note 3|-|See Note 3|See Note 3|-|
|Sqoop 2|See Note 4|-|See Note 4|See Note 4|Default|
- MySQL 5.5 is supported on CDH 5.1. MySQL 5.6 is supported on CDH 5.1 and later.
- PostgreSQL 9.2 is supported on CDH 5.1 and later. PostgreSQL 9.3 is supported on CDH 5.2 and later.
- For the purposes of transferring data only, Sqoop 1 supports MySQL 5.0 and above, PostgreSQL 8.4 and above, Oracle 10.2 and above, Teradata 13.10 and above, and Netezza TwinFin 5.0 and above. The Sqoop metastore works only with HSQLDB (1.8.0 and higher 1.x versions; the metastore does not work with any HSQLDB 2.x versions).
- Sqoop 2 can transfer data to and from MySQL 5.0 and above, PostgreSQL 8.4 and above, Oracle 10.2 and above, and Microsoft SQL Server 2012 and above. The Sqoop 2 repository database is supported only on Derby.
- Derby is supported as shown in the table, but not always recommended. See the pages for individual components in the Cloudera Installation and Upgrade guide for recommendations.
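The version ranges in Note 3 apply to data transfer only. As a concrete illustration, a minimal Sqoop 1 import from MySQL into HDFS might look like the following command sketch; the host, database, table, and user names are hypothetical placeholders, and running it requires a configured Hadoop cluster with the MySQL JDBC driver on the Sqoop classpath:

```shell
# Hypothetical Sqoop 1 import from MySQL into HDFS (sketch only).
# db.example.com, sales, orders, and etl_user are placeholder names.
sqoop import \
  --connect jdbc:mysql://db.example.com/sales \
  --username etl_user -P \
  --table orders \
  --target-dir /user/etl/orders \
  --num-mappers 4
```

The `-P` flag prompts for the password interactively rather than exposing it on the command line, and `--num-mappers` controls how many parallel map tasks split the transfer.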
Supported JDK Versions
CDH 5 is supported with the JDK versions shown in the following table.
Table 1. Supported JDK Versions
|Latest Certified Version|Minimum Supported Version|Exceptions|
|---|---|---|
Supported Internet Protocol
Known Issues Fixed in CDH 5.3.4
Upstream Issues Fixed
The following upstream issues are fixed in CDH 5.3.4:
- HDFS-7980 - Incremental BlockReport will dramatically slow down the startup of a namenode
- HDFS-8380 - Always call addStoredBlock on blocks which have been shifted from one storage to another
- HDFS-7645 - Rolling upgrade is restoring blocks from trash multiple times
- HDFS-7869 - Inconsistency in the return information while performing rolling upgrade
- HDFS-7340 - make rollingUpgrade start/finalize idempotent
- HDFS-7312 - Update DistCp v1 to optionally not use tmp location (branch-1 only)
- HDFS-7530 - Allow renaming of encryption zone roots
- HDFS-7587 - Edit log corruption can happen if append fails with a quota violation
- YARN-3485 - FairScheduler headroom calculation doesn't consider maxResources for Fifo and FairShare policies
- YARN-3491 - PublicLocalizer#addResource is too slow.
- YARN-3021 - YARN's delegation-token handling disallows certain trust setups to operate properly over DistCp
- YARN-3241 - FairScheduler handles "invalid" queue names inconsistently
- YARN-3022 - Expose Container resource information from NodeManager for monitoring
- YARN-2984 - Metrics for container's actual memory usage
- YARN-3465 - Use LinkedHashMap to preserve order of resource requests
- MAPREDUCE-6339 - Job history file is not flushed correctly because isTimerActive flag is not set true when flushTimerTask is scheduled.
- MAPREDUCE-5710 - Backport MAPREDUCE-1305 to branch-1
- MAPREDUCE-6238 - MR2 can't run local jobs with -libjars command options which is a regression from MR1
- MAPREDUCE-6076 - Zero map split input length combined with nonzero map split input length may sometimes cause an MR1 job to hang
- HBASE-13374 - Small scanners (with particular configurations) do not return all rows
- HBASE-13269 - Limit result array preallocation to avoid OOME with large scan caching values
- HBASE-13422 - remove use of StandardCharsets in 0.98
- HBASE-13335 - Update ClientSmallScanner and ClientSmallReversedScanner
- HBASE-13262 - ResultScanner doesn't return all rows in Scan
- HIVE-10646 - ColumnValue does not handle NULL_TYPE
- HIVE-10453 - HS2 leaking open file descriptors when using UDFs
- HIVE-9655 - Dynamic partition table insertion error
- HIVE-10452 - Followup fix for HIVE-10202 to restrict it for script mode
- HIVE-10312 - SASL.QOP in JDBC URL is ignored for Delegation token Authentication
- HIVE-10202 - Beeline outputs prompt+query on standard output when used in non-interactive mode
- HIVE-10087 - Beeline's --silent option should suppress query from being echoed when running with -f option
- HIVE-10085 - Lateral view on top of a view throws RuntimeException
- HIVE-2828 - make timestamp accessible in the hbase KeyValue
- HUE-2741 - [home] Hide the document move dialog
- HUE-2732 - Hue isn't correctly doing add_column migrations with non-blank defaults
- HUE-2513 - [fb] File list column sorting is broken
- IMPALA-1519 - Fix wrapping of exprs via a TupleIsNullPredicate with analytics
- IMPALA-1952 - Expand parsing of decimals to include scientific notation
- IMPALA-1860 - INSERT/CTAS evaluates and applies constant predicates.
- IMPALA-1900 - Assign predicates below analytic functions with a compatible partition by clause
- IMPALA-1376 - Split up Planner into multiple classes.
- IMPALA-1888 - FIRST_VALUE may produce incorrect results with preceding windows
- IMPALA-1559 - FIRST_VALUE rewrite fn type might not match slot type
- IMPALA-1808 - AnalyticEvalNode cannot handle partition/order by exprs with NaN
- IMPALA-1562 - AnalyticEvalNode not properly handling nullable tuples
- OOZIE-2063 - Cron syntax creates duplicate actions
- OOZIE-2218 - META-INF directories in the war file have 777 permissions
- OOZIE-1878 - Can't execute dryrun on the CLI
- SENTRY-696 - Improve Metastoreplugin Cache Initialization time
- SENTRY-703 - Calls to add_partition fail when passed a Partition object with a null location
- SENTRY-408 - The URI permission should support more filesystem prefixes
- SENTRY-598 - Hive binding should support enforcing URI privilege for transforms
- SOLR-7478 - UpdateLog#close shuts down its executor with interrupts before running close, preventing a clean close
- SOLR-7437 - Make HDFS transaction log replication factor configurable.
- SOLR-7338 - A reloaded core will never register itself as active after a ZK session expiration
- SOLR-7370 - FSHDFSUtils#recoverFileLease tries to recover the lease every one second after the first four second wait.
- SPARK-6578 - Outbound channel in network library is not thread-safe, can lead to fetch failures
- SQOOP-2343 - AsyncSqlRecordWriter gets stuck if any exception is thrown in its close method
- SQOOP-2286 - Ensure Sqoop generates valid avro column names
- SQOOP-2283 - Support usage of --exec and --password-alias
- SQOOP-2281 - Set overwrite on kite dataset
- SQOOP-2282 - Add validation check for --hive-import and --append
- SQOOP-2257 - Parquet target for imports with Hive overwrite option does not work
- ZOOKEEPER-2146 - BinaryInputArchive readString should check length before allocating memory
- ZOOKEEPER-2149 - Logging of client address when socket connection established
Published Known Issues Fixed
As a result of the fixes described above, the following issue, previously published in Known Issues in CDH 5, is also fixed.
- Executing `oozie job -config` *properties file* `-dryrun` fails because of a code defect in argument parsing
Want to Get Involved or Learn More?
Check out our other resources
Receive expert Hadoop training through Cloudera University, the industry's only truly dynamic Hadoop training curriculum that’s updated regularly to reflect the state of the art in big data.