Long-term component architecture

As the main curator of open standards in Hadoop, Cloudera has a track record of bringing new open source solutions into its platform (such as Apache Spark, Apache HBase, and Apache Parquet) that are eventually adopted by the community at large. Because these components become standards, you can build long-term architecture on them with confidence.

 

PLEASE NOTE:

With the exception of DSSD support, Cloudera Enterprise 5.6.0 is identical to CDH 5.5.2/Cloudera Manager 5.5.3. If you do not need DSSD support and are already running the latest 5.5.x release, you do not need to upgrade.

 

CDH 5 provides packages for RHEL-compatible, SLES, Ubuntu, and Debian systems, as described below.

Operating System                                        Version (all 64-bit)
Red Hat Enterprise Linux (RHEL)-compatible
  Red Hat Enterprise Linux                              5.7, 6.2, 6.4, 6.4 (SELinux mode), 6.5
  CentOS                                                5.7, 6.2, 6.4, 6.4 (SELinux mode), 6.5
  Oracle Linux (default kernel and
  Unbreakable Enterprise Kernel)                        5.6 (UEK R2), 6.4 (UEK R2), 6.5 (UEK R2, UEK R3)
SLES
  SUSE Linux Enterprise Server (SLES)                   11 with Service Pack 2 or later
Ubuntu/Debian
  Ubuntu                                                Precise (12.04) LTS, Trusty (14.04) LTS
  Debian                                                Wheezy (7.0, 7.1)

Note:

  • CDH 5 provides only 64-bit packages.
  • Cloudera has received reports that our RPMs work well on Fedora, but we have not tested this.
  • If you are using an operating system that is not supported by Cloudera packages, you can also download source tarballs from Downloads.
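For scripted environments, the RHEL/CentOS rows of the table above can be checked programmatically. The sketch below is illustrative only (the function name and the hard-coded version set are assumptions, not Cloudera tooling): it matches a /etc/redhat-release string and a machine architecture against the table.

```python
# Hypothetical helper: validate a RHEL/CentOS host against the supported
# versions listed in the table above. Not part of any Cloudera tool.
import re

SUPPORTED_RHEL_VERSIONS = {"5.7", "6.2", "6.4", "6.5"}

def is_supported_rhel(release_line, arch):
    """True if the /etc/redhat-release text and arch match the table above."""
    if arch != "x86_64":  # CDH 5 provides only 64-bit packages
        return False
    match = re.search(r"release (\d+\.\d+)", release_line)
    return bool(match) and match.group(1) in SUPPORTED_RHEL_VERSIONS
```

For example, `is_supported_rhel("CentOS release 6.5 (Final)", "x86_64")` returns True, while a 32-bit host or an unlisted release fails the check.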

 

Supported Databases

Component     MySQL                   SQLite    PostgreSQL                         Oracle      Derby (see Note 5)
Oozie         5.5, 5.6                -         8.4, 9.1, 9.2, 9.3 (see Note 2)    11gR2       Default
Flume         -                       -         -                                  -           Default (JDBC Channel only)
Hue           5.5, 5.6 (see Note 1)   Default   8.4, 9.1, 9.2, 9.3 (see Note 2)    11gR2       -
Hive/Impala   5.5, 5.6 (see Note 1)   -         8.4, 9.1, 9.2, 9.3 (see Note 2)    11gR2       Default
Sentry        5.5, 5.6 (see Note 1)   -         8.4, 9.1, 9.2, 9.3 (see Note 2)    11gR2       -
Sqoop 1       See Note 3              -         See Note 3                         See Note 3  -
Sqoop 2       See Note 4              -         See Note 4                         See Note 4  Default

Note:

  1. MySQL 5.5 is supported on CDH 5.1. MySQL 5.6 is supported on CDH 5.1 and later.
  2. PostgreSQL 9.2 is supported on CDH 5.1 and later. PostgreSQL 9.3 is supported on CDH 5.2 and later.
  3. For the purposes of transferring data only, Sqoop 1 supports MySQL 5.0 and above, PostgreSQL 8.4 and above, Oracle 10.2 and above, Teradata 13.10 and above, and Netezza TwinFin 5.0 and above. The Sqoop metastore works only with HSQLDB (1.8.0 and higher 1.x versions; the metastore does not work with any HSQLDB 2.x versions).
  4. Sqoop 2 can transfer data to and from MySQL 5.0 and above, PostgreSQL 8.4 and above, Oracle 10.2 and above, and Microsoft SQL Server 2012 and above. The Sqoop 2 repository database is supported only on Derby.
  5. Derby is supported as shown in the table, but not always recommended. See the pages for individual components in the Cloudera Installation and Upgrade guide for recommendations.
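The matrix above lends itself to a data-driven lookup when validating deployment plans. The sketch below is an assumption-laden illustration, not a Cloudera API: component and database names mirror the table, and the CDH-release caveats in Notes 1 and 2 are deliberately omitted for brevity.

```python
# Illustrative support-matrix lookup; names and structure are assumptions.
# Only the explicit version rows from the table above are encoded.
SUPPORT_MATRIX = {
    ("Oozie", "MySQL"): {"5.5", "5.6"},
    ("Oozie", "PostgreSQL"): {"8.4", "9.1", "9.2", "9.3"},
    ("Oozie", "Oracle"): {"11gR2"},
    ("Hue", "MySQL"): {"5.5", "5.6"},
    ("Hue", "PostgreSQL"): {"8.4", "9.1", "9.2", "9.3"},
    ("Hue", "Oracle"): {"11gR2"},
    ("Hive/Impala", "MySQL"): {"5.5", "5.6"},
    ("Hive/Impala", "PostgreSQL"): {"8.4", "9.1", "9.2", "9.3"},
    ("Hive/Impala", "Oracle"): {"11gR2"},
    ("Sentry", "MySQL"): {"5.5", "5.6"},
    ("Sentry", "PostgreSQL"): {"8.4", "9.1", "9.2", "9.3"},
    ("Sentry", "Oracle"): {"11gR2"},
}

def is_supported(component, database, version):
    """True if the table above lists this component/database/version combination."""
    return version in SUPPORT_MATRIX.get((component, database), set())
```

For instance, `is_supported("Oozie", "MySQL", "5.6")` is True, while an unlisted pairing such as Flume with MySQL returns False.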

CDH 5 is supported with the JDK versions shown in the following table.

Table 1. Supported JDK Versions

Latest Certified Version    Minimum Supported Version    Exceptions
1.7.0_67                    1.7.0_67                     None
1.8.0_11                    1.8.0_11                     None
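A host's JDK can be checked against the minimums in Table 1 by parsing the version string that `java -version` prints. This is a hedged sketch under stated assumptions: the helper names are hypothetical, and parsing assumes the classic `1.major.minor_update` form used by JDK 7 and 8.

```python
# Illustrative JDK version check against Table 1; not Cloudera tooling.
import re

# Minimum supported update per JDK family, from Table 1 above.
MINIMUM_SUPPORTED = {"1.7": (1, 7, 0, 67), "1.8": (1, 8, 0, 11)}

def parse_jdk_version(text):
    """Turn a string such as '1.7.0_80' into the tuple (1, 7, 0, 80)."""
    m = re.match(r"(\d+)\.(\d+)\.(\d+)_(\d+)", text)
    if not m:
        raise ValueError("unrecognized JDK version: %r" % text)
    return tuple(int(part) for part in m.groups())

def meets_minimum(text):
    """True if the version is a supported family at or above its minimum."""
    version = parse_jdk_version(text)
    family = "%d.%d" % version[:2]
    minimum = MINIMUM_SUPPORTED.get(family)
    return minimum is not None and version >= minimum
```

For example, `meets_minimum("1.7.0_80")` passes, while `1.7.0_55` (below the 1.7.0_67 minimum) and any JDK 6 release fail.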


CDH requires IPv4. IPv6 is not supported.

See also Configuring Network Names.
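When auditing host configuration against the IPv4-only requirement, address literals can be classified with the standard library. The snippet below is a small illustrative check, not part of CDH:

```python
# Illustrative check that an address literal is IPv4, per the
# IPv4-only requirement above. Uses only the Python standard library.
import ipaddress

def is_ipv4(address):
    """True for IPv4 literals such as '10.0.0.1'; False for IPv6 or hostnames."""
    try:
        return isinstance(ipaddress.ip_address(address), ipaddress.IPv4Address)
    except ValueError:
        return False
```

Hostnames and IPv6 literals such as `::1` are rejected, so the check flags any configuration value that is not a plain IPv4 address.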


Upstream Issues Fixed

The following upstream issues are fixed in CDH 5.2.4:

  • HDFS-7707 - Edit log corruption due to delayed block removal again
  • YARN-2846 - Incorrect persisted exit code for running containers in reacquireContainer() that are interrupted by NodeManager restart.
  • HIVE-7733 - Ambiguous column reference error on query
  • HIVE-8444 - update pom to junit 4.11
  • HIVE-9474 - truncate table changes permissions on the target
  • HIVE-6308 - COLUMNS_V2 Metastore table not populated for tables created without an explicit column list.
  • HIVE-9445 - Revert HIVE-5700 - enforce single date format for partition column storage
  • HIVE-7800 - Parquet Column Index Access Schema Size Checking
  • HIVE-9393 - reduce noisy log level of ColumnarSerDe.java:116 from INFO to DEBUG
  • HUE-2501 - [metastore] Creating a table with header files bigger than 64MB truncates it
  • SOLR-7033 - RecoveryStrategy should not publish any state when closed / cancelled.
  • SOLR-5961 - Solr gets crazy on /overseer/queue state change
  • SOLR-6640 - Replication can cause index corruption
  • SOLR-6920 - During replication use checksums to verify if files are the same
  • SOLR-5875 - QueryComponent.mergeIds() unmarshals all docs' sort field values once per doc instead of once per shard
  • SOLR-6919 - Log REST info before executing
  • SOLR-6969 - When opening an HDFSTransactionLog for append we must first attempt to recover its lease to prevent data loss.
  • IMPALA-1471 - Bug in spilling of PHJ that was affecting left anti and outer joins.
  • IMPALA-1451 - Empty Row in HBase triggers NPE in Planner
  • IMPALA-1535 - Partition pruning with NULL
  • IMPALA-1483 - Substitute TupleIsNullPredicates to refer to physical analytic output.
  • IMPALA-1674 - Fix serious memory leak in TSaslTransport
  • IMPALA-1668 - Fix leak of transport objects in TSaslServerTransport::Factory
  • IMPALA-1565 - Python sasl client transport perf issue
  • IMPALA-1556 - Kerberos fetches 3x slower
  • IMPALA-1120 - Fetch column statistics using Hive 0.13 bulk API
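When triaging whether a particular upstream bug is already fixed in this release, the list above can be searched by JIRA key. The sketch below is an assumption: the helper name is hypothetical, and the regex relies only on the standard PROJECT-NUMBER key format.

```python
# Illustrative helper: extract JIRA keys (e.g. 'HDFS-7707') from release-note
# text so membership can be tested. Not a Cloudera tool.
import re

def fixed_issue_keys(release_notes_text):
    """Return the set of JIRA keys mentioned anywhere in the text."""
    return set(re.findall(r"\b[A-Z]+-\d+\b", release_notes_text))

# Two sample lines copied from the list above.
notes = """
HDFS-7707 - Edit log corruption due to delayed block removal again
IMPALA-1674 - Fix serious memory leak in TSaslTransport
"""
```

Checking `"HDFS-7707" in fixed_issue_keys(notes)` then answers whether the release notes mention that fix.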

