
Long-term component architecture

As the main curator of open standards in Hadoop, Cloudera has a track record of bringing new open source solutions into its platform (such as Apache Spark, Apache HBase, and Apache Parquet) that are eventually adopted by the community at large. Because these components are standards, you can build long-term architecture on them with confidence.

 

PLEASE NOTE:

With the exception of DSSD support, Cloudera Enterprise 5.6.0 is identical to CDH 5.5.2/Cloudera Manager 5.5.3. If you do not need DSSD support and are already using the latest 5.5.x release, you do not need to upgrade.

 

Supported Operating Systems

CDH 5 provides packages for Red Hat-compatible, SLES, Ubuntu, and Debian systems, as described below.

Operating System                                  Version                                     Packages

Red Hat compatible
  Red Hat Enterprise Linux (RHEL)                 5.7, 6.2, 6.4, 6.5                          64-bit
  CentOS                                          5.7, 6.2, 6.4, 6.5                          64-bit
  Oracle Linux with default kernel and
  Unbreakable Enterprise Kernel                   5.6, 6.4, 6.5                               64-bit

SLES
  SUSE Linux Enterprise Server (SLES)             11 with Service Pack 1 or later             64-bit

Ubuntu/Debian
  Ubuntu                                          Precise (12.04) - Long-Term Support (LTS)   64-bit
  Debian                                          Wheezy (7.0, 7.1)                           64-bit

Note:

  • CDH 5 provides only 64-bit packages (see the sketch after this list).
  • Cloudera has received reports that our RPMs work well on Fedora, but we have not tested this.
  • If you are using an operating system that is not supported by Cloudera's packages, you can also download source tarballs from Downloads.
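
Since CDH 5 ships only 64-bit packages, it can be worth confirming a node's architecture before installing. The following is a minimal Java sketch; os.name and os.arch are standard JVM system properties, not anything CDH-specific:

    public class ArchCheck {
        public static void main(String[] args) {
            // Standard JVM properties; a 64-bit JVM typically reports
            // os.arch as "amd64" or "x86_64".
            String osName = System.getProperty("os.name");
            String osArch = System.getProperty("os.arch");
            System.out.println("OS: " + osName + ", arch: " + osArch);
            if (!osArch.contains("64")) {
                System.err.println("Warning: CDH 5 provides only 64-bit packages.");
            }
        }
    }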

Supported Databases

Component      MySQL                   SQLite    PostgreSQL   Oracle       Derby (see Note 4)

Oozie          5.5, 5.6                -         8.4, 9.2     11gR2        Default
Flume          -                       -         -            -            Default (for the JDBC Channel only)
Hue            5.5, 5.6 (see Note 1)   Default   8.4, 9.2     11gR2        -
Hive/Impala    5.5, 5.6                -         8.4, 9.2     11gR2        Default
Sqoop 1        See Note 2              -         See Note 2   See Note 2   -
Sqoop 2        See Note 3              -         See Note 3   See Note 3   Default

Notes

  1. Cloudera's recommendations are:
    • For Red Hat and similar systems:
      • Use MySQL server version 5.0 (or higher) and version 5.0 client shared libraries on Red Hat 5 and similar systems.
      • Use MySQL server version 5.1 (or higher) and version 5.1 client shared libraries on Red Hat 6 and similar systems.

      If you use a higher server version than recommended here (for example, if you use 5.5), make sure you install the corresponding client libraries.

    • For SLES systems, use MySQL server version 5.0 (or higher) and version 5.0 client shared libraries.
    • For Ubuntu systems:
      • Use MySQL server version 5.5 (or higher) and version 5.0 client shared libraries on Precise (12.04).
  2. For connectivity purposes only, Sqoop 1 supports MySQL 5.1, PostgreSQL 9.1.4, Oracle 10.2, Teradata 13.1, and Netezza TwinFin 5.0. The Sqoop metastore works only with HSQLDB (1.8.0 and higher 1.x versions; the metastore does not work with any HSQLDB 2.x versions).
  3. Sqoop 2 can transport data to and from MySQL 5.1, PostgreSQL 9.1.4, Oracle 10.2, and Microsoft SQL Server 2012. The Sqoop 2 repository is supported only on Derby.
  4. Derby is supported as shown in the table, but not always recommended. See the pages for individual components in the CDH 5 Installation Guide for recommendations.
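
One way to verify that a metastore database matches the versions in the table above is to ask the JDBC driver for its metadata. This is a minimal sketch, assuming a MySQL driver such as Connector/J is on the classpath; the URL, user, and password are hypothetical placeholders:

    import java.sql.Connection;
    import java.sql.DatabaseMetaData;
    import java.sql.DriverManager;

    public class DbVersionCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical connection details; point these at your own database.
            String url = "jdbc:mysql://dbhost:3306/metastore";
            try (Connection conn = DriverManager.getConnection(url, "user", "password")) {
                DatabaseMetaData md = conn.getMetaData();
                // Compare these against the supported versions in the table above.
                System.out.println("Server: " + md.getDatabaseProductName()
                        + " " + md.getDatabaseProductVersion());
                System.out.println("Driver: " + md.getDriverName()
                        + " " + md.getDriverVersion());
            }
        }
    }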

Supported JDK Versions

CDH 5 is supported with Oracle JDK 1.7.

Table 1. Supported JDK 1.7 Versions

Latest Certified Version   Minimum Supported Version   Exceptions
1.7.0_55                   1.7.0_55                    None
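
To confirm which JDK a node actually runs, the standard java.version system property is enough. A minimal sketch that warns when the JVM is not a 1.7 build:

    public class JdkCheck {
        public static void main(String[] args) {
            // java.version is a standard JVM property, e.g. "1.7.0_55".
            String version = System.getProperty("java.version");
            System.out.println("JDK version: " + version);
            if (!version.startsWith("1.7")) {
                System.err.println("Warning: CDH 5 is certified against Oracle JDK 1.7.");
            }
        }
    }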


Supported Internet Protocol

CDH requires IPv4. IPv6 is not supported.

See also Configuring Network Names.
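
A quick way to check from Java that a host resolves to an IPv4 address is shown below. The -Djava.net.preferIPv4Stack=true flag mentioned in the comment is a standard JVM networking option, not a CDH-specific setting:

    import java.net.Inet4Address;
    import java.net.InetAddress;

    public class Ipv4Check {
        public static void main(String[] args) throws Exception {
            // Java services are often started with -Djava.net.preferIPv4Stack=true
            // so the JVM stays on the IPv4 stack.
            InetAddress addr = InetAddress.getLocalHost();
            System.out.println("Resolved " + addr.getHostName() + " -> "
                    + addr.getHostAddress());
            if (!(addr instanceof Inet4Address)) {
                System.err.println("Warning: this host resolves to a non-IPv4 address; "
                        + "CDH requires IPv4.");
            }
        }
    }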


What's New in CDH 5.1.3

This is a maintenance release that fixes the following issues:

  • HADOOP-11035 - distcp on mr1 (branch-1) fails with NPE using a short relative source path.
  • HBASE-10012 - Hide ServerName constructor
  • HBASE-11349 - [Thrift] support authentication/impersonation
  • HBASE-11446 - Reduce the frequency of RNG calls in SecureWALCellCodec#EncryptedKvEncoder
  • HBASE-11457 - Increment HFile block encoding IVs accounting for cipher's internal use
  • HBASE-11474 - [Thrift2] support authentication/impersonation
  • HBASE-11565 - Stale connection could stay for a while
  • HBASE-11627 - RegionSplitter's rollingSplit terminated with "/ by zero", and the _balancedSplit file was not deleted properly
  • HBASE-11788 - hbase is not deleting the cell when a Put with a KeyValue, KeyValue.Type.Delete is submitted
  • HBASE-11828 - callers of ServerName.valueOf should use equals and not == (see the sketch after this list)
  • HDFS-4257 - The ReplaceDatanodeOnFailure policies could have a forgiving option
  • HDFS-6776 - Using distcp to copy data between insecure and secure cluster via webhdfs doesn't work
  • HDFS-6908 - incorrect snapshot directory diff generated by snapshot deletion
  • HUE-2247 - [Impala] Support pass-through LDAP authentication
  • HUE-2295 - [librdbms] External oracle DB connection is broken due to a typo
  • HUE-2273 - [desktop] Blacklisting apps with existing document will break home page
  • HUE-2318 - [desktop] Documents shared with write group permissions are not editable
  • HIVE-5087 - Rename npath UDF to matchpath
  • HIVE-6820 - HiveServer(2) ignores HIVE_OPTS
  • HIVE-7635 - Query having same aggregate functions but different case throws IndexOutOfBoundsException
  • IMPALA-958 - Excessively long query plan serialization time in FE when querying huge tables
  • IMPALA-1091 - Improve TScanRangeLocation struct and associated code
  • OOZIE-1989 - NPE during a rerun with forks
  • YARN-1458 - FairScheduler: Zero weight can lead to livelock
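
HBASE-11828 above reflects a general Java pitfall: a valueOf-style factory may or may not hand back cached instances, so reference comparison with == is unreliable and equals should be used instead. Below is a minimal sketch with a hypothetical value class standing in for HBase's ServerName, not the HBase API itself:

    public class EqualsVsReference {
        // Hypothetical value type standing in for HBase's ServerName.
        static final class Name {
            private final String host;
            Name(String host) { this.host = host; }
            static Name valueOf(String host) { return new Name(host); }
            @Override public boolean equals(Object o) {
                return o instanceof Name && ((Name) o).host.equals(host);
            }
            @Override public int hashCode() { return host.hashCode(); }
        }

        public static void main(String[] args) {
            Name a = Name.valueOf("node1.example.com");
            Name b = Name.valueOf("node1.example.com");
            System.out.println(a == b);      // false: two distinct instances
            System.out.println(a.equals(b)); // true: same logical value
        }
    }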

