Long-term component architecture

As the main curator of open standards in Hadoop, Cloudera has a track record of bringing new open source solutions into its platform (such as Apache Spark, Apache HBase, and Apache Parquet) that are eventually adopted by the community at large. Because these components become standards, you can build long-term architecture on them with confidence.

Thank you for choosing CDH. Your download instructions are below:

Installation

This section introduces options for installing Cloudera Manager, CDH, and managed services. You can install:

  • Cloudera Manager, CDH, and managed services in a Cloudera Manager deployment. This is the recommended method for installing CDH and managed services.
  • CDH 5 into an unmanaged deployment.

Cloudera Manager Deployment

A Cloudera Manager deployment consists of the following software components:

  • Oracle JDK
  • Cloudera Manager Server and Agent packages
  • Supporting database software
  • CDH and managed service software

This section describes the three main installation paths for creating a new Cloudera Manager deployment and the criteria for choosing an installation path. If your cluster already has an installation of a previous version of Cloudera Manager, follow the instructions in Upgrading Cloudera Manager.

The Cloudera Manager installation paths share some common phases, but each path supports different user and cluster host requirements (a sketch for verifying the shared SSH requirement follows this list):

  • Demonstration and proof of concept deployments - There are two installation options:
    • Installation Path A - Automated Installation by Cloudera Manager - Cloudera Manager automates the installation of the Oracle JDK, Cloudera Manager Server, embedded PostgreSQL database, Cloudera Manager Agent, CDH, and managed service software on cluster hosts, and configures databases for the Cloudera Manager Server and Hive Metastore and, optionally, for Cloudera Management Service roles. This path is recommended for demonstration and proof of concept deployments, but not for production deployments, because it is not intended to scale and may require database migration as your cluster grows. To use this method, server and cluster hosts must satisfy the following requirements:
      • Provide the ability to log in to the Cloudera Manager Server host using a root account or an account that has password-less sudo permission.
      • Allow the Cloudera Manager Server host to have uniform SSH access on the same port to all hosts. See Networking and Security Requirements for further information.
      • All hosts must have access to standard package repositories and either archive.cloudera.com or a local repository with the necessary installation files.
    • Installation Path B - Manual Installation Using Cloudera Manager Packages - you install the Oracle JDK, Cloudera Manager Server, and embedded PostgreSQL database packages on the Cloudera Manager Server host. You have two options for installing the Oracle JDK, Cloudera Manager Agent, CDH, and managed service software on cluster hosts: manually install them yourself or use Cloudera Manager to automate installation. However, for Cloudera Manager to automate installation of Cloudera Manager Agent packages or CDH and managed service software, cluster hosts must satisfy the following requirements:
      • Allow the Cloudera Manager Server host to have uniform SSH access on the same port to all hosts. See Networking and Security Requirements for further information.
      • All hosts must have access to standard package repositories and either archive.cloudera.com or a local repository with the necessary installation files.
  • Production deployments - these require you to first manually install and configure a production database for the Cloudera Manager Server and Hive Metastore. There are two installation options:
    • Installation Path B - Manual Installation Using Cloudera Manager Packages - you install the Oracle JDK and Cloudera Manager Server packages on the Cloudera Manager Server host. You have two options for installing the Oracle JDK, Cloudera Manager Agent, CDH, and managed service software on cluster hosts: manually install them yourself or use Cloudera Manager to automate installation. However, for Cloudera Manager to automate installation of Cloudera Manager Agent packages or CDH and managed service software, cluster hosts must satisfy the following requirements:
      • Allow the Cloudera Manager Server host to have uniform SSH access on the same port to all hosts. See Networking and Security Requirements for further information.
      • All hosts must have access to standard package repositories and either archive.cloudera.com or a local repository with the necessary installation files.
    • Installation Path C - Manual Installation Using Cloudera Manager Tarballs - you install the Oracle JDK, Cloudera Manager Server, and Cloudera Manager Agent software as tarballs and use Cloudera Manager to automate installation of CDH and managed service software as parcels.
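
The automated installation options above share the uniform SSH requirement. The following is a minimal sketch for verifying it from the Cloudera Manager Server host, assuming a hypothetical hosts.txt file listing one cluster hostname per line (the port and account are placeholders for your environment):

    # Hedged sketch: confirm the Cloudera Manager Server host can reach every
    # cluster host over SSH on one uniform port (22 here; adjust as needed).
    # BatchMode=yes fails fast instead of prompting for a password.
    while read -r host; do
      ssh -p 22 -o BatchMode=yes -o ConnectTimeout=5 "root@${host}" true \
        && echo "OK   ${host}" \
        || echo "FAIL ${host}"
    done < hosts.txt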

 

Unmanaged Deployment

In an unmanaged deployment, you are responsible for managing all phases of the life cycle of CDH and managed service components on each host: installation, configuration, and service life cycle operations such as start and stop. This section describes alternatives for installing CDH 5 software in an unmanaged deployment.
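
For package-based installs, these life cycle operations are typically driven through the standard service init scripts. A minimal sketch, assuming HDFS roles installed from CDH packages (service names vary by role and component):

    # Start HDFS roles by hand on the hosts where they are installed.
    sudo service hadoop-hdfs-namenode start
    sudo service hadoop-hdfs-datanode start

    # Stop them when needed (for example, before an upgrade).
    sudo service hadoop-hdfs-datanode stop
    sudo service hadoop-hdfs-namenode stop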

  • Command-line methods:
    • Download and install the CDH 5 "1-click Install" package
    • Add the CDH 5 repository
    • Build your own CDH 5 repository
    Of these command-line methods, the first (downloading and installing the "1-click Install" package) is recommended in most cases because it is simpler than building or adding a repository; a sketch follows this list. See Installing the Latest CDH 5 Release for detailed instructions for each of these options.
  • Tarball - You can download a tarball from CDH downloads. Keep the following points in mind:
    • Installing CDH 5 from a tarball installs YARN.
    • In CDH 5, there is no separate tarball for MRv1. Instead, the MRv1 binaries, examples, etc., are delivered in the Hadoop tarball. The scripts for running MRv1 are in the bin-mapreduce1 directory in the tarball, and the MRv1 examples are in the examples-mapreduce1 directory.
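
As an illustration of the recommended "1-click Install" method, a sketch for a RHEL/CentOS 6 host follows; the exact URL and package name depend on your operating system version, so take them from the CDH download page rather than from this sketch:

    # Download the "1-click Install" package (URL shown for RHEL/CentOS 6).
    wget https://archive.cloudera.com/cdh5/one-click-install/redhat/6/x86_64/cloudera-cdh-5-0.x86_64.rpm
    # Install it; this sets up the CDH 5 package repository on the host.
    sudo yum --nogpgcheck localinstall cloudera-cdh-5-0.x86_64.rpm
    # Individual CDH components can then be installed normally, for example:
    sudo yum install hadoop-yarn-resourcemanager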

Supported Operating Systems

CDH 5 provides packages for RHEL-compatible, SLES, Ubuntu, and Debian systems, as described below.

Red Hat Enterprise Linux (RHEL)-compatible:
  • Red Hat Enterprise Linux 5.7, 6.2, 6.4, 6.4 in SELinux mode, and 6.5
  • CentOS 5.7, 6.2, 6.4, 6.4 in SELinux mode, and 6.5
  • Oracle Linux with default kernel and Unbreakable Enterprise Kernel 5.6 (UEK R2), 6.4 (UEK R2), and 6.5 (UEK R2, UEK R3)

SLES:
  • SUSE Linux Enterprise Server (SLES) 11 with Service Pack 2 or later

Ubuntu/Debian:
  • Ubuntu Precise (12.04) LTS and Trusty (14.04) LTS
  • Debian Wheezy (7.0, 7.1)

Note:

  • CDH 5 provides only 64-bit packages.
  • Cloudera has received reports that our RPMs work well on Fedora, but we have not tested this.
  • If you are using an operating system that is not supported by Cloudera packages, you can also download source tarballs from Downloads.

 


 

 

Supported Databases

Component | MySQL | SQLite | PostgreSQL | Oracle | Derby (see Note 5)
Oozie | 5.5, 5.6 | - | 8.4, 9.1, 9.2, 9.3 (see Note 2) | 11gR2 | Default
Flume | - | - | - | - | Default (for the JDBC Channel only)
Hue | 5.5, 5.6 (see Note 1) | Default | 8.4, 9.1, 9.2, 9.3 (see Note 2) | 11gR2 | -
Hive/Impala | 5.5, 5.6 (see Note 1) | - | 8.4, 9.1, 9.2, 9.3 (see Note 2) | 11gR2 | Default
Sentry | 5.5, 5.6 (see Note 1) | - | 8.4, 9.1, 9.2, 9.3 (see Note 2) | 11gR2 | -
Sqoop 1 | See Note 3 | - | See Note 3 | See Note 3 | -
Sqoop 2 | See Note 4 | - | See Note 4 | See Note 4 | Default

Note:

  1. MySQL 5.5 is supported on CDH 5.1. MySQL 5.6 is supported on CDH 5.1 and later.
  2. PostgreSQL 9.2 is supported on CDH 5.1 and later. PostgreSQL 9.3 is supported on CDH 5.2 and later.
  3. For the purposes of transferring data only, Sqoop 1 supports MySQL 5.0 and above, PostgreSQL 8.4 and above, Oracle 10.2 and above, Teradata 13.10 and above, and Netezza TwinFin 5.0 and above. The Sqoop metastore works only with HSQLDB (1.8.0 and higher 1.x versions; the metastore does not work with any HSQLDB 2.x versions).
  4. Sqoop 2 can transfer data to and from MySQL 5.0 and above, PostgreSQL 8.4 and above, Oracle 10.2 and above, and Microsoft SQL Server 2012 and above. The Sqoop 2 repository database is supported only on Derby.
  5. Derby is supported as shown in the table, but not always recommended. See the pages for individual components in the Cloudera Installation and Upgrade guide for recommendations.
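
As an example of the data-transfer support described in Note 3, a minimal Sqoop 1 import sketch against a hypothetical MySQL database (the hostname, database, table, user, and target directory are placeholders):

    # Import a table from a hypothetical MySQL database into HDFS with Sqoop 1.
    # -P prompts for the database password interactively.
    sqoop import \
      --connect jdbc:mysql://db.example.com/sales \
      --username sqoop_user -P \
      --table orders \
      --target-dir /user/example/orders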

CDH 5 is supported with the JDK versions shown in the following table.

Table 1. Supported JDK Versions

Latest Certified Version | Minimum Supported Version | Exceptions
1.7.0_67 | 1.7.0_67 | None
1.8.0_11 | 1.8.0_11 | None

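To confirm a host meets the minimum supported version, check the reported JDK build. A minimal sketch (the exact output string varies by JDK build and vendor packaging):

    # Verify the installed JDK against the minimum supported version.
    java -version
    # Expected output resembles the following (exact build string varies):
    #   java version "1.7.0_67"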

Supported Internet Protocol

CDH requires IPv4. IPv6 is not supported.

See also Configuring Network Names.
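
A quick way to confirm that a host's name resolves to an IPv4 address is sketched below; the JVM flag is a common convention for making Java daemons prefer IPv4, not a CDH-specific setting:

    # Confirm the host's fully qualified name resolves to an IPv4 address.
    host "$(hostname -f)"   # should print an IPv4 "has address" record
    # Optionally make JVM-based daemons prefer the IPv4 stack:
    export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"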


Known Issues Fixed in CDH 5.2.6

 

Upstream Issues Fixed

The following upstream issues are fixed in CDH 5.2.6:

  • CRUNCH-516 - Scrunch needs some additional null checks
  • CRUNCH-508 - Improve performance of Scala Enumeration counters in Scrunch
  • CRUNCH-514 - AvroDerivedDeepCopier should initialize delegate MapFns
  • CRUNCH-530 - Fix object reuse bug in GenericRecordToTuple
  • HADOOP-12103 - Small refactoring of DelegationTokenAuthenticationFilter to allow code sharing
  • HADOOP-10839 - Add unregisterSource() to MetricsSystem API
  • HDFS-8337 - Accessing httpfs via webhdfs doesn't work from a jar with kerberos
  • HDFS-7546 - Document, and set an accepting default for dfs.namenode.kerberos.principal.pattern
  • HDFS-6997 - Archival Storage: add more tests for data migration and replication
  • HDFS-7980 - Incremental BlockReport will dramatically slow down the startup of a namenode
  • HDFS-8380 - Always call addStoredBlock on blocks which have been shifted from one storage to another
  • HDFS-7312 - Update DistCp v1 to optionally not use tmp location (branch-1 only)
  • YARN-3485 - FairScheduler headroom calculation doesn't consider maxResources for Fifo and FairShare policies
  • YARN-3241 - FairScheduler handles "invalid" queue names inconsistently
  • YARN-2669 - FairScheduler: queue names shouldn't allow periods
  • YARN-3022 - Expose Container resource information from NodeManager for monitoring
  • YARN-2984 - Metrics for container's actual memory usage
  • YARN-3465 - Use LinkedHashMap to preserve order of resource requests
  • MAPREDUCE-6387 - Serialize the recently added Task#encryptedSpillKey field at the end
  • MAPREDUCE-6339 - Job history file is not flushed correctly because isTimerActive flag is not set true when flushTimerTask is scheduled.
  • MAPREDUCE-5710 - Backport MAPREDUCE-1305 to branch-1
  • MAPREDUCE-6238 - MR2 can't run local jobs with -libjars command options which is a regression from MR1
  • HBASE-13826 - Unable to create table when group acls are appropriately set.
  • HBASE-13241 - Add tests for group level grants
  • HBASE-13239 - HBase grant at specific column level does not work for Groups
  • HBASE-13768 - ZooKeeper znodes are bootstrapped with insecure ACLs in a secure configuration
  • HBASE-13789 - ForeignException should not be sent to the client
  • HBASE-13779 - Calling table.exists() before table.get() end up with an empty Result
  • HBASE-13780 - Default to 700 for HDFS root dir permissions for secure deployments
  • HBASE-13767 - Allow ZKAclReset to set and not just clear ZK ACLs
  • HBASE-13086 - Show ZK root node on Master WebUI
  • HBASE-13342 - Fix incorrect interface annotations
  • HBASE-13162 - Add capability for cleaning hbase acls to hbase cleanup script.
  • HBASE-12641 - Grant all permissions of hbase zookeeper node to hbase superuser in a secure cluster
  • HBASE-13374 - Small scanners (with particular configurations) do not return all rows
  • HBASE-13269 - Limit result array preallocation to avoid OOME with large scan caching values
  • HBASE-13422 - remove use of StandardCharsets in 0.98
  • HBASE-13335 - Update ClientSmallScanner and ClientSmallReversedScanner
  • HBASE-13262 - ResultScanner doesn't return all rows in Scan
  • HIVE-10841 - [WHERE col is not null] does not work sometimes for queries with many JOIN statements
  • HIVE-9620 - Cannot retrieve column statistics using HMS API if column name contains uppercase characters
  • HIVE-8863 - Cannot drop table with uppercase name after "compute statistics for columns"
  • HIVE-6679 - HiveServer2 should support configurable the server side socket timeout and keepalive for various transports types where applicable
  • OOZIE-1944 - Recursive variable resolution broken when same parameter name in config-default and action conf
  • OOZIE-2218 - META-INF directories in the war file have 777 permissions
  • SENTRY-540 - Fix Sentry test validating special chars in username due to HIVE-8916
  • SENTRY-227 - Fix for "Unsupported entity type DUMMYPARTITION"
  • SOLR-7478 - UpdateLog#close shut down its executor with interrupts before running close, preventing a clean close.
  • SOLR-7338 - A reloaded core will never register itself as active after a ZK session expiration
  • SOLR-7370 - FSHDFSUtils#recoverFileLease tries to recover the lease every one second after the first four second wait.
  • ZOOKEEPER-2146 - BinaryInputArchive readString should check length before allocating memory
  • ZOOKEEPER-2149 - Logging of client address when socket connection established
  • IMPALA-1726 - Move JNI / Thrift utilities to separate header
  • IMPALA-2002 - Provide way to cache ext data source classes


Want to Get Involved or Learn More?

Check out our other resources

Cloudera Community

Collaborate with your peers, industry experts, and Clouderans to make the most of your investment in Hadoop.

Cloudera University

Receive expert Hadoop training through Cloudera University, the industry's only truly dynamic Hadoop training curriculum that’s updated regularly to reflect the state of the art in big data.