
Long term component architecture

As the main curator of open standards in Hadoop, Cloudera has a track record of bringing new open source solutions into its platform (such as Apache Spark, Apache HBase, and Apache Parquet) that are eventually adopted by the community at large. Because these components are standards, you can build long-term architecture on them with confidence.

Thank you for choosing CDH. Your download instructions are below:

Installation

This section introduces options for installing Cloudera Manager, CDH, and managed services. You can install:

  • Cloudera Manager, CDH, and managed services in a Cloudera Manager deployment. This is the recommended method for installing CDH and managed services.
  • CDH 5 into an unmanaged deployment.


Cloudera Manager Deployment

A Cloudera Manager deployment consists of the following software components:

  • Oracle JDK
  • Cloudera Manager Server and Agent packages
  • Supporting database software
  • CDH and managed service software

This section describes the three main installation paths for creating a new Cloudera Manager deployment and the criteria for choosing an installation path. If your cluster already has an installation of a previous version of Cloudera Manager, follow the instructions in Upgrading Cloudera Manager.

The Cloudera Manager installation paths share some common phases, but each path serves different user and cluster host requirements:

  • Demonstration and proof of concept deployments - There are two installation options:
    • Installation Path A - Automated Installation by Cloudera Manager - Cloudera Manager automates installation of the Oracle JDK, Cloudera Manager Server, embedded PostgreSQL database, Cloudera Manager Agent, CDH, and managed service software on cluster hosts, and configures databases for the Cloudera Manager Server and Hive Metastore and, optionally, for Cloudera Management Service roles. This path is recommended for demonstration and proof-of-concept deployments, but not for production deployments, because it is not intended to scale and may require database migration as your cluster grows. To use this method, server and cluster hosts must satisfy the following requirements:
      • Provide the ability to log in to the Cloudera Manager Server host using a root account or an account that has password-less sudo permission.
      • Allow the Cloudera Manager Server host to have uniform SSH access on the same port to all hosts. See Networking and Security Requirements for further information.
      • All hosts must have access to standard package repositories and either archive.cloudera.com or a local repository with the necessary installation files.
    • Installation Path B - Manual Installation Using Cloudera Manager Packages - you install the Oracle JDK, Cloudera Manager Server, and embedded PostgreSQL database packages on the Cloudera Manager Server host (see the command sketch following this list). You have two options for installing the Oracle JDK, Cloudera Manager Agent, CDH, and managed service software on cluster hosts: install them manually yourself or use Cloudera Manager to automate installation. However, for Cloudera Manager to automate installation of Cloudera Manager Agent packages or CDH and managed service software, cluster hosts must satisfy the following requirements:
      • Allow the Cloudera Manager Server host to have uniform SSH access on the same port to all hosts. See Networking and Security Requirements for further information.
      • All hosts must have access to standard package repositories and either archive.cloudera.com or a local repository with the necessary installation files.
  • Production deployments - require you to first manually install and configure a production database for the Cloudera Manager Server and Hive Metastore. There are two installation options:
    • Installation Path B - Manual Installation Using Cloudera Manager Packages - you install the Oracle JDK and Cloudera Manager Server packages on the Cloudera Manager Server host. You have two options for installing the Oracle JDK, Cloudera Manager Agent, CDH, and managed service software on cluster hosts: install them manually yourself or use Cloudera Manager to automate installation. However, for Cloudera Manager to automate installation of Cloudera Manager Agent packages or CDH and managed service software, cluster hosts must satisfy the following requirements:
      • Allow the Cloudera Manager Server host to have uniform SSH access on the same port to all hosts. See Networking and Security Requirements for further information.
      • All hosts must have access to standard package repositories and either archive.cloudera.com or a local repository with the necessary installation files.
    • Installation Path C - Manual Installation Using Cloudera Manager Tarballs - you install the Oracle JDK, Cloudera Manager Server, and Cloudera Manager Agent software as tarballs and use Cloudera Manager to automate installation of CDH and managed service software as parcels.
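
As an illustration of Path B, the server-side package installation on a RHEL-compatible host looks roughly like the following. This is a sketch, not a complete procedure: it assumes the Cloudera Manager 5 repository is already configured on the host, and the package and service names (oracle-j2sdk1.7, cloudera-manager-daemons, cloudera-manager-server, cloudera-manager-server-db-2, cloudera-scm-server) should be verified against the installation guide for your release.

    # Install the Oracle JDK and the Cloudera Manager Server packages
    # on the Cloudera Manager Server host.
    sudo yum install oracle-j2sdk1.7
    sudo yum install cloudera-manager-daemons cloudera-manager-server

    # Demonstration deployments only: install and start the embedded
    # PostgreSQL database. Production deployments instead use an
    # external database configured before installation.
    sudo yum install cloudera-manager-server-db-2
    sudo service cloudera-scm-server-db start

    # Start the Cloudera Manager Server.
    sudo service cloudera-scm-server start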

 

Unmanaged Deployment

In an unmanaged deployment, you are responsible for managing all phases of the life cycle of CDH and managed service components on each host: installation, configuration, and service life cycle operations such as start and stop (a sketch of the latter follows the list below). This section describes alternatives for installing CDH 5 software in an unmanaged deployment.

  • Command-line methods:
    • Download and install the CDH 5 "1-click Install" package
    • Add the CDH 5 repository
    • Build your own CDH 5 repository
    If you use one of these command-line methods, the first (downloading and installing the "1-click Install" package) is recommended in most cases because it is simpler than building or adding a repository; a command sketch follows this list. See Installing the Latest CDH 5 Release for detailed instructions for each of these options.
  • Tarball - You can download a tarball from CDH downloads. Keep the following points in mind:
    • Installing CDH 5 from a tarball installs YARN.
    • In CDH 5, there is no separate tarball for MRv1. Instead, the MRv1 binaries, examples, etc., are delivered in the Hadoop tarball. The scripts for running MRv1 are in the bin-mapreduce1 directory in the tarball, and the MRv1 examples are in the examples-mapreduce1 directory.
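
For illustration, the recommended "1-click Install" route on a 64-bit RHEL/CentOS 6 system looks roughly like this. The URL and package name below are as published for CDH 5 on archive.cloudera.com; verify them against the download page for your release, and substitute the components your cluster needs.

    # Download the CDH 5 "1-click Install" RPM, which adds the CDH
    # repository definition to the system, then install it.
    wget https://archive.cloudera.com/cdh5/one-click-install/redhat/6/x86_64/cloudera-cdh-5-0.x86_64.rpm
    sudo yum --nogpgcheck localinstall cloudera-cdh-5-0.x86_64.rpm

    # Install CDH components from the newly added repository, for example:
    sudo yum install hadoop-yarn-resourcemanager

Once installed, service life cycle operations are equally manual in an unmanaged deployment; each role is driven directly through its init script. A minimal sketch, using the HDFS service names shipped with the CDH packages:

    # Start HDFS roles on their respective hosts.
    sudo service hadoop-hdfs-namenode start
    sudo service hadoop-hdfs-datanode start

    # Stop them in reverse order when finished.
    sudo service hadoop-hdfs-datanode stop
    sudo service hadoop-hdfs-namenode stop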

Supported Operating Systems

CDH 5 provides packages for RHEL-compatible, SLES, Ubuntu, and Debian systems, as described below.

Red Hat Enterprise Linux (RHEL)-compatible
  • Red Hat Enterprise Linux: 5.7, 6.2, 6.4, 6.4 in SELinux mode, 6.5 (all 64-bit)
  • CentOS: 5.7, 6.2, 6.4, 6.4 in SELinux mode, 6.5 (all 64-bit)
  • Oracle Linux with default kernel and Unbreakable Enterprise Kernel: 5.6 (UEK R2), 6.4 (UEK R2), 6.5 (UEK R2, UEK R3) (all 64-bit)
SLES
  • SUSE Linux Enterprise Server (SLES): 11 with Service Pack 2 or later (64-bit)
Ubuntu/Debian
  • Ubuntu: Precise (12.04) LTS, Trusty (14.04) LTS (64-bit)
  • Debian: Wheezy (7.0, 7.1) (64-bit)

Note:

  • CDH 5 provides only 64-bit packages.
  • Cloudera has received reports that our RPMs work well on Fedora, but we have not tested this.
  • If you are using an operating system that is not supported by Cloudera packages, you can also download source tarballs from Downloads.

 

Supported Databases

The following list shows the databases supported for each component; defaults are noted.

  • Oozie - MySQL 5.5, 5.6; PostgreSQL 8.4, 9.1, 9.2, 9.3 (see Note 2); Oracle 11gR2; Derby (default; see Note 5)
  • Flume - Derby (default; for the JDBC Channel only)
  • Hue - MySQL 5.5, 5.6 (see Note 1); SQLite (default); PostgreSQL 8.4, 9.1, 9.2, 9.3 (see Note 2); Oracle 11gR2
  • Hive/Impala - MySQL 5.5, 5.6 (see Note 1); PostgreSQL 8.4, 9.1, 9.2, 9.3 (see Note 2); Oracle 11gR2; Derby (default; see Note 5)
  • Sentry - MySQL 5.5, 5.6 (see Note 1); PostgreSQL 8.4, 9.1, 9.2, 9.3 (see Note 2); Oracle 11gR2
  • Sqoop 1 - MySQL, PostgreSQL, Oracle (see Note 3)
  • Sqoop 2 - MySQL, PostgreSQL, Oracle (see Note 4); Derby (repository default; see Note 5)

Note:

  1. MySQL 5.5 is supported on CDH 5.1. MySQL 5.6 is supported on CDH 5.1 and later.
  2. PostgreSQL 9.2 is supported on CDH 5.1 and later. PostgreSQL 9.3 is supported on CDH 5.2 and later.
  3. For the purposes of transferring data only, Sqoop 1 supports MySQL 5.0 and above, PostgreSQL 8.4 and above, Oracle 10.2 and above, Teradata 13.10 and above, and Netezza TwinFin 5.0 and above (a sample transfer command follows these notes). The Sqoop metastore works only with HSQLDB (1.8.0 and higher 1.x versions; the metastore does not work with any HSQLDB 2.x versions).
  4. Sqoop 2 can transfer data to and from MySQL 5.0 and above, PostgreSQL 8.4 and above, Oracle 10.2 and above, and Microsoft SQL Server 2012 and above. The Sqoop 2 repository database is supported only on Derby.
  5. Derby is supported as shown in the table, but not always recommended. See the pages for individual components in the Cloudera Installation and Upgrade guide for recommendations.
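
To make Note 3 concrete, a typical Sqoop 1 transfer from MySQL into HDFS looks like the following sketch. The connection string, credentials, table name, and target directory are hypothetical placeholders.

    # Import a MySQL table into HDFS with Sqoop 1 (placeholder values).
    sqoop import \
      --connect jdbc:mysql://db.example.com/sales \
      --username dbuser -P \
      --table orders \
      --target-dir /user/etl/orders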

Supported JDK Versions

CDH 5 is supported with the JDK versions shown in the following table.

Table 1. Supported JDK Versions

Latest Certified Version    Minimum Supported Version    Exceptions
1.7.0_67                    1.7.0_67                     None
1.8.0_11                    1.8.0_11                     None
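
To confirm which JDK a host is running before installing CDH, a quick check (exact output varies by JDK build):

    # Verify that the installed JDK meets the minimum supported version;
    # the output includes a line such as: java version "1.7.0_67"
    java -version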

Supported Internet Protocol

CDH requires IPv4. IPv6 is not supported.

See also Configuring Network Names.


Known Issues Fixed in CDH 5.2.3

Upstream Issues Fixed

The following upstream issues are fixed in CDH 5.2.3:

  • AVRO-1623 - GenericData#validate() of enum: IndexOutOfBoundsException
  • AVRO-1622 - Add missing license headers
  • AVRO-1604 - ReflectData.AllowNull fails to generate schemas when @Nullable is present.
  • AVRO-1407 - NettyTransceiver can cause an infinite loop when slow to connect
  • AVRO-834 - Data File corruption recovery tool
  • AVRO-1596 - Cannot read past corrupted block in Avro data file
  • CRUNCH-480 - AvroParquetFileSource doesn't properly configure user-supplied read schema
  • CRUNCH-479 - Writing to target with WriteMode.APPEND merges values into PCollection
  • CRUNCH-477 - Fix HFileTargetIT failures on hadoop1 under Java 1.7/1.8
  • CRUNCH-473 - Use specific class type for case class serialization
  • CRUNCH-472 - Add Scrunch serialization support for Java Enums
  • HADOOP-11068 - Match hadoop.auth cookie format to jetty output
  • HADOOP-11343 - Overflow is not properly handled in calculating final iv for AES CTR
  • HADOOP-11301 - [optionally] update jmx cache to drop old metrics
  • HADOOP-11085 - Excessive logging by org.apache.hadoop.util.Progress when value is NaN
  • HADOOP-11247 - Fix a couple javac warnings in NFS
  • HADOOP-11195 - Move Id-Name mapping in NFS to the hadoop-common area for better maintenance
  • HADOOP-11130 - NFS updateMaps OS check is reversed
  • HADOOP-10990 - Add missed NFSv3 request and response classes
  • HADOOP-11323 - WritableComparator#compare keeps reference to byte array
  • HDFS-7560 - ACLs removed by removeDefaultAcl() will be back after NameNode restart/failover
  • HDFS-7367 - HDFS short-circuit read cannot negotiate shared memory slot and file descriptors when SASL is enabled on DataTransferProtocol.
  • HDFS-7489 - Incorrect locking in FsVolumeList#checkDirs can hang datanodes
  • HDFS-7158 - Reduce the memory usage of WebImageViewer
  • HDFS-7497 - Inconsistent report of decommissioning DataNodes between dfsadmin and NameNode webui
  • HDFS-7146 - NFS ID/Group lookup requires SSSD enumeration on the server
  • HDFS-7387 - NFS may only do partial commit due to a race between COMMIT and write
  • HDFS-7356 - Use DirectoryListing.hasMore() directly in nfs
  • HDFS-7180 - NFSv3 gateway frequently gets stuck due to GC
  • HDFS-7259 - Unresponsive NFS mount point due to deferred COMMIT response
  • HDFS-6894 - Add XDR parser method for each NFS response
  • HDFS-6850 - Move NFS out of order write unit tests into TestWrites class
  • HDFS-7385 - ThreadLocal used in FSEditLog class causes FSImage permission mess up
  • HDFS-7409 - Allow dead nodes to finish decommissioning if all files are fully replicated
  • HDFS-7373 - Clean up temporary files after fsimage transfer failures
  • HDFS-7225 - Remove stale block invalidation work when DN re-registers with different UUID
  • YARN-2721 - Race condition: ZKRMStateStore retry logic may throw NodeExist exception
  • YARN-2975 - FSLeafQueue app lists are accessed without required locks
  • YARN-2992 - ZKRMStateStore crashes due to session expiry
  • YARN-2910 - FSLeafQueue can throw ConcurrentModificationException
  • YARN-2816 - NM fail to start with NPE during container recovery
  • MAPREDUCE-6198 - NPE from JobTracker#resolveAndAddToTopology in MR1 causes initJob and heartbeat failures.
  • MAPREDUCE-6169 - MergeQueue should release reference to the current item from key and value at the end of the iteration to save memory.
  • HBASE-11794 - StripeStoreFlusher causes NullPointerException
  • HBASE-12077 - FilterLists create many ArrayList$Itr objects per row.
  • HBASE-12386 - Replication gets stuck following a transient zookeeper error to remote peer cluster
  • HBASE-11979 - Compaction progress reporting is wrong
  • HBASE-12529 - Use ThreadLocalRandom for RandomQueueBalancer
  • HBASE-12445 - hbase is removing all remaining cells immediately after the cell marked with marker = KeyValue.Type.DeleteColumn via PUT
  • HBASE-12460 - Moving Chore to hbase-common module.
  • HBASE-12366 - Add login code to HBase Canary tool.
  • HBASE-12447 - Add support for setTimeRange for RowCounter and CellCounter
  • HIVE-9330 - DummyTxnManager will throw NPE if WriteEntity writeType has not been set
  • HIVE-9199 - Excessive exclusive lock used in some DDLs with DummyTxnManager
  • HIVE-6835 - Reading of partitioned Avro data fails if partition schema does not match table schema
  • HIVE-6978 - beeline always exits with 0 status, should exit with non-zero status on error
  • HIVE-8891 - Another possible cause to NucleusObjectNotFoundException from drops/rollback
  • HIVE-8874 - Error Accessing HBase from Hive via Oozie on Kerberos 5.0.1 cluster
  • HIVE-8916 - Handle user@domain username under LDAP authentication
  • HIVE-8889 - JDBC Driver ResultSet.getXXXXXX(String columnLabel) methods Broken
  • HIVE-9445 - Revert HIVE-5700 - enforce single date format for partition column storage
  • HIVE-5454 - HCatalog runs a partition listing with an empty filter
  • HIVE-8784 - Querying partition does not work with JDO enabled against PostgreSQL
  • HUE-2484 - [beeswax] Configure support for Hive Server2 LDAP authentication
  • HUE-2102 - [oozie] Workflow with credentials can't be used with Coordinator
  • HUE-2152 - [pig] Credentials support in editor
  • HUE-2472 - [impala] Stabilize result retrieval
  • HUE-2406 - [search] New dashboard page has a margin problem
  • HUE-2373 - [search] Heatmap can break
  • HUE-2395 - [search] Broken widget in Solr Apache logs example
  • HUE-2414 - [search] Timeline chart breaks when there's no extraSeries defined
  • HUE-2342 - [impala] SSL encryption
  • HUE-2426 - [pig] Dashboard gives a 500 error
  • HUE-2430 - [pig] Progress bars of running scripts not updated on Dashboard
  • HUE-2411 - [useradmin] Lazy load user and group list in permission sharing popup
  • HUE-2398 - [fb] Drag and Drop hover message should not appear when elements originating in DOM are dragged
  • HUE-2401 - [search] Visually report selected and excluded values for ranges too
  • HUE-2389 - [impala] Expand results table after the results are added to datatables
  • HUE-2360 - [sentry] Sometimes groups are not loaded and the input box is shown instead
  • IMPALA-1453 - Fix many bugs with HS2 FETCH_FIRST
  • IMPALA-1623 - unix_timestamp() does not return correct time
  • IMPALA-1606 - Impala does not always give short name to Llama
  • IMPALA-1475 - accept unmangled native UDF symbols
  • OOZIE-2102 - Streaming actions are broken cause of incorrect method signature
  • PARQUET-145 - InternalParquetRecordReader.close() should not throw an exception if initialization has failed
  • PARQUET-140 - Allow clients to control the GenericData object that is used to read Avro records
  • PIG-4330 - Regression test for PIG-3584 - AvroStorage does not correctly translate arrays of strings
  • PIG-3584 - AvroStorage does not correctly translate arrays of strings
  • SOLR-5515 - NPE when getting stats on date field with empty result on solrcloud

 

Published Known Issues Fixed

As a result of the above fixes, the following issues, previously published as Known Issues in CDH 5, are also fixed.

Upgrading a PostgreSQL Hive Metastore from Hive 0.12 to Hive 0.13 may result in a corrupt metastore

HIVE-5700 introduced a serious bug into the Hive Metastore upgrade scripts. This bug affects users who have a PostgreSQL Hive Metastore and have at least one table which is partitioned by date and the value is stored as a date type (not string).

Bug: HIVE-5700

Severity: High

Workaround: None. Do not upgrade your PostgreSQL Hive Metastore to Hive 0.13 if the condition stated above applies to you.

DataNodes may become unresponsive to block creation requests

DataNodes may become unresponsive to block creation requests from clients when the directory scanner is running.

Bug: HDFS-7489

Severity: Low

Workaround: Disable the directory scanner by setting dfs.datanode.directoryscan.interval to -1.
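
For reference, this workaround corresponds to a property such as the following in hdfs-site.xml on each DataNode (a minimal sketch; the property name is as documented for HDFS):

    <!-- Disable the DataNode directory scanner as a workaround for
         HDFS-7489; a value of -1 turns the periodic scan off. -->
    <property>
      <name>dfs.datanode.directoryscan.interval</name>
      <value>-1</value>
    </property>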

 
