Long-term component architecture

As the main curator of open standards in Hadoop, Cloudera has a track record of bringing new open source solutions into its platform (such as Apache Spark, Apache HBase, and Apache Parquet) that are eventually adopted by the community at large. Because these components become standards, you can build long-term architecture on them with confidence.

 

PLEASE NOTE:

With the exception of DSSD support, Cloudera Enterprise 5.6.0 is identical to CDH 5.5.2/Cloudera Manager 5.5.3. If you do not need DSSD support and are already using the latest 5.5.x release, you do not need to upgrade.

 

Note: All CDH hosts that make up a logical cluster need to run on the same major OS release to be covered by Cloudera Support. Cloudera Manager needs to run on the same OS release as one of the CDH clusters it manages, to be covered by Cloudera Support. The risk of issues caused by running different minor OS releases is considered lower than the risk of running different major OS releases. Cloudera recommends running the same minor release cross-cluster, because it simplifies issue tracking and supportability.
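The same-major-release rule can be checked mechanically before an install or upgrade. Below is a minimal Python sketch; the inventory dict and the idea of pre-collecting each host's /etc/os-release contents are illustrative assumptions, not part of any Cloudera tooling:

```python
def major_release(os_release_text):
    """Extract the major version from an /etc/os-release VERSION_ID entry."""
    for line in os_release_text.splitlines():
        if line.startswith("VERSION_ID="):
            version = line.split("=", 1)[1].strip().strip('"')
            return version.split(".")[0]
    return None

def same_major_release(host_to_os_release):
    """True if every host reports the same major OS release."""
    majors = {major_release(text) for text in host_to_os_release.values()}
    return len(majors) == 1

# Hypothetical inventory, gathered out of band (for example, over SSH).
hosts = {
    "node1": 'NAME="CentOS Linux"\nVERSION_ID="7.2"',
    "node2": 'NAME="CentOS Linux"\nVERSION_ID="7.1"',  # minor drift only
}
```

With this inventory, `same_major_release(hosts)` is true: 7.1 and 7.2 share major release 7, which is the supported (if not recommended) configuration described above.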

 

CDH 5 provides 64-bit packages for RHEL-compatible, SLES, Ubuntu, and Debian systems as listed below.

 

Red Hat Enterprise Linux (RHEL)-compatible
  • RHEL (+ SELinux mode in available versions): 7.2, 7.1, 6.8, 6.7, 6.6, 6.5, 6.4, 5.11, 5.10, 5.7
  • CentOS (+ SELinux mode in available versions): 7.2, 7.1, 6.8, 6.7, 6.6, 6.5, 6.4, 5.11, 5.10, 5.7
  • Oracle Enterprise Linux (OEL) with Unbreakable Enterprise Kernel (UEK): 7.2 (UEK R2), 7.1, 6.8 (UEK R3), 6.7 (UEK R3), 6.6 (UEK R3), 6.5 (UEK R2, UEK R3), 6.4 (UEK R2), 5.11, 5.10, 5.7

SLES
  • SUSE Linux Enterprise Server (SLES): 12 with Service Pack 1; 11 with Service Pack 4; 11 with Service Pack 3; 11 with Service Pack 2
  • Hosts running Cloudera Manager Agents must use SUSE Linux Enterprise Software Development Kit 11 SP1.

Ubuntu/Debian
  • Ubuntu: Trusty 14.04 - Long-Term Support (LTS); Precise 12.04 - Long-Term Support (LTS)
  • Debian: Jessie 8.4, 8.2; Wheezy 7.8, 7.1, 7.0

 

Important: Cloudera supports RHEL 7 with the following limitations:

 

Note:

  • Cloudera Enterprise is supported on platforms with Security-Enhanced Linux (SELinux) enabled. Cloudera is not responsible for SELinux policy support or policy enforcement. If you experience issues with SELinux, contact your OS provider.
  • CDH 5.9 DataNode hosts with EMC® DSSD™ D5™ are supported on RHEL 6.6, 7.1, and 7.2.

 

Supported databases by component:

  • Cloudera Manager: MariaDB 5.5, 10; MySQL 5.6, 5.5, 5.1; PostgreSQL 9.4, 9.3, 9.2, 9.1, 8.4, 8.3, 8.1; Oracle 12c, 11gR2
  • Oozie: MariaDB 5.5, 10; MySQL 5.6, 5.5, 5.1; PostgreSQL 9.4, 9.3, 9.2, 9.1, 8.4, 8.3, 8.1 (see Note 4); Oracle 12c, 11gR2; Derby (default; see Note 6)
  • Flume: Derby (default, for the JDBC Channel only)
  • Hue: MariaDB 5.5, 10; MySQL 5.6, 5.5, 5.1 (see Note 7); SQLite (default); PostgreSQL 9.4, 9.3, 9.2, 9.1, 8.4, 8.3, 8.1 (see Note 4); Oracle 12c, 11gR2
  • Hive/Impala: MariaDB 5.5, 10; MySQL 5.6, 5.5, 5.1 (see Note 2); PostgreSQL 9.4, 9.3, 9.2, 9.1, 8.4, 8.3, 8.1 (see Note 4); Oracle 12c, 11gR2; Derby (default; see Note 6)
  • Sentry: MariaDB 5.5, 10; MySQL 5.6, 5.5, 5.1 (see Note 2); PostgreSQL 9.4, 9.3, 9.2, 9.1, 8.4, 8.3, 8.1 (see Note 4); Oracle 12c, 11gR2
  • Sqoop 1: MariaDB 5.5, 10; for MySQL, PostgreSQL, and Oracle, see Note 5
  • Sqoop 2: MariaDB 5.5, 10; Derby (default); see Note 9

 

 

Note:

  1. Cloudera supports the databases listed above provided they are supported by the underlying operating system on which they run.
  2. MySQL 5.5 is supported on CDH 5.1. MySQL 5.6 is supported on CDH 5.1 and higher. The InnoDB storage engine must be enabled in the MySQL server.
  3. Cloudera Manager installation fails if GTID-based replication is enabled in MySQL.
  4. PostgreSQL 9.2 is supported on CDH 5.1 and higher. PostgreSQL 9.3 is supported on CDH 5.2 and higher. PostgreSQL 9.4 is supported on CDH 5.5 and higher.
  5. For purposes of transferring data only, Sqoop 1 supports MySQL 5.0 and above, PostgreSQL 8.4 and above, Oracle 10.2 and above, Teradata 13.10 and above, and Netezza TwinFin 5.0 and above. The Sqoop metastore works only with HSQLDB (1.8.0 and higher 1.x versions; the metastore does not work with any HSQLDB 2.x versions).
  6. Derby is supported as shown in the table, but not always recommended. See the pages for individual components in the Cloudera Installation guide for recommendations.
  7. CDH 5 Hue requires the default MySQL version of the operating system on which it is being installed, which is usually MySQL 5.1, 5.5, or 5.6.
  8. When installing a JDBC driver, only the ojdbc6.jar file is supported for both Oracle 11g R2 and Oracle 12c; the ojdbc7.jar file is not supported.
  9. Sqoop 2 lacks some of the features of Sqoop 1. Cloudera recommends you use Sqoop 1. Use Sqoop 2 only if it contains all the features required for your use case.
  10. MariaDB 10 is supported only on CDH 5.9 and higher.
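A support matrix like the one above is straightforward to encode and query before provisioning. The sketch below hard-codes only a fragment of the table as an illustration (it is not an exhaustive or authoritative encoding, and version strings are treated as opaque labels):

```python
# Partial encoding of the database support matrix above (illustrative only).
SUPPORTED_DATABASES = {
    "Cloudera Manager": {
        "MariaDB": {"5.5", "10"},
        "MySQL": {"5.1", "5.5", "5.6"},
        "PostgreSQL": {"8.1", "8.3", "8.4", "9.1", "9.2", "9.3", "9.4"},
        "Oracle": {"11gR2", "12c"},
    },
    "Hive/Impala": {
        "MariaDB": {"5.5", "10"},
        "MySQL": {"5.1", "5.5", "5.6"},
        "Oracle": {"11gR2", "12c"},
    },
}

def is_supported(component, database, version):
    """True if the (component, database, version) triple is in the matrix."""
    return version in SUPPORTED_DATABASES.get(component, {}).get(database, set())
```

For example, `is_supported("Cloudera Manager", "MySQL", "5.6")` is true, while an unlisted version such as MySQL 5.0 is rejected. A real check would also need to account for the CDH-version caveats in the notes above.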

CDH and Cloudera Manager Supported JDK Versions

Only 64-bit JDKs from Oracle are supported. Oracle JDK 7 is supported across all versions of Cloudera Manager 5 and CDH 5. Oracle JDK 8 is supported in C5.3.x and higher.

 

A supported minor JDK release will remain supported throughout a Cloudera major release lifecycle, from the time of its addition forward, unless specifically excluded.

 

Warning: JDK 1.8u40 and JDK 1.8u60 are excluded from support. Also, the Oozie Web Console returns a 500 error when the Oozie server runs on JDK 8u75 or higher.

 

Running CDH nodes within the same cluster on different JDK releases is not supported. The JDK release must match across the cluster, down to the patch level.

  • All nodes in your cluster must run the same Oracle JDK version.
  • All services must be deployed on the same Oracle JDK version.
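These constraints lend themselves to a small preflight check. The Python sketch below assumes the `1.x.0_NN` version-string format and a per-host inventory for illustration; the excluded updates are the ones named in the warning above:

```python
import re

# JDK 1.8u40 and 1.8u60 are excluded from support (see warning above).
EXCLUDED_UPDATES = {("1.8", 40), ("1.8", 60)}

def parse_jdk(version_string):
    """Split a JDK version like '1.7.0_67' into (release, update_number)."""
    m = re.match(r"(\d+\.\d+)\.\d+_(\d+)$", version_string)
    if not m:
        raise ValueError("unrecognized JDK version: %s" % version_string)
    return m.group(1), int(m.group(2))

def check_cluster_jdks(host_versions):
    """Raise if hosts disagree on the exact JDK version or use an excluded one."""
    for host, version in host_versions.items():
        if parse_jdk(version) in EXCLUDED_UPDATES:
            raise ValueError("%s runs an excluded JDK: %s" % (host, version))
    if len(set(host_versions.values())) != 1:
        raise ValueError("JDK versions differ across the cluster: %s" % host_versions)
```

A uniform cluster such as `{"node1": "1.7.0_67", "node2": "1.7.0_67"}` passes; any mismatch or excluded build raises an error.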

 

The Cloudera Manager repository is packaged with Oracle JDK 1.7.0_67 (for example) and can be automatically installed during a new installation or an upgrade.

 

For a full list of supported JDK versions, see CDH and Cloudera Manager Supported JDK Versions.


Hue

Hue works with the two most recent versions of the following browsers. Cookies and JavaScript must be enabled.

  • Chrome
  • Firefox
  • Safari (not supported on Windows)
  • Internet Explorer

Hue might display in older versions, and even in other browsers, but not all of its features are guaranteed to be available.

 


CDH requires IPv4. IPv6 is not supported.

 

See also Configuring Network Names.
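A quick preflight check for the IPv4 requirement can be written with only the Python standard library. This is an illustrative sketch, not Cloudera tooling; the hostnames checked would come from your own cluster inventory:

```python
import socket

def is_ipv4_literal(address):
    """True if the string is a literal IPv4 address (CDH requires IPv4)."""
    try:
        socket.inet_pton(socket.AF_INET, address)
        return True
    except OSError:
        return False

def resolves_to_ipv4(hostname):
    """True if the name resolves to at least one IPv4 address."""
    try:
        return bool(socket.getaddrinfo(hostname, None, socket.AF_INET))
    except socket.gaierror:
        return False
```

For instance, `is_ipv4_literal("10.0.0.1")` is true, while an IPv6 literal such as `"::1"` is rejected.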

 

Multihoming CDH or Cloudera Manager is not supported outside specifically certified Cloudera partner appliances. Cloudera finds that current Hadoop architectures combined with modern network infrastructures and security practices remove the need for multihoming. Multihoming, however, is beneficial internally in appliance form factors to take advantage of high-bandwidth InfiniBand interconnects.

Although some subareas of the product may work with unsupported custom multihoming configurations, there are known issues with multihoming. In addition, unknown issues may arise because multihoming is not covered by our test matrix outside the Cloudera-certified partner appliances.

 


The following components are supported by the indicated versions of Transport Layer Security (TLS):

 

Components Supported by TLS

Component | Role | Name | Port | Version
Cloudera Manager | Cloudera Manager Server | | 7182 | TLS 1.2
Cloudera Manager | Cloudera Manager Server | | 7183 | TLS 1.2
Flume | | | 9099 | TLS 1.2
Flume | | Avro Source/Sink | | TLS 1.2
Flume | | Flume HTTP Source/Sink | | TLS 1.2
HBase | Master | HBase Master Web UI Port | 60010 | TLS 1.2
HDFS | NameNode | Secure NameNode Web UI Port | 50470 | TLS 1.2
HDFS | Secondary NameNode | Secure Secondary NameNode Web UI Port | 50495 | TLS 1.2
HDFS | HttpFS | REST Port | 14000 | TLS 1.1, TLS 1.2
Hive | HiveServer2 | HiveServer2 Port | 10000 | TLS 1.2
Hue | Hue Server | Hue HTTP Port | 8888 | TLS 1.2
Impala | Impala Daemon | Impala Daemon Beeswax Port | 21000 | TLS 1.2
Impala | Impala Daemon | Impala Daemon HiveServer2 Port | 21050 | TLS 1.2
Impala | Impala Daemon | Impala Daemon Backend Port | 22000 | TLS 1.2
Impala | Impala StateStore | StateStore Service Port | 24000 | TLS 1.2
Impala | Impala Daemon | Impala Daemon HTTP Server Port | 25000 | TLS 1.2
Impala | Impala StateStore | StateStore HTTP Server Port | 25010 | TLS 1.2
Impala | Impala Catalog Server | Catalog Server HTTP Server Port | 25020 | TLS 1.2
Impala | Impala Catalog Server | Catalog Server Service Port | 26000 | TLS 1.2
Oozie | Oozie Server | Oozie HTTPS Port | 11443 | TLS 1.1, TLS 1.2
Solr | Solr Server | Solr HTTP Port | 8983 | TLS 1.1, TLS 1.2
Solr | Solr Server | Solr HTTPS Port | 8985 | TLS 1.1, TLS 1.2
Spark | History Server | | 18080 | TLS 1.2
YARN | ResourceManager | ResourceManager Web Application HTTP Port | 8090 | TLS 1.2
YARN | JobHistory Server | MRv1 JobHistory Web Application HTTP Port | 19890 | TLS 1.2
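When connecting to these endpoints, a client can refuse to negotiate below the documented TLS version. Here is a sketch using Python's standard ssl module; the host and port in the usage comment are placeholders, and which minimum applies depends on the component per the table above:

```python
import ssl

def client_context(minimum="TLSv1.2"):
    """Build a client-side SSLContext that refuses anything below `minimum`."""
    ctx = ssl.create_default_context()
    ctx.minimum_version = {
        "TLSv1.1": ssl.TLSVersion.TLSv1_1,
        "TLSv1.2": ssl.TLSVersion.TLSv1_2,
    }[minimum]
    return ctx

# Hypothetical usage against a HiveServer2 TLS endpoint (placeholder host):
# import socket
# with socket.create_connection(("hs2.example.com", 10000)) as sock:
#     with client_context("TLSv1.2").wrap_socket(
#             sock, server_hostname="hs2.example.com") as tls:
#         print(tls.version())
```

Pinning `minimum_version` in the context means the handshake fails outright against a server that only offers older protocol versions, rather than silently downgrading.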

Issues Fixed in CDH 5.9.1

Upstream Issues Fixed

The following upstream issues are fixed in CDH 5.9.1:
 

  • AVRO-1943 - Fix test TestNettyServerWithCompression.testConnectionsCount
  • CRUNCH-592 - Job fails for null ByteBuffer value in Avro tables
  • FLUME-2797 - Use SourceCounter for SyslogTcpSource
  • FLUME-2844 - SpillableMemoryChannel must start ChannelCounter
  • FLUME-2982 - Add localhost escape sequence to HDFS sink
  • FLUME-3020 - Improve HDFS Sink escape sequence substitution
  • HADOOP-10300 - Allowed deferred sending of call responses
  • HADOOP-12453 - Support decoding KMS Delegation Token with its own Identifier
  • HADOOP-12483 - Maintain wrapped SASL ordering for postponed IPC responses
  • HADOOP-12973 - Make DU pluggable
  • HADOOP-12974 - Create a CachingGetSpaceUsed implementation that uses df
  • HADOOP-12975 - Add jitter to CachingGetSpaceUsed's thread
  • HADOOP-13034 - Log message about input options in distcp lacks some items
  • HADOOP-13072 - WindowsGetSpaceUsed constructor should be public
  • HADOOP-13317 - Add logs to KMS server-side to improve supportability
  • HADOOP-13353 - LdapGroupsMapping getPassward should not return null when IOException throws
  • HADOOP-13526 - Add detailed logging in KMS for the authentication failure of proxy user
  • HADOOP-13558 - UserGroupInformation created from a Subject incorrectly tries to renew the Kerberos ticket
  • HADOOP-13579 - Fix source-level compatibility after HADOOP-11252
  • HADOOP-13638 - KMS should set UGI's Configuration object properly
  • HADOOP-13669 - KMS Server should log exceptions before throwing
  • HADOOP-13693 - Remove the message about HTTP OPTIONS in SPNEGO initialization message from kms audit log
  • HDFS-4210 - Throw helpful exception when DNS entry for JournalNode cannot be resolved
  • HDFS-6962 - ACLs inheritance conflict with umaskmode
  • HDFS-7413 - Some unit tests should use NameNodeProtocols instead of FSNameSystem
  • HDFS-7415 - Move FSNameSystem.resolvePath() to FSDirectory
  • HDFS-7420 - Delegate permission checks to FSDirectory
  • HDFS-7463 - Simplify FSNamesystem#getBlockLocationsUpdateTimes
  • HDFS-7478 - Move org.apache.hadoop.hdfs.server.namenode.NNConf to FSNamesystem
  • HDFS-7517 - Remove redundant non-null checks in FSNamesystem#getBlockLocations
  • HDFS-7964 - Add support for async edit logging
  • HDFS-8224 - Schedule a block for scanning if its metadata file is corrupt
  • HDFS-8269 - getBlockLocations() does not resolve the .reserved path and generates incorrect edit logs when updating the atime
  • HDFS-8709 - Clarify automatic sync in FSEditLog#logEdit
  • HDFS-8809 - HDFS fsck reports under construction blocks as CORRUPT
  • HDFS-9038 - DFS reserved space is erroneously counted towards non-DFS used
  • HDFS-9601 - NNThroughputBenchmark.BlockReportStats should handle NotReplicatedYetException on adding block
  • HDFS-9630 - DistCp minor refactoring and clean up
  • HDFS-9638 - Improve DistCp Help and documentation
  • HDFS-9781 - FsDatasetImpl#getBlockReports can occasionally throw NullPointerException
  • HDFS-9820 - Improve distcp to support efficient restore to an earlier snapshot
  • HDFS-10216 - Distcp -diff throws exception when handling relative path
  • HDFS-10270 - TestJMXGet:testNameNode() fails
  • HDFS-10298 - Document the usage of distcp -diff option
  • HDFS-10312 - Large block reports can fail to decode at NameNode due to 64 MB protobuf maximum length restriction
  • HDFS-10313 - Distcp need to enforce the order of snapshot names passed to -diff
  • HDFS-10397 - Distcp should ignore -delete option if -diff option is provided instead of exiting
  • HDFS-10556 - DistCpOptions should be validated automatically
  • HDFS-10559 - DiskBalancer: Use SHA1 for Plan ID
  • HDFS-10567 - Improve plan command help message
  • HDFS-10609 - Uncaught InvalidEncryptionKeyException during pipeline recovery can abort downstream applications
  • HDFS-10641 - TestBlockManager#testBlockReportQueueing fails intermittently
  • HDFS-10652 - Add a unit test for HDFS-4660
  • HDFS-10722 - Fix race condition in TestEditLog#testBatchedSyncWithClosedLogs
  • HDFS-10760 - DataXceiver#run() should not log InvalidToken exception as an error
  • HDFS-10822 - Log DataNodes in the write pipeline
  • HDFS-10879 - TestEncryptionZonesWithKMS#testReadWrite fails intermittently
  • HDFS-10962 - TestRequestHedgingProxyProvider is unreliable
  • HDFS-10963 - Reduce log level when network topology cannot find enough datanodes
  • HDFS-11012 - Unnecessary INFO logging on DFSClients for InvalidToken
  • HDFS-11040 - Add documentation for HDFS-9820 distcp improvement
  • HDFS-11056 - Concurrent append and read operations lead to checksum error
  • MAPREDUCE-4784 - TestRecovery occasionally fails
  • MAPREDUCE-6628 - Potential memory leak in CryptoOutputStream
  • MAPREDUCE-6641 - TestTaskAttempt fails in trunk
  • MAPREDUCE-6670 - TestJobListCache#testEviction sometimes fails on Windows with timeout
  • MAPREDUCE-6718 - add progress log to JHS during startup
  • MAPREDUCE-6728 - Give fetchers hint when ShuffleHandler rejects a shuffling connection
  • MAPREDUCE-6738 - TestJobListCache.testAddExisting failed intermittently in slow VM testbed
  • MAPREDUCE-6771 - RMContainerAllocator sends container diagnostics event after corresponding completion event
  • MAPREDUCE-6798 - Fix intermittent failure of TestJobHistoryParsing.testJobHistoryMethods
  • YARN-2977 - Fixed intermittent TestNMClient failure
  • YARN-3601 - Fix UT TestRMFailover.testRMWebAppRedirect
  • YARN-3654 - ContainerLogsPage web UI should not have meta-refresh
  • YARN-3722 - Merge multiple TestWebAppUtils into o.a.h.yarn.webapp.util.TestWebAppUtils
  • YARN-4004 - container-executor should print output of docker logs if the docker container exits with non-0 exit status
  • YARN-4017 - container-executor overuses PATH_MAX
  • YARN-4092 - Fixed UI redirection to print useful messages when both RMs are in standby mode
  • YARN-4245 - Generalize config file handling in container-executor
  • YARN-4255 - container-executor does not clean up Docker operation command files
  • YARN-4820 - ResourceManager web redirects in HA mode drops query parameters
  • YARN-4940 - yarn node -list -all fail if RM starts with decommissioned node
  • YARN-5001 - Aggregated Logs root directory is created with wrong group if nonexistent
  • YARN-5107 - TestContainerMetrics fails
  • YARN-5246 - NMWebAppFilter web redirects drop query parameters
  • YARN-5704 - Provide configuration knobs to control enabling/disabling new/work in progress features in container-executor
  • YARN-5837 - NPE when getting node status of a decommissioned node after an RM restart
  • YARN-5862 - TestDiskFailures.testLocalDirsFailures failed
  • HBASE-15324 - Jitter can cause desiredMaxFileSize overflow in ConstantSizeRegionSplitPolicy and trigger unexpected split
  • HBASE-15430 - Failed taking snapshot - Manifest proto-message too large
  • HBASE-15856 - Do not cache unresolved addresses for connections
  • HBASE-16172 - Unify the retry logic in ScannerCallableWithReplicas and RpcRetryingCallerWithReadReplicas
  • HBASE-16270 - Handle duplicate clearing of snapshot in region replicas
  • HBASE-16294 - hbck reporting "No HDFS region dir found" for replicas
  • HBASE-16345 - RpcRetryingCallerWithReadReplicas#call() should catch some RegionServer Exceptions
  • HBASE-16360 - TableMapReduceUtil addHBaseDependencyJars has the wrong class name for PrefixTreeCodec
  • HBASE-16699 - Overflows in AverageIntervalRateLimiter's refill() and getWaitInterval()
  • HBASE-16767 - Mob compaction needs to clean up files in /hbase/mobdir/.tmp and /hbase/mobdir/.tmp/.bulkload when running into IO exceptions
  • HBASE-16824 - Writer.flush() can be called on already closed streams in WAL roll
  • HIVE-9570 - Investigate test failure on union_view.q
  • HIVE-10007 - Support qualified table name in analyze table compute statistics for columns
  • HIVE-10384 - Backport: RetryingMetaStoreClient does not retry wrapped TTransportExceptions
  • HIVE-10728 - Deprecate unix_timestamp(void) and make it deterministic
  • HIVE-10965 - Direct SQL for stats fails in 0-column case
  • HIVE-11901 - StorageBasedAuthorizationProvider requires write permission on table for SELECT statements
  • HIVE-12077 - MSCK Repair table should fix partitions in batches
  • HIVE-12083 - HIVE-10965 introduces Thrift error if partNames or colNames are empty
  • HIVE-12475 - Parquet schema evolution within array<struct<>> does not work
  • HIVE-12646 - Revert "beeline and HIVE CLI do not parse"
  • HIVE-12757 - Fix TestCodahaleMetrics#testFileReporting
  • HIVE-12891 - Hive fails when java.io.tmpdir is set to a relative location
  • HIVE-13058 - Add session and operation_log directory deletion messages
  • HIVE-13198 - Authorization issues with cascading views
  • HIVE-13237 - Select parquet struct field with uppercase throws NPE
  • HIVE-13381 - Backport introduced potential differences in the q-file output which need to be investigated further
  • HIVE-13381 - Timestamp and date should have precedence in type hierarchy over string group
  • HIVE-13429 - Tool to remove dangling scratch directory
  • HIVE-13620 - Merge llap branch work to master
  • HIVE-13625 - Hive Prepared Statement when executed with escape characters in parameter fails
  • HIVE-13645 - Beeline needs null-guard around hiveVars and hiveConfVars read
  • HIVE-13997 - Insert overwrite directory does not overwrite existing files
  • HIVE-14173 - NPE was thrown after enabling directsql in the middle of session
  • HIVE-14205 - Hive does not support union type with AVRO file format
  • HIVE-14313 - Test failure TestMetaStoreMetrics.testConnections
  • HIVE-14383 - SparkClientImpl should pass principal and keytab to spark-submit instead of calling kinit explicitly
  • HIVE-14395 - Add the missing data files to Avro union tests (HIVE-14205 addendum)
  • HIVE-14421 - FS.deleteOnExit holds references to _tmp_space.db files
  • HIVE-14426 - Extensive logging on info level in WebHCat
  • HIVE-14436 - Hive 1.2.1/Hitting "ql.Driver: FAILED: IllegalArgumentException Error"
  • HIVE-14538 - beeline throws exceptions with parsing Hive config when using !sh statement
  • HIVE-14697 - Cannot access kerberized HS2 Web UI
  • HIVE-14715 - Hive throws NumberFormatException with query with Null value
  • HIVE-14743 - ArrayIndexOutOfBoundsException - HBASE-backed views' query with JOINs
  • HIVE-14762 - Add logging while removing scratch space
  • HIVE-14784 - Operation logs are disabled automatically if the parent directory does not exist
  • HIVE-14799 - Query operation are not thread safe during its cancellation
  • HIVE-14805 - Subquery inside a view will have the object in the subquery as the direct input
  • HIVE-14810 - Fix failing test: TestMetaStoreMetrics.testMetaDataCounts.
  • HIVE-14817 - Shutdown the SessionManager timeoutChecker thread properly upon shutdown
  • HIVE-14839 - Improve the stability of TestSessionManagerMetrics
  • HIVE-14889 - Beeline leaks sensitive environment variables of HiveServer2
  • HIVE-15054 - Hive insertion query execution fails on Hive on Spark
  • HIVE-15061 - Metastore types are sometimes case sensitive
  • HIVE-15090 - Temporary DB failure can stop ExpiredTokenRemover thread
  • HIVE-15231 - Query on view with CTE and alias fails with table not found error
  • HUE-4941 - [editor] Content Security Policy directive blocks an image when navigating on marker map
  • HUE-5041 - [editor] Hue export large file to HDFS doesn't work on non-default database
  • HUE-4631 - [home] DB transaction failing because of atomic block on home page
  • HUE-5218 - [search] Validate dashboard sharing works
  • HUE-5028 - [security] Share Oozie workflow with modify permission;however, the user can't edit the shared WF
  • HUE-5163 - [security] Speed up initial page rendering
  • IMPALA-3949 - Log error message in FileSystemUtil.copyToLocal()
  • IMPALA-4076 - Fix runtime filter sort compare method
  • IMPALA-4099 - Fix the error message while loading UDFs with no JARs
  • IMPALA-4120 - Incorrect results with LEAD() analytic function
  • IMPALA-4135 - Thrift threaded server times out connections during high load
  • IMPALA-4153 - Fix count(*) on all blank('') columns - test
  • IMPALA-4170 - Fix identifier quoting in COMPUTE INCREMENTAL STATS
  • IMPALA-4196 - Cross compile bit-byte-functions
  • IMPALA-4223 - Handle truncated file read from HDFS cache
  • IMPALA-4237 - Fix materialization of 4-byte decimals in data source scan node
  • IMPALA-4246 - SleepForMs() utility function has undefined behavior for > 1s
  • IMPALA-4301 - Fix IGNORE NULLS with subquery rewriting
  • IMPALA-4336 - Cast exprs after unnesting union operands
  • IMPALA-4387 - Validate decimal type in Avro file schema
  • IMPALA-4751 - For unknown query IDs, /query_profile_encoded?query_id=123 starts with an empty line.
  • IMPALA-4423 - Correct but conservative implementation of Subquery.equals().
  • OOZIE-1814 - Oozie should mask any passwords in logs and REST interfaces
  • OOZIE-2582 - Populating external child IDs for action failures
  • OOZIE-2660 - Create documentation for DB Dump/Load functionality
  • PIG-3807 - Pig creates wrong schema after dereferencing nested tuple fields with sorts
  • PIG-3818 - PIG-2499 is accidentally reverted
  • SENTRY-858 - Add a test case for "Database prefix is not honoured when executing grant statement"
  • SENTRY-1313 - Database prefix is not honoured when executing grant statement
  • SENTRY-1429 - Backport and fix conflicts in SENTRY-1454
  • SENTRY-1464 - Fix Sentry e2e test failure in org.apache.sentry.tests.e2e.dbprovider.TestDbUriPermissions.testAlterPartitionLocationPrivileges
  • SOLR-9310 - PeerSync fails on a node restart due to IndexFingerPrint mismatch
  • SPARK-12009 - Avoid re-allocating YARN container while driver wants to stop all executors
  • SPARK-12339 - Added a NullPointerException for executor stage kill from web UI
  • SPARK-12392 - Optimize a location order of broadcast blocks by considering preferred local hosts
  • SPARK-12941 - Spark-SQL JDBC Oracle dialect fails to map string datatypes to Oracle VARCHAR datatype
  • SPARK-12966 - ArrayType(DecimalType) support in Postgres JDBC
  • SPARK-13242 - codegen fallback in case-when if there many branches
  • SPARK-13328 - Poor read performance for broadcast variables with dynamic resource allocation
  • SPARK-16625 - General data types to be mapped to Oracle
  • SPARK-16711 - YarnShuffleService does not re-init properly on YARN rolling upgrade
  • SPARK-17171 - DAG lists all partitions in the graph
  • SPARK-17433 - YarnShuffleService does not handle moving credentials levelDb
  • SPARK-17611 - Make shuffle service test really test authentication
  • SPARK-17644 - Do not add failedStages when abortStage for fetch failure
  • SPARK-17696 - Partial backport to branch-1.6.
  • SQOOP-2884 - Document --temporary-rootdir
  • SQOOP-2915 - Fixing Oracle-related unit tests
  • SQOOP-2952 - Fix Sqoop1 (import + --hbase-bulkload) row key not added into column family
  • SQOOP-2983 - OraOop export has degraded performance with wide tables
  • SQOOP-2986 - Add validation check for --hive-import and --incremental lastmodified
  • SQOOP-3021 - ClassWriter fails if a column name contains a backslash character
  • SQOOP-3034 - HBase import should fail fast if using anything other than as-textfile

 

