Long-term component architecture

As the main curator of open standards in Hadoop, Cloudera has a track record of bringing new open source solutions into its platform (such as Apache Spark, Apache HBase, and Apache Parquet) that are eventually adopted by the community at large. Because these components become standards, you can build long-term architecture on them with confidence.

 

PLEASE NOTE:

With the exception of DSSD support, Cloudera Enterprise 5.6.0 is identical to CDH 5.5.2/Cloudera Manager 5.5.3. If you do not need DSSD support and are already using the latest 5.5.x release, you do not need to upgrade.

 

Note: To be covered by Cloudera Support, all CDH hosts that make up a logical cluster must run the same major OS release, and Cloudera Manager must run on the same OS release as one of the CDH clusters it manages. The risk of issues caused by running different minor OS releases is considered lower than the risk of running different major OS releases, but Cloudera recommends running the same minor release across the cluster because it simplifies issue tracking and supportability.
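As a minimal sketch of auditing this requirement, the snippet below compares major OS releases from version strings gathered per host. The sample values are illustrative; in practice you might collect them with something like `ssh <host> '. /etc/os-release && echo "$VERSION_ID"'`.

```shell
# Illustrative VERSION_ID values collected from each cluster host.
versions="7.3 7.2 7.3"

major_of() {
  # Extract the major release from a version string, e.g. "7.3" -> "7".
  echo "$1" | cut -d. -f1
}

status=ok
first=""
for v in $versions; do
  m=$(major_of "$v")
  if [ -z "$first" ]; then
    first="$m"
  elif [ "$m" != "$first" ]; then
    status=mismatch
  fi
done
echo "major release check: $status"
```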

 

CDH 5 provides 64-bit packages for RHEL-compatible, SLES, Ubuntu, and Debian systems as listed below.
 

 

Operating System: Versions

Red Hat Enterprise Linux (RHEL)-compatible
  RHEL (+ SELinux mode in available versions): 7.3, 7.2, 7.1, 6.8, 6.7, 6.6, 6.5, 6.4, 5.11, 5.10, 5.7
  CentOS (+ SELinux mode in available versions): 7.3, 7.2, 7.1, 6.8, 6.7, 6.6, 6.5, 6.4, 5.11, 5.10, 5.7
  Oracle Enterprise Linux (OEL) with Unbreakable Enterprise Kernel (UEK) and Standard Kernel: 7.3, 7.2 (UEK R2), 7.1, 6.8 (UEK R3), 6.7 (UEK R3), 6.6 (UEK R3), 6.5 (UEK R2, UEK R3), 6.4 (UEK R2), 5.11, 5.10, 5.7

SLES
  SUSE Linux Enterprise Server (SLES): 12 with Service Pack 1; 11 with Service Pack 4, 3, or 2
  Hosts running Cloudera Manager Agents must use SUSE Linux Enterprise Software Development Kit 11 SP1.

Ubuntu/Debian
  Ubuntu: Trusty 14.04 (Long-Term Support, LTS); Precise 12.04 (LTS)
  Debian: Jessie 8.4, 8.2; Wheezy 7.8, 7.1, 7.0

 

 

  • Cloudera does not support CDH cluster deployments using hosts in Docker containers.
  • Cloudera supports RHEL 7 with the following limitations:
    • Only RHEL 7.2 and 7.1 are supported. RHEL 7.0 is not supported.
    • Red Hat currently supports upgrades from Red Hat Enterprise Linux 6 to Red Hat Enterprise Linux 7 only for specific, targeted use cases. Contact your OS vendor and review "What are the supported use cases for upgrading to RHEL 7?"
  • Cloudera Enterprise is supported on platforms with Security-Enhanced Linux (SELinux) enabled. However, Cloudera does not support use of SELinux with Cloudera Navigator. Cloudera is not responsible for policy support nor policy enforcement. If you experience issues with SELinux, contact your OS provider.
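Before deploying Cloudera Navigator (which does not support SELinux), it can be useful to check each host's SELinux status. A small sketch, using the standard `getenforce` utility and treating hosts without it as Disabled:

```shell
# Report the SELinux enforcement mode ("Enforcing", "Permissive",
# or "Disabled"); fall back to "Disabled" when the SELinux userland
# is not installed at all.
selinux_mode() {
  if command -v getenforce >/dev/null 2>&1; then
    getenforce
  else
    echo "Disabled"
  fi
}

echo "SELinux: $(selinux_mode)"
```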

 

Supported Databases

Supported databases by component (for Derby, see Note 5):

Cloudera Manager
  MariaDB: 5.5, 10
  MySQL: 5.7, 5.6, 5.5, 5.1
  PostgreSQL: 9.4, 9.3, 9.2, 9.1, 8.4, 8.3, 8.1
  Oracle: 12c, 11gR2 (see Note 11)

Oozie
  MariaDB: 5.5, 10
  MySQL: 5.7, 5.6, 5.5, 5.1
  PostgreSQL: 9.4, 9.3, 9.2, 9.1, 8.4, 8.3, 8.1 (see Note 3)
  Oracle: 12c, 11gR2
  Derby: Default

Flume
  Derby: Default (for the JDBC Channel only)

Hue
  MariaDB: 5.5, 10
  MySQL: 5.7, 5.6, 5.5, 5.1 (see Note 6)
  SQLite: Default
  PostgreSQL: 9.4, 9.3, 9.2, 9.1, 8.4, 8.3, 8.1 (see Note 3)
  Oracle: 12c, 11gR2

Hive/Impala
  MariaDB: 5.5, 10
  MySQL: 5.7, 5.6, 5.5, 5.1 (see Note 1)
  PostgreSQL: 9.4, 9.3, 9.2, 9.1, 8.4, 8.3, 8.1 (see Note 3)
  Oracle: 12c, 11gR2
  Derby: Default

Sentry
  MariaDB: 5.5, 10
  MySQL: 5.7, 5.6, 5.5, 5.1 (see Note 1)
  PostgreSQL: 9.4, 9.3, 9.2, 9.1, 8.4, 8.3, 8.1 (see Note 3)
  Oracle: 12c, 11gR2

Sqoop 1
  MariaDB: 5.5, 10
  MySQL, PostgreSQL, Oracle: see Note 4

Sqoop 2
  MariaDB: 5.5, 10
  MySQL: see Note 9
  Derby: Default

 

Note:

  1. Cloudera supports the databases listed above provided they are supported by the underlying operating system on which they run.
  2. MySQL 5.5 is supported on CDH 5.1. MySQL 5.6 is supported on CDH 5.1 and higher. The InnoDB storage engine must be enabled in the MySQL server.
  3. Cloudera Manager installation fails if GTID-based replication is enabled in MySQL.
  4. PostgreSQL 9.2 is supported on CDH 5.1 and higher. PostgreSQL 9.3 is supported on CDH 5.2 and higher. PostgreSQL 9.4 is supported on CDH 5.5 and higher.
  5. For purposes of transferring data only, Sqoop 1 supports MySQL 5.0 and above, PostgreSQL 8.4 and above, Oracle 10.2 and above, Teradata 13.10 and above, and Netezza TwinFin 5.0 and above. The Sqoop metastore works only with HSQLDB (1.8.0 and higher 1.x versions; the metastore does not work with any HSQLDB 2.x versions).
  6. Derby is supported as shown in the table, but not always recommended. See the pages for individual components in the Cloudera Installation guide for recommendations.
  7. CDH 5 Hue requires the default MySQL version of the operating system on which it is being installed, which is usually MySQL 5.1, 5.5, or 5.6.
  8. When installing a JDBC driver, only the ojdbc6.jar file is supported for both Oracle 11g R2 and Oracle 12c; the ojdbc7.jar file is not supported.
  9. Sqoop 2 lacks some of the features of Sqoop 1. Cloudera recommends you use Sqoop 1. Use Sqoop 2 only if it contains all the features required for your use case.
  10. MariaDB 10 is supported only on CDH 5.9 and higher.
  11. For Oracle 12cR1, the Oracle 12 client library and the OJDBC7 connector are not supported.
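Two of the notes above translate into simple pre-flight checks against a MySQL server before pointing Cloudera Manager, Hive, or Oozie at it: GTID-based replication must be off, and the InnoDB storage engine must be enabled. A sketch of the comparison logic; the connection settings shown in comments are placeholders for your environment:

```shell
# Values would be gathered from the server, for example:
#   gtid=$(mysql -h db.example.com -u root -p -N -e "SELECT @@gtid_mode")
#   innodb=$(mysql -h db.example.com -u root -p -N -e \
#     "SELECT SUPPORT FROM information_schema.ENGINES WHERE ENGINE='InnoDB'")

check_gtid() {
  # GTID mode must be OFF, or Cloudera Manager installation fails.
  [ "$1" = "OFF" ]
}

check_innodb() {
  # InnoDB must report YES or DEFAULT in information_schema.ENGINES.
  [ "$1" = "YES" ] || [ "$1" = "DEFAULT" ]
}

check_gtid "OFF" && echo "gtid_mode ok"
```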

CDH and Cloudera Manager Supported JDK Versions

Only 64-bit JDKs from Oracle are supported. Oracle JDK 7 is supported across all versions of Cloudera Manager 5 and CDH 5. Oracle JDK 8 is supported in C5.3.x and higher.

 

A supported minor JDK release will remain supported throughout a Cloudera major release lifecycle, from the time of its addition forward, unless specifically excluded.

 

Warning: JDK 1.8u40 and JDK 1.8u60 are excluded from support. Also, the Oozie Web Console returns a 500 error when the Oozie server runs on JDK 8u75 or higher.

 

Running CDH nodes within the same cluster on different JDK releases is not supported. The JDK release must match across the cluster, down to the patch level.

  • All nodes in your cluster must run the same Oracle JDK version.
  • All services must be deployed on the same Oracle JDK version.
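A small sketch for auditing JDK patch levels across hosts. The hostnames and ssh loop in the comments are placeholders; the helper just extracts the quoted version from the first line of `java -version` output (which is written to stderr):

```shell
parse_jdk_version() {
  # Reduce a "java -version" banner line to the version string,
  # e.g. 'java version "1.7.0_67"' -> 1.7.0_67
  echo "$1" | sed 's/.*"\(.*\)".*/\1/'
}

# Placeholder audit loop:
# for h in node1 node2 node3; do
#   banner=$(ssh "$h" 'java -version 2>&1 | head -1')
#   echo "$h: $(parse_jdk_version "$banner")"
# done

parse_jdk_version 'java version "1.7.0_67"'
```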

 

The Cloudera Manager repository is packaged with an Oracle JDK (for example, 1.7.0_67), which can be installed automatically during a new installation or an upgrade.

 

For a full list of supported JDK versions, see CDH and Cloudera Manager Supported JDK Versions.


Hue

Hue works with the two most recent LTS (long-term support) or ESR (extended support release) browsers. Cookies and JavaScript must be enabled.

Hue may display in older browser versions, and even in other browsers, but you might not have access to all of its features.

 

Supported Internet Protocol

CDH requires IPv4. IPv6 is not supported.
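Since CDH requires IPv4, it is worth confirming that each host's name resolves to an IPv4 address. A sketch using `getent`, which is available on the supported Linux distributions:

```shell
# Succeed only if the given name resolves to at least one IPv4 address.
has_ipv4() {
  getent ahostsv4 "$1" >/dev/null 2>&1
}

if has_ipv4 "$(hostname)"; then
  echo "IPv4 ok"
else
  echo "no IPv4 address for $(hostname)" >&2
fi
```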

 

See also Configuring Network Names.

Multihoming CDH or Cloudera Manager is not supported outside specifically certified Cloudera partner appliances. Cloudera finds that current Hadoop architectures combined with modern network infrastructures and security practices remove the need for multihoming. Multihoming, however, is beneficial internally in appliance form factors to take advantage of high-bandwidth InfiniBand interconnects.

 

Although some subareas of the product may work with unsupported custom multihoming configurations, there are known issues with multihoming. In addition, unknown issues may arise because multihoming is not covered by our test matrix outside the Cloudera-certified partner appliances.

 


The following components are supported by the indicated versions of Transport Layer Security (TLS):

 

Components Supported by TLS

Component          Role                     Name                                        Port    Version
Cloudera Manager   Cloudera Manager Server  -                                           7182    TLS 1.2
Cloudera Manager   Cloudera Manager Server  -                                           7183    TLS 1.2
Flume              -                        -                                           9099    TLS 1.2
Flume              -                        Avro Source/Sink                            -       TLS 1.2
Flume              -                        Flume HTTP Source/Sink                      -       TLS 1.2
HBase              Master                   HBase Master Web UI Port                    60010   TLS 1.2
HDFS               NameNode                 Secure NameNode Web UI Port                 50470   TLS 1.2
HDFS               Secondary NameNode       Secure Secondary NameNode Web UI Port       50495   TLS 1.2
HDFS               HttpFS                   REST Port                                   14000   TLS 1.1, TLS 1.2
Hive               HiveServer2              HiveServer2 Port                            10000   TLS 1.2
Hue                Hue Server               Hue HTTP Port                               8888    TLS 1.2
Impala             Impala Daemon            Impala Daemon Beeswax Port                  21000   TLS 1.2
Impala             Impala Daemon            Impala Daemon HiveServer2 Port              21050   TLS 1.2
Impala             Impala Daemon            Impala Daemon Backend Port                  22000   TLS 1.2
Impala             Impala StateStore        StateStore Service Port                     24000   TLS 1.2
Impala             Impala Daemon            Impala Daemon HTTP Server Port              25000   TLS 1.2
Impala             Impala StateStore        StateStore HTTP Server Port                 25010   TLS 1.2
Impala             Impala Catalog Server    Catalog Server HTTP Server Port             25020   TLS 1.2
Impala             Impala Catalog Server    Catalog Server Service Port                 26000   TLS 1.2
Oozie              Oozie Server             Oozie HTTPS Port                            11443   TLS 1.1, TLS 1.2
Solr               Solr Server              Solr HTTP Port                              8983    TLS 1.1, TLS 1.2
Solr               Solr Server              Solr HTTPS Port                             8985    TLS 1.1, TLS 1.2
Spark              History Server           -                                           18080   TLS 1.2
YARN               ResourceManager          ResourceManager Web Application HTTP Port   8090    TLS 1.2
YARN               JobHistory Server        MRv1 JobHistory Web Application HTTP Port   19890   TLS 1.2
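To verify which TLS version a given service port actually negotiates, the openssl CLI can be used as a probe. A sketch; the host and port in the usage comment are placeholders drawn from the table (Cloudera Manager Server on 7183):

```shell
# Report whether host:port completes a handshake at the given TLS
# version. proto is an openssl s_client protocol flag name,
# e.g. tls1_1 or tls1_2.
check_tls() {
  host="$1"; port="$2"; proto="$3"
  if openssl s_client -connect "$host:$port" "-$proto" \
      </dev/null >/dev/null 2>&1; then
    echo "$host:$port accepts $proto"
  else
    echo "$host:$port rejects $proto"
  fi
}

# Example (placeholder host):
# check_tls cm.example.com 7183 tls1_2
```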

Issues Fixed in CDH 5.10.2

Upstream Issues Fixed

The following upstream issues are fixed in CDH 5.10.2:

  • FLUME-2798 - Malformed Syslog messages can lead to OutOfMemoryException
  • FLUME-3080 - Close failure in HDFS Sink might cause data loss
  • FLUME-3085 - HDFS Sink can skip flushing some BucketWriters, might lead to data loss
  • HADOOP-11400 - GraphiteSink does not reconnect to Graphite after 'broken pipe'
  • HADOOP-11599 - Client#getTimeout should use IPC_CLIENT_PING_DEFAULT when IPC_CLIENT_PING_KEY is not configured
  • HADOOP-12672 - RPC timeout should not override IPC ping interval
  • HADOOP-12751 - While using kerberos Hadoop incorrectly assumes names with '@' to be non-simple.
  • HADOOP-13503 - Improve SaslRpcClient failure logging
  • HADOOP-13749 - KMSClientProvider combined with KeyProviderCache can result in wrong UGI being used
  • HADOOP-13826 - S3A Deadlock in multipart copy due to thread pool limits
  • HADOOP-14050 - Add process name to kms process
  • HADOOP-14083 - Add missing file hadoop-common-project/hadoop-kms/src/main/tomcat/catalina-default.properties
  • HADOOP-14083 - KMS should support old SSL clients.
  • HADOOP-14104 - Client should always ask namenode for kms provider path
  • HADOOP-14141 - Store KMS SSL keystore password in catalina.properties
  • HADOOP-14195 - CredentialProviderFactory$getProviders is not thread-safe
  • HADOOP-14242 - Make KMS Tomcat SSL property sslEnabledProtocols and clientAuth configurable
  • HDFS-10715 - NPE when applying AvailableSpaceBlockPlacementPolicy
  • HDFS-11390 - Add process name to httpfs process
  • HDFS-11418 - HttpFS should support old SSL clients
  • HDFS-11515 - -du throws ConcurrentModificationException
  • HDFS-11579 - Make HttpFS Tomcat SSL property sslEnabledProtocols and clientAuth configurable
  • HDFS-11689 - New exception thrown by DFSClient%isHDFSEncryptionEnabled broke hacky hive code
  • MAPREDUCE-6165 - [JDK8] TestCombineFileInputFormat failed on JDK8
  • MAPREDUCE-6201 - TestNetworkedJob fails on trunk
  • MAPREDUCE-6839 - TestRecovery.testCrashed failed
  • YARN-3251 - Fixed a deadlock in CapacityScheduler when computing absoluteMaxAvailableCapacity in LeafQueue
  • YARN-6042 - Dump scheduler and queue state information into FairScheduler DEBUG log.
  • YARN-6264 - AM not launched when a single vcore is available on the cluster.
  • YARN-6359 - TestRM#testApplicationKillAtAcceptedState fails rarely due to race condition
  • YARN-6360 - Prevent FS state dump logger from cramming other log files
  • YARN-6453 - fairscheduler-statedump.log gets generated regardless of service
  • YARN-6615 - AmIpFilter drops query parameters on redirect
  • HBASE-15837 - Memstore size accounting is wrong if postBatchMutate() throws exception
  • HBASE-15941 - HBCK repair should not unsplit healthy splitted region
  • HBASE-16350 - Undo server abort from HBASE-14968
  • HBASE-16630 - Fragmentation in long running Bucket Cache
  • HBASE-16931 - Setting cell's seqId to zero in compaction flow might cause RS down.
  • HBASE-16977 - VerifyReplication should log a printable representation of the row keys
  • HBASE-17460 - enable_table_replication can not perform cyclic replication of a table
  • HBASE-17501 - guard against NPE while reading FileTrailer and HFileBlock
  • HBASE-17574 - Clean up how to run tests under hbase-spark module
  • HBASE-17673 - Monitored RPC Handler not shown in the WebUI
  • HBASE-17688 - MultiRowRangeFilter not working correctly if given same start and stop RowKey
  • HBASE-17710 - HBase in standalone mode creates directories with 777 permission
  • HBASE-17717 - Explicitly use "sasl" ACL scheme for hbase superuser
  • HBASE-17731 - Fractional latency reporting in MultiThreadedAction
  • HBASE-17761 - Test TestRemoveRegionMetrics.testMoveRegion fails intermittently because of race condition
  • HBASE-17779 - disable_table_replication returns misleading message and does not turn off replication
  • HBASE-17798 - RpcServer.Listener.Reader can abort due to CancelledKeyException
  • HBASE-17970 - Set yarn.app.mapreduce.am.staging-dir when starting MiniMRCluster
  • HBASE-18096 - Limit HFileUtil visibility and add missing annotations
  • HIVE-9481 - allow column list specification in INSERT statement
  • HIVE-9567 - JSON SerDe not escaping special chars when writing char/varchar data
  • HIVE-10329 - Hadoop reflectionutils has issues
  • HIVE-11141 - Improve RuleRegExp when the Expression node stack gets huge
  • HIVE-11418 - Dropping a database in an encryption zone with CASCADE and trash enabled fails
  • HIVE-11428 - Performance: Struct IN() clauses are extremely slow
  • HIVE-11671 - Optimize RuleRegExp in DPP codepath
  • HIVE-11842 - Improve RuleRegExp by caching some internal data structures
  • HIVE-12179 - Add option to not add spark-assembly.jar to Hive classpath
  • HIVE-12768 - Thread safety: binary sortable serde decimal deserialization
  • HIVE-13390 - Partial backport of HIVE-13390. Backported only httpclient 4.5.2 and httpcore 4.4.4 to fix the Apache Hive SSL vulnerability bug.
  • HIVE-14210 - ExecDriver should call jobclient.close() to trigger cleanup
  • HIVE-14380 - Queries on tables with remote HDFS paths fail in "encryption" checks.
  • HIVE-14564 - Column Pruning generates out of order columns in SelectOperator which cause ArrayIndexOutOfBoundsException.
  • HIVE-14819 - FunctionInfo for permanent functions shows TEMPORARY FunctionType
  • HIVE-14943 - Partial backport of HIVE-14943 - Base Implementation (of HIVE-10924)
  • HIVE-15282 - Different modification times are used when an index is built and when its staleness is checked
  • HIVE-15572 - Improve the response time for query canceling when it happens during acquiring locks
  • HIVE-15997 - Resource leaks when query is cancelled
  • HIVE-16024 - MSCK Repair Requires nonstrict hive.mapred.mode
  • HIVE-16156 - FileSinkOperator should delete existing output target when renaming
  • HIVE-16175 - Possible race condition in InstanceCache
  • HIVE-16297 - Improving hive logging configuration variables
  • HIVE-16394 - HoS does not support queue name change in middle of session
  • HIVE-16413 - Create table as select does not check ownership of the location
  • HIVE-16459 - Forward channelInactive to RpcDispatcher
  • HIVE-16593 - SparkClientFactory.stop may prevent JVM from exiting
  • HIVE-16646 - Alias in transform ... as clause shouldn't be case sensitive
  • HIVE-16660 - Not able to add partition for views in hive when sentry is enabled
  • HIVE-16693 - beeline "source" command freezes if you have a comment in it?
  • HUE-5659 - [home] Ignore history dependencies when importing document from different cluster
  • HUE-5684 - [oozie] Remove duplicates in node data for graph and disable graph tab for large workflows
  • HUE-5714 - [hive] Close SQL canary query "Select "Hello World""
  • HUE-5742 - [core] Allow user to provide schema name for database via ini
  • HUE-5816 - Changing default setting as "allowed_hosts=*"
  • HUE-5873 - [editor] The download in progress modal doesn't close automatically on IE11
  • HUE-5984 - [search] Escaping corrupts link-meta for building external links in grid dashboard
  • HUE-6045 - [pig] Progress bar doesn't turn green on completion
  • HUE-6075 - [oozie] Remove email body while displaying external graphs in dashboard
  • HUE-6090 - Hue to do a keep alive on idle sessions to HiveServer2
  • HUE-6103 - [fb] Log filesystem initialization exceptions
  • HUE-6104 - [aws] Check if boto configuration section exists before adding it
  • HUE-6109 - [core] Remove the restriction on Document2 invalid chars
  • HUE-6115 - [core] Document paths have encoding problems
  • HUE-6131 - [hive] Select partition values based on the actual datatypes of the partition column
  • HUE-6133 - [home] Typing on the search box crashes IE 11
  • HUE-6144 - [oozie] Add generic XSL template to workflow graph parser
  • HUE-6161 - [doc2] Move document conversion upgrade into a migration
  • HUE-6193 - [converter] Retain last_executed time when creating doc2 object
  • HUE-6197 - [impala] Fix XSS Vulnerability in the old editor error messages
  • HUE-6212 - [oozie] Prevent XSS injection in coordinator cron frequency field
  • HUE-6228 - [core] Disable touchscreen detection on Nicescroll
  • HUE-6250 - [frontend] Losing # fragment of full URL on login redirect
  • HUE-6261 - [oozie] Avoid JS error preventing workflow action status update
  • HUE-6262 - [core] Converter should separate history docs from saved docs
  • HUE-6263 - [converter] Delete Doc2 object in case of exception
  • HUE-6264 - [converter] Decrease memory usage for users with very high document1 objects
  • HUE-6266 - [converter] Remove unnecessary call to document link
  • HUE-6295 - [doc2] Avoid unrelated DB calls in sync_documents after import
  • HUE-6310 - [doc2] Create missing doc1 links for delete and copy operations
  • HUE-6407 - [pig] Play button doesn't come back after killing the running pig job
  • HUE-6446 - [oozie] User can't edit shared coordinator or bundle
  • HUE-6604 - [oozie] Fix timestamp conversion to server timezone
  • IMPALA-3641 - Fix catalogd RPC responses to DROP IF EXISTS.
  • IMPALA-4088 - Assign fix values to the minicluster server ports
  • IMPALA-4293 - query profile should include error log
  • IMPALA-4544 - ASAN should ignore SEGV and leaks
  • IMPALA-4546 - Fix Moscow timezone conversion after 2014
  • IMPALA-4615 - Fix create_table.sql command order
  • IMPALA-4631 - avoid DCHECK in PlanFragementExecutor::Close().
  • IMPALA-4716 - Expr rewrite causes IllegalStateException
  • IMPALA-4722 - Disable log caching in test_scratch_disk
  • IMPALA-4725 - Query option to control Parquet array resolution.
  • IMPALA-4733 - Change HBase ports to non-ephemeral
  • IMPALA-4738 - STDDEV_SAMP should return NULL for single record input
  • IMPALA-4787 - Optimize APPX_MEDIAN() memory usage
  • IMPALA-4822 - Implement dynamic log level changes
  • IMPALA-4899 - Fix parquet table writer dictionary leak
  • IMPALA-4902 - Copy parameters map in HdfsPartition.toThrift().
  • IMPALA-4920 - custom cluster tests: fix generation of py.test options
  • IMPALA-4998 - Fix missing table lock acquisition.
  • IMPALA-5021 - Fix count(*) remaining rows overflow in Parquet.
  • IMPALA-5028 - Lock table in /catalog_objects endpoint.
  • IMPALA-5055 - Fix DCHECK in parquet-column-readers.cc ReadPageHeader()
  • IMPALA-5088 - Fix heap buffer overflow
  • IMPALA-5115 - Handle status from HdfsTableSink::WriteClusteredRowBatch
  • IMPALA-5145 - Do not constant fold null in CastExprs
  • IMPALA-5154 - Handle 'unpartitioned' Kudu tables
  • IMPALA-5156 - Drop VLOG level passed into Kudu client
  • IMPALA-5172 - Buffer overrun for Snappy decompression
  • IMPALA-5183 - increase write wait timeout in BufferedBlockMgrTest
  • IMPALA-5186 - Handle failed CreateAndOpenScanner() in MT scan.
  • IMPALA-5189 - Pin version of setuptools-scm
  • IMPALA-5193 - Initialize decompressor before finding first tuple
  • IMPALA-5197 - Erroneous corrupted Parquet file message
  • IMPALA-5198 - Error messages are sometimes dropped before reaching client
  • IMPALA-5208 - Bump toolchain to include fixes for IMPALA-5187
  • IMPALA-5217 - KuduTableSink checks null constraints incorrectly
  • IMPALA-5244 - test_hdfs_file_open_fail fails on local filesystem build
  • IMPALA-5252 - Fix crash in HiveUdfCall::GetStringVal() when mem_limit exceeded
  • IMPALA-5253 - Use appropriate transport for StatestoreSubscriber
  • IMPALA-5287 - Test skip.header.line.count on gzip
  • IMPALA-5297 - Reduce free-pool-test mem requirement to avoid OOM
  • IMPALA-5301 - Set Kudu minicluster memory limit
  • IMPALA-5318 - Generate access events with fully qualified table names
  • IMPALA-5322 - Fix a potential crash in Frontend & Catalog JNI startup
  • IMPALA-5487 - Race in runtime-profile.cc::toThrift() can lead to corrupt profiles being generated while query is running
  • IMPALA-5172 - fix incorrect cast in call to LZO decompress
  • OOZIE-2739 - Remove property expansion pattern from ShellMain's log4j properties content
  • OOZIE-2818 - Can't overwrite oozie.action.max.output.data on a per-workflow basis
  • OOZIE-2819 - Make Oozie REST API accept multibyte characters for script Actions
  • OOZIE-2844 - Increase stability of Oozie actions when log4j.properties is missing or not readable
  • OOZIE-2872 - Address backward compatibility issue introduced by OOZIE-2748
  • OOZIE-2908 - Fix typo in oozie.actions.null.args.allowed property in oozie-default.xml
  • SENTRY-1390 - Add test cases to ensure usability of URI privileges for HMS binding
  • SENTRY-1422 - JDO deadlocks while processing grant while a background thread processes Notification logs (includes backport of SENTRY-1512: Refactor the Sentry database transaction management)
  • SENTRY-1476 - Sentry is subject to JDOQL injection
  • SENTRY-1505 - CommitContext isn't used by anything and should be removed
  • SENTRY-1515 - Cleanup exception handling in SentryStore
  • SENTRY-1517 - SentryStore should actually use function getMSentryRole to get roles
  • SENTRY-1557 - getRolesForGroups(), getRoleNamesForGroups() does too many trips to the DB
  • SENTRY-1594 - TransactionBlock should become generic
  • SENTRY-1609 - DelegateSentryStore is subject to JDQL injection
  • SENTRY-1615 - SentryStore should not allocate empty objects that are immediately returned
  • SENTRY-1625 - PrivilegeOperatePersistence can use QueryParamBuilder (fixes [SQL Injection: JDO] in PrivilegeOperatePersistence.java)
  • SENTRY-1636 - Remove thrift dependency on fb303
  • SENTRY-1683 - MetastoreCacheInitializer has a race condition in handling results list
  • SENTRY-1714 - MetastorePlugin.java should quietly return from renameAuthzObject() when both paths are null
  • SENTRY-1759 - UpdatableCache leaks connections
  • SOLR-5776 - Enabled SSL tests can easily exhaust random-generator entropy and block; refactor SSLConfig so SSLTestConfig can provide SSLContexts using a non-blocking pseudo-random SecureRandom, preventing SSL tests from blocking on entropy-starved machines
  • SOLR-8836 - Return 400, and a SolrException when an invalid json is provided to the update handler instead of 500.
  • SOLR-9153 - Update Apache commons beanutils version to 1.9.2
  • SOLR-9527 - Improve distribution of replicas when restoring a collection
  • SOLR-9836 - Add ability to recover from leader when index corruption is detected on SolrCore creation.
  • SOLR-9848 - Lower solr.cloud.wait-for-updates-with-stale-state-pause back down from 7 seconds.
  • SOLR-10076 - Hide keystore and truststore passwords from /admin/info/* outputs; updated JMXJsonServlet to use the new API and set redaction default to true
  • SOLR-10338 - Configure SecureRandom to be non-blocking for tests (backport)
  • SOLR-10360 - Remove an extra space from Hadoop distcp cmd used by Solr backup/restore
  • SOLR-10430 - Add ls command to ZkCLI for listing sub-dirs
  • SPARK-13693 - [STREAMING][TESTS] Stop StreamingContext before deleting checkpoint dir
  • SPARK-14930 - [SPARK-13693] Fix race condition in CheckpointWriter.stop()
  • SPARK-18922 - [SQL][CORE][STREAMING][TESTS] Fix all identified tests failed due to path and resource-not-closed problems on Windows
  • SPARK-19178 - [SQL][Backport-to-1.6] convert string of large numbers to int should return null
  • SPARK-19263 - DAGScheduler should avoid sending conflicting task set.
  • SPARK-19537 - Move pendingPartitions to ShuffleMapStage.
  • SPARK-20922 - [CORE][HOTFIX] Don't use Java 8 lambdas in older branches.
  • SPARK-20922 - [CORE] Add whitelist of classes that can be deserialized by the launcher.
  • SQOOP-3123 - Introduce escaping logic for column mapping parameters (the same escaping Sqoop already uses for DB column names), so special column names (e.g. containing the '#' character) and the mappings related to those columns can share the same format, avoiding end-user confusion; also eliminates the related Avro format clashing issues
  • SQOOP-3140 - Removing deprecated mapred.map.max.attempts, mapred.reduce.max.attempts entries and using the new constants directly from Hadoop instead
  • SQOOP-3159 - Sqoop (export + --table) with Oracle table_name having '$' fails with error
  • ZOOKEEPER-2044 - Backport to fix CancelledKeyException on ZooKeeper server
