
Apache HBase Known Issues

— Some New Features in HBase 0.98 Not Supported in CDH 5.1

The following features, introduced upstream in HBase 0.98, are not supported in CDH 5.1:
  • Visibility labels
  • Per-cell access controls
  • Transparent server-side encryption
  • Stripe compaction
  • Distributed log replay
For more information, see New Features and Changes for HBase in CDH 5.

— HBase moves to Protoc 2.5.0

This change may cause JAR conflicts with applications that have older versions of protobuf in their Java classpath.

Bug: None

Severity: Medium

Workaround: Update applications to use Protoc 2.5.0. Work on a longer-term solution is in progress.
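For example, a Maven-built application would typically move to the 2.5.0 client library (and regenerate any classes compiled from .proto files with the matching protoc release). The coordinates below are the standard protobuf-java artifact:

  <dependency>
    <groupId>com.google.protobuf</groupId>
    <artifactId>protobuf-java</artifactId>
    <version>2.5.0</version>
  </dependency>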

— Write performance may be a little slower in CDH 5 than in CDH 4

Bug: None

Severity: Low

Workaround: None, but see Checksums in the HBase section of the CDH 5 Installation Guide.

— Must explicitly add permissions for owner users before upgrading from CDH 4.1.x

In CDH 4.1.x, an HBase table could have an owner. The owner user had full administrative permissions on the table (RWXCA). These permissions were implicit (that is, they were not stored explicitly in the HBase acl table), but the code checked them when determining if a user could perform an operation.

The owner construct was removed as of CDH 4.2.0, and the code now relies exclusively on entries in the acl table. Since table owners do not have an entry in this table, their permissions are removed on upgrade from CDH 4.1.x to CDH 4.2.0 or later.

Bug: None

Severity: Medium

Anticipated Resolution: None; use workaround

Workaround: Add permissions for owner users before upgrading from CDH 4.1.x. You can automate the task of making the owner users' implicit permissions explicit, using code similar to the following. (Note that this snippet is intended only to give you an idea of how to proceed; it may not compile and run as it stands.)
# Assumes: `tables` is the list of HTableDescriptors to fix up (for example, from
# HBaseAdmin#listTables), `protocol` is the AccessController coprocessor protocol
# obtained from the _acl_ table, and `LOG` is a logger -- all set up elsewhere.
PERMISSIONS = 'RWXCA'

tables.each do |t|
  table_name = t.getNameAsString
  owner = t.getOwnerString
  LOG.warn("Granting " + owner + " with " + PERMISSIONS + " for table " + table_name)
  user_permission = UserPermission.new(owner.to_java_bytes, table_name.to_java_bytes,
                                       nil, nil, PERMISSIONS.to_java_bytes)
  protocol.grant(user_permission)
end
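Alternatively, for a small number of tables, you can make the same grants manually from the HBase shell before upgrading; the user and table names below are placeholders:

  hbase> grant 'owner_user', 'RWXCA', 'my_table'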

— Change in default splitting policy from ConstantSizeRegionSplitPolicy to IncreasingToUpperBoundRegionSplitPolicy may create too many splits

This affects you only if you are upgrading from CDH 4.1 or earlier.

With this policy, the split size is the number of regions of the same table on the current Region Server, squared, times the memstore flush size, capped at the configured maximum region split size. For example, if the flush size is 128MB, a table's single region splits after its first flush (1 * 1 * 128MB = 128MB), producing two regions that each split when they reach 2 * 2 * 128MB = 512MB. When one of those splits, there are three regions and the split size becomes 3 * 3 * 128MB = 1152MB, and so on until the configured maximum file size is reached; from that point on, the maximum file size is used.

This new default policy could create many splits if you have many tables in your cluster.

The default memstore flush size (hbase.hregion.memstore.flush.size) has also changed, from 64MB to 128MB, and the eventual region split size, hbase.hregion.max.filesize, is now 10GB (it was 1GB).

Bug: None

Severity: Medium

Anticipated Resolution: None; use workaround

Workaround: If you find you are getting too many splits, either revert to the old split policy or increase hbase.hregion.memstore.flush.size; see the sketch below.
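For example, to restore the old behavior cluster-wide, set hbase.regionserver.region.split.policy to org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy in hbase-site.xml. The JRuby sketch below, in the same style as the snippet earlier in these notes, instead pins the old policy on a single table; the table name my_table is a placeholder, and the calls may need adjustment for your client version.

include Java
java_import org.apache.hadoop.hbase.HBaseConfiguration
java_import org.apache.hadoop.hbase.client.HBaseAdmin

admin = HBaseAdmin.new(HBaseConfiguration.create)

# Fetch the table descriptor, point it at the old constant-size policy,
# and push the modified schema back to the cluster.
desc = admin.getTableDescriptor('my_table'.to_java_bytes)
desc.setRegionSplitPolicyClassName(
    'org.apache.hadoop.hbase.regionserver.ConstantSizeRegionSplitPolicy')
admin.disableTable('my_table')
admin.modifyTable('my_table'.to_java_bytes, desc)
admin.enableTable('my_table')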

— In a non-secure cluster, MapReduce over HBase does not properly handle splits in the BulkLoad case

You may see errors because of:

  • missing permissions on the directory that contains the files to bulk load
  • missing ACL rights for the table/families

Bug: None

Severity: Medium

Anticipated Resolution: None; use workaround

Workaround: In a non-secure cluster, execute BulkLoad as the hbase user; see the example after the note below.
  Note: For important information about configuration that is required for BulkLoad in a secure cluster as of CDH 4.3, see the Apache HBase Incompatible Changes subsection under Incompatible Changes in these Release Notes.
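For example, the final load of prepared HFiles on a non-secure cluster might be run as the hbase user like this (the HDFS path and table name are placeholders):

  sudo -u hbase hbase org.apache.hadoop.hbase.mapreduce.LoadIncrementalHFiles /user/hbase/bulkload-output my_table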

— Pluggable compaction and scan policies via coprocessors (HBASE-6427) not supported

Cloudera does not provide support for user-provided custom coprocessors.

Bug: HBASE-6427

Severity: Low

Workaround: None

— Custom constraints coprocessors (HBASE-4605) not supported

The constraints coprocessor feature provides a framework for constraints and requires you to add your own custom code. Cloudera does not support user-provided custom code, and hence does not support this feature.

Bug: HBASE-4605

Severity: Low

Workaround: None

— Pluggable split key policy (HBASE-5304) not supported

Cloudera supports the two split policies that are supplied and tested: ConstantSizeRegionSplitPolicy and KeyPrefixRegionSplitPolicy. The code also provides a mechanism for custom policies that are specified by adding a class name to the HTableDescriptor. Custom code added via this mechanism must be provided by the user. Cloudera does not support user-provided custom code, and hence does not support this feature.

Bug: HBASE-5304

Severity: Low

Workaround: None

— HBase may not tolerate HDFS root directory changes

While HBase is running, do not stop the HDFS instance running under it and restart it again with a different root directory for HBase.

Bug: None

Severity: Medium

Workaround: None

— AccessController postOperation problems in asynchronous operations

When security and Access Control are enabled, the following problems occur:

  • If a Delete Table fails for a reason other than missing permissions, the access rights are removed but the table may still exist and may be used again.
  • If hbaseAdmin.modifyTable() is used to delete column families, the rights are not removed from the Access Control List (ACL) table. The postOperation is implemented only for postDeleteColumn().
  • If Create Table fails, full rights for that table persist for the user who attempted to create it. If another user later succeeds in creating the table, the user who made the failed attempt still has the full rights.

Bug: HBASE-6992

Severity: Medium

Workaround: None

— Native library not included in tarballs

The native library that enables Region Server page pinning on Linux is not included in tarballs. This could impair performance if you install HBase from tarballs.

Bug: None

Severity: Low

Workaround: None

— hbase.zookeeper.useMulti set to false by default

The default value of hbase.zookeeper.useMulti was changed from true to false in CDH 5. This affects environments with HBase replication enabled and large replication queues.

Bug: None

Severity: Low

Workaround: Enable hbase.zookeeper.useMulti by setting the value to true in hbase-site.xml.
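For example, add the following to hbase-site.xml on the HBase hosts. Note that useMulti requires every server in the ZooKeeper ensemble to run ZooKeeper 3.4 or later (the version shipped with CDH 5):

  <property>
    <name>hbase.zookeeper.useMulti</name>
    <value>true</value>
  </property>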
