Known Issues Fixed in CDH 5.1.0
The following topics describe known issues fixed in CDH 5.1.0.
— The same DataNodes may appear in the NameNode web UI in both the live and dead node lists
— YARN Fair Scheduler's Cluster Utilization Threshold check is broken
Workaround: Set the yarn.scheduler.fair.preemption.cluster-utilization-threshold property in yarn-site.xml to -1.
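On releases prior to the fix, the workaround can be applied with a fragment like the following in yarn-site.xml (property name taken from the text above; -1 disables the check):

```xml
<!-- Disable the Fair Scheduler's cluster utilization threshold
     check for preemption; a value of -1 turns the check off. -->
<property>
  <name>yarn.scheduler.fair.preemption.cluster-utilization-threshold</name>
  <value>-1</value>
</property>
```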
— ResourceManager High Availability with manual failover does not work on secure clusters
Workaround: Enable automatic failover; this requires ZooKeeper.
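A minimal yarn-site.xml sketch for enabling automatic failover follows; the property names are the standard YARN HA settings (verify them against your Hadoop version), and the ZooKeeper hostnames are placeholders:

```xml
<!-- Enable ResourceManager HA with automatic (ZooKeeper-based) failover. -->
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<property>
  <!-- ZooKeeper quorum used for leader election; replace with your own hosts. -->
  <name>yarn.resourcemanager.zk-address</name>
  <value>zk1.example.com:2181,zk2.example.com:2181,zk3.example.com:2181</value>
</property>
```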
— MapReduce over HBase Snapshot bypasses HBase-level security
MapReduce over HBase Snapshot bypasses HBase-level security entirely, because the files are read directly from HDFS. The user running the scan or job must have read permission on the data and snapshot files.
Workaround: MapReduce users must be trusted to process/view all data in HBase.
— HBase snapshots now saved to the /<hbase>/.hbase-snapshot directory
HBase snapshots are now saved to the /<hbase>/.hbase-snapshot directory instead of the /.snapshot directory. This change resolves a naming conflict introduced by the HDFS snapshot feature in Hadoop 2.2/CDH 5 HDFS.
Workaround: None needed; this is handled during the upgrade process.
— Oozie jobs do not support ResourceManager HA in YARN
If the ResourceManager fails, the workflow fails.
— Oozie HA does not work properly with HCatalog integration or SLA notifications
This issue occurs only when HCatalog is used as a data dependency in a coordinator; using HCatalog from an action (for example, Pig) works correctly.