Synchronizing HDFS ACLs and Sentry Permissions

The HDFS-Sentry plugin allows you to configure synchronization of Sentry privileges with HDFS ACLs for specific HDFS directories.

Introduction

The integration of Sentry and HDFS permissions automatically keeps HDFS ACLs in sync with the privileges configured through Sentry. This feature offers the easiest way to share data between Hive, Impala, and other components such as MapReduce, Spark, and Pig, while setting permissions for that data with a single set of rules through Sentry. It maintains the ability of Hive and Impala to set permissions on views, in addition to tables, while access to data outside of Hive and Impala (for example, reading files off HDFS) requires table permissions. HDFS permissions for some or all of the files that are part of tables defined in the Hive Metastore are now controlled by Sentry.

Sentry-HDFS synchronization consists of three components:
  • An HDFS NameNode plugin
  • A Sentry-Hive Metastore plugin
  • A Sentry Service plugin

With synchronization enabled, Sentry translates permissions on databases and tables into the corresponding HDFS ACLs on the underlying files in HDFS. For example, if a user group is assigned to a Sentry role that has SELECT permission on a particular table, that user group also has read access to the HDFS files that are part of that table; when you list those files in HDFS, this permission appears as an HDFS ACL. Similarly, if a user group is assigned to a Sentry role that has SELECT permission on a database, that user group also has read access to the HDFS files that belong to that database, and those permissions likewise appear as HDFS ACLs.

Note that when Sentry was enabled, the hive user/group was given ownership of all files and directories in the Hive warehouse (/user/hive/warehouse), and the resulting synchronized Sentry permissions reflect that ownership. If you skipped that step, Sentry permissions will be based on the existing Hive warehouse ACLs; Sentry will not automatically grant ownership to the hive user.
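
If you want to confirm that ownership before enabling synchronization, a quick check from the command line (assuming the default warehouse location) is:

$ # Each entry under the warehouse should be owned by the hive user and group.
$ sudo -u hdfs hdfs dfs -ls /user/hive/warehouse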

The mapping of Sentry privileges to HDFS ACLs is as follows:

  • SELECT privilege -> Read access on the file.
  • INSERT privilege -> Write access on the file.
  • ALL privilege -> Read and Write access on the file.

Note that you must explicitly specify the path prefix to the Hive warehouse (default: /user/hive/warehouse) and any other directories that must be managed by Sentry. This procedure is described in the Enabling the HDFS-Sentry Plugin Using Cloudera Manager section below.
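
As a sketch of this mapping, the commands below grant SELECT on a hypothetical table and then inspect the ACL on its warehouse directory; the role, group, database, and table names (analyst_role, analysts, sales_db, transactions) are placeholders, as is the HiveServer2 host in the JDBC URL:

$ # Grant read access through Sentry (run as a user with GRANT privileges):
$ beeline -u "jdbc:hive2://HIVESERVER2_HOST:10000" -e "GRANT SELECT ON TABLE sales_db.transactions TO ROLE analyst_role;"
$ # Because SELECT maps to read access, the analysts group assigned to analyst_role
$ # should now appear with a read (r) ACL entry on the table's directory:
$ hdfs dfs -getfacl /user/hive/warehouse/sales_db.db/transactions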

Triggering HDFS ACL Changes

URIs are not affected by the HDFS-Sentry plugin. Therefore, you cannot manage all of your HDFS ACLs with the plugin; you must continue to use standard HDFS ACLs for data outside of Hive.
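
For data outside the synchronized path prefixes, you can continue to use the standard HDFS ACL commands; the directory and group below are placeholders:

$ # Grant the analysts group read access to a directory that Sentry does not manage:
$ hdfs dfs -setfacl -m group:analysts:r-x /data/external/logs
$ hdfs dfs -getfacl /data/external/logs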

HDFS ACL changes are triggered on:
  • Hive DATABASE object LOCATION (HDFS) when a role is granted to the object
  • Hive TABLE object LOCATION (HDFS) when a role is granted to the object

HDFS ACL changes are not triggered by:
  • Hive URI LOCATION (HDFS) when a role is granted to a URI
  • Hive SERVER object when a role is granted to the object. HDFS ACLs are not updated when a role is assigned to the SERVER. The privileges are inherited by child objects in standard Sentry interactions, but the plugin does not propagate them down to the HDFS ACLs.
  • Permissions granted on views. Views are not synchronized as objects in the HDFS file system.
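
The contrast is easiest to see with two grants side by side; the sketch below reuses the placeholder names from earlier, and NAMENODE_HOST is also a placeholder:

$ # Granting on a TABLE (or DATABASE) triggers an ACL update on the table's HDFS location:
$ beeline -u "jdbc:hive2://HIVESERVER2_HOST:10000" -e "GRANT SELECT ON TABLE sales_db.transactions TO ROLE analyst_role;"
$ # Granting on a URI does not; the ACLs on the target directory are left unchanged:
$ beeline -u "jdbc:hive2://HIVESERVER2_HOST:10000" -e "GRANT ALL ON URI 'hdfs://NAMENODE_HOST:8020/data/external/logs' TO ROLE analyst_role;"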

Prerequisites

  • CDH 5.3.0 or higher
  • (Strongly Recommended) Implement Kerberos authentication on your cluster.

The following conditions must also be true when enabling Sentry-HDFS synchronization. Failure to comply with any of them will result in validation errors.
  • You must use the Sentry service, not policy file-based authorization.
  • HDFS extended access control lists (ACLs) must be enabled.
  • There must be at least one Sentry service dependent on HDFS.
  • The Sentry service must have at least one Sentry Server role.
  • The Sentry service must have at least one dependent Hive service.
  • The Hive service must have at least one Hive metastore role.

Enabling the HDFS-Sentry Plugin Using Cloudera Manager

  1. Go to the HDFS service.
  2. Click the Configuration tab.
  3. Select Scope > HDFS (Service-Wide).
  4. Type Check HDFS Permissions in the Search box.
  5. Select Check HDFS Permissions.
  6. Select Enable Sentry Synchronization.
  7. Locate the Sentry Synchronization Path Prefixes property or search for it by typing its name in the Search box.
  8. Edit the Sentry Synchronization Path Prefixes property to list HDFS path prefixes where Sentry permissions should be enforced. Multiple HDFS path prefixes can be specified. By default, this property points to /user/hive/warehouse and must always be non-empty. If you are using a non-default location for the Hive warehouse, make sure you add it to the list of path prefixes. HDFS privilege synchronization will not occur for tables and databases located outside the HDFS regions listed here.
  9. Click Save Changes.
  10. Restart the cluster. Note that it may take an additional two minutes after cluster restart for privilege synchronization to take effect.
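
The path prefixes correspond to the sentry.authorization-provider.hdfs-path-prefixes property described in the command-line section below. With a hypothetical second managed directory added, the resulting value would look roughly like this:

<property>
    <name>sentry.authorization-provider.hdfs-path-prefixes</name>
    <value>/user/hive/warehouse,/data/managed_tables</value>
</property>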

Enabling the HDFS-Sentry Plugin Using the Command Line

To enable the Sentry plugins on an unmanaged cluster, you must explicitly allow the hdfs user to interact with Sentry, and install the plugin packages as described in the following sections.

Allowing the hdfs user to connect with Sentry

For an unmanaged cluster, add hdfs to the sentry.service.allow.connect property in sentry-site.xml.
<property>
    <name>sentry.service.allow.connect</name>
    <value>impala,hive,hue,hdfs</value>
</property>

Installing the HDFS-Sentry Plugin

Use the following instructions, depending on your operating system, to install the sentry-hdfs-plugin package. The package must be installed (at a minimum) on the following hosts:
  • The host running the NameNode and Secondary NameNode
  • The host running the Hive Metastore
  • The host running the Sentry Service
OS                    Command
RHEL-compatible       $ sudo yum install sentry-hdfs-plugin
SLES                  $ sudo zypper install sentry-hdfs-plugin
Ubuntu or Debian      $ sudo apt-get install sentry-hdfs-plugin

Configuring the HDFS NameNode Plugin

Add the following properties to the hdfs-site.xml file on the NameNode host.
<property>
    <name>dfs.namenode.acls.enabled</name>
    <value>true</value>
</property>

<property>
    <name>dfs.namenode.authorization.provider.class</name>
    <value>org.apache.sentry.hdfs.SentryAuthorizationProvider</value>
</property>

<property>
    <name>dfs.permissions</name>
    <value>true</value>
</property>

<!-- Comma-separated list of HDFS path prefixes where Sentry permissions should be enforced. -->
<!-- Privilege synchronization will occur only for tables located in HDFS regions specified here. -->
<property>
    <name>sentry.authorization-provider.hdfs-path-prefixes</name>
    <value>/user/hive/warehouse</value>
</property>

<property>
    <name>sentry.hdfs.service.security.mode</name>
    <value>kerberos</value>
</property>

<!-- For example: sentry/_HOST@VPC.CLOUDERA.COM -->
<property>
    <name>sentry.hdfs.service.server.principal</name>
    <value>SENTRY_SERVER_PRINCIPAL</value>
</property>

<property>
    <name>sentry.hdfs.service.client.server.rpc-port</name>
    <value>SENTRY_SERVER_PORT</value>
</property>

<property>
    <name>sentry.hdfs.service.client.server.rpc-address</name>
    <value>SENTRY_SERVER_HOST</value>
</property>

Configuring the Hive Metastore Plugin

Add the following properties to hive-site.xml on the Hive Metastore Server host.
<property>
    <name>sentry.metastore.plugins</name>
    <value>org.apache.sentry.hdfs.MetastorePlugin</value>
</property>

<property>
    <name>sentry.hdfs.service.client.server.rpc-port</name>
    <value>SENTRY_SERVER_PORT</value>
</property>

<property>
    <name>sentry.hdfs.service.client.server.rpc-address</name>
    <value>SENTRY_SERVER_HOSTNAME</value>
</property>

<property>
    <name>sentry.hdfs.service.client.server.rpc-connection-timeout</name>
    <value>200000</value>
</property>

<property>
    <name>sentry.hdfs.service.security.mode</name>
    <value>kerberos</value>
</property>

<!-- For example: sentry/_HOST@VPC.CLOUDERA.COM -->
<property>
    <name>sentry.hdfs.service.server.principal</name>
    <value>SENTRY_SERVER_PRINCIPAL</value>
</property>

Configuring the Sentry Service Plugin

Add the following properties to the sentry-site.xml file on the Sentry Server host.
<property>
    <name>sentry.service.processor.factories</name>
    <value>org.apache.sentry.provider.db.service.thrift.SentryPolicyStoreProcessorFactory,org.apache.sentry.hdfs.SentryHDFSServiceProcessorFactory</value>
</property>

<property>
    <name>sentry.policy.store.plugins</name>
    <value>org.apache.sentry.hdfs.SentryPlugin</value>
</property>
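
After editing these files, restart the affected daemons so the plugins are loaded. On a package-based (unmanaged) cluster the init scripts are typically named as shown below, although the exact service names can vary by release:

$ sudo service hadoop-hdfs-namenode restart
$ sudo service hive-metastore restart
$ sudo service sentry-store restart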

Testing the Sentry Synchronization Plugins

The following tasks will help you ensure that Sentry-HDFS synchronization has been enabled and configured correctly:

For a folder that has been enabled for the plugin, such as the Hive warehouse, try accessing the files in that folder outside Hive and Impala. To do this, you need to know which tables and databases those HDFS files belong to and what Sentry permissions are set on those tables. Attempt to view or modify the Sentry permission settings for those tables using one of the following tools:
  • (Recommended) Hue's Security application
  • HiveServer2 CLI
  • Impala CLI
  • Access the tables and databases directly in HDFS. For example:
    • List files inside the folder and verify that the file permissions shown in HDFS (including ACLs) match what was configured in Sentry.
    • Run a MapReduce, Pig, or Spark job that accesses those files. Pick any tool other than HiveServer2 and Impala.
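
For example, assuming the placeholder table and role used earlier on this page, you can compare what Sentry reports with what HDFS reports; the ACL entries should match the Sentry privileges:

$ # Sentry's view of the table (run in Beeline as an administrator):
$ beeline -u "jdbc:hive2://HIVESERVER2_HOST:10000" -e "SHOW GRANT ROLE analyst_role ON TABLE sales_db.transactions;"
$ # HDFS's view of the same data:
$ hdfs dfs -getfacl -R /user/hive/warehouse/sales_db.db/transactions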