HDFS Data At Rest Encryption

With CDH 5.2, HDFS implements transparent, end-to-end encryption of data read from and written to HDFS, without requiring changes to user application code. Because the encryption is end-to-end, data can be encrypted and decrypted only by the client; HDFS never stores or has access to unencrypted data or the encryption keys. This satisfies two main requirements for encryption: at-rest encryption (data on persistent media, such as a disk) and in-transit encryption (data traveling over a network).


Use Cases

Data encryption is required by a number of different government, financial, and regulatory entities. For example, the healthcare industry has HIPAA regulations, the card payment industry has PCI DSS regulations, and the United States government has FISMA regulations. Having transparent encryption built into HDFS makes it easier for organizations to comply with these regulations. Encryption can also be performed at the application level, but integrating it into HDFS lets existing applications operate on encrypted data without changes. This integrated architecture also implies stronger encrypted file semantics and better coordination with other HDFS functions.

Architecture

Encryption Zones

An encryption zone is an HDFS directory whose entire contents, that is, every file and subdirectory in it, are encrypted. Files in this directory are transparently encrypted on write and transparently decrypted on read. Each encryption zone is associated with a key, which is specified when the zone is created. Each file within an encryption zone also has its own encryption/decryption key, called the Data Encryption Key (DEK). DEKs are never stored persistently in cleartext; instead, each DEK is encrypted with the encryption zone's key, producing an Encrypted Data Encryption Key (EDEK). The EDEK is stored persistently as part of the file's metadata on the NameNode.

A key can have multiple key versions, where each key version has its own distinct key material (that is, the portion of the key used during encryption and decryption). Key rotation is achieved by modifying the encryption zone's key, that is, bumping up its version. Per-file key rotation is then achieved by re-encrypting the file's DEK with the new encryption zone key to create new EDEKs. An encryption key can be fetched either by its key name, returning the latest version of the key, or by a specific key version.
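As a sketch, rolling an encryption zone key with the `hadoop key` tool might look like the following; the key name and KMS URI are placeholders for your deployment:

```shell
$ hadoop key roll <key_name> -provider kms://http@kms-host.example.com:16000/kms
```

After the roll, new files in the zone get EDEKs wrapped with the new key version, while existing EDEKs remain readable because old key versions are retained.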

Key Management Server

The Hadoop Key Management Server (KMS) is a new service that must be added to your cluster to store, manage, and provide access to encryption keys. The KMS is a proxy that interfaces with a backing key store on behalf of HDFS daemons and clients. Both the backing key store and the KMS implement the Hadoop KeyProvider client API.

Encryption and decryption of EDEKs happens entirely on the KMS. More importantly, the client requesting creation or decryption of an EDEK never handles the EDEK's encryption key (that is, the encryption zone key). When a new file is created in an encryption zone, the NameNode asks the KMS to generate a new EDEK encrypted with the encryption zone's key. When reading a file from an encryption zone, the NameNode provides the client with the file's EDEK and the encryption zone key version that was used to encrypt the EDEK. The client then asks the KMS to decrypt the EDEK, which involves checking that the client has permission to access the encryption zone key version. Assuming that is successful, the client uses the DEK to decrypt the file's contents. All the steps for read and write take place automatically through interactions between the DFSClient, the NameNode, and the KMS.
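The envelope-encryption flow above can be sketched in a few lines of Python. This is purely illustrative: the function names are invented, ACL checks and the NameNode's role are reduced to comments, and a SHA-256-based toy stream cipher stands in for the AES-CTR codec that HDFS actually uses (Python's standard library has no AES).

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Toy keystream built from SHA-256 counter blocks (a stand-in for AES-CTR)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def stream_crypt(key: bytes, nonce: bytes, data: bytes) -> bytes:
    """XOR with the keystream; the same call encrypts and decrypts."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, nonce, len(data))))

# --- KMS side: holds the encryption zone key; clients never see it. ---
zone_key = os.urandom(32)

def kms_generate_edek() -> tuple[bytes, bytes]:
    """Create a fresh DEK and hand back only its encrypted form (EDEK)."""
    dek = os.urandom(32)
    nonce = os.urandom(16)
    return stream_crypt(zone_key, nonce, dek), nonce

def kms_decrypt_edek(edek: bytes, nonce: bytes) -> bytes:
    """Unwrap an EDEK into a DEK (a real KMS first checks the caller's permissions)."""
    return stream_crypt(zone_key, nonce, edek)

# Write path: the NameNode obtains an EDEK from the KMS and stores it in file
# metadata; the client unwraps it via the KMS and encrypts contents with the DEK.
edek, edek_nonce = kms_generate_edek()
dek = kms_decrypt_edek(edek, edek_nonce)
file_nonce = os.urandom(16)
ciphertext = stream_crypt(dek, file_nonce, b"secret file contents")

# Read path: the client unwraps the same EDEK again and decrypts.
plaintext = stream_crypt(kms_decrypt_edek(edek, edek_nonce), file_nonce, ciphertext)
assert plaintext == b"secret file contents"
```

The key property the sketch shows is that the zone key never leaves the "KMS side": HDFS only ever stores the EDEK and ciphertext.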

Access to encrypted file data and metadata is controlled by normal HDFS filesystem permissions. Typically, the backing key store is configured to allow only end-user access to the encryption zone keys used to encrypt DEKs. EDEKs can therefore be stored and handled safely by HDFS, because the hdfs user does not have access to the keys that encrypt them. As a result, if HDFS is compromised (for example, through unauthorized access to a superuser account), a malicious user gains access only to ciphertext and EDEKs. This does not pose a security threat, because access to encryption zone keys is controlled by a separate set of permissions on the KMS and key store.

For more details on configuring the KMS, see Configuring the Key Management Server (KMS).

Navigator Key Trustee

By default, the current implementation of HDFS encryption uses a local Java keystore for key management. This may not be sufficient for large enterprises where a more robust and secure key management solution is required. Navigator Key Trustee is a keystore server for managing encryption keys, certificates, and passwords that is completely integrated into Cloudera Navigator.

To leverage the manageable, highly available key management capabilities of the Navigator Key Trustee server, the KMS service uses a Key Trustee-specific plugin called the TrusteeKeyProvider.

For more information on integrating Navigator Key Trustee with HDFS encryption, contact your Cloudera account team.

crypto Command Line Interface

createZone

Use this command to create a new encryption zone.
-createZone -keyName <keyName> -path <path>
Where:
  • keyName: Name of the key to use for the encryption zone.
  • path: The path of the encryption zone to be created. It must be an empty directory.

listZones

List all encryption zones. This command requires superuser permissions.
-listZones

Enabling HDFS Encryption on a Cluster

Minimum Required Role: Full Administrator

The following sections guide you through enabling HDFS encryption on your cluster, using the default Java keystore-based KMS:

  1. Adding the KMS Service
  2. Enabling KMS for the HDFS Service
  3. Configuring Encryption Properties for the HDFS and NameNode
  4. Creating Encryption Zones
  5. Adding Files to an Encryption Zone

Adding the KMS Service

  1. On the Home page, click to the right of the cluster name and select Add a Service. A list of service types displays. You can add one type of service at a time.
  2. Select the KMS service and click Continue.
  3. Customize the assignment of role instances to hosts. You can click the View By Host button for an overview of the role assignment by hostname ranges.

    Click the field below the Key Management Server (KMS) role to display a dialog containing a list of hosts. Select the host for the new KMS role and click OK.

  4. Review and modify the JavaKeyStoreProvider Directory configuration setting if required and click Continue. The KMS service is started.
  5. Click Continue, then click Finish. You are returned to the Home page.
  6. Verify that the new KMS service started properly by checking its health status. A Health Status of Good indicates the service is running correctly.

Enabling KMS for the HDFS Service

  1. Go to the HDFS service.
  2. Click the Configuration tab.
  3. Go to the Service-Wide category.
  4. Click the Value field for the KMS Service property and select KMS.
  5. Click Save Changes.
  6. Restart your cluster.
    1. On the Home page, click to the right of the cluster name and select Restart.
    2. Click Restart in the confirmation screen that appears. The Command Details window shows the progress of stopping and then starting services.

      When All services successfully started appears, the task is complete and you can close the Command Details window.

  7. Deploy client configuration.
    1. On the Home page, click to the right of the cluster name and select Deploy Client Configuration.
    2. Click Deploy Client Configuration.

Configuring Encryption Properties for the HDFS and NameNode

Configure the following properties to select the encryption algorithm and KeyProvider that will be used during encryption. If you do not modify these properties, the default values will use AES-CTR to encrypt your data.

Selecting an Encryption Algorithm: Set the following properties in the core-site.xml safety valve and redeploy the client configuration.

  • hadoop.security.crypto.codec.classes.EXAMPLECIPHERSUITE: The prefix for a given crypto codec, containing a comma-separated list of implementation classes for that codec (for example, EXAMPLECIPHERSUITE). The first available implementation is used; the others are fallbacks. By default, the cipher suite used is AES/CTR/NoPadding, and its default classes are org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec and org.apache.hadoop.crypto.JceAesCtrCryptoCodec, as described in the following properties.
  • hadoop.security.crypto.cipher.suite: Cipher suite for the crypto codec. Default: AES/CTR/NoPadding
  • hadoop.security.crypto.codec.classes.aes.ctr.nopadding: Comma-separated list of crypto codec implementations for the default cipher suite, AES/CTR/NoPadding. The first available implementation is used; the others are fallbacks. Default: org.apache.hadoop.crypto.OpensslAesCtrCryptoCodec, org.apache.hadoop.crypto.JceAesCtrCryptoCodec
  • hadoop.security.crypto.jce.provider: The JCE provider name used in CryptoCodec. Default: None
  • hadoop.security.crypto.buffer.size: The buffer size used by CryptoInputStream and CryptoOutputStream. Default: 8192

KeyProvider Configuration: Set this property in the hdfs-site.xml safety valve and restart the NameNode.

  • dfs.encryption.key.provider.uri: The KeyProvider to use when interacting with the encryption keys used to read and write to an encryption zone. If you have a managed cluster, Cloudera Manager points this at the KMS service you enabled above.

NameNode Configuration: Set this property in the hdfs-site.xml safety valve and restart the NameNode.

  • dfs.namenode.list.encryption.zones.num.responses: The maximum number of zones returned in a batch when listing encryption zones. Fetching the list incrementally in batches improves NameNode performance. Default: 100
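For reference, the hdfs-site.xml safety-valve entries might look like the following sketch; the KMS host and port are placeholders for your deployment:

```xml
<property>
  <name>dfs.encryption.key.provider.uri</name>
  <value>kms://http@kms-host.example.com:16000/kms</value>
</property>
<property>
  <name>dfs.namenode.list.encryption.zones.num.responses</name>
  <value>100</value>
</property>
```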

Creating Encryption Zones

Once a KMS has been set up and the NameNode and HDFS clients have been correctly configured, an admin user can use the hadoop key and hdfs crypto command-line tools to create encryption keys and set up new encryption zones.

  • Start by creating an encryption key for your zone.
    $ sudo hadoop key create <key_name>
  • As a superuser, create a new empty directory and make it an encryption zone using the key generated above.
    $ hadoop fs -mkdir /zone
    $ hdfs crypto -createZone -keyName <key_name> -path /zone
    You can verify creation of the new encryption zone by running the -listZones command. You should see the encryption zone along with its key listed as follows:
    $ sudo -u hdfs hdfs crypto -listZones 
    /zone    <key_name>

For more information and recommendations on creating encryption zones for each CDH component, see Configuring CDH Services for HDFS Encryption.

Adding Files to an Encryption Zone

Existing data can be encrypted by copying it into the new encryption zones using tools like DistCp. See the DistCp Considerations section below for information on using DistCp with encrypted data files.

You can add files to an encryption zone by copying them over to the encryption zone. For example:
sudo -u hdfs hadoop distcp /user/dir /user/enczone

DistCp Considerations

A common use case for DistCp is to replicate data between clusters for backup and disaster recovery purposes. This is typically performed by the cluster administrator, who is an HDFS superuser. To retain this workflow when using HDFS encryption, a new virtual path prefix has been introduced, /.reserved/raw/, that gives superusers direct access to the underlying block data in the filesystem. This allows superusers to distcp data without requiring access to encryption keys, and avoids the overhead of decrypting and re-encrypting data. It also means the source and destination data will be byte-for-byte identical, which would not have been true if the data were being re-encrypted with a new EDEK.
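As a sketch, a raw replication between two clusters might look like the following; the NameNode host names and zone path are placeholders, and the -px flag preserves extended attributes so that the raw encryption metadata travels with the files:

```shell
$ hadoop distcp -px hdfs://src-nn.example.com:8020/.reserved/raw/zone \
    hdfs://dst-nn.example.com:8020/.reserved/raw/zone
```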

Copying between encrypted and unencrypted locations

By default, distcp compares checksums provided by the filesystem to verify that data was successfully copied to the destination. When copying between an unencrypted and encrypted location, the filesystem checksums will not match since the underlying block data is different.

In this case, you can specify the -skipcrccheck and -update flags to avoid verifying checksums.
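For example, copying from an unencrypted directory into an encryption zone with checksum verification disabled might look like this (paths are illustrative):

```shell
$ hadoop distcp -update -skipcrccheck /user/dir /user/enczone
```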

Attack Vectors

Hardware Access Exploits

These exploits assume the attacker has gained physical access to hard drives from cluster machines, that is, DataNodes and NameNodes.

  • Access to swap files of processes containing DEKs. This exploit does not expose cleartext, because it also requires access to encrypted block files. Mitigation: disable swap, use encrypted swap, or use mlock to prevent keys from being swapped out.
  • Access to encrypted block files. This exploit does not expose cleartext, because it also requires access to the DEKs. Mitigation: restrict physical access to the cluster machines.

Root Access Exploits

These exploits assume the attacker has gained root shell access to cluster machines running DataNodes and NameNodes. Many of these exploits cannot be addressed in HDFS, because a malicious root user has access to the in-memory state of processes holding encryption keys and cleartext. For these exploits, the only mitigation technique is carefully restricting and monitoring root shell access.

  • Access to encrypted block files. By itself, this does not expose cleartext, because it also requires access to encryption keys. No mitigation required.
  • Dump memory of client processes to obtain DEKs, delegation tokens, or cleartext. No mitigation.
  • Recording network traffic to sniff encryption keys and encrypted data in transit. By itself, insufficient to read cleartext without the EDEK encryption key. No mitigation required.
  • Dump memory of DataNode process to obtain encrypted block data. By itself, insufficient to read cleartext without the DEK. No mitigation required.
  • Dump memory of NameNode process to obtain encrypted data encryption keys (EDEKs). By itself, insufficient to read cleartext without the EDEK's encryption key and encrypted block files. No mitigation required.

HDFS Admin Exploits

These exploits assume that the attacker has compromised HDFS but does not have root or hdfs user shell access.

  • Access to encrypted block files. By itself, insufficient to read cleartext without the EDEK and the EDEK encryption key. No mitigation required.
  • Access to encryption zone and encrypted file metadata (including EDEKs) using -fetchImage. By itself, insufficient to read cleartext without EDEK encryption keys. No mitigation required.

Rogue User Exploits

  • A rogue user can collect keys to which they have access, and use them later to decrypt encrypted data. Mitigation: periodic key rolling policies.