Copying Cluster Data Using DistCp

The Distcp Command

The distributed copy command, distcp, is a general utility for copying large data sets between distributed filesystems within and across clusters. The distcp command submits a regular MapReduce job that performs a file-by-file copy.

To see the distcp command options, run the built-in help:
$ hadoop distcp

Distcp Syntax and Examples

You can use distcp to copy files between compatible clusters in either direction; depending on the protocols involved, you can run the command from either the source or the destination cluster.

For example, when upgrading, say from a CDH 5.7 cluster to a CDH 5.9 cluster, run distcp from the CDH 5.9 (destination) cluster in this manner:

$ hadoop distcp hftp://cdh57-namenode:50070/ hdfs://CDH59-nameservice/
$ hadoop distcp s3a://bucket/ hdfs://CDH59-nameservice/

You can also copy a specific path, such as /hbase to move HBase data. For example:

$ hadoop distcp hftp://cdh57-namenode:50070/hbase hdfs://CDH59-nameservice/hbase
$ hadoop distcp s3a://bucket/file hdfs://CDH59-nameservice/bucket/file
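
In addition to the source and destination URIs, distcp accepts the options described in the upstream DistCp Guide. As a minimal sketch (the host names and paths below are placeholders), -update copies only files that are missing from or differ at the destination, and -m limits the number of map tasks:

# Hypothetical clusters; -update skips files already present and identical at the destination
$ hadoop distcp -update -m 20 hdfs://source-nameservice/data hdfs://destination-nameservice/data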

HFTP Protocol

HFTP is a read-only protocol that exposes HDFS data over HTTP, which makes it useful for copying between clusters running different versions of CDH. When copying with distcp across different versions of CDH, use hftp:// for the (read-only) source file system and hdfs:// for the destination file system, and run distcp from the destination cluster. The default port for HFTP is 50070 and the default port for HDFS is 8020.

Example of a source URI: hftp://namenode-location:50070/basePath

  • hftp:// is the source protocol.
  • namenode-location is the source NameNode hostname (a CDH 4 cluster in this example), as defined by its configured fs.default.name.
  • 50070 is the NameNode's HTTP server port, as defined by the configured dfs.http.address.

Example of a destination URI: hdfs://nameservice-id/basePath or hdfs://namenode-location

  • hdfs:// is the destination protocol.
  • nameservice-id or namenode-location is the destination nameservice ID or NameNode hostname (a CDH 5 cluster in this example), as defined by its configured fs.defaultFS.
  • basePath in both examples refers to the directory you want to copy, if a specific directory is needed.
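
Putting these pieces together, a copy run on the destination cluster might look like the following sketch; the host name, nameservice ID, and base path are placeholders:

# Run from the destination cluster: read over HFTP (port 50070), write over HDFS
$ hadoop distcp hftp://source-namenode:50070/basePath hdfs://nameservice-id/basePath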

Using DistCp with Amazon S3

You can copy HDFS files to and from an Amazon S3 instance. You must provision an S3 bucket using Amazon Web Services and obtain the access key and secret key. You can pass these credentials on the distcp command line, or you can reference a credential store to "hide" sensitive credentials so that they do not appear in the console output, configuration files, or log files.

Data in Amazon S3 is accessed with the s3a:// protocol, which supersedes the older s3:// (block) and s3n:// (native) filesystem connectors.

Example of an Amazon S3 URI: s3a://bucket_name/path/to/file

S3 credentials can be provided in a configuration file (for example, core-site.xml):
<property>
    <name>fs.s3a.access.key</name>
    <value>...</value>
</property>
<property>
    <name>fs.s3a.secret.key</name>
    <value>...</value>
</property>
or passed on the command line as follows:
hadoop distcp -Dfs.s3a.access.key=... -Dfs.s3a.secret.key=... s3a://
For example:
hadoop distcp -Dfs.s3a.access.key=myAccessKey -Dfs.s3a.secret.key=mySecretKey hdfs://user/hdfs/mydata s3a://myBucket/mydata_backup
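
After the copy completes, you can optionally confirm that the data arrived by listing the destination. This is only a sketch; supply the same S3 credentials you used for the copy, whether from core-site.xml, -D options, or a credential provider:

# List the backup directory in S3 (credentials passed on the command line, as above)
hadoop fs -Dfs.s3a.access.key=myAccessKey -Dfs.s3a.secret.key=mySecretKey -ls s3a://myBucket/mydata_backup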

Using a Credential Provider to Secure S3 Credentials

You can run the distcp command without having to enter the access key and secret key on the command line. This prevents these credentials from being exposed in console output, log files, configuration files, and other artifacts. Running the command in this way requires that you provision a credential store to securely store the access key and secret key.

To provision credentials in a credential store:
  1. Provision the credentials by running the following commands:
    hadoop credential create fs.s3a.access.key -value access_key -provider path_to_credential_store_file
    hadoop credential create fs.s3a.secret.key -value secret_key -provider path_to_credential_store_file
    Enter the value for the -provider option using the following syntax:
    provider_type://file/path_to_credential_store_file
    provider_type can be one of the following:
    • jceks – retrieves credentials from a JavaKeystoreProvider. Use when the credential provider is located in HDFS and requires access from multiple hosts.
    • localjceks – retrieves credentials from a LocalJavaKeystoreProvider. Use when the credential provider is located on the local filesystem and access is only required from this single host. Cloudera recommends localjceks.
    For example:
    hadoop credential create fs.s3a.access.key -value foobar -provider localjceks://file/home/keystores/aws.jceks
    hadoop credential create fs.s3a.secret.key -value barfoo -provider localjceks://file/home/keystores/aws.jceks

    For more details on the hadoop credential command, see Credential Management (Apache Software Foundation).

  2. Copy the contents of the /etc/hadoop/conf directory to a working directory.
  3. Add the following to the core-site.xml file in the working directory:
    <property>
      <name>hadoop.security.credential.provider.path</name>
      <value>provider_type://file/path_to_credential_store_file</value>
    </property>
  4. Set the HADOOP_CONF_DIR environment variable to the location of the working directory:
    export HADOOP_CONF_DIR=path_to_working_directory
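
Before running distcp, you can optionally confirm that the keys were provisioned and that the working configuration is the one in effect. The keystore path below is the hypothetical one from the earlier example:

# List the credential aliases stored in the keystore (secret values are not displayed)
hadoop credential list -provider localjceks://file/home/keystores/aws.jceks
# Confirm that the client is using the working directory's configuration
echo $HADOOP_CONF_DIR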

After completing these steps, you can run the distcp command using the following syntax:

hadoop distcp hdfs://source_path s3a://destination_path
You can also reference the credential store on the command line, without adding it to a copy of the core-site.xml file or setting HADOOP_CONF_DIR. Because -D is a generic option, specify it before the source and destination paths:
hadoop distcp -Dhadoop.security.credential.provider.path=provider_type://file/path_to_credential_store_file hdfs://source_path s3a://bucket_name/destination_path
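
For example, using the hypothetical keystore created earlier:

# The -D generic option must precede the source and destination paths
hadoop distcp -Dhadoop.security.credential.provider.path=localjceks://file/home/keystores/aws.jceks hdfs://user/hdfs/mydata s3a://myBucket/mydata_backup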

There are additional options for the distcp command. See DistCp Guide (Apache Software Foundation).

Examples of DistCp Commands Using the S3 Protocol and Hidden Credentials

Copying files to Amazon S3
hadoop distcp hdfs://user/hdfs/mydata s3a://myBucket/mydata_backup
Copying files from Amazon S3
hadoop distcp s3a://myBucket/mydata_backup hdfs://user/hdfs/mydata
Copying files to Amazon S3 using the -filters option to exclude specified source files
You specify a file name with the -filters option. The referenced file contains regular expressions, one per line, that define file name patterns to exclude from the distcp job. The pattern specified in the regular expression should match the fully-qualified path of the intended files, including the scheme (hdfs, webhdfs, s3a, etc.). For example, the following are valid expressions for excluding files:
hdfs://x.y.z:8020/a/b/c
webhdfs://x.y.z:50070/a/b/c
s3a://bucket/a/b/c
Reference the file containing the filter expressions using the -filters option. For example:
hadoop distcp -filters /user/joe/myFilters hdfs://user/hdfs/mydata s3a://myBucket/mydata_backup
Contents of the sample myFilters file:
.*foo.*
.*/bar/.*
hdfs://x.y.z:8020/tmp/.*
hdfs://x.y.z:8020/tmp1/file1
The regular expressions in the sample myFilters file exclude the following files:
  • .*foo.* – excludes paths that contain the string "foo".
  • .*/bar/.* – excludes paths that include a directory named bar.
  • hdfs://x.y.z:8020/tmp/.* – excludes all files in the /tmp directory.
  • hdfs://x.y.z:8020/tmp1/file1 – excludes the file /tmp1/file1.
Copying files to Amazon S3 with the -overwrite option
The -overwrite option overwrites destination files that already exist.
hadoop distcp -overwrite hdfs://user/hdfs/mydata s3a://myBucket/mydata_backup

For more information about the -filters, -overwrite, and other options, see DistCp Guide: Command Line Options (Apache Software Foundation).

Enabling Fallback Configuration

To enable the fallback configuration, which allows copying between secure and insecure clusters, add the following property to the HDFS configuration file core-site.xml, either with an advanced configuration snippet if you use Cloudera Manager, or by editing the file directly.
<property>
  <name>ipc.client.fallback-to-simple-auth-allowed</name>
  <value>true</value>
</property>
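
With the fallback property in place, a copy from an insecure cluster to a secure cluster can be issued from the secure (destination) side. The following is only a sketch; the host name and nameservice are placeholders:

# Run from the secure destination cluster; the insecure source is read over WebHDFS
hadoop distcp webhdfs://insecure-source-namenode:50070/data hdfs://secure-nameservice/data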

Protocol Support for Distcp

The following table lists the protocols supported with the distcp command on different versions of CDH. "Secure" means that the cluster is configured to use Kerberos.

Source     | Destination | Where to Issue distcp Command | Source Protocol | Source Config | Destination Protocol | Destination Config | Fallback Config Required
CDH 4      | CDH 4       | Destination                   | hftp            | Secure        | hdfs or webhdfs      | Secure             |
CDH 4      | CDH 4       | Source or Destination         | hdfs or webhdfs | Secure        | hdfs or webhdfs      | Secure             |
CDH 4      | CDH 4       | Source or Destination         | hdfs or webhdfs | Insecure      | hdfs or webhdfs      | Insecure           |
CDH 4      | CDH 4       | Destination                   | hftp            | Insecure      | hdfs or webhdfs      | Insecure           |

CDH 4      | CDH 5       | Destination                   | webhdfs or hftp | Secure        | webhdfs or hdfs      | Secure             |
CDH 4      | CDH 5.1.3+  | Destination                   | webhdfs         | Insecure      | webhdfs              | Secure             | Yes
CDH 4      | CDH 5       | Destination                   | webhdfs or hftp | Insecure      | webhdfs or hdfs      | Insecure           |
CDH 4      | CDH 5       | Source                        | hdfs or webhdfs | Insecure      | webhdfs              | Insecure           |

CDH 5      | CDH 4       | Source or Destination         | webhdfs         | Secure        | webhdfs              | Secure             |
CDH 5      | CDH 4       | Source                        | hdfs            | Secure        | webhdfs              | Secure             |
CDH 5.1.3+ | CDH 4       | Source                        | hdfs or webhdfs | Secure        | webhdfs              | Insecure           | Yes
CDH 5      | CDH 4       | Source or Destination         | webhdfs         | Insecure      | webhdfs              | Insecure           |
CDH 5      | CDH 4       | Destination                   | webhdfs         | Insecure      | hdfs                 | Insecure           |
CDH 5      | CDH 4       | Source                        | hdfs            | Insecure      | webhdfs              | Insecure           |
CDH 5      | CDH 4       | Destination                   | hftp            | Insecure      | hdfs or webhdfs      | Insecure           |

CDH 5      | CDH 5       | Source or Destination         | hdfs or webhdfs | Secure        | hdfs or webhdfs      | Secure             |
CDH 5      | CDH 5       | Destination                   | hftp            | Secure        | hdfs or webhdfs      | Secure             |
CDH 5.1.3+ | CDH 5       | Source                        | hdfs or webhdfs | Secure        | hdfs or webhdfs      | Insecure           | Yes
CDH 5      | CDH 5.1.3+  | Destination                   | hdfs or webhdfs | Insecure      | hdfs or webhdfs      | Secure             | Yes
CDH 5      | CDH 5       | Source or Destination         | hdfs or webhdfs | Insecure      | hdfs or webhdfs      | Insecure           |
CDH 5      | CDH 5       | Destination                   | hftp            | Insecure      | hdfs or webhdfs      | Insecure           |

Distcp between Secure Clusters in Distinct Kerberos Realms

Specify the Destination Parameters in krb5.conf

Edit the krb5.conf file on the client (where the distcp job will be submitted) to include the destination hostname and realm.
[realms]
HADOOP.QA.domain.COM = {
  kdc = kdc.domain.com:88
  admin_server = admin.test.com:749
  default_domain = domain.com
  supported_enctypes = arcfour-hmac:normal des-cbc-crc:normal des-cbc-md5:normal des:normal des:v4 des:norealm des:onlyrealm des:afs3
}

[domain_realm]
.domain.com = HADOOP.test.domain.COM
domain.com = HADOOP.test.domain.COM
test03.domain.com = HADOOP.QA.domain.COM
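
Before launching the job, you can optionally verify that the client can obtain a ticket in the destination realm defined above; the principal is a placeholder:

# Obtain and inspect a ticket for the destination realm
kinit alice@HADOOP.QA.domain.COM
klist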

Configure HDFS RPC Protection and Acceptable Kerberos Principal Patterns

Set the hadoop.rpc.protection property to authentication in both clusters. You can modify this property either in core-site.xml, or by using Cloudera Manager as follows:
  1. Open the Cloudera Manager Admin Console.
  2. Go to the HDFS service.
  3. Click the Configuration tab.
  4. Select Scope > HDFS-1 (Service-Wide).
  5. Select Category > Security.
  6. Locate the Hadoop RPC Protection property and select authentication.
  7. Click Save Changes to commit the changes.

The following steps are not required if the two realms are already set up to trust each other, or have the same principal pattern. However, this is not usually the case.

Set the dfs.namenode.kerberos.principal.pattern property to * to allow distcp irrespective of the principal patterns of the source and destination clusters. You can modify this property either in hdfs-site.xml on both clusters, or using Cloudera Manager as follows:
  1. Open the Cloudera Manager Admin Console.
  2. Go to the HDFS service.
  3. Click the Configuration tab.
  4. Select Scope > Gateway.
  5. Select Category > Advanced.
  6. Edit the HDFS Client Advanced Configuration Snippet (Safety Valve) for hdfs-site.xml property to add:
    <property>
      <name>dfs.namenode.kerberos.principal.pattern</name>
      <value>*</value>
    </property>
  7. Click Save Changes to commit the changes.
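
As an optional check, you can confirm the values the client picks up from its configuration with hdfs getconf; run this against each cluster's client configuration:

# Should print "authentication"
hdfs getconf -confKey hadoop.rpc.protection
# Should print "*"
hdfs getconf -confKey dfs.namenode.kerberos.principal.pattern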

(If TLS/SSL is enabled) Specify Truststore Properties

The following properties must be configured in the ssl-client.xml file on the client submitting the distcp job to establish trust between the source and destination clusters.
<property>
  <name>ssl.client.truststore.location</name>
  <value>path_to_truststore</value>
</property>

<property>
  <name>ssl.client.truststore.password</name>
  <value>XXXXXX</value>
</property>

<property>
  <name>ssl.client.truststore.type</name>
  <value>jks</value>
</property>

Set HADOOP_CONF_DIR to the Destination Cluster

Set the HADOOP_CONF_DIR path to point to the destination cluster's configuration. If you are not using HFTP, set the HADOOP_CONF_DIR path to the source cluster's configuration instead.
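
For example, if you have copied the destination cluster's client configuration to a local directory, a minimal sketch would be (the path is a placeholder):

# Point the client at the destination cluster's configuration before launching distcp
export HADOOP_CONF_DIR=/path/to/destination-cluster-conf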

Launch Distcp

Run kinit on the client and launch the distcp job.
hadoop distcp hdfs://test01.domain.com:8020/user/alice hdfs://test02.domain.com:8020/user/alice
If launching distcp fails, force Kerberos to use TCP instead of UDP by adding the following parameter to the krb5.conf file on the client.
[libdefaults]
udp_preference_limit = 1