
Configuring an NFSv3 Gateway

The NFSv3 gateway allows a client to mount HDFS as part of the client's local file system. The gateway machine can be any host in the cluster, including the NameNode, a DataNode, or any HDFS client. The client can be any NFSv3-client-compatible machine.

After mounting HDFS on the local file system, a user can:
  • Browse the HDFS file system through the local file system
  • Upload files to, and download files from, the HDFS file system
  • Stream data directly to HDFS through the mount point
File append is supported, but random write is not.
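For example, assuming the HDFS root is exported and mounted at /hdfs_nfs_mount (the mount command is shown in Mounting HDFS on an NFS Client below), a client can stream a file into HDFS with ordinary shell tools; the paths and file names here are placeholders:
$ cp /var/log/myapp.log /hdfs_nfs_mount/tmp/     # sequential write streams into HDFS
$ head /hdfs_nfs_mount/tmp/myapp.log             # read it back through the mount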

The subsections that follow provide information on installing and configuring the gateway.

  Note: Install Cloudera Repository

Before using the instructions on this page to install or upgrade, install the Cloudera yum, zypper/YaST, or apt repository; then install or upgrade CDH 5 and make sure it is functioning correctly. For instructions, see CDH 5 Installation and Upgrading to CDH 5.

Upgrading from a CDH 5 Beta Release

If you are upgrading from a CDH 5 Beta release, you must first remove the hadoop-hdfs-portmap package. Proceed as follows.

  1. Unmount existing HDFS gateway mounts. For example, on each client, assuming the file system is mounted on /hdfs_nfs_mount:
    $ umount /hdfs_nfs_mount
  2. Stop the services:
    $ sudo service hadoop-hdfs-nfs3 stop
    $ sudo service hadoop-hdfs-portmap stop
  3. Remove the hadoop-hdfs-portmap package.
    • On a Red Hat-compatible system:
      $ sudo yum remove hadoop-hdfs-portmap
    • On a SLES system:
      $ sudo zypper remove hadoop-hdfs-portmap
    • On an Ubuntu or Debian system:
      $ sudo apt-get remove hadoop-hdfs-portmap
  4. Install the new version.
    • On a Red Hat-compatible system:
      $ sudo yum install hadoop-hdfs-nfs3
    • On a SLES system:
      $ sudo zypper install hadoop-hdfs-nfs3
    • On an Ubuntu or Debian system:
      $ sudo apt-get install hadoop-hdfs-nfs3
  5. Start the system default portmapper service:
    $ sudo service portmap start
  6. Now proceed with Starting the NFSv3 Gateway, and then remount the HDFS gateway mounts.
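    For example, to remount on each client (using the same mount point as in step 1):
    $ mount -t nfs -o vers=3,proto=tcp,nolock <nfs_server_hostname>:/ /hdfs_nfs_mount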

Installing the Packages for the First Time

Install the following packages on the cluster host you choose as the NFSv3 Gateway machine (referred to as the NFS server from here on). The first two are standard NFS utilities; the third is a CDH package.
  • nfs-utils
  • nfs-utils-lib
  • hadoop-hdfs-nfs3
Proceed as follows, depending on the NFS server's operating system.

To install the NFSv3 Gateway packages on a Red Hat-compatible system:

$ sudo yum install nfs-utils nfs-utils-lib hadoop-hdfs-nfs3

To install the NFSv3 Gateway packages on a SLES system:

$ sudo zypper install nfs-utils nfs-utils-lib hadoop-hdfs-nfs3

To install the NFSv3 Gateway packages on an Ubuntu or Debian system:

$ sudo apt-get install nfs-utils nfs-utils-lib hadoop-hdfs-nfs3

Configuring the NFSv3 Gateway

Proceed as follows to configure the gateway.
  1. Add the following property to hdfs-site.xml on the NameNode:
    <property>
        <name>dfs.namenode.accesstime.precision</name>
        <value>3600000</value>
        <description>The access time for an HDFS file is precise up to this value,
        in milliseconds. The default value is 3600000, which is 1 hour.
        Setting a value of 0 disables access times for HDFS.</description>
    </property>
    
    
  2. Add the following property to hdfs-site.xml on the NFS server:
    <property>
      <name>dfs.nfs3.dump.dir</name>
      <value>/tmp/.hdfs-nfs</value>
    </property>
      Note:

    You should change the location of the file dump directory, which temporarily saves out-of-order writes before writing them to HDFS. This directory is needed because the NFS client often reorders writes, and so sequential writes can arrive at the NFS gateway in random order and need to be saved until they can be ordered correctly. After these out-of-order writes have exceeded 1MB in memory for any given file, they are dumped to the dfs.nfs3.dump.dir (the memory threshold is not currently configurable).

    Make sure the directory you choose has enough space. For example, if an application uploads 10 files of 100MB each, dfs.nfs3.dump.dir should have roughly 1GB of free space to allow for a worst-case reordering of writes to every file.

  3. Configure the user running the gateway (normally the hdfs user as in this example) to be a proxy for other users. To allow the hdfs user to be a proxy for all other users, add the following entries to core-site.xml on the NameNode:
    <property>
       <name>hadoop.proxyuser.hdfs.groups</name>
       <value>*</value>
       <description>
         Set this to '*' to allow the gateway user to proxy any group.
       </description>
    </property>
    <property>
        <name>hadoop.proxyuser.hdfs.hosts</name>
        <value>*</value>
        <description>
         Set this to '*' to allow requests from any hosts to be proxied.
        </description>
    </property>
  4. Restart the NameNode.
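    On a package-based CDH 5 installation, for example:
    $ sudo service hadoop-hdfs-namenode restart
    If you do not want to use the '*' wildcard, both proxy-user properties also accept comma-separated lists, so you can limit proxying to specific groups and hosts.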

Starting the NFSv3 Gateway

Do the following on the NFS server.

  1. First, stop the default NFS services, if they are running:
    $ sudo service nfs stop
  2. Start the HDFS-specific services:
    $ sudo service hadoop-hdfs-nfs3 start
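
The gateway registers itself with the system portmapper, so make sure the portmap service (rpcbind on some distributions) is running before you start hadoop-hdfs-nfs3; see step 5 of the upgrade instructions above. On systems where the service is named portmap:
$ sudo service portmap start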

Verifying that the NFSv3 Gateway is Working

To verify that the NFS services are running properly, you can use the rpcinfo command on any host on the local network:
$ rpcinfo -p <nfs_server_ip_address>
You should see output such as the following:
program    vers    proto   port
100005     1       tcp     4242  mountd
100005     2       udp     4242  mountd
100005     2       tcp     4242  mountd
100000     2       tcp     111   portmapper
100000     2       udp     111   portmapper
100005     3       udp     4242  mountd
100005     1       udp     4242  mountd
100003     3       tcp     2049  nfs
100005     3       tcp     4242  mountd
To verify that the HDFS namespace is exported and can be mounted, use the showmount command.
$ showmount -e <nfs_server_ip_address>
You should see output similar to the following:
Export list for <nfs_server_ip_address>:
/ (everyone)

Mounting HDFS on an NFS Client

To import the HDFS file system on an NFS client, run a mount command such as the following (as root) on the client:
$ mount -t nfs -o vers=3,proto=tcp,nolock <nfs_server_hostname>:/ /hdfs_nfs_mount
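A quick way to confirm the mount is working (this assumes an HDFS directory /tmp that the user can write to; the file names are placeholders):
$ ls /hdfs_nfs_mount                     # lists the top-level HDFS directories
$ cp /etc/hosts /hdfs_nfs_mount/tmp/     # sequential write through the gateway
$ cat /hdfs_nfs_mount/tmp/hosts          # read the file back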
  Note:

When you create a file or directory as user hdfs on the client (that is, in the HDFS file system imported via the NFS mount), the ownership may differ from what it would be if you had created it in HDFS directly. For example, ownership of a file created on the client might be hdfs:hdfs when the same operation done natively in HDFS resulted in hdfs:supergroup. This is because native HDFS follows BSD semantics for the group ownership of a newly created file: the group is set to that of the parent directory in which the file is created. When the operation is done over NFS, typical Linux semantics apply instead: the file is created with the group of the effective GID (group ID) of the creating process, and this characteristic is explicitly passed to the NFS gateway and HDFS.
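
For example (a sketch; the group names shown, such as supergroup, depend on your cluster's configuration):

# Over the NFS mount: the file takes the group of the creating process's effective GID.
$ sudo -u hdfs touch /hdfs_nfs_mount/tmp/via_nfs
$ ls -l /hdfs_nfs_mount/tmp/via_nfs      # typically hdfs:hdfs

# Natively in HDFS: the file inherits the parent directory's group (BSD semantics).
$ sudo -u hdfs hadoop fs -touchz /tmp/via_hdfs
$ sudo -u hdfs hadoop fs -ls /tmp        # group matches /tmp's group, e.g. hdfs:supergroup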
