Installing Cloudera Data Science Workbench 1.5.x Using Cloudera Manager

Prerequisites

Before you begin installing Cloudera Data Science Workbench, make sure you have completed the steps to secure your hosts, set up DNS subdomains, and configure block devices.

Configure Apache Spark 2

  1. (CDH 5 Only) Install and configure the CDS 2.x Powered by Apache Spark parcel and CSD. For instructions, see Installing CDS 2.x Powered by Apache Spark.

  2. (Required for CDH 5 and CDH 6) To be able to use Spark 2, each user must have their own home directory (/user/<username>) in HDFS. If you sign in to Hue first, these directories will automatically be created for you. Alternatively, you can have cluster administrators create these directories.
    hdfs dfs -mkdir /user/<username>
    hdfs dfs -chown <username>:<username> /user/<username>

    If you are using CDS 2.3 release 2 (or higher), review the associated known issues here: CDS Powered By Apache Spark.
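The two commands above can be scripted for several users at once. A minimal sketch, assuming hypothetical usernames and that it is run with HDFS superuser privileges (for example, via sudo -u hdfs); on a machine without the HDFS client it only prints the commands it would run:

```shell
# Create HDFS home directories for a list of users (hypothetical names).
USERS="alice bob"

for u in $USERS; do
  if command -v hdfs >/dev/null 2>&1; then
    hdfs dfs -mkdir -p "/user/$u"
    hdfs dfs -chown "$u:$u" "/user/$u"
  else
    # No HDFS client on this machine; show what would run.
    echo "hdfs dfs -mkdir -p /user/$u"
    echo "hdfs dfs -chown $u:$u /user/$u"
  fi
done
```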

  3. Test Spark 2 integration on the gateway hosts.
    1. SSH to a gateway host.
    2. If your cluster is kerberized, run kinit to authenticate to the CDH cluster’s Kerberos Key Distribution Center. The Kerberos ticket you create is not visible to Cloudera Data Science Workbench users.
    3. Submit a test job to Spark by executing the following command:
      CDH 5
      spark2-submit --class org.apache.spark.examples.SparkPi \
      --master yarn --deploy-mode client \
      /opt/cloudera/parcels/SPARK2/lib/spark2/examples/jars/spark-example*.jar 100
      CDH 6
      spark-submit --class org.apache.spark.examples.SparkPi --master yarn \
      --deploy-mode client SPARK_HOME/lib/spark-examples.jar 100
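A successful SparkPi run prints a line beginning "Pi is roughly" in the driver output. The check below is a sketch for the CDH 5 command, assuming the default Spark 2 parcel location shown above; it is guarded so it only submits the job where spark2-submit is actually installed:

```shell
# Submit the SparkPi test job and confirm it produced a result line.
JAR=/opt/cloudera/parcels/SPARK2/lib/spark2/examples/jars/spark-example*.jar

if command -v spark2-submit >/dev/null 2>&1; then
  spark2-submit --class org.apache.spark.examples.SparkPi \
    --master yarn --deploy-mode client $JAR 100 2>&1 | tee sparkpi.log
  grep -q "Pi is roughly" sparkpi.log && echo "Spark 2 integration OK"
else
  echo "spark2-submit not on PATH; run this on a gateway host"
fi
```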

Configure JAVA_HOME

On CSD-based deployments, Cloudera Manager automatically detects the path and version of Java installed on Cloudera Data Science Workbench gateway hosts. You do not need to explicitly set the value for JAVA_HOME unless you want to use a custom location, use a JRE, or, in the case of Spark 2, force Cloudera Manager to use JDK 1.8 as explained below.

Setting a value for JAVA_HOME - The value for JAVA_HOME depends on whether you are using a JDK or a JRE. For example, if you are using JDK 1.8.0_162, set JAVA_HOME to /usr/java/jdk1.8.0_162. If you are only using the JRE, set it to /usr/java/jdk1.8.0_162/jre.

Issues with Spark 2.2 and higher - Spark 2.2 (and higher) requires JDK 1.8. However, if a host has both JDK 1.7 and JDK 1.8 installed, Cloudera Manager might choose to use JDK 1.7 over JDK 1.8. If you are using Spark 2.2 (or higher), this will create a problem during the first run of the service because Spark will not work with JDK 1.7. To work around this, explicitly configure Cloudera Manager to use JDK 1.8 on the gateway hosts that are running Cloudera Data Science Workbench.

For instructions on how to set JAVA_HOME, see Configuring a Custom Java Home Location in Cloudera Manager.

To upgrade the whole CDH cluster to JDK 1.8, see Upgrading to Oracle JDK 1.8.
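Before forcing JDK 1.8, it can help to see which JDKs a gateway host has and which one is currently active. A small sketch, assuming Oracle JDKs are installed under /usr/java as in the example paths above:

```shell
# List installed JDKs and show the active Java version on a gateway host.
JDK_DIR=/usr/java   # Oracle JDK install root used in the examples above

ls -d "$JDK_DIR"/jdk* 2>/dev/null || echo "no JDKs found under $JDK_DIR"
if command -v java >/dev/null 2>&1; then
  java -version 2>&1 | head -n 1   # Spark 2.2+ requires a 1.8 version here
else
  echo "java not on PATH"
fi
```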

Download and Install the Cloudera Data Science Workbench CSD

  1. Download the Cloudera Data Science Workbench CSD. Make sure you download the CSD that corresponds to the version of CDH you are using.

    Version                                Link to CSD
    Cloudera Data Science Workbench 1.5.0  CDH 6 - CLOUDERA_DATA_SCIENCE_WORKBENCH_CDH6_1.5.0.jar
                                           CDH 5 - CLOUDERA_DATA_SCIENCE_WORKBENCH_CDH5_1.5.0.jar

  2. Log on to the Cloudera Manager Server host, and place the CSD file under /opt/cloudera/csd, which is the default location for CSD files. To configure a custom location for CSD files, refer to the Cloudera Manager documentation at Configuring the Location of Custom Service Descriptor Files.
  3. Set the file ownership to cloudera-scm:cloudera-scm with permission 644.
  4. Restart the Cloudera Manager Server:
    service cloudera-scm-server restart
  5. Log into the Cloudera Manager Admin Console and restart the Cloudera Management Service.
    1. Select Clusters > Cloudera Management Service.
    2. Select Actions > Restart.
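Steps 2-4 above can be sketched as a short script to run on the Cloudera Manager Server host. The CDH 6 CSD filename from the table is used as an example; substitute the CDH 5 jar if that is your cluster version:

```shell
# Place the CSD, set ownership and permissions, and restart Cloudera Manager.
CSD=CLOUDERA_DATA_SCIENCE_WORKBENCH_CDH6_1.5.0.jar
CSD_DIR=/opt/cloudera/csd   # default CSD location

if [ -f "$CSD" ] && [ -d "$CSD_DIR" ]; then
  cp "$CSD" "$CSD_DIR/"
  chown cloudera-scm:cloudera-scm "$CSD_DIR/$CSD"
  chmod 644 "$CSD_DIR/$CSD"
  service cloudera-scm-server restart
else
  echo "Run as root on the Cloudera Manager Server host with $CSD in the current directory"
fi
```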

Install the Cloudera Data Science Workbench Parcel

  1. Log into the Cloudera Manager Admin Console.
  2. Click Hosts > Parcels in the main navigation bar.
  3. Add the Cloudera Data Science Workbench parcel repository URL to Cloudera Manager. In this case, the location of the repository has already been included in the Cloudera Data Science Workbench CSD. Therefore, the parcel should already be present and ready for downloading. If the parcel is already available, skip the rest of this step. If for some reason the parcel is not available, perform the following steps to add the remote parcel repository URL to Cloudera Manager.
    1. On the Parcels page, click Configuration.
    2. In the Remote Parcel Repository URLs list, click the addition symbol to open an additional row.
    3. Enter the path to the repository. Cloudera Data Science Workbench publishes placeholder parcels for other operating systems as well. However, note that these do not work and have only been included to support mixed-OS clusters.
      Version Remote Parcel Repository URL
      Cloudera Data Science Workbench 1.5.0 https://archive.cloudera.com/cdsw1/1.5.0/parcels/
    4. Click Save Changes.
    5. Go to the Hosts > Parcels page. The external parcel should now appear in the set of parcels available for download.
  4. Click Download. Once the download is complete, click Distribute to distribute the parcel to all the CDH hosts in your cluster. Then click Activate. For more detailed information on each of these tasks, see Managing Parcels.
For airgapped installations, create your own local repository, put the Cloudera Data Science Workbench parcel there, and then configure the Cloudera Manager Server to target this newly-created repository.
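Before adding the URL under Remote Parcel Repository URLs, you can confirm the repository is reachable from the Cloudera Manager Server host. A sketch using the 1.5.0 repository URL from the table above (Cloudera parcel repositories serve a manifest.json listing their parcels):

```shell
# Check that the parcel repository responds before configuring it.
REPO=https://archive.cloudera.com/cdsw1/1.5.0/parcels/

if command -v curl >/dev/null 2>&1; then
  curl -fsI "${REPO}manifest.json" >/dev/null \
    && echo "repository reachable" \
    || echo "repository not reachable; check proxy or firewall settings"
else
  echo "curl not installed"
fi
```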

Add the Cloudera Data Science Workbench Service

To add the Cloudera Data Science Workbench service to your cluster:

  1. Log into the Cloudera Manager Admin Console.
  2. On the Home > Status tab, click to the right of the cluster name and select Add a Service to launch the wizard. A list of services will be displayed.
  3. Select the Cloudera Data Science Workbench service and click Continue.
  4. Select the services on which the new CDSW service should depend. This includes HDFS, Spark 2, and YARN. Click Continue.

    (Required for CDH 6) If you want to run SparkSQL workloads, you must also add the Hive service as a dependency.

  5. Assign the Master and Worker roles to the gateway hosts. You must assign the Cloudera Data Science Workbench Master role to one gateway host, and optionally, assign the Worker role to one or more gateway hosts.

    Other Cloudera Data Science Workbench Role Groups - In addition to Master and Worker, there are two more role groups that fall under the Cloudera Data Science Workbench service: the Docker Daemon role and the Application role.
    • The Docker Daemon role must be assigned to every Cloudera Data Science Workbench gateway host. On First Run, Cloudera Manager will automatically assign this role to each Cloudera Data Science Workbench gateway host. However, if any more hosts are added or reassigned to Cloudera Data Science Workbench, you must explicitly assign the Docker Daemon role to them.

    • On First Run, Cloudera Manager will assign the Application role to the host running the Cloudera Data Science Workbench Master role. The Application role is always assigned to the same host as the Master. Consequently, this role must never be assigned to a Worker host.
  6. Configure the following parameters and click Continue.

    Cloudera Data Science Workbench Domain

    DNS domain configured to point to the master node.

    If the previously configured DNS subdomain entries are cdsw.<your_domain>.com and *.cdsw.<your_domain>.com, then this parameter should be set to cdsw.<your_domain>.com.

    Users' browsers will then be able to contact the Cloudera Data Science Workbench web application at http://cdsw.<your_domain>.com.

    This domain is for DNS only, and is unrelated to Kerberos or LDAP domains.

    Master Node IPv4 Address

    IPv4 address for the master node that is reachable from the worker nodes. By default, this field is left blank and Cloudera Manager uses the IPv4 address of the Master node.

    Within an AWS VPC, set this parameter to the internal IP address of the master node; for instance, if your hostname is ip-10-251-50-12.ec2.internal, set this property to the corresponding IP address, 10.251.50.12.

    Install Required Packages

    When this parameter is enabled, the Prepare Node command will install all the required package dependencies on First Run. If you choose to disable this property, you must manually install the following packages on all gateway hosts running Cloudera Data Science Workbench roles.
    nfs-utils
    libseccomp
    lvm2
    bridge-utils
    libtool-ltdl
    iptables
    rsync
    policycoreutils-python
    selinux-policy-base
    selinux-policy-targeted
    ntp
    ebtables
    bind-utils
    nmap-ncat
    openssl
    e2fsprogs
    redhat-lsb-core
    socat

    Docker Block Device

    Block device(s) for Docker images. Use the full path to specify the device(s), for instance, /dev/xvde.

    The Cloudera Data Science Workbench installer will format and mount the Docker block device on each gateway host that is assigned the Docker Daemon role. Do not mount these block devices prior to installation.
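If you disable Install Required Packages, the dependency list above can be installed in one command on each gateway host. A sketch for RHEL/CentOS hosts (the package names listed are RHEL/CentOS names); it is guarded so it does nothing on systems without yum:

```shell
# Manually install the Cloudera Data Science Workbench package dependencies.
PKGS="nfs-utils libseccomp lvm2 bridge-utils libtool-ltdl iptables rsync \
policycoreutils-python selinux-policy-base selinux-policy-targeted ntp \
ebtables bind-utils nmap-ncat openssl e2fsprogs redhat-lsb-core socat"

if command -v yum >/dev/null 2>&1; then
  yum install -y $PKGS
else
  echo "yum not found; this sketch targets RHEL/CentOS gateway hosts"
fi
```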

  7. The wizard will now begin a First Run of the Cloudera Data Science Workbench service. This includes deploying client configuration for HDFS, YARN, and Spark 2, installing the package dependencies on all hosts, and formatting the Docker block device. The wizard will also assign the Application role to the host running Master, and the Docker Daemon role to all the gateway hosts running Cloudera Data Science Workbench.
  8. Once the First Run command has completed successfully, click Finish to go back to the Cloudera Manager home page.

Create the Administrator Account

After your installation is complete, set up the initial administrator account. Go to the Cloudera Data Science Workbench web application at http://cdsw.<your_domain>.com.

You must access Cloudera Data Science Workbench from the Cloudera Data Science Workbench Domain configured when setting up the service, and not the hostname of the master node. Visiting the hostname of the master node will result in a 404 error.
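If the web application is not reachable, first confirm that the configured domain and its wildcard both resolve to the Master host. A sketch using a hypothetical domain (replace cdsw.example.com with your Cloudera Data Science Workbench Domain):

```shell
# Verify the CDSW domain and its wildcard entry resolve to the Master host.
DOMAIN=cdsw.example.com

if command -v dig >/dev/null 2>&1; then
  dig +short "$DOMAIN"
  dig +short "anything.$DOMAIN"   # the wildcard record should resolve too
else
  echo "dig not installed (bind-utils package); try: getent hosts $DOMAIN"
fi
```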

The first account that you create becomes the site administrator. You may now use this account to create a new project and start using the workbench to run data science workloads. For a brief example, see Getting Started with Cloudera Data Science Workbench.

Next Steps

As a site administrator, you can invite new users, monitor resource utilization, secure the deployment, and upload a license key for the product. For more details on these tasks, see the Administration and Security guides.

You can also start using the product by configuring your personal account and creating a new project. For a quickstart that walks you through creating and running a simple template project, see Getting Started with Cloudera Data Science Workbench. For more details on collaborating with teams, working on projects, and sharing results, see Managing Cloudera Data Science Workbench Users.