Spark and Hadoop Integration

This section describes how to write to various Hadoop ecosystem components from Spark.

Writing to HBase from Spark

You can use Spark to process data that is destined for HBase. See Importing Data Into HBase Using Spark.

You can also use Spark in conjunction with Apache Kafka to stream data from Spark to HBase. See Importing Data Into HBase Using Spark and Kafka.
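
For example, Spark can write records to an existing HBase table directly through the HBase client API. The following Scala sketch is intended for spark-shell; the table name events, column family cf, and column qualifier payload are illustrative, and the classpath requirements described below must be satisfied:

import org.apache.hadoop.hbase.{HBaseConfiguration, TableName}
import org.apache.hadoop.hbase.client.{ConnectionFactory, Put}
import org.apache.hadoop.hbase.util.Bytes

// Sample records to write; the table "events" with column family "cf"
// must already exist in HBase.
val records = sc.parallelize(Seq(("row1", "alpha"), ("row2", "beta")))

records.foreachPartition { partition =>
  // Open one HBase connection per partition rather than per record.
  val hbaseConf = HBaseConfiguration.create()
  val connection = ConnectionFactory.createConnection(hbaseConf)
  val table = connection.getTable(TableName.valueOf("events"))
  partition.foreach { case (rowKey, value) =>
    val put = new Put(Bytes.toBytes(rowKey))
    put.addColumn(Bytes.toBytes("cf"), Bytes.toBytes("payload"), Bytes.toBytes(value))
    table.put(put)
  }
  table.close()
  connection.close()
}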

Because Spark does not have a dependency on HBase, you must do the following to access HBase from Spark:
  • Manually provide the location of HBase configurations and classes to the driver and executors. You do so by passing the locations to both classpaths when you run spark-submit, spark-shell, or pyspark (see the spark-shell sketch that follows this list):
    • parcel installation
      --driver-class-path /etc/hbase/conf:/opt/cloudera/parcels/CDH/lib/hbase/lib/*
      --conf "spark.executor.extraClassPath=/etc/hbase/conf/:/opt/cloudera/parcels/CDH/lib/hbase/lib/*"
    • package installation
      --driver-class-path /etc/hbase/conf:/usr/lib/hbase/lib/*
      --conf "spark.executor.extraClassPath=/etc/hbase/conf/:/usr/lib/hbase/lib/*"
  • Add an HBase gateway role to all YARN worker hosts and to the edge host where you run spark-submit, spark-shell, or pyspark, and deploy the HBase client configuration.
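
For example, after launching spark-shell with the --driver-class-path and spark.executor.extraClassPath settings shown above, you can read an HBase table as an RDD. This is a minimal Scala sketch; the table name events is illustrative:

import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat

// Picks up hbase-site.xml from /etc/hbase/conf on the driver classpath.
val hbaseConf = HBaseConfiguration.create()
hbaseConf.set(TableInputFormat.INPUT_TABLE, "events")

// Each HBase row becomes an (ImmutableBytesWritable, Result) pair.
val hbaseRDD = sc.newAPIHadoopRDD(
  hbaseConf,
  classOf[TableInputFormat],
  classOf[ImmutableBytesWritable],
  classOf[Result])

println(s"Rows read from HBase: ${hbaseRDD.count()}")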

Limitation with Region Pruning for HBase Tables

When SparkSQL accesses an HBase table through the HiveContext, region pruning is not performed. As a result, some SparkSQL queries against tables that use the HBase SerDe can be slower than when the same tables are accessed through Impala or Hive.
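
For example, a query of the following form runs through the HiveContext and scans the full table even though the predicate restricts the row key. This is an illustrative sketch; hbase_events stands for a hypothetical Hive table defined with the HBase storage handler:

import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)
// The row-key predicate is not pushed down, so no HBase regions are pruned.
val df = hiveContext.sql("SELECT * FROM hbase_events WHERE key = 'row1'")
df.show()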

Limitations in Kerberized Environments

The following limitations apply to Spark applications that access HBase in a Kerberized cluster:
  • The application must be restarted every seven days.
  • If the cluster also has HA enabled, you must specify the keytab and principal parameters in your command line (as opposed to using kinit). For example:
    spark-shell --jars MySparkHbaseApp.jar --principal ME@DOMAIN.COM --keytab /path/to/local/keytab ...
    spark-submit --class com.example.SparkHbaseApp --principal ME@DOMAIN.COM --keytab /path/to/local/keytab
    SparkHBaseApp.jar [application parameters ...]
    For further information, see Spark Authentication.

Accessing Hive from Spark

The host from which the Spark application is submitted or on which spark-shell or pyspark runs must have a Hive gateway role defined in Cloudera Manager and client configurations deployed.

When a Spark job accesses a Hive view, Spark must have privileges to read the data files in the underlying Hive tables. Currently, Spark cannot use fine-grained privileges based on the columns or the WHERE clause in the view definition. If Spark does not have the required privileges on the underlying data files, a SparkSQL query against the view returns an empty result set, rather than an error.
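
For example, the following Scala sketch queries a hypothetical view named sales_view. If Spark cannot read the data files of the tables underlying the view, the query completes but returns no rows:

import org.apache.spark.sql.hive.HiveContext

val hiveContext = new HiveContext(sc)
// Returns an empty result set, rather than an error, if Spark lacks read
// access to the files backing the tables referenced by the view.
val rows = hiveContext.sql("SELECT * FROM sales_view LIMIT 10")
rows.show()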

Running Spark Jobs from Oozie

For CDH 5.4 and higher, you can invoke Spark jobs from Oozie using the Spark action. For information on the Spark action, see Oozie Spark Action Extension.

In CDH 5.4, to enable dynamic allocation when running the action, specify the following in the Oozie workflow:

<spark-opts>--conf spark.dynamicAllocation.enabled=true
--conf spark.shuffle.service.enabled=true
--conf spark.dynamicAllocation.minExecutors=1
</spark-opts>

If you have enabled the shuffle service in Cloudera Manager, you do not need to specify spark.shuffle.service.enabled.