Spark and Hadoop Integration
This section describes how to access various Hadoop ecosystem components from Spark.
Accessing HBase from Spark
To configure Spark to interact with HBase, you can specify an HBase service as a Spark service dependency in Cloudera Manager:
- In the Cloudera Manager admin console, go to the Spark service you want to configure.
- Go to the Configuration tab.
- Enter hbase in the Search box.
- In the HBase Service property, select your HBase service.
- Click Save Changes to commit the changes.
You can use Spark to process data that is destined for HBase. See Importing Data Into HBase Using Spark.
You can also use Spark in conjunction with Apache Kafka to stream data from Spark to HBase. See Importing Data Into HBase Using Spark and Kafka.
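Once the HBase service dependency is configured, an HBase table can be read into an RDD from spark-shell. The following is a minimal sketch, assuming a running spark-shell session on the configured cluster; the table name my_table is a placeholder:

```scala
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.client.Result
import org.apache.hadoop.hbase.io.ImmutableBytesWritable
import org.apache.hadoop.hbase.mapreduce.TableInputFormat

// Build an HBase configuration and point it at the (placeholder) table.
val conf = HBaseConfiguration.create()
conf.set(TableInputFormat.INPUT_TABLE, "my_table")

// Read the table as an RDD of (row key, row result) pairs.
val hbaseRdd = sc.newAPIHadoopRDD(
  conf,
  classOf[TableInputFormat],
  classOf[ImmutableBytesWritable],
  classOf[Result])

println(hbaseRdd.count())
```

Because TableInputFormat scans the table through the HBase client, the HBase configuration and client JARs must be on the Spark classpath, which the service dependency above arranges.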
Limitation with Region Pruning for HBase Tables
When SparkSQL accesses an HBase table through the HiveContext, region pruning is not performed. As a result, some SparkSQL queries against tables that use the HBase SerDe can run more slowly than the same queries issued through Impala or Hive.
Limitations in Kerberized Environments
In a Kerberized environment, pass the --principal and --keytab options to spark-shell or spark-submit so that the application can authenticate. For example:
spark-shell --jars MySparkHbaseApp.jar --principal ME@DOMAIN.COM --keytab /path/to/local/keytab ...
spark-submit --class com.example.SparkHbaseApp --principal ME@DOMAIN.COM --keytab /path/to/local/keytab SparkHBaseApp.jar [application parameters....]
For further information, see Spark Authentication.
Accessing Hive from Spark
When a Spark job accesses a Hive view, Spark must have privileges to read the data files in the underlying Hive tables. Currently, Spark cannot use fine-grained privileges based on the columns or the WHERE clause in the view definition. If Spark does not have the required privileges on the underlying data files, a SparkSQL query against the view returns an empty result set, rather than an error.
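The empty-result behavior can be surprising in practice. The following sketch, assuming a spark-shell session with Hive configured and a hypothetical view name sales_view, shows where it appears:

```scala
// HiveContext routes the query through the Hive metastore.
val hiveContext = new org.apache.spark.sql.hive.HiveContext(sc)

// If the Spark user lacks read access to the data files of the
// tables underlying sales_view, this returns an empty DataFrame
// rather than raising an authorization error.
val df = hiveContext.sql("SELECT * FROM sales_view")
df.show()
```

When a query against a view unexpectedly returns no rows, check the HDFS permissions on the underlying tables' data files before debugging the query itself.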
Running Spark Jobs from Oozie
For CDH 5.4 and higher, you can invoke Spark jobs from Oozie using the Spark action. For information on the Spark action, see Oozie Spark Action Extension.
In CDH 5.4, to enable dynamic allocation when running the action, specify the following in the Oozie workflow:
<spark-opts>--conf spark.dynamicAllocation.enabled=true --conf spark.shuffle.service.enabled=true --conf spark.dynamicAllocation.minExecutors=1 </spark-opts>
If you have enabled the shuffle service in Cloudera Manager, you do not need to specify spark.shuffle.service.enabled.
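The spark-opts element above sits inside a Spark action in the Oozie workflow definition. A sketch of a complete action follows; the action name, class, JAR path, and transition targets are placeholders, and the element layout follows the Oozie Spark action schema:

```xml
<action name="spark-job">
  <spark xmlns="uri:oozie:spark-action:0.1">
    <job-tracker>${jobTracker}</job-tracker>
    <name-node>${nameNode}</name-node>
    <master>yarn-cluster</master>
    <name>MySparkJob</name>
    <class>com.example.MySparkJob</class>
    <jar>${nameNode}/user/oozie/apps/my-spark-job.jar</jar>
    <spark-opts>--conf spark.dynamicAllocation.enabled=true --conf spark.shuffle.service.enabled=true --conf spark.dynamicAllocation.minExecutors=1</spark-opts>
  </spark>
  <ok to="end"/>
  <error to="fail"/>
</action>
```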