Spark 2 Properties in CDH 5.12.0

Gateway

Advanced

Display Name Description Related Name Default Value API Name Required
Deploy Directory The directory where the client configs will be deployed /etc/spark2 client_config_root_dir true
Gateway Logging Advanced Configuration Snippet (Safety Valve) For advanced use only, a string to be inserted into log4j.properties for this role only. log4j_safety_valve false
Spark 2 Client Advanced Configuration Snippet (Safety Valve) for meta/version For advanced use only, a string to be inserted into the client configuration for meta/version. meta/version_client_config_safety_valve false
Gateway Advanced Configuration Snippet (Safety Valve) for navigator.lineage.client.properties For advanced use only. A string to be inserted into navigator.lineage.client.properties for this role only. navigator_lineage_client_config_safety_valve false
Spark 2 Client Advanced Configuration Snippet (Safety Valve) for spark2-conf/spark-defaults.conf For advanced use only, a string to be inserted into the client configuration for spark2-conf/spark-defaults.conf. spark2-conf/spark-defaults.conf_client_config_safety_valve false
Spark 2 Client Advanced Configuration Snippet (Safety Valve) for spark2-conf/spark-env.sh For advanced use only, a string to be inserted into the client configuration for spark2-conf/spark-env.sh. spark2-conf/spark-env.sh_client_config_safety_valve false
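The safety-valve snippets above are free-form text appended to the generated client configuration. The sketch below is an illustration (not Cloudera Manager's actual merge logic) of how `spark-defaults.conf`-style safety-valve lines layer over generated defaults; the property values chosen are examples, not recommendations, and `parse_defaults` is a hypothetical helper.

```python
def parse_defaults(text):
    """Parse spark-defaults.conf-style 'key value' lines into a dict."""
    conf = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        key, _, value = line.partition(" ")
        conf[key] = value.strip()
    return conf

# Configuration as generated by Cloudera Manager (illustrative values).
generated = """
spark.serializer org.apache.spark.serializer.KryoSerializer
spark.dynamicAllocation.enabled true
"""

# Safety-valve content: entries here win over the generated defaults.
safety_valve = """
spark.dynamicAllocation.enabled false
spark.executor.memory 4g
"""

merged = {**parse_defaults(generated), **parse_defaults(safety_valve)}
```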

Logs

Display Name Description Related Name Default Value API Name Required
GATEWAY Lineage Log Directory The directory in which GATEWAY lineage log files are written. If changed from the default, Cloudera Manager will not be able to provide lineage information without restarting the Cloudera Manager Agent(s). lineage_event_log_dir /var/log/spark2/lineage lineage_event_log_dir true
Gateway Logging Threshold The minimum log level for Gateway logs INFO log_threshold false

Monitoring

Display Name Description Related Name Default Value API Name Required
Enable Configuration Change Alerts When set, Cloudera Manager will send alerts when this entity's configuration changes. false enable_config_alerts false

Other

Display Name Description Related Name Default Value API Name Required
Alternatives Priority The priority level that the client configuration will have in the Alternatives system on the hosts. Higher priority levels will cause Alternatives to prefer this configuration over any others. 51 client_config_priority true
Spark Data Serializer Name of class implementing org.apache.spark.serializer.Serializer to use in Spark applications. spark.serializer org.apache.spark.serializer.KryoSerializer spark_data_serializer true
Default Application Deploy Mode Which deploy mode to use by default. Can be overridden by users when launching applications. spark_deploy_mode client spark_deploy_mode false
Caching Executor Idle Timeout When dynamic allocation is enabled, the time after which idle executors with cached RDD blocks will be stopped. By default, they are never stopped. spark.dynamicAllocation.cachedExecutorIdleTimeout spark_dynamic_allocation_cached_idle_timeout false
Enable Dynamic Allocation Enable dynamic allocation of executors in Spark applications. spark.dynamicAllocation.enabled true spark_dynamic_allocation_enabled false
Executor Idle Timeout When dynamic allocation is enabled, time after which idle executors will be stopped. spark.dynamicAllocation.executorIdleTimeout 1 minute(s) spark_dynamic_allocation_idle_timeout false
Initial Executor Count When dynamic allocation is enabled, number of executors to allocate when the application starts. By default, this is the same value as the minimum number of executors. spark.dynamicAllocation.initialExecutors spark_dynamic_allocation_initial_executors false
Maximum Executor Count When dynamic allocation is enabled, maximum number of executors to allocate. By default, Spark relies on YARN to control the maximum number of executors for the application. spark.dynamicAllocation.maxExecutors spark_dynamic_allocation_max_executors false
Minimum Executor Count When dynamic allocation is enabled, minimum number of executors to keep alive while the application is running. spark.dynamicAllocation.minExecutors 0 spark_dynamic_allocation_min_executors false
Scheduler Backlog Timeout When dynamic allocation is enabled, timeout before requesting new executors when there are backlogged tasks. spark.dynamicAllocation.schedulerBacklogTimeout 1 second(s) spark_dynamic_allocation_scheduler_backlog_timeout false
Sustained Scheduler Backlog Timeout When dynamic allocation is enabled, timeout before requesting new executors after the initial backlog timeout has already expired. By default this is the same value as the initial backlog timeout. spark.dynamicAllocation.sustainedSchedulerBacklogTimeout spark_dynamic_allocation_sustained_scheduler_backlog_timeout false
Shell Logging Threshold The minimum log level for the Spark shell. spark_gateway_shell_logging_threshold WARN spark_gateway_shell_logging_threshold true
Enable Kill From UI Whether to allow users to kill running stages from the Spark Web UI. spark.ui.killEnabled true spark_gateway_ui_kill_enabled true
Enable History Write Spark application history logs to HDFS. spark.eventLog.enabled true spark_history_enabled false
Enable I/O Encryption Whether to encrypt temporary shuffle and cache files stored by Spark on the local disks. spark.io.encryption.enabled false spark_io_encryption_enabled false
Default Kafka Version Default Kafka library version to add to Spark applications. This can be overridden by setting the SPARK_KAFKA_VERSION environment variable when launching Spark applications. spark_kafka_version 0.9 spark_kafka_version true
Enable Network Encryption Whether to encrypt communication between Spark processes belonging to the same application. Requires authentication (spark.authenticate) to be enabled. spark.network.crypto.enabled false spark_network_encryption_enabled false
Extra Python Path Python library paths to add to PySpark applications. spark_python_path spark_python_path false
Enable Shuffle Service Enables the external shuffle service. The external shuffle service preserves shuffle files written by executors so that the executors can be deallocated without losing work. Must be enabled if Enable Dynamic Allocation is enabled. Recommended and enabled by default. spark.shuffle.service.enabled true spark_shuffle_service_enabled true
Enable Spark Web UI Whether to enable the Spark Web UI on individual applications. It's recommended that the UI be disabled in secure clusters. spark.ui.enabled true spark_ui_enabled false
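The two idle timeouts above (Executor Idle Timeout and Caching Executor Idle Timeout) interact: executors holding cached RDD blocks are exempt from the ordinary idle timeout and, by default, are never released. The following is a simplified model of that policy for illustration only, not Spark's actual dynamic-allocation implementation; the function name and defaults are assumptions.

```python
def should_stop_executor(idle_seconds, has_cached_blocks,
                         idle_timeout=60, cached_idle_timeout=None):
    """Return True if an idle executor would be released.

    idle_timeout mirrors spark.dynamicAllocation.executorIdleTimeout
    (1 minute by default above); cached_idle_timeout=None mirrors the
    default for spark.dynamicAllocation.cachedExecutorIdleTimeout,
    under which executors with cached RDD blocks are never stopped.
    """
    if has_cached_blocks:
        if cached_idle_timeout is None:
            return False  # default: keep executors with cached data
        return idle_seconds >= cached_idle_timeout
    return idle_seconds >= idle_timeout
```

Setting a finite cached-executor timeout trades recomputation cost for faster release of cluster resources.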

Suppressions

Display Name Description Related Name Default Value API Name Required
Suppress Configuration Validator: CDH Version Validator Whether to suppress configuration warnings produced by the CDH Version Validator configuration validator. false role_config_suppression_cdh_version_validator true
Suppress Parameter Validation: Deploy Directory Whether to suppress configuration warnings produced by the built-in parameter validation for the Deploy Directory parameter. false role_config_suppression_client_config_root_dir true
Suppress Parameter Validation: GATEWAY Lineage Log Directory Whether to suppress configuration warnings produced by the built-in parameter validation for the GATEWAY Lineage Log Directory parameter. false role_config_suppression_lineage_event_log_dir true
Suppress Parameter Validation: Gateway Logging Advanced Configuration Snippet (Safety Valve) Whether to suppress configuration warnings produced by the built-in parameter validation for the Gateway Logging Advanced Configuration Snippet (Safety Valve) parameter. false role_config_suppression_log4j_safety_valve true
Suppress Parameter Validation: Spark 2 Client Advanced Configuration Snippet (Safety Valve) for meta/version Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark 2 Client Advanced Configuration Snippet (Safety Valve) for meta/version parameter. false role_config_suppression_meta/version_client_config_safety_valve true
Suppress Parameter Validation: Gateway Advanced Configuration Snippet (Safety Valve) for navigator.lineage.client.properties Whether to suppress configuration warnings produced by the built-in parameter validation for the Gateway Advanced Configuration Snippet (Safety Valve) for navigator.lineage.client.properties parameter. false role_config_suppression_navigator_lineage_client_config_safety_valve true
Suppress Parameter Validation: Spark 2 Client Advanced Configuration Snippet (Safety Valve) for spark2-conf/spark-defaults.conf Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark 2 Client Advanced Configuration Snippet (Safety Valve) for spark2-conf/spark-defaults.conf parameter. false role_config_suppression_spark2-conf/spark-defaults.conf_client_config_safety_valve true
Suppress Parameter Validation: Spark 2 Client Advanced Configuration Snippet (Safety Valve) for spark2-conf/spark-env.sh Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark 2 Client Advanced Configuration Snippet (Safety Valve) for spark2-conf/spark-env.sh parameter. false role_config_suppression_spark2-conf/spark-env.sh_client_config_safety_valve true
Suppress Parameter Validation: Spark Data Serializer Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark Data Serializer parameter. false role_config_suppression_spark_data_serializer true
Suppress Parameter Validation: Extra Python Path Whether to suppress configuration warnings produced by the built-in parameter validation for the Extra Python Path parameter. false role_config_suppression_spark_python_path true

History Server

Advanced

Display Name Description Related Name Default Value API Name Required
History Server Logging Advanced Configuration Snippet (Safety Valve) For advanced use only, a string to be inserted into log4j.properties for this role only. log4j_safety_valve false
History Server Advanced Configuration Snippet (Safety Valve) for meta/version For advanced use only. A string to be inserted into meta/version for this role only. meta/version_role_safety_valve false
Heap Dump Directory Path to the directory where heap dumps are generated when a java.lang.OutOfMemoryError is thrown. This directory is created automatically if it does not exist. If the directory already exists, the role user must have write access to it. If the directory is shared among multiple roles, it should have 1777 permissions. Heap dump files are created with 600 permissions and are owned by the role user. The amount of free space in this directory should be greater than the maximum Java process heap size configured for this role. oom_heap_dump_dir /tmp oom_heap_dump_dir false
Dump Heap When Out of Memory When set, generates heap dump file when java.lang.OutOfMemoryError is thrown. true oom_heap_dump_enabled true
Kill When Out of Memory When set, a SIGKILL signal is sent to the role process when java.lang.OutOfMemoryError is thrown. true oom_sigkill_enabled true
Automatically Restart Process When set, this role's process is automatically (and transparently) restarted in the event of an unexpected failure. false process_auto_restart true
Enable Metric Collection The Cloudera Manager Agent monitors each service and each of its roles by publishing metrics to the Cloudera Manager Service Monitor. Setting this to false stops the Cloudera Manager Agent from publishing any metrics for the corresponding service/roles. This is usually helpful for services that generate a large number of metrics that the Service Monitor is not able to process. true process_should_monitor true
History Server Advanced Configuration Snippet (Safety Valve) for spark2-conf/spark-env.sh For advanced use only. A string to be inserted into spark2-conf/spark-env.sh for this role only. spark2-conf/spark-env.sh_role_safety_valve false
History Server Advanced Configuration Snippet (Safety Valve) for spark2-conf/spark-history-server.conf For advanced use only. A string to be inserted into spark2-conf/spark-history-server.conf for this role only. spark2-conf/spark-history-server.conf_role_safety_valve false
History Server Environment Advanced Configuration Snippet (Safety Valve) For advanced use only, key-value pairs (one on each line) to be inserted into a role's environment. Applies to configurations of this role except client configuration. SPARK2_YARN_HISTORY_SERVER_role_env_safety_valve false

Logs

Display Name Description Related Name Default Value API Name Required
History Server Log Directory The log directory for log files of the role History Server. log_dir /var/log/spark2 log_dir false
History Server Logging Threshold The minimum log level for History Server logs INFO log_threshold false
History Server Maximum Log File Backups The maximum number of rolled log files to keep for History Server logs. Typically used by log4j or logback. 10 max_log_backup_index false
History Server Max Log Size The maximum size, in megabytes, per log file for History Server logs. Typically used by log4j or logback. 200 MiB max_log_size false

Monitoring

Display Name Description Related Name Default Value API Name Required
Enable Health Alerts for this Role When set, Cloudera Manager will send alerts when the health of this role reaches the threshold specified by the EventServer setting eventserver_health_events_alert_threshold true enable_alerts false
Enable Configuration Change Alerts When set, Cloudera Manager will send alerts when this entity's configuration changes. false enable_config_alerts false
Log Directory Free Space Monitoring Absolute Thresholds The health test thresholds for monitoring of free space on the filesystem that contains this role's log directory. Warning: 10 GiB, Critical: 5 GiB log_directory_free_space_absolute_thresholds false
Log Directory Free Space Monitoring Percentage Thresholds The health test thresholds for monitoring of free space on the filesystem that contains this role's log directory. Specified as a percentage of the capacity on that filesystem. This setting is not used if a Log Directory Free Space Monitoring Absolute Thresholds setting is configured. Warning: Never, Critical: Never log_directory_free_space_percentage_thresholds false
Process Swap Memory Thresholds The health test thresholds on the swap memory usage of the process. Warning: Any, Critical: Never process_swap_memory_thresholds false
Role Triggers The configured triggers for this role. This is a JSON-formatted list of triggers. These triggers are evaluated as part of the health system. Every trigger expression is parsed, and if the trigger condition is met, the list of actions provided in the trigger expression is executed. Each trigger has the following fields:
  • triggerName (mandatory) - The name of the trigger. This value must be unique for the specific role.
  • triggerExpression (mandatory) - A tsquery expression representing the trigger.
  • streamThreshold (optional) - The maximum number of streams that can satisfy a condition of a trigger before the condition fires. By default set to 0, and any stream returned causes the condition to fire.
  • enabled (optional) - By default set to 'true'. If set to 'false', the trigger is not evaluated.
  • expressionEditorConfig (optional) - Metadata for the trigger editor. If present, the trigger should only be edited from the Edit Trigger page; editing the trigger here can lead to inconsistencies.
For example, the following JSON-formatted trigger configured for a DataNode fires if the DataNode has more than 1500 file descriptors opened: [{"triggerName": "sample-trigger", "triggerExpression": "IF (SELECT fd_open WHERE roleName=$ROLENAME and last(fd_open) > 1500) DO health:bad", "streamThreshold": 0, "enabled": "true"}] See the trigger rules documentation for more details on how to write triggers using tsquery. The JSON format is evolving and may change and, as a result, backward compatibility is not guaranteed between releases.
[] role_triggers true
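Since a malformed trigger list is a common source of configuration warnings, a quick sanity check of the JSON against the mandatory fields listed above can catch mistakes before pasting the value in. The `validate_triggers` helper below is hypothetical, not part of any Cloudera Manager API; it only checks structure, not the tsquery expression itself.

```python
import json

# Fields the documentation above marks as mandatory for each trigger.
REQUIRED = {"triggerName", "triggerExpression"}

def validate_triggers(raw):
    """Parse a Role/Service Triggers value and check mandatory fields."""
    triggers = json.loads(raw)
    assert isinstance(triggers, list), "value must be a JSON list"
    for t in triggers:
        missing = REQUIRED - t.keys()
        assert not missing, "missing mandatory fields: %s" % missing
    return triggers

# The sample trigger from the description above.
sample = json.dumps([{
    "triggerName": "sample-trigger",
    "triggerExpression": ("IF (SELECT fd_open WHERE roleName=$ROLENAME "
                          "and last(fd_open) > 1500) DO health:bad"),
    "streamThreshold": 0,
    "enabled": "true",
}])
```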
File Descriptor Monitoring Thresholds The health test thresholds of the number of file descriptors used. Specified as a percentage of file descriptor limit. Warning: 50.0 %, Critical: 70.0 % spark2_yarn_history_server_fd_thresholds false
History Server Host Health Test When computing the overall History Server health, consider the host's health. true spark2_yarn_history_server_host_health_enabled false
History Server Process Health Test Enables the health test that checks that the History Server's process state is consistent with the role configuration. true spark2_yarn_history_server_scm_health_enabled false
Unexpected Exits Thresholds The health test thresholds for unexpected exits encountered within a recent period specified by the unexpected_exits_window configuration for the role. Warning: Never, Critical: Any unexpected_exits_thresholds false
Unexpected Exits Monitoring Period The period to review when computing unexpected exits. 5 minute(s) unexpected_exits_window false

Other

Display Name Description Related Name Default Value API Name Required
Use Local Storage Whether to use local storage for caching application history data, which reduces memory usage and makes service restarts faster. enable_local_storage false enable_local_storage false
Enable Event Log Cleaner Specifies whether the History Server should periodically clean up event logs from storage. spark.history.fs.cleaner.enabled true event_log_cleaner_enabled false
Event Log Cleaner Interval How often the History Server will clean up event log files. spark.history.fs.cleaner.interval 1 day(s) event_log_cleaner_interval false
Maximum Event Log Age Specifies the maximum age of the event logs. spark.history.fs.cleaner.maxAge 7 day(s) event_log_cleaner_max_age false
Admin Users Comma-separated list of users who can view all applications when authentication is enabled. spark.history.ui.admin.acls history_server_admin_users false
HDFS Polling Interval How often to poll HDFS for new applications. spark.history.fs.update.interval.seconds 10 second(s) history_server_fs_poll_interval false
Java Heap Size of History Server in Bytes Maximum size for the Java process heap memory. Passed to Java -Xmx. Measured in bytes. history_server_max_heapsize 512 MiB history_server_max_heapsize true
Retained App Count Max number of application UIs to keep in the History Server's memory. All applications will still be available, but may take longer to load if they're not in memory. spark.history.retainedApplications 50 history_server_retained_apps false
Enable User Authentication Enables user authentication using SPNEGO (requires Kerberos), and enables access control to application history data. history_server_spnego_enabled false history_server_spnego_enabled false
Local Storage Directory The directory in which to keep local caches of application history data. spark.history.store.path /var/lib/spark2/history local_storage_dir false
Max Local Storage Size Approximate maximum amount of data to use in local storage for caching application history data. spark.history.store.maxDiskUsage 10 GiB local_storage_max_usage false
Enabled SSL/TLS Algorithms A comma-separated list of algorithm names to enable when TLS/SSL is enabled. By default, all algorithms supported by the JRE are enabled. spark.ssl.historyServer.enabledAlgorithms ssl_server_algorithms false
TLS/SSL Protocol The version of the TLS/SSL protocol to use when TLS/SSL is enabled. spark.ssl.historyServer.protocol TLSv1.2 ssl_server_protocol false
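The event-log cleaner settings above work together: every cleaner interval (1 day by default), event logs older than the maximum age (7 days by default) are removed from storage. A minimal sketch of that retention rule, for illustration only (`logs_to_clean` is a hypothetical helper, not the History Server's actual implementation):

```python
DAY = 24 * 60 * 60  # seconds

def logs_to_clean(log_ages_seconds, max_age=7 * DAY):
    """Return indices of event logs older than spark.history.fs.cleaner.maxAge.

    Run once per spark.history.fs.cleaner.interval when
    spark.history.fs.cleaner.enabled is true.
    """
    return [i for i, age in enumerate(log_ages_seconds) if age > max_age]
```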

Performance

Display Name Description Related Name Default Value API Name Required
Maximum Process File Descriptors If configured, overrides the process soft and hard rlimits (also called ulimits) for file descriptors to the configured value. rlimit_fds false

Ports and Addresses

Display Name Description Related Name Default Value API Name Required
History Server WebUI Port The port of the history server WebUI spark.history.ui.port 18089 history_server_web_port true
TLS/SSL Port Number The port on which to listen for TLS/SSL connections. HTTP connections are redirected to this port when TLS/SSL is enabled. spark.ssl.historyServer.port 18489 ssl_server_port false

Resource Management

Display Name Description Related Name Default Value API Name Required
Cgroup CPU Shares Number of CPU shares to assign to this role. The greater the number of shares, the larger the share of the host's CPUs that will be given to this role when the host experiences CPU contention. Must be between 2 and 262144. Defaults to 1024 for processes not managed by Cloudera Manager. cpu.shares 1024 rm_cpu_shares true
Cgroup I/O Weight Weight for the read I/O requests issued by this role. The greater the weight, the higher the priority of the requests when the host experiences I/O contention. Must be between 100 and 1000. Defaults to 1000 for processes not managed by Cloudera Manager. blkio.weight 500 rm_io_weight true
Cgroup Memory Hard Limit Hard memory limit to assign to this role, enforced by the Linux kernel. When the limit is reached, the kernel will reclaim pages charged to the process. If reclaiming fails, the kernel may kill the process. Both anonymous as well as page cache pages contribute to the limit. Use a value of -1 B to specify no limit. By default processes not managed by Cloudera Manager will have no limit. memory.limit_in_bytes -1 MiB rm_memory_hard_limit true
Cgroup Memory Soft Limit Soft memory limit to assign to this role, enforced by the Linux kernel. When the limit is reached, the kernel will reclaim pages charged to the process if and only if the host is facing memory pressure. If reclaiming fails, the kernel may kill the process. Both anonymous as well as page cache pages contribute to the limit. Use a value of -1 B to specify no limit. By default processes not managed by Cloudera Manager will have no limit. memory.soft_limit_in_bytes -1 MiB rm_memory_soft_limit true
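The cgroup memory limits above are entered in MiB, with -1 meaning "no limit"; the kernel-facing values (memory.limit_in_bytes, memory.soft_limit_in_bytes) are in bytes. A sketch of that conversion, assuming the -1 sentinel passes through unchanged (an illustration, not Cloudera Manager's internal code):

```python
NO_LIMIT = -1  # sentinel: kernel treats this as unlimited

def to_limit_in_bytes(mib):
    """Convert a configured MiB limit to a cgroup byte value."""
    return NO_LIMIT if mib == NO_LIMIT else mib * 1024 * 1024
```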

Security

Display Name Description Related Name Default Value API Name Required
Enable TLS/SSL for History Server Encrypt communication between clients and History Server using Transport Layer Security (TLS) (formerly known as Secure Socket Layer (SSL)). spark.ssl.historyServer.enabled false ssl_enabled false
History Server TLS/SSL Server JKS Keystore File Location The path to the TLS/SSL keystore file containing the server certificate and private key used for TLS/SSL. Used when History Server is acting as a TLS/SSL server. The keystore must be in JKS format. spark.ssl.historyServer.keyStore ssl_server_keystore_location false
History Server TLS/SSL Server JKS Keystore File Password The password for the History Server JKS keystore file. ssl_server_keystore_password false

Stacks Collection

Display Name Description Related Name Default Value API Name Required
Stacks Collection Data Retention The amount of stacks data that is retained. After the retention limit is reached, the oldest data is deleted. stacks_collection_data_retention 100 MiB stacks_collection_data_retention false
Stacks Collection Directory The directory in which stacks logs are placed. If not set, stacks are logged into a stacks subdirectory of the role's log directory. stacks_collection_directory stacks_collection_directory false
Stacks Collection Enabled Whether or not periodic stacks collection is enabled. stacks_collection_enabled false stacks_collection_enabled true
Stacks Collection Frequency The frequency with which stacks are collected. stacks_collection_frequency 5.0 second(s) stacks_collection_frequency false
Stacks Collection Method The method used to collect stacks. The jstack option involves periodically running the jstack command against the role's daemon process. The servlet method is available for those roles that have an HTTP server endpoint exposing the current stacks traces of all threads. When the servlet method is selected, that HTTP endpoint is periodically scraped. stacks_collection_method jstack stacks_collection_method false

Suppressions

Display Name Description Related Name Default Value API Name Required
Suppress Configuration Validator: CDH Version Validator Whether to suppress configuration warnings produced by the CDH Version Validator configuration validator. false role_config_suppression_cdh_version_validator true
Suppress Parameter Validation: Admin Users Whether to suppress configuration warnings produced by the built-in parameter validation for the Admin Users parameter. false role_config_suppression_history_server_admin_users true
Suppress Parameter Validation: Local Storage Directory Whether to suppress configuration warnings produced by the built-in parameter validation for the Local Storage Directory parameter. false role_config_suppression_local_storage_dir true
Suppress Parameter Validation: History Server Logging Advanced Configuration Snippet (Safety Valve) Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server Logging Advanced Configuration Snippet (Safety Valve) parameter. false role_config_suppression_log4j_safety_valve true
Suppress Parameter Validation: History Server Log Directory Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server Log Directory parameter. false role_config_suppression_log_dir true
Suppress Parameter Validation: History Server Advanced Configuration Snippet (Safety Valve) for meta/version Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server Advanced Configuration Snippet (Safety Valve) for meta/version parameter. false role_config_suppression_meta/version_role_safety_valve true
Suppress Parameter Validation: Heap Dump Directory Whether to suppress configuration warnings produced by the built-in parameter validation for the Heap Dump Directory parameter. false role_config_suppression_oom_heap_dump_dir true
Suppress Parameter Validation: Role Triggers Whether to suppress configuration warnings produced by the built-in parameter validation for the Role Triggers parameter. false role_config_suppression_role_triggers true
Suppress Parameter Validation: History Server Advanced Configuration Snippet (Safety Valve) for spark2-conf/spark-env.sh Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server Advanced Configuration Snippet (Safety Valve) for spark2-conf/spark-env.sh parameter. false role_config_suppression_spark2-conf/spark-env.sh_role_safety_valve true
Suppress Parameter Validation: History Server Advanced Configuration Snippet (Safety Valve) for spark2-conf/spark-history-server.conf Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server Advanced Configuration Snippet (Safety Valve) for spark2-conf/spark-history-server.conf parameter. false role_config_suppression_spark2-conf/spark-history-server.conf_role_safety_valve true
Suppress Parameter Validation: History Server Environment Advanced Configuration Snippet (Safety Valve) Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server Environment Advanced Configuration Snippet (Safety Valve) parameter. false role_config_suppression_spark2_yarn_history_server_role_env_safety_valve true
Suppress Parameter Validation: Enabled SSL/TLS Algorithms Whether to suppress configuration warnings produced by the built-in parameter validation for the Enabled SSL/TLS Algorithms parameter. false role_config_suppression_ssl_server_algorithms true
Suppress Parameter Validation: History Server TLS/SSL Server JKS Keystore File Location Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server TLS/SSL Server JKS Keystore File Location parameter. false role_config_suppression_ssl_server_keystore_location true
Suppress Parameter Validation: History Server TLS/SSL Server JKS Keystore File Password Whether to suppress configuration warnings produced by the built-in parameter validation for the History Server TLS/SSL Server JKS Keystore File Password parameter. false role_config_suppression_ssl_server_keystore_password true
Suppress Parameter Validation: TLS/SSL Protocol Whether to suppress configuration warnings produced by the built-in parameter validation for the TLS/SSL Protocol parameter. false role_config_suppression_ssl_server_protocol true
Suppress Parameter Validation: Stacks Collection Directory Whether to suppress configuration warnings produced by the built-in parameter validation for the Stacks Collection Directory parameter. false role_config_suppression_stacks_collection_directory true
Suppress Health Test: Audit Pipeline Test Whether to suppress the results of the Audit Pipeline Test health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts. false role_health_suppression_spark2_on_yarn_spark2_yarn_history_server_audit_health true
Suppress Health Test: File Descriptors Whether to suppress the results of the File Descriptors health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts. false role_health_suppression_spark2_on_yarn_spark2_yarn_history_server_file_descriptor true
Suppress Health Test: Host Health Whether to suppress the results of the Host Health health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts. false role_health_suppression_spark2_on_yarn_spark2_yarn_history_server_host_health true
Suppress Health Test: Log Directory Free Space Whether to suppress the results of the Log Directory Free Space health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts. false role_health_suppression_spark2_on_yarn_spark2_yarn_history_server_log_directory_free_space true
Suppress Health Test: Process Status Whether to suppress the results of the Process Status health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts. false role_health_suppression_spark2_on_yarn_spark2_yarn_history_server_scm_health true
Suppress Health Test: Swap Memory Usage Whether to suppress the results of the Swap Memory Usage health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts. false role_health_suppression_spark2_on_yarn_spark2_yarn_history_server_swap_memory_usage true
Suppress Health Test: Unexpected Exits Whether to suppress the results of the Unexpected Exits health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts. false role_health_suppression_spark2_on_yarn_spark2_yarn_history_server_unexpected_exits true

Service-Wide

Advanced

Display Name Description Related Name Default Value API Name Required
Spark 2 Service Advanced Configuration Snippet (Safety Valve) for meta/version For advanced use only, a string to be inserted into meta/version. Applies to configurations of all roles in this service except client configuration. meta/version_service_safety_valve false
System Group The group that this service's processes should run as. spark process_groupname true
System User The user that this service's processes should run as. spark process_username true
Spark 2 Service Advanced Configuration Snippet (Safety Valve) for spark2-conf/spark-env.sh For advanced use only, a string to be inserted into spark2-conf/spark-env.sh. Applies to configurations of all roles in this service except client configuration. spark2-conf/spark-env.sh_service_safety_valve false
Spark 2 Service Environment Advanced Configuration Snippet (Safety Valve) For advanced use only, key-value pairs (one on each line) to be inserted into a role's environment. Applies to configurations of all roles in this service except client configuration. SPARK2_ON_YARN_service_env_safety_valve false
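The environment safety valve above accepts one KEY=value pair per line, applied to all non-client roles of the service. A minimal illustration follows; SPARK_DAEMON_MEMORY is a standard Spark environment variable, but choosing it (and the 2g value) as the override is purely an assumption for the example, not a documented default:

```
SPARK2_ON_YARN_service_env_safety_valve contents (illustrative):
SPARK_DAEMON_MEMORY=2g
```

Whether comment lines are accepted inside this field is not documented here, so the sketch contains none.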

Cloudera Navigator

Display Name Description Related Name Default Value API Name Required
Enable Lineage Collection Enable collection of lineage from the service's roles. config.navigator.lineage_enabled true navigator_lineage_enabled false

Monitoring

Display Name Description Related Name Default Value API Name Required
Enable Service Level Health Alerts When set, Cloudera Manager will send alerts when the health of this service reaches the threshold specified by the EventServer setting eventserver_health_events_alert_threshold true enable_alerts false
Enable Configuration Change Alerts When set, Cloudera Manager will send alerts when this entity's configuration changes. false enable_config_alerts false
Service Triggers The configured triggers for this service. This is a JSON-formatted list of triggers. These triggers are evaluated as part of the health system. Every trigger expression is parsed, and if the trigger condition is met, the list of actions provided in the trigger expression is executed. Each trigger has the following fields:
  • triggerName (mandatory) - The name of the trigger. This value must be unique for the specific service.
  • triggerExpression (mandatory) - A tsquery expression representing the trigger.
  • streamThreshold (optional) - The maximum number of streams that can satisfy a condition of a trigger before the condition fires. By default set to 0, so any stream returned causes the condition to fire.
  • enabled (optional) - By default set to 'true'. If set to 'false', the trigger is not evaluated.
  • expressionEditorConfig (optional) - Metadata for the trigger editor. If present, the trigger should only be edited from the Edit Trigger page; editing the trigger here can lead to inconsistencies.
For example, the following JSON-formatted trigger fires if there are more than 10 DataNodes with more than 500 file descriptors opened: [{"triggerName": "sample-trigger", "triggerExpression": "IF (SELECT fd_open WHERE roleType = DataNode and last(fd_open) > 500) DO health:bad", "streamThreshold": 10, "enabled": "true"}] See the trigger rules documentation for more details on how to write triggers using tsquery. The JSON format is evolving and may change and, as a result, backward compatibility is not guaranteed between releases.
[] service_triggers true
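The sample trigger above can be sketched programmatically. This is an illustrative check that a value destined for the service_triggers property is well-formed JSON with the fields described in this table; the field names come from the description above, and nothing here is a Cloudera Manager API call:

```python
import json

# The sample service trigger from the description above: mark health "bad"
# when more than 10 DataNode streams each report over 500 open file descriptors.
trigger_list = [{
    "triggerName": "sample-trigger",              # must be unique per service
    "triggerExpression": (
        "IF (SELECT fd_open WHERE roleType = DataNode "
        "and last(fd_open) > 500) DO health:bad"  # tsquery expression
    ),
    "streamThreshold": 10,                        # fire only past 10 streams
    "enabled": "true",                            # evaluated by default
}]

# service_triggers expects a JSON-formatted list, so serialize before pasting
# the result into the configuration field.
value = json.dumps(trigger_list)
print(value)
```

Round-tripping through json.dumps guards against the quoting mistakes that are easy to make when hand-editing a one-line JSON list in a text field.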
Service Monitor Derived Configs Advanced Configuration Snippet (Safety Valve) For advanced use only, a list of derived configuration properties that will be used by the Service Monitor instead of the default ones. smon_derived_configs_safety_valve false
History Server Role Health Test When computing the overall SPARK2_ON_YARN health, consider the History Server's health true SPARK2_ON_YARN_SPARK2_YARN_HISTORY_SERVER_health_enabled false

Other

Display Name Description Related Name Default Value API Name Required
HBase Service Name of the HBase service that this Spark 2 service instance depends on hbase_service false
Hive Service Name of the Hive service that this Spark 2 service instance depends on hive_service false
Spark Authentication Whether the Spark communication protocols perform authentication using a shared secret. If using a SPARK_ON_YARN service (i.e., a Spark 1 based service), ensure that the value of this property is the same in both services. spark.authenticate false spark_authenticate true
Spark History Location (HDFS) The location of Spark application history logs in HDFS. Changing this value does not move existing logs to the new location. spark.eventLog.dir /user/spark/spark2ApplicationHistory spark_history_log_dir true
YARN (MR2 Included) Service Name of the YARN (MR2 Included) service that this Spark 2 service instance depends on yarn_service true

Ports and Addresses

Display Name Description Related Name Default Value API Name Required
Spark Shuffle Service Port The port on which the Spark Shuffle Service listens for fetch requests. If using a SPARK_ON_YARN service (i.e., a Spark 1 based service), ensure that the value of this property is the same in both services. spark.shuffle.service.port 7337 spark_shuffle_service_port true
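Taken together, the Spark Authentication, Spark History Location (HDFS), and Spark Shuffle Service Port properties above correspond to entries in spark2-conf/spark-defaults.conf. A sketch of the resulting settings, using the default values listed in these tables; the file Cloudera Manager actually renders may contain additional generated properties, and the key=value form shown is one of the layouts Spark accepts:

```
# Shared-secret authentication for Spark communication protocols (default: off).
spark.authenticate=false
# HDFS location of Spark 2 application history logs.
spark.eventLog.dir=/user/spark/spark2ApplicationHistory
# Port the Spark Shuffle Service listens on; must match any Spark 1 service.
spark.shuffle.service.port=7337
```

If both a Spark 1 and a Spark 2 service run on the cluster, spark.authenticate and spark.shuffle.service.port must carry the same values in both, per the property descriptions above.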

Security

Display Name Description Related Name Default Value API Name Required
Kerberos Principal Kerberos principal short name used by all roles of this service. spark kerberos_princ_name true

Suppressions

Display Name Description Related Name Default Value API Name Required
Suppress Configuration Validator: Gateway Count Validator Whether to suppress configuration warnings produced by the Gateway Count Validator configuration validator. false service_config_suppression_gateway_count_validator true
Suppress Parameter Validation: Kerberos Principal Whether to suppress configuration warnings produced by the built-in parameter validation for the Kerberos Principal parameter. false service_config_suppression_kerberos_princ_name true
Suppress Parameter Validation: Spark 2 Service Advanced Configuration Snippet (Safety Valve) for meta/version Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark 2 Service Advanced Configuration Snippet (Safety Valve) for meta/version parameter. false service_config_suppression_meta/version_service_safety_valve true
Suppress Parameter Validation: System Group Whether to suppress configuration warnings produced by the built-in parameter validation for the System Group parameter. false service_config_suppression_process_groupname true
Suppress Parameter Validation: System User Whether to suppress configuration warnings produced by the built-in parameter validation for the System User parameter. false service_config_suppression_process_username true
Suppress Parameter Validation: Service Triggers Whether to suppress configuration warnings produced by the built-in parameter validation for the Service Triggers parameter. false service_config_suppression_service_triggers true
Suppress Parameter Validation: Service Monitor Derived Configs Advanced Configuration Snippet (Safety Valve) Whether to suppress configuration warnings produced by the built-in parameter validation for the Service Monitor Derived Configs Advanced Configuration Snippet (Safety Valve) parameter. false service_config_suppression_smon_derived_configs_safety_valve true
Suppress Parameter Validation: Spark 2 Service Advanced Configuration Snippet (Safety Valve) for spark2-conf/spark-env.sh Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark 2 Service Advanced Configuration Snippet (Safety Valve) for spark2-conf/spark-env.sh parameter. false service_config_suppression_spark2-conf/spark-env.sh_service_safety_valve true
Suppress Parameter Validation: Spark 2 Service Environment Advanced Configuration Snippet (Safety Valve) Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark 2 Service Environment Advanced Configuration Snippet (Safety Valve) parameter. false service_config_suppression_spark2_on_yarn_service_env_safety_valve true
Suppress Configuration Validator: History Server Count Validator Whether to suppress configuration warnings produced by the History Server Count Validator configuration validator. false service_config_suppression_spark2_yarn_history_server_count_validator true
Suppress Parameter Validation: Spark History Location (HDFS) Whether to suppress configuration warnings produced by the built-in parameter validation for the Spark History Location (HDFS) parameter. false service_config_suppression_spark_history_log_dir true
Suppress Health Test: History Server Health Whether to suppress the results of the History Server Health health test. The results of suppressed health tests are ignored when computing the overall health of the associated host, role or service, so suppressed health tests will not generate alerts. false service_health_suppression_spark2_on_yarn_spark2_on_yarn_spark2_yarn_history_server_health true