Flume Properties in CDH 5.3.0

Agent Default Group

Advanced

Display Name Description Related Name Default Value API Name Required
Agent Environment Advanced Configuration Snippet (Safety Valve) For advanced use only, key-value pairs (one on each line) to be inserted into a role's environment. Applies to configurations of this role except client configuration. AGENT_role_env_safety_valve false
HBase sink prefer hbase-site.xml over Zookeeper config Disables import of ZooKeeper configuration from the HBase classpath. This prevents zoo.cfg from overriding hbase-site.xml for ZooKeeper quorum information. This option is only supported on CDH 4.4 or later deployments. true agent_disable_zoo_cfg true
Java Configuration Options for Flume Agent These arguments are passed as part of the Java command line. Commonly, garbage collection flags or extra debugging flags are passed here. Note that the Flume agent only uses options that start with -D and -X (including -XX). flume_agent_java_opts false
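For illustration only, a value might combine heap, GC, and logging flags; all of the flags below are site-specific examples, not defaults:

  -Xms268435456 -XX:+UseConcMarkSweepGC -XX:+CMSParallelRemarkEnabled -Dflume.root.logger=INFO,LOGFILE

A flag such as -verbose:gc would be ignored here, since it starts with neither -D nor -X.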
Agent Logging Advanced Configuration Snippet (Safety Valve) For advanced use only, a string to be inserted into log4j.properties for this role only. log4j_safety_valve false
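A sketch of typical safety-valve lines, in standard log4j.properties syntax (the logger names below are examples, not defaults):

  log4j.logger.org.apache.flume.sink.hdfs=DEBUG
  log4j.logger.org.apache.flume.channel.file=INFO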
Heap Dump Directory Path to the directory where heap dumps are generated when a java.lang.OutOfMemoryError is thrown. This directory is created automatically if it does not exist. If it already exists, the role user must have write access to it. If the directory is shared among multiple roles, it should have 1777 permissions. Heap dump files are created with 600 permissions and are owned by the role user. The amount of free space in this directory should be greater than the maximum Java Process heap size configured for this role. oom_heap_dump_dir /tmp oom_heap_dump_dir false
Dump Heap When Out of Memory When set, generates a heap dump file when java.lang.OutOfMemoryError is thrown. false oom_heap_dump_enabled true
Kill When Out of Memory When set, a SIGKILL signal is sent to the role process when java.lang.OutOfMemoryError is thrown. true oom_sigkill_enabled true
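Taken together, the three OutOfMemoryError settings above correspond roughly to the standard HotSpot flags sketched below. This is an illustrative equivalent, not the literal command line Cloudera Manager generates:

  -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=/tmp -XX:OnOutOfMemoryError="kill -9 %p"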
Automatically Restart Process When set, this role's process is automatically (and transparently) restarted in the event of an unexpected failure. true process_auto_restart true

Flume-NG Solr Sink

Display Name Description Related Name Default Value API Name Required
Custom Mime-types File Text that goes verbatim into the custom-mimetypes.xml file used by the Flume-NG Solr sink.
<?xml version="1.0" encoding="UTF-8"?>
<!--
  Licensed to the Apache Software Foundation (ASF) under one or more
  contributor license agreements. See the NOTICE file distributed with
  this work for additional information regarding copyright ownership.
  The ASF licenses this file to You under the Apache License, Version 2.0
  (the "License"); you may not use this file except in compliance with
  the License. You may obtain a copy of the License at

      http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License.
-->
<mime-info>
  <mime-type type="text/space-separated-values">
    <glob pattern="*.ssv"/>
  </mime-type>
  <mime-type type="avro/binary">
    <magic priority="50">
      <match value="0x4f626a01" type="string" offset="0"/>
    </magic>
    <glob pattern="*.avro"/>
  </mime-type>
  <mime-type type="mytwittertest/json+delimited+length">
    <magic priority="50">
      <match value="[0-9]+(\r)?\n\{&quot;" type="regex" offset="0:16"/>
    </magic>
  </mime-type>
  <mime-type type="application/hadoop-sequence-file">
    <magic priority="50">
      <match value="SEQ[\0-\6]" type="regex" offset="0"/>
    </magic>
  </mime-type>
</mime-info>
agent_custom_mimetypes_file false
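As a usage sketch, a morphline can consult this file through the Kite detectMimeType command; the settings below are illustrative:

  {
    detectMimeType {
      includeDefaultMimeTypes : false
      mimeTypesFiles : [custom-mimetypes.xml]
    }
  }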
Grok Dictionary File Text that goes verbatim into the grok-dictionary.conf file used by the Flume-NG Solr sink.
USERNAME [a-zA-Z0-9._-]+
USER %{USERNAME}
INT (?:[+-]?(?:[0-9]+))
BASE10NUM (?<![0-9.+-])(?>[+-]?(?:(?:[0-9]+(?:\.[0-9]+)?)|(?:\.[0-9]+)))
NUMBER (?:%{BASE10NUM})
BASE16NUM (?<![0-9A-Fa-f])(?:[+-]?(?:0x)?(?:[0-9A-Fa-f]+))
BASE16FLOAT \b(?<![0-9A-Fa-f.])(?:[+-]?(?:0x)?(?:(?:[0-9A-Fa-f]+(?:\.[0-9A-Fa-f]*)?)|(?:\.[0-9A-Fa-f]+)))\b
POSINT \b(?:[1-9][0-9]*)\b
NONNEGINT \b(?:[0-9]+)\b
WORD \b\w+\b
NOTSPACE \S+
SPACE \s*
DATA .*?
GREEDYDATA .*
#QUOTEDSTRING (?:(?<!\\)(?:"(?:\\.|[^\\"])*"|(?:'(?:\\.|[^\\'])*')|(?:`(?:\\.|[^\\`])*`)))
QUOTEDSTRING (?>(?<!\\)(?>"(?>\\.|[^\\"]+)+"|""|(?>'(?>\\.|[^\\']+)+')|''|(?>`(?>\\.|[^\\`]+)+`)|``))
UUID [A-Fa-f0-9]{8}-(?:[A-Fa-f0-9]{4}-){3}[A-Fa-f0-9]{12}

# Networking
MAC (?:%{CISCOMAC}|%{WINDOWSMAC}|%{COMMONMAC})
CISCOMAC (?:(?:[A-Fa-f0-9]{4}\.){2}[A-Fa-f0-9]{4})
WINDOWSMAC (?:(?:[A-Fa-f0-9]{2}-){5}[A-Fa-f0-9]{2})
COMMONMAC (?:(?:[A-Fa-f0-9]{2}:){5}[A-Fa-f0-9]{2})
IP (?<![0-9])(?:(?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2})[.](?:25[0-5]|2[0-4][0-9]|[0-1]?[0-9]{1,2}))(?![0-9])
HOSTNAME \b(?:[0-9A-Za-z][0-9A-Za-z-]{0,62})(?:\.(?:[0-9A-Za-z][0-9A-Za-z-]{0,62}))*(\.?|\b)
HOST %{HOSTNAME}
IPORHOST (?:%{HOSTNAME}|%{IP})
#HOSTPORT (?:%{IPORHOST=~/\./}:%{POSINT}) # WH

# paths
PATH (?:%{UNIXPATH}|%{WINPATH})
UNIXPATH (?>/(?>[\w_%!$@:.,-]+|\\.)*)+
#UNIXPATH (?<![\w\/])(?:/[^\/\s?*]*)+
LINUXTTY (?>/dev/pts/%{NONNEGINT})
BSDTTY (?>/dev/tty[pq][a-z0-9])
TTY (?:%{BSDTTY}|%{LINUXTTY})
WINPATH (?>[A-Za-z]+:|\\)(?:\\[^\\?*]*)+
URIPROTO [A-Za-z]+(\+[A-Za-z+]+)?
URIHOST %{IPORHOST}(?::%{POSINT:port})?
# uripath comes loosely from RFC1738, but mostly from what Firefox
# doesn't turn into %XX
URIPATH (?:/[A-Za-z0-9$.+!*'(){},~:;=#%_\-]*)+
#URIPARAM \?(?:[A-Za-z0-9]+(?:=(?:[^&]*))?(?:&(?:[A-Za-z0-9]+(?:=(?:[^&]*))?)?)*)?
URIPARAM \?[A-Za-z0-9$.+!*'|(){},~#%&/=:;_?\-\[\]]*
URIPATHPARAM %{URIPATH}(?:%{URIPARAM})?
URI %{URIPROTO}://(?:%{USER}(?::[^@]*)?@)?(?:%{URIHOST})?(?:%{URIPATHPARAM})?

# Months: January, Feb, 3, 03, 12, December
MONTH \b(?:Jan(?:uary)?|Feb(?:ruary)?|Mar(?:ch)?|Apr(?:il)?|May|Jun(?:e)?|Jul(?:y)?|Aug(?:ust)?|Sep(?:tember)?|Oct(?:ober)?|Nov(?:ember)?|Dec(?:ember)?)\b
MONTHNUM (?:0?[1-9]|1[0-2])
MONTHDAY (?:(?:0[1-9])|(?:[12][0-9])|(?:3[01])|[1-9])

# Days: Monday, Tue, Thu, etc...
DAY (?:Mon(?:day)?|Tue(?:sday)?|Wed(?:nesday)?|Thu(?:rsday)?|Fri(?:day)?|Sat(?:urday)?|Sun(?:day)?)

# Years?
YEAR (?>\d\d){1,2}
# Time: HH:MM:SS
#TIME \d{2}:\d{2}(?::\d{2}(?:\.\d+)?)?
# I'm still on the fence about using grok to perform the time match,
# since it's probably slower.
# TIME %{POSINT<24}:%{POSINT<60}(?::%{POSINT<60}(?:\.%{POSINT})?)?
HOUR (?:2[0123]|[01]?[0-9])
MINUTE (?:[0-5][0-9])
# '60' is a leap second in most time standards and thus is valid.
SECOND (?:(?:[0-5][0-9]|60)(?:[:.,][0-9]+)?)
TIME (?!<[0-9])%{HOUR}:%{MINUTE}(?::%{SECOND})(?![0-9])
# datestamp is YYYY/MM/DD-HH:MM:SS.UUUU (or something like it)
DATE_US %{MONTHNUM}[/-]%{MONTHDAY}[/-]%{YEAR}
DATE_EU %{MONTHDAY}[./-]%{MONTHNUM}[./-]%{YEAR}
ISO8601_TIMEZONE (?:Z|[+-]%{HOUR}(?::?%{MINUTE}))
ISO8601_SECOND (?:%{SECOND}|60)
TIMESTAMP_ISO8601 %{YEAR}-%{MONTHNUM}-%{MONTHDAY}[T ]%{HOUR}:?%{MINUTE}(?::?%{SECOND})?%{ISO8601_TIMEZONE}?
DATE %{DATE_US}|%{DATE_EU}
DATESTAMP %{DATE}[- ]%{TIME}
TZ (?:[PMCE][SD]T)
DATESTAMP_RFC822 %{DAY} %{MONTH} %{MONTHDAY} %{YEAR} %{TIME} %{TZ}
DATESTAMP_OTHER %{DAY} %{MONTH} %{MONTHDAY} %{TIME} %{TZ} %{YEAR}

# Syslog Dates: Month Day HH:MM:SS
SYSLOGTIMESTAMP %{MONTH} +%{MONTHDAY} %{TIME}
PROG (?:[\w._/%-]+)
SYSLOGPROG %{PROG:program}(?:\[%{POSINT:pid}\])?
SYSLOGHOST %{IPORHOST}
SYSLOGFACILITY <%{NONNEGINT:facility}.%{NONNEGINT:priority}>
HTTPDATE %{MONTHDAY}/%{MONTH}/%{YEAR}:%{TIME} %{INT}

# Shortcuts
QS %{QUOTEDSTRING}

# Log formats
SYSLOGBASE %{SYSLOGTIMESTAMP:timestamp} (?:%{SYSLOGFACILITY} )?%{SYSLOGHOST:logsource} %{SYSLOGPROG}:
COMBINEDAPACHELOG %{IPORHOST:clientip} %{USER:ident} %{USER:auth} \[%{HTTPDATE:timestamp}\] "(?:%{WORD:verb} %{NOTSPACE:request}(?: HTTP/%{NUMBER:httpversion})?|%{DATA:rawrequest})" %{NUMBER:response} (?:%{NUMBER:bytes}|-) %{QS:referrer} %{QS:agent}

# Log Levels
LOGLEVEL ([T|t]race|TRACE|[D|d]ebug|DEBUG|[N|n]otice|NOTICE|[I|i]nfo|INFO|[W|w]arn?(?:ing)?|WARN?(?:ING)?|[E|e]rr?(?:or)?|ERR?(?:OR)?|[C|c]rit?(?:ical)?|CRIT?(?:ICAL)?|[F|f]atal|FATAL|[S|s]evere|SEVERE|EMERG(?:ENCY)?|[Ee]merg(?:ency)?)
agent_grok_dictionary_conf_file false
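As a usage sketch, a morphline can reference these patterns through the Kite grok command; the field name and expression below are illustrative:

  {
    grok {
      dictionaryFiles : [grok-dictionary.conf]
      expressions : {
        message : """<%{POSINT:syslog_pri}>%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}"""
      }
    }
  }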
Morphlines File Text that goes into the morphlines.conf file used by the Flume-NG Solr sink. The text goes verbatim into the config file except that $ZK_HOST is replaced by the ZooKeeper quorum of the Solr service.
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements. See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership. The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License. You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied. See the License for the
# specific language governing permissions and limitations
# under the License.

# Application configuration file in HOCON format (Human-Optimized Config Object Notation).
# HOCON syntax is defined at http://github.com/typesafehub/config/blob/master/HOCON.md
# and also used by Akka (http://www.akka.io) and Play (http://www.playframework.org/).
# For more examples see http://doc.akka.io/docs/akka/2.1.2/general/configuration.html

# morphline.conf example file
# this is a comment

# Specify server locations in a SOLR_LOCATOR variable; used later in variable substitutions:
SOLR_LOCATOR : {
  # Name of solr collection
  collection : collection1

  # ZooKeeper ensemble
  zkHost : "$ZK_HOST"

  # Relative or absolute path to a directory containing conf/solrconfig.xml and conf/schema.xml
  # If this path is uncommented it takes precedence over the configuration stored in ZooKeeper.
  # solrHomeDir : "example/solr/collection1"

  # The maximum number of documents to send to Solr per network batch (throughput knob)
  # batchSize : 100
}

# Specify an array of one or more morphlines, each of which defines an ETL
# transformation chain. A morphline consists of one or more (potentially
# nested) commands. A morphline is a way to consume records (e.g. Flume events,
# HDFS files or blocks), turn them into a stream of records, and pipe the stream
# of records through a set of easily configurable transformations on its way to
# Solr (or a MapReduceIndexerTool RecordWriter that feeds via a Reducer into Solr).
morphlines : [
  {
    # Name used to identify a morphline. E.g. used if there are multiple morphlines in a
    # morphline config file
    id : morphline1

    # Import all morphline commands in these java packages and their subpackages.
    # Other commands that may be present on the classpath are not visible to this morphline.
    importCommands : ["org.kitesdk.**", "org.apache.solr.**"]

    commands : [
      {
        # Parse Avro container file and emit a record for each avro object
        readAvroContainer {
          # Optionally, require the input record to match one of these MIME types:
          # supportedMimeTypes : [avro/binary]

          # Optionally, use a custom Avro schema in JSON format inline:
          # schemaString : """<json can go here>"""

          # Optionally, use a custom Avro schema file in JSON format:
          # schemaFile : /path/to/syslog.avsc
        }
      }

      {
        # Consume the output record of the previous command and pipe another record downstream.
        #
        # extractAvroPaths is a command that uses zero or more avro path expressions to extract
        # values from an Avro object. Each expression consists of a record output field name (on
        # the left side of the colon ':') as well as zero or more path steps (on the right hand
        # side), each path step separated by a '/' slash. Avro arrays are traversed with the '[]'
        # notation.
        #
        # The result of a path expression is a list of objects, each of which is added to the
        # given record output field.
        #
        # The path language supports all Avro concepts, including nested structures, records,
        # arrays, maps, unions, etc, as well as a flatten option that collects the primitives in
        # a subtree into a flat list.
        extractAvroPaths {
          flatten : false
          paths : {
            id : /id
            text : /text
            user_friends_count : /user_friends_count
            user_location : /user_location
            user_description : /user_description
            user_statuses_count : /user_statuses_count
            user_followers_count : /user_followers_count
            user_name : /user_name
            user_screen_name : /user_screen_name
            created_at : /created_at
            retweet_count : /retweet_count
            retweeted : /retweeted
            in_reply_to_user_id : /in_reply_to_user_id
            source : /source
            in_reply_to_status_id : /in_reply_to_status_id
            media_url_https : /media_url_https
            expanded_url : /expanded_url
          }
        }
      }

      {
        # Consume the output record of the previous command and pipe another record downstream.
        #
        # convert timestamp field to native Solr timestamp format
        # e.g. 2012-09-06T07:14:34Z to 2012-09-06T07:14:34.000Z
        convertTimestamp {
          field : created_at
          inputFormats : ["yyyy-MM-dd'T'HH:mm:ss'Z'", "yyyy-MM-dd"]
          inputTimezone : America/Los_Angeles
          # outputFormat : "yyyy-MM-dd'T'HH:mm:ss.SSSZ"
          outputTimezone : UTC
        }
      }

      {
        # Consume the output record of the previous command and pipe another record downstream.
        #
        # Command that sanitizes record fields that are unknown to Solr schema.xml by either
        # deleting them (renameToPrefix is absent or a zero length string), or by moving them to a
        # field prefixed with the given renameToPrefix (e.g. renameToPrefix = "ignored_" to use
        # typical dynamic Solr fields).
        #
        # Recall that Solr throws an exception on any attempt to load a document that contains a
        # field that isn't specified in schema.xml.
        sanitizeUnknownSolrFields {
          # Location from which to fetch Solr schema
          solrLocator : ${SOLR_LOCATOR}

          # renameToPrefix : "ignored_"
        }
      }

      # log the record at DEBUG level to SLF4J
      { logDebug { format : "output record: {}", args : ["@{}"] } }

      # load the record into a SolrServer or MapReduce SolrOutputFormat.
      { loadSolr { solrLocator : ${SOLR_LOCATOR} } }
    ]
  }
]
agent_morphlines_conf_file false
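For context, a flume.conf stanza that wires a morphline from this file into the Flume-NG Solr sink might look like the following sketch (the sink and channel names are assumptions; morphlineFile and morphlineId are the sink's documented parameters):

  tier1.sinks.solrSink.type = org.apache.flume.sink.solr.morphline.MorphlineSolrSink
  tier1.sinks.solrSink.channel = channel1
  tier1.sinks.solrSink.morphlineFile = morphlines.conf
  tier1.sinks.solrSink.morphlineId = morphline1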

Logs

Display Name Description Related Name Default Value API Name Required
Flume Agent Log Directory Directory where Flume Agent will place its log files. /var/log/flume-ng flume_agent_log_dir false
Agent Logging Threshold The minimum log level for Agent logs. INFO log_threshold false
Agent Maximum Log File Backups The maximum number of rolled log files to keep for Agent logs. Typically used by log4j or logback. 10 max_log_backup_index false
Agent Max Log Size The maximum size, in megabytes, per log file for Agent logs. Typically used by log4j or logback. 200 MiB max_log_size false
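The two rolling settings above behave like the standard log4j RollingFileAppender properties sketched here (the appender name RFA is an assumption, not the name Cloudera Manager uses):

  log4j.appender.RFA.MaxFileSize=200MB
  log4j.appender.RFA.MaxBackupIndex=10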

Monitoring

Display Name Description Related Name Default Value API Name Required
Web Metric Collection Enables the health test that the Cloudera Manager Agent can successfully contact and gather metrics from the web server. true agent_web_metric_collection_enabled false
Web Metric Collection Duration The health test thresholds on the duration of the metrics request to the web server. Warning: 10 second(s), Critical: Never agent_web_metric_collection_thresholds false
Enable Health Alerts for this Role When set, Cloudera Manager will send alerts when the health of this role reaches the threshold specified by the EventServer setting eventserver_health_events_alert_threshold. true enable_alerts false
Enable Configuration Change Alerts When set, Cloudera Manager will send alerts when this entity's configuration changes. false enable_config_alerts false
File Descriptor Monitoring Thresholds The health test thresholds of the number of file descriptors used. Specified as a percentage of file descriptor limit. Warning: 50.0 %, Critical: 70.0 % flume_agent_fd_thresholds false
Agent Host Health Test When computing the overall Agent health, consider the host's health. true flume_agent_host_health_enabled false
Agent Process Health Test Enables the health test that checks that the Agent's process state is consistent with the role configuration. true flume_agent_scm_health_enabled false
Heap Dump Directory Free Space Monitoring Absolute Thresholds The health test thresholds for monitoring of free space on the filesystem that contains this role's heap dump directory. Warning: 10 GiB, Critical: 5 GiB heap_dump_directory_free_space_absolute_thresholds false
Heap Dump Directory Free Space Monitoring Percentage Thresholds The health test thresholds for monitoring of free space on the filesystem that contains this role's heap dump directory. Specified as a percentage of the capacity on that filesystem. This setting is not used if a Heap Dump Directory Free Space Monitoring Absolute Thresholds setting is configured. Warning: Never, Critical: Never heap_dump_directory_free_space_percentage_thresholds false
Log Directory Free Space Monitoring Absolute Thresholds The health test thresholds for monitoring of free space on the filesystem that contains this role's log directory. Warning: 10 GiB, Critical: 5 GiB log_directory_free_space_absolute_thresholds false
Log Directory Free Space Monitoring Percentage Thresholds The health test thresholds for monitoring of free space on the filesystem that contains this role's log directory. Specified as a percentage of the capacity on that filesystem. This setting is not used if a Log Directory Free Space Monitoring Absolute Thresholds setting is configured. Warning: Never, Critical: Never log_directory_free_space_percentage_thresholds false
Rules to Extract Events from Log Files This file contains the rules which govern how log messages are turned into events by the custom log4j appender that this role loads. It is in JSON format, and is composed of a list of rules. Every log message is evaluated against each of these rules in turn to decide whether or not to send an event for that message. Each rule has some or all of the following fields:
  • alert - whether or not events generated from this rule should be promoted to alerts. A value of "true" will cause alerts to be generated. If not specified, the default is "false".
  • rate (mandatory) - the maximum number of log messages matching this rule that may be sent as events every minute. If more than rate matching log messages are received in a single minute, the extra messages are ignored. If rate is less than 0, the number of messages per minute is unlimited.
  • periodminutes - the number of minutes during which the publisher will publish no more than rate events. If not specified, the default is one minute.
  • threshold - apply this rule only to messages with this log4j severity level or above. An example is "WARN" for warning level messages or higher.
  • content - match only those messages whose contents match this regular expression.
  • exceptiontype - match only those messages which are part of an exception message. The exception type must match this regular expression.
Example: {"alert": false, "rate": 10, "exceptiontype": "java.lang.StringIndexOutOfBoundsException"} This rule sends events to Cloudera Manager for every StringIndexOutOfBoundsException, up to a maximum of 10 every minute.
{
  "version": "0",
  "rules": [
    {"alert": false, "rate": 1, "periodminutes": 1, "threshold": "FATAL"},
    {"alert": false, "rate": 0, "threshold": "WARN", "content": ".* is deprecated. Instead, use .*"},
    {"alert": false, "rate": 0, "threshold": "WARN", "content": ".* is deprecated. Use .* instead"},
    {"alert": false, "rate": 1, "periodminutes": 2, "exceptiontype": ".*"},
    {"alert": false, "rate": 1, "periodminutes": 1, "threshold": "WARN"}
  ]
}
log_event_whitelist false
Process Swap Memory Thresholds The health test thresholds on the swap memory usage of the process. Warning: Any, Critical: Never process_swap_memory_thresholds false
Role Triggers The configured triggers for this role. This is a JSON formatted list of triggers. These triggers are evaluated as part of the health system. Every trigger expression is parsed, and if the trigger condition is met, the list of actions provided in the trigger expression is executed. Each trigger has all of the following fields:
  • triggerName (mandatory) - The name of the trigger. This value must be unique for the specific role.
  • triggerExpression (mandatory) - A tsquery expression representing the trigger.
  • streamThreshold (optional) - The maximum number of streams that can satisfy a condition of a trigger before the condition fires. By default set to 0, and any stream returned causes the condition to fire.
  • enabled (optional) - By default set to 'true'. If set to 'false', the trigger will not be evaluated.
  • expressionEditorConfig (optional) - Metadata for the trigger editor. If present, the trigger should only be edited from the Edit Trigger page; editing the trigger here may lead to inconsistencies.
For example, the following JSON formatted trigger, configured for a DataNode, fires if the DataNode has more than 1500 file descriptors open: [{"triggerName": "sample-trigger", "triggerExpression": "IF (SELECT fd_open WHERE roleName=$ROLENAME and last(fd_open) > 1500) DO health:bad", "streamThreshold": 0, "enabled": "true"}] See the trigger rules documentation for more details on how to write triggers using tsquery. The JSON format is evolving and may change in the future; as a result, backward compatibility is not guaranteed between releases at this time.
[] role_triggers true
Unexpected Exits Thresholds The health test thresholds for unexpected exits encountered within a recent period specified by the unexpected_exits_window configuration for the role. Warning: Never, Critical: Any unexpected_exits_thresholds false
Unexpected Exits Monitoring Period The period to review when computing unexpected exits. 5 minute(s) unexpected_exits_window false

Other

Display Name Description Related Name Default Value API Name Required
Configuration File Verbatim contents of flume.conf. Multiple agents may be configured from the same configuration file; the Agent Name setting can be overridden to select which agent configuration to use for each agent. To integrate with a secured cluster, you can use the substitution strings "$KERBEROS_PRINCIPAL" and "$KERBEROS_KEYTAB", which will be replaced by the principal name and the keytab path respectively.
# Please paste flume.conf here. Example:

# Sources, channels, and sinks are defined per
# agent name, in this case 'tier1'.
tier1.sources = source1
tier1.channels = channel1
tier1.sinks = sink1

# For each source, channel, and sink, set
# standard properties.
tier1.sources.source1.type = netcat
tier1.sources.source1.bind = 127.0.0.1
tier1.sources.source1.port = 9999
tier1.sources.source1.channels = channel1
tier1.channels.channel1.type = memory
tier1.sinks.sink1.type = logger
tier1.sinks.sink1.channel = channel1

# Other properties are specific to each type of
# source, channel, or sink. In this case, we
# specify the capacity of the memory channel.
tier1.channels.channel1.capacity = 100
agent_config_file true
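As an illustration of the substitution strings described above, an HDFS sink on a secured cluster might reference them as follows (the sink name and path are assumptions; hdfs.kerberosPrincipal and hdfs.kerberosKeytab are the sink's documented parameters):

  tier1.sinks.sink1.type = hdfs
  tier1.sinks.sink1.channel = channel1
  tier1.sinks.sink1.hdfs.path = hdfs://nameservice1/user/flume/events
  tier1.sinks.sink1.hdfs.kerberosPrincipal = $KERBEROS_PRINCIPAL
  tier1.sinks.sink1.hdfs.kerberosKeytab = $KERBEROS_KEYTAB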
Flume Home Directory Home directory for the Flume user. The File Channel uses checkpoint and data directory paths within this home directory. /var/lib/flume-ng agent_home_dir true
Agent Name Used to select an agent configuration to use from flume.conf. Multiple agents may share the same agent name, in which case they will be assigned the same agent configuration. tier1 agent_name true
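For example, a single flume.conf can define two agents; a role with Agent Name set to tier2 loads only the tier2.* properties. A minimal sketch:

  tier1.sources = source1
  tier1.channels = channel1
  tier1.sinks = sink1

  tier2.sources = source2
  tier2.channels = channel2
  tier2.sinks = sink2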
Plugin directories List of Flume plugin directories. This overrides the default Flume plugin directory. /usr/lib/flume-ng/plugins.d:/var/lib/flume-ng/plugins.d agent_plugin_dirs true
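Each entry uses the standard Flume plugins.d layout; a sketch for a hypothetical custom-sink plugin:

  /var/lib/flume-ng/plugins.d/custom-sink/lib/custom-sink.jar        # the plugin's jar(s)
  /var/lib/flume-ng/plugins.d/custom-sink/libext/its-dependency.jar  # the plugin's dependencies
  /var/lib/flume-ng/plugins.d/custom-sink/native/libcustom.so        # any native libraries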

Performance

Display Name Description Related Name Default Value API Name Required
Maximum Process File Descriptors If configured, overrides the process soft and hard rlimits (also called ulimits) for file descriptors to the configured value. rlimit_fds false

Ports and Addresses

Display Name Description Related Name Default Value API Name Required
HTTP Port The port on which the Flume web server listens for requests. 41414 agent_http_port true
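This is the port of Flume's built-in HTTP metrics server, which returns the agent's counters as JSON. Assuming the default port and the standard /metrics path, the data can be fetched with, for example:

  curl http://agent-host.example.com:41414/metrics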

Resource Management

Display Name Description Related Name Default Value API Name Required
Java Heap Size of Agent in Bytes Maximum size in bytes for the Java Process heap memory. Passed to Java -Xmx. 1 GiB agent_java_heapsize false
Cgroup CPU Shares Number of CPU shares to assign to this role. The greater the number of shares, the larger the share of the host's CPUs that will be given to this role when the host experiences CPU contention. Must be between 2 and 262144. Defaults to 1024 for processes not managed by Cloudera Manager. cpu.shares 1024 rm_cpu_shares true
Cgroup I/O Weight Weight for the read I/O requests issued by this role. The greater the weight, the higher the priority of the requests when the host experiences I/O contention. Must be between 100 and 1000. Defaults to 1000 for processes not managed by Cloudera Manager. blkio.weight 500 rm_io_weight true
Cgroup Memory Hard Limit Hard memory limit to assign to this role, enforced by the Linux kernel. When the limit is reached, the kernel will reclaim pages charged to the process. If reclaiming fails, the kernel may kill the process. Both anonymous and page cache pages count toward the limit. Use a value of -1 B to specify no limit. By default, processes not managed by Cloudera Manager have no limit. memory.limit_in_bytes -1 MiB rm_memory_hard_limit true
Cgroup Memory Soft Limit Soft memory limit to assign to this role, enforced by the Linux kernel. When the limit is reached, the kernel will reclaim pages charged to the process only if the host is facing memory pressure. If reclaiming fails, the kernel may kill the process. Both anonymous and page cache pages count toward the limit. Use a value of -1 B to specify no limit. By default, processes not managed by Cloudera Manager have no limit. memory.soft_limit_in_bytes -1 MiB rm_memory_soft_limit true

Stacks Collection

Display Name Description Related Name Default Value API Name Required
Stacks Collection Data Retention The amount of stacks data that is retained. After the retention limit is reached, the oldest data is deleted. stacks_collection_data_retention 100 MiB stacks_collection_data_retention false
Stacks Collection Directory The directory in which stacks logs are placed. If not set, stacks are logged into a stacks subdirectory of the role's log directory. stacks_collection_directory stacks_collection_directory false
Stacks Collection Enabled Whether or not periodic stacks collection is enabled. stacks_collection_enabled false stacks_collection_enabled true
Stacks Collection Frequency The frequency with which stacks are collected. stacks_collection_frequency 5.0 second(s) stacks_collection_frequency false
Stacks Collection Method The method used to collect stacks. The jstack option involves periodically running the jstack command against the role's daemon process. The servlet method is available for those roles that have an HTTP server endpoint exposing the current stacks traces of all threads. When the servlet method is selected, that HTTP endpoint is periodically scraped. stacks_collection_method jstack stacks_collection_method false

Service-Wide

Advanced

Display Name Description Related Name Default Value API Name Required
Flume Service Environment Advanced Configuration Snippet (Safety Valve) For advanced use only, key-value pairs (one on each line) to be inserted into a role's environment. Applies to configurations of all roles in this service except client configuration. flume_env_safety_valve false

Monitoring

Display Name Description Related Name Default Value API Name Required
Enable Log Event Capture When set, each role identifies important log events and forwards them to Cloudera Manager. true catch_events false
Enable Service Level Health Alerts When set, Cloudera Manager will send alerts when the health of this service reaches the threshold specified by the EventServer setting eventserver_health_events_alert_threshold. true enable_alerts false
Enable Configuration Change Alerts When set, Cloudera Manager will send alerts when this entity's configuration changes. false enable_config_alerts false
Healthy Agent Monitoring Thresholds The health test thresholds of the overall Agent health. The check returns "Concerning" health if the percentage of "Healthy" Agents falls below the warning threshold. The check is unhealthy if the total percentage of "Healthy" and "Concerning" Agents falls below the critical threshold. Warning: 95.0 %, Critical: Never flume_agents_healthy_thresholds false
Maximum displayed Flume metrics components Sets the maximum number of Flume components that will be returned under Flume Metric Details. Increasing this value will negatively impact the interactive performance of the Flume Metrics Details page. 1000 flume_context_groups_request_limit false
Log Event Retry Frequency The frequency, in seconds, with which the log4j event publication appender retries sending undelivered log events to the Event Server. 30 log_event_retry_frequency false
Service Triggers The configured triggers for this service. This is a JSON formatted list of triggers. These triggers are evaluated as part of the health system. Every trigger expression is parsed, and if the trigger condition is met, the list of actions provided in the trigger expression is executed. Each trigger has all of the following fields:
  • triggerName (mandatory) - The name of the trigger. This value must be unique for the specific service.
  • triggerExpression (mandatory) - A tsquery expression representing the trigger.
  • streamThreshold (optional) - The maximum number of streams that can satisfy a condition of a trigger before the condition fires. By default set to 0, and any stream returned causes the condition to fire.
  • enabled (optional) - By default set to 'true'. If set to 'false', the trigger will not be evaluated.
  • expressionEditorConfig (optional) - Metadata for the trigger editor. If present, the trigger should only be edited from the Edit Trigger page; editing the trigger here may lead to inconsistencies.
For example, the following JSON formatted trigger fires if there are more than 10 DataNodes with more than 500 file descriptors open: [{"triggerName": "sample-trigger", "triggerExpression": "IF (SELECT fd_open WHERE roleType = DataNode and last(fd_open) > 500) DO health:bad", "streamThreshold": 10, "enabled": "true"}] See the trigger rules documentation for more details on how to write triggers using tsquery. The JSON format is evolving and may change in the future; as a result, backward compatibility is not guaranteed between releases at this time.
[] service_triggers true
Service Monitor Derived Configs Advanced Configuration Snippet (Safety Valve) For advanced use only, a list of derived configuration properties that will be used by the Service Monitor instead of the default ones. smon_derived_configs_safety_valve false

Other

Display Name Description Related Name Default Value API Name Required
HBase Service Name of the HBase service that this Flume service instance depends on. hbase_service false
HDFS Service Name of the HDFS service that this Flume service instance depends on. hdfs_service false
System Group The group that this service's processes should run as. flume process_groupname true
System User The user that this service's processes should run as. flume process_username true
Solr Service Name of the Solr service that this Flume service instance depends on. solr_service false

Security

Display Name Description Related Name Default Value API Name Required
Flume TLS/SSL Certificate Trust Store File The location on disk of the trust store, in .jks format, used to confirm the authenticity of TLS/SSL servers that Flume might connect to. This is used when Flume is the client in a TLS/SSL connection. This trust store must contain the certificate(s) used to sign the service(s) being connected to. If this parameter is not provided, the default list of well-known certificate authorities is used instead. flume_truststore_file false
Flume TLS/SSL Certificate Trust Store Password The password for the Flume TLS/SSL Certificate Trust Store File. Note that this password is not required to access the trust store: this field can be left blank. This password provides optional integrity checking of the file. The contents of trust stores are certificates, and certificates are public information. flume_truststore_password false
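A trust store in .jks format can be created or extended with the JDK keytool utility. A minimal sketch, assuming a CA certificate in ca-cert.pem; the alias, path, and password are examples:

  keytool -importcert -alias solr-ca -file ca-cert.pem \
      -keystore /etc/flume-ng/conf/truststore.jks -storepass changeit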
Kerberos Principal Kerberos principal short name used by all roles of this service. flume kerberos_princ_name true