Use the solrctl utility to manage a SolrCloud deployment. You can manipulate SolrCloud collections, SolrCloud collection instance directories, and individual cores.
In general, if an operation succeeds, solrctl exits silently with a success exit code. If an error occurs, solrctl prints a diagnostic message and exits with a failure exit code. solrctl supports specifying a log4j.properties file by setting the LOG4J_PROPS environment variable. By default, LOG4J_PROPS points to the log4j.properties file in the Solr configuration directory; for example, /etc/solr/conf/log4j.properties. Many solrctl commands redirect stderr to /dev/null, so Cloudera recommends that your log4j.properties file specify a location other than stderr for log output.
You can run solrctl on any host that is configured as part of the SolrCloud deployment. To run a solrctl command on a host outside of the SolrCloud deployment, ensure that the SolrCloud hosts are reachable and provide the --zk and --solr command-line options.
If you are using solrctl to manage your deployment in an environment that requires Kerberos authentication, you must have a valid Kerberos ticket, which you can get using kinit.
You can see examples of using solrctl in Deploying Cloudera Search.
For collection configuration, users have the option of interacting directly with ZooKeeper using the instancedir option or using Solr's ConfigSet API using the config option. For more information, see Understanding configs and instancedirs.
You can initialize the state of the entire SolrCloud deployment and each individual host within the SolrCloud deployment by using solrctl. The general solrctl command syntax is:
solrctl [options] command [command-arg] [command [command-arg]] ...
Each element and its possible values are described in the following sections.
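As a sketch of the general syntax, a single invocation can chain multiple commands; the ZooKeeper ensemble address below is hypothetical:

```shell
# Substitute your own ZooKeeper ensemble for the hypothetical host name.
solrctl --zk zk01.example.com:2181/solr \
        instancedir --list collection --list
```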
- --solr solr_uri: Directs solrctl to a SolrCloud web API available at a given URI. This option is required for hosts running outside of SolrCloud. A sample URI might be: http://host1.cluster.com:8983/solr.
- --zk zk_ensemble: Directs solrctl to a particular ZooKeeper coordination service ensemble. This option is required for hosts running outside of SolrCloud. For example: host1.cluster.com:2181,host2.cluster.com:2181/solr. Output from solrctl commands that use the zkcli option is sent to /dev/null, so no results are displayed.
- --jaas jaas.conf: Used to identify a JAAS configuration that specifies the principal with permissions to modify solr metadata. The principal is typically "solr". In Kerberos-enabled environments where ZooKeeper ACLs protect solr metadata, you must use this parameter if you want to use solrctl to modify metadata.
- --help: Prints help.
- --quiet: Suppresses most solrctl messages.
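For example, running solrctl from a host outside the SolrCloud deployment might look like the following; the host names are illustrative:

```shell
# Both --zk and --solr are required when running outside the deployment.
solrctl --zk zk01.example.com:2181,zk02.example.com:2181/solr \
        --solr http://search01.example.com:8983/solr \
        collection --list
```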
The solrctl commands init, instancedir, config, collection, cluster, and sentry affect the entire SolrCloud deployment and are run only once per required operation.
The solrctl core command affects a single SolrCloud host.
- init [--force]: The init command, which initializes the overall state of the SolrCloud deployment, must be run before starting solr-server daemons for the first time. Use this command cautiously because it erases all SolrCloud deployment state information. After successful initialization, you cannot recover any previous state.
- instancedir [--generate path [-schemaless]] [--create name path] [--update name path] [--get name path] [--delete name] [--list]: Manipulates instance directories. The following options are supported:
- --generate path: Generates a template instance directory. The template is stored at the specified path on the local filesystem and has configuration files under ./conf.
- -schemaless: Generates a schemaless template of the instance directory. For more information on schemaless support, see Schemaless Mode Overview and Best Practices.
- --create name path: Pushes a copy of the instance directory from the local filesystem to SolrCloud. If an instance directory with the same name is already available to SolrCloud, this command fails. See --update for modifying instance directories that already exist.
- --update name path: Updates an existing SolrCloud copy of an instance directory based on the files in the local filesystem. This command is analogous to --delete name followed by --create name path.
- --get name path: Downloads the named instance directory to the specified path on the local filesystem. Once downloaded, the files can be further edited.
- --delete name: Deletes the instance directory name from SolrCloud.
- --list: Prints a list of all available instance directories known to SolrCloud.
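The options above combine into a typical instance directory workflow; the directory and name below are illustrative:

```shell
# Generate a local template, push it to SolrCloud, and verify it is registered.
solrctl instancedir --generate /tmp/myconfig
solrctl instancedir --create myconfig /tmp/myconfig
solrctl instancedir --list
# After editing the local files (for example, conf/schema.xml),
# replace the SolrCloud copy:
solrctl instancedir --update myconfig /tmp/myconfig
```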
- config [--create name baseConfig [-p name=value]...] [--delete name]: Manipulates configs. The following options are supported:
- --create name baseConfig [-p name=value]...: Creates a new config based on an existing config. The config is created with the specified name, using baseConfig as the template. -p can be used to override a baseConfig setting. immutable is the only property that supports override. For more information about existing templates, see Included Immutable Config Templates.
- --delete name: Deletes a config.
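As a sketch, deriving a mutable config from a template might look like the following; the new config name and template name are illustrative:

```shell
# Create a config named myManagedConfig from a base config,
# overriding the immutable property so it can be modified later.
solrctl config --create myManagedConfig managedTemplate -p immutable=false
```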
- collection [--create name -s <numShards> [-a] [-c <collection.configName>] [-r <replicationFactor>] [-m <maxShardsPerHost>] [-n <createHostSet>]] [--delete name] [--reload name] [--stat name] [--list] [--deletedocs name]: Manipulates collections. The following options are supported:
- --create name -s <numShards> [-a] [-c <collection.configName>] [-r <replicationFactor>] [-m <maxShardsPerHost>] [-n <createHostSet>]: Creates a new collection.
New collections are given the specified name and are sharded to <numShards>.
The -a option configures auto-addition of replicas if machines hosting existing shards become unavailable.
SolrCloud hosts are configured using the <collection.configName> instance directory. Replication is configured by a factor of <replicationFactor>. The maximum shards per host is determined by <maxShardsPerHost>, and the collection is allocated to the hosts specified in <createHostSet>.
The only required parameters are name and numShards. If collection.configName is not provided, it is assumed to be the same as the name of the collection.
- --delete name: Deletes a collection.
- --reload name: Reloads a collection.
- --stat name: Outputs SolrCloud-specific run-time information for a collection.
- --list: Lists all collections registered in SolrCloud.
- --deletedocs name: Purges all indexed documents from a collection.
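Putting the collection options together, a sketch of a collection lifecycle might look like this; the collection and config names are illustrative:

```shell
# Create a two-shard collection using the "myconfig" instance directory,
# with two replicas per shard and automatic replica addition enabled.
solrctl collection --create mycollection -s 2 -a -c myconfig -r 2
solrctl collection --stat mycollection
# Purge all indexed documents without deleting the collection itself:
solrctl collection --deletedocs mycollection
```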
- core [--create name [-p name=value]...] [--reload name] [--unload name] [--status name]: Manipulates cores. This is one of two commands that you can run on a particular SolrCloud host. The following options are supported:
- --create name [-p name=value]...: Creates a new core on a specified SolrCloud host. The core is configured using name=value pairs. For more information about configuration options, see Solr documentation.
- --reload name: Reloads a core.
- --unload name: Unloads a core.
- --status name: Prints status of a core.
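Because core operates on an individual host, the --solr option identifies which host to act on. A sketch, with hypothetical host and core names:

```shell
# Check and then reload a core on one specific SolrCloud host.
solrctl --solr http://search01.example.com:8983/solr core --status mycore
solrctl --solr http://search01.example.com:8983/solr core --reload mycore
```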
- cluster [--get-solrxml file] [--put-solrxml file] [--set-property name value] [--remove-property name] [--get-clusterstate file]: Manages cluster configuration. The following options are supported:
- --get-solrxml file: Downloads the cluster configuration file solr.xml from ZooKeeper to the local system.
- --put-solrxml file: Uploads the specified file to ZooKeeper as the cluster configuration file solr.xml.
- --set-property name value: Sets the specified property to the specified value. This is typically used in deployments that are not managed by Cloudera Manager. For example, to configure a cluster to use TLS/SSL, you might use a command of the form:
solrctl --zk <solr_zk_conf> cluster --set-property urlScheme https
- --remove-property name: Removes the specified property.
- --get-clusterstate file: Downloads the clusterstate.json file from ZooKeeper to the local system.
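A common pattern is to round-trip solr.xml through the local filesystem; the ZooKeeper address and path below are illustrative:

```shell
# Download solr.xml from ZooKeeper, edit it locally, then upload the result.
solrctl --zk zk01.example.com:2181/solr cluster --get-solrxml /tmp/solr.xml
# ... edit /tmp/solr.xml ...
solrctl --zk zk01.example.com:2181/solr cluster --put-solrxml /tmp/solr.xml
```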
- sentry [--create-role role] [--drop-role role] [--add-role-group role group] [--delete-role-group role group] [--list-roles [-g group]] [--grant-privilege role privilege] [--revoke-privilege role privilege] [--list-privileges role] [--convert-policy-file file [-dry-run]]: Manages Sentry configuration. The following options are supported:
- --create-role role: Creates a new Sentry role with the specified name.
- --drop-role role: Deletes the specified Sentry role.
- --add-role-group role group: Adds an existing Sentry role to the specified group.
- --delete-role-group role group: Removes an existing Sentry role from the specified group.
- --list-roles [-g group]: Lists all roles. When -g is used, lists only the roles in the specified group.
- --grant-privilege role privilege: Grants the specified privilege to the specified role.
- --revoke-privilege role privilege: Revokes the specified privilege from the specified role.
- --list-privileges role: Lists all privileges granted to the specified role.
- --convert-policy-file file [-dry-run]: Converts the specified policy file to permissions in the Sentry service. This command adds existing roles, adds those roles to their groups, and grants permissions.
The file-based model allows case-sensitive role names. During conversion, all roles and groups are converted to lower case.
- If a policy-file conversion will change the case of roles or groups, a warning is presented. Policy conversion can proceed, but if you have enabled document-level security and use role names as your tokens, you must re-index using the new lower case role names after conversion is complete.
- If a policy-file conversion will change the case of roles or groups, creating a name collision, an error occurs and conversion cannot occur. In such a case, you must eliminate the collisions before proceeding. For example, you could rename or delete all but one of the names that cause a collision.
The -dry-run option runs the conversion process but sends the results to stdout without applying any changes. This lets you preview and debug a conversion before committing to it.
After converting the policy file to permissions in the Sentry service, you may want to enable Sentry for Solr, as described in Migrating from Sentry Policy Files to the Sentry Service.
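A sketch of the Sentry subcommands in sequence follows; the role name, group name, and privilege string are illustrative, and the privilege syntax shown is an assumption based on the collection/action form used by Sentry for Solr:

```shell
# Create a role, attach it to a group, and grant a collection-level privilege.
solrctl sentry --create-role analysts
solrctl sentry --add-role-group analysts analyst_group
solrctl sentry --grant-privilege analysts 'collection=logs->action=Query'
solrctl sentry --list-privileges analysts
```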