This is a reference list of terms related to Cloudera products and services. Additional information is available from a number of resources.
access control list (ACL)
A list of permissions associated with an object in a computer file system. An ACL specifies which users or processes are allowed to access an object, and what operations can be performed.
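The idea can be sketched in plain Python (a toy model only, not the API of any particular filesystem):

```python
# Toy model of an access control list: each entry maps a
# principal (user or group) to the set of operations it may perform.
acl = {
    "alice": {"read", "write"},
    "analysts": {"read"},
}

def is_allowed(principal, operation, acl):
    """Return True if the principal's ACL entry permits the operation."""
    return operation in acl.get(principal, set())

print(is_allowed("alice", "write", acl))     # True
print(is_allowed("analysts", "write", acl))  # False
```

Real systems add details such as group membership, default entries, and a permission mask, but the core check is the same lookup.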
A sorted, distributed key-value store based on the Google BigTable design. Apache Accumulo is a NoSQL DBMS that operates over HDFS, and supports efficient storage and retrieval of structured data, including queries for ranges. Accumulo tables can be used as input and output for MapReduce jobs. Accumulo includes automatic load-balancing and partitioning, data compression, and fine-grained security labels.
In Spark, a function that returns a value to the driver after running a computation on an RDD.
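The distinction between lazy transformations and eager actions can be sketched in plain Python (a toy stand-in, not Spark's actual API):

```python
class TinyRDD:
    """Toy stand-in for an RDD: transformations are lazy, actions compute."""
    def __init__(self, data):
        self._data = data        # pretend this lives on the cluster
        self._ops = []           # deferred transformations

    def map(self, fn):
        # Transformation: returns a new "RDD"; no work happens yet.
        rdd = TinyRDD(self._data)
        rdd._ops = self._ops + [fn]
        return rdd

    def count(self):
        # Action: runs the deferred pipeline and returns a value
        # to the "driver".
        result = self._data
        for fn in self._ops:
            result = [fn(x) for x in result]
        return len(result)

rdd = TinyRDD([1, 2, 3]).map(lambda x: x * 10)
print(rdd.count())  # 3
```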
Apache Software Foundation gateway for open-source projects that aim to become Apache projects. Incubating projects are open source and may or may not become Apache projects.
Apache Software Foundation (ASF)
A non-profit corporation that supports various open-source software products, including Apache Hadoop and related projects on which Cloudera products are based. Apache projects are developed by teams of collaborators and protected by an ASF license that provides legal protection to volunteers who work on Apache products and protects the Apache brand name.
Apache projects are characterized by a collaborative, consensus-based development process and an open and pragmatic software license. Each project is managed by a self-selected team of technical experts who are active contributors to the project.
Cloudera employees are major contributors to many Apache projects.
A JAR containing a Spark application. In some cases you can use an "uber" JAR containing your application with its dependencies. The JAR should never include Hadoop or Spark libraries, however, because these are added at run time.
audit, audit event
Log of activity performed in the cluster. Many cluster services write an entry in an audit log for each action performed; these logs are collected by Cloudera Navigator Audit Server and can be reviewed through the Navigator console. Some ways to use audit logs include measuring activity against specific data assets or by individual users or groups, tracing failed attempts to access data assets, or identifying who accessed specific data assets and when.
A serialization system for storing and transmitting data over a network. Apache Avro supports rich data structures, a compact binary encoding, and a container file for sequences of Avro data (often referred to as Avro data files). Avro is language-independent and several language bindings are available, including Java, C, C++, Python, and Ruby. All components in CDH that produce or consume files support Avro data files.
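An Avro schema is itself JSON. A record schema for a simple user type might look like the following (the type and field names here are illustrative, not from any particular dataset):

```json
{
  "type": "record",
  "name": "User",
  "fields": [
    {"name": "name", "type": "string"},
    {"name": "age",  "type": ["null", "int"], "default": null}
  ]
}
```

The union `["null", "int"]` with a `null` default is the conventional way to declare an optional field.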
A Hue application that enables you to perform queries on Hive. You can create Hive tables, load data, and run and manage Hive queries.
Data sets in which the input/output velocity, variety of data structure, and volume exceed the capabilities of systems that were designed for smaller data sets, making it difficult to capture, manage, and process the data within a tolerable elapsed time. Big data sizes are expanding, currently ranging from terabytes to many petabytes in a single data set.
A compressed, high-performance, column-oriented database built on Google File System (GFS). The BigTable design was the inspiration for HBase and Accumulo, but the implementation, unlike other Google projects such as Protocol Buffers, is proprietary.
An Apache project to develop the packaging and interoperability testing of the Apache Hadoop ecosystem projects.
In Cloudera Navigator, business metadata, such as key-value pairs and tags, that is added to extracted entities. You can add and modify user-defined properties before or after entities are extracted.
Cloudera distribution containing core Hadoop (HDFS, MapReduce, YARN) and the following related projects: Avro, Flume, Fuse-DFS, HBase, Hive, Hue, Impala, Kudu, Oozie, Pig, Cloudera Search, Sentry, Spark, Sqoop, ZooKeeper, DataFu, and Kite.
CDH is free, 100% open source, and licensed under the Apache 2.0 license. CDH is supported on many Linux distributions.
- Basic Edition offers an enterprise-ready distribution of CDH together with Cloudera Manager and other advanced management tools and technical support.
- Flex Edition offers an enterprise-ready distribution of CDH together with Cloudera Manager and other advanced management tools and technical support. In addition, the Flex Edition includes open source indemnification, an optional premium support extension for mission-critical environments, and your choice of one of the following advanced components: Apache Accumulo or Apache HBase for online NoSQL storage and applications; Impala for interactive analytic SQL queries; Cloudera Search for interactive search; Cloudera Navigator for data management including data auditing, lineage, and discovery; or Apache Spark for interactive analytics and stream processing.
- Data Hub Edition offers an enterprise-ready distribution of CDH together with Cloudera Manager and other advanced management tools and technical support. In addition, the Data Hub Edition includes open source indemnification, an optional premium support extension for mission-critical environments, and unlimited use of all of the following advanced components: Apache Accumulo and Apache HBase for online NoSQL storage and applications; Impala for interactive analytic SQL queries; Cloudera Search for interactive search; Cloudera Navigator for data management including data auditing, lineage, and discovery; and Apache Spark for interactive analytics and stream processing.
A free download that contains CDH and Cloudera Manager, which offers robust cluster management capabilities like automated deployment, centralized administration, monitoring, and diagnostic tools. Cloudera Express enables data-driven enterprises to evaluate CDH and Cloudera Manager.
An end-to-end management application for CDH, Impala, and Cloudera Search. Cloudera Manager enables administrators to easily and effectively provision, monitor, and manage Hadoop clusters and CDH installations. Cloudera Manager is available in two versions: Cloudera Express and Cloudera Enterprise.
A fully integrated data management and security tool for the Hadoop platform. Cloudera Navigator provides three categories of functionality:
- Auditing data access and verifying access privileges. Cloudera Navigator allows administrators to configure, collect, and view audit events, and generate reports that list the HDFS access permissions granted to groups. Cloudera Navigator tracks access permissions and actual accesses to all entities in HDFS, Hive, HBase, Hue, Impala, Sentry, and Solr.
- Searching metadata and visualizing lineage. Metadata management features allow DBAs, data modelers, business analysts, and data scientists to search for, amend the properties of, and tag data entities. Cloudera Navigator supports tracking the lineage of HDFS files, datasets, and directories, Hive tables and columns, MapReduce and YARN jobs, Hive queries, Impala queries, Pig scripts, Oozie workflows, Spark jobs, and Sqoop jobs.
- Securing data and simplifying storage and management of encryption keys. Data encryption and key management provide protection against potential threats by malicious actors on the network or in the datacenter. It is also a requirement for meeting key compliance initiatives and ensuring the integrity of enterprise data.
A fully integrated search tool for the Apache Hadoop platform that integrates Apache Solr, including Apache Lucene, Apache SolrCloud, and Apache Tika, with CDH. Cloudera Search makes searching more scalable, easy to use, and optimized for both near-real-time and batch-oriented indexing.
- A set of computers or racks of computers that contains an HDFS filesystem and runs MapReduce and other processes on that data. A pseudo-distributed cluster is a CDH installation run on a single machine that is useful for demonstrations and individual study.
- In Cloudera Manager, a logical entity that contains a set of hosts, a single version of CDH installed on the hosts, and the service and role instances running on the hosts. A host can belong to only one cluster. Cloudera Manager can manage multiple CDH clusters; however, each cluster can be associated with only a single Cloudera Manager Server or Cloudera Manager HA pair.
An external service for acquiring resources on the cluster: Spark Standalone or YARN.
- hard - A commit that starts the autowarm process, closes old searchers, and opens new ones. It may also trigger replication.
- soft - In NRT and SolrCloud deployments, makes documents searchable without requiring the work of hard commits.
A mechanism to reduce the size of a file so that it takes up less disk space for storage and consumes less network bandwidth when transferred. Common compression tools used with Apache Hadoop include gzip, bzip2, Snappy, and LZO.
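Python's standard library ships two of the common codecs (gzip and bzip2; Snappy and LZO require third-party packages), which makes the size trade-off easy to see:

```python
import bz2
import gzip

# Repetitive text compresses well; exact output sizes vary by codec
# and level, but both codecs shrink this input substantially.
data = b"hadoop " * 1000

gz = gzip.compress(data)
bz = bz2.compress(data)

print(len(data))            # 7000 bytes uncompressed
print(len(gz) < len(data))  # True
print(len(bz) < len(data))  # True
```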
container
A resource bucket and process space for a task. A container's resources consist of vcores and memory.
Usually refers to software for connecting external systems with Apache Hadoop. Some connectors work with Apache Sqoop to enable efficient data transfer between an external system and Hadoop. Other connectors translate ODBC driver calls from business intelligence systems into HiveQL queries.
In Cloudera Navigator, descriptions, key-value pairs, and tags that can be added to entities such as HDFS files, Hive tables, and YARN operations. You can add and modify business metadata before and after entities are extracted.
Data Encryption Key (DEK)
The encryption/decryption key assigned to a file in an encryption zone. Each file has its own DEK, and these DEKs are never stored persistently unless they are encrypted with the encryption zone's key.
A discipline that builds on techniques and theories from many fields, including mathematics, statistics, and computer science, with the goal of extracting meaning from data and creating data products.
A collection of Pig user-defined functions (UDFs) that can be used by Apache Pig for statistical analysis. Apache DataFu was deprecated in CDH 5.9 and is removed from CDH 6.0 Beta 1. For more information, see Removal of the Apache DataFu Pig JAR from CDH 6.
A repository of a set of integrated information objects. Datastores include repositories such as databases and files.
A collection of records, similar to a relational database table. Records are similar to table rows, but the columns can contain not only strings or numbers, but also nested data structures such as lists, maps, and other records.
A category of SQL statements that affect database state rather than table data. Includes all the CREATE, ALTER, and DROP statements.
A system composed of multiple autonomous computers that communicate through a computer network.
In Apache Spark, a process that represents an application session. The driver is responsible for converting the application to a directed graph of individual steps to execute on the cluster. There is one driver per application.
dynamic resource pool
In Cloudera Manager, a named configuration of resources and a policy for scheduling the resources among YARN applications or Impala queries running in the pool.
Provides the ability to execute Solr commands without a separate servlet container. Use of embedded Solr is generally discouraged, because it is often adopted on the mistaken assumption that HTTP is too slow. In Cloudera Search, however, embedded Solr is advisable, particularly when a MapReduce indexing process is adopted.
Encrypted Data Encryption Key (EDEK)
An encrypted DEK, which is stored persistently as part of the file's metadata on the NameNode.
A directory in HDFS in which every file and subdirectory is encrypted. The files in this directory are transparently encrypted on write and transparently decrypted on read. Each encryption zone is associated with a key that is specified when the zone is created.
encryption zone key
Key used to encrypt EDEKs. When a new file is created in an encryption zone, the NameNode sends a request to the KMS to generate a new EDEK encrypted with the encryption zone key. When reading a file from an encryption zone, the NameNode provides the client with the file's EDEK and the encryption zone key version used to encrypt the EDEK. The client then sends a request to the KMS to decrypt the EDEK. If successful, the client uses the DEK to decrypt the file contents.
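The envelope-encryption pattern behind DEKs and EDEKs can be sketched with a toy XOR cipher (an illustration only; real HDFS encryption uses AES, and the unwrap step happens inside the KMS, never on the NameNode):

```python
import os

def xor(data, key):
    """Toy cipher: XOR data with a same-length key. NOT real cryptography."""
    return bytes(a ^ b for a, b in zip(data, key))

# The "KMS" holds the encryption zone key.
zone_key = os.urandom(32)

# Writing a file: generate a fresh DEK, encrypt the contents with it,
# and persist only the EDEK (the DEK wrapped under the zone key).
dek = os.urandom(32)
ciphertext = xor(b"secret file contents".ljust(32), dek)
edek = xor(dek, zone_key)          # stored in the file's metadata

# Reading: the client asks the "KMS" to unwrap the EDEK,
# then uses the recovered DEK to decrypt the contents.
recovered_dek = xor(edek, zone_key)
plaintext = xor(ciphertext, recovered_dek).rstrip()
print(plaintext)  # b'secret file contents'
```

The point of the pattern: the zone key never touches the data path, and the plaintext DEK is never stored.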
Enterprise Data Hub
An enterprise data hub (EDH), built on Apache Hadoop, provides a single central system for the storage and management of all data in the enterprise. An EDH runs the full range of workloads that enterprises require, including batch processing, interactive SQL, enterprise search, and advanced analytics, together with the integrations to existing systems, robust security, data management, and data protection.
A process that serves a Spark application. An executor runs multiple tasks over its lifetime and can run multiple tasks concurrently. A host may run several Spark executors, and many hosts run Spark executors for each application.
A construct that allows certain policy properties to be specified programmatically using Java, instead of string literals.
extract, load, transform (ELT)
A variation of Extract, Transform, Load (ETL). The process of transferring data from a source to an end target (a database or data warehouse), and then transforming the data as required.
extract, transform, load (ETL)
A process that involves extracting data from sources, transforming the data to fit operational needs, and loading the data into the end target, typically a database or data warehouse.
In Cloudera Manager and Cloudera Navigator, an explicit dimension of an entity that enables it to be accessed and filtered in multiple ways. Facets correspond to entity properties.
Arrangement of query results into categories, usually with counts for each category. You can use these categories to explore and further restrict search results to find the information you need.
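The counting step can be sketched in plain Python (the result documents and the "format" field here are made up for illustration):

```python
from collections import Counter

# Search results carrying a "format" field; a facet on that field
# counts how many results fall into each category.
results = [
    {"title": "report.csv", "format": "csv"},
    {"title": "logs.txt",   "format": "text"},
    {"title": "sales.csv",  "format": "csv"},
]

facets = Counter(doc["format"] for doc in results)
print(facets)  # Counter({'csv': 2, 'text': 1})
```

A user could then click the "csv" category to restrict the search to those two results.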
A design that enables a system to continue operation, possibly at a reduced level instead of failing completely, when some part of the system fails.
Level at which encryption and data masking can be applied. When protection is applied at this level, it is generally applied only to specific sensitive fields, such as credit card numbers, social security numbers, or names, not to all data.
A distributed, reliable, and available system for efficiently collecting, aggregating, and moving large amounts of text or streaming data from many different sources to a centralized datastore. Apache Flume is robust and fault tolerant and uses a simple, extensible data model that allows for online analytic application.
filter query (fq)
A clause that limits returned results in Cloudera Search. For example, “fq=sex:male” limits results to males. Filter queries are cached and reused.
A type of role that typically provides client access to specific cluster services. For example, HDFS, Hive, Kafka, MapReduce, Solr, and Spark each have gateway roles to provide access for their clients to their respective services. Gateway roles do not always have "gateway" in their names, nor are they exclusively for client access. For example, Hue Kerberos Ticket Renewer is a gateway role that proxies tickets from Kerberos.
The node supporting one or more gateway roles is sometimes referred to as the gateway node or edge node, with the notion of "edge" common in network or cloud environments. In terms of the Cloudera cluster, the gateway nodes in the cluster receive the appropriate client configuration files when Deploy Client Configuration is selected from the Actions menu in Cloudera Manager Admin Console.
A large-scale, fault-tolerant, graph-processing framework that runs on Apache Hadoop. Apache Giraph features include master computation, sharded aggregators, edge-oriented input, and out-of-core computation.
See high availability.
A free, open source software framework that supports data-intensive distributed applications. The core components of Apache Hadoop are the HDFS and the MapReduce and YARN processing frameworks. The term is also used for an ecosystem of projects related to Hadoop, under the umbrella of infrastructure for distributed computing and large-scale data processing.
Hadoop Distributed File System (HDFS)
A user space filesystem designed for storing very large files with streaming data access patterns, running on clusters of industry-standard machines. HDFS defines three components:
- NameNode - Maintains the namespace tree for HDFS and a mapping of file blocks to DataNodes where the data is stored. A simple HDFS cluster can have only one primary NameNode, supported by a secondary NameNode that periodically compresses the NameNode edits log file that contains a list of HDFS metadata modifications. This reduces the amount of disk space consumed by the log file on the NameNode, which also reduces the restart time for the primary NameNode. A high availability cluster contains two NameNodes: active and standby.
- DataNode - Stores data in a Hadoop cluster and is the name of the daemon that manages the data. File data is replicated on multiple DataNodes for reliability and so that localized computation can be executed near the data.
- JournalNode - Maintains a directory to log the modifications to the namespace metadata when using the Quorum-based Storage mechanism for providing high availability. During failover, the NameNode standby ensures that it has applied all of the edits from the JournalNodes before promoting itself to the active state.
An industry conference for Hadoop users, contributors, administrators, and application developers.
A scalable, distributed, column-oriented datastore. Apache HBase provides real-time read/write random access to very large datasets hosted on HDFS.
An industry conference for Apache HBase users, contributors, administrators, and application developers.
A storage framework that supports multiple storage types (ARCHIVE, DISK, SSD, RAM_DISK) determined by a storage policy.
high availability (HA)
A system and implementation design to keep a service available at all times in case of failure, without regard to its performance.
A server process that supports clients that connect to Hive over a Thrift connection. The name also refers to a Thrift protocol used by both Impala and Hive.
A server process that supports clients that connect to Hive over a network connection. These clients can be native command-line editors or applications and tools that use an ODBC or JDBC driver. The name also refers to a Thrift protocol used by both Impala and Hive.
The name of the SQL dialect used by the Hive component. It uses a syntax that is similar to standard SQL to execute MapReduce jobs on HDFS. HiveQL does not support all SQL functionality: transactions and materialized views are not supported, and support for indexes and subqueries is limited. It also supports features that are not part of standard SQL, such as multitable inserts and create table as select.
Internally, a compiler translates a HiveQL statement into a directed acyclic graph of MapReduce jobs, which are submitted to Hadoop for execution. Beeswax, which is included in Hue, provides a graphical front end for HiveQL queries.
In Cloudera Manager, a physical or virtual machine that runs role instances. A host can belong to only one cluster.
A set of role groups in Cloudera Manager. When a template is applied to a host, a role instance from each role group is created and assigned to that host.
A platform for building custom GUI applications for CDH services and a tool containing the following built-in applications: an application for submitting jobs, Apache Pig, Apache HBase, and Sqoop 2 shells, Pig Editor, the Beeswax Hive UI, Impala query editor, Solr Search application, Hive metastore manager, Oozie application editor, scheduler, and submitter, Apache HBase Browser, Sqoop 2 application, HDFS file manager, and MapReduce and YARN job browser.
Official name: Apache Impala. A service that enables real-time querying of data stored in HDFS or HBase. It supports the same metadata and ODBC and JDBC drivers as Apache Hive and a query language based on the Hive Standard Query Language (HiveQL). To avoid latency, Impala circumvents MapReduce to directly access data through a specialized distributed query engine that is similar to those found in commercial parallel RDBMS.
See Apache Incubator.
A data structure that improves the speed of data retrieval on a database table at the cost of slower writes and increased storage space. Indexes can be created using one or more columns of a database table, providing the basis for both rapid random lookups and efficient access of ordered records.
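The trade-off can be sketched with a dictionary standing in for the index (a toy model; real database indexes are typically B-trees or similar structures):

```python
# Without an index: finding a row by key scans the whole table, O(n).
table = [("a1", "alice"), ("b2", "bob"), ("c3", "carol")]

def scan(key):
    """Full scan: examine rows until the key matches."""
    return next(row for row in table if row[0] == key)

# With an index: extra storage buys O(1) random lookup, at the cost
# of maintaining the index on every insert, update, and delete.
index = {row[0]: row for row in table}

print(scan("b2") == index["b2"])  # True: same row, different cost
```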
A client-side adapter that implements the JDBC Java programming language API for accessing relational database management systems.
A parallel computation consisting of multiple tasks that gets spawned in response to a Spark action.
See MapReduce v1 (MRv1).
A distributed publish-subscribe messaging system that provides high throughput for publishing and subscribing, as well as replication to prevent data loss. Apache Kafka is frequently used for log collection and stream processing and is often (but not exclusively) used in tandem with Hadoop, Apache Storm, and Spark Streaming.
An authentication protocol in wide use since it was first developed at MIT in the 1980s; the current version, Kerberos Version 5, was standardized by the IETF in 2005 (RFC 4120). Cloudera recommends using Kerberos to secure clusters, by integrating either MIT Kerberos or Microsoft Active Directory (which uses Kerberos). With Kerberos enabled, user authentication is required. Once users authenticate, other components of the cluster can also be leveraged (for example, Sentry's role-based access privileges) to provide appropriate, secure access to the cluster. See Cloudera Security for more information.
Key Management Server (KMS)
Hadoop service that interfaces with a backing key store on behalf of HDFS daemons and clients. Both the backing key store and the KMS implement the Hadoop KeyProvider client API.
A collection of libraries, tools, examples, and documentation engineered to simplify the most common tasks when working with CDH. Just like CDH, Kite is 100% free, open source, and licensed under the Apache License v2, so you can use the code any way you choose in your existing commercial code base or open source project.
A columnar storage manager for the Hadoop platform. Like other Hadoop ecosystem applications, Apache Kudu runs on commodity hardware, is horizontally scalable, and supports highly available operation.
Fast processing, integration with Hadoop ecosystem components, high availability, and other benefits make it ideal for a variety of applications: reporting applications where new data must be immediately available for end users; time-series applications that must support queries across large amounts of historic data while simultaneously returning granular queries about an individual entity; and applications that use predictive models to make real-time decisions.
In Cloudera Navigator, a directed graph that depicts an entity and its relationship with other entities.
A Unix-like computer operating system assembled under the model of free, open-source software development and distribution. Linux is a leading operating system on servers, mainframe computers, supercomputers, and embedded systems such as mobile phones, tablets, network routers, televisions, and video game consoles. The major distributions of enterprise Linux are CentOS, Debian, RHEL, SLES, and Ubuntu.
A free, open source compression library. LZO compression provides a good balance between data size and speed of compression. The LZO compression algorithm is the most efficient of the codecs, using very little CPU. Its compression ratios are not as good as others, but its compression is still significant compared to the uncompressed file sizes. Unlike some other formats, LZO compressed files are splittable, enabling MapReduce to process splits in parallel.
LZO is published under the GNU General Public License and so is not included in CDH but can be used with CDH components; the Cloudera public Git repository hosts the hadoop-lzo package that provides a version of LZO that can be used with CDH.
A machine learning library for Hadoop that is scalable to large datasets, thereby simplifying the task of building intelligent applications. Typical use cases include:
- Recommendation mining - Identifies things users will like based on past preferences; for example, online shopping recommendations.
- Clustering - Groups similar items; for example, documents that address similar topics.
- Classification - Learns what members of existing categories have in common and then uses that information to categorize new items.
- Frequent item-set mining - Examines a set of item-groups (such as items in a query session or shopping cart content) and identifies items that usually appear together.
Apache Mahout was deprecated in CDH 5.5 and has been removed from CDH 6, starting with CDH 6.0 Beta. It is no longer supported.
In Cloudera Navigator, metadata in the form of key-value pairs that are added to entities such as HDFS files, Hive tables, and YARN operations. Managed properties are created by an administrator to be used on specific entity types. They are defined within a namespace and enforce conformance to value constraints (for example, require the value to be a date). You can add and modify managed properties after entities are extracted. Managed properties differ from user-defined properties in that they are centrally defined and managed; user-defined properties are created one-at-a-time without any constraints on consistency or content.
A distributed processing framework for processing and generating large data sets and an implementation that runs on large clusters of industry-standard machines.
The processing model defines two types of functions: a map function that processes a key-value pair to generate a set of intermediate key-value pairs, and a reduce function that merges all intermediate values associated with the same intermediate key.
A MapReduce job partitions the input data set into independent chunks that are processed by the map functions in a parallel manner. The framework sorts the outputs of the maps, which are then input to the reduce functions. Typically both the input and the output of the job are stored in a distributed filesystem.
The implementation provides an API for configuring and submitting jobs and job scheduling and management services; a library of search, sort, index, inverted index, and word co-occurrence algorithms; and the runtime. The runtime system partitions the input data, schedules the program's execution across a set of machines, handles machine failures, and manages the required inter-machine communication.
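The two-function model above can be illustrated with a word count in plain Python (a single-process sketch; the real framework distributes the map, shuffle, and reduce phases across machines):

```python
from collections import defaultdict
from itertools import chain

def map_fn(line):
    """Map: emit an intermediate (word, 1) pair for each word."""
    return [(word, 1) for word in line.split()]

def reduce_fn(key, values):
    """Reduce: merge all intermediate values for one key."""
    return key, sum(values)

lines = ["the quick fox", "the lazy dog"]

# Shuffle/sort: group the intermediate pairs by key.
groups = defaultdict(list)
for key, value in chain.from_iterable(map_fn(line) for line in lines):
    groups[key].append(value)

counts = dict(reduce_fn(k, v) for k, v in sorted(groups.items()))
print(counts)  # {'dog': 1, 'fox': 1, 'lazy': 1, 'quick': 1, 'the': 2}
```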
MapReduce v1 (MRv1)
The runtime framework on which MapReduce jobs execute. It defines two daemons:
- JobTracker - Coordinates running MapReduce jobs and provides resource management and job lifecycle management. In YARN, those functions are performed by two separate components.
- TaskTracker - Runs the tasks that the MapReduce jobs have been split into.
MapReduce v2 (MRv2)
A software project-management tool. Based on the concept of a project object model, Apache Maven can manage a project's build, reporting, and documentation. CDH artifacts are available in the Cloudera Maven repository.
Data about data. Metadata includes attributes that characterize the entities contained in or generated by the cluster. The name and date properties associated with a file created on a hard-disk drive are examples of metadata. Apache Sentry generates metadata to indicate who can access cluster files and tables. Cloudera Navigator collects system-generated metadata (technical metadata) for cluster entities and allows you to create additional metadata to associate with these entities.
Navigator Key Trustee
A virtual safe-deposit box for managing encryption keys, certificates, and passwords. It provides software-based key and certificate management that supports a variety of robust, configurable, and easy-to-implement policies governing access to the secure artifacts.
near real-time (NRT)
In Cloudera Search, the ability to search documents very soon after they are added to Solr. With SolrCloud, this is largely automatic and measured in seconds.
See Cloudera Navigator.
Level at which encryption and decryption are applied before and after data is sent across a network. In Hadoop, this includes data sent from client user interfaces as well as service-to-service communication like remote procedure calls (RPCs). This protection is available on virtually all transmissions within the Hadoop ecosystem using industry-standard protocols such as TLS/SSL.
A client-side adapter that implements a standard C programming language API for accessing relational database management systems.
A workflow and coordination service for Hadoop that orchestrates data ingest, store, transform, and analysis actions. Apache Oozie supports several types of Hadoop jobs, including MapReduce, Streaming, Pipes, Pig, Hive, and Sqoop.
Provides a simple, real-time, large-scale machine learning and predictive analytics infrastructure. Using Apache Hadoop, Oryx can continuously build models from a data stream. It also serves queries of those models in real time through an HTTP REST API, and can update models based on new streaming data.
A binary distribution format that contains compiled code and meta-information such as a package description, version, and dependencies.
An open source, column-oriented binary file format for Hadoop that supports very efficient compression and encoding schemes. Parquet allows compression schemes to be specified on a per-column level, and allows adding more encodings as they are invented and implemented. Encoding and compression are separated, allowing Parquet consumers to implement operators that work directly on encoded data without paying a decompression and decoding penalty, when possible.
A subset of the elements in an RDD. Partitions define the unit of parallelism; Spark processes elements within a partition in sequence and multiple partitions in parallel. When Spark reads an uncompressed text file from HDFS, it creates a single partition for each input split, which corresponds to a single HDFS block (although partition boundaries fall on line boundaries rather than exactly on block boundaries). For a compressed text file, Spark creates a single partition for the entire file, because compressed text files are not splittable.
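The "sequential within a partition, parallel across partitions" execution model can be sketched in plain Python (a toy illustration, not Spark's partitioner):

```python
from concurrent.futures import ThreadPoolExecutor

def partitioned(elements, num_partitions):
    """Split a list into roughly equal contiguous partitions."""
    size = -(-len(elements) // num_partitions)  # ceiling division
    return [elements[i:i + size] for i in range(0, len(elements), size)]

data = list(range(10))
parts = partitioned(data, 3)  # [[0,1,2,3], [4,5,6,7], [8,9]]

# Elements within each partition are processed in sequence (by sum);
# the partitions themselves are processed in parallel.
with ThreadPoolExecutor() as pool:
    partial_sums = list(pool.map(sum, parts))

print(sum(partial_sums))  # 45
```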
A Cloudera Manager instance that manages clusters and is used as the source of data to be replicated. See replication.
10^15 bytes; 1,000 terabytes or 1,000,000 gigabytes.
A data flow language and parallel execution framework built on top of MapReduce. Internally, a compiler translates Apache Pig statements into a directed acyclic graph of MapReduce jobs, which are submitted to Hadoop for execution.
In Cloudera Navigator, a set of actions performed when a class of entities is extracted. Use policies to add managed metadata or tags to entities in Navigator's index.
In Cloudera Manager, a physical entity that contains a set of physical hosts typically served by the same switch.
In HBase, applications store data in labeled tables, which are partitioned horizontally into regions. A RegionServer is responsible for managing one or more regions.
relational database management system (RDBMS)
A database management system based on the relational model, in which all data is represented in terms of tuples, grouped into relations. Most implementations of the relational model use the SQL data definition and query language.
In SolrCloud, a complete copy of a shard. Each replica is identical, so only one replica has to be queried (per shard) for searches.
The ability to copy HDFS directories and files, the Hive metastore and data, and HBase tables to another cluster.
resilient distributed dataset (RDD)
In Spark, a fault-tolerant collection of elements that can be operated on in parallel.
In Cloudera Manager, a category of functionality within a service. For example, the HDFS service has the following roles: NameNode, SecondaryNameNode, DataNode, and Balancer. Sometimes referred to as a role type. See also user role.
In Cloudera Manager, an instance of a role running on a host. It typically maps to a Unix process. For example: "NameNode-h1" and "DataNode-h1".
Component of a computing framework such as YARN, MapReduce, or Spark, that is responsible for determining which jobs run, where and when they run, and resources allocated to the jobs.
Defines the field names and data types for a dataset. Kite relies on an Apache Avro schema definition for all datasets, standardizes data definition by using Avro schemas for both Parquet and Avro, and supports the standard Avro object models generic and specific.
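A minimal Avro record schema is a small JSON document listing field names and types. The record and field names below are hypothetical, chosen only to illustrate the shape:

```python
import json

# A minimal Avro-style record schema (names here are hypothetical).
avro_schema = json.loads("""
{
  "type": "record",
  "name": "User",
  "fields": [
    {"name": "id",    "type": "long"},
    {"name": "email", "type": "string"},
    {"name": "age",   "type": ["null", "int"], "default": null}
  ]
}
""")

# The same field definitions drive both Avro and Parquet layouts in Kite.
field_names = [f["name"] for f in avro_schema["fields"]]
print(field_names)   # ['id', 'email', 'age']
```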
Defines a dataset's storage type and location. You can create datasets in Hive, HDFS, HBase, or as local files. You define dataset schemes using scheme-specific URI patterns.
Enterprise-grade big-data security that delivers fine-grained authorization to data stored in Apache Hadoop. An independent security module that integrates with open-source SQL query engines Hive and Impala, Apache Sentry delivers advanced authorization controls to enable multi-user applications and cross-functional processes for enterprise data sets.
The process of converting a data structure or object state into a format that can be stored (for example, in a file or memory buffer, or transmitted across a network connection). Deserialization is the process of converting a data structure or object state back to the original state later in the same or another computer environment. See Avro and Thrift.
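The round trip can be shown with any codec; this sketch uses Python's built-in JSON support rather than Avro or Thrift, purely to illustrate the concept:

```python
import json

# Serialization: convert an in-memory structure to bytes for storage
# or transmission across a network connection.
record = {"user": "alice", "logins": 3, "active": True}
payload = json.dumps(record).encode("utf-8")

# Deserialization: reconstruct the original structure later, possibly
# in another computer environment.
restored = json.loads(payload.decode("utf-8"))
assert restored == record
```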
- A Linux command that runs a System V init script in /etc/init.d/ in as predictable an environment as possible, removing most environment variables and setting the current working directory to /.
- In Cloudera Manager, a category of managed functionality running in a cluster, which may or may not be distributed. Sometimes referred to as a service type. For example: MapReduce, HDFS, YARN, Spark, and Accumulo. In traditional environments, multiple services run on one host; in distributed systems, a service runs on many hosts.
In Cloudera Manager, an instance of a service running on a cluster. For example: "HDFS-1" and "yarn". A service instance spans many role instances.
In Cloudera Search, splitting a single logical index up into some number of sub-indexes, each of which can be hosted on a separate machine. Solr (and especially SolrCloud) handles querying each shard and assembling the response into a single, coherent list.
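A toy sketch of the idea, not Solr itself: documents are routed to sub-indexes by a hash of their id, and a search queries every shard and assembles one result list (the helper names are hypothetical):

```python
import zlib

NUM_SHARDS = 3
shards = [dict() for _ in range(NUM_SHARDS)]

def index(doc_id, text):
    # Route each document to a shard by a deterministic hash of its id.
    shards[zlib.crc32(doc_id.encode()) % NUM_SHARDS][doc_id] = text

def search(term):
    # Scatter-gather: query every shard, then merge into one coherent list.
    hits = []
    for shard in shards:
        hits.extend(d for d, text in shard.items() if term in text)
    return sorted(hits)

for i in range(6):
    index("doc%d" % i, "hadoop" if i % 2 == 0 else "spark")
print(search("hadoop"))   # ['doc0', 'doc2', 'doc4']
```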
A compression library. Snappy aims for very high speeds and reasonable compression instead of maximum compression or compatibility with other compression libraries. Snappy is provided in the Hadoop package along with the other native libraries (such as native gzip compression).
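Snappy itself is not in the Python standard library; zlib stands in below only to illustrate the general speed-versus-ratio trade-off that codecs make (lower levels favor speed, as Snappy does by design):

```python
import zlib

data = b"hadoop " * 10_000   # highly repetitive sample payload

fast  = zlib.compress(data, level=1)   # faster, usually larger output
tight = zlib.compress(data, level=9)   # slower, usually smaller output

# Either way, decompression restores the original bytes exactly.
assert zlib.decompress(fast) == data
print(len(data), len(fast), len(tight))
```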
Apache Spark is a general framework for distributed computing that offers high performance for both batch and interactive processing. It exposes APIs for Java, Python, and Scala and consists of Spark core and several related projects.
- Spark SQL - Module for working with structured data. Allows you to seamlessly mix SQL queries with Spark programs.
- Spark Streaming - API that allows you to build scalable fault-tolerant streaming applications.
- MLlib - API that implements common machine learning algorithms.
- GraphX - API for graphs and graph-parallel computation.
Cloudera supports Spark core, Spark SQL (including DataFrames), Spark Streaming, and MLlib. Cloudera does not currently offer commercial support for GraphX or SparkR.
A declarative programming language designed for managing data in relational database management systems. It includes features for creating schema objects such as databases and tables, and for querying and modifying data. CDH includes SQL support through Impala for high-performance interactive queries, and Hive for long-running batch-oriented jobs.
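The basics carry across engines; this sketch uses Python's built-in SQLite for a self-contained example (Impala and Hive accept similar DDL and DML, though each has its own dialect and type system, and the table here is hypothetical):

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# DDL: create a schema object (a table).
conn.execute("CREATE TABLE events (user TEXT, clicks INTEGER)")

# DML: insert and then query/aggregate data.
conn.executemany("INSERT INTO events VALUES (?, ?)",
                 [("alice", 3), ("bob", 5), ("alice", 2)])
total = conn.execute(
    "SELECT user, SUM(clicks) FROM events GROUP BY user ORDER BY user"
).fetchall()
print(total)   # [('alice', 5), ('bob', 5)]
```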
A tool for efficiently transferring bulk data between Hadoop and external structured datastores, such as relational databases. Apache Sqoop imports the contents of tables into HDFS, Hive, and HBase and generates Java classes that enable users to interpret the table's schema. Sqoop can also extract data from Hadoop storage and export records from HDFS to external structured datastores such as relational databases and enterprise data warehouses.
There are two versions: Sqoop and Sqoop 2. Sqoop requires client-side installation and configuration. Sqoop 2 is a web-based service with a client command-line interface. In Sqoop 2, connectors and database drivers are configured on the server.
In Spark, a collection of tasks that all execute the same code, each on a different partition. Each stage contains a sequence of transformations that can be completed without shuffling the data.
static service pool
In Cloudera Manager, a static partitioning of total cluster resources—CPU, memory, and I/O weight—across a set of services.
In Cloudera Manager, the ability to suppress the display of health test results, configuration warnings, and parameter validation warnings.
See MapReduce v1 (MRv1).
In Cloudera Navigator, metadata defined when entities are extracted from a CDH deployment. You cannot modify technical metadata.
An interface definition language, runtime library, and code-generation engine to build services that can be invoked from many languages. Apache Thrift can be used for serialization and RPC, but within Hadoop is mainly used for RPC.
In Spark, a function that creates a new RDD from an existing RDD. Spark uses "lazy evaluation": transformations do not execute on the cluster until an action is invoked. Examples of actions are collect, which pulls data to the client, and saveAsTextFile, which writes data to a filesystem like HDFS.
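Lazy evaluation can be mimicked in plain Python with generator expressions (an analogy, not Spark): the "transformations" describe work without doing it, and nothing runs until a terminal operation, the "action", consumes the pipeline:

```python
data = range(1, 6)

# "Transformations": build a deferred pipeline; nothing is computed yet.
doubled  = (x * 2 for x in data)
filtered = (x for x in doubled if x > 4)

# "Action": materializing the result triggers the whole pipeline at once.
result = list(filtered)
print(result)   # [6, 8, 10]
```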
KeyTrustee-specific implementation of the Hadoop KeyProvider API, allowing the Hadoop KMS to use the Navigator KeyTrustee server as a key store and enabling key generation on behalf of clients.
Determines the Cloudera Manager or Cloudera Navigator features visible to the user and the actions the user can perform.
virtual core (vcore)
A logical subdivision of a physical processor. Virtual cores divide the processing resources of a physical core and operate independently of one another.
A set of libraries that can be used to run CDH clusters on cloud services such as Amazon Elastic Compute Cloud (Amazon EC2).
Apache Whirr was deprecated in CDH 5.5 and has been removed from CDH 6, starting with CDH 6.0 Beta. It is no longer supported.
YARN (Yet Another Resource Negotiator)
A general architecture for running distributed applications. YARN specifies the following components:
- ResourceManager - A master daemon that authorizes submitted jobs to run, assigns an ApplicationMaster to them, and enforces resource limits.
- ApplicationMaster - A supervisory task that requests the resources needed for executor tasks. An ApplicationMaster runs on a different NodeManager for each application. The ApplicationMaster requests containers, which are sized by the resources a task requires to run.
- NodeManager - A worker daemon that launches and monitors the ApplicationMaster and task containers.
- JobHistory Server - Keeps track of completed applications.
The ApplicationMaster negotiates with the ResourceManager for cluster resources—described in terms of a number of containers, each with a certain memory limit—and then runs application-specific processes in those containers. The containers are overseen by NodeManagers running on cluster nodes, which ensure that the application does not use more resources than it has been allocated.
MapReduce v2 (MRv2) is implemented as a YARN application.