Apache Falcon

Apache™ Falcon addresses enterprise challenges related to Hadoop data replication, business continuity, and lineage tracing by providing a framework for data management and processing. Falcon centrally manages the data lifecycle, facilitates quick data replication for business continuity and disaster recovery, and provides a foundation for audit and compliance by tracking entity lineage and collecting audit logs.

What Falcon Does

Falcon allows an enterprise to process a single massive dataset stored in HDFS in multiple ways—for batch, interactive and streaming applications. With more data and more users of that data, Apache Falcon’s data governance capabilities play a critical role. As the value of Hadoop data increases, so does the importance of cleaning that data, preparing it for business intelligence tools, and removing it from the cluster when it outlives its useful life.

Falcon simplifies the development and management of data processing pipelines with a higher layer of abstraction, taking the complex coding out of data processing applications by providing out-of-the-box data management services. This simplifies the configuration and orchestration of data motion, disaster recovery and data retention workflows.

The Falcon framework can also leverage other HDP components, such as Pig, HDFS, and Oozie. Falcon enables this simplified management by providing a framework to define, deploy, and manage data pipelines.

Apache Falcon meets enterprise data governance needs in three areas:

Need: Centralized data lifecycle management
  • Centralized definition & management of pipelines for data ingest, processing & export
  • Disaster readiness & business continuity
  • Out-of-the-box policies for data replication & retention
  • End-to-end monitoring of data pipelines
Need: Compliance and audit
  • Visualize data pipeline lineage
  • Track data pipeline audit logs
  • Tag data with business metadata
Need: Data replication and archival
  • Replication across on-premises and cloud-based storage targets: Microsoft Azure and Amazon S3
  • Data lineage with supporting documentation and examples
  • Heterogeneous storage tiering in HDFS
  • Definition of hot/cold storage tiers within a cluster

How Falcon Works

Hadoop operators can use the Falcon web UI or the command-line interface (CLI) to create data pipelines, which consist of cluster storage location definitions, dataset feeds, and processing logic. Each pipeline consists of XML pipeline specifications, called entities. These entities act together to provide a dynamic flow of information to load, clean, and process data. There are three types of entities:

  • Cluster: Defines where data and processes are stored.
  • Feed: Defines the datasets to be cleaned and processed.
  • Process: Consumes feeds, invokes processing logic, and produces further feeds. A process defines the configuration of the Oozie workflow and defines when and how often the workflow should run. A process also allows for late data handling.
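
The cluster, feed, and process entities above are written as XML specifications. The following feed definition is a hedged sketch of what such an entity can look like — all names, paths, and dates are illustrative, not taken from this document — showing a dataset that is retained on a source cluster and replicated to a target cluster:

```xml
<!-- Hypothetical feed entity; entity names, clusters, paths, and dates are
     illustrative examples, not values from any real deployment. -->
<feed name="rawLogFeed" description="Hourly raw logs" xmlns="uri:falcon:feed:0.1">
    <!-- A new feed instance is expected every hour -->
    <frequency>hours(1)</frequency>
    <timezone>UTC</timezone>
    <!-- Data arriving up to 4 hours late is still accepted -->
    <late-arrival cut-off="hours(4)"/>
    <clusters>
        <cluster name="primaryCluster" type="source">
            <validity start="2016-01-01T00:00Z" end="2017-01-01T00:00Z"/>
            <!-- Retention policy: delete instances older than 90 days -->
            <retention limit="days(90)" action="delete"/>
        </cluster>
        <!-- A second, target cluster turns this feed into a replication job -->
        <cluster name="backupCluster" type="target">
            <validity start="2016-01-01T00:00Z" end="2017-01-01T00:00Z"/>
            <retention limit="months(6)" action="delete"/>
        </cluster>
    </clusters>
    <locations>
        <!-- HDFS path template; Falcon expands the date/time variables -->
        <location type="data" path="/data/logs/${YEAR}-${MONTH}-${DAY}-${HOUR}"/>
    </locations>
    <ACL owner="falcon" group="users" permission="0755"/>
    <schema location="/none" provider="none"/>
</feed>
```

Listing a source and a target cluster in one feed is what expresses replication declaratively: Falcon schedules the copy and enforces each cluster's retention limit without any hand-written workflow code.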

Each entity is defined separately and then linked together to form a data pipeline. Falcon provides predefined policies for data replication, retention, and late data handling. These sample policies are easily customized to suit your needs.
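Linking entities happens in the process entity, which names its input and output feeds and points at an Oozie workflow. The sketch below is hypothetical — the entity names, paths, and policy values are illustrative assumptions — and shows where retry and late-data policies attach:

```xml
<!-- Hypothetical process entity; names, paths, and policy values are
     illustrative examples only. -->
<process name="cleanseLogs" xmlns="uri:falcon:process:0.1">
    <clusters>
        <cluster name="primaryCluster">
            <validity start="2016-01-01T00:00Z" end="2017-01-01T00:00Z"/>
        </cluster>
    </clusters>
    <parallel>1</parallel>
    <order>FIFO</order>
    <!-- Run once per hour, matching the input feed's frequency -->
    <frequency>hours(1)</frequency>
    <inputs>
        <!-- Consumes the feed entity defined elsewhere in the pipeline -->
        <input name="input" feed="rawLogFeed" start="now(0,0)" end="now(0,0)"/>
    </inputs>
    <outputs>
        <!-- Produces a further feed for downstream processes -->
        <output name="output" feed="cleansedLogFeed" instance="now(0,0)"/>
    </outputs>
    <!-- The actual processing logic lives in an Oozie workflow on HDFS -->
    <workflow engine="oozie" path="/apps/falcon/workflows/cleanse"/>
    <!-- Retry policy: up to 3 attempts, 15 minutes apart -->
    <retry policy="periodic" delay="minutes(15)" attempts="3"/>
    <!-- Late data handling: reprocess late-arriving input with backoff -->
    <late-process policy="exp-backoff" delay="hours(1)">
        <late-input input="input" workflow-path="/apps/falcon/workflows/late"/>
    </late-process>
</process>
```

Because the process only references feeds by name, the same workflow can be repointed at different datasets or clusters by editing the entity definitions rather than the processing code.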

  • Centralized data lifecycle management: Falcon enables you to manage the data lifecycle in one common place where you can define and manage policies and pipelines for data ingest, processing, and export.
  • Business continuity and disaster recovery: Falcon can replicate HDFS and Hive datasets, trigger processes for retry, and handle late data arrival logic. In addition, Falcon can mirror file systems or Hive HCatalog on clusters using recipes that enable you to reuse complex workflows.
  • Address audit and compliance requirements: Falcon provides audit and compliance features that enable you to visualize data pipeline lineage, track data pipeline audit logs, and tag data with business metadata.
