
Build a custom big data pipeline

Data ingestion and transformation is the first step in any Big Data project. Hadoop's power comes from its ability to handle varied and complex data, but identifying data sources and provisioning HDFS and MapReduce instances can prove challenging. Cloudera will architect and implement a custom ingestion and ETL pipeline to bootstrap your Big Data solution quickly.

A typical Hadoop ETL Ingestion Pilot

The pilot lasts two weeks and consists of the following activities:

  • Identify solution requirements, including data sources, transformations, and egress points
  • Architect and develop a pilot implementation for up to 3 data sources, 5 transformations, and 1 target system
  • Develop a deployment architecture that will result in a production deployment plan
  • Review the Hadoop cluster and application configuration
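In spirit, the pilot's sources-transformations-target flow is a classic extract-transform-load loop. The sketch below is a hypothetical, minimal illustration in Python (the function and field names are invented for this example and are not part of Cloudera's offering): it pulls records from several in-memory "sources", applies an ordered chain of transformations, and writes the results to a single "target".

```python
# Hypothetical, minimal ETL sketch: multiple sources, a chain of
# transformations, one target system. Illustrative only.

def extract(sources):
    """Yield raw records from each configured source, tagged by origin."""
    for name, records in sources.items():
        for rec in records:
            yield {"source": name, **rec}

def transform(records, steps):
    """Apply each transformation step, in order, to every record."""
    for rec in records:
        for step in steps:
            rec = step(rec)
        yield rec

def load(records, target):
    """Write transformed records to the target (here, just a list)."""
    for rec in records:
        target.append(rec)

# Example run with two sources and two transformation steps.
sources = {
    "clickstream": [{"user": "a", "ms": 1200}],
    "billing": [{"user": "b", "ms": 3400}],
}
steps = [
    lambda r: {**r, "seconds": r["ms"] / 1000},           # normalize units
    lambda r: {k: v for k, v in r.items() if k != "ms"},  # drop raw field
]
target = []
load(transform(extract(sources), steps), target)
```

In a real pilot the sources would be databases, log streams, or files landed in HDFS, the transformations would run as MapReduce (or similar) jobs, and the target would be a downstream system, but the shape of the pipeline is the same.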


Learn more

Customer Success Story

Patterns and Predictions Use Big Data to Predict Suicide Risk

Professional Services

See a Full Overview of Onsite Technical Engagements


Learn how to deploy a Cloudera data management solution