Hadoop Ingestion ETL Pilot
Build a Custom Big Data Pipeline
Data ingestion and transformation are the first steps in any Big Data project. Hadoop's flexibility makes it well suited to varied and complex data, but identifying data sources and provisioning HDFS and MapReduce instances can prove challenging. Cloudera will architect and implement a custom ingestion and ETL pipeline to quickly bootstrap your Big Data solution.
A typical Hadoop Ingestion ETL Pilot lasts two weeks and consists of the following activities:
- Identify solution requirements to include data sources, transformations, and egress points
- Architect and develop a pilot implementation for up to 3 data sources, 5 transformations, and 1 target system
- Develop a deployment architecture that informs a production deployment plan
- Review the Hadoop cluster and application configuration
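The ingest → transform → egress flow the pilot scopes (up to 3 sources, 5 transformations, 1 target) can be sketched in miniature. This is purely an illustration under assumed names; it is not Cloudera's implementation and omits HDFS and MapReduce specifics:

```python
from typing import Callable, Dict, Iterable, List

Record = Dict[str, object]
Transform = Callable[[Record], Record]

def run_pipeline(sources: Iterable[Iterable[Record]],
                 transforms: List[Transform],
                 sink: List[Record]) -> None:
    """Apply each transform in order to every record from every
    source, then hand the result to the single target system."""
    for source in sources:
        for record in source:
            for transform in transforms:
                record = transform(record)
            sink.append(record)

# Hypothetical data: two sources, two transforms, one target.
source_a = [{"name": " Alice ", "amount": "10"}]
source_b = [{"name": "Bob", "amount": "5"}]

strip_names: Transform = lambda r: {**r, "name": str(r["name"]).strip()}
parse_amounts: Transform = lambda r: {**r, "amount": int(str(r["amount"]))}

target: List[Record] = []
run_pipeline([source_a, source_b], [strip_names, parse_amounts], target)
# target now holds cleaned records ready for the egress point
```

In a production pipeline each stage would be replaced by its Hadoop counterpart (for example, ingestion via Flume or Sqoop, transformation in MapReduce, egress to the target system), but the source/transform/target decomposition stays the same.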