
Employing Hadoop ecosystem projects such as Spark, Hive, Flume, Sqoop, and Impala, this training course is the best preparation for the real-world challenges faced by Hadoop developers. Through instructor-led discussion and interactive, hands-on exercises, participants will learn Apache Spark and how it integrates with the entire Hadoop ecosystem, including:

- How data is distributed, stored, and processed in a Hadoop cluster
- How to use Sqoop and Flume to ingest data
- How to process distributed data with Apache Spark
- How to model structured data as tables in Impala and Hive
- How to choose the best data storage format for different data usage patterns
- Best practices for data storage
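To give a flavor of the "process distributed data with Apache Spark" topic above: Spark's core pattern is a map step that emits key/value pairs followed by a reduce-by-key step that merges values per key across the cluster. The sketch below simulates that pattern locally in plain Python (no Spark cluster required); in actual PySpark the equivalent would use `sc.parallelize(...)`, `flatMap`, and `reduceByKey`. The sample lines are made-up illustration data.

```python
from collections import defaultdict

# Illustrative input: in Spark these lines would be an RDD
# partitioned across the cluster (e.g. sc.textFile("hdfs://...")).
lines = [
    "spark makes distributed processing simple",
    "spark runs on hadoop",
]

# "Map" phase: split each line into (word, 1) pairs,
# analogous to rdd.flatMap(str.split).map(lambda w: (w, 1)).
pairs = [(word, 1) for line in lines for word in line.split()]

# "Reduce by key" phase: merge counts per word,
# analogous to pairs.reduceByKey(lambda a, b: a + b).
counts = defaultdict(int)
for word, n in pairs:
    counts[word] += n

print(dict(counts))
```

On a real cluster the reduce-by-key step involves a shuffle, so pairs with the same key are routed to the same worker before merging; the local loop above collapses that into a single dictionary.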