Start on your path to Big Data expertise with our open, online Udacity course. Cloudera University's free three-lesson program covers the fundamentals of Hadoop, including hands-on development of MapReduce code against data stored in HDFS.
Using free, industry-leading tools like Cloudera Manager, you will learn how to install and manage Hadoop in the cloud for a quick and simple deployment. Hadoop can be a challenging platform to install, but with the knowledge you'll gain in this webinar, you can get a proof-of-concept (POC) cluster up and running in no time.
At their core, YARN and MapReduce 2’s improvements separate cluster resource management capabilities from MapReduce-specific logic. YARN enables Hadoop to share resources dynamically between multiple parallel processing frameworks such as Cloudera Impala, allows more sensible and finer-grained resource configuration for better cluster utilization, and scales Hadoop to accommodate more and larger jobs.
Pig is an Apache project that uses a scripting language to query and analyze large data sets. With Apache Pig, users can create MapReduce programs without writing Java code. This e-learning module teaches you how to write user-defined functions (UDFs) that can be executed inside of Pig to extend its functionality and develop a custom library of operations. We discuss what Pig UDFs are, the supported functions and languages, and how to write custom UDFs in Java and Python. The module includes a hands-on exercise where you will write your own UDF in Python, complete with a sample solution.
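As a taste of what the exercise covers, here is a minimal sketch of a Pig UDF written in Python. The function name, schema string, and the stand-in `outputSchema` decorator are illustrative only; when the script is registered with Pig (via Jython), the decorator is supplied by the Pig runtime itself.

```python
# Minimal sketch of a Pig UDF in Python (illustrative names throughout).
# Inside Pig, the outputSchema decorator is provided by the runtime; we
# define a stand-in here so the sketch also runs as plain Python.
def outputSchema(schema):
    def wrap(fn):
        fn.output_schema = schema  # Pig reads this to type the UDF's result
        return fn
    return wrap

@outputSchema("domain:chararray")
def extract_domain(email):
    """Return the domain part of an email address, or None if malformed."""
    if email is None or "@" not in email:
        return None
    return email.rsplit("@", 1)[1].lower()
```

Registered with something like `REGISTER 'udfs.py' USING jython AS udfs;` (script name and alias hypothetical), the function could then be called from Pig Latin as `udfs.extract_domain(email)`.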
Learn the new Parcel format for installing and upgrading CDH and other Hadoop ecosystem components. Parcels enable the new rolling upgrade functionality in Cloudera Manager, provide rollback functionality, and make maintenance windows short and painless. In this e-learning module, we discuss the benefits of Parcels, compare Parcels with packages, and examine what a Parcel file contains. The module finishes with a complete demonstration of a CDH upgrade and several component installations, including Cloudera Impala and Cloudera Search.
Learn how to use interactive, full-text search to quickly find relevant data in Hadoop and solve critical business problems simply and in real time. Cloudera Search integrates Apache Solr, the established, feature-rich, open-source search platform, with CDH, and exposes extensible APIs for easy integration. In this e-learning module, you will learn the fundamentals, use cases, and features of Cloudera Search. The module includes a short discussion of Cloudera Search architecture and a product demonstration.
Hive is an Apache project that facilitates ad hoc queries and analyses of large data sets in the Hadoop cluster using a SQL-like language. This e-learning module teaches you how to write user-defined functions (UDFs) to augment Hive's built-in capabilities. We discuss why UDFs are necessary, what kinds of UDFs exist, and how to write custom UDFs in Java. The module includes a hands-on exercise where you will write your own UDF, complete with a sample solution.
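The module's hands-on exercise is in Java, but Hive also offers a lighter-weight extension point worth knowing: the TRANSFORM clause, which pipes rows through an external script. As a hedged illustration (script and column names are hypothetical, and this is a streaming transform, not a Java UDF), such a script reads tab-separated rows on stdin and writes results to stdout:

```python
#!/usr/bin/env python
# Sketch of a Hive TRANSFORM script (not a Java UDF): Hive sends each row
# to stdin as tab-separated fields and reads the result from stdout.
import sys

def normalize(line):
    """Strip and lowercase one input field; returns the transformed row."""
    value = line.rstrip("\n")
    return value.strip().lower()

if __name__ == "__main__":
    # Process rows streamed in by Hive, one per line.
    for line in sys.stdin:
        sys.stdout.write(normalize(line) + "\n")
```

In HiveQL this would be invoked roughly as `SELECT TRANSFORM(name) USING 'normalize.py' AS clean_name FROM people;` (all identifiers hypothetical). Java UDFs, the subject of this module, integrate more tightly with the query engine and avoid the per-row serialization cost of streaming.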
Work at the speed of thought! This e-learning course explores Cloudera Impala's features, architecture, and benefits over legacy Hadoop platforms. Learn how to run interactive queries inside Impala and understand how it optimizes data systems. This free online course includes a training module, homework, and an Impala demo VM download to experiment with this powerful new tool.
Cloudera Manager simplifies deployment, configuration, diagnostics, and reporting for CDH in production. Learn how to set up and customize Cloudera Manager to monitor and improve the performance of any size Hadoop cluster, increase compliance, and reduce costs.
Learn the objectives and features of Cloudera Enterprise BDR and see a demonstration of the new backup and disaster recovery product. Centrally configure and manage disaster recovery workflows for files (HDFS) and metadata (Hive) through an easy-to-use graphical interface. Consistently meet or exceed Service Level Agreements (SLAs) and Recovery Time Objectives (RTOs) through simplified management and process automation.
You know your data is BIG – you found Apache Hadoop. Now you need to understand which implications to consider when working at such massive scale. This video addresses common challenges and general best practices for scaling with your data.
Many Hadoop deployments start small, solving a single business problem, and then begin to grow as organizations find more value in their data. Moving a Hadoop deployment from the proof-of-concept phase into a full production system presents real challenges. Learn how some of the largest Hadoop clusters in the world were successfully productionized and the best practices their operators applied to running Hadoop.