Resource Library

Cloudera offers a variety of materials on big data consolidation, storage and processing. The library includes high-level overviews as well as detailed information on Apache Hadoop and the surrounding ecosystem.

  1. Intel and Cloudera: Accelerating Enterprise Big Data Success
    • Thursday, Jun 12 2014
    • Category: Video, Recorded Webinars, Big Data, Data hub
    Learn how Cloudera and Intel are jointly innovating through open source software to enable Hadoop to run best on IA (Intel Architecture) and to foster the evolution of a vibrant Big Data ecosystem.
  2. HBaseCon 2014 | Harmonizing Multi-tenant HBase Clusters for Managing Workload Diversity - Operations Session 1
    • Thursday, Jun 05 2014
    • Category: HBaseCon, Video, Presentation
    In early 2013, Yahoo! introduced multi-tenancy to HBase to offer it as a platform service for all Hadoop users. A certain degree of customization per tenant (a user or a project) was achieved through RegionServer groups, namespaces, and customized configs for each tenant. This talk covers how to accommodate the diverse needs of individual tenants on the cluster, as well as operational tips and techniques that allow Yahoo! to automate the management of multi-tenant clusters at petabyte scale without errors.
  3. Merkle Delivers Connected Consumer Recognition with Its Enterprise Data Hub
    • Wednesday, Jun 04 2014
    • Category: Video, Case Studies
    The Cloudera-powered EDH that Merkle deployed at the center of its big data infrastructure in about six months "is a foundational component for our entire business because data is at the core of our marketing."
  4. Best Practices for the Hadoop Data Warehouse: EDW 101 for Hadoop Professionals
    • Thursday, May 29 2014
    • Category: Recorded Webinars, Video, Why Consolidation Data Platform, Data processing ETL offload
    Dr. Ralph Kimball and Eli Collins describe standard data warehouse best practices in Hadoop and how to implement them within a Hadoop environment. This includes identification of dimensions and facts, managing primary keys, and handling slowly changing dimensions (SCDs) and conformed dimensions.
  5. Best Practices for the Hadoop Data Warehouse: EDW 101 for Hadoop Professionals
    • Thursday, May 29 2014
    • Category: Video, Why Consolidation Data Platform, Data processing ETL offload, Presentation Slides
    Dr. Ralph Kimball and Eli Collins describe standard data warehouse best practices in Hadoop and how to implement them within a Hadoop environment. This includes identification of dimensions and facts, managing primary keys, and handling slowly changing dimensions (SCDs) and conformed dimensions.
  6. Large Scale Machine Learning with Apache Spark
    • Wednesday, May 21 2014
    • Category: Recorded Webinars, Video, CDH, Predictive modeling, Cyber security, Fraud detection
    Spark offers a number of advantages over its predecessor MapReduce that make it ideal for large-scale machine learning. For example, Spark includes MLlib, a library of machine learning algorithms for large datasets. The presentation covers the state of MLlib and the details of some of the scalable algorithms it includes, mainly K-means.
  7. SAS and Cloudera Demo
    • Wednesday, May 07 2014
    • Category: Predictive modeling, Software Vendor (ISV), Video, Product Demos
    Watch this demo of SAS Visual Analytics, in which we explore example data from a hypothetical supermarket wishing to create a new line of organic products.
  8. SAS® and Cloudera Analytics at Scale and Speed
    • Wednesday, May 07 2014
    • Category: Predictive modeling, Data hub, Business process optimization, Software Vendor (ISV), Video, CDH, Recorded Webinars
    Learn about SAS and Cloudera technical integration, how SAS builds on the enterprise data hub, and SAS In-Memory solutions for Hadoop and machine learning capabilities.
  9. Content Identification using HBase
    • Monday, May 05 2014
    • Category: HBaseCon, Presentation, Video
    This presentation reviews the options a developer has for querying HBase and retrieving hash data.
  10. HBase at Bloomberg: High Availability Needs for the Financial Industry
    • Monday, May 05 2014
    • Category: HBaseCon, Video, Document
    This talk covers data and analytics use cases at Bloomberg and operational challenges around HA. We'll explore the work currently being done under HBASE-10070, further extensions to it, and how this solution is qualitatively different from how failover is handled by Apache Cassandra.