Why Get Certified?

  • Prove Your Skills Where It Matters: CCA exams are performance-based; the CCA Spark and Hadoop Developer exam requires you to write code in Scala and Python and run it on a cluster.
  • Available Anytime, Anywhere: Forget taking a day off work to travel to a test center. CCA exams are available globally, from any computer at any time. 
  • Promote Your Achievement: Every CCA receives a logo for business cards, résumés, and online profiles.
  • Verify Your Achievement: Every CCA certification comes with a license that allows current and potential employers to validate your CCA status.
  • Stay Current: The big data space evolves rapidly, nowhere more so than in the Apache Spark and Hadoop developer space. We update our CCA exams regularly to reflect the skills and tools relevant today and beyond. And because change is the only constant in open-source environments, Cloudera requires all CCA credential holders to stay current through mandatory retesting every two years in order to maintain their status and privileges.

CCA Spark and Hadoop Developer Exam (CCA175) Details

  • Number of Questions: 10–12 performance-based (hands-on) tasks on a CDH5 cluster. See below for the full cluster configuration.
  • Time Limit: 120 minutes
  • Passing Score: 70%
  • Language: English, Japanese (forthcoming)
  • Price: USD $295

Exam Question Format

Each CCA question requires you to solve a particular scenario. In some cases, a tool such as Impala or Hive may be used. In other cases, coding is required. To speed up development time on Spark questions, a template is often provided that contains a skeleton of the solution, asking the candidate to fill in the missing lines with functional code. The template is written in either Scala or Python.

You are not required to use the template and may solve the scenario using whichever language you prefer. Be aware, however, that coding every problem from scratch may take more time than is allocated for the exam.
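
For illustration only, a hypothetical Python skeleton of this kind might look roughly like the following; the application name, paths, and TODO are invented examples, not actual exam content.

    # Hypothetical PySpark skeleton, for illustration only.
    # The candidate would replace the TODO line with working code.
    from pyspark import SparkContext

    sc = SparkContext(appName="ExampleTask")

    # Hypothetical input location on HDFS
    raw = sc.textFile("/user/exam/input/orders")

    # TODO: parse each record, keep only completed orders, and count them by date
    solution = None

    # Once implemented, the result would be written back to HDFS, e.g.:
    # solution.saveAsTextFile("/user/exam/output/orders_by_date")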

Evaluation, Score Reporting, and Certificate

Your exam is graded immediately upon submission and you are e-mailed a score report the same day as your exam. Your score report displays the problem number for each problem you attempted and a grade on that problem. If you fail a problem, the score report includes the criteria you failed (e.g., “Records contain incorrect data” or “Incorrect file format”). We do not report more information in order to protect the exam content. Read more about reviewing exam content on the FAQ.

If you pass the exam, you receive a second e-mail within a few days of your exam with your digital certificate as a PDF, your license number, a LinkedIn profile update, and a link to download your CCA logos for use in your personal business collateral and social media profiles.

Audience and Prerequisites

No prerequisites are required to take any Cloudera certification exam. The CCA Spark and Hadoop Developer exam (CCA175) follows the same objectives as Cloudera Developer Training for Spark and Hadoop, and that training course is excellent preparation for the exam.

 

Register for CCA175

Required Skills

Data Ingest

The skills to transfer data between external systems and your cluster. This includes the following (a brief illustrative sketch follows the list):

  • Import data from a MySQL database into HDFS using Sqoop
  • Export data to a MySQL database from HDFS using Sqoop
  • Change the delimiter and file format of data during import using Sqoop
  • Ingest real-time and near-real-time (NRT) streaming data into HDFS using Flume
  • Load data into and out of HDFS using the Hadoop File System (FS) commands
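
As a rough sketch only, the Sqoop and HDFS commands these tasks involve might look like the following; the connection string, credentials, table names, and paths are hypothetical, and the commands are wrapped in Python subprocess calls purely for illustration.

    # Illustrative sketch only; connection string, tables, and paths are hypothetical.
    import subprocess

    # Import a MySQL table into HDFS as tab-delimited text files using Sqoop
    subprocess.check_call([
        "sqoop", "import",
        "--connect", "jdbc:mysql://dbhost/retail_db",
        "--username", "exam", "--password", "exam",
        "--table", "orders",
        "--target-dir", "/user/exam/orders",
        "--fields-terminated-by", "\t",
        "--as-textfile",
    ])

    # Export results from HDFS back into a MySQL table using Sqoop
    subprocess.check_call([
        "sqoop", "export",
        "--connect", "jdbc:mysql://dbhost/retail_db",
        "--username", "exam", "--password", "exam",
        "--table", "order_summaries",
        "--export-dir", "/user/exam/order_summaries",
    ])

    # Load data into and inspect HDFS using the Hadoop File System (FS) commands
    subprocess.check_call(["hdfs", "dfs", "-put", "local_data.csv", "/user/exam/raw/"])
    subprocess.check_call(["hdfs", "dfs", "-ls", "/user/exam/orders"])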

Transform, Stage, Store

Convert a set of data values in a given format stored in HDFS into new data values and/or a new data format and write them into HDFS. This includes writing Spark applications in both Scala and Python (see the note above on exam question format for more on using either language); a brief illustrative sketch follows the list:

  • Load data from HDFS and store results back to HDFS using Spark
  • Join disparate datasets together using Spark
  • Calculate aggregate statistics (e.g., average or sum) using Spark
  • Filter data into a smaller dataset using Spark
  • Write a query that produces ranked or sorted data using Spark
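
A minimal PySpark sketch touching each of these operations is shown below; the input paths and field positions are hypothetical and not drawn from actual exam tasks.

    # Minimal PySpark sketch; paths and field positions are hypothetical.
    from pyspark import SparkContext

    sc = SparkContext(appName="TransformExample")

    # Load data from HDFS
    orders = sc.textFile("/user/exam/orders").map(lambda line: line.split("\t"))
    customers = sc.textFile("/user/exam/customers").map(lambda line: line.split("\t"))

    # Join the two datasets on a shared customer-id key
    orders_by_cust = orders.map(lambda o: (o[2], o))
    customers_by_id = customers.map(lambda c: (c[0], c))
    joined = orders_by_cust.join(customers_by_id)

    # Filter to a smaller dataset, then calculate an aggregate (orders per customer)
    completed = joined.filter(lambda kv: kv[1][0][3] == "COMPLETE")
    counts = completed.map(lambda kv: (kv[0], 1)).reduceByKey(lambda a, b: a + b)

    # Produce sorted/ranked output and store the results back to HDFS
    ranked = counts.sortBy(lambda kv: kv[1], ascending=False)
    ranked.map(lambda kv: kv[0] + "\t" + str(kv[1])).saveAsTextFile("/user/exam/top_customers")

An equivalent solution could be written in Scala; as noted above, the exam accepts either language.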

Data Analysis

Use Data Definition Language (DDL) to create tables in the Hive metastore for use by Hive and Impala. This includes the following (a brief illustrative sketch follows the list):

  • Read and/or create a table in the Hive metastore in a given schema
  • Extract an Avro schema from a set of datafiles using avro-tools
  • Create a table in the Hive metastore using the Avro file format and an external schema file
  • Improve query performance by creating partitioned tables in the Hive metastore
  • Evolve an Avro schema by changing JSON files
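
As an illustrative sketch only (the table name, schema file, and HDFS locations are hypothetical, and the commands are invoked from Python for consistency with the examples above), extracting an Avro schema and creating a partitioned, Avro-backed table might look like this:

    # Illustrative sketch; table name, schema file, and HDFS locations are hypothetical.
    # Assumes the avro-tools and hive launchers are available on the PATH.
    import subprocess

    # Extract an Avro schema from an existing datafile using avro-tools
    with open("orders.avsc", "w") as out:
        subprocess.check_call(["avro-tools", "getschema", "part-m-00000.avro"], stdout=out)

    # Create a partitioned table in the Hive metastore that uses the Avro file
    # format and points at an external schema file previously uploaded to HDFS
    ddl = """
    CREATE EXTERNAL TABLE IF NOT EXISTS orders_avro
    PARTITIONED BY (order_month STRING)
    ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.avro.AvroSerDe'
    STORED AS INPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerInputFormat'
    OUTPUTFORMAT 'org.apache.hadoop.hive.ql.io.avro.AvroContainerOutputFormat'
    LOCATION '/user/exam/orders_avro'
    TBLPROPERTIES ('avro.schema.url'='hdfs:///user/exam/schemas/orders.avsc')
    """
    subprocess.check_call(["hive", "-e", ddl])

The same DDL could equally be entered directly in the Hive shell during the exam.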

Exam Delivery and Cluster Information

CCA175 is a remote-proctored exam available anywhere, anytime. See the FAQ for more information and system requirements.

CCA175 is a hands-on, practical exam using Cloudera technologies. Each user is given their own CDH5 (currently 5.3.2) cluster pre-loaded with Spark, Impala, Crunch, Hive, Pig, Sqoop, Kafka, Flume, Kite, Hue, Oozie, DataFu, and many others (see the full list). In addition, the cluster comes with Python (2.6, 2.7, and 3.4), Perl 5.10, Elephant Bird, Cascading 2.6, Brickhouse, Hive Swarm, Scala 2.11, Scalding, IDEA, Sublime, Eclipse, and NetBeans.

Documentation Available Online During the Exam

Only the documentation, links, and resources listed above are accessible during the exam. All other websites are blocked, and search functionality such as Google is disabled. You may not use notes or other exam aids.