
Date: Wednesday, May 21, 2014

Description

Apache Spark is easy to develop with and fast to run. Understand how to use K-means to cluster data into typical patterns, then flag points that deviate from those patterns as anomalies, useful for fraud detection, network intrusion detection, and similar applications. Learn how Spark takes advantage of Resilient Distributed Datasets (RDDs): fault-tolerant collections built through parallel transformations on data in stable storage.
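The K-means anomaly-detection idea above can be sketched in a few lines. This is a minimal plain-Python illustration, not Spark code; the data, the two initial centroids, and the distance threshold are all illustrative assumptions, not details from the talk.

```python
# Sketch of K-means anomaly detection: fit cluster centres, then flag
# points that are far from every centre. Plain Python, no Spark;
# all data and parameters here are illustrative assumptions.
import math

def kmeans(points, centroids, iterations=10):
    """Lloyd's algorithm: alternate point assignment and centroid update."""
    for _ in range(iterations):
        clusters = [[] for _ in centroids]
        for p in points:
            nearest = min(range(len(centroids)),
                          key=lambda i: math.dist(p, centroids[i]))
            clusters[nearest].append(p)
        centroids = [
            tuple(sum(coord) / len(cluster) for coord in zip(*cluster))
            if cluster else centre
            for cluster, centre in zip(clusters, centroids)
        ]
    return centroids

def anomalies(points, centroids, threshold):
    """A point is anomalous if its nearest cluster centre is still far away."""
    return [p for p in points
            if min(math.dist(p, c) for c in centroids) > threshold]

# Two tight clusters of "typical" points plus one outlier.
data = [(1.0, 1.0), (1.1, 0.9), (0.9, 1.1),
        (5.0, 5.0), (5.1, 4.9), (4.9, 5.1),
        (9.0, 0.0)]  # the anomaly
centres = kmeans(data, centroids=[(0.0, 0.0), (6.0, 6.0)])
print(anomalies(data, centres, threshold=2.0))  # → [(9.0, 0.0)]
```

Note that the outlier participates in fitting and pulls its centroid toward itself; in practice one often refits after removing flagged points, or chooses the threshold from the distribution of distances.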

Next Steps