Regions’ hybrid cloud data lake has helped deliver a better customer experience, increase efficiency, and drive over $10 million per year in retention savings.
A production machine learning model for risk scoring improved fraud capture rates by 95%, decreased false-positive alerts by 30%, and resulted in a 50% reduction in average daily dollar losses.
Spinning data science workloads up and down as needed will be much more cost-effective, resulting in more than 10% cost savings.
Regions Financial Corporation is one of the United States’ largest full-service providers of consumer and commercial banking, wealth management and mortgage products and services.
Regions serves customers across the South, Midwest and Texas, and through its subsidiary, Regions Bank, operates more than 1,300 banking offices and approximately 2,000 ATMs.
Regions began its big data journey to serve customers better, meet their needs and help them reach their financial goals. To do this, Regions needed to become a best-in-class, data-driven organization, delivering measurable results by integrating both internal and external data and applying advanced analytics. That would require the right organizational structure, talent and a data science platform to build analytical solutions at scale.
Regions’ previous big data environment was used as an operational data store with fragmented data and no centralization. A variety of tools were used to build one-off, siloed models. The company also needed a solid foundation for data governance and models so data could be broadly leveraged for analytical and data science use cases.
"We needed all these components to work together. Trying to piecemeal multiple platforms is difficult, inefficient and leads to performance issues,” said Manav Misra, Chief Data and Analytics Officer. “You need a single platform to bring everything under one security model, have one set of data assets that can be leveraged in one place, and have one data pipeline with modelOps capability that allows you to build these models and deploy at scale."
For example, the bank’s corporate relationship managers had access to data, albeit across ten different systems and interfaces. They needed and wanted better insight into their clients’ cash flow and financial stability, so they could provide better advice and recommendations.
Fraud detection and prevention was also critical. The growth of digital banking has opened new doors for criminals to try innovative ways to steal from banks and customers. Identifying fraud, at both the account and transaction level, is imperative. While account-based fraud detection models top out at about 5 million records, transaction data includes billions of records, so processing this volume for fraud models is a significant challenge.
Implement an enterprise data science platform as a flexible foundation for the bank to launch new 'data products' for customers and employees.
Regions built its new enterprise data science platform powered by Cloudera Data Platform (CDP) and Cloudera Data Science Workbench.
"We had all this data at our fingertips, but few good ways to take advantage of it. People were having to manage it, and scaling was a challenge,” said Daniel Stahl, Regions Data and Analytics Platforms Manager. “With the data lake, suddenly we could access all this historical data in one place and not have to bring that data anywhere else to do our analytics. That really enabled us to scale and use the data as we needed."
Regions continues to modernize its architecture, upgrading to CDP Private Cloud (PVC) Base. This upgrade allows Regions to be agile and respond quickly to the needs of its data product users and customers. Data is ingested in real time using the robust streaming features available in CDP. Cloudera Professional Services supported the upgrade planning, process, and implementation. Because of tight upgrade windows, Regions needed an in-place upgrade, installing CDP on its existing environment rather than following a longer migration path involving data migrations, the operational overhead of standing up new clusters, and the lead time to procure and set up new hardware. The in-place approach minimized both downtime and risk for test, development, and production clusters.
"Upgrading to CDP has made everything run so much smoother and faster. We have so much more capacity available to us because we didn't have these legacy mediums that we had to worry about," stated Stahl.
Now, Regions leverages the multi-function analytics and data lifecycle capabilities within CDP. Data is ingested using Kafka and enriched and processed using Spark. Deep data analytics is performed using Hive and Impala, and finally Cloudera Data Science Workbench helps data scientists discover deeper insights. Cloudera SDX provides consistent security and governance, ensuring the right users have access to the right data in the right way.
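As a loose illustration of that ingest-enrich-analyze flow (not Regions’ actual code, and using invented field names), the sketch below models the three stages as plain Python functions. In Regions’ stack these stages map to Kafka, Spark, and Hive/Impala respectively; plain Python simply stands in for each here.

```python
# Hypothetical sketch of the ingest -> enrich -> analyze flow described above.
# Stage names mirror the CDP components mentioned in the text; all record
# formats, field names, and thresholds are invented for this example.

def ingest(raw_events):
    """Stand-in for Kafka ingestion: parse raw delimited records."""
    for line in raw_events:
        account, amount = line.split(",")
        yield {"account": account, "amount": float(amount)}

def enrich(events, account_profiles):
    """Stand-in for Spark enrichment: join each event to account history."""
    for e in events:
        profile = account_profiles.get(e["account"], {"avg_amount": 0.0})
        e["amount_vs_avg"] = e["amount"] - profile["avg_amount"]
        yield e

def analyze(events, threshold=500.0):
    """Stand-in for Hive/Impala analytics: flag large deviations."""
    return [e for e in events if e["amount_vs_avg"] > threshold]

profiles = {"A1": {"avg_amount": 120.0}}
raw = ["A1,900.00", "A1,130.00"]
flagged = analyze(enrich(ingest(raw), profiles))
print(len(flagged))  # 1 -- the $900 event deviates from the average by $780
```

The point of the shape, rather than the specific logic, is that each stage consumes the previous stage’s output, which is what lets a single platform run the whole lifecycle under one security and governance model.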
As Regions evolves its data environment, they plan to take advantage of more functions:
Seamless access to different ML libraries for different types of analytics jobs.
Taking advantage of features for isolated workloads, leveraging a containerized environment with Kubernetes and benefitting from a hybrid cloud architecture.
Data science workloads will benefit from bursting to the public cloud, since they won’t have to be active 24/7, only ad hoc. Spinning workloads up and down as needed will be much more cost-effective, resulting in more than 10% cost savings.
Enhanced customer conversations with data insights
Regions uses predictive modeling to improve conversations between corporate relationship managers and customers. The models provide insights in a single, user-friendly interface. Bankers can easily use the data to better inform existing client meetings and drive conversations that may not have otherwise happened. Regions calls this and similar solutions “data products,” providing insights that help better meet clients’ financial needs. These insights help bankers stay a step ahead, with answers to questions that haven’t even been asked yet.
Regions’ hybrid cloud data lake is critical to deploying new data products. It has helped the bank deliver a better customer experience, increase efficiency, and drive over $10 million per year in retention savings.
Flexibility has been another benefit, as well as easy access to a variety of advanced analytics capabilities.
"The flexibility, tools and partnership with Cloudera have also allowed us to start leveraging Spark for big data analysis. Through refinement, we’re up to four different fraud data products – all made possible because of our data lake environment. As we move more towards streaming data, especially on the transaction side, we'll be able to take advantage of things like Spark Streaming and Kafka to do more real time analytics," explained Stahl.
ML models lead to fraud prevention benefits
In the past, bank employees were flooded with numerous alerts from flagged transactions, and reviewing them all was challenging. With advanced analytics, fraudulent transactions can be more easily identified and addressed. Working with Cloudera partner IBM, the bank transformed its advanced analytics using modern tools and new, open, and transparent methodologies.
The results have been impressive. A production ML model for risk scoring improved fraud capture rates by 95%, decreased false-positive alerts by 30%, and resulted in a 50% reduction in average daily dollar losses. Operationally, with fewer alerts to manage, manual interactions focus on suspicious transactions. Rules are informed by the models and adjusted dynamically when new fraud schemes arise.
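The dynamic, model-informed rules described above can be sketched simply: a rule’s alert threshold is recalibrated from recent model risk scores so that alert volume tracks actual risk, which is how fewer alerts can still capture more fraud. Everything below (function names, the 5% target alert rate, the scores) is illustrative, not Regions’ implementation.

```python
# Illustrative only: a rule whose alert threshold is recalibrated from
# recent model risk scores, so analysts see fewer, higher-quality alerts.

def recalibrate_threshold(recent_scores, target_alert_rate=0.05):
    """Set the threshold so roughly target_alert_rate of transactions alert."""
    ranked = sorted(recent_scores, reverse=True)
    cutoff_index = max(0, int(len(ranked) * target_alert_rate) - 1)
    return ranked[cutoff_index]

def should_alert(score, threshold):
    """The 'rule': alert on any transaction at or above the threshold."""
    return score >= threshold

# With 100 recent scores spread over 0.00-0.99, a 5% target keeps the top 5.
scores = [i / 100 for i in range(100)]
threshold = recalibrate_threshold(scores)
print(threshold)                                         # 0.95
print(sum(should_alert(s, threshold) for s in scores))   # 5
```

When a new fraud scheme shifts the score distribution upward, rerunning the recalibration automatically tightens or loosens the rule without manual retuning, which matches the dynamic adjustment the text describes.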
"Creating an analytics Center of Excellence, we’ve brought data into a centralized data lake, rolled out a data governance framework, applied machine learning and AI techniques, and, above all, adopted an end-to-end business approach that emphasizes value delivered by the products we create,” Misra added. “The result has been trusted analytical solutions that help reduce risk, detect fraud, assist commercial relationship managers and private wealth advisors, and provide insights into consumers so we can better meet their needs."