Financial Services to Harness Machine Learning and Artificial Intelligence to Elevate Quality of Data-Driven Decisions -- KPMG LLP
* The financial services sector is taking a hard look at the role artificial intelligence (AI) and machine learning (ML) can play in monitoring key elements of the data -- such as quality, lineage, metadata, and master reference data.
* KPMG has developed a solution called Ambient Data Management that leverages AI and ML technology to automate the process of ingesting, profiling, and analyzing data to uncover and eliminate anomalies.
"As enterprises wrestle with an exploding amount of data from a growing number of sources in a variety of different formats, it has become increasingly difficult to ensure that decisions are made based on high-quality information."
Timely access to the best information has put the issue of data quality at the center of business transformation initiatives that are critical to the continued and sustained success of established institutions over the months and years to come. The immense volume of data, residing in heterogeneous and often fragmented infrastructure spread across the enterprise, can generate a lot of noise that makes it difficult to assess what information is valid for decision making.
It is for this reason that executives in the financial services sector are taking a hard look at the role artificial intelligence (AI) and machine learning (ML) can play in monitoring key characteristics of the data -- such as quality, lineage, metadata, and master reference data -- as it moves through an enterprise data lifecycle management pipeline.
"This is important for a number of reasons—especially in the financial sector. For one thing, global financial institutions face an increasing amount of pressure from regulators, clients and internal auditors to improve the quality of their data," explains Brian Radakovich, Managing Director, KPMG Financial Services Data practice.
"Within financial services, traditional methodologies of data quality management are still often executed in a manually intensive manner that requires lots of human intervention. Given the sheer amount of data in today's environment, these traditional methods cannot ensure data quality management at scale," he says.
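KPMG has not published the internals of Ambient Data Management, but the general idea of replacing manual review with automated statistical profiling can be sketched in a few lines. The function below (an illustrative assumption, not KPMG's actual method) summarizes a numeric column and flags outliers by modified z-score using the median absolute deviation, a robust statistic that is not masked by the anomalies themselves:

```python
import statistics

def profile_column(values, threshold=3.5):
    """Profile a numeric column and flag robust outliers.

    Toy illustration of automated data profiling (not KPMG's actual
    Ambient Data Management logic): summarize the column, then flag
    values whose modified z-score -- based on the median absolute
    deviation (MAD) -- exceeds a threshold.
    """
    med = statistics.median(values)
    mad = statistics.median(abs(v - med) for v in values)
    profile = {
        "count": len(values),
        "median": med,
        "min": min(values),
        "max": max(values),
    }
    if mad == 0:
        # Column is (nearly) constant; MAD-based scoring is undefined.
        return profile, []
    anomalies = [
        v for v in values
        if 0.6745 * abs(v - med) / mad > threshold
    ]
    return profile, anomalies

# Example: daily transaction amounts with one corrupt record.
amounts = [102.5, 98.4, 101.1, 99.9, 100.3, 97.8, 101.7, 99.2, 9999.0]
profile, anomalies = profile_column(amounts)
print(anomalies)  # → [9999.0]
```

Running a check like this across every ingested column, rather than relying on human reviewers, is what makes data quality management tractable at the volumes the article describes.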
To read the full interview or listen to the podcast with Tom Haslam and Brian Radakovich, visit: