
Identifying Anomalies Through Risk Analytics: What is Your “Normal”?

September 1, 2022

You have likely heard it by now: “Data is the new oil.” Effectively analyzing the data relevant to your business has been a best practice of successful organizations for years, and analytics can be used in many ways: to identify trends that drive change, analyze failures for root causes or predict outcomes for effective planning. One of the more potent capabilities analytics provides is the ability to identify anomalies: the “exceptions” that pique interest. These could be a strange credit card transaction, an acquisition that does not align with the deal structure or an employee approving their own inputs. In any data set, analytics can surface items that, in context, seem suspicious; such exceptional items fall outside the norm or standard industry practice. Well-designed risk analytics, paired with technology and automation tools, can help organizations increase risk coverage, drive down cost and re-engineer risk management programs in a cost-effective manner.

Defining “Normal” for Your Business is Paramount

What is “normal”? It seems like a strange question to ask, but “exceptional” and “anomalous,” by definition, refer to what is out of the ordinary. Before anything can be considered “exceptional,” whether positive or negative, we must first be able to define “normal.”

What can be Anomalous Data in Risk Analysis?

For example, a credit card expense at your organization is “high” only because there is typically a lower, ordinary expense amount. Likewise, an entry booked to an account is “strange” only because the typical entries to that account look much different. In these situations, the right question to ask is not “what is exceptional?” but “what is normal?” And how exactly does analytics help organize the data and identify the anomalies?
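To make this concrete, here is a minimal sketch of one common approach: measuring each amount against the median using the median absolute deviation (MAD), a statistic that stays robust to the very outliers being sought. The amounts and the multiplier of 5 are hypothetical, not a prescribed standard.

```python
import statistics

# Hypothetical card expenses; real figures would come from your expense system.
expenses = [112.50, 98.10, 105.75, 120.00, 101.30, 2450.00, 95.60, 110.25]

median = statistics.median(expenses)
# Median absolute deviation (MAD) resists distortion by the outliers we are
# hunting, unlike a plain mean and standard deviation.
mad = statistics.median(abs(x - median) for x in expenses)

# Flag amounts whose distance from the median exceeds a chosen multiple of
# the MAD. The multiplier (5 here) is a tunable assumption.
anomalies = [x for x in expenses if abs(x - median) > 5 * mad]
print(anomalies)  # [2450.0]
```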

Anomaly Detectors: Utilizing the Right Technology To Achieve Your Objectives

Distilling data into usable metrics requires a significant amount of data, and at a certain point the work is no longer feasible without a variety of powerful analytic tools. Automated analytics tools are essential, particularly as the size of your data increases beyond what you can maintain with more general-purpose or “off-the-shelf” tools.

As always, the tool should be tailored to the task. A simple example of this would be utilizing visualization tools – Tableau, QlikView or Microsoft Power BI – supported by scripting tools – SQL or Python – to transform very large and heterogeneous data sets to show more clearly what is “normal.” But before the tools and area of interest are established, the discussion of “normal” must be centered on the objectives.
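As a sketch of that kind of transformation (the file name and column names below are assumptions, not a prescribed schema), a few lines of Python with pandas can collapse a large transaction extract into per-category baselines that a visualization tool then renders:

```python
import pandas as pd

# Hypothetical extract of card transactions with columns such as
# card_id, merchant_category and amount; the schema is an assumption.
df = pd.read_csv("transactions.csv")

# Summarize what "normal" looks like per merchant category: volume,
# typical amount and upper tail. The result feeds a dashboard in
# Tableau, QlikView or Power BI.
baseline = (
    df.groupby("merchant_category")["amount"]
      .agg(count="count", median="median", p95=lambda s: s.quantile(0.95))
      .reset_index()
)
print(baseline)
```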

Identifying a Clear Objective Is Key for Anomaly Detection

Before analytics can be performed on any data set, there must be clearly defined objectives that shape how the data is transformed to support a hypothesis of “normality.” A population of credit card transaction data will be transformed differently if the goal is to identify fraud rather than to test T&E compliance. Understanding your organization’s objectives early can optimize the process throughout. It can also be appropriate to build a tool with a feedback loop that enhances learning: a feedback loop continually updates the key metrics with ongoing information. A simple example is to utilize tools, as discussed earlier, to continuously pull in credit card purchase data and update statistics such as the normal-curve predictors. This acts both on the new data and retroactively, potentially raising flags on past transactions as well. After all, “normal” can, and will, change as you obtain more data to define it.
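One way to sketch such a feedback loop, assuming a simple running mean and standard deviation as the “normal curve” parameters, is Welford’s online algorithm. The class name, the data and the three-sigma cutoff below are illustrative, not a prescribed design:

```python
class RunningNormal:
    """Maintains a running mean/variance (Welford's algorithm) so that
    "normal" is updated with every new transaction that arrives."""

    def __init__(self):
        self.n, self.mean, self.m2 = 0, 0.0, 0.0
        self.history = []

    def add(self, amount):
        self.n += 1
        delta = amount - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (amount - self.mean)
        self.history.append(amount)

    def stdev(self):
        return (self.m2 / (self.n - 1)) ** 0.5 if self.n > 1 else 0.0

    def flags(self, k=3.0):
        # Re-score the full history against the *current* parameters, so a
        # transaction that looked normal last month can be flagged today.
        cutoff = k * self.stdev()
        return [x for x in self.history if abs(x - self.mean) > cutoff]

detector = RunningNormal()
for amount in [100.0 + i * 0.5 for i in range(15)] + [3000.0]:
    detector.add(amount)
print(detector.flags())  # [3000.0]
```

The key property is that flags() is evaluated against the latest parameters rather than the ones in force when each transaction arrived, which is what makes retroactive flagging possible.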

While it is good to capture a slice of “normal” at any given time, a vital component of identifying outliers is developing a method that constantly recaptures “normal.” Normal today will not be the same as normal tomorrow. The tools used to capture the data and perform analytics must be built with the ability to update the methodology as the organization, its tasks and its objectives change. This is the most important part: better than defining “normal” at a point in time is having a methodology in place to continually identify it. As described above, there are different ways to achieve this; both human review and machine learning can be used to steadily increase the precision of the tool.
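A rolling window is one simple way to keep “normal” current. In the sketch below (the file name, column names and 90-day window are assumptions), each day’s spend is judged against only the trailing quarter rather than an all-time average:

```python
import pandas as pd

# Hypothetical daily spend series, refreshed on a schedule so the
# baseline always reflects recent data. The schema is an assumption.
spend = pd.read_csv("daily_spend.csv", parse_dates=["date"],
                    index_col="date")["amount"]

# Recompute "normal" over a trailing 90-day window: yesterday's
# transactions are judged against yesterday's baseline.
median = spend.rolling("90D").median()
iqr = spend.rolling("90D").quantile(0.75) - spend.rolling("90D").quantile(0.25)

# The 1.5x IQR rule of thumb flags the upper tail; the multiplier is tunable.
flags = spend[spend > median + 1.5 * iqr]
print(flags)
```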

The most essential aspects of defining outliers are understanding the goals of the analytic and then defining “normal.” That definition should evolve over time, and the underlying methodology should be able to capture “normal” at any interval, so long as the data is available.

How We Can Help with Risk Analytics

Cherry Bekaert blends data and analytics with traditional approaches to help companies learn from their transactional data and formulate specific objectives to set policy or change procedures. For information on how data and analytics can improve your company’s performance, contact our Risk Analytics group for help mitigating risk and anticipating potential exposure and loss.
