
The Four Faults of Manual Anomaly Detection: Why a Fully Automated Solution is Key for Business

June 1, 2017

Featured blog by Debbie Fletcher, Independent Technology Writer

When data is the lifeblood of your company, it’s essential to know if you’re poisoned or in the early stages of sepsis. Many companies embrace a literal eyes-on approach to monitoring their vitals, but it’s easy to run out of eyeballs fast as the organization grows. Manually reviewing plots of time series data for anomalies is manageable for a small company with a few dozen metrics, and it’s surely better than no KPI monitoring at all. But even small teams realize that this approach is costly, incurs inherent delays and can’t scale as the company (and its number of metrics) grows.

Unless someone is constantly watching a live plot of a KPI (while also keeping its entire history in view), this basic approach will always suffer a delay between an anomaly’s occurrence and its detection, even if each data point is plotted in real time. Human beings are natural pattern recognizers, but a ratio of one person per metric is impractical for even the smallest companies.

Hence, BI tools came along and provided static thresholds, alerts and dashboards, giving rise to semi-automated anomaly detection. While this is better than dedicating a pair of eyes to every KPI, semi-automated anomaly detection still falls short of what companies really need, in several ways.
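
To make “semi-automated” concrete, here is a minimal sketch (in Python, with made-up metric names and bounds) of what a static-threshold check boils down to:

```python
# Hypothetical static-threshold check of the kind a BI tool wires up.
# The bounds are fixed by an analyst ahead of time, not learned.
UPPER_THRESHOLD = 5000.0   # e.g., more orders per hour than this suggests bot traffic
LOWER_THRESHOLD = 100.0    # e.g., fewer orders per hour than this suggests an outage

def check_static_threshold(metric_name: str, value: float) -> None:
    """Fire an alert only when a fixed bound is crossed."""
    if value > UPPER_THRESHOLD:
        print(f"ALERT: {metric_name}={value} above {UPPER_THRESHOLD}")
    elif value < LOWER_THRESHOLD:
        print(f"ALERT: {metric_name}={value} below {LOWER_THRESHOLD}")

# Each data point is judged in isolation: no history, no trend,
# no seasonality, just two fixed lines on a dashboard.
for reading in [4200.0, 4900.0, 5150.0]:
    check_static_threshold("orders_per_hour", reading)
```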

Detection Delay: One large drawback of this semi-automation is that alerts are generated only after a threshold is crossed. Often, a metric will change its trajectory in a telling way long before it crosses a threshold. These kinds of changes can easily be spotted by someone watching the time series in real time, but doing so defeats the purpose of semi-automating anomaly detection via static thresholds in the first place.
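
As a toy illustration of that delay, consider a metric climbing steadily toward a fixed bound of 100. A trivial slope check over a sliding window (the window size and slope cutoff below are arbitrary choices) flags the change of trajectory several points before the static alert finally fires:

```python
# Toy data: a metric that starts drifting upward at t=3 but only
# crosses the static threshold at t=9.
series = [50, 51, 50, 62, 74, 85, 93, 97, 99, 103]
THRESHOLD = 100
WINDOW = 3          # sliding-window size (an arbitrary choice)
SLOPE_CUTOFF = 5    # per-step growth considered "telling" (also arbitrary)

for t in range(WINDOW, len(series)):
    slope = (series[t] - series[t - WINDOW]) / WINDOW
    if slope > SLOPE_CUTOFF:
        print(f"t={t}: trajectory changed (slope {slope:.1f}/step), threshold not yet crossed")
    if series[t] > THRESHOLD:
        print(f"t={t}: static threshold finally crossed at {series[t]}")
```

Run as-is, the slope check first fires at t=4, five data points before the threshold alert at t=9.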

Exponential Costs: The cost of manual and semi-automated anomaly detection, measured in man-hours, rises steeply as the number of metrics grows, as automated anomaly detection vendor Anodot explains. In short, the communication overhead required for a team of analysts to detect and correlate anomalies together grows much faster than the team itself, which limits the feasibility of manual detection to a few dozen metrics at most.
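
One back-of-the-envelope way to see why coordination cost outpaces headcount (my simplification, not Anodot’s exact model): if every analyst has to coordinate with every other analyst to correlate anomalies across their metrics, the number of communication channels grows quadratically with team size:

```python
# Pairwise communication channels among n analysts: n * (n - 1) / 2.
def channels(analysts: int) -> int:
    return analysts * (analysts - 1) // 2

for n in [2, 5, 10, 20]:
    print(f"{n:>2} analysts -> {channels(n):>3} coordination channels")
# Prints 1, 10, 45 and 190 channels: doubling the team roughly
# quadruples the coordination overhead.
```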

Threshold Creep: As your company (hopefully) grows, many metrics have a tendency to trend up. For example, all other things being equal, you’d expect a city with a larger population (and thus more properties) to have more property crime. This upward drift in the metric requires an upward creep in the thresholds, which in turn requires analyst time to determine the proper upper and lower thresholds. If the metric drifts again, the whole process needs to be repeated.

And that’s if you get the thresholds right. Get them wrong and you could miss legitimate anomalies (false negatives) or flag deviations that are really due to seasonal patterns or the metric’s natural variability, i.e. noise, as anomalies (false positives).
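
For contrast, here is a rough sketch of the kind of adaptive bound that sidesteps threshold creep: recompute the bounds from a rolling window of recent history (mean plus or minus three standard deviations is a common rule of thumb; the window size and multiplier here are assumptions), so they drift with the metric instead of waiting for an analyst to re-tune them:

```python
from collections import deque
import statistics

WINDOW, K = 30, 3.0              # assumed window size and sigma multiplier
history = deque(maxlen=WINDOW)   # keeps only the most recent WINDOW points

def adaptive_bounds(value: float):
    """Learn (lower, upper) bounds from recent history instead of fixing them."""
    history.append(value)
    if len(history) < WINDOW:
        return None              # not enough history to estimate bounds yet
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    return (mu - K * sigma, mu + K * sigma)

# As the metric drifts upward, the learned bounds drift with it;
# no analyst has to revisit them.
for t in range(60):
    bounds = adaptive_bounds(1000.0 + 10.0 * t)
    if bounds and t % 10 == 0:
        print(f"t={t}: bounds = ({bounds[0]:.0f}, {bounds[1]:.0f})")
```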

Alert Storms: Static thresholds are a blunt tool, because the alerts they generate only tell you if and when a particular metric crossed its bounds. Those alerts are missing context, such as the magnitude of the anomaly relative to the metric’s own history, and its magnitude compared to anomalies on other metrics. Quantifying those two magnitudes requires a sophisticated model of each metric, which is beyond the scope and capability of the thresholds offered by traditional BI tools. Only automated anomaly detection systems built around advanced machine learning algorithms can understand an anomaly’s local and global context, and thus rank each detected anomaly as part of the reporting.

With manual and semi-automated anomaly detection, however, it’s up to your team to examine each anomaly to determine which are significant enough to investigate further. And they have to do this while keeping up with the continuous alert storms that threshold-based systems produce.
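
To make the idea of ranking concrete, here is a toy scoring scheme (my illustration, not any vendor’s algorithm): score each deviation in units of the metric’s own variability (its local context), then sort the scores across metrics (the global context) so the biggest surprises surface first:

```python
import statistics

def anomaly_score(history, value):
    """Deviation measured in units of the metric's own variability."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history) or 1.0   # guard against zero variance
    return abs(value - mu) / sigma

observations = {   # toy data: metric -> (recent history, latest reading)
    "checkout_errors": ([3, 4, 3, 5, 4, 3], 25),
    "page_views": ([9800, 10100, 9900, 10050, 9950, 10000], 10400),
}

# Rank across metrics so the biggest surprise is investigated first,
# instead of treating every threshold crossing as equally urgent.
ranked = sorted(
    ((name, anomaly_score(h, v)) for name, (h, v) in observations.items()),
    key=lambda pair: pair[1],
    reverse=True,
)
for name, score in ranked:
    print(f"{name}: {score:.1f} sigmas from normal")
```

Here checkout_errors scores roughly 26 sigmas against page_views’ 4, so it lands at the top of the report even though its absolute change is far smaller.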

Businesses need a completely automated solution

For large-scale, real-time anomaly detection to be truly automated, anomaly ranking and correlation, not just detection, must also be automated. Full automation is the only way to escape the compounding problems of detection delay, unmanageable alert storms, threshold creep and exponential labor costs, and to uncover critical issues that must be dealt with as soon as possible.
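
As a final toy sketch of what the correlation piece might look like (an illustrative approach, not any specific product’s method): anomalies that fire on different metrics within the same short time window get grouped into a single incident, so operators see one correlated event instead of a storm of separate alerts:

```python
from collections import defaultdict

anomalies = [                    # (metric, minute detected): toy data
    ("checkout_errors", 1002),
    ("payment_latency", 1003),
    ("page_views", 1640),
]
WINDOW_MIN = 5                   # assumed correlation window, in minutes

# Bucket anomalies into WINDOW_MIN-wide windows; anomalies sharing a
# bucket are reported as one incident.
incidents = defaultdict(list)
for metric, minute in anomalies:
    incidents[minute // WINDOW_MIN].append(metric)

for bucket, metrics in sorted(incidents.items()):
    print(f"incident around minute {bucket * WINDOW_MIN}: {metrics}")
```

With this toy data, checkout_errors and payment_latency collapse into one incident, while page_views is reported separately.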

