How to use anomaly detection in Azure machine learning

8/04/2019

Machine learning is about more than vision and speech, as Azure’s latest machine learning service shows

One key part of Microsoft’s big bet on machine learning is that these technologies need to be democratized, turned into relatively simple-to-understand building blocks that Microsoft’s developer audience can quickly learn and use in their own applications.

That’s where Azure’s Cognitive Services come in. Instead of having to understand the layers of training that go into the ResNet50 deep learning neural network, or how to build learning platforms using TensorFlow or Microsoft Cognitive Toolkit (CNTK), these services are just APIs that are ready to use. Microsoft has already trained the neural nets for these services, and it continues to tune them and use real-world operations as a foundation for future improvements. They’re cheap to use compared to the compute and storage cost of building and running your own machine learning algorithms.

The machine learning tools on Azure have rapidly become an important resource for anyone wanting to add basic artificial intelligence to an app. It’s important to know that they are limited, with a focus on three key areas: computer vision, text analysis, and speech recognition. They’re all important areas, but they are a limited subset of what can be done with modern machine learning.

Azure Cognitive Services enters a new AI area

Fortunately, the first new cognitive service to explore other aspects of machine learning recently entered beta, adding anomaly detection to the roster. Anomaly detection is an important AI tool, analyzing time-series data for items that are outside normal operating characteristics for the data source. That makes it an extremely flexible tool, because modern businesses have a lot of streamed data, from financial transactions to software logs to device telemetry. The ability to use one API across all these different feeds shouldn’t be underestimated, because it makes building appropriate software a lot easier.

Normally anomaly detection takes time to set up. You need to train your model against a large amount of data to determine what’s normal operation and what’s out of the ordinary. It’s how credit-card fraud-detection systems build a model of your spending (and of all their customers’ habits) to detect when a compromised card is used and to block any future transactions to keep losses to a minimum.

If you’re going to make that type of operation a general-purpose service, you’re going to need to be able to switch in an appropriate detection model for the type of data that’s being sent to the service. This is exactly the approach that the Azure Cognitive Services Anomaly Detector takes, with an adaptive inference engine that selects a detection model that fits the time-series data being used.

By choosing an algorithm at runtime, Microsoft is getting around the worst of the training costs of anomaly detection. The algorithm it uses may not be perfect, but it will be a lot better than having a one-size-fits-all rules engine handling anomaly detection. There’s an added benefit: You don’t have to spend significant amounts of time labeling gigabytes of training data.

Building an Anomaly Detector app

Like all Azure cognitive services, Anomaly Detector requires a subscription key, which can be generated in the Azure Portal, along with the endpoint URL for your subscription. Usefully, Microsoft provides a demo service, running in a Jupyter notebook, that you can use to quickly try out the service before using it with your own code and data.
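
Before making any calls, a minimal setup sketch in Python (the environment variable names here are an assumption; use whatever your own deployment defines):

    import os

    # Read the subscription key and endpoint URL generated in the Azure Portal.
    # The names ANOMALY_DETECTOR_KEY and ANOMALY_DETECTOR_ENDPOINT are
    # hypothetical; any secure configuration source will do.
    subscription_key = os.environ["ANOMALY_DETECTOR_KEY"]
    endpoint = os.environ["ANOMALY_DETECTOR_ENDPOINT"]

    # Every request to the service authenticates with this header.
    headers = {
        "Ocp-Apim-Subscription-Key": subscription_key,
        "Content-Type": "application/json",
    }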

In practice, you’ll send JSON-formatted data to the service via an async function. If you’re working with streamed data, you can send a moving window of time-series data with each update and detect anomalies on the last point in the series. If you’re using it to analyze batch data, you get back an array of Boolean values, one for each data point; a value of true marks an anomaly, and its index gives you the position of the anomaly in the source data array.
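
For the streaming case, a minimal sketch using Python’s requests library, and assuming the beta’s /anomalydetector/v1.0/timeseries/last/detect route, shows the pattern; only the most recent point in the moving window is judged:

    import os
    import requests

    endpoint = os.environ["ANOMALY_DETECTOR_ENDPOINT"]
    headers = {
        "Ocp-Apim-Subscription-Key": os.environ["ANOMALY_DETECTOR_KEY"],
        "Content-Type": "application/json",
    }

    def latest_point_is_anomaly(window, granularity="hourly"):
        """Send a moving window of {"timestamp", "value"} points and ask the
        service whether the most recent point in the window is anomalous."""
        body = {"series": window, "granularity": granularity}
        r = requests.post(
            endpoint + "/anomalydetector/v1.0/timeseries/last/detect",
            headers=headers,
            json=body,
        )
        r.raise_for_status()
        return r.json()["isAnomaly"]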

Using time-series data

Anomaly Detector works like most Azure platform services, offering a REST API that accepts JSON-formatted data. A C# SDK makes it easier to build code to work with the service; you can use other languages but doing so requires building REST calls by hand.

Microsoft has some restrictions on the data format: The time interval between data points has to be fixed, and although the system can accept data missing up to 10 percent of the expected points, it’s better to ensure that your data is complete. The number of data points in a batch can vary if you’re delivering data that has clear patterns. There’s a minimum of 12 points in a data set and a maximum of 8,640, with timestamps in UTC.
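
Those limits are easy to check before you call the service. A hypothetical pre-flight check in Python, written against the restrictions above:

    from datetime import datetime

    def validate_series(series):
        """Check the documented limits: 12 to 8,640 points, UTC timestamps,
        and a fixed interval between points. Because up to 10 percent of the
        expected points may be missing, gaps that are whole multiples of the
        base step are allowed."""
        if not 12 <= len(series) <= 8640:
            raise ValueError("series must contain between 12 and 8,640 points")
        times = [datetime.fromisoformat(p["timestamp"].replace("Z", "+00:00"))
                 for p in series]
        steps = [b - a for a, b in zip(times, times[1:])]
        base = min(steps)
        if any(step % base for step in steps):
            raise ValueError("time interval between points must be fixed")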

You’re not limited to using Anomaly Detector on streamed data; if you’re using a time-series database to record data, you can run it as a batch process over all your data, though that can mean sending a lot of data. This approach can help identify past issues that may have been missed, such as spotting irregular financial transactions that are indicators of fraud, or ongoing problems with machinery that may affect overall productivity. Running it over historical data can help you get the information you need to fine-tune the algorithm you’re using, making it more likely to spot issues in your particular business.
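
A batch run over historical data follows the same shape as the streaming call. This sketch assumes the beta’s /anomalydetector/v1.0/timeseries/entire/detect route and a series you’ve already pulled from your time-series store:

    import os
    import requests

    endpoint = os.environ["ANOMALY_DETECTOR_ENDPOINT"]
    headers = {
        "Ocp-Apim-Subscription-Key": os.environ["ANOMALY_DETECTOR_KEY"],
        "Content-Type": "application/json",
    }

    def find_anomalies(series, granularity="daily"):
        """Run a whole series through batch detection and return the indexes
        of the points the service flags as anomalous."""
        body = {"series": series, "granularity": granularity}
        r = requests.post(
            endpoint + "/anomalydetector/v1.0/timeseries/entire/detect",
            headers=headers,
            json=body,
        )
        r.raise_for_status()
        flags = r.json()["isAnomaly"]  # one Boolean per input point
        return [i for i, flagged in enumerate(flags) if flagged]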

Tuning Anomaly Detector

One thing to note about the Anomaly Detector API: It’s not like the other cognitive services, because you can adjust how it works with your data. As part of the JSON request, you can specify details of the period of the data, its granularity, and two options that fine-tune the algorithm’s sensitivity. One, maxAnomalyRatio, caps the proportion of data points that can be flagged as anomalous. The other, sensitivity, tunes the margin value of the algorithm; the lower the number, the larger the margin, and the fewer points reported as anomalies.
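
Concretely, those knobs travel in the JSON request body alongside the data. The values in this sketch are illustrative, not recommendations:

    series = [
        {"timestamp": "2019-03-01T00:00:00Z", "value": 32.5},
        {"timestamp": "2019-03-02T00:00:00Z", "value": 31.9},
        # ... a real request needs at least 12 points
    ]
    body = {
        "series": series,
        "granularity": "daily",
        "period": 7,              # expected seasonality, in data points
        "maxAnomalyRatio": 0.25,  # flag at most 25 percent of the points
        "sensitivity": 95,        # 0-99; lower widens the margin, so fewer anomalies
    }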

You’ve got a lot of time-series data in your applications, and it’s often hard to extract value from it. By adding a little machine learning, you can start to see what doesn’t fit normal patterns, and then use that information to construct appropriate responses.

That’s why it’s a good idea to use tools like Jupyter notebooks to explore the results and tune your detectors before you build them into code. You need to see what anomalies occur, and you need to be able to tie them to the events you need to manage. By using interactive notebooks and historical data, you can find appropriate correlations that can help you design applications that use near-real-time anomaly detection to deliver results you can understand. That’s when you can start using the Anomaly Detector APIs for real business value.
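
As a starting point for that notebook exploration, a matplotlib sketch like this one, assuming the fields the batch endpoint returns (expectedValues, upperMargins, lowerMargins, isAnomaly), overlays the service’s expectations on your raw series so you can see where the flagged points land:

    import matplotlib.pyplot as plt

    def plot_detection(series, response):
        """Overlay the service's expected values and margins on the raw data,
        marking the points flagged as anomalous."""
        values = [p["value"] for p in series]
        expected = response["expectedValues"]
        upper = [e + m for e, m in zip(expected, response["upperMargins"])]
        lower = [e - m for e, m in zip(expected, response["lowerMargins"])]
        x = range(len(values))
        plt.plot(x, values, label="observed")
        plt.plot(x, expected, label="expected")
        plt.fill_between(x, lower, upper, alpha=0.2, label="margin")
        anomalies = [i for i, a in enumerate(response["isAnomaly"]) if a]
        plt.scatter(anomalies, [values[i] for i in anomalies],
                    color="red", label="anomaly")
        plt.legend()
        plt.show()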
