If you read Part Two, you know these are the steps I used for anomaly detection with K-means (each step is sketched in code after the list):
- Segmentation – the process of splitting your time series data into small segments with a horizontal translation.
- Windowing – the action of multiplying your segmented data by a windowing function to truncate the dataset before and after the window. The term windowing gets its name from its function: it lets you see only the data in the window range, since everything outside the window is multiplied by zero. Windowing allows you to seamlessly stitch your reconstructed data together.
- Clustering – the task of grouping similar windowed segments and finding the centroids of the clusters. A centroid sits at the center of a cluster; mathematically, it is the arithmetic mean position of all the points in the cluster.
- Reconstruction – the process of rebuilding your time series data. Essentially, you match each windowed segment of your normal time series data to its closest centroid (the predicted centroid) and stitch those centroids together to produce the reconstructed data.
- Normal Error – The purpose of the reconstruction is to calculate the normal error: the reconstruction error you get on data that is known to be normal.
- Anomaly Detection – Since you know what the normal error for reconstruction is, you can now use it as a threshold for anomaly detection. Any reconstruction error above that normal error can be considered an anomaly.
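Here's a minimal sketch of the first two steps in Python with NumPy. The segment length, slide length, and the sine-squared (Hann-style) window are assumptions for illustration, not values from the article.

```python
import numpy as np

def segment(series, segment_len, slide_len):
    """Split a 1-D series into overlapping segments by sliding a fixed-size
    window along the data (the horizontal translation)."""
    starts = range(0, len(series) - segment_len + 1, slide_len)
    return np.array([series[s:s + segment_len] for s in starts])

def window(segments):
    """Multiply each segment by a sine-squared window so values taper to zero
    at both ends; with 50% overlap these windows sum to roughly one, which is
    what lets reconstructed segments stitch together seamlessly."""
    win = np.sin(np.linspace(0, np.pi, segments.shape[1])) ** 2
    return segments * win

# Toy "normal" data: a plain sine wave (assumed; any periodic series works).
series = np.sin(np.arange(0, 100, 0.1))
windowed = window(segment(series, segment_len=32, slide_len=16))
```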
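Continuing the sketch, here are the clustering and reconstruction steps using scikit-learn's KMeans; the cluster count of 20 is an assumption, and overlap-add is one way to implement the stitching described above.

```python
from sklearn.cluster import KMeans

# Fit K-means on the windowed segments; each cluster center is the arithmetic
# mean of the segments assigned to it, i.e. the centroid described above.
kmeans = KMeans(n_clusters=20, n_init=10).fit(windowed)

def reconstruct(series_len, slide_len, segments, kmeans):
    """Replace each windowed segment with its nearest centroid (the predicted
    centroid) and overlap-add the centroids into a full-length series."""
    segment_len = segments.shape[1]
    rebuilt = np.zeros(series_len)
    labels = kmeans.predict(segments)  # index of the closest centroid per segment
    for i, label in enumerate(labels):
        start = i * slide_len
        rebuilt[start:start + segment_len] += kmeans.cluster_centers_[label]
    return rebuilt

reconstruction = reconstruct(len(series), 16, windowed, kmeans)
```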
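Finally, a sketch of the normal-error and anomaly-detection steps, reusing the variables above. Trimming the edges (where overlap-add is incomplete) and taking the maximum pointwise error as the threshold are my choices for the sketch, not necessarily the article's.

```python
# Normal error: the worst pointwise reconstruction error on known-normal data.
interior = slice(32, len(series) - 32)  # skip edges where overlap-add is partial
normal_error = np.max(np.abs(reconstruction[interior] - series[interior]))

def detect_anomalies(new_series, new_reconstruction, threshold):
    """Flag every point whose reconstruction error exceeds the normal error."""
    errors = np.abs(new_reconstruction - new_series)
    return np.where(errors > threshold)[0]

# On the normal series itself, nothing should exceed the threshold.
print(detect_anomalies(series[interior], reconstruction[interior], normal_error))
```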
Read the whole thing. This is a really cool use case of a set of technologies along with a venerable (if sometimes troublesome) algorithm.