The basic idea here is “alert me if something has changed dramatically.” If there’s a corner of my business that has spiked or crashed in a big way, I want to know. If something has dramatically improved in a particular region, I may want to dive in and see whether it’s something we can replicate elsewhere. And if something has fallen off a cliff, well, I need to know that for obvious reasons too. Both kinds of dramatic change, positive and negative, can easily be obscured by overall aggregate values (in that sense, this is a similar theme to “Sara Problem?”).
So the first inclination is to evaluate distance from average performance. That might be fine in high-volume situations, but when we subdivide the business into hundreds or even thousands of micro-segments, we end up looking at smaller and smaller sample sizes, and “normal” variation can legitimately be more random than we expect.
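To make that small-sample point concrete, here’s a minimal sketch (in Python rather than the tools discussed here, and with made-up segment names and numbers) of scaling the alert threshold by sample size instead of using raw distance from average. It compares each segment’s current rate to its baseline in standard errors, so a tiny segment needs a much bigger raw swing before it triggers:

```python
import math

def flag_anomalies(segments, z_threshold=3.0):
    """Flag segments whose current rate sits more than z_threshold
    standard errors away from their historical baseline rate.

    `segments` is a list of dicts with hypothetical keys:
    name, baseline_rate, successes, trials.
    """
    flagged = []
    for s in segments:
        p0 = s["baseline_rate"]
        n = s["trials"]
        if n == 0:
            continue
        p = s["successes"] / n
        # Standard error of a proportion shrinks with sqrt(n), so the
        # same raw gap means much more in a high-volume segment.
        se = math.sqrt(p0 * (1 - p0) / n)
        if se == 0:
            continue
        z = (p - p0) / se
        if abs(z) >= z_threshold:
            flagged.append((s["name"], round(z, 2)))
    return flagged

segments = [
    # Big segment: a 2-point drop on 10,000 trials is a real signal.
    {"name": "US/Web", "baseline_rate": 0.10, "successes": 800, "trials": 10000},
    # Tiny segment: the exact same 8% rate on 50 trials is within noise.
    {"name": "NZ/Kiosk", "baseline_rate": 0.10, "successes": 4, "trials": 50},
]
print(flag_anomalies(segments))  # → [('US/Web', -6.67)]
```

Both segments dropped from 10% to 8%, but only the high-volume one gets flagged, which is exactly the behavior a plain distance-from-average rule would miss.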
This looks really cool. If you read the comments, Rob notes that performance does break down at some point; if you start hitting that point, I’d think about shifting the computation to R.