Day: August 12, 2019

For beginners, Kafka Tutorials reveals the “shape” of the problems that event streaming can solve, making it easier to recognize the domain of things you might use event streaming for. Moreover, each tutorial reliably takes you from zero to working code, step by step.

For the experienced, it’s a crucial reference guide that makes your work easier: look up how to join a stream and a table together when you’re rusty, or quickly recall how to merge discrete streams. Over time, we’ll introduce more advanced material that makes use of the entire stack.

Check it out; the post also includes a discussion of their YAML renderer.

Categorical data works well with Decision Trees, while continuous data works well with Logistic Regression.

Logistic Regression cannot handle purely categorical data in string form, so if your data is categorical, you first need to encode it as numerical data.

Each algorithm has its own uses and assumptions.
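As a minimal sketch of that conversion, here is one-hot encoding done with CASE expressions in T-SQL; the dbo.Customers table and Segment column are hypothetical stand-ins:

-- Hypothetical example: expand a string category into numeric indicator
-- columns before handing the data to something like Logistic Regression.
SELECT CustomerID,
       CASE WHEN Segment = 'Retail'    THEN 1 ELSE 0 END AS IsRetail,
       CASE WHEN Segment = 'Wholesale' THEN 1 ELSE 0 END AS IsWholesale,
       CASE WHEN Segment = 'Online'    THEN 1 ELSE 0 END AS IsOnline
FROM dbo.Customers;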

Many visualisations use colours to represent data values, either to show quantitative scales or categorical classifications. One of the most common colour metaphors in visual displays is the red-green colour scheme, sometimes known as “RAG” or “traffic light” colours. These colours convey notions of green = ‘good’ or ‘above average’ and red = ‘bad’ or ‘below average’ in some cultures, and the reverse in others. Such colour connotations are long-established and widely used, especially in financial or corporate contexts, but whilst they provide a certain immediacy for many viewers, around 4.5% of the population are colour-blind (8% of men), with red-green deficiencies such as deuteranopia being the most common. This means a significant proportion of viewers may not be able to perceive such important visual encodings.

I’m not the biggest fan of some of them, but there are some really good ideas in here.

I was asked by a colleague why his where clause wasn’t being selective when filtering on a space value. The column was a char(1) data type. To understand the curious case of the space in char(1), we need to understand how the char data type works and why its padding behaviour matters in this scenario.

The ANSI standard’s rule of padding the shorter string with spaces before comparing makes sense, but it is something you have to keep in mind in cases like this.
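A quick, self-contained illustration of that padding behaviour (this runs as-is in SQL Server):

-- char(1) is fixed-width: an empty string is stored padded out to one space,
-- and ANSI comparison rules pad the shorter operand, so '' and ' ' compare equal.
DECLARE @c char(1) = '';                                     -- stored as a single space
SELECT DATALENGTH(@c)                       AS BytesStored,  -- 1, not 0
       CASE WHEN @c = ' ' THEN 1 ELSE 0 END AS EqualsSpace,  -- 1
       CASE WHEN @c = ''  THEN 1 ELSE 0 END AS EqualsEmpty;  -- also 1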

When you’re troubleshooting parameter sniffing, the plans might not be totally different; sometimes a subtle change in index usage can really throw gas on the fire. It’s also a good example of how Key Lookups aren’t always a huge problem: both plans had them, just in different places. The change between the plans was small, but it made a big difference.
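If you want to see the effect for yourself, a classic repro sketch looks something like this; the procedure, table, and parameter values are hypothetical, not the plans from the post:

-- Hypothetical repro: the first parameter value a procedure is compiled with
-- determines the cached plan, which is then reused for very different values.
CREATE OR ALTER PROCEDURE dbo.GetOrdersByCustomer
    @CustomerID int
AS
BEGIN
    SELECT OrderID, OrderDate, TotalDue
    FROM dbo.Orders                                -- hypothetical table, skewed on CustomerID
    WHERE CustomerID = @CustomerID;
END;
GO

EXEC dbo.GetOrdersByCustomer @CustomerID = 42;     -- rare value: seek + key lookup plan cached
EXEC dbo.GetOrdersByCustomer @CustomerID = 1;      -- common value: reuses that plan, lookups pile up

EXEC sp_recompile N'dbo.GetOrdersByCustomer';      -- flush the cached plan
EXEC dbo.GetOrdersByCustomer @CustomerID = 1;      -- recompiled for the common value: likely a scan instead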

You may sometimes have reports or other processes that depend on transactional replication being current. If that is the case, you will probably need a mechanism to check whether replication is, in fact, caught up. Here is my solution, without having to resort to Replication Monitor all the time. The bonus? It can be inserted into conditional workflows to help streamline processes (e.g., validate publications before moving on to step 2 of a process).

To do this, I chose to create three stored procedures: one to check all publications on a server, one to check a single publication, and one central sproc to rule them all. You simply execute the master stored procedure, and based on the parameters you feed it, it decides which of the other two to execute.

Read on for those scripts.
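The actual scripts are behind the link, but the dispatch pattern described above looks roughly like this; every name here is a hypothetical stand-in, not the author’s code:

-- Hypothetical sketch of the “one sproc to rule them all” dispatcher:
-- with no publication named, check everything; otherwise check just the one.
CREATE OR ALTER PROCEDURE dbo.CheckReplicationStatus
    @PublicationName sysname = NULL
AS
BEGIN
    IF @PublicationName IS NULL
        EXEC dbo.CheckAllPublications;      -- hypothetical: validates every publication
    ELSE
        EXEC dbo.CheckOnePublication @PublicationName = @PublicationName;
END;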

In part 1 of this series – which I strongly recommend you read before reading this post – I showed how removing columns from a table can make a dramatic improvement to the performance of certain transformations in Power Query. In this post I’ll show some tricks taught to me by Curt Hagenlocher of the dev team that can improve performance even more.

Click through for the trick and an explanation of when it works and when it doesn’t.