A stochastic problem with application to financial theory. Some say it goes back to Warren Buffett. Thanks to my colleague Norman Markgraf, who pointed it out to me.
Assume there are two coins. One is fair, one is loaded. The loaded coin has a bias of 60-40. Now, the question is: How many coin flips do you need to be “sure enough” (say, 95%) that you found the loaded coin?
Let’s simulate the thing.
It took a few more flips than I had expected, but the number is not outlandish.
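Here’s a minimal sketch of the experiment in T-SQL rather than the original post’s code, under an assumed setup: flip both coins @Flips times apiece, guess that the coin showing more heads is the loaded one, and count how often that guess is right (ties count as misses).

```sql
DECLARE @Flips int = 100, @Trials int = 10000;

WITH N AS (
    -- Generate @Flips * @Trials rows; a cross join of sys.all_objects
    -- gives us several million rows to draw from.
    SELECT TOP (@Flips * @Trials)
           ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS rn
    FROM sys.all_objects AS a
        CROSS JOIN sys.all_objects AS b
),
Flips AS (
    SELECT rn / @Flips AS Trial,
           -- Fair coin: heads 50% of the time.
           CASE WHEN ABS(CHECKSUM(NEWID())) % 100 < 50 THEN 1 ELSE 0 END AS FairHead,
           -- Loaded coin: heads 60% of the time.
           CASE WHEN ABS(CHECKSUM(NEWID())) % 100 < 60 THEN 1 ELSE 0 END AS LoadedHead
    FROM N
),
PerTrial AS (
    SELECT Trial,
           SUM(FairHead) AS FairHeads,
           SUM(LoadedHead) AS LoadedHeads
    FROM Flips
    GROUP BY Trial
)
-- Fraction of trials in which "more heads" correctly picks the loaded coin.
SELECT AVG(CASE WHEN LoadedHeads > FairHeads THEN 1.0 ELSE 0.0 END)
           AS ShareCorrectlyIdentified
FROM PerTrial;
```

Bump @Flips upward and re-run until ShareCorrectlyIdentified clears 0.95.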
For a later blog post I have been trying to generate some workload against an AdventureWorks database.
I found this excellent blog post by Pieter Vanhove (https://blogs.technet.microsoft.com/msftpietervanhove/2016/01/08/generate-workload-on-your-azure-sql-database/), which references this 2011 post by Jonathan Kehayias.
Rob turns these into multi-threaded workload generators. If you’re looking at generating stress on servers, you might also look at PigDog, developed by Mark Wilkinson (one of my co-workers, so I have seen the look of joy on his face when he brings SQL Server to its knees).
Before I dig into my case study, I want to make it absolutely clear that these techniques will help you do a lot more than uncover fraud in your environment. My hope is that there is no fraud going on in your environment and you never need to use these tools for that purpose.
Even with no fraud, there is an excellent reason to learn and use these tools: they help you better understand your data. A common refrain from data platform presenters is “Know your data.” I say it myself. Then we do some hand-waving stuff, give a few examples of what that entails, and go on to the main point of whatever talks we’re giving. Well, this series is dedicated to knowing your data and giving you the right tools to learn and know your data.
This first post sets the scene, with subsequent posts getting into detail on the technical aspects.
One of the easiest things to fix when performance tuning queries is Key Lookups or RID Lookups. The Key Lookup operator occurs when the query optimizer performs an index seek against a specific table and that index does not have all of the columns needed to fulfill the result set. SQL Server is forced to go back to the clustered index, using the clustered index key (usually the primary key), to retrieve the remaining columns it needs to satisfy the request. A RID Lookup is the same operation performed on a table with no clustered index, otherwise known as a heap; it uses a row identifier (RID) instead of a clustering key to do the lookup.
As you can see, these can be very expensive and can result in substantial performance hits in both I/O and CPU. Imagine a query that runs thousands of times per minute and includes one or more key lookups: the extra reads generate tremendous overhead and drag down overall engine performance.
Monica’s absolutely right: key lookups can take a decent query and make it into a performance hog.
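To make that concrete, here is a hypothetical repro; dbo.Orders and its index names are my own invention, not something from Monica’s post.

```sql
CREATE TABLE dbo.Orders
(
    OrderID int IDENTITY(1, 1) CONSTRAINT PK_Orders PRIMARY KEY, -- clustered by default
    CustomerID int NOT NULL,
    OrderDate date NOT NULL,
    TotalDue money NOT NULL
);
CREATE INDEX IX_Orders_CustomerID ON dbo.Orders (CustomerID);

-- This query typically seeks IX_Orders_CustomerID and then performs a
-- Key Lookup per matching row to fetch OrderDate and TotalDue from the
-- clustered index:
SELECT CustomerID, OrderDate, TotalDue
FROM dbo.Orders
WHERE CustomerID = 42;

-- The fix: make the index covering by including the missing columns.
CREATE INDEX IX_Orders_CustomerID_Covering
    ON dbo.Orders (CustomerID)
    INCLUDE (OrderDate, TotalDue);
```

The INCLUDE clause copies the extra columns into the leaf level of the nonclustered index, so the seek alone satisfies the query and the Key Lookup drops out of the plan.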
Now, the idea is that we will join the Application.People table to the Numbers table, producing one row per number from 1 to the length of the name. Then we use that number with SUBSTRING to pull out the single character at that position. I also include the Unicode value in the output to allow for some case-sensitive operations, since UNICODE('a') <> UNICODE('A').
This is an example of how powerful tally tables can be.
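Here’s a sketch of what that query might look like. Application.People comes from the WideWorldImporters sample database; dbo.Numbers is an assumed tally table containing integers from 1 up through at least the longest FullName.

```sql
-- Explode each name into one row per character, tagging each character
-- with its position and Unicode code point.
SELECT
    p.PersonID,
    p.FullName,
    n.Number AS Position,
    SUBSTRING(p.FullName, n.Number, 1) AS Letter,
    UNICODE(SUBSTRING(p.FullName, n.Number, 1)) AS UnicodeValue
FROM Application.People AS p
    INNER JOIN dbo.Numbers AS n
        ON n.Number BETWEEN 1 AND LEN(p.FullName);
```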
SQL Server supports different kinds of authentication mechanisms and protocols: the older NTLM protocol, and Kerberos. A lot of people cringe when you mention Kerberos because, well, Kerberos is hard. It’s arcane, it’s complex, and it’s hard to even describe unless you use it on the regular.
Simply put, it’s a ticketing and key system: you, a user, request a ticket from a store, usually by authenticating to it via a username and password. If you succeed, you get a ticket that gets stored on your local machine. Then, when you want to access a resource (like a SQL Server), the client re-ups with the store you got your initial ticket from (to make sure it’s still valid), and you get a “key” to access the resource. That key is then forwarded on to the resource, allowing you to access the thing you were trying to connect to. It’s way, way more complex than this, with lots of complicated terms and moving parts, so I’m doing a lot of hand-waving, but that’s the core of the system. If that kind of stuff excites you, go Google it, and I promise you’ll get more than you ever bargained for.
Kerberos is a scary beast to me, mostly because I don’t spend enough time working directly with it.
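One thing you can do without understanding the whole protocol is check which scheme your current connection actually negotiated, using the sys.dm_exec_connections DMV:

```sql
-- Ask SQL Server which authentication scheme this session is using.
SELECT auth_scheme
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;
```

auth_scheme comes back as SQL, NTLM, or KERBEROS, which makes this a handy first stop when diagnosing why Kerberos isn’t kicking in.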
In this scenario, we’re going to keep the data for X days after it’s created, then delete it. That’s it. X could be 3 days or 3 years; it doesn’t matter, we’ll follow the same design pattern.
In today’s world, we generate loads of log data, sensor data, telemetry data, etc. All that data is super duper valuable. But only for a while. Eventually, all that granular data becomes less useful and isn’t worth keeping around. Maybe it gets aggregated or summarized, or maybe it just gets thrown out.
You’ll have a lot of data with more complex requirements, but I think you’ll also be surprised at how much data has simple date-based retention keyed to its creation.
Also read the comments, as they include additional techniques.
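For reference, the simplest version of this pattern is a batched delete keyed to the creation date. This is a minimal sketch against a hypothetical dbo.SensorLog table, not necessarily the approach the post itself takes.

```sql
DECLARE @RetentionDays int = 90;   -- the "X" above

-- Delete in small batches until nothing older than the window remains.
WHILE 1 = 1
BEGIN
    DELETE TOP (5000)
    FROM dbo.SensorLog
    WHERE CreatedDate < DATEADD(DAY, -@RetentionDays, SYSUTCDATETIME());

    IF @@ROWCOUNT = 0
        BREAK;
END;
```

Batching keeps each transaction small, so the log stays manageable and the deletes don’t hold locks for long.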