Press "Enter" to skip to content

Curated SQL Posts

Tracking Database Logins: 5 Methods

Eugene Meidinger has a medley of options for tracking server logins:

I once had to do some auditing for a customer and it was a complicated, multi-stage process. We had to be able to demonstrate who had admin access and what kind of activity was going on, on the server. But before we could do any of that, we first had to identify who was actually logging on.

We get a brief walkthrough of each, and an important warning.
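As a hedged illustration of the kind of tooling involved (a server audit is one common approach to capturing logins; the audit name and file path below are placeholders):

USE master;
GO
-- Create an audit that writes to a file (path is a placeholder)
CREATE SERVER AUDIT LoginAudit
TO FILE (FILEPATH = N'D:\Audits\');
GO
-- Capture successful and failed logins at the server level
CREATE SERVER AUDIT SPECIFICATION LoginAuditSpec
FOR SERVER AUDIT LoginAudit
    ADD (SUCCESSFUL_LOGIN_GROUP),
    ADD (FAILED_LOGIN_GROUP);
GO
ALTER SERVER AUDIT LoginAudit WITH (STATE = ON);
ALTER SERVER AUDIT SPECIFICATION LoginAuditSpec WITH (STATE = ON);
GO
-- Read the captured events back
SELECT event_time, server_principal_name, client_ip, action_id
FROM sys.fn_get_audit_file(N'D:\Audits\*', DEFAULT, DEFAULT);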


Server-Level Triggers

Shane O’Neill makes me wish Policy-Based Management had gotten the love it needed:

Now, my normal attitude with regard to triggers tends to run to the negative. Which is horrible, because triggers are just like any other tool; neutral by themselves and only good or bad based on how we use them.

So, with that being said, I’ve forced myself to think of a positive use for them. So here is a time when I’ve used triggers for a “good” cause and used them to get some visibility on when new databases are created.

DDL triggers can be useful things, as Shane shows us.
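As a rough sketch of the pattern (the logging table and trigger names here are made up; Shane's post has his actual implementation):

USE master;
GO
CREATE TABLE dbo.DatabaseCreationLog
(
    DatabaseName nvarchar(128) NOT NULL,
    LoginName nvarchar(128) NOT NULL,
    PostTime datetime NOT NULL
);
GO
CREATE TRIGGER LogDatabaseCreation
ON ALL SERVER
FOR CREATE_DATABASE
AS
BEGIN
    -- EVENTDATA() returns XML describing the DDL event
    DECLARE @e xml = EVENTDATA();

    INSERT INTO master.dbo.DatabaseCreationLog (DatabaseName, LoginName, PostTime)
    SELECT
        @e.value('(/EVENT_INSTANCE/DatabaseName)[1]', 'nvarchar(128)'),
        @e.value('(/EVENT_INSTANCE/LoginName)[1]', 'nvarchar(128)'),
        @e.value('(/EVENT_INSTANCE/PostTime)[1]', 'datetime');
END;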


Saving Table History With Triggers

Bert Wagner shows us a way of saving table history in SQL Server using triggers:

Triggers are something that I rarely use.  I don’t shy away from them because of some horrible experience I’ve had, but rather I rarely have a good need for using them.

The one exception is when I need a poor man’s temporal table.

Check it out.  My main comment is, make sure you write the triggers to handle updating multiple rows; otherwise, you’ll be disappointed when rows go missing.
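A minimal sketch of what that looks like, with made-up table and column names (the key point is selecting from the deleted pseudo-table rather than assuming a single row):

CREATE TRIGGER dbo.tr_Customer_History
ON dbo.Customer
AFTER UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- "deleted" holds the prior version of every affected row,
    -- so multi-row updates and deletes are all captured
    INSERT INTO dbo.CustomerHistory (CustomerId, Name, Email, ValidTo)
    SELECT d.CustomerId, d.Name, d.Email, SYSUTCDATETIME()
    FROM deleted AS d;
END;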


Managing Power BI Dashboard Releases

Jesse Gorter gives us a couple examples of process for releasing Power BI dashboard changes in a corporate environment:

Corporate BI is the ‘one version of the truth’ data that must be governed and tested. The report on top of that might need to be governed too. So any changes to the model or the report (or both) need to go through some testing before they can be accepted. Hence DTAP & Power BI.

Let’s explore a case where an organization has an HR department with a manager and a data analyst. Those two make the reports and decide what they want to publish to their target users: HR employees. A developer creates the datamart and the Analysis Services Tabular model cube on which they create Power BI reports. For the first version, they create a pbix file and upload it to Powerbi.com. What happens when the business wants the report or the model changed?

Read on for two potential solutions.


Table Swaps With Triggers

Jay Robinson walks through the process of making a breaking change to a large, active table with limited downtime:

I can only recall one time in the past several years (at least a decade) that I’ve found triggers to be useful. It involves data migration.

The problem: You have a massive, high-activity table. Let’s call this Table_A. You need to make significant changes to it. For example, the clustered index needs to change. How do you accomplish this?

I’ve used a similar process with good success in the past.
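The general shape of that process, as a hedged sketch (names and columns are hypothetical; Jay's post covers the real steps, batching, and caveats):

-- Step 1: build the replacement table with the new clustered index
CREATE TABLE dbo.Table_B
(
    Id bigint NOT NULL,
    Payload nvarchar(100) NULL,
    CONSTRAINT PK_Table_B PRIMARY KEY CLUSTERED (Id)
);
GO
-- Step 2: a trigger mirrors ongoing changes while the backfill runs
CREATE TRIGGER dbo.tr_Table_A_Sync
ON dbo.Table_A
AFTER INSERT, UPDATE, DELETE
AS
BEGIN
    SET NOCOUNT ON;

    -- Remove prior versions of any affected rows, then re-insert
    -- the current versions; this handles inserts, updates, and deletes
    DELETE b
    FROM dbo.Table_B AS b
    INNER JOIN deleted AS d ON d.Id = b.Id;

    INSERT INTO dbo.Table_B (Id, Payload)
    SELECT i.Id, i.Payload
    FROM inserted AS i;
END;
GO
-- Step 3: backfill historical rows in batches (not shown),
-- then swap the tables in one short transaction
BEGIN TRANSACTION;
EXEC sp_rename 'dbo.Table_A', 'Table_A_Old';
EXEC sp_rename 'dbo.Table_B', 'Table_A';
COMMIT;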


Disambiguating The Confusion Matrix

John Cook walks through a set of valuable terms derived from the core components of the confusion matrix:

How many terms are possible? There are four basic ingredients: TP, FP, TN, and FN. So if each term may or may not be included in a sum in the numerator and denominator, that’s 16 possible numerators and 16 denominators, for a total of 256 possible terms to remember. Some of these are redundant, such as one (a.k.a. ONE), given by TP/TP, FP/FP, etc. If we insist that the numerator and denominator be different, that eliminates 16 possibilities, and we’re down to a more manageable 240 definitions. And if we rule out terms that are the reciprocals of other terms, we’re down to only 120 definitions to memorize.

And of those, John points out the handful which are generally important, providing us an excellent table with definitions of commonly-used terms.
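For reference, a few of those commonly-used terms in standard notation:

sensitivity (recall) = TP / (TP + FN)
specificity          = TN / (TN + FP)
precision (PPV)      = TP / (TP + FP)
accuracy             = (TP + TN) / (TP + FP + TN + FN)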


Thoughts On The Evolution Of Big Data

Praveen Sripati shares an opinion on where the various Hadoop and Big Data platforms are headed:

The different Cloud vendors have been offering Big Data as a service for quite some time. Athena, EMR, RedShift, and Kinesis are a few of the services from AWS. There are similar offerings from Google Cloud, Microsoft Azure, and other Cloud vendors as well. All these services are native to the Cloud (built for the Cloud) and provide tight integration with the other services from the Cloud vendor.

In the case of Cloudera, MapR, and HortonWorks, the Big Data platforms were not designed with the Cloud in mind from the beginning; the platforms were later plugged or force-fitted into the Cloud. The Open Hybrid Architecture Initiative is an initiative by HortonWorks to make their Big Data platform more and more Cloud native.

It’ll be interesting to see where this goes.


Databricks Cluster-Scoped Init Scripts

Aayush Bhasin shares some background on a Databricks intern project, adding cluster-scoped initialization scripts to Databricks clusters:

One of the biggest pain points for customers used to be that init scripts for a cluster were not part of the cluster configuration and did not show up in the User Interface. Because of this, applying init scripts to a cluster was unintuitive, and editing or cloning a cluster would not preserve the init script configuration. Cluster-scoped init scripts addressed this issue by including an ‘Init Scripts’ panel in the UI of the cluster configuration page, and adding an ‘init_scripts’ field to the public API. This also allows init scripts to take advantage of cluster access control.

Read on to see how Aayush & co. solved this issue.
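For a sense of what the API side looks like, a cluster definition can carry the new field along these lines (the cluster name and script path are placeholders, and the surrounding JSON is abbreviated):

{
  "cluster_name": "example-cluster",
  "init_scripts": [
    { "dbfs": { "destination": "dbfs:/databricks/scripts/my-init.sh" } }
  ]
}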


A Look At Automatic Plan Tuning In SQL Server 2017

John Sterrett shows us how to review automatic plan tuning suggestions:

SQL Server 2017 Automatic Tuning looks for queries where execution plans change and performance regresses. This feature depends on Query Store being enabled. Note, even if you don’t turn on Automatic Tuning you still get the benefits of having access to the data. That is right. Automatic Tuning would tell you what it would do if it was enabled.  Think of this as free performance tuning training.  Go look at the DMVs and try to understand why the optimizer would want to lock in an execution plan. We will actually go through a real-world example:

Click through for the example.  I wouldn’t automatically trust these automatic results, but my experience has been generally positive.
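The DMV John refers to is sys.dm_db_tuning_recommendations; a query along these lines (following the pattern in Microsoft's documentation) shows what the engine would do if Automatic Tuning were enabled:

SELECT
    reason,
    score,
    script = JSON_VALUE(details, '$.implementationDetails.script'),
    planForceDetails.*
FROM sys.dm_db_tuning_recommendations
CROSS APPLY OPENJSON(details, '$.planForceDetails')
    WITH
    (
        query_id int '$.queryId',
        regressed_plan_id int '$.regressedPlanId',
        recommended_plan_id int '$.forcedPlanId'
    ) AS planForceDetails;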


Customizing SSMS Using PowerShell

Josh Simar shows us that you don’t need a GUI to customize SQL Server Management Studio:

There has been many a blog post out there that shows you some of the great ways you can customize the look and feel of your management studio windows. One of my favorite recently published ones is from the great people at Brent Ozar Unlimited (Brent himself in this case) here.

There is even many a blog post out there about how you can export your settings from one instance and import them into another here.

All of the blog posts I’ve read however have one problem for me. They seem very “clicky”, as in a lot of clicking of the mouse (or keyboard shortcuts) has to happen every time you want to import your settings, and I thought there has to be a more programmatic way.

After all, many of us espouse the DRY (Don’t Repeat Yourself) technique when coding. Why wouldn’t we want to trim time and energy off of something as basic as a settings import?

And Josh has that solution.
