Press "Enter" to skip to content

Curated SQL Posts

Power BI Keyboard Commands

Meagan Longoria shows off some of the new keyboard commands in Power BI:

The March 2019 release of Power BI Desktop has brought us keyboard accessible visual interactions. One of Power BI’s natural strengths is that you can click on a data point within a visual and have it cross-highlight or cross-filter the other visuals on a page. But keyboard-only users weren’t able to use this feature until now. This greatly raises the accessibility of the Power BI report consumption experience.

Click through to see a few of these shortcuts in action.

A Near-Zero Downtime Case Study

I have a post covering an example of making significant changes with near-zero downtime:

This is where we start the decline phase in our story. Our temporary procedures existed as a bridge from the old procedures which took ClientID and new procedures which will take ProfileID. With our final versions of procedures, we replace @ClientID with @ProfileID in the input parameters and update any conditional logic within filters to use ProfileID instead of ClientID.

The only remaining use of ClientID in these procedures is as an output in select statements, as we still need that for the old code; by this time, all of those references are ProfileID AS ClientID. Otherwise, these new procedures are built around ProfileID.

We still have the original procedures that the application code uses and they reference our now-finalized procedures. These procedures are now transition procedures—we need them right now but as we move application code over to the new procedures, we can deprecate these. But I’m getting ahead of myself a little bit.

Click through for the full story, as well as a bunch of pictures which completely understate the vast amount of work involved.
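
To illustrate the pattern in miniature, here is a simplified sketch; the procedure, table, and column names are stand-ins of my own rather than anything from the actual system:

-- Finalized procedure: built around ProfileID, with ClientID surviving
-- only as an output alias for older code.
CREATE OR ALTER PROCEDURE dbo.Customer_GetOrders_v2
    @ProfileID INT
AS
BEGIN
    SELECT
        o.OrderID,
        o.ProfileID AS ClientID,  -- old callers still expect a ClientID column
        o.OrderDate
    FROM dbo.Orders o
    WHERE o.ProfileID = @ProfileID;
END;
GO

-- Transition procedure: keeps the signature the application still calls,
-- maps the old key to the new one, and delegates to the finalized version.
CREATE OR ALTER PROCEDURE dbo.Customer_GetOrders
    @ClientID INT
AS
BEGIN
    DECLARE @ProfileID INT;

    SELECT @ProfileID = p.ProfileID
    FROM dbo.Profiles p
    WHERE p.ClientID = @ClientID;

    EXEC dbo.Customer_GetOrders_v2 @ProfileID = @ProfileID;
END;
GO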

T-SQL Tuesday 112 Roundup

Shane O’Neill has a roundup of T-SQL Tuesday #112:

I’m going to keep this intro short and sweet. I’d like to say it’s because I know these roundups are for acknowledging and thanking the people who have contributed, and this is the case! Mainly though it’s because I wrote this bit last and I’ve already written… checks … 2,795 words!

So thank you, one and all, for participating. I never thought the posts would raise the feelings of happiness, sadness, thoughtfulness, and appreciative-ness that this one has.

Read on for the full list and Shane’s thoughts.

Using Extended Events with Azure Data Studio

Erin Stellato shows how we can use the Extended Events Profiler (oh, how I dislike that name) with Azure Data Studio:

To clarify, the extension in ADS is like XEvent Profiler in Management Studio (which also is built using Extended Events). The name “SQL Server Profiler” is confusing, as this is not the same tool (UI) that’s been available since SQL Server 7.0.

To install the extension, click on it, and then select Install. Once it’s installed, you can select Reload and it will move into the top half of the window under Enabled. Notice that when you select the extension, information about how to use it also appears.

Erin has a lot of useful information here, so check it out.
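
For context, XEvent Profiler-style tools sit on top of event sessions along these lines; the session name and event list here are illustrative, not what the extension actually creates under the covers:

-- A minimal live-trace style session: batch and RPC completions,
-- with a few useful actions attached to each event.
CREATE EVENT SESSION [QuickTrace] ON SERVER
ADD EVENT sqlserver.rpc_completed
    (ACTION (sqlserver.client_app_name, sqlserver.database_name, sqlserver.sql_text)),
ADD EVENT sqlserver.sql_batch_completed
    (ACTION (sqlserver.client_app_name, sqlserver.database_name, sqlserver.sql_text))
ADD TARGET package0.ring_buffer  -- in-memory target, suited to live viewing
WITH (MAX_MEMORY = 4096 KB, STARTUP_STATE = OFF);
GO

ALTER EVENT SESSION [QuickTrace] ON SERVER STATE = START;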

Contrasting Oracle’s Architecture with SQL Server’s

Kellyn Pot’vin-Gorman helps us understand where Oracle’s architecture differs from SQL Server’s:

The first thing you’ll notice is what Oracle refers to as an INSTANCE is different to what SQL Server calls one.

Oracle’s instance is most closely related to what SQL Server calls a database (although it includes the files that Oracle includes in their description, too), and the Oracle home is *relatively* SQL Server’s version of an instance.

Read on for a deeper comparison from someone who has spent quite a bit of time working with both platforms.

Data Modeling: Not Everything Is VARCHAR(8000)

Bert Wagner gives us several reasons why VARCHAR(8000) can be a bad fit for data types:

When first learning SQL Server, I had the brilliant idea of defining every column on my table as VARCHAR(8000). After all, the VARCHAR datatype only consumes as much storage space as the string itself (plus two bytes), so even if I defined the max length of a column as 8000, I wouldn’t be wasting storage space if my data was smaller than that.

My development life would be easier too since I wouldn’t have to think about the kind of data I was storing in my columns; I could define everything as VARCHAR(8000) once and never have to go back to make any changes. Brilliant!

While I was correct about not wasting storage space, it turns out the idea of making every column VARCHAR(8000) is a terrible idea.

Click through to learn why.
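
As a teaser, one classic consequence (whether or not it’s the one Bert leads with) involves memory grants: when sizing sorts and hashes, the optimizer assumes variable-length columns are half full, so the declared width matters even when the actual data is tiny. A contrived sketch, with illustrative names:

-- Identical data, wildly different declared sizes.
CREATE TABLE dbo.Customers_Wide
(
    FirstName VARCHAR(8000),
    LastName  VARCHAR(8000),
    Email     VARCHAR(8000)
);

CREATE TABLE dbo.Customers_Sized
(
    FirstName VARCHAR(50),
    LastName  VARCHAR(50),
    Email     VARCHAR(320)
);

INSERT INTO dbo.Customers_Wide  VALUES ('Bert', 'Wagner', 'bert@example.com');
INSERT INTO dbo.Customers_Sized VALUES ('Bert', 'Wagner', 'bert@example.com');

-- The sort below requests a memory grant sized as if each column held
-- ~4,000 bytes; the right-sized table would request a tiny fraction of that.
SELECT FirstName, LastName, Email
FROM dbo.Customers_Wide
ORDER BY LastName;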

MRAppMaster Errors Running MapReduce Jobs

I have a post looking at potential causes when PolyBase MapReduce jobs are unable to find the MRAppMaster class:

Let me tell you about one of my least favorite things to see in PolyBase:

Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster

This error is not limited to PolyBase but is instead an issue when trying to run MapReduce jobs in Hadoop. There are several potential causes, so let’s cover each of them as they relate to PolyBase and hopefully one of these solves your issue.

Click through for four potential solutions to what ails you.
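
As a hint at the flavor of fix involved: this error often means YARN cannot locate the MapReduce framework classes, so one common remedy is making sure yarn.application.classpath is defined in the yarn-site.xml that PolyBase reads. The value below is the stock Hadoop default, offered as a sketch; yours needs to match your distribution’s actual paths:

<!-- Illustrative default value; substitute your distribution's real paths. -->
<property>
    <name>yarn.application.classpath</name>
    <value>$HADOOP_CONF_DIR,$HADOOP_COMMON_HOME/share/hadoop/common/*,$HADOOP_COMMON_HOME/share/hadoop/common/lib/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/*,$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*,$HADOOP_YARN_HOME/share/hadoop/yarn/*,$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*</value>
</property>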

Sizing Azure SQL Database

Arun Sirpal takes us through finding the right size for Azure SQL Database:

Do you want to identify the correct Service Tier and Compute Size (what was once known as the performance level) for your Azure SQL Database? How would you go about it? Would you use the DTU (Database Transaction Unit) calculator? What about the new pricing model, vCore? How would you translate your current on-premises workload to the cloud?

It can be a form of trial and error, especially if you are new to this, but I really do recommend trying out the PowerShell script that you can access once you have installed DMA (Database Migration Assistant).

Read on to see how to run this tool and potentially save some money.

Pivoting Performance Counter Data

Dave Bland shows how you can build a dynamic pivot to see performance counter data over a stretch of time:

The next step is to write the code to capture the counter values and insert the data into the temporary table created above. Because we need to capture the values over a period of time, the WAITFOR DELAY is used. In this case the delay is 10 seconds, although you can change this to suit your needs. Of course, don’t forget to increment the counter variable. You will need to determine what counters you would like to capture. Notice in the WHERE clause, we are looking for an instance_name of '' or '_total'. This will allow the code to capture only one row for each counter. The number 10 is the number of times we want to capture the counter values. If you want to capture the data more frequently, simply modify the number of seconds in the WAITFOR DELAY. Here is a link to my post on this topic, WAITFOR.

Dynamic pivoting in SQL is unnecessarily difficult, especially compared to languages like R.
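
For reference, here is a condensed sketch of the approach; the counters, table definition, and names are illustrative rather than Dave’s exact code:

-- Capture the chosen counters ten times, ten seconds apart.
CREATE TABLE #CounterCapture
(
    CaptureID   INT,
    CounterName NVARCHAR(128),
    CntrValue   BIGINT
);

DECLARE @i INT = 1;

WHILE @i <= 10
BEGIN
    INSERT INTO #CounterCapture (CaptureID, CounterName, CntrValue)
    SELECT @i, RTRIM(counter_name), cntr_value
    FROM sys.dm_os_performance_counters
    WHERE counter_name IN (N'Page life expectancy', N'Batch Requests/sec')
        AND instance_name IN (N'', N'_Total');  -- one row per counter

    SET @i += 1;  -- increment the counter variable

    WAITFOR DELAY '00:00:10';  -- the sampling interval; adjust to taste
END;

-- Build the pivot column list dynamically: one column per capture.
-- STRING_AGG requires SQL Server 2017+; use FOR XML PATH on older versions.
DECLARE @cols NVARCHAR(MAX), @sql NVARCHAR(MAX);

SELECT @cols = STRING_AGG(QUOTENAME(CONVERT(NVARCHAR(10), CaptureID)), N', ')
                   WITHIN GROUP (ORDER BY CaptureID)
FROM (SELECT DISTINCT CaptureID FROM #CounterCapture) AS c;

SET @sql = N'SELECT CounterName, ' + @cols + N'
FROM #CounterCapture
PIVOT (MAX(CntrValue) FOR CaptureID IN (' + @cols + N')) AS p;';

EXEC sys.sp_executesql @sql;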

Cleaning Up After Yourself in Azure Data Factory

Rayis Imayev shows how you can automatically delete old files in Azure Data Factory:

File management may not be at the top of my list of priorities during data integration projects. I assume that once I learn enough about the sourcing data systems and the target destination platform, I’m ready to design and build a data integration solution between two or more connecting points. Then a historical file management process becomes a necessity, as does a need to log and remove some of the incorrectly loaded data files. Basically, a step in my data integration process to remove (or clean) such files would be helpful.

Click through to see how to do this.
