
Author: Kevin Feasel

December 2020 SQL Tools Releases

Drew Skwiers-Koballa gives us an update on where SQL Server tooling is at:

The December releases of Azure Data Studio 1.25 and SQL Server Management Studio (SSMS) 18.8 are now generally available.  Additionally, the mssql extension for Visual Studio Code has recently been updated to version 1.10.0. Read on to learn more about each of these updates and grab the latest versions of SSMS, Azure Data Studio, or the mssql extension for VS Code.

Read on to learn more.

Comments closed

Power BI Composite Model Update

Matt Allington is excited:

The December 2020 version of Power BI desktop has just been released, and it is undoubtedly the most important release since the first version way back in 2015. The super feature that has been released is an update to composite models using direct query of online datasets. The implications of this release are massive. Anyone with Power BI Desktop can now build their own, local version of a data model and enhance it with their own additional data without the need to have edit access to the original data model. Oh, and it is a Pro feature, not a Premium feature!

Read on to see this in action.

Comments closed

PASS Dissolving

From the PASS board:

We are saddened to tell you that, due to the impact of COVID-19, PASS is ceasing all regular operations, effective January 15, 2021. 

Also check out their final meeting minutes (PDF):

Tim presented a recap of the non-reconciled PASS Virtual 2020 Summit numbers, showing that only $1,973,031 was brought in, falling short by $1,642,396 of the budgeted Summit revenue of $3,615,427. He went on to show that, with the Summit shortfall and no prospect of funding support from Microsoft, even if all other revenue was achieved, it puts PASS in a deficit of potentially $3.22M.

H/T Brent Ozar for the minutes.

I’m sure I’ll have more to say in other venues, but my brief thoughts are as follows:

  • PASS was an excellent institution, nearly unique among its kind by being community-driven rather than a community effort owned by a parent company.
  • Another example of such an institution that I’m familiar with was INETA. Emphasis on “was” there.
  • I appreciate everything that PASS has done. I think that they certainly fulfilled their mission and although I hate to see them go, I am grateful that they were there.
  • .NET user groups certainly didn’t die with the passing of INETA, and SQL Server user groups won’t either. At the user group level, my expectation is that it’ll be status quo. This is an advantage of the decentralized user group model.
  • I hope that the SQL Saturday property will be spun off and saved. Yes, the community could make a new SQL Saturday, but my biggest concern is getting the sponsors sorted out. I think there’s some time to do this, as virtual events are quite inexpensive, so only a limited sponsor base is required. It’s the in-person events which have the biggest monetary outlays.

Comments closed

MAX Type Variables in WHERE Clauses and Recompile

Erik Darling puts on his lab coat and goggles:

After blogging recently (maybe?) about filters, there was a Stack Exchange question about a performance issue when a variable was declared with a max type.

After looking at it for a minute, I realized that I had never actually checked to see if a recompile hint would allow the optimizer more freedom when dealing with them.

Read on for Erik’s findings.
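
As a sketch of the kind of repro involved (the table and column names here are hypothetical, not Erik's actual setup), comparing a max-typed variable filter with and without a recompile hint might look like:

```sql
-- Hypothetical example: filtering on a varchar(max) variable.
DECLARE @SearchText varchar(max) = 'SQL Server';

-- Without a hint, the optimizer builds a plan that has to work
-- for any possible value of @SearchText.
SELECT COUNT(*)
FROM dbo.Posts AS p
WHERE p.Body = @SearchText;

-- With a recompile hint, the optimizer can sniff the runtime value
-- of the variable at statement compile time.
SELECT COUNT(*)
FROM dbo.Posts AS p
WHERE p.Body = @SearchText
OPTION (RECOMPILE);
```

Comparing the two execution plans is the quickest way to see whether the hint changed anything for the max type.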

Comments closed

Cosmos DB Custom App Logic in Functions

Hasan Savran shows off how we can use user-defined functions to add custom application logic to operations in Cosmos DB:

There are a couple of things you need to know about them. First, user-defined functions are only for reading data. User-defined functions will always require more Request Units than regular SQL queries, so you should always try to solve your application logic problem with regular queries first. Get familiar with system functions and be sure that you are not trying to write a user-defined function when there is already a system function solving the same problem. System functions will always use fewer Request Units than your custom user-defined function. Just like in SQL Server, user-defined functions will cause a table scan in Cosmos DB; that is why they cost more than regular queries. If you want to use a user-defined function in a WHERE clause, try to filter by other properties too. Other properties might hit the indexes, and that will help you with Request Units.

Click through to see an example of them in action.
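
For a sense of the shape of these, a Cosmos DB user-defined function is registered as JavaScript on the container and then called from a SQL query with the `udf.` prefix. A minimal sketch (the function and property names here are made up for illustration):

```sql
-- Hypothetical JavaScript UDF registered on the container:
--   function taxedPrice(price) { return price * 1.18; }

-- Calling it from a query; note the udf. prefix.
-- Filtering on a UDF alone forces a scan, so also filter on an
-- indexed property (c.category here) where possible.
SELECT c.id, udf.taxedPrice(c.price) AS priceWithTax
FROM c
WHERE c.category = 'books' AND udf.taxedPrice(c.price) < 50
```

Pairing the UDF predicate with an indexed property filter is exactly the Request Unit mitigation Hasan describes.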

Comments closed

Row Estimates with Table Variables

Gail Shaw explains when table variables estimate one row and when they can generate estimates above one row:

At first glance, the question of how many rows are estimated from a table variable is easy.

But, is it really that simple? Well, not really. To dig into the why, first we need to identify why table variables estimate 1 row. The obvious answer is because they don’t have statistics. However…

Read on to learn the real answer.
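
To make the distinction concrete, here is a minimal sketch (using `sys.objects` as a convenient row source) of where the 1-row estimate shows up and how a recompile hint changes it:

```sql
DECLARE @t table (id int PRIMARY KEY);

INSERT INTO @t (id)
SELECT object_id FROM sys.objects;

-- Compiled before @t is populated: the plan estimates 1 row.
SELECT * FROM @t;

-- Recompiled at runtime: the estimate reflects the actual cardinality
-- of the table variable, though there are still no column statistics.
SELECT * FROM @t OPTION (RECOMPILE);
```

The actual-versus-estimated row counts in the two plans show the difference directly.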

Comments closed

A Use Case for Recursive CTEs

Jeffin Mathew takes us through a use case for recursive common table expressions:

An individual is working in HR and wants to find out which individual is managing who. This may be for several reasons, such as needing to ask the managers about the progress of their staff and whether their appraisal is coming up or is due.

Another scenario may be that the company is enrolling more staff and wants to find out the capacity of the current staff, or to find individuals who do not yet have anyone to manage, to give them the opportunity to do so.

Click through for the solution. Oftentimes, we see recursive CTEs show up in hierarchical queries like this. When the number of records is small, they work really well. The issue comes with scale; that’s when a different table design becomes important.
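
A sketch of the kind of hierarchical query described above (table and column names are assumed, not taken from Jeffin's post):

```sql
-- Hypothetical employee table with a self-referencing ManagerID.
WITH OrgChart AS
(
    -- Anchor member: employees with no manager (top of the hierarchy).
    SELECT EmployeeID, EmployeeName, ManagerID, 0 AS Depth
    FROM dbo.Employees
    WHERE ManagerID IS NULL

    UNION ALL

    -- Recursive member: join each employee to their manager's row.
    SELECT e.EmployeeID, e.EmployeeName, e.ManagerID, oc.Depth + 1
    FROM dbo.Employees AS e
    INNER JOIN OrgChart AS oc
        ON e.ManagerID = oc.EmployeeID
)
SELECT EmployeeID, EmployeeName, ManagerID, Depth
FROM OrgChart
ORDER BY Depth, ManagerID;
```

At larger scale, alternatives such as a `hierarchyid` column or a closure table avoid the repeated self-joins that make this pattern expensive.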

Comments closed

Power BI Model Documenter

Marc Lelijveld has an update for us:

First of all, I worked on the cleanness of my code. As I’m not a native developer, I have to do a lot of trial and error to get stuff working exactly as I have it in mind. From the beginning, the script contained a task to create the drop-off folder for the connection file. Though, if the folder already existed, the script wrote an error to the screen, while everything was actually going as planned. In v1.2.0 of the model documenter, I enhanced the error handling to only write errors to the screen that actually matter.

Secondly, the transcript that runs while the tool is executed generates a log file. This log file was not always entirely complete. I further enhanced the logging to make it easier to debug in case of undesirable errors.

There are more improvements as well, so check it out.

Comments closed