
Author: Kevin Feasel

Tools for Optimizing Azure SQL MI Performance

Rie Merritt breaks out the toolbox:

Azure SQL Managed Instance provides options within and outside the Azure portal for troubleshooting and optimizing performance. Within the portal, you can leverage automatic tuning and Intelligent Insights. Outside of the Azure portal, you can take advantage of the capabilities that are already in the database engine, such as Query Store and dynamic management views (DMVs). In addition, Microsoft offers several monitoring options that are in preview: Azure SQL Insights inside Azure Monitor, which requires an agent on a VM you own, Azure SQL Analytics, and Azure diagnostic telemetry.

Automatic tuning in SQL Managed Instance supports FORCE LAST GOOD PLAN, which identifies queries using an execution plan that is slower than the previous good plan and forces them to use the last known good execution plan. Because the system automatically monitors workload performance, it dynamically adjusts as workloads change to force the best-performing query execution plan.
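
If you just want the syntax, enabling this at the database level is a one-liner, and there is a DMV for reviewing what the engine has recommended or applied. A quick sketch (mine, not from Rie's post):

    -- Enable the FORCE_LAST_GOOD_PLAN automatic tuning option on the current database
    ALTER DATABASE CURRENT
    SET AUTOMATIC_TUNING (FORCE_LAST_GOOD_PLAN = ON);

    -- Review the tuning recommendations the engine has identified
    SELECT name, type, reason, state, details
    FROM sys.dm_db_tuning_recommendations;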

Many of the things Rie describes are also available on-premises, though Azure SQL Analytics is only available in Azure SQL DB and Azure SQL MI, as of the time of this post.


Indexes and Stats on Tables with Always Encrypted

Matthew McGiffen gives us the low-down:

In a previous post we looked at executing queries against columns encrypted using Always Encrypted. In this short post we look at whether you can have indexes on those columns. This information is relevant to the case where you are using Always Encrypted without enclaves; we'll look at working with enclaves later on.
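
The short version: deterministic encryption supports equality comparisons and indexes, while randomized encryption does not (at least without enclaves). A rough sketch of what that looks like, using a placeholder column encryption key name:

    -- Deterministic encryption permits equality comparisons and indexes;
    -- string columns must use a BIN2 collation. CEK_Auto1 is a placeholder
    -- column encryption key name.
    CREATE TABLE dbo.Employees
    (
        EmployeeId INT IDENTITY(1,1) PRIMARY KEY,
        NationalIdNumber CHAR(11) COLLATE Latin1_General_BIN2
            ENCRYPTED WITH
            (
                COLUMN_ENCRYPTION_KEY = CEK_Auto1,
                ENCRYPTION_TYPE = DETERMINISTIC,
                ALGORITHM = 'AEAD_AES_256_CBC_HMAC_SHA_256'
            ) NOT NULL
    );

    -- This index works because the column is deterministically encrypted;
    -- it would fail on a column using ENCRYPTION_TYPE = RANDOMIZED.
    CREATE NONCLUSTERED INDEX IX_Employees_NationalIdNumber
        ON dbo.Employees (NationalIdNumber);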

Click through to see how this all works.


A Primer on Microsoft Fabric Notebooks

Leila Etaati provides an explanation:

In Fabric, there are tools for different user personas to work with. For example, for a citizen data analyst, Dataflows and Power BI Datasets are the tools with which the analyst can build the data model. For data engineers and data scientists, one of the tools is the Notebook.

The Notebook is a place to write and run code in languages such as PySpark (Python), Spark (Scala), Spark SQL, and SparkR (R). These languages are usually familiar to data engineers and data scientists. The Notebook provides an editor to write code in these languages, run it in the same place, and see the results. Consider it the coding tool for the data engineer and data scientist.
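
As a tiny taste of what a notebook cell looks like, here is a PySpark snippet; the file path and column name are made up for illustration, and spark is the session object the Fabric notebook already exposes:

    # Read a file from the lakehouse's Files area and do a quick aggregation.
    # "Files/sales/sales.csv" and the Region column are illustrative names.
    df = spark.read.option("header", "true").csv("Files/sales/sales.csv")

    df.groupBy("Region").count().show()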

Click through for a video, as well as a regular blog post.


Power BI Visual Has Exceeded the Available Resources

Chris Webb diagnoses an error:

One of my most popular blog posts of the last few years is this one on the “Visual has exceeded available resources” error in the Power BI Service:

https://blog.crossjoin.co.uk/2020/01/20/visual-has-exceeded-the-available-resources-error-power-bi/

This error only used to appear in the Power BI Service, but the good news is – and trust me, this is good news – it may now appear in Power BI Desktop too following the May 2023 release.

It's not often that getting a new error counts as good news, but having Desktop replicate the Service's behavior means you can find during initial development issues that would otherwise only pop up in production.


Code Is a Liability

Nate McMaster (indirectly) talks dollars and cents:

Early in my software engineering career, a senior engineer at Microsoft told me “the best solution is one that requires no new code.” At the time, I thought this was nonsense. Is not my role as a software engineer to write code? Why would writing less or no code be better? More code means more bug fixes, more features, more services, and more tools. So why is more not always better?

Fast forward to 2023 – now I am the most senior engineer on a team, and I give the same guidance. Prefer solutions that require less or no code.

What led to this shift in perspective? 

Read on for Nate’s answer, which is well-written and makes a lot of sense. It’s also close to a topic I’ve written about in the past.


Verti-Parquet and DirectLake in Fabric

Jordan Witcombe provides an explanation:

The VertiPaq engine cleverly uses columnar storage for efficient querying and processing. It employs multiple compression techniques, including Run-Length Encoding (RLE) and Dictionary Encoding, to minimise storage space. Through finding optimal sort orders and value encoding, it achieves maximum space efficiency and performance. VertiPaq also utilises ‘In-Memory Column Store’ for fast query performance, ‘Predicate Pushdown’ to eliminate unnecessary data at query time, and ‘Block Decompression’ to only decompress relevant data blocks, making it a powerhouse for data management and retrieval.

Now, because of these ingenious tricks, we wave goodbye to traditional file formats like JSON or CSV. Instead, all data stored within the managed area of Fabric and OneLake uses either Parquet or Delta. It’s time to embrace these efficient, high-performing formats that bring the best out of VertiPaq’s compressive power. Let’s explore these further in the next section.

Read on for some comparisons in file size between Fabric and Databricks, as well as how they perform in Power BI.
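
If you want a quick, do-it-yourself feel for why columnar formats win on repetitive data, a toy comparison like the following (my own sketch, not Jordan's benchmark) makes the point: the repeated country values compress dramatically better in Parquet than in CSV.

    import os
    import pandas as pd

    # A deliberately repetitive column: dictionary and run-length style encodings
    # in columnar formats handle this sort of data very efficiently.
    df = pd.DataFrame({
        "country": ["United Kingdom"] * 500_000 + ["United States"] * 500_000,
        "amount": range(1_000_000),
    })

    df.to_csv("sample.csv", index=False)
    df.to_parquet("sample.parquet")  # requires pyarrow (or fastparquet)

    print(f"CSV:     {os.path.getsize('sample.csv'):,} bytes")
    print(f"Parquet: {os.path.getsize('sample.parquet'):,} bytes")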


Union in KQL

Robert Cain combines some tables:

In today's post we will look at the union operator. A union combines data from two or more tables into a single result set.

Unlike the join, which was covered in my previous post Fun With KQL – Join, the union does not combine the columns from each table into single rows. Rather, it returns rows from the first table, then rows from the second table, then, if supplied, the third, fourth, and so on.
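
In its simplest form it looks something like this (table and column names here are illustrative); the withsource option is handy for tracking which table each row came from:

    // Combine rows from two tables into one result set; columns are matched by name,
    // and the SourceTable column records which table each row originated from.
    union withsource=SourceTable StormEvents, TrafficEvents
    | where StartTime > ago(7d)
    | summarize Rows = count() by SourceTable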

Read on to see how union works as an operator and for several examples.


Ranking with DAX

Marco Russo and Alberto Ferrari break the ties:

Until now, DAX offered two ranking functions: RANK.EQ and RANKX. While RANK.EQ is very seldom used, RANKX has been the primary ranking function for a long time. We wrote several articles about RANKX: one of them deals with ranking on multiple columns, a scenario that requires some DAX acrobatics to solve, as described in RANKX on multiple columns with DAX and Power BI.

The new RANK function makes ranking on multiple columns much easier because it offers sorting by multiple columns as a native feature, since it belongs to the family of window functions. Thanks to the concept of the current row in window functions, RANK also helps with a subtle issue with RANKX which, despite being rare, might affect ranking on decimal numbers, as described in Use of RANKX with decimal numbers in DAX.
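
As a rough sketch of the difference (the measure and column names here are mine, not from the article), the window-function version lets you put ties behavior and multiple sort criteria straight into ORDERBY:

    -- The long-standing approach
    Customer Rank (RANKX) =
    RANKX ( ALLSELECTED ( Customer[Name] ), [Sales Amount] )

    -- The window-function approach: multiple sort criteria expressed natively
    Customer Rank (RANK) =
    RANK (
        DENSE,
        ALLSELECTED ( Customer[Name] ),
        ORDERBY ( [Sales Amount], DESC, Customer[Name], ASC )
    )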

Read on to see how you can use RANK() and how it compares to RANKX().


Commits, Auto versus Manual

Chen Hirsh switches platforms and hits a nasty surprise:

Anyway, I altered some views and stored procedures in the development environment, and after getting approval from the testers, tried to make the same changes in the production environment.

And then something strange occurred: I ran an alter view command, and DBeaver told me the command ran successfully. But when I ran a select from that view, it still had the old logic from before the change. Right-clicking the view and getting its code showed that it really hadn't changed.

I solved that mystery quickly, though I’ve also worked with auto-commit platforms (SQL Server) and platforms where the default commit is manual (Oracle, Postgres) and have been burned the same way in the past.
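
If you find yourself in the same spot, the fix is simply to remember the explicit commit (or switch the client to auto-commit). A generic sketch with illustrative object names:

    -- In a client session running in manual-commit mode, the change is not
    -- visible to other sessions (and may be rolled back) until you commit.
    CREATE OR REPLACE VIEW sales_summary AS
    SELECT region, SUM(amount) AS total_amount
    FROM sales
    GROUP BY region;

    COMMIT;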


Creating an Azure DevOps YAML Pipeline for SQL Server Deploys

Olivier Van Steenlandt updates to the new Azure DevOps model:

In one of my previous blog posts, I used the SQL Server database deploy task to deploy my DACPAC to SQL Server. Unfortunately, this task became deprecated in Release Pipelines. In this blog post, I would like to share the alternative.

Additionally, we will be moving from a Classic Release pipeline to a YAML pipeline. The YAML pipeline will be responsible for building and deploying our Database Projects.
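
For reference, a minimal YAML pipeline for this pattern might look something like the sketch below: build the .sqlproj, then publish the DACPAC with SqlPackage. The task choices, variable names, and paths are placeholders and not necessarily what Olivier uses in the walkthrough.

    trigger:
      - main

    pool:
      vmImage: windows-latest

    steps:
      # Build the database project to produce a DACPAC
      - task: VSBuild@1
        displayName: Build database project
        inputs:
          solution: '**/*.sqlproj'
          configuration: Release

      # Publish the DACPAC with the SqlPackage command-line tool
      # (assumes SqlPackage is available on the agent)
      - script: >
          SqlPackage /Action:Publish
          /SourceFile:"$(Build.SourcesDirectory)\MyDatabase\bin\Release\MyDatabase.dacpac"
          /TargetServerName:"$(SqlServerName)"
          /TargetDatabaseName:"$(DatabaseName)"
          /TargetUser:"$(SqlUser)"
          /TargetPassword:"$(SqlPassword)"
        displayName: Deploy DACPAC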

Click through for the walkthrough.
