Press "Enter" to skip to content

Author: Kevin Feasel

SQL Server 2019 CU 16 and Log Shipping

Lee Markum notes a change in SQL Server 2019 CU 16:

As a data professional managing SQL Servers, you need to be thinking about Windows and SQL Server patching.

You want to stay up to date, but you also have to be careful because sometimes staying fully up to date comes with the risk that a new patch might break a feature you are using. This is possibly the case for SQL Server 2019 CU 16. If you’re using Log Shipping on a database that has TDE enabled and compression is enabled on those backups, then you need to take heed.

I’ll walk you through what can happen and what I’ve seen work to resolve the problem.

This is the tricky part of getting rid of service packs: when I think cumulative update, I don’t think “Here’s a thing that could break backwards compatibility with other SQL Server instances which have not been patched.” Read on to see an example of this in action.
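If you want to check whether an instance is in the affected configuration before patching, a query along these lines (a sketch run on the log shipping primary, not Lee's exact method) lists log-shipped databases with TDE enabled alongside their backup compression setting:

-- Run on the log shipping primary. Lists log-shipped databases that have
-- TDE enabled, plus the compression setting for their log backups
-- (0 = compression off, 1 = on, 2 = use the server default).
SELECT lspd.primary_database,
       lspd.backup_compression,
       dek.encryption_state        -- 3 = encrypted
FROM msdb.dbo.log_shipping_primary_databases AS lspd
    INNER JOIN sys.dm_database_encryption_keys AS dek
        ON dek.database_id = DB_ID(lspd.primary_database)
WHERE dek.encryption_state = 3;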


Organizing Data Domains in a Data Mesh

Paul Andrew continues a series on data mesh architecture:

Defining an organisation hierarchy is always hard, even more so for large enterprises with massive amounts of interlock between business functions. In the context of data analytics, we attempt to tackle the problem by creating an organisation dimension as part of our star schema data model. This could include things like region, operating company, branch, department, team etc.

So, my friends, how do we go about handling this when considering a data mesh architecture and the de-centralised domains that support the natural scalability we crave? For me, it feels like we are just frontloading the dimensional modelling problem, tackling it from the beginning in the very foundations of our data platform. But, with a twist.

Read on for that twist and for some solid guidance on data domains in practice compared to the theory.
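For reference, the sort of organisation dimension Paul describes might look like this in a star schema (a hypothetical sketch; table and column names are illustrative):

-- A hypothetical organisation dimension capturing the hierarchy levels
-- Paul mentions: region, operating company, branch, department, team.
CREATE TABLE dbo.DimOrganisation
(
    OrganisationKey  int IDENTITY(1, 1) PRIMARY KEY,
    Region           nvarchar(50)  NOT NULL,
    OperatingCompany nvarchar(100) NOT NULL,
    Branch           nvarchar(100) NOT NULL,
    Department       nvarchar(100) NOT NULL,
    Team             nvarchar(100) NOT NULL
);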


T-SQL Improvements in CTP 2.1

Itzik Ben-Gan looks at some recent T-SQL improvements:

One of the complexities that is of a specific importance to this section is comparisons that potentially involve NULL comparands, such as ones that you use in filter and join predicates. Most operators that you use in such comparisons, including the equals (=) and different than (<>) operators, use three-valued logic. This means that there are three possible truth values as a result of a predicate that uses such operators: true, false and unknown. When both comparands are non-NULL, such operators return true or false as you would intuitively expect. When any of the comparands is NULL, including when both are NULL, such comparisons return the unknown truth value. Both filter and join predicates consider unknown as a non-match, and sometimes that’s not the behavior that you’re after.

The distinct predicate (IS [NOT] DISTINCT FROM) should simplify a lot of code in the wild.
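As a quick illustration (dbo.Widgets and its color column are hypothetical):

-- With three-valued logic, comparing against a NULL variable returns the
-- unknown truth value for every row, so even rows where color IS NULL
-- are treated as non-matches.
DECLARE @color varchar(20) = NULL;

SELECT * FROM dbo.Widgets WHERE color = @color;  -- returns no rows

-- The distinct predicate uses two-valued logic and treats two NULLs as
-- equal, so rows with a NULL color now match.
SELECT * FROM dbo.Widgets WHERE color IS NOT DISTINCT FROM @color;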


Showing Filter Selections on Power BI Reports

Mara Pereira makes a Power BI breadcrumb:

When I’m developing reports, I’m asked multiple times to “hide” the slicers/filters from the report page. Usually this is to make space in the page for other visuals and because customers don’t want to use the filter pane for some reason.

This happened so many times that, over the last couple of months, I decided to try some things out and get a bit creative with Power BI.

I came up with two solutions, which I think work great in these scenarios.

Read on to see the solutions.
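One common pattern for this sort of breadcrumb (not necessarily either of Mara's solutions) is a measure that concatenates the current selections into text for a card visual; the table and column names here are hypothetical:

-- Shows the selected regions as a comma-separated list, with a fallback
-- when no slicer selection has been made.
Selected Regions =
IF (
    ISFILTERED ( 'Sales'[Region] ),
    CONCATENATEX ( VALUES ( 'Sales'[Region] ), 'Sales'[Region], ", " ),
    "All regions"
)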


Power BI CAT Guidance

Matthew Roche advises customers:

I’ve written previously about some of what the Power BI CAT team does, but the Power BI guidance documentation only gets a passing mention… and it’s worth going into more deeply.

A lot of what the Power BI CAT team does involves working with large enterprise customers. These customers are often trying to achieve difficult goals involving complex data architectures, and Power BI is often a significant part of their end-to-end information supply chain. We get involved when these enterprise customers need help achieving their strategic goals, and this help often includes helping them effectively use the existing capabilities of Power BI.

Read on to learn more about this guidance documentation.


Variables in DAX

Marco Russo and Alberto Ferrari show us how to use variables in DAX:

Variables were introduced in DAX in 2015, and so far they have proven to be the best enhancement of the DAX language ever. When presented with the concept of variables, most newbies focus on performance improvement, thinking that you introduce variables in your code mainly to obtain better performance. Although variables can improve performance, performance is a minor advantage. There are several more important considerations that should encourage any DAX developer to make extensive use of variables. In this article we share a few considerations, along with best practices about variables and DAX.

Read on to see what makes variables so powerful.
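To make that concrete, here is a typical use of variables (a sketch; the Sales Amount measure and Date table are hypothetical):

-- Each value is computed once, given a readable name, and reused in the
-- RETURN expression, instead of repeating the CALCULATE.
Sales YoY % =
VAR CurrentSales = [Sales Amount]
VAR PriorSales =
    CALCULATE ( [Sales Amount], SAMEPERIODLASTYEAR ( 'Date'[Date] ) )
RETURN
    DIVIDE ( CurrentSales - PriorSales, PriorSales )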


When Query Store Plan Forcing Fails

Joe Billingham roots out failure:

Query Store is a fantastic feature of both SQL Server and Azure SQL DB. It allows you to monitor how queries execute against the database, which is invaluable for troubleshooting performance issues. More than that though, it gives you the option to force an erratic query to use a particular execution plan, which helps keep queries from running with inefficient plans and provides predictability and stability on the server.

But it does fail for various reasons and Joe has a list of them.
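If you want to check your own instance, Query Store records forcing failures right in its catalog views (the query and plan IDs below are placeholders):

-- Force a plan for a query.
EXEC sys.sp_query_store_force_plan @query_id = 42, @plan_id = 7;

-- Later, see whether forcing has been failing and why.
SELECT plan_id,
       query_id,
       is_forced_plan,
       force_failure_count,
       last_force_failure_reason_desc  -- e.g. ONLINE_INDEX_BUILD, GENERAL_FAILURE
FROM sys.query_store_plan
WHERE is_forced_plan = 1;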


B-Tree and Hash Indexes in Postgres

Paul Randal takes us through indexing in Postgres:

This article explores the PostgreSQL implementation of the B-Tree (the B stands for Balanced) and hash index data structures. As PostgreSQL grows in popularity as an open-source database system for developers and as a target for migrating Oracle workloads, understanding how PostgreSQL indexes work is extremely important for database developers and administrators. PostgreSQL has several other types of indexes, such as GIN indexes, GiST indexes, and BRIN indexes. I will omit them for this article as they're somewhat specialty indexes suited for text-based searches, geography, and other complex data types. And while B-Tree index usage makes up roughly 90% of use cases, hash indexes and their concepts are also important to understand.

Understanding and implementing the correct indexes for the workload is the foundation of any well-running relational database system. Adding and adjusting indexes to suit the workload has yielded some of the most significant performance gains over my many years of consulting. However, to add the right indexes, you must first understand them.

Read on to learn more. Even if you live in SQL Server, this is a really good article to read because the types of indexes available can differ radically between platforms and that naturally affects indexing strategy.
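For a taste of the syntax (table and queries made up for illustration), PostgreSQL lets you pick the index access method explicitly, and the planner will only consider the hash index for equality predicates:

-- B-Tree is the default access method; hash must be requested explicitly.
CREATE TABLE accounts (id bigint, email text);

CREATE INDEX accounts_id_btree ON accounts USING btree (id);
CREATE INDEX accounts_email_hash ON accounts USING hash (email);

-- The B-Tree index supports range as well as equality predicates:
EXPLAIN SELECT * FROM accounts WHERE id BETWEEN 100 AND 200;

-- The hash index supports equality only:
EXPLAIN SELECT * FROM accounts WHERE email = 'someone@example.com';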


SQL Server and Golang

Shane O’Neill goes long:

It’s all very well and good to open up Golang and write a FizzBuzz (note to self: write a FizzBuzz), but I still work with databases.

So before I do anything with Golang, I'd like to know: can it interact with databases?

Granted, everything can, but how easy is it? And does trying to interact with databases kill any interest I have in the language?

So, let’s give it a go.

Click through to see what Shane came up with.
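As a minimal sketch of what connecting looks like (assuming the go-mssqldb driver, not necessarily what Shane used; connection details are placeholders):

package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/microsoft/go-mssqldb" // registers the "sqlserver" driver
)

func main() {
	// Placeholder connection string; adjust server, credentials, and database.
	db, err := sql.Open("sqlserver", "sqlserver://user:password@localhost:1433?database=master")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	var version string
	if err := db.QueryRow("SELECT @@VERSION").Scan(&version); err != nil {
		log.Fatal(err)
	}
	fmt.Println(version)
}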


Azure Synapse Analytics July 2022 Updates

Ryan Majidimehr notes that the Azure Synapse Analytics team has been busy:

Azure Synapse Link for SQL is an automated system for replicating data from your transactional databases into a dedicated SQL pool in Azure Synapse Analytics. Starting this month, you can make trade-offs between cost and latency in Synapse Link for SQL by selecting the continuous or batch mode to replicate your data.  

By selecting “continuous mode”, the runtime will be running continuously so that any changes applied to the SQL database or SQL Server will be replicated to Synapse with low latency. Alternatively, when you select “batch mode” with a specified interval, the changes applied to the SQL database or SQL Server will be accumulated and replicated to Synapse in batch mode with the specified interval. This can save cost as you are only charged for the time the runtime is required to replicate data. After each batch of data is replicated, the runtime will be shut down automatically. 

Click through for the complete list.
