Press "Enter" to skip to content

Author: Kevin Feasel

Table-Level Locks in PostgreSQL

Gulcin Yildirim Jelinek takes us through locking in PostgreSQL:

All locking, whatever their type is, will reduce the throughput, and potentially increase the latency, which means a loss of performance, as nothing is ever free. If my intention is to make sure my data does not have corruption and everyone is getting a correct result at their time of query, I have to agree that I’d have to lock access when multiple transactions are targeting the same table or same row to make sure we take some time to keep the order of things instead of showing wrong results, fast.

Read on for a quick primer on multi-version concurrency control, locking, and lock mechanisms in PostgreSQL.

Comments closed

Parquet File Customization and SQL Server

Ed Pollack writes some files:

Previously, we introduced and discussed the Parquet file format and SQL Server and why it is an ideal format for storing analytic data when it does not already reside in a native analytic data store, such as a data lake, data warehouse, or an Azure managed service.

Both Python and the Parquet file format are quite flexible, allowing for significant customization to ensure that file-related tasks are as optimal as possible. Compatibility with other processes, as well as keeping file sizes and properties under control will also be introduced here.

Click through for some examples.

Comments closed

Set Operations in T-SQL

Erik Darling has a pair of videos covering 3 1/2 set operations. First up is UNION and UNION ALL. These are the set operations that most people know about and use fairly regularly, and Erik explains the difference between the two, including how they compare in performance.
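To make the distinction concrete, here is a quick sketch of my own (not taken from the videos), using table value constructors as stand-in data: UNION removes duplicate rows from the combined result, while UNION ALL returns every row and skips the deduplication work, which is where the performance difference comes from.

SELECT v FROM (VALUES (1), (2)) AS x(v)
UNION
SELECT v FROM (VALUES (2), (3)) AS y(v); -- three distinct values: 1, 2, 3

SELECT v FROM (VALUES (1), (2)) AS x(v)
UNION ALL
SELECT v FROM (VALUES (2), (3)) AS y(v); -- four rows: 1, 2, 2, 3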

Then, Erik hits upon the two lesser-known set operations: INTERSECT and EXCEPT. These are extremely useful in certain circumstances, and tend to perform much better than other alternatives. For example, to figure out if two datasets are exactly the same, it’s really hard to go wrong with the following pair of queries:

SELECT a.* FROM a EXCEPT SELECT b.* FROM b;
SELECT b.* FROM b EXCEPT SELECT a.* FROM a;

You need both queries because the former tells you if there are any records in A that do not exist in B, either because the record simply is not there or because there is a difference in one or more values in B’s matching record. But then, you also have to check the opposite side, because there might be a record in B that does not exist in A and the first query will not expose it.

This is typically the way I’d write test cases, ensuring that both queries return 0 results. Granted, you could always just check that the count of the intersection equals the count of records in A:

DECLARE @c1 INT, @c2 INT;
SELECT @c1 = COUNT(*) FROM (SELECT a.* FROM a INTERSECT SELECT b.* FROM b) AS i;
SELECT @c2 = COUNT(*) FROM a;

In this case, @c1 and @c2 should be the same number.
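If you want to turn that into an explicit check, something like this works, continuing in the same batch as the counts above:

IF @c1 = @c2
    PRINT 'Counts match.';
ELSE
    PRINT 'Counts differ; the EXCEPT queries above will show which rows are responsible.';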

1 Comment

Exploring SQL Databases in Microsoft Fabric

Jared Westover looks at the bright side of life:

Over the past few months, I’ve toyed with Microsoft Fabric, focusing on the Data Factory and Power BI experiences. Everything I’ve developed so far is in the proof-of-concept (POC) phase. Naturally, I’m skeptical about new game-changing features, and Fabric is no exception. Any new flashy tech brings bugs along in the early stages. We’ve all been there, working for weeks on a project to have random bugs throw a wrench in everything.

When Microsoft announced SQL databases in Fabric, I was intrigued. After watching the Ignite session, Power AI apps with insights from SQL database in Fabric, a few features instantly stood out, and I want to share my first impressions.

Read on to learn more.

Comments closed

The Importance of Virtual Environments in Python

Jack Wallen proselytizes for virtual environments:

When developing with Python, chances are pretty good that you’ll need to install various libraries, dependencies and apps to get your project started. The good news is that (in most cases) those installations are pretty straightforward (thanks to pip and other tools).

Problems can arise, however, if you simply install all of those project requirements on your system. It’s like installing any given application, hoping it won’t cause problems with other applications, your OS or your data. In most cases, it’s safe, but there’s always that one instance where things can quickly go awry.

Read on to see how virtual environments can alleviate many of these pains. It took a while for me to understand exactly why virtual environments are so important, but this is definitely something I recommend doing if you work with Python in any capacity.

Comments closed

Always On Lease Timeout Monitoring

Yvonne Vanslageren gives us one more thing to check:

SQL Server Always On Availability Groups are a robust solution for achieving high availability and disaster recovery for SQL Server databases. However, simply configuring them is not enough—you also need a solid monitoring strategy to ensure data integrity and system reliability. One key aspect of this monitoring process is keeping an eye on lease timeouts, which can signal larger issues and help prevent potentially catastrophic split-brain scenarios.

In this post, we’ll walk through the various health checks available for Always On Availability Groups, discuss how lease timeouts work, and explore practical methods for monitoring and troubleshooting.

Read on to learn more about the lease timeout concept, where you can find this information, and recommendations on how to act on it.
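As a starting point, here are two illustrative queries, not necessarily the ones Yvonne uses: lease timeout messages are written to the SQL Server error log, and the built-in AlwaysOn_health Extended Events session captures the availability_group_lease_expired event.

-- Search the current error log for lease-related messages
EXEC sp_readerrorlog 0, 1, N'lease';

-- Pull lease expiration events from the AlwaysOn_health session
-- (adjust the file path if your XE files live outside the default LOG directory)
SELECT CAST(event_data AS XML) AS event_data
FROM sys.fn_xe_file_target_read_file(N'AlwaysOn_health*.xel', NULL, NULL, NULL)
WHERE object_name = N'availability_group_lease_expired';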

Comments closed

Azure VM Constrained CPU and SQL Server Licensing

Rod Edwards works to save some money:

Our SQL servers have specific loads, patterns and resource requirements. We’ve monitored them, we know what they are inside and out. You know that x number of CPUs, and y GB of RAM means that your DBs happily sweat the hardware, but are sized to perfection to make sure it has just enough resource to complete what it needs to do. The negotiations with your Infra team in command of the VM resources were tough, but you’re not greedy… you appreciate that other VMs may require resource as well…

Now you have to migrate to Azure, and SQL on Azure VM is your only option. Great, loads of different types/flavours of VMs out there, one of them must be in the goldilocks perfect porridge zone right? Right?

Licensing around this gets a little weird. Basically, Rod is absolutely right that an E64-16, for example, would only require licensing 16 cores of SQL Server, and that can save a good bit of scratch. For the OS, however, you’re paying for all 64 cores, so keep that in mind while you’re rolling in a bed of the cash you’ve saved on SQL Server licensing.

Comments closed

Dealing with Duplicate Data via ROW_NUMBER()

Andy Brownsword removes the duplicates:

Data quality and consistency is key to the services we support and solutions we deliver. A gremlin which can undermine that is duplicate data. Let’s start the new year dealing with duplicate data and having a good clear-out.

For our example we’ll consider an Order Product table which contains an OrderID and ProductID, and the combination of these should be unique. Other fields for the duplicate records may differ so we may want to be selective about which records are removed.

This is where I get on my high horse and complain about laziness in data modeling, a very common problem. This takes nothing away from Andy’s post, which is a good method for solving a problem that has gotten out of hand. But if you know that some combination of attributes is unique, add a unique key constraint or a unique non-clustered index right then and there. Doing so will prevent improper duplicate data from ever being an issue. If you don’t know that some combination of attributes must be unique, discuss this with the business side in a way that makes sense for them. Yes, there’s always the risk that you’ll have a conversation later like, “Oh, it turns out that this really should be unique,” but in most cases, you can easily sort this kind of thing out up-front and save a lot of time and effort later on.
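As a rough sketch of both halves of that advice, using a hypothetical dbo.OrderProduct table (Andy’s actual table and column choices may differ): first clear out the existing duplicates with ROW_NUMBER(), then add the unique constraint so the problem cannot come back.

-- Keep one row per (OrderID, ProductID) combination and delete the rest
WITH Dupes AS
(
    SELECT OrderID, ProductID,
        ROW_NUMBER() OVER (PARTITION BY OrderID, ProductID ORDER BY (SELECT NULL)) AS rn -- swap in an ORDER BY that keeps the row you prefer
    FROM dbo.OrderProduct
)
DELETE FROM Dupes
WHERE rn > 1;

-- Then prevent new duplicates from ever getting in
ALTER TABLE dbo.OrderProduct
    ADD CONSTRAINT UQ_OrderProduct UNIQUE (OrderID, ProductID);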

Comments closed