Category: Internals

SQL Server Drivers and Access Tokens for Connection Pooling

David Engel gives us a deep dive:

We often get support escalations related to failures around expired access tokens when using Microsoft Entra ID (formerly Azure Active Directory) authentication. There is a lot of nuance in the various drivers about how and when access tokens are used and when they get renewed after expiring. I’m going to do a deep-dive and try to explain some of the details here.

This post is mainly targeted at people who have to troubleshoot issues in this area or those who are building large services and need to understand the impact of their authentication choices.

Read on to learn more about how the ODBC, JDBC, and .NET drivers differ.
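
If you want to experiment with the token-based flow the post describes, here is a minimal Python sketch against the ODBC driver. The credential flow, server, and database names are placeholders of mine, not from the post, and exact pooling behavior will depend on your driver version.

    import struct

    import pyodbc
    from azure.identity import DefaultAzureCredential  # assumes the azure-identity package

    # Acquire an Entra ID access token for Azure SQL; the token is only presented at login,
    # so connections already sitting in the pool keep the session they authenticated with.
    credential = DefaultAzureCredential()
    token = credential.get_token("https://database.windows.net/.default")

    # The ODBC driver takes the token as a length-prefixed UTF-16-LE structure via the
    # SQL_COPT_SS_ACCESS_TOKEN (1256) pre-connect attribute.
    token_bytes = token.token.encode("utf-16-le")
    token_struct = struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=myserver.database.windows.net;DATABASE=mydb",  # hypothetical names
        attrs_before={1256: token_struct},
    )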

PostgreSQL Row Visibility Indicators

Cary Huang explains how row visibility works in Postgres:

Simply put, visibility refers to whether a row of data (a heap tuple, by default) should be displayed to the user in certain states, backend processes, or transactions.

For example,

  • A user changes a data record from ‘A’ to ‘B’ via UPDATE
  • PostgreSQL handles this by INSERTing a new record called ‘B’ while marking record ‘A’ as ‘invisible’
  • Both records exist in the system, but ‘B’ is visible, while ‘A’ is not.
  • A deleted or invisible record is also referred to as a ‘dead’ tuple
  • One of the responsibilities of the VACUUM process is to clear these ‘dead’ tuples to free up space.

Read on to learn more about various visibility indicators and how they ultimately tell us whether a row should be visible or not.
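
If you want to watch the indicators change hands yourself, here is a small Python sketch; psycopg2, the connection string, and the throwaway table are my own assumptions, not from the post.

    import psycopg2  # assumes psycopg2 and a reachable local cluster

    conn = psycopg2.connect("dbname=postgres")  # placeholder connection string
    conn.autocommit = True
    cur = conn.cursor()

    cur.execute("DROP TABLE IF EXISTS vis_demo")
    cur.execute("CREATE TABLE vis_demo (id int, val text)")
    cur.execute("INSERT INTO vis_demo VALUES (1, 'A')")

    # xmin is the transaction that created this tuple version; xmax = 0 means
    # no transaction has deleted or replaced it yet, so it is visible.
    cur.execute("SELECT xmin, xmax, val FROM vis_demo")
    print(cur.fetchall())

    cur.execute("UPDATE vis_demo SET val = 'B' WHERE id = 1")

    # Only the new 'B' tuple shows up; the old 'A' version is now a dead tuple.
    cur.execute("SELECT xmin, xmax, val FROM vis_demo")
    print(cur.fetchall())

    cur.execute("VACUUM vis_demo")  # reclaims the space held by the dead 'A' tuple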

Processes in PostgreSQL

Semab Tariq continues a series on internals in PostgreSQL:

PostgreSQL is a client/server type relational database management system. It has a multi-process architecture that runs on a single host. A collection of multiple processes that manage a database cluster is usually referred to as a PostgreSQL server. In PostgreSQL, every operation is treated as a process, and each action we undertake within PostgreSQL follows an append-only approach. This means that every time we execute an action such as an insert, update, or delete, a new tuple is created rather than modifying the existing data directly.

Consequently, PostgreSQL does not execute updates or deletes in place. Instead, it appends new data or marks existing data as obsolete. This append-only methodology ensures data integrity and allows for efficient management of database changes over time.

Read on to learn more about how these processes work.
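
To see the process-per-connection model from the SQL side, a quick Python sketch against pg_stat_activity (the connection string is a placeholder) lists the backends and background workers that make up a running server:

    import psycopg2  # assumes psycopg2 and a reachable local cluster

    conn = psycopg2.connect("dbname=postgres")  # placeholder connection string
    cur = conn.cursor()

    # One row per server process: client backends plus background processes such as
    # the checkpointer, background writer, WAL writer, and autovacuum launcher.
    cur.execute("SELECT pid, backend_type, state FROM pg_stat_activity ORDER BY backend_type")
    for pid, backend_type, state in cur.fetchall():
        print(pid, backend_type, state)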

Working with the Schema Registry in Confluent

Italo Nesi shows off the schema registry:

If you are new to Schema Registry or don’t know the difference between schema, schema type, subject, compatibility type, schema ID, and subject version, I would recommend starting with this free course: Schema Registry 101 by Danica Fine.

This article will show the bits and bytes of what happens behind the scenes in Apache Kafka® producer and consumer clients when communicating with the Schema Registry and serializing/deserializing messages.

Read on to learn more about data quality rules and how the schema registry works.
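
As a taste of those bits and bytes: serializers that go through Schema Registry prepend a five-byte header to every message, a zero "magic" byte followed by the schema ID as a big-endian 32-bit integer. Here is a short Python sketch that unpacks that framing from a made-up message value:

    import struct

    def split_confluent_frame(message_value: bytes):
        """Split a Schema Registry framed Kafka message into (schema_id, payload)."""
        magic, schema_id = struct.unpack(">bI", message_value[:5])
        if magic != 0:
            raise ValueError("not a Schema Registry framed message")
        return schema_id, message_value[5:]

    # Hypothetical raw value as a consumer would see it: header plus encoded record bytes.
    raw = b"\x00" + b"\x00\x00\x00\x2a" + b"<encoded record bytes>"
    schema_id, payload = split_confluent_frame(raw)
    print(schema_id)  # 42 -- the client looks this ID up in the registry before deserializing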

An Overview of Postgres’s Planner

Cary Huang digs into one phase of Postgres query processing:

When you send a query to PostgreSQL, it normally goes through several stages of query processing and returns the results at the end. These stages are known as:

  • Parse
  • Analyze
  • Rewrite
  • Plan
  • Execute

I wrote another blog to briefly explain the responsibility of each query processing stage. You can find it here. In this blog, we will only focus on the “plan” stage or the “planner” module as this is perhaps the most interesting or complex stage if you will. I will share my understanding of the planner module as I investigate its internal workings to handle a simple sequential scan. This will be based on PostgreSQL 16.

Read on to learn what the planner does and how it works, at a high level.
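
A low-friction way to follow along is plain EXPLAIN, which prints the plan the planner settled on. The Python sketch below is my own scaffolding (psycopg2, a generated table) and just reproduces the simple sequential-scan case:

    import psycopg2  # assumes psycopg2 and a reachable local cluster

    conn = psycopg2.connect("dbname=postgres")  # placeholder connection string
    cur = conn.cursor()

    cur.execute("CREATE TEMP TABLE planner_demo AS "
                "SELECT g AS id FROM generate_series(1, 100000) g")
    cur.execute("ANALYZE planner_demo")

    # With no index to use, the planner's cheapest path is a sequential scan.
    cur.execute("EXPLAIN SELECT * FROM planner_demo WHERE id < 100")
    for (line,) in cur.fetchall():
        print(line)  # e.g. Seq Scan on planner_demo ... Filter: (id < 100)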

Postgres Internals: Database Clusters, Databases, and Tables

Semab Tariq begins a new series:

A database cluster is a collection of multiple databases managed by a single PostgreSQL server. It can be referred to as a data/base directory.

A database is a collection of database objects, whereas a database object is a data structure used to store objects such as tables, views, indexes, extensions, sequences, functions, etc. In simple words, anything that we can create or store within a database is a database object.

Read on to learn more about how Postgres lays out database files and tablespaces.
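
To tie the terms to what is on disk, this Python sketch (the connection string is a placeholder, and the first query may need elevated privileges) asks a running cluster for its data directory, the databases it holds, and the file behind one catalog table:

    import psycopg2  # assumes psycopg2 and a reachable local cluster

    conn = psycopg2.connect("dbname=postgres")  # placeholder connection string
    cur = conn.cursor()

    # The whole cluster lives under a single data directory on the host
    # (viewing this setting may require superuser or pg_read_all_settings).
    cur.execute("SHOW data_directory")
    print("data directory:", cur.fetchone()[0])

    # Every database in the cluster; each is stored as a subdirectory under base/.
    cur.execute("SELECT oid, datname FROM pg_database ORDER BY datname")
    for oid, datname in cur.fetchall():
        print(oid, datname)

    # The heap file for one table, relative to the data directory.
    cur.execute("SELECT pg_relation_filepath('pg_class')")
    print(cur.fetchone()[0])  # e.g. base/<database oid>/<relfilenode>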

Hash Match and Stream Aggregate Operators in SQL Server

Andy Brownsword rounds up the usual suspects:

In the last post, we looked at how the TOP and MAX operators compared. We saw that the execution plan for a MAX function used a Stream Aggregate operator, which is one of two operators we can use for aggregation.

I wanted to look at the two operators and how they perform the same tasks in different ways. The way they function is key to understanding why the engine may choose to use one over the other and the impact this can have on the performance of a query.

The two operators in question: the Hash Match (Aggregate) and the Stream Aggregate.

Read on for a discussion of how each operator works and when each makes sense for the optimizer to use.
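
If you want to check which of the two you are getting, the estimated plan is enough. This Python/pyodbc sketch (server, database, and authentication details are placeholders) turns on SHOWPLAN_TEXT and prints the plan lines for a grouped query:

    import pyodbc  # assumes ODBC Driver 18 and a reachable SQL Server instance

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost;DATABASE=tempdb;"
        "Trusted_Connection=yes;TrustServerCertificate=yes"  # placeholder connection details
    )
    cur = conn.cursor()

    # With SHOWPLAN_TEXT on, SQL Server returns the estimated plan instead of executing.
    cur.execute("SET SHOWPLAN_TEXT ON")
    cur.execute("SELECT type_desc, COUNT(*) FROM sys.objects GROUP BY type_desc")

    while True:
        for (line,) in cur.fetchall():
            print(line)  # look for Hash Match (Aggregate) or Stream Aggregate
        if not cur.nextset():
            break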

Hash Aggregates and Hash Joins in Postgres

Muhammad Ali plays matchmaker:

PostgreSQL employs various techniques for data joining and aggregation in its queries, among which the hash-based method stands out for its efficiency in particular situations and at different data sizes. We will discuss hash joins and hash aggregates in PostgreSQL, providing insights into how they work and the parameters that influence these algorithms.

Read on to learn more. This looks fundamentally similar to hash matches in SQL Server, so if you’re familiar with that, the concepts should be pretty clear.
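
To see the hash strategy come and go, the Python sketch below (psycopg2 plus a generated throwaway table, my assumptions) compares the plan for the same GROUP BY with hashed aggregation enabled and disabled:

    import psycopg2  # assumes psycopg2 and a reachable local cluster

    conn = psycopg2.connect("dbname=postgres")  # placeholder connection string
    cur = conn.cursor()

    cur.execute("CREATE TEMP TABLE orders AS "
                "SELECT g AS id, g % 1000 AS customer_id FROM generate_series(1, 100000) g")
    cur.execute("ANALYZE orders")

    # Grouping on an unsorted column typically shows up as a HashAggregate node.
    cur.execute("EXPLAIN SELECT customer_id, count(*) FROM orders GROUP BY customer_id")
    print("\n".join(row[0] for row in cur.fetchall()))

    # Disabling it (work_mem also influences the choice) pushes the planner to a sort-based plan.
    cur.execute("SET enable_hashagg = off")
    cur.execute("EXPLAIN SELECT customer_id, count(*) FROM orders GROUP BY customer_id")
    print("\n".join(row[0] for row in cur.fetchall()))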

Fast-Track Optimization and Window Functions

Hugo Kornelis digs into a performance improvement the SQL Server optimizer has for window functions:

This is part twenty-four of the plansplaining series. In the previous part, I explained the execution plans for basic window functions, with and without a window frame. Especially the latter group performed quite poorly in the examples. So let’s now look at an optimization that SQL Server can apply in most cases, one that prevents this rather bad scaling.

Click through to see what this is, how it works, and when the optimizer is able to use it.
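
For readers following along, here is a minimal Python/pyodbc sketch (connection details and the temp table are placeholders) with a running total whose frame is anchored at UNBOUNDED PRECEDING; compare its actual plan with a sliding-frame variant to see where the optimization does and does not apply:

    import pyodbc  # assumes ODBC Driver 18 and a reachable SQL Server instance

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost;DATABASE=tempdb;"
        "Trusted_Connection=yes;TrustServerCertificate=yes"  # placeholder connection details
    )
    cur = conn.cursor()
    cur.execute("CREATE TABLE #sales (order_id int, amount decimal(10, 2))")
    cur.execute("INSERT INTO #sales VALUES (1, 10), (2, 20), (3, 30), (4, 40)")

    # A frame that starts at UNBOUNDED PRECEDING: a plain running total.
    cur.execute("""
        SELECT order_id,
               SUM(amount) OVER (ORDER BY order_id
                                 ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) AS running_total
        FROM #sales
    """)
    for order_id, running_total in cur.fetchall():
        print(order_id, running_total)

    # Swap the frame for, say, ROWS BETWEEN 2 PRECEDING AND CURRENT ROW and compare
    # the actual execution plans of the two queries.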
