Press "Enter" to skip to content

Author: Kevin Feasel

Snake Draft Ordering in SQL Server

Aaron Bertrand demands order:

I play fantasy football. To start each season, there is a draft, where players are randomly assigned draft order. We’ve started some seasons with a “snake” draft, to make the selection of players most fair and balanced. A snake draft works like this (a simple example with four players):

  • First pick in the first round picks last in the second round.
  • Second pick in the first round picks third in the second round.
  • Third pick in the first round picks second in the second round.
  • Last pick in the first round picks first in the second round.
  • Then, in the third round, it reverts to the original order: The first pick from the first round picks first again (giving them two picks back-to-back).

Read on to see how to implement this in T-SQL. I’m happy to report that I anticipated the answer, though Aaron notes a couple of issues that I didn’t catch in my “compile the code in my head” approach.
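
Before you click through, here is a minimal sketch of the modular-arithmetic approach I had in mind (not necessarily Aaron's exact answer): even-numbered rounds simply reverse the pick order.

    -- Generate a snake draft order for @Players players over @Rounds rounds.
    DECLARE @Players int = 4, @Rounds int = 3;

    WITH Numbers AS
    (
        SELECT TOP (@Players * @Rounds)
            ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS n
        FROM sys.all_objects
    )
    SELECT
        n / @Players + 1 AS RoundNumber,
        n + 1 AS OverallPick,
        -- Odd rounds keep the original order; even rounds reverse it.
        CASE WHEN (n / @Players) % 2 = 0
            THEN n % @Players + 1
            ELSE @Players - (n % @Players)
        END AS DraftSlot
    FROM Numbers
    ORDER BY OverallPick;

Here, DraftSlot is the first-round position that owns each overall pick, so with four players, picks 4 and 5 (and 8 and 9) land back-to-back.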

Elastic Jobs in Azure SQL DB Now GA

Srini Acharya makes an announcement:

Elastic Jobs is a fully integrated Azure SQL Database service that allows you to automate and manage administrative tasks across multiple SQL databases in a secure, scalable way. It can run one or more T-SQL job scripts in parallel using the Azure portal, PowerShell, REST, or T-SQL APIs. Jobs can be run on a schedule or on demand, targeting any tier of Azure SQL Database. Job targets can include all databases in a server, in an elastic pool, across multiple servers, and even databases across different subscriptions and geo-regions on Azure. Servers and pools are dynamically enumerated at runtime, so jobs run against all databases that exist in the target group at the time of execution.

If you’ve held off on Azure SQL DB because of a lack of the SQL Agent, take a look at this option.
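
For a taste of the T-SQL API, here is a hedged sketch to run in the Elastic Jobs agent's job database (the target group, job, credential, and procedure names are illustrative):

    -- Create a target group and point it at a logical server; member
    -- databases are enumerated dynamically at runtime.
    EXEC jobs.sp_add_target_group @target_group_name = 'ReportingTargets';

    EXEC jobs.sp_add_target_group_member
        @target_group_name = 'ReportingTargets',
        @target_type = 'SqlServer',
        @refresh_credential_name = 'RefreshCredential',  -- illustrative name
        @server_name = 'yourserver.database.windows.net';

    -- Define a job with one T-SQL step that runs against every target.
    EXEC jobs.sp_add_job
        @job_name = 'NightlyMaintenance',
        @description = 'Maintenance across all target databases';

    EXEC jobs.sp_add_jobstep
        @job_name = 'NightlyMaintenance',
        @credential_name = 'JobCredential',              -- illustrative name
        @target_group_name = 'ReportingTargets',
        @command = N'EXEC dbo.PerformMaintenance;';      -- hypothetical proc

    -- Kick it off on demand; schedules go through jobs.sp_update_job.
    EXEC jobs.sp_start_job @job_name = 'NightlyMaintenance';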

A Scaffolding Design Pattern for Microsoft Fabric Pipelines

Andy Leonard shares some thoughts on design:

When assigned a project, it’s tempting – and dangerous – to Just Start Coding. If you suffer from the urge to develop first and design later, you are not alone (there’s at least one other developer like you and he’s typing this post). Do yourself a favor and…

Read on for more information on Andy’s design-first mentality and a sample of how you might lay out that initial design.

SQL/JSON in Postgres

Hubert Lubaczewski makes note of some forthcoming functionality:

Back in March/April of 2022, Andrew Dunstan committed a series of patches that added support for a lot of really interesting features from the SQL/JSON standard.

While I'm not an avid user of JSON in databases, I was very, very happy and wrote a couple of blog posts about it.

Then, around six months later, they got reverted.

Lately (since last year, actually), these patches have re-appeared:

Read on for the list of commits and the current state of things with respect to implementing the SQL/JSON standard in Postgres.
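
To give a flavor of the standard's syntax, here is a hedged sketch based on what has been committed for PostgreSQL 17 (details may still shift before release):

    -- SQL/JSON query functions.
    SELECT JSON_EXISTS('{"a": 1}'::jsonb, '$.a');                -- true
    SELECT JSON_VALUE('{"a": 1}'::jsonb, '$.a' RETURNING int);   -- 1
    SELECT JSON_QUERY('{"a": [1, 2, 3]}'::jsonb, '$.a');         -- [1, 2, 3]

    -- JSON_TABLE turns a JSON document into a relational rowset.
    SELECT jt.*
    FROM JSON_TABLE(
        '[{"id": 1, "name": "ann"}, {"id": 2, "name": "bob"}]'::jsonb,
        '$[*]'
        COLUMNS
        (
            id   int  PATH '$.id',
            name text PATH '$.name'
        )
    ) AS jt;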

Transactional Replication Slow: Check Missing Indexes

Lori Brown provides a good tip:

Have you ever had transactional replication mysteriously start showing significant latency at a subscriber server? If so, check to see if the primary keys from the publisher database are missing on the subscriber database!

Replication was showing long latency even though the publisher and subscriber servers were not heavily utilized. The Microsoft-generated stored procedures that send INSERT, UPDATE, and DELETE operations from the publisher database to the subscriber database had no indexes to help query performance against the subscriber tables. Since there were no indexes matching the keys of the statements pushing through data changes, tables were being fully scanned and replication statements were piling up. In other words, the subscriber was missing indexes.

Read on to see how you can check this and also how you can quickly script those missing indexes.
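
As a starting point (a hedged sketch, not Lori's exact script), you can look for tables on the subscriber that lack a primary key:

    -- Run in the subscriber database: list user tables with no primary
    -- key, which forces the replication procedures into full table scans.
    SELECT s.name AS SchemaName, t.name AS TableName
    FROM sys.tables AS t
        INNER JOIN sys.schemas AS s
            ON s.schema_id = t.schema_id
    WHERE NOT EXISTS
    (
        SELECT 1
        FROM sys.indexes AS i
        WHERE i.object_id = t.object_id
            AND i.is_primary_key = 1
    )
    ORDER BY SchemaName, TableName;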

Mirroring Azure SQL DB into Microsoft Fabric

Dennes Torres holds up a mirror:

You need to read data from production to build a single source of truth. If you create pipelines reading directly from production, you will create additional load on the production environment. A mirror allows you to do much of the production reporting from the mirror, leaving the production environment to serve other users. Keep in mind: this covers production reporting, but not analytics reporting.

Mirroring a production database to Fabric is one method to ensure the load on production stays as low as possible, with the data transferred to Fabric so transformations can continue from that point.

Is that all? Does it avoid pipeline creation? Not really: you still need to create pipelines, as I will explain ahead.

Click through for the demo and explanation. This is an important thing for people to note: mirroring doesn’t eliminate ELT. You still have the data lake process to work through, as your transactional system does not and should not look like your reporting system.

SHA2_256 Hashes and Data Types

Reitse Eskens hit an interesting issue:

The issue is quite simple: a text needs to be converted into a SHA2_256 hash for authentication reasons (the example shown here is simplified). The thing is, the outcome of the hash isn't accepted by the authorising party, and when the input is checked via an online MD5 hashing site, there's a difference between that output and the one from the SQL script.

Read on to see what the problem is and how it can affect you.
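
One common culprit for this symptom (an assumption on my part, so treat the example as illustrative) is the input's data type: varchar and nvarchar encode the same visible text as different bytes, so HASHBYTES returns different hashes.

    -- Same visible text, different byte representations, different hashes.
    DECLARE @v varchar(20)  = 'CuratedSQL';
    DECLARE @n nvarchar(20) = N'CuratedSQL';

    SELECT HASHBYTES('SHA2_256', @v) AS VarcharHash,
           HASHBYTES('SHA2_256', @n) AS NvarcharHash;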

Converting Cursors to PL/pgSQL

Deepak Mahto explains a difference in cursors from Oracle:

In the blog, we will cover scenarios with cursors that differ from how Oracle handles them. During conversion, our initial approach is to match all codebases as closely as possible with the target compatibility. However, in some cases, although the code appears identical, the functionality might vary. Let’s explore one such case here.

Click through for the scenario.
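
For orientation, here is a generic, hedged sketch of a PL/pgSQL cursor loop (not Deepak's specific scenario): PL/pgSQL tests the FOUND variable where Oracle code would check cursor%NOTFOUND.

    DO $$
    DECLARE
        cur CURSOR FOR SELECT datname FROM pg_database ORDER BY datname;
        db  text;
    BEGIN
        OPEN cur;
        LOOP
            FETCH cur INTO db;
            EXIT WHEN NOT FOUND;  -- Oracle equivalent: EXIT WHEN cur%NOTFOUND;
            RAISE NOTICE 'database: %', db;
        END LOOP;
        CLOSE cur;
    END;
    $$;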

Removing Multiple Rows from a DataFrame via Base R

Steven Sanderson gets rid of rows:

As data analysts and scientists, we often find ourselves working with large datasets where data cleaning becomes a crucial step in our analysis pipeline. One common task is removing unwanted rows from our data. In this guide, we’ll explore how to efficiently remove multiple rows in R using the base R package.

Read on for a couple of ways to do this, including removing by some filter and removing by some index.
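
For a quick flavor (an illustrative sketch with made-up data, not Steven's examples), both approaches look like this in base R:

    # Illustrative data frame.
    df <- data.frame(id = 1:6, value = c(10, NA, 30, 40, NA, 60))

    # Remove by index: drop rows 2 and 5 via negative indexing.
    df_by_index <- df[-c(2, 5), ]

    # Remove by filter: keep only rows where value is not missing.
    df_by_filter <- df[!is.na(df$value), ]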
