Press "Enter" to skip to content

Author: Kevin Feasel

Using the mssql-python Driver

Hristo Hristov tries out a driver:

Programmatic interaction with SQL Server or Azure SQL from a Python script is possible using a driver. A popular driver has been pyodbc, which can be used standalone or with a SQLAlchemy wrapper. SQLAlchemy on its own is the Python SQL toolkit and Object Relational Mapper for developers. At the end of 2025, Microsoft released v1 of their own Python SQL driver, called mssql-python. How do you get started using mssql-python for programmatic access to your SQL Server?

Click through to see how it works. Hristo points out a couple of benefits to this driver over the classic pyodbc driver, though I’m curious if there are any performance differences between the two.
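
If you want a sense of the API's shape before clicking through, here is a minimal sketch, assuming the DB-API 2.0, pyodbc-style surface the project advertises; the connection string and query are placeholders rather than anything from Hristo's post.

```python
# Minimal sketch of mssql-python, assuming its advertised DB-API 2.0,
# pyodbc-style surface. The connection string is a placeholder.
from mssql_python import connect

conn = connect("Server=localhost;Database=TestDB;Trusted_Connection=yes;")
cursor = conn.cursor()

# Qmark-style parameters, as in pyodbc (an assumption on my part).
cursor.execute(
    "SELECT name, create_date FROM sys.databases WHERE name = ?",
    ("TestDB",),
)
for row in cursor.fetchall():
    print(row[0], row[1])

cursor.close()
conn.close()
```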

Predictive Analytics with Power BI and Microsoft Fabric

Ruixin Xu puts together a how-to guide:

Across industries, teams use Power BI to understand what has already happened. Dashboards show trends, highlight performance, and keep organizations aligned around a shared view of the business.

But leaders are asking new questions—not just what happened, but what is likely next and how outcomes might change if they act. They want insights that help teams prioritize, intervene earlier, and focus effort where it matters. This is why many organizations look to enrich Power BI reports with machine learning.

This challenge is especially common in financial services.

Consider a bank that uses Power BI to track customer activity, balances, and service usage. Historical analysis shows that around 20% of customers churn, with churn tied to factors such as customer tenure, product usage, service interactions, and balance changes.

Click through for the architecture example and process. The actual model is a LightGBM model, which is generally fine for two-class classification.
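
To make that model choice concrete, here is a hedged sketch of a two-class LightGBM classifier on synthetic churn data; the features and churn mechanics are invented stand-ins for the bank scenario in the post, not Ruixin's actual pipeline.

```python
import numpy as np
from lightgbm import LGBMClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-ins for the features the post mentions: tenure,
# product usage, service interactions, and balance changes.
rng = np.random.default_rng(42)
n = 5_000
tenure = rng.integers(1, 120, n)           # months as a customer
products = rng.integers(1, 6, n)           # products held
interactions = rng.poisson(3.0, n)         # service interactions
balance_delta = rng.normal(0.0, 1.0, n)    # z-scored balance change

# Invented churn mechanics targeting roughly a 20% base rate:
# shorter tenure and falling balances push churn probability up.
logit = -1.6 - 0.02 * (tenure - 60) - 0.8 * balance_delta
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)
X = np.column_stack([tenure, products, interactions, balance_delta])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = LGBMClassifier(n_estimators=200, learning_rate=0.05)
model.fit(X_tr, y_tr)
print("Holdout AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```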

SELECT * in EXISTS Redux

Louis Davidson follows up from a prior post:

For example, it is often said that SELECT * makes your queries slower. In a nuanced way, this is often true, but only if changes occur in the database, such as columns being added. So many readers (myself included) see something that is demonstrably not 100% true being treated as if it were, and they tune out.

There are plenty of other reasons you shouldn’t use that construct, no matter what.

In this post, I want to admit to having my mind changed, and I will go back and change the previous post.

One thing I really appreciate about Louis is his willingness to listen to new information, update his priors, and outright say “Hey, here’s what I thought before and now I believe this instead.” That’s a commendable trait.
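
As for the construct itself, here is a small sketch of the two EXISTS spellings in question, run from Python via pyodbc; the connection string and table names are hypothetical stand-ins. SQL Server never expands the column list inside EXISTS, which is why the SELECT * form is harmless there.

```python
# The two EXISTS spellings under discussion. The optimizer only checks
# for row existence, so both compile to the same plan.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=localhost;DATABASE=Sandbox;"    # hypothetical database
    "Trusted_Connection=yes;TrustServerCertificate=yes;"
)
cursor = conn.cursor()

queries = {
    "SELECT *": """
        SELECT c.CustomerID
        FROM dbo.Customers AS c             -- hypothetical tables
        WHERE EXISTS (SELECT * FROM dbo.Orders AS o
                      WHERE o.CustomerID = c.CustomerID)""",
    "SELECT 1": """
        SELECT c.CustomerID
        FROM dbo.Customers AS c
        WHERE EXISTS (SELECT 1 FROM dbo.Orders AS o
                      WHERE o.CustomerID = c.CustomerID)""",
}
for label, sql in queries.items():
    cursor.execute(sql)
    print(label, "->", len(cursor.fetchall()), "rows")
```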

Two Options for Content Layout in Power BI

Valerie Junk covers a pair of options:

In this tutorial, I want to show a small but very practical formatting setting in Power BI.

When we create a table or matrix visual, we sometimes end up with white space on the right side. For example, if you show data by month and you only have 6 months of data so far, but you design the visual to fit 12 months, the table/matrix is already sized for the full year, which leads to a lot of empty space.

In Power BI we have two column header formatting options:

Click through for the two options, where to find them, and some important information about each.

Bluebox: An Evolving Sample Database for PostgreSQL

Ryan Booz has a sample database:

Sure, there are datasets everywhere. Kaggle currently lists over 600,000 public datasets, but most of them are static CSV files that you load once and never touch again. Great for a one-time analysis, not so great for learning how a real database behaves over time. The Postgres Wiki lists a few dozen sample databases, too. And shoot, your shiny new AI coding buddy can help you create one if you want to put the time in.

The problem with most of these datasets is that they’re primarily static. If you’re lucky, some of the datasets might produce new data dumps once a month to keep things “current”. But the problem is that you can’t really practice query tuning if your data never changes. You can’t explore vacuum behavior when there are no updates. You can’t test monitoring tools when nothing is happening.

Click through for more information on Bluebox, as well as a Docker container containing several helpful tools and processes to make this data evolve over time.
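
If you want to watch that evolution from the outside, here is a hedged sketch using psycopg2 and PostgreSQL's stock statistics views; the database name is a guess on my part, and everything else relies only on standard pg_stat_user_tables columns.

```python
# Poll PostgreSQL's statistics views to watch an evolving dataset:
# live/dead tuple counts are what make vacuum practice meaningful.
import time
import psycopg2  # assumes a Bluebox instance reachable locally

conn = psycopg2.connect("dbname=bluebox user=postgres host=localhost")  # dbname is a guess
conn.autocommit = True

with conn.cursor() as cur:
    for _ in range(3):
        cur.execute("""
            SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
            FROM pg_stat_user_tables
            ORDER BY n_dead_tup DESC
            LIMIT 5
        """)
        for name, live, dead, last_vac in cur.fetchall():
            print(f"{name}: {live} live, {dead} dead, last autovacuum {last_vac}")
        print("---")
        time.sleep(60)  # the data should keep changing between polls

conn.close()
```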

Transaction ID Locking

Hugo Kornelis disentangles two new features in SQL Server 2025:

One of these two features is Transaction ID (TID) Locking. Slated to end the memory waste of thousands of individual row locks, and the concurrency killer of lock escalation. What it is, how does it work, what are the limitations, and do we really get a free lunch?

Click through for the video, though I am firmly wedded to the idea that TANSTAAFL. I say this without spoiling any part of the video.
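
And without spoiling it myself: one hedged way to check the behavior from Python is to inspect sys.dm_tran_locks mid-transaction on an instance with optimized locking enabled. The connection string and table name below are hypothetical, and the single XACT resource (rather than piles of KEY locks) is what Microsoft's optimized locking documentation describes.

```python
# Peek at lock resources while a transaction is open. With TID locking,
# expect one XACT resource instead of thousands of KEY/RID row locks.
# Connection string and table name are hypothetical.
import pyodbc

CONN_STR = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=localhost;DATABASE=Sandbox;"
    "Trusted_Connection=yes;TrustServerCertificate=yes;"
)
conn = pyodbc.connect(CONN_STR, autocommit=False)
cursor = conn.cursor()
cursor.execute("UPDATE dbo.Orders SET Quantity = Quantity + 1 WHERE CustomerID = 42;")

cursor.execute("""
    SELECT resource_type, request_mode, COUNT(*) AS lock_count
    FROM sys.dm_tran_locks
    WHERE request_session_id = @@SPID
    GROUP BY resource_type, request_mode;
""")
for resource_type, mode, lock_count in cursor.fetchall():
    print(resource_type, mode, lock_count)

conn.rollback()
```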

The Downsides of Python

Andy Brown writes a companion piece:

Four years ago I wrote a blog on this site explaining why Python is better than C# and, arguably, most other programming languages. To redress the balance, here are 10 reasons why you might want to avoid getting caught up in Python’s oh-so-tempting coils – particularly when building large, long-lived systems.

If this sounds like an attempt to have my cake and eat it, my defense is that I follow in my work what I preach here: I use Python for ad-hoc jobs, at which it is unsurpassed. For larger systems – such as our MV website – I use C#, due to its strengths in maintainability and tooling, as well as the practical consideration that my personal preference for Visual Basic is not shared by the wider team.

Some of it is opinion, some of it is annoying. I’ve grown to appreciate the spacing, though it can be really painful when copying code from somewhere and the spacing gets all messed up. My short version of Python is that it requires you to have more discipline as a developer to prevent messes from occurring, and I think that’s a negative on net. But that same aspect simultaneously makes it so much easier to prototype and rapidly solve problems, so there’s a natural trade-off here.
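
To show what I mean about spacing, the two loops below differ only in the indentation of one line, and both are perfectly valid Python:

```python
items = [1, 2, 3]

# print sits outside the loop body: it runs once, after the loop.
total = 0
for x in items:
    total += x
print(total)      # 6

# One extra level of indentation and print runs every iteration.
total = 0
for x in items:
    total += x
    print(total)  # 1, then 3, then 6
```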

Choosing between PCA and t-SNE

Shittu Olumide visualizes some data:

For data scientists, working with high-dimensional data is part of daily life. From customer features in analytics to pixel values in images and word vectors in NLP, datasets often contain hundreds or thousands of variables. Visualizing such complex data is difficult.

That’s where dimensionality reduction techniques come in. Two of the most widely used methods are Principal Component Analysis (PCA) and t-Distributed Stochastic Neighbor Embedding (t-SNE). While both reduce dimensions, they serve very different goals.

The thing that ultimately soured me on t-SNE is the stochastic nature. You can run the same set of operations multiple times and get significantly different results. It’s really easy to use and the output graphs are really pretty, but if you can’t trust the outputs to be at least somewhat stable, there’s a hard limit to its value.
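
That instability is easy to demonstrate with scikit-learn: PCA gives you the same projection on every run, while t-SNE with random initialization moves points around from seed to seed. A minimal sketch:

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

X, _ = load_digits(return_X_y=True)

# PCA with the full SVD solver is deterministic: repeated runs give
# identical coordinates.
p1 = PCA(n_components=2, svd_solver="full").fit_transform(X)
p2 = PCA(n_components=2, svd_solver="full").fit_transform(X)
print("PCA stable:", np.allclose(p1, p2))    # True

# t-SNE with random init depends on the seed: different seeds give
# noticeably different embeddings of the same data.
t1 = TSNE(n_components=2, init="random", random_state=0).fit_transform(X)
t2 = TSNE(n_components=2, init="random", random_state=1).fit_transform(X)
print("t-SNE stable:", np.allclose(t1, t2))  # False
```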
