
Author: Kevin Feasel

Automating SQL Server Deployments via dbatools

David Seis digs into scripted SQL Server installation:

In this and the next two blog posts I will be bringing diverse dbatools commands into scripts that can handle a complete deployment, do a checkup of major health and configuration metrics, and do a true-up of a pre-existing instance. This post will cover the complete deployment, which, if you have been reading the audit series, will be much more than just the SQL install of the last post. This time we are aiming for the whole thing: install, update, configure the host, configure SQL, and deploy maintenance. Everything I can think of!

Clicking next-next-next through the SQL Server installer once or twice is fine: it gives you an idea of what capabilities are available and what you need to know about. But by the time you’ve installed SQL Server 5-10 times, you should familiarize yourself with the configuration files (especially because the installer auto-generates them for you after a GUI run; SQL Server itself uses these files to install!), and you should be looking for ways to automate the process, avoiding misclicks and wasted time that you could otherwise spend reading Curated SQL.
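
To make the configuration-file point concrete, here is a minimal, hypothetical sketch of an unattended install: setup.exe takes every answer from a saved ConfigurationFile.ini. The paths are placeholders and the Python wrapper is purely illustrative; you could just as easily run the same command from a shell, or let dbatools drive the whole install as David's scripts do.

```python
# Hypothetical sketch (not David's script): an unattended install where
# setup.exe reads every answer from a ConfigurationFile.ini generated by
# an earlier GUI run. Paths below are placeholders.
import subprocess
from pathlib import Path

setup_exe = Path(r"D:\setup.exe")                        # mounted SQL Server media
config_file = Path(r"C:\Install\ConfigurationFile.ini")  # saved answer file

result = subprocess.run(
    [
        str(setup_exe),
        f"/ConfigurationFile={config_file}",  # drive the install from the saved answers
        "/IAcceptSQLServerLicenseTerms",      # required when suppressing the UI
        "/QS",                                # progress screens only; /Q is fully silent
    ],
    check=False,
)
print(f"setup.exe exited with code {result.returncode}")
```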

No More Default Semantic Models in Microsoft Fabric

Nicky van Vroenhoven has good news for us:

Another quick post, because today is an important day for everyone working with Fabric and Power BI!

Last month, Microsoft announced they are Sunsetting Default Semantic Models: Yaay! 
Today marks that day: No more automatic child semantic models!

The idea of having a default semantic model seemed like a good one, but the problem was that too many environments had very specific needs that a default semantic model couldn’t anticipate or address. As a result, these models tended to confuse end users more than they saved time.

RIP Phil Factor

Tony Davis has some sad news:

We are deeply saddened to share the news that Andrew Clarke, better known to Simple-Talk readers as Phil Factor, recently passed away. He was the site’s editor for several years and continued writing for Redgate long after. Many readers will have learned much of what they know about SQL from Andrew. Others will remember working with him on articles, benefiting from his sharp wit and knowledge, or perhaps meeting him at a PASS conference. To all who knew him, he was a uniquely talented, intelligent, kind, generous, and funny man.

I don’t think I ever met Andrew in person, but I loved his Phil Factor articles. I appreciated all of the work he would put into his testbenches, as well as the irreverent humor he’d sprinkle throughout. I think my favorite article he ever wrote was this one on the entity-attribute-value anti-pattern in T-SQL and how its siren-like allure drags wave after wave of developers to their doom.

Installing SQL Server 2025 RC0 on an Azure VM

Koen Verbeeck performs an installation:

I already had a virtual machine in Azure, running SQL Server 2025 CTP 2.0 (which uses a pre-made image). I explain how to set that one up in the article Install SQL Server 2025 Demo Environment in Azure. But I wanted to use the latest preview, which is Release Candidate 0 at the time of writing. Unfortunately, there’s no image available (yet?), so I had to do it the old-school way: installing SQL Server manually.

Read on to see how to do it, as well as a few extra things necessary to make everything work well in Azure.

An Introduction to Batch Normalization in Neural Networks

Ivan Palomares Carrascosa shows off one technique for optimizing neural networks:

Deep neural networks have drastically evolved over the years, overcoming common challenges that arise when training these complex models. This evolution has enabled them to solve increasingly difficult problems effectively.

One of the mechanisms that has proven especially influential in the advancement of neural network-based models is batch normalization. This article provides a gentle introduction to this strategy, which has become a standard in many modern architectures, helping to improve model performance by stabilizing training, speeding up convergence, and more.

Read on for a quick description of how it works and a demonstration in Keras.
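
As a rough sketch of the technique itself (not the article's code), a BatchNormalization layer typically sits between a layer's linear transform and its activation; the layer sizes and the synthetic data below are invented purely for illustration.

```python
# Minimal Keras sketch (not the article's demo): batch normalization placed
# between the linear transform and the activation.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    layers.Input(shape=(20,)),              # 20 input features (arbitrary)
    layers.Dense(64),                       # linear transform, activation deferred
    layers.BatchNormalization(),            # normalize the mini-batch, then scale/shift
    layers.Activation("relu"),              # activation on the normalized output
    layers.Dense(1, activation="sigmoid"),  # binary classification head
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Tiny synthetic dataset, just to show the training call.
X = np.random.rand(256, 20).astype("float32")
y = (X.sum(axis=1) > 10).astype("float32")
model.fit(X, y, batch_size=32, epochs=3, verbose=0)
```

During training, the layer standardizes each mini-batch using that batch's mean and variance and learns a scale and shift; at inference time it switches to running averages, which is where the more stable training and faster convergence come from.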

Enabling Map Visuals in Power BI

Boniface Muchendu gets past the X:

Have you ever tried to create a map in Power BI only to see an error instead of your visualization? If your Power BI maps are not working, you’re not alone. By default, some map and filled map visuals may be disabled due to security settings. The good news? With a few quick adjustments, you can enable maps in Power BI Desktop or, if needed, in your organization’s tenant settings.

Read on to see why this visual might be disabled and how to enable it.

The Consequences of Hitting Semantic Model Guardrails

Chris Webb smashes into a wall:

Direct Lake mode in Power BI allows you to build semantic models on very large volumes of data, but because it is still an in-memory database engine there are limits on how much data it can work with. As a result it has rules – called guardrails – that it uses to check whether you are trying to build a semantic model that is too large. But what happens when you hit those guardrails? This week one of my colleagues, Gaurav Agarwal, showed me the results of some tests that he did which I thought I would share here.

Click through to see what happens when you go past one of those guardrails.

Batching Large Data Operations via Key Ranges

Andy Brownsword updates or deletes a batch of rows:

Effective batching in general helps us by:

  • Reducing transaction length and minimising blocking
  • Avoiding unnecessary checking of the same rows repeatedly
  • Introducing graceful pacing to reduce impact on busy environments or data replication

I’m not the biggest fan of the OFFSET/FETCH combination there, at least if your key column is fairly well packed—like, say, 99+% of the rows are contiguous and you occasionally have a jump of a few thousand rows. Also, that batch size of 100K might be a little high, although that will certainly depend on what the operation is. Batch updating a column based on some fairly straightforward calculation? You can probably get away with 100K, though I’d still prefer 10K. But as you add more complexities (deleting rows, very high server throughput, triggers, limited hardware, etc.), that number should edge downward.
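
For a sense of what the key-range approach looks like, here is a hedged sketch (my illustration, not Andy's code): it walks a clustered integer key in fixed-width slices and deletes each slice in its own short transaction. The table and column names, the connection string, and the 10K batch size are all assumptions.

```python
# Hypothetical key-range batching sketch (not from the linked post). Assumes a
# dbo.EventLog table with a clustered int key Id and a CreatedAt datetime column.
import time
import pyodbc

BATCH_SIZE = 10_000
CUTOFF = "2024-01-01"  # delete rows older than this (placeholder)

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=.;DATABASE=MyDb;"
    "Trusted_Connection=yes;TrustServerCertificate=yes;",
    autocommit=False,
)
cur = conn.cursor()

# Find the key range once, instead of re-scanning qualifying rows every pass.
cur.execute("SELECT MIN(Id), MAX(Id) FROM dbo.EventLog WHERE CreatedAt < ?", CUTOFF)
low, high = cur.fetchone()

while low is not None and low <= high:
    upper = low + BATCH_SIZE
    # Each slice is one short, index-aligned transaction: blocking stays brief
    # and rows already visited are never checked again.
    cur.execute(
        "DELETE FROM dbo.EventLog WHERE Id >= ? AND Id < ? AND CreatedAt < ?",
        low, upper, CUTOFF,
    )
    conn.commit()
    low = upper
    time.sleep(0.1)  # graceful pacing for busy servers and replication

conn.close()
```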

Load Testing SQL Server with HammerDB and Docker

Anthony Nocentino announces a new tool:

I’m excited to announce the release of a new open-source project that fully automates HammerDB benchmarking for SQL Server using Docker. If you’ve ever needed to run TPC-C or TPC-H benchmarks multiple times, you know how time-consuming the manual setup can be. This project removes the hassle and gets you up and running with a single command: ./loadtest.sh.

Click through to learn more about the project and how you can grab the code.
