Press "Enter" to skip to content

Author: Kevin Feasel

Changing IP Addresses in an Availability Group

Sreekanth Bandarla is ready to make a change:

In this blog post, let’s see how to change all the IP addresses involved in a typical Always On Availability Group configuration. In my setup, I have an AG with two replicas and a listener. See below to get an idea of my current environment on which I am going to change all the underlying IP addresses.

Click through for a step-by-step process, as well as a few things to remember.
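If you just want a taste of the listener portion, here is a minimal T-SQL sketch; the AG name, listener name, and IP values are hypothetical, and removing the old listener IP still happens on the cluster side (e.g., in Failover Cluster Manager):

-- Add the new static IP (with its subnet mask) to the existing listener.
ALTER AVAILABILITY GROUP [MyAG]
    MODIFY LISTENER 'MyAG-Listener'
    (ADD IP ('10.10.20.50', '255.255.255.0'));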


Combining Azure Synapse Analytics and Azure Purview

Wolfgang Strasser shows how we can integrate Azure Synapse Analytics with Azure Purview:

In the past months I had the chance to play with and build solutions based on Azure Synapse Analytics and Azure Purview.

Azure Synapse (my Synapse blog entries) as the foundation for a solid platform to store, analyze and build data solutions and Azure Purview (my Purview blog posts) as the data governance and data catalog solution in Azure.

During the writing of my latest blog post (What’s new in Azure Synapse Analytics?), I found a very interesting entry in the update feature list: Azure Purview Integration.

Read on to see how.


Power Query Folding Indicators

Matthew Roche points out a nice addition to Power Query:

Because of the performance benefit that query folding provides, experienced query authors are typically very careful to ensure that their queries take advantage of the capabilities of their data sources, and that they fold as many operations as possible. But for less experienced query authors, telling which steps will fold and which will not has not always been simple…

Until now.

Read on for more information. I saw this for the first time in a recent presentation and was pleasantly surprised at how well it works.


Importing Graph Data into SQL Server

Louis Davidson takes us through an interesting problem:

The problem was, if I wanted to recreate this graph in data, I had to type in a bunch of SQL statements (something I generally enjoy to a certain point, but one of my sample files covers the geography of Disney World, and it would take a very long time to manually type that into a database, as it took quite a while just to do one section of the park).

So I went hunting for a tool to do this for me, but ended right back with yEd. The default file type when you save in yEd is GraphML, which is basically some pretty complex XML that was well beyond my capabilities using XML in SQL or PowerShell. Realistically I don’t care that much about anything other than just the nodes and edges, and what I found was that you can save graphs in the tool in a format named Trivial Graph Format (TGF).

Click through to see it in action.
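For context, SQL Server 2017 and later can represent this natively with graph tables, so a parsed TGF file boils down to one insert per node line and one per edge line. A minimal sketch with hypothetical table and location names:

-- Node and edge tables to hold the parsed TGF contents.
CREATE TABLE dbo.Location
(
    LocationId int NOT NULL PRIMARY KEY,
    Name nvarchar(100) NOT NULL
) AS NODE;

CREATE TABLE dbo.ConnectsTo AS EDGE;

-- A TGF node line such as "1 Magic Kingdom" becomes a node row...
INSERT INTO dbo.Location (LocationId, Name)
VALUES (1, N'Magic Kingdom'), (2, N'Main Street USA');

-- ...and an edge line such as "1 2" becomes an edge row.
INSERT INTO dbo.ConnectsTo ($from_id, $to_id)
SELECT f.$node_id, t.$node_id
FROM dbo.Location AS f, dbo.Location AS t
WHERE f.LocationId = 1 AND t.LocationId = 2;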


Model Post-Processing with insight

The easystats team talks about the insight package in R:

We are talking about the insight package. It is what allows other packages, like easystats (parameters, effectsize, performance, report, …) or ggstatsplot, sjstats, or modelsummary, to be as powerful as they are, supporting tons of different R models. So why make your life hard when you can be like them, and rely on insight?

It is made for developers (and users) that do some postprocessing of different models (e.g., extracting stuff like parameters, values, data, names, specifications, predictions, priors, etc.), whether it is to nicely display their results or to do further computation.

Click through for an example of what it does and how it works. H/T R-bloggers


Determining a Good Test Set Size

John Mount thinks about test set size:

In this note we will answer “what is a good test set size?” three ways.

– The usual practical answer.
– A decision theory answer.
– A novel variational answer.

Each of these answers is a bit different, as they are solved in slightly different assumed contexts and optimizing different objectives. Knowing all 3 solutions gives us some perspective on the problem.

My rule of thumb is that I want it to be as small as possible while containing the highest likelihood of hitting all real-world scenarios enough times to provide a valid comparison. This conversely maximizes the size of the training data set, giving us the best chance of seeing the widest variety of scenarios we can during the formative phase.
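To put a rough number on that rule of thumb (my own gloss, not John’s derivation): if the test set is there to estimate an accuracy-like rate p to within a margin m at 95% confidence, the standard binomial argument gives

n \gtrsim \frac{1.96^2 \, p(1-p)}{m^2}

so pinning down an accuracy near 0.9 to within ±0.01 takes roughly 1.96² × 0.9 × 0.1 / 0.01² ≈ 3,500 test rows.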

And as usual, John goes way deeper than my rules of thumb. I like this post a lot.


Checking for Missing Failover Cluster Dependencies

Chad Callihan ran into an error creating a new database:

A tool that restores a model-type database and does a bit of configuration work was failing. I took a look at the stored procedures and started to go step by step. It didn’t take long before getting this error message when attempting to restore/create a database:

Msg 5184, Level 16, State 2, Line 3
Cannot use file ‘D:\sql_log\CC_Test_name_4.ldf’ for clustered server. Only formatted files on which the cluster resource of the server has a dependency can be used. Either the disk resource containing the file is not present in the cluster group or the cluster resource of the Sql Server does not have a dependency on it.

Click through for the solution.
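If you run into Msg 5184 yourself, one quick sanity check (assuming a failover cluster instance on SQL Server 2012 or later) is to ask the instance which disk paths its cluster resource actually depends on and compare them against the path in the error:

-- Paths the clustered instance is allowed to place files on; a restore
-- target outside this list is exactly what raises Msg 5184.
SELECT path_name
FROM sys.dm_io_cluster_valid_path_names;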


Using Active Directory Authentication for SQL Server on Linux

Jamie Wick takes us through a lengthy process:

SQL Server has been supported on several Linux distributions for a couple of years now. For some people, the primary stumbling block to implementing SQL Server on Linux is the need to retain Active Directory (i.e., Windows-based) authentication for their database users and applications. Below we’ll go over how to join a Linux server (Ubuntu release 20.04) with SQL Server 2019 to an Active Directory domain, and then configure SQL Server to allow Windows-based logins.

There are quite a few steps here and I appreciate Jamie providing us an image-filled, step-by-step process.
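The payoff at the end of those steps is pleasantly boring: once the Linux host is domain-joined and SQL Server has a working keytab, the T-SQL side looks just like it does on Windows. A sketch with a hypothetical domain and account:

-- Create a Windows-authenticated login on the Linux-hosted instance
-- and grant it membership as needed.
CREATE LOGIN [CONTOSO\jamie] FROM WINDOWS;
ALTER SERVER ROLE [sysadmin] ADD MEMBER [CONTOSO\jamie];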


Optimizing a SQL Server 2019 Project for a Dedicated SQL Pool

Kevin Chant shows us how we can modify a database schema intended for SQL Server 2019 to work best with an Azure Synapse Analytics dedicated SQL pool:

In this post I want to cover how you can transform your SQL Server database schema for a dedicated SQL Pool if you are using Azure DevOps, because I covered it at Data Toboggan over the weekend and it can be very useful.

By the end of this post, you will know one way you can transform the schema of a database project for SQL Server 2019 if you are using Azure DevOps, so that you can make it optimal for dedicated SQL Pools.

Click through for the process and an example. Note that this isn’t a quick “check this box and you’re done” type of solution, but if you already have a proper star schema, this will help you think through some of the things you’ll need to do.
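As a small taste of why the schema needs transforming at all: dedicated SQL pools drop foreign key constraints and instead ask for an explicit distribution and index choice on every table. A minimal sketch of a star-schema fact table, with hypothetical names and types:

CREATE TABLE dbo.FactSales
(
    SaleKey     bigint         NOT NULL,
    CustomerKey int            NOT NULL,
    SaleAmount  decimal(18, 2) NOT NULL
)
WITH
(
    -- Hash-distribute on the common join key to limit data movement;
    -- clustered columnstore suits large fact-table scans.
    DISTRIBUTION = HASH(CustomerKey),
    CLUSTERED COLUMNSTORE INDEX
);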
