
Author: Kevin Feasel

mssparkutils now notebookutils and Validating DAGs in Fabric

Sandeep Pawar gives us two quick hits:

First, if you haven’t noticed, mssparkutils has been officially renamed to notebookutils. Check out the official documentation for details. Be sure to update your notebooks to use notebookutils.

Read on for a pair of notes around this name change, as well as some capabilities to validate DAGs when using runMultiple to orchestrate multiple notebook executions.
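
For a rough idea of what that orchestration looks like, here is a minimal sketch of a DAG passed to runMultiple. This is my own illustration rather than code from Sandeep’s post: the notebook names, parameters, and timeouts are placeholders, and the exact field names are worth confirming against the official documentation.

# Inside a Fabric notebook, notebookutils is available without an explicit import.
dag = {
    "activities": [
        {
            "name": "PrepareData",             # unique activity name (placeholder)
            "path": "PrepareData",             # notebook to run (placeholder)
            "timeoutPerCellInSeconds": 90,
            "args": {"run_date": "2025-01-01"},
        },
        {
            "name": "BuildReport",             # placeholder
            "path": "BuildReport",
            "args": {"full_refresh": True},
            "dependencies": ["PrepareData"],   # runs only after PrepareData succeeds
        },
    ],
    "timeoutInSeconds": 3600,  # timeout for the entire run
    "concurrency": 2,          # maximum notebooks running at once
}

# Sandeep also covers validating the DAG structure before running it
# (for example, notebookutils.notebook.validateDAG(dag)); confirm the helper name in the docs.
notebookutils.notebook.runMultiple(dag)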


Random Forest Missing Data Imputation using missRanger

Michael Mayer handles missing data:

{missRanger} is a multivariate imputation algorithm based on random forests, and a fast version of the original missForest algorithm of Stekhoven and Buehlmann (2012). Surprise, surprise: it uses {ranger} to fit random forests. Especially combined with predictive mean matching (PMM), the imputations are often quite realistic.

This looks like an interesting package. At first, I thought it was a way of generating predictions outside the boundaries of training data and had concerns: a classic limitation of random forest as an algorithm is that it will not even try to predict values outside the range of what it sees in training data, so if the largest label is 10 and the smallest is 0, you won’t see a prediction of 11 or 50, no matter how you scale the inputs.

Instead of doing that, missRanger looks like it’s filling in missing data using a clever approach. That’s quite useful for dealing with incomplete data, a really common problem whose good solutions tend to be complex enough that people typically ignore them in favor of simple but less useful solutions like dropping rows altogether.
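
To make that boundedness point concrete, here is a small scikit-learn sketch of my own (not from Michael’s post). The forest trains on labels between 0 and 10, and even wildly out-of-range inputs cannot push the prediction past that range, because each tree only averages training labels it has already seen.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Train on a simple linear relationship with labels spanning 0 through 10.
rng = np.random.default_rng(42)
X_train = rng.uniform(0, 10, size=(200, 1))
y_train = X_train.ravel()

model = RandomForestRegressor(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

# Inputs far outside the training range still yield predictions within [0, 10].
print(model.predict([[50.0]]))   # close to 10, not 50
print(model.predict([[-25.0]]))  # close to 0, not -25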


Databricks Notebook Package Installation and Variables

Chen Hirsh diagnoses a problem:

A friend called to ask for my help with a weird issue. In a Databricks notebook using Python, he declares and assigns a variable in the first cell. Something like this:

my_var = 1

He then runs the rest of the notebook, and somewhere along the way, tries to use this variable, and gets this message:

NameError: name 'my_var' is not defined

Going back to cell 1 and checking the value of my_var, he gets the same error.

Read on for the root cause of the issue, as well as a pair of helpful tips from Chen.


Setting Row Label and Key Columns in Power BI

Chris Webb makes use of metadata:

A few weeks ago I wrote a post on how to improve the results you get from Power BI Copilot by editing the Linguistic Schema. As I mentioned, though, there are in fact lots of different ways that you as a Power BI semantic model developer can improve the results you get from Copilot, and in this post I’ll show you another one: setting the Row Labels and Key Columns properties on a table.

Read on to see how these can affect results.


System.Data.SqlClient Deprecated

David Engel has an announcement:

We announced Microsoft.Data.SqlClient in the first half of 2019 and shipped the first stable package later that year. That release coincided with .NET Core 3.0. We’ve been developing Microsoft.Data.SqlClient in the dotnet/SqlClient repo over the last five years. It’s now time to start the deprecation process for the System.Data.SqlClient package. We plan to take a staged approach to deprecation, with the intent to align support changes and associated code transition activities between these libraries to natural migration points between .NET major versions.

I haven’t used System.Data.SqlClient in a while, so I’m not sure how much is in that library that isn’t in Microsoft.Data.SqlClient.


pg_dump and the Backup Tool Debate

Gulcin Yildirim Jelinek explains the debate around whether pg_dump is a backup tool or not:

Recently, while writing about the vulnerability affecting pg_dump, the topic of decommissioning pg_dump came up on Twitter. Unlike the nostalgic feelings many had for Pluto, there was less reluctance to see pg_dump reclassified. In fact, some people were eager to retire it as a backup utility, and I even got a bit of pushback for still referring to pg_dump as one.

I was talking to my colleague Simona the other day, and she mentioned that everybody in Postgres circles says, “pg_dump is not a backup tool,” but perhaps it’s not always explained well why it is not.

Read on for that explanation.


Interpreting Linear Regression Model Coefficients

Vinod Chugani looks at a linear regression:

Linear regression models are foundational in machine learning. Merely fitting a straight line and reading the coefficient tells a lot. But how do we extract and interpret the coefficients from these models to understand their impact on predicted outcomes? This post will demonstrate how one can interpret coefficients by exploring various scenarios. We’ll delve into the analysis of a single numerical feature, investigate the role of categorical variables, and unpack the complexities introduced when these features are combined. Through this exploration, we aim to equip you with the skills needed to leverage linear regression models effectively, enhancing your analytical capabilities across different data-driven domains.

Click through for details, with examples in Python.
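
As a small taste of what coefficient interpretation looks like, here is a sketch of my own (not code from the article) with one numeric feature and one one-hot-encoded categorical feature. Each coefficient is the expected change in the target for a one-unit change in that feature, holding the others fixed; the data and column names are made up.

import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Synthetic housing data: price driven by square footage plus a city premium.
rng = np.random.default_rng(0)
n = 500
sqft = rng.uniform(500, 3000, n)
city = rng.choice(["Austin", "Boston"], n)
price = 150 * sqft + 50_000 * (city == "Boston") + rng.normal(0, 10_000, n)

X = pd.DataFrame({"sqft": sqft, "city": city})
X = pd.get_dummies(X, columns=["city"], drop_first=True)  # adds a 0/1 city_Boston column
model = LinearRegression().fit(X, price)

# sqft coefficient: expected price change per additional square foot (about 150 here).
# city_Boston coefficient: expected premium relative to the dropped category, Austin (about 50,000).
print(dict(zip(X.columns, model.coef_.round(2))))
print(round(model.intercept_, 2))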


Time Series Anomaly Detection in Microsoft Fabric

Adi Eldar talks anomaly detection:

Anomaly Detector, one of the Azure AI services, enables you to monitor and detect anomalies in your time series data. This service is based on advanced algorithms: SR-CNN for univariate analysis and MTAD-GAT for multivariate analysis. The service is being retired by October 2026, and as part of the migration process:

  • The algorithms were open sourced and published as the new time-series-anomaly-detector package on PyPI.
  • We offer a time series anomaly detection workflow in the Microsoft Fabric data platform.

Read on to see what replacements exist and how you can use the time-series-anomaly-detector package in Microsoft Fabric.


Reviewing the SQL Server Error Log

Jim Evans digs into the logs:

In SQL Server there are two primary sets of error logs: one for the database engine and a second for SQL Server Agent. Reviewing these logs is routine for Database Administrators and sometimes Developers when troubleshooting issues. What are the different ways to view these error logs? Are there different scenarios when you would use one view over another? Do any other error logs exist that SQL Server Professionals should review?

Read on for three ways to do this, including one outside of SQL Server itself.
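
Not necessarily one of Jim’s three methods, but as an illustration of reading the engine error log programmatically, here is a minimal Python sketch using pyodbc and sp_readerrorlog; the connection string is a placeholder for your own instance.

import pyodbc

# Placeholder connection string; adjust the driver, server, and authentication.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost;"
    "Trusted_Connection=yes;TrustServerCertificate=yes;"
)

cursor = conn.cursor()
# sp_readerrorlog arguments: log number (0 = current), log type (1 = engine, 2 = Agent),
# and an optional search string to filter on.
cursor.execute("EXEC sp_readerrorlog 0, 1, N'error'")
for log_date, process_info, text in cursor.fetchall():
    print(log_date, process_info, text)

conn.close()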


Storing and Parsing JSON in SQL Server

Ed Pollack talks JSON:

Like XML, JSON is an open standard storage format for data, metadata, parameters, or other unstructured or semi-structured data. Because of its heavy usage in applications today, it inevitably will make its way into databases where it will need to be stored, compressed, modified, searched, and retrieved.

Even though a relational database is not the ideal place to store and manage less structured data, application requirements can oftentimes override an “optimal” database design. There is a convenience in having JSON data close to related relational data, and architecting its storage effectively from the start can save significant time and resources in the future.

Read on for plenty of examples and tips. Ideologically, I have no problem parsing JSON to load data into SQL Server. I have no real problem storing data in JSON if the calling application takes that JSON as-is and does not expect the database to modify or shred that JSON. I have no problem taking relational data and creating JSON structures to send out to calling applications. My problem comes when you store the data as JSON but then expect the database to manage that data. Treat the JSON blob as atomic and we’re fine; otherwise, I want to make that data relational, as befits a relational database.
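
In that spirit, here is a small sketch of my own (not from Ed’s article) that shreds a JSON document into relational rows before it ever reaches the database; the document shape and the OrderLines table are hypothetical.

import json

# A JSON document as an application might hand it to us.
payload = """
{
  "order_id": 1001,
  "customer": "Contoso",
  "lines": [
    {"sku": "A-100", "qty": 2, "price": 19.99},
    {"sku": "B-205", "qty": 1, "price": 5.49}
  ]
}
"""

doc = json.loads(payload)

# Flatten the nested structure into rows for a hypothetical dbo.OrderLines table.
rows = [
    (doc["order_id"], doc["customer"], line["sku"], line["qty"], line["price"])
    for line in doc["lines"]
]

# These tuples can then be bulk-inserted with any SQL Server client library, e.g.
# cursor.executemany("INSERT INTO dbo.OrderLines VALUES (?, ?, ?, ?, ?)", rows)
for row in rows:
    print(row)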
