Press "Enter" to skip to content

Author: Kevin Feasel

Linked Servers from SQL MI using Microsoft Entra ID

Luis Aranda has the first of a two-part series:

Lately, we have seen some customers interested in the options available to use linked servers from Managed Instance with Entra Authentication (formerly Azure Active Directory). It is certainly possible to create linked servers on Azure SQL Managed Instance (SQL MI) to connect to other PaaS databases, such as other SQL MIs, Azure SQL Databases, or Synapse databases, using Entra Authentication.

Click through to see how you can do this using a managed identity. In the next article, Luis promises to show us how to do it with pass-through authentication, so you use your credentials instead of the managed identity’s credentials to access the remote server.
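As a sketch of what that looks like, the T-SQL below creates a linked server whose connections authenticate as the instance's managed identity. Server, database, and table names are hypothetical, and it assumes the Microsoft OLE DB Driver for SQL Server (MSOLEDBSQL) on the instance accepts the Authentication=ActiveDirectoryMsi keyword and that the managed identity has already been granted access on the target database.

```sql
-- Hypothetical names throughout.
EXEC master.dbo.sp_addlinkedserver
    @server     = N'RemoteAzureSql',                    -- local name for the linked server
    @srvproduct = N'',
    @provider   = N'MSOLEDBSQL',                        -- Microsoft OLE DB Driver for SQL Server
    @datasrc    = N'myserver.database.windows.net',
    @catalog    = N'MyDatabase',
    @provstr    = N'Authentication=ActiveDirectoryMsi'; -- authenticate as the managed identity

-- Then query the remote database through the linked server:
SELECT TOP (10) *
FROM RemoteAzureSql.MyDatabase.dbo.SomeTable;
```

On the target database, the managed identity still needs a contained user (CREATE USER [your-mi-name] FROM EXTERNAL PROVIDER;) with appropriate permissions.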


VARCHAR() in Microsoft Fabric Lakehouses and SQL Endpoints

Gerhard Brueckl models some data:

Defining data types and knowing the schema of your data have always been crucial for performant data platforms, especially when it comes to string data types, which can potentially consume a lot of space and memory. For Lakehouses in general (not only Fabric Lakehouses), there is usually only one data type for text data: a generic STRING of arbitrary length. In terms of Apache Spark, this is StringType(). While this applies to Spark dataframes, it is not entirely true for Spark tables – here is what the docs say:

Read through for more information on that, as well as how to define a table in a Microsoft Fabric lakehouse using VARCHAR(). The display is a little weird, but Greg Low explains why in the comments.
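As a rough illustration of the DDL side, here is a minimal Spark SQL sketch of creating a lakehouse table with a constrained string column. The table and column names are made up; Spark enforces the declared length on write but still stores the data as a plain string under the hood.

```sql
-- Spark SQL in a Fabric notebook; names are hypothetical.
CREATE TABLE customers
(
    customer_id   INT,
    customer_name VARCHAR(100)  -- length-checked on insert, stored as a string
) USING DELTA;

-- The declared type shows up in the table metadata:
DESCRIBE TABLE customers;
```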


Exploring the gRPC API in Spark Connect with .NET

Ed Elliott continues a series on Spark Connect. First, Ed builds out something DataFrame API-ish:

So there are two goals of this post: the first is to take a look at Apache Arrow and how we can do things like show the output from DataFrame.Show; the second is to start to create objects that look more familiar to us, i.e., the DataFrame API.

If that’s not enough for you, Ed then shows how you can analyze a plan:

In this post we will continue looking at the gRPC API and the AnalyzePlan method, which takes a plan and analyzes it. To be honest, I expected this post to be longer but decided just to cover the AnalyzePlan method.

This has been a really fun series so far from Ed, so check these out. The only downside is that the people demand more F#. And by “the people,” I mostly mean that I would love to see F# examples.
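If you want a feel for what those RPCs do without hand-rolling the protobuf, the stock PySpark client for Spark Connect drives the same gRPC service. A minimal sketch, assuming a Spark Connect server is listening on the default sc://localhost:15002 endpoint:

```python
# Requires pyspark >= 3.4 with the connect extras: pip install "pyspark[connect]"
from pyspark.sql import SparkSession

# Connect to a running Spark Connect server over gRPC.
spark = SparkSession.builder.remote("sc://localhost:15002").getOrCreate()

df = spark.range(10).filter("id % 2 = 0")

# show() executes the plan and streams rows back as Apache Arrow
# record batches -- the same payload Ed decodes by hand in .NET.
df.show()

# explain() is answered by the AnalyzePlan RPC covered in the second post.
df.explain(extended=True)
```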


Parallelizing Notebook Runs in Microsoft Fabric via Python

Sandeep Pawar kicks off multiple notebooks at once:

The notebook class in mssparkutils has two methods to run notebooks – run and runMultiple. run allows you to trigger a notebook run for a single notebook. Mim wrote a nice blog showing how to use it and its usefulness.

runMultiple, on the other hand, allows you to create a Directed Acyclic Graph (DAG) of notebooks to execute them in parallel and in a specified order, similar to a pipeline run, except in a notebook.

Read on to learn more about the advantages of this latter approach as well as how you can do it.
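Roughly, the two methods look like this inside a Fabric notebook; the notebook names and parameters are hypothetical, and the DAG field names follow the Fabric documentation at the time of writing.

```python
from notebookutils import mssparkutils  # available by default in Fabric notebooks

# run: execute one notebook (path, timeout in seconds, parameters).
result = mssparkutils.notebook.run("PrepareData", 300, {"run_date": "2024-01-01"})

# runMultiple: Load_A and Load_B run in parallel; Aggregate waits on both.
dag = {
    "activities": [
        {"name": "Load_A", "path": "Load_A", "timeoutPerCellInSeconds": 300},
        {"name": "Load_B", "path": "Load_B", "timeoutPerCellInSeconds": 300},
        {
            "name": "Aggregate",
            "path": "Aggregate",
            "dependencies": ["Load_A", "Load_B"],
        },
    ]
}
mssparkutils.notebook.runMultiple(dag)
```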


Replacing DISTINCT with EXISTS

Andy Brownsword makes a switch:

The DISTINCT clause in a query can help us quickly remove duplicates from our results. Sometimes it can be beneficial to stop and ask why. Why do we need to use the clause? Why are we receiving duplicates from our data?

I typically see this when a JOIN is used where we don’t really want all of those results. This could be a ‘does something exist’ check, such as whether a customer has ever ordered before. The issue comes when multiple rows are returned, as with a frequent customer in this example.

As an alternative to this, Andy shows how you can use the EXISTS clause to find records matching some criterion.
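A quick sketch against a hypothetical Customers/Orders schema: the first query needs DISTINCT to paper over the duplicates the join introduces, while the second asks the question directly and can stop probing Orders at the first match.

```sql
-- DISTINCT: the join returns one row per order, so duplicates must be removed.
SELECT DISTINCT
    c.CustomerID,
    c.CustomerName
FROM dbo.Customers c
    INNER JOIN dbo.Orders o
        ON o.CustomerID = c.CustomerID;

-- EXISTS: "has this customer ever ordered?" -- no duplicates to remove.
SELECT
    c.CustomerID,
    c.CustomerName
FROM dbo.Customers c
WHERE EXISTS
(
    SELECT 1
    FROM dbo.Orders o
    WHERE o.CustomerID = c.CustomerID
);
```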


Finding the Cake Dataset’s Original Source

Rasmus Baath has done a good deed for all:

In statistics, there are a number of classic datasets that pop up in examples, tutorials, etc. There’s the infamous iris dataset (just type iris in your nearest R prompt), the Palmer penguins (the modern iris replacement), the titanic dataset(s) (I hope you’re not a guy in 3rd class!), etc. While looking for a dataset to illustrate a simple hierarchical model I stumbled upon another one: The cake dataset in the lme4 package which is described as containing “data on the breakage angle of chocolate cakes made with three different recipes and baked at six different temperatures [as] presented in Cook (1938)”. For me, this raised a lot of questions: Why measure the breakage angle of chocolate cakes? Why was this data collected? And what were the recipes?

Read on as Rasmus unravels the mysteries of the cake dataset with the help of several others. H/T R-Bloggers.
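If you want to poke at the dataset yourself, it ships with the lme4 package:

```r
# The cake dataset comes bundled with lme4.
library(lme4)

data(cake)
str(cake)   # breakage angle by recipe, temperature, and replicate
head(cake)
```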


Finding the Earliest Date in R

Steven Sanderson puts on the archaeologist’s fedora and bullwhip:

Greetings, fellow data enthusiasts! Today, we embark on a quest to uncover the earliest date lurking within a column of dates using the power of R. Whether you’re a seasoned R programmer or a curious newcomer, fear not, for we shall navigate through this journey step by step, unraveling the mysteries of date manipulation along the way.

Imagine you have a dataset filled with dates, and you’re tasked with finding the earliest one among them. How would you tackle this challenge? Fear not, for R comes to our rescue with its arsenal of functions and packages.

Click through to see how, keeping those pernicious missing values in mind.
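The core of it in base R is min() on a Date vector, remembering that a single NA will otherwise poison the result:

```r
# A toy date column with a missing value.
dates <- as.Date(c("2023-05-14", NA, "2021-12-25", "2022-07-04"))

min(dates)                # NA: the missing value propagates
min(dates, na.rm = TRUE)  # "2021-12-25"
```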
