Press "Enter" to skip to content


Finding the Actual Error Line in sp_executesql

Thom Andrews solves a problem:

Notice that the error states line 2, not line 8, which is the line on which sys.sp_executesql was called. Knowing the line the error occurred on within the dynamic batch is certainly important, but if you’re working with multiple dynamic batches you have no way of knowing which dynamic batch might have produced the error; was it the one executed on line 100? Line 200? Line 350?

Solving this problem wasn’t exactly simple, and it came with a couple of caveats.

Click through for the approach. As Thom mentions, it isn’t perfect, but it is reasonable and interesting.
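To see the general phenomenon outside of T-SQL, here is a loose Python analogue (my own sketch, not Thom’s code): an error raised inside a dynamically executed batch reports its line number relative to the dynamic string, not to the caller.

```python
import traceback

# A dynamic "batch," analogous to the string you'd pass to sys.sp_executesql
dynamic_code = """x = 1
y = x / 0
"""

try:
    # Compile with a label so the traceback can identify the dynamic batch
    exec(compile(dynamic_code, "<dynamic batch>", "exec"))
except ZeroDivisionError:
    # The traceback reports line 2 of the string, not the line in this
    # script where exec() was called -- the same ambiguity Thom describes.
    traceback.print_exc()
```

Labeling each batch when you compile it is the same basic trick as tagging each dynamic T-SQL batch so you can tell them apart later.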


Read and Write Data with PySpark

Dustin Vannoy has two of the three R’s down:

Every Spark pipeline involves reading data from a data source or table. As data engineers, we usually end the pipeline by writing the transformed data. In this tutorial we walk through some of the most common formats and cloud storage locations for reading and writing with Spark. We’ll save some of the advanced Delta Lake capabilities for another tutorial.

Click through to see how to read from and write to CSV, JSON, and Parquet formats. Dustin has examples of working with Azure Blob Storage, S3, and Google Cloud Storage, and even some database examples with JDBC.
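As a flavor of what the post covers, here is a minimal PySpark sketch (the paths are hypothetical placeholders, not Dustin’s examples) that reads a CSV file and writes it back out as Parquet:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("read-write-demo").getOrCreate()

# Read a CSV file with a header row, letting Spark infer column types
df = (spark.read
      .option("header", "true")
      .option("inferSchema", "true")
      .csv("/data/input/sales.csv"))  # hypothetical input path

# Write the same data back out as Parquet, replacing any prior output
df.write.mode("overwrite").parquet("/data/output/sales")  # hypothetical output path
```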


Creating Crosstabs in R

Steven Sanderson shows off the ability to perform crosstabs in R:

As a programmer, you’re constantly faced with the task of organizing and analyzing data. One powerful tool in your R arsenal is the xtabs() function. In this blog post, we’ll explore the versatility and simplicity of xtabs() for aggregating data. We’ll use the mtcars dataset and the healthyR.data::healthyR_data dataset to illustrate its functionality. Get ready to dive into the world of data aggregation with xtabs()!

Click through for two examples of it creating a crosstab (or, in SSRS/Power BI terms, a matrix) from your data.
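The post itself is R, but if you live in Python, a rough analogue (my sketch, using pandas.crosstab rather than xtabs()) looks like this:

```python
import pandas as pd

# A small stand-in dataset with mtcars-style columns
cars = pd.DataFrame({
    "cyl":  [4, 4, 6, 6, 8, 8, 8, 4],
    "gear": [4, 5, 4, 3, 3, 3, 5, 4],
})

# Count of rows per cylinder/gear combination, akin to xtabs(~ cyl + gear, mtcars)
print(pd.crosstab(cars["cyl"], cars["gear"]))
```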


Measuring Bandwidth with PowerShell

Patrick Gruenauer checks download speed:

Short explanation: a file will be downloaded from my homepage. It’s a video about PowerShell profiles, in German. During the download, the process is measured with the Measure-Command cmdlet. At the end we get the final result and the time it took to download the file.

You can, of course, change the file location to whatever makes sense, but it’s nice to have something handy and not need to go to any websites for a speed test.
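Patrick’s script uses Measure-Command in PowerShell; as a point of comparison, a similar measurement in Python might look like this sketch (the URL is a placeholder, so substitute any reasonably large file):

```python
import time
import urllib.request

url = "https://example.com/sample-video.mp4"  # placeholder download target

start = time.perf_counter()
with urllib.request.urlopen(url) as response:
    data = response.read()  # download the entire file into memory
elapsed = time.perf_counter() - start

# Convert bytes over seconds into megabits per second
mbps = (len(data) * 8) / (elapsed * 1_000_000)
print(f"Downloaded {len(data):,} bytes in {elapsed:.2f}s ({mbps:.2f} Mbit/s)")
```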


The Cost and Difficulty Level of Changes

Richard Swinbank has an image for us:

I spent some time working at a property portal, where users could look at online listings of homes for sale or rent, then go on to book a viewing appointment with an estate agent. On one occasion we were asked to build a Power BI report showing:

  • the number of appointments booked by portal users
  • the percentage of appointments where the user had viewed the online listing more than once.

Sounds easy enough, right?

Click through for the image. It makes intuitive sense, but is a good visual depiction of why some data requests are more challenging than others.


Running SqlBulkCopy in Parallel from PowerShell

Jose Manuel Jurado Diaz has a script for us:

Today, we encountered an interesting service request: attempting to reduce the load time for 100,000 records into a table with 97 varchar(320) fields in an Azure SQL Hyperscale database. In the following, I would like to share my lessons learned.

The idea is to split the execution of multiple SqlBulkCopy operations across different concurrent processes. In this case, we are going to split the work into 5 processes running in parallel, each inserting 20,000 rows; let’s try to determine the total size.

Read on for the script, as well as a rough idea of how long it’ll take inserting into an Azure SQL DB Hyperscale instance.
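Jose’s script is PowerShell around SqlBulkCopy; as a rough sketch of the same divide-and-conquer idea in Python (using threads rather than separate processes, with a hypothetical connection string, table, and column), you might do something like:

```python
from concurrent.futures import ThreadPoolExecutor
import pyodbc

# Hypothetical connection string -- fill in your own server and credentials
CONN_STR = ("Driver={ODBC Driver 18 for SQL Server};"
            "Server=tcp:myserver.database.windows.net;Database=mydb;"
            "UID=myuser;PWD=mypassword;")

def insert_chunk(rows):
    # Each worker gets its own connection and inserts its slice of the rows
    conn = pyodbc.connect(CONN_STR)
    cursor = conn.cursor()
    cursor.fast_executemany = True  # send parameterized inserts in batches
    cursor.executemany("INSERT INTO dbo.Target (Col1) VALUES (?)", rows)
    conn.commit()
    conn.close()

rows = [(f"value {i}",) for i in range(100_000)]  # stand-in for the source data
chunks = [rows[i::5] for i in range(5)]           # 5 slices of 20,000 rows each

with ThreadPoolExecutor(max_workers=5) as pool:
    list(pool.map(insert_chunk, chunks))  # run the 5 inserts in parallel
```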


An Overview of Statistics in SQL Server

Matthew McGiffen has a primer for us:

Statistics are vitally important in allowing SQL Server to find the most efficient way to execute your queries. In this post we learn more about them, what they are and how they are used.

Read on for plenty of information about stats. One thing I’d emphasize is that, for the most part, auto-generated stats are fine. The only time I’ve ever seen any value in creating my own stats is with multi-column statistics (which auto-generation doesn’t give you), and even that’s of fairly limited value, at least in my experience. I’m sure there’s a point at which hand-crafting your own statistics makes a tiny but noticeable marginal difference, but the systems I’ve worked with tend not to be anywhere near that point. There are usually much bigger fish to fry.


Indexes and Stats on Tables with Always Encrypted

Matthew McGiffen gives us the low-down:

In a previous post we looked at executing queries against columns encrypted using Always Encrypted. In this short post, we look at whether you are able to have indexes on those columns. This information is relevant to the case where you are using Always Encrypted without enclaves; we’ll look at working with enclaves later on.

Click through to see how this all works.


Tools for Optimizing Azure SQL MI Performance

Rie Merritt breaks out the toolbox:

Azure SQL Managed Instance provides options within and outside the Azure portal for troubleshooting and optimizing performance. Within the portal, you can leverage automatic tuning and Intelligent Insights. Outside of the portal, you can take advantage of capabilities that are already in the database engine, such as Query Store and dynamic management views (DMVs). In addition, Microsoft offers several monitoring options that are in preview: Azure SQL Insights inside Azure Monitor, which requires an agent on a VM you own; Azure SQL Analytics; and Azure diagnostic telemetry.

Automatic tuning in SQL Managed Instance supports FORCE LAST GOOD PLAN, which identifies queries using an execution plan that is slower than the previous good plan and forces them to use the last known good execution plan. Since the system automatically monitors workload performance, it dynamically adjusts as workloads change to force the best-performing query execution plan.

Many of the things Rie describes are also available on-premises, though Azure SQL Analytics is only available in Azure SQL DB and Azure SQL MI, as of the time of this post.
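If you want to check from outside the portal whether automatic tuning is actually in effect, one option is to query the sys.database_automatic_tuning_options DMV. Here is a small sketch doing so from Python (the connection details are placeholders):

```python
import pyodbc

# Hypothetical connection details for the managed instance database
conn = pyodbc.connect(
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:mymanagedinstance.database.windows.net;"
    "Database=mydb;UID=myuser;PWD=mypassword;"
)

# desired_state_desc is what you've asked for; actual_state_desc is what's running
for row in conn.execute(
    "SELECT name, desired_state_desc, actual_state_desc "
    "FROM sys.database_automatic_tuning_options"
):
    print(row.name, row.desired_state_desc, row.actual_state_desc)
```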


A Primer on Microsoft Fabric Notebooks

Leila Etaati provides an explanation:

In Fabric, there are tools for different user personas to work with. For example, for a citizen data analyst, Dataflows and Power BI Datasets are the tools with which the analyst can build the data model. For data engineers and data scientists, one of the tools is the Notebook.

The Notebook is a place to write and run code in languages such as PySpark (Python), Spark (Scala), Spark SQL, and SparkR (R). These are usually familiar languages for data engineers and data scientists. The Notebook provides an editor to write code in these languages, run it in the same place, and see the results. Consider it the coding tool for data engineers and data scientists.

Click through for a video, as well as a regular blog post.
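As a taste of what a notebook cell looks like, here is a tiny PySpark example of the sort you could run in Fabric (in a real notebook the Spark session is provided for you as spark; this sketch builds its own so it runs anywhere):

```python
from pyspark.sql import SparkSession

# In a Fabric notebook the session already exists as `spark`;
# creating one here keeps the sketch self-contained.
spark = SparkSession.builder.appName("fabric-notebook-demo").getOrCreate()

# A PySpark DataFrame, then Spark SQL over it via a temporary view
df = spark.createDataFrame([(1, "a"), (2, "b"), (3, "c")], ["id", "label"])
df.createOrReplaceTempView("demo")
spark.sql("SELECT id, label FROM demo WHERE id > 1").show()
```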
