Month: September 2023

Parameterizing Databricks Notebooks with Widgets

Meagan Longoria adds some widgets:

Widgets provide a way to parameterize notebooks in Databricks. If you need to call the same process for different values, you can create widgets to allow you to pass the variable values into the notebook, making your notebook code more reusable. You can then refer to those values throughout the notebook.

Click through to learn more about the four types of widgets and how they work.
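
As a quick illustration of the idea (my sketch, not Meagan's code), here is what a text widget and a dropdown widget might look like in a Python notebook. The widget, catalog, and table names are hypothetical:

    # Create widgets; these render as input controls at the top of the notebook.
    dbutils.widgets.text("schema_name", "dbo", "Schema Name")
    dbutils.widgets.dropdown("env", "dev", ["dev", "test", "prod"], "Environment")

    # Read the widget values anywhere in the notebook.
    schema_name = dbutils.widgets.get("schema_name")
    env = dbutils.widgets.get("env")

    # Use them to parameterize the work, e.g., picking a table to read.
    df = spark.table(f"{env}_catalog.{schema_name}.sales")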

Overlaying Lines with Points in Base R

Steven Sanderson adds points to those lines:

In this blog post, we’ll explore how to overlay points or lines on a plot using Base R. We’ll use the plot() function to create the initial plot and then show how to overlay points with points() and lines with lines(). We’ll provide several examples, explaining each code block in simple terms, and encourage you to try them out on your own datasets.

Read on to see how. It’s also pretty easy to do in ggplot2 or other visualization libraries.
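
If you want a feel for the pattern before clicking through, here is a minimal sketch against a built-in dataset (my example, not Steven's):

    # Start with a line plot of a built-in dataset.
    x <- pressure$temperature
    y <- pressure$pressure
    plot(x, y, type = "l", col = "steelblue",
         xlab = "Temperature (C)", ylab = "Pressure (mm Hg)")

    # Overlay the individual observations on the existing plot.
    points(x, y, pch = 19, col = "firebrick")

    # Overlay another line, here a lowess smoother, on the same plot.
    lines(lowess(x, y), lty = 2, col = "darkgreen")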

Creating a Postgres Cluster on AWS with pg_cirrus

Salman Ahmed builds a cluster:

pg_cirrus is a simple and automated solution to deploy highly available 3-node PostgreSQL clusters with auto failover. It is built using Ansible, and we use pgpool to perform auto failover and load balancing.

We understand that setting up a 3-node HA cluster using pg_cirrus in a cloud environment isn’t as simple as setting it up on VMs. In this blog, we will guide you through setting up a 3-node HA cluster using pg_cirrus on AWS EC2 instances.

Read on for the step-by-step instructions.
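
Once the cluster is up, one quick sanity check is to ask each node directly which role it holds. This is a generic PostgreSQL check rather than anything pg_cirrus-specific, and the hostnames are placeholders:

    #!/usr/bin/env bash
    # Ask each node whether it is in recovery (standby) or not (primary).
    for node in node1.example.com node2.example.com node3.example.com; do
      role=$(psql -h "$node" -U postgres -tA \
        -c "SELECT CASE WHEN pg_is_in_recovery() THEN 'standby' ELSE 'primary' END;")
      echo "$node: $role"
    done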

Metadata-Driven Pipelines for Azure Data Factory Loads

Marc Bushong doesn’t want to copy and paste:

Developing ETLs/ELTs can be a complex process when you add in business logic, large amounts of data, and the high volume of table data that needs to be moved from source to target. This is especially true in analytical workloads involving Azure SQL when there is a need to either fully reload a table or incrementally update a table. In order to handle the logic to incrementally update a table or fully reload a table in Azure SQL (or Azure Synapse), we will need to create the following assets:

  • Metadata table in Azure SQL
    • This will contain the configurations needed to load each table end to end
  • Metadata driven pipelines
    • Parent and child pipeline templates that will orchestrate and execute the ETL/ELT end to end
  • Custom SQL logic for incremental processing
    • Dynamic SQL to perform the delete and insert based on criteria the user provides in the metadata table

Read on for the demonstration, which reads from one Azure SQL DB into another.
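
To make the first bullet concrete, a metadata table for this kind of load might look something like the following. This is my sketch of the general shape, not Marc's actual schema:

    -- One row per table to load; the child pipeline iterates over these rows.
    CREATE TABLE dbo.PipelineMetadata
    (
        TableId            INT IDENTITY(1,1) PRIMARY KEY,
        SourceSchema       NVARCHAR(128) NOT NULL,
        SourceTable        NVARCHAR(128) NOT NULL,
        TargetSchema       NVARCHAR(128) NOT NULL,
        TargetTable        NVARCHAR(128) NOT NULL,
        LoadType           VARCHAR(20)   NOT NULL, -- 'Full' or 'Incremental'
        WatermarkColumn    NVARCHAR(128) NULL,     -- drives the incremental logic
        LastWatermarkValue DATETIME2     NULL,
        IsEnabled          BIT           NOT NULL DEFAULT 1
    );

    -- The parent pipeline picks up the whole configuration set in one lookup.
    SELECT SourceSchema, SourceTable, TargetSchema, TargetTable,
           LoadType, WatermarkColumn, LastWatermarkValue
    FROM dbo.PipelineMetadata
    WHERE IsEnabled = 1;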

Data Temperature in Microsoft Fabric

Marc Lelijveld breaks out the thermometer:

As part of Microsoft Fabric, a new storage mode to connect from Power BI to data in OneLake has been introduced. Direct Lake makes it possible to use your data from OneLake in Power BI without taking an additional copy of the data. Where Direct Lake promises to deliver the performance of import mode with the real-time capabilities of DirectQuery, it is time to have a closer look at how data gets loaded into memory and to delve into the concept of data dictionary temperature.

In this blog I will explain when data gets loaded into memory, elaborate on how you can measure the dictionary temperature of your data, and discuss the effect of queries on that temperature.

Click through to see what affects this measure and how.
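
For reference, the usual way to inspect this is a DMV query against the semantic model, e.g., from DAX Studio or SQL Server Management Studio over the XMLA endpoint. The column list below is my recollection of the DMV's current shape, so treat it as an assumption and check Marc's post for his exact query:

    -- ISRESIDENT, TEMPERATURE, and LAST_ACCESSED report whether a column is
    -- in memory, how hot it is, and when it was last touched.
    SELECT DIMENSION_NAME, COLUMN_ID, ISRESIDENT, TEMPERATURE, LAST_ACCESSED
    FROM $System.DISCOVER_STORAGE_TABLE_COLUMN_SEGMENTS
    ORDER BY TEMPERATURE DESC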

The Risk of Changing MaxDOP

Erik Darling recommends caution:

Like in yesterday’s post about Cost Threshold For Parallelism, changing MAXDOP settings will have a universal effect on the workload.

This is true whether you change it at the server level for all databases, or at the database level using a database scoped configuration for a single database.

It is a guardrail to prevent unwanted conditions as a whole, like excessive concurrent parallel queries causing worker thread starvation (THREADPOOL waits), or just pushing CPU to 100% for extended periods of time.

Read on to see what Erik recommends you think about after any MaxDOP change.
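
For context, these are the two levels Erik mentions. The value of 8 is purely illustrative, not a recommendation:

    -- Server level: applies to every database on the instance.
    EXEC sys.sp_configure N'show advanced options', 1;
    RECONFIGURE;
    EXEC sys.sp_configure N'max degree of parallelism', 8;
    RECONFIGURE;

    -- Database level: applies only to the current database.
    ALTER DATABASE SCOPED CONFIGURATION SET MAXDOP = 8;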

Faceted Images in ggplot2

Steven Sanderson shows multiple plots on one image:

Data visualization is a crucial tool in the data scientist’s toolkit. It allows us to explore and communicate complex patterns and insights effectively. In the world of R programming, one of the most powerful and versatile packages for data visualization is ggplot2. Among its many features, ggplot2 offers the facet_grid() function, which enables you to create multiple plots arranged in a grid, making it easier to visualize different groups of data simultaneously.

In this blog post, we’ll dive into the fascinating world of facet_grid() using a practical example. We’ll generate some synthetic data, split it into multiple groups, and then use facet_grid() to create a visually appealing grid of plots.

Read on for the demo script. The text talks about facet_grid(), but the demo uses facet_wrap(). The two behave very similarly, though they have slightly different use cases.
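
As a quick sketch of that difference (my example, not Steven's):

    library(ggplot2)

    # Synthetic data with one grouping variable.
    set.seed(123)
    df <- data.frame(
      x     = rnorm(300),
      y     = rnorm(300),
      group = rep(c("A", "B", "C"), each = 100)
    )

    # facet_wrap() wraps panels for a single variable into a grid.
    ggplot(df, aes(x, y)) +
      geom_point() +
      facet_wrap(~ group)

    # facet_grid() lays panels out along explicit row and column variables.
    ggplot(df, aes(x, y)) +
      geom_point() +
      facet_grid(. ~ group)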

Building a Retry Mechanism for sqlcmd in Bash

Jose Manuel Jurado Diaz won’t let failure get him down:

Introduction:

Efficiently managing temporary failures and timeouts is crucial in production environments when connecting to databases. In this article, we’ll explore how to implement a retry mechanism with sqlcmd in a Bash script, dynamically increasing timeouts with each failed attempt.

Problem Statement:

Operations can fail due to network issues, overloaded servers, or other temporary problems when interacting with databases. Implementing a retry mechanism helps address these temporary issues without manual intervention.

Read on for the solution script. You could also adapt this to PowerShell fairly easily, I think, though if you do go down that road, I’d recommend taking a look at Polly and PsPolly.
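
The core pattern looks something like this. It's my sketch of the idea rather than Jose's script, and the server, database, and credential variables are placeholders:

    #!/usr/bin/env bash
    SERVER="myserver.database.windows.net"
    DATABASE="mydb"
    MAX_RETRIES=5
    TIMEOUT=5

    for ((attempt = 1; attempt <= MAX_RETRIES; attempt++)); do
      # -l sets the login timeout; double it after each failed attempt.
      if sqlcmd -S "$SERVER" -d "$DATABASE" -U "$SQL_USER" -P "$SQL_PASSWORD" \
           -l "$TIMEOUT" -Q "SELECT 1;"; then
        echo "Attempt $attempt succeeded."
        exit 0
      fi
      echo "Attempt $attempt failed; retrying with a longer timeout."
      TIMEOUT=$((TIMEOUT * 2))
      sleep 2
    done

    echo "All $MAX_RETRIES attempts failed." >&2
    exit 1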

Thoughts on Cost Threshold for Parallelism

Erik Darling has some thoughts:

First, I’m not suggesting that anyone should be using the default value for Cost Threshold For Parallelism. It’s old and moldy and not a good fit for most workloads functioning on modern hardware.

My apologies to Azure SQLDB users who can’t change this setting and leave it up to Microsoft to maybe manage it for them based on ???

Some people out there really like fiddling with settings in a usually ill-informed reaction to Some Script They Found On The Internet, without reading the fine print.

Erik’s thoughts are reasonable overall. My recommendation is to use Michael J. Swart’s technique for tuning cost threshold for parallelism as a starting point, as it gives you a basis for understanding the net effect of your changes.
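
For reference, the setting itself is server-level and takes one line to change; the 50 below is a common starting point rather than a recommendation from Erik's post:

    EXEC sys.sp_configure N'show advanced options', 1;
    RECONFIGURE;
    -- Queries whose estimated cost exceeds this value become eligible for parallelism.
    EXEC sys.sp_configure N'cost threshold for parallelism', 50;
    RECONFIGURE;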
